**Pytorch to Tflite via Keras**
First, upload the '**model_mobilenetv2_seg_small.py**' file from the PortraitNet repo into the Colab root folder, i.e. **'/content'**. It contains the **architecture** of the model. Then upload the **weights** of the trained model, i.e. '**pnet_video.pth**', into Colab.
```
import torch
import torchvision
# The architecture module must be importable so that torch.load can
# unpickle the saved model below.
import model_mobilenetv2_seg_small

# Dummy input matching the network's expected 4-channel 224x224 input.
dummy_input = torch.randn(1, 4, 224, 224).cuda()
# 'pnet_video.pth' is assumed to hold the entire serialized model
# (architecture plus weights), not just a state dict.
model = torch.load('/content/pnet_video.pth').cuda()
torch.onnx.export(model, dummy_input, "pnet_video.onnx", verbose=True)
```
**Note:** `torch.onnx.export` also accepts optional `input_names` and `output_names` arguments. Providing input and output names sets the display names for values within the model's graph; it does not change the semantics of the graph and is only for readability.
The inputs to the network consist of the flat list of inputs (i.e. the values you would pass to the forward() method) followed by the flat list of parameters. You can partially specify names, i.e. provide a list shorter than the number of inputs to the model, and only that subset of names will be set, starting from the beginning.
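For reference, the export call above could optionally be extended with explicit names. The following is a hedged sketch that reuses `model` and `dummy_input` from the cell above; the names `'input'`, `'mask'`, and `'edge'` are illustrative, and if you do set `input_names`, the input name passed to onnx2keras later must match it instead of the default `'input.1'`.
```
# Hedged sketch: the display names below are illustrative, not taken from the
# PortraitNet repo; they do not change the semantics of the exported graph.
torch.onnx.export(
    model,
    dummy_input,
    "pnet_video_named.onnx",
    verbose=True,
    input_names=["input"],          # the single network input
    output_names=["mask", "edge"],  # the model's mask and edge outputs
)
```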
You may also use the **entire saved model**, including model architecture and weights, as input to the converter.
**ONNX to Keras**
Install the latest versions of **tensorflow, onnx and onnx2keras**.
```
!pip install tensorflow-gpu
!pip install onnx
!git clone https://github.com/nerox8664/onnx2keras.git
```
Change directory to the **onnx2keras** root folder.
```
%cd onnx2keras
```
Load the ONNX model and convert it to a Keras model with **onnx2keras**. Ensure that the parameter **change_ordering** is set to True so that the channel format is changed from **NCHW to NHWC**.
```
import tensorflow as tf
import onnx
from onnx2keras import onnx_to_keras
from tensorflow.keras.models import load_model
# Load ONNX model
onnx_model = onnx.load('/content/pnet_video.onnx')
# Call the converter and save keras model
k_model = onnx_to_keras(onnx_model, ['input.1'],change_ordering=True)
k_model.save('/content/pnet_video.h5')
```
**NB:** It may take about a minute for the conversion process to complete.
**Keras Model Modification**
Our model contains **two outputs** corresponding to mask and edge. We need to **remove** the **edge output** from the model, since it is not needed during model inference.
```
from tensorflow.keras.models import Model
from tensorflow.keras.models import load_model
from tensorflow.keras.layers import Activation, Lambda, Reshape
# Load keras model
k_model=load_model('/content/pnet_video.h5')
k_model.summary()
# Remove edge branch from output
edge_model=Model(inputs=k_model.input,outputs=k_model.layers[-2].output)
edge_model.summary()
```
Now our model has a single **two channel** output corresponding to the background and foreground. First, add a **softmax layer** at the end of the model (over the channel axis) to restrict the output range to between 0 and 1.
```
# Add softmax on output
sm=Lambda(lambda x: tf.nn.softmax(x))(edge_model.output)
soft_model=Model(inputs=edge_model.input, outputs=sm)
soft_model.summary()
```
Now, let's get the softmax slice for the **foreground** channel, using a **strided slice**.
```
# Get foreground softmax slice
ip = soft_model.output
str_slice=Lambda(lambda x: tf.strided_slice(x, [0,0, 0, 1], [1,224, 224, 2], [1, 1, 1, 1]))(ip)
stride_model=Model(inputs=soft_model.input, outputs=str_slice)
stride_model.summary()
```
Finally, **flatten** the output to 1D, i.e. 1x224x224x1 => 1x50176
```
# Flatten output
output = stride_model.output
newout=Reshape((50176,))(output)
reshape_model=Model(stride_model.input,newout)
reshape_model.summary()
```
Save the **final keras** model.
```
# Save keras model
reshape_model.save('/content/portrait_video.h5')
```
**Keras to Tflite**
Finally, convert the Keras model to TFLite using the TensorFlow **TFLite converter**.
```
# Convert to tflite
import tensorflow as tf
converter = tf.lite.TFLiteConverter.from_keras_model(reshape_model)
tflite_model = converter.convert()
open("/content/portrait_video.tflite", "wb").write(tflite_model)
```
**NB:** For verification or model inspection use [Netron](https://lutzroeder.github.io/netron/) web-app.
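For a quick programmatic sanity check as well, you can load the converted model with the TFLite interpreter and confirm the tensor shapes and a forward pass. This is a minimal sketch, assuming the file paths used above; it only verifies shapes and that the graph runs, not segmentation quality.
```
import numpy as np
import tensorflow as tf

# Load the converted model and allocate tensors
interpreter = tf.lite.Interpreter(model_path="/content/portrait_video.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
print(input_details[0]['shape'])   # expected [1, 224, 224, 4] after NCHW -> NHWC
print(output_details[0]['shape'])  # expected [1, 50176]

# Run one forward pass on random data to confirm the graph executes
dummy = np.random.rand(*input_details[0]['shape']).astype(np.float32)
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()
mask = interpreter.get_tensor(output_details[0]['index'])
print(mask.min(), mask.max())      # softmax output should lie in [0, 1]
```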
# Machine Learning Engineer Nanodegree
## Model Evaluation & Validation
## Project: Predicting Boston Housing Prices
Welcome to the first project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections that begin with **'Implementation'** in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!
In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a **'Question X'** header. Carefully read each question and provide thorough answers in the following text boxes that begin with **'Answer:'**. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.
>**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. In addition, Markdown cells can typically be edited by double-clicking the cell to enter edit mode.
## Getting Started
In this project, you will evaluate the performance and predictive power of a model that has been trained and tested on data collected from homes in suburbs of Boston, Massachusetts. A model trained on this data that is seen as a *good fit* could then be used to make certain predictions about a home — in particular, its monetary value. This model would prove to be invaluable for someone like a real estate agent who could make use of such information on a daily basis.
The dataset for this project originates from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Housing). The Boston housing data was collected in 1978 and each of the 506 entries represent aggregated data about 14 features for homes from various suburbs in Boston, Massachusetts. For the purposes of this project, the following preprocessing steps have been made to the dataset:
- 16 data points have an `'MEDV'` value of 50.0. These data points likely contain **missing or censored values** and have been removed.
- 1 data point has an `'RM'` value of 8.78. This data point can be considered an **outlier** and has been removed.
- The features `'RM'`, `'LSTAT'`, `'PTRATIO'`, and `'MEDV'` are essential. The remaining **non-relevant features** have been excluded.
- The feature `'MEDV'` has been **multiplicatively scaled** to account for 35 years of market inflation.
Run the code cell below to load the Boston housing dataset, along with a few of the necessary Python libraries required for this project. You will know the dataset loaded successfully if the size of the dataset is reported.
```
# Import libraries necessary for this project
import numpy as np
import pandas as pd
from sklearn.cross_validation import ShuffleSplit
# Import supplementary visualizations code visuals.py
import visuals as vs
# Pretty display for notebooks
%matplotlib inline
# Load the Boston housing dataset
data = pd.read_csv('housing.csv')
prices = data['MEDV']
features = data.drop('MEDV', axis = 1)
# Success
print "Boston housing dataset has {} data points with {} variables each.".format(*data.shape)
```
## Data Exploration
In this first section of this project, you will make a cursory investigation about the Boston housing data and provide your observations. Familiarizing yourself with the data through an explorative process is a fundamental practice to help you better understand and justify your results.
Since the main goal of this project is to construct a working model which has the capability of predicting the value of houses, we will need to separate the dataset into **features** and the **target variable**. The **features**, `'RM'`, `'LSTAT'`, and `'PTRATIO'`, give us quantitative information about each data point. The **target variable**, `'MEDV'`, will be the variable we seek to predict. These are stored in `features` and `prices`, respectively.
### Implementation: Calculate Statistics
For your very first coding implementation, you will calculate descriptive statistics about the Boston housing prices. Since `numpy` has already been imported for you, use this library to perform the necessary calculations. These statistics will be extremely important later on to analyze various prediction results from the constructed model.
In the code cell below, you will need to implement the following:
- Calculate the minimum, maximum, mean, median, and standard deviation of `'MEDV'`, which is stored in `prices`.
- Store each calculation in their respective variable.
```
prices = data['MEDV']
# TODO: Minimum price of the data
minimum_price = np.amin(prices)
print minimum_price
# TODO: Maximum price of the data
maximum_price = np.amax(prices)
print maximum_price
# TODO: Mean price of the data
mean_price = np.mean(prices)
# TODO: Median price of the data
median_price = np.median(prices)
# TODO: Standard deviation of prices of the data
std_price = np.std(prices)
# Show the calculated statistics
print "Statistics for Boston housing dataset:\n"
print "Minimum price: ${:,.2f}".format(minimum_price)
print "Maximum price: ${:,.2f}".format(maximum_price)
print "Mean price: ${:,.2f}".format(mean_price)
print "Median price ${:,.2f}".format(median_price)
print "Standard deviation of prices: ${:,.2f}".format(std_price)
```
### Question 1 - Feature Observation
As a reminder, we are using three features from the Boston housing dataset: `'RM'`, `'LSTAT'`, and `'PTRATIO'`. For each data point (neighborhood):
- `'RM'` is the average number of rooms among homes in the neighborhood.
- `'LSTAT'` is the percentage of homeowners in the neighborhood considered "lower class" (working poor).
- `'PTRATIO'` is the ratio of students to teachers in primary and secondary schools in the neighborhood.
_Using your intuition, for each of the three features above, do you think that an increase in the value of that feature would lead to an **increase** in the value of `'MEDV'` or a **decrease** in the value of `'MEDV'`? Justify your answer for each._
**Hint:** Would you expect a home that has an `'RM'` value of 6 be worth more or less than a home that has an `'RM'` value of 7?
**Answer: **
* **RM**: A greater number of rooms would mean that the house is bigger in area and can accommodate larger families. So an **increase** in _RM_ will lead to an **increase** in _MEDV_.
* **LSTAT**: If the percentage of lower-class homeowners is low, the neighborhood likely has a higher standard of living and a higher cost of living. Conversely, a high _LSTAT_ would deter wealthy buyers, because like attracts like - wealthy people tend to live in wealthy neighborhoods amidst other wealthy people and form communities around them. So an **increase** in _LSTAT_ would lead to a **decrease** in _MEDV_.
* **PTRATIO**: A higher ratio means that each teacher has more students to teach, possibly because there aren't enough educational institutions in the area. That is an inconvenience to families with school-age children, who would be forced to travel to institutions farther away. So an **increase** in _PTRATIO_ would **decrease** the value of _MEDV_.
----
## Developing a Model
In this second section of the project, you will develop the tools and techniques necessary for a model to make a prediction. Being able to make accurate evaluations of each model's performance through the use of these tools and techniques helps to greatly reinforce the confidence in your predictions.
### Implementation: Define a Performance Metric
It is difficult to measure the quality of a given model without quantifying its performance over training and testing. This is typically done using some type of performance metric, whether it is through calculating some type of error, the goodness of fit, or some other useful measurement. For this project, you will be calculating the [*coefficient of determination*](http://stattrek.com/statistics/dictionary.aspx?definition=coefficient_of_determination), R<sup>2</sup>, to quantify your model's performance. The coefficient of determination for a model is a useful statistic in regression analysis, as it often describes how "good" that model is at making predictions.
The values for R<sup>2</sup> range from 0 to 1, which captures the percentage of squared correlation between the predicted and actual values of the **target variable**. A model with an R<sup>2</sup> of 0 is no better than a model that always predicts the *mean* of the target variable, whereas a model with an R<sup>2</sup> of 1 perfectly predicts the target variable. Any value between 0 and 1 indicates what percentage of the target variable, using this model, can be explained by the **features**. _A model can be given a negative R<sup>2</sup> as well, which indicates that the model is **arbitrarily worse** than one that always predicts the mean of the target variable._
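To make this concrete, here is a small hedged sketch (with purely illustrative numbers) that computes R<sup>2</sup> directly from its definition, one minus the ratio of the residual sum of squares to the total sum of squares, and checks that it agrees with `r2_score`:
```
import numpy as np
from sklearn.metrics import r2_score

# Purely illustrative values
y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.8])

ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
r2_manual = 1.0 - ss_res / ss_tot

print(r2_manual)                 # ~0.98
print(r2_score(y_true, y_pred))  # matches the manual calculation
```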
For the `performance_metric` function in the code cell below, you will need to implement the following:
- Use `r2_score` from `sklearn.metrics` to perform a performance calculation between `y_true` and `y_predict`.
- Assign the performance score to the `score` variable.
```
# TODO: Import 'r2_score'
from sklearn.metrics import r2_score
def performance_metric(y_true, y_predict):
""" Calculates and returns the performance score between
true and predicted values based on the metric chosen. """
# TODO: Calculate the performance score between 'y_true' and 'y_predict'
score = r2_score(y_true, y_predict)
# Return the score
return score
```
### Question 2 - Goodness of Fit
Assume that a dataset contains five data points and a model made the following predictions for the target variable:
| True Value | Prediction |
| :-------------: | :--------: |
| 3.0 | 2.5 |
| -0.5 | 0.0 |
| 2.0 | 2.1 |
| 7.0 | 7.8 |
| 4.2 | 5.3 |
*Would you consider this model to have successfully captured the variation of the target variable? Why or why not?*
Run the code cell below to use the `performance_metric` function and calculate this model's coefficient of determination.
```
# Calculate the performance of this model
score = performance_metric([3, -0.5, 2, 7, 4.2], [2.5, 0.0, 2.1, 7.8, 5.3])
print "Model has a coefficient of determination, R^2, of {:.3f}.".format(score)
```
**Answer:** An _R2 score_ closer to 1 indicates an accurate prediction, and since this model's score is quite close to 1, its performance seems to be quite good.
### Implementation: Shuffle and Split Data
Your next implementation requires that you take the Boston housing dataset and split the data into training and testing subsets. Typically, the data is also shuffled into a random order when creating the training and testing subsets to remove any bias in the ordering of the dataset.
For the code cell below, you will need to implement the following:
- Use `train_test_split` from `sklearn.cross_validation` to shuffle and split the `features` and `prices` data into training and testing sets.
- Split the data into 80% training and 20% testing.
- Set the `random_state` for `train_test_split` to a value of your choice. This ensures results are consistent.
- Assign the train and testing splits to `X_train`, `X_test`, `y_train`, and `y_test`.
```
# TODO: Import 'train_test_split'
from sklearn.cross_validation import train_test_split
# TODO: Shuffle and split the data into training and testing subsets
X_train, X_test, y_train, y_test = train_test_split(features, prices, test_size=0.2, random_state=0)
# Success
print "Training and testing split was successful."
```
### Question 3 - Training and Testing
*What is the benefit to splitting a dataset into some ratio of training and testing subsets for a learning algorithm?*
**Hint:** What could go wrong with not having a way to test your model?
**Answer: ** A learning algorithm needs to be able to do two things - to **learn** as much as it can from the given data, and to **generalize** well to unseen data. By splitting a dataset into training and testing subsets, we can analyze the performance of our model - more training points mean more data for our learning algorithm to learn from, but fewer unseen points, so we won't have a good idea of how well the algorithm generalizes. The inverse is also true.
If we don't have a way to test our model, then there's no way to analyze the model for _bias_ or _variance_. The only way to know is by testing our algorithm with unseen data points and analyzing its performance against them.
----
## Analyzing Model Performance
In this third section of the project, you'll take a look at several models' learning and testing performances on various subsets of training data. Additionally, you'll investigate one particular algorithm with an increasing `'max_depth'` parameter on the full training set to observe how model complexity affects performance. Graphing your model's performance based on varying criteria can be beneficial in the analysis process, such as visualizing behavior that may not have been apparent from the results alone.
### Learning Curves
The following code cell produces four graphs for a decision tree model with different maximum depths. Each graph visualizes the learning curves of the model for both training and testing as the size of the training set is increased. Note that the shaded region of a learning curve denotes the uncertainty of that curve (measured as the standard deviation). The model is scored on both the training and testing sets using R<sup>2</sup>, the coefficient of determination.
Run the code cell below and use these graphs to answer the following question.
```
# Produce learning curves for varying training set sizes and maximum depths
vs.ModelLearning(features, prices)
```
### Question 4 - Learning the Data
*Choose one of the graphs above and state the maximum depth for the model. What happens to the score of the training curve as more training points are added? What about the testing curve? Would having more training points benefit the model?*
**Hint:** Are the learning curves converging to particular scores?
**Answer: **
* **max_depth = 1**: In this graph, the training and testing curves almost merge, but this isn't necessarily an indicator of good performance, since the _R2 scores_ for both curves are pretty low. This could mean that our classifier suffers from *high bias*; adding more training points isn't making it any better.
* **max_depth = 3**: Both curves in this graph have high scores and they almost converge. This is a good fit for our classifier, and there's no need to add more training points, since that might not make much difference.
* **max_depth = 6**: The classifier seems to be trained well, but the training and testing curves don't seem to be converging. This seems to be a reasonable fit too, but it probably suffers from slight variance. Adding more training points could help it improve further.
* **max_depth = 10**: The training curve has a good score, but there's a large gap between the training curve and the testing curve. This could indicate that the model suffers from high variance. Adding more training points could help make the model better.
Since both curves plateau even after the number of training points crosses 350, the model doesn't seem to be getting any better. So adding more points won't make the model significantly better at predicting unseen data.
### Complexity Curves
The following code cell produces a graph for a decision tree model that has been trained and validated on the training data using different maximum depths. The graph produces two complexity curves — one for training and one for validation. Similar to the **learning curves**, the shaded regions of both the complexity curves denote the uncertainty in those curves, and the model is scored on both the training and validation sets using the `performance_metric` function.
Run the code cell below and use this graph to answer the following two questions.
```
vs.ModelComplexity(X_train, y_train)
```
### Question 5 - Bias-Variance Tradeoff
*When the model is trained with a maximum depth of 1, does the model suffer from high bias or from high variance? How about when the model is trained with a maximum depth of 10? What visual cues in the graph justify your conclusions?*
**Hint:** How do you know when a model is suffering from high bias or high variance?
**Answer: ** When the model is trained with a maximum depth of 1, it suffers from high bias. When it's trained with a maximum depth of 10, it suffers from high variance.
High bias occurs when the model is too simple to capture the underlying structure of the data; this shows up as low R2 scores on both the training and validation curves.
As we increase the maximum depth, we find that the training and validation curves move further and further apart, which indicates high variance. This is because the increasingly complex model overfits the training data. More training data might help the model generalize better, so that the two curves converge at a relatively high score.
### Question 6 - Best-Guess Optimal Model
*Which maximum depth do you think results in a model that best generalizes to unseen data? What intuition led you to this answer?*
**Answer: ** A maximum depth of 3 seems to generalize best to unseen data. As we can see from the graph, the score of the validation curve increases along with the training curve until the maximum depth reaches 3. After that, the validation score does not increase in the same proportion as the training score, which means that the variance is increasing.
-----
## Evaluating Model Performance
In this final section of the project, you will construct a model and make a prediction on the client's feature set using an optimized model from `fit_model`.
### Question 7 - Grid Search
*What is the grid search technique and how can it be applied to optimize a learning algorithm?*
**Answer: ** For an algorithm that accepts a set of parameters, grid search builds different models by setting the parameters to different values and then cross-validates the models to decide which combination of parameter values gives the best performance. The candidate values are specified in a grid.
It can be used to optimize a learning algorithm since it returns the best of a family of models: the one that gives the most accurate predictions on the held-out validation data.
### Question 8 - Cross-Validation
*What is the k-fold cross-validation training technique? What benefit does this technique provide for grid search when optimizing a model?*
**Hint:** Much like the reasoning behind having a testing set, what could go wrong with using grid search without a cross-validated set?
**Answer: ** The k-fold technique splits our data-set into k sub-sets. One of the k subsets is used as the testing data, and the rest (k-1) subsets are used as the training data. The machine learning algorithm is trained with the training data-set and then tested for performance using the testing data-set.
This process is repeated k times, each time using a different subset as the testing data. Once this is done, the average performance is calculated from the results of the k experiments.
In this way, every data point is used for both training and testing. The process takes longer, but it improves the reliability of the grid search.
This technique helps reduce bias in the evaluation, because the entire data-set is taken into account, and it aids grid search in selecting a more reliable model.
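As a rough sketch of the idea, the code below shows a k-fold split driving both a plain cross-validation score and a grid search over `'max_depth'`. It uses toy random data and the newer `sklearn.model_selection` module (this notebook itself uses the older `sklearn.cross_validation` API), so treat it as an illustration rather than part of the implementation.
```
import numpy as np
from sklearn.model_selection import KFold, GridSearchCV, cross_val_score
from sklearn.tree import DecisionTreeRegressor

# Toy data, just for illustration
X = np.random.rand(100, 3)
y = np.random.rand(100)

# k-fold: each of the k=5 folds serves once as the validation set
kf = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(DecisionTreeRegressor(max_depth=3), X, y, cv=kf)
print(scores.mean())  # average performance over the 5 folds

# The same fold object can drive grid search, so every candidate 'max_depth'
# is evaluated on all k folds rather than on a single train/test split
grid = GridSearchCV(DecisionTreeRegressor(), {'max_depth': list(range(1, 11))}, cv=kf)
grid.fit(X, y)
print(grid.best_params_)
```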
### Implementation: Fitting a Model
Your final implementation requires that you bring everything together and train a model using the **decision tree algorithm**. To ensure that you are producing an optimized model, you will train the model using the grid search technique to optimize the `'max_depth'` parameter for the decision tree. The `'max_depth'` parameter can be thought of as how many questions the decision tree algorithm is allowed to ask about the data before making a prediction. Decision trees are part of a class of algorithms called *supervised learning algorithms*.
In addition, you will find your implementation is using `ShuffleSplit()` for an alternative form of cross-validation (see the `'cv_sets'` variable). While it is not the K-Fold cross-validation technique you describe in **Question 8**, this type of cross-validation technique is just as useful! The `ShuffleSplit()` implementation below will create 10 (`'n_iter'`) shuffled sets, and for each shuffle, 20% (`'test_size'`) of the data will be used as the *validation set*. While you're working on your implementation, think about the contrasts and similarities it has to the K-fold cross-validation technique.
For the `fit_model` function in the code cell below, you will need to implement the following:
- Use [`DecisionTreeRegressor`](http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeRegressor.html) from `sklearn.tree` to create a decision tree regressor object.
- Assign this object to the `'regressor'` variable.
- Create a dictionary for `'max_depth'` with the values from 1 to 10, and assign this to the `'params'` variable.
- Use [`make_scorer`](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.make_scorer.html) from `sklearn.metrics` to create a scoring function object.
- Pass the `performance_metric` function as a parameter to the object.
- Assign this scoring function to the `'scoring_fnc'` variable.
- Use [`GridSearchCV`](http://scikit-learn.org/0.17/modules/generated/sklearn.grid_search.GridSearchCV.html) from `sklearn.grid_search` to create a grid search object.
- Pass the variables `'regressor'`, `'params'`, `'scoring_fnc'`, and `'cv_sets'` as parameters to the object.
- Assign the `GridSearchCV` object to the `'grid'` variable.
```
# TODO: Import 'make_scorer', 'DecisionTreeRegressor', and 'GridSearchCV'
from sklearn.metrics import make_scorer
from sklearn.tree import DecisionTreeRegressor
from sklearn.grid_search import GridSearchCV
def fit_model(X, y):
""" Performs grid search over the 'max_depth' parameter for a
decision tree regressor trained on the input data [X, y]. """
# Create cross-validation sets from the training data
cv_sets = ShuffleSplit(X.shape[0], n_iter = 10, test_size = 0.20, random_state = 0)
# TODO: Create a decision tree regressor object
regressor = DecisionTreeRegressor()
# TODO: Create a dictionary for the parameter 'max_depth' with a range from 1 to 10
params = {
'max_depth': [1,2,3,4,5,6,7,8,9,10]
}
# TODO: Transform 'performance_metric' into a scoring function using 'make_scorer'
scoring_fnc = make_scorer(performance_metric)
# TODO: Create the grid search object
grid = GridSearchCV(regressor, params, scoring_fnc, cv=cv_sets)
# Fit the grid search object to the data to compute the optimal model
grid = grid.fit(X, y)
# Return the optimal model after fitting the data
return grid.best_estimator_
```
### Making Predictions
Once a model has been trained on a given set of data, it can now be used to make predictions on new sets of input data. In the case of a *decision tree regressor*, the model has learned *what the best questions to ask about the input data are*, and can respond with a prediction for the **target variable**. You can use these predictions to gain information about data where the value of the target variable is unknown — such as data the model was not trained on.
### Question 9 - Optimal Model
_What maximum depth does the optimal model have? How does this result compare to your guess in **Question 6**?_
Run the code block below to fit the decision tree regressor to the training data and produce an optimal model.
```
# Fit the training data to the model using grid search
reg = fit_model(X_train, y_train)
# Produce the value for 'max_depth'
print "Parameter 'max_depth' is {} for the optimal model.".format(reg.get_params()['max_depth'])
```
**Answer: ** The model predicts an optimum max_depth value of 4, as opposed to the guess made earlier in question 6, where I guessed that the optimum max_depth value could be 3.
### Question 10 - Predicting Selling Prices
Imagine that you were a real estate agent in the Boston area looking to use this model to help price homes owned by your clients that they wish to sell. You have collected the following information from three of your clients:
| Feature | Client 1 | Client 2 | Client 3 |
| :---: | :---: | :---: | :---: |
| Total number of rooms in home | 5 rooms | 4 rooms | 8 rooms |
| Neighborhood poverty level (as %) | 17% | 32% | 3% |
| Student-teacher ratio of nearby schools | 15-to-1 | 22-to-1 | 12-to-1 |
*What price would you recommend each client sell his/her home at? Do these prices seem reasonable given the values for the respective features?*
**Hint:** Use the statistics you calculated in the **Data Exploration** section to help justify your response.
Run the code block below to have your optimized model make predictions for each client's home.
```
# Produce a matrix for client data
client_data = [[5, 17, 15], # Client 1
[4, 32, 22], # Client 2
[8, 3, 12]] # Client 3
# Show predictions
for i, price in enumerate(reg.predict(client_data)):
print "Predicted selling price for Client {}'s home: ${:,.2f}".format(i+1, price)
```
**Answer: ** The predicted prices for each of the houses seem perfectly reasonable.
Comparing these prices with the minimum and maximum prices from the Boston housing data-set, we can see that the predicted prices certainly aren't outliers.
The average of these three prices is $507,657.84, which is close to the calculated mean of the entire Boston housing data-set, $454,342.94.
Client 3's house has more rooms than the other clients', a lower neighborhood poverty level, and the lowest student-teacher ratio of the three, so naturally its predicted value is the highest.
Client 2, on the other hand, has the fewest rooms, the highest poverty level, and the largest student-teacher ratio among the three clients, hence the predicted value of this house is the lowest of all.
Client 1's house has more rooms than Client 2's but fewer than Client 3's. It also seems to be in a better neighborhood and has a better student-teacher ratio than Client 2's, though not as good as Client 3's. So its predicted value is greater than Client 2's but less than Client 3's.
### Sensitivity
An optimal model is not necessarily a robust model. Sometimes, a model is either too complex or too simple to sufficiently generalize to new data. Sometimes, a model could use a learning algorithm that is not appropriate for the structure of the data given. Other times, the data itself could be too noisy or contain too few samples to allow a model to adequately capture the target variable — i.e., the model is underfitted. Run the code cell below to run the `fit_model` function ten times with different training and testing sets to see how the prediction for a specific client changes with the data it's trained on.
```
vs.PredictTrials(features, prices, fit_model, client_data)
```
### Question 11 - Applicability
*In a few sentences, discuss whether the constructed model should or should not be used in a real-world setting.*
**Hint:** Some questions to answer:
- *How relevant today is data that was collected from 1978?*
- *Are the features present in the data sufficient to describe a home?*
- *Is the model robust enough to make consistent predictions?*
- *Would data collected in an urban city like Boston be applicable in a rural city?*
**Answer: ** The range in predicted prices across trials is high, so the model needs to be more consistent with its predictions.
We might be ignoring some less obvious but important features - for example, employment opportunities in the neighborhood, the age of the house, crime rates, health-care facilities, etc. The features present in the data-set are good, but they might not be enough to describe a home.
The model is not making consistent predictions; it fluctuates a bit, judging from the range in price predictions.
Some of the features might not be applicable in a rural city - for example, poverty level might not be as important a factor as it is for people in urban areas (most people have a simpler way of life in rural areas).
> **Note**: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to
**File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission.
# 100 numpy exercises
This is a collection of exercises that have been collected in the numpy mailing list, on Stack Overflow and in the numpy documentation. The goal of this collection is to offer a quick reference for both old and new users, but also to provide a set of exercises for those who teach.
If you find an error or think you have a better way to solve some of them, feel free to open an issue at <https://github.com/rougier/numpy-100>
#### 1. Import the numpy package under the name `np` (★☆☆)
```
import numpy as np
```
#### 2. Print the numpy version and the configuration (★☆☆)
```
np.version.version
np.show_config()
```
#### 3. Create a null vector of size 10 (★☆☆)
```
a = np.zeros(10)
```
#### 4. How to find the memory size of any array (★☆☆)
```
a.size * a.itemsize
```
#### 5. How to get the documentation of the numpy add function from the command line? (★☆☆)
```
np.info(np.add)
# %run 'python -c "import numpy; numpy.info(numpy.add)"'
```
#### 6. Create a null vector of size 10 but the fifth value which is 1 (★☆☆)
```
a = np.zeros(10)
a[4] = 1
a
```
#### 7. Create a vector with values ranging from 10 to 49 (★☆☆)
```
a = np.arange(10, 50)
```
#### 8. Reverse a vector (first element becomes last) (★☆☆)
```
b = a[::-1]  # slicing with a negative step returns a view, not a copy
b[0] = 1     # so this assignment also changes the last element of `a`
a
```
#### 9. Create a 3x3 matrix with values ranging from 0 to 8 (★☆☆)
```
a = np.arange(9)
a.reshape((3, 3))
```
#### 10. Find indices of non-zero elements from \[1,2,0,0,4,0\] (★☆☆)
```
a = np.array([1,2,0,0,4,0])
np.nonzero(a)
```
#### 11. Create a 3x3 identity matrix (★☆☆)
```
a = np.eye(3)
a
```
#### 12. Create a 3x3x3 array with random values (★☆☆)
```
a = np.random.random((3, 3, 3)) # np.info(np.random.rand)
a
```
#### 13. Create a 10x10 array with random values and find the minimum and maximum values (★☆☆)
```
a = np.random.rand(10, 10)
a.max()
a.min()
```
#### 14. Create a random vector of size 30 and find the mean value (★☆☆)
```
a = np.random.rand(30)
a.mean(0)
```
#### 15. Create a 2d array with 1 on the border and 0 inside (★☆☆)
```
a = np.zeros((10, 10))
a[0, :] = 1
a[:, 0] = 1
a[-1, :] = 1
a[:, -1] = 1
a
Z = np.ones((10,10))
Z[1:-1,1:-1] = 0
print(Z)
```
#### 16. How to add a border (filled with 0's) around an existing array? (★☆☆)
```
Z = np.ones((5,5))
Z = np.pad(Z, pad_width=1, mode='constant', constant_values=0)
print(Z)
```
#### 17. What is the result of the following expression? (★☆☆)
```python
0 * np.nan
np.nan == np.nan
np.inf > np.nan
np.nan - np.nan
0.3 == 3 * 0.1
```
#### 18. Create a 5x5 matrix with values 1,2,3,4 just below the diagonal (★☆☆)
```
Z = np.diag(1+np.arange(4),k=-1)
print(Z)
```
#### 19. Create a 8x8 matrix and fill it with a checkerboard pattern (★☆☆)
#### 20. Consider a (6,7,8) shape array, what is the index (x,y,z) of the 100th element?
#### 21. Create a checkerboard 8x8 matrix using the tile function (★☆☆)
#### 22. Normalize a 5x5 random matrix (★☆☆)
#### 23. Create a custom dtype that describes a color as four unsigned bytes (RGBA) (★☆☆)
#### 24. Multiply a 5x3 matrix by a 3x2 matrix (real matrix product) (★☆☆)
#### 25. Given a 1D array, negate all elements which are between 3 and 8, in place. (★☆☆)
#### 26. What is the output of the following script? (★☆☆)
```python
# Author: Jake VanderPlas
print(sum(range(5),-1))
from numpy import *
print(sum(range(5),-1))
```
#### 27. Consider an integer vector Z, which of these expressions are legal? (★☆☆)
```python
Z**Z
2 << Z >> 2
Z <- Z
1j*Z
Z/1/1
Z<Z>Z
```
#### 28. What are the results of the following expressions?
```python
np.array(0) / np.array(0)
np.array(0) // np.array(0)
np.array([np.nan]).astype(int).astype(float)
```
#### 29. How to round a float array away from zero? (★☆☆)
#### 30. How to find common values between two arrays? (★☆☆)
#### 31. How to ignore all numpy warnings (not recommended)? (★☆☆)
#### 32. Is the following expression true? (★☆☆)
```python
np.sqrt(-1) == np.emath.sqrt(-1)
```
#### 33. How to get the dates of yesterday, today and tomorrow? (★☆☆)
#### 34. How to get all the dates corresponding to the month of July 2016? (★★☆)
#### 35. How to compute ((A+B)\*(-A/2)) in place (without copy)? (★★☆)
#### 36. Extract the integer part of a random array using 5 different methods (★★☆)
#### 37. Create a 5x5 matrix with row values ranging from 0 to 4 (★★☆)
#### 38. Consider a generator function that generates 10 integers and use it to build an array (★☆☆)
#### 39. Create a vector of size 10 with values ranging from 0 to 1, both excluded (★★☆)
#### 40. Create a random vector of size 10 and sort it (★★☆)
#### 41. How to sum a small array faster than np.sum? (★★☆)
#### 42. Consider two random arrays A and B, check if they are equal (★★☆)
#### 43. Make an array immutable (read-only) (★★☆)
#### 44. Consider a random 10x2 matrix representing cartesian coordinates, convert them to polar coordinates (★★☆)
#### 45. Create random vector of size 10 and replace the maximum value by 0 (★★☆)
#### 46. Create a structured array with `x` and `y` coordinates covering the \[0,1\]x\[0,1\] area (★★☆)
#### 47. Given two arrays, X and Y, construct the Cauchy matrix C (Cij =1/(xi - yj))
#### 48. Print the minimum and maximum representable value for each numpy scalar type (★★☆)
#### 49. How to print all the values of an array? (★★☆)
#### 50. How to find the closest value (to a given scalar) in a vector? (★★☆)
#### 51. Create a structured array representing a position (x,y) and a color (r,g,b) (★★☆)
#### 52. Consider a random vector with shape (100,2) representing coordinates, find point by point distances (★★☆)
#### 53. How to convert a float (32 bits) array into an integer (32 bits) in place?
#### 54. How to read the following file? (★★☆)
```
1, 2, 3, 4, 5
6, , , 7, 8
, , 9,10,11
```
#### 55. What is the equivalent of enumerate for numpy arrays? (★★☆)
#### 56. Generate a generic 2D Gaussian-like array (★★☆)
#### 57. How to randomly place p elements in a 2D array? (★★☆)
#### 58. Subtract the mean of each row of a matrix (★★☆)
#### 59. How to sort an array by the nth column? (★★☆)
#### 60. How to tell if a given 2D array has null columns? (★★☆)
#### 61. Find the nearest value from a given value in an array (★★☆)
#### 62. Considering two arrays with shape (1,3) and (3,1), how to compute their sum using an iterator? (★★☆)
#### 63. Create an array class that has a name attribute (★★☆)
#### 64. Consider a given vector, how to add 1 to each element indexed by a second vector (be careful with repeated indices)? (★★★)
#### 65. How to accumulate elements of a vector (X) to an array (F) based on an index list (I)? (★★★)
#### 66. Considering a (w,h,3) image of (dtype=ubyte), compute the number of unique colors (★★★)
#### 67. Considering a four dimensions array, how to get sum over the last two axis at once? (★★★)
#### 68. Considering a one-dimensional vector D, how to compute means of subsets of D using a vector S of same size describing subset indices? (★★★)
#### 69. How to get the diagonal of a dot product? (★★★)
#### 70. Consider the vector \[1, 2, 3, 4, 5\], how to build a new vector with 3 consecutive zeros interleaved between each value? (★★★)
#### 71. Consider an array of dimension (5,5,3), how to multiply it by an array with dimensions (5,5)? (★★★)
#### 72. How to swap two rows of an array? (★★★)
#### 73. Consider a set of 10 triplets describing 10 triangles (with shared vertices), find the set of unique line segments composing all the triangles (★★★)
#### 74. Given an array C that is a bincount, how to produce an array A such that np.bincount(A) == C? (★★★)
#### 75. How to compute averages using a sliding window over an array? (★★★)
#### 76. Consider a one-dimensional array Z, build a two-dimensional array whose first row is (Z\[0\],Z\[1\],Z\[2\]) and each subsequent row is shifted by 1 (last row should be (Z\[-3\],Z\[-2\],Z\[-1\])) (★★★)
#### 77. How to negate a boolean, or to change the sign of a float inplace? (★★★)
#### 78. Consider 2 sets of points P0,P1 describing lines (2d) and a point p, how to compute distance from p to each line i (P0\[i\],P1\[i\])? (★★★)
#### 79. Consider 2 sets of points P0,P1 describing lines (2d) and a set of points P, how to compute distance from each point j (P\[j\]) to each line i (P0\[i\],P1\[i\])? (★★★)
#### 80. Consider an arbitrary array, write a function that extracts a subpart with a fixed shape and centered on a given element (pad with a `fill` value when necessary) (★★★)
#### 81. Consider an array Z = \[1,2,3,4,5,6,7,8,9,10,11,12,13,14\], how to generate an array R = \[\[1,2,3,4\], \[2,3,4,5\], \[3,4,5,6\], ..., \[11,12,13,14\]\]? (★★★)
#### 82. Compute a matrix rank (★★★)
#### 83. How to find the most frequent value in an array?
#### 84. Extract all the contiguous 3x3 blocks from a random 10x10 matrix (★★★)
#### 85. Create a 2D array subclass such that Z\[i,j\] == Z\[j,i\] (★★★)
#### 86. Consider a set of p matrices with shape (n,n) and a set of p vectors with shape (n,1). How to compute the sum of the p matrix products at once? (result has shape (n,1)) (★★★)
#### 87. Consider a 16x16 array, how to get the block-sum (block size is 4x4)? (★★★)
#### 88. How to implement the Game of Life using numpy arrays? (★★★)
#### 89. How to get the n largest values of an array (★★★)
#### 90. Given an arbitrary number of vectors, build the cartesian product (all combinations of every item) (★★★)
#### 91. How to create a record array from a regular array? (★★★)
#### 92. Consider a large vector Z, compute Z to the power of 3 using 3 different methods (★★★)
#### 93. Consider two arrays A and B of shape (8,3) and (2,2). How to find rows of A that contain elements of each row of B regardless of the order of the elements in B? (★★★)
#### 94. Considering a 10x3 matrix, extract rows with unequal values (e.g. \[2,2,3\]) (★★★)
#### 95. Convert a vector of ints into a matrix binary representation (★★★)
#### 96. Given a two dimensional array, how to extract unique rows? (★★★)
#### 97. Considering 2 vectors A & B, write the einsum equivalent of inner, outer, sum, and mul function (★★★)
#### 98. Considering a path described by two vectors (X,Y), how to sample it using equidistant samples (★★★)?
#### 99. Given an integer n and a 2D array X, select from X the rows which can be interpreted as draws from a multinomial distribution with n degrees, i.e., the rows which only contain integers and which sum to n. (★★★)
#### 100. Compute bootstrapped 95% confidence intervals for the mean of a 1D array X (i.e., resample the elements of an array with replacement N times, compute the mean of each sample, and then compute percentiles over the means). (★★★)
<img src="http://i2.wp.com/www.casualoptimist.com/wp-content/uploads/2014/10/9781846145506.jpg" style="width:300px"></img>
Steven Pinker has a new [book](http://www.amazon.com/Steven-Pinker-Sense-Style-Paperback/dp/B00SCT7DVU/ref=sr_1_2?ie=UTF8&qid=1423417779&sr=8-2&keywords=pinker+style) out. It's a style manual - a book about what authors can do to improve the clarity of their prose. Actually, the book would be more accurately described as a guide to style manuals. Unlike other style manuals, Pinker draws on his knowledge of cognitive science and linguistics and explains the logic behind the various rules that pertain to language use. Once this logic is explained, the reader can apply it to cases that haven't been covered by the book. The book is great and I recommend it to every scientist.
Anyway, I want to discuss one of the examples that Pinker uses to illustrate bad style. As a scientist, Pinker takes many examples from academic writing. My favorite is:
> Participants read assertions whose veracity was either affirmed or denied by the subsequent presentation of an assessment word.
The so-called nominalizations (assertions, veracity, presentation, assessment) make this sentence difficult to comprehend. Once we remove them we get:
>We presented participants with a sentence, followed by the word TRUE or FALSE.
In addition to nominalizations Pinker warns against abstract nouns such as levels, strategies, perspective or prospects. Just like with nominalizations, omitting the abstract nouns makes comprehension easier. Here is an example from Pinker. Abstract nouns are underlined.
> The researchers found that groups that are typically associated with low alcoholism <u>levels</u> actually have moderate amounts of alcohol <u>intake</u> yet still have low <u>levels</u> of high <u>intake</u> associated with alcoholism, such as Jews.
Pinker recommends his version with abstract nouns omitted.
> The researchers found that in groups with little alcoholism such as Jews, people actually drink moderate amounts of alcohol, but few of them drink too much and become alcoholics.
I want to discuss Pinker's replacement of "low alcoholism level" with "little alcoholism". Where does the "low alcoholism level" come from? It's not difficult to guess. The original measure was the proportion of alcoholism in certain populations. This is a continuous variable. The authors wanted to predict the amount of alcohol consumed from the alcoholism level. Instead of running a GLM regression, the researchers binned populations into groups according to alcoholism. Each group then corresponds to a (discrete) alcoholism level. I agree with Pinker that "alcoholism level", and indeed any discretized variable, is a monstrosity and should be avoided. But the problem does not go away if we just rewrite the sentence. Pinker's revision makes a different claim than the original text. What Pinker highlights is that the authors' claim is of little interest, since no one thinks of alcoholism as a discrete variable. At the same time, it is easy to see that the claim we are actually interested in concerns the continuous variable. As such, this claim can't be distilled with ANOVA. Something has to give. Psychologists give up their claims and their research questions. I say they should abandon ANOVA.
```
%pylab inline
import torch
from torch.utils.data import IterableDataset
from torchvision import transforms
import webdataset as wds
from itertools import islice
url = "http://storage.googleapis.com/nvdata-openimages/openimages-train-000000.tar"
url = f"pipe:curl -L -s {url} || true"
```
# Desktop Usage and Caching
WebDataset is an ideal solution for training on petascale datasets kept on high-performance distributed data stores like AIStore, AWS/S3, and Google Cloud. Compared to data center GPU servers, desktop machines have much slower network connections, but training jobs on desktop machines often also use much smaller datasets. WebDataset is also very useful for such smaller datasets: you can develop and test on a small dataset and then scale up to a large one simply by using more shards.
Here are different usage scenarios:
| environment | caching strategy |
|-|-|
| cloud training against cloud buckets | use WebDataset directly with cloud URLs |
| on premises training with high performance store (e.g., AIStore) | use WebDataset directly with storage URLs. |
| prototyping, development, testing for large scale training | copy a few shards to local disk OR use automatic shard caching OR use DBCache |
| on premises training with slower object stores/networks | use automatic shard caching or DBCache for entire dataset |
| desktop deep learning, smaller dataset | copy all shards to disk manually OR use automatic shard caching |
| training with IterableDataset sources other than WebDataset | use DBCache |
_The upshot is: you can write a single I/O pipeline that works for both local and remote data, and for both small and large datasets, and you can fine-tune performance and take advantage of local storage by adding the `cache_dir` and `DBCache` options._
Let's look at how these different methods work.
## Direct Copying of Shards
Let's take the OpenImages dataset as an example; it is about half a terabyte in size. For development and testing, you may not want to download the entire dataset, but you may also not want to use the dataset remotely. With WebDataset, you can just download a small number of shards and use them during development.
```
!test -f /tmp/openimages-train-000000.tar || curl -L -s http://storage.googleapis.com/nvdata-openimages/openimages-train-000000.tar > /tmp/openimages-train-000000.tar
dataset = wds.WebDataset("/tmp/openimages-train-000000.tar")
repr(next(iter(dataset)))[:200]
```
Note that the WebDataset class works the same way on local files as it does on remote files. Furthermore, unlike other kinds of dataset formats and archive formats, downloaded datasets are immediately useful and don't need to be unpacked.
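If you have copied several shards, they can be referenced together using WebDataset's brace notation for shard ranges. This is only a small sketch; the shard range below is an assumption and should match the files you actually downloaded.
```
import webdataset as wds

# Brace notation expands to openimages-train-000000.tar ... openimages-train-000009.tar.
local_shards = "/tmp/openimages-train-{000000..000009}.tar"
dataset = wds.WebDataset(local_shards)
```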
## Automatic Shard Caching
Downloading a few shards manually is useful for development and testing. But WebDataset permits us to automate downloading and caching of shards. This is accomplished by giving a `cache_dir` argument to the WebDataset constructor. Note that caching happens in parallel with iterating through the dataset. This means that if you write a WebDataset-based I/O pipeline, training starts immediately; the training job does not have to wait for any shards to download first.
Automatic shard caching is useful for distributing deep learning code, for academic computer labs, and for cloud computing.
In this example, we make two passes through the dataset, using the cached version on the second pass.
```
!rm -rf ./cache
# just using one URL for demonstration
url = "http://storage.googleapis.com/nvdata-openimages/openimages-train-000000.tar"
dataset = wds.WebDataset(url, cache_dir="./cache")
print("=== first pass")
for sample in dataset:
pass
print("=== second pass")
for i, sample in enumerate(dataset):
for key, value in sample.items():
print(key, repr(value)[:50])
print()
if i >= 3: break
!ls -l ./cache
```
Using automatic shard caching, you end up with bit-identical copies of the original dataset in the local shard cache. By default, shards are named based on an MD5 checksum of their original URL. If you want to reuse the downloaded cached files, you can override the cache file naming with the `cache_name=` argument to `WebDataset` and `DBCache`.
You can disable shard caching by setting the shard cache directory name to `None`.
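For example, a minimal sketch of switching the shard cache on and off using only the `cache_dir` argument discussed above:
```
import webdataset as wds

url = "http://storage.googleapis.com/nvdata-openimages/openimages-train-000000.tar"

# Shards are downloaded into ./cache on the first pass and served from there afterwards.
cached_dataset = wds.WebDataset(url, cache_dir="./cache")

# Setting the cache directory to None disables shard caching entirely.
uncached_dataset = wds.WebDataset(url, cache_dir=None)
```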
## Automatic Sample Caching
WebDataset also provides a way of caching training samples directly. This works with samples coming from any IterableDataset as input. The cache is stored in an SQLite3 database. Sample-based caching is implemented by the `DBCache` class. You specify a filename for the database and the maximum number of samples you want to cache. Samples are initially read from the original IterableDataset; once the source runs out of samples or the maximum number of samples has been reached, subsequent samples are served from the database cache stored on local disk. The database cache persists between invocations of the job.
Automatic sample caching is useful for developing and testing deep learning jobs, as well as for caching data coming from slow IterableDataset sources, such as network-based database connections or other slower data sources.
```
!rm -rf ./cache.db
dataset = wds.WebDataset(url).compose(wds.DBCache, "./cache.db", 1000)
print("=== first pass")
for sample in dataset:
pass
print("=== second pass")
for i, sample in enumerate(dataset):
for key, value in sample.items():
print(key, repr(value)[:50])
print()
if i >= 3: break
!ls -l ./cache.db
```
You can disable the cache by changing the cache file name to `None`. This makes it easy to enable/disable the cache for testing.
Sample-based caching using `DBCache` gives you more flexibility than shard-based caching: you can cache before or after decoding and before or after data augmentation. However, unlike shard-based caching, the cache won't be considered "complete" until the requested number of samples has been cached. The `DBCache` class is primarily useful for testing, and for caching data that comes from `IterableDataset` sources other than `WebDataset`.
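As a rough sketch of that flexibility, the cache can be composed at different points in the pipeline. This is only an illustration: it assumes the fluent `.decode()` method of this WebDataset version, and the cache file names are arbitrary.
```
import webdataset as wds

url = "http://storage.googleapis.com/nvdata-openimages/openimages-train-000000.tar"

# Cache raw, undecoded samples: smaller cache, but decoding is repeated every epoch.
raw_cached = wds.WebDataset(url).compose(wds.DBCache, "./cache-raw.db", 1000)

# Cache decoded samples: larger cache, but decoding happens only once.
decoded_cached = wds.WebDataset(url).decode("pil").compose(wds.DBCache, "./cache-decoded.db", 1000)
```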
# Gradient descent in action
## Goal
The goal of this lab is to explore how chasing function gradients can find the function minimum. If the function is a loss function representing the quality of a model's fit to a training set, we can use function minimization to train models.
When there is no symbolic solution to minimizing the loss function, we need an iterative solution, such as gradient descent.
## Set up
```
import numpy as np
import pandas as pd
from mpl_toolkits.mplot3d import Axes3D # required even though not ref'd!
from sklearn.linear_model import LinearRegression, LogisticRegression, Lasso, Ridge
from sklearn.ensemble import RandomForestRegressor
from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, precision_score, recall_score, log_loss, mean_absolute_error
import matplotlib.pyplot as plt
import matplotlib as mpl
%config InlineBackend.figure_format = 'retina'
def normalize(X):
X = X.copy()
for colname in X.columns:
u = np.mean(X[colname])
s = np.std(X[colname])
if s>0.0:
X[colname] = (X[colname] - u) / s
else:
X[colname] = (X[colname] - u)
return X
def plot3d(X, y, b0_range, b1_range):
b0_mesh, b1_mesh = np.meshgrid(b0_range, b1_range, indexing='ij')
L = np.zeros(b0_mesh.shape)
for i in range(len(b0_range)):
for j in range(len(b1_range)):
L[i,j] = loss([b0_range[i],b1_range[j]], X=X, y=y)
fig = plt.figure(figsize=(5,4))
ax = fig.add_subplot(111, projection='3d')
surface = ax.plot_surface(b0_mesh, b1_mesh, L, alpha=0.7, cmap='coolwarm')
ax.set_xlabel('$\\beta_0$', fontsize=14)
ax.set_ylabel('$\\beta_1$', fontsize=14)
```
## Simple function gradient descent
Let's define a very simple quadratic in one variable, $y = f(x) = (x-2)^2$ and then use an iterative solution to find the minimum value.
```
def f(x) : return (x-2)**2
```
We can hide all of the plotting details in a function, as we will use it multiple times.
```
def fplot(f,xrange,fstr='',x0=None,xn=None):
plt.figure(figsize=(3.5,2))
lx = np.linspace(*xrange,200)
fx = [f(x) for x in lx]
plt.plot(lx, fx, lw=.75)
if x0 is not None:
plt.scatter([x0], [f(x0)], c='orange')
plt.scatter([xn], [f(xn)], c='green')
plt.xlabel("$x$", fontsize=12)
plt.ylabel(fstr, fontsize=12)
fplot(f, xrange=(0,4), fstr="$(x-2)^2$")
```
To minimize a function of $x$, we need the derivative of $f(x)$, which is just a function that gives the slope of the curve at every $x$.
**1. Define a function returning the derivative of $f(x)$**
You can ask for symbolic derivatives at a variety of sites, but here's one [solution](https://www.symbolab.com/solver/derivative-calculator/%5Cfrac%7Bd%7D%7Bdx%7D%5Cleft(x-2%5Cright)%5E%7B2%7D).
```
def df(x): ...
```
<details>
<summary>Solution</summary>
<pre>
def df(x): return 2*(x-2)</pre>
</details>
**2. Pick an initial $x$ location and take a single step according to the derivative**
Use a learning rate of $\eta = 0.4$. The output should be `1.76`. (Also keep in mind that the minimum value is clearly at $x=2$.)
```
x = .8 # initial x location
x = ...
print(x)
```
<details>
<summary>Solution</summary>
<pre>
x = x - .4 * df(x); print(x)
</pre>
</details>
**Q.** How can we symbolically optimize a quadratic function like this with a single minimum?
<details>
<summary>Solution</summary>
When the derivative goes to zero, it means the curve is flat, which in turn means we are at the function minimum. Set the derivative equal to zero and solve for $x$: $\frac{d}{dx} (x-2)^2 = 2(x-2) = 2x-4 = 0$. Solving for $x$ gives $x=2$.
</details>
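As a quick cross-check, the same answer can be obtained symbolically. This is a small sketch using `sympy`, an extra dependency that is not part of the lab's setup:
```
import sympy as sp

x = sp.symbols('x')
df = sp.diff((x - 2)**2, x)   # 2*x - 4
print(sp.solve(df, x))        # [2], so the minimum is at x = 2
```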
**3. Create a loop that takes five more steps (same learning rate)**
The output should look like:
```
1.952
1.9904
1.99808
1.999616
1.9999232
```
```
for i in range(5):
x = x - 0.4 * df(x);
print(x)
```
<details>
<summary>Solution</summary>
<pre>
for i in range(5):
x = x - 0.4 * df(x); print(x)
</pre>
</details>
Notice how fast the iteration moves $x$ to the location where $f(x)$ is minimum!
### Minimizing a more complicated function
This iterative minimization approach works for any (smooth) function, assuming we choose a small enough learning rate. For example, let's find one of the minima for $f(x) = x \sin(0.6x)$ in the range \[-1,10\]. The plot should look something like:
<img src="xsinx.png" width="200">
Depending on where we start, minimization will find either the minimum at $x=0$ or the one near $x=8.18$. The location of the lowest function value is called the global minimum and any others are called local minima.
**1. Define a function for $x \sin(0.6x)$**
```
def f(x) : ...
```
<details>
<summary>Solution</summary>
<pre>
def f(x) : return np.sin(0.6*x)*x
</pre>
</details>
```
fplot(f, xrange=(-1,10), fstr="$x \sin(0.6x)$")
#plt.tight_layout(); plt.savefig("xsinx.png",dpi=150,bbox_inches=0)
```
**2. Define the derivative function: $\frac{df}{dx} = 0.6x \cos(0.6 x) + \sin(0.6 x)$**
```
def df(x): ...
```
<details>
<summary>Solution</summary>
<pre>
def df(x): return 0.6*x * np.cos(0.6*x) + np.sin(0.6*x)
</pre>
</details>
**3. Pick a random initial value, $x_0$, between -1 and 10; display that value**
```
x0 = np.random.rand()*11 - 1 # pick value between -1 and 10
x0
```
**4. Start $x$ at $x_0$ and iterate 12 times using the gradient descent method**
Use a learning rate of 0.4.
```
x = x0
for i in range(12):
x = x - .4 * df(x); print(f"{x:.10f}")
```
**5. Plot the starting and stopping locations on the curve**
```
fplot(f, xrange=(-1,10), fstr="$x \sin(0.6x)$", x0=x0, xn=x)
```
**6. Rerun the notebook several times to see how the random start location affects where it terminates.**
**Q.** Rather than iterating a fixed number of times, what's a better way to terminate the iteration?
<details>
<summary>Solution</summary>
A simple stopping condition is when the (norm of the) gradient goes to zero, meaning that it does not suggest moving in any direction to get a lower function value. We could also check whether the new $x$ location differs substantially from the previous one and stop when it does not.
</details>
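A minimal sketch of such a stopping rule (the tolerance and iteration cap are arbitrary illustrative values, not part of the lab):
```
def minimize_until_converged(df, x0, eta=0.4, tol=1e-8, max_iter=10_000):
    """Gradient descent that stops once the gradient is numerically zero."""
    x = x0
    for _ in range(max_iter):
        g = df(x)
        if abs(g) < tol:   # gradient ~ 0: no direction lowers f further
            break
        x = x - eta * g
    return x
```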
## The effect of learning rate on convergence
Let's move back to the simple function $f(x) = (x-2)^2$ and consider different learning rates to see the effect.
```
def df(x): return 2*(x-2)
```
Let's codify the minimization process in a handy function:
```
def minimize(df,x0,eta):
x = x0
for i in range(10):
x = x - eta * df(x);
print(f"{x:.2f}")
```
**1. Update the gradient descent loop to use a learning rate of 1.0**
Notice how the learning rate is so large that iteration oscillates between two (incorrect) solutions. The output should be:
```
3.20
0.80
3.20
0.80
3.20
0.80
3.20
0.80
3.20
0.80
```
```
minimize(df, x0=0.8, eta=...)
```
**2. Update the gradient descent loop to use a learning rate of 2.0**
Notice how the solution diverges when the learning rate is too big. The output should be:
```
5.60
-8.80
34.40
-95.20
293.60
-872.80
2626.40
-7871.20
23621.60
-70856.80
```
```
minimize(df, x0=0.8, eta=...)
```
**3. Update the gradient descent loop to use a learning rate of 0.01**
Notice how **slowly** the solution converges when the learning rate is too small. The output should be:
```
0.82
0.85
0.87
0.89
0.92
0.94
0.96
0.98
1.00
1.02
```
```
minimize(df, x0=0.8, eta=...)
```
**Q.** How do you choose the learning rate $\eta$?
<details>
<summary>Solution</summary>
The learning rate is specific to each problem, unfortunately. A general strategy is to start with a small $\eta$ and gradually increase it until the iteration starts to oscillate around the solution, then back off a little bit. Having a single global learning rate for un-normalized data usually means very slow convergence: a learning rate small enough to be appropriate for a variable with a small range is unlikely to be appropriate for a variable with a large range. This is overcome with more sophisticated gradient descent methods, such as the Adagrad strategy you will use in your project. In that case, we keep a history of gradients and use it to speed up descent in directions that are historically shallow in the gradient.
</details>
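For reference, a bare-bones sketch of that idea, i.e., an Adagrad-style per-coordinate step. The learning rate and epsilon are illustrative choices, and `grad` stands for whatever gradient function you implement:
```
import numpy as np

def adagrad_step(B, grad, cache, eta=0.1, eps=1e-8):
    """One Adagrad-style update: shrink steps along coordinates with a large gradient history."""
    cache = cache + grad**2
    B = B - eta * grad / (np.sqrt(cache) + eps)
    return B, cache

# usage sketch for a two-parameter loss:
# B, cache = np.zeros(2), np.zeros(2)
# B, cache = adagrad_step(B, grad, cache)
```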
## Examine loss surface for LSTAT var from Boston dataset
Turning to a common toy data set, the Boston housing data set, let's pick the most important single feature and look at the loss function for simple OLS regression.
**1. Load the Boston data set into a data frame**
```
boston = load_boston()
X = pd.DataFrame(boston.data, columns=boston.feature_names)
y = boston.target
X.head()
```
**2. Train an OLS linear regression model**
```
lm = LinearRegression()
lm.fit(X, y)
```
**3. Using `rfpimp` package, display the feature importances**
```
from rfpimp import *
I = importances(lm, X, y)
plot_importances(I)
```
**4. LSTAT is the most important variable, so train a new model with just `X['LSTAT']`**
Print out the true $\beta_0, \beta_1$ coefficients.
```
X_ = X['LSTAT'].values.reshape(-1,1) # Extract just one x variable
lm = LinearRegression()
lm.fit(X_, y)
print(f"True OLS coefficients: {np.array([lm.intercept_]+list(lm.coef_))}")
```
**5. Show marginal plot of LSTAT vs price**
```
fig, ax1 = plt.subplots(figsize=(5,2.0))
ax1.scatter(X_, y, s=15, alpha=.5)
lx = np.linspace(np.min(X_), np.max(X_), num=len(X))
ax1.plot(lx, lm.predict(lx.reshape(-1,1)), c='orange')
ax1.set_xlabel("LSTAT", fontsize=10)
ax1.set_ylabel("price", fontsize=10)
plt.show()
```
**6. Define an MSE loss function for single variable regression**
$$
\frac{1}{n} \sum_{i=1}^n (y^{(i)} - (\beta_0 + \beta_1 x^{(i)}))^2
$$
```
def loss(B,X,y): # B=[beta0, beta1]
y_pred = ...
return np.mean(...)
```
<details>
<summary>Solution</summary>
<pre>
def loss(B,X,y):
y_pred = B[0] + X*B[1]
return np.mean((y - y_pred)**2)
</pre>
</details>
**7. Check the loss function value at the true OLS coordinates**
```
loss(np.array([34.55384088, -0.95004935]), X_, y) # demo loss function at minimum
```
**8. Plot the loss function in 3D in region around $\beta$s**
When you enter the correct loss function above, the plot should look something like:
<img src="boston-loss.png" width="200">
```
b0_range = np.linspace(-50, 120, 70)
b1_range = np.linspace(-6, 4, 70)
plot3d(X_, y, b0_range, b1_range)
#plt.tight_layout(); plt.savefig("boston-loss.png",dpi=150,bbox_inches=0)
```
### Repeat using normalized data
**1. Normalize the $x$ variables**
```
X_norm = normalize(X)
```
**2. Retrain the model**
```
X_ = X_norm['LSTAT'].values.reshape(-1,1)
lm = LinearRegression()
lm.fit(X_, y)
print(f"True OLS coefficients: {np.array([lm.intercept_]+list(lm.coef_))}")
```
**3. Show the marginal plot again**
Notice how only the $x$ scale has changed but not $y$, nor has the shape changed.
```
fig, ax1 = plt.subplots(figsize=(5,2.0))
ax1.scatter(X_, y, s=15, alpha=.5)
lx = np.linspace(np.min(X_), np.max(X_), num=len(X))
ax1.plot(lx, lm.predict(lx.reshape(-1,1)), c='orange')
ax1.set_xlabel("LSTAT", fontsize=10)
ax1.set_ylabel("price", fontsize=10)
plt.show()
```
**4. Plot the cost surface with a region around the new minimum location**
```
b0_range = np.linspace(15, 30, 70)
b1_range = np.linspace(-10, 5, 70)
plot3d(X_, y, b0_range, b1_range)
```
**Q.** Compare the loss function contour lines of the unnormalized and normalized variables.
<details>
<summary>Solution</summary>
The normalized variables result in a bowl-shaped loss function with roughly circular contours. A gradient descent method with a single learning rate will converge much faster given this shape.
</details>
**Q.** Look at the loss function directly from above; in which direction do the gradients point?
<details>
<summary>Solution</summary>
The negative of the gradient points directly at the location of the minimum loss value. The gradient itself, however, points in the exact opposite direction.</details>
# CIFAR10 Image Classification Using DenseNet-121
## Basic import
```
import tensorflow as tf
import fastestimator as fe
import matplotlib.pyplot as plt
import numpy as np
```
## Step1: Create FastEstimator `Pipeline`
### Load Dataset
First, we load the training and evaluation datasets into memory using the Keras API.
```
(x_train, y_train), (x_eval, y_eval) = tf.keras.datasets.cifar10.load_data()
print("train image shape is {}".format(x_train.shape))
print("train label shape is {}".format(y_train.shape))
print("eval image shape is {}".format(x_eval.shape))
print("eval label shape is {}".format(y_eval.shape))
#Parameters
epochs = 50
steps_per_epoch = None
validation_steps = None
batch_size = 64
```
### Define `Pipeline`
`Pipeline` is the object that defines how the training and evaluation data are ingested into the network.
It has three basic arguments:
* **batch_size**: (int) The batch size
* **data**: (dict) the data source. It should be a nested dictionary like {"mode1": {"feature1": numpy_array, "feature2": numpy_array, ...}, ...}
* **ops**: (list, obj) The list of pipeline processing blocks. For this example we only use Minmax, so a single object can be passed.
```
from fastestimator.op.tensorop import Minmax
batch_size = batch_size
data = {"train": {"x": x_train,
"y": y_train},
"eval": {"x": x_eval,
"y": y_eval}}
pipeline = fe.Pipeline(batch_size=batch_size, data=data, ops=Minmax(inputs="x", outputs="x2"))
```
### Validate The Input Pipeline
Once the pipeline is created, it is a good idea to validate it with the pipeline method **show_results**, which returns a sample batch of pipeline data and gives you an idea of how it works.
Because the pipeline has two different modes, "train" and "eval", we can take a look at an example from each.
```
fig, ax = plt.subplots(1,2)
train_sample = pipeline.show_results(mode="train")
print("the shape of train image batch is {}".format(train_sample[0]["x"].numpy().shape))
print("the shape of train label batch is {}".format(train_sample[0]["y"].numpy().shape))
ax[0].imshow(train_sample[0]["x"].numpy()[0])
ax[0].set_title("the first image in train batch")
eval_sample = pipeline.show_results(mode="eval")
print("the shape of eval image batch is {}".format(eval_sample[0]["x"].numpy().shape))
print("the shape of eval label batch is {}".format(eval_sample[0]["y"].numpy().shape))
ax[1].imshow(eval_sample[0]["x"].numpy()[0])
ax[1].set_title("the first image in eval batch")
plt.show()
```
### Validate The Pipeline Output
There are three keys in the pipeline output:
1. "y": the label
2. "x": the input image
3. "x2": the processed output image.
In the previous example we only validated the input image. We still need to validate the processed output image, since that is what will actually be fed to the network. <br/>
The image processing chain only has the Minmax operation, which maps the minimum pixel value to 0 and the maximum to 1.
```
print("In train_sample[\"x\"] the max is {}, the min is {}".format(np.max(train_sample[0]["x"].numpy()), np.min(train_sample[0]["x"].numpy())))
print("In train_sample[\"x2\"] the max is {}, the min is {}".format(np.max(train_sample[0]["x2"].numpy()), np.min(train_sample[0]["x2"].numpy())))
print("In eval_sample[\"x\"] the max is {}, the min is {}".format(np.max(eval_sample[0]["x"].numpy()), np.min(eval_sample[0]["x"].numpy())))
print("In eval_sample[\"x2\"] the max is {}, the min is {}".format(np.max(eval_sample[0]["x2"].numpy()), np.min(eval_sample[0]["x2"].numpy())))
```
## Step2: Create FastEstimator `Network`
`Network` is the object that defines the whole logic of the neural network, including models, loss functions, optimizers, etc.
A Network can have several different models and loss functions (as in a GAN), but in this case we are going to build a single-model network.
### Define Keras Model Function
The model architecture in FastEstimator is defined with the TensorFlow (Keras) API. Here we use the pre-defined Keras function for building DenseNet-121 and follow it with a custom layer to make it fit the CIFAR-10 dataset.
```
from tensorflow.keras.applications.densenet import DenseNet121
from tensorflow.keras.layers import Dense, Input
def DenseNet121_cifar10():
inputs = Input((32,32,3))
x = DenseNet121(weights=None, input_shape=(32,32,3), include_top=False, pooling='avg')(inputs)
outputs = Dense(10, activation='softmax')(x)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
return model
```
### Compile model
Here we compile the model with `fe.build`, which has four arguments:
* **model_def**: The model definition function.
* **model_name**: The name of the model. It will be used when storing the model.
* **optimizer**: The optimizer. It can either be str or tf.optimizers object.
* **loss_name**: The name of the loss. Please be aware that it is a dictionary key name and will be used in the `Network` definition.
```
from fastestimator.op.tensorop import ModelOp, SparseCategoricalCrossentropy
model = fe.build(model_def=DenseNet121_cifar10,
model_name="densenet121",
optimizer="adam",
loss_name="loss")
```
### Define `Network` from `FEModel`
So far we already have the `FEModel` and the `Pipeline`, but how these components connect to each other is not defined yet.
The `Network` API exists for this reason. Its input argument is a list of operations, each of which has IO "keys". By sharing keys, those operations can be connected in whatever way you like.
```
network = fe.Network(ops=[
ModelOp(inputs="x2", model=model, outputs="y_pred"),
SparseCategoricalCrossentropy(y_true="y", y_pred="y_pred", outputs="loss"),
])
```
The network connects as shown in the following graph:
<img src="network_workflow.png">
## Step 3: Create `Estimator`
`Estimator` is the API that wraps up the `Pipeline`, `Network`, and other training metadata.
The `Estimator` basically has 4 arguments:
* **pipeline**: the pipeline
* **network**: the network
* **epochs**: the number of training epochs
* **traces**: the list of `Trace` objects. They are much like Keras callbacks: each trace is called at specific points during training. Here we use **Accuracy** to compute model accuracy, **ModelSaver** to save the best model checkpoint, and **LRController** to adapt the learning rate.
```
import tempfile
from fastestimator.trace import Accuracy, ModelSaver, LRController, TensorBoard
save_dir = tempfile.mkdtemp()
estimator = fe.Estimator(
network=network,
pipeline=pipeline,
epochs=epochs,
steps_per_epoch=steps_per_epoch,
validation_steps=validation_steps,
traces=[
Accuracy(true_key="y", pred_key="y_pred"),
ModelSaver(model_name="densenet121", save_dir=save_dir, save_best=True),
LRController(model_name="densenet121", reduce_on_eval=True)
])
```
## Start Training
We use the `Estimator` method **fit** to train the model.
```
estimator.fit()
```
## Validate Model
After training the model, we may want to validate it by running inference on the evaluation dataset. Because FastEstimator does not yet support inference through the estimator, we use the Keras API.
First, load the Keras model (stored by **ModelSaver**):
```
import os
model_path = os.path.join(save_dir, 'densenet121_best_loss.h5')
trained_model = tf.keras.models.load_model(model_path, compile=False)
```
Because the Keras model does not include the data preprocessing pipeline, we cannot feed the raw dataset to it directly. Instead, we create the same pipeline again with a batch size equal to the whole evaluation dataset and feed the processed data to the Keras model.
```
pipeline = fe.Pipeline(batch_size=10000, data=data, ops=Minmax(inputs="x", outputs="x2"))
eval_sample = pipeline.show_results(mode="eval")
x_input = eval_sample[0]["x2"].numpy()
y_input = eval_sample[0]["y"].numpy()
y_output = trained_model.predict(x_input)
y_predict = np.argmax(y_output, axis=1).reshape(10000,1)
print("the evaluation accuracy is {}".format(np.count_nonzero((y_input == y_predict))/10000))
```
Let's have a look at a random inference sample.
```
rand_int = np.random.randint(10000)
fig, ax = plt.subplots()
ax.imshow(x_input[rand_int])
ax.set_title("the input image")
print("the ground truth label is {}, and the prediction is {}".format(y_input[rand_int], y_predict[rand_int]))
```
<img src="../../images/qiskit-heading.gif" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="500 px" align="left">
The latest version of this notebook is available on https://github.com/Qiskit/qiskit-tutorial.
# Writing a Transpiler Pass
## Introduction
A central component of Qiskit Terra is the transpiler, which is designed for modularity and extensibility. The goal is to be able to easily write new circuit transformations (known as transpiler *passes*), and combine them with other existing passes. In this way, the transpiler opens up the door for research into aggressive optimization of quantum circuits.
In this notebook, we show how to develop a simple transpiler pass. To do so, we first introduce the internal representation of quantum circuits in Qiskit, in the form of a Directed Acyclic Graph or DAG. Then, we illustrate a simple swap mapper pass, which transforms an input circuit to be compatible with a limited-connectivity quantum device.
## Introducing the DAG
In Qiskit, we represent circuits internally using a Directed Acyclic Graph or **DAG**. The advantage of this representation over a pure list of gates (i.e., a *netlist*) is that the flow of information between operations is explicit, making it easier for passes to make transformation decisions without changing the semantics of the circuit.
Let's start by building a simple circuit, and examining its DAG.
```
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit
from qiskit.dagcircuit import DAGCircuit
q = QuantumRegister(3, 'q')
c = ClassicalRegister(3, 'c')
circ = QuantumCircuit(q, c)
circ.h(q[0])
circ.cx(q[0], q[1])
circ.measure(q[0], c[0])
circ.rz(0.5, q[1]).c_if(c, 2)
circ.draw()
```
In the DAG, there are 3 kinds of graph nodes: qubit/clbit input nodes (green), operation nodes (blue), and output nodes (red). Each edge indicates data flow (or dependency) between two nodes.
```
from qiskit.converters import circuit_to_dag
from qiskit.tools.visualization import dag_drawer
dag = circuit_to_dag(circ)
dag_drawer(dag)
```
Therefore, writing a transpiler pass means using Qiskit's DAGCircuit API to analyze or transform the circuit. Let's see some examples of this.
**a. Get all op nodes in the DAG:**
```
dag.op_nodes()
```
Each node is an instance of the ``DAGNode`` class. Let's examine the information stored in one of them, here the op node at index 3.
```
node = dag.op_nodes()[3]
print("node name: ", node.name)
print("node op: ", node.op)
print("node qargs: ", node.qargs)
print("node cargs: ", node.cargs)
print("node condition: ", node.condition)
```
**b. Add an operation to the back:**
```
from qiskit.extensions.standard import HGate
dag.apply_operation_back(HGate(), qargs=[q[0]])
dag_drawer(dag)
```
**c. Add an operation to the front:**
```
from qiskit.extensions.standard import ToffoliGate
dag.apply_operation_front(ToffoliGate(), qargs=[q[0], q[1], q[2]], cargs=[])
dag_drawer(dag)
```
**d. Substitute a node with a subcircuit:**
```
from qiskit.extensions.standard import CHGate, U2Gate, CnotGate
mini_dag = DAGCircuit()
p = QuantumRegister(2, "p")
mini_dag.add_qreg(p)
mini_dag.apply_operation_back(CHGate(), qargs=[p[1], p[0]])
mini_dag.apply_operation_back(U2Gate(0.1, 0.2), qargs=[p[1]])
# substitute the cx node with the above mini-dag
cx_node = dag.op_nodes(op=CnotGate).pop()
dag.substitute_node_with_dag(node=cx_node, input_dag=mini_dag, wires=[p[0], p[1]])
dag_drawer(dag)
```
Finally, after all transformations are complete, we can convert back to a regular QuantumCircuit object.
This is what the transpiler does! It takes a circuit, operates on it in DAG form, and outputs a transformed circuit.
```
from qiskit.converters import dag_to_circuit
circuit = dag_to_circuit(dag)
circuit.draw()
```
## Implementing a BasicMapper Pass
Now that we are familiar with the DAG, let's use it to write a transpiler pass. Here we will implement a basic pass for mapping an arbitrary circuit to a device with limited qubit connectivity. We will call this the BasicMapper. This pass is included in Qiskit Terra as well.
The first thing to do when writing a transpiler pass is to decide whether the pass class derives from a ``TransformationPass`` or an ``AnalysisPass``. Transformation passes modify the circuit, while analysis passes only collect information about a circuit (to be used by other passes). Then the ``run(dag)`` method is implemented, which does the main task. Finally, the pass has to be registered inside the ``qiskit.transpiler.passes`` module.
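For contrast with the transformation pass developed below, here is a minimal sketch of what an analysis pass could look like. This is a toy example, not part of Qiskit, and the property name `num_ops` is an arbitrary choice:
```
from qiskit.transpiler.basepasses import AnalysisPass

class CountOps(AnalysisPass):
    """Toy analysis pass: count the op nodes and record the result in the property set."""
    def run(self, dag):
        # Analysis passes never modify the DAG; they only write to self.property_set.
        self.property_set["num_ops"] = len(dag.op_nodes())
```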
This pass functions as follows: it traverses the DAG layer by layer (each layer is a group of operations that acts on independent qubits, so in principle all operations in a layer can be executed in parallel). For each two-qubit operation, if it does not already meet the coupling-map constraints, the pass identifies a swap path and inserts swaps to bring the two qubits adjacent to each other.
Follow the comments in the code for more details.
```
from copy import copy
from qiskit.transpiler.basepasses import TransformationPass
from qiskit.transpiler import Layout
from qiskit.extensions.standard import SwapGate
class BasicSwap(TransformationPass):
"""
Maps (with minimum effort) a DAGCircuit onto a `coupling_map` adding swap gates.
"""
def __init__(self,
coupling_map,
initial_layout=None):
"""
Maps a DAGCircuit onto a `coupling_map` using swap gates.
Args:
coupling_map (CouplingMap): Directed graph represented a coupling map.
initial_layout (Layout): initial layout of qubits in mapping
"""
super().__init__()
self.coupling_map = coupling_map
self.initial_layout = initial_layout
def run(self, dag):
"""
Runs the BasicSwap pass on `dag`.
Args:
dag (DAGCircuit): DAG to map.
Returns:
DAGCircuit: A mapped DAG.
Raises:
TranspilerError: if the coupling map or the layout are not
compatible with the DAG
"""
new_dag = DAGCircuit()
if self.initial_layout is None:
if self.property_set["layout"]:
self.initial_layout = self.property_set["layout"]
else:
self.initial_layout = Layout.generate_trivial_layout(*dag.qregs.values())
if len(dag.qubits()) != len(self.initial_layout):
raise TranspilerError('The layout does not match the amount of qubits in the DAG')
if len(self.coupling_map.physical_qubits) != len(self.initial_layout):
raise TranspilerError(
"Mappers require to have the layout to be the same size as the coupling map")
current_layout = self.initial_layout.copy()
for layer in dag.serial_layers():
subdag = layer['graph']
for gate in subdag.twoQ_gates():
physical_q0 = current_layout[gate.qargs[0]]
physical_q1 = current_layout[gate.qargs[1]]
if self.coupling_map.distance(physical_q0, physical_q1) != 1:
# Insert a new layer with the SWAP(s).
swap_layer = DAGCircuit()
path = self.coupling_map.shortest_undirected_path(physical_q0, physical_q1)
for swap in range(len(path) - 2):
connected_wire_1 = path[swap]
connected_wire_2 = path[swap + 1]
qubit_1 = current_layout[connected_wire_1]
qubit_2 = current_layout[connected_wire_2]
# create qregs
for qreg in current_layout.get_registers():
if qreg not in swap_layer.qregs.values():
swap_layer.add_qreg(qreg)
# create the swap operation
swap_layer.apply_operation_back(SwapGate(),
qargs=[qubit_1, qubit_2],
cargs=[])
# layer insertion
edge_map = current_layout.combine_into_edge_map(self.initial_layout)
new_dag.compose_back(swap_layer, edge_map)
# update current_layout
for swap in range(len(path) - 2):
current_layout.swap(path[swap], path[swap + 1])
edge_map = current_layout.combine_into_edge_map(self.initial_layout)
new_dag.extend_back(subdag, edge_map)
return new_dag
```
Let's test this pass on a small example circuit.
```
q = QuantumRegister(7, 'q')
in_circ = QuantumCircuit(q)
in_circ.h(q[0])
in_circ.cx(q[0], q[4])
in_circ.cx(q[2], q[3])
in_circ.cx(q[6], q[1])
in_circ.cx(q[5], q[0])
in_circ.rz(0.1, q[2])
in_circ.cx(q[5], q[0])
```
Now we construct a pass manager that contains our new pass. We pass the example circuit above to this pass manager, and obtain a new, transformed circuit.
```
from qiskit.transpiler import PassManager
from qiskit.transpiler import CouplingMap
from qiskit import BasicAer
pm = PassManager()
coupling = [[0, 1], [1, 2], [2, 3], [3, 4], [4, 5], [5, 6]]
coupling_map = CouplingMap(couplinglist=coupling)
pm.append([BasicSwap(coupling_map)])
out_circ = pm.run(in_circ)
in_circ.draw(output='mpl')
out_circ.draw(output='mpl')
```
Note that this pass only inserts the swaps necessary to make every two-qubit interaction conform to the device coupling map. It does not, for example, care about the direction of interactions, or the native gate set supported by the device. This is a design philosophy of Qiskit's transpiler: every pass performs a small, well-defined action, and the aggressive circuit optimization is achieved by the pass manager through combining multiple passes.
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
def abline(a, b, label_, c=None):
"""Plot a line from slope and intercept"""
axes = plt.gca()
x_vals = np.array(axes.get_xlim())
y_vals = a * x_vals + b
plt.plot(x_vals, y_vals, label=label_, color=c, zorder=1)
num_sample = 290
num_outlier = 10
total_samples = num_sample + num_outlier
theta = [-1, 0.2]
#X = np.random.random_sample(num_sample) * 0.5
X = np.random.normal(0, 0.5, num_sample)
y = theta[0]*X + theta[1]
y += np.random.normal(0, 0.1, num_sample)
X_out = np.random.normal(-2, 0.5, num_outlier)
y_out = np.random.normal(0, 0.5, num_outlier)
plt.figure(figsize=(8, 8))
plt.scatter(X, y, s=2, label='the major cluster of data')
plt.scatter(X_out, y_out, s=2, label="outliers")
abline(theta[0], theta[1], 'ground truth', "red")
plt.legend()
y = np.append(y, y_out)
X = np.append(X, X_out)
print(X.shape, y.shape)
iters = 20000
X = X.reshape((total_samples, 1))
intercept = np.ones((X.shape[0], 1))
X_concatenate = np.concatenate((X, intercept), axis=1)
thetas=[]
max_=[]
min_=[]
avg_=[]
var_=[]
ts=[]
# Sweep the tilt parameter t. For each t the objective is the exponentially tilted
# squared error (1/t) * log(sum(exp(t * (y - y_pred)^2))): t < 0 down-weights outliers,
# while t > 0 emphasizes them.
for t in np.arange(-10, 0, 0.2):
theta_hat = np.zeros(2)
for _ in range(iters):
y_pred = np.dot(X_concatenate, theta_hat)
error = (y-y_pred)**2
grad = np.dot(-1*X_concatenate.T, np.multiply(np.exp(t*error), 2 * (y-y_pred)))
loss_mean = np.sum(np.exp(t * error))
theta_hat = theta_hat - 0.01 * grad/loss_mean
thetas.append([theta_hat[0], theta_hat[1]])
print(theta_hat)
loss = error * 0.5
ts.append(t)
avg_.append(np.mean(loss))
max_.append(max(loss))
min_.append(min(loss))
var_.append(np.var(loss))
print("t={}, max loss: {}, min loss: {}, avg loss: {}, variance: {}".format(t, max(loss), min(loss), np.mean(loss), np.var(loss)))
# Repeat the sweep for non-negative tilts (t = 0 corresponds to ordinary least squares).
for t in np.arange(0, 10, 0.2):
theta_hat = np.zeros(2)
for _ in range(iters):
y_pred = np.dot(X_concatenate, theta_hat)
error = (y-y_pred)**2
grad = np.dot(-1*X_concatenate.T, np.multiply(np.exp(t*error), 2 * (y-y_pred)))
loss_mean = np.sum(np.exp(t * error))
theta_hat = theta_hat - 0.01 * grad/loss_mean
thetas.append([theta_hat[0], theta_hat[1]])
print(theta_hat)
loss = error * 0.5
ts.append(t)
avg_.append(np.mean(loss))
max_.append(max(loss))
min_.append(min(loss))
var_.append(np.var(loss))
print("t={}, max loss: {}, min loss: {}, avg loss: {}, variance: {}".format(t, max(loss), min(loss), np.mean(loss), np.var(loss)))
import matplotlib.pylab as pl
from matplotlib import rc
rc('text', usetex=True)
colors_positive = pl.cm.Reds(np.linspace(0,0.8, 50))
colors_negative = pl.cm.Blues(np.linspace(0, 0.8, 50))
plt.figure(figsize=(4, 3.5))
ax = plt.subplot(1, 1, 1)
print(len(thetas))
for i in range(len(thetas)):
if i > 50:
abline(thetas[i][0], thetas[i][1], None, c=colors_positive[min(int((i-50)*1.1), 39)])
elif i < 50:
abline(thetas[i][0], thetas[i][1], None, c=colors_negative[min(int((49-i)*3.2), 49)])
plt.scatter(X, y, s=1, c='#8c564b', zorder=2)
plt.scatter(X_out, y_out, s=3, c='#8c564b', zorder=2)
abline(thetas[50][0], thetas[50][1], None, c='#e377c2')
ax.tick_params(color='#dddddd')
ax.spines['bottom'].set_color('#dddddd')
ax.spines['top'].set_color('#dddddd')
ax.spines['right'].set_color('#dddddd')
ax.spines['left'].set_color('#dddddd')
plt.xlim(-3.5, 2.5)
plt.ylim(-1.2, 1.8)
plt.title("linear regression", fontsize=17)
plt.xlabel(r'$x$', fontsize=17)
plt.ylabel(r'$y$', fontsize=17)
plt.tight_layout()
plt.savefig("2-linear_regression.pdf")
```
# Adadelta
:label:`sec_adadelta`
Adadelta is yet another variant of AdaGrad. The main difference lies in the fact that it decreases the amount by which the learning rate is adaptive to coordinates. Moreover, it is traditionally referred to as not having a learning rate, since it uses the amount of change itself as the calibration for future change. The algorithm was proposed in :cite:`Zeiler.2012`. It is fairly straightforward, given the discussion of previous algorithms so far.
## The Algorithm
In a nutshell Adadelta uses two state variables, $\mathbf{s}_t$ to store a leaky average of the second moment of the gradient and $\Delta\mathbf{x}_t$ to store a leaky average of the second moment of the change of parameters in the model itself. Note that we use the original notation and naming of the authors for compatibility with other publications and implementations (there is no other real reason why one should use different Greek variables to indicate a parameter serving the same purpose in momentum, Adagrad, RMSProp, and Adadelta). The parameter du jour is $\rho$. We obtain the following leaky updates:
$$\begin{aligned}
\mathbf{s}_t & = \rho \mathbf{s}_{t-1} + (1 - \rho) \mathbf{g}_t^2, \\
\mathbf{g}_t' & = \sqrt{\frac{\Delta\mathbf{x}_{t-1} + \epsilon}{\mathbf{s}_t + \epsilon}} \odot \mathbf{g}_t, \\
\mathbf{x}_t & = \mathbf{x}_{t-1} - \mathbf{g}_t', \\
\Delta \mathbf{x}_t & = \rho \Delta\mathbf{x}_{t-1} + (1 - \rho) {\mathbf{g}_t'}^2.
\end{aligned}$$
The difference from before is that we perform updates with the rescaled gradient $\mathbf{g}_t'$, which is computed by taking the ratio between the average squared rate of change and the average second moment of the gradient. The use of $\mathbf{g}_t'$ is purely for notational convenience. In practice we can implement this algorithm without the need to use additional temporary space for $\mathbf{g}_t'$. As before, $\epsilon$ is a parameter ensuring nontrivial numerical results, i.e., avoiding zero step size or infinite variance. Typically we set this to $\epsilon = 10^{-5}$.
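For concreteness, here is a minimal NumPy sketch of a single Adadelta step that mirrors the four equations above (illustrative only; the DJL implementation used in this chapter follows below):
```
import numpy as np

def adadelta_step(x, grad, s, delta, rho=0.9, eps=1e-5):
    # s_t: leaky average of the squared gradient
    s = rho * s + (1 - rho) * grad ** 2
    # g'_t: rescaled gradient
    g_rescaled = np.sqrt((delta + eps) / (s + eps)) * grad
    # x_t: parameter update
    x = x - g_rescaled
    # Delta x_t: leaky average of the squared rescaled gradient
    delta = rho * delta + (1 - rho) * g_rescaled ** 2
    return x, s, delta
```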
## Implementation
Adadelta needs to maintain two state variables for each variable, $\mathbf{s}_t$ and $\Delta\mathbf{x}_t$. This yields the following implementation.
```
%mavenRepo snapshots https://oss.sonatype.org/content/repositories/snapshots/
%maven ai.djl:api:0.7.0-SNAPSHOT
%maven ai.djl:basicdataset:0.7.0-SNAPSHOT
%maven org.slf4j:slf4j-api:1.7.26
%maven org.slf4j:slf4j-simple:1.7.26
%maven ai.djl.mxnet:mxnet-engine:0.7.0-SNAPSHOT
%maven ai.djl.mxnet:mxnet-native-auto:1.7.0-a
%load ../utils/plot-utils
%load ../utils/Functions.java
%load ../utils/GradDescUtils.java
%load ../utils/Accumulator.java
%load ../utils/StopWatch.java
%load ../utils/Training.java
%load ../utils/TrainingChapter11.java
NDList initAdadeltaStates(int featureDimension) {
NDManager manager = NDManager.newBaseManager();
NDArray sW = manager.zeros(new Shape(featureDimension, 1));
NDArray sB = manager.zeros(new Shape(1));
NDArray deltaW = manager.zeros(new Shape(featureDimension, 1));
NDArray deltaB = manager.zeros(new Shape(1));
return new NDList(sW, deltaW, sB, deltaB);
}
public class Optimization {
public static void adadelta(NDList params, NDList states, Map<String, Float> hyperparams) {
float rho = hyperparams.get("rho");
float eps = (float) 1e-5;
for (int i = 0; i < params.size(); i++) {
NDArray param = params.get(i);
NDArray state = states.get(2 * i);
NDArray delta = states.get(2 * i + 1);
// Update parameter, state, and delta
            // In-place updates use the 'i'-suffixed methods (e.g. muli, addi, subi)
// state = rho * state + (1 - rho) * param.gradient^2
state.muli(rho).addi(param.getGradient().square().mul(1 - rho));
// rescaledGradient = ((delta + eps)^(1/2) / (state + eps)^(1/2)) * param.gradient
NDArray rescaledGradient = delta.add(eps).sqrt()
.div(state.add(eps).sqrt()).mul(param.getGradient());
// param -= rescaledGradient
param.subi(rescaledGradient);
// delta = rho * delta + (1 - rho) * g^2
delta.muli(rho).addi(rescaledGradient.square().mul(1 - rho));
}
}
}
```
Choosing $\rho = 0.9$ amounts to a half-life time of 10 for each parameter update. This tends to work quite well. We get the following behavior.
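Loosely speaking, a leaky average with decay $\rho$ keeps an effective memory of roughly $1/(1-\rho)$ past updates, which is where the figure of 10 comes from:
$$\frac{1}{1 - \rho} = \frac{1}{1 - 0.9} = 10.$$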
```
AirfoilRandomAccess airfoil = TrainingChapter11.getDataCh11(10, 1500);
public TrainingChapter11.LossTime trainAdadelta(float rho, int numEpochs) {
int featureDimension = airfoil.getFeatureArraySize();
Map<String, Float> hyperparams = new HashMap<>();
hyperparams.put("rho", rho);
return TrainingChapter11.trainCh11(Optimization::adadelta,
initAdadeltaStates(featureDimension),
hyperparams, airfoil,
featureDimension, numEpochs);
}
trainAdadelta(0.9f, 2);
```
As usual, for a concise implementation, we would simply create an instance of `adadelta` from the `Optimizer` class; however, Adadelta is not yet implemented in DJL, so the concise version is left as a placeholder below.
```
// TODO: Adadelta not yet implemented in DJL
// Optimizer adadelta = Optimizer.adadelta().optRho(0.9f).build();
// TrainingChapter11.trainConciseCh11(adadelta, airfoil, 2);
```
## Summary
* Adadelta has no learning rate parameter. Instead, it uses the rate of change in the parameters itself to adapt the learning rate.
* Adadelta requires two state variables to store the second moments of gradient and the change in parameters.
* Adadelta uses leaky averages to keep a running estimate of the appropriate statistics.
## Exercises
1. Adjust the value of $\rho$. What happens?
1. Show how to implement the algorithm without the use of $\mathbf{g}_t'$. Why might this be a good idea?
1. Is Adadelta really learning rate free? Could you find optimization problems that break Adadelta?
1. Compare Adadelta to AdaGrad and RMSProp and discuss their convergence behavior.
```
import math
import numpy as np
import pandas as pd
from collections import Counter
from PyImpetus import PPIMBC
from sklearn.svm import LinearSVC, SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import KFold, StratifiedKFold
from sklearn.naive_bayes import GaussianNB
from sklearn.preprocessing import StandardScaler, OneHotEncoder
import time
original_ais_df=pd.read_pickle('AIS_UNACORN_Seatracks.pkl')
original_ais_df.head()
original_ais_df.info()
original_ais_df=original_ais_df[['cog','sog', 'beam','latitude','longitude','heading', 'length','mmsi']].dropna()
original_ais_df=original_ais_df.head(10000)
len(original_ais_df)
data, Y = original_ais_df.drop(['mmsi'], axis=1), original_ais_df['mmsi'].values
data=original_ais_df[['cog','sog', 'beam','latitude','longitude','heading', 'length']]
# We want to time our algorithm
start = time.time()
# Use KFold to evaluate a baseline classifier (no feature selection)
kfold = KFold(n_splits=5, random_state=27, shuffle=True)
# This will hold all the accuracy scores
scores = list()
# Perform CV
for train, test in kfold.split(data):
# Split data into train and test based on folds
x_train, x_test = data.iloc[train], data.iloc[test]
y_train, y_test = Y[train], Y[test]
# Convert the data into numpy arrays
x_train, x_test = x_train.values, x_test.values
model = DecisionTreeClassifier(random_state=27)
model.fit(x_train, y_train)
preds = model.predict(x_test)
score = accuracy_score(y_test, preds)
scores.append(score)
print("Score: ", score)
# Compute average score
print("\n\nAverage Accuracy: ", sum(scores)/len(scores))
# Finally, check out the total time taken
end = time.time()
print("\n\nTotal Time Required (in seconds): ", end-start)
# We want to time our algorithm
start = time.time()
# Use KFold for understanding the performance of PyImpetus
kfold = KFold(n_splits=5, random_state=27, shuffle=True)
# This will hold all the accuracy scores
scores = list()
# Perform CV
for train, test in kfold.split(data):
# Split data into train and test based on folds
x_train, x_test = data.iloc[train], data.iloc[test]
y_train, y_test = Y[train], Y[test]
# Create a PyImpetus classification object and initialize with required parameters
# NOTE: To achieve fast selection, set cv=0 for disabling the use of any internal cross-validation
model = PPIMBC(LogisticRegression(random_state=27), cv=0, num_simul=50, random_state=27, verbose=2)
# Fit this above object on the train part and transform the train dataset into selected feature subset
# NOTE: x_train has to be a dataframe and y_train has to be a numpy array
x_train = model.fit_transform(x_train, y_train)
# Transform the test set as well
# NOTE: x_test has to be a dataframe
x_test = model.transform(x_test)
# Check out the features selected
print("Markov Blanket: ", model.MB)
# Check out the scores of each feature. The scores are in order of the selected feature list
    # NOTE: You can use these scores in a feature selection ensemble
print("Feature importance: ", model.feat_imp_scores)
# Plot the feature importance scores
model.feature_importance()
# Convert the data into numpy arrays
x_train, x_test = x_train.values, x_test.values
model = DecisionTreeClassifier(random_state=27)
model.fit(x_train, y_train)
preds = model.predict(x_test)
score = accuracy_score(y_test, preds)
scores.append(score)
print("Score: ", score)
# Compute average score
print("\n\nAverage Accuracy: ", sum(scores)/len(scores))
# Finally, check out the total time taken
end = time.time()
print("\n\nTotal Time Required (in seconds): ", end-start)
```
# Image Classification with the MNIST Dataset
Deep learning excels at pattern (image) recognition by trial and error. By training a deep neural network with sufficient data and providing the network with feedback on its performance via training, the network can identify, through a huge amount of iteration, its own set of conditions by which it can act in the correct way.
## The MNIST Dataset
The *MNIST dataset*, whose accurate image classification is a classic deep learning task, is a collection of 70,000 grayscale images of handwritten digits from 0-9.
## Training and Validation Data and Labels
When working with images for deep learning, we need both the images themselves, usually denoted as `X`, and also the correct labels for these images, usually denoted as `Y`. Furthermore, we need `X` and `Y` values for training the model and a separate set of `X` and `Y` values for validating the performance of the model after it has been trained. Therefore, we need 4 segments of data for the MNIST dataset:
1. `x_train` - images used for training the neural network
2. `y_train` - correct labels for the `x_train` images, used to evaluate the model's predictions during training
3. `x_valid` - images set aside for validating the performance of the model after it has been trained
4. `y_valid` - correct labels for the `x_valid` images, used to evaluate the model's predictions after it has been trained
The process of preparing data for analysis is called *Data Engineering*.
## Loading the Data into Memory (with Keras)
Keras has many useful built-in functions designed for computer vision tasks. It is also a legitimate choice for deep learning in a professional setting due to its readability and efficiency. One of the many helpful features that Keras provides is a set of modules containing helper methods for many common datasets, including MNIST.
```
from tensorflow.keras.datasets import mnist
```
With the `mnist` module, we can easily load the MNIST data, already partitioned into images and labels for both training and validation.
```
# the data split between train and validation sets
(x_train, y_train), (x_valid, y_valid) = mnist.load_data()
```
### Exploring the MNIST Data
Each image itself is a 2D array with the dimensions 28x28.
```
x_train.shape
x_valid.shape
```
These 28x28 images are represented as a collection of unsigned 8-bit integer values between 0 and 255, the values corresponding to a pixel's grayscale value, where 0 is black, 255 is white, and all other values fall in between.
```
x_train.dtype
x_train.min()
x_train.max()
x_train[0]
```
Using `matplotlib` we can render one of these grayscale images in our dataset.
```
import matplotlib.pyplot as plt
image = x_train[0]
plt.imshow(image, cmap='gray')
```
The answer to which digit this image shows is in the `y_train` data, which contains the correct labels for the data.
```
y_train[0]
```
## Preparing the Data for Training
In deep learning, it is common that data needs to be transformed to be in the ideal state for training. There are 3 tasks we should perform with the data in preparation for training:
1. Flatten the image data, to simplify the image input into the model.
2. Normalize the image data, to make the image input values easier to work with for the model
3. Categorize the labels, to make the label values easier to work with for the model
### Flattening the Image Data
It is possible for a deep learning model to accept a 2-dimensional image, but here we are going to reshape each image into a single array of 784 contiguous pixel values (28 x 28 = 784). This is also called flattening the image. We will use the helper method `reshape`.
```
x_train = x_train.reshape(60000, 784)
x_valid = x_valid.reshape(10000, 784)
```
The images have been reshaped and are now a collection of 1D arrays containing 784 pixel values each.
```
x_train.shape
x_train[0]
```
### Normalizing the Image Data
Deep learning models are better at dealing with floating point numbers between 0 and 1. Converting integer values to floating point values between 0 and 1 is called *normalization*. Here we will divide all the pixel values by 255.
```
x_train = x_train/255
x_valid = x_valid/255
```
The values are all floating point values between 0.0 and 1.0.
```
x_train.dtype
x_train.min()
x_train.max()
```
### Categorically Encoding the Labels
Categorical encoding is a kind of transformation that modifies the data so that each value becomes a vector over all possible categories, with the actual category of that particular value marked as true.
In other words, categorical encoding transforms values which are intended to be understood as categorical labels into a representation that makes their categorical nature explicit to the model.
Keras provides a utility to *categorically encode values* and here we use it to perform encoding for both the training and validation labels.
```
import tensorflow.keras as keras
num_categories = 10
y_train = keras.utils.to_categorical(y_train, num_categories)
y_valid = keras.utils.to_categorical(y_valid, num_categories)
y_train[0:9]
```
## Creating the Model
With the data prepared for training, it is now time to create the model that we will train with the data. The first basic model will be made up of several layers and will consist of 3 main parts:
1. An input layer, which will receive data in some expected format
2. Several *hidden layers*, each comprised of many neurons. Each *neuron* will have the ability to affect the network's guess with its weights, which are values that will be updated over many iterations as the network gets feedback on its performance and learns
3. An output layer, which will depict the network's guess for a given image
### Instantiating the Model
We will use Keras's *Sequential* model class to instantiate a model that will have a series of layers that data will pass through in sequence.
```
from tensorflow.keras.models import Sequential
model = Sequential()
```
### Creating the Input Layer
We will add the input layer, which will be densely connected, meaning that each neuron in it, and its weights, will affect every neuron in the next layer. To do this with Keras, we use Keras's *Dense* layer class.
```
from tensorflow.keras.layers import Dense
```
The `units` argument specifies the number of neurons in the layer. Choosing the correct number of neurons is what puts the "science" in "data science", as it is a matter of capturing the statistical complexity of the dataset (we are going to use 512).
We will use the `relu` activation function which, in short, will help our network learn how to make more sophisticated guesses about data than if it were required to make guesses based on some strictly linear function.
The `input_shape` value specifies the shape of the incoming data which in our situation is a 1D array of 784 values.
```
model.add(Dense(units=512, activation='relu', input_shape=(784,)))
```
### Creating the Hidden Layer
Now we will add an additional densely connected layer. These layers give the network more parameters to contribute towards its guesses, and therefore, more subtle opportunities for accurate learning.
```
model.add(Dense(units = 512, activation='relu'))
```
### Creating the Output Layer
Finally, we will add an output layer. This layer uses the activation function `softmax`, which will result in each of the layer's values being a probability between 0 and 1 and in all the outputs of the layer adding up to 1. In this case, since the network is to make a guess about a single image belonging to 1 of 10 possible categories, there will be 10 outputs. Each output gives the model's guess (a probability) that the image belongs to a specific class.
```
model.add(Dense(units = 10, activation='softmax'))
```
### Summarizing the Model
Keras provides the model instance method *summary*, which will print a readable summary of a model. Note the number of trainable parameters. Each of these can be adjusted during training and will contribute towards the trained model's guesses.
```
model.summary()
```
### Compiling the Model
The final step we need to take before we can actually train our model with data is to *compile* it. Here we specify a *loss function*, which will be used for the model to understand how well it is performing during training. We also specify that we would like to track `accuracy` while the model trains.
```
model.compile(loss='categorical_crossentropy', metrics=['accuracy'])
```
## Training the Model
Now that we have prepared training and validation data and a model, it's time to train our model with our training data and verify it with its validation data. "Training the model with data" is often also called "fitting a model to data".
When fitting (training) a model with Keras, we use the model's *fit* method. It expects the following arguments:
- the training data
- the labels for the training data
- the number of times it should train on the entire training dataset (called an *epoch*)
- the validation or test data and its labels
```
history = model.fit(x_train, y_train, epochs=5, verbose=1,
validation_data=(x_valid, y_valid))
```
### Observing accuracy
For each of the 5 epochs, notice the `accuracy` and `val_accuracy` scores. `accuracy` states how well the model did for the epoch on all the training data. `val_accuracy` states how well the model did on the validation data, which if you recall, was not used at all for training the model.
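As a quick sanity check you can also plot these per-epoch metrics from the returned `history` object (a minimal matplotlib sketch; the `'accuracy'` and `'val_accuracy'` keys are present because we compiled with `metrics=['accuracy']`):
```
import matplotlib.pyplot as plt

# history.history stores the per-epoch metrics recorded by model.fit
plt.plot(history.history['accuracy'], label='train accuracy')
plt.plot(history.history['val_accuracy'], label='validation accuracy')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend()
plt.show()
```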
The next step would be to use this model to classify new, not-yet-seen handwritten images. This is called *inference*.
MNIST is not only useful for its historical influence on Computer Vision but it's also a great *benchmark* and *debugging tool*.
```
# Clear the memory
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
```
## Additional exercise
Ultimately, each neuron is trying to fit a line to some data. Below, we have some datapoints and a randomly drawn line using the equation y = mx + b.
Try changing the `m` and the `b` in order to find the lowest possible loss.
```
import numpy as np
from numpy.polynomial.polynomial import polyfit
import matplotlib.pyplot as plt
m = 5
b = 15
x = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
y = np.array([10, 20, 25, 30, 40, 45, 40, 50, 60, 55])
y_hat = x*m+b
def get_rmse(x_data, y_data, m, b):
squared_error = 0
for i in range(len(x_data)):
y_hat = m*x_data[i]+b
squared_error += (y_data[i]-y_hat)**2
mse = squared_error / len(x_data)
return mse ** .5
print(get_rmse(x, y, m, b))
plt.plot(x, y, '.')
plt.plot(x, y_hat, '-')
plt.show()
print("Loss: ", np.sum((y-y_hat)**2)/len(x))
# Clear the memory
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
```
# My First Notebook
```
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from matplotlib import style
from scipy.integrate import odeint
from IPython.display import display, Math
%config InlineBackend.figure_format = 'retina'
style.use("default")
```
We will try out the Lotka-Volterra predator-prey model
$$
\frac{dx}{dt} = ax - bxy
$$
$$
\frac{dy}{dt} = cxy - dy
$$
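As a small aside, here is a minimal `odeint` sketch of the Lotka-Volterra equations exactly as written above; the parameter values a, b, c, d and the initial populations are illustrative assumptions only. The rest of the notebook then works through a second-order linear system in the same style.
```
# Minimal Lotka-Volterra sketch; a, b, c, d and the initial populations are illustrative
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

def lotka_volterra(SV, t, a, b, c, d):
    x, y = SV                      # prey, predator
    dxbydt = a*x - b*x*y
    dybydt = c*x*y - d*y
    return [dxbydt, dybydt]

t_lv = np.linspace(0, 50, 5000)
sol_lv = odeint(lotka_volterra, [10.0, 5.0], t_lv, args=(1.0, 0.1, 0.075, 1.5))
plt.plot(t_lv, sol_lv[:, 0], label='prey x(t)')
plt.plot(t_lv, sol_lv[:, 1], label='predator y(t)')
plt.xlabel("Time")
plt.legend();
```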
```
def model(SV, t, obj):
[x, m] = SV
tau = obj.tau
zeta = obj.zeta
kp = obj.kp
p = obj.p
dxbydt = m
dmbydt = (kp*p - x - 2*tau*zeta*m)/(tau**2)
return [dxbydt, dmbydt]
class second_order_system:
def __init__(self):
self.tau = 1.0
self.zeta = 0.1
self.kp = 2.0
self.p = 1.0
self.x0 = 0
self.y0 = 0
self.tmax = 40
self.nsteps = 10000
def solve(self):
SV0 = [self.x0, self.y0]
time = np.linspace(0, self.tmax, self.nsteps)
solution = odeint(
model,
SV0,
time,
args = (self,)
)
self.solution = solution
self.xsolution = solution[:,0]
self.ysolution = solution[:,1]
self.time = time
sos1 = second_order_system()
sos1.zeta = 0.1
sos1.solve()
sos2 = second_order_system()
sos2.zeta = 1
sos2.solve()
sos3 = second_order_system()
sos3.zeta = 3
sos3.solve()
plt.plot(sos1.time, sos1.xsolution, 'b', label = r'$\zeta$ = %.2f' %(sos1.zeta))
plt.plot(sos2.time, sos2.xsolution, 'k', label = r'$\zeta$ = %.2f' %(sos2.zeta))
plt.plot(sos3.time, sos3.xsolution, 'r', label = r'$\zeta$ = %.2f' %(sos3.zeta))
plt.xlabel("Time (seconds)", fontsize=12)
plt.ylabel("x(t)", fontsize=12)
plt.legend(loc='best',title='Damping \nCoefficient')
plt.xlim([0, sos1.tmax])
plt.ylim(bottom=0)
plt.title(
"Second Order System Response Curve \n" +
r"$K_p =$ %.2f, $\tau = $ = %.2f" %(sos1.kp, sos1.tau)
);
#plt.savefig("2nd_order_response.pdf")
#plt.savefig("2nd_order_response.png", dpi=4000)
tau = 40.0
omega = 4.0
A = 2.0
Kp = 1.5
phi = np.arctan(-omega*tau)
def sine_input(x):
    # First-order response to the input A*sin(omega*t); uses the tau, omega, A, Kp, phi
    # defined just above, so the curve matches the parameter values reported in the title
    a11 = omega*tau*np.exp(-x/tau)/(1+ (tau**2)*(omega**2))
    a12 = (1/np.sqrt(1+ (tau**2)*(omega**2)))* np.sin(omega*x+phi)
    y = A*Kp*(a11+a12)
    return y
t = np.linspace(0,20,10000);
y = sine_input(t);
x = A*np.sin(t*omega + phi);
plt.plot(t,x,'r', label ='Input')
plt.plot(t,y,'b', label ='Output')
plt.xlabel("Time (seconds)", fontsize=12)
plt.ylabel("x(t)", fontsize=12)
plt.legend(loc='best')
plt.xlim([0, 20])
#plt.ylim(bottom=0)
plt.title(
r"For a $1^{st}$ order system with input "+
r"$x(t) = A*sin(\omega t)$"+ "\n"+
r"A = %.1f, Kp = %.1f, $\omega =$ %.1f, $\tau =$ %.1f" %(A,Kp,omega,tau)
);
plt.savefig("sinusoidal.pdf")
#plt.savefig("2nd_order_response.png", dpi=4000)
```
# Prediction of BoardGameGeek Reviews
## NAME: Ruochen Chang
## ID: 1001780924
# Introduction
#### This is a blog that illustrates the implementation of Naive Bayes from scratch. Our goal in this blog is to build a classification model to predict the rating of reviews using Naive Bayes.
#### I referred to the Naive Bayes model description from the Internet and built the classification model from scratch myself.
#### The basic idea of Naive Bayes is: for a given item to be classified, compute the probability of each category given that the item appears; whichever probability is the largest, the item to be classified is considered to belong to that category.
# Naive Bayes model:
$$ P(Y=y_i \mid X) = \frac{P(Y=y_i)\,\prod_{j=1}^{d} P(X_j \mid Y=y_i)}{P(X)} $$
#### Because $P(X)$ is the same for every class, we can equivalently use the following decision rule:
$$ \hat{y} = \arg\max_{y_i} \; P(Y=y_i)\,\prod_{j=1}^{d} P(X_j \mid Y=y_i) $$
#### So we need to calculate the prior probability of each class and the conditional probability of each word given the class from our data.
# Steps to implement Naive Bayes
## a. Split the dataset into 70% training data and 30% test data.
### Data Description:
#### This review file has 2 columns, comment and rating.
#### comment is the review text we should classify
#### rating is the score of the reviews.
### Our goal is predicting the rating according to the comment text.
#### For this data, the rating values are continuous, so I made them discrete using the following rules:
#### First, I rounded them to integer values. Then,
#### label 0 for ratings from 0 to 2;
#### label 1 for ratings from 3 to 4;
#### label 2 for ratings from 5 to 6;
#### label 3 for ratings from 7 to 8;
#### label 4 for ratings from 9 to 10.
#### After loading the data into Jupyter, I did some pre-processing, including text cleaning, tokenization, and stopword removal.
#### Raw data is often messy and unintuitive. Therefore, we usually have to pre-process the data through a series of steps, which makes the data format more standardized and the content more reasonable. Common data preprocessing methods include: filling in null values, removing outliers, data cleaning, tokenization, removing stopwords, and so on.
```
import pandas as pd
from sklearn.utils import shuffle
from sklearn.model_selection import train_test_split
import numpy as np
original_data = pd.read_csv('reviews.csv')
all_data = pd.DataFrame(original_data, columns=['comment', 'rating']).dropna()
all_data = shuffle(all_data)
all_data = pd.DataFrame(all_data).reset_index(drop=True)
def round_amount(a):
res = int(float(a))
if res == 0 or res == 1 or res == 2:
label = 0
if res == 3 or res == 4:
label = 1
if res == 5 or res == 6:
label = 2
if res == 7 or res == 8:
label = 3
if res == 9 or res == 10:
label = 4
return label
all_data['rating'] = all_data['rating'].apply(round_amount)
import re
import string
def clean_text(text):
# Make text lowercase, remove text in square brackets,remove links,remove punctuation
# remove words containing numbers.'''
text = text.lower()
text = re.sub('\[.*?\]', '', text)
text = re.sub('<.*?>+', '', text)
text = re.sub('[%s]' % re.escape(string.punctuation), '', text)
text = re.sub('\n', '', text)
text = re.sub('\w*\d\w*', '', text)
return text
# Applying the cleaning function to both test and training datasets
all_data['comment'] = all_data['comment'].apply(lambda x: clean_text(x))
import nltk
from nltk.corpus import stopwords
def remove_stopwords(text):
words = [w for w in text if w not in stopwords.words('english')]
return words
train = all_data[:int(0.7*len(all_data))]
train = pd.DataFrame(train)
test = all_data[int(0.7*len(all_data)):]
test = pd.DataFrame(test)
print("length of train data: ", len(train))
print("length of test data: ", len(test))
# tokenization
tokenizer = nltk.tokenize.RegexpTokenizer(r'\w+')
train['comment'] = train['comment'].apply(lambda x: tokenizer.tokenize(x))
test['comment'] = test['comment'].apply(lambda x: tokenizer.tokenize(x))
train['comment'] = train['comment'].apply(lambda x: remove_stopwords(x))
test['comment'] = test['comment'].apply(lambda x: remove_stopwords(x))
print("train data:")
print(train.head())
print("\n")
print("test data:")
print(test.head())
```
## b. Build a vocabulary.
#### Building the vocabulary means building a dictionary of all words with their occurrence counts under every label, like this: {'happy': [10, 20, 30, 40, 50], ...}. This example means the word 'happy' occurs 10 times under label 0, 20 times under label 1, 30 times under label 2, and so on.
#### To be more reasonable, I removed words whose total occurrence is less than 10.
```
all_words = {}
all_s = ""
for index, row in train.iterrows():
s = " ".join(row['comment'])
all_s = all_s + s
all_words = all_s.lower().split(' ')
def count_words(data):
    vocabulary_list = {}  # {'word': [count under label 0, ..., count under label 4]}
    for index, row in data.iterrows():
        for word in row['comment']:
            if word not in vocabulary_list:
                vocabulary_list[word] = [0, 0, 0, 0, 0]
            # count this occurrence under the rating label of the current review
            vocabulary_list[word][row['rating']] += 1
    # drop rare words whose total occurrence across all labels is less than 10
    for word in list(vocabulary_list.keys()):
        if sum(vocabulary_list[word]) < 10:
            del vocabulary_list[word]
    return vocabulary_list
vocabulary_list = count_words(train)
print('examples of the vocabulary list:')
print(list(vocabulary_list.items())[:20])
```
#### write the vocabulary to a txt file.
```
f = open('data.txt','w')
f.write(str(vocabulary_list))
f.close()
```
## c. Calculate the prior probability and the conditional probabilities for all the words.
#### First, calculate the total number of training reviews for each label.
```
total_length = len(train)
def cal_label_count():
result = []
for i in range(5):
count = 0
for index, row in train.iterrows():
if row['rating'] == i:
count += 1
result.append(count)
return result
label_count = cal_label_count()
print(label_count)
```
##### Prior probability: P[label] = number of reviews with this label / number of all reviews.
##### Conditional probability: P[word | label] = number of occurrences of this word under that label / number of reviews with that label.
#### There are 5 labels in total, so I build a prior probability list and compute the conditional probabilities for each of the 5 labels.
### To make our model more reasonable, I used Laplace smoothing to solve the problem of zero probability.
## Laplace Smoothing:
#### The zero probability problem arises when some value x never appears in the training set: its estimated probability is 0, and because the class probability of an instance is computed as a product, the whole estimate becomes 0. In text classification, when a word does not appear in the training samples for a class, that word's conditional probability is 0, and the probability of the whole text under that class also becomes 0 when computed by multiplication. Clearly this is unreasonable; one cannot conclude that an event has probability 0 just because it was not observed. To solve the zero probability problem, the French mathematician Laplace proposed adding 1 to each count when estimating the probability of a phenomenon that has not occurred, so this smoothing is also called Laplace smoothing. Assuming the training sample is large, the change in the estimated probabilities caused by adding 1 to each count can be ignored, but it easily and effectively avoids the zero probability problem.
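Concretely, with add-one smoothing the estimates used in the code below take the following form, where $N$ is the number of training reviews, $N_{y_i}$ is the number of training reviews with label $y_i$, $N_{w,y_i}$ is the count of word $w$ under label $y_i$, and the constant 5 in the denominators is the number of classes used in this notebook:
$$\hat{P}(Y=y_i) = \frac{N_{y_i} + 1}{N + 5}, \qquad \hat{P}(w \mid Y=y_i) = \frac{N_{w,y_i} + 1}{N_{y_i} + 5}$$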
```
def cal_prob(i):
count = 0
for index, row in train.iterrows():
if row['rating'] == i:
count += 1
return (count+1)/(len(train)+5)
# prior probability
prior_list = []
for i in range(5):
prior_list.append(cal_prob(i))
print("prior probability: ", prior_list)
def conditional_prob(word, i):
all_count = label_count[i]
if word in vocabulary_list:
return (vocabulary_list[word][i]+1)/(all_count+5)
if word not in vocabulary_list:
return 1/(all_count+5)
print("\nOcurrence of going word under label 1: ", conditional_prob('going', 1))
```
## d. Predict the test data
#### The test data was pre-processed in the same way, so it is clean and ready for prediction. I classified all the test data according to our model and printed the accuracy, which is about 40%.
```
def classify(s):
    # work in log-space to avoid numerical underflow when multiplying many small probabilities;
    # the argmax is unchanged because the logarithm is monotonic
    pred_list = []
    for i in range(5):
        log_pred = np.log(prior_list[i])
        for word in s:
            log_pred += np.log(conditional_prob(word, i))
        pred_list.append(log_pred)
    max_prob = max(pred_list)
    return pred_list.index(max_prob)
pred_right = 0
for index, row in test.iterrows():
if row['rating'] == classify(row['comment']):
pred_right += 1
accuracy = pred_right/len(test)
print("*********predict accuracy*********")
print(accuracy)
```
# Challenge:
#### The rating data is continuous, so I made it discrete. At first, I divided the rating values into 10 grades, but the accuracy was only about 20%. So I chose to divide the rating values into 5 grades, which is more reasonable, since many websites use 5-grade review ratings.
#### In the future, I would like to try building SVM and LSTM models for this classification task, since time was limited this time.
## Tutorial on how to combine different Fields into a `NestedField` object
In some applications, you may have access to different fields that each cover only part of the region of interest. Then, you would like to combine them all together. You may also have a field covering the entire region and another one only covering part of it, but with a higher resolution. The set of those fields forms what we call nested fields.
It is possible to combine all those fields with kernels, either with different if/else statements depending on particle position, or using recovery kernels (if only two levels of nested fields).
However, an easier way to work with nested fields in Parcels is to combine all those fields into one `NestedField` object. The Parcels code will then try to successively interpolate the different fields.
For each Particle, the algorithm is the following:
1. Interpolate the particle onto the first `Field` in the `NestedFields` list.
2. If the interpolation succeeds or if an error other than `ErrorOutOfBounds` is thrown, the function is stopped.
3. If an `ErrorOutOfBounds` is thrown, try step 1) again with the next `Field` in the `NestedFields` list
4. If interpolation on the last `Field` in the `NestedFields` list also returns an `ErrorOutOfBounds`, then the Particle is flagged as OutOfBounds.
This algorithm means that **the order of the fields in the `NestedField` matters**. In particular, the smallest/finest resolution fields have to be listed _before_ the larger/coarser resolution fields.
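Conceptually, the fallback behaves like the following toy sketch (illustrative only, not Parcels' actual implementation; the exception name here is a placeholder):
```
class OutOfBoundsError(Exception):
    """Placeholder for the out-of-bounds error described in the steps above."""

def interpolate_nested(fields, time, depth, lat, lon):
    # fields are listed finest-resolution first
    for field in fields:
        try:
            # a field is assumed to raise OutOfBoundsError when the particle lies outside its grid
            return field[time, depth, lat, lon]
        except OutOfBoundsError:
            continue  # fall through to the next (coarser) field
    raise OutOfBoundsError("particle is outside all nested fields")
```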
This tutorial shows how to use these `NestedField` with a very idealised example.
```
%matplotlib inline
from parcels import Field, NestedField, FieldSet, ParticleSet, JITParticle, plotTrajectoriesFile, AdvectionRK4
import numpy as np
```
First define zonal and meridional velocity fields on a high-resolution (dx = 100m) 2km x 2km grid with a flat mesh. The zonal velocity is uniform and 1 m/s, and the meridional velocity is equal to cos(lon / 200 * pi / 2) m/s.
```
dim = 21
lon = np.linspace(0., 2e3, dim, dtype=np.float32)
lat = np.linspace(0., 2e3, dim, dtype=np.float32)
lon_g, lat_g = np.meshgrid(lon, lat)
V1_data = np.cos(lon_g / 200 * np.pi/2)
U1 = Field('U1', np.ones((dim, dim), dtype=np.float32), lon=lon, lat=lat)
V1 = Field('V1', V1_data, grid=U1.grid)
```
Now define the same velocity field on a low resolution (dx = 2km) 20kmx4km grid.
```
xdim = 11
ydim = 3
lon = np.linspace(-2e3, 18e3, xdim, dtype=np.float32)
lat = np.linspace(-1e3, 3e3, ydim, dtype=np.float32)
lon_g, lat_g = np.meshgrid(lon, lat)
V2_data = np.cos(lon_g / 200 * np.pi/2)
U2 = Field('U2', np.ones((ydim, xdim), dtype=np.float32), lon=lon, lat=lat)
V2 = Field('V2', V2_data, grid=U2.grid)
```
We now combine those fields into a `NestedField` and create the fieldset
```
U = NestedField('U', [U1, U2])
V = NestedField('V', [V1, V2])
fieldset = FieldSet(U, V)
pset = ParticleSet(fieldset, pclass=JITParticle, lon=[0], lat=[1000])
output_file = pset.ParticleFile(name='NestedFieldParticle.nc', outputdt=50)
pset.execute(AdvectionRK4, runtime=14000, dt=10, output_file=output_file)
output_file.export() # export the trajectory data to a netcdf file
plt = plotTrajectoriesFile('NestedFieldParticle.nc', show_plt=False)
plt.plot([0,2e3,2e3,0,0],[0,0,2e3,2e3,0], c='orange')
plt.plot([-2e3,18e3,18e3,-2e3,-2e3],[-1e3,-1e3,3e3,3e3,-1e3], c='green');
```
As we observe, there is a change of dynamic at lon=2000, which corresponds to the change of grid.
The analytical solution to the problem:
\begin{align}
dx/dt &= 1;\\
dy/dt &= \cos(x \pi/400);\\
\text{with } x(0) &= 0, y(0) = 1000
\end{align}
is
\begin{align}
x(t) &= t;\\
y(t) &= 1000 + 400/\pi \sin(t \pi / 400)
\end{align}
which is captured by the High Resolution field (orange area) but not the Low Resolution one (green area).
### Keep track of the field interpolated
For different reasons, you may want to keep track of which field was used for the interpolation. You can do that easily by creating another field that shares the grid with the original fields.
Watch out that this operation has the cost of a full additional interpolation operation.
```
fieldset = FieldSet(U, V) # Need to redefine fieldset because FieldSets need to be constructed before ParticleSets
F1 = Field('F1', np.ones((U1.grid.ydim, U1.grid.xdim), dtype=np.float32), grid=U1.grid)
F2 = Field('F2', 2*np.ones((U2.grid.ydim, U2.grid.xdim), dtype=np.float32), grid=U2.grid)
F = NestedField('F', [F1, F2])
fieldset.add_field(F)
from parcels import Variable
def SampleNestedFieldIndex(particle, fieldset, time):
particle.f = fieldset.F[time, particle.depth, particle.lat, particle.lon]
class SampleParticle(JITParticle):
f = Variable('f', dtype=np.int32)
pset = ParticleSet(fieldset, pclass= SampleParticle, lon=[1000], lat=[500])
pset.execute(SampleNestedFieldIndex, runtime=0, dt=0)
print('Particle (%g, %g) interpolates Field #%d' % (pset[0].lon, pset[0].lat, pset[0].f))
pset[0].lon = 10000
pset.execute(SampleNestedFieldIndex, runtime=0, dt=0)
print('Particle (%g, %g) interpolates Field #%d' % (pset[0].lon, pset[0].lat, pset[0].f))
```
# 4 OTU Picking and Rarefaction Depth Selection
Amanda Birmingham, CCBB, UCSD ([email protected])
<a name = "table-of-contents"></a>
## Table of Contents
* [Introducing OTU Picking](#introducing-otu-picking)
* [Checking the Reference Set](#checking-the-reference-set)
* [Running OTU Picking](#running-otu-picking)
* [Introducing Rarefaction](#introducing-rarefaction)
* [Viewing the Counts Per Sample](#viewing-the-counts-per-sample)
* [Selecting Rarefaction Depth](#selecting-rarefaction-depth)
Related Notebooks:
* 1 Introducing 16S Microbiome Primary Analysis
* 2 Setting Up Starcluster for QIIME
* 3 Validation, Demultiplexing, and Quality Control
* 5 Analyzing Core Diversity
<a id = "introducing-otu-picking"></a>
## Introducing OTU Picking
The heart of 16S microbiome analysis is the assignment of sequences to the microbial "species" (technically "operational taxonomic units", or OTUs) from which they originated. This step is known as "OTU picking", and its results are strongly affected by the picking method used. The available picking methods are differentiated primarily by the sort of 16S sequence set used as the reference, which defines the OTUs to which new sequences can be assigned; they are:
* *de novo*: No reference is used. Can find never-before-seen OTUs but is slow and sensitive to noise.
* *closed reference*: A reference is used and experimental sequences not clustering with one of the reference sequences are discarded. Fast but potentially misses relevant novel findings.
* *open reference*: A reference is used, but experimental sequences not clustering with one of the reference sequences are then assigned to OTUs using the *de novo* method. A compromise method with some of the advantages and some of the drawbacks of both its parents.
More details on these approaches can be found at http://qiime.org/tutorials/otu_picking.html?highlight=otu%20picking . Following the QIIME recommendations, we prefer open-reference picking in situations in which the customer has no explicit preference otherwise (which is likely to be most of them!)
[Table of Contents](#table-of-contents)
<a id = "checking-the-reference-set"></a>
## Checking the Reference Set
Also unless otherwise specified by the customer, the Greengenes set of 16S gene sequences is our preferred reference. Because this set is widely used, its current version is included as part of the QIIME install. The path to the fasta file containing the Greengenes 16S sequences is listed in the QIIME config, and can be seen in the output of the command
print_qiime_config.py
In the `QIIME config values`, look for the entry beginning with `assign_taxonomy_reference_seqs_fp`, which will look something like
assign_taxonomy_reference_seqs_fp: /usr/local/lib/python2.7/dist-packages/qiime_default_reference/gg_13_8_otus/rep_set/97_otus.fasta
The file-path shown here will be used as input to the OTU picking process.
[Table of Contents](#table-of-contents)
<a id = "running-otu-picking"></a>
## Running OTU Picking
Again, in QIIME, OTU picking can be accomplished with a single command-line call--but this one usually takes a while to process, and therefore benefits greatly from being run in parallel. The `-a` switch tells QIIME to use parallel processing, and the number associated with the `-O` switch tells it how many processors to use. It is best not to use ALL the CPUs on your cluster, but to leave at least one for other necessary processing; for example, with a 3-node c3.2xlarge cluster, which has 3\*8=24 CPUs, you might specify 22 or 23 CPUs.
The full command has the format
pick_open_reference_otus.py -a -O [number of CPUs] -i [sequences file path] -r [reference file path] -o [output directory path]
and an example looks like
pick_open_reference_otus.py -a -O 23 -i /data/library_split/seqs.fna -r /usr/local/lib/python2.7/dist-packages/qiime_default_reference/gg_13_8_otus/rep_set/97_otus.fasta -o /data/open_ref_output/
I have seen this step take from 20 minutes to 2 hours, depending on data size, on the three-node cluster described above.
The config file on the QIIME AMI used for this cluster routes temporary files to the `/data` directory, so during the run you may see various files appear there; these have various prefixes and extensions that look like .e[numbers] or .o[numbers] (ex: `ALIGN_9P6_0.o49`, `POTU_VuwL_13.e14`, etc.). The "e" files are errors from individual jobs, while the "o" files are logs from individual jobs. QIIME is *supposed* to clean all of these up on completion, but I have rarely seen this. However, aside from clearing them out when your job is done, you don't need to give them any particular attention as the results are summarized in the `log_[datetime].txt` file; note that many of the "errors" at particular steps are dealt with successfully at later steps, so just because you see error files doesn't mean the run failed. To check that it hasn't, it is a good idea to skim through the log to ensure that you don't see any text listed in the "Stderr:" fields for each logged command.
The `pick_open_reference_otus.py` command generates a lot of output; a high-level overview is given on the generated `index.html` page, and details are available at http://qiime.org/scripts/pick_open_reference_otus.html?highlight=pick_open_reference_otus , but only a few of the outputs are used directly in subsequent steps. These are:
* `otu_table_mc2_w_tax_no_pynast_failures.biom`: The OTU table in biom format; basically, this is a very large, usually very sparse table listing the number of reads for each identified OTU from each sample. This particular version of the table is the one excluding OTUs with fewer than 2 sequences and sequences that fail to align with PyNAST, and including OTU taxonomy assignments. It is thus the "cleanest" version and the one you'll want to use going forward.
* `rep_set.fna`: A fasta file of one representative sequence from each identified OTU (note that there are several different ways to choose the representative sequences, and generally it is ok just to use whatever `pick_open_reference_otus.py`'s default is).
* `rep_set.tre`: A phylogenetic tree of the reference sequences for the identified OTUs, describing their inferred evolutionary relationships.
[Table of Contents](#table-of-contents)
<a name = "introducing-rarefaction"></a>
## Introducing Rarefaction
In addition to the all-important OTU table as well as the representative sequence set and its phylogenetic tree, you need one more piece of information from the OTU-picking output--but this one requires a judgment call from you as the analyst. This needed info is the sequence depth that will be used in the diversity analyses. As stated in the QIIME tutorial at http://nbviewer.ipython.org/github/biocore/qiime/blob/1.9.1/examples/ipynb/illumina_overview_tutorial.ipynb , "Many of the analyses that follow require that there are an equal number of sequences in each sample, so you need to review the counts/sample detail and decide what depth you'd like. Any samples that don't have at least that many sequences will not be included in the analyses, so this is always a trade-off between the number of sequences you throw away and the number of samples you throw away."
You may wonder why we look at the counts/sample information to make this decision NOW, rather than having done so earlier when we got that information from `split_libraries_fastq.py`. The reason is that some reads get filtered out during the OTU picking process--possibly even quite a lot of them, for certain data sets and certain picking methods. Therefore, it is necessary to make the sequence depth decision based on the revised distribution of counts per sample after the OTU picking is complete.
[Table of Contents](#table-of-contents)
<a name = "viewing-the-counts-per-sample"></a>
## Viewing the Counts Per Sample
To generate a listing of this distribution, run the `summarize-table` command from the `biom` software (included as part of the QIIME install) on the `otu_table_mc2_w_tax_no_pynast_failures.biom` file:
biom summarize-table -i [biom table]
as in this example:
biom summarize-table -i /data/open_ref_output/otu_table_mc2_w_tax_no_pynast_failures.biom
This will print summary information about counts/sample to stdout; you will likely want it in a more persistent format so that it is easy to re-sort and revisit, so consider piping the output to a file.
The output looks, in part, like this:
Num samples: 399
Num observations: 33667
Total count: 6125023
Table density (fraction of non-zero values): 0.019
Counts/sample summary:
Min: 0.0
Max: 38638.0
Median: 14466.000
Mean: 15350.935
Std. dev.: 7720.364
Sample Metadata Categories: None provided
Observation Metadata Categories: taxonomy
Counts/sample detail:
925.sm1z: 0.0
925.sl1z: 0.0
925.sn2x: 0.0
925.waterfilter.ij.50: 0.0
925.sm4y: 1.0
925.waterfilter.ik.50: 1.0
925.sn4z: 1.0
925.in2z: 1.0
925.in2y: 1.0
925.sm3z: 1.0
925.sl1x: 1.0
925.sl3x: 1.0
925.so2y: 1.0
925.waterfilter.ie.50: 1.0
925.sl4z: 1.0
925.sk5z: 1.0
925.sn5x: 1.0
925.waterfilter.il.50: 1.0
925.sn3x: 1.0
925.io2z: 1.0
925.sl5y: 1.0
925.ia3y: 1.0
925.waterfilter.id.50: 1.0
925.waterfilter.ic.120: 1.0
925.bisonfeces1: 1.0
925.bisonfeces3: 1.0
925.waterfilter.ib.80: 1.0
925.sm1y: 1.0
925.im2z: 1.0
925.so3x: 1.0
925.waterfilter.ih.50: 2.0
925.sn3z: 2.0
925.sk5y: 3.0
925.so3y: 3.0
925.y3ntc.h12: 330.0
925.sg5x: 1383.0
925.if2z: 1580.0
925.y4ntc.h12: 2159.0
925.ie5x: 4966.0
925.ie5y: 5709.0
925.sg5y: 5888.0
925.if5y: 6256.0
925.sf2y: 6644.0
The most useful part of the file is the `Counts/sample detail` section, which shows an ascending-sorted list of counts per sample. For example, in the output shown above, we know from the `Num samples` line that there are 399 samples (although most are not shown in this snippet of output), and we see that 4 of them have zero reads, 25 have one read, 2 have two reads, 2 have three reads, 1 has 330 reads, and all the rest have more than 1000 reads.
[Table of Contents](#table-of-contents)
<a name = "selecting-rarefaction-depth"></a>
## Selecting Rarefaction Depth
Using this information, you need to decide on the minimum sequencing depth that all samples included in further analyses must have (and thus, which samples to leave out). As noted in the quoted text above, samples that have MORE than the minimum sequencing depth will be included, but only a subset of their reads, up to the minimum sequencing depth, will be used in the further analyses, so you are also deciding how many *reads* to leave out. If the customer has specified a preferred minimum sequencing depth, that makes the choice easy! However, most users savvy enough to have done this are also savvy enough to do their own analyses, so you are unlikely to get that information from a customer. That said, the customer may have shared some information that could inform your decision:
* **Preference for breadth or depth**: If the customer wants to cast a wide net and look at as many samples as possible, even at the risk of missing weak effects on some of them, then a lower sequencing depth will be acceptable. Conversely, if she wants to look for subtle differences in microbial communities, a higher depth will likely be necessary.
* **Especially important samples**: If the customer has indicated that particular samples are critical to the analysis and should be included if at all possible, that may affect the chosen sequencing depth. For example, in the example given above, if the customer had specified that it was crucial to include as many NTC (non-treated control) samples as possible, using a minimum sequencing depth of 330 so as to include sample 925.y3ntc.h12 would be preferable to using a higher number, while if no such constraint existed, a minimum depth somewhere between 1383 and 4966 would probably be a better choice.
If no guidance exists, a good rule of thumb is to **favor samples over sequences**. The reason for this preference is that current research indicates that "low" sequence depths (10s - 1000s of sequences per sample) often capture all but very subtle variations, as shown in panels (a) and (b) of Figure 2 from Kuczynski et al., [Direct sequencing of the human microbiome readily reveals community differences](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2898070/), Genome Biology, 2010:

These panels show "(a) The full dataset (approximately 1,500 sequences per sample); (b) the dataset sampled at only 10 sequences per sample, showing the same pattern".
Keep in mind that even with such constraints, some samples may be beyond use. For example, anything with fewer than 15 reads is unlikely to give any usable diversity information. Taking the sequencing depth down to numbers in the 10s or low 100s should be considered very cautiously: only gross differences will be detectable, and the vast majority of read information will be discarded.
Note that you may want the minimum sequencing depth to be exactly the read count of the least-read-endowed sample you intend to include. For example, given the counts/sample detail shown above, picking a depth of 1000 would indeed lead sample 925.sg5x to be included in the samples analyzed downstream, but would include only 1000 of its 1383 sequences (and only 1000 of each higher-read sample's reads) in all future analyses. By comparison, selecting the maximum depth that still allows the inclusion of sample 925.sg5x, or 1383, ensures that as many reads as possible are used and thus the best power is gained. However, some researchers may nonetheless prefer a nice round number, for easier mental comparison to other experiments.
Once you have decided on the sequence depth to use, make a note of it. Also gather general information about how this choice affects the sample set (for example, "using a sequencing depth of 1383, 34 of 399 samples, or 8.5%, are removed from further processing") and include it in the report to the customer.
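If you piped the `biom summarize-table` output to a file, a few lines of Python can tabulate how a candidate depth affects the sample set. The sketch below assumes the `Counts/sample detail` format shown above; the file path is just a placeholder.
```
# Minimal sketch: count how many samples would be excluded at a candidate rarefaction
# depth, given a biom summarize-table report saved to a text file (path is a placeholder).
candidate_depth = 1383
counts = []
in_detail = False
with open("/data/open_ref_output/table_summary.txt") as summary:
    for line in summary:
        if line.strip().startswith("Counts/sample detail"):
            in_detail = True
            continue
        if in_detail and ":" in line:
            counts.append(float(line.rsplit(":", 1)[1]))
removed = sum(1 for count in counts if count < candidate_depth)
print("Depth {}: {} of {} samples ({:.1f}%) would be removed".format(
    candidate_depth, removed, len(counts), 100.0 * removed / len(counts)))
```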
[Table of Contents](#table-of-contents)
# Setup
```
from __future__ import print_function, division
import pandas as pd
from datetime import datetime
```
# Data import
```
df = pd.read_csv("data/ebola_data.csv")
df.sample(3)
```
# Cleaning
## Split up Indicator
```
outcome = []
for i in range(len(df.value)):
if 'CFR' in df.Indicator[i]:
outcome.append('CFR')
elif 'deaths' in df.Indicator[i]:
outcome.append('death')
elif 'cases' in df.Indicator[i]:
outcome.append('cases')
else:
        print(df.Indicator[i])  # flag any indicator that matches none of the expected patterns
status = []
for i in range(len(df.value)):
if 'of confirmed Ebola' in df.Indicator[i]:
status.append('confirmed')
elif 'of probable Ebola' in df.Indicator[i]:
status.append('probable')
elif 'of confirmed, probable and suspected Ebola' in df.Indicator[i]:
status.append('all')
else:
status.append('suspected')
days = []
for i in range(len(df.value)):
if '21 days' in df.Indicator[i]:
days.append(21)
elif '7 days' in df.Indicator[i]:
days.append(7)
else:
days.append(0)
df['outcome'] = outcome
df['status'] = status
df['days'] = days
df = df.drop('Indicator',axis=1)
df.sample(3)
```
## Date
```
date = []
for i in range(len(df.value)):
datetime_object = datetime.strptime(df.Date[i], '%Y-%m-%d')
date.append(datetime_object)
df['date'] = date
df = df.drop('Date',axis=1)
df.sample(3)
```
## Value
```
df['value'] = df['value'].astype('int')
df.sample(3)
```
## Country
```
df.rename(columns={'Country': 'country'}, inplace=True)
df.sample(3)
```
# African countries
```
countries = list(df.country.unique())
africa_ebola = ['Guinea',
'Liberia',
'Sierra Leone',
'Nigeria',
'Senegal',
'Mali',
'Liberia 2',
'Guinea 2']
africa_lat_long = {'Guinea':[9.935430, -9.695052],
'Liberia':[6.426983, -9.429671],
'Sierra Leone':[8.460466, -11.779898],
'Nigeria':[9.081746, 8.675196],
'Senegal':[14.496320, -14.452312],
'Mali':[17.570332, -3.996270],
'Liberia 2':[6.426983, -9.429671],
'Guinea 2':[9.935430, -9.695052]}
africa_df = df[df['country'].isin(africa_ebola)]
africa_df.sample(3)
```
## Deal with Liberia 2, Guinea 2 -- Drop them
```
#df["A"][(df["B"] > 50) & (df["C"] == 900)]
africa_df[(africa_df['country']=="Liberia 2") & (africa_df['outcome']== "cases")]['value'].sum()
# drop Liberia 2 and Guinea 2
africa_df = africa_df[(africa_df['country']!="Liberia 2") & (africa_df['country']!="Guinea 2")]
africa_df = africa_df.reset_index(drop=True)
```
## To Lat Long
```
lat = []
long = []
for i in range(len(africa_df.value)):
lat.append(africa_lat_long[africa_df.country[i]][0])
long.append(africa_lat_long[africa_df.country[i]][1])
len(lat)
africa_df['lat'] = lat
africa_df['long'] = long
africa_df.sample(10)
```
# What are we counting?
Confirmed and probable counts for cases and deaths -- two data points per country/date.
```
africa_df['status'].unique()
africa_df = africa_df[(africa_df['status']=="confirmed") | (africa_df['status']=="probable")]
africa_df = africa_df.reset_index(drop=True)
africa_df.sample(10)
```
## Split up days series
```
africa_df['days'].unique()
#split up to different data frames
africa_df_0 = africa_df[(africa_df['days']==0)]
africa_df_0.shape
africa_df_7 = africa_df[(africa_df['days']==7)]
africa_df_7.shape
africa_df_21 = africa_df[(africa_df['days']==21)]
africa_df_21.shape
```
## Sum up confirmed and probable
```
africa_df_0_sorting = africa_df_0[(africa_df_0['outcome']!='CFR')]
africa_df_0_sorting = africa_df_0_sorting.drop('days',axis=1)
africa_df_0_sorting.head(6)
africa_sorted = africa_df_0_sorting.groupby(['country','lat','long','outcome','date']).sum()
africa_sorted = africa_sorted.reset_index()
africa_sorted.to_csv("data/africa_sorted.csv")
```
# Add patient zero stuff
```
africa_sorted.sample(5)
cols = list(africa_sorted.columns)
datetime_object = datetime.strptime("2013-12-06", '%Y-%m-%d')
pz = ["Meliandou", 8.616038, -10.061179, "death", datetime_object, 1]
patient_zero = pd.DataFrame([pz], columns=cols)
start = patient_zero
datetime_object = datetime.strptime("2013-12-13", '%Y-%m-%d')
pz = ["Meliandou", 8.616038, -10.061179, "death", datetime_object, 2]
patient_zero = pd.DataFrame([pz], columns=cols)
start = start.append(patient_zero)
datetime_object = datetime.strptime("2013-12-29", '%Y-%m-%d')
pz = ["Meliandou", 8.616038, -10.061179, "death", datetime_object, 3]
patient_zero = pd.DataFrame([pz], columns=cols)
start = start.append(patient_zero)
datetime_object = datetime.strptime("2014-01-01", '%Y-%m-%d')
pz = ["Meliandou", 8.616038, -10.061179, "death", datetime_object, 4]
patient_zero = pd.DataFrame([pz], columns=cols)
start = start.append(patient_zero)
datetime_object = datetime.strptime("2014-02-02", '%Y-%m-%d')
pz = ["Meliandou", 8.616038, -10.061179, "death", datetime_object, 6]
patient_zero = pd.DataFrame([pz], columns=cols)
start = start.append(patient_zero)
datetime_object = datetime.strptime("2014-03-12", '%Y-%m-%d')
pz = ["Gbandou", 8.526113, -10.288549, "death", datetime_object, 3]
patient_zero = pd.DataFrame([pz], columns=cols)
start = start.append(patient_zero)
datetime_object = datetime.strptime("2014-02-11", '%Y-%m-%d')
pz = ["Dandu Pombo", 9.032877, -9.953984, "death", datetime_object, 1]
patient_zero = pd.DataFrame([pz], columns=cols)
start = start.append(patient_zero)
datetime_object = datetime.strptime("2014-02-28", '%Y-%m-%d')
pz = ["Dandu Pombo", 9.032877, -9.953984, "death", datetime_object, 4]
patient_zero = pd.DataFrame([pz], columns=cols)
start = start.append(patient_zero)
datetime_object = datetime.strptime("2014-03-31", '%Y-%m-%d')
pz = ["Dandu Pombo", 9.032877, -9.953984, "death", datetime_object, 6]
patient_zero = pd.DataFrame([pz], columns=cols)
start = start.append(patient_zero)
datetime_object = datetime.strptime("2014-01-26", '%Y-%m-%d')
pz = ["Dawa", 9.032877, -9.953984, "death", datetime_object, 1]
patient_zero = pd.DataFrame([pz], columns=cols)
start = start.append(patient_zero)
datetime_object = datetime.strptime("2014-02-11", '%Y-%m-%d')
pz = ["Dawa", 9.032877, -9.953984, "death", datetime_object, 3]
patient_zero = pd.DataFrame([pz], columns=cols)
start = start.append(patient_zero)
datetime_object = datetime.strptime("2014-03-27", '%Y-%m-%d')
pz = ["Dawa", 9.032877, -9.953984, "death", datetime_object, 8]
patient_zero = pd.DataFrame([pz], columns=cols)
start = start.append(patient_zero)
start = start.append(africa_sorted)
start = start.reset_index(drop=True)
start.sample(5)
africa_sorted = start
```
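The same patient-zero rows could be built more compactly from a list of tuples. The sketch below is an equivalent alternative, not a change to the workflow above; it reuses `cols`, `africa_sorted`, `datetime`, and `pd` from earlier cells and elides most of the rows already listed.
```
# Alternative sketch: build the patient-zero rows from (place, lat, long, date, deaths)
# tuples instead of repeating the DataFrame/append block for every date.
pz_records = [
    ("Meliandou", 8.616038, -10.061179, "2013-12-06", 1),
    ("Meliandou", 8.616038, -10.061179, "2013-12-13", 2),
    # ... remaining rows exactly as in the cells above ...
]
pz_rows = [[place, lat, lon, "death", datetime.strptime(day, '%Y-%m-%d'), value]
           for place, lat, lon, day, value in pz_records]
start = pd.DataFrame(pz_rows, columns=cols).append(africa_sorted).reset_index(drop=True)
```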
# Make JSON sorted by each country, each bubble
```
countries = list(africa_sorted.country.unique())
dates = list(africa_sorted.date.unique())
outcomes = list(africa_sorted.outcome.unique())
myData = []
for outs in outcomes:
for place in countries:
newDict = {}
for i in range(len(dates)):
newValue = africa_sorted[(africa_sorted['country']==place)&(africa_sorted['outcome']==outs)
&(africa_sorted['date']==dates[i])]
if newValue.shape[0] == 0:
newValue = 0
else:
store = newValue
newValue = int(newValue.iloc[0,5])
newDate = str(dates[i])[:10]
newDict[newDate] = newValue
newDict['country'] = place
newDict['lat'] = float(store.iloc[0,1])
newDict['long'] = float(store.iloc[0,2])
newDict['outcome'] = str(store.iloc[0,3])
myData.append(newDict)
# patient zero
# https://www.livescience.com/48527-ebola-toddler-patient-zero.html
newDict = {}
newDict['country'] = 'Guinea'
newDict['lat'] = 8.615048
newDict['long'] = -10.061007
newDict['outcome'] = "patient zero"
newDict['2013-12-06'] = 1
myData.append(newDict)
import json
with open('data/ebolaData.txt', 'w') as outfile:
json.dump(myData, outfile)
```
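As a quick sanity check, the exported file can be read back to confirm the number of records and the keys present:
```
# Reload the exported JSON and spot-check its structure.
with open('data/ebolaData.txt') as infile:
    reloaded = json.load(infile)
print(len(reloaded), "records; sample keys:", sorted(reloaded[0].keys())[:6])
```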
# Make dataframe for data
```
strDates = []
for i in range(len(dates)):
a = str(dates[i])[:10]
strDates.append(a)
columns = ['country','lat','long','outcome'] + strDates
df_new = pd.DataFrame(columns = columns)
for outs in outcomes:
for place in countries:
newList = []
newList1 = []
for i in range(len(dates)):
newValue = africa_sorted[(africa_sorted['country']==place)&(africa_sorted['outcome']==outs)
&(africa_sorted['date']==dates[i])]
if newValue.shape[0] == 0:
newValue = 0
else:
store = newValue
newValue = int(newValue.iloc[0,5])
newList.append(newValue)
newList1.append(place)
newList1.append(float(store.iloc[0,1]))
newList1.append(float(store.iloc[0,2]))
newList1.append(str(store.iloc[0,3]))
newList = newList1 + newList
df_new = df_new.append(pd.Series(newList, index=columns), ignore_index=True)
df_new.describe()
df_new
df_new.to_csv("data/patient_zero.csv")
```
# COVID-19 Tracking U.S. Cases
> Tracking coronavirus total cases, deaths and new cases by U.S. state.
- comments: true
- author: Pratap Vardhan
- categories: [overview, interactive, usa]
- hide: true
- permalink: /covid-overview-us/
```
#hide
print('''
Example of using jupyter notebook, pandas (data transformations), jinja2 (html, visual)
to create visual dashboards with fastpages
You can also see the live version at https://gramener.com/enumter/covid19/united-states.html
''')
#hide
import numpy as np
import pandas as pd
from jinja2 import Template
from IPython.display import HTML
#hide
from pathlib import Path
if not Path('covid_overview.py').exists():
! wget https://raw.githubusercontent.com/pratapvardhan/notebooks/master/covid19/covid_overview.py
#hide
import covid_overview as covid
#hide
COL_REGION = 'Province/State'
# Confirmed, Recovered, Deaths
US_POI = [
'Alabama', 'Alaska', 'Arizona', 'Arkansas', 'California',
'Colorado', 'Connecticut', 'Delaware', 'Diamond Princess',
'District of Columbia', 'Florida', 'Georgia', 'Grand Princess',
'Guam', 'Hawaii', 'Idaho', 'Illinois', 'Indiana', 'Iowa', 'Kansas',
'Kentucky', 'Louisiana', 'Maine', 'Maryland', 'Massachusetts',
'Michigan', 'Minnesota', 'Mississippi', 'Missouri', 'Montana',
'Nebraska', 'Nevada', 'New Hampshire', 'New Jersey', 'New Mexico',
'New York', 'North Carolina', 'North Dakota', 'Ohio', 'Oklahoma',
'Oregon', 'Pennsylvania', 'Puerto Rico', 'Rhode Island',
'South Carolina', 'South Dakota', 'Tennessee', 'Texas', 'Utah',
'Vermont', 'Virgin Islands', 'Virginia', 'Washington',
'West Virginia', 'Wisconsin', 'Wyoming']
filter_us = lambda d: d[d['Country/Region'].eq('US') & d['Province/State'].isin(US_POI)]
kpis_info = [
{'title': 'New York', 'prefix': 'NY'},
{'title': 'Washington', 'prefix': 'WA'},
{'title': 'California', 'prefix': 'CA'}]
data = covid.gen_data(region=COL_REGION, filter_frame=filter_us, kpis_info=kpis_info)
#hide
data['table'].head(5)
#hide_input
template = Template(covid.get_template(covid.paths['overview']))
dt_cols, LAST_DATE_I = data['dt_cols'], data['dt_last']
html = template.render(
D=data['summary'], table=data['table'],
newcases=data['newcases'].loc[:, dt_cols[LAST_DATE_I - 15]:dt_cols[LAST_DATE_I]],
COL_REGION=COL_REGION,
KPI_CASE='US',
KPIS_INFO=kpis_info,
LEGEND_DOMAIN=[5, 50, 500, np.inf],
np=np, pd=pd, enumerate=enumerate)
HTML(f'<div>{html}</div>')
```
Visualizations by [Pratap Vardhan](https://twitter.com/PratapVardhan)[^1]
[^1]: Source: ["COVID-19 Data Repository by Johns Hopkins CSSE"](https://systems.jhu.edu/research/public-health/ncov/) [GitHub repository](https://github.com/CSSEGISandData/COVID-19). Link to [notebook](https://github.com/pratapvardhan/notebooks/blob/master/covid19/covid19-overview-us.ipynb), [original interactive](https://gramener.com/enumter/covid19/united-states.html)
# Import dependencies
```
# Import our dependencies
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler,OneHotEncoder
import pandas as pd
import tensorflow as tf
# Import our input dataset
attrition_df = pd.read_csv('HR-Employee-Attrition.csv')
attrition_df.head()
```
# Preprocessing
```
# Generate our categorical variable list
attrition_cat = attrition_df.dtypes[attrition_df.dtypes == "object"].index.tolist()
# Check the number of unique values in each column
attrition_df[attrition_cat].nunique()
# Create a OneHotEncoder instance
enc = OneHotEncoder(sparse=False)
# Fit and transform the OneHotEncoder using the categorical variable list
encode_df = pd.DataFrame(enc.fit_transform(attrition_df[attrition_cat]))
# Add the encoded variable names to the DataFrame
encode_df.columns = enc.get_feature_names(attrition_cat)
encode_df.head()
# Merge one-hot encoded features and drop the originals
attrition_df = attrition_df.merge(encode_df,left_index=True, right_index=True)
attrition_df = attrition_df.drop(attrition_cat,1)
attrition_df.head()
```
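For reference, `pd.get_dummies` would produce equivalent one-hot columns in a single call; a sketch of that alternative is below. The workflow above sticks with `OneHotEncoder`, which keeps a fitted encoder around for transforming any new data the same way.
```
# Alternative sketch: one-hot encode all object columns with pandas in one step.
# Unlike the fitted OneHotEncoder above, this does not retain an encoder for reuse.
attrition_dummies = pd.get_dummies(pd.read_csv('HR-Employee-Attrition.csv'))
attrition_dummies.head()
```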
# Train-test split, scale, compile, and fit
```
# Split our preprocessed data into our features and target arrays
y = attrition_df["Attrition_Yes"].values
X = attrition_df.drop(["Attrition_Yes","Attrition_No"],1).values
# Split the preprocessed data into a training and testing dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=78)
# Create a StandardScaler instance
scaler = StandardScaler()
# Fit the StandardScaler
X_scaler = scaler.fit(X_train)
# Scale the data
X_train_scaled = X_scaler.transform(X_train)
X_test_scaled = X_scaler.transform(X_test)
# Define the model - deep neural net
number_input_features = len(X_train[0])
hidden_nodes_layer1 = 8
hidden_nodes_layer2 = 5
nn = tf.keras.models.Sequential()
# First hidden layer
nn.add(tf.keras.layers.Dense(units=hidden_nodes_layer1, input_dim=number_input_features, activation="relu"))
# Second hidden layer
nn.add(tf.keras.layers.Dense(units=hidden_nodes_layer2, activation="relu"))
# Output layer
nn.add(tf.keras.layers.Dense(units=1, activation="sigmoid"))
# Check the structure of the model
nn.summary()
# Compile the model
nn.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
# Train the model
fit_model = nn.fit(X_train_scaled,y_train,epochs=100)
```
# Evaluate model
```
# Evaluate the model using the test data
model_loss, model_accuracy = nn.evaluate(X_test_scaled,y_test,verbose=2)
print(f"Loss: {model_loss}, Accuracy: {model_accuracy}")
# Create a DataFrame containing training history
history_df = pd.DataFrame(fit_model.history, index=range(1,len(fit_model.history["loss"])+1))
# Plot the loss
display(history_df.plot(y="loss"))
# Plot the accuracy
display(history_df.plot(y="accuracy"))
```
## Checkpoints
# Saving and Loading Node Weights
```
# Import checkpoint dependencies
import os
from tensorflow.keras.callbacks import ModelCheckpoint
# Define the checkpoint path and filenames
os.makedirs("checkpoints/",exist_ok=True)
checkpoint_path = "checkpoints/weights.{epoch:02d}.hdf5"
# Compile the model
nn.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
# Create a callback that saves the model's weights every 350 training batches (save_freq counts batches, not epochs)
cp_callback = ModelCheckpoint(
filepath=checkpoint_path,
verbose=1,
save_weights_only=True,
save_freq=350
)
# Train the model
fit_model = nn.fit(X_train_scaled,y_train,epochs=100,callbacks=[cp_callback])
# Evaluate the model using the test data
model_loss, model_accuracy = nn.evaluate(X_test_scaled,y_test,verbose=2)
print(f"Loss: {model_loss}, Accuracy: {model_accuracy}")
# Define the model - deep neural net
number_input_features = len(X_train[0])
hidden_nodes_layer1 = 8
hidden_nodes_layer2 = 5
nn_new = tf.keras.models.Sequential()
# First hidden layer
nn_new.add(tf.keras.layers.Dense(units=hidden_nodes_layer1, input_dim=number_input_features, activation="relu"))
# Second hidden layer
nn_new.add(tf.keras.layers.Dense(units=hidden_nodes_layer2, activation="relu"))
# Output layer
nn_new.add(tf.keras.layers.Dense(units=1, activation="sigmoid"))
# Compile the model
nn_new.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
# Restore the model weights
nn_new.load_weights("checkpoints/weights.100.hdf5")
# Evaluate the model using the test data
model_loss, model_accuracy = nn_new.evaluate(X_test_scaled,y_test,verbose=2)
print(f"Loss: {model_loss}, Accuracy: {model_accuracy}")
```
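Because `save_freq` counts training batches rather than epochs, the checkpoint interval in epochs depends on the batch size. A small sketch of converting "every 5 epochs" into a `save_freq` value, assuming the default batch size of 32 that `fit()` uses when none is specified, is:
```
# save_freq is measured in batches; convert "every N epochs" using batches per epoch.
# Assumes the default batch_size of 32 used by fit() when none is specified.
import numpy as np
batches_per_epoch = int(np.ceil(len(X_train_scaled) / 32))
print("save_freq for every 5 epochs:", 5 * batches_per_epoch)
```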
# Saving and Loading Full Models
```
# Export our model to HDF5 file
nn_new.save("trained_attrition.h5")
# Import the model to a new object
nn_imported = tf.keras.models.load_model('trained_attrition.h5')
# Evaluate the model using the test data
model_loss, model_accuracy = nn_imported.evaluate(X_test_scaled,y_test,verbose=2)
print(f"Loss: {model_loss}, Accuracy: {model_accuracy}")
len(X[0])
```
# Independence Tests Power over Increasing Sample Size
```
import sys, os
import multiprocessing as mp
from joblib import Parallel, delayed
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import t
from power import power
from hyppo.independence import CCA, MGC, RV, Dcorr, Hsic, HHG
from hyppo.tools import indep_sim
sys.path.append(os.path.realpath('..'))
import seaborn as sns
sns.set(color_codes=True, style='white', context='talk', font_scale=1.5)
PALETTE = sns.color_palette("Set1")
sns.set_palette(PALETTE[3:], n_colors=9)
MAX_SAMPLE_SIZE = 100
STEP_SIZE = 5
SAMP_SIZES = range(5, MAX_SAMPLE_SIZE + STEP_SIZE, STEP_SIZE)
POWER_REPS = 5
SIMULATIONS = [
"linear",
"exponential",
"cubic",
"joint_normal",
"step",
"quadratic",
"w_shaped",
"spiral",
"uncorrelated_bernoulli",
"logarithmic",
"fourth_root",
"sin_four_pi",
"sin_sixteen_pi",
"square",
"two_parabolas",
"circle",
"ellipse",
"diamond",
"multiplicative_noise",
"multimodal_independence",
]
TESTS = [
# CCA,
# MGC,
# RV,
Dcorr,
Hsic,
# HHG,
]
def estimate_power(sim, test):
est_power = np.array([np.mean([power(test, sim, n=i, p=1, noise=True) for _ in range(POWER_REPS)])
for i in SAMP_SIZES])
np.savetxt('../fast/vs_samplesize/{}_{}.csv'.format(sim, test.__name__),
est_power, delimiter=',')
return est_power
def fast_estimate_power(sim, test):
est_power = np.array([np.mean([power(test, sim, n=i, p=1, noise=True, auto=True) for _ in range(POWER_REPS)])
for i in SAMP_SIZES])
np.savetxt('../fast/vs_samplesize/{}_Fast_{}.csv'.format(sim, test.__name__),
est_power, delimiter=',')
return est_power
# outputs = Parallel(n_jobs=-1, verbose=100)(
# [delayed(estimate_power)(sim, test) for sim in SIMULATIONS for test in TESTS]
# )
outputs = Parallel(n_jobs=-1, verbose=100)(
[delayed(fast_estimate_power)(sim, test) for sim in SIMULATIONS for test in TESTS]
)
def plot_power():
fig, ax = plt.subplots(nrows=4, ncols=5, figsize=(25,20))
sim_title = [
"Linear",
"Exponential",
"Cubic",
"Joint Normal",
"Step",
"Quadratic",
"W-Shaped",
"Spiral",
"Bernoulli",
"Logarithmic",
"Fourth Root",
"Sine 4\u03C0",
"Sine 16\u03C0",
"Square",
"Two Parabolas",
"Circle",
"Ellipse",
"Diamond",
"Multiplicative",
"Independence"
]
plt.suptitle("Multivariate Independence Testing (Increasing Sample Size)", y=0.93, va='baseline')
for i, row in enumerate(ax):
for j, col in enumerate(row):
count = 5*i + j
sim = SIMULATIONS[count]
for test in TESTS:
power = np.genfromtxt('../fast/vs_samplesize/{}_{}.csv'.format(sim, test.__name__),
delimiter=',')
# hsic_power = np.genfromtxt('../fast/vs_samplesize/{}_Hsic.csv'.format(sim),
# delimiter=',')
colors = {
"MGC" : "#e41a1c",
"Dcorr" : "#377eb8",
"Hsic" : "#4daf4a",
}
test_name = test.__name__
if test_name in ["Dcorr", "Hsic"]:
fast_power = np.genfromtxt('../fast/vs_samplesize/{}_Fast_{}.csv'.format(sim, test.__name__),
delimiter=',')
if test_name == "MGC":
col.plot(SAMP_SIZES, power, color=colors[test_name], label=test_name, lw=2)
elif test_name in ["Dcorr", "Hsic"]:
col.plot(SAMP_SIZES, power, color=colors[test_name], label=test_name, lw=4)
col.plot(SAMP_SIZES, fast_power, color=colors[test_name], label="Fast " + test_name, lw=4, linestyle='dashed')
else:
col.plot(SAMP_SIZES, power, label=test_name, lw=2)
col.set_xticks([])
if i == 3:
col.set_xticks([SAMP_SIZES[0], SAMP_SIZES[-1]])
col.set_ylim(-0.05, 1.05)
col.set_yticks([])
if j == 0:
col.set_yticks([0, 1])
col.set_title(sim_title[count])
fig.text(0.5, 0.07, 'Sample Size', ha='center')
fig.text(0.07, 0.5, 'Statistical Power Relative to Hsic', va='center', rotation='vertical')
leg = plt.legend(bbox_to_anchor=(0.5, 0.07), bbox_transform=plt.gcf().transFigure,
ncol=5, loc='upper center')
leg.get_frame().set_linewidth(0.0)
for legobj in leg.legendHandles:
legobj.set_linewidth(5.0)
plt.subplots_adjust(hspace=.50)
plt.savefig('../fast/figs/indep_power_sampsize.pdf', transparent=True, bbox_inches='tight')
plot_power()
```
```
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```
# Figures
This notebook contains code for generating the figures and tables from the paper _"Understanding and Improving Interpolation in Autoencoders via an Adversarial Regularizer"_.
The code is mainly provided as an example and may require modification to be run in a different setting.
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import scipy.ndimage
import lib.data  # needed below for lib.data.draw_line
import lib.eval
import collections
import tensorflow as tf
import glob
import lib.utils
import all_aes
from absl import flags
import sys
FLAGS = flags.FLAGS
FLAGS(['--lr', '0.0001'])
import os
if not os.path.exists('figures'):
os.makedirs('figures')
def flatten_lines(lines, padding=2):
padding = np.ones((lines.shape[0], padding) + lines.shape[2:])
lines = np.concatenate([padding, lines, padding], 1)
lines = np.concatenate(lines, 0)
return np.transpose(lines, [1, 0] + list(range(2, lines.ndim)))
def get_final_value_median(values, steps, N=20):
sorted_steps = np.argsort(steps)
values = np.array(values)[sorted_steps]
return np.median(values[-N:])
HEIGHT = 32
WIDTH = 32
N_LINES = 16
START_ANGLE = 5*np.pi/7
END_ANGLE = 3*np.pi/2.
```
### Example line interpolations
#### Samples
```
example_lines = np.zeros((N_LINES, HEIGHT, WIDTH))
# Cover the space of angles somewhat evenly
angles = np.linspace(0, 2*np.pi - np.pi/N_LINES, N_LINES)
np.random.shuffle(angles)
for n, angle in enumerate(angles):
example_lines[n] = lib.data.draw_line(angle, HEIGHT, WIDTH)[..., 0]
fig = plt.figure(figsize=(15, 1))
ax = plt.Axes(fig, [0., 0., 1., 1.])
ax.set_axis_off()
fig.add_axes(ax)
ax.imshow(flatten_lines(example_lines), cmap=plt.cm.gray, interpolation='nearest')
plt.gca().set_axis_off()
plt.savefig('figures/line_samples.pdf', aspect='normal')
```
#### Correct interpolation
```
line_interpolation = np.zeros((N_LINES, HEIGHT, WIDTH))
angles = np.linspace(START_ANGLE, END_ANGLE, N_LINES)
for n in range(N_LINES):
line_interpolation[n] = lib.data.draw_line(angles[n], HEIGHT, WIDTH)[..., 0]
fig = plt.figure(figsize=(15, 1))
ax = plt.Axes(fig, [0., 0., 1., 1.])
ax.set_axis_off()
fig.add_axes(ax)
ax.imshow(flatten_lines(line_interpolation), cmap=plt.cm.gray, interpolation='nearest')
plt.gca().set_axis_off()
plt.savefig('figures/line_correct_interpolation.pdf', aspect='normal')
print lib.eval.line_eval(line_interpolation[np.newaxis, ..., np.newaxis])
```
#### Data-space interpolation
```
line_interpolation = np.zeros((N_LINES, HEIGHT, WIDTH))
start_line = lib.data.draw_line(START_ANGLE, HEIGHT, WIDTH)[..., 0]
end_line = lib.data.draw_line(END_ANGLE, HEIGHT, WIDTH)[..., 0]
weights = np.linspace(1, 0, N_LINES)
for n in range(N_LINES):
line_interpolation[n] = weights[n]*start_line + (1 - weights[n])*end_line
fig = plt.figure(figsize=(15, 1))
ax = plt.Axes(fig, [0., 0., 1., 1.])
ax.set_axis_off()
fig.add_axes(ax)
ax.imshow(flatten_lines(line_interpolation), cmap=plt.cm.gray, interpolation='nearest')
plt.gca().set_axis_off()
plt.savefig('figures/line_data_interpolation.pdf', aspect='normal')
print lib.eval.line_eval(line_interpolation[np.newaxis, ..., np.newaxis])
```
#### Abrupt change
```
line_interpolation = np.zeros((N_LINES, HEIGHT, WIDTH))
start_line = lib.data.draw_line(START_ANGLE, HEIGHT, WIDTH)[..., 0]
end_line = lib.data.draw_line(END_ANGLE, HEIGHT, WIDTH)[..., 0]
for n in range(N_LINES):
line_interpolation[n] = start_line if n < N_LINES/2 else end_line
fig = plt.figure(figsize=(15, 1))
ax = plt.Axes(fig, [0., 0., 1., 1.])
ax.set_axis_off()
fig.add_axes(ax)
ax.imshow(flatten_lines(line_interpolation), cmap=plt.cm.gray, interpolation='nearest')
plt.gca().set_axis_off()
plt.savefig('figures/line_abrupt_interpolation.pdf', aspect='normal')
print lib.eval.line_eval(line_interpolation[np.newaxis, ..., np.newaxis])
```
#### Overshooting
```
line_interpolation = np.zeros((N_LINES, HEIGHT, WIDTH))
angles = np.linspace(START_ANGLE, END_ANGLE - 2*np.pi, N_LINES)
for n in range(N_LINES):
line_interpolation[n] = lib.data.draw_line(angles[n], HEIGHT, WIDTH)[..., 0]
fig = plt.figure(figsize=(15, 1))
ax = plt.Axes(fig, [0., 0., 1., 1.])
ax.set_axis_off()
fig.add_axes(ax)
ax.imshow(flatten_lines(line_interpolation), cmap=plt.cm.gray, interpolation='nearest')
plt.gca().set_axis_off()
plt.savefig('figures/line_overshooting_interpolation.pdf', aspect='normal')
print lib.eval.line_eval(line_interpolation[np.newaxis, ..., np.newaxis])
```
#### Unrealistic
```
line_interpolation = np.zeros((N_LINES, HEIGHT, WIDTH))
angles = np.linspace(START_ANGLE, END_ANGLE, N_LINES)
blur = np.sin(np.linspace(0, np.pi, N_LINES))
for n in range(N_LINES):
line = lib.data.draw_line(angles[n], HEIGHT, WIDTH)[..., 0]
line_interpolation[n] = scipy.ndimage.gaussian_filter(line + np.sqrt(blur[n]), blur[n]*1.5)
fig = plt.figure(figsize=(15, 1))
ax = plt.Axes(fig, [0., 0., 1., 1.])
ax.set_axis_off()
fig.add_axes(ax)
ax.imshow(flatten_lines(line_interpolation), cmap=plt.cm.gray, interpolation='nearest', vmin=-1, vmax=1)
plt.gca().set_axis_off()
plt.savefig('figures/line_unrealistic_interpolation.pdf', aspect='normal')
```
### Line results table
```
RESULTS_PATH = '/home/craffel/data/dberth/RERUNS/*/lines32'
experiments = collections.defaultdict(list)
for run_path in glob.glob(RESULTS_PATH):
for path in glob.glob(os.path.join(run_path, '*')):
experiments[os.path.split(path)[-1]].append(os.path.join(path, 'tf', 'summaries'))
ALGS = collections.OrderedDict([
('Baseline', 'AEBaseline_depth16_latent16_scales4'),
('Dropout', 'AEDropout_depth16_dropout0.5_latent16_scales4'),
('Denoising', 'AEDenoising_depth16_latent16_noise1.0_scales4'),
('VAE', 'VAE_beta1.0_depth16_latent16_scales4'),
('AAE', 'AAE_adversary_lr0.0001_depth16_disc_layer_sizes100,100_latent16_scales4'),
('VQ-VAE', 'AEVQVAE_advdepth16_advweight0.0_beta10.0_depth16_emaTrue_latent16_noise0.0_num_blocks1_num_latents10_num_residuals1_reg0.5_scales3_z_log_size14'),
('ACAI', 'ARAReg_advdepth16_advweight0.5_depth16_latent16_reg0.2_scales4'),
])
experiment_results = collections.defaultdict(
lambda: collections.defaultdict(
lambda: collections.defaultdict(
lambda: collections.defaultdict(list))))
for experiment_key, experiment_paths in experiments.items():
for n, experiment_path in enumerate(experiment_paths):
print 'Getting results for', experiment_key, n
for events_file in glob.glob(os.path.join(experiment_path, 'events*')):
try:
for e in tf.train.summary_iterator(events_file):
for v in e.summary.value:
experiment_results[experiment_key][n][v.tag]['step'].append(e.step)
experiment_results[experiment_key][n][v.tag]['value'].append(v.simple_value)
except Exception as e:
print e
mean_distance = collections.defaultdict(list)
mean_smoothness = collections.defaultdict(list)
for experiment_name, events_lists in experiment_results.items():
for events in events_lists.values():
mean_distance[experiment_name].append(get_final_value_median(
events['mean_distance_1']['value'], events['mean_distance_1']['step']))
mean_smoothness[experiment_name].append(get_final_value_median(
events['mean_smoothness_1']['value'], events['mean_smoothness_1']['step']))
print 'Metric & ' + ' & '.join(ALGS.keys()) + ' \\\\'
print 'Mean Distance ($\\times 10^{-3}$) & ' + ' & '.join(
['{:.2f}$\pm${:.2f}'.format(np.mean(mean_distance[alg_name])*10**3, np.std(mean_distance[alg_name])*10**3)
for alg_name in ALGS.values()]) + ' \\\\'
print 'Mean Smoothness & ' + ' & '.join(
['{:.2f}$\pm${:.2f}'.format(np.mean(mean_smoothness[alg_name]), np.std(mean_smoothness[alg_name]))
for alg_name in ALGS.values()]) + ' \\\\'
```
### Real line interpolation examples
```
line_interpolation = np.zeros((N_LINES, HEIGHT, WIDTH))
start_line = lib.data.draw_line(START_ANGLE, HEIGHT, WIDTH)[..., 0]
end_line = lib.data.draw_line(END_ANGLE, HEIGHT, WIDTH)[..., 0]
DATASET = 'lines32'
BATCH = 64
for alg_name, alg_path in ALGS.items():
ae_path = os.path.join(RESULTS_PATH.replace('*', 'RUN3'), alg_path)
ae, _ = lib.utils.load_ae(ae_path, DATASET, BATCH, all_aes.ALL_AES)
with lib.utils.HookReport.disable():
ae.eval_mode()
input_lines = np.concatenate([
start_line[np.newaxis, ..., np.newaxis],
end_line[np.newaxis, ..., np.newaxis]])
start_latent, end_latent = ae.eval_sess.run(ae.eval_ops.encode, {ae.eval_ops.x: input_lines})
weights = np.linspace(1, 0, N_LINES).reshape(-1, 1, 1, 1)
interped_latents = weights*start_latent[np.newaxis] + (1 - weights)*end_latent[np.newaxis]
output_interp = ae.eval_sess.run(ae.eval_ops.decode, {ae.eval_ops.h: interped_latents})
fig = plt.figure(figsize=(15, 1))
ax = plt.Axes(fig, [0., 0., 1., 1.])
ax.set_axis_off()
fig.add_axes(ax)
ax.imshow(flatten_lines(output_interp[..., 0]), cmap=plt.cm.gray, interpolation='nearest')
plt.gca().set_axis_off()
plt.savefig('figures/line_{}_example.pdf'.format(alg_name.lower()), aspect='normal')
```
### Real data interpolations
```
BATCH = 64
DBERTH_RESULTS_PATH = '/home/craffel/data/dberth/RERUNS/RUN2'
DATASETS_DEPTHS = collections.OrderedDict([('mnist32', 16), ('svhn32', 64), ('celeba32', 64)])
LATENTS = [2, 16]
ALGS_FORMAT = collections.OrderedDict([
('Baseline', 'AEBaseline_depth{depth}_latent{latent}_scales3'),
('Dropout', 'AEDropout_depth{depth}_dropout0.5_latent{latent}_scales3'),
('Denoising', 'AEDenoising_depth{depth}_latent{latent}_noise1.0_scales3'),
('VAE', 'VAE_beta1.0_depth{depth}_latent{latent}_scales3'),
('AAE', 'AAE_adversary_lr0.0001_depth{depth}_disc_layer_sizes100,100_latent{latent}_scales3'),
('VQ-VAE', 'AEVQVAE_beta10.0_depth{depth}_latent{latent}_num_latents10_run1_scales3_z_log_size14'),
('ACAI', 'ARAReg_advdepth{depth}_advweight0.5_depth{depth}_latent{latent}_reg0.2_scales3'),
])
DATASETS_MINS = {'mnist32': -1, 'celeba32': -1.2, 'svhn32': -1}
DATASETS_MAXS = {'mnist32': 1, 'celeba32': 1.2, 'svhn32': 1}
N_IMAGES_PER_INTERPOLATION = 16
N_IMAGES = 4
def interpolate(sess,
ops,
image_left,
image_right,
dataset_min,
dataset_max,
interpolation=N_IMAGES_PER_INTERPOLATION):
def batched_op(op, op_input, array):
return sess.run(op, feed_dict={op_input: array})
# Interpolations
interpolation_x = np.array([image_left, image_right], 'f')
latent_x = batched_op(ops.encode, ops.x, interpolation_x)
latents = []
for x in range(interpolation):
latents.append((latent_x[:1] * (interpolation - x - 1) +
latent_x[1:] * x) / float(interpolation - 1))
latents = np.concatenate(latents, axis=0)
interpolation_y = batched_op(ops.decode, ops.h, latents)
interpolation_y = interpolation_y.reshape(
(interpolation, 1) + interpolation_y.shape[1:])
interpolation_y = interpolation_y.transpose(1, 0, 2, 3, 4)
image_interpolation = lib.utils.images_to_grid(interpolation_y)
padding = np.ones((image_interpolation.shape[0], 2) + image_interpolation.shape[2:])
image = np.concatenate(
[image_left, padding, image_interpolation, padding, image_right],
axis=1)
image = (image - dataset_min)/(dataset_max - dataset_min)
image = np.clip(image, 0, 1)
return image
def get_dataset_samples(sess, ops, dataset, batches=100):
batch = FLAGS.batch
with tf.Graph().as_default():
data_in = dataset.make_one_shot_iterator().get_next()
with tf.Session() as sess_new:
images = []
labels = []
while True:
try:
payload = sess_new.run(data_in)
images.append(payload['x'])
assert images[-1].shape[0] == 1
labels.append(payload['label'])
if len(images) == batches:
break
except tf.errors.OutOfRangeError:
break
images = np.concatenate(images, axis=0)
labels = np.concatenate(labels, axis=0)
latents = [sess.run(ops.encode,
feed_dict={ops.x: images[p:p + batch]})
for p in range(0, images.shape[0], FLAGS.batch)]
latents = np.concatenate(latents, axis=0)
latents = latents.reshape([latents.shape[0], -1])
return images, latents, labels
left_images = collections.defaultdict(lambda: None)
right_images = collections.defaultdict(lambda: None)
for dataset, depth in DATASETS_DEPTHS.items():
for latent in LATENTS:
for alg_name, alg_format in ALGS_FORMAT.items():
for n in range(N_IMAGES):
output_name = '{}_{}_latent_{}_interpolation_{}'.format(dataset, alg_name.lower(), latent, n + 1)
alg_path = os.path.join(DBERTH_RESULTS_PATH, dataset, alg_format.format(depth=depth, latent=latent))
if 1: # try:
ae, ds = lib.utils.load_ae(
alg_path, dataset, BATCH, all_aes.ALL_AES, return_dataset=True)
with lib.utils.HookReport.disable():
ae.eval_mode()
images, latents, labels = get_dataset_samples(ae.eval_sess,
ae.eval_ops,
ds.test)
labels = np.argmax(labels, axis=1)
if left_images[n] is None:
left_img_idx = n
if dataset == 'celeba32':
right_img_idx = N_IMAGES + n
else:
if n < N_IMAGES/2:
right_img_idx = np.flatnonzero(labels == labels[n])[N_IMAGES + n]
else:
right_img_idx = np.flatnonzero(labels != labels[n])[N_IMAGES + n]
print left_img_idx, labels[left_img_idx]
print right_img_idx, labels[right_img_idx]
left_images[n] = images[left_img_idx]
right_images[n] = images[right_img_idx]
left_image = left_images[n]
right_image = right_images[n]
image = interpolate(ae.eval_sess, ae.eval_ops, left_image, right_image,
DATASETS_MINS[dataset], DATASETS_MAXS[dataset])
fig = plt.figure(figsize=(15, 1))
ax = plt.Axes(fig, [0., 0., 1., 1.])
ax.set_axis_off()
fig.add_axes(ax)
ax.imshow(np.squeeze(image), cmap=plt.cm.gray, interpolation='nearest')
plt.gca().set_axis_off()
plt.savefig('figures/{}.pdf'.format(output_name), aspect='normal')
plt.close()
for n in range(N_IMAGES):
del left_images[n]
del right_images[n]
DATASET_NAMES = {'mnist32': 'MNIST', 'svhn32': 'SVHN', 'celeba32': 'CelebA'}
output = ""
for dataset, depth in DATASETS_DEPTHS.items():
for latent in LATENTS:
output += r"""
\begin{figure}
\centering
"""
for n in range(N_IMAGES):
alg_list = collections.OrderedDict()
for alg_name, alg_format in ALGS_FORMAT.items():
figure_name = '{}_{}_latent_{}_interpolation_{}'.format(dataset, alg_name.lower(), latent, n + 1)
alg_list[figure_name] = alg_name
if alg_name == ALGS_FORMAT.keys()[-1]:
reset = r"\addtocounter{{subfigure}}{{-{}}}".format(len(ALGS_FORMAT))
else:
reset = ""
output += r"""
\begin{{subfigure}}[b]{{\textwidth}}
\centering\parbox{{.09\linewidth}}{{\vspace{{0.3em}}\subcaption{{}}\label{{fig:{figure_name}}}}}
\parbox{{.75\linewidth}}{{\includegraphics[width=\linewidth]{{figures/{figure_name}.pdf}}}}{reset}
\end{{subfigure}}
""".format(figure_name=figure_name, reset=reset)
if alg_name == ALGS_FORMAT.keys()[-1]:
output += r"""
\vspace{0.5em}
"""
output += r"""
\caption{{Example interpolations on {} with a latent dimensionality of {} for """.format(
DATASET_NAMES[dataset], latent*16)
output += ', '.join([r'(\subref{{fig:{}}}) {}'.format(fn, an) for fn, an in alg_list.items()])
output += r""" autoencoders.}}
\label{{fig:{}_{}_interpolations}}
\end{{figure}}
""".format(dataset, latent)
print output
```
### VAE line samples
```
RESULTS_PATH = '/home/craffel/data/autoencoder/results_final/lines32'
line_interpolation = np.zeros((N_LINES, HEIGHT, WIDTH))
start_line = lib.data.draw_line(START_ANGLE, HEIGHT, WIDTH)[..., 0]
end_line = lib.data.draw_line(END_ANGLE, HEIGHT, WIDTH)[..., 0]
DATASET = 'lines32'
BATCH = 64
ae_path = os.path.join(RESULTS_PATH, 'VAE_beta1.0_depth16_latent16_scales4')
ae, _ = lib.utils.load_ae(ae_path, DATASET, BATCH, all_aes.ALL_AES)
with lib.utils.HookReport.disable():
ae.eval_mode()
random_latents = np.random.standard_normal(size=(16*16, 2, 2, 16))
random_images = ae.eval_sess.run(ae.eval_ops.decode, {ae.eval_ops.h: random_latents})
fig = plt.figure(figsize=(15, 15))
ax = plt.Axes(fig, [0., 0., 1., 1.])
ax.set_axis_off()
fig.add_axes(ax)
padding = np.ones((2, WIDTH*N_LINES + 4*N_LINES))
line_matrix = np.concatenate([
np.concatenate([padding, flatten_lines(random_images[n:n + 16, ..., 0]), padding], axis=0)
for n in range(0, 16*16, 16)], axis=0)
ax.imshow(line_matrix, cmap=plt.cm.gray, interpolation='nearest')
plt.gca().set_axis_off()
plt.savefig('figures/line_vae_samples.pdf', aspect='normal')
```
### Single-layer classifier table
```
def get_all_results(results_path, event_key):
experiments = collections.defaultdict(list)
for run_path in glob.glob(results_path):
for path in glob.glob(os.path.join(run_path, '*')):
experiments[os.path.split(path)[-1]].append(os.path.join(path, 'tf', 'summaries'))
experiment_results = collections.defaultdict(
lambda: collections.defaultdict(
lambda: collections.defaultdict(
lambda: collections.defaultdict(list))))
for experiment_key, experiment_paths in experiments.items():
for n, experiment_path in enumerate(experiment_paths):
print 'Getting results for', experiment_key, n
for events_file in glob.glob(os.path.join(experiment_path, 'events*')):
try:
for e in tf.train.summary_iterator(events_file):
for v in e.summary.value:
experiment_results[experiment_key][n][v.tag]['step'].append(e.step)
experiment_results[experiment_key][n][v.tag]['value'].append(v.simple_value)
except Exception as e:
print e
event_values = collections.defaultdict(list)
for experiment_name, events_lists in experiment_results.items():
for events in events_lists.values():
event_values[experiment_name].append(get_final_value_median(
events[event_key]['value'], events[event_key]['step']))
return event_values
RESULTS_PATH = '/home/craffel/data/dberth/RERUNS/*/mnist32'
accuracy = get_all_results(RESULTS_PATH, 'latent_accuracy_1')
ALGS = collections.OrderedDict([
('Baseline', 'AEBaseline_depth16_latent{}_scales3'),
('Dropout', 'AEDropout_depth16_dropout0.5_latent{}_scales3'),
('Denoising', 'AEDenoising_depth16_latent{}_noise1.0_scales3'),
('VAE', 'VAE_beta1.0_depth16_latent{}_scales3'),
('AAE', 'AAE_adversary_lr0.0001_depth16_disc_layer_sizes100,100_latent{}_scales3'),
('VQ-VAE', 'AEVQVAE_advdepth16_advweight0.0_beta10.0_depth16_emaTrue_latent{}_noiseFalse_num_blocks1_num_latents10_num_residuals1_reg0.5_scales3_z_log_size14'),
('ACAI', 'ARAReg_advdepth16_advweight0.5_depth16_latent{}_reg0.2_scales3')])
for latent_size in [2, 16]:
print '{} & '.format(latent_size*16) + ' & '.join(
['{:.2f}$\pm${:.2f}'.format(
np.mean(accuracy[alg_name.format(latent_size)]),
np.std(accuracy[alg_name.format(latent_size)]))
for alg_name in ALGS.values()]) + ' \\\\'
RESULTS_PATH = '/home/craffel/data/dberth/RERUNS/*/svhn32'
accuracy = get_all_results(RESULTS_PATH, 'latent_accuracy_1')
ALGS = collections.OrderedDict([
('Baseline', 'AEBaseline_depth64_latent{}_scales3'),
('Dropout', 'AEDropout_depth64_dropout0.5_latent{}_scales3'),
('Denoising', 'AEDenoising_depth64_latent{}_noise1.0_scales3'),
('VAE', 'VAE_beta1.0_depth64_latent{}_scales3'),
('AAE', 'AAE_adversary_lr0.0001_depth64_disc_layer_sizes100,100_latent{}_scales3'),
('VQ-VAE', 'AEVQVAE_advdepth16_advweight0.0_beta10.0_depth64_emaTrue_latent{}_noiseFalse_num_blocks1_num_latents10_num_residuals1_reg0.5_scales3_z_log_size14'),
('ACAI', 'ARAReg_advdepth64_advweight0.5_depth64_latent{}_reg0.2_scales3')])
for latent_size in [2, 16]:
print '{} & '.format(latent_size*16) + ' & '.join(
['{:.2f}$\pm${:.2f}'.format(
np.mean(accuracy[alg_name.format(latent_size)]),
np.std(accuracy[alg_name.format(latent_size)]))
for alg_name in ALGS.values()]) + ' \\\\'
RESULTS_PATH = '/home/craffel/data/dberth/RERUNS/*/cifar10'
accuracy = get_all_results(RESULTS_PATH, 'latent_accuracy_1')
ALGS = collections.OrderedDict([
('Baseline', 'AEBaseline_depth64_latent{}_scales3'),
('Dropout', 'AEDropout_depth64_dropout0.75_latent{}_scales3'),
('Denoising', 'AEDenoising_depth64_latent{}_noise1.0_scales3'),
('VAE', 'VAE_beta1.0_depth64_latent{}_scales3'),
('AAE', 'AAE_adversary_lr0.0001_depth64_disc_layer_sizes100,100_latent{}_scales3'),
('VQ-VAE', 'AEVQVAE_advdepth16_advweight0.0_beta10.0_depth64_emaTrue_latent{}_noiseFalse_num_blocks1_num_latents10_num_residuals1_reg0.5_scales3_z_log_size14'),
('ACAI', 'ARAReg_advdepth64_advweight0.5_depth64_latent{}_reg0.2_scales3')])
for latent_size in [16, 64]:
print '{} & '.format(latent_size*16) + ' & '.join(
['{:.2f}$\pm${:.2f}'.format(
np.mean(accuracy[alg_name.format(latent_size)]),
np.std(accuracy[alg_name.format(latent_size)]))
for alg_name in ALGS.values()]) + ' \\\\'
```
# Big O Examples
In the first part of the Big-O examples section we will walk through several variations of common Big-O functions. Make sure to complete the reading assignment!
Let's begin with some simple examples and explore what their Big-O is.
## O(1) Constant
```
def func_constant(values):
'''
Prints first item in a list of values.
'''
print (values[0])
func_constant([1,2,3])
```
Note how this function is constant: regardless of the list size, the function only ever takes a constant number of steps (in this case 1), printing the first value from the list. So an input list of 100 values will print just 1 item, a list of 10,000 values will print just 1 item, and a list of **n** values will print just 1 item!
## O(n) Linear
```
def func_lin(lst):
'''
Takes in list and prints out all values
'''
for val in lst:
print (val)
func_lin([1,2,3])
```
This function runs in O(n) (linear time). This means that the number of operations taking place scales linearly with n, so we can see here that an input list of 100 values will print 100 times, a list of 10,000 values will print 10,000 times, and a list of **n** values will print **n** times.
## O(n^2) Quadratic
```
def func_quad(lst):
'''
Prints pairs for every item in list.
'''
for item_1 in lst:
for item_2 in lst:
print (item_1,item_2)
lst = [0, 1, 2, 3]
func_quad(lst)
```
Note how we now have two loops, one nested inside another. This means that for a list of n items, we will have to perform n operations for *every item in the list!* This means that in total we will perform n times n operations, or n^2. So a list of 10 items will require 10^2, or 100, operations. You can see how dangerous this can get for very large inputs! This is why Big-O is so important to be aware of!
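To verify the 10^2 claim numerically, here is a small sketch (the helper name `count_quad_ops` is just for illustration) that counts the pairs instead of printing them:
```
def count_quad_ops(lst):
    '''Counts how many (item_1, item_2) pairs func_quad would print.'''
    ops = 0
    for item_1 in lst:
        for item_2 in lst:
            ops += 1
    return ops

for n in [10, 100, 1000]:
    print (n, count_quad_ops(list(range(n))))   # 100, 10000, 1000000 -> n**2
```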
______
## Calculating Scale of Big-O
In this section we will discuss how insignificant terms drop out of Big-O notation.
When it comes to Big-O notation we only care about the most significant terms; remember, as the input grows larger only the fastest-growing terms will matter. If you've taken a calculus class before, this will remind you of taking limits towards infinity. Let's see an example of how to drop constants:
```
def print_once(lst):
'''
Prints all items once
'''
for val in lst:
print (val)
print_once(lst)
```
The print_once() function is O(n) since it will scale linearly with the input. What about the next example?
```
def print_3(lst):
'''
Prints all items three times
'''
for val in lst:
print (val)
for val in lst:
print (val)
for val in lst:
print (val)
print_3(lst)
```
We can see that the first function will print O(n) items and the second will print O(3n) items. However, as n goes to infinity the constant can be dropped, since it will not have a large effect, so both functions are O(n).
Let's see a more complex example of this:
```
def comp(lst):
'''
This function prints the first item O(1)
    Then it prints the first 1/2 of the list O(n/2)
Then prints a string 10 times O(10)
'''
print (lst[0])
midpoint = int(len(lst)/2)
for val in lst[:midpoint]:
print (val)
for x in range(10):
print ('number')
lst = [1,2,3,4,5,6,7,8,9,10]
comp(lst)
```
So let's break down the operations here. We can combine each operation to get the total Big-O of the function:
$$O(1 + n/2 + 10)$$
We can see that as n grows larger the 1 and 10 terms become insignificant, and the 1/2 constant multiplying n is dropped as well, since Big-O ignores constant factors. This means the function is simply O(n)!
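As a quick numeric check (purely illustrative), dividing the exact count 1 + n/2 + 10 by n settles toward the constant 1/2 as n grows, which is exactly why the whole expression collapses to O(n):
```
for n in [10, 100, 1000, 10000]:
    ops = 1 + n/2 + 10      # operations performed by comp() on a list of length n
    print (n, ops, ops/n)   # ops/n settles toward 0.5, i.e. linear growth
```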
## Worst Case vs Best Case
Many times we are only concerned with the worst possible case of an algorithm, but in an interview setting it's important to keep in mind that worst case and best case scenarios may have completely different Big-O times. For example, consider the following function:
```
def matcher(lst,match):
'''
Given a list lst, return a boolean indicating if match item is in the list
'''
for item in lst:
if item == match:
return True
return False
lst
matcher(lst,1)
matcher(lst,11)
```
Note that in the first scenario, the best case was actually O(1), since the match was found at the first element. In the case where there is no match, every element must be checked, which results in a worst case time of O(n). Later on we will also discuss average case time.
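To make the gap concrete, here is a small sketch of a variant of `matcher` (the name `matcher_with_count` is just for illustration) that also reports how many comparisons it performed:
```
def matcher_with_count(lst, match):
    '''Like matcher, but also returns the number of comparisons made.'''
    comparisons = 0
    for item in lst:
        comparisons += 1
        if item == match:
            return True, comparisons
    return False, comparisons

print (matcher_with_count(lst, 1))    # best case: (True, 1) -- one comparison
print (matcher_with_count(lst, 11))   # worst case: (False, 10) -- every element checked
```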
Finally let's introduce the concept of space complexity.
## Space Complexity
Many times we are also concerned with how much memory/space an algorithm uses. The notation for space complexity is the same, but instead of counting operations over time, we count the amount of memory the algorithm allocates.
Let's see a few examples:
```
def printer(n=10):
'''
Prints "hello world!" n times
'''
for x in range(n):
print ('Hello World!')
printer()
```
Note how the function only ever holds a constant amount of data (the string being printed and the loop variable), no matter how many times it prints. So the algorithm has O(1) **space** complexity and an O(n) **time** complexity.
Let's see an example of O(n) **space** complexity:
```
def create_list(n):
new_list = []
for num in range(n):
new_list.append('new')
return new_list
print (create_list(5))
```
Note how the size of the new_list object scales with the input **n**; this shows that it is an O(n) algorithm with regard to **space** complexity.
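As a rough way to see that growth, `sys.getsizeof` reports the size of the list object itself (its internal pointer array, not the strings it references), which grows roughly linearly with **n**:
```
import sys

for n in [10, 100, 1000]:
    print (n, sys.getsizeof(create_list(n)))   # reported size in bytes grows with n
```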
_____
That's it for this lecture. Before continuing on, make sure to complete the homework assignment below:
# Homework Assignment
Your homework assignment after this lecture is to read the fantastic explanations of Big-O at these two sources:
* [Big-O Notation Explained](http://stackoverflow.com/questions/487258/plain-english-explanation-of-big-o/487278#487278)
* [Big-O Examples Explained](http://stackoverflow.com/questions/2307283/what-does-olog-n-mean-exactly)
## Convolutional Neural Network with MNIST dataset
## Import Classes and Functions
```
import os
import sys
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
sys.path.append(module_path)
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D
from keras import backend as K
from keras.utils import to_categorical
from livelossplot import PlotLossesKeras
```
## Initialize Random Number Generator
```
# fix random seed for reproducibility
seed = 7
np.random.seed(seed)
num_classes = 10
# input image dimensions
img_rows, img_cols = 28, 28
```
## Load The Dataset
The data comes pre-shuffled and already split between train and test sets.
```
(X_train, y_train), (X_test, y_test) = mnist.load_data()
```
### Plot the first few examples
```
plt.figure(figsize=(12,3))
for i in range(10):
plt.subplot(1, 10, i+1)
plt.imshow(X_train[i].reshape((img_rows, img_cols)), cmap='gray', interpolation='nearest')
plt.axis('off')
```
### Reshape the data
```
if K.image_data_format() == 'channels_first':
X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols)
X_test = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols)
input_shape = (1, img_rows, img_cols)
else:
X_train = X_train.reshape(X_train.shape[0], img_rows, img_cols, 1)
X_test = X_test.reshape(X_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
```
### Normalize the data
```
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
print('X_train shape:', X_train.shape)
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')
```
### Convert class vectors to binary class matrices
```
y_train = to_categorical(y_train, num_classes)
y_test = to_categorical(y_test, num_classes)
```
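For intuition, here is a tiny illustration (using the same `to_categorical` imported above) of the one-hot vectors this step produces:
```
# Label 3 becomes a length-10 vector with a 1 in position 3 and 0 elsewhere.
print(to_categorical([3], num_classes)[0])
# -> [0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
```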
## Define The Neural Network Model
```
def create_model():
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='Adadelta', metrics=['accuracy'])
return model
```
### Create the Model
```
model = create_model()
```
## Define training parameters
```
batch_size = 128
epochs = 5
```
## Train the model
```
model.fit(X_train, y_train, batch_size=batch_size,
epochs=epochs, verbose=1, validation_data=(X_test, y_test), callbacks=[PlotLossesKeras()])
score = model.evaluate(X_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
```
```
import os
import json
import pickle
import random
from collections import defaultdict, Counter
from indra.literature.adeft_tools import universal_extract_text
from indra.databases.hgnc_client import get_hgnc_name, get_hgnc_id
from adeft.discover import AdeftMiner
from adeft.gui import ground_with_gui
from adeft.modeling.label import AdeftLabeler
from adeft.modeling.classify import AdeftClassifier
from adeft.disambiguate import AdeftDisambiguator, load_disambiguator
from indra_db_lite.api import get_entrez_pmids_for_hgnc
from indra_db_lite.api import get_entrez_pmids_for_uniprot
from indra_db_lite.api import get_plaintexts_for_text_ref_ids
from indra_db_lite.api import get_text_ref_ids_for_agent_text
from indra_db_lite.api import get_text_ref_ids_for_pmids
from adeft_indra.grounding import AdeftGrounder
from adeft_indra.s3 import model_to_s3
from adeft_indra.model_building.escape import escape_filename
def get_text_ref_ids_for_entity(ns, id_):
if ns == 'HGNC':
pmids = get_entrez_pmids_for_hgnc(id_)
elif ns == 'UP':
pmids = get_entrez_pmids_for_uniprot(id_)
return list(get_text_ref_ids_for_pmids(pmids).values())
adeft_grounder = AdeftGrounder()
shortforms = ['NPH3']
model_name = ':'.join(sorted(escape_filename(shortform) for shortform in shortforms))
results_path = os.path.abspath(os.path.join('../../', 'results', model_name))
miners = dict()
all_texts = {}
for shortform in shortforms:
text_ref_ids = get_text_ref_ids_for_agent_text(shortform)
content = get_plaintexts_for_text_ref_ids(text_ref_ids, contains=shortforms)
text_dict = content.flatten()
miners[shortform] = AdeftMiner(shortform)
miners[shortform].process_texts(text_dict.values())
all_texts.update(text_dict)
longform_dict = {}
for shortform in shortforms:
longforms = miners[shortform].get_longforms()
longforms = [(longform, count, score) for longform, count, score in longforms
if count*score > 2]
longform_dict[shortform] = longforms
combined_longforms = Counter()
for longform_rows in longform_dict.values():
combined_longforms.update({longform: count for longform, count, score
in longform_rows})
grounding_map = {}
names = {}
for longform in combined_longforms:
groundings = adeft_grounder.ground(longform)
if groundings:
grounding = groundings[0]['grounding']
grounding_map[longform] = grounding
names[grounding] = groundings[0]['name']
longforms, counts = zip(*combined_longforms.most_common())
pos_labels = []
list(zip(longforms, counts))
grounding_map, names, pos_labels = ground_with_gui(longforms, counts,
grounding_map=grounding_map,
names=names, pos_labels=pos_labels, no_browser=True, port=8890)
result = [grounding_map, names, pos_labels]
result
grounding_map, names, pos_labels = [{'non phototropic hypocotyl 3': 'UP:Q9FMF5',
'nonphototropic hypocotyl 3': 'UP:Q9FMF5'},
{'UP:Q9FMF5': 'RPT3'},
['UP:Q9FMF5']]
excluded_longforms = []
grounding_dict = {shortform: {longform: grounding_map[longform]
for longform, _, _ in longforms if longform in grounding_map
and longform not in excluded_longforms}
for shortform, longforms in longform_dict.items()}
result = [grounding_dict, names, pos_labels]
if not os.path.exists(results_path):
os.mkdir(results_path)
with open(os.path.join(results_path, f'{model_name}_preliminary_grounding_info.json'), 'w') as f:
json.dump(result, f)
additional_entities = {
'HGNC:7907': ['NPHP3', ['NPH3', 'NPHP3']],
'HGNC:8077': ['NXPH3', ['NPH3', 'NXPH3', 'neurexophilin', 'KIAA1159']],
'UP:Q9FMF5': ['RPT3', ['NPH3']],
}
unambiguous_agent_texts = {}
labeler = AdeftLabeler(grounding_dict)
corpus = labeler.build_from_texts(
(text, text_ref_id) for text_ref_id, text in all_texts.items()
)
agent_text_text_ref_id_map = defaultdict(list)
for text, label, id_ in corpus:
agent_text_text_ref_id_map[label].append(id_)
entity_text_ref_id_map = {
entity: set(
get_text_ref_ids_for_entity(*entity.split(':', maxsplit=1))
)
for entity in additional_entities
}
intersection1 = []
for entity1, trids1 in entity_text_ref_id_map.items():
for entity2, trids2 in entity_text_ref_id_map.items():
intersection1.append((entity1, entity2, len(trids1 & trids2)))
intersection2 = []
for entity1, trids1 in agent_text_text_ref_id_map.items():
for entity2, pmids2 in entity_text_ref_id_map.items():
intersection2.append((entity1, entity2, len(set(trids1) & trids2)))
intersection1
intersection2
all_used_trids = set()
for entity, agent_texts in unambiguous_agent_texts.items():
used_trids = set()
for agent_text in agent_texts[1]:
trids = set(get_text_ref_ids_for_agent_text(agent_text))
new_trids = list(trids - all_texts.keys() - used_trids)
content = get_plaintexts_for_text_ref_ids(new_trids, contains=agent_texts[1])
text_dict = content.flatten()
corpus.extend(
[
(text, entity, trid) for trid, text in text_dict.items() if len(text) >= 5
]
)
used_trids.update(new_trids)
all_used_trids.update(used_trids)
for entity, trids in entity_text_ref_id_map.items():
new_trids = list(set(trids) - all_texts.keys() - all_used_trids)
_, contains = additional_entities[entity]
content = get_plaintexts_for_text_ref_ids(new_trids, contains=contains)
text_dict = content.flatten()
corpus.extend(
[
(text, entity, trid) for trid, text in text_dict.items() if len(text) >= 5
]
)
names.update({key: value[0] for key, value in additional_entities.items()})
names.update({key: value[0] for key, value in unambiguous_agent_texts.items()})
pos_labels = list(set(pos_labels) | additional_entities.keys() |
unambiguous_agent_texts.keys())
%%capture
classifier = AdeftClassifier(shortforms, pos_labels=pos_labels, random_state=1729)
param_grid = {'C': [100.0], 'max_features': [10000]}
texts, labels, pmids = zip(*corpus)
classifier.cv(texts, labels, param_grid, cv=5, n_jobs=5)
classifier.stats
disamb = AdeftDisambiguator(classifier, grounding_dict, names)
disamb.dump(model_name, results_path)
print(disamb.info())
model_to_s3(disamb)
from adeft.disambiguate import load_disambiguator
disamb = load_disambiguator("BAL")
disamb
print(_28.info())
```
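Once the disambiguator has been dumped and uploaded, it can be applied to new text. The sketch below is a hedged example: it assumes the `disambiguate` method exposed by adeft's `AdeftDisambiguator`, and the sentence is made up for illustration.
```
# Apply the trained disambiguator to a new text mentioning the shortform
example_text = (
    "NPH3 is required for phototropic responses in Arabidopsis hypocotyls."
)
for grounding, name, scores in disamb.disambiguate([example_text]):
    print(grounding, name, scores)
```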
# A PyMODI Tutorial for Beginners
<img src="https://github.com/LUXROBO/pymodi/blob/master/docs/_static/img/logo.png?raw=true" height=150 width=500>
# Installing PyMODI
### Installing PyMODI on your computer
If Python is installed on your computer, you can install PyMODI with the pip command.
Open a command window (Windows + R) and enter the following to install it.
    $ python -m pip install pymodi --user
If you downloaded pymodi with git clone, use the following command instead.
    $ python setup.py install --user
### For now, though, we will install it from inside Jupyter.
From here on, pressing Ctrl + Enter inside a code cell, or clicking Run in the top banner, runs the currently active cell.
Click the code below and press Ctrl + Enter to run it.
When the In \[*\] marker at the top left of the code box changes to a number, execution has finished.
```
# Let's install PyMODI
import sys
!{sys.executable} -m pip install pymodi
```
Once PyMODI is installed, we import the package so that we can use it. In Python, installed packages are brought in with the import keyword. Run the next code block to import the PyMODI package.
```
import modi
```
If the import succeeded, the current PyMODI version will be displayed.
# A Quick Python Refresher
Before putting PyMODI to work, let's review some basic Python syntax.
### Hello World
Printing "Hello World" to the screen is a good starting point for learning any language.
In Python, you use print to display something on the screen.
```python
print(<what you want to print>)
```
Run the next code block to print "Hello World" to the screen.
```
print("Hello World")
```
### Variables
Python has variables for holding the values you want to keep. You can name a variable whatever you like, and you store a value in it with =. In the next code block we create a variable named num and store the value 5 in it.
```
num = 5
```
Now that num holds 5, it is treated as the number 5 wherever it is used. The code below prints num plus 2.
```
print(num + 2)
```
Note, however, that you must not use quotation marks when referring to a variable.
```
print("num")
```
Text wrapped in quotation marks is a literal string, not a variable.
### Conditionals
As you write code, there will be parts you want to run only under certain conditions. In Python you check conditions with an if statement.
```python
if <condition>:
    <what to run>
```
The code above runs the body of the if statement only when the condition is satisfied.
Conditions you can use in Python include:
* True, False: truth values
* == : equal
* \> : greater than
* \>= : greater than or equal
* < : less than
* <= : less than or equal
* not : negation
The code block below prints num only when it is greater than 10.
```
num = 5 # store 5 in num
if num > 10: # if num is greater than 10
    print(num) # print num
num = 15
if num > 10:
    print(num)
```
You can also chain several conditions together. If you follow an if with an elif, the elif body runs when the conditions above it fail and the elif condition is satisfied. If every if and elif fails, the code in the else branch runs.
```
num = 15
if num > 15:
print("Bigger than 15!!")
elif num > 10:
print("Bigger than 10!!")
else:
print("Less than 10!!")
```
### Loops
The power of a computer is that it can do repetitive work easily and quickly. For such repetition, Python provides for loops and while loops.
#### The for loop
A for loop is used when you want to repeat something a fixed number of times. Its syntax is:
```python
for i in range(<number of repetitions>):
    <code to run>
```
The code below prints ten numbers, starting from 0.
```
for i in range(10):
print(i)
```
Using the same approach, you can add up every number from 0 to 100.
```
sum = 0 # create a variable named sum, starting at 0
for i in range(101): # repeat for 0 through 100
    sum = sum + i # keep adding i to sum
print(sum) # print the total
```
#### The while loop
A while loop keeps repeating its body as long as its condition is satisfied.
```python
while <condition>:
    <code to run>
```
The code runs while the condition is true, and the loop exits the moment the condition becomes false.
```
num = 0
while num < 10: # while num is less than 10
    print(num) # print the number
    num = num + 1 # add 1 to num
```
A loop can also be exited explicitly with the break keyword.
```
num = 0
while True: # the condition is always true, so the loop keeps running
    num = num + 1 # add 1 to num
    if num > 10: # if num is greater than 10
        break # escape the loop
print(num) # print the final value of num
```
This time, let's add up the numbers from 0 to 100 with a while loop. Complete the code below to get the correct answer!
```
sum = 0
while <condition>:
    <code to run>
print(sum)
```
# Your First PyMODI Project!!
Let's move MODI modules with Python!
How to use each module in PyMODI is documented at the link below.
https://pymodi.readthedocs.io/en/master/modi.module.html#subpackages
Follow along with the tutorial below!
### Creating a MODI object
#### Before you start, connect one Button module and one LED module to the Network module, and plug it into your computer.
To connect to the assembled MODI modules, we create a MODI object.
Let's create a MODI object named bundle with the following code.
```
import modi # import the pymodi package
bundle = modi.MODI() # create a MODI object
```
When the MODI object connects, all attached modules are connected along with it. You can list every connected module like this:
```
modules = bundle.modules # all modules connected to MODI
print(modules)
```
You can also view them grouped by module type.
```
print(bundle.leds) # only the LED modules
print(bundle.buttons) # only the Button modules
```
You can also pick out a single module by giving an index.
The next code creates variables named led and button and stores the modules in them.
```
led = bundle.leds[0] # get the first LED module
button = bundle.buttons[0] # get the first Button module
print(led)
print(button)
```
Now we have an LED module and a Button module.
Let's use the LED module to turn on a green light.
How to use the LED module is described in this document:
https://pymodi.readthedocs.io/en/master/modi.module.output_module.html#module-modi.module.output_module.led
```
led.green = 255 # set the LED's green RGB value to the maximum, 255
```
You can also set the RGB value directly.
```
led.rgb = 148, 0, 211 # set the RGB value to violet, rgb(148, 0, 211)
```
Just changing the color is boring, so let's make it blink 5 times!!
```
import time
led.turn_off() # first, turn the LED off
for i in range(5): # repeat 5 times
    led.blue = 255 # turn on the blue light
    time.sleep(0.5) # wait 0.5 seconds
    led.turn_off() # turn the LED off
    time.sleep(0.5) # wait 0.5 seconds
```
This time, let's make the light come on whenever the button is pressed.
```
import time
while True: # repeat forever
    if button.pressed: # if the button is pressed,
        led.turn_on() # turn the light on
    elif button.double_clicked: # if it is double-clicked
        break # stop the loop
    else: # otherwise
        led.turn_off() # turn the light off
    time.sleep(0.1) # wait 0.1 seconds
```
Finally, write a script yourself. Using button.toggled, make the LED switch between green and red each time the button is pressed. button.toggled flips between True and False every time the button is pressed. (One possible solution is sketched right after this template.)
```
# complete the code below
import time
while True:
    if <condition>:
        <do something>
    else:
        <do something>
    if button.double_clicked:
        break
    time.sleep(0.1)
```
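One possible way to complete the exercise is sketched below. It relies only on the `button.toggled`, `led.rgb`, and `button.double_clicked` attributes introduced above, so treat it as a reference rather than the only answer.
```
# A possible solution: toggle between green and red on each button press
import time
while True:
    if button.toggled:        # flips between True and False on every press
        led.rgb = 0, 255, 0   # green
    else:
        led.rgb = 255, 0, 0   # red
    if button.double_clicked: # double-click to stop the loop
        break
    time.sleep(0.1)
led.turn_off()
```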
# Wrapping Up
Well done! You have completed the PyMODI tutorial.
Every project that uses PyMODI follows the same structure as above.
Create a MODI object to fetch the modules, then consult the documentation to control them.
Go and unleash the remarkable power of PyMODI yourself!!
@ The general structure of PyMODI code
```python
import modi # import the modi package
bundle = modi.MODI() # create a MODI object
<module name> = bundle.<module type>s[<module index>]
...
```
# Multivariate Linear Regression
The dataset was obtained from https://medium.com/we-are-orb/multivariate-linear-regression-in-python-without-scikit-learn-7091b1d45905.
We're using multivariate linear regression to predict the price of an apartment or house from its size and number of bedrooms. The model assumes a linear relationship, which with two input features corresponds to fitting a plane.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
# read data and parse it
data = pd.read_csv('../datasets/multivariate_linear_regression.csv',
names=["areas","bedrooms","prices"])
# standardize each column to zero mean and unit variance (z-scores)
data = (data - data.mean())/data.std()
bedrooms = data['bedrooms'].values
prices = data['prices'].values
areas = data['areas'].values
x = np.array([areas, bedrooms]).T
# a column of ones is prepended to x so the intercept term is handled
# by the same inner product as the other coefficients
ones = np.ones((x.shape[0], 1))
x = np.concatenate([ones, x], 1)
y = np.array(prices).reshape([-1, 1])
fig = plt.figure()
ax = Axes3D(fig)
ax.set_title('Scatter plot between area, bedrooms, and prices')
ax.set_ylabel('Area (normalized)')
ax.set_xlabel('Bedrooms (normalized)')
ax.set_zlabel('Price (normalized)')
ax.scatter(bedrooms, areas, prices, color='green')
# Hyperparameters
epochs = 3000 # number of time steps
learning_rate = 0.001 # sensitivity between time steps
# estimates are of the form (a, b, c) in z = a + bx + cy
def calculateError(x, y, estimates):
y_hat = x @ estimates.T # estimator for y value
return ((y - y_hat) ** 2).mean() # MSE error between y and y_hat
def gradientDescent(x, y, estimates, learning_rate, epochs):
error = np.zeros(epochs)
for epoch in range(epochs):
y_hat = x @ estimates.T
        # negative gradient of the MSE w.r.t. each coefficient: 2 * x_i * (y - y_hat)
change = (2 * x * (y - y_hat)).mean(axis=0)
# gradient descent step
estimates = estimates + learning_rate * change
error[epoch] = calculateError(x, y, estimates)
return (estimates, error)
# Starting out with all coefficients being 1
estimates = np.ones((1, x.shape[1]))
estimates, error = gradientDescent(x, y, estimates, learning_rate, epochs)
print("Estimates: ", estimates)
plt.xlabel('Epoch (time step)')
plt.ylabel('Error (MSE)')
plt.title('Gradual change in Error with every time step')
plt.plot(np.arange(epochs), error)
fig = plt.figure()
ax = Axes3D(fig)
ax.set_title('Best fit plane between area, bedrooms, and prices')
ax.set_ylabel('Area (normalized)')
ax.set_xlabel('Bedrooms (normalized)')
ax.set_zlabel('Price (normalized)')
ax.scatter(bedrooms, areas, prices, color='green')
ax.plot_trisurf(bedrooms, areas, estimates[0][0] + estimates[0][1] * areas
+ estimates[0][2] * bedrooms, alpha=0.5)
plt.savefig('../result-plots/multivariate_linear_regression.svg')
```
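Because the model is linear, the gradient-descent result can be cross-checked against the closed-form least-squares solution. The sketch below is not part of the original notebook; it reuses the `x` and `y` arrays built above and solves the normal equations directly.
```
# Closed-form least squares: beta = (X^T X)^{-1} X^T y
# The values should be close to the gradient-descent estimates printed above.
beta = np.linalg.solve(x.T @ x, x.T @ y)
print("Normal-equation estimates:", beta.ravel())
```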
## WaMDaM Directions and Use Cases
### By Adel M. Abdallah, Jan 2022
# Step 2: Install WaMDaM Wizard and Connect to the database
### i. Download the WaMDaM Wizard software
Download the latest release from https://github.com/WamdamProject/WaMDaM_Wizard/releases
### ii. Launch WaMDaM Wizard
Once downloaded, double-click the executable “wamdam.exe” and the main window will appear. Click the **More info** hyperlink if you encounter a warning dialog box (Figure 1), then click **Run anyway**, which will show the Wizard interface (Figure 2).
<img src="https://github.com/WamdamProject/WaMDaM-software-ecosystem/blob/master/mkdocs/Edit_MD_Files/images/run.PNG?raw=true" style="float:center;width:600px;padding:20px">
<h3><center>**Figure 1:** Installation (Windows 10)</center></h3>
<img src="https://github.com/WamdamProject/WaMDaM-software-ecosystem/blob/master/mkdocs/Edit_MD_Files/images/Wizard.PNG?raw=true" style="float:center;width:600px;padding:20px">
<h3><center>**Figure 2:** WaMDaM Wizard landing interface</center></h3>
If you’re interested, the source code of the Wizard is available on GitHub here https://github.com/WamdamProject/WaMDaM_Wizard
<br>
### iii. Connect to the SQLite database file
Click the **Connect to SQLite** tab (Figure 2), then click the **Connect to an Existing SQLite WaMDaM database** button.
From the previous step, you should already have cloned the GitHub repo
https://github.com/WamdamProject/WaMDaM_JupyterNotebooks
Navigate to the location on your machine where the cloned GitHub folder lives. For example:
C:\Users\Adel\Documents\GitHub\WamdamProject\WaMDaM_JupyteNotebooks\3_VisualizePublish\Files\Original
Connect to the SQLite file WEAP_WASH_BearRiver.sqlite
<br>
# Congratulations!
### iv. View loaded data in WaMDaM tables (Optional; not needed for this Ecosystem paper)
This step is not needed to replicate the work. If you want to view the WaMDaM table structure and its populated data:
• Download and install the free and open source tool **DB Browser For SQLite** to query the database and view its tables. Download it from https://sqlitebrowser.org/
• Download the already populated SQLite file **BearRiverDatasets_August_2018_Final.sqlite** from GitHub at
https://github.com/WamdamProject/WaMDaM_UseCases/tree/master/3_SQLite_database
• Launch **DB Browser For SQLite** and connect to the SQLite file you downloaded (Figure 3). Click Open Database. You can see the structure of the WaMDaM tables by clicking **Database Structure**, and click **Browse Data** to see the populated tables. Then click
**Execute SQL**.
Type the simple query below and click the execute (triangle) button.
SELECT * FROM ObjectTypes
The query returns every column of the ObjectTypes table, along with its populated rows for all the Resource Types in the database.
<img src="https://github.com/WamdamProject/WaMDaM-software-ecosystem/blob/master/mkdocs/Edit_MD_Files/QuerySelect/images/DB_BrowserSQL.png?raw=true" style="float:center;width:700px;padding:20px">
**Figure 3:** The interface of the _DB Browser For SQLite_ software, which views SQLite tables and executes SQL queries against the WaMDaM database
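If you prefer to run the same query programmatically instead of through DB Browser, a minimal sketch using Python's built-in sqlite3 module (assuming the BearRiverDatasets_August_2018_Final.sqlite file downloaded above sits in the working directory) looks like this:
```python
import sqlite3

# Connect to the downloaded WaMDaM SQLite file (adjust the path as needed)
conn = sqlite3.connect('BearRiverDatasets_August_2018_Final.sqlite')
cursor = conn.execute('SELECT * FROM ObjectTypes')

# Print the column names followed by the first few rows
print([description[0] for description in cursor.description])
for row in cursor.fetchmany(5):
    print(row)

conn.close()
```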
### v. Learn about controlled vocabularies (Optional)
This step is just for your information, in case you want to use the controlled vocabularies or make changes to the existing workbooks.
* This step is also optional and not needed to replicate the work. Read further if you want to see how WaMDaM controlled vocabularies work or to suggest new terms to add.
<br>
* The WaMDaM controlled vocabularies are hosted online and can be accessed at http://vocabulary.wamdam.org/
<br>
* Each time you use the WaMDaM Wizard to load water management data, it calls this repository to download and update the SQLite controlled vocabulary tables.
<br>
* Continue to use your model’s native terms (e.g., how your model refers to object types, attributes, and instances). Add the controlled term next to each native term (i.e., register them against each other). Registering your model's native terms against these CVs will allow you to relate, query, and compare all your model’s data to other registered data from other models and datasets within the database. Open one of the Excel workbook examples to see how the CVs work.
In the HomePage spreadsheet, click the **Update Controlled Vocabularies** button, which calls the online WaMDaM controlled vocabulary registry and downloads or updates the most recent vocabularies into your template.
Open the far-right spreadsheet, **ControlledVocabularies**, which lists all of the downloaded terms used as dropdown menus in the rest of the input data spreadsheets.
<br>
* The source code for the vocabulary app can be accessed at https://github.com/WamdamProject/WaMDaM_ControlledVocabularies
# Loops
- A loop is a control structure that executes a block of statements repeatedly
- while is well suited to open-ended, breadth-style traversal
- for is the one used most often in practice
## The while loop
- A while loop repeatedly executes its statements as long as a condition stays true
- A while loop must have a termination condition, otherwise it easily turns into an infinite loop
- The syntax of a while loop is:
    while loop-continuation-condition:
        Statement
```
i = 0
while i<10:
print('hahaha')
i += 1
```
## Example:
sum = 0
i = 1
while i <10:
sum = sum + i
i = i + 1
```
sum = 0
i = 1
while i<10:
    sum = sum + i
i = i + 1
```
## Incorrect example:
sum = 0
i = 1
while i <10:
sum = sum + i
i = i + 1
- If you end up in an infinite loop, press Ctrl + C to stop it
## EP:


```
count = 0
while count < 100:
print(count)
count -= 1
i = 1
while i < 10:
if i % 2 ==0:
print(i)
i += 1
i = 1
while i <= 10:
if i == 5:
print(i)
break
else:
i += 1
```
# Verification code
- Randomly generate a four-character verification code. If the user enters it correctly, print that the code is correct; if not, generate a new code and let the user try again.
- The user only gets three attempts. If all three are wrong, return "Stop crawling -- there is nothing worth scraping on our little site".
- For password logins, the account is locked after three wrong attempts.
```
import random
number1 = random.randint(1000, 9999)       # generate a 4-digit code
print(number1)
a = eval(input('Enter the code: '))
c = 1
while a != number1:
    c = c + 1
    if c == 4:                             # three wrong attempts used up
        print('No more chances')
        break
    number1 = random.randint(1000, 9999)   # generate a new code
    print(number1)
    a = eval(input('Enter the code: '))
if a == number1:
    print('Verification code correct')
```
## Try an infinite loop
## Case study: guess the number
- You will write a program that randomly generates a number between 0 and 10, inclusive. This program
- keeps prompting the user to enter numbers until the guess is correct, telling them each time whether the guess is too high or too low (a sketch of one possible solution follows below)
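A sketch of the guessing game described above, written in the same eval/input style used elsewhere in this notebook:
```
import random

secret = random.randint(0, 10)   # random number between 0 and 10, inclusive
guess = -1
while guess != secret:
    guess = eval(input('Enter your guess (0-10): '))
    if guess > secret:
        print('Too high')
    elif guess < secret:
        print('Too low')
    else:
        print('Correct!')
```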
## Using a sentinel value to control a loop
- A sentinel value signals the end of the input
- 
## Warning

## The for loop
- Python's for loop iterates over every value in a sequence
- range(a, b, k): a, b, k must be integers
- a: start
- b: end
- k: step
- Note that for iterates over any iterable object, not just range
# In Python, everything is an object
```
number = 0
sum = 0
for count in range(5):
number = eval(input("Enter an integer:"))
sum += number
print("sum is",sum)
print("count is",count)
```
## EP:
- 
```
sum = 0
for i in range(1001):
sum += i
print(sum)
sum = 0  # note: assigning to sum shadows the built-in sum() function
i = 0
while i < 1001:
sum = sum + i
i += 1
print(sum)
i = 1
sum_ = 0
while sum_<1001:
sum_ += i
i += 1
print(sum_)
sum_ = 0
for i in range(1,10001):
sum_ += i
if sum_ > 10000:
break
print(sum_)
number = 0
sum = 0
for count in range(5):
    pass  # placeholder: loop body intentionally left empty
for i in range(10):
if i == 5:
break
print(i)
for i in range(10):
if i == 5:
        continue
print(i)
for i in range(10):
for j in range(10):
print(i,j)
for i in range(10):
for j in range(10):
print(i,i)
for i in range(10):
for j in range(10):
for k in range(10):
print(i,j,k)
for i in range(10):
for j in range(10):
for k in range(10):
if k ==5:
break
print(i,j,k)
```
## Nested loops
- One loop can be nested inside another
- Every time the outer loop advances, the inner loop is reset and runs to completion again
- In other words, for each single pass of the outer loop, the inner loop runs in full
- Note:
> - Deeply nested loops are very time-consuming
 - Use at most 3 levels of nesting
## EP:
- Use nested loops to print the 9x9 multiplication table
- Display all prime numbers below 50
```
for i in range(1,10):
for j in range(1,i+1):
print('{}*{}={}\t'.format(j,i,i*j),end= '')
print()
for i in range(1,10):
for j in range(1,i+1):
print(j,'x',i,'=',i*j,end=',')
print()
i = 2
print('Primes below 50:')
while i < 50:
    n = 2
    while n <= (i / n):
        if not (i % n):
            break
        n = n + 1
    if n > i / n:
        print(i)
    i = i + 1
```
## The keywords break and continue
- break jumps out of the loop and terminates it
- continue skips the rest of the current iteration and moves on to the next one
## Note


# Homework
- 1

```
A = eval(input('Enter a number: '))
```
- 2

```
money = 10000
i = 1
for i in range(10):
money = 10000 * ((1+0.05)** i )
i += 1
print(money)
```
- 3

```
for i in range(100,1000):
    if i % 5 == 0 and i % 6 == 0:
print(i,end=' ')
```
- 4

- 5

```
n = 1
while n < 120:
if n**3 < 12000:
print(n)
    n += 1
n = 1
while n < 120:
if n**2 >12000:
print(n)
break
n += 1
```
- 6

- 7

- 8

```
j = 3
i = 1
a = 0
while True:          # sum the series of terms i/j, stepping i and j by 2 up to i = 97
    a = a + i / j
    if i == 97:
        break
    j += 2
    i += 2
print(a)
```
- 9

- 10

- 11

```
for i in range(1,8):
for j in range(1,8):
print(i,j)
```
- 12

[](https://github.com/labmlai/annotated_deep_learning_paper_implementations)
[](https://colab.research.google.com/github/labmlai/annotated_deep_learning_paper_implementations/blob/master/labml_nn/activations/fta/experiment.ipynb)
[](https://www.comet.ml/labml/fta/69be11f83693407f82a86dcbb232bcfe?experiment-tab=chart&showOutliers=true&smoothing=0&transformY=smoothing&viewId=rlJOpXDGtL8zbkcX66R77P5me&xAxis=step)
## [Fuzzy Tiling Activations](https://nn.labml.ai/activations/fta/index.html)
Here we train a transformer that uses [Fuzzy Tiling Activation](https://nn.labml.ai/activations/fta/index.html) in the
[Feed-Forward Network](https://nn.labml.ai/transformers/feed_forward.html).
We use it for a language model and train it on the Tiny Shakespeare dataset
for demonstration.
However, this is probably not the ideal task for FTA, and we
believe FTA is more suitable for modeling data with continuous variables.
### Install the packages
```
!pip install labml-nn comet_ml --quiet
```
### Enable [Comet](https://www.comet.ml)
```
#@markdown Select in order to enable logging this experiment to [Comet](https://www.comet.ml).
use_comet = False #@param {type:"boolean"}
if use_comet:
import comet_ml
comet_ml.init(project_name='fta')
```
### Imports
```
import torch
import torch.nn as nn
from labml import experiment
from labml.configs import option
from labml_nn.activations.fta.experiment import Configs
```
### Create an experiment
```
experiment.create(name="fta", writers={"screen", "comet"} if use_comet else {'screen'})
```
### Configurations
```
conf = Configs()
```
Set the experiment configurations by assigning a dictionary that overrides the defaults
```
experiment.configs(conf, {
'tokenizer': 'character',
'prompt_separator': '',
'prompt': 'It is ',
'text': 'tiny_shakespeare',
'seq_len': 256,
'epochs': 32,
'batch_size': 16,
'inner_iterations': 10,
'optimizer.optimizer': 'Adam',
'optimizer.learning_rate': 3e-4,
})
```
Set PyTorch models for loading and saving
```
experiment.add_pytorch_models({'model': conf.model})
```
### Start the experiment and run the training loop.
```
# Start the experiment
with experiment.start():
conf.run()
```

[](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/NER_IT.ipynb)
# **Detect entities in Italian text**
## 1. Colab Setup
```
# Install Java
! apt-get update -qq
! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
! java -version
# Install pyspark
! pip install --ignore-installed -q pyspark==2.4.4
# Install SparkNLP
! pip install --ignore-installed spark-nlp
```
## 2. Start the Spark session
```
import os
import json
os.environ['JAVA_HOME'] = "/usr/lib/jvm/java-8-openjdk-amd64"
import pandas as pd
import numpy as np
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
from sparknlp.annotator import *
from sparknlp.base import *
import sparknlp
from sparknlp.pretrained import PretrainedPipeline
spark = sparknlp.start()
```
## 3. Select the DL model
```
# If you change the model, re-run all the cells below.
# Applicable models: wikiner_840B_300
MODEL_NAME = "wikiner_840B_300"
```
## 4. Some sample examples
```
# Enter examples to be transformed as strings in this list
text_list = [
"""William Henry Gates III (nato il 28 ottobre 1955) è un magnate d'affari americano, sviluppatore di software, investitore e filantropo. È noto soprattutto come co-fondatore di Microsoft Corporation. Durante la sua carriera in Microsoft, Gates ha ricoperto le posizioni di presidente, amministratore delegato (CEO), presidente e capo architetto del software, pur essendo il principale azionista individuale fino a maggio 2014. È uno dei più noti imprenditori e pionieri del rivoluzione dei microcomputer degli anni '70 e '80. Nato e cresciuto a Seattle, Washington, Gates ha co-fondato Microsoft con l'amico d'infanzia Paul Allen nel 1975, ad Albuquerque, nel New Mexico; divenne la più grande azienda di software per personal computer al mondo. Gates ha guidato l'azienda come presidente e CEO fino a quando non si è dimesso da CEO nel gennaio 2000, ma è rimasto presidente e divenne capo architetto del software. Alla fine degli anni '90, Gates era stato criticato per le sue tattiche commerciali, che erano state considerate anticoncorrenziali. Questa opinione è stata confermata da numerose sentenze giudiziarie. Nel giugno 2006, Gates ha annunciato che sarebbe passato a un ruolo part-time presso Microsoft e un lavoro a tempo pieno presso la Bill & Melinda Gates Foundation, la fondazione di beneficenza privata che lui e sua moglie, Melinda Gates, hanno fondato nel 2000. [ 9] A poco a poco trasferì i suoi doveri a Ray Ozzie e Craig Mundie. Si è dimesso da presidente di Microsoft nel febbraio 2014 e ha assunto un nuovo incarico come consulente tecnologico per supportare il neo nominato CEO Satya Nadella.""",
"""La Gioconda è un dipinto ad olio del XVI secolo creato da Leonardo. Si tiene al Louvre di Parigi."""
]
```
## 5. Define Spark NLP pipeline
```
document_assembler = DocumentAssembler() \
.setInputCol('text') \
.setOutputCol('document')
tokenizer = Tokenizer() \
.setInputCols(['document']) \
.setOutputCol('token')
# The wikiner_840B_300 is trained with glove_840B_300, so the embeddings in the
# pipeline should match. Same applies for the other available models.
if MODEL_NAME == "wikiner_840B_300":
embeddings = WordEmbeddingsModel.pretrained('glove_840B_300', lang='xx') \
.setInputCols(['document', 'token']) \
.setOutputCol('embeddings')
elif MODEL_NAME == "wikiner_6B_300":
embeddings = WordEmbeddingsModel.pretrained('glove_6B_300', lang='xx') \
.setInputCols(['document', 'token']) \
.setOutputCol('embeddings')
elif MODEL_NAME == "wikiner_6B_100":
embeddings = WordEmbeddingsModel.pretrained('glove_100d') \
.setInputCols(['document', 'token']) \
.setOutputCol('embeddings')
ner_model = NerDLModel.pretrained(MODEL_NAME, 'it') \
.setInputCols(['document', 'token', 'embeddings']) \
.setOutputCol('ner')
ner_converter = NerConverter() \
.setInputCols(['document', 'token', 'ner']) \
.setOutputCol('ner_chunk')
nlp_pipeline = Pipeline(stages=[
document_assembler,
tokenizer,
embeddings,
ner_model,
ner_converter
])
```
## 6. Run the pipeline
```
empty_df = spark.createDataFrame([['']]).toDF('text')
pipeline_model = nlp_pipeline.fit(empty_df)
df = spark.createDataFrame(pd.DataFrame({'text': text_list}))
result = pipeline_model.transform(df)
```
## 7. Visualize results
```
result.select(
F.explode(
F.arrays_zip('ner_chunk.result', 'ner_chunk.metadata')
).alias("cols")
).select(
F.expr("cols['0']").alias('chunk'),
F.expr("cols['1']['entity']").alias('ner_label')
).show(truncate=False)
```
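For quick experiments on a single string, Spark NLP also provides a LightPipeline wrapper. The sketch below (reusing the `pipeline_model` fitted above) annotates one sentence without building a Spark DataFrame:
```
from sparknlp.base import LightPipeline

# Annotate a single string directly, without creating a DataFrame
light_pipeline = LightPipeline(pipeline_model)
annotations = light_pipeline.annotate(
    "La Gioconda è un dipinto ad olio del XVI secolo creato da Leonardo."
)
print(annotations['ner_chunk'])
```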
# livelossplot example: PyTorch torchbearer
[torchbearer](https://github.com/ecs-vlc/torchbearer) is a model fitting library for PyTorch. As of version 0.2.6 it includes native support for `livelossplot`, through the [LiveLossPlot callback](https://torchbearer.readthedocs.io/en/latest/code/callbacks.html#torchbearer.callbacks.live_loss_plot.LiveLossPlot). In this notebook, we'll train a simple CNN on CIFAR10 with torchbearer and livelossplot.
<a href="https://colab.research.google.com/github/stared/livelossplot/blob/master/examples/torchbearer.ipynb" target="_parent">
<img src="https://colab.research.google.com/assets/colab-badge.svg"/>
</a>
```
!pip install torchbearer --quiet
%matplotlib inline
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
from torchvision import transforms
import torchbearer
from torchbearer.cv_utils import DatasetValidationSplitter
from torchbearer import Trial
from torchbearer.callbacks import LiveLossPlot
```
## Data
We'll use CIFAR10 for this demo, with the usual normalisations
```
BATCH_SIZE = 256
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
dataset = torchvision.datasets.CIFAR10(root='./tmp/cifar', train=True, download=True,
transform=transforms.Compose([transforms.ToTensor(), normalize]))
splitter = DatasetValidationSplitter(len(dataset), 0.1)
trainset = splitter.get_train_dataset(dataset)
valset = splitter.get_val_dataset(dataset)
traingen = torch.utils.data.DataLoader(trainset, pin_memory=True, batch_size=BATCH_SIZE, shuffle=True, num_workers=10)
valgen = torch.utils.data.DataLoader(valset, pin_memory=True, batch_size=BATCH_SIZE, shuffle=True, num_workers=10)
testset = torchvision.datasets.CIFAR10(root='./tmp/cifar', train=False, download=True,
transform=transforms.Compose([transforms.ToTensor(), normalize]))
testgen = torch.utils.data.DataLoader(testset, pin_memory=True, batch_size=BATCH_SIZE, shuffle=False, num_workers=10)
```
## Model
A simple, 3 layer CNN should do the trick, since we're using batch norm we won't worry about weight initialisation
```
class SimpleModel(nn.Module):
def __init__(self):
super(SimpleModel, self).__init__()
self.convs = nn.Sequential(
nn.Conv2d(3, 16, stride=2, kernel_size=3),
nn.BatchNorm2d(16),
nn.ReLU(),
nn.Conv2d(16, 32, stride=2, kernel_size=3),
nn.BatchNorm2d(32),
nn.ReLU(),
nn.Conv2d(32, 64, stride=2, kernel_size=3),
nn.BatchNorm2d(64),
nn.ReLU()
)
self.classifier = nn.Linear(576, 10)
def forward(self, x):
x = self.convs(x)
x = x.view(-1, 576)
return self.classifier(x)
model = SimpleModel()
```
## Running
Now we're ready to run. We use one trial here for training and validation, and a second one for evaluation.
```
optimizer = optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=0.001)
loss = nn.CrossEntropyLoss()
trial = Trial(
model, optimizer, loss,
metrics=['acc', 'loss'],
callbacks=[LiveLossPlot()]).to('cuda')
trial.with_generators(train_generator=traingen, val_generator=valgen)
history = trial.run(verbose=0, epochs=25)
trial = Trial(model, metrics=['acc', 'loss', 'top_5_acc']).with_test_generator(testgen).to('cuda')
_ = trial.evaluate(data_key=torchbearer.TEST_DATA)
```
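If you want to keep the trained weights for later use, a minimal sketch with plain PyTorch serialization (the file name is just an example) is:
```
# Save the trained weights, then reload them into a fresh model instance
torch.save(model.state_dict(), 'simple_cifar10_cnn.pt')

restored = SimpleModel()
restored.load_state_dict(torch.load('simple_cifar10_cnn.pt'))
restored.eval()
```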
# TorchDyn Quickstart
**TorchDyn is the toolkit for continuous models in PyTorch. Play with state-of-the-art architectures or use its powerful libraries to create your own.**
Central to the `torchdyn` approach are continuous neural networks, where *width*, *depth* (or both) are taken to their infinite limit. On the optimization front, we consider continuous "data-stream" regimes and gradient flow methods, where the dataset represents a time-evolving signal processed by the neural network to adapt its parameters.
By providing a centralized, easy-to-access collection of model templates, tutorial and application notebooks, we hope to speed-up research in this area and ultimately contribute to turning neural differential equations into an effective tool for control, system identification and common machine learning tasks.
```
import sys ; sys.path.append('../')
from torchdyn.models import *
from torchdyn.datasets import *
from torchdyn import *
```
## Generate data from a static toy dataset
We’ll be generating data from toy datasets. In torchdyn, we provide a wide range of datasets often used to benchmark and understand Neural ODEs. Here we will use the classic moons dataset and train a Neural ODE for binary classification.
```
d = ToyDataset()
X, yn = d.generate(n_samples=512, noise=1e-1, dataset_type='moons')
import matplotlib.pyplot as plt
colors = ['orange', 'blue']
fig = plt.figure(figsize=(3,3))
ax = fig.add_subplot(111)
for i in range(len(X)):
ax.scatter(X[i,0], X[i,1], s=1, color=colors[yn[i].int()])
```
Generated data can be easily loaded in the dataloader with standard `PyTorch` calls
```
import torch
import torch.utils.data as data
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
X_train = torch.Tensor(X).to(device)
y_train = torch.LongTensor(yn.long()).to(device)
train = data.TensorDataset(X_train, y_train)
trainloader = data.DataLoader(train, batch_size=len(X), shuffle=True)
```
We utilize [Pytorch Lightning](https://github.com/PyTorchLightning/pytorch-lightning) to handle training loops, logging and general bookkeeping. This allows `torchdyn` and Neural Differential Equations to have access to modern best practices for training and experiment reproducibility.
In particular, we combine modular `torchdyn` models with `LightningModules` via a `Learner` class:
```
import torch.nn as nn
import pytorch_lightning as pl
class Learner(pl.LightningModule):
def __init__(self, model:nn.Module):
super().__init__()
self.model = model
def forward(self, x):
return self.model(x)
def training_step(self, batch, batch_idx):
x, y = batch
y_hat = self.model(x)
loss = nn.CrossEntropyLoss()(y_hat, y)
logs = {'train_loss': loss}
return {'loss': loss, 'log': logs}
def configure_optimizers(self):
return torch.optim.Adam(self.model.parameters(), lr=0.01)
def train_dataloader(self):
return trainloader
```
## Define a Neural ODE
Analogously to most forward neural models we want to realize a map
$$
x \mapsto \hat y
$$
where $\hat y$ becomes the best approximation of a true output $y$ given an input $x$.
In torchdyn you can define very simple Neural ODE models of the form
$$ \left\{
\begin{aligned}
\dot{z}(s) &= f(z(s), \theta)\\
z(0) &= x\\
\hat y & = z(1)
\end{aligned}
\right. \quad s\in[0,1]
$$
by just specifying a neural network $f$ and giving some simple settings.
**Note:** This Neural ODE model is of the *depth-invariant* type, as neither $f$ explicitly depends on $s$ nor are the parameters $\theta$ depth-varying. This model, together with its *depth-variant* counterpart (where $s$ is concatenated into the vector field input), was first proposed and implemented by [[Chen T. Q. et al, 2018]](https://arxiv.org/abs/1806.07366).
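For intuition, here is a minimal sketch of what a *depth-variant* vector field can look like in plain PyTorch: the depth $s$ is simply appended to the state before the first layer. This is a conceptual illustration only, not torchdyn's own API, and the class name `DepthVariantField` is just a placeholder:
```
import torch
import torch.nn as nn

class DepthVariantField(nn.Module):
    """Conceptual depth-variant vector field f(s, z): the depth s is appended to the state z."""
    def __init__(self, dim=2, hidden=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden),  # +1 input column for the scalar depth s
            nn.Tanh(),
            nn.Linear(hidden, dim)
        )

    def forward(self, s, z):
        # broadcast the scalar depth to one column per sample and concatenate it to the state
        s_col = torch.ones(z.shape[0], 1, device=z.device) * s
        return self.net(torch.cat([z, s_col], dim=1))
```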
### Define the vector field (DEFunc)
The first step is to define any PyTorch `torch.nn.Module`. This takes the role of the Neural ODE vector field $f(h,\theta)$
```
f = nn.Sequential(
nn.Linear(2, 16),
nn.Tanh(),
nn.Linear(16, 2)
)
```
In this case we chose $f$ to be a simple MLP with one hidden layer and $\tanh$ activation.
### Define the NeuralDE
The final step to define a Neural ODE is to instantiate the torchdyn's class `NeuralDE` passing some customization arguments and `f` itself.
In this case we specify:
* we compute backward gradients with the `'adjoint'` method.
* we will use the `'dopri5'` (Dormand-Prince) ODE solver from `torchdiffeq`;
```
model = NeuralDE(f, sensitivity='adjoint', solver='dopri5').to(device)
```
## Train the Model
```
learn = Learner(model)
trainer = pl.Trainer(min_epochs=200, max_epochs=300)
trainer.fit(learn)
```
With the method `trajectory` of `NeuralDE` objects you can quickly evaluate the entire trajectory of each data point in `X_train` on an interval `s_span`
```
s_span = torch.linspace(0,1,100)
trajectory = model.trajectory(X_train, s_span).detach().cpu()
```
### Plot the Training Results
We can first plot the trajectories of the data points in the depth domain $s$
```
color=['orange', 'blue']
fig = plt.figure(figsize=(10,2))
ax0 = fig.add_subplot(121)
ax1 = fig.add_subplot(122)
for i in range(500):
ax0.plot(s_span, trajectory[:,i,0], color=color[int(yn[i])], alpha=.1);
ax1.plot(s_span, trajectory[:,i,1], color=color[int(yn[i])], alpha=.1);
ax0.set_xlabel(r"$s$ [Depth]") ; ax0.set_ylabel(r"$h_0(s)$")
ax1.set_xlabel(r"$s$ [Depth]") ; ax1.set_ylabel(r"$z_1(s)$")
ax0.set_title("Dimension 0") ; ax1.set_title("Dimension 1")
```
Then we can look at the trajectory in the *state-space*.
As you can see, the Neural ODE steers the data-points into regions of null loss with a continuous flow in the depth domain. Finally, we can also plot the learned vector field $f$
```
# evaluate vector field
n_pts = 50
x = torch.linspace(trajectory[:,:,0].min(), trajectory[:,:,0].max(), n_pts)
y = torch.linspace(trajectory[:,:,1].min(), trajectory[:,:,1].max(), n_pts)
X, Y = torch.meshgrid(x, y) ; z = torch.cat([X.reshape(-1,1), Y.reshape(-1,1)], 1)
f = model.defunc(0,z.to(device)).cpu().detach()
fx, fy = f[:,0], f[:,1] ; fx, fy = fx.reshape(n_pts , n_pts), fy.reshape(n_pts, n_pts)
# plot vector field and its intensity
fig = plt.figure(figsize=(4, 4)) ; ax = fig.add_subplot(111)
ax.streamplot(X.numpy().T, Y.numpy().T, fx.numpy().T, fy.numpy().T, color='black')
ax.contourf(X.T, Y.T, torch.sqrt(fx.T**2+fy.T**2), cmap='RdYlBu')
```
**Sweet! You trained your first Neural ODE! Now go on and learn more advanced models with the next tutorials**
(concepts:errors)=
# Error handling in ``configpile``
The goal behind ``configpile`` error reporting is to provide helpful error messages to the user. In
particular:
- ``configpile`` does not rely on Python exceptions, rather implements its own error class
- The error class is designed to be used in a
[result type](https://en.wikipedia.org/wiki/Result_type) that follows existing Python usage
patterns. (To be pedantic, it is not monadic.)
- ``configpile`` accumulates errors instead of stopping at the first error
- Instead of relying on stack traces to convey contextual information, ``configpile`` errors
store context information that is manually added when results are processed.
## Errors
The base error type is `Err`, which contains either a single error or a sequence of errors.
A single error is constructed through the {meth}`~configpile.userr.Err.make` static method.
Errors can be pretty-printed. If the [Rich](https://github.com/Textualize/rich) library is available, some light formatting will be applied.
```
from configpile import Err
e1 = Err.make("First error", context_info = 1, other_info = "bla")
e1.pretty_print()
```
Errors can be collected in a single {class}`~configpile.userr.Err` instance, and pretty-printing will collect errors occurring in the same context.
```
e1 = Err.make("First error", context_info = 1, other_info = "blub")
e2 = Err.make("Second error", context_info = 1)
e12 = Err.collect1(e1, e2)
e12.pretty_print()
```
A sequence of single errors can always be recovered:
```
e12.errors()
```
## Results
The error type is designed to be used in functions that either return a valid value, or an error.
Such functions return a result, or a {data}`configpile.userr.Res` type.
Note that the {data}`configpile.userr.Res` type is parameterized by the valid value type:
in the example below, it is {class}`int`.
An example of such a function would be:
```
from configpile.userr import Res
def parse_int(s: str) -> Res[int]:
try:
return int(s)
except ValueError as e:
return Err.make(str(e))
```
and would give the following results:
```
parse_int("invalid")
parse_int(1234)
```
Results can be processed further. For example, the function that squares the value contained in a
result, while leaving any error untouched, can be written:
```
def square_result(res: Res[int]) -> Res[int]:
if isinstance(res, Err):
return res
return res*res
```
... or, using the {func}`~configpile.userr.map` helper:
```
from configpile import userr
def square_result1(res: Res[int]) -> Res[int]:
return userr.map(lambda x: x*x, res)
```
and we have, unsurprisingly:
```
square_result(parse_int("invalid"))
square_result1(parse_int(4))
```
The {func}`~configpile.userr.flat_map` function is useful to chain processing where each step can fail.
```
import math
def square_root(x: int) -> Res[float]:
if x < 0:
return Err.make(f"Cannot take square root of negative number {x}")
else:
return math.sqrt(float(x))
userr.flat_map(square_root, parse_int("valid"))
userr.flat_map(square_root, parse_int("2"))
userr.flat_map(square_root, parse_int("-2"))
```
## Combining results and errors
Finally, the {mod}`~configpile.userr` module offers ways to combine results.
For example, if one parses several integers, one can collect the results in a tuple using the
{func}`configpile.userr.collect` function.
```
userr.collect(parse_int(2), parse_int(3))
userr.collect(parse_int(3), parse_int("invalid"))
userr.collect(parse_int("invalid"), parse_int("invalid"))
```
See also {func}`configpile.userr.collect_seq` when dealing with sequences.
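As a rough sketch of the sequence variant (the exact signature here is an assumption based on the name, not taken from the API reference):
```
# Hypothetical usage of collect_seq: a sequence of results in, a single result of a sequence out
results = [parse_int(s) for s in ["1", "2", "3"]]
userr.collect_seq(results)
```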
Errors can be collected and combined too. The {meth}`configpile.userr.Err.collect1` method expects
at least one argument and returns an {class}`~configpile.userr.Err`, while
{meth}`configpile.userr.Err.collect` can deal with no arguments being passed, or with optional
arguments.
In particular, optional errors, of type `Optional[Err]`, are great for validation: a {data}`None`
value indicates no error, while an error indicates that one or several problems are present.
```
from typing import Optional, Sequence
a = -2
b = 1
check_a: Optional[Err] = Err.check(a > 0, "a must be positive")
check_b: Optional[Err] = Err.check(b > 0, "b must be positive")
Err.collect(check_a, check_b)
```
```
from IPython.display import clear_output, display, HTML, Javascript
from typing import List
import json
def handle_input(match: List[int]) -> str:
for jdx in range(len(match)):
if match[jdx] == 0:
match[jdx] = -1
return str(jdx)
return str(-1)
handler_fn = handle_input
%%javascript
class Match {
constructor(identifier) {
this.grid = Array(9).fill(0)
this.gridDom = this.grid.map((_, idx) => {
const cell = document.createElement('div')
cell.className = 'ttt-cell'
cell.innerText = '-'
cell.onclick = () => this.handleClick(idx)
return cell
})
this.container = document.getElementById(identifier)
for (const cell of this.gridDom) {
this.container.appendChild(cell)
}
}
get side() {
return Math.sqrt(this.grid.length)
}
reset = () => {
for (const idx in this.grid) {
this.grid[idx] = 0
this.gridDom[idx].innerText = '-'
}
}
restartGame = () => {
alert('Game over!')
this.reset()
}
handleClick = (idx) => {
if (this.grid[idx] !== 0) return alert('Cell already used!')
this.grid[idx] = 1
this.gridDom[idx].innerText = 'X'
const over = this.checkWin()
if (over) return
executePython(`handler_fn(${JSON.stringify(this.grid)})`).then((jdx) => {
if (jdx === '-1') return this.restartGame()
this.grid[jdx] = -1
this.gridDom[jdx].innerText = 'O'
return new Promise((resolve) => setTimeout(resolve, 100))
}).then(() => {
this.checkWin()
})
}
checkGroup = (group) => {
const sum = group.reduce((a, v) => a + v, 0)
return Math.floor(Math.abs(sum) / group.length) * Math.sign(sum)
}
checkWin = () => {
// check rows
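    // Note: this sketch only scores complete rows; columns and diagonals are not checked here.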
for (let idx = 0; idx < this.side; idx++) {
const row = this.grid.slice(idx * this.side, idx * this.side + this.side)
const winner = this.checkGroup(row)
if (Math.abs(winner) === 0) continue
alert(`${winner === 1 ? 'X' : 'O'} is the winner!`)
this.restartGame()
return true
}
return false
}
}
window.Match = Match
function executePython(python) {
return new Promise(resolve => {
const cb = {
iopub: {
output: data => resolve(data.content.text.trim())
}
}
Jupyter.notebook.kernel.execute(`print(${python})`, cb)
})
}
def play_game(handler=handle_input):
global handler_fn
handler_fn = handler
display(HTML("""
<style>
#grid {
display: flex;
flex-wrap: wrap;
flex-direction: row;
}
.ttt-cell {
width: 33%;
}
</style>
"""))
display(HTML(f"<div id='grid'></div>"))
    display(Javascript("new window.Match('grid')"))
```
# Naive Bayes
```
# The Naive Bayes machine learning algorithm works on the conditional probability of independent variables in a dataset
# Pros:
#-> It is easy and fast to predict the class of a test data set. It also performs well in multi-class prediction.
#-> When the assumption of independence holds, a Naive Bayes classifier performs better
#    compared to other models like logistic regression, and you need less training data.
#-> It performs well with categorical input variables compared to numerical variable(s).
#    For numerical variables, a normal distribution is assumed (bell curve, which is a strong assumption).
# Cons:
#-> If a categorical variable has a category (in the test data set) which was not observed in the training data set,
#    then the model will assign a 0 (zero) probability and will be unable to make a prediction.
#    This is often known as “Zero Frequency”. To solve this, we can use a smoothing technique.
#    One of the simplest smoothing techniques is called Laplace estimation.
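#    As a quick illustration (notation mine): with add-one (Laplace) smoothing, for a feature value x_i and class y,
#      P(x_i | y) = (count(x_i, y) + 1) / (count(y) + K)
#    where K is the number of distinct values of the feature, so unseen categories never receive a zero probability.
#    In scikit-learn this corresponds to the alpha parameter of MultinomialNB/BernoulliNB (alpha=1.0 by default).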
#-> On the other side, naive Bayes is also known as a bad estimator,
#    so the probability outputs from predict_proba are not to be taken too seriously.
#-> Another limitation of Naive Bayes is the assumption of independent predictors.
#    In real life, it is almost impossible to get a set of predictors which are completely independent.
# 4 applications of Naive Bayes algorithms
#-> Real-time prediction: Naive Bayes is an eager learning classifier and it is fast.
#    Thus, it can be used for making predictions in real time.
#-> Multi-class prediction: This algorithm is also well known for its multi-class prediction feature.
#    Here we can predict the probability of multiple classes of the target variable.
#-> Text classification / spam filtering / sentiment analysis: Naive Bayes classifiers are mostly used in text
#    classification (due to better results in multi-class problems and the independence rule) and have a
#    higher success rate compared to other algorithms. As a result, they are widely
#    used in spam filtering (identifying spam e-mail) and sentiment analysis (in social media analysis,
#    to identify positive and negative customer sentiments).
#-> Recommendation systems: A Naive Bayes classifier and collaborative filtering together
#    build a recommendation system that uses machine learning and data mining techniques to filter unseen
#    information and predict whether a user would like a given resource or not.
# There are three types of Naive Bayes models in the scikit-learn library:
#-> Gaussian: It is used in classification and it assumes that features follow a normal distribution.
#-> Multinomial: It is used for discrete counts.
#    For example, let’s say we have a text classification problem.
#    Instead of the Bernoulli trial “word occurs in the document”, we go one step further and
#    “count how often the word occurs in the document”; you can think of
#    it as “the number of times outcome number x_i is observed over the n trials”.
#-> Bernoulli: The binomial model is useful if your feature vectors are binary (i.e. zeros and ones).
#    One application would be text classification with a ‘bag of words’ model where
#    the 1s & 0s are “word occurs in the document” and “word does not occur in the document” respectively.
import pandas as pd
from sklearn.datasets import load_wine
wine = load_wine()
dir(wine)
wine.feature_names
df = pd.DataFrame(wine.data,columns = wine.feature_names) # input variables
df.head()
target = wine.target #output variable
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(df,target,test_size = 0.3)
len(X_train)
len(X_test)
df.info()
```
# Assuming Gaussian distribution
```
# Gaussian distribution method is used where all the features or variables are continuous
from sklearn.naive_bayes import GaussianNB
model = GaussianNB()
model.fit(X_train,y_train)
model.score(X_test,y_test)
model.predict(X_test)
y_test
```
# Assuming Multinomial distribution
```
# Multinomial NB is used for discrete count features, i.e. how many times an event (such as a word) has occurred
from sklearn.naive_bayes import MultinomialNB
model2 = MultinomialNB()
model2.fit(X_train,y_train)
model2.score(X_test,y_test)
# We see that gaussian classifier works better than multinomial classifier
```
# Homework– Churn Prediction
**02/19/2019**
**Mengheng Xue**
### Data Preprocessing
```
# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# Importing the dataset
dataset = pd.read_csv('Week3_Mocked_Customer_Data_With_Missing.csv')
dataset.head(5)
# Plot histograms of “age” group by “churn”
dataset['age'].hist(by=dataset['churn_flag'])
# separate independent and dependent variables
X = dataset.iloc[:, 2:-1].values
y = dataset.iloc[:, 1].values
# Taking care of missing data
from sklearn.impute import SimpleImputer
imputer = SimpleImputer(missing_values = np.nan, strategy = 'most_frequent')
imputer.fit(X[:, :])
X[:, :] = imputer.transform(X[:, :])
# t-test of “tot_bill” regarding “churn”
from scipy.stats import ttest_ind
tot_bill = X[:, 5]
t_test, p_value = ttest_ind(tot_bill, y)
print('t-test index = {}\np-value = {}'.format(t_test, p_value))
# Encoding categorical data
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
labelencoder_X_1 = LabelEncoder()
X[:, 0] = labelencoder_X_1.fit_transform(X[:, 0]) # gender
labelencoder_X_2 = LabelEncoder()
X[:, 1] = labelencoder_X_2.fit_transform(X[:, 1]) # marrital
labelencoder_X_3 = LabelEncoder()
X[:, 8] = labelencoder_X_3.fit_transform(X[:, 8].astype(str)) # fortune
onehotencoder = OneHotEncoder(categorical_features = [0, 1, 8])
X = onehotencoder.fit_transform(X).toarray()
# Avoid dummy variable trap
X = np.delete(X, 2, axis=1) # delete dummy variable for marrital
X = np.delete(X, 4, axis=1) # delete dummy variable for fortune
# Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)
# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc_X = StandardScaler()
X_train = sc_X.fit_transform(X_train)
X_test = sc_X.transform(X_test)
# Applying PCA
from sklearn.decomposition import PCA
pca = PCA(n_components = 6) # components with variance larger than 0.05
X_train = pca.fit_transform(X_train)
X_test = pca.transform(X_test)
explained_variance = pca.explained_variance_ratio_
print(explained_variance.round(4))
```
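As an optional sanity check on the choice of `n_components = 6`, we can also look at the cumulative explained variance of the fitted `pca`:
```
# Cumulative explained variance of the fitted PCA; the first 6 components should cover most of it
print(np.cumsum(pca.explained_variance_ratio_).round(4))
```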
### Training and Testing the Logistic Model
```
# Fitting Logistic Regression to the Training set
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(random_state = 0)
classifier.fit(X_train, y_train)
# Predicting the Test set results
y_pred = classifier.predict(X_test)
```
### Evaluation of LR Model
From the confusion matrix, we can see that most of the test-set labels are predicted correctly (TP and TN), which means our model performs well.
```
# Making the Confusion Matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
print("confusion matrix: ")
print(cm)
# Compute precision score
from sklearn.metrics import precision_score
precision = precision_score(y_test, y_pred, pos_label=1)
print("precision = %0.3f" % precision)
# Compute recall score
from sklearn.metrics import recall_score
recall = recall_score(y_test, y_pred, pos_label=1)
print("recall = %0.3f" % recall)
# Compute F1 score
from sklearn.metrics import f1_score
F1 = f1_score(y_test, y_pred, pos_label=1)
print("F1 = %0.3F" % F1)
```
### Precision-Recall Curve and ROC Curve
We can see that the AUC of both the Precision-Recall curve and the ROC curve is larger than 0.9, which shows that our logistic model performs well.
```
# Compute Precision-Recall and plot curve
from sklearn.metrics import precision_recall_curve, auc  # auc is needed here before it is imported again below
precision, recall, thresholds = precision_recall_curve(y_test, y_pred, pos_label=1)
area = auc(recall, precision)
print("Area Under Curve: %0.2f" % area)
plt.clf()
plt.plot(recall, precision, 'b-', label='Precision-Recall curve')
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.ylim([0.0, 1.05])
plt.xlim([0.0, 1.0])
plt.title('Precision-Recall curve: AUC=%0.2f' % area)
plt.legend(loc="lower left")
plt.show()
# Compute ROC curve AUC
from sklearn.metrics import roc_curve
from sklearn.metrics import auc
fpr, tpr, threshold = roc_curve(y_test, y_pred, pos_label=1)
roc_auc = auc(fpr, tpr)
print('ROC AUC = {}'.format(roc_auc))
# Plot the ROC curve
plt.title('Receiver Operating Characteristic')
plt.plot(fpr, tpr, 'b', label = 'AUC = %0.2f' % roc_auc)
plt.legend(loc = 'lower right')
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
```
## Summary
### Comparsion Using different data transforming
We use different data transformation techniques to improve prediction performance. The outcomes are displayed in the following table. We can see that:
+ Generally, our logistic model performs very well, obtaining both precision and recall larger than 0.9 on the test set.
+ We use normalization and then PCA to try to improve the performance. However, the results do not change much. We think the raw features are already well structured and obtain excellent results, so there is limited room to improve the performance through data transformation.
+ We only discuss the performance of our model in terms of prediction accuracy here. We think data transformation matters more for the time cost of training, since it makes gradient descent converge faster than using the raw data alone. Here, however, since our data set is relatively small and well structured (values on comparable scales, no extreme outliers), the speed-up from applying data transformation may not be obvious.
| data transform | Precision | Recall | F1 score |
| --- | --- | --- | --- |
| raw data | 0.932 | 0.917 | 0.925 |
| normalize | 0.934 | 0.922 | 0.928 |
| PCA | 0.936 | 0.921 | 0.928 |
```
# ignore this
%load_ext music21.ipython21
```
# What is `music21`?
`Music21` is a Python-based toolkit for computer-aided musicology.
People use `music21` to answer questions from musicology using computers, to study large datasets of music, to generate musical examples, to teach fundamentals of music theory, to edit musical notation, to study music and the brain, and to compose music (both algorithmically and directly).
One of `music21`'s mottos is "Listen Faster." With the toolkit you should be able to find interesting moments and get a sense of the overall profile of a piece or a repertory of pieces. We hope that with the computer you'll have more time for listening and playing for enjoyment and use less of your time listening for work.
The system has been around since 2008 and is constantly growing and expanding. The approaches and traditions in `music21` have been used in many previous software systems. See :ref:`about` for information on the authors and background of the project.
The *21* in `music21` refers to its origins as a project nurtured at MIT. At MIT all courses have numbers, and music, along with some other humanities departments, is numbered `21`. The music departments of MIT, along with Harvard, Smith, and Mount Holyoke Colleges, helped bring this toolkit from its earliest roots to a mature system.
## Finding solutions in a hurry
`Music21` adds a collection of specialized tools and objects to the general-purpose and easy-to-understand "Python" programming language. Install `music21`, type `python3` (or, better, `ipython`), and load it by typing:
```
from music21 import *
```
...and thousands of musical tools become available to you. For instance, want to see a note on the screen? Type these lines:
```
n = note.Note("D#3")
n.duration.type = 'half'
n.show()
```
Need a whole line of notes? Even easier:
```
littleMelody = converter.parse("tinynotation: 3/4 c4 d8 f g16 a g f#")
littleMelody.show()
```
Want to hear the melody? It's just as easy! (Please give it a second or two after hitting play for the piano sounds to load):
```
littleMelody.show('midi')
```
Want to view the opening tone-row of Schoenberg's Fourth String quartet as a matrix?
```
print(serial.rowToMatrix([2, 1, 9, 10, 5, 3, 4, 0, 8, 7, 6, 11]) )
```
Get a quick graph showing how common various pitches are in a fourteenth century piece:
```
dicant = corpus.parse('trecento/Fava_Dicant_nunc_iudei')
dicant.plot('histogram', 'pitch')
dicant.show()
```
This example, and many below, come from the `music21` built-in corpus of thousands of pieces that ships with the system to help you get started right from the beginning. We believe in "Batteries Included" as a core principle. So, for instance, every Bach chorale is included, so that you can do things like add the German note name to every note in Bach chorale BWV 295:
```
bwv295 = corpus.parse('bach/bwv295')
bwv295 = bwv295.measures(0,5) #_DOCS_HIDE
for thisNote in bwv295.recurse().notes:
thisNote.addLyric(thisNote.pitch.german)
bwv295.show()
```
Prepare an incipit index (thematic catalog) of every Bach chorale that is in 3/4: (we'll just look at the first 25 here)
```
catalog = stream.Opus()
for work in corpus.chorales.Iterator(1, 26):
firstTimeSignature = work.parts[0].measure(1).getTimeSignatures()[0]
if firstTimeSignature.ratioString == '3/4':
incipit = work.measures(0,2)
catalog.insert(0, incipit.implode())
catalog.show()
```
Advanced analysis tools are included. Want to know how unstable the rhythmic profile of a piece is? Use Ani Patel's nPVI function on it:
```
s = corpus.parse('AlhambraReel')
analysis.patel.nPVI(s.flatten())
```
## Learning `music21`
`Music21` can be simple to use but it is also extremely powerful. Like all powerful software (Photoshop compared to MS Paint, AutoCAD, Excel), there's a bit of a learning curve, especially for people who haven't programmed before.
To use `music21`, some familiarity with the "Python" programming language is needed. Python is widely regarded as one of the easiest languages to learn and is often taught as a first programming language. You don't need to be a seasoned programmer; just a little bit of Python and you will be able to get started and explore music in new ways with `music21`.
Probably the hardest thing about `music21` is getting it installed and writing the first line of code. The installation instructions at :ref:`Installing music21 <usersGuide_01_installing>` will help you get started, and then we can continue with the rest of the User's Guide.
If you need help at any time, there are always helpful `music21` fanatics at the mailing list, https://groups.google.com/g/music21list/.
Continue on to :ref:`Installing music21 <usersGuide_01_installing>` or learn more about :ref:`who made the system and who supported it <about>`.
```
import pandas as pd
import numpy as np
from sklearn import tree
from sklearn.neighbors import KNeighborsClassifier, KNeighborsRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from warnings import filterwarnings
from DatasetsEvaluator import DatasetsEvaluator as de
filterwarnings('ignore')
```
## Example finding a single file
```
datasets_tester = de.DatasetsTester()
matching_datasets = datasets_tester.find_by_name(['pol'], "classification")
matching_datasets
```
## Example collecting all datasets meeting some specified criteria
```
matching_datasets = datasets_tester.find_datasets(
problem_type = "classification",
min_num_classes = 2,
max_num_classes = 20,
min_num_minority_class = 5,
max_num_minority_class = np.inf,
min_num_features = 0,
max_num_features = np.inf,
min_num_instances = 500,
max_num_instances = 5_000,
min_num_numeric_features = 2,
max_num_numeric_features = 50,
min_num_categorical_features=0,
max_num_categorical_features=50)
print("Number matching datasets found:", len(matching_datasets))
display(matching_datasets.head())
```
## Example collecting the datasets specified above and running classification tests
```
# After viewing the matching datasets, it's possible to collect all, or some subset of these. The following
# code collects 5 matching datasets.
# Note: some datasets may have errors loading.
# Note: As this uses the default False for keep_duplicated_names, some datasets may be removed.
datasets_tester.collect_data(max_num_datasets_used=5, method_pick_sets='pick_first', preview_data=False)
# The following code undoes the previous collection and collects all matching datasets.
# This is currently commented out, as it takes longer to execute.
# datasets_tester.collect_data(max_num_datasets_used=-1, preview_data=False)
dt_1 = tree.DecisionTreeClassifier(min_samples_split=50, max_depth=6, random_state=0)
dt_2 = tree.DecisionTreeClassifier(min_samples_split=25, max_depth=5, random_state=0)
knn_1 = KNeighborsClassifier(n_neighbors=5)
knn_2 = KNeighborsClassifier(n_neighbors=10)
summary_df = datasets_tester.run_tests(estimators_arr = [
("Decision Tree", "Original Features", "min_samples_split=50, max_depth=6", dt_1),
("Decision Tree", "Original Features", "min_samples_split=25, max_depth=5", dt_2),
("kNN", "Original Features", "n_neighbors=5", knn_1),
("kNN", "Original Features", "n_neighbors=10", knn_2)])
display(summary_df)
```
## Example collecting regression datasets and performing regression tests on these
```
datasets_tester = de.DatasetsTester()  # use the module alias imported above
# This example uses the default settings to select the datasets, then displays the results.
# In the subsequent cell, we choose to collect a subset of these.
matching_datasets = datasets_tester.find_datasets(problem_type = "regression",)
print("Number matching datasets found:", len(matching_datasets))
display(matching_datasets.head())
dt = tree.DecisionTreeRegressor(min_samples_split=50, max_depth=5, random_state=0)
knn = KNeighborsRegressor(n_neighbors=10)
datasets_tester.collect_data(max_num_datasets_used=10)
# This provides an example using some non-default parameters.
summary_df = datasets_tester.run_tests(estimators_arr = [
("Decision Tree", "Original Features", "Default", dt),
("kNN", "Original Features", "Default", knn)],
num_cv_folds=3,
scoring_metric='r2',
show_warnings=True)
display(summary_df)
```
## Example writing to and reading from a local cache
```
cache_folder = "c:\\dataset_cache"
# This will read from openml.org
datasets_tester.collect_data(max_num_datasets_used=10, preview_data=False, save_local_cache=True, path_local_cache=cache_folder)
# This will read from the local cache
datasets_tester.collect_data(max_num_datasets_used=10, preview_data=False, check_local_cache=True, path_local_cache=cache_folder)
```
## Example Comparing Two Pipelines
```
datasets_tester = de.DatasetsTester()  # use the module alias imported above
matching_datasets = datasets_tester.find_by_name(['arsenic-male-bladder'], "classification")
datasets_tester.collect_data()
pipe1 = Pipeline([('scaler', MinMaxScaler()), ('knn_classifier', KNeighborsClassifier())])
pipe2 = Pipeline([('scaler', StandardScaler()), ('knn_classifier', KNeighborsClassifier())])
# This provides an example using some non-default parameters.
summary_df = datasets_tester.run_tests(estimators_arr = [
("kNN with MinMaxScaler", "Original Features", "Default", pipe1),
("kNN with StandardScaler", "Original Features", "Default", pipe2)],
num_cv_folds=3,
show_warnings=True)
display(summary_df)
```
<a href="https://colab.research.google.com/github/alfianhid/Prediksi-Churn-Rate-Pada-Sebuah-Bank-Menggunakan-PySpark/blob/main/Prediksi_Churn_Rate_Pada_Sebuah_Bank_Menggunakan_PySpark.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
**First, we need to install a JDK, because Spark programs/applications run on the JVM (Java-based)**
```
# I use JDK 8 because it is more stable
!apt-get install openjdk-8-jdk
```
**After installing the JDK, we need to set the Java environment path so that our Spark code can run**
```
import os # the os library bridges processes/tasks between our code and the operating system
os.environ["JAVA_HOME"]="/usr/lib/jvm/java-8-openjdk-amd64" # the JDK folder path can be taken from the output of the JDK installation
!echo $JAVA_HOME # double-check that it has been set correctly
```
**Once the JDK is installed correctly, we can install Spark itself. Here, I use PySpark (writing Spark code in Python)**
```
!pip install pyspark
```
**Next, we need to connect Google Colab to Google Drive, because that is where we store the dataset**
```
from google.colab import drive
drive.mount('/content/drive') # the default Google Drive folder path in Google Colab
```
**Then, we must start a Spark session before coding. This lets us run Spark's libraries smoothly.**
```
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('Analisis Data Nasabah Bank').getOrCreate()
# Dataset source: https://archive.ics.uci.edu/ml/datasets/bank+marketing (UCI Machine Learning Repository)
```
# **Workflow**
# Real Data >> Data Preparation >> Data Cleaning >> Data Transformation >> Data Modeling >> Prediction Process with Machine Learning
**Then we simply read the dataset that we uploaded to Google Drive**
```
# "inferSchema=True" lets us inspect the schema information of our dataframe later
# "header=True" turns the first row of the dataset into the dataframe header
df = spark.read.csv('/content/drive/MyDrive/Colab Notebooks/Datasets/dataset-nasabah-bank.csv',inferSchema=True,header=True)
```
**After that, we can check the dimensions of the data, i.e. how many rows and columns our dataset contains**
```
print((df.count(),len(df.columns))) # rows x columns
```
**We continue by printing the schema of the dataframe to learn more about its structure**
```
df.printSchema() # nullable=true means the value of that variable can be empty (null)
```
**Next, as usual, we display the top five rows of our dataframe**
```
df.show(5)
```
**Based on those five rows, we should do some data cleaning on the columns that carry no meaning, so that our analysis is more efficient**
```
my_data = df.drop(*['default', 'contact', 'day','month']) # drop the default, contact, day, and month columns
my_data.columns # show the remaining columns after the drop
```
**Next, we can display other summary statistics of the dataframe to get to know our data better**
```
my_data.describe().show()
```
**From the summary above, we can see that several columns still contain null values. Therefore, we clean the data by dropping the rows that contain those null values.**
```
# drop rows with null values, considering only the selected columns
df.na.drop(subset=["job","marital","education","housing","loan","poutcome","deposit"]).show(truncate=False)
```
**After cleaning the data, we continue exploring our dataframe to get even more familiar with it**
```
# Here, we count the number of rows in each category of a variable
my_data.groupBy('job').count().show()
print()
my_data.groupBy('marital').count().show()
print()
my_data.groupBy('education').count().show()
print()
my_data.groupBy('loan').count().show()
print()
my_data.groupBy('poutcome').count().show()
print()
my_data.groupBy('deposit').count().show()
```
**We have finally reached the last gate of data pre-processing. From here on, we will put our dataset to work for prediction. What we will predict is the "churn rate" of each bank customer. The churn rate is the percentage of customers who stop using the bank's products and services. It is very important to analyze because it serves as a reference for the bank's future.**
```
from pyspark.ml.feature import StringIndexer, OneHotEncoder
# Create objects of the StringIndexer class and set the input & output columns
SI_job = StringIndexer(inputCol='job',outputCol='job_Index')
SI_marital = StringIndexer(inputCol='marital',outputCol='marital_Index')
SI_education = StringIndexer(inputCol='education',outputCol='education_Index')
SI_housing = StringIndexer(inputCol='housing',outputCol='housing_Index')
SI_loan = StringIndexer(inputCol='loan',outputCol='loan_Index')
SI_poutcome = StringIndexer(inputCol='poutcome',outputCol='poutcome_Index')
SI_deposit = StringIndexer(inputCol='deposit',outputCol='deposit_Index')
# Transform the data into a new shape to make the prediction process easier
my_data = SI_job.fit(my_data).transform(my_data)
my_data = SI_marital.fit(my_data).transform(my_data)
my_data = SI_education.fit(my_data).transform(my_data)
my_data = SI_housing.fit(my_data).transform(my_data)
my_data = SI_loan.fit(my_data).transform(my_data)
my_data = SI_poutcome.fit(my_data).transform(my_data)
my_data = SI_deposit.fit(my_data).transform(my_data)
```
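If you want to see how each index was assigned (StringIndexer orders labels by frequency, with the most frequent label getting index 0), you can inspect the fitted model's `labels` attribute. A small sketch reusing the `SI_job` indexer defined above; the variable name here is just for illustration:
```
# the position of each label in this list corresponds to its assigned index (0, 1, 2, ...)
job_index_model = SI_job.fit(my_data)
print(job_index_model.labels)
```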
**Then we look at the result of that transformation. Here we inspect the top 10 rows of the dataframe.**
```
my_data.select('job', 'job_Index', 'marital', 'marital_Index','housing','housing_Index','poutcome','poutcome_Index','deposit','deposit_Index').show(10)
```
**After that, we transform the data again into OneHotEncoder columns before selecting features from the binary vectors**
```
# Create the encoder object and set the input & output columns
OHE = OneHotEncoder(inputCols=['job_Index', 'marital_Index','education_Index','housing_Index','loan_Index','poutcome_Index','deposit_Index'],outputCols=['job_OHE', 'marital_OHE','education_OHE','housing_OHE','loan_OHE','poutcome_OHE','deposit_OHE'])
# Transform the data
my_data = OHE.fit(my_data).transform(my_data)
# Inspect the result of the transformation
my_data.select('job', 'job_Index', 'job_OHE','education','education_Index','education_OHE').show(10)
```
**Then we transform again using VectorAssembler so that we can perform feature selection**
```
from pyspark.ml.feature import VectorAssembler # combines the listed columns into a single feature vector
# set the input & output columns of the transformation
assembler = VectorAssembler(inputCols=['age',
'job_Index',
'marital_Index',
'education_Index',
'balance',
'housing_Index',
'loan_Index',
'duration',
'campaign',
'pdays',
'previous',
'poutcome_Index',
'job_OHE',
'marital_OHE',
'housing_OHE',
'education_OHE',
'loan_OHE',
'poutcome_OHE'],
outputCol='features')
# Transform the data
final_data = assembler.transform(my_data)
```
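As an aside, the StringIndexer, OneHotEncoder and VectorAssembler steps above could also be chained into a single `Pipeline`, which keeps the fit/transform logic in one place. A minimal sketch reusing the same stage objects on a fresh copy of the data; this is an optional refactor, not part of the original flow:
```
from pyspark.ml import Pipeline

raw_data = df.drop(*['default', 'contact', 'day','month'])   # fresh frame without the generated columns
stages = [SI_job, SI_marital, SI_education, SI_housing, SI_loan, SI_poutcome, SI_deposit, OHE, assembler]
final_data_alt = Pipeline(stages=stages).fit(raw_data).transform(raw_data)
```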
**Continuing as usual, we display the top 10 rows of the dataframe**
```
final_data.select('features','deposit_Index').show(10)
```
**The feature-selection result above will then be used as the model data for predicting the churn rate**
```
model_df = final_data.select(['features','deposit_Index'])
model_df = model_df.withColumnRenamed("deposit_Index","label") # rename the column so it is easier to understand
model_df.printSchema()
```
**The model data we just built will then go through training and testing, so that it can later produce the churn-rate value**
```
training_df,test_df = model_df.randomSplit([0.75,0.25]) # 75% of the data for training and 25% for testing
```
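Note that `randomSplit` draws the split randomly, so the accuracy figures below can change from run to run; passing a seed makes the split reproducible. A small sketch:
```
# reproducible 75/25 split (the seed value itself is arbitrary)
training_df, test_df = model_df.randomSplit([0.75, 0.25], seed=42)
```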
**Let's just go ahead and build a logistic regression model to look at our training and testing accuracy**
```
from pyspark.ml.classification import LogisticRegression # prediction using the Logistic Regression method
log_reg = LogisticRegression().fit(training_df)
```
**We have finally reached the end of the code, haha. Rather than getting more confused, let's just print the resulting accuracy of the training step**
```
lr_summary = log_reg.summary
lr_summary.accuracy # Overall (training) accuracy = 79%. Accuracy results usually relate to the concepts of underfitting/overfitting.
# Further reading on underfitting/overfitting: https://s.id/yhXPu
```
**We compare it with the area-under-ROC threshold to see how well our prediction performs**
```
# Further reading on the ROC threshold in Logistic Regression: https://s.id/yheC6
lr_summary.areaUnderROC # ROC threshold = 87%. Since accuracy < threshold, the prediction result is not yet good.
```
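The accuracy and area under ROC printed above come from the training summary. As a complementary check, here is a minimal sketch of how the held-out test split could be scored as well; this is an addition, not part of the original notebook:
```
from pyspark.ml.evaluation import BinaryClassificationEvaluator

test_summary = log_reg.evaluate(test_df)                      # evaluate the fitted model on the test split
print(test_summary.accuracy)
evaluator = BinaryClassificationEvaluator(labelCol='label')   # default metric: areaUnderROC
print(evaluator.evaluate(log_reg.transform(test_df)))
```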
**And finally, last but not least, we display the churn-rate predictions from the dataframe we trained and tested earlier**
```
predictions = log_reg.transform(test_df)
predictions.select('label','prediction').show(10) # display the top 10 rows
```
**Yes! We can see that our predictions are not perfect. What matters, though, is that we now know and have learned the end-to-end steps for analysing data with PySpark. Thank you for your attention :)**
```
# history
# (note to self) how many ways can n samples be partitioned into c classes? (partitioning the space)
import numpy as np
import imageio
import matplotlib.pyplot as plt
im = imageio.imread('imageio:chelsea.png')
# build the gamma table first, then apply it as a look-up table (LUT)
def GammaTable(gamma):
invGamma = 1.0 / gamma
table = np.array([((i / 255.0) ** invGamma) * 255
for i in np.arange(0, 256)]).astype("uint8")
return table
def LUT(image , lutTable):
# image: grayscale or RGB color image
    # lutTable: [256,] 1D numpy array mapping the values 0-255 to other values
lut = lambda x: lutTable[x]
return lut(image)
def ForLUT2(x, lutTable):
return lutTable[x]
def LUT2(image ,lutTable):
return ForLUT2(image,lutTable)
def LUT3(image,lutTable):
return lutTable[image]
testlut = im[:,:,1]
testlutgamma = 2
resultlut1 = LUT(testlut,GammaTable(testlutgamma))
resultlut2 = LUT2(testlut,GammaTable(testlutgamma))
resultlut3 = LUT3(testlut,GammaTable(testlutgamma))
print("LUT和LUT2得到的答案是否完全相同,答案:",end = "")
print((resultlut1==resultlut2).all())
print("LUT和LUT3得到的答案是否完全相同,答案:",end = "")
print((resultlut1==resultlut3).all())
print("验证,都和原图像不同(经过处理),答案:",end = "")
print((resultlut3==im[:,:,1]).all())
def GammaTableChange(gamma):
invGamma = 1.0 / gamma
table = np.zeros(256)
for i in np.arange(0, 256):
table[i] = (((i/255.0)**invGamma)*255).astype("uint8")
return table
def TestEq(table1,table2):
print("两表内容是否完全一致"+str((table1 == table2).all()))
# apply the gamma function directly via a vectorised mapping
def DirectGammaFunc(image,gamma):
invGamma = 1.0/gamma
fuc = lambda x: ((x/255.0)**invGamma)*255
return fuc(image).astype("uint8")
# compute the gamma correction directly, pixel by pixel (function body completed here so the helper is usable)
def DirectCaculate(image,gamma):
    invGamma = 1.0/gamma
    newimg = np.copy(image)
    for idx in np.ndindex(image.shape):  # visit every pixel explicitly
        newimg[idx] = ((image[idx]/255.0)**invGamma)*255
    return newimg
def ImThresh(im, minv, maxv):
BinImg = np.zeros(im.shape, dtype=im.dtype)
for i in range(im.shape[0]):
for j in range(im.shape[1]):
if im[i,j]>=minv and im[i,j]<=maxv:
BinImg[i,j]=1
else:
BinImg[i,j]=0
return BinImg
def ImThreshv2(image, minv, maxv):
assert(len(image.shape)==2)
    # boolean masks
group1 = image >= minv
group2 = image <= maxv
    # logical AND: a pixel must satisfy both value >= minv and value <= maxv
return (group1*group2).astype(np.uint8)
# an alternative version by 杰哥 (Jie)
def ImThreshv3(image, minv, maxv):
assert(len(image.shape)==2)
    # here non-boolean values are used instead, digitised to 0/1 (uint8)
newimg = np.copy(image)
newimg[newimg > maxv] = 0
newimg[newimg < minv] = 0
newimg[newimg != 0] = 1
return newimg.astype(np.uint8)
R = im[:,:,0]
gamma = 1.5
TestEq(LUT(R,GammaTable(gamma)),LUT(R,GammaTableChange(gamma)))
R_crrt = LUT(R,GammaTable(gamma))
R_crrt2 = LUT2(R,GammaTable(gamma))
R_crrt_test = DirectGammaFunc(R,gamma)
inverGrey = lambda x: (255-x)
plt.figure(figsize=(15,20))
ax = plt.subplot(121)
ax.set_title("Test1")
plt.imshow(im)
R_crrt_inv = inverGrey(R_crrt)
ax = plt.subplot(122)
ax.set_title("Test2")
plt.imshow(im)
plt.show()
plt.figure(figsize=(16,10))
ax1 = plt.subplot(121)
ax1.imshow(ImThresh(R,130,256),cmap = "gray")
ax2 = plt.subplot(122)
ax2.imshow(ImThreshv3(R,130,256),cmap = "gray")
plt.show()
plt.figure(figsize=(15, 10))
ax = plt.subplot(2,2,1)
ax.set_title('Original Image')
plt.imshow(R,cmap = "gray")
ax = plt.subplot(2,2,2)
ax.set_title('Gamma: '+str(gamma))
plt.imshow(R_crrt, cmap="gray")
ax = plt.subplot(2,2,3)
ax.set_title('Gamma: '+str(gamma)+" Converse by cmap")
plt.imshow(R_crrt, cmap="gray_r")
ax = plt.subplot(224)
ax.set_title('Gamma' + str(gamma)+ " Converse by function")
plt.imshow(R_crrt_inv, cmap="gray")
plt.show()
plt.figure(figsize=(15, 10))
ax = plt.subplot(1,2,1)
X = np.array(range(0,256))
plt.plot(X, GammaTable(1))
ax.set_title('gamma=1.0')
ax = plt.subplot(1,2,2)
plt.plot(X, GammaTable(gamma))
ax.set_title('gamma='+str(gamma))
plt.show()
```
<a href="https://colab.research.google.com/github/athenian-ct-projects/Mock-Congress-Day-KG/blob/master/Voting_Tool_KG.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Voting system for Mock Congress Day
Kaveer G. '23

```
"""Note from the author: This is quite inefficient because it's based on a not-currently-working build of the ranked voting system.
It's essentially the ranked voting program with some functionality (and the lack of it functioning correctly) removed."""
#function to add "Message to operators: " before operator messages
def opmessage(message):
print("Message to operators: \n" +str(message))
#creates the ballot out of the votes
def voting():
morevoters = 1
while morevoters != 0:
voterlist = []
y = 1
print("CAUTION: if the candidate's name is entered incorrectly (including capitalization), the vote will not be counted.")
choice = input("choice: ")
if choice != "voting complete":
voterlist.append(choice)
ballotlist.append(voterlist)
else:
morevoters = 0
#counts the votes and assigns the amount of votes to the respective candidates
def calculate():
thevoter = 0
for i in range(0, len(ballotlist)):
thechoice = 0
        # walk through this voter's recorded choices until one matches a registered candidate;
        # the membership test means a mistyped name is simply not counted instead of crashing
        while thechoice < len(ballotlist[thevoter]):
            if ballotlist[thevoter][thechoice] in candidate_list:
                # the vote count is stored right after the candidate's name in candidate_list
                candidate_list[candidate_list.index(ballotlist[thevoter][thechoice])+1] += 1
                break
            else:
                thechoice += 1
        thevoter += 1
#finds the candidate with the most votes
def winner():
v = 1
justnumberscandidatelist = []
for i in range(0, int(len(candidate_list) / 2)):
justnumberscandidatelist.append(candidate_list[v])
v += 2
print('\n')
if justnumberscandidatelist.count(max(justnumberscandidatelist)) == 1:
print(candidate_list[candidate_list.index(max(justnumberscandidatelist))-1]+" is the winner!")
elif justnumberscandidatelist.count(max(justnumberscandidatelist)) > 1:
print("There's a tie!")
if input("Would you like additional information? ") != "no":
print('\n')
print("Number of votes per candidate: " + str(candidate_list))
#Main code begins
print('\n')
print("Welcome to the automated voting system.")
candidate_list = []
morecandidates = 1
numberofcandidates = 0
print('\n')
opmessage("If it is likely a candidate's name will be misspelled or otherwise entered incorrectly, \nit is recommended to enter an alternate name instead, such as a nickname or number.")
while morecandidates == 1:
candidate_list.append(input("Enter candidate: "))
candidate_list.append(0)
numberofcandidates += 1
if input("Are there more candidates? Type 'no' to stop adding candidates. ") == "no":
morecandidates = 0
print('\n')
opmessage("When there are no additional voters remaining, enter 'voting complete' in the 'choice' field.")
opmessage("It is recommended to create a line of voters at this time.")
print('\n')
print("Voting begins now.")
ballotlist = []
voting()
calculate()
winner()
print('\n')
print("Created by Kaveer Gera")
```
```
from scipy.optimize import curve_fit
import pylab as plt
import numpy as np
def blackbody_lam(lam, T):
""" Blackbody as a function of wavelength (um) and temperature (K).
returns units of erg/s/cm^2/cm/Steradian
"""
from scipy.constants import h,k,c
lam = 1e-6 * lam # convert to metres
return 2*h*c**2 / (lam**5 * (np.exp(h*c / (lam*k*T)) - 1))
wa = np.linspace(0.1, 2, 100) # wavelengths in um
T1 = 5000.
T2 = 8000.
y1 = blackbody_lam(wa, T1)
y2 = blackbody_lam(wa, T2)
ytot = y1 + y2
np.random.seed(1)
# make synthetic data with Gaussian errors
sigma = np.ones(len(wa)) * 1 * np.median(ytot)
ydata = ytot + np.random.randn(len(wa)) * sigma
# plot the input model and synthetic data
plt.figure()
plt.plot(wa, y1, ':', lw=2, label='T1=%.0f' % T1)
plt.plot(wa, y2, ':', lw=2, label='T2=%.0f' % T2)
plt.plot(wa, ytot, ':', lw=2, label='T1 + T2\n(true model)')
plt.plot(wa, ydata, drawstyle='steps-mid', lw=2, label='Fake data')
plt.xlabel('Wavelength (microns)')
plt.ylabel('Intensity (erg/s/cm$^2$/cm/Steradian)')
# fit two blackbodies to the synthetic data
def func(wa, T1, T2):
return blackbody_lam(wa, T1) + blackbody_lam(wa, T2)
# Note the initial guess values for T1 and T2 (p0 keyword below). They
# are quite different to the known true values, but not *too*
# different. If these are too far away from the solution curve_fit()
# will not be able to find a solution. This is not a Python-specific
# problem, it is true for almost every fitting algorithm for
# non-linear models. The initial guess is important!
popt, pcov = curve_fit(func, wa, ydata, p0=(1000, 3000), sigma=sigma)
# get the best fitting parameter values and their 1 sigma errors
# (assuming the parameters aren't strongly correlated).
bestT1, bestT2 = popt
sigmaT1, sigmaT2 = np.sqrt(np.diag(pcov))
ybest = blackbody_lam(wa, bestT1) + blackbody_lam(wa, bestT2)
print('True model values')
print(' T1 = %.2f' % T1)
print(' T2 = %.2f' % T2)
print('Parameters of best-fitting model:')
print(' T1 = %.2f +/- %.2f' % (bestT1, sigmaT1))
print(' T2 = %.2f +/- %.2f' % (bestT2, sigmaT2))
degrees_of_freedom = len(wa) - 2
resid = (ydata - func(wa, *popt)) / sigma
chisq = np.dot(resid, resid)
# plot the solution
plt.plot(wa, ybest, label='Best fitting\nmodel')
plt.legend(frameon=False)
plt.savefig('fit_bb.png')
plt.show()
from astropy.table import Table  # the import must precede the call to Table.read
spectra_contsep_j193015_1 = Table.read("mansiclass/spec_auto_contsep_lstep1__crr_b_ifu20211023_02_55_33_RCB-J193015.txt", format = "ascii")
```
___
# The Sparks Foundation
###### GRIP August21 - Data Science & Business Analytics Internship
**SUBMITTED BY - Vanshika Dharwal**
___
## TASK 1 - Prediction using Supervised ML
In supervised learning, machines are trained on well-labelled training data and, on the basis of that data, predict the output.
#### AIM - Predicting the percentage score of a student based on the number of study hours
Libraries used:
* Pandas
* Scikit-Learn
* Numpy
* Matplotlib
* Seaborn
```
# Importing all libraries required in this notebook
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
```
##### Step 1 - Data Acquisition
```
# Reading data from remote link
url = "http://bit.ly/w-data"
s_data = pd.read_csv(url)
print("Data imported successfully")
s_data.head(10)
```
##### Step 2 - Data Exploration
```
s_data.info()
s_data.describe()
s_data.columns
```
##### Step 3 - Data Visualization
```
sns.pairplot(s_data)
```
Understanding the correlation between the two columns.
```
sns.heatmap(s_data.corr())
```
Plotting the data points on a 2-D graph to eyeball them and find a relationship in the data.
```
# Plotting the distribution of scores
s_data.plot(x='Hours', y='Scores', style='o')
plt.title('Hours vs Percentage')
plt.grid()
plt.xlabel('Hours Studied')
plt.ylabel('Percentage Score')
plt.show()
```
**From the graphs above, we can clearly see that there is a positive linear relation between the number of hours studied and percentage of score.**
##### Step 4 - Data Preprocessing
Dividing the data into "attributes" (inputs) and "labels" (outputs).
```
X = s_data.iloc[:, :-1].values
y = s_data.iloc[:, 1].values
```
##### Step 5 - Model Training
Now that we have our attributes and labels, the next step is to split this data into training and test sets. We'll do this by using Scikit-Learn's built-in train_test_split() method:
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=0.2, random_state=0)
```
We have split our data into training and testing sets, and now is finally the time to train our algorithm.
```
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(X_train, y_train)
print("Training complete.")
```
##### Step 6 - Plotting the Line of regression
Now that our model is trained, it is time to visualize the best-fit regression line.
```
# Plotting the regression line
line = regressor.coef_*X+regressor.intercept_
# Plotting for the test data
plt.scatter(X, y)
plt.plot(X, line,color='red');
plt.grid()
plt.show()
```
##### Step 7 - Making Predictions
```
print(X_test) # Testing data - In Hours
y_pred = regressor.predict(X_test) # Predicting the scores
# Comparing Actual vs Predicted
df = pd.DataFrame({'Actual': y_test, 'Predicted': y_pred})
df
# Plotting the Bar graph to depict the difference between the actual and predicted value
df.plot(kind='bar',figsize=(5,5))
plt.grid(which='major', linewidth='0.5', color='red')
plt.grid(which='minor', linewidth='0.5', color='blue')
plt.show()
print(X_test) # Testing data - In Hours
y_pred = regressor.predict(X_test) # Predicting the scores
```
A one-unit increase in hours studied is associated with a 9.814-point increase in the percentage score (the slope of the fitted line).
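That 9.814 figure is the slope learned by the model; a quick way to check it, together with the intercept, is shown below.
```
# slope (score points per extra hour of study) and intercept of the fitted line
print("Coefficient:", regressor.coef_[0])
print("Intercept:", regressor.intercept_)
```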
### What will the predicted score be if a student studies for 9.25 hrs/day?
```
hours = 9.25
test = np.array([hours])
test = test.reshape(-1, 1)
own_pred = regressor.predict(test)
print("No of Hours = {}".format(hours))
print("Predicted Score = {}".format(own_pred[0]))
```
##### Step 8 - Evaluating the Model
Using the mean absolute error.
```
from sklearn import metrics
print('Mean Absolute Error:',
metrics.mean_absolute_error(y_test, y_pred))
```
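If desired, the mean squared error and the R² score can be obtained from the same `metrics` module; a small additional sketch:
```
print('Mean Squared Error:', metrics.mean_squared_error(y_test, y_pred))
print('R2 Score:', metrics.r2_score(y_test, y_pred))
```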
## CONCLUSION
**I successfully predicted the score for a student who studies 9.25 hrs/day, which came out to be about 93.69. I was also able to evaluate the model's performance.**
# THANK YOU
<small><i>This notebook was put together by [Jake Vanderplas](http://www.vanderplas.com). Source and license info is on [GitHub](https://github.com/jakevdp/sklearn_tutorial/).</i></small>
# An Introduction to scikit-learn: Machine Learning in Python
## Goals of this Tutorial
- **Introduce the basics of Machine Learning**, and some skills useful in practice.
- **Introduce the syntax of scikit-learn**, so that you can make use of the rich toolset available.
## Schedule:
**Preliminaries: Setup & introduction** (15 min)
* Making sure your computer is set-up
**Basic Principles of Machine Learning and the Scikit-learn Interface** (45 min)
* What is Machine Learning?
* Machine learning data layout
* Supervised Learning
- Classification
- Regression
- Measuring performance
* Unsupervised Learning
- Clustering
- Dimensionality Reduction
- Density Estimation
* Evaluation of Learning Models
* Choosing the right algorithm for your dataset
**Supervised learning in-depth** (1 hr)
* Support Vector Machines
* Decision Trees and Random Forests
**Unsupervised learning in-depth** (1 hr)
* Principal Component Analysis
* K-means Clustering
* Gaussian Mixture Models
**Model Validation** (1 hr)
* Validation and Cross-validation
## Preliminaries
This tutorial requires the following packages:
- Python version 2.7 or 3.4+
- `numpy` version 1.8 or later: http://www.numpy.org/
- `scipy` version 0.15 or later: http://www.scipy.org/
- `matplotlib` version 1.3 or later: http://matplotlib.org/
- `scikit-learn` version 0.15 or later: http://scikit-learn.org
- `ipython`/`jupyter` version 3.0 or later, with notebook support: http://ipython.org
- `seaborn`: version 0.5 or later, used mainly for plot styling
The easiest way to get these is to use the [conda](http://store.continuum.io/) environment manager.
I suggest downloading and installing [miniconda](http://conda.pydata.org/miniconda.html).
The following command will install all required packages:
```
$ conda install numpy scipy matplotlib scikit-learn ipython-notebook
```
Alternatively, you can download and install the (very large) Anaconda software distribution, found at https://store.continuum.io/.
### Checking your installation
You can run the following code to check the versions of the packages on your system:
(in IPython notebook, press `shift` and `return` together to execute the contents of a cell)
```
from __future__ import print_function
import IPython
print('IPython:', IPython.__version__)
import numpy
print('numpy:', numpy.__version__)
import scipy
print('scipy:', scipy.__version__)
import matplotlib
print('matplotlib:', matplotlib.__version__)
import sklearn
print('scikit-learn:', sklearn.__version__)
```
## Useful Resources
- **scikit-learn:** http://scikit-learn.org (see especially the narrative documentation)
- **matplotlib:** http://matplotlib.org (see especially the gallery section)
- **Jupyter:** http://jupyter.org (also check out http://nbviewer.jupyter.org)
```
import config
import requests
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials
from pprint import pprint
import pandas as pd
auth = SpotifyClientCredentials(
client_id=config.SPOTIPY_CLIENT_ID,
client_secret=config.SPOTIPY_CLIENT_SECRET
)
token = auth.get_access_token()
sp = spotipy.Spotify(auth=token)
def get_tracks_from_playlist(playlist_URI):
offset = 0
tracklist = []
while True:
response = sp.playlist_tracks(playlist_URI,
offset=offset,
fields='items.track.id,total',
additional_types=['track'])
tracks = response['items']
for track in tracks:
track_id = track['track']['id']
tracklist.append(track_id)
offset = offset + len(response['items'])
# print(offset, "/", response['total'])
if len(response['items']) == 0:
break
return tracklist
def create_augmented_df(playlist_URI, polarity=None):
df = pd.DataFrame(columns=['track id'])
df['track id'] = get_tracks_from_playlist(playlist_URI)
if polarity != None:
df['gt label'] = df['track id'].apply(lambda x: polarity)
df['track name'] = df['track id'].apply(lambda x: sp.tracks([x])['tracks'][0]['name'])
df['artist'] = df['track id'].apply(lambda x: sp.tracks([x])['tracks'][0]['artists'][0]['name'])
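    # the reindex below appends one empty column per Spotify audio feature; they are filled row by row afterwards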
audio_features = ['danceability', 'energy', 'key', 'loudness', 'mode', 'speechiness', 'acousticness', 'instrumentalness', 'liveness', 'valence', 'tempo']
df = df.reindex(df.columns.tolist() + audio_features, axis=1)
for i in range(len(df)):
track_id = df['track id'][i]
analysis = sp.audio_features(track_id)[0]
for feature in audio_features:
df[feature][i] = analysis[feature]
return df
# happy beats
df1 = create_augmented_df('spotify:playlist:37i9dQZF1DWSf2RDTDayIx', 1)
df1.head()
# life sucks
df2 = create_augmented_df('spotify:playlist:37i9dQZF1DX3YSRoSdA634', 0)
df2.head()
df_test = pd.concat([df1, df2])
df_test
df_test.to_csv('data/test.csv', index=False)
df1['energy'].hist()
df2['energy'].hist()
train_df1 = create_augmented_df('spotify:playlist:37i9dQZF1DX7KNKjOK0o75') # have a great day
train_df2 = create_augmented_df('spotify:playlist:37i9dQZF1DX3rxVfibe1L0') # mood booster
train_df3 = create_augmented_df('spotify:playlist:37i9dQZF1DX4fpCWaHOned') # confidence boost
train_df4 = create_augmented_df('spotify:playlist:37i9dQZF1DX6xZZEgC9Ubl') # tear drop
train_df5 = create_augmented_df('spotify:playlist:37i9dQZF1DX59NCqCqJtoH') # idk.
train_df6 = create_augmented_df('spotify:playlist:37i9dQZF1DWSqBruwoIXkA') # down in the dumps
df_train = pd.concat([train_df1, train_df2, train_df3, train_df4, train_df5, train_df6])
df_train
df_train.to_csv('data/train.csv', index=False)
```
# Simple feed-forward neural network MNIST digits classification with Keras and Tensorflow
```
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# Keras (from TensorFlow) imports for the dataset and building NN
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Dropout, Activation
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import load_model
# load train/test datasets
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# Draw several figures
fig = plt.figure()
for i in range(9):
plt.subplot(3,3,i+1)
plt.tight_layout()
plt.imshow(X_train[i], cmap='gray', interpolation='none')
plt.title("Digit: {}".format(y_train[i]))
plt.xticks([])
plt.yticks([])
fig = plt.figure()
plt.subplot(2,1,1)
plt.imshow(X_train[0], cmap='gray', interpolation='none')
plt.title("Digit: {}".format(y_train[0]))
plt.xticks([])
plt.yticks([])
plt.subplot(2,1,2)
plt.hist(X_train[0].reshape(784))
plt.title("Pixel Value Distribution")
# let's print the shape before we reshape and normalize
print("X_train shape", X_train.shape)
print("y_train shape", y_train.shape)
print("X_test shape", X_test.shape)
print("y_test shape", y_test.shape)
X_train = X_train.reshape(60000, 784)
X_test = X_test.reshape(10000, 784)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
# normalizing the data to help with the training
X_train /= 255
X_test /= 255
# print the final input shape ready for training
print(X_train.shape)
print(X_test.shape)
n_classes = 10
Y_train = to_categorical(y_train, n_classes)
Y_test = to_categorical(y_test, n_classes)
# building a linear stack of layers with the sequential model
model = Sequential()
model.add(Dense(512, input_shape=(784,)))
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(Dense(10))
model.add(Activation('softmax'))
# compiling the sequential model
model.compile(loss='categorical_crossentropy', metrics=['accuracy'], optimizer='adam')
# training the model and saving metrics in history
history = model.fit(X_train, Y_train,
batch_size=128, epochs=20,
verbose=1,
validation_data=(X_test, Y_test))
import os
save_dir = "results/"
if not os.path.exists(save_dir):
os.makedirs(save_dir)
# saving the model
model_name = 'keras_mnist_v1.h5'
model_path = os.path.join(save_dir, model_name)
model.save(model_path)
print('Saved trained model at %s ' % model_path)
# evaluate test data
mnist_model = load_model(model_path)
loss_and_metrics = mnist_model.evaluate(X_test, Y_test, verbose=2)
print("Test Loss", loss_and_metrics[0])
print("Test Accuracy", loss_and_metrics[1])
# see which we predicted correctly and which not
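# note: predict_classes() was removed in newer TF/Keras releases; np.argmax(mnist_model.predict(X_test), axis=-1) gives the same result there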
predicted_classes = mnist_model.predict_classes(X_test)
correct_indices = np.nonzero(predicted_classes == y_test)[0]
incorrect_indices = np.nonzero(predicted_classes != y_test)[0]
print(len(correct_indices)," classified correctly")
print(len(incorrect_indices)," classified incorrectly")
# adapt figure size to accommodate 18 subplots
plt.rcParams['figure.figsize'] = (7,14)
figure_evaluation = plt.figure()
# plot 9 correct predictions
for i, correct in enumerate(correct_indices[:9]):
plt.subplot(6,3,i+1)
plt.imshow(X_test[correct].reshape(28,28), cmap='gray', interpolation='none')
plt.title(
"Predicted: {}, Truth: {}".format(predicted_classes[correct],
y_test[correct]))
plt.xticks([])
plt.yticks([])
# plot 9 incorrect predictions
for i, incorrect in enumerate(incorrect_indices[:9]):
plt.subplot(6,3,i+10)
plt.imshow(X_test[incorrect].reshape(28,28), cmap='gray', interpolation='none')
plt.title(
"Predicted {}, Truth: {}".format(predicted_classes[incorrect],
y_test[incorrect]))
plt.xticks([])
plt.yticks([])
```
```
%pylab inline
pylab.rcParams['figure.figsize'] = (16.0, 8.0)
```
# Adaptive determination of Monte Carlo trials
The Monte Carlo outcome is based on **random** draws from the joint probability distribution associated with the input quantities. Thus, the outcome and every statistics derived are **random**.
### Exercise 5.1
For the model function
$$ Y = f(X_1,X_2,X_3) = X_1 + X_2 + X_3 $$
with independent input quantities for which knowledge is encoded as
- $X_1$: Gamma distribution with scale parameter $a=1.5$
- $X_2$: normal distribution with $\mu=1.3$ and $\sigma=0.1$
- $X_3$: t-distribution with location parameter $0.8$ and scale parameter $0.3$ and with 5 degrees of freedom
carry out a Monte Carlo simulation with 1000 runs. Repeat this simulation 100 times using a for-loop. Calculate and store the estimates $y$ for each simulation run and compare the different outcomes.
```
from scipy.stats import gamma, norm, t
rst = random.RandomState(1)
# distribution of input quantities
x1dist = gamma(1.5)
x2dist = norm(loc=1.3, scale=0.1)
x3dist = t(loc=0.8, scale=0.3, df=5)
# measurement model
model = lambda X1,X2,X3: X1 + X2 + X3
MCruns = 1000
repeats= 100
means = zeros(repeats)
# repeat Monte Carlo runs
for k in range(repeats):
x1 = x1dist.rvs(MCruns)
x2 = x2dist.rvs(MCruns)
x3 = x3dist.rvs(MCruns)
Y = model(x1,x2,x3)
means[k] = Y.mean()
hist(means, bins = 20)
title("Mean values of 100 repeated Monte Carlo runs (with 1000 trials each)");
```
## Adaptive Monte Carlo method
The randomness of the Monte Carlo outcomes cannot be avoided. However, the variation between runs decreases with an increasing number of Monte Carlo simulations. The aim is thus to adaptively decide on the number of Monte Carlo trials based on
* a prescribed numerical tolerance
* at a chosen level of confidence
#### Stein's method
From Wübbeler et al. (doi: http://iopscience.iop.org/0026-1394/47/3/023):
Let $y_1, y_2, \ldots$ be a sequence of values drawn independently from a Gaussian distribution with unknown expectation $\mu$ and variance $\sigma^2$.
The aim is to determine a rule that terminates this sequence such that $\bar{y}(h)$, being the average of the sequence terminated at $h$, satisfies that the interval
$$ [\bar{y}(h)-\delta, \bar{y}(h)+\delta] $$
is a confidence interval for $\mu$ at confidence level $1-\alpha$.
1) Draw an initial number $h_1>1$ of samples and calculate
$$ s_y^2(h_1) = \frac{1}{h_1-1} \sum_{i=1}^{h_1} (y_i - \bar{y}(h_1))^2 $$
2) Calculate the number $h_2$ of additional values (a small code sketch of this rule follows below) as
$$ h_2 = \max\left( \left\lfloor \frac{s_y^2(h_1)\,\bigl(t_{h_1-1,\,1-\alpha/2}\bigr)^2}{\delta^2} \right\rfloor - h_1 + 1,\; 0 \right) $$
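A minimal sketch of this stopping rule as a helper function; the function name and the use of the sample variance with `ddof=1` (matching $s_y^2(h_1)$ above) are choices made here, not part of the original text:
```
import numpy as np
from scipy.stats import t

def additional_blocks(y_blocks, delta, alpha=0.05):
    """Number of further blocks required by Stein's rule, given the first h1 block results."""
    h1 = len(y_blocks)
    s2 = np.var(y_blocks, ddof=1)              # s_y^2(h1)
    tq = t(h1 - 1).ppf(1 - alpha / 2)          # t_{h1-1, 1-alpha/2}
    return int(max(np.floor(s2 * tq**2 / delta**2) - h1 + 1, 0))
```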
#### Application to Monte Carlo simulations
We consider Monte Carlo simulations block-wise. That is, we choose a modest number of Monte Carlo trials, e.g. 1000, and consider a Monte Carlo simulation with that number of trials as one block. Each block has a block mean, standard deviation (uncertainty), etc.
With $h_1$ being the number of such blocks and $y_1,y_2,\ldots$ a selected outcome of each block (e.g. the mean, variance, interval boundaries, etc.) Stein's method can be applied to calculate the additionally required number of blocks.
**Reminder**
The deviation $\delta$ can be calculated from a prescribed number of significant digits as follows:
- Write the number of interest in the form $ z = c \times 10^l$ with $c$ having the chosen number of digits.
- Calculate the numerical tolerance as $\delta = \frac{1}{2} 10^l$

For example, reporting $y \approx 3.6$ with two significant digits means $z = 36 \times 10^{-1}$, so $l = -1$ and $\delta = \frac{1}{2}10^{-1} = 0.05$, which is the tolerance used in the code below.
### Exercise 5.2
Repeat Exercise 5.1 using Stein's method, starting with an initial number of $h_1 = 10$ repetitions. Calculate $h_2$ such that a numerical tolerance of 2 digits is achieved with a 95% level of confidence.
```
from scipy.stats import gamma, norm, t
rst = random.RandomState(1)
# distributions of input quantities
x1dist = gamma(1.5)
x2dist = norm(loc=1.3, scale=0.1)
x3dist = t(loc=0.8, scale=0.3, df=5)
# measurement model
model = lambda X1,X2,X3: X1 + X2 + X3
# Monte Carlo block size
MCruns = 1000
# number of initial Monte Carlo blocks
h1 = 10
means = zeros(h1)
delta = 0.05
alpha = 0.05
# repeated Monte Carlo method
for k in range(h1):
x1 = x1dist.rvs(MCruns)
x2 = x2dist.rvs(MCruns)
x3 = x3dist.rvs(MCruns)
Y = model(x1,x2,x3)
means[k] = Y.mean()
# calculate additional number of Monte Carlo blocks
h2 = int(max( floor(means.var()*t(h1-1).ppf(1-alpha/2)**2/delta**2) - h1+1, 0 ))
means = np.r_[means, zeros(h2)]
# repeated Monte Carlo method
for k in range(h1,h1+h2):
x1 = x1dist.rvs(MCruns)
x2 = x2dist.rvs(MCruns)
x3 = x3dist.rvs(MCruns)
Y = model(x1,x2,x3)
means[k] = Y.mean()
y = means.mean()
print(y)
```
The confidence level for the achieved accuracy is a frequentist measure. Therefore, in order to verify the achieved confidence, we repeat the adaptive Monte Carlo method and assess the long run success.
```
# validate the level of confidence
reruns = 1000
y = zeros(reruns)
MCruns = 1000
h1 = 10
for r in range(reruns):
means = zeros(h1)
delta = 0.05
alpha = 0.05
for k in range(h1):
x1 = x1dist.rvs(MCruns)
x2 = x2dist.rvs(MCruns)
x3 = x3dist.rvs(MCruns)
Y = model(x1,x2,x3)
means[k] = Y.mean()
h2 = int(max( floor(means.var()*t(h1-1).ppf(1-alpha/2)**2/delta**2) - h1+1, 0 ))
means = np.r_[means, zeros(h2)]
for k in range(h1,h1+h2):
x1 = x1dist.rvs(MCruns)
x2 = x2dist.rvs(MCruns)
x3 = x3dist.rvs(MCruns)
Y = model(x1,x2,x3)
means[k] = Y.mean()
y[r] = means.mean()
hist(y, bins = 100)
axvline(y.mean()-delta, color="k")
axvline(y.mean()+delta, color="k")
title("Mean values of repeated adaptive Monte Carlo method");
```
The results of the adaptive Monte Carlo method are still random. The spread of calculated mean values, however, is below the chosen tolerance with the prescribed level of confidence.
# Did India fail to take necessary measures?
Import the required Python modules that will help us analyze the data.
```
import pandas as pd
import datetime
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import numpy as np
```
# Create a DataFrame from the data file
Here we have a data file containing the COVID-19 India dataset.
```
# Read Data from file
df = pd.read_csv("covid_19_india.csv")
df.head()
```
We can observe from the above dataset that we have case data at the daily level.
# Based on the above dataset, let's create line plots of the daily Cured, Deaths and Confirmed counts
Let's group the data by Date.
```
df = df.groupby('Date', sort=False).mean() # group by based on date
df
```
Let's create a line plot with Confirmed cases on the y-axis and Date on the x-axis.
```
df['Confirmed'].plot.line(figsize = (10,4), legend=True, color = 'blue', use_index = True, title ='Impact on lockdown for covid-19' )
```
In the chart above we can see that confirmed cases increase day by day.
Let's create a line plot with Cured cases on the y-axis and Date on the x-axis.
```
df['Cured'].plot.line(figsize = (10,4), legend=True, color = 'green', use_index = True, title ='Impact on lockdown for covid-19' )
```
In the chart above we can see that recoveries also increase day by day, which is good.
Let's create a line plot with Deaths on the y-axis and Date on the x-axis.
```
df['Deaths'].plot.line(figsize = (10,4), legend=True, color = 'Red', use_index = True)
```
In the chart above we can see that deaths due to COVID-19 remain comparatively few.
Let's combine all the above plots and see how the data looks together.
```
df.plot.line(figsize = (10,4), legend=True, use_index = True)
```
Here we see all three series together: confirmed cases grow faster than cured cases, and from May onwards many more cases were registered in India, whereas before May there were fewer than 10k cases.
The Government of India declared its first lockdown on 23 March 2020 to prevent further spread of COVID-19 when the first reports of affected patients were observed. The lockdown was then extended in three further stages, on 15 April 2020, 4 May 2020 and 18 May 2020 respectively. After almost two months it was lifted in stages, with measures such as social distancing and the closure of public places kept in place to limit the spread of COVID-19. From the above plot, in the initial stages of the lockdown the curve flattens somewhat, i.e. the rate of spread of the disease was slow. As soon as the lockdown was lifted, during the unlock phases, the curve increases roughly exponentially, meaning the virus started spreading drastically after May 2020.
```
from matplotlib import pylab as plt
import numpy as np
import math
%matplotlib inline
```
## Defining Vector and its operations
```
class Vector(object):
def __init__(self, x, y):
super(Vector, self).__init__()
self.x = float(x)
self.y = float(y)
def magnitude(self):
return math.sqrt(self.x ** 2 + self.y **2)
def direction(self):
"""
calculate direction of this vector and returns it as unit vector
"""
magnitude = self.magnitude()
return Vector(self.x / magnitude, self.y / magnitude)
def __add__(self, v):
return Vector(self.x + v.x, self.y + v.y)
def __sub__(self, v):
return Vector(self.x - v.x, self.y - v.y)
def __mul__(self, v):
        print("mul")
return (self.x * v.x) + (self.y * v.y)
def __rmul__(self, v):
return Vector(v * self.x, v * self.y)
def __str__(self):
return "vector({}, {})".format(self.x, self.y)
```
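A quick usage example (added here for illustration) to confirm that the operator overloads behave as expected:
```
v = Vector(3, 4)
w = Vector(1, 2)
print(v)              # vector(3.0, 4.0)
print(v.magnitude())  # 5.0
print(v + w)          # vector(4.0, 6.0)
print(v * w)          # dot product 3*1 + 4*2 = 11.0 (also prints the "mul" debug message)
print(2 * v)          # scalar multiple via __rmul__: vector(6.0, 8.0)
```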
## Plotting vector
```
def drawVector(v, s="b-"):
if v.x < 0:
x_axis = np.arange(v.x, 1)
else:
x_axis = np.arange(0, v.x+1)
#print x_axis
plt.plot(x_axis, x_axis * float(v.y)/float(v.x), s)
plt.xticks(range(-12, 12))
plt.yticks(range(-6, 12))
plt.rcParams["figure.figsize"] = [12, 9]
```
## Unit vector
```
drawVector(Vector(3, 9), "b-")
drawVector(Vector(3, 9).direction(), "r-") # unit vector
plt.grid(True)
plt.title("the red one is the unit vector of the blue vector")
print(Vector(3, 9).direction())
plt.show()
```
## Addition and subtraction
```
v1 = Vector(8, 3)
v2 = Vector(2, 4)
a = v1 + v2
s1 = v1 - v2
s2 = v2 - v1
drawVector(v1, "y-")
drawVector(v2, "g-")
drawVector(a, "b--")
drawVector(s1, "r--")
drawVector(s2, "c--")
plt.grid()
v1 = Vector(3, 4)
v2 = Vector(2, -2)
a = v1 + v2
s1 = v1 - v2
s2 = v2 - v1
drawVector(v1, "y-")
drawVector(v2, "g-")
drawVector(a, "b--")
drawVector(s1, "r--")
drawVector(s2, "c--")
plt.grid()
```
## Dot product and projection
```
v1 = Vector(7, 0)
v2 = Vector(8, 4)
u = v1.direction()
z = (u * v2) * u
drawVector(v1, "b-")
drawVector(v2, "g-")
drawVector(z, "r--")
plt.title("projection of the green vector on the blue vector")
plt.grid()
print(v1 * v2)
v1 = Vector(8, 2)
v2 = Vector(3, 5)
u = v1.direction()
z = (u * v2) * u
drawVector(v1, "b-")
drawVector(v2, "g-")
drawVector(z, "r-")
drawVector(z - v2, "c--")
plt.title("projection of the green vector on the blue vector")
plt.show()
plt.grid()
v1 = Vector(2, 1)
v2 = Vector(-5, 7)
u = v1.direction()
p = (u * v2) * u
print (v1 * v2)
drawVector(v1, "b-")
drawVector(v2, "g-")
drawVector(p, "r--")
plt.grid()
x1 = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y1 = [12.0, 18.0, 14.0, 8.0, 10.0, 6.0]
x2 = [4, 5, 7, 8]
y2 = [47, 37, 22, 24]
plt.plot(x1, y1, "o", color="yellow")
plt.plot(x2, y2, "x", color="green")
plt.plot([4], [40], "x", color="red")
plt.plot([2], [22], "o", color="red")
#plt.plot(range(9),[ i * - 4.6 + 54 for i in range(9)], ":", color="blue")
plt.plot(range(9),[ i * - 4.6 + 44 for i in range(9)], color="blue")
#plt.plot(range(8),[ i * - 4.6 + 34 for i in range(8)], ":", color="blue")
plt.title("Figure 3: maximum margin")
5, 5 * - 4.6 + 44
w = Vector(2, 2)
v1 = Vector(3, 4)
u = w.direction()
print(u)
p = (u * v1) * u
#print (v1 * v2)
drawVector(w, "b-")
drawVector(v1, "g-")
drawVector(p, "r--")
plt.plot(range(-7, 7), [-i * 1. for i in range(-7, 7)], "k")
plt.grid()
```
**This is reference code; it is not necessary for the program to run.**
**This R code is integrated into the main experiment code, which is written in Python.**
```
%load_ext blackcellmagic
# R code
suppressWarnings({suppressMessages({
library(mlrMBO)
library(ggplot2)
})})
ps = makeParamSet(
makeIntegerParam("power", lower = 10, upper = 2200),
makeIntegerParam("time", lower = 500, upper = 2000),
makeDiscreteParam("gas", values = c("Argon")),
makeIntegerParam("pressure", lower = 920, upper = 930)
)
ctrl = makeMBOControl(y.name = "ratio")
ctrl = setMBOControlInfill(ctrl, opt = "focussearch", opt.focussearch.maxit = 10, opt.focussearch.points = 10000, crit = makeMBOInfillCritEI())
data=read.csv("dataset-2.csv")
data<-na.omit(data)
suppressMessages({opt.state = initSMBO(par.set = ps, design = data, control = ctrl, minimize = FALSE, noisy = TRUE)})
print("Proposed parameters:")
prop = suppressWarnings({proposePoints(opt.state)})
print(prop$prop.points)
x<-prop$prop.points
write.table(x, file = "dataset-2.csv", sep = ",", append = TRUE, quote = FALSE,col.names = FALSE, row.names = FALSE)
```
**The following is the way to integrate the R code into a Python environment**
```
#rpy2 is the package to be installed for R-Python interfacing!
import rpy2.robjects as robjects
robjects.r('''
suppressWarnings({suppressMessages({
library(mlrMBO)
library(ggplot2)
})})
ps = makeParamSet(
makeIntegerParam("power", lower = 10, upper = 2200),
makeIntegerParam("time", lower = 500, upper = 2000),
makeDiscreteParam("gas", values = c("Argon")),
makeIntegerParam("pressure", lower = 920, upper = 930)
)
ctrl = makeMBOControl(y.name = "ratio")
ctrl = setMBOControlInfill(ctrl, opt = "focussearch", opt.focussearch.maxit = 10, opt.focussearch.points = 10000, crit = makeMBOInfillCritEI())
data=read.csv("dataset-2.csv")
data<-na.omit(data)
suppressMessages({opt.state = initSMBO(par.set = ps, design = data, control = ctrl, minimize = FALSE, noisy = TRUE)})
print("Proposed parameters:")
prop = suppressWarnings({proposePoints(opt.state)})
print(prop$prop.points)
x<-prop$prop.points
write.table(x, file = "dataset-2.csv", sep = ",", append = TRUE, quote = FALSE,col.names = FALSE, row.names = FALSE)
''')
```
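To pick up the proposal on the Python side, the appended row can be read back with pandas. This is a minimal sketch; it assumes `dataset-2.csv` is in the working directory and that the R code above has appended the proposed point as its last row:
```
import pandas as pd

# inspect the most recently proposed parameter set
data = pd.read_csv("dataset-2.csv")
print(data.tail(1))
```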
# Illustration of the sampling theorem
## The Shannon sampling theorem
A signal $f(t)$ with Fourier transform that is zero outside $[-\omega_1, \omega_1]$ is completely described by equidistant points $f(kh)$ if the sampling frequency is higher than $2\omega_1$.
### Reconstruction
The reconstruction is given by
\begin{equation}
f(t) = \sum_{k=-\infty}^\infty f(kh) \frac{\sin (\omega_s(t-kh)/2)}{\omega_s (t-kh)/2} = \sum_{k=-\infty}^\infty f(kh) \mathrm{sinc} \frac{\omega_s(t-kh)}{2}
\end{equation}
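As a small illustration of this formula (not part of the original example), here is a finite-sum reconstruction sketch; with only finitely many samples the truncation error is largest near the edges of the interval:
```
import numpy as np

def reconstruct(samples, h, t):
    """Finite-sum Shannon reconstruction from samples f(kh), evaluated at times t."""
    # np.sinc(x) = sin(pi x)/(pi x), so sinc(w_s (t - kh)/2) with w_s = 2 pi/h equals np.sinc((t - kh)/h)
    return sum(fk * np.sinc((t - k * h) / h) for k, fk in enumerate(samples))

h = 0.1                              # sampling interval
tk = np.arange(0, 1, h)              # sampling instants kh
f = np.cos(2 * np.pi * 2 * tk)       # 2 Hz test signal, well below the Nyquist frequency 1/(2h) = 5 Hz
t = np.linspace(0, 1, 500)
f_hat = reconstruct(f, h, t)
print(np.max(np.abs(f_hat - np.cos(2 * np.pi * 2 * t))))  # truncation error, dominated by the interval edges
```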
## Example from class, Problem 7.2 in Åström & Wittenmark
A signal $y(t)$ that we want to sample for the purpose of feedback control has frequency content within the range $(-\omega_0, \omega_0)$. The signal is corrupted by a sinusoidal noise at the frequency $5\omega_0$; let's assume a cosine, since its spectrum is real:
\begin{equation}
y_m(t) = y(t) + a\cos 5\omega_0 t
\end{equation}
What is the lowest sampling frequency we can use, and still separate the sampled sinusoid (possibly its alias frequency) from the frequency content of $y(t)$?
1. Solution in book: $\omega_s = 6\omega_0$.
2. Suggested in class: $\omega_s = 2\omega_0$.
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# Generate signal of interest y(t). Let's use the sinc^2(t) function, which has a triangular fourier transform
# within (-2\pi, 2\pi)
def y_measurement(w0, t):
y = np.sinc(w0*t/(2*np.pi))**2
n = 0.1*np.cos(5*w0*t)
return (y+n, y, n)
Nc = 4000 # Number of samples in "continuous" signal
T = 10.0 # Seconds to simulate
hc = T/Nc # Sampling frequency of "continuous" signal
wNc = np.pi/hc
t = np.linspace(0,10, Nc)
w0 = 2*np.pi
(ym, y, n) = y_measurement(w0, t)
# Fourier transforms
Yf = np.fft.fft(y)
Nf = np.fft.fft(n)
wpos = np.linspace(0, wNc, Nc//2)
# Plot signals and discrete Fourier transform
plt.figure(figsize=(16,4))
plt.plot(t,y)
plt.plot(t,n)
plt.plot(t, y+n)
plt.xlabel(r'$t$ [s]')
plt.xlim((-0.1,4))
plt.legend((r'$y(t)$', r'$n(t)$', r'$y(t)+n(t)$'))
plt.title('Time series')
plt.figure(figsize=(16,4))
plt.plot(np.hstack((-wpos[::-1]-wpos[1], wpos)),
np.hstack((Yf[int(Nc/2):], Yf[:int(Nc/2)])))
plt.plot(np.hstack((-wpos[::-1]-wpos[1], wpos)),
np.hstack((Nf[int(Nc/2):], Nf[:int(Nc/2)])))
plt.xlabel(r'$\omega$ [rad/s]')
plt.xlim((-6*w0, 6*w0))
plt.xticks((-5*w0, -w0, w0, 5*w0))
plt.ylim((-20, 220))
lbls=plt.gca().set_xticklabels([r'$-5\omega_0$', r'$-\omega_0$', r'$\omega_0$', r'$5\omega_0$'])
plt.title('Spectrum (real part)')
# Now let's sample at ws=6w0
N = 600 # Number of samples to take
ws1 = 6*w0
h1 = 2*np.pi/ws1
ts1 = np.arange(N)*h1
(ym1, y1, n1) = y_measurement(w0, ts1)
Ym1f = np.fft.fft(ym1)
wpos1 = np.linspace(0, ws1/2, N//2)
# Plot the sampled signal and its spectrum
plt.figure(figsize=(16,4))
plt.plot(t, ym, color=[0.7, 0.7, 1])
plt.stem(ts1,ym1, linefmt='r--', markerfmt='ro', basefmt = 'r-')
plt.xlabel(r'$t$ [s]')
plt.xlim((-0.1,4))
plt.title('Time series')
plt.figure(figsize=(10,4))
plt.plot(np.hstack((-wpos1[::-1]-wpos1[1], wpos1)),
np.hstack((Ym1f[int(N/2):], Ym1f[:int(N/2)])))
plt.xlabel(r'$\omega$ [rad/s]')
plt.xlim((-6*w0, 6*w0))
plt.xticks((-5*w0, -w0, w0, 5*w0))
lbls=plt.gca().set_xticklabels([r'$-5\omega_0$', r'$-\omega_0$', r'$\omega_0$', r'$5\omega_0$'])
plt.title('Spectrum (real part)')
# And sampling at ws=2w0
N = 600 # Number of samples to take
ws2 = 2*w0
h2 = 2*np.pi/ws2
ts2 = np.arange(N)*h2
(ym2, y2, n2) = y_measurement(w0, ts2)
Ym2f = np.fft.fft(ym2)
Ym2fpos = Ym2f[:int(N/2)]
Ym2fpos[-1] = 0.5*Ym2f[int(N/2)] # Divide the energy at wN equally among the positive and negative part
Ym2fneg = Ym2f[int(N/2):]
Ym2fneg[0] /= 2.0
wpos2 = np.linspace(0, ws2/2, N//2)
# Plot the sampled signal and its spectrum
plt.figure(figsize=(16,4))
plt.plot(t, ym, color=[0.7, 0.7, 1])
plt.stem(ts2,ym2, linefmt='r--', markerfmt='ro', basefmt = 'r-')
plt.xlabel(r'$t$ [s]')
plt.xlim((-0.1,4))
plt.title('Time series')
plt.figure(figsize=(16,4))
plt.plot(np.hstack((-wpos2[::-1]-wpos2[1], wpos2)), np.hstack((Ym2fneg, Ym2fpos)))
plt.xlabel(r'$\omega$ [rad/s]')
plt.xlim((-6*w0, 6*w0))
plt.xticks((-5*w0, -w0, w0, 5*w0))
lbls=plt.gca().set_xticklabels([r'$-5\omega_0$', r'$-\omega_0$', r'$\omega_0$', r'$5\omega_0$'])
plt.title('Spectrum (real part)')
```
## A digital notch filter to get rid of the alias of the sinusoid at $\omega_o$
A digital filter with two complex-conjugated zeros at $\mathrm{e}^{\pm i \omega_n h}$ will filter out signals at the frequency $\omega_n$. In this case, with $\omega_s = 2\omega_0$, we want the zeros at the Nyquist frequency, since here $\omega_N = \omega_0$. In order not to attenuate too much of the signal content near $\omega_0$, we combine the zeros with a resonance near that frequency, i.e. two poles close to the unit circle at the same frequency. How close the poles are to the unit circle is determined by a parameter $r < 1$. This gives the filter
\begin{equation}
H(z) = \frac{ z^2 -2\cos \omega_0 h z + 1}{z^2 - 2r\cos \omega_0 hz + r^2}
\end{equation}
With $r=0.9$, and $\omega_0 h = \pi$, this gives the filter
\begin{equation}
H(z) = \frac{(z+1)^2}{z^2 + 1.8z + 0.81}
\end{equation}
```
# So, apply a digital notch filter at w0
import scipy.signal as ss
r = 0.9
bf1 = [1, -2*np.cos(w0*h1), 1]
af1 = [1, -2*np.cos(w0*h1), r**2]
bf2 = [1, 2, 1]
af2 = [1, 2*r, r**2]
yf1 = ss.lfilter(bf1, af1, ym1)
yf2 = ss.lfilter(bf2, af2, ym2)
# Fourier transform
Yf1f = np.fft.fft(yf1)*h1
Yf1fpos = Yf1f[:int(N/2)]
Yf1fpos[-1] = 0.5*Yf1f[int(N/2)] # Divide the energy at wN equally among the positive and negative part
Yf1fneg = Yf1f[int(N/2):]
Yf1fneg[0] /= 2.0
Yf2f = np.fft.fft(yf2)*h2
Yf2fpos = Yf2f[:int(N/2)]
Yf2fpos[-1] = 0.5*Yf2f[int(N/2)] # Divide the energy at wN equally among the positive and negative part
Yf2fneg = Yf2f[int(N/2):]
Yf2fneg[0] /= 2.0
wpos2 = np.linspace(0, ws2/2, N//2)
wpos1 = np.linspace(0, ws1/2, N//2)
# Plot the sampled signal and its spectrum
plt.figure(figsize=(16,4))
plt.plot(t, ym)
plt.plot(t, y)
plt.stem(ts2,ym2, linefmt='r--', markerfmt='ro', basefmt = 'r-')
plt.stem(ts2,yf2, linefmt='m--', markerfmt='mo', basefmt = 'm-')
plt.stem(ts1[::3],yf1[::3], linefmt='y--', markerfmt='yo', basefmt = 'y-')
plt.xlabel(r'$t$ [s]')
plt.xlim(-0.1,4)
plt.legend((r'$y(t)+n(t)$', r'$y(t)$', r'Sampled at $2\omega_0$', r'Sampled at $2\omega_0$ and filtered',
r'Sampled at $6\omega_0$, filtered and resampled'))
plt.title('Time series')
plt.figure(figsize=(16,4))
plt.plot(np.hstack((-wpos2[::-1]-wpos2[1], wpos2)), np.hstack((Yf2fneg, Yf2fpos)))
plt.plot(np.hstack((-wpos1[::-1]-wpos1[1], wpos1)), np.hstack((Yf1fneg, Yf1fpos)))
plt.xlabel(r'$\omega$ [rad/s]')
plt.xlim((-3*w0, 3*w0))
plt.xticks((-2*w0, -w0, w0, 2*w0))
lbls=plt.gca().set_xticklabels([r'$-2\omega_0$', r'$-\omega_0$', r'$\omega_0$', r'$2\omega_0$'])
plt.legend((r'Sampled at $2\omega_0$', r'Sampled at $6\omega$'))
plt.title('Spectrum (real part) of filtered signals')
```
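A quick check of the notch filter's frequency response (using `bf2`, `af2`, `w0` and `h2` from the cell above): the gain should essentially vanish at $\omega_0$ and stay close to the DC gain of $4/3.61 \approx 1.1$ well below it.
```
import numpy as np
import scipy.signal as ss

# magnitude response of the notch filter at selected frequencies
w, H = ss.freqz(bf2, af2, worN=1024)   # w is in rad/sample
w_rad_s = w / h2                       # convert to rad/s using the sampling interval h2
print(np.abs(H[np.argmin(np.abs(w_rad_s - w0))]))        # ~0: the zeros sit at omega_0 (= omega_N here)
print(np.abs(H[np.argmin(np.abs(w_rad_s - 0.1 * w0))]))  # close to the DC gain of about 1.1
```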
# Fake Cross-section problem for TG Testing
```
import material_pb2 as mat
import numpy as np
```
Energy groups and $\chi$
```
energy_groups = [2e7, 1.353e6, 9.119e3, 3.928, 0.6251, 0.1457, 0.0569, 0]
normalized_chi = [0.5, 0.25, 0.25, 0, 0, 0, 0]
```
Base location to save files:
```
folder = "/home/josh/repos/bart/benchmarks/tgnda_fake_cross_sections/"
def make_material(name, sigma_t, sigma_s, is_fissionable, nu_sig_f):
material = mat.Material()
material.full_name = name
material.id = name
material.abbreviation = name
material.is_fissionable = is_fissionable
material.number_of_groups = 7
eg = mat.Material.VectorProperty()
eg.id = mat.Material.ENERGY_GROUPS
eg.value.extend(energy_groups)
material_sigma_t = mat.Material.VectorProperty()
material_sigma_t.id = mat.Material.SIGMA_T
material_sigma_t.value.extend(sigma_t)
diff = mat.Material.VectorProperty()
diff.id = mat.Material.DIFFUSION_COEFF
diff.value.extend([1.0/(3.0 * val) for val in sigma_t])
if is_fissionable:
nsf = mat.Material.VectorProperty()
nsf.id = mat.Material.NU_SIG_F
nsf.value.extend(nu_sig_f)
chi = mat.Material.VectorProperty()
chi.id = mat.Material.CHI
chi.value.extend(normalized_chi)
material_sigma_s = mat.Material.MatrixProperty()
material_sigma_s.id = mat.Material.SIGMA_S
material_sigma_s.value.extend(sigma_s)
if is_fissionable:
material.vector_property.extend([eg, material_sigma_t, nsf, chi, diff])
else:
material.vector_property.extend([eg, material_sigma_t, diff])
material.matrix_property.extend([material_sigma_s])
filename = folder + name + ".material"
f = open(filename, 'wb')
f.write(material.SerializeToString())
f.close()
```
----
## Materials
### High scattering fissionable material
```
name = "high_scattering_fissionable"
sigma_t = [1, 1, 1, 1, 1, 1, 1]
sigma_s = np.array([[0.499, 0, 0, 0, 0, 0, 0],
[0.25, 0.499, 0, 0, 0, 0, 0],
[0.25, 0.499, 0.459, 0, 0, 0, 0],
[0, 0, 0.24, 0.3, 0.29, 0.25, 0.2],
[0, 0, 0.1, 0.23, 0.3, 0.24, 0.2],
[0, 0, 0.1, 0.23, 0.2, 0.3 , 0.2],
[0, 0, 0.1, 0.23, 0.2, 0.2, 0.3]])
nu = 2
nu_sig_f = []
for group in range(7):
sigma_s[:,group] *= sigma_t[group]
sigma_a = sigma_t[group] - np.sum(sigma_s[:,group])
nu_sig_f.append(nu * 0.95*sigma_a)
sigma_s_list = []
for row in sigma_s:
for val in row:
sigma_s_list.append(val)
make_material(name, sigma_t, sigma_s_list, True, nu_sig_f)
```
### Non-fissionable reflector
```
name = "high_scattering_reflector"
sigma_t = [1, 1, 1, 1, 1, 1, 1]
sigma_s = np.array([[0.499, 0, 0, 0, 0, 0, 0],
[0.25, 0.499, 0, 0, 0, 0, 0],
[0.25, 0.499, 0.459, 0, 0, 0, 0],
[0, 0, 0.24, 0.3, 0.29, 0.25, 0.23],
[0, 0, 0.1, 0.23, 0.3, 0.24, 0.23],
[0, 0, 0.1, 0.23, 0.2, 0.3 , 0.23],
[0, 0, 0.1, 0.23, 0.2, 0.2, 0.3]])
for group in range(7):
sigma_s[:,group] *= sigma_t[group]
sigma_a = sigma_t[group] - np.sum(sigma_s[:,group])
sigma_s_list = []
for row in sigma_s:
for val in row:
sigma_s_list.append(val)
make_material(name, sigma_t, sigma_s_list, False, [])
```
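As a quick sanity check (assuming the files above were written successfully), a serialized material can be read back with the same protobuf bindings:
```
# read one of the serialized materials back and inspect a few fields
check = mat.Material()
with open(folder + "high_scattering_reflector.material", "rb") as f:
    check.ParseFromString(f.read())
print(check.full_name, check.number_of_groups, check.is_fissionable)
```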
```
import os
import numpy as np
from PIL import Image
palette = np.array([
[0, 0, 0],
[128, 0, 0],
[0, 128, 0],
[128, 128, 0],
[0, 0, 128],
[128, 0, 128],
[0, 128, 128],
[128, 128, 128],
[64, 0, 0],
[192, 0, 0],
[64, 128, 0],
[192, 128, 0],
[64, 0, 128],
[192, 0, 128],
[64, 128, 128],
[192, 128, 128],
[0, 64, 0],
[128, 64, 0],
[0, 192, 0],
[128, 192, 0],
[0, 64, 128]
])
def IoU_Calculation(gt_img, pred_img):
gt_img = gt_img.flatten()
pred_img = pred_img.flatten()
IoU = []
for label in range(0,21):
intersection = 0
union = 0
for gt, pred in zip(gt_img, pred_img):
if (gt == label and pred == label):
intersection += 1
if (gt == label or pred == label):
union += 1
if (intersection == 0):
IoU.append(0)
else:
IoU.append(intersection/union)
return IoU
def fromColor2Label(img):
img = (np.array(img))
converted_label = np.zeros(img.shape[:2])
for i, rows in enumerate(img):
for j, v in enumerate(rows):
for index, color in enumerate(palette):
if (np.array_equal(v,color)):
converted_label[i,j] = index
return converted_label
# converted_label = fromColor2Label(Image.open('/home/dongwonshin/Desktop/dilation/batch_results/2008_000059.png'))
def fromLabel2Color(label):
label = np.array(label)
converted_color = np.zeros((label.shape[0],label.shape[1],3))
for i, rows in enumerate(label):
for j, v in enumerate(rows):
color = palette[v]
converted_color[i,j] = color
return converted_color
# converted_color = fromLabel2Color(Image.open('/home/dongwonshin/Desktop/Datasets/benchmark_RELEASE/dataset/pngs/2008_000009.png'))
# convert the ground-truth label images to color images
import scipy.misc
from PIL import Image
with open('/home/dongwonshin/Desktop/Datasets/benchmark_RELEASE/dataset/val.txt') as fp:
contents = fp.readlines()
for n, content in enumerate(contents):
gt_path = os.path.join('/home/dongwonshin/Desktop/Datasets/benchmark_RELEASE/dataset/pngs',content[:-1]+'.png')
converted_gt_path = os.path.join('/home/dongwonshin/Desktop/Datasets/benchmark_RELEASE/dataset/pngs_converted',content[:-1]+'.png')
gt_img = scipy.misc.imread(gt_path)
converted_gt_img = fromLabel2Color(gt_img)
scipy.misc.imsave(converted_gt_path, converted_gt_img)
# IoU = IoU_Calculation(gt_img, pred_img)
# print(IoU)
```
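The per-pixel Python loops above are straightforward but slow on full-resolution images. A vectorized NumPy variant (an optional alternative, not used by the code above, but computing the same per-class IoU values) could look like this:
```
import numpy as np

def iou_vectorized(gt_img, pred_img, num_classes=21):
    """Per-class IoU using boolean masks instead of a per-pixel Python loop."""
    gt = np.asarray(gt_img).ravel()
    pred = np.asarray(pred_img).ravel()
    ious = []
    for label in range(num_classes):
        gt_mask = gt == label
        pred_mask = pred == label
        union = np.logical_or(gt_mask, pred_mask).sum()
        intersection = np.logical_and(gt_mask, pred_mask).sum()
        ious.append(0 if union == 0 else intersection / union)
    return ious
```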
```
from MyCreds.mycreds import USCensusAPI
import requests
import pandas as pd
import ast
# API Reference: https://www.census.gov/data/developers/guidance/api-user-guide.Example_API_Queries.html
host = 'https://api.census.gov/data'
year = '/2019'
# Data Dictionary: https://api.census.gov/data.html
dataset_acronym = '/acs/acs1'
g = '?get='
# Variables for the ACS: https://api.census.gov/data/2005/acs/acs1/variables.html
variables = 'NAME,B01001_001E'
location = '&for=us:*'
usr_key = f"&key={USCensusAPI.api_key}"
query_url = f"{host}{year}{dataset_acronym}{g}{variables}{location}{usr_key}"
# API Reference: https://www.census.gov/data/developers/guidance/api-user-guide.Example_API_Queries.html
host = 'https://api.census.gov/data'
year = '/2019'
# Data Dictionary: https://api.census.gov/data.html
dataset_acronym = '/acs/acs1'
g = '?get='
# Variables for the ACS: https://api.census.gov/data/2005/acs/acs1/variables.html
variables = 'B01001_002E,B01001_003E,B01001_004E,B01001_005E,B01001_006E,B01001_007E,B01001_008E,B01001_009E,B01001_010E,B01001_011E,B01001_012E,B01001_013E,B01001_014E,B01001_015E,B01001_016E,B01001_017E,B01001_018E,B01001_019E,B01001_020E,B01001_021E,B01001_022E,B01001_023E,B01001_024E,B01001_025E'
location = '&for=us:*'
usr_key = f"&key={USCensusAPI.api_key}"
query_url = f"{host}{year}{dataset_acronym}{g}{variables}{location}{usr_key}"
response = requests.get(query_url)
print(response.text)
response = requests.get(query_url)
print(response.text)
```
B01001_001E is the Estimated Total for Sex by Age without delineation. In other words, this query is basically just returning 328,239,523, which is the total estimated US population in 2019.
Rather than going through and copying all the variable names from the reference table, I'm going to try and make things easier on myself and see if I can't just read that table in with pandas and extract the variable names.
```
variable_table_url = 'https://api.census.gov/data/2019/acs/acs1/variables.html'
v_table = pd.read_table(variable_table_url, skiprows=59)
v_table
```
Well, line 59 threw an error so I skipped it but the results aren't good. Because I'm too tired and lazy right now to figure out how to make that work properly, I'm going to give read_html a shot really quick.
```
variable_table_url = 'https://api.census.gov/data/2019/acs/acs1/variables.html'
v_table = pd.read_html(variable_table_url)
v_table
type(v_table)
variable_df = pd.DataFrame(v_table[0])
variable_df
```
That's more like it! This will make it easier to automate pulling out multiple variables and giving them more appropriate names than 'B01001_001E', for instance.
```
total_male_by_age_variables = ",".join(variable_df.iloc[3:27]['Name'].values)
total_male_by_age_variables
```
Ok, that gets me a string representation of all the variable names for the male population by age. I just picked those because they were at the top of the list. I'm going to insert those into the API query and see what we get here.
```
# Only thing changing here is the variables which are substituted in under total_male_by_age_variables
m_query_url = f"{host}{year}{dataset_acronym}{g}{total_male_by_age_variables}{location}{usr_key}"
m_response = requests.get(m_query_url)
m_response.text
```
So we really just want the second list since those will be the values. We'll also want to use the 'label' column from the `variable_df` to get column headers that actually mean something. The last item in the `m_response.text[1]` is just the geography code for the US which is 1, so we'll drop that value as well.
```
m_values = [int(i) for i in ast.literal_eval(m_response.text)[1][:-1]]
m_values
```
We'll clean the labels.
Example: 'Estimate!!Total:!!Male:!!Under 5 years' -> 'Male: Under 5 years'
```
m_labels = ['Male: Total', *[i.strip('Estimate!!Total:!!').replace("!!", " ") for i in variable_df.iloc[4:27]['Label'].values]]
m_labels
{m_labels[i]: m_values[i] for i in range(len(m_labels))}
pd.DataFrame({2019: {m_labels[i]: m_values[i] for i in range(len(m_labels))}}).reindex(m_labels)
```
___
Ok, there is all the male population information for 2019. I'm going to try 2018 as well but I'm worried that the indexes of variables may have changed over the years. We'll see how it goes. Before I do that I'm going to write some functions so I can just pop new info in without copying and pasting for every year now that I have a somewhat working proof of concept.
```
from MyCreds.mycreds import USCensusAPI
import requests
import pandas as pd
import ast
year = 2018
def get_variable_table_df(year):
variable_table_url = f'https://api.census.gov/data/{year}/acs/acs1/variables.html'
v_table = pd.read_html(variable_table_url)
variable_df = pd.DataFrame(v_table[0])
return variable_df
v_table = get_variable_table_df(year)
v_table
def get_male_by_age_index(variable_table):
start_index = variable_table[((variable_table['Label'] == 'Estimate!!Total!!Male') | (variable_table['Label'] == 'Estimate!!Total:!!Male:')) & (variable_table['Concept'] == 'SEX BY AGE')].index[0]
end_index = variable_table[((variable_table['Label'] == 'Estimate!!Total!!Male!!85 years and over') | (variable_table['Label'] == 'Estimate!!Total:!!Male:!!85 years and over')) & (variable_table['Concept'] == 'SEX BY AGE')].index[0]
return start_index, end_index + 1
male_by_age_indeces = get_male_by_age_index(v_table)
male_by_age_indeces
def get_variable_names(variable_table, indeces):
total_male_by_age_variables = ",".join(variable_table.iloc[indeces[0]: indeces[1]]['Name'].values)
return total_male_by_age_variables
variables = get_variable_names(v_table, male_by_age_indeces)
variables
def get_query_url(year, variables):
# API Reference: https://www.census.gov/data/developers/guidance/api-user-guide.Example_API_Queries.html
host = 'https://api.census.gov/data'
year = f'/{year}'
# Data Dictionary: https://api.census.gov/data.html
dataset_acronym = '/acs/acs1'
g = '?get='
# Variables for the ACS: https://api.census.gov/data/2005/acs/acs1/variables.html
# variables = 'NAME,B01001_001E'
location = '&for=us:*'
usr_key = f"&key={USCensusAPI.api_key}"
query_url = f"{host}{year}{dataset_acronym}{g}{variables}{location}{usr_key}"
return query_url
query_url = get_query_url(year, variables)
def get_query_text(query_url):
response = requests.get(query_url)
return response.text
response_text = get_query_text(query_url)
response_text
def get_values_from_response(response_text):
values = [int(i) for i in ast.literal_eval(response_text)[1][:-1]]
return values
vals = get_values_from_response(response_text)
vals
def get_labels(variable_df, indeces):
# labels = ['Male: Total', *[i.strip('Estimate!!Total:!!').replace("!!", " ") for i in variable_df.iloc[indeces[0]:indeces[1]]['Label'].values]]
labels = [i.replace("!!", " ").replace(":", "") for i in variable_df.iloc[indeces[0]:indeces[1]]['Label'].values]
return labels
labels = get_labels(v_table, male_by_age_indeces)
labels
def create_year_pop_dataframe(year, labels, values):
df = pd.DataFrame({year: {labels[i]: values[i] for i in range(len(labels))}}).reindex(labels)
return df
def create_male_pop_by_age_df(year):
v_table = get_variable_table_df(year)
male_by_age_indeces = get_male_by_age_index(v_table)
variables = get_variable_names(v_table, male_by_age_indeces)
query_url = get_query_url(year, variables)
response_text = get_query_text(query_url)
vals = get_values_from_response(response_text)
labels = get_labels(v_table, male_by_age_indeces)
df = create_year_pop_dataframe(year, labels, vals)
return df
df_2018 = create_male_pop_by_age_df(2018)
df_2018
df_2017 = create_male_pop_by_age_df(2017)
df_2017
df_2019 = create_male_pop_by_age_df(2019)
df_2019
df = df_2017.merge(df_2018, left_index=True, right_index=True)
df = df.merge(df_2019, left_index=True, right_index=True)
df.T.reset_index().rename({'index': 'Year'}, axis=1)
df_2005 = create_male_pop_by_age_df(2005)
df_2005
from tqdm.notebook import tqdm
years = [i for i in range(2005, 2020)]
male_pop_by_age_df = pd.DataFrame()
for year in tqdm(years):
try:
y_df = create_male_pop_by_age_df(year)
male_pop_by_age_df = male_pop_by_age_df.merge(y_df, left_index=True, right_index=True)
except:
next
male_pop_by_age_df
```
Hmmm, did something wrong here ^^^^
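One guess at the culprit (an assumption on my part, not verified here): `merge` defaults to an inner join, so merging each year into the initially empty `male_pop_by_age_df` keeps returning an empty frame and nothing accumulates. A sketch that collects the yearly frames and concatenates them instead:
```
# collect one DataFrame per year, then join them column-wise in a single step
yearly_frames = []
for year in tqdm(years):
    try:
        yearly_frames.append(create_male_pop_by_age_df(year))
    except Exception:
        continue  # some years may fail to download or parse; skip them

male_pop_by_age_df = pd.concat(yearly_frames, axis=1)
male_pop_by_age_df
```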
```
%%javascript
$('#appmode-leave').hide();
$('#copy-binder-link').hide();
$('#visit-repo-link').hide();
```
# [Open empty notebook](Empty.ipynb)
# Crystal Violet Virtual Experiment
```
import ipywidgets as ipw
import json
import random
import time
import pandas as pd
import os
import webbrowser
import math
import numpy as np
from IPython.display import display, Markdown, FileLink, clear_output
class StopExecution(Exception):
def _render_traceback_(self):
pass
%%javascript
IPython.OutputArea.prototype._should_scroll = function(lines) {
return false;
}
with open(".lab.json") as infile:
jsdata = json.load(infile)
params = jsdata["cv"]
t = int( time.time() * 1000.0 )
random.seed( ((t & 0xff000000) >> 24) +
((t & 0x00ff0000) >> 8) +
((t & 0x0000ff00) << 8) +
((t & 0x000000ff) << 24) )
params["NaOH"] = 0.5 * random.gauss(1,params["error"])
params["CV"] = 2.5e-5 * random.gauss(1,params["error"])
def run_experiment():
tt = random.gauss(T.value,params["et"])
x0 = v0.value + random.gauss(0,params["ev"])
x1 = v1.value + random.gauss(0,params["ev"])
x2 = v2.value + random.gauss(0,params["ev"])
lnk = params["A"] - params["B"]/params["R"]/(tt+273.15)
kr = math.exp(lnk)
vt = x0 + x1 + x2
coh = params["NaOH"] * x0 / vt
ccv = params["CV"] * x1 / vt
kd = kr * math.pow(coh,params["beta"])
Abs = params["A0"]*ccv
res = pd.DataFrame(columns=["#Time [s]" , "Absorbance"])
for i in range(2, params["nTime"]):
var_list = []
var_list.append(i)
expVal = Abs * math.exp(-kr*i) + abs(random.gauss(0,params["error"]) + 0.008)
var_list.append(expVal)
res.loc[len(res)] = var_list
res.to_csv(respath.value, index=False)
local_file = FileLink(respath.value, result_html_prefix="Click here to download: ")
with out_P:
display(local_file)
display(res.tail(params["nTime"]))
out_Error = ipw.Output()
out_P = ipw.Output()
# output filename
fileName = "results.csv"
respath = ipw.Text(fileName)
v0 = ipw.FloatSlider(value=10, min=0, max=20)
v1 = ipw.FloatSlider(value=10, min=0, max=20)
v2 = ipw.FloatSlider(value=10, min=0, max=20)
T = ipw.FloatSlider(value=25, min=10, max=40)
def reset(btn):
if os.path.exists(respath.value):
os.remove(respath.value)
with out_Error:
out_Error.clear_output()
with out_P:
out_P.clear_output()
clear_output()
create_ipw()
def calc(btn):
out_P.clear_output()
run_experiment()
# interactive buttons ---
btn_calc = ipw.Button(description="Perform Experiment", layout=ipw.Layout(width="150px"))
btn_calc.on_click(calc)
btn_reset = ipw.Button(description="Reset Experiment", layout=ipw.Layout(width="150px"))
btn_reset.on_click(reset)
# -- output widgets
def create_ipw():
rows = []
label_layout = ipw.Layout(width='300px')
rows.append(ipw.HBox([ipw.Label('Output filename : ',layout=label_layout),respath]))
rows.append(ipw.HBox([ipw.Label(value="2.5x10$^{-5}$M stock solution of CV (ml)",layout=label_layout),v0]))
rows.append(ipw.HBox([ipw.Label(value="0.5 M stock solution of NaOH (ml)",layout=label_layout),v1]))
rows.append(ipw.HBox([ipw.Label(value="Deionised water (ml)",layout=label_layout),v2]))
rows.append(ipw.HBox([ipw.Label(value="Temperature ($^\circ$C)",layout=label_layout),T]))
rows.append(ipw.HBox([btn_reset,btn_calc]))
rows.append(ipw.HBox([out_Error]))
rows.append(ipw.HBox([out_P]))
display(ipw.VBox(rows))
create_ipw()
```
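Once an experiment has been performed and `results.csv` written, the (pseudo-first-order) rate constant can be estimated from the simulated data with a linear fit of $\ln(\mathrm{Absorbance})$ against time. This is a post-processing sketch, not part of the virtual-lab app itself, and it assumes a `results.csv` produced with the default filename above:
```
import numpy as np
import pandas as pd

res = pd.read_csv("results.csv")
time_s = res["#Time [s]"].values
ln_abs = np.log(res["Absorbance"].values)
slope, intercept = np.polyfit(time_s, ln_abs, 1)  # ln(A) ~ intercept + slope * t
print("estimated rate constant:", -slope, "1/s")
```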


<h1 align='center'>Modelling the COVID-19 Outbreak</h1>
<h2 align='center'>with Laura G Funderburk</h2>

## Modelling the COVID19 Outbreak in Canada
In this notebook, we’ll implement a “<b>S</b>usceptible, <b>E</b>xposed, <b>I</b>nfected and <b>R</b>ecovered” (<b>SEIR</b>) model used in epidemiology, the study of how disease occurs in populations.
### What is a Mathematical Model
A mathematical model is a description of a system using <b>mathematical concepts</b> and <b>mathematical language</b>.
You can think of a math model as a tool to help us describe what we believe about the workings of phenomena in the world.
<b>We use the language of mathematics to express our beliefs.</b>
<b>We use mathematics (theoretical and numerical analysis) to evaluate the model, and get insights about the original phenomenon.</b>
### How do we model a problem using mathematics?
|Step | Description | Stage |
|-|-|-|
|1| **Choose what phenomenon you want to model** | |
|2| **What assumptions are you making about the phenomenon** | 1 |
|3| **Use a flow diagram to help you determine the structure of your model** | 1 |
|4| **Choose equations** | 2 |
|5| **Implement equations using Python** | 2 |
|6| **Solve equations** | 2 |
|7| **Study the behaviour of the model** | 3 |
|8| **Test the model** | 3 |
|9| **Use the model** | 3 |
### Our phenomenon of interest: modelling number of people affected by COVID-19
Let's turn now to an event that made headlines in 2020: the COVID-19 pandemic.
COVID-19 is a viral infection caused by a pathogen called SARS-CoV-2.
<center><img src='./images/23311_lores.jpg' style="width: 600px;"></center>
<center>SARS-CoV-2 virus. Illustration by CDC/ Alissa Eckert, MSMI; Dan Higgins, MAMS (2020) </center>
```
from IPython.lib.display import YouTubeVideo
# Introductory videos: mathematical modelling and COVID-19
YouTubeVideo('LTPJQnEZOLE')
YouTubeVideo('wdRYoAOCs_k')
```
### Assumptions for a first model
1. Mode of transmission of the disease from person to person is through contact ("contact transmission") between a person who interacts with an infectious person.
2. Once a person comes into contact with the pathogen, there is a period of time (called the latency period) in which they are infected, but cannot infect others (yet!).
3. The population is not constant (that is, people are born and die as time goes by).
4. A person in the population is in one of the following categories:
- <b>S</b>usceptible, i.e. not infected and not yet exposed to the virus,
- <b>E</b>xposed to the infection, i.e. exposed to the virus, but not yet infectious,
- <b>I</b>nfectious, and
- <b>R</b>ecovered from the infection.
5. People can die by "natural causes" during any of the stages. We assume an additional cause of death associated with the infectious stage.
6. People can get reinfected after they recover.
### Flow diagram representing those assumptions
How does a person move from one stage into another? In other words, how does a person go from susceptible to exposed, to infected, to recovered?
$\Delta$: Per-capita birth rate.
$\mu$: Per-capita natural death rate.
$\alpha$: Virus-induced average fatality rate.
$\beta$: Probability of disease transmission per contact (dimensionless) times the number of contacts per unit time.
$\epsilon$: Rate of progression from exposed to infectious (the reciprocal is the incubation period).
$\gamma$: Recovery rate of infectious individuals (the reciprocal is the infectious period).
$\delta$: Rate at which a recovered person re-enters the susceptible category.
<center><img src='./images/SEIR.png' style="width: 1200px;"></center>
### Using Mathematics & Code to Create a Simulation
Using a tool from Calculus called "Differential Equations", we can create a system that will allow us to study our model.
It will look daunting - but don't fret! This is what we refer to as using the language of mathematics to express our beliefs about a phenomenon.
Watch this YouTube Video to see how we get to the equations using our assumptions and the diagram.
<center><img src='./images/SEIR.png' style="width: 1200px;"></center>
### The mathematical model
$N$ is updated at each time step, and infected people die at a higher rate.
$$ N(t) = S(t) + E(t) + I(t) + R(t)$$
We can then express our model using differential equations
$$\frac{dS}{dt} = \Delta N - \beta \frac{S}{N}I - \mu S + \delta R$$
$$\frac{dE}{dt} = \beta \frac{S}{N}I - \mu E - \epsilon E =\beta \frac{S}{N}I - (\mu + \epsilon)E$$
$$\frac{dI}{dt} = \epsilon E - (\gamma+ \mu + \alpha )I$$
$$\frac{dR}{dt} = \gamma I - \mu R - \delta R = \gamma I - (\mu + \delta)R$$
Also, we can keep track of people who die due to the infection.
$$\frac{dD}{dt} = \alpha I $$
We can then solve the equations to see how the values for Susceptible (S), Exposed (E), Infectious (I) and Recovered (R) change over time.
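To make the solution step concrete, here is a minimal sketch of how this system can be integrated numerically with `scipy`. The actual implementation used by this notebook lives in `./scripts/covid19_model.py`, which we do not reproduce here; the parameter values and initial conditions below are purely illustrative assumptions.
```
# Minimal sketch: numerically integrating the SEIR system above with scipy.
# Parameter values and initial conditions are illustrative assumptions only.
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint

def seir(y, t, Delta, mu, alpha, beta, epsilon, gamma, delta):
    S, E, I, R = y
    N = S + E + I + R
    dS = Delta * N - beta * S / N * I - mu * S + delta * R
    dE = beta * S / N * I - (mu + epsilon) * E
    dI = epsilon * E - (gamma + mu + alpha) * I
    dR = gamma * I - (mu + delta) * R
    return [dS, dE, dI, dR]

# Delta, mu, alpha, beta, epsilon, gamma, delta (per day, illustrative)
params = (0.0, 0.0, 0.006, 0.5, 1/5, 1/10, 0.0)
y0 = [37_000_000, 0, 100, 0]          # initial S, E, I, R
t = np.linspace(0, 365, 366)          # one year, daily steps

S, E, I, R = odeint(seir, y0, t, args=params).T

plt.plot(t, I, label='Infectious')
plt.xlabel('Days')
plt.ylabel('People')
plt.legend()
plt.show()
```
The widgets in the following cells vary these same parameters interactively instead of hard-coding them.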
### Tinkering with the Parameters: $\beta$, the rate of contact
We can use Python code to solve for and plot the solutions to our system of equations.
Let's start with the rate of contact $\beta$. The more susceptible people are in contact with infectious people, the higher the value of $\beta$.
What happens when we reduce this rate, i.e. when we find ways to reduce contact between infectious and susceptible people? Run the cells below and use the widget to find out how the numbers change.
```
%run ./scripts/covid19_model.py
interact_manual(tinker_beta,
beta=widgets.FloatSlider(min=0, max=1, step=0.01, value=0.5,description='Beta: contact rate',style=style));
```
When we reduce the contact between infectious and susceptible people, we see that the number of new infections each infection generates is lower.
How can we reduce the contact in real life?
We can do things like social distancing, wearing masks, and using vaccines to prevent susceptible people from becoming exposed.
What is the rate $\beta$ required so that each infection generates less than 1 new infection?
What happens in our plot when we enter that value for $\beta$?
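One standard way to reason about this question analytically (quoted here as a textbook result for SEIR models of this form, not something derived in this notebook) is through the basic reproduction number

$$R_0 = \frac{\beta\,\epsilon}{(\epsilon + \mu)(\gamma + \mu + \alpha)}$$

Each infection generates less than one new infection when $R_0 < 1$, i.e. when

$$\beta < \frac{(\epsilon + \mu)(\gamma + \mu + \alpha)}{\epsilon}$$

If the natural birth and death rates are neglected ($\Delta = \mu = 0$), this reduces to $\beta < \gamma + \alpha$, which you can compare against the value of $\beta$ you find with the widget.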
### Tinkering with the Parameters $\beta$: the rate of contact and $\alpha$: the rate of death due to COVID-19
How deadly is COVID-19? Let's tinker with a new parameter $\alpha$ - in our diagram, this corresponds to the death by COVID-19 rate.
```
interact_manual(tinker_beta_alpha,
beta=widgets.FloatSlider(min=0, max=1, step=0.01, value=0.5,description='Beta: contact rate',style=style),
alpha=widgets.FloatSlider(min=0, max=1, step=0.01, value=0.5,description='Alpha: COVID-19 death rate',style=style));
```
Something interesting will happen now...the deadlier the virus is, the lower the number of new infections generated by each existing infection.
In the table found [here](https://coronavirus.jhu.edu/data/mortality) under "Case-fatality" there are percentages for the case fatality of COVID-19 for different countries.
If you want to try them using our notebook, recall that 1% can be represented by decimal values as 0.01, 10% as 0.1 and 100% as 1.0.
Do you think that COVID-19 is a deadly disease?
### Tinkering with the remaining parameters
Let's incorporate the rest of our parameters into the simulation.
$\Delta$: Per-capita birth rate.
$\mu$: Per-capita natural death rate.
$\alpha$: Virus-induced average fatality rate.
$\beta$: Probability of disease transmission per contact (dimensionless) times the number of contacts per unit time.
$\epsilon$: Rate of progression from exposed to infectious (the reciprocal is the incubation period).
$\gamma$: Recovery rate of infectious individuals (the reciprocal is the infectious period).
$\delta$: Rate at which a recovered person re-enters the susceptible category.
<center><img src='./images/FlowChart.png' style="width: 600px;"></center>
Run the cell below and change the values of parameters to see how the numbers change.
```
display(tab)
```
### Plotting number of infectious against reported cases of COVID-19 in Canada
Using COVID-19 Open Data [1], we are going to compare our model to the number of daily cases reported in Canada.
[1] COVID-19 Data Repository by the Center for Systems Science and Engineering (CSSE) at Johns Hopkins University, https://github.com/CSSEGISandData/COVID-19
In Canada, a person normally gets tested once they start displaying [symptoms of COVID-19](https://www.covid-19canada.com/#symptoms).
By the time a person starts showing symptoms, they can infect others who are in close contact with them.
Let's focus on the number of infectious people then and plot that against real data.
Run the following cell.
Play with the parameters to get a "first guess" of what the parameters in our model are. Try to get as close as possible to the curve of reported data.
```
%run -i ./scripts/open_data.py
display(tab1)
```
### Model's Limitations:
1. Our model assumes a constant contact rate - whereas in reality the contact rate has changed as we practice social distancing, enter lockdown, and ease lockdown measures.
2. Our model assumes immunity post recovery - which is yet to be proven or disproven.
3. Our model does not take into account that inner circles have a higher probability of exposure and infection when a member is infectious.
4. Our model does not take into account other factors, such as age, immunodeficiencies, and groups who might be more impacted than others.
5. The model is extremely sensitive to perturbations - small changes in parameters lead to significant changes in the number of people in the Exposed and Infected categories.
### Data's Limitations:
1. Infected individuals are those who got tested and obtained a positive result - it does not take into account actual cases that were never reported.
2. Infected individuals present symptoms - difficult to measure asymptomatic transmission.
3. The data does not accurately indicate whether reports at different times come from the same individual.
4. Data is based on test accuracy - a false negative means there might be infected people who tested negative, similar to a false positive, i.e. people who are not infected who test as if they were.
## Further reading
Infectious Disease Modelling https://towardsdatascience.com/infectious-disease-modelling-beyond-the-basic-sir-model-216369c584c4
Model adapted from Carcione José M., Santos Juan E., Bagaini Claudio, Ba Jing, A Simulation of a COVID-19 Epidemic Based on a Deterministic SEIR Model. <b>Frontiers in Public Health</b> Vol 8, 2020 https://www.frontiersin.org/article/10.3389/fpubh.2020.00230 DOI=10.3389/fpubh.2020.00230
[](https://github.com/callysto/curriculum-notebooks/blob/master/LICENSE.md)
# Linear Regression Consulting Project
You've been contracted by Hyundai Heavy Industries to help them build a predictive model for some ships. [Hyundai Heavy Industries](http://www.hyundai.eu/en) is one of the world's largest ship manufacturing companies and builds cruise liners.
You've been flown to their headquarters in Ulsan, South Korea to help them give accurate estimates of how many crew members a ship will require.
They are currently building new ships for some customers and want you to create a model and use it to predict how many crew members the ships will need.
Here is what the data looks like so far:
Description: Measurements of ship size, capacity, crew, and age for 158 cruise ships.

Variables/Columns (fixed-width positions):

| Variable | Columns |
|-|-|
| Ship Name | 1-20 |
| Cruise Line | 21-40 |
| Age (as of 2013) | 46-48 |
| Tonnage (1000s of tons) | 50-56 |
| passengers (100s) | 58-64 |
| Length (100s of feet) | 66-72 |
| Cabins (100s) | 74-80 |
| Passenger Density | 82-88 |
| Crew (100s) | 90-96 |
It is saved in a csv file for you called "cruise_ship_info.csv". Your job is to create a regression model that will help predict how many crew members will be needed for future ships. The client also mentioned that they have found that particular cruise lines will differ in acceptable crew counts, so it is most likely an important feature to include in your analysis!
Once you've created the model and tested it for a quick check on how well you can expect it to perform, make sure you take a look at why it performs so well!
```
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('cruise').getOrCreate()
df = spark.read.csv('cruise_ship_info.csv', header=True, inferSchema=True)
df.printSchema()
df.describe().show()
for item in df.head(3):
print(item,'\n')
df.show()
from pyspark.ml.linalg import Vectors
from pyspark.ml.feature import VectorAssembler, StringIndexer
df.groupBy('Cruise_line').count().show()
indexer = StringIndexer(inputCol='Cruise_line', outputCol='cruise_cat')
indexed = indexer.fit(df).transform(df)
indexed.show()
indexed.columns
assembler = VectorAssembler(inputCols=['Age','Tonnage','passengers','length','cabins',
'passenger_density','cruise_cat'],
outputCol='features'
)
output = assembler.transform(indexed)
output.select(['features', 'crew']).show()
final_data = output.select(['features', 'crew'])
train_data, test_data = final_data.randomSplit([0.7,0.3])
train_data.describe().show()
test_data.describe().show()
from pyspark.ml.regression import LinearRegression
lr = LinearRegression(featuresCol='features', labelCol='crew', predictionCol='prediction')
lr_model = lr.fit(train_data)
print('Linear model coefficients: {}'.format(lr_model.coefficients), '\n')
print('Linear model intercept: {}'.format(lr_model.intercept))
test_results = lr_model.evaluate(test_data)
print('MSE: ', test_results.meanSquaredError)
print('RMSE: ', test_results.rootMeanSquaredError)
print('R2 Value: ', test_results.r2)
```
The R2 value is 0.94, which is actually pretty good.
```
test_results.residuals.show()
from pyspark.sql.functions import corr
df.select(corr('crew', 'passengers')).show()
```
Okay, so maybe it does make sense! The strong correlation between crew and passengers helps explain why the model performs so well. That is good news for us, and it is information we can bring to the company!
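As a final step, the client wants crew estimates for the ships they are currently building. A minimal sketch of how the fitted pieces above could be reused on new, unlabeled data is shown below; `new_ships.csv` is a hypothetical file name, assumed to contain the same columns as the training data (minus `crew`).
```
# Sketch: scoring new ships with the already-fitted indexer, assembler and model.
# 'new_ships.csv' is a hypothetical file assumed to share the training schema.
new_df = spark.read.csv('new_ships.csv', header=True, inferSchema=True)

# Reuse the StringIndexer fitted on the historical data, then assemble features.
# (Cruise lines unseen during training would need handleInvalid='keep'.)
indexer_model = indexer.fit(df)
new_indexed = indexer_model.transform(new_df)
new_features = assembler.transform(new_indexed)

# Predicted crew counts (in 100s, matching the units of the 'crew' column)
lr_model.transform(new_features).select('prediction').show()
```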
# <font color = Blue> Lead Scoring Case Study
**Identifying hot leads using `Logistic Regression`**
Authors : Santh Raul
## Problem Statement
An education company named __X Education__ sells online courses to industry professionals. On any given day, many professionals who are interested in the courses land on their website and browse for courses.
The company markets its courses on several websites and search engines like Google. Once these people land on the website, they might browse the courses or fill up a form for the course or watch some videos. <br>
__When these people fill up a form providing their email address or phone number, they are classified to be a lead. Moreover, the company also gets leads through past referrals.__<br>
Once these leads are acquired, employees from the sales team start making calls, writing emails, etc. Through this process, some of the leads get converted while most do not. __The typical lead conversion rate at X education is around 30%.__
Now, although X Education gets a lot of leads, its lead conversion rate is very poor. For example, if, say, they acquire 100 leads in a day, only about 30 of them are converted. To make this process more efficient, the company wishes to identify the most potential leads, also known as __‘Hot Leads’__. <br>
If they successfully identify this set of leads, the lead conversion rate should go up as the sales team will now be focusing more on communicating with the potential leads rather than making calls to everyone. A typical lead conversion process can be represented using the following funnel:

__Lead Conversion Process__ - Demonstrated as a funnel
As you can see, there are a lot of leads generated in the initial stage (top) but only a few of them come out as paying customers from the bottom.<br>
In the middle stage, you need to nurture the potential leads well (i.e. educating the leads about the product, constantly communicating etc. ) in order to get a higher lead conversion.
X Education has appointed you to help them select the most promising leads, i.e. the leads that are most likely to convert into paying customers. <br>
The company requires you to build a model wherein you need to assign a lead score to each of the leads such that the customers with higher lead score have a higher conversion chance and the customers with lower lead score have a lower conversion chance.
__The CEO, in particular, has given a ballpark of the target lead conversion rate to be around 80%.__
### Data
You have been provided with a leads dataset from the past with around 9000 data points. This dataset consists of various attributes such as Lead Source, Total Time Spent on Website, Total Visits, Last Activity, etc. which may or may not be useful in ultimately deciding whether a lead will be converted or not. The target variable, in this case, is the column ‘Converted’ which tells whether a past lead was converted or not wherein 1 means it was converted and 0 means it wasn’t converted.
Another thing you need to watch out for is the levels present in the categorical variables.<br>
__Many of the categorical variables have a level called 'Select' which needs to be handled because it is as good as a null value.__
### Goal
1. Build a logistic regression model to assign a lead score between 0 and 100 to each of the leads which can be used by the company to target potential leads. A higher score would mean that the lead is hot, i.e. is most likely to convert whereas a lower score would mean that the lead is cold and will mostly not get converted.
## Technical Approach:
1. Read and Inspect Dataset
2. Data Cleaning
- Handling the “Select” level
- Missing Value Treatment
- Data Transformation
3. Exploratory Data Analysis
- Outlier Treatment
- Categorical Variables Analysis
- Numerical Variables Analysis
4. Data Preparation
- Dummies for all categorical columns
- Train-test split
- Feature Scaling
5. Model Building
- Variable selection using RFE
- Build Logistic Regression model
6. Model Evaluation
- Finding optimal cut-off
- Metrics evaluation on train set
- Validating model metrics on test set
- Model Summary
7. Lead Scoring
- Final Prediction and Lead Scoring
- Feature Importance
8. Conclusion & Recommendation
## i. Import Required Libraries
```
# Supress Warnings
import warnings
warnings.filterwarnings('ignore')
# Importing libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# visulaisation
from matplotlib.pyplot import xticks
%matplotlib inline
# Data display coustomization
pd.set_option('display.max_rows', 100)
pd.set_option('display.max_columns', 100)
import sklearn
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import MinMaxScaler
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import RFE
from sklearn import metrics
from sklearn.metrics import confusion_matrix
from sklearn.metrics import precision_score, recall_score
from sklearn.metrics import precision_recall_curve
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
```
## 1. Read and Inspect data Set
### 1.1 Read and Inspect data Set
```
df = pd.read_csv('Leads.csv')
df.head(5)
# shape of data set
print('Rows and columns of the dataset (Rows , Columns): ', df.shape)
# Look for data types
df.info()
# insights from dataset
df.describe()
# check for duplicate Leads
print('No of duplicate Leads :', df['Lead Number'].duplicated().sum())
```
Insights:
- There are `9240` observations and `37` features in the dataset
- No duplicate entries observed in the dataset
- `Tags, Lead Quality, Lead Profile, Asymmetrique Activity Index, Asymmetrique Profile Index, Asymmetrique Activity Score, Asymmetrique Profile Score` are features recorded after the sales team interacts with the lead. As inputs to the model these data will not be available, hence we will drop these variables from the dataframe.
- `Prospect ID` and `Lead Number` are identification numbers for a lead. We will keep `Lead Number` to identify the lead and drop `Prospect ID` from our dataframe.
## 2. Cleaning dataset
### 2.1 Drop Unwanted Columns
```
# list of columns having information of post interaction with lead
score_cols = ['Tags', 'Lead Quality', 'Lead Profile', 'Asymmetrique Activity Index' ,
'Asymmetrique Profile Index', 'Asymmetrique Activity Score', 'Asymmetrique Profile Score',
'I agree to pay the amount through cheque', 'Prospect ID', 'Last Notable Activity']
# drop score_cols
df.drop(score_cols, axis = 1, inplace = True)
df.head()
```
### 2.2 Check Null Values:
Some columns have the value `Select`, which is effectively a missing value. We need to convert these to `NaN` before doing any further analysis of the null values.
```
# Check for value 'Select' in the columns
sel_cols = df.isin(['Select']).sum(axis=0)
sel_cols[sel_cols>0]
```
Insights:
- Three columns, `Specialization`, `How did you hear about X Education` and `City`, have a high number of `Select` values. We will impute these values to `NaN`.
```
# replace 'Select' with 'NaN'
df['Specialization'].replace('Select', np.nan, inplace= True)
df['How did you hear about X Education'].replace('Select', np.nan, inplace= True)
df['City'].replace('Select', np.nan, inplace= True)
#Check for value 'Select' inthe columns
sel_cols = df.isin(['Select']).sum(axis=0)
sel_cols[sel_cols>0]
# check % Null values
print('Percentage of null values:\n')
print(round(df.isnull().mean()*100, 2))
# grafical representation of columns having % null values
# features having null value
null_cols = round(df.isnull().mean()*100, 2)
null_cols = null_cols[null_cols>0]
# plot columns having null value
plt.figure(figsize= (8,3),dpi=75)
null_cols.plot(kind = 'bar')
plt.title (' columns having null values', fontsize = 14)
plt.ylabel('% null values')
plt.show()
# plt.savefig('filename.png', dpi=300)
# Select columns having more that 50% null values
null_50 = null_cols[null_cols.values>50]
print('Null values >50% :')
print(null_50)
# Check columns having null value<2%
null_2 = null_cols[(null_cols>0) & (null_cols<2)]
print('\nNull values <2% :')
print(null_2)
```
Insights:
- There are 10 columns having null values
- `How did you hear about X Education` has more than 50% null values. We will drop this column.
- `Lead Source, TotalVisits, Page Views Per Visit, Last Activity` have less than 2% null values. We will drop the rows having null values in these columns.
- For the rest of the columns we will impute the null values with suitable values.
```
# dropping features having more than 50% null values
df.drop(null_50.index, axis =1, inplace= True)
# Check NUll values after dropping
null_cols = round(df.isnull().mean()*100,2)
null_cols = null_cols[null_cols>0]
null_cols
# drop rows for featues where null values <2%.
df.dropna(subset = null_2.index, inplace = True)
# check for null colums after dropping of rows
null_cols = round(df.isnull().mean()*100,2)
null_cols = null_cols[null_cols>0]
null_cols
# Check % of retained observations
print('% retained observation: ', round(len(df)/9240*100,2))
# get the data insight for rest of the null columns
df[null_cols.index].describe()
# Percentage of most frequent value with respect to total counts
df[null_cols.index].describe().loc['freq']/df[null_cols.index].describe().loc['count']*100
```
Insights:
- `Country` and `What matters most to you in choosing a course` have their most frequent value in more than 95% of rows. We need to be careful while treating these columns.
**Country:**
```
# Percentage value of data
round(df['Country'].value_counts(normalize = True)*100, 2)
```
Insight:
- `Country`: The data is highly skewed having ~95% value as `India`. Imputing `NaN` as India is going to further increase the skewness.
- So keeping this column will not contribute to our model. Hence we will drop the `Country` column
```
# drop country column
df.drop('Country', axis =1, inplace = True)
```
**Specialization:**
```
# value counts for 'Specialisation'
round(df['Specialization'].value_counts(normalize=True)*100, 2)
```
Insights:
`Specialization:` As there are multiple specializations with similar percentages, we can't attribute the null values to a specific specialization. So we will impute the null values as `Unknown`.
```
# impute missing values of the 'Specialization' column with 'Unknown'
df['Specialization'].replace(np.nan, 'Unknown', inplace = True)
```
**What is your current occupation:**
```
# value counts in % 'What is your current occupation'
round(df['What is your current occupation'].value_counts(normalize=True)*100, 2)
```
Insights:
`What is your current occupation:`
- Most leads are at the Unemployed level, so we will impute the null values as `Unemployed`.
- We will also combine the categories `Housewife` and `Businessman` into `Other` as these have a very low percentage.
- Although the data spread is slightly skewed, we feel this is an important variable, so we will keep it and let the model decide.
```
# impute 'What is your current occupation' column
df['What is your current occupation'].replace(np.nan,'Unemployed', inplace=True )
df['What is your current occupation'].replace(['Housewife','Businessman'],'Other', inplace=True )
# value counts in 'What is your current occupation' after imputation
round(df['What is your current occupation'].value_counts(normalize=True)*100, 2)
```
**What matters most to you in choosing a course:**
```
# value counts 'What matters most to you in choosing a course'
round(df['What matters most to you in choosing a course'].value_counts(normalize=True)*100, 2)
```
Insight:
`What matters most to you in choosing a course`: The data is highly skewed, with ~99% of values being `Better Career Prospects`. Keeping this column will not contribute to our model, so we will drop it.
```
# drop 'What matters most to you in choosing a course' column
df.drop('What matters most to you in choosing a course', axis =1, inplace = True)
```
**City:**
```
# value counts 'City'
round(df['City'].value_counts(normalize=True)*100, 2)
```
Insights: `City`
- The missing values appear to be random, so we will impute them with the mode of the category, which is `Mumbai`
- We will combine `Other Cities`, `Other Cities of Maharashtra` and `Tier II Cities` into `Other Cities`
```
# Imputation of null values 'City'
df['City'] = df['City'].replace(np.nan, 'Mumbai')
df['City'].replace(['Other Cities of Maharashtra', 'Tier II Cities'],'Other Cities', inplace=True )
# value counts 'City'
round(df['City'].value_counts(normalize=True)*100, 2)
```
**Final Check for Null values:**
```
# check for null colums after dropping of rows
df.isnull().mean()
```
- Now the dataframe is clean & we can proceed for analysis with this dataset
### 2.3 Data Transformation:
```
# Select Object columns
cat_var = df.select_dtypes('O').columns
cat_var
# insights of categorical variables
df[cat_var].describe()
# Percentage of most frequent value with respect to total counts
df_cat = df[cat_var].describe().loc['freq']/len(df)*100
df_cat.sort_values(ascending=False)
```
Insight:
- Based on the above analysis we can infer that the following columns have one value in more than 90% of rows, so they would not contribute to the model. Therefore we will drop these columns.
> `'Get updates on DM Content', 'Update me on Supply Chain Content', 'Receive More Updates About Our Courses', 'Magazine', 'Newspaper', 'X Education Forums', 'Newspaper Article', 'Do Not Call', 'Digital Advertisement', 'Through Recommendations', 'Search', 'Do Not Email'`
```
# Drop skewed columns (most frequest values >90%)
cat_cols = ['Get updates on DM Content', 'Update me on Supply Chain Content', 'Receive More Updates About Our Courses',
'Magazine', 'Newspaper', 'X Education Forums', 'Newspaper Article', 'Do Not Call', 'Digital Advertisement',
'Through Recommendations', 'Search', 'Do Not Email']
df.drop(cat_cols, axis = 1, inplace=True)
df.shape
# Select Object columns
cat_var = df.select_dtypes('O').columns
cat_var
# insights of categorical variable
df[cat_var].describe()
```
Insights:
- The following categorical columns have a higher number of unique values. We will inspect the unique values and try to combine them to reduce the number of levels.
> `'Lead Source','Specialization', 'Last Activity'`
**Lead Source**
```
# lead source
df['Lead Source'].value_counts()
```
Insights:
- We will combine the following unique values:
1. Google, google and bing : `Google` (as these are search engines)
2. Facebook, Click2call, Press_Release, Social Media, Live Chat, youtubechannel, NC_EDM, WeLearn, testone, welearnblog_Home, blog, Pay per Click Ads : `SocialMedia_Others` (as these are social media and other media sources)
```
# combining values of lead source
df['Lead Source'].replace(['google','bing'], 'Google', inplace = True)
df['Lead Source'].replace(['Facebook', 'Click2call', 'Press_Release', 'Social Media', 'Live Chat', 'youtubechannel',
'NC_EDM', 'WeLearn', 'testone', 'welearnblog_Home', 'blog', 'Pay per Click Ads'],
'SocialMedia_Others', inplace = True)
# check value couts after combining
df['Lead Source'].value_counts()
```
***Specialization***
```
# Specialisation
df['Specialization'].value_counts()
```
Insights:
- We will combine the following unique values as follows:
1. Operation Management: Supply Chain Management
2. Finance Management: Banking, Investment And Insurance
3. Other_Specialization: 'Media and Advertising', 'Travel and Tourism', 'International Business', 'Healthcare Management', 'E-COMMERCE', 'Hospitality Management', 'Retail Management', 'Rural and Agribusiness', 'E-Business', 'Services Excellence'
```
# combining values of Specialisation
df['Specialization'].replace(['Supply Chain Management'],'Operation Management', inplace = True)
df['Specialization'].replace(['Banking, Investment And Insurance'], 'Finance Management', inplace = True)
df['Specialization'].replace(['Media and Advertising', 'Travel and Tourism', 'International Business',
'Healthcare Management', 'E-COMMERCE', 'Hospitality Management', 'Retail Management',
'Rural and Agribusiness', 'E-Business', 'Services Excellence'], 'Other_Specialization',
inplace = True)
# check value couts after combining
df['Specialization'].value_counts(normalize= True)*100
```
**Last Activity:**
```
# Last Activity
df['Last Activity'].value_counts(normalize = True)*100
```
Insights:
- We will combine following unique values to `Other_Activity`:
`'Email Bounced', 'Unsubscribed', 'Unreachable', 'Had a Phone Conversation','Email Marked Spam', 'Form Submitted on Website','Email Received','Resubscribed to emails', 'Approached upfront','View in browser link Clicked'`
```
# combining values of Last Activity
df['Last Activity'].replace(['Email Bounced','Email Link Clicked', 'Form Submitted on Website', 'Unreachable',
'Unsubscribed', 'Had a Phone Conversation','View in browser link Clicked',
'Approached upfront', 'Email Received','Email Marked Spam',
'Visited Booth in Tradeshow','Resubscribed to emails'],'Other_Activity',
inplace = True)
# check value couts after combining
df['Last Activity'].value_counts(normalize= True)*100
# check categorical variables after data cleaning
df.describe(include='O')
```
## 3. Exploratory data Analysis
Converted is the target variable; it indicates whether a lead has been successfully converted (1) or not (0).
```
# check conversion rate
print('conversion rate (%):', round(df['Converted'].mean()*100,2))
# Get insights of numerical variables
df.describe()
```
### 3.1 Outlier treatment
```
# boxplot for continous variable and check for outliers
plt.figure(figsize=(17,4), dpi= 150)
plt.subplot(1,3,1)
sns.boxplot(df['TotalVisits'])
plt.subplot(1,3,2)
sns.boxplot(df['Total Time Spent on Website'])
plt.subplot(1,3,3)
sns.boxplot(df['Page Views Per Visit'])
plt.show()
```
Insight:
- We have two variables with outliers in our dataset: `TotalVisits` and `Page Views Per Visit`
- If we drop these outliers, we may lose a significant volume of observations. So we will do the outlier treatment using upper capping.
```
# Outlier capping 'TotalVisits'
IQR = df['TotalVisits'].quantile(0.75)-df['TotalVisits'].quantile(0.25)
UL = df['TotalVisits'].quantile(0.75) + IQR*1.5
df.loc[df['TotalVisits'] > UL, 'TotalVisits'] = UL
# Outlier capping 'Page Views Per Visit'
IQR = df['Page Views Per Visit'].quantile(0.75)-df['Page Views Per Visit'].quantile(0.25)
UL = df['Page Views Per Visit'].quantile(0.75) + IQR*1.5
df.loc[df['Page Views Per Visit'] > UL, 'Page Views Per Visit'] = UL
# boxplot for continous variable after outlier capping
plt.figure(figsize=(17,4), dpi= 150)
plt.subplot(1,3,1)
sns.boxplot(df['TotalVisits'])
plt.subplot(1,3,2)
sns.boxplot(df['Total Time Spent on Website'])
plt.subplot(1,3,3)
sns.boxplot(df['Page Views Per Visit'])
plt.show()
```
### 3.2 Categorical Variables
```
# visulaisation
cols = 2
rows = len(cat_var)//cols+1
plt.figure(figsize = (13,25))
for i in enumerate(cat_var):
plt.subplot(rows,cols,i[0]+1)
sns.countplot(x = i[1], hue= 'Converted', data = df)
plt.xticks(rotation = 90)
plt.tight_layout(pad= 1)
plt.show()
```
Insights:
1. Lead Origin:
- `API` and `Landing Page Submission` bring a higher number of leads as well as conversions.
- Although the count of leads is not very high for `Lead Add Form`, it has a very high conversion rate.
2. Lead Source:
- `Google` and `Direct Traffic` generate the maximum number of leads. The conversion rate of reference leads and leads through the Welingak website is high.
3. Specialization
- A lot of leads have an `Unknown` Specialization
- Management specializations have a higher conversion rate.
4. What is your current occupation
- A higher number of leads are `Unemployed`, however `Working Professional` leads have a higher conversion rate
5. City
- The maximum number of leads are from `Mumbai`
6. A free copy of Mastering The Interview
- Most of the leads have not opted for `A free copy of Mastering The Interview`.
7. Last Activity
- The `SMS Sent` activity has a higher conversion rate.
### 3.3 Numerical Variables:
```
# select numerical columns
num_cols = ['TotalVisits','Total Time Spent on Website', 'Page Views Per Visit']
# paiplot
sns.pairplot(df[num_cols])
plt.show()
# heatmap
sns.heatmap(df[num_cols].corr(), annot=True)
plt.show()
```
Insights:
- There is a high correlation between `TotalVisits` and `Page Views Per Visit`
## 4. Data Preparation:
### 4.1 Create dummies for all categorical columns
```
# check columns
df.columns
# get insights of categorical variables
df.describe(include='O')
# Creating a dummy variable for the variable 'Lead Origin'
cont = pd.get_dummies(df['Lead Origin'],prefix='Lead Origin',drop_first=True)
#Adding the results to the master dataframe
df = pd.concat([df,cont],axis=1)
# Creating a dummy variable for the variable 'Lead Source'
cont = pd.get_dummies(df['Lead Source'],prefix='Lead Source',drop_first=True)
#Adding the results to the master dataframe
df = pd.concat([df,cont],axis=1)
# Creating a dummy variable for the variable 'Lead Source'
cont = pd.get_dummies(df['Last Activity'],prefix='Last Activity',drop_first=True)
#Adding the results to the master dataframe
df = pd.concat([df,cont],axis=1)
# Creating a dummy variable for the variable 'Specialization'
cont = pd.get_dummies(df['Specialization'],prefix='Specialization',drop_first=True)
#Adding the results to the master dataframe
df = pd.concat([df,cont],axis=1)
# Creating a dummy variable for the variable 'What is your current occupation'
cont = pd.get_dummies(df['What is your current occupation'],prefix='What is your current occupation',drop_first=True)
#Adding the results to the master dataframe
df = pd.concat([df,cont],axis=1)
# Creating a dummy variable for the variable 'City'
cont = pd.get_dummies(df['City'],prefix='City',drop_first=True)
#Adding the results to the master dataframe
df = pd.concat([df,cont],axis=1)
# # Creating a dummy variable for the variable 'Last Notable Activity'
# cont = pd.get_dummies(df['Last Notable Activity'],prefix='Last Notable Activity',drop_first=True)
# #Adding the results to the master dataframe
# df = pd.concat([df,cont],axis=1)
#created dummies for the below variables, so drop the same
df = df.drop(['Lead Origin','Lead Source','Last Activity','Specialization','City','What is your current occupation'],
axis =1)
df.columns
# check for dataframe shape
df.shape
# Converting binary variables (Yes to 1 and No to 0)
df['A free copy of Mastering The Interview'] = df['A free copy of Mastering The Interview'].map({'Yes': 1, 'No': 0})
df['A free copy of Mastering The Interview'].value_counts()
# check dataframe after dummies creation
df.info()
# Ensuring there are no categorical columns left in the dataframe
cols = df.columns
num_cols = df._get_numeric_data().columns
list(set(cols) - set(num_cols))
```
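As a side note on the cell above, the six separate `get_dummies` calls can be collapsed into a single call with the same effect. The sketch below assumes it is applied to the dataframe as it was *before* the dummy-encoding cell, and that the same `drop_first=True` behaviour is wanted.
```
# Equivalent, more compact dummy creation (sketch): when `columns=` is used,
# pandas encodes those columns and drops the originals automatically.
cat_cols = ['Lead Origin', 'Lead Source', 'Last Activity', 'Specialization',
            'What is your current occupation', 'City']
df = pd.get_dummies(df, columns=cat_cols, drop_first=True)
```
By default `get_dummies` uses each column name as the prefix, so the resulting column names match the ones created above.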
### 4.2 Train - Test Split
```
# Creating feature variables of X
X = df.drop(['Converted','Lead Number'],1)
X.head()
# Creating feature variables of y
y = df['Converted']
y.head()
# Splitting the dataset into test and train
X_train, X_test, y_train, y_test = train_test_split(X,y, train_size =0.7, test_size =0.3, random_state = 100)
```
### 4.3 Feature Scaling
```
# Scaling the variables using standaredscaler
scaler = StandardScaler()
# Scaling of the numerical data X_train
num_cols = ['TotalVisits','Total Time Spent on Website', 'Page Views Per Visit']
X_train[num_cols]=scaler.fit_transform(X_train[num_cols])
X_train.head()
# scaling numerical dat X_test
X_test[num_cols]=scaler.transform(X_test[num_cols])
X_test.head()
```
## 5. Model Building
```
# Logistic Regression Model (Zero model)
logm = sm.GLM(y_train,(sm.add_constant(X_train)), family = sm.families.Binomial())
print(logm.fit().summary())
```
Insights:
- Since there are a lot of variables, it is difficult to build a model with such a high number of variables. So we'll use RFE to select significant variables.
#### Feature Selection Using RFE
```
# Running Logistic Regression
logreg = LogisticRegression()
# Running RFE
rfe = RFE(logreg, 25) # running RFE with 25 variables as output
rfe = rfe.fit(X_train, y_train)
rfe.support_
list(zip(X_train.columns, rfe.support_, rfe.ranking_))
col_rfe = X_train.columns[rfe.support_]
col_rfe
```
Insights:
Now you have all the variables selected by RFE and since we care about the statistics part, i.e. the p-values and the VIFs, let's use these variables to create a logistic regression model using statsmodels.
#### Assessing the model with StatsModels
```
# selct x_train based on RFE
X_train_rfe = X_train[col_rfe]
# create function for stats logistic model
def sm_logregmodel(X_train_sm):
#Add constant
X_train_sm = sm.add_constant(X_train_sm)
# create a fitted model
logm = sm.GLM(y_train, X_train_sm, family = sm.families.Binomial())
res = logm.fit()
return res
# Function to calculate VIF
# calculate VIF
def vif_calc(X):
vif = pd.DataFrame()
vif['Features'] = X.columns
vif['VIF'] = [variance_inflation_factor(X.values, i) for i in range(X.shape[1])]
vif['VIF'] = round(vif['VIF'],2)
vif = vif.sort_values(by='VIF', ascending = False)
return vif
# Create 1st model with RFE features
logm1 = sm_logregmodel(X_train_rfe)
print(logm1.summary())
# Loop to remove variables with p-value > 0.05 in a backward stepwise manner and update the model
pvalue = logm1.pvalues[1:]
while(max(pvalue)>0.05):
maxp_var = pvalue[pvalue == pvalue.max()].index
print('Removed variable:' , maxp_var[0], ' P value: ', round(max(pvalue),3))
# drop variable with high p value
X_train_rfe = X_train_rfe.drop(maxp_var, axis = 1)
logm1 = sm_logregmodel(X_train_rfe)
pvalue = logm1.pvalues[1:]
# Re-create the model after removing the high p-value variables
logm2 = sm_logregmodel(X_train_rfe)
print(logm2.summary())
# Check for VIF
print(vif_calc(X_train_rfe))
# drop variable with high p value and update the model
X_train_rfe.drop('What is your current occupation_Unemployed', axis = 1, inplace = True)
#update model
logm3 = sm_logregmodel(X_train_rfe)
print(logm3.summary())
# check VIF
print(vif_calc(X_train_rfe))
# drop variable with high p value and update the model
X_train_rfe.drop('Lead Origin_Landing Page Submission', axis = 1, inplace = True)
#update model
logm4 = sm_logregmodel(X_train_rfe)
print(logm4.summary())
# check VIF
print(vif_calc(X_train_rfe))
# drop variable with high p value and update the model
X_train_rfe.drop('Lead Source_Welingak Website', axis = 1, inplace = True)
#update model
logm5 = sm_logregmodel(X_train_rfe)
print(logm5.summary())
# check VIF
print(vif_calc(X_train_rfe))
# List down final model varibales and its coefficients
# assign final model to lm_final
log_final = logm5
# list down and check variables of final model
var_final = list(log_final.params.index)
var_final.remove('const')
print('Final Selected Variables:', var_final)
# Print the coefficents of final varible
print('\033[1m{:10s}\033[0m'.format('\nCoefficent for the variables are:'))
print(round(log_final.params,3))
```
## 6. Model Evaluation
```
# getting the predicted values on the train set
X_train_sm = sm.add_constant(X_train[var_final])
y_train_pred = log_final.predict(X_train_sm)
y_train_pred[:10]
# Reshaping the numpy array containing predicted values
y_train_pred = y_train_pred.values.reshape(-1)
y_train_pred[:10]
# Create a new dataframe containing the actual conversion flag and the probabilities predicted by the model
y_train_pred_final = pd.DataFrame({'Converted':y_train.values, 'Conversion_Prob': y_train_pred})
y_train_pred_final.head()
# Prediction at 0.5
cut_off = 0.5
y_train_pred_final['predicted'] = y_train_pred_final.Conversion_Prob.map(lambda x: 1 if x > cut_off else 0)
# Let's see the head
y_train_pred_final.head()
```
### Metric Evaluation: (Train Set)
```
# Confusion matrix
confusion = metrics.confusion_matrix(y_train_pred_final.Converted, y_train_pred_final.predicted )
print(confusion)
# Classification Summary
from sklearn.metrics import classification_report
print(classification_report(y_train_pred_final.Converted, y_train_pred_final.predicted))
```
Insights:
- With the current cut-off of 0.5, the accuracy is 0.81, which is acceptable, but Sensitivity/Recall is 0.69, which is quite low. So we need to optimise the cut-off point.
### Finding Optimal cutoff point
The previous cutoff was randomly selected and we need to find the optimal one
### ROC Curve:
```
# function for ROC curve
def draw_roc( actual, probs ):
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
fpr, tpr, thresholds = metrics.roc_curve( actual, probs, drop_intermediate = False )
auc_score = metrics.roc_auc_score( actual, probs )
plt.figure(figsize=(5, 5))
plt.plot( fpr, tpr, label='ROC curve (area = %0.2f)' % auc_score )
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate or [1 - True Negative Rate]')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic curve')
plt.legend(loc="lower right")
plt.show()
# fpr, tpr, thresholds = roc_curve(y_train_pred_final.Converted, y_train_pred_final.Conversion_Prob)
auc_score = roc_auc_score(actual, probs)
print('ROC AUC : ',round(auc_score,2))
optimal_idx = np.argmax(tpr - fpr)
optimal_threshold = thresholds[optimal_idx]
print("Threshold value is:", round(optimal_threshold,2))
# return fpr,tpr, thresholds
# plot Roc Curve
draw_roc(y_train_pred_final.Converted, y_train_pred_final.Conversion_Prob)
# Let's create columns with different probability cutoffs
numbers = [float(x)/10 for x in range(10)]
for i in numbers:
y_train_pred_final[i]= y_train_pred_final.Conversion_Prob.map(lambda x: 1 if x > i else 0)
y_train_pred_final.head()
# Now let's calculate accuracy sensitivity and specificity for various probability cutoffs.
cutoff_df = pd.DataFrame( columns = ['prob','accuracy','sensi','speci'])
from sklearn.metrics import confusion_matrix
# TP = confusion[1,1] # true positive
# TN = confusion[0,0] # true negatives
# FP = confusion[0,1] # false positives
# FN = confusion[1,0] # false negatives
num = [0.0,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9]
for i in num:
cm1 = metrics.confusion_matrix(y_train_pred_final.Converted, y_train_pred_final[i] )
total1=sum(sum(cm1))
accuracy = (cm1[0,0]+cm1[1,1])/total1
speci = cm1[0,0]/(cm1[0,0]+cm1[0,1])
sensi = cm1[1,1]/(cm1[1,0]+cm1[1,1])
cutoff_df.loc[i] =[ i ,accuracy,sensi,speci]
print(cutoff_df)
# validate Optimal cut off point
sns.set_style("whitegrid")
cutoff_df.plot.line(x='prob', y=['accuracy','sensi','speci'], figsize=(14,6))
# plot x axis limits
plt.xticks(np.arange(0, 1, step=0.05), size = 12)
plt.yticks(size = 12)
plt.show()
```
- From the curve above, 0.35 is the optimum point to take as the cutoff probability
```
cut_off = 0.35
y_train_pred_final['final_predicted'] = y_train_pred_final.Conversion_Prob.map( lambda x: 1 if x > cut_off else 0)
y_train_pred_final.head()
confusion_train = metrics.confusion_matrix(y_train_pred_final.Converted, y_train_pred_final.final_predicted)
confusion_train
```
**Precision and recall tradeoff**
```
# Precision and recall tradeoff
p, r, thresholds = precision_recall_curve(y_train_pred_final.Converted, y_train_pred_final.Conversion_Prob)
# Plot precision (green) and recall (red) against the probability threshold
plt.plot(thresholds, p[:-1], "g-")
plt.plot(thresholds, r[:-1], "r-")
plt.show()
```
Insights:
- From the precision-recall graph above, we get an optimal threshold value close to 0.4, which is close to our earlier optimal cutoff point.
**Classification Report Summary:**
```
# Classification - at optimal cut off
from sklearn.metrics import classification_report
print(classification_report(y_train_pred_final.Converted, y_train_pred_final.final_predicted))
```
### 6.1 Prediction on Test set
```
# getting the predicted values on the train set
X_test_sm = sm.add_constant(X_test[var_final])
y_test_pred = log_final.predict(X_test_sm)
y_test_pred[:10]
# Create a new dataframe containing the actual conversion flag and the probabilities predicted by the model
y_test_pred_final = pd.DataFrame({'Converted':y_test.values, 'Conversion_Prob': y_test_pred})
y_test_pred_final.head()
# Final Prediction on test set
y_test_pred_final['predicted'] = y_test_pred_final.Conversion_Prob.map(lambda x: 1 if x > cut_off else 0)
# Let's see the head
y_test_pred_final.head()
```
**Metric Evaluation: (Test Set)**
```
# Classification summary on test set
from sklearn.metrics import classification_report
print(classification_report(y_test_pred_final.Converted, y_test_pred_final.predicted))
```
**ROC Curve on test data**
To check for overfitting, let us look at the ROC curve and AUC on the test set to see how our model performs
```
# ROC Curve
draw_roc(y_test_pred_final.Converted, y_test_pred_final.Conversion_Prob)
```
Insights:
Since we got an ROC AUC of about 0.87, our model seems to be doing well on the test dataset.
### 6.3 Model Summary
```
# function to predict and get classification summary
def classification_model_metrics(logm, X, y, cut_off):
# check variables of model
X_cols = list(logm.params.index)
X_cols.remove('const')
# getting the predicted values on the train set
# var_final = X[log_final.params.index[1:]]
X_sm = sm.add_constant(X[X_cols])
y_pred = logm.predict(X_sm)
# Reshaping the numpy array containing predicted values
y_pred = y_pred.values.reshape(-1)
# Create a new dataframe containing the actual conversion flag and the probabilities predicted by the model
y_pred_final = pd.DataFrame({'Converted':y.values, 'Conversion_Prob': y_pred})
# Prediction at cutoff
y_pred_final['predicted'] = y_pred_final.Conversion_Prob.map(lambda x: 1 if x > cut_off else 0)
# Classification Summary
from sklearn.metrics import classification_report
classification_summary = classification_report(y_pred_final.Converted, y_pred_final.predicted, digits = 2)
return classification_summary
# Model Metric Evaluation at optimum cut off
model = logm5
cut_off = 0.35
print('model metrics of train set @', cut_off )
model_metrics = classification_model_metrics(model,X_train, y_train, cut_off)
print(model_metrics)
print('--------------------------------------------------------')
print('model metrics of test set @', cut_off )
model_metrics = classification_model_metrics(model,X_test, y_test, cut_off)
print(model_metrics)
```
## 7. Final Prediction and Lead Score
Calculating the lead score for the entire dataset:
$$\text{Lead Score} = 100 \times \text{Conversion Probability}$$
This needs to be calculated for all the leads from the original dataset (train + test)
```
# getting the predicted values on the total dataset
X = df[var_final]
X_sm = sm.add_constant(X[var_final])
y_pred = log_final.predict(X_sm)
y_pred[:10]
# #lead score for customer in the range 0-100
Lead_Score = df[['Lead Number']]
Lead_Score['Lead_Score'] = round(y_pred*100,2)
Lead_Score['Coversion_Pred'] = y_pred.map(lambda x: 1 if x > cut_off else 0)
Lead_Score.head()
```
**Feature Importance**
```
#Selecting the coefficients of the selected features from our final model excluding the intercept/constant
final_params = log_final.params[1:]
# Print the coefficents of final varible
print('\033[1m{:10s}\033[0m'.format('\nCoefficent for the final variables are:'))
print(round(final_params,3))
#Getting a relative coeffient value for all the features wrt the feature with the highest coefficient
feature_importance = final_params
feature_importance = 100.0 * (feature_importance / feature_importance.max())
feature_importance.sort_values(ascending = False)
feature_importance = feature_importance.sort_values()
plt.figure(figsize=(6,6))
feature_importance.plot.barh(align='center', color = 'tab:red',alpha=0.8, fontsize = 12)
plt.title('Relative Feature Importance', fontsize=14)
plt.show()
```
Insights:
The most important features contributing to lead conversion are: `Lead Add Form (Lead Origin), Working Professional (What is your current occupation), SMS Sent (Last Activity)`
## 8. Conclusion and Recommendation:
**Conclusions:**
1. The Accuracy, Precision and Recall scores we got from the test set are in an acceptable range.
2. All the model metrics are similar across the train and test sets, which indicates that the model is stable.
3. The business should consider a lead score of more than 35 as a hot lead for maximum conversion.
4. The most important features contributing to the lead conversion rate are:
a. Lead Add Form (Lead Origin)
b. Working Professional (What is your current occupation)
c. SMS Sent (Last Activity)
**Keeping these in mind, X Education can flourish, as they have a very high chance to get almost all the potential buyers to change their mind and buy their courses.**
**Business recommendations:**
1. As the model achieves roughly 80% accuracy on both the train and test sets, with Recall/Sensitivity/TPR of about 80%, we have identified most of the converted customers correctly.
2. However, by changing the cut-off limit we can tune towards the business target: reducing the cut-off gives the education group more leads to target, while increasing it limits the targets so the sales team can focus only on those customers with a very high probability of conversion, i.e. the "Hot Leads". A small illustration of this trade-off is sketched below.
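To illustrate recommendation 2, here is a small sketch (using the `y_train_pred_final` dataframe built during model evaluation; the candidate cut-offs are illustrative) of how precision and recall move as the cut-off changes:
```
# Illustrative sketch: precision/recall trade-off at a few candidate cut-offs,
# using the y_train_pred_final dataframe created during model evaluation.
from sklearn.metrics import precision_score, recall_score

for c in [0.25, 0.35, 0.50, 0.65]:
    pred = (y_train_pred_final['Conversion_Prob'] > c).astype(int)
    print(f"cut-off {c:.2f} -> "
          f"precision: {precision_score(y_train_pred_final['Converted'], pred):.2f}, "
          f"recall: {recall_score(y_train_pred_final['Converted'], pred):.2f}")
```
Lowering the cut-off raises recall (more leads flagged as hot) at the cost of precision, and raising it does the opposite, which is exactly the lever described above.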
|
github_jupyter
|
# Supress Warnings
import warnings
warnings.filterwarnings('ignore')
# Importing libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# visulaisation
from matplotlib.pyplot import xticks
%matplotlib inline
# Data display coustomization
pd.set_option('display.max_rows', 100)
pd.set_option('display.max_columns', 100)
import sklearn
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import MinMaxScaler
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import RFE
from sklearn import metrics
from sklearn.metrics import confusion_matrix
from sklearn.metrics import precision_score, recall_score
from sklearn.metrics import precision_recall_curve
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
df = pd.read_csv('Leads.csv')
df.head(5)
# shape of data set
print('Rows and columns of the dataset (Rows , Columns): ', df.shape)
# Look for data types
df.info()
# insights from dataset
df.describe()
# check for duplicate Leads
print('No of duplicate Leads :', df['Lead Number'].duplicated().sum())
# list of columns having information of post interaction with lead
score_cols = ['Tags', 'Lead Quality', 'Lead Profile', 'Asymmetrique Activity Index' ,
'Asymmetrique Profile Index', 'Asymmetrique Activity Score', 'Asymmetrique Profile Score',
'I agree to pay the amount through cheque', 'Prospect ID', 'Last Notable Activity']
# drop score_cols
df.drop(score_cols, axis = 1, inplace = True)
df.head()
# Check for value 'Select' in the columns
sel_cols = df.isin(['Select']).sum(axis=0)
sel_cols[sel_cols>0]
# replace 'Select' with 'NaN'
df['Specialization'].replace('Select', np.nan, inplace= True)
df['How did you hear about X Education'].replace('Select', np.nan, inplace= True)
df['City'].replace('Select', np.nan, inplace= True)
#Check for value 'Select' inthe columns
sel_cols = df.isin(['Select']).sum(axis=0)
sel_cols[sel_cols>0]
# check % Null values
print('Pecentage of null values:\n')
print(round(df.isnull().mean()*100, 2))
# grafical representation of columns having % null values
# features having null value
null_cols = round(df.isnull().mean()*100, 2)
null_cols = null_cols[null_cols>0]
# plot columns having null value
plt.figure(figsize= (8,3),dpi=75)
null_cols.plot(kind = 'bar')
plt.title (' columns having null values', fontsize = 14)
plt.ylabel('% null values')
plt.show()
# plt.savefig('filename.png', dpi=300)
# Select columns having more that 50% null values
null_50 = null_cols[null_cols.values>50]
print('Null values >50% :')
print(null_50)
# Check columns having null value<2%
null_2 = null_cols[(null_cols>0) & (null_cols<2)]
print('\nNull values <2% :')
print(null_2)
# dropping features having more than
df.drop(null_50.index, axis =1, inplace= True)
# Check NUll values after dropping
null_cols = round(df.isnull().mean()*100,2)
null_cols = null_cols[null_cols>0]
null_cols
# drop rows for featues where null values <2%.
df.dropna(subset = null_2.index, inplace = True)
# check for null colums after dropping of rows
null_cols = round(df.isnull().mean()*100,2)
null_cols = null_cols[null_cols>0]
null_cols
# Check % of retailned observations
print('% retained observation: ', round(len(df)/9240*100,2))
# get the data insight for rest of the null columns
df[null_cols.index].describe()
# Percentage of most frequent value with respect to total counts
df[null_cols.index].describe().loc['freq']/df[null_cols.index].describe().loc['count']*100
# Percentage value of data
round(df['Country'].value_counts(normalize = True)*100, 2)
# drop country column
df.drop('Country', axis =1, inplace = True)
# value counts for 'Specialisation'
round(df['Specialization'].value_counts(normalize=True)*100, 2)
# impute missing values of 'Sepcialisation ' column to 'Other'
df['Specialization'].replace(np.nan, 'Unknown', inplace = True)
# value counts in % 'What is your current occupation'
round(df['What is your current occupation'].value_counts(normalize=True)*100, 2)
# impute 'What is your current occupation' column
df['What is your current occupation'].replace(np.nan,'Unemployed', inplace=True )
df['What is your current occupation'].replace(['Housewife','Businessman'],'Other', inplace=True )
# value counts in 'What is your current occupation' after imputation
round(df['What is your current occupation'].value_counts(normalize=True)*100, 2)
# value counts 'What matters most to you in choosing a course'
round(df['What matters most to you in choosing a course'].value_counts(normalize=True)*100, 2)
# drop 'What matters most to you in choosing a course' column
df.drop('What matters most to you in choosing a course', axis =1, inplace = True)
# value counts 'City'
round(df['City'].value_counts(normalize=True)*100, 2)
# Imputation of null values 'City'
df['City'] = df['City'].replace(np.nan, 'Mumbai')
df['City'].replace(['Other Cities of Maharashtra', 'Tier II Cities'],'Other Cities', inplace=True )
# value counts 'City'
round(df['City'].value_counts(normalize=True)*100, 2)
# check for remaining null columns
df.isnull().mean()
# Select Object columns
cat_var = df.select_dtypes('O').columns
cat_var
# insights of categorical variables
df[cat_var].describe()
# Percentage of most frequent value with respect to total counts
df_cat = df[cat_var].describe().loc['freq']/len(df)*100
df_cat.sort_values(ascending=False)
# Drop skewed columns (most frequent value > 90%)
cat_cols = ['Get updates on DM Content', 'Update me on Supply Chain Content', 'Receive More Updates About Our Courses',
'Magazine', 'Newspaper', 'X Education Forums', 'Newspaper Article', 'Do Not Call', 'Digital Advertisement',
'Through Recommendations', 'Search', 'Do Not Email']
df.drop(cat_cols, axis = 1, inplace=True)
df.shape
# Select Object columns
cat_var = df.select_dtypes('O').columns
cat_var
# insights of categorical variable
df[cat_var].describe()
# lead source
df['Lead Source'].value_counts()
# combining values of lead source
df['Lead Source'].replace(['google','bing'], 'Google', inplace = True)
df['Lead Source'].replace(['Facebook', 'Click2call', 'Press_Release', 'Social Media', 'Live Chat', 'youtubechannel',
'NC_EDM', 'WeLearn', 'testone', 'welearnblog_Home', 'blog', 'Pay per Click Ads'],
'SocialMedia_Others', inplace = True)
# check value counts after combining
df['Lead Source'].value_counts()
# Specialization
df['Specialization'].value_counts()
# combining values of Specialization
df['Specialization'].replace(['Supply Chain Management'],'Operation Management', inplace = True)
df['Specialization'].replace(['Banking, Investment And Insurance'], 'Finance Management', inplace = True)
df['Specialization'].replace(['Media and Advertising', 'Travel and Tourism', 'International Business',
'Healthcare Management', 'E-COMMERCE', 'Hospitality Management', 'Retail Management',
'Rural and Agribusiness', 'E-Business', 'Services Excellence'], 'Other_Specialization',
inplace = True)
# check value counts after combining
df['Specialization'].value_counts(normalize= True)*100
# Last Activity
df['Last Activity'].value_counts(normalize = True)*100
# combining values of Last Activity
df['Last Activity'].replace(['Email Bounced','Email Link Clicked', 'Form Submitted on Website', 'Unreachable',
'Unsubscribed', 'Had a Phone Conversation','View in browser link Clicked',
'Approached upfront', 'Email Received','Email Marked Spam',
'Visited Booth in Tradeshow','Resubscribed to emails'],'Other_Activity',
inplace = True)
# check value counts after combining
df['Last Activity'].value_counts(normalize= True)*100
# check categorical variables after data cleaning
df.describe(include='O')
# check conversion rate
print('conversion rate (%):', round(df['Converted'].mean()*100,2))
# Get insights of numerical variables
df.describe()
# boxplots for continuous variables to check for outliers
plt.figure(figsize=(17,4), dpi= 150)
plt.subplot(1,3,1)
sns.boxplot(df['TotalVisits'])
plt.subplot(1,3,2)
sns.boxplot(df['Total Time Spent on Website'])
plt.subplot(1,3,3)
sns.boxplot(df['Page Views Per Visit'])
plt.show()
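# Outliers are capped (not dropped) at the upper whisker Q3 + 1.5*IQR; values above this limit are clipped to it.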
# Outlier capping 'TotalVisits'
IQR = df['TotalVisits'].quantile(0.75)-df['TotalVisits'].quantile(0.25)
UL = df['TotalVisits'].quantile(0.75) + IQR*1.5
df.loc[df['TotalVisits'] > UL, 'TotalVisits'] = UL
# Outlier capping 'Page Views Per Visit'
IQR = df['Page Views Per Visit'].quantile(0.75)-df['Page Views Per Visit'].quantile(0.25)
UL = df['Page Views Per Visit'].quantile(0.75) + IQR*1.5
df.loc[df['Page Views Per Visit'] > UL, 'Page Views Per Visit'] = UL
# boxplots for continuous variables after outlier capping
plt.figure(figsize=(17,4), dpi= 150)
plt.subplot(1,3,1)
sns.boxplot(df['TotalVisits'])
plt.subplot(1,3,2)
sns.boxplot(df['Total Time Spent on Website'])
plt.subplot(1,3,3)
sns.boxplot(df['Page Views Per Visit'])
plt.show()
# visualisation of categorical variables vs target
cols = 2
rows = len(cat_var)//cols+1
plt.figure(figsize = (13,25))
for i in enumerate(cat_var):
plt.subplot(rows,cols,i[0]+1)
sns.countplot(x = i[1], hue= 'Converted', data = df)
plt.xticks(rotation = 90)
plt.tight_layout(pad= 1)
plt.show()
# select numerical columns
num_cols = ['TotalVisits','Total Time Spent on Website', 'Page Views Per Visit']
# pairplot
sns.pairplot(df[num_cols])
plt.show()
# heatmap
sns.heatmap(df[num_cols].corr(), annot=True)
plt.show()
# check columns
df.columns
# get insights of categorical variables
df.describe(include='O')
# Creating a dummy variable for the variable 'Lead Origin'
cont = pd.get_dummies(df['Lead Origin'],prefix='Lead Origin',drop_first=True)
#Adding the results to the master dataframe
df = pd.concat([df,cont],axis=1)
# Creating a dummy variable for the variable 'Lead Source'
cont = pd.get_dummies(df['Lead Source'],prefix='Lead Source',drop_first=True)
#Adding the results to the master dataframe
df = pd.concat([df,cont],axis=1)
# Creating a dummy variable for the variable 'Last Activity'
cont = pd.get_dummies(df['Last Activity'],prefix='Last Activity',drop_first=True)
#Adding the results to the master dataframe
df = pd.concat([df,cont],axis=1)
# Creating a dummy variable for the variable 'Specialization'
cont = pd.get_dummies(df['Specialization'],prefix='Specialization',drop_first=True)
#Adding the results to the master dataframe
df = pd.concat([df,cont],axis=1)
# Creating a dummy variable for the variable 'What is your current occupation'
cont = pd.get_dummies(df['What is your current occupation'],prefix='What is your current occupation',drop_first=True)
#Adding the results to the master dataframe
df = pd.concat([df,cont],axis=1)
# Creating a dummy variable for the variable 'City'
cont = pd.get_dummies(df['City'],prefix='City',drop_first=True)
#Adding the results to the master dataframe
df = pd.concat([df,cont],axis=1)
# # Creating a dummy variable for the variable 'Last Notable Activity'
# cont = pd.get_dummies(df['Last Notable Activity'],prefix='Last Notable Activity',drop_first=True)
# #Adding the results to the master dataframe
# df = pd.concat([df,cont],axis=1)
# dummies created for the variables below, so drop the original columns
df = df.drop(['Lead Origin','Lead Source','Last Activity','Specialization','City','What is your current occupation'],
axis =1)
df.columns
# check for dataframe shape
df.shape
# Converting binary variables (Yes to 1 and No to 0)
df['A free copy of Mastering The Interview'] = df['A free copy of Mastering The Interview'].map({'Yes': 1, 'No': 0})
df['A free copy of Mastering The Interview'].value_counts()
# check dataframe after dummies creation
df.info()
# Ensuring there are no categorical columns left in the dataframe
cols = df.columns
num_cols = df._get_numeric_data().columns
list(set(cols) - set(num_cols))
# Creating the feature matrix X
X = df.drop(['Converted', 'Lead Number'], axis=1)
X.head()
# Creating the target variable y
y = df['Converted']
y.head()
# Splitting the dataset into test and train
X_train, X_test, y_train, y_test = train_test_split(X,y, train_size =0.7, test_size =0.3, random_state = 100)
# Scaling the variables using StandardScaler
scaler = StandardScaler()
# Scaling of the numerical data X_train
num_cols = ['TotalVisits','Total Time Spent on Website', 'Page Views Per Visit']
X_train[num_cols]=scaler.fit_transform(X_train[num_cols])
X_train.head()
# scaling numerical data in X_test (using the scaler fitted on the training set)
X_test[num_cols]=scaler.transform(X_test[num_cols])
X_test.head()
# Logistic regression model with all features (baseline model)
logm = sm.GLM(y_train,(sm.add_constant(X_train)), family = sm.families.Binomial())
print(logm.fit().summary())
# Running Logistic Regression
logreg = LogisticRegression()
# Running RFE
rfe = RFE(logreg, n_features_to_select=25) # running RFE to select 25 variables
rfe = rfe.fit(X_train, y_train)
rfe.support_
list(zip(X_train.columns, rfe.support_, rfe.ranking_))
col_rfe = X_train.columns[rfe.support_]
col_rfe
# select X_train columns chosen by RFE
X_train_rfe = X_train[col_rfe]
# helper function to fit a statsmodels binomial GLM (logistic regression)
def sm_logregmodel(X_train_sm):
#Add constant
X_train_sm = sm.add_constant(X_train_sm)
# create a fitted model
logm = sm.GLM(y_train, X_train_sm, family = sm.families.Binomial())
res = logm.fit()
return res
# Function to calculate VIF for each feature
def vif_calc(X):
vif = pd.DataFrame()
vif['Features'] = X.columns
vif['VIF'] = [variance_inflation_factor(X.values, i) for i in range(X.shape[1])]
vif['VIF'] = round(vif['VIF'],2)
vif = vif.sort_values(by='VIF', ascending = False)
return vif
# Create 1st model with RFE features
logm1 = sm_logregmodel(X_train_rfe)
print(logm1.summary())
# Loop to remove variables with p-value > 0.05 in a backward-stepwise manner and refit the model
pvalue = logm1.pvalues[1:]
while(max(pvalue)>0.05):
maxp_var = pvalue[pvalue == pvalue.max()].index
print('Removed variable:' , maxp_var[0], ' P value: ', round(max(pvalue),3))
# drop variable with high p value
X_train_rfe = X_train_rfe.drop(maxp_var, axis = 1)
logm1 = sm_logregmodel(X_train_rfe)
pvalue = logm1.pvalues[1:]
# Refit the model after p-value based elimination
logm2 = sm_logregmodel(X_train_rfe)
print(logm2.summary())
# Check for VIF
print(vif_calc(X_train_rfe))
# drop the next variable flagged by the p-value / VIF checks and update the model
X_train_rfe.drop('What is your current occupation_Unemployed', axis = 1, inplace = True)
#update model
logm3 = sm_logregmodel(X_train_rfe)
print(logm3.summary())
# check VIF
print(vif_calc(X_train_rfe))
# drop the next variable flagged by the p-value / VIF checks and update the model
X_train_rfe.drop('Lead Origin_Landing Page Submission', axis = 1, inplace = True)
#update model
logm4 = sm_logregmodel(X_train_rfe)
print(logm4.summary())
# check VIF
print(vif_calc(X_train_rfe))
# drop the next variable flagged by the p-value / VIF checks and update the model
X_train_rfe.drop('Lead Source_Welingak Website', axis = 1, inplace = True)
#update model
logm5 = sm_logregmodel(X_train_rfe)
print(logm5.summary())
# check VIF
print(vif_calc(X_train_rfe))
# List the final model variables and their coefficients
# assign the final model to log_final
log_final = logm5
# list down and check variables of final model
var_final = list(log_final.params.index)
var_final.remove('const')
print('Final Selected Variables:', var_final)
# Print the coefficients of the final variables
print('\033[1m{:10s}\033[0m'.format('\nCoefficients for the variables are:'))
print(round(log_final.params,3))
# getting the predicted values on the train set
X_train_sm = sm.add_constant(X_train[var_final])
y_train_pred = log_final.predict(X_train_sm)
y_train_pred[:10]
# Reshaping the numpy array containing predicted values
y_train_pred = y_train_pred.values.reshape(-1)
y_train_pred[:10]
# Create a new dataframe containing the actual conversion flag and the probabilities predicted by the model
y_train_pred_final = pd.DataFrame({'Converted':y_train.values, 'Conversion_Prob': y_train_pred})
y_train_pred_final.head()
# Prediction at 0.5
cut_off = 0.5
y_train_pred_final['predicted'] = y_train_pred_final.Conversion_Prob.map(lambda x: 1 if x > cut_off else 0)
# Let's see the head
y_train_pred_final.head()
# Confusion matrix
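# layout: rows = actual class (0 = not converted, 1 = converted), columns = predicted class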
confusion = metrics.confusion_matrix(y_train_pred_final.Converted, y_train_pred_final.predicted )
print(confusion)
# Classification Summary
from sklearn.metrics import classification_report
print(classification_report(y_train_pred_final.Converted, y_train_pred_final.predicted))
# function for ROC curve
def draw_roc( actual, probs ):
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
fpr, tpr, thresholds = metrics.roc_curve( actual, probs, drop_intermediate = False )
auc_score = metrics.roc_auc_score( actual, probs )
plt.figure(figsize=(5, 5))
plt.plot( fpr, tpr, label='ROC curve (area = %0.2f)' % auc_score )
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate or [1 - True Negative Rate]')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic curve')
plt.legend(loc="lower right")
plt.show()
# fpr, tpr, thresholds = roc_curve(y_train_pred_final.Converted, y_train_pred_final.Conversion_Prob)
auc_score = roc_auc_score(actual, probs)
print('ROC AUC : ',round(auc_score,2))
optimal_idx = np.argmax(tpr - fpr)
optimal_threshold = thresholds[optimal_idx]
print("Threshold value is:", round(optimal_threshold,2))
# return fpr,tpr, thresholds
# plot Roc Curve
draw_roc(y_train_pred_final.Converted, y_train_pred_final.Conversion_Prob)
# Let's create columns with different probability cutoffs
numbers = [float(x)/10 for x in range(10)]
for i in numbers:
y_train_pred_final[i]= y_train_pred_final.Conversion_Prob.map(lambda x: 1 if x > i else 0)
y_train_pred_final.head()
# Now let's calculate accuracy, sensitivity and specificity for various probability cutoffs.
cutoff_df = pd.DataFrame( columns = ['prob','accuracy','sensi','speci'])
from sklearn.metrics import confusion_matrix
# TP = confusion[1,1] # true positive
# TN = confusion[0,0] # true negatives
# FP = confusion[0,1] # false positives
# FN = confusion[1,0] # false negatives
num = [0.0,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9]
for i in num:
cm1 = metrics.confusion_matrix(y_train_pred_final.Converted, y_train_pred_final[i] )
total1=sum(sum(cm1))
accuracy = (cm1[0,0]+cm1[1,1])/total1
speci = cm1[0,0]/(cm1[0,0]+cm1[0,1])
sensi = cm1[1,1]/(cm1[1,0]+cm1[1,1])
cutoff_df.loc[i] =[ i ,accuracy,sensi,speci]
print(cutoff_df)
# plot accuracy, sensitivity and specificity to find the optimal cut-off point
sns.set_style("whitegrid")
cutoff_df.plot.line(x='prob', y=['accuracy','sensi','speci'], figsize=(14,6))
# plot x axis limits
plt.xticks(np.arange(0, 1, step=0.05), size = 12)
plt.yticks(size = 12)
plt.show()
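# pick the cut-off near the point where the accuracy, sensitivity and specificity curves intersect (about 0.35 in this plot)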
cut_off = 0.35
y_train_pred_final['final_predicted'] = y_train_pred_final.Conversion_Prob.map( lambda x: 1 if x > cut_off else 0)
y_train_pred_final.head()
confusion_train = metrics.confusion_matrix(y_train_pred_final.Converted, y_train_pred_final.final_predicted)
confusion_train
# Precision and recall tradeoff
p, r, thresholds = precision_recall_curve(y_train_pred_final.Converted, y_train_pred_final.Conversion_Prob)
# plot precision (green) and recall (red) against the probability threshold
plt.plot(thresholds, p[:-1], "g-")
plt.plot(thresholds, r[:-1], "r-")
plt.show()
# Classification - at optimal cut off
from sklearn.metrics import classification_report
print(classification_report(y_train_pred_final.Converted, y_train_pred_final.final_predicted))
# getting the predicted values on the test set
X_test_sm = sm.add_constant(X_test[var_final])
y_test_pred = log_final.predict(X_test_sm)
y_test_pred[:10]
# Create a new dataframe containing the actual conversion flag and the probabilities predicted by the model
y_test_pred_final = pd.DataFrame({'Converted':y_test.values, 'Conversion_Prob': y_test_pred})
y_test_pred_final.head()
# Final Prediction on test set
y_test_pred_final['predicted'] = y_test_pred_final.Conversion_Prob.map(lambda x: 1 if x > cut_off else 0)
# Let's see the head
y_test_pred_final.head()
# Classification summary on test set
from sklearn.metrics import classification_report
print(classification_report(y_test_pred_final.Converted, y_test_pred_final.predicted))
# ROC Curve
draw_roc(y_test_pred_final.Converted, y_test_pred_final.Conversion_Prob)
# function to predict and get classification summary
def classification_model_metrics(logm, X, y, cut_off):
# check variables of model
X_cols = list(logm.params.index)
X_cols.remove('const')
# getting the predicted values on the train set
# var_final = X[log_final.params.index[1:]]
X_sm = sm.add_constant(X[X_cols])
y_pred = logm.predict(X_sm)
# Reshaping the numpy array containing predicted values
y_pred = y_pred.values.reshape(-1)
# Create a new dataframe containing the actual conversion flag and the probabilities predicted by the model
y_pred_final = pd.DataFrame({'Converted':y.values, 'Conversion_Prob': y_pred})
# Prediction at cutoff
y_pred_final['predicted'] = y_pred_final.Conversion_Prob.map(lambda x: 1 if x > cut_off else 0)
# Classification Summary
from sklearn.metrics import classification_report
classification_summary = classification_report(y_pred_final.Converted, y_pred_final.predicted, digits = 2)
return classification_summary
# Model Metric Evaluation at optimum cut off
model = logm5
cut_off = 0.35
print('model metrics of train set @', cut_off )
model_metrics = classification_model_metrics(model,X_train, y_train, cut_off)
print(model_metrics)
print('--------------------------------------------------------')
print('model metrics of test set @', cut_off )
model_metrics = classification_model_metrics(model,X_test, y_test, cut_off)
print(model_metrics)
# getting the predicted values on the total dataset
X = df[var_final]
X_sm = sm.add_constant(X[var_final])
y_pred = log_final.predict(X_sm)
y_pred[:10]
# lead score for each customer in the range 0-100
Lead_Score = df[['Lead Number']].copy()
Lead_Score['Lead_Score'] = round(y_pred*100,2)
Lead_Score['Conversion_Pred'] = y_pred.map(lambda x: 1 if x > cut_off else 0)
Lead_Score.head()
#Selecting the coefficients of the selected features from our final model excluding the intercept/constant
final_params = log_final.params[1:]
# Print the coefficients of the final variables
print('\033[1m{:10s}\033[0m'.format('\nCoefficients for the final variables are:'))
print(round(final_params,3))
# Getting the relative coefficient value for all features w.r.t. the feature with the highest coefficient
feature_importance = final_params
feature_importance = 100.0 * (feature_importance / feature_importance.max())
feature_importance.sort_values(ascending = False)
feature_importance = feature_importance.sort_values()
plt.figure(figsize=(6,6))
feature_importance.plot.barh(align='center', color = 'tab:red',alpha=0.8, fontsize = 12)
plt.title('Relative Feature Importance', fontsize=14)
plt.show()
# Navigation
---
In this notebook, you will learn how to use the Unity ML-Agents environment for the first project of the [Deep Reinforcement Learning Nanodegree](https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893).
### 1. Start the Environment
We begin by importing some necessary packages. If the code cell below returns an error, please revisit the project instructions to double-check that you have installed [Unity ML-Agents](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md) and [NumPy](http://www.numpy.org/).
```
from unityagents import UnityEnvironment
import numpy as np
```
Next, we will start the environment! **_Before running the code cell below_**, change the `file_name` parameter to match the location of the Unity environment that you downloaded.
- **Mac**: `"path/to/Banana.app"`
- **Windows** (x86): `"path/to/Banana_Windows_x86/Banana.exe"`
- **Windows** (x86_64): `"path/to/Banana_Windows_x86_64/Banana.exe"`
- **Linux** (x86): `"path/to/Banana_Linux/Banana.x86"`
- **Linux** (x86_64): `"path/to/Banana_Linux/Banana.x86_64"`
- **Linux** (x86, headless): `"path/to/Banana_Linux_NoVis/Banana.x86"`
- **Linux** (x86_64, headless): `"path/to/Banana_Linux_NoVis/Banana.x86_64"`
For instance, if you are using a Mac, then you downloaded `Banana.app`. If this file is in the same folder as the notebook, then the line below should appear as follows:
```
env = UnityEnvironment(file_name="Banana.app")
```
```
env = UnityEnvironment(file_name="./Banana_Linux/Banana.x86_64")
```
Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.
```
# get the default brain
brain_name = env.brain_names[0]
brain = env.brains[brain_name]
```
### 2. Examine the State and Action Spaces
The simulation contains a single agent that navigates a large environment. At each time step, it has four actions at its disposal:
- `0` - walk forward
- `1` - walk backward
- `2` - turn left
- `3` - turn right
The state space has `37` dimensions and contains the agent's velocity, along with ray-based perception of objects around the agent's forward direction. A reward of `+1` is provided for collecting a yellow banana, and a reward of `-1` is provided for collecting a blue banana.
Run the code cell below to print some information about the environment.
```
# reset the environment
env_info = env.reset(train_mode=True)[brain_name]
# number of agents in the environment
print('Number of agents:', len(env_info.agents))
# number of actions
action_size = brain.vector_action_space_size
print('Number of actions:', action_size)
# examine the state space
state = env_info.vector_observations[0]
print('States look like:', state)
state_size = len(state)
print('States have length:', state_size)
```
### 3. Take Random Actions in the Environment
In the next code cell, you will learn how to use the Python API to control the agent and receive feedback from the environment.
Once this cell is executed, you will watch the agent's performance as it selects an action uniformly at random at each time step. A window should pop up that allows you to observe the agent as it moves through the environment.
Of course, as part of the project, you'll have to change the code so that the agent is able to use its experience to gradually choose better actions when interacting with the environment!
```
env_info = env.reset(train_mode=False)[brain_name] # reset the environment
state = env_info.vector_observations[0] # get the current state
score = 0 # initialize the score
while True:
action = np.random.randint(action_size) # select an action
env_info = env.step(action)[brain_name] # send the action to the environment
next_state = env_info.vector_observations[0] # get the next state
reward = env_info.rewards[0] # get the reward
done = env_info.local_done[0] # see if episode has finished
score += reward # update the score
state = next_state # roll over the state to next time step
if done: # exit loop if episode finished
break
print("Score: {}".format(score))
```
When finished, you can close the environment.
```
env.close()
```
### 4. It's Your Turn!
Now it's your turn to train your own agent to solve the environment! When training the environment, set `train_mode=True`, so that the line for resetting the environment looks like the following:
```python
env_info = env.reset(train_mode=True)[brain_name]
```
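If you want a scaffold to build on, the cell below sketches one possible training-loop structure with `train_mode=True`. The random action and the commented `agent.act` / `agent.step` calls are placeholders for whatever agent you implement; those names are assumptions, not part of the provided code.
```python
# Minimal training-loop sketch (assumes `env`, `brain_name` and `action_size` from the cells above).
# Replace the random choice with your own agent's action selection and learning update.
import numpy as np

n_episodes = 5                                         # keep small for a quick smoke test
scores = []
for episode in range(n_episodes):
    env_info = env.reset(train_mode=True)[brain_name]  # train_mode=True speeds up the simulation
    state = env_info.vector_observations[0]
    score = 0
    while True:
        action = np.random.randint(action_size)        # placeholder: agent.act(state) would go here
        env_info = env.step(action)[brain_name]
        next_state = env_info.vector_observations[0]
        reward = env_info.rewards[0]
        done = env_info.local_done[0]
        # placeholder: agent.step(state, action, reward, next_state, done) would go here
        state = next_state
        score += reward
        if done:
            break
    scores.append(score)
print("Average score over {} episodes: {:.2f}".format(n_episodes, np.mean(scores)))
```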
# This Colab notebook must be run on a **P100** GPU instance, otherwise it will crash. Use Cell-1 to confirm that a **P100** GPU is attached.
Cell-1: Ensure the required GPU instance (P100)
```
# no. of sockets, i.e. available slots for physical processors
!lscpu | grep 'Socket(s):'
# no. of cores per processor
!lscpu | grep 'Core(s) per socket:'
# no. of threads per core
!lscpu | grep 'Thread(s) per core'
# GPU count and name
!nvidia-smi -L
# 'nvidia-smi' shows GPU activity during deep-learning tasks; for this and the command above to work, go to 'Runtime > Change runtime type > Hardware accelerator > GPU'
!nvidia-smi
```
Cell-2: Mount Google Drive
```
from google.colab import drive
drive.mount('/content/gdrive')
```
Cell-3: Install Required Dependencies
```
!pip install efficientnet_pytorch==0.7.0
!pip install albumentations==0.4.5
!pip install torch==1.6.0+cu101 torchvision==0.7.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html -q
```
Cell-4: Run this cell to generate the weights for the current fold (estimated training time for this fold is around 2 hours)
```
import sys
sys.path.insert(0, "/content/gdrive/My Drive/zindi_cgiar_wheat_growth_stage_challenge/src_lq2_step2")
from dataset import *
from model import *
from trainer import *
from utils import *
import numpy as np
from sklearn.model_selection import StratifiedKFold
from torch.utils.data import DataLoader
config = {
'n_folds': 5,
'random_seed': 7200,
'run_fold': 4,
'model_name': 'efficientnet-b4',
'global_dim': 1792,
'batch_size': 48,
'n_core': 0,
'weight_saving_path': '/content/gdrive/My Drive/zindi_cgiar_wheat_growth_stage_challenge/train_lq2_only_effnet_b4_step2/weights/',
'resume_checkpoint_path': None,
'lr': 0.01,
'total_epochs': 100,
}
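# 'run_fold' selects which of the n_folds StratifiedKFold splits is trained in this run;
# the other folds are skipped by the `continue` statements in the loops below.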
if __name__ == '__main__':
set_random_state(config['random_seed'])
imgs = np.load('/content/gdrive/My Drive/zindi_cgiar_wheat_growth_stage_challenge/zindi_npy_data/train_imgs.npy')
labels = np.load('/content/gdrive/My Drive/zindi_cgiar_wheat_growth_stage_challenge/zindi_npy_data/train_labels.npy')
labels_quality = np.load('/content/gdrive/My Drive/zindi_cgiar_wheat_growth_stage_challenge/zindi_npy_data/train_labels_quality.npy')
imgs_lq2 = imgs[labels_quality == 2]
labels_lq2 = labels[labels_quality == 2]
labels_lq2 = labels_lq2 - 1
imgs_1 = imgs[labels_quality == 1]
labels_1 = labels[labels_quality == 1]
del imgs, labels
pred_lq1 = np.load('/content/gdrive/My Drive/zindi_cgiar_wheat_growth_stage_challenge/b4_lq2_only_step1_5fold_pred_wd.npy')
labels_lq1 = labels_1[labels_1 != 1]
pred_lq1 = pred_lq1[labels_1 != 1]
imgs_lq1 = imgs_1[labels_1 != 1]
pred_lq1 = pred_lq1[labels_lq1 != 6]
imgs_lq1 = imgs_lq1[labels_lq1 != 6]
labels_lq1 = labels_lq1[labels_lq1 != 6]
labels_lq1 = labels_lq1 - 1
del imgs_1, labels_1
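# keep a quality-1 image only if the step-1 model's 5-fold prediction agrees with its label
# to within 0.5 of a class index -- a simple pseudo-label cleaning step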
thr = 0.5
diff = np.abs(labels_lq1 - pred_lq1)
imgs_lq1 = imgs_lq1[diff <= thr]
labels_lq1 = labels_lq1[diff <= thr]
del diff, pred_lq1
skf = StratifiedKFold(n_splits=config['n_folds'], shuffle=True, random_state=config['random_seed'])
for fold_number, (train_index, val_index) in enumerate(skf.split(X=imgs_lq2, y=labels_lq2)):
if fold_number != config['run_fold']:
continue
skf = StratifiedKFold(n_splits=config['n_folds'], shuffle=True, random_state=config['random_seed'])
for fold_number, (train_index_lq1, val_index_lq1) in enumerate(skf.split(X=imgs_lq1, y=labels_lq1)):
if fold_number != config['run_fold']:
continue
train_imgs = np.concatenate( [imgs_lq2[train_index], imgs_lq1[train_index_lq1]] )
train_labels = np.concatenate( [labels_lq2[train_index], labels_lq1[train_index_lq1]] )
train_dataset = ZCDataset(
train_imgs,
train_labels,
transform=get_train_transforms(),
test=False,
)
train_loader = DataLoader(
train_dataset,
batch_size=config['batch_size'],
shuffle=True,
num_workers=config['n_core'],
drop_last=True,
pin_memory=True,
)
del train_imgs, train_labels
val_imgs = np.concatenate( [imgs_lq2[val_index], imgs_lq1[val_index_lq1]] )
val_labels = np.concatenate( [labels_lq2[val_index], labels_lq1[val_index_lq1]] )
del imgs_lq2, labels_lq2, imgs_lq1, labels_lq1
val_dataset = ZCDataset(
val_imgs,
val_labels,
transform=get_val_transforms(),
test=True,
)
val_loader = DataLoader(
val_dataset,
batch_size=config['batch_size'],
shuffle=False,
num_workers=config['n_core'],
pin_memory=True,
)
del val_imgs, val_labels
model = CNN_Model(config['model_name'], config['global_dim'])
checkpoint_dict = torch.load('/content/gdrive/My Drive/zindi_cgiar_wheat_growth_stage_challenge/train_lq2_only_effnet_b4_step1/weights/fold4/checkpoint_best_f1_score_fold4.pth')
model.load_state_dict(checkpoint_dict['Model_state_dict'])
print('Current val f1 score is {}'.format(checkpoint_dict['Current_val_f1_score']))
args = {
'model': model,
'Loaders': [train_loader,val_loader],
'metrics': {'Loss':AverageMeter, 'f1_score':PrintMeter, 'rmse':PrintMeter},
'checkpoint_saving_path': config['weight_saving_path'],
'resume_train_from_checkpoint': False,
'resume_checkpoint_path': config['resume_checkpoint_path'],
'lr': config['lr'],
'fold': fold_number,
'epochsTorun': config['total_epochs'],
'batch_size': config['batch_size'],
'test_run_for_error': False,
'problem_name': 'zindi_cigar',
}
Trainer = ModelTrainer(**args)
Trainer.fit()
```
# Sense HAT for PYNQ:Temperature and Pressure Sensor
This notebook illustrates how to read temperature and pressure sensor data using the [Sense HAT](https://www.raspberrypi.org/products/sense-hat/).
This example notebook includes the following steps.
1. import python libraries
2. select RPi switch and using Microblaze library
3. configure the I2C device
4. read single temperature and pressure
5. read and plot temperature once every 200ms for 5s

### 1. Sense HAT Introduction
The Sense HAT, which is a fundamental part of the [Astro Pi](https://astro-pi.org/) mission, allows your board to sense the world around it. It has an 8×8 RGB LED matrix and a five-button joystick, and includes the following sensors:
* Gyroscope
* Accelerometer
* Magnetometer
* Temperature
* Barometric pressure
* Humidity

### 2. Prepare the overlay
Download the overlay first, then select the shared pins to be connected to the RPi header (by default, the pins are connected to PMODA instead).
```
from pynq.overlays.base import BaseOverlay
from pynq.lib import MicroblazeLibrary
import matplotlib.pyplot as plt
from imp import reload
from time import sleep
from sensehat import *
base = BaseOverlay('base.bit')
lib = MicroblazeLibrary(base.RPI, ['i2c', 'gpio', 'xio_switch','circular_buffer'])
```
### 3. Configure the I2C device and GPIO device
Initialize the I2C device and set the I2C pins of the RPi header. Since the PYNQ-ZU board does not have a pull-up on the Reset_N pin of the HAT (GPIO25), drive that pin to 1.
```
i2c = lib.i2c_open_device(1)
lib.set_pin(2, lib.SDA1)
lib.set_pin(3, lib.SCL1)
gpio = lib.gpio_open(25)
gpio.write(1)
```
### 4. Read single temperature and pressure
The MEMS pressure sensor on the Sense HAT is the LPS25H; the cell below reads a single pressure (hPa) and temperature (°C) sample from it over I2C.
```
lps25h_sensor = lps25h.LPS25H_I2C(i2c)
press = lps25h_sensor.pressure
print('Pressure (hPa): ({0:0.3f})'.format(press))
tmp = lps25h_sensor.temperature
print('Temperature (℃): ({0:0.3f})'.format(tmp))
```
### 5. Start logging once every 200ms for 5 seconds
Executing the next cell will start logging the temperature sensor values every 200ms, and will run for 5s.
```
cnt = 0
tmp_array = []
while True:
tmp = lps25h_sensor.temperature
tmp_array.append(tmp)
cnt = cnt + 1
sleep(0.2)
if cnt > 25:
break
plt.plot(range(len(tmp_array)), tmp_array, 'ro')
plt.title("Sense Hat Temperature Plot")
min_tmp_array = min(tmp_array)
max_tmp_array = max(tmp_array)
plt.axis([0, len(tmp_array), min_tmp_array, max_tmp_array])
plt.show()
```
### 6. Clean up
Close the I2C device and switch the connection on the shared pins back to the PMODA header.
```
i2c.close()
```
Copyright (C) 2020 Xilinx, Inc
```
# default_exp bridge
#hide
from nbdev.showdoc import *
#hide
# ensures that the core library is reloaded automatically whenever it is modified
%load_ext autoreload
%autoreload 2
```
# Bridge
```
#export
from abc import ABC, abstractmethod
from bfh_mt_hs2020_rl_basics.agent import AgentBase
from typing import Iterable, Tuple, List
import numpy as np
from ignite.engine import Engine
from ptan.experience import ExperienceFirstLast
import torch
from torch.optim import Optimizer, Adam
from torch import device
class BridgeBase(ABC):
def __init__(self, agent: AgentBase, optimizer: Optimizer = None,
learning_rate: float = 0.0001,
gamma: float = 0.9,
initial_population: int = 1000,
batch_size: int = 32):
self.agent = agent
self.device = agent.device
self.gamma = gamma
self.initial_population = initial_population
self.batch_size = batch_size
if optimizer is not None:
self.optimizer = optimizer
else:
self.optimizer = Adam(self.agent.net.parameters(), lr=learning_rate)
def batch_generator(self):
self.agent.buffer.populate(self.initial_population)
while True:
self.agent.buffer.populate(1)
yield self.get_sample()
def _unpack_batch(self, batch: List[ExperienceFirstLast]):
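# convert a list of ExperienceFirstLast transitions into NumPy arrays
# (states, actions, rewards, done flags, last states) for batched tensor operations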
states, actions, rewards, dones, last_states = [],[],[],[],[]
for exp in batch:
state = np.array(exp.state)
states.append(state)
actions.append(exp.action)
rewards.append(exp.reward)
dones.append(exp.last_state is None)
if exp.last_state is None:
lstate = state # the result will be masked anyway
else:
lstate = np.array(exp.last_state)
last_states.append(lstate)
return np.array(states, copy=False), \
np.array(actions), \
np.array(rewards, dtype=np.float32), \
np.array(dones, dtype=np.uint8), \
np.array(last_states, copy=False)
@abstractmethod
def get_sample(self):
pass
@abstractmethod
def process_batch(self, engine: Engine, batchdata):
pass
```
## SimpleBridge
```
#export
from bfh_mt_hs2020_rl_basics.agent import AgentBase, SimpleAgent
from typing import Iterable, Tuple, List
import numpy as np
from ignite.engine import Engine
from ptan.experience import ExperienceFirstLast
import torch
import torch.nn as nn
from torch.optim import Optimizer, Adam
from torch import device
class SimpleBridge(BridgeBase):
def __init__(self, agent: SimpleAgent, optimizer: Optimizer = None,
learning_rate: float = 0.0001,
gamma: float = 0.9,
initial_population: int = 1000,
batch_size: int = 32):
super(SimpleBridge, self).__init__(agent, optimizer, learning_rate, gamma, initial_population, batch_size)
def get_sample(self):
return self.agent.buffer.sample(self.batch_size)
def process_batch(self, engine:Engine, batchdata):
self.optimizer.zero_grad()
loss_v = self._calc_loss(batchdata)
loss_v.backward()
self.optimizer.step()
self.agent.iteration_completed(engine.state.iteration)
return {
"loss": loss_v.item(),
"epsilon": self.agent.selector.epsilon,
}
def _calc_loss(self, batch: List[ExperienceFirstLast]):
states, actions, rewards, dones, next_states = self._unpack_batch(batch)
states_v = torch.tensor(states).to(self.device)
next_states_v = torch.tensor(next_states).to(self.device)
actions_v = torch.tensor(actions).to(self.device)
rewards_v = torch.tensor(rewards).to(self.device)
done_mask = torch.BoolTensor(dones).to(self.device)
actions_v = actions_v.unsqueeze(-1)
state_action_vals = self.agent.net(states_v).gather(1, actions_v)
state_action_vals = state_action_vals.squeeze(-1)
with torch.no_grad():
next_state_vals = self.agent.tgt_net.target_model(next_states_v).max(1)[0]
next_state_vals[done_mask] = 0.0
bellman_vals = next_state_vals.detach() * self.gamma + rewards_v
return nn.MSELoss()(state_action_vals, bellman_vals)
from bfh_mt_hs2020_rl_basics.agent import SimpleAgent
from bfh_mt_hs2020_rl_basics.env import CarEnv
def basic_simple_init(device=torch.device("cpu")) -> SimpleBridge:
env = CarEnv()
agent = SimpleAgent(env, device, gamma=0.9, buffer_size=1000)
bridge = SimpleBridge(agent, gamma=0.9)
return bridge
def simple_experiences() -> List[ExperienceFirstLast]:
return [
ExperienceFirstLast( np.array([0.0, 0.0, 0.0, 0.0], dtype=np.float32), np.int64(0), 1.0, np.array([0.5, 0.5, 0.5, 1.0], dtype=np.float32)),
ExperienceFirstLast( np.array([1.0, 1.0, 1.0, 1.0], dtype=np.float32), np.int64(1), 2.0, None)
]
def test_init_cuda():
assert basic_simple_init(torch.device("cuda")) != None
def test_init_cpu():
assert basic_simple_init(torch.device("cpu")) != None
def test_unpack():
bridge = basic_simple_init()
batch = simple_experiences()
unpacked = bridge._unpack_batch(batch)
# todo -Checks
def test_calc_loss():
bridge = basic_simple_init()
batch = simple_experiences()
loss = bridge._calc_loss(batch)
# todo -Checks
from ignite.engine import Engine
def test_process_batch(device=torch.device("cpu")):
bridge = basic_simple_init(device)
batch = simple_experiences()
bridge.process_batch(Engine(bridge.process_batch), batch)
# todo -Checks
def test_batch_generator(device=torch.device("cpu")):
# Test Iterator
bridge = basic_simple_init(device)
a = bridge.batch_generator()
nextbatch = next(a)
assert len(nextbatch) == 32
# Basis Tests
test_init_cpu()
test_init_cuda()
test_unpack()
test_calc_loss()
test_process_batch()
test_batch_generator()
test_process_batch(torch.device("cuda"))
test_batch_generator(torch.device("cuda"))
```
## RainbowBridge
```
#export
from bfh_mt_hs2020_rl_basics.agent import AgentBase, RainbowAgent
from typing import Iterable, Tuple, List
import numpy as np
from ignite.engine import Engine
from ptan.experience import ExperienceFirstLast
import torch
import torch.nn as nn
from torch.optim import Optimizer, Adam
from torch import device
class RainbowBridge(BridgeBase):
def __init__(self, agent: RainbowAgent, optimizer: Optimizer = None,
learning_rate: float = 0.0001,
gamma: float = 0.9,
initial_population: int = 1000,
batch_size: int = 32,
beta_start: float = 0.4,
beta_frames: int = 50000):
super(RainbowBridge, self).__init__(agent, optimizer, learning_rate, gamma, initial_population, batch_size)
self.beta_start = beta_start
self.beta = beta_start
self.beta_frames = beta_frames
def get_sample(self):
return self.agent.buffer.sample(self.batch_size, self.beta)
def _update_beta(self, idx):
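# linearly anneal beta from beta_start towards 1.0 over beta_frames iterations
# (importance-sampling correction for the prioritized replay buffer)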
v = self.beta_start + idx * (1.0 - self.beta_start) / self.beta_frames
self.beta = min(1.0, v)
return self.beta
def process_batch(self, engine, batch_data):
batch, batch_indices, batch_weights = batch_data
self.optimizer.zero_grad()
loss_v, sample_prios = self._calc_loss(
batch,
batch_weights,
gamma=self.gamma**self.agent.steps_count)
loss_v.backward()
self.optimizer.step()
self.agent.buffer.update_priorities(batch_indices, sample_prios)
self.agent.iteration_completed(engine.state.iteration)
return {
"loss": loss_v.item(),
"beta": self._update_beta(engine.state.iteration),
}
def _calc_loss(self, batch, batch_weights, gamma):
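# weighted squared TD-error loss; the per-sample losses are also returned
# so the prioritized replay buffer can update its priorities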
states, actions, rewards, dones, next_states = self._unpack_batch(batch)
states_v = torch.tensor(states).to(self.device)
actions_v = torch.tensor(actions).to(self.device)
rewards_v = torch.tensor(rewards).to(self.device)
done_mask = torch.BoolTensor(dones).to(self.device)
batch_weights_v = torch.tensor(batch_weights).to(self.device)
state_action_values = self.agent.net(states_v).gather(1, actions_v.unsqueeze(-1)).squeeze(-1)
with torch.no_grad():
next_states_v = torch.tensor(next_states).to(self.device)
next_state_values = self.agent.tgt_net.target_model(next_states_v).max(1)[0]
next_state_values[done_mask] = 0.0
expected_state_action_values = next_state_values.detach() * gamma + rewards_v
losses_v = batch_weights_v * (state_action_values - expected_state_action_values) ** 2
return losses_v.mean(), (losses_v + 1e-5).data.cpu().numpy()
from bfh_mt_hs2020_rl_basics.agent import RainbowAgent
from bfh_mt_hs2020_rl_basics.env import CarEnv
def basic_rainbow_init(device=torch.device("cpu")) -> RainbowBridge:
env = CarEnv()
agent = RainbowAgent(env, device, gamma=0.9, buffer_size=1000)
bridge = RainbowBridge(agent, gamma=0.9)
return bridge
def test_init_rainbow_cuda():
assert basic_rainbow_init(torch.device("cuda")) != None
def test_init_rainbow_cpu():
assert basic_rainbow_init(torch.device("cpu")) != None
def test_rainbow_calc_loss():
bridge = basic_rainbow_init()
batch = simple_experiences()
loss = bridge._calc_loss(batch, [0.5,0.5], 0.9)
# todo -Checks
from ignite.engine import Engine
def test_rainbow_process_batch(device=torch.device("cpu")):
bridge = basic_rainbow_init(device)
bridge.agent.buffer.buffer = [0,1]
batch = simple_experiences()
bridge.process_batch(Engine(bridge.process_batch), (batch, [0,1],[0.5,0.5]))
# todo -Checks
def test_rainbow_batch_generator(device=torch.device("cpu")):
# Test Iterator
bridge = basic_rainbow_init(device)
a = bridge.batch_generator()
nextbatch, idxes, weights = next(a)
assert len(nextbatch) == 32
test_rainbow_batch_generator()
test_init_rainbow_cpu()
test_init_rainbow_cuda()
test_rainbow_calc_loss()
test_rainbow_process_batch()
```
## Using the loss of active material submodel in PyBaMM
In this notebook we show how to use the loss of active material (LAM) submodel in PyBaMM. The LAM model follows equation (25) from [[6]](#References), and the stresses are calculated by equations (7)-(9) in [[1]](#References). To see all of the models and submodels available in PyBaMM, please take a look at the documentation here.
```
%pip install pybamm -q # install PyBaMM if it is not installed
import pybamm
import os
import numpy as np
import matplotlib.pyplot as plt
os.chdir(pybamm.__path__[0]+'/..')
# Here the model is applicable to SPM, SPMe and DFN
model = pybamm.lithium_ion.DFN(
options=
{
"particle": "Fickian diffusion",
"sei":"solvent-diffusion limited",
"sei film resistance":"distributed",
"sei porosity change":"false",
"particle cracking":"no cracking",
"loss of active material":"both",
}
)
chemistry = pybamm.parameter_sets.Ai2020
param = pybamm.ParameterValues(chemistry=chemistry)
param.update({"Negative electrode LAM constant propotional term": 1e-4})
param.update({"Positive electrode LAM constant propotional term": 1e-4})
total_cycles = 2
experiment = pybamm.Experiment(
[
"Discharge at 1C until 3 V",
"Rest for 600 seconds",
"Charge at 1C until 4.2 V",
"Hold at 4.199 V for 600 seconds",
] * total_cycles
)
sim1 = pybamm.Simulation(
model,
experiment = experiment,
parameter_values = param,
solver = pybamm.CasadiSolver(dt_max=100),
)
solution = sim1.solve()
t_all = solution["Time [h]"].entries
v_all = solution["Terminal voltage [V]"].entries
I_if_n = solution["Sum of x-averaged negative electrode interfacial current densities"].entries
I_if_p = solution["Sum of x-averaged positive electrode interfacial current densities"].entries
# plotting the results
f, (ax1, ax2, ax3) = plt.subplots(1, 3 ,figsize=(18,4))
ax1.plot(t_all, v_all, label="loss of active material model")
ax1.set_xlabel("Time [h]")
ax1.set_ylabel("Terminal voltage [V]")
#ax1.legend()
ax2.plot(t_all, I_if_p, label="loss of active material model")
ax2.set_xlabel("Time [h]")
ax2.set_ylabel("Positive electrode interfacial current densities")
#ax2.legend()
#ax2.set_xlim(6000,7000)
ax3.plot(t_all, I_if_n, label="loss of active material model")
ax3.set_xlabel("Time [h]")
ax3.set_ylabel("Negative electrode interfacial current densities")
ax3.legend(bbox_to_anchor=(1, 1.2))
#ax3.set_xlim(10000,15000)
# f.tight_layout(pad=1.0)
plt.show()
LAM_n_all = solution["X-averaged negative electrode active material volume fraction"].entries
LAM_p_all = solution["X-averaged positive electrode active material volume fraction"].entries
f, (ax1, ax2) = plt.subplots(1, 2 ,figsize=(10,4))
ax1.plot(t_all, LAM_n_all, label="loss of active material model")
ax1.set_xlabel("Time [h]")
ax1.set_ylabel("X-averaged negative electrode active material volume fraction")
ax2.plot(t_all, LAM_p_all, label="loss of active material model")
ax2.set_xlabel("Time [h]")
ax2.set_ylabel("X-averaged positive electrode active material volume fraction")
f.tight_layout(pad=3.0)
plt.show()
S_t_n_all = solution["X-averaged negative particle surface tangential stress"].entries
S_t_p_all = solution["X-averaged positive particle surface tangential stress"].entries
f, (ax1, ax2) = plt.subplots(1, 2 ,figsize=(10,4))
ax1.plot(t_all, S_t_n_all, label="loss of active material model")
ax1.set_xlabel("Time [h]")
ax1.set_ylabel("X-averaged negative tangential stress/ $E_n$")
ax2.plot(t_all, S_t_p_all, label="loss of active material model")
ax2.set_xlabel("Time [h]")
ax2.set_ylabel("X-averaged positive tangential stress/ $E_p$")
f.tight_layout(pad=3.0)
plt.show()
k1 = 1e-4
k2 = 1e-3
k3 = 1e-2
param.update({"Positive electrode LAM constant propotional term": k2})
param.update({"Negative electrode LAM constant propotional term": k2})
sim2 = pybamm.Simulation(
model,
experiment=experiment,
parameter_values=param,
solver=pybamm.CasadiSolver(dt_max=100),
)
solution2 = sim2.solve()
param.update({"Positive electrode LAM constant propotional term": k3})
param.update({"Negative electrode LAM constant propotional term": k3})
sim3 = pybamm.Simulation(
model,
experiment=experiment,
parameter_values=param,
solver=pybamm.CasadiSolver(dt_max=100),
)
solution3 = sim3.solve()
t_all2 = solution2["Time [h]"].entries
t_all3 = solution3["Time [h]"].entries
LAM_n_all2 = solution2["X-averaged negative electrode active material volume fraction"].entries
LAM_p_all2 = solution2["X-averaged positive electrode active material volume fraction"].entries
LAM_n_all3 = solution3["X-averaged negative electrode active material volume fraction"].entries
LAM_p_all3 = solution3["X-averaged positive electrode active material volume fraction"].entries
f, (ax1, ax2) = plt.subplots(1, 2 ,figsize=(10,4))
ax1.plot(t_all, LAM_n_all, label="k_LAM = "+ str(k1))
ax1.plot(t_all2, LAM_n_all2, label="k_LAM = "+ str(k2))
ax1.plot(t_all3, LAM_n_all3, label="k_LAM = "+ str(k3))
ax1.set_xlabel("Time [h]")
ax1.set_ylabel("X-averaged negative electrode active material volume fraction")
ax1.legend()
ax2.plot(t_all, LAM_p_all, label="k_LAM = "+ str(k1))
ax2.plot(t_all2, LAM_p_all2, label="k_LAM = "+ str(k2))
ax2.plot(t_all3, LAM_p_all3, label="k_LAM = "+ str(k3))
ax2.set_xlabel("Time [h]")
ax2.set_ylabel("X-averaged positive electrode active material volume fraction")
f.tight_layout(pad=3.0)
ax2.legend()
plt.show()
t_all2 = solution2["Time [h]"].entries
t_all3 = solution3["Time [h]"].entries
a_n_all = solution["X-averaged negative electrode surface area to volume ratio"].entries
a_p_all = solution["X-averaged positive electrode surface area to volume ratio"].entries
a_n_all2 = solution2["X-averaged negative electrode surface area to volume ratio"].entries
a_p_all2 = solution2["X-averaged positive electrode surface area to volume ratio"].entries
a_n_all3 = solution3["Negative electrode surface area to volume ratio"].entries[-1,:]
a_p_all3 = solution3["Positive electrode surface area to volume ratio"].entries[0,:]
f, (ax1, ax2) = plt.subplots(1, 2 ,figsize=(10,4))
ax1.plot(t_all, a_n_all, label="k_LAM = "+ str(k1))
ax1.plot(t_all2, a_n_all2, label="k_LAM = "+ str(k2))
ax1.plot(t_all3, a_n_all3, label="k_LAM = "+ str(k3))
ax1.set_xlabel("Time [h]")
ax1.set_ylabel("X-averaged negative electrode surface area to volume ratio")
ax1.legend()
ax2.plot(t_all, a_p_all, label="k_LAM = "+ str(k1))
ax2.plot(t_all2, a_p_all2, label="k_LAM = "+ str(k2))
ax2.plot(t_all3, a_p_all3, label="k_LAM = "+ str(k3))
ax2.set_xlabel("Time [h]")
ax2.set_ylabel("X-averaged positive electrode surface area to volume ratio")
f.tight_layout(pad=3.0)
ax2.legend()
plt.show()
v_all = solution["Terminal voltage [V]"].entries
v_all2 = solution2["Terminal voltage [V]"].entries
v_all3 = solution3["Terminal voltage [V]"].entries
I_if_n = solution["Sum of x-averaged negative electrode interfacial current densities"].entries
I_if_p = solution["Sum of x-averaged positive electrode interfacial current densities"].entries
I_if_n2 = solution2["Sum of x-averaged negative electrode interfacial current densities"].entries
I_if_p2 = solution2["Sum of x-averaged positive electrode interfacial current densities"].entries
I_if_n3 = solution3["Sum of x-averaged negative electrode interfacial current densities"].entries
I_if_p3 = solution3["Sum of x-averaged positive electrode interfacial current densities"].entries
f, (ax1, ax2, ax3) = plt.subplots(1, 3 ,figsize=(18,5))
ax1.plot(t_all, v_all, label="k_LAM = "+ str(k1))
ax1.plot(t_all2, v_all2, label="k_LAM = "+ str(k2))
ax1.plot(t_all3, v_all3, label="k_LAM = "+ str(k3))
ax1.set_xlabel("Time [h]")
ax1.set_ylabel("Terminal voltage [V]")
#ax1.legend()
#ax1.set_xlim(0.5,0.8)
ax2.plot(t_all, I_if_n, label="k_LAM = "+ str(k1))
ax2.plot(t_all2, I_if_n2, label="k_LAM = "+ str(k2))
ax2.plot(t_all3, I_if_n3, label="k_LAM = "+ str(k3))
ax2.set_xlabel("Time [h]")
ax2.set_ylabel("Negative electrode interfacial current densities")
#ax2.legend()
#ax2.set_xlim(6000,7000)
ax2.set_ylim(2.2155,2.2165)
ax3.plot(t_all, I_if_p, label="k_LAM = "+ str(k1))
ax3.plot(t_all2, I_if_p2, label="k_LAM = "+ str(k2))
ax3.plot(t_all3, I_if_p3, label="k_LAM = "+ str(k3))
ax3.set_xlabel("Time [h]")
ax3.set_ylabel("Positive electrode interfacial current densities")
ax3.legend(bbox_to_anchor=(0.68, 1.3), ncol=2)
#ax3.set_xlim(2,2.8)
#ax3.set_ylim(2.492,2.494)
ax3.set_ylim(-2.494,-2.492)
plt.tight_layout(pad=1.0)
```
## References
The relevant papers for this notebook are:
```
pybamm.print_citations()
```
# Principal Component Analysis
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
```
## Data Loading
```
PATH = "../../../Dimensionality_Reduction/PCA/Python/Wine.csv"
dataset = pd.read_csv(PATH)
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values
```
## Train Test Split
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=0.20,
random_state=42)
```
## Feature Scaling
```
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
```
## PCA
```
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
X_train = pca.fit_transform(X_train)
X_test = pca.transform(X_test)
```
## Logistic Regression
```
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(random_state=42)
classifier.fit(X_train, y_train)
```
## Prediction
```
y_pred = classifier.predict(X_test)
print(np.concatenate((y_test.reshape(len(y_test),1), y_pred.reshape(len(y_pred),1)), axis=1))
```
## Confusion Matrix
```
import seaborn as sn
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
df_cm = pd.DataFrame(cm, range(len(cm[0])), range(len(cm[0])))
sn.set(font_scale=1.4)
sn.heatmap(df_cm, annot=True, annot_kws={"size": 16})
plt.show()
```
## Accuracy Score
```
from sklearn.metrics import accuracy_score
accuracy_score(y_test, y_pred)
```
## Visualisation
```
from matplotlib.colors import ListedColormap
X_set, y_set = X_train, y_train
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01),
np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
alpha = 0.75, cmap = ListedColormap(('red', 'green', 'blue')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
c = ListedColormap(('red', 'green', 'blue'))(i), label = j)
plt.title('Logistic Regression (Training set)')
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.legend()
plt.show()
from matplotlib.colors import ListedColormap
X_set, y_set = X_test, y_test
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01),
np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
alpha = 0.75, cmap = ListedColormap(('red', 'green', 'blue')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
c = ListedColormap(('red', 'green', 'blue'))(i), label = j)
plt.title('Logistic Regression (Test set)')
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.legend()
plt.show()
```
# Resfeber gas data set exporation
Very basic preliminary exploration of our primary gasoline price dataset. For future reference, the data set can also be accessed through EIA's API (a hedged sketch follows the source link below).
[source](https://www.eia.gov/dnav/pet/pet_pri_gnd_a_epm0_pte_dpgal_w.htm)
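The sketch below shows one way to pull the same weekly series programmatically. The EIA v2 route, parameter names, and response layout used here are assumptions based on EIA's public API documentation and may need adjusting; a free API key from eia.gov is required.
```
# Hedged sketch: fetch weekly retail gasoline prices from the EIA open-data API.
# The route, parameters, and response layout below are assumptions, not verified here.
import requests
import pandas as pd

API_KEY = "YOUR_EIA_API_KEY"  # placeholder; register for a key at eia.gov
url = "https://api.eia.gov/v2/petroleum/pri/gnd/data/"  # assumed v2 route
params = {
    "api_key": API_KEY,
    "frequency": "weekly",
    "data[0]": "value",
}
resp = requests.get(url, params=params, timeout=30)
resp.raise_for_status()
records = resp.json().get("response", {}).get("data", [])  # assumed response layout
api_df = pd.DataFrame(records)
api_df.head()
```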
* Explore
* Clean
* Regression prediction
* Summary and Next steps
```
import pandas as pd
df = pd.read_excel('PET_PRI_GND_A_EPM0_PTE_DPGAL_W.xls', sheet_name = 2)
df.head()
header = df.iloc[1]
df = df[2:]
df.columns = header
df.head()
df.shape
df.describe()
# Okay lets maybe split the date up into year month and day.
df['Date'].astype('datetime64[ns]')
df['Date'] = df['Date'].astype('datetime64[ns]')
df.head()
df['Month'] = df['Date'].apply(lambda x: x.month)
df['Day'] = df['Date'].apply(lambda x: x.day)
df['Year'] = df['Date'].apply(lambda x: x.year)
df.head()
truncated_headers = ['date', 'east_coast', 'new_england', 'central_atlantic',
'lower_atlantic', 'midwest', 'gulf_coast',
'rocky_mountain', 'west_coast', 'west_coast_no_cal',
'month', 'day', 'year']
df.columns = truncated_headers
df.head()
df.isnull().sum()
```
Based on this, I suspect that only the first 267 entries have NaNs, so let's check whether that's the case. If so, I can evaluate the dates and either cut those entries out or drop the west_coast_no_cal column.
```
df.iloc[267:].isnull().sum()
clean = df.iloc[267:]
clean.head()
# I'm going to iterate through each region building a model testing
# and validating. Need a list of the regions for that.
y_list = clean.columns
y_list = y_list.drop(['date', 'month', 'day', 'year'])
y_list
X = clean[['month', 'day', 'year']]
X.head()
```
## Predictions
For each region I'm going to use a mean guess as the baseline. Then I'm going to train a linear regression model and compare the mean absolute error of each model against the baseline.
```
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.dummy import DummyRegressor
from sklearn.metrics import mean_absolute_error, r2_score, mean_squared_error
import numpy as np
for target in y_list:
y = clean[target]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.24,
random_state = 42)
baseline = DummyRegressor(strategy = 'mean')
baseline.fit(X_train, y_train)
y_pred_base = baseline.predict(X_test)
mae_base = mean_absolute_error(y_test, y_pred_base)
mse_base = mean_squared_error(y_test, y_pred_base)
rmse_base = np.sqrt(mse_base)
r2_base = r2_score(y_test, y_pred_base)
model = LinearRegression()
model.fit(X_train, y_train)
y_pred_train = model.predict(X_train)
mae_train = mean_absolute_error(y_train, y_pred_train)
mse_train = mean_squared_error(y_train, y_pred_train)
rmse_train = np.sqrt(mse_train)
r2_train = r2_score(y_train, y_pred_train)
y_pred_test = model.predict(X_test)
mae_test = mean_absolute_error(y_test, y_pred_test)
mse_test = mean_squared_error(y_test, y_pred_test)
rmse_test = np.sqrt(mse_test)
r2_test = r2_score(y_test, y_pred_test)
print(f'@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@')
print(f'Evaluating {target} Region Gas Predictions')
print()
print(f'@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@')
print(f'------------------------------------------------------------------')
print(f' Mean Absolute Error')
print(f'------------------------------------------------------------------')
    print(f'Baseline error: ${mae_base:.2f} per gallon')
    print(f'Train error: ${mae_train:.2f} per gallon')
    print(f'Test error: ${mae_test:.2f} per gallon')
print(f'------------------------------------------------------------------')
print(f' Mean Squared Error')
print(f'------------------------------------------------------------------')
    print(f'Baseline error: {mse_base:.2f} (dollars per gallon)^2')
    print(f'Train error: {mse_train:.2f} (dollars per gallon)^2')
    print(f'Test error: {mse_test:.2f} (dollars per gallon)^2')
print(f'------------------------------------------------------------------')
print(f' Root Mean Squared Error')
print(f'------------------------------------------------------------------')
    print(f'Baseline error: ${rmse_base:.2f} per gallon')
    print(f'Train error: ${rmse_train:.2f} per gallon')
    print(f'Test error: ${rmse_test:.2f} per gallon')
print(f'------------------------------------------------------------------')
print(f' R-Squared (coefficient of determination)')
print(f'------------------------------------------------------------------')
print(f'Baseline r2: {r2_base:.2f}')
print(f'Train r2: {r2_train:.2f}')
print(f'Test r2: {r2_test:.2f}')
print(f'------------------------------------------------------------------')
print()
print()
```
### Summary
First off, it looks like none of the models are overfitting. I base this on the small gap between training and testing error.
The R2 scores are lower than many people would like at a surface-level glance. But gas prices are not a natural phenomenon; they carry a significant amount of human input, and R2 is a weak evaluation metric when the target is driven by human behavior. So I'm not particularly concerned about R2 just yet.
R2 aside, we are looking at a 10-23 cents per gallon improvement in MAE over the baseline with just linear regression. For gas prices, that's a pretty significant improvement. A simple linear regression model works reasonably well for an MVP.
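As a quick sanity check on that overfitting claim, a small helper like the one below makes the train/test gap explicit (the numbers in the example are illustrative, not taken from the run above).
```
# Illustrative helper: relative gap between test and train MAE.
# A large positive gap would suggest overfitting; the runs above all show small gaps.
def overfit_gap(mae_train, mae_test):
    return (mae_test - mae_train) / mae_train

# Example with made-up numbers (not from the run above):
print(overfit_gap(0.20, 0.21))  # 0.05, i.e. roughly a 5% gap
```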
### Next steps
This regional gas predictor will ultimately make more accurate predictions than a single national one, so it's probably important that I figure out the logic for determining a region. These seem like the next best targets to focus on.
#### Make Region selector
* A function for determining which region a destination belongs to (a rough sketch follows at the end of this section)
* Comparing the regions between each destination, and logic for when and where to switch regions
* Calculating route distance between each region (or is this passed in from the front end team?)
* Calculating total gas expenditure between two destinations, and summing the costs across all destinations
#### Set up testing framework for improving gas predictions
* Outline different models I'd like to try (boosted decision tree, nearest neighbors, etc.)
* Create a matrix and a consistent evaluation metric (MAE)
* Process each test and evaluate each model
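A minimal sketch of the region selector and trip-cost pieces described above, assuming destinations arrive with a US state code. The state-to-region mapping is an illustrative placeholder, not the dataset's actual regional definitions.
```
# Rough sketch of the planned region selector (illustrative mapping only).
REGION_BY_STATE = {
    "MA": "new_england",
    "NY": "central_atlantic",
    "FL": "lower_atlantic",
    "IL": "midwest",
    "TX": "gulf_coast",
    "CO": "rocky_mountain",
    "CA": "west_coast",
    "WA": "west_coast_no_cal",
}

def region_for_destination(state_code):
    """Map a destination's state code to one of the dataset's price regions."""
    return REGION_BY_STATE.get(state_code, "east_coast")  # broad fallback region

def leg_gas_cost(miles, mpg, price_per_gallon):
    """Gas expenditure for a single leg of the trip."""
    return miles / mpg * price_per_gallon

# e.g. a 300-mile leg at 30 mpg with gas at $2.50/gallon:
# leg_gas_cost(300, 30, 2.50) -> 25.0
```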
```
from datasets import Dataset
import pandas as pd
df = pd.read_csv('data/small_corpus_neural.csv',index_col=0)
df['reviews']= df['reviews'].astype(str)
def score_to_Target(value):
if value >= 5:
return 2
if value <= 4 and value >= 2:
return 1
else:
return 0
df['rating_class'] = df['ratings'].apply(lambda x:score_to_Target(x))
from sklearn.utils import shuffle
df = shuffle(df)
from sklearn.model_selection import train_test_split
train_df, test_df = train_test_split(df,
stratify=df["rating_class"],
random_state=42)
from transformers import DistilBertTokenizerFast
tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased')
train_text = list(train_df['reviews'])
val_text = list(test_df['reviews'])
train_encodings = tokenizer(train_text, truncation=True, padding=True)
val_encodings = tokenizer(val_text, truncation=True, padding=True)
train_labels = list(train_df['rating_class'])
val_labels = list(test_df['rating_class'])
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
class SentimentAnalyserDataset(torch.utils.data.Dataset):
def __init__(self, encodings, labels):
self.encodings = encodings
self.labels = labels
def __getitem__(self, idx):
item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
item['labels'] = torch.tensor(self.labels[idx])
return item
def __len__(self):
return len(self.labels)
train_dataset = SentimentAnalyserDataset(train_encodings, train_labels)
val_dataset = SentimentAnalyserDataset(val_encodings, val_labels)
len(val_dataset)
from sklearn.metrics import accuracy_score, f1_score
def compute_metrics(pred):
labels = pred.label_ids
preds = pred.predictions.argmax(-1)
f1 = f1_score(labels, preds, average="weighted")
acc = accuracy_score(labels, preds)
return {"accuracy": acc, "f1": f1}
from transformers import DistilBertForSequenceClassification, Trainer, TrainingArguments
batch_size = 16
logging_steps = len(train_dataset) // batch_size
training_args = TrainingArguments(output_dir="results",
num_train_epochs=2,
learning_rate=2e-5,
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
load_best_model_at_end=True,
metric_for_best_model="f1",
weight_decay=0.01,
evaluation_strategy="epoch",
disable_tqdm=False,
logging_steps=logging_steps,)
from transformers import AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained('distilbert-base-uncased',num_labels=3).to(device)
from transformers import Trainer
trainer = Trainer(model=model, args=training_args,
compute_metrics=compute_metrics,
train_dataset=train_dataset,
eval_dataset=val_dataset)
trainer.train();
results = trainer.evaluate()
results
def model_init():
return AutoModelForSequenceClassification.from_pretrained(
'distilbert-base-uncased', num_labels=3)
len(val_dataset)-200
train_subset,_= torch.utils.data.random_split(train_dataset, [1000, 32750])
val_subset,_ = torch.utils.data.random_split(val_dataset, [200, 11050])
torch.cuda.empty_cache()
trainer = Trainer(model_init=model_init, args=training_args,
compute_metrics=compute_metrics, train_dataset=train_subset,
eval_dataset=val_subset)
best_run = trainer.hyperparameter_search(n_trials=3, direction="maximize")
for key, value in best_run.hyperparameters.items():
setattr(trainer.args, key, value)
trainer.train_dataset = train_dataset
trainer.eval_dataset = val_dataset
trainer.train();
```
```
import os
import pandas as pd
from pandas_profiling import ProfileReport
from pandas_profiling.utils.cache import cache_file
from collections import Counter
import seaborn as sn
import matplotlib.pyplot as plt
import random
import statistics
import numpy as np
box_file_dir = os.path.join(os.getcwd(), "..", "..", "Box")
file_path_csv = os.path.join(box_file_dir, "covid_pts_enc_level_labs_dx_2021-02-02_deid.csv")
df = pd.read_csv(file_path_csv, index_col=False)
df.head()
def latinx(row):
if row.ethnicity_display == 'Hispanic or Latino' and row.race_display == 'White':
return "Hispanic"
elif row.ethnicity_display == 'Not Hispanic or Latino' and row.race_display == 'White':
return "White"
else:
return row.race_display
df['race_display'] = df.apply(lambda row: latinx(row), axis=1)
vent_df = df[~df['vent_hours_summed'].isnull()]
len(vent_df)
Counter(vent_df['race_display'])
icu_df = df[~df['icu_hours_summed'].isnull()]
Counter(icu_df['race_display'])
working_df = icu_df[~icu_df['qSOFA_score'].isnull()]
Counter(working_df['race_display'])
data = icu_df[['age_at_admit', 'pO2_Art',
'qSOFA_score','race_display',
'vent_hours_summed', 'zip_cust_table', 'heartfailure_com_flag',
'cancer_com_flag','gender','WBC','Mean_Arterial_Pressure',
'Bili_Total','CAD_com_flag','CKD_com_flag','COPD_com_flag',
'Creatinine', 'FiO2/Percent','Glasgow_Coma_Score','diabetes_com_flag',
'hypertension_com_flag','length_of_stay','discharge_disposition_display','Platelet', 'deid_empi_encounter']]
data.head()
working_df[['race_display', 'age_at_admit']].groupby('race_display').agg(['mean', 'count'])
# only 236 patients with all tests
allo_df = data[['pO2_Art', 'Creatinine', 'FiO2/Percent',
'Glasgow_Coma_Score', 'Platelet', 'Mean_Arterial_Pressure',
'Bili_Total', 'deid_empi_encounter']].dropna()
list_of_patients = list(allo_df['deid_empi_encounter'])
adjusted_patients = data[data['deid_empi_encounter'].isin(list_of_patients)]
def calculate_sofa(row):
count = 0
    # need to implement FiO2/pO2 (respiratory component)
if row.Platelet >= 100 and row.Platelet <= 149:
count += 1
elif row.Platelet >= 50 and row.Platelet <= 99:
count += 2
elif row.Platelet >= 20 and row.Platelet <= 49:
count += 3
elif row.Platelet < 20:
count += 4
# Glasgow
if row.Glasgow_Coma_Score == 13 or row.Glasgow_Coma_Score == 14:
count += 1
elif row.Glasgow_Coma_Score >= 10 and row.Glasgow_Coma_Score <= 12:
count += 2
elif row.Glasgow_Coma_Score >= 6 and row.Glasgow_Coma_Score <= 9:
count += 3
elif row.Glasgow_Coma_Score < 6:
count += 4
# Bilirubin
if float(row.Bili_Total) >= 1.2 and float(row.Bili_Total) <= 1.9:
count += 1
elif float(row.Bili_Total) >= 2.0 and float(row.Bili_Total) <= 5.9:
count += 2
elif float(row.Bili_Total) >= 6.0 and float(row.Bili_Total) <= 11.9:
count += 3
elif float(row.Bili_Total) >= 12.0:
count += 4
    # Need to implement mean arterial pressure later
# Creatinine
if row.Creatinine >= 1.2 and row.Creatinine <= 1.9:
count += 1
elif row.Creatinine >= 2.0 and row.Creatinine <= 3.4:
count += 2
elif row.Creatinine >= 3.5 and row.Creatinine <= 4.9:
count += 3
elif row.Creatinine >= 5.0:
count += 4
return count
allo_df['sofa'] = allo_df.apply(lambda row: calculate_sofa(row), axis = 1)
adjusted_patients['sofa'] = allo_df.apply(lambda row: calculate_sofa(row), axis = 1)
allo_df['sofa'].describe()
adjusted_patients['sofa'].describe()
#https://www.mdcalc.com/sequential-organ-failure-assessment-sofa-score#evidence
sofa_mortality_calibration = {
0: 0,
1: 0 ,
2: 6.4,
3: 6.4,
4: 20.2,
5: 20.2,
6: 21.5,
7: 21.5,
8: 33.3,
9: 33.3 ,
10: 50.0,
11: 50.0 ,
12: 95.2,
13: 95.2 ,
14: 95.2 ,
}
# still need to corroborate these values
# digging into various studies measuring qSOFA for different comorbidities
# Min linked a paper about influenza
# can use these values
qsofa_mortality_calibration = {
0: 0.6,
1: 5 ,
2: 10,
3: 24,
}
working_df.dtypes
def comorbidity_count(row):
count = 0
if row.COPD_com_flag == 1:
count += 1
if row.asthma_com_flag == 1:
count += 1
if row.diabetes_com_flag == 1:
count += 1
if row.hypertension_com_flag == 1:
count += 1
if row.CAD_com_flag == 1:
count += 1
if row.heartfailure_com_flag == 1:
count += 1
if row.CKD_com_flag == 1:
count += 1
if row.cancer_com_flag == 1:
count += 1
return count
com_cols = ['COPD_com_flag', 'asthma_com_flag', 'diabetes_com_flag',
            'hypertension_com_flag', 'CAD_com_flag', 'heartfailure_com_flag',
            'CKD_com_flag', 'cancer_com_flag']
working_df[com_cols] = working_df[com_cols].fillna(0)
working_df[com_cols] = working_df[com_cols].astype(int)
working_df['total_comorbidities'] = working_df.apply(lambda row: comorbidity_count(row), axis=1)
working_df['cancer_com_flag'].dtype
working_df['has_comorbidity'] = working_df.total_comorbidities.apply(lambda x: 1 if x >= 1 else 0)
working_df['life_years'] = working_df.age_at_admit.apply(lambda x: 100 - x)
Counter(adjusted_patients['discharge_disposition_display'])
class Allocation(object):
# Code will be adjusted for SOFA. Currently using qSOFA
# Only looking at State Level CSC for vent allocation
def __init__(self, patients, scarcity, sofa_calibration):
self.patients = patients.copy()
self.patients['death'] = [0 for _ in range(len(self.patients))]
self.patients['allocated_vent'] = ["no" for _ in range(len(self.patients))]
self.num_vents = int(len(patients) * scarcity)
self.mortality_model = sofa_calibration
    def allocate(self, row):
        # Sample mortality for a patient who receives a ventilator, using the
        # qSOFA-calibrated mortality model; allocation status is always 'yes' here.
        prob = self.mortality_model[row.qSOFA_score]
        death = np.random.binomial(size=1, n=1, p=prob * .01)[0]
        return death, 'yes'
def check_expiration(self, df):
temp_df = df.copy()
for i, row in df.iterrows():
row = row.copy()
if (pd.isna(row.vent_hours_summed)) or row.discharge_disposition_display == 'Expired':
temp_df.loc[i, 'death'] = 1
else:
temp_df.loc[i, 'death'] = 0
return temp_df
def __run_allocation(self, df2):
for i, row in df2.iterrows():
row = row.copy()
if self.num_vents == 0:
#print('out')
break
mortality, allocate_cond = self.allocate(row)
df2.loc[i, 'death'] = mortality
df2.loc[i, 'allocated_vent'] = allocate_cond
self.num_vents -= 1
non_allocated = df2[df2['allocated_vent']=='no']
allocated = df2[df2['allocated_vent']=='yes']
adj_df = self.check_expiration(non_allocated)
return pd.concat([allocated, adj_df])
def lottery(self):
temp_patients = self.patients.copy()
temp_patients.sample(frac=1)
out_df = self.__run_allocation(temp_patients)
return out_df
def youngest(self):
temp_patients = self.patients.copy()
temp_patients.sort_values(by=['age_at_admit'], ascending=True, inplace=True)
out_df = self.__run_allocation(temp_patients)
return out_df
# pandas function
def __age_categorization(self, row):
if row.age_at_admit < 50:
return 1
elif row.age_at_admit < 70:
return 2
elif row.age_at_admit < 85:
return 3
else:
return 4
def maryland(self):
temp_patients = self.patients.copy()
temp_patients['age_cat'] = temp_patients.apply(lambda row: self.__age_categorization(row)
, axis=1)
temp_patients.sort_values(by=['qSOFA_score', 'total_comorbidities', 'age_cat'],
ascending=[True, True, True], inplace=True)
out_df = self.__run_allocation(temp_patients)
return out_df
def new_york(self):
temp_patients = self.patients.copy()
groups = [df for _, df in temp_patients.groupby('qSOFA_score')]
random.shuffle(groups)
grouped = pd.concat(groups).reset_index(drop=True)
grouped = grouped.sort_values('qSOFA_score', ascending=True)
out_df = self.__run_allocation(grouped)
return out_df
def max_lives_saved(self):
temp_patients = self.patients.copy()
temp_patients.sort_values(by=['qSOFA_score'], ascending=True, inplace=True)
out_df = self.__run_allocation(temp_patients)
return out_df
def max_life_years(self):
temp_patients = self.patients.copy()
temp_patients.sort_values(by=['qSOFA_score', 'life_years'], ascending=[True,False], inplace=True)
out_df = self.__run_allocation(temp_patients)
return out_df
def sickest_first(self):
temp_patients = self.patients.copy()
temp_patients.sort_values(by=['qSOFA_score'], ascending=False, inplace=True)
out_df = self.__run_allocation(temp_patients)
return out_df
zip_df = pd.read_csv('zip_code_data.csv', index_col=False)
# use replace rather than strip: strip('ZCTA5 ') would also remove a trailing '5' from zip codes
zip_df['zip_code'] = zip_df.zip_code.apply(lambda x: x.replace('ZCTA5 ', ''))
working_df = pd.merge(working_df, zip_df, left_on='zip_cust_table', right_on='zip_code', how='inner')
```
### Baseline
```
Counter(working_df['discharge_disposition_display'])
def latinx(row):
if row.ethnicity_display == 'Hispanic or Latino' and row.race_display == 'White':
return "Hispanic"
elif row.ethnicity_display == 'Not Hispanic or Latino' and row.race_display == 'White':
return "White"
else:
return row.race_display
# apply on working_df (not df) so row indices align with the merged frame
working_df['race_display'] = working_df.apply(lambda row: latinx(row), axis=1)
# later think about the mortality rate as well
# summarize what I'm going to do and send to Victoria
len(working_df)
# compute other descriptive stats for this groupby
# final analysis
working_df[['race_display', 'age_at_admit']].groupby('race_display').agg(['mean', 'std', 'count']).round(2)
Counter(working_df['qSOFA_score'])
len(working_df['zip_cust_table'].unique())
# zip code demo eda
c = Counter(working_df['zip_cust_table'])
alist = c.most_common()
sum_patient = list(filter(lambda x: x[0][2] == '7', alist))
print(len(sum_patient))
num_p = 0
for x in sum_patient:
num_p += x[1]
num_p
c = Counter(working_df['zip_cust_table'])
alist = c.most_common()
n_alist = list(filter(lambda x: x[1] > 1, alist))
print(len(n_alist))
#n_alist
sn_plot = sn.distplot(working_df['qSOFA_score'])
plt.title('Distribution of qSOFA Score For ICU Patients')
plt.xlabel('qSOFA Score')
plt.savefig("final_figures/qSOFA_distribution.png")
race_count = Counter(working_df['race_display'])
race_count
working_df['poverty_rate'] = working_df['poverty_rate'].astype(float)
working_df['median_income'] = working_df['median_income'].astype(float)
bins = [0, 6, 12, 18,24,30,36,40]
bin_conv = [i+1 for i in range(len(bins))]
working_df['zip_binned_by_poverty'] = np.searchsorted(bins, working_df['poverty_rate'].values)
#temp_df['zip_binned_by_poverty'] = np.searchsorted(bins, temp_df['poverty_rate'].values)
bins = [20000, 40000, 60000, 80000,100000]
bin_conv = [i+1 for i in range(len(bins))]
working_df['zip_binned_by_income'] = np.searchsorted(bins, working_df['median_income'].values)
expired_df = working_df[working_df['discharge_disposition_display']=='Expired']
expired_df
# From Min
# Think about a table or graph that we would like
# to have
Counter(expired_df['race_display'])
# number of patients who were on a ventilator
vent_df = working_df[~working_df['vent_hours_summed'].isnull()]
vent_df
# 716 icu patients
# 148 patients who died
# 289 patients on vents
# Number of patients who died on vent
vent_df[vent_df['discharge_disposition_display']=='Expired']
# 114 vent patients died
# 175 vent patients survived
Counter(vent_df[vent_df['discharge_disposition_display']=='Expired']['zip_binned_by_poverty'])
vent_df[vent_df['discharge_disposition_display']=='Expired']['zip_binned_by_poverty'].hist()
vent_df[vent_df['discharge_disposition_display']!='Expired']['zip_binned_by_poverty']
vent_df[vent_df['discharge_disposition_display']!='Expired']['zip_binned_by_poverty'].hist()
vent_df[vent_df['discharge_disposition_display']!='Expired']['zip_binned_by_poverty']
Counter(vent_df[vent_df['discharge_disposition_display']=='Expired']['qSOFA_score'])
vent_df[vent_df['discharge_disposition_display']=='Expired']['qSOFA_score'].hist()
Counter(vent_df[vent_df['discharge_disposition_display']!='Expired']['qSOFA_score'])
vent_df[vent_df['discharge_disposition_display']!='Expired']['qSOFA_score'].hist()
# 114
vent_df[vent_df['discharge_disposition_display']=='Expired']
len(working_df)
thresholds = np.linspace(0,1,11)
thresholds
race_count
baseline_deaths = Counter(working_df[working_df['discharge_disposition_display'] == 'Expired']['race_display'])
baseline_deaths
race_count
def death_percent(row):
count = race_count[row.race]
return 100 * (row.death_counts / count)
avg_base_death = statistics.mean(baseline_deaths.values())
baseline_deaths2 = pd.DataFrame(baseline_deaths.items(), columns=['race', 'death_counts'])
baseline_deaths2['threshold'] = 1.1
baseline_deaths2['avg_deaths'] = avg_base_death
baseline_deaths2['death_percent'] = baseline_deaths2.apply(lambda row: death_percent(row), axis=1)
baseline_deaths2['allocation_type'] = 'Baseline'
baseline_deaths2
ITER = 1000
```
### Lottery
```
iters = ITER
df_list_lot = []
df_list_lot.append(baseline_deaths2)
all_iters = []
for threshold in thresholds:
sums = 0
dict_list = []
df_inner_list = []
iter_df_list = []
for _ in range(iters):
allocate = Allocation(working_df, threshold, qsofa_mortality_calibration)
testing_df_lot = allocate.lottery()
df_inner_list.append(testing_df_lot)
sums += testing_df_lot['death'].sum()
racial_deaths = testing_df_lot[testing_df_lot['death'] == 1]
count_dict = Counter(racial_deaths['race_display'])
count_df = pd.DataFrame.from_dict(count_dict, orient='index').reset_index()
count_df = count_df.rename(columns={'index': 'race', 0: 'death_counts'})
dict_list.append(count_dict)
iter_df_list.append(count_df)
out_df = pd.concat(df_inner_list)
out_df.to_csv('sim_results/lottery_{}.csv'.format(threshold.round(3)))
new_df = pd.DataFrame(dict_list)
temp_new_df = new_df.mean().round(3).to_frame().reset_index()
#print(temp_new_df)
temp_new_df = temp_new_df.rename(columns={'index': 'race', 0: 'death_counts'})
temp_new_df['threshold'] = threshold.round(3)
temp_new_df['avg_deaths'] = sums/iters
out_new_df = pd.concat(iter_df_list)
out_new_df['threshold'] = threshold.round(3)
#print(out_new_df)
df_list_lot.append(temp_new_df)
all_iters.append(out_new_df)
all_iters_df = pd.concat(all_iters)
all_iters_df = all_iters_df.sort_values(by=['threshold', 'race'], inplace=False)
all_iters_df.to_csv('sim_results/all_lottery.csv'.format(threshold.round(3)))
race_count
try_df = pd.concat(df_list_lot)
try_df['death_percent'] = try_df.apply(lambda row: death_percent(row), axis=1)
try_df.to_csv('lottery_data_results.csv', index=False)
try_df
race_hue_labels = working_df['race_display'].unique()
race_hue_labels
#%matplotlib qt
import matplotlib.pyplot as plt
sn_plot = sn.factorplot(x='threshold', y='death_counts',
hue='race', hue_order = race_hue_labels, data=try_df, kind='bar')
plt.title('Lottery Allocation Scheme')
sn_plot.savefig("lottery_plot_output.png")
#%matplotlib qt
'''
import matplotlib.pyplot as plt
sn_plot = sn.factorplot(x='threshold', y='death_percent',
hue='race', hue_order = race_hue_labels, data=try_df, kind='bar')
plt.title('Lottery Allocation Scheme')
sn_plot.savefig("lottery_plot_percent_output.png")
'''
temp_df = try_df.replace(to_replace=[1.1], value=['Observed'])
x_vals = list(temp_df['threshold'].unique())
x_vals.append(x_vals.pop(0))
print(x_vals)
sn_plot = sn.factorplot(x='threshold', y='death_percent',
hue='race', hue_order = race_hue_labels, data=temp_df, kind='bar', order=x_vals, height=4, aspect=3)
plt.title('Lottery Allocation Scheme Mortality Rate Across Scarcity Indicators')
plt.xlabel('Scarcity Indicator')
plt.ylabel('Mortality Rate')
sn_plot._legend.set_title('Race')
sn_plot.savefig("final_figures/lottery_plot_percent_output.png")
```
### New York
```
iters = ITER
df_list_lot = []
df_list_lot.append(baseline_deaths2)
all_iters = []
for threshold in thresholds:
sums = 0
dict_list = []
df_inner_list = []
iter_df_list = []
for _ in range(iters):
allocate = Allocation(working_df, threshold, qsofa_mortality_calibration)
testing_df_lot = allocate.new_york()
df_inner_list.append(testing_df_lot)
sums += testing_df_lot['death'].sum()
racial_deaths = testing_df_lot[testing_df_lot['death'] == 1]
count_dict = Counter(racial_deaths['race_display'])
count_df = pd.DataFrame.from_dict(count_dict, orient='index').reset_index()
count_df = count_df.rename(columns={'index': 'race', 0: 'death_counts'})
dict_list.append(count_dict)
iter_df_list.append(count_df)
out_df = pd.concat(df_inner_list)
out_df.to_csv('sim_results/new_york_{}.csv'.format(threshold.round(3)))
new_df = pd.DataFrame(dict_list)
temp_new_df = new_df.mean().round(3).to_frame().reset_index()
#print(temp_new_df)
temp_new_df = temp_new_df.rename(columns={'index': 'race', 0: 'death_counts'})
temp_new_df['threshold'] = threshold.round(3)
temp_new_df['avg_deaths'] = sums/iters
out_new_df = pd.concat(iter_df_list)
out_new_df['threshold'] = threshold.round(3)
#print(out_new_df)
df_list_lot.append(temp_new_df)
all_iters.append(out_new_df)
all_iters_df = pd.concat(all_iters)
all_iters_df = all_iters_df.sort_values(by=['threshold', 'race'], inplace=False)
all_iters_df.to_csv('sim_results/all_ny.csv'.format(threshold.round(3)))
try_df_ny = pd.concat(df_list_lot)
try_df_ny['death_percent'] = try_df_ny.apply(lambda row: death_percent(row), axis=1)
try_df_ny.to_csv('ny_data_results.csv', index=False)
try_df_ny
#%matplotlib qt
import matplotlib.pyplot as plt
sn_plot = sn.factorplot(x='threshold', y='death_counts', hue='race', data=try_df_ny, kind='bar')
plt.title('New York Allocation Scheme')
sn_plot.savefig("ny_plot_output.png")
#%matplotlib qt
'''
import matplotlib.pyplot as plt
sn_plot = sn.factorplot(x='threshold', y='death_percent',
hue='race', hue_order = race_hue_labels, data=try_df_ny, kind='bar')
plt.title('New York Allocation Scheme')
sn_plot.savefig("ny_plot_percent_output.png")
'''
temp_df = try_df_ny.replace(to_replace=[1.1], value=['Observed'])
x_vals = list(temp_df['threshold'].unique())
x_vals.append(x_vals.pop(0))
print(x_vals)
sn_plot = sn.factorplot(x='threshold', y='death_percent',
hue='race', hue_order = race_hue_labels, data=temp_df, kind='bar', order=x_vals, height=4, aspect=3)
plt.title('New York Allocation Scheme Mortality Rate Across Scarcity Indicators')
plt.xlabel('Scarcity Indicator')
plt.ylabel('Mortality Rate')
sn_plot._legend.set_title('Race')
sn_plot.savefig("final_figures/ny_plot_percent_output.png")
```
### Maryland
```
iters = ITER
df_list_lot = []
df_list_lot.append(baseline_deaths2)
all_iters = []
for threshold in thresholds:
sums = 0
dict_list = []
df_inner_list = []
iter_df_list = []
for _ in range(iters):
allocate = Allocation(working_df, threshold, qsofa_mortality_calibration)
testing_df_lot = allocate.maryland()
df_inner_list.append(testing_df_lot)
sums += testing_df_lot['death'].sum()
racial_deaths = testing_df_lot[testing_df_lot['death'] == 1]
count_dict = Counter(racial_deaths['race_display'])
count_df = pd.DataFrame.from_dict(count_dict, orient='index').reset_index()
count_df = count_df.rename(columns={'index': 'race', 0: 'death_counts'})
dict_list.append(count_dict)
iter_df_list.append(count_df)
out_df = pd.concat(df_inner_list)
out_df.to_csv('sim_results/maryland_{}.csv'.format(threshold.round(3)))
new_df = pd.DataFrame(dict_list)
temp_new_df = new_df.mean().round(3).to_frame().reset_index()
#print(temp_new_df)
temp_new_df = temp_new_df.rename(columns={'index': 'race', 0: 'death_counts'})
temp_new_df['threshold'] = threshold.round(3)
temp_new_df['avg_deaths'] = sums/iters
out_new_df = pd.concat(iter_df_list)
out_new_df['threshold'] = threshold.round(3)
#print(out_new_df)
df_list_lot.append(temp_new_df)
all_iters.append(out_new_df)
all_iters_df = pd.concat(all_iters)
all_iters_df = all_iters_df.sort_values(by=['threshold', 'race'], inplace=False)
all_iters_df.to_csv('sim_results/all_mar.csv')
try_df_mar = pd.concat(df_list_lot)
try_df_mar['death_percent'] = try_df_mar.apply(lambda row: death_percent(row), axis=1)
try_df_mar.to_csv('mar_data_results.csv', index=False)
try_df_mar
#%matplotlib qt
import matplotlib.pyplot as plt
sn_plot = sn.factorplot(x='threshold', y='death_counts', hue='race', data=try_df_mar, kind='bar')
plt.title('Maryland Allocation Scheme')
sn_plot.savefig("mar_plot_output.png")
#%matplotlib qt
import matplotlib.pyplot as plt
'''
sn_plot = sn.factorplot(x='threshold', y='death_percent',
hue='race', hue_order = race_hue_labels, data=try_df_mar, kind='bar')
plt.title('Maryland Allocation Scheme')
sn_plot.savefig("mar_plot_percent_output.png")
'''
temp_df = try_df_mar.replace(to_replace=[1.1], value=['Observed'])
x_vals = list(temp_df['threshold'].unique())
x_vals.append(x_vals.pop(0))
print(x_vals)
sn_plot = sn.factorplot(x='threshold', y='death_percent',
hue='race', hue_order = race_hue_labels, data=temp_df, kind='bar', order=x_vals, height=4, aspect=3)
plt.title('Maryland Allocation Scheme Mortality Rate Across Scarcity Indicators')
plt.xlabel('Scarcity Indicator')
plt.ylabel('Mortality Rate')
sn_plot._legend.set_title('Race')
sn_plot.savefig("final_figures/mar_plot_percent_output.png")
```
### Max Life Years
```
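# Max Life Years allocation scheme: same simulation loop, prioritizing by qSOFA score and remaining life years.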
iters = ITER
df_list_lot = []
df_list_lot.append(baseline_deaths2)
all_iters = []
for threshold in thresholds:
sums = 0
dict_list = []
df_inner_list = []
iter_df_list = []
for _ in range(iters):
allocate = Allocation(working_df, threshold, qsofa_mortality_calibration)
testing_df_lot = allocate.max_life_years()
df_inner_list.append(testing_df_lot)
sums += testing_df_lot['death'].sum()
racial_deaths = testing_df_lot[testing_df_lot['death'] == 1]
count_dict = Counter(racial_deaths['race_display'])
count_df = pd.DataFrame.from_dict(count_dict, orient='index').reset_index()
count_df = count_df.rename(columns={'index': 'race', 0: 'death_counts'})
dict_list.append(count_dict)
iter_df_list.append(count_df)
out_df = pd.concat(df_inner_list)
out_df.to_csv('sim_results/max_life_years_{}.csv'.format(threshold.round(3)))
new_df = pd.DataFrame(dict_list)
temp_new_df = new_df.mean().round(3).to_frame().reset_index()
#print(temp_new_df)
temp_new_df = temp_new_df.rename(columns={'index': 'race', 0: 'death_counts'})
temp_new_df['threshold'] = threshold.round(3)
temp_new_df['avg_deaths'] = sums/iters
out_new_df = pd.concat(iter_df_list)
out_new_df['threshold'] = threshold.round(3)
#print(out_new_df)
df_list_lot.append(temp_new_df)
all_iters.append(out_new_df)
all_iters_df = pd.concat(all_iters)
all_iters_df = all_iters_df.sort_values(by=['threshold', 'race'], inplace=False)
all_iters_df.to_csv('sim_results/all_max_life_years.csv')
try_df_max_life = pd.concat(df_list_lot)
try_df_max_life['death_percent'] = try_df_max_life.apply(lambda row: death_percent(row), axis=1)
try_df_max_life.to_csv('max_lives_data_results.csv', index=False)
try_df_max_life
#%matplotlib qt
import matplotlib.pyplot as plt
sn_plot = sn.factorplot(x='threshold', y='death_counts', hue='race', data=try_df_max_life, kind='bar')
plt.title('Max Life Years Allocation Scheme')
sn_plot.savefig("max_life_plot_output.png")
#%matplotlib qt
import matplotlib.pyplot as plt
'''
sn_plot = sn.factorplot(x='threshold', y='death_percent',
hue='race', hue_order = race_hue_labels, data=try_df_max_life, kind='bar')
plt.title('Max Life Years Allocation Scheme')
sn_plot.savefig("max_life_plot_percent_output.png")
'''
temp_df = try_df_max_life.replace(to_replace=[1.1], value=['Observed'])
x_vals = list(temp_df['threshold'].unique())
x_vals.append(x_vals.pop(0))
print(x_vals)
sn_plot = sn.factorplot(x='threshold', y='death_percent',
hue='race', hue_order = race_hue_labels, data=temp_df, kind='bar', order=x_vals, height=4, aspect=3)
plt.title('Max Life Years Allocation Scheme Mortality Rate Across Scarcity Indicators')
plt.xlabel('Scarcity Indicator')
plt.ylabel('Mortality Rate')
sn_plot._legend.set_title('Race')
sn_plot.savefig("final_figures/max_life_plot_percent_output.png")
```
### Youngest
```
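# Youngest First allocation scheme: same simulation loop, prioritizing the youngest patients.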
iters = ITER
df_list_lot = []
df_list_lot.append(baseline_deaths2)
all_iters = []
for threshold in thresholds:
sums = 0
dict_list = []
df_inner_list = []
iter_df_list = []
for _ in range(iters):
allocate = Allocation(working_df, threshold, qsofa_mortality_calibration)
testing_df_lot = allocate.youngest()
df_inner_list.append(testing_df_lot)
sums += testing_df_lot['death'].sum()
racial_deaths = testing_df_lot[testing_df_lot['death'] == 1]
count_dict = Counter(racial_deaths['race_display'])
count_df = pd.DataFrame.from_dict(count_dict, orient='index').reset_index()
count_df = count_df.rename(columns={'index': 'race', 0: 'death_counts'})
dict_list.append(count_dict)
iter_df_list.append(count_df)
out_df = pd.concat(df_inner_list)
out_df.to_csv('sim_results/youngest_{}.csv'.format(threshold.round(3)))
new_df = pd.DataFrame(dict_list)
temp_new_df = new_df.mean().round(3).to_frame().reset_index()
#print(temp_new_df)
temp_new_df = temp_new_df.rename(columns={'index': 'race', 0: 'death_counts'})
temp_new_df['threshold'] = threshold.round(3)
temp_new_df['avg_deaths'] = sums/iters
out_new_df = pd.concat(iter_df_list)
out_new_df['threshold'] = threshold.round(3)
#print(out_new_df)
df_list_lot.append(temp_new_df)
all_iters.append(out_new_df)
all_iters_df = pd.concat(all_iters)
all_iters_df = all_iters_df.sort_values(by=['threshold', 'race'], inplace=False)
all_iters_df.to_csv('sim_results/all_youngest.csv')
try_df_youngest = pd.concat(df_list_lot)
try_df_youngest['death_percent'] = try_df_youngest.apply(lambda row: death_percent(row), axis=1)
try_df_youngest.to_csv('youngest_data_results.csv', index=False)
try_df_youngest
#%matplotlib qt
import matplotlib.pyplot as plt
sn_plot = sn.factorplot(x='threshold', y='death_counts', hue='race', data=try_df_youngest, kind='bar')
plt.title('Youngest Allocation Scheme')
sn_plot.savefig("youngest_plot_output.png")
#%matplotlib qt
'''
import matplotlib.pyplot as plt
sn_plot = sn.factorplot(x='threshold', y='death_percent',
hue='race', hue_order = race_hue_labels, data=try_df_youngest, kind='bar')
plt.title('Youngest Allocation Scheme')
sn_plot.savefig("youngest_plot_percent_output.png")
'''
temp_df = try_df_youngest.replace(to_replace=[1.1], value=['Observed'])
x_vals = list(temp_df['threshold'].unique())
x_vals.append(x_vals.pop(0))
print(x_vals)
sn_plot = sn.factorplot(x='threshold', y='death_percent',
hue='race', hue_order = race_hue_labels, data=temp_df, kind='bar', order=x_vals, height=4, aspect=3)
plt.title('Youngest First Allocation Scheme Mortality Rate Across Scarcity Indicators')
plt.xlabel('Scarcity Indicator')
plt.ylabel('Mortality Rate')
sn_plot._legend.set_title('Race')
sn_plot.savefig("final_figures/youngest_plot_percent_output.png")
```
### Sickest First
```
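# Sickest First allocation scheme: same simulation loop, prioritizing the highest qSOFA scores.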
iters = ITER
df_list_lot = []
df_list_lot.append(baseline_deaths2)
all_iters = []
for threshold in thresholds:
sums = 0
dict_list = []
df_inner_list = []
iter_df_list = []
for _ in range(iters):
allocate = Allocation(working_df, threshold, qsofa_mortality_calibration)
testing_df_lot = allocate.sickest_first()
df_inner_list.append(testing_df_lot)
sums += testing_df_lot['death'].sum()
racial_deaths = testing_df_lot[testing_df_lot['death'] == 1]
count_dict = Counter(racial_deaths['race_display'])
count_df = pd.DataFrame.from_dict(count_dict, orient='index').reset_index()
count_df = count_df.rename(columns={'index': 'race', 0: 'death_counts'})
dict_list.append(count_dict)
iter_df_list.append(count_df)
out_df = pd.concat(df_inner_list)
out_df.to_csv('sim_results/sickest_first_{}.csv'.format(threshold.round(3)))
new_df = pd.DataFrame(dict_list)
temp_new_df = new_df.mean().round(3).to_frame().reset_index()
#print(temp_new_df)
temp_new_df = temp_new_df.rename(columns={'index': 'race', 0: 'death_counts'})
temp_new_df['threshold'] = threshold.round(3)
temp_new_df['avg_deaths'] = sums/iters
out_new_df = pd.concat(iter_df_list)
out_new_df['threshold'] = threshold.round(3)
#print(out_new_df)
df_list_lot.append(temp_new_df)
all_iters.append(out_new_df)
all_iters_df = pd.concat(all_iters)
all_iters_df = all_iters_df.sort_values(by=['threshold', 'race'], inplace=False)
all_iters_df.to_csv('sim_results/all_sickest.csv')
testing_df_lot.head()
try_df_sickest = pd.concat(df_list_lot)
try_df_sickest['death_percent'] = try_df_sickest.apply(lambda row: death_percent(row), axis=1)
try_df_sickest.to_csv('sickest_data_results.csv', index=False)
try_df_sickest
#%matplotlib qt
import matplotlib.pyplot as plt
sn_plot = sn.factorplot(x='threshold', y='death_counts', hue='race', data=try_df_sickest, kind='bar')
plt.title('Sickest First Allocation Scheme')
sn_plot.savefig("sickest_plot_output.png")
#%matplotlib qt
import matplotlib.pyplot as plt
temp_df = try_df_sickest.replace(to_replace=[1.1], value=['Observed'])
x_vals = list(temp_df['threshold'].unique())
x_vals.append(x_vals.pop(0))
print(x_vals)
sn_plot = sn.factorplot(x='threshold', y='death_percent',
hue='race', hue_order = race_hue_labels, data=temp_df, kind='bar', order=x_vals, height=4, aspect=3)
plt.title('Sickest First Allocation Scheme Mortality Rate Across Scarcity Indicators')
plt.xlabel('Scarcity Indicator')
plt.ylabel('Mortality Rate')
sn_plot._legend.set_title('Race')
sn_plot.savefig("final_figures/sickest_plot_percent_output.png")
```
### Total
```
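# Combine the per-scheme average-death results into one table and plot them side by side.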
try_df['allocation_type'] = 'Lottery'
try_df_ny['allocation_type'] = 'New York'
try_df_mar['allocation_type'] = 'Maryland'
try_df_youngest['allocation_type'] = 'Youngest First'
try_df_max_life['allocation_type'] = 'Max Life Years'
try_df_sickest['allocation_type'] = 'Sickest First'
total_df = pd.concat([
try_df[['allocation_type', 'threshold', 'avg_deaths']],
try_df_ny[['allocation_type', 'threshold', 'avg_deaths']],
try_df_mar[['allocation_type', 'threshold', 'avg_deaths']],
try_df_youngest[['allocation_type', 'threshold', 'avg_deaths']],
try_df_max_life[['allocation_type', 'threshold', 'avg_deaths']],
try_df_sickest[['allocation_type', 'threshold', 'avg_deaths']],
])
total_df.to_csv('total_avg_deaths_data_results.csv', index=False)
#%matplotlib qt
import matplotlib.pyplot as plt
sn_plot = sn.factorplot(x='threshold', y='avg_deaths', hue='allocation_type', data=total_df[total_df['threshold'] != 1.1]
, kind='bar', legend_out=True)
plt.title('Average Number of Deaths per Allocation Scheme')
sn_plot._legend.set_title('Allocation Scheme')
plt.xlabel('Scarcity Indicator')
plt.ylabel('Average Number of Deaths')
sn_plot.savefig("final_figures/total_avg_deaths_plot_output.png")
```
### Break
```
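# Reload each saved per-iteration results file, add a death_percent column, and write it back.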
all_iters_df = pd.read_csv('sim_results/all_sickest.csv')
all_iters_df['death_percent'] = all_iters_df.apply(lambda row: death_percent(row), axis=1)
all_iters_df.to_csv('sim_results/all_sickest.csv')
all_iters_df = pd.read_csv('sim_results/all_lottery.csv')
all_iters_df['death_percent'] = all_iters_df.apply(lambda row: death_percent(row), axis=1)
all_iters_df.to_csv('sim_results/all_lottery.csv')
all_iters_df = pd.read_csv('sim_results/all_ny.csv')
all_iters_df['death_percent'] = all_iters_df.apply(lambda row: death_percent(row), axis=1)
all_iters_df.to_csv('sim_results/all_ny.csv')
all_iters_df = pd.read_csv('sim_results/all_mar.csv')
all_iters_df['death_percent'] = all_iters_df.apply(lambda row: death_percent(row), axis=1)
all_iters_df.to_csv('sim_results/all_mar.csv')
all_iters_df = pd.read_csv('sim_results/all_youngest.csv')
all_iters_df['death_percent'] = all_iters_df.apply(lambda row: death_percent(row), axis=1)
all_iters_df.to_csv('sim_results/all_youngest.csv')
all_iters_df = pd.read_csv('sim_results/all_max_life_years.csv')
all_iters_df['death_percent'] = all_iters_df.apply(lambda row: death_percent(row), axis=1)
all_iters_df.to_csv('sim_results/all_max_life_years.csv')
```
```
import numpy as np
import matplotlib.pyplot as plt
from numba import jit, vectorize, float64, int64
import warnings
warnings.filterwarnings('ignore')
def gradlogistic(X,Y,theta,n,V):
#theta is updated
#X,Y are samples from the minibatch
#n is total number of observations
#V should be provided ahead, specified here
Vinv = np.linalg.inv(V)
d1 = -np.diag(Y-1/(1+np.exp(-X@theta)))@X
d2 = Vinv@theta
d1_avg = d1.mean(axis = 0)
return d1_avg*n + d2
def generate_data(nrow,ncol):
np.random.seed(1234)
X=np.random.normal(0,1,[nrow,ncol])
Y=np.random.binomial(1,1/(1+np.exp(-(X@theta))))
train=np.random.choice(range(nrow),int(nrow/2),replace=False)
test=np.array(list(set(range(nrow))-set(train)))
return X, Y, train,test
np.random.seed(1234)
theta=np.array([-5,10,5,20,30])
X,Y,train,test = generate_data(20000,5)
X_train=X[train,:]
Y_train=Y[train]
X_test=X[test,:]
Y_test=Y[test]
def test_err(theta):
err = []
for i in points:
pro = 1/(1+np.exp(-(X_test @ theta[:i,:].T)))
pred = np.random.binomial(1,pro)
coverage = np.mean(pred == Y_test[:,None])
err.append(1-coverage)
return(err)
def batch(X,Y, nbatch):
nrow = X.shape[0]
idx = np.random.choice(nrow, nbatch, replace = False)
X_sample = X[idx,:]
Y_sample = Y[idx]
return X_sample, Y_sample
```
## SGLD
```
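# SGLD: minibatch gradient updates with injected Gaussian noise; test error is plotted over iterations.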
def sgld(X,Y,theta0,M,C,V,eps,nbatch,niter=2000):
n,p=X.shape
theta=theta0
theta_save=np.zeros([niter,p])
np.random.seed(10)
for t in range(niter):
X_sample,Y_sample = batch(X,Y,nbatch)
theta=theta-gradlogistic(X_sample,Y_sample,theta,n,V)*eps+np.random.multivariate_normal(np.zeros(p),np.sqrt(2*0.01*np.eye(p)),1).ravel()
theta_save[t,:]=theta
return theta_save
theta0 = np.zeros(5)
M = C = np.eye(5)
nbatch=500
eps=.001
V = np.diag([20,20,20,20,20])
points = np.arange(1,2010,100)
sgld_theta=sgld(X_train,Y_train,theta0,M,C,V,eps,nbatch,2000)
sglderr = test_err(sgld_theta)
plt.plot(points,sglderr)
plt.xlabel("iteration")
plt.ylabel("test error")
plt.title("SGLD")
plt.savefig("SGLD")
```
## SGHMC
```
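# SGHMC: stochastic gradient Hamiltonian Monte Carlo with a friction term (numba-compiled);
# test error is plotted over iterations.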
@jit
def batch_numba(X,Y, nbatch):
nrow = X.shape[0]
idx = np.random.choice(nrow, nbatch, replace = False)
X_sample = X[idx,:]
Y_sample = Y[idx]
return X_sample, Y_sample
# note: gradU is a Python callable, so an explicit array-typed signature cannot describe it; let numba infer types
@jit(cache = True)
def sghmc_numba(theta0,X,Y,nbatch,gradU,M,C,V,eps,step = 10, niter = 10):
B = 1/2 * V * eps
sigma = np.sqrt(2*eps*(C-B))
n, p = X.shape
theta = theta0 #set an initial value of theta
thetas =np.zeros([step,p])
Minv = np.linalg.inv(M)
np.random.seed(10)
#simulate dynamics
for t in range(step):
r = np.random.multivariate_normal(np.zeros(p),np.sqrt(M))
for i in range(niter):
theta = theta + eps*Minv@r
X_sample,Y_sample = batch_numba(X,Y,nbatch)
            # combine the friction and noise terms in a single statement; the original dangling
            # "+" line was a separate no-op expression, so the injected noise was never applied
            r = (r - eps*gradU(X_sample, Y_sample,theta,n,V) - eps*C @ Minv @ r
                 + np.random.multivariate_normal(np.zeros(p),sigma,1).ravel())
thetas[t,:] = theta
return thetas
theta0 = np.zeros(5)
M = C = np.eye(5)
nbatch=500
eps=.001
V = np.diag([20,20,20,20,20])
sghmc_theta = sghmc_numba(np.zeros(5),X_train,Y_train,nbatch,gradlogistic,M,C,V,eps,2000,50)
sghmcerr = test_err(sghmc_theta)
plt.plot(points,sghmcerr)
plt.xlabel("iteration")
plt.ylabel("test error")
plt.title("SGHMC")
plt.savefig("SGHMC")
```
## SGD (simplified version, without momentum)
```
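# Simplified SGD baseline: per-observation gradient steps, with no momentum and no injected noise.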
def gradlogistic_1(x,y,theta,n,V):
#theta is updated
#X,Y are samples from the minibatch
#n is total number of observations
#V should be provided ahead, specified here
Vinv = np.linalg.inv(V)
d1 = -np.diag([(y-1/(1+np.exp(-x@theta)))])@x[None,:]
d2 = Vinv@theta
return d1 + d2
def sgd(X_train,y_train,nbatch,learning_rate,theta0, ninte):
theta = theta0
n,p = X_train.shape
thetas = np.zeros([ninte,p])
for i in range(ninte):
        # evaluate the gradient at the current theta rather than the fixed starting value theta0
        grad = gradlogistic_1(X_train[i,:],y_train[i],theta,1,V)
theta = theta - learning_rate * grad
thetas[i] = theta
return thetas
theta0 = np.zeros(5)
M = C = np.eye(5)
nbatch=500
eps=.001
V = np.diag([20,20,20,20,20])
sgd_theta = sgd(X_train,Y_train,500,0.05,theta0,2000)
sgderr = test_err(sgd_theta)
plt.plot(points, sgderr)
plt.xlabel("iteration")
plt.ylabel("test error")
plt.title("SGD")
plt.savefig("SGD")
```
## Compare
```
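# Overlay the test-error curves of SGLD, SGHMC, and SGD for comparison.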
plt.plot(points,sglderr,"d-")
plt.plot(points,sghmcerr,color = "red")
plt.plot(points, sgderr, '--',color = "orange")
plt.legend(['SGLD','SGHMC','SGD'])
plt.xlabel("iteration")
plt.ylabel("test error")
plt.title("Comparison")
plt.savefig("Comparison")
plt.plot(points,sglderr,"d-")
plt.plot(points,sghmcerr,color = "red")
plt.legend(['SGLD','SGHMC'])
plt.xlabel("iteration")
plt.ylabel("test error")
plt.title("SGLD vs. SGHMC")
plt.savefig("SGLDvsSGHMC")
```
<center>
<img src="https://gitlab.com/ibm/skills-network/courses/placeholder101/-/raw/master/labs/module%201/images/IDSNlogo.png" width="300" alt="cognitiveclass.ai logo" />
</center>
<h1 align=center><font size = 5>Assignment: SQL Notebook for Peer Assignment</font></h1>
Estimated time needed: **60** minutes.
## Introduction
Using this Python notebook you will:
1. Understand the Spacex DataSet
2. Load the dataset into the corresponding table in a Db2 database
3. Execute SQL queries to answer assignment questions
## Overview of the DataSet
SpaceX has gained worldwide attention for a series of historic milestones.
It is the only private company ever to return a spacecraft from low-earth orbit, which it first accomplished in December 2010.
SpaceX advertises Falcon 9 rocket launches on its website at a cost of 62 million dollars, whereas other providers cost upward of 165 million dollars each; much of the savings comes from the fact that SpaceX can reuse the first stage.
Therefore if we can determine if the first stage will land, we can determine the cost of a launch.
This information can be used if an alternate company wants to bid against SpaceX for a rocket launch.
This dataset includes a record for each payload carried during a SpaceX mission into outer space.
### Download the datasets
This assignment requires you to load the spacex dataset.
In many cases the dataset to be analyzed is available as a .CSV (comma separated values) file, perhaps on the internet. Click on the link below to download and save the dataset (.CSV file):
<a href="https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DS0321EN-SkillsNetwork/labs/module_2/data/Spacex.csv?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDS0321ENSkillsNetwork26802033-2021-01-01" target="_blank">Spacex DataSet</a>
### Store the dataset in database table
**It is highly recommended to load the table manually using the database console LOAD tool in DB2.**
<img src = "https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DS0321EN-SkillsNetwork/labs/module_2/images/spacexload.png">
Now open the Db2 console, open the LOAD tool, select or drag the .CSV file for the dataset, create a new table, and then follow the on-screen instructions to load the data. Name the new table as follows:
**SPACEXDATASET**
**Follow these steps while using old DB2 UI which is having Open Console Screen**
**Note: While loading the Spacex dataset, ensure that detect datatypes is disabled. Then click on the pencil icon (edit option).**
1. Change the Date Format by manually typing DD-MM-YYYY and the timestamp format as DD-MM-YYYY HH\:MM:SS.
To do this, place the cursor in the Date field and manually type DD-MM-YYYY.
2. Change the PAYLOAD_MASS\_\_KG\_ datatype to INTEGER.
<img src = "https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DS0321EN-SkillsNetwork/labs/module_2/images/spacexload2.png">
**Changes to consider when your DB2 instance uses the new UI with the Go to UI screen**
*   Refer to the instructions at this <a href="https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DB0201EN-SkillsNetwork/labs/Labs_Coursera_V5/labs/Lab%20-%20Sign%20up%20for%20IBM%20Cloud%20-%20Create%20Db2%20service%20instance%20-%20Get%20started%20with%20the%20Db2%20console/instructional-labs.md.html?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDS0321ENSkillsNetwork26802033-2021-01-01">link</a> for viewing the new Go to UI screen.
* Later click on **Data link(below SQL)** in the Go to UI screen and click on **Load Data** tab.
* Later browse for the downloaded spacex file.
<img src="https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DS0321EN-SkillsNetwork/labs/module_2/images/browsefile.png" width="800"/>
* Once done, select the schema and load the file.
<img src="https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DS0321EN-SkillsNetwork/labs/module_2/images/spacexload3.png" width="800"/>
```
!pip install sqlalchemy==1.3.9
!pip install ibm_db_sa
!pip install ipython-sql
```
### Connect to the database
Let us first load the SQL extension and establish a connection with the database
```
%load_ext sql
```
**DB2 magic in case of old UI service credentials.**
In the next cell enter your db2 connection string. Recall you created Service Credentials for your Db2 instance before. From the **uri** field of your Db2 service credentials copy everything after db2:// (except the double quote at the end) and paste it in the cell below after ibm_db_sa://
<img src ="https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DB0201EN-SkillsNetwork/labs/FinalModule_edX/images/URI.jpg">
in the following format
**%sql ibm_db_sa://my-username:my-password\@my-hostname:my-port/my-db-name**
**DB2 magic in case of new UI service credentials.**
<img src ="https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DS0321EN-SkillsNetwork/labs/module_2/images/servicecredentials.png" width=600>
* Use the following format.
* Add security=SSL at the end
**%sql ibm_db_sa://my-username:my-password\@my-hostname:my-port/my-db-name?security=SSL**
```
%sql ibm_db_sa://rmx39209:JVp2hTAa8A2yFaTu@ba99a9e6-d59e-4883-8fc0-d6a8c9f7a08f.c1ogj3sd0tgtu0lqde00.databases.appdomain.cloud:31321/BLUDB?security=SSL
```
## Tasks
Now write and execute SQL queries to solve the assignment tasks.
### Task 1
##### Display the names of the unique launch sites in the space mission
```
%sql SELECT DISTINCT launch_site from SPACEX1
```
### Task 2
##### Display 5 records where launch sites begin with the string 'CCA'
```
%sql SELECT * from SPACEX1 WHERE launch_site LIKE 'CCA%' LIMIT 5
```
### Task 3
##### Display the total payload mass carried by boosters launched by NASA (CRS)
```
%sql SELECT SUM(payload_mass__kg_) FROM SPACEX1 WHERE customer = 'NASA (CRS)'
```
### Task 4
##### Display average payload mass carried by booster version F9 v1.1
```
%sql SELECT AVG(payload_mass__kg_) FROM SPACEX1 WHERE booster_version = 'F9 v1.1'
```
### Task 5
##### List the date when the first successful landing outcome on a ground pad was achieved.
*Hint: Use the MIN function*
```
%sql SELECT MIN(DATE) FROM SPACEX1 WHERE landing__outcome = 'Success (ground pad)'
```
### Task 6
##### List the names of the boosters which have had a successful landing on a drone ship and have payload mass greater than 4000 but less than 6000
```
%sql SELECT booster_version FROM SPACEX1 WHERE landing__outcome = 'Success (drone ship)' AND payload_mass__kg_ BETWEEN 4000 AND 6000
```
### Task 7
##### List the total number of successful and failure mission outcomes
```
%sql SELECT mission_outcome,COUNT(*) AS numbers FROM SPACEX1 GROUP BY mission_outcome
```
### Task 8
##### List the names of the booster_versions which have carried the maximum payload mass. Use a subquery
```
%sql SELECT booster_version FROM SPACEX1 where payload_mass__kg_ = (SELECT MAX(payload_mass__kg_) FROM SPACEX1)
```
### Task 9
##### List the failed landing outcomes on a drone ship, their booster versions, and launch site names for the year 2015
```
%sql SELECT booster_version,launch_site FROM SPACEX1 WHERE landing__outcome = 'Failure (drone ship)' AND YEAR(DATE) = 2015
```
### Task 10
##### Rank the count of landing outcomes (such as Failure (drone ship) or Success (ground pad)) between the date 2010-06-04 and 2017-03-20, in descending order
```
%sql SELECT landing__outcome,COUNT(*) AS NUMBERS FROM SPACEX1 WHERE DATE>'2010-06-04' AND DATE < '2017-03-20' GROUP BY landing__outcome ORDER BY NUMBERS DESC
```
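The query above orders the counts; if an explicit rank column is wanted, Db2's `RANK()` OLAP window function can be layered on the same aggregation. A sketch, assuming the same `SPACEX1` table and that your Db2 instance supports OLAP functions (the alias names are illustrative, not part of the assignment):

```
%sql SELECT landing__outcome, COUNT(*) AS numbers, RANK() OVER (ORDER BY COUNT(*) DESC) AS outcome_rank FROM SPACEX1 WHERE DATE BETWEEN '2010-06-04' AND '2017-03-20' GROUP BY landing__outcome ORDER BY numbers DESC
```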
### Reference Links
* <a href ="https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DB0201EN-SkillsNetwork/labs/Labs_Coursera_V5/labs/Lab%20-%20String%20Patterns%20-%20Sorting%20-%20Grouping/instructional-labs.md.html?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDS0321ENSkillsNetwork26802033-2021-01-01&origin=www.coursera.org">Hands-on Lab : String Patterns, Sorting and Grouping</a>
* <a href="https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DB0201EN-SkillsNetwork/labs/Labs_Coursera_V5/labs/Lab%20-%20Built-in%20functions%20/Hands-on_Lab__Built-in_Functions.md.html?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDS0321ENSkillsNetwork26802033-2021-01-01&origin=www.coursera.org">Hands-on Lab: Built-in functions</a>
* <a href="https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DB0201EN-SkillsNetwork/labs/Labs_Coursera_V5/labs/Lab%20-%20Sub-queries%20and%20Nested%20SELECTs%20/instructional-labs.md.html?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDS0321ENSkillsNetwork26802033-2021-01-01&origin=www.coursera.org">Hands-on Lab : Sub-queries and Nested SELECT Statements</a>
* <a href="https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DB0201EN-SkillsNetwork/labs/Module%205/DB0201EN-Week3-1-3-SQLmagic.ipynb?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDS0321ENSkillsNetwork26802033-2021-01-01">Hands-on Tutorial: Accessing Databases with SQL magic</a>
* <a href= "https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DB0201EN-SkillsNetwork/labs/Module%205/DB0201EN-Week3-1-4-Analyzing.ipynb?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDS0321ENSkillsNetwork26802033-2021-01-01">Hands-on Lab: Analyzing a real World Data Set</a>
## Author(s)
<h4> Lakshmi Holla </h4>
## Other Contributors
<h4> Rav Ahuja </h4>
## Change log
| Date | Version | Changed by | Change Description |
| ---------- | ------- | ------------- | ------------------------- |
| 2021-10-12 | 0.4 | Lakshmi Holla | Changed markdown |
| 2021-08-24 | 0.3 | Lakshmi Holla | Added library update |
| 2021-07-09 | 0.2 | Lakshmi Holla | Changes made in magic sql |
| 2021-05-20 | 0.1 | Lakshmi Holla | Created Initial Version |
<h3 align="center">© IBM Corporation 2021. All rights reserved.</h3>
Harmonic oscillator
==========
Canonical equation
--------
$\ddot x +\sin(x)=0$
System of equations
-------------
$\ddot x(t)=-\sin(x(t))$
$\dot x(t+dt)=\dot x(t)+\ddot x(t)\,dt$
$x(t+dt)=x(t) + \dot x(t+dt)\,dt$
```
import matplotlib.pyplot as plt
from numpy import *
def resolution_numerique(dt,x,dx,f):
'''
    returns a tuple of lists of values [time][x][dx/dt],
    solutions of the differential equation ddx + f(x) = 0
'''
temps=[_*dt for _ in range(round(6*pi/dt))]
X,DX=[],[]
for t in temps:
ddx=eval(f)
dx=dx+ddx*dt
x=x+dx*dt
X.append(x)
DX.append(dx)
return temps, X, DX
'''
Simulation parameters
dt=0.01
x(0)=1
dx(0)/dt=0
'''
temps, X, DX = resolution_numerique(0.01,1,0,"-sin(x)")
plt.subplot(211)
plt.title("Oscillateur harmonique")
plt.plot(temps,X)
plt.xlabel("t")
plt.ylabel("x(t)")
plt.subplot(212)
plt.plot(X,DX)
plt.title("Portrait de phase")
plt.xlabel("x")
plt.ylabel("dx/dt")
plt.show()
```
Analytical solution of the equation linearized to first order
============
The approximation $\sin(x) \approx x$ leads to the equation $\ddot x+x=0$,
whose solution is $x(t)=x_0\cos(t)$
for the initial conditions $x(0)=x_0$ and $\dot x(0)=0$.
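A quick check: with $x(t)=x_0\cos(t)$ we have $\ddot x(t)=-x_0\cos(t)=-x(t)$, so $\ddot x + x = 0$, and the initial conditions hold since $x(0)=x_0$ and $\dot x(0)=-x_0\sin(0)=0$.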
Comparison of the linear solution and the numerical solution
==============
Graph of $x(t)$
------------------
```
def trace_comparatif(x0):
temps, X, DX=resolution_numerique(0.01,x0,0,"-sin(x)")
solution_lineaire=[x0*cos(t) for t in temps]
plt.title("x(0)="+str(x0))
plt.plot(temps, solution_lineaire , label="Solution équation linéaire")
plt.plot(temps, X , label="Solution numérique")
plt.xlabel("t")
plt.ylabel("x")
plt.legend()
plt.show()
trace_comparatif(3)
trace_comparatif(2)
trace_comparatif(1)
trace_comparatif(0.5)
```
Phase portrait
-------
```
def trace_comparatif(x0):
temps, X, DX=resolution_numerique(0.01,x0,0,"-sin(x)")
solution_lineaire=[x0*cos(t) for t in temps]
dsolution_lineaire=[-x0*sin(t) for t in temps]
plt.title("x(0)="+str(x0))
plt.plot( solution_lineaire , dsolution_lineaire, label="Solution équation linéaire")
plt.plot( X , DX , label="Solution numérique")
plt.xlabel("t")
plt.ylabel("x")
plt.legend()
plt.show()
trace_comparatif(3)
trace_comparatif(2)
trace_comparatif(1)
trace_comparatif(0.5)
```
Quantifying the deviation from the linear model
--------------------------
Plot of the ratio $\frac{\dot x_{max,~\text{numerical}}}{\dot x_{max,~\text{analytical}}}=f(x(0))$.
```
def trace_ecart():
fraction=[]
valeurs_x0=linspace(0.1,7,20)
for x0 in valeurs_x0:
dx=max(resolution_numerique(0.01,x0,0,"-sin(x)")[2])
fraction.append(dx/x0)
plt.title("Ecart modèle linéaire - solution numérique 'exacte'")
plt.plot(valeurs_x0, fraction)
plt.xlabel("x(0)")
plt.show()
trace_ecart()
```
Conclusion
-----------
The linear approximation is valid for small amplitudes: less than 5% deviation, for example, for $x \approx 1~\text{radian}$.
In that case the numerical solutions and the analytical solution are very close.
In the case of the simple pendulum, this approximation is called the **small-angle approximation**.
**This notebook is an exercise in the [Data Visualization](https://www.kaggle.com/learn/data-visualization) course. You can reference the tutorial at [this link](https://www.kaggle.com/alexisbcook/hello-seaborn).**
---
In this exercise, you will write your first lines of code and learn how to use the coding environment for the micro-course!
## Setup
First, you'll learn how to run code, and we'll start with the code cell below. (Remember that a **code cell** in a notebook is just a gray box containing code that we'd like to run.)
- Begin by clicking inside the code cell.
- Click on the blue triangle (in the shape of a "Play button") that appears to the left of the code cell.
- If your code was run successfully, you will see `Setup Complete` as output below the cell.

The code cell below imports and configures the Python libraries that you need to complete the exercise.
Click on the cell and run it.
```
import pandas as pd
pd.plotting.register_matplotlib_converters()
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
# Set up code checking
import os
if not os.path.exists("../input/fifa.csv"):
os.symlink("../input/data-for-datavis/fifa.csv", "../input/fifa.csv")
from learntools.core import binder
binder.bind(globals())
from learntools.data_viz_to_coder.ex1 import *
print("Setup Complete")
```
The code you just ran sets up the system to give you feedback on your work. You'll learn more about the feedback system in the next step.
## Step 1: Explore the feedback system
Each exercise lets you test your new skills with a real-world dataset. Along the way, you'll receive feedback on your work. You'll see if your answer is right, get customized hints, and see the official solution (_if you'd like to take a look!_).
To explore the feedback system, we'll start with a simple example of a coding problem. Follow these steps in order:
1. Run the code cell below without making any edits. It will show the following output:
> <font color='#ccaa33'>Check:</font> When you've updated the starter code, `check()` will tell you whether your code is correct. You need to update the code that creates variable `one`
This means you need to change the code to set the variable `one` to something other than the blank provided below (`____`).
2. Replace the underline with a `2`, so that the line of code appears as `one = 2`. Then, run the code cell. This should return the following output:
> <font color='#cc3333'>Incorrect:</font> Incorrect value for `one`: `2`
This means we still have the wrong answer to the question.
3. Now, change the `2` to `1`, so that the line of code appears as `one = 1`. Then, run the code cell. The answer should be marked as <font color='#33cc33'>Correct</font>. You have now completed this problem!
```
# Fill in the line below
one=1
# Check your answer
step_1.check()
```
In this exercise, you were responsible for filling in the line of code that sets the value of variable `one`. **Don't edit the code that checks your answer.** You'll need to run the lines of code like `step_1.check()` and `step_2.check()` just as they are provided.
This problem was relatively straightforward, but for more difficult problems, you may like to receive a hint or view the official solution. Run the code cell below now to receive both for this problem.
```
step_1.hint()
step_1.solution()
```
## Step 2: Load the data
You are ready to get started with some data visualization! You'll begin by loading the dataset from the previous tutorial.
The code you need is already provided in the cell below. Just run that cell. If it shows <font color='#33cc33'>Correct</font> result, you're ready to move on!
```
# Path of the file to read
fifa_filepath = "../input/fifa.csv"
# Read the file into a variable fifa_data
fifa_data = pd.read_csv(fifa_filepath, index_col="Date", parse_dates=True)
# Check your answer
step_2.check()
```
Next, recall the difference between comments and executable code:
- **Comments** are preceded by a pound sign (`#`) and contain text that appears faded and italicized. They are completely ignored by the computer when the code is run.
- **Executable code** is code that is run by the computer.
In the code cell below, every line is a comment:
```python
# Uncomment the line below to receive a hint
#step_2.hint()
#step_2.solution()
```
If you run the code cell below without making any changes, it won't return any output. Try this now!
```
# Uncomment the line below to receive a hint
step_2.hint()
# Uncomment the line below to see the solution
step_2.solution()
```
Next, remove the pound sign before `step_2.hint()` so that the code cell above appears as follows:
```python
# Uncomment the line below to receive a hint
step_2.hint()
#step_2.solution()
```
When we remove the pound sign before a line of code, we say we **uncomment** the line. This turns the comment into a line of executable code that is run by the computer. Run the code cell now, which should return the <font color='#3366cc'>Hint</font> as output.
Finally, uncomment the line to see the solution, so the code cell appears as follows:
```python
# Uncomment the line below to receive a hint
step_2.hint()
step_2.solution()
```
Then, run the code cell. You should receive both a <font color='#3366cc'>Hint</font> and the <font color='#33cc99'>Solution</font>.
If at any point you're having trouble coming up with the correct answer to a problem, you are welcome to obtain either a hint or the solution before completing the cell. (So, you don't need to get a <font color='#33cc33'>Correct</font> result before running the code that gives you a <font color='#3366cc'>Hint</font> or the <font color='#33cc99'>Solution</font>.)
## Step 3: Plot the data
Now that the data is loaded into the notebook, you're ready to visualize it!
Run the next code cell without changes to make a line chart. The code may not make sense yet - you'll learn all about it in the next tutorial!
```
# Set the width and height of the figure
plt.figure(figsize=(16,6))
# Line chart showing how FIFA rankings evolved over time
sns.lineplot(data=fifa_data)
#we are plotting a lineplot here with the entire fifa_data
# Check your answer
step_3.a.check()
```
Some questions won't require you to write any code. Instead, you'll interpret visualizations.
As an example, consider the question: Considering only the years represented in the dataset, which countries spent at least 5 consecutive years in the #1 ranked spot?
To receive a <font color='#3366cc'>Hint</font>, uncomment the line below, and run the code cell.
```
#step_3.b.hint()
```
Once you have an answer, check the <font color='#33cc99'>Solution</font> to get credit for completing the problem and to ensure your interpretation is right.
```
# Check your answer (Run this code cell to receive credit!)
step_3.b.solution()
```
Congratulations - you have completed your first coding exercise!
# Keep going
Move on to learn to create your own **[line charts](https://www.kaggle.com/alexisbcook/line-charts)** with a new dataset.
---
*Have questions or comments? Visit the [Learn Discussion forum](https://www.kaggle.com/learn-forum/161291) to chat with other Learners.*
#### Author : Jeonghun Yoon
# Implement a sentiment classifier that predicts movie review sentiment using Naive Bayes.
- 0 : negative
- 1 : positive
## 1. Data preparation
```
import pandas as pd
# Load the movie reviews.
reviews = pd.read_csv('./inputs/ratings_train.txt', delimiter='\t')
# Inspect the data
reviews.head(10)
neg = reviews[(reviews.document.str.len() >= 30) & (reviews.label == 0)].sample(3000, random_state=43)
pos = reviews[(reviews.document.str.len() >= 30) & (reviews.label == 1)].sample(3000, random_state=43)
# Morphological analyzer (noun tokenizer)
import re
import konlpy
from konlpy.tag import Twitter
okt = Twitter()
def parse(s):
s = re.sub(r'[?$.!,-_\'\"(){}~]+', '', s)
try:
return okt.nouns(s)
except:
return []
neg['parsed_doc'] = neg.document.apply(parse)
pos['parsed_doc'] = pos.document.apply(parse)
neg.head()
pos.head()
# Training data: 5,800 documents / test data: 200 documents
neg_train = neg[:2900]
pos_train = pos[:2900]
neg_test = neg[2900:]
pos_test = pos[2900:]
```
## 2. Building the corpus
```
neg_corpus = set(neg_train.parsed_doc.sum())
pos_corpus = set(pos_train.parsed_doc.sum())
corpus = list((neg_corpus).union(pos_corpus))
print('corpus 길이', len(corpus))
corpus[:10]
```
## 3. Building bag-of-words vectors (to inspect the representation and get familiar with it)
```
from collections import OrderedDict
neg_bow_vecs = []
for _, doc in neg.parsed_doc.items():
bow_vecs = OrderedDict()
for w in corpus:
if w in doc:
bow_vecs[w] = 1
else:
bow_vecs[w] = 0
neg_bow_vecs.append(bow_vecs)
# neg_bow_vecs[0]
pos_bow_vecs = []
for _, doc in pos.parsed_doc.items():
bow_vecs = OrderedDict()
for w in corpus:
if w in doc:
bow_vecs[w] = 1
else:
bow_vecs[w] = 0
pos_bow_vecs.append(bow_vecs)
# pos_bow_vecs[0]
```
## 4. Model training
$n$ : the dimension of a document vector, i.e. the size of the whole corpus
$$p(pos|doc) = \frac{p(doc|pos) \times p(pos)}{p(doc)} = \frac{\Pi_{i=1}^{n}p(word_i|pos) \times p(pos)}{p(doc)}$$
$$p(neg|doc) = \frac{p(doc|neg) \times p(neg)}{p(doc)} = \frac{\Pi_{i=1}^{n}p(word_i|neg) \times p(neg)}{p(doc)}$$
Training the model means estimating <font color=red>$p(word_i|pos), p(word_i|neg), p(pos), p(neg)$</font>.
### Likelihood
$p(word_i|pos)$
- $\frac{\text{number of positive documents containing the word}}{\text{total number of positive documents}}$
$p(word_i|neg)$
- $\frac{\text{number of negative documents containing the word}}{\text{total number of negative documents}}$
```
import numpy as np
neg_words_likelihood_cnts = {}
for w in corpus:
cnt = 0
for _, doc in neg_train.parsed_doc.items():
if w in doc:
cnt += 1
neg_words_likelihood_cnts[w] = cnt
pos_words_likelihood_cnts = {}
for w in corpus:
cnt = 0
for _, doc in pos_train.parsed_doc.items():
if w in doc:
cnt += 1
pos_words_likelihood_cnts[w] = cnt
import operator
sorted(neg_words_likelihood_cnts.items(), key=operator.itemgetter(1), reverse=True)[:10]
sorted(pos_words_likelihood_cnts.items(), key=operator.itemgetter(1), reverse=True)[:10]
```
### Prior
$p(pos)$
- $\frac{\text{number of positive documents}}{\text{total number of documents}}$
$p(neg)$
- $\frac{\text{number of negative documents}}{\text{total number of documents}}$
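Since the training split built above is balanced (2,900 documents per class), both priors are simply 1/2, which is what the classifier below uses. As a minimal sketch (reusing the `pos_train` / `neg_train` frames defined earlier), they can also be computed directly from the data:

```
# Empirical class priors from the training split; both are 0.5 here because the classes are balanced
n_pos, n_neg = len(pos_train), len(neg_train)
pos_prior = n_pos / (n_pos + n_neg)
neg_prior = n_neg / (n_pos + n_neg)
print(pos_prior, neg_prior)
```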
# 5. Classifier
- A document is represented as a bag-of-words vector of 0s and 1s. If the corpus has size $n$, each document's bag-of-words vector has length $n$.
- e.g. suppose the whole corpus is `cat, love, i, do, like, him, you`. For the sentence `i love you`, the bag-of-words vector is `(0, 1, 1, 0, 0, 0, 1)`.
- When a word count is zero, **Laplacian smoothing** is used (see the short sketch after the formulas below).
- $m$ is the total number of documents, i.e. the number of pos documents plus the number of neg documents.
$$p(word_j|\text{neg})=\frac{\sum_{i=1}^{m}I(word_j^{(i)}=1 \text{ and }y^{(i)}=\text{neg}) + 1}{\sum_{i=1}^{m}I(y^{(i)} = \text{neg}) + \text{the number of words in corpus}}$$
$$p(word_j|\text{pos})=\frac{\sum_{i=1}^{m}I(word_j^{(i)}=1 \text{ and }y^{(i)}=\text{pos}) + 1}{\sum_{i=1}^{m}I(y^{(i)} = \text{pos}) + \text{the number of words in corpus}}$$
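The `predict` function in the next cell applies exactly these smoothed terms for every word in the corpus. As an isolated sketch of the log-likelihood contribution of a single word (the helper name is illustrative; it reuses the count dictionaries and `corpus` built above):

```
import numpy as np

def smoothed_log_likelihood(word, present, class_counts, n_class_docs, vocab_size):
    # Laplace-smoothed log p(word is present/absent | class)
    in_class = class_counts.get(word, 0)  # documents of this class that contain the word
    num = (in_class + 1) if present else (n_class_docs - in_class + 1)
    return np.log(num / (n_class_docs + vocab_size))

# e.g. log p(word w is present | pos):
# smoothed_log_likelihood(w, True, pos_words_likelihood_cnts, len(pos_train), len(corpus))
```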
```
test_data = pd.concat([neg_test, pos_test], axis=0)
def predict(doc):
    pos_prior, neg_prior = 1/2, 1/2
    # Posterior of pos
    pos_prob = np.log(1)
    for word in corpus:
        if word in doc:
            # The word is in this document and has appeared in at least one pos document
            if word in pos_words_likelihood_cnts:
                pos_prob += np.log((pos_words_likelihood_cnts[word] + 1) / (len(pos_train) + len(corpus)))
            else:
                # The word is in this document but never appeared in a pos document: Laplacian smoothing
                pos_prob += np.log(1 / (len(pos_train) + len(corpus)))
        else:
            # The word is not in this document but has appeared in a pos document
            # (so we can compute the probability of the word being absent from a pos document)
            if word in pos_words_likelihood_cnts:
                pos_prob += \
                    np.log((len(pos_train) - pos_words_likelihood_cnts[word] + 1) / (len(pos_train) + len(corpus)))
            else:
                # The word is not in this document and never appeared in a pos document: Laplacian smoothing
                pos_prob += np.log((len(pos_train) + 1) / (len(pos_train) + len(corpus)))
    pos_prob += np.log(pos_prior)
    # Posterior of neg
    neg_prob = np.log(1)
    for word in corpus:
        if word in doc:
            # The word is in this document and has appeared in at least one neg document
            if word in neg_words_likelihood_cnts:
                neg_prob += np.log((neg_words_likelihood_cnts[word] + 1) / (len(neg_train) + len(corpus)))
            else:
                # The word is in this document but never appeared in a neg document: Laplacian smoothing
                neg_prob += np.log(1 / (len(neg_train) + len(corpus)))
        else:
            # The word is not in this document but has appeared in a neg document
            # (so we can compute the probability of the word being absent from a neg document)
            if word in neg_words_likelihood_cnts:
                neg_prob += \
                    np.log((len(neg_train) - neg_words_likelihood_cnts[word] + 1) / (len(neg_train) + len(corpus)))
            else:
                # The word is not in this document and never appeared in a neg document: Laplacian smoothing
                neg_prob += np.log((len(neg_train) + 1) / (len(neg_train) + len(corpus)))
    neg_prob += np.log(neg_prior)
    if pos_prob >= neg_prob:
        return 1
    else:
        return 0
test_data['pred'] = test_data.parsed_doc.apply(predict)
test_data.head()
sum(test_data.label ^ test_data.pred)
1 - sum(test_data.label ^ test_data.pred)/len(test_data)
```
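The XOR in the last two lines counts label/prediction mismatches because both columns are 0/1; an equivalent, perhaps more readable accuracy check on the same `test_data` frame:

```
accuracy = (test_data.label == test_data.pred).mean()
print('accuracy:', accuracy)
```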
---
```
import os
import pandas as pd
import numpy as np
import pickle
from collections import Counter
%matplotlib inline
import seaborn as sns
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
%%time
stations_dict = pickle.load(open('stations_dict.p', "rb"))
stations_latlng = pickle.load(open('stations_latlng.p', "rb"))
#df = pd.read_feather('2017_all')
df = pd.read_feather('df_')
#df = pd.read_feather('df_train')
df.shape
df.columns
# 3248, 3480 special stations
sp_stations = [3248, 3480, 3247, 3215, 3478]
idx = df[(df['start station id'].isin(sp_stations))
| (df['end station id'].isin(sp_stations))].index
df.drop(idx,0,inplace=True)
df.shape
df.head(3)
df.tail(3)
# 2% of the trips end up in the same station; ignore these for our purposes
print(df[df['start station id'] == df['end station id']].shape)
df = df[df['start station id'] != df['end station id']]
df.tripduration.quantile([0, .1, .25, .5, .75, .99, .999, .9999])
# only keep trips that are less than or equal to one hour
df = df[df.tripduration <= 3600]
print(df.shape)
d = df.tripduration
sns.distplot(d, bins = 50);
sns.distplot(np.log(d), bins = 50);
del d
Counter(df.usertype)
%%time
df['date'] = df['starttime'].apply(lambda x: x.date())
%%time
S = set(df['date'])
d = dict()
for s in S:
d[s] = s.weekday() + 1
df['weekday'] = df['date'].map(d)
sns.barplot(x="weekday", y="tripduration", data=df.sample(500000))
sns.barplot(x="weekday", y="tripduration", data=df[df.usertype == 'Customer'].sample(100000))
sns.barplot(x="weekday", y="tripduration", data=df[df.usertype == 'Subscriber'].sample(100000))
# number of trips vs. dow
tmp = df.groupby(['weekday']).tripduration.size().reset_index()
sns.regplot(x="weekday", y="tripduration", data=tmp,
scatter_kws={"s": 50},
order=2, ci=None, truncate=True, fit_reg=False)
# consumer
# number of trips vs. dow
tmp = df[df.usertype == 'Customer'].groupby(['weekday']).tripduration.size().reset_index()
sns.regplot(x="weekday", y="tripduration", data=tmp,
scatter_kws={"s": 50},
order=2, ci=None, truncate=True, fit_reg=False)
%%time
df['weekend'] = df['weekday'].map(lambda x: 0 if x < 6 else 1)
S = set(df.date)
d = dict()
for s in S:
d[s] = s.month
df['month'] = df['date'].map(d)
# number of trips vs. hours
tmp = df.groupby(['starthour']).tripduration.size().reset_index()
display(tmp)
sns.regplot(x="starthour", y="tripduration", data=tmp,
scatter_kws={"s": 50}, ci=None, fit_reg=False);
def hour_min(time):
t = time.split(':')
return int(t[0])*100 + int(t[1])/60*100
%%time
df['time'] = df['starttime'].astype(str).apply(lambda x: x[11:])
df['time'] = df['time'].map(lambda x: hour_min(x))
# number of trips vs. HH%MM
tmp = df.groupby(['time']).tripduration.size().reset_index()
sns.regplot(x="time", y="tripduration", data=tmp,
scatter_kws={"s": 10}, ci=None, fit_reg=False);
# Customer
# number of trips vs. HH%MM
tmp = df[df.usertype == 'Customer'].groupby(['time']).tripduration.size().reset_index()
sns.regplot(x="time", y="tripduration", data=tmp,
scatter_kws={"s": 10}, ci=None, fit_reg=False);
# Subscriber
# number of trips vs. HH%MM
tmp = df[df.usertype == 'Subscriber'].groupby(['time']).tripduration.size().reset_index()
sns.regplot(x="time", y="tripduration", data=tmp,
scatter_kws={"s": 10}, ci=None, fit_reg=False);
%%time
plt.figure(figsize=(24,16))
sns.barplot(x="starthour", y="tripduration",
data=df[(df.usertype == 'Subscriber') & (df.weekend == 0)].sample(300000))
plt.figure(figsize=(24,16))
sns.barplot(x="starthour", y="tripduration",
data=df[(df.usertype == 'Customer') & (df.weekend == 1)].sample(300000))
tmp = df.groupby(['month', 'usertype']).tripduration.size().reset_index()
plt.figure(figsize=(24,13.5))
sns.barplot(x="month", y="tripduration", hue="usertype",
data=tmp);
# number of trips vs. day
tmp = df[df['month']==8].groupby(['date', 'usertype']).tripduration.size().reset_index()
tmp['date'] = tmp['date'].apply(lambda x: str(x)[-2:])
plt.figure(figsize=(24,13.5))
sns.barplot(x="date", y="tripduration", hue="usertype",
data=tmp);
from datetime import datetime
def display_all(df):
"""
display more than 20 rows/cols
"""
with pd.option_context("display.max_rows", 1000):
with pd.option_context("display.max_columns", 1000):
display(df)
nyc_temp = pd.read_csv('nyc_temp_2017.csv')
nyc_temp['2017'] = nyc_temp['2017'].apply(lambda x: datetime.strptime(x, "%Y-%m-%d").date())
nyc_temp.columns = ['date', 'Temp_high', 'Temp_avg', 'Temp_low', 'Precip', 'Rain', 'Snow', 'Fog']
nyc_temp.sample(5)
%%time
df = pd.merge(df, nyc_temp, 'left', on='date')
df.Precip.quantile(np.clip(np.arange(.7, 1., .05), 0, 1))
df['rain_vol'] = 0
# v light, medium, heavy
df.loc[df['Precip'] >= 0.001, 'rain_vol'] = 1
df.loc[df['Precip'] >= 0.03, 'rain_vol'] = 2
df.loc[df['Precip'] >= 0.2, 'rain_vol'] = 3
df['temp_level'] = 0
df.loc[df['Temp_high'] >= 56, 'temp_level'] = 1
df.loc[df['Temp_high'] >= 67, 'temp_level'] = 2
df.loc[df['Temp_high'] >= 76, 'temp_level'] = 3
df.loc[df['Temp_high'] >= 83, 'temp_level'] = 4
tmp = df[df.usertype == 'Subscriber']\
.groupby(['temp_level','rain_vol'])\
.agg({'tripduration': 'size',
'date': lambda x: x.nunique()}).reset_index()
tmp['avg_trip_num'] = tmp['tripduration']/tmp['date']
g = sns.barplot(x="temp_level",
y="avg_trip_num",
data=tmp,
hue="rain_vol",
palette=sns.cubehelix_palette(8, start=.9, rot=-.75))
# scatter_kws={"s": 10}, ci=None, fit_reg=False);
g.figure.set_size_inches(16, 9)
tmp = df[df.usertype == 'Subscriber']\
.groupby(['temp_level','Rain'])\
.agg({'tripduration': 'size',
'date': lambda x: x.nunique()}).reset_index()
tmp['avg_trip_num'] = tmp['tripduration']/tmp['date']
g = sns.barplot(x="temp_level", y="avg_trip_num", data=tmp, hue="Rain",)
# scatter_kws={"s": 10}, ci=None, fit_reg=False);
g.figure.set_size_inches(16, 9)
tmp = df[df.usertype == 'Subscriber']\
.groupby(['temp_level','Snow'])\
.agg({'tripduration': 'size',
'date': lambda x: x.nunique()}).reset_index()
tmp['avg_trip_num'] = tmp['tripduration']/tmp['date']
g = sns.barplot(x="temp_level", y="avg_trip_num", data=tmp, hue="Snow",)
# scatter_kws={"s": 10}, ci=None, fit_reg=False);
g.figure.set_size_inches(16, 9)
tmp = df[df.usertype == 'Subscriber']\
.groupby(['temp_level','Fog'])\
.agg({'tripduration': 'size',
'date': lambda x: x.nunique()}).reset_index()
tmp['avg_trip_num'] = tmp['tripduration']/tmp['date']
g = sns.barplot(x="temp_level", y="avg_trip_num", data=tmp, hue="Fog",)
# scatter_kws={"s": 10}, ci=None, fit_reg=False);
g.figure.set_size_inches(16, 9)
%%time
g = sns.barplot(x="temp_level", y="tripduration",
hue="rain_vol",
data=df[df.usertype == 'Subscriber'].sample(1000000),
palette=sns.cubehelix_palette(8, start=.9, rot=-.75));
g.figure.set_size_inches(16, 9)
%%time
df['lat1'] = df['start station id'].map(lambda x: stations_latlng[x][0])
df['lon1'] = df['start station id'].map(lambda x: stations_latlng[x][1])
df['lat2'] = df['end station id'].map(lambda x: stations_latlng[x][0])
df['lon2'] = df['end station id'].map(lambda x: stations_latlng[x][1])
from math import sin, cos, sqrt, atan2, radians
def manhattan_distance(latlon1, latlon2):
R = 6371
lat1 = radians(latlon1[0])
lon1 = radians(latlon1[1])
lat2 = radians(latlon2[0])
lon2 = radians(latlon2[1])
dlon = lon2 - lon1
dlat = lat2 - lat1
a1 = sin(dlat / 2)**2
c1 = 2 * atan2(sqrt(a1), sqrt(1 - a1))
d1 = R * c1
a2 = sin(dlon / 2)**2
c2 = 2 * atan2(sqrt(a2), sqrt(1 - a2))
d2 = R * c2
return d1+d2
d1 = stations_latlng[523]
d1
d2 = stations_latlng[428]
d2
a = abs(d1[0]-d2[0])
b = abs(d1[1]-d2[1])
(a+b)*111.195
manhattan_distance(d1, d2)
d1
d2
%%time
tmp = df.groupby(['start station id', 'end station id']).size().reset_index()
tmp.columns = ['start station id', 'end station id', 'size']
tmp = tmp.sort_values('size', ascending=False).reset_index()
%%time
tmp['lat1'] = tmp['start station id'].map(lambda x: stations_latlng[x][0])
tmp['lon1'] = tmp['start station id'].map(lambda x: stations_latlng[x][1])
tmp['lat2'] = tmp['end station id'].map(lambda x: stations_latlng[x][0])
tmp['lon2'] = tmp['end station id'].map(lambda x: stations_latlng[x][1])
lat2 = tmp['lat2'].values
lon2 = tmp['lon2'].values
plt.figure(figsize = (10,10))
plt.plot(lon2,lat2,'.', alpha = 0.8, markersize = 0.1)
plt.show()
def latlon2pos(lat, lon, size=240):
return int(round((40.84-lat)*size/240*1000-1)), int(round((lon+74.12)*size/240*1000-1))
%%time
tmp = df.groupby(['start station id']).size().reset_index()
tmp.columns = ['start station id', 'size']
tmp = tmp.sort_values('size', ascending=False).reset_index()
tmp['lat1'] = tmp['start station id'].map(lambda x: stations_latlng[x][0])
tmp['lon1'] = tmp['start station id'].map(lambda x: stations_latlng[x][1])
print(tmp.shape)
# show the log density of pickup and dropoff locations
s = 200
imageSize = (s,s)
locationDensityImage = np.zeros(imageSize)
for i in range(len(tmp)):
t = tmp.loc[i]
locationDensityImage[latlon2pos(t['lat1'], t['lon1'], s)] += t['size']#np.log1p(t['size'])
fig, ax = plt.subplots(nrows=1,ncols=1,figsize=(12,12))
ax.imshow(np.log1p(locationDensityImage), cmap='hot')
ax.set_axis_off()
%%time
tmp = df.groupby(['end station id']).size().reset_index()
tmp.columns = ['end station id', 'size']
tmp = tmp.sort_values('size', ascending=False).reset_index()
tmp['lat1'] = tmp['end station id'].map(lambda x: stations_latlng[x][0])
tmp['lon1'] = tmp['end station id'].map(lambda x: stations_latlng[x][1])
print(tmp.shape)
# show the log density of pickup and dropoff locations
s = 200
imageSize = (s,s)
locationDensityImage1 = np.zeros(imageSize)
for i in range(len(tmp)):
t = tmp.loc[i]
locationDensityImage1[latlon2pos(t['lat1'], t['lon1'], s)] += t['size']#np.log1p(t['size'])
fig, ax = plt.subplots(nrows=1,ncols=1,figsize=(12,12))
ax.imshow(np.log1p(locationDensityImage1), cmap='hot')
ax.set_axis_off()
from sklearn.cluster import KMeans
# %%time
# tmp = df[['start station id']].sample(200000)
# loc_df = pd.DataFrame()
# loc_df['longitude'] = tmp['start station id'].map(lambda x: stations_latlng[x][1])
# loc_df['latitude'] = tmp['start station id'].map(lambda x: stations_latlng[x][0])
# Ks = range(5, 50)
# km = [KMeans(n_clusters=i) for i in Ks]
# score = [km[i].fit(loc_df).score(loc_df) for i in range(len(km))]
# score = [abs(i) for i in score]
# plt.plot(score)  # only works after running the commented-out elbow search above, which defines `score`
%%time
tmp = df[['start station id']].sample(200000)
loc_df = pd.DataFrame()
loc_df['longitude'] = tmp['start station id'].map(lambda x: stations_latlng[x][1])
loc_df['latitude'] = tmp['start station id'].map(lambda x: stations_latlng[x][0])
kmeans = KMeans(n_clusters=16, random_state=2, n_init = 10).fit(loc_df)
loc_df['label'] = kmeans.labels_
plt.figure(figsize = (10,10))
for label in loc_df.label.unique():
plt.plot(loc_df.longitude[loc_df.label == label],loc_df.latitude[loc_df.label == label],'.', alpha = 0.3, markersize = 1)
plt.title('Clusters of New York (and New Jersey)')
plt.show()
fig,ax = plt.subplots(figsize = (10,10))
for label in loc_df.label.unique():
ax.plot(loc_df.longitude[loc_df.label == label],loc_df.latitude[loc_df.label == label],'.', alpha = 0.4, markersize = 0.1, color = 'gray')
ax.plot(kmeans.cluster_centers_[label,0],kmeans.cluster_centers_[label,1],'o', color = 'r')
ax.annotate(label, (kmeans.cluster_centers_[label,0],kmeans.cluster_centers_[label,1]), color = 'b', fontsize = 20)
ax.set_title('Cluster Centers')
plt.show()
%%time
df['start_cluster'] = kmeans.predict(df[['lon1','lat1']])
df['end_cluster'] = kmeans.predict(df[['lon2','lat2']])
clusters = pd.DataFrame()
clusters['x'] = kmeans.cluster_centers_[:,0]
clusters['y'] = kmeans.cluster_centers_[:,1]
clusters['label'] = range(len(clusters))
loc_df = loc_df.sample(5000)
import os
from matplotlib.pyplot import *
import matplotlib.pyplot as plt
from matplotlib import animation
from sklearn.cluster import KMeans
from IPython.display import HTML
from subprocess import check_output
import io
import base64
%%time
fig, ax = plt.subplots(1, 1, figsize = (10,10))
df_ = df.sample(5000000)
def animate(hour):
ax.clear()
ax.set_title('Absolute Traffic - Hour ' + str(hour))
plt.figure(figsize = (10,10));
for label in loc_df.label.unique():
ax.plot(loc_df.longitude[loc_df.label == label],loc_df.latitude[loc_df.label == label],'.', alpha = 1, markersize = 2, color = 'gray');
ax.plot(kmeans.cluster_centers_[label,0],kmeans.cluster_centers_[label,1],'o', color = 'r');
for label in clusters.label:
for dest_label in clusters.label:
num_of_rides = len(df_[(df_.start_cluster == label) & (df_.end_cluster == dest_label) & (df_.starthour == hour)])
dist_x = clusters.x[clusters.label == label].values[0] - clusters.x[clusters.label == dest_label].values[0]
dist_y = clusters.y[clusters.label == label].values[0] - clusters.y[clusters.label == dest_label].values[0]
pct = np.true_divide(num_of_rides,len(df_))
arr = Arrow(clusters.x[clusters.label == label].values, clusters.y[clusters.label == label].values, -dist_x, -dist_y, edgecolor='white', width = 15*pct)
ax.add_patch(arr)
arr.set_facecolor('g')
ani = animation.FuncAnimation(fig,animate,sorted(df.starthour.unique()), interval = 1000);
plt.close();
ani.save('Absolute.gif', writer='imagemagick', fps=2);
filename = 'Absolute.gif'
video = io.open(filename, 'r+b').read();
encoded = base64.b64encode(video);
HTML(data='''<img src="data:image/gif;base64,{0}" type="gif" />'''.format(encoded.decode('ascii')));
%%time
fig, ax = plt.subplots(1, 1, figsize = (10,10))
def animate(hour):
ax.clear()
ax.set_title('Relative Traffic - Hour ' + str(hour))
plt.figure(figsize = (10,10))
for label in loc_df.label.unique():
ax.plot(loc_df.longitude[loc_df.label == label],loc_df.latitude[loc_df.label == label],'.', alpha = 1, markersize = 2, color = 'gray')
ax.plot(kmeans.cluster_centers_[label,0],kmeans.cluster_centers_[label,1],'o', color = 'r')
for label in clusters.label:
for dest_label in clusters.label:
num_of_rides = len(df_[(df_.start_cluster == label) & (df_.end_cluster == dest_label) & (df_.starthour == hour)])
dist_x = clusters.x[clusters.label == label].values[0] - clusters.x[clusters.label == dest_label].values[0]
dist_y = clusters.y[clusters.label == label].values[0] - clusters.y[clusters.label == dest_label].values[0]
pct = np.true_divide(num_of_rides,len(df_[df_.starthour == hour]))
arr = Arrow(clusters.x[clusters.label == label].values, clusters.y[clusters.label == label].values, -dist_x, -dist_y, edgecolor='white', width = pct)
ax.add_patch(arr)
arr.set_facecolor('g')
ani = animation.FuncAnimation(fig,animate,sorted(df_.starthour.unique()), interval = 1000)
plt.close()
ani.save('Relative.gif', writer='imagemagick', fps=2)
filename = 'Relative.gif'
video = io.open(filename, 'r+b').read()
encoded = base64.b64encode(video)
HTML(data='''<img src="data:image/gif;base64,{0}" type="gif" />'''.format(encoded.decode('ascii')))
df.tripduration.quantile([0, .25, .5, .75, 1.])
df.to_feather('df_')
def col_encode(col):
"""Encodes a pandas column with continous ids.
"""
uniq = np.unique(col)
name2idx = {o:i for i,o in enumerate(uniq)}
return name2idx#, np.array([name2idx[x] for x in col]), len(uniq)
col_encode(df['usertype'])
%%time
df['user_enc'] = df['usertype'].map(col_encode(df['usertype']))
display_all(df.head(5))
%%time
# naive distance
df['est_dist'] = abs(df['lat1'] - df['lat2']) + abs(df['lon1'] - df['lon2'])
df['est_dist'] = df['est_dist'] * 111195
%%time
d = df.est_dist
sns.distplot(d, bins = 50);
del d
np.array(df.est_dist).reshape(1, -1)
%%time
for i in ['starttime', 'stoptime', 'bikeid', 'usertype']:
try:
df.drop([i], 1, inplace=True)
except:
pass
df.est_dist.quantile([.5, .95, .97, .98, .99, 1.])
display_all(df.sample(5))
df['speed'] = df.est_dist/df.tripduration
df.speed.quantile([0, .1, .2, .3, .4, .5, .6, .7 ,.8, .9, 1.])
df.speed.quantile([.9, .92, .94 ,.96, .98, .99, .995, 1.])
idx = df[df.speed > 10].index
df.drop(idx, 0 ,inplace=True)
df = df.reset_index()
%%time
d = df.speed
sns.distplot(d, bins = 50);
del d
%%time
for i in ['index', 'lat1', 'lon1', 'lat2', 'lon2', 'time', 'speed']:
try:
df.drop([i], 1, inplace=True)
except:
pass
display_all(df.head())
date_temp = df.groupby(['month']).Temp_high.mean().reset_index()
date_temp['month'] = date_temp['month']-1
date_temp.columns = ['month', 'temp']
tmp = df.groupby(['month']).tripduration.size().reset_index()
#tmp = pd.merge(tmp, date_temp, 'left', 'month')
fig, ax = plt.subplots(figsize=(24,13.5))
ax2 = ax.twinx()
sns.barplot(x="month", y="tripduration", data=tmp, color="#fff89e", ax=ax);
sns.regplot(x="month", y="temp", data=date_temp, ax=ax2, fit_reg=False);
ax.set_ylim(0, None)
ax2.set_ylim(32, 90)
plt.title('Trip numbers in each month along average temperature', fontsize=20)
plt.show()
tmp = df[['date', 'Temp_high']].groupby(['date']).first().reset_index()
tmp['diff'] = 0
tmp.loc[1:, 'diff'] = np.diff(tmp.Temp_high)
tmp.head()
temp_d = dict(zip(tmp['date'], tmp['diff']))
%%time
df['temp_diff'] = df['date'].map(temp_d)
df['est_dist'] = df['est_dist'].astype(int)
tmp = df[df.month==2].groupby(['date']).tripduration.size().reset_index()
tmp['date'] = tmp['date'].map(col_encode(tmp['date']))+1
fig, ax = plt.subplots(figsize=(24,13.5))
ax2 = ax.twinx()
sns.barplot(x="date", y="tripduration", data=tmp, color="#fff89e", ax=ax);
ax.set_ylim(0, None)
ax2.set_ylim(32, 90)
#plt.title('Trip numbers in each month along average temperature', fontsize=20)
plt.show()
display_all(df.sample(5))
tmp = pd.read_csv('nyc_temp_2017.csv')
tmp['2017'] = pd.to_datetime(tmp['2017'])
tmp.columns = ['date', 'Temp_high', 'Temp_avg',
'Temp_low', 'Precip', 'Rain',
'Snow', 'Fog', 'off_work',
'snow_plus_1']
tmp.sample(10)
tmp = tmp[['date', 'off_work', 'snow_plus_1']]
tmp.sample(3)
%%time
df = pd.merge(df, tmp, 'left', on='date')
df.sample(5)
df.to_feather('df_train')
```
|
github_jupyter
|
import os
import pandas as pd
import numpy as np
import pickle
from collections import Counter
%matplotlib inline
import seaborn as sns
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
%matplotlib inline
import seaborn as sns
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
%%time
stations_dict = pickle.load(open('stations_dict.p', "rb"))
stations_latlng = pickle.load(open('stations_latlng.p', "rb"))
#df = pd.read_feather('2017_all')
df = pd.read_feather('df_')
#df = pd.read_feather('df_train')
df.shape
df.columns
# 3248, 3480 special stations
sp_stations = [3248, 3480, 3247, 3215, 3478]
idx = df[(df['start station id'].isin(sp_stations))
| (df['end station id'].isin(sp_stations))].index
df.drop(idx,0,inplace=True)
df.shape
df.head(3)
df.tail(3)
# 2% of the trips ends up in the same station, ignore these for our purpose
print(df[df['start station id'] == df['end station id']].shape)
df = df[df['start station id'] != df['end station id']]
df.tripduration.quantile([0, .1, .25, .5, .75, .99, .999, .9999])
# only look those trips are less or equal to one hour
df = df[df.tripduration <= 3600]
print(df.shape)
d = df.tripduration
sns.distplot(d, bins = 50);
sns.distplot(np.log(d), bins = 50);
del d
Counter(df.usertype)
%%time
df['date'] = df['starttime'].apply(lambda x: x.date())
%%time
S = set(df['date'])
d = dict()
for s in S:
d[s] = s.weekday() + 1
df['weekday'] = df['date'].map(d)
sns.barplot(x="weekday", y="tripduration", data=df.sample(500000))
sns.barplot(x="weekday", y="tripduration", data=df[df.usertype == 'Customer'].sample(100000))
sns.barplot(x="weekday", y="tripduration", data=df[df.usertype == 'Subscriber'].sample(100000))
# number of trips vs. dow
tmp = df.groupby(['weekday']).tripduration.size().reset_index()
sns.regplot(x="weekday", y="tripduration", data=tmp,
scatter_kws={"s": 50},
order=2, ci=None, truncate=True, fit_reg=False)
# consumer
# number of trips vs. dow
tmp = df[df.usertype == 'Customer'].groupby(['weekday']).tripduration.size().reset_index()
sns.regplot(x="weekday", y="tripduration", data=tmp,
scatter_kws={"s": 50},
order=2, ci=None, truncate=True, fit_reg=False)
%%time
df['weekend'] = df['weekday'].map(lambda x: 0 if x < 6 else 1)
S = set(df.date)
d = dict()
for s in S:
d[s] = s.month
df['month'] = df['date'].map(d)
# number of trips vs. hours
tmp = df.groupby(['starthour']).tripduration.size().reset_index()
display(tmp)
sns.regplot(x="starthour", y="tripduration", data=tmp,
scatter_kws={"s": 50}, ci=None, fit_reg=False);
def hour_min(time):
t = time.split(':')
return int(t[0])*100 + int(t[1])/60*100
%%time
df['time'] = df['starttime'].astype(str).apply(lambda x: x[11:])
df['time'] = df['time'].map(lambda x: hour_min(x))
# number of trips vs. HH%MM
tmp = df.groupby(['time']).tripduration.size().reset_index()
sns.regplot(x="time", y="tripduration", data=tmp,
scatter_kws={"s": 10}, ci=None, fit_reg=False);
# Customer
# number of trips vs. HH%MM
tmp = df[df.usertype == 'Customer'].groupby(['time']).tripduration.size().reset_index()
sns.regplot(x="time", y="tripduration", data=tmp,
scatter_kws={"s": 10}, ci=None, fit_reg=False);
# Customer
# number of trips vs. HH%MM
tmp = df[df.usertype == 'Subscriber'].groupby(['time']).tripduration.size().reset_index()
sns.regplot(x="time", y="tripduration", data=tmp,
scatter_kws={"s": 10}, ci=None, fit_reg=False);
%%time
plt.figure(figsize=(24,16))
sns.barplot(x="starthour", y="tripduration",
data=df[(df.usertype == 'Subscriber') & (df.weekend == 0)].sample(300000))
plt.figure(figsize=(24,16))
sns.barplot(x="starthour", y="tripduration",
data=df[(df.usertype == 'Customer') & (df.weekend == 1)].sample(300000))
tmp = df.groupby(['month', 'usertype']).tripduration.size().reset_index()
plt.figure(figsize=(24,13.5))
sns.barplot(x="month", y="tripduration", hue="usertype",
data=tmp);
# number of trips vs. day
tmp = df[df['month']==8].groupby(['date', 'usertype']).tripduration.size().reset_index()
tmp['date'] = tmp['date'].apply(lambda x: str(x)[-2:])
plt.figure(figsize=(24,13.5))
sns.barplot(x="date", y="tripduration", hue="usertype",
data=tmp);
from datetime import datetime
def display_all(df):
"""
display more than 20 rows/cols
"""
with pd.option_context("display.max_rows", 1000):
with pd.option_context("display.max_columns", 1000):
display(df)
nyc_temp = pd.read_csv('nyc_temp_2017.csv')
nyc_temp['2017'] = nyc_temp['2017'].apply(lambda x: datetime.strptime(x, "%Y-%m-%d").date())
nyc_temp.columns = ['date', 'Temp_high', 'Temp_avg', 'Temp_low', 'Precip', 'Rain', 'Snow', 'Fog']
nyc_temp.sample(5)
%%time
df = pd.merge(df, nyc_temp, 'left', on='date')
df.Precip.quantile(np.clip(np.arange(.7, 1., .05), 0, 1))
df['rain_vol'] = 0
# v light, medium, heavy
df.loc[df['Precip'] >= 0.001, 'rain_vol'] = 1
df.loc[df['Precip'] >= 0.03, 'rain_vol'] = 2
df.loc[df['Precip'] >= 0.2, 'rain_vol'] = 3
df['temp_level'] = 0
df.loc[df['Temp_high'] >= 56, 'temp_level'] = 1
df.loc[df['Temp_high'] >= 67, 'temp_level'] = 2
df.loc[df['Temp_high'] >= 76, 'temp_level'] = 3
df.loc[df['Temp_high'] >= 83, 'temp_level'] = 4
tmp = df[df.usertype == 'Subscriber']\
.groupby(['temp_level','rain_vol'])\
.agg({'tripduration': 'size',
'date': lambda x: x.nunique()}).reset_index()
tmp['avg_trip_num'] = tmp['tripduration']/tmp['date']
g = sns.barplot(x="temp_level",
y="avg_trip_num",
data=tmp,
hue="rain_vol",
palette=sns.cubehelix_palette(8, start=.9, rot=-.75))
# scatter_kws={"s": 10}, ci=None, fit_reg=False);
g.figure.set_size_inches(16, 9)
tmp = df[df.usertype == 'Subscriber']\
.groupby(['temp_level','Rain'])\
.agg({'tripduration': 'size',
'date': lambda x: x.nunique()}).reset_index()
tmp['avg_trip_num'] = tmp['tripduration']/tmp['date']
g = sns.barplot(x="temp_level", y="avg_trip_num", data=tmp, hue="Rain",)
# scatter_kws={"s": 10}, ci=None, fit_reg=False);
g.figure.set_size_inches(16, 9)
tmp = df[df.usertype == 'Subscriber']\
.groupby(['temp_level','Snow'])\
.agg({'tripduration': 'size',
'date': lambda x: x.nunique()}).reset_index()
tmp['avg_trip_num'] = tmp['tripduration']/tmp['date']
g = sns.barplot(x="temp_level", y="avg_trip_num", data=tmp, hue="Snow",)
# scatter_kws={"s": 10}, ci=None, fit_reg=False);
g.figure.set_size_inches(16, 9)
tmp = df[df.usertype == 'Subscriber']\
.groupby(['temp_level','Fog'])\
.agg({'tripduration': 'size',
'date': lambda x: x.nunique()}).reset_index()
tmp['avg_trip_num'] = tmp['tripduration']/tmp['date']
g = sns.barplot(x="temp_level", y="avg_trip_num", data=tmp, hue="Fog",)
# scatter_kws={"s": 10}, ci=None, fit_reg=False);
g.figure.set_size_inches(16, 9)
%%time
g = sns.barplot(x="temp_level", y="tripduration",
hue="rain_vol",
data=df[df.usertype == 'Subscriber'].sample(1000000),
palette=sns.cubehelix_palette(8, start=.9, rot=-.75));
g.figure.set_size_inches(16, 9)
%%time
df['lat1'] = df['start station id'].map(lambda x: stations_latlng[x][0])
df['lon1'] = df['start station id'].map(lambda x: stations_latlng[x][1])
df['lat2'] = df['end station id'].map(lambda x: stations_latlng[x][0])
df['lon2'] = df['end station id'].map(lambda x: stations_latlng[x][1])
from math import sin, cos, sqrt, atan2, radians
def manhattan_distance(latlon1, latlon2):
R = 6371
lat1 = radians(latlon1[0])
lon1 = radians(latlon1[1])
lat2 = radians(latlon2[0])
lon2 = radians(latlon2[1])
dlon = lon2 - lon1
dlat = lat2 - lat1
a1 = sin(dlat / 2)**2
c1 = 2 * atan2(sqrt(a1), sqrt(1 - a1))
d1 = R * c1
a2 = sin(dlon / 2)**2
c2 = 2 * atan2(sqrt(a2), sqrt(1 - a2))
d2 = R * c2
return d1+d2
d1 = stations_latlng[523]
d1
d2 = stations_latlng[428]
d2
a = abs(d1[0]-d2[0])
b = abs(d1[1]-d2[1])
(a+b)*111.195
manhattan_distance(d1, d2)
d1
d2
%%time
tmp = df.groupby(['start station id', 'end station id']).size().reset_index()
tmp.columns = ['start station id', 'end station id', 'size']
tmp = tmp.sort_values('size', ascending=False).reset_index()
%%time
tmp['lat1'] = tmp['start station id'].map(lambda x: stations_latlng[x][0])
tmp['lon1'] = tmp['start station id'].map(lambda x: stations_latlng[x][1])
tmp['lat2'] = tmp['end station id'].map(lambda x: stations_latlng[x][0])
tmp['lon2'] = tmp['end station id'].map(lambda x: stations_latlng[x][1])
lat2 = tmp['lat2'].values
lon2 = tmp['lon2'].values
plt.figure(figsize = (10,10))
plt.plot(lon2,lat2,'.', alpha = 0.8, markersize = 0.1)
plt.show()
def latlon2pos(lat, lon, size=240):
return int(round((40.84-lat)*size/240*1000-1)), int(round((lon+74.12)*size/240*1000-1))
%%time
tmp = df.groupby(['start station id']).size().reset_index()
tmp.columns = ['start station id', 'size']
tmp = tmp.sort_values('size', ascending=False).reset_index()
tmp['lat1'] = tmp['start station id'].map(lambda x: stations_latlng[x][0])
tmp['lon1'] = tmp['start station id'].map(lambda x: stations_latlng[x][1])
print(tmp.shape)
# show the log density of pickup and dropoff locations
s = 200
imageSize = (s,s)
locationDensityImage = np.zeros(imageSize)
for i in range(len(tmp)):
t = tmp.loc[i]
locationDensityImage[latlon2pos(t['lat1'], t['lon1'], s)] += t['size']#np.log1p(t['size'])
fig, ax = plt.subplots(nrows=1,ncols=1,figsize=(12,12))
ax.imshow(np.log1p(locationDensityImage), cmap='hot')
ax.set_axis_off()
%%time
tmp = df.groupby(['end station id']).size().reset_index()
tmp.columns = ['end station id', 'size']
tmp = tmp.sort_values('size', ascending=False).reset_index()
tmp['lat1'] = tmp['end station id'].map(lambda x: stations_latlng[x][0])
tmp['lon1'] = tmp['end station id'].map(lambda x: stations_latlng[x][1])
print(tmp.shape)
# show the log density of pickup and dropoff locations
s = 200
imageSize = (s,s)
locationDensityImage1 = np.zeros(imageSize)
for i in range(len(tmp)):
t = tmp.loc[i]
locationDensityImage1[latlon2pos(t['lat1'], t['lon1'], s)] += t['size']#np.log1p(t['size'])
fig, ax = plt.subplots(nrows=1,ncols=1,figsize=(12,12))
ax.imshow(np.log1p(locationDensityImage1), cmap='hot')
ax.set_axis_off()
from sklearn.cluster import KMeans
# %%time
# tmp = df[['start station id']].sample(200000)
# loc_df = pd.DataFrame()
# loc_df['longitude'] = tmp['start station id'].map(lambda x: stations_latlng[x][1])
# loc_df['latitude'] = tmp['start station id'].map(lambda x: stations_latlng[x][0])
# Ks = range(5, 50)
# km = [KMeans(n_clusters=i) for i in Ks]
# score = [km[i].fit(loc_df).score(loc_df) for i in range(len(km))]
# score = [abs(i) for i in score]
# plt.plot(score)  # uncomment after running the (commented-out) elbow-method cell above, which defines score
%%time
tmp = df[['start station id']].sample(200000)
loc_df = pd.DataFrame()
loc_df['longitude'] = tmp['start station id'].map(lambda x: stations_latlng[x][1])
loc_df['latitude'] = tmp['start station id'].map(lambda x: stations_latlng[x][0])
kmeans = KMeans(n_clusters=16, random_state=2, n_init = 10).fit(loc_df)
loc_df['label'] = kmeans.labels_
plt.figure(figsize = (10,10))
for label in loc_df.label.unique():
plt.plot(loc_df.longitude[loc_df.label == label],loc_df.latitude[loc_df.label == label],'.', alpha = 0.3, markersize = 1)
plt.title('Clusters of New York (and New Jersey)')
plt.show()
fig,ax = plt.subplots(figsize = (10,10))
for label in loc_df.label.unique():
ax.plot(loc_df.longitude[loc_df.label == label],loc_df.latitude[loc_df.label == label],'.', alpha = 0.4, markersize = 0.1, color = 'gray')
ax.plot(kmeans.cluster_centers_[label,0],kmeans.cluster_centers_[label,1],'o', color = 'r')
ax.annotate(label, (kmeans.cluster_centers_[label,0],kmeans.cluster_centers_[label,1]), color = 'b', fontsize = 20)
ax.set_title('Cluster Centers')
plt.show()
%%time
df['start_cluster'] = kmeans.predict(df[['lon1','lat1']])
df['end_cluster'] = kmeans.predict(df[['lon2','lat2']])
clusters = pd.DataFrame()
clusters['x'] = kmeans.cluster_centers_[:,0]
clusters['y'] = kmeans.cluster_centers_[:,1]
clusters['label'] = range(len(clusters))
loc_df = loc_df.sample(5000)
import os
from matplotlib.pyplot import *
import matplotlib.pyplot as plt
from matplotlib import animation
from matplotlib.patches import Arrow  # Arrow patches are drawn in the animation cells below
from sklearn.cluster import KMeans
from IPython.display import HTML
from subprocess import check_output
import io
import base64
%%time
fig, ax = plt.subplots(1, 1, figsize = (10,10))
df_ = df.sample(5000000)
def animate(hour):
ax.clear()
ax.set_title('Absolute Traffic - Hour ' + str(hour))
plt.figure(figsize = (10,10));
for label in loc_df.label.unique():
ax.plot(loc_df.longitude[loc_df.label == label],loc_df.latitude[loc_df.label == label],'.', alpha = 1, markersize = 2, color = 'gray');
ax.plot(kmeans.cluster_centers_[label,0],kmeans.cluster_centers_[label,1],'o', color = 'r');
for label in clusters.label:
for dest_label in clusters.label:
num_of_rides = len(df_[(df_.start_cluster == label) & (df_.end_cluster == dest_label) & (df_.starthour == hour)])
dist_x = clusters.x[clusters.label == label].values[0] - clusters.x[clusters.label == dest_label].values[0]
dist_y = clusters.y[clusters.label == label].values[0] - clusters.y[clusters.label == dest_label].values[0]
pct = np.true_divide(num_of_rides,len(df_))
arr = Arrow(clusters.x[clusters.label == label].values, clusters.y[clusters.label == label].values, -dist_x, -dist_y, edgecolor='white', width = 15*pct)
ax.add_patch(arr)
arr.set_facecolor('g')
ani = animation.FuncAnimation(fig,animate,sorted(df.starthour.unique()), interval = 1000);
plt.close();
ani.save('Absolute.gif', writer='imagemagick', fps=2);
filename = 'Absolute.gif'
video = io.open(filename, 'r+b').read();
encoded = base64.b64encode(video);
HTML(data='''<img src="data:image/gif;base64,{0}" type="gif" />'''.format(encoded.decode('ascii')));
%%time
fig, ax = plt.subplots(1, 1, figsize = (10,10))
def animate(hour):
ax.clear()
ax.set_title('Relative Traffic - Hour ' + str(hour))
plt.figure(figsize = (10,10))
for label in loc_df.label.unique():
ax.plot(loc_df.longitude[loc_df.label == label],loc_df.latitude[loc_df.label == label],'.', alpha = 1, markersize = 2, color = 'gray')
ax.plot(kmeans.cluster_centers_[label,0],kmeans.cluster_centers_[label,1],'o', color = 'r')
for label in clusters.label:
for dest_label in clusters.label:
num_of_rides = len(df_[(df_.start_cluster == label) & (df_.end_cluster == dest_label) & (df_.starthour == hour)])
dist_x = clusters.x[clusters.label == label].values[0] - clusters.x[clusters.label == dest_label].values[0]
dist_y = clusters.y[clusters.label == label].values[0] - clusters.y[clusters.label == dest_label].values[0]
pct = np.true_divide(num_of_rides,len(df_[df_.starthour == hour]))
arr = Arrow(clusters.x[clusters.label == label].values, clusters.y[clusters.label == label].values, -dist_x, -dist_y, edgecolor='white', width = pct)
ax.add_patch(arr)
arr.set_facecolor('g')
ani = animation.FuncAnimation(fig,animate,sorted(df_.starthour.unique()), interval = 1000)
plt.close()
ani.save('Relative.gif', writer='imagemagick', fps=2)
filename = 'Relative.gif'
video = io.open(filename, 'r+b').read()
encoded = base64.b64encode(video)
HTML(data='''<img src="data:image/gif;base64,{0}" type="gif" />'''.format(encoded.decode('ascii')))
df.tripduration.quantile([0, .25, .5, .75, 1.])
df.to_feather('df_')
def col_encode(col):
"""Encodes a pandas column with continous ids.
"""
uniq = np.unique(col)
name2idx = {o:i for i,o in enumerate(uniq)}
return name2idx#, np.array([name2idx[x] for x in col]), len(uniq)
col_encode(df['usertype'])
%%time
df['user_enc'] = df['usertype'].map(col_encode(df['usertype']))
display_all(df.head(5))
%%time
# naive distance
df['est_dist'] = abs(df['lat1'] - df['lat2']) + abs(df['lon1'] - df['lon2'])
df['est_dist'] = df['est_dist'] * 111195
%%time
d = df.est_dist
sns.distplot(d, bins = 50);
del d
np.array(df.est_dist).reshape(1, -1)
%%time
for i in ['starttime', 'stoptime', 'bikeid', 'usertype']:
try:
df.drop([i], 1, inplace=True)
except:
pass
df.est_dist.quantile([.5, .95, .97, .98, .99, 1.])
display_all(df.sample(5))
df['speed'] = df.est_dist/df.tripduration
df.speed.quantile([0, .1, .2, .3, .4, .5, .6, .7 ,.8, .9, 1.])
df.speed.quantile([.9, .92, .94 ,.96, .98, .99, .995, 1.])
idx = df[df.speed > 10].index
df.drop(idx, 0, inplace=True)
df = df.reset_index()
%%time
d = df.speed
sns.distplot(d, bins = 50);
del d
%%time
for i in ['index', 'lat1', 'lon1', 'lat2', 'lon2', 'time', 'speed']:
try:
df.drop([i], 1, inplace=True)
except:
pass
display_all(df.head())
date_temp = df.groupby(['month']).Temp_high.mean().reset_index()
date_temp['month'] = date_temp['month']-1
date_temp.columns = ['month', 'temp']
tmp = df.groupby(['month']).tripduration.size().reset_index()
#tmp = pd.merge(tmp, date_temp, 'left', 'month')
fig, ax = plt.subplots(figsize=(24,13.5))
ax2 = ax.twinx()
sns.barplot(x="month", y="tripduration", data=tmp, color="#fff89e", ax=ax);
sns.regplot(x="month", y="temp", data=date_temp, ax=ax2, fit_reg=False);
ax.set_ylim(0, None)
ax2.set_ylim(32, 90)
plt.title('Trip numbers in each month along average temperature', fontsize=20)
plt.show()
tmp = df[['date', 'Temp_high']].groupby(['date']).first().reset_index()
tmp['diff'] = 0
tmp.loc[1:, 'diff'] = np.diff(tmp.Temp_high)
tmp.head()
temp_d = dict(zip(tmp['date'], tmp['diff']))
%%time
df['temp_diff'] = df['date'].map(temp_d)
df['est_dist'] = df['est_dist'].astype(int)
tmp = df[df.month==2].groupby(['date']).tripduration.size().reset_index()
tmp['date'] = tmp['date'].map(col_encode(tmp['date']))+1
fig, ax = plt.subplots(figsize=(24,13.5))
ax2 = ax.twinx()
sns.barplot(x="date", y="tripduration", data=tmp, color="#fff89e", ax=ax);
ax.set_ylim(0, None)
ax2.set_ylim(32, 90)
#plt.title('Trip numbers in each month along average temperature', fontsize=20)
plt.show()
display_all(df.sample(5))
tmp = pd.read_csv('nyc_temp_2017.csv')
tmp['2017'] = pd.to_datetime(tmp['2017'])
tmp.columns = ['date', 'Temp_high', 'Temp_avg',
'Temp_low', 'Precip', 'Rain',
'Snow', 'Fog', 'off_work',
'snow_plus_1']
tmp.sample(10)
tmp = tmp[['date', 'off_work', 'snow_plus_1']]
tmp.sample(3)
%%time
df = pd.merge(df, tmp, 'left', on='date')
df.sample(5)
df.to_feather('df_train')
```
from sklearn.neighbors import KNeighborsClassifier, KNeighborsRegressor
from sklearn.model_selection import GridSearchCV, cross_val_score, KFold
from sklearn.metrics import roc_auc_score, precision_recall_curve
from sklearn.metrics import auc as calculate_auc
from sklearn.metrics import mean_squared_error
from sklearn.metrics import accuracy_score
from tqdm import tqdm
from sklearn.utils import shuffle
from joblib import load, dump
import numpy as np
import pandas as pd
import os
from chembench import load_data, dataset
from molmap import feature
bitsinfo = feature.fingerprint.Extraction().bitsinfo
fp_types = bitsinfo.Subtypes.unique()
fp_types
from scipy.stats.stats import pearsonr
def r2(y_true, y_pred):
pcc, _ = pearsonr(y_true,y_pred)
return pcc[0]**2
def rmse(y_true, y_pred):
mse = mean_squared_error(y_true, y_pred)
rmse = np.sqrt(mse)
return rmse
def PRC_AUC(y_true, y_score):
precision, recall, threshold = precision_recall_curve(y_true, y_score) #PRC_AUC
auc = calculate_auc(recall, precision)
return auc
def ROC_AUC(y_true, y_score):
auc = roc_auc_score(y_true, y_score)
return auc
esol = dataset.load_ESOL()
lipop = dataset.load_Lipop()
FreeSolv = dataset.load_FreeSolv()
PDBF = dataset.load_PDBF()
datasets = [esol, lipop, FreeSolv] #malaria
performance = []
for data in datasets:
for fp_type in fp_types:
task_name = data.task_name
print(task_name, fp_type)
df, induces = load_data(task_name)
X2 = load('/raid/shenwanxiang/10_FP_effect/tempignore/X2_%s_%s.data' % (task_name, fp_type) )
n, w, c = X2.sum(axis=-1).shape
X2 = X2.reshape(n, w*c)
Y = data.y
for i, idx in enumerate(induces):
train_idx, valid_idx, test_idx = idx
X = X2[train_idx]
y = Y[train_idx]
X_valid = X2[valid_idx]
y_valid = Y[valid_idx]
X_test = X2[test_idx]
y_test = Y[test_idx]
# Set up possible values of parameters to optimize over
n_neighbors_list = np.arange(1, 15, 2)
weights_list = ['uniform', 'distance']
res = []
for n_neighbors in tqdm(n_neighbors_list, ascii=True):
for weights in weights_list:
clf = KNeighborsRegressor(n_neighbors=n_neighbors, weights = weights)
clf.fit(X, y)
score = clf.score(X_valid, y_valid)
res.append([n_neighbors, weights, score])
dfr = pd.DataFrame(res, columns = ['n_neighbors', 'weights', 'score'])
gidx = dfr['score'].idxmax()
best_params = dfr.iloc[gidx].to_dict()
best_params.pop('score')
best_params
clf = KNeighborsRegressor(**best_params)
clf.fit(X, y, )
test_r2 = r2(y_test, clf.predict(X_test))
test_rmse = rmse(y_test, clf.predict(X_test))
results = {"task_name":task_name, 'fp_type':fp_type,"split-time":i, "test_rmse":test_rmse , "test_r2": test_r2}
performance.append(results)
pd.DataFrame(performance).to_csv('./knn_regression.csv')
pd.DataFrame(performance).groupby(['task_name','fp_type'])['test_r2'].apply(lambda x:x.mean())
```
# Neural Learning - implementing elements of neural networks
_Last revision: Fri Jul 5 18:59:30 AEST 2019_
## Introduction
In this lab we will expand on some of the concepts of neural learning, starting with the perceptron. Initially we understand the representational capacity of a perceptron, then how to implement learning for elementary Boolean functions, i.e., concept learning, and look at a perceptron learning a linear classifier on a real-world dataset.
The remainder of the lab goes into some "hands-on" aspects of supervised learning for neural networks, based on the multi-layer perceptron trained by error back-propagation.
Questions as such appear only in the first section, a review of perceptrons. For the second part on the multi-layer perceptron you are expected to step through the cells, running the code, understanding why it does what it does, and possibly adding your own cells to experiment.
This code is for explanatory purposes only – for real neural networks you would use one of the many code libraries that exist.
**Note: this notebook has only been tested using Python 3.**
### Acknowledgement
The perceptron implementation for this lab is based on the presentation and code in Chapter 3 of "Machine Learning" by Stephen Marsland, CRC Press, 2015.
The multi-layer perceptron part of the lab is based on the presentation and code accompanying Chapter 18 of "Data Science from Scratch" by Joel Grus, O'Reilly Media, 2015 (all the code for the book is available [here](http://github.com/joelgrus/data-science-from-scratch)).
## (1) Linear classification with the Perceptron
### Getting started
In this lab we will use a slight variant on the notation and setup
used in the lectures.
These changes are not going to affect the capabilities of the perceptron.
For a given set of $m$ inputs, the first stage of the computation is when the perceptron multiplies each of the input values with its corresponding weight and adds these together:
$$ h = \sum_{i}^{m} w_{i} x_{i} $$
The second stage is to apply the thresholding output rule or activation function of the perceptron to produce the classification output.
For this lab we will slightly change the activation function to map to either $0$ or $1$ rather than the $-1$ or $+1$ we had in the lecture notes.
The value set for the bias or threshold input will also be changed from $1$ to $-1$.
$$ o = g(h) = \left\{
\begin{array}{lll}
1 & \mbox{if} & h > 0 \\
0 & \mbox{otherwise if} & h \leq 0 \\
\end{array}
\right. $$
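For example, with the weights used for the OR function in the next section ($w_0 = 0.02$ attached to the bias input of $-1$, and $w_1 = w_2 = 0.03$), the input $(x_1, x_2) = (0, 1)$ gives

$$ h = 0.02 \times (-1) + 0.03 \times 0 + 0.03 \times 1 = 0.01 > 0 , $$

so the activation rule outputs $o = 1$, which is the correct value of OR$(0, 1)$.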
Let's go ahead and implement a Perceptron in Python.
## Representing simple Boolean functions as a linear classifier
We will first look at modelling a simple two-input Boolean function as linear classifier. This is a Perceptron WITHOUT any learning! To get started we will use the OR function, for which the truth table will be familiar to you all. Note that you will need to pick some weights for the function to output the correct values given the input. There are many possible values that could do the job. Also, remember to take care with the dimension of the weight vector.
```
# set up the data, i.e., all the cases in the truth table
x=[[0,0],[0,1],[1,0],[1,1]]
y=[0,1,1,1]
# number of data points
n=4
# number of inputs to the perceptron
m=3
# what weights should be assigned to correctly compute the OR function ?
w=[0.02,0.03,0.03]
# loop over the data
for i in range(n):
h=w[0]*(-1)# this is the bias weight and input
for j in range(1,m):
# print('J is ', j)
# print('Data is ', x[i][j-1])
h+=w[j]*x[i][j-1]
# print('H is ', h)
if(h>0):
output=1
else:
output=0
print('For Input', x[i], 'with Class', y[i], 'Predict ', output)
```
Now change your code to model the AND function (again restricted to two inputs).
```
# set up the data, i.e., all the cases in the truth table
x=[[0,0],[0,1],[1,0],[1,1]]
y=[0,0,0,1]
# number of data points
n=4
# number of inputs to the perceptron
m=3
# what weights should be assigned to correctly compute the AND function ?
w=[0.05,0.03,0.03]
# loop over the data
for i in range(n):
h=w[0]*(-1)# this is the bias weight and input
for j in range(1,m):
h+=w[j]*x[i][j-1]
if(h>0):
output=1
else:
output=0
print('For Input', x[i], 'with Class', y[i], 'Predict ', output)
```
## Changing the data structures for machine learning
We got right down to the details of how a linear classifier works. Now, this being a perceptron, you probably recall that rather than using a fixed set of weights to do the prediction each time, there is a simple training rule that updates the weights on the basis of discrepancies between the classifier's prediction on the data and the actual class. We could extend our previous code to implement that training rule, but the code is a little fiddly and you're probably thinking there should be a simpler way to do this. There is, and it is based on moving towards coding with matrix and vector operations rather than directly using Python arrays. To do this we need to import the NumPy library (there is a tutorial at [https://docs.scipy.org/doc/numpy-dev/user/quickstart.html](https://docs.scipy.org/doc/numpy-dev/user/quickstart.html)).
For example, when we need to predict a class for an instance $\mathbf{x}$ given the current weights $\mathbf{w}$ we can use the inner product operation $\mathbf{x} \cdot \mathbf{w}$. To get this functionality using NumPy we just do the following:
```
import numpy as np
x=np.array([0,1,1])
w=np.array([0.02,0.03,0.03])
h=np.dot(x,w)
print(h)
```
But wait, there's more! Since $\mathbf{x}$ and $\mathbf{w}$ are both actually matrices, the same operation will enable us to apply the inner product of the weight vector $\mathbf{w}$ to ALL the data instances at once. In this case we write the matrix of data instances $\mathbf{X}$. Just note that we need to take care that the data matrix and weight vector are properly initialised to make this operation work correctly. Now the code for predicting the class values of all of our data given the weight vector is as follows:
```
import numpy as np
# Data set with class values in last column
dataset = np.array([[0,0,0],[0,1,1],[1,0,1],[1,1,1]]) # OR function
X=dataset[:,0:2]
y = dataset[:,2:]
# Note: the bias weight is now the last!
w = np.array([[0.03],[0.03],[0.02]])
# Add the values for the bias weights (-1) to the data matrix
nData = np.shape(X)[0]
X = np.concatenate((X,-np.ones((nData,1))),axis=1)
# get the value of the activation function
h = np.dot(X,w)
yhat = np.where(h>0,1,0)
err = yhat-y
print('Activations:\n', h)
print('Predictions:\n', yhat)
print('Misclassifications\n', err)
```
This code uses some more NumPy built-ins. Check the documentation to be sure you know what is going on. One of these, np.where(), is useful here. It takes 3 arguments and returns an array: the first argument is an element-wise condition on an array; wherever the condition is true the result takes the value of the second argument at the corresponding index, and wherever it is false it takes the value of the third argument. A small standalone example is given below; after that, see how you get on re-implementing the code to do the prediction for the two-input Boolean AND function, as above.
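A quick illustration of np.where in isolation (the values here are arbitrary):
```
import numpy as np

h = np.array([-0.5, 0.2, 0.0, 1.3])
# 1 wherever the condition h > 0 holds, 0 everywhere else
print(np.where(h > 0, 1, 0))   # [0 1 0 1]
```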
```
import numpy as np
# Data set with class values in last column
dataset = np.array([[0,0,0],[0,1,0],[1,0,0],[1,1,1]]) # AND function
X=dataset[:,0:2]
y = dataset[:,2:]
# Note: the bias weight is now the last!
w = np.array([[0.03],[0.03],[0.05]])
# Add the values for the bias weights (-1) to the data matrix
nData = np.shape(X)[0]
X = np.concatenate((X,-np.ones((nData,1))),axis=1)
# get the value of the activation function
h = np.dot(X,w)
yhat = np.where(h>0,1,0)
err = yhat-y
print('Activations:\n', h)
print('Predictions:\n', yhat)
print('Misclassifications\n', err)
```
## Adding in weight updates to make the learning work
We have spent some time just getting the weights and data in the right vector-matrix format to be able to do the prediction. What else do we need to get this thing to learn ?
One thing we will need is some random initialisation for the weight vector. What sort of values would be appropriate for this initialisation?
The initialisation will be done using a NumPy built-in. Note that we need weights for each of the inputs "nIn", plus one for the bias. Also, the "nOut" parameter is just a placeholder in case you want your Perceptron to predict more than one output at a time. Here we will just use one.
```
nIn = 2 # still working with 2-input Boolean functions
nOut = 1 # so a true/false classification output
w = np.random.rand(nIn+1,nOut)*0.1-0.05 # Check: does this return a column vector?
print(w)
```
The other main thing we need is to see how the Perceptron training rule is implemented to update the weights for each attribute given all the information in the data matrix plus the misclassifications. Note that this implementation is a batch version, unlike the version in the lecture notes which is incremental. Both approaches have their place. Here we go for simplicity of implementation.
**Question:** What must the inner dimensions of the matrix multiplication be for the weight update ? Check with the lecture notes to see what terms we will need. Recall that the augmented data matrix has $m+1$ columns, where $m$ is the number of inputs. However, the misclassifications, or errors, are of dimensionality $n$, because there is potentially one misclassification for every example in the dataset. What has to happen ?
Correct: you need to transpose the augmented data matrix to ensure the inner dimensions match (they both must be of size $n$). Check you are sure before inspecting the code (it's just a one-liner). Here the parameter "eta" is the learning rate $\eta$, which for this code is set to $0.25$. Once more $\hat{y} - y$ will be our misclassification vector. Can you see why the updated weight vector $w$ has the values it does ?
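Concretely, $X^{T}$ is $(m+1) \times n$ and $\hat{\mathbf{y}} - \mathbf{y}$ is $n \times 1$, so

$$ X^{T}(\hat{\mathbf{y}} - \mathbf{y}) \quad \mbox{has shape} \quad (m+1) \times 1 , $$

which matches the shape of the weight vector $\mathbf{w}$. The whole batch update is then $\mathbf{w} \leftarrow \mathbf{w} - \eta\, X^{T}(\hat{\mathbf{y}} - \mathbf{y})$, which is exactly the one-liner below.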
```
eta=0.25
w -= eta*np.dot(np.transpose(X),yhat-y) # this is it - learning in one line of code!
print(w)
```
Now we can put all the above together. Note that we need to set an upper limit for the number of iterations (T). Play with this code and run it as above for our Boolean functions. See what happens to the weights for "OR". Does the Perceptron learn this function? Now try "AND". Then try "XOR" (exclusive or). Now go back and experiment with the learning rate. Does anything change ?
```
from __future__ import division
import numpy as np
# Dataset with class values in last column
dataset = np.array([[0,0,0],[0,1,1],[1,0,1],[1,1,1]]) # OR function
# dataset = np.array([[0,0,0],[0,1,0],[1,0,0],[1,1,1]]) # AND function
# dataset = np.array([[0,0,0],[0,1,1],[1,0,1],[1,1,0]]) # XOR function
X = dataset[:,0:2]
y = dataset[:,2:]
nIn = np.shape(X)[1] # no. of columns of data matrix
nOut = np.shape(y)[1] # no. of columns of class values -- just 1 here
nData = np.shape(X)[0] # no. of rows of data matrix
w = np.random.rand(nIn+1,nOut)*0.1-0.05
X = np.concatenate((X,-np.ones((nData,1))),axis=1)
eta=0.25
T=20
# Train for T iterations
for t in range(T):
# Predict outputs given current weights
h = np.dot(X,w)
yhat = np.where(h>0,1,0)
# Update weights for all incorrect classifications
w -= eta*np.dot(np.transpose(X),yhat-y)
# Output current performance
errors=yhat-y
perrors=((nData - np.sum(np.where(errors==0,1,0)))/nData)
# print(perrors, 'is Error on iteration:', t)
print('Iteration:', t, ' Error:', perrors)
```
## Perceptron training on real data
Finally, try this out on a real dataset, the standard diabetes dataset. You can download this from within your program. The rest of your program should work the same. Replace the lines defining the dataset, X and y variables with the code below. Perhaps surprisingly this simple algorithm actually learns to classify, but unfortunately, this basic implementation of neural learning is not likely to find a very good model. It's also not clear if it converges. You might want to increase the number of iterations from 20. Also, you could try transforming the data, for example, by making all attribute values lie in the same range. Search for methods of normalisation using the NumPy built-in functions "np.mean()" and "np.var()". For example, you could transform dataset ```X``` with this normalisation:
```Z = (X - np.mean(X,axis = 0))/(np.var(X,axis = 0)**0.5)```.
```
import urllib
# URL for a copy of the Pima Indians Diabetes dataset (UCI Machine Learning Repository)
url = "http://cse.unsw.edu.au/~mike/comp9417/data/uci_pima_indians_diabetes.csv"
# download the file
raw_data = urllib.request.urlopen(url)
# load the CSV file as a numpy matrix
dataset = np.loadtxt(raw_data, delimiter=",")
print(dataset.shape) # 8 attributes, 1 class, 768 examples
X = dataset[:,0:8]
y = dataset[:,8:9]
```
Here is the full code
```
from __future__ import division
import numpy as np
import urllib
# URL for a copy of the Pima Indians Diabetes dataset (UCI Machine Learning Repository)
url = "http://cse.unsw.edu.au/~mike/comp9417/data/uci_pima_indians_diabetes.csv"
# download the file
raw_data = urllib.request.urlopen(url)
# load the CSV file as a numpy matrix
dataset = np.loadtxt(raw_data, delimiter=",")
print(dataset.shape)
X = dataset[:,0:8]
y = dataset[:,8:9]
nIn = np.shape(X)[1] # no. of columns of data matrix
nOut = np.shape(y)[1] # no. of columns of class values -- just 1 here
nData = np.shape(X)[0] # no. of rows of data matrix
w = np.random.rand(nIn+1,nOut)*0.1-0.05
X = np.concatenate((X,-np.ones((nData,1))),axis=1)
eta=0.25
T=20
# Train for T iterations
for t in range(T):
# Predict outputs given current weights
h = np.dot(X,w)
yhat = np.where(h>0,1,0)
# Update weights for all incorrect classifications
w -= eta*np.dot(np.transpose(X),yhat-y)
# Output current performance
errors=yhat-y
perrors=((nData - np.sum(np.where(errors==0,1,0)))/nData)
# print(perrors, 'is Error on iteration:', t)
print('Iteration:', t, ' Error:', perrors)
```
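As suggested above, here is a minimal sketch of applying the normalisation before training. It assumes ```dataset``` has been loaded exactly as in the cell above; the training loop itself is unchanged. Compare the error values printed here with those from the unnormalised run above.
```
# Re-extract attributes and class, then standardise each attribute
# to zero mean and unit variance
X = dataset[:,0:8]
y = dataset[:,8:9]
Z = (X - np.mean(X, axis=0)) / (np.var(X, axis=0)**0.5)

nIn = np.shape(Z)[1]
nOut = np.shape(y)[1]
nData = np.shape(Z)[0]

w = np.random.rand(nIn+1, nOut)*0.1 - 0.05
Z = np.concatenate((Z, -np.ones((nData,1))), axis=1)  # append the bias input

eta = 0.25
T = 20
for t in range(T):
    h = np.dot(Z, w)
    yhat = np.where(h>0, 1, 0)
    w -= eta*np.dot(np.transpose(Z), yhat-y)
    perrors = (nData - np.sum(np.where(yhat-y==0, 1, 0)))/nData
    print('Iteration:', t, ' Error:', perrors)
```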
## (2) Implementing a Multi-layer Perceptron
Although real-world applications of neural networks are typically based on one of the many special-purpose libraries (such as TensorFlow, PyTorch, CNTK, etc.), it is possible and instructive to implement at least a basic neural network using only standard Python libraries. Before coding the fully connected multi-layer neural network, let us implement some of the basic functions it needs. We will need several libraries later, so it is easiest to import them first.
```
%matplotlib inline
import matplotlib.pyplot as plt
from collections import Counter
from functools import partial
import math, random
import numpy as np
```
### What is the sigmoid function?
```
def sigmoid(x):
# To do
return
```
### What is the derivative of the sigmoid function?
```
def sigmoid_der(x):
# To do
return
```
### What is the output function for neurons?
```
def neuron_output(w,x,b):
# To do
return
```
### What is the softmax function?
```
def softmax(y):
# To do
return
```
### How to initialise the network weights?
```
def initial_weight(input_dim,output_dim,hid_layers):
number_NN = hid_layers+[output_dim]
last_neural_number = input_dim
weight_list,bias_list = [],[]
for current_neural_number in number_NN:
# To do: code up some method to initialize weights and uncomment the following 2 lines
# current_weights =
# current_bias =
last_neural_number = current_neural_number
weight_list.append(current_weights)
bias_list.append(current_bias)
return weight_list,bias_list
```
### How many functions did you manage to implement?
You should have been able to think of code for most of these functions from your knowledge of neural networks. If you did manage to get some code, great: one possible set of answers is sketched below, and further down there is a reference implementation of a multilayer perceptron in which most of your functions should work if you added them.
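For reference, here is one possible way to fill in the stubs. It is a minimal sketch: the vectorised NumPy forms mirror the ```NeuralNetwork``` class given later, and the standard-normal weights with zero biases in ```initial_weight``` are just one reasonable choice. (Note that the class further down simply takes an argmax under the name softmax; a conventional softmax over a 2-D array of scores is shown here.)
```
import numpy as np

def sigmoid(x):
    # Logistic activation, applied element-wise
    return 1 / (1 + np.exp(-x))

def sigmoid_der(x):
    # Derivative of the sigmoid, written in terms of the sigmoid *output* x
    return x * (1 - x)

def neuron_output(w, x, b):
    # Weighted sum of inputs plus bias, passed through the activation
    return sigmoid(np.dot(x, w) + b)

def softmax(y):
    # Row-wise softmax; subtract the row maximum for numerical stability
    e = np.exp(y - np.max(y, axis=1, keepdims=True))
    return e / np.sum(e, axis=1, keepdims=True)

def initial_weight(input_dim, output_dim, hid_layers):
    # One weight matrix and one bias row per layer (hidden layers, then output)
    number_NN = hid_layers + [output_dim]
    last_neural_number = input_dim
    weight_list, bias_list = [], []
    for current_neural_number in number_NN:
        current_weights = np.random.randn(last_neural_number, current_neural_number)
        current_bias = np.zeros((1, current_neural_number))
        last_neural_number = current_neural_number
        weight_list.append(current_weights)
        bias_list.append(current_bias)
    return weight_list, bias_list
```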
To test the full reference implementation we will use a toy dataset.
## Example application: simplified hand-written digit classification
We will use a dataset of simplified "hand-written" digits for classification into one of ten classes (0-9). The representation is in a text format (see below) to make it easy to handle.
For this dataset the inputs will be a 5x5 matrix of binary "pixels" (0 or 1, represented pictorially as '.' or '1' for input and '.' or '@' for output).
The network structure will be:
* 25 inputs (pixels)
* 10 hidden units (the default setting ```neural_numbers=[10]``` in the implementation below)
* 10 output units

The output unit with the largest value will be taken as the predicted digit.
We will run the network for 1000 iterations (the default number of epochs).
### Build the raw digit input and the target value
```
raw_digits = [
"""11111
1...1
1...1
1...1
11111""",
"""..1..
..1..
..1..
..1..
..1..""",
"""11111
....1
11111
1....
11111""",
"""11111
....1
11111
....1
11111""",
"""1...1
1...1
11111
....1
....1""",
"""11111
1....
11111
....1
11111""",
"""11111
1....
11111
1...1
11111""",
"""11111
....1
....1
....1
....1""",
"""11111
1...1
11111
1...1
11111""",
"""11111
1...1
11111
....1
11111"""]
def make_digit(raw_digit):
return [1 if c == '1' else 0
for row in raw_digit.split("\n")
for c in row.strip()]
inputs = np.array(list(map(make_digit, raw_digits)))
targets = np.eye(10)
```
### Implementation
Here is a Neural Network object, providing the ability to define the learning rate, number of epochs/iterations, batch size, and the number of hidden layers and neurons in each layer. The default settings of learning_rate, epochs, batch_size and neural_numbers are 0.1, 1000, None, and \[10\] respectively. If batch_size is None, all samples are used for training in each iteration. \[10\] means that there is only one hidden layer with 10 neurons. If you want to change the number of hidden layers or the number of neurons, change the value of ```neural_numbers```.
Compare your function code from above with the ones used in this implementation.
```
class NeuralNetwork(object):
def __init__(self, learning_rate=0.1, epochs=1000, batch_size=None,neural_numbers=[10]):
self.learning_rate = learning_rate
self.epochs = epochs
self.batch_size = batch_size
self.neural_numbers=neural_numbers
self.layers=len(self.neural_numbers)+1
np.random.seed(77)
def fit(self,X,y):
self.X,self.y = X,y
self.initial_weight()
self.backpropagate(X,y)
def forward(self,X):
output_list = []
input_x = X
for layer in range(self.layers):
cur_weight = self.weight_list[layer]
cur_bias = self.bias_list[layer]
# Calculate the output for current layer
output = self.neuron_output(cur_weight,input_x,cur_bias)
# The current output will be the input for the next layer.
input_x = output
output_list.append(output)
return output_list
def backpropagate(self,train_x,train_y):
acc_list=[]
for iteration in range(self.epochs):
if self.batch_size:
n=train_x.shape[0]
# Sample batch_size number of sample for n samples
sample_index=np.random.choice(n, self.batch_size, replace=False)
x=train_x[sample_index,:]
y=train_y[sample_index,:]
else:
x=train_x
y=train_y
output_list=self.forward(x)
y_pred=output_list.pop()
# Record the accuracy every 5 iteration.
if iteration%5==0:
acc=self.accuracy(self.softmax(y),self.softmax(y_pred))
acc_list.append(acc)
loss_last=y-y_pred
output=y_pred
for layer in range(self.layers-1,-1,-1):
if layer!=0:
input_last=output_list.pop()
else:
input_last=x
if layer==self.layers-1:
loss,dw,db=self.der_last_layer(loss_last,output,input_last)
else:
weight=self.weight_list[layer+1]
loss,dw,db=self.der_hidden_layer(loss_last,output,input_last,weight)
output=input_last
self.weight_list[layer] +=dw*self.learning_rate
self.bias_list[layer] +=db*self.learning_rate
loss_last=loss
self.acc_list=acc_list
def predict(self,X):
output_list = self.forward(X)
pred_y = self.softmax(output_list[-1])
return pred_y
def accuracy(self, pred, y_test):
assert len(pred) == len(y_test)
true_pred=np.where(pred==y_test)
if true_pred:
true_n = true_pred[0].shape[0]
return true_n/len(pred)
else:
return 0
def initial_weight(self):
if self.X is not None and self.y is not None:
x=self.X
y=self.y
input_dim = x.shape[1]
output_dim = y.shape[1]
number_NN = self.neural_numbers+[output_dim]
weight_list,bias_list = [],[]
last_neural_number = input_dim
for cur_neural_number in number_NN:
# The dimension of weight matrix is last neural number * current neural number
weights = np.random.randn(last_neural_number, cur_neural_number)
                # The bias is a row vector with one entry per neuron in the current layer
bias = np.zeros((1, cur_neural_number))
last_neural_number=cur_neural_number
weight_list.append(weights)
bias_list.append(bias)
self.weight_list=weight_list
self.bias_list=bias_list
# Classical sigmoid activation functions are used in every layer in this network
def sigmoid(self, x):
return 1 / (1 + np.exp(-x))
    # Derivative of the sigmoid activation function (written in terms of the sigmoid output)
def sigmoid_der(self, x):
return (1 - x) * x
# Calculate the output for this layer
def neuron_output(self,w,x,b):
wx=np.dot(x, w)
return self.sigmoid( wx + b)
def der_last_layer(self,loss_last,output,input_x):
sigmoid_der=self.sigmoid_der(output)
loss = sigmoid_der*loss_last
dW = np.dot(input_x.T, loss)
db = np.sum(loss, axis=0, keepdims=True)
return loss,dW,db
def der_hidden_layer(self,loss_last,output,input_x,weight):
loss = self.sigmoid_der(output) * np.dot(loss_last,weight.T)
db = np.sum(loss, axis=0, keepdims=True)
dW = np.dot(input_x.T, loss)
return loss,dW,db
def softmax(self,y):
return np.argmax(y,axis=1)
```
### How to run the implementation
```
Learning_rate=0.05
nn=NeuralNetwork(learning_rate=Learning_rate)
nn.fit(inputs,targets)
```
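The constructor arguments can be varied in the same way. For example, here is a sketch of a configuration with two hidden layers and mini-batch updates; the layer sizes, batch size and epoch count are arbitrary choices for illustration:
```
# Two hidden layers (32 and 16 neurons), mini-batches of 5 digits, longer run;
# all of these values are illustrative only
nn = NeuralNetwork(learning_rate=0.1, epochs=2000, batch_size=5, neural_numbers=[32, 16])
nn.fit(inputs, targets)
print(nn.predict(inputs))  # predicted class indices for the 10 digit patterns
```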
### Experimenting with the implementation
Parameter tuning is not that easy in neural networks. To see this, let's investigate the relationship between learning rate and accuracy. Below is a function to test the effect of learning rate on accuracy. Run it and it should generate some plots to show the effect.
If you want to try other values for the learning rate, or investigate the effect of other parameters, go ahead and change them and see what happens.
```
def test_LearnRate(Learning_rate,inputs,targets):
nn=NeuralNetwork(learning_rate=Learning_rate)
nn.fit(inputs,targets)
acc_array=np.array(nn.acc_list)
plt.plot(np.arange(acc_array.shape[0])*5,acc_array)
plt.title("Learning Rate:{}".format(Learning_rate))
plt.ylabel("Accuracy")
plt.xlabel("Number of iterations")
plt.figure()
plt.subplot(2,2,1)
Learning_rate=0.05
test_LearnRate(Learning_rate,inputs,targets)
plt.subplot(2,2,2)
Learning_rate=0.1
test_LearnRate(Learning_rate,inputs,targets)
plt.subplot(2,2,3)
Learning_rate=0.5
test_LearnRate(Learning_rate,inputs,targets)
plt.subplot(2,2,4)
Learning_rate=1
test_LearnRate(Learning_rate,inputs,targets)
plt.tight_layout()
plt.show()
```
# R(2+1)D Model on Webcam Stream
## Prerequisite for Webcam example
This notebook assumes you have a webcam connected to your machine. If you want to use a remote VM to run the model and code while using a local machine for the webcam stream, you can use an SSH tunnel:
1. SSH connect to your VM:
   `$ ssh -L 8888:localhost:8888 <user-id@url-to-your-vm>`
2. Launch a Jupyter session on the VM (with port 8888, which is the default)
3. Open localhost:8888 from your browser on the webcam-connected local machine to access the Jupyter notebook running on the VM.
We use the `ipywebrtc` module to show the webcam widget in the notebook. Currently, the widget works on Chrome and Firefox. For more details about the widget, please visit [ipywebrtc github](https://github.com/maartenbreddels/ipywebrtc).
```
%reload_ext autoreload
%autoreload 2
from collections import deque
import io
import os
import sys
from time import sleep, time
from threading import Thread
import decord
import IPython.display
from ipywebrtc import CameraStream, ImageRecorder
from ipywidgets import HBox, HTML, Layout, VBox, Widget
import numpy as np
from PIL import Image
import torch
import torch.cuda as cuda
import torch.nn as nn
from torchvision.transforms import Compose
from vu.data import KINETICS
from vu.models.r2plus1d import R2Plus1D
from vu.utils import system_info, transforms_video as transforms
system_info()
```
## Load Pre-trained Model
Load the R(2+1)D 34-layer model pre-trained on IG65M and fine-tuned on Kinetics400. There are two versions of the model, an 8-frame model and a 32-frame model, based on the input clip length. The 32-frame model is slower than the 8-frame model.
```
NUM_CLASSES = 400
NUM_FRAMES = 8 # 8 or 32.
IM_SCALE = 128 # resize then crop
INPUT_SIZE = 112 # input clip size: 3 x NUM_FRAMES x 112 x 112
# Normalization
MEAN = (0.43216, 0.394666, 0.37645)
STD = (0.22803, 0.22145, 0.216989)
model = R2Plus1D.init_model(
sample_length=NUM_FRAMES,
base_model='kinetics'
)
```
### Prepare class names
Since we use the Kinetics400 model out of the box, we load its class names. The dataset consists of 400 human actions. For example, the first 20 labels are:
```
labels = KINETICS.class_names
labels[:20]
```
Among them, we will use 50 classes that we are interested in (i.e. actions that make sense to demonstrate in front of a webcam) and ignore the other classes by filtering them out of the model outputs.
```
REL_LABELS = [
"assembling computer",
"applying cream",
"brushing teeth",
"clapping",
"cleaning floor",
"cleaning windows",
"drinking",
# will regard all eatings as simply "eating"
"eating burger",
"eating chips",
"eating doughnuts",
"eating hotdog",
"eating ice cream",
"fixing hair",
"hammer throw",
    # will regard all kicking as simply "kicking"
"high kick",
    # will regard jogging and running on treadmill as "running"
"jogging",
"laughing",
"mopping floor",
"moving furniture",
"opening bottle",
"plastering",
    # will regard all punching as simply "punching"
"punching bag",
"punching person (boxing)",
"pushing cart",
# will regard all readings as simply "reading"
"reading book",
"reading newspaper",
"rock scissors paper",
"running on treadmill",
"shaking hands",
"shaking head",
"side kick",
"slapping",
"smoking",
"sneezing",
"spray painting",
"spraying",
"stretching arm",
"stretching leg",
"sweeping floor",
"swinging legs",
"texting",
    # will regard all throwing as simply "throwing"
"throwing axe",
"throwing ball",
"unboxing",
"unloading truck",
"using computer",
"using remote controller (not gaming)",
"welding",
"writing",
"yawning",
]
len(REL_LABELS)
```
### Load model to device
```
if cuda.is_available():
device = torch.device("cuda")
else:
device = torch.device("cpu")
model.to(device)
model.eval()
```
## Run Model
Here, we use sliding-window classification for action recognition on the continuous webcam stream. We average the results of the last 5 windows to smooth out the predictions, and we also reject classes whose score is less than `SCORE_THRESHOLD`. A small sketch of this smoothing logic in isolation is given below.
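Before wiring this into the webcam widget further down, here is a minimal sketch of the rolling-average smoothing on its own (random scores stand in for real model outputs):
```
from collections import deque
import numpy as np

NUM_CLASSES = 400
AVERAGING_SIZE = 5

scores_cache = deque()
scores_sum = np.zeros(NUM_CLASSES)

for _ in range(20):                       # pretend we classified 20 clips
    scores = np.random.rand(NUM_CLASSES)  # stand-in for the model's softmax scores
    scores_cache.append(scores)
    scores_sum += scores
    if len(scores_cache) == AVERAGING_SIZE:
        scores_avg = scores_sum / AVERAGING_SIZE   # smoothed, video-level scores
        top5 = (-scores_avg).argpartition(4)[:5]   # indices of the 5 largest scores
        print(top5)                                # class ids of the current top-5
        scores_sum -= scores_cache.popleft()       # slide the window forward
```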
```
SCORE_THRESHOLD = 0.04
AVERAGING_SIZE = 5 # Averaging 5 latest clips to make video-level prediction (or smoothing)
transform = Compose([
transforms.ToTensorVideo(),
transforms.ResizeVideo(IM_SCALE),
transforms.CenterCropVideo(INPUT_SIZE),
transforms.NormalizeVideo(MEAN, STD)
])
def predict(frames, transform, device, model):
clip = torch.from_numpy(np.array(frames))
# Transform frames and append batch dim
sample = torch.unsqueeze(transform(clip), 0)
sample = sample.to(device)
output = model(sample)
scores = nn.functional.softmax(output, dim=1).data.cpu().numpy()[0]
return scores
def filter_labels(
id_score_dict,
labels,
threshold=0.0,
target_labels=None,
filter_labels=None
):
# Show only interested actions (target_labels) with a confidence score >= threshold
result = {}
for i, s in id_score_dict.items():
l = labels[i]
if (s < threshold) or\
(target_labels is not None and l not in target_labels) or\
(filter_labels is not None and l in filter_labels):
continue
# Simplify some labels
if l.startswith('eating'):
l = 'eating'
elif l.startswith('reading'):
l = 'reading'
elif l.startswith('punching'):
l = 'punching'
elif l.startswith('throwing'):
l = 'throwing'
elif l.endswith('kick'):
l = 'kicking'
elif l == 'jogging' or l == 'running on treadmill':
l = 'running'
if l in result:
result[l] += s
else:
result[l] = s
return result
```
### On Webcam Stream
#### Start webcam
```
# Webcam
w_cam = CameraStream(
constraints={
'facing_mode': 'user',
'audio': False,
'video': {'width': 400, 'height': 400}
},
layout=Layout(width='400px')
)
# Image recorder for taking a snapshot
w_imrecorder = ImageRecorder(
format='jpg',
stream=w_cam,
layout=Layout(padding='0 0 0 100px')
)
# Text widget to show our classification results
w_text = HTML(layout=Layout(padding='0 0 0 100px'))
def predict_webcam_frames():
""" Predict activity by using a pretrained model
"""
global w_imrecorder, w_text, is_playing
global device, model
# Use deque for sliding window over frames
window = deque()
scores_cache = deque()
scores_sum = np.zeros(NUM_CLASSES)
while is_playing:
try:
# Get the image (RGBA) and convert to RGB
im = Image.open(
io.BytesIO(w_imrecorder.image.value)
).convert('RGB')
window.append(np.array(im))
if len(window) == NUM_FRAMES:
# Make a prediction
t = time()
scores = predict(window, transform, device, model)
dur = time() - t
# Averaging scores across clips (dense prediction)
scores_cache.append(scores)
scores_sum += scores
if len(scores_cache) == AVERAGING_SIZE:
scores_avg = scores_sum / AVERAGING_SIZE
# 1. Pick top-5 labels
top5_id_score_dict = {
i: scores_avg[i] for i in (-scores_avg).argpartition(4)[:5]
}
# 2. Filter by SCORE_THRESHOLD and REL_LABELS
top5_label_score_dict = filter_labels(
top5_id_score_dict,
labels,
threshold=SCORE_THRESHOLD,
target_labels=REL_LABELS
)
# 3. Display the labels sorted by scores
top5 = sorted(
top5_label_score_dict.items(), key=lambda kv: -kv[1]
)
# Plot final results nicely
w_text.value = (
"{} fps<p style='font-size:20px'>".format(1//dur) + "<br>".join([
"{} ({:.3f})".format(k, v) for k, v in top5
]) + "</p>"
)
scores_sum -= scores_cache.popleft()
window.popleft()
else:
w_text.value = "Preparing..."
except OSError:
# If im_recorder doesn't have valid image data, skip it.
pass
except BaseException as e:
w_text.value = "Exception: " + str(e)
break
# Taking the next snapshot programmatically
w_imrecorder.recording = True
sleep(0.02)
is_playing = False
# Once prediction has started, hide the image recorder widget for faster fps
def start(_):
global is_playing
# Make sure this get called only once
if not is_playing:
w_imrecorder.layout.display = 'none'
is_playing = True
Thread(target=predict_webcam_frames).start()
w_imrecorder.image.observe(start, 'value')
HBox([w_cam, w_imrecorder, w_text])
```
To start inference on the webcam stream, click the 'capture' button once the stream has started.
#### Stop Webcam and clean-up
```
is_playing = False
Widget.close_all()
```
### Appendix: Run on a video file
Here, we show how to use the model on a video file. We utilize threading so that the inference does not block the video preview.
* Prerequisite - Download HMDB51 video files from [here](http://serre-lab.clps.brown.edu/resource/hmdb-a-large-human-motion-database/#Downloads)
```
def _predict_video_frames(window, scores_cache, scores_sum, is_ready):
t = time()
scores = predict(window, transform, device, model)
dur = time() - t
# Averaging scores across clips (dense prediction)
scores_cache.append(scores)
scores_sum += scores
if len(scores_cache) == AVERAGING_SIZE:
scores_avg = scores_sum / AVERAGING_SIZE
top5_id_score_dict = {
i: scores_avg[i] for i in (-scores_avg).argpartition(4)[:5]
}
top5_label_score_dict = filter_labels(
top5_id_score_dict,
labels,
threshold=SCORE_THRESHOLD,
)
top5 = sorted(top5_label_score_dict.items(), key=lambda kv: -kv[1])
# Plot final results nicely
d_caption.update(IPython.display.HTML(
"{} fps<p style='font-size:20px'>".format(1 // dur) + "<br>".join([
"{} ({:.3f})".format(k, v) for k, v in top5
]) + "</p>"
))
scores_sum -= scores_cache.popleft()
# Inference done. Ready to run on the next frames.
window.popleft()
is_ready[0] = True
def predict_video_frames(video_filepath, d_video, d_caption):
"""Load video and show frames and inference results on
d_video and d_caption displays
"""
video_reader = decord.VideoReader(video_filepath)
print("Total frames = {}".format(len(video_reader)))
is_ready = [True]
window = deque()
scores_cache = deque()
scores_sum = np.zeros(NUM_CLASSES)
while True:
try:
frame = video_reader.next().asnumpy()
if len(frame.shape) != 3:
break
# Start an inference thread when ready
if is_ready[0]:
window.append(frame)
if len(window) == NUM_FRAMES:
is_ready[0] = False
Thread(
target=_predict_video_frames,
args=(window, scores_cache, scores_sum, is_ready)
).start()
# Show video preview
f = io.BytesIO()
im = Image.fromarray(frame)
im.save(f, 'jpeg')
d_video.update(IPython.display.Image(data=f.getvalue()))
sleep(0.03)
except:
break
video_filepath = os.path.join(
"data", "hmdb51", "videos",
"push", "Baby_Push_Cart_push_f_cm_np1_ri_bad_0.avi"
)
d_video = IPython.display.display("", display_id=1)
d_caption = IPython.display.display("Preparing...", display_id=2)
try:
predict_video_frames(video_filepath, d_video, d_caption)
except KeyboardInterrupt:
pass
```
# Scattertext spaCy with Yelp Dataset
Exploratory data analysis and visualization for text data
Medium Article - [Analyze Yelp Dataset with Scattertext spaCy](https://link.medium.com/k3DRTC57I1)
[GitHub Repo](https://github.com/gyhou/yelp_dataset)
https://www.yelp.com/dataset/
```
import pandas as pd
# csv file can be found in the github repo
df = pd.read_csv('yelp_reviews_RV_categories.csv')
print(df.shape)
df.head()
# Check how rating is distributed
import seaborn as sns
sns.distplot(df['review_stars']);
# Consolidate rating to high or low
df['rating'] = df['review_stars'].replace({1:'Low Rating', 2:'Low Rating', 3:'Low Rating',
4:'High Rating', 5:'High Rating'})
df.rating.value_counts()
# Group similar categories
df_RV_Auto = df[df['categories'].str.contains('RV Repair|RV Dealers|RV Rental', case=False, na=False)]
df_Parks_Camp = df[df['categories'].str.contains('RV Parks|Campgrounds', case=False, na=False)]
```
## Use NLP on review text
```
# Make sure you have the english language model
# !python -m spacy download en_core_web_sm
import spacy
import scattertext
# https://spacy.io/models/en
# use the english model that you have
nlp = spacy.load('en_core_web_sm')
# Read additional stop words from a text file
with open('stopwords.txt', 'r') as f:
str_f = f.read()
stopwords_file = set(str_f.split('\n'))
nlp.Defaults.stop_words |= stopwords_file
# Add more stop words
from nltk.corpus import stopwords
stopWords = set(stopwords.words('english'))
nlp.Defaults.stop_words |= stopWords
```
### Set up corpus - Term Frequency and Scaled F-Score
```
def term_freq(df_yelp):
corpus = (scattertext.CorpusFromPandas(df_yelp,
category_col='rating',
text_col='text',
nlp=nlp)
.build()
.remove_terms(nlp.Defaults.stop_words, ignore_absences=True)
# ignore_absences: if the term does not appear, don't raise an error, just move on.
)
df = corpus.get_term_freq_df()
df['High_Rating_Score'] = corpus.get_scaled_f_scores('High Rating')
df['Low_Rating_Score'] = corpus.get_scaled_f_scores('Low Rating')
df['High_Rating_Score'] = round(df['High_Rating_Score'], 2)
df['Low_Rating_Score'] = round(df['Low_Rating_Score'], 2)
df_high = df.sort_values(by='High Rating freq',
ascending = False).reset_index()
df_low = df.sort_values(by='Low Rating freq',
ascending=False).reset_index()
return df_high, df_low
# Frequency and Scaled F-Score for RV Parks and Campgrounds
Parks_Camp_high, Parks_Camp_low = term_freq(df_Parks_Camp)
# Sorted by High Rating Frequency
Parks_Camp_high.head(10)
# Sorted by Low Rating Frequency
Parks_Camp_low.head(10)
# Frequency and Scaled F-Score for RV Repair, RV Dealers and RV Rental
RV_Auto_high, RV_Auto_low = term_freq(df_RV_Auto)
RV_Auto_high.head(10)
RV_Auto_low.head(10)
# Frequency and Scaled F-Score for all 5 RV categories
RV_all_high, RV_all_low = term_freq(df)
RV_all_high.head(10)
RV_all_low.head(10)
```
## Using Scattertext to visualize term associations
```
# Build the corpus for the RV Parks and Campgrounds reviews
corpus_dataframe = df_Parks_Camp
corpus = (scattertext.CorpusFromPandas(corpus_dataframe,
                                       category_col='rating',
                                       text_col='text',
                                       nlp=nlp)
          .build()
          .remove_terms(nlp.Defaults.stop_words, ignore_absences=True))
# Label each excerpt with the name of the business using the metadata parameter
html = scattertext.produce_scattertext_explorer(corpus,
                                                category='Low Rating',
                                                category_name='Low Rating',
                                                not_category_name='High Rating',
                                                width_in_pixels=1000,
                                                metadata=corpus_dataframe['name'])
html_file_name = "RV-Parks-Campgrounds-Yelp-Review-Scattertext.html"
open(html_file_name, 'wb').write(html.encode('utf-8'))
```
```
%matplotlib inline
import sys
print(sys.version)
import numpy as np
print(np.__version__)
import pandas as pd
print(pd.__version__)
import matplotlib.pyplot as plt
```
Now fundamentally the data frame is just an abstraction but it provides a ton of useful tools that you’re going to get to see. This video is just going to go over the basic idea of the data frame as well as how to create them.
```
import string
upcase = [x for x in string.ascii_uppercase]
lcase = [x for x in string.ascii_lowercase]
print(upcase[:5], lcase[:5])
```
You can create DataFrames by passing in np arrays, lists of series, or dictionaries.
```
pd.DataFrame([upcase, lcase])
```
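For completeness, here is a minimal sketch (not part of the original video) of the NumPy-array variant mentioned above; the column names are just illustrative:
```
# Build a DataFrame directly from a NumPy array with illustrative column names
arr = np.arange(12).reshape(4, 3)
pd.DataFrame(arr, columns=['a', 'b', 'c'])
```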
We’ll be covering a lot of different aspects here but as always we’re going to start with the simple stuff. A simplified way to think of a data frame is as an Excel or SQL table: you’ve got columns and rows.
In more specific pandas terms, it's a more powerful list of series. Each column is a Series of data and it just so happens these can have relationships.
You can see that if we just pass in a list of lists, each inner list is treated as a row. Of course, if that’s an issue, we can just transpose the result and we’ll get them as columns.
```
pd.DataFrame([upcase, lcase]).T
```
This should be familiar because it’s the same way that we transpose ndarrays in numpy.
Of course we can also specify them as explicit columns by passing in a dictionary where the keys are the column names and the values are the lists of items in each column.
```
letters = pd.DataFrame({'lowercase':lcase, 'uppercase':upcase})
letters.head()
```
Now you’ll see that if these lengths are not the same, we’ll get a ValueError, so it’s worth checking that your data is clean before importing or using it to create a DataFrame.
```
pd.DataFrame({'lowercase':lcase + [0], 'uppercase':upcase})
letters.head()
```
We can rename the columns easily and even add a new one through a relatively simple dictionary-like assignment. I'll go over some more complex methods later on.
```
letters.columns = ['LowerCase','UpperCase']
np.random.seed(25)
letters['Number'] = np.random.randint(1, 51, 26)  # random integers from 1 to 50
letters
```
Now just like Series, DataFrames have data types, we can get those by accessing the dtypes of the DataFrame which will give us details on the data types we've got.
```
letters.dtypes
letters.index = lcase
letters
```
Of course we can sort, either by a specific column (`sort_values`) or by the index (`sort_index`).
```
letters.sort_values('Number')
letters.sort_index()
```
We've seen how to query for one column and multiple columns isn't too much more difficult.
We can get upper and lower case columns
```
letters[['LowerCase','UpperCase']].head()
```
We can also just query the index as well. We went over a lot of that in the Series Section and a lot of the same applies here.
We can query by index location or by letters
```
letters.iloc[5:10]
letters["f":"k"]
```
Now that we’ve covered this basic concept of pandas, let's recap. We covered how indexes integrate with both Series and DataFrames, and how numpy underlies a lot of the power we've got. To be honest, we’ve really covered a lot of the fundamentals for doing data analysis with python and pandas.
Although these videos have been using fabricated data, we have covered a lot of the methods that you’re going to be using on a regular basis during your analysis of data.
Let's go ahead and dive into our first data set
```
from PIL import Image
import torch
import argparse
import numpy as np
import matplotlib.pyplot as plt
import sys
sys.path.append('../') # add relative path
from module.sttr import STTR
from dataset.preprocess import normalization, compute_left_occ_region
from utilities.misc import NestedTensor
```
### Define STTR model
```
# Default parameters
args = type('', (), {})() # create empty args
args.channel_dim = 128
args.position_encoding='sine1d_rel'
args.num_attn_layers=6
args.nheads=8
args.regression_head='ot'
args.context_adjustment_layer='cal'
args.cal_num_blocks=8
args.cal_feat_dim=16
args.cal_expansion_ratio=4
model = STTR(args).cuda().eval()
# Load the pretrained model
model_file_name = "../kitti_finetuned_model.pth.tar"
checkpoint = torch.load(model_file_name)
pretrained_dict = checkpoint['state_dict']
model.load_state_dict(pretrained_dict)
print("Pre-trained model successfully loaded.")
```
### Read image
```
left = np.array(Image.open('../sample_data/KITTI_2015/training/image_2/000046_10.png'))
right = np.array(Image.open('../sample_data/KITTI_2015/training/image_3/000046_10.png'))
disp = np.array(Image.open('../sample_data/KITTI_2015/training/disp_occ_0/000046_10.png')).astype(float) / 256.
# Visualize image
plt.figure(1)
plt.imshow(left)
plt.figure(2)
plt.imshow(right)
plt.figure(3)
plt.imshow(disp)
```
### Preprocess data for STTR
```
# normalize
input_data = {'left': left, 'right':right, 'disp':disp}
input_data = normalization(**input_data)
# downsample attention by stride of 3
h, w, _ = left.shape
bs = 1
downsample = 3
col_offset = int(downsample / 2)
row_offset = int(downsample / 2)
sampled_cols = torch.arange(col_offset, w, downsample)[None,].expand(bs, -1).cuda()
sampled_rows = torch.arange(row_offset, h, downsample)[None,].expand(bs, -1).cuda()
# build NestedTensor
input_data = NestedTensor(input_data['left'].cuda()[None,],input_data['right'].cuda()[None,], sampled_cols=sampled_cols, sampled_rows=sampled_rows)
```
### Inference
```
output = model(input_data)
# set disparity of occ area to 0
disp_pred = output['disp_pred'].data.cpu().numpy()[0]
occ_pred = output['occ_pred'].data.cpu().numpy()[0] > 0.5
disp_pred[occ_pred] = 0.0
# visualize predicted disparity and occlusion map
plt.figure(4)
plt.imshow(disp_pred)
plt.figure(5)
plt.imshow(occ_pred)
```
### Compute metrics
```
# manually compute occluded region
occ_mask = compute_left_occ_region(w, disp)
# visualize the known occluded region
plt.figure(6)
plt.imshow(occ_mask)
# compute difference in non-occluded region only
diff = disp - disp_pred
diff[occ_mask] = 0.0 # set occ area to be 0.0
# Note: code for computing the metrics can be found in module/loss.py
valid_mask = np.logical_and(disp > 0.0, ~occ_mask)
# find 3 px error
err_px = (diff > 3).sum()
total_px = (valid_mask).sum()
print('3 px error %.3f%%'%(err_px*100.0/total_px))
# find epe
err = np.abs(diff[valid_mask]).sum()
print('EPE %f'%(err * 1.0/ total_px))
```
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import sys
from google.colab import drive
# Mount google drive
drive.mount("/content/drive")
```
# Coordinates and euclidean distance
```
class Coordinate:
def __init__(self, x, y):
self.x = x
self.y = y
@staticmethod
# Euclidean distance between two points
def cal_dist(a, b):
return np.sqrt(((a.x - b.x) ** 2) + ((a.y - b.y) ** 2))
@staticmethod
def cal_tot_dist(coordinate):
# Initializing distance
distance = 0
# Calculating the distance between each consecutive pair of points
for f, s in zip(coordinate[:-1], coordinate[1:]):
distance = distance + Coordinate.cal_dist(f, s)
distance = distance + Coordinate.cal_dist(coordinate[0], coordinate[-1])
return distance
```
# Generate random nodes
```
# Driver functions and inputs
coordinate = []
x = []
y = []
# Change the dimension
dim = 50
# Inserting coordinates into the class objects
for i in range(dim):
object = Coordinate(np.random.uniform(), np.random.uniform())
x.append(object.x)
y.append(object.y)
coordinate.append(object)
```
# Rajasthan Tourist Places
```
# Get the x and y coordinates of 25 places
df = pd.read_csv("/content/drive/My Drive/rajasthan.csv")
x = df["x"]
y = df["y"]
dim = len(x)
coordinate = []
for i in range(dim):
object = Coordinate(x[i], y[i])
coordinate.append(object)
```
# VLSI datasets
```
def get_data(option=1):
if (option == 1):
file = "/content/drive/My Drive/VLSI Datasets/xqf131.tsp"
elif (option == 2):
file = "/content/drive/My Drive/VLSI Datasets/xqg237.tsp"
elif (option == 3):
file = "/content/drive/My Drive/VLSI Datasets/pma343.tsp"
elif (option == 4):
file = "/content/drive/My Drive/VLSI Datasets/pka379.tsp"
elif (option == 5):
file = "/content/drive/My Drive/VLSI Datasets/bcl380.tsp"
infile = open(file, "r")
content = infile.readline().strip().split()
while content[0] != "NODE_COORD_SECTION":
if content[0] == "DIMENSION":
dimension = content[2]
content = infile.readline().strip().split()
arr_x = []
arr_y = []
# Fill the x, y coordinates into the arr_x, arr_y
for i in range(0, int(dimension)):
s, x, y = infile.readline().strip().split()[:]
arr_x.append(float(x))
arr_y.append(float(y))
# Close the file
infile.close()
return dimension, arr_x, arr_y
# Mount google drive
drive.mount("/content/drive")
x = []
y = []
coordinate = []
# "Cities": "Options"
# 131 : 1
# 237 : 2
# 343 : 3
# 379 : 4
# 380 : 5
# Enter the option (parameter) here 👇
dim, x, y = get_data(2)
# Inserting co-ordinates into the class objects
for i in range(len(x)):
object = Coordinate(x[i], y[i])
coordinate.append(object)
```
# Simulated Annealing
```
# Driver functions and inputs
# Scatter plot x, y coordinates
plt.figure(figsize=(8, 8))
plt.scatter(x, y)
plt.title("Scatter Plot %s cities" % (dim))
plt.xlabel("x coordinates")
plt.ylabel("y coordinates")
plt.show()
# Plotting coordinates
fig = plt.figure(figsize=(30, 12))
axes1 = fig.add_subplot(121)
axes2 = fig.add_subplot(122)
axes1.title.set_text("Random Path")
axes2.title.set_text("Path for TSP with SA")
for f, s in zip(coordinate[:-1], coordinate[1:]):
axes1.plot([f.x, s.x], [f.y, s.y], "b")
axes1.plot(
[coordinate[0].x, coordinate[-1].x], [coordinate[0].y, coordinate[-1].y], "b"
)
axes1.set_xlabel("x")
axes1.set_ylabel("y")
for coor in coordinate:
axes1.plot(coor.x, coor.y, "ro")
# Simulated Annealing
costs = []
cost0 = Coordinate.cal_tot_dist(coordinate)
initial_cost = cost0
# Temperature
T = 30
factor = 0.995
# Increase the no. of iterations for VLSI datasets to 5000
iterations = 2000
for i in range(iterations):
costs.append(cost0)
sys.stdout.write("\r")
sys.stdout.write("Percentage completed: %d%%" % ((i * 100)/ iterations))
sys.stdout.flush()
# Iterate to find a good solution for each value of T
for j in range(500):
c1, c2 = np.random.randint(0, len(coordinate), size=2)
# Exchange coordinates
temp = coordinate[c1]
coordinate[c1] = coordinate[c2]
coordinate[c2] = temp
# Calculate the new cost
cost1 = Coordinate.cal_tot_dist(coordinate)
# Check if new cost is smaller than the previous cost
if cost1 < cost0:
cost0 = cost1
# If the new cost is greater than the previous cost
else:
# Select a random value
x = np.random.uniform()
# accept with probability p
if x < (1/(1 + np.exp((cost0 - cost1)*(-1) / T))):
cost0 = cost1
# Otherwise revert the swap
else:
temp = coordinate[c1]
coordinate[c1] = coordinate[c2]
coordinate[c2] = temp
# Reduce the temperature
T = T * factor
print("\nInitial cost: ", initial_cost)
print("Final cost after simulated annealing: ", cost0)
# Plotting the results after running simulated annealing
for f, s in zip(coordinate[:-1], coordinate[1:]):
axes2.plot([f.x, s.x], [f.y, s.y], "b")
axes2.plot(
[coordinate[0].x, coordinate[-1].x], [coordinate[0].y, coordinate[-1].y], "b"
)
for coor in coordinate:
axes2.plot(coor.x, coor.y, "ro")
axes2.set_xlabel("x")
axes2.set_ylabel("y")
plt.show()
print("\n\n")
# Plot the fitness curve of the cost vs iterations
plt.plot(np.arange(iterations), costs)
plt.axhline(y=initial_cost, color="r", linestyle="--")
plt.title("Fitness Curve")
plt.xlabel("iterations")
plt.ylabel("cost")
plt.show()
```
# Comparison costs
```
# Comparing initial cost with final cost (optimised route) for
# - 25 random numbers
# - 25 Rajasthan places
# - 131 cities
# - 237 cities
# - 343 cities
# - 379 cities
# - 380 cities
nodes = [25, 25, 131, 237, 343, 379, 380]
cost_initial = [13.956, 76.127, 1383.916, 2949.579, 3117.179, 2898.213, 10013.540]
cost_final = [4.103, 40.486, 693.312, 1673.994, 2091.522, 2148.960, 3378.900]
plt.plot(nodes, cost_initial, nodes, cost_final, marker = "o")
plt.legend(["intial cost", "final cost"])
plt.title("Cost Compare")
plt.xlabel("Nodes/Cities")
plt.ylabel("Cost")
plt.show()
# VLSI datasets
# Comparing initial cost with calculated final cost (optimised route) and optimal cost
# - 131 cities
# - 237 cities
# - 343 cities
# - 379 cities
# - 380 cities
nodes = [131, 237, 343, 379, 380]
cost_initial = [1383.916, 2949.579, 3117.179, 2898.213, 10013.540]
cost_final = [693.312, 1673.994, 2091.522, 2148.960, 3378.900]
# Data from VLSI
cost_optimal = [564, 1019, 1368, 1332, 1621]
plt.plot(nodes, cost_initial, nodes, cost_final, nodes, cost_optimal, marker = "o")
plt.legend(["intial cost", "final cost", "optimal cost"])
plt.title("Cost Compare")
plt.xlabel("Nodes/Cities")
plt.ylabel("Cost")
plt.show()
```
# Talks markdown generator for academicpages
Takes a TSV of talks with metadata and converts them for use with [academicpages.github.io](academicpages.github.io). This is an interactive Jupyter notebook ([see more info here](http://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/what_is_jupyter.html)). The core python code is also in `talks.py`. Run either from the `markdown_generator` folder after replacing `talks.tsv` with one containing your data.
TODO: Make this work with BibTex and other databases, rather than Stuart's non-standard TSV format and citation style.
```
import pandas as pd
import os
```
## Data format
The TSV needs to have the following columns: title, type, url_slug, venue, date, location, talk_url, description, with a header at the top. Many of these fields can be blank, but the columns must be in the TSV.
- Fields that cannot be blank: `title`, `url_slug`, `date`. All else can be blank. `type` defaults to "Talk"
- `date` must be formatted as YYYY-MM-DD.
- `url_slug` will be the descriptive part of the .md file and the permalink URL for the page about the paper.
- The .md file will be `YYYY-MM-DD-[url_slug].md` and the permalink will be `https://[yourdomain]/talks/YYYY-MM-DD-[url_slug]`
- The combination of `url_slug` and `date` must be unique, as it will be the basis for your filenames
This is how the raw file looks (it doesn't look pretty, use a spreadsheet or other program to edit and create).
```
!cat talks.csv
```
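To make the naming convention concrete, here is a small illustrative sketch; the row values are hypothetical, but they follow the same pattern the generator below produces:
```
# Hypothetical row: date "2013-03-01" and url_slug "tutorial-1"
date, url_slug = "2013-03-01", "tutorial-1"
md_filename = date + "-" + url_slug + ".md"     # -> 2013-03-01-tutorial-1.md
permalink = "/talks/" + date + "-" + url_slug   # -> /talks/2013-03-01-tutorial-1
print(md_filename, permalink)
```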
## Import TSV
Pandas makes this easy with the read_csv function. The template was written for a TSV, where the separator is a tab (`\t`); the data here is stored as a comma-separated `talks.csv`, so the cell below passes `sep=","` instead.
I found it important to put this data in a tab-separated values format, because there are a lot of commas in this kind of data and comma-separated values can get messed up. However, you can modify the import statement, as pandas also has read_excel(), read_json(), and others.
```
talks = pd.read_csv("talks.csv", sep=",", header=0)
talks
```
## Escape special characters
YAML is very picky about how it takes a valid string, so we are replacing single and double quotes (and ampersands) with their HTML encoded equivilents. This makes them look not so readable in raw format, but they are parsed and rendered nicely.
```
html_escape_table = {
"&": "&",
'"': """,
"'": "'"
}
def html_escape(text):
if type(text) is str:
return "".join(html_escape_table.get(c,c) for c in text)
else:
return "False"
```
## Creating the markdown files
This is where the heavy lifting is done. This loops through all the rows in the dataframe, then starts to concatenate a big string (```md```) that contains the markdown for each type. It does the YAML metadata first, then does the description for the individual page.
```
loc_dict = {}
for row, item in talks.iterrows():
md_filename = str(item.date) + "-" + item.url_slug + ".md"
html_filename = str(item.date) + "-" + item.url_slug
year = item.date[:4]
md = "---\ntitle: \"" + item.title + '"\n'
md += "collection: talks" + "\n"
if len(str(item.type)) > 3:
md += 'type: "' + item.type + '"\n'
else:
md += 'type: "Talk"\n'
md += "permalink: /talks/" + html_filename + "\n"
if len(str(item.venue)) > 3:
md += 'venue: "' + item.venue + '"\n'
# date is required, so always write it
md += "date: " + str(item.date) + "\n"
if len(str(item.location)) > 3:
md += 'location: "' + str(item.location) + '"\n'
md += "---\n"
if len(str(item.talk_url)) > 3:
md += "\n[See talk here](" + item.talk_url + ")\n"
if len(str(item.description)) > 3:
md += "\n" + html_escape(item.description) + "\n"
md_filename = os.path.basename(md_filename)
#print(md)
with open("../_talks/" + md_filename, 'w') as f:
f.write(md)
```
These files are in the talks directory, one directory below where we're working from.
```
!ls ../_talks
!cat ../_talks/2013-03-01-tutorial-1.md
```
# Lab: Titanic Survival Exploration with Decision Trees
## Getting Started
In this lab, you will see how decision trees work by implementing a decision tree in sklearn.
We'll start by loading the dataset and displaying some of its rows.
```
# Import libraries necessary for this project
import numpy as np
import pandas as pd
from IPython.display import display # Allows the use of display() for DataFrames
# Pretty display for notebooks
%matplotlib inline
# Set a random seed
import random
random.seed(42)
# Load the dataset
in_file = 'titanic_data.csv'
full_data = pd.read_csv(in_file)
# Print the first few entries of the RMS Titanic data
display(full_data.head())
```
Recall that these are the various features present for each passenger on the ship:
- **Survived**: Outcome of survival (0 = No; 1 = Yes)
- **Pclass**: Socio-economic class (1 = Upper class; 2 = Middle class; 3 = Lower class)
- **Name**: Name of passenger
- **Sex**: Sex of the passenger
- **Age**: Age of the passenger (Some entries contain `NaN`)
- **SibSp**: Number of siblings and spouses of the passenger aboard
- **Parch**: Number of parents and children of the passenger aboard
- **Ticket**: Ticket number of the passenger
- **Fare**: Fare paid by the passenger
- **Cabin**: Cabin number of the passenger (Some entries contain `NaN`)
- **Embarked**: Port of embarkation of the passenger (C = Cherbourg; Q = Queenstown; S = Southampton)
Since we're interested in the outcome of survival for each passenger or crew member, we can remove the **Survived** feature from this dataset and store it as its own separate variable `outcomes`. We will use these outcomes as our prediction targets.
Run the code cell below to remove **Survived** as a feature of the dataset and store it in `outcomes`.
```
# Store the 'Survived' feature in a new variable and remove it from the dataset
outcomes = full_data['Survived']
features_raw = full_data.drop('Survived', axis = 1)
# Show the new dataset with 'Survived' removed
display(features_raw.head())
```
The very same sample of the RMS Titanic data now shows the **Survived** feature removed from the DataFrame. Note that `features_raw` (the passenger data) and `outcomes` (the outcomes of survival) are now *paired*. That means for any passenger `features_raw.loc[i]`, they have the survival outcome `outcomes[i]`.
## Preprocessing the data
Now, let's do some data preprocessing. First, we'll remove the names of the passengers, and then one-hot encode the features.
**Question:** Why would it be a terrible idea to one-hot encode the data without removing the names?
**Answer:** If we one-hot encode the names column, then there would be one column for each name, and the model would learn the names of the survivors and make predictions based on that. This would lead to some serious overfitting!
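As a quick sanity check (not part of the original lab), you can compare how many columns one-hot encoding produces with and without the `Name` column; `Name` alone contributes roughly one column per unique passenger name:
```
# Compare the number of one-hot encoded columns with and without 'Name'
print(pd.get_dummies(features_raw).shape)
print(pd.get_dummies(features_raw.drop(['Name'], axis=1)).shape)
```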
```
# Removing the names
features_no_name = features_raw.drop(['Name'], axis=1)
# One-hot encoding
features = pd.get_dummies(features_no_name)
```
And now we'll fill in any blanks with zeroes.
```
features = features.fillna(0.0)
display(features.head())
```
## (TODO) Training the model
Now we're ready to train a model in sklearn. First, let's split the data into training and testing sets. Then we'll train the model on the training set.
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(features, outcomes, test_size=0.2, random_state=42)
# Import the classifier from sklearn
from sklearn.tree import DecisionTreeClassifier
# TODO: Define the classifier, and fit it to the data
model = DecisionTreeClassifier()
model.fit(X_train, y_train)
```
## Testing the model
Now, let's see how our model does, let's calculate the accuracy over both the training and the testing set.
```
# Making predictions
y_train_pred = model.predict(X_train)
y_test_pred = model.predict(X_test)
# Calculate the accuracy
from sklearn.metrics import accuracy_score
train_accuracy = accuracy_score(y_train, y_train_pred)
test_accuracy = accuracy_score(y_test, y_test_pred)
print('The training accuracy is', train_accuracy)
print('The test accuracy is', test_accuracy)
```
# Exercise: Improving the model
Ok, high training accuracy and a lower testing accuracy. We may be overfitting a bit.
So now it's your turn to shine! Train a new model, and try to specify some parameters in order to improve the testing accuracy, such as:
- `max_depth`
- `min_samples_leaf`
- `min_samples_split`
You can use your intuition, trial and error, or even better, feel free to use Grid Search!
**Challenge:** Try to get to 85% accuracy on the testing set. If you'd like a hint, take a look at the solutions notebook in this same folder.
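If you want to try the Grid Search route mentioned above, a minimal sketch could look like the following; the parameter grid values are just an illustrative starting point, not the official solution:
```
from sklearn.model_selection import GridSearchCV
# Search over a small grid of tree hyperparameters with 5-fold cross-validation
param_grid = {
    'max_depth': [4, 6, 8, 10],
    'min_samples_leaf': [2, 4, 6, 8],
    'min_samples_split': [2, 5, 10, 15]
}
grid = GridSearchCV(DecisionTreeClassifier(random_state=42), param_grid, cv=5)
grid.fit(X_train, y_train)
print(grid.best_params_)
print('Test accuracy with the best estimator:', grid.score(X_test, y_test))
```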
```
# Training the model
model = DecisionTreeClassifier(max_depth=6, min_samples_leaf=6, min_samples_split=10)
model.fit(X_train, y_train)
# Making predictions
y_train_pred = model.predict(X_train)
y_test_pred = model.predict(X_test)
# Calculating accuracies
train_accuracy = accuracy_score(y_train, y_train_pred)
test_accuracy = accuracy_score(y_test, y_test_pred)
print('The training accuracy is', train_accuracy)
print('The test accuracy is', test_accuracy)
```
## Tackling Classification problems
```
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
%matplotlib inline
sns.set()
X, y = make_classification(n_samples=100, n_features=2, n_informative=2,
n_redundant=0, n_clusters_per_class=1,
class_sep=2.0, random_state=101)
colors = list(map(lambda x: 'green' if x == 0 else 'black', y))
plt.scatter(X[:, 0], X[:, 1], marker='o', c=colors, linewidths=0, edgecolors=None)
plt.xlabel('feature 1')
plt.ylabel('feature 2')
```
## Assessing classifier's performance
```
# dummy variables
y_orig = [0,0,0,0,0,0,1,1,1,1]
y_pred = [0,0,0,0,1,1,1,1,1,0]
from sklearn.metrics import confusion_matrix, classification_report
rep = confusion_matrix(y_orig, y_pred)
print(rep)
print(classification_report(y_orig, y_pred))
# visual representation
plt.matshow(rep)
plt.title('confusion matrix')
plt.show()
# two ways to show the accuracy score
# overall accuracy
from sklearn.metrics import accuracy_score
print(accuracy_score(y_orig, y_pred))
# accuracy of one specific label
from sklearn.metrics import precision_score
print(precision_score(y_orig, y_pred))
# recall
from sklearn.metrics import recall_score
print(recall_score(y_orig, y_pred))
# f1-score
from sklearn.metrics import f1_score
print(f1_score(y_orig, y_pred))
```
Classification report shows 4 different things:
- precision: out of the samples predicted as a given label, the fraction that truly have that label
- recall: out of the samples that truly have a given label, the fraction that were correctly predicted (also called sensitivity)
- f1-score: the harmonic mean of precision and recall, a single measure of the test's accuracy
- support: how many samples of each label are in the test set
For the dummy example above, the classifier recovers 3 of the 4 true ones, so recall for class 1 is 3/4 = 0.75; it predicts five ones of which 3 are correct, so precision for class 1 is 3/5 = 0.6.
F1-score's formula: $2*\frac{precision*recall}{precision+recall}$
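To connect these definitions to the sklearn numbers above, here is a small manual check for the dummy example (not part of the original notebook):
```
# Count true positives, false positives and false negatives for class 1
tp = sum(1 for t, p in zip(y_orig, y_pred) if t == 1 and p == 1)  # 3
fp = sum(1 for t, p in zip(y_orig, y_pred) if t == 0 and p == 1)  # 2
fn = sum(1 for t, p in zip(y_orig, y_pred) if t == 1 and p == 0)  # 1
precision_1 = tp / (tp + fp)                                  # 0.6
recall_1 = tp / (tp + fn)                                     # 0.75
f1_1 = 2 * precision_1 * recall_1 / (precision_1 + recall_1)  # ~0.667
print(precision_1, recall_1, f1_1)
```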
## Probability-based approach - the Foundation of Logistic Regression
$$P(y=1\mid x)=\sigma(W^T x),$$ where $\sigma(t)=\frac{1}{1+e^{-t}}$ (also known as the sigmoid or inverse-logit function)
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y.astype('float'), test_size=0.33, random_state=101)
print(y_test.dtype)
y_test
from sklearn.linear_model import LinearRegression
lr = LinearRegression().fit(X_train, y_train)
y_pred = np.clip(lr.predict(X_test), 0, 1)
# print(y_pred)
print(y_test)
print(list(map(lambda x: 1 if x > 0.5 else 0, y_pred)))
# sigmoid function
def sigmoid(x):
return 1 / (1 + np.exp(-x))
X_val = np.linspace(-10, 10, 1000)
lines = np.arange(-10, 11)
plt.title('Sigmoid Function')
plt.plot(X_val, sigmoid(X_val), color='blue', linewidth=3)
plt.axhline(y=0.5, color='r', linestyle='--')
plt.axvline(x=0.0, color='r', linestyle='--')
plt.xlabel('x values')
plt.ylabel('sigma(x)')
plt.show()
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression().fit(X_train, y_train.astype(int))
y_clf = clf.predict(X_test)
print(classification_report(y_test.astype(int), y_clf))
# visualize the results
h = 0.02
# plot the decision boundary. For that, we'll assign a color to each point in the mesh [x_min, x_max]x[y_min, y_max]
x_min, x_max = X[:, 0].min() - .5, X[:, 1].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
# put the results into a color plot
Z = Z.reshape(xx.shape)
plt.pcolormesh(xx, yy, Z, cmap=plt.cm.pink)
# plot the training points
plt.scatter(X[:, 0], X[:, 1], c=y, edgecolors='k', linewidths=0, cmap=plt.cm.Paired)
plt.xticks(())
plt.yticks(())
plt.show()
# now let's see the bare probabilities & weight vector. To compute probabilities, we need to use predict_proba method of
# the classifier. It returns two values: P(0) and P(1)
Z = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]
Z = Z.reshape(xx.shape)
plt.pcolormesh(xx, yy, Z, cmap=plt.cm.autumn)
ax = plt.axes()
ax.arrow(0, 0, clf.coef_[0][0], clf.coef_[0][1], head_width=0.5, head_length=0.5, fc='k', ec='k')
plt.scatter(0, 0, marker='o', c='k')
```
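As a quick check (assumed, not in the original notebook), the probability formula above can be reproduced directly from the fitted coefficients and compared with `predict_proba`:
```
# Reproduce P(y=1|x) = sigma(w^T x + b) with the sigmoid defined earlier
manual_p = sigmoid(X_test @ clf.coef_[0] + clf.intercept_[0])
print(np.allclose(manual_p, clf.predict_proba(X_test)[:, 1]))  # expect True
```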
## Pros & Cons of Logistic Regression
### Pros
- super fast
- has an extension - multiclass classification
### Cons
- prone to underfitting (boundary has to be a line or a hyperplane)
- can't capture non-linear decision boundaries on its own, i.e. without engineered features (see the sketch below)
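A minimal sketch of the non-linearity limitation (an assumed example, not from the original notebook): on a linearly inseparable dataset such as two concentric circles, plain logistic regression scores close to chance.
```
from sklearn.datasets import make_circles
# Two concentric circles: no straight line can separate the classes
X_c, y_c = make_circles(n_samples=300, noise=0.1, factor=0.3, random_state=101)
Xc_tr, Xc_te, yc_tr, yc_te = train_test_split(X_c, y_c, test_size=0.33, random_state=101)
clf_lin = LogisticRegression().fit(Xc_tr, yc_tr)
print('Accuracy on concentric circles:', clf_lin.score(Xc_te, yc_te))  # close to 0.5
```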
```
%matplotlib inline
import numpy as np
import pandas as pd
import math
from scipy import stats
import pickle
from causality.analysis.dataframe import CausalDataFrame
from sklearn.linear_model import LinearRegression
import matplotlib.pyplot as plt
import plotly
import plotly.graph_objs as go
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
init_notebook_mode(connected=True)
```
Open the data from past notebooks and restrict each dataset to the years the data structures have in common (fiscal years after 1997 for the endowment, calendar years after 1999 for the rest).
```
with open('VariableData/money_data.pickle', 'rb') as f:
income_data, housing_data, rent_data = pickle.load(f)
with open('VariableData/demographic_data.pickle', 'rb') as f:
demographic_data = pickle.load(f)
with open('VariableData/endowment.pickle', 'rb') as f:
endowment = pickle.load(f)
with open('VariableData/expander.pickle', 'rb') as f:
expander = pickle.load(f)
endowment = endowment[endowment['FY'] > 1997].reset_index()
endowment.drop('index', axis=1, inplace=True)
demographic_data = demographic_data[demographic_data['year'] > 1999].reset_index()
demographic_data.drop('index', axis=1, inplace=True)
income_data = income_data[income_data['year'] > 1999].reset_index()
income_data.drop('index', axis=1, inplace=True)
housing_data = housing_data[housing_data['year'] > 1999].reset_index()
housing_data.drop('index', axis=1, inplace=True)
rent_data = rent_data[rent_data['year'] > 1999].reset_index()
rent_data.drop('index', axis=1, inplace=True)
```
Define a function to graph (and perform linear regression on) a given set of data.
```
def grapher(x, y, city, title, ytitle, xtitle, filename):
slope, intercept, r_value, p_value, std_err = stats.linregress(x, y)
fit = slope * x + intercept
trace0 = go.Scatter(
x = x,
y = y,
mode = 'markers',
name=city,
marker=go.Marker(color='#D2232A')
)
fit0 = go.Scatter(
x = x,
y = fit,
mode='lines',
marker=go.Marker(color='#AC1D23'),
name='Linear Fit'
)
data = [trace0, fit0]
layout = go.Layout(
title = title,
font = dict(family='Gotham', size=12),
yaxis=dict(
title=ytitle
),
xaxis=dict(
title=xtitle)
)
fig = go.Figure(data=data, layout=layout)
return iplot(fig, filename=filename)
```
Investigate the connection between the endowment's value and the Black population in Cambridge, controlling for rent and housing prices.
```
x = pd.to_numeric(endowment['Value ($B)']).as_matrix()
y = pd.to_numeric(demographic_data['c_black']).as_matrix()
z1 = pd.to_numeric(rent_data['cambridge']).as_matrix()
z2 = pd.to_numeric(housing_data['cambridge']).as_matrix()
X = CausalDataFrame({'x': x, 'y': y, 'z1': z1, 'z2': z2})
plt.rcParams['font.size'] = 12
endow_black = grapher(x, y, "Cambridge", "The Correlation Between Endowment and Black Population", "Black Population of Cambridge", "Endowment ($B)", "endow_black")
causal_endow_black = X.zplot(x='x', y='y', z=['z1', 'z2'], z_types={'z1': 'c', 'z2': 'c'}, kind='line', title='The Controlled Correlation Between Endowment (Billions of Dollars)\n and Black Population', color="#D2232A")
causal_endow_black.set(xlabel="Endowment", ylabel="Black Population of Cambridge")
fig = causal_endow_black.get_figure()
ax = plt.gca()
ax.set_frame_on(False)
ax.get_yaxis().set_visible(False)
ax.legend_.remove()
fig.savefig('images/black_endow.svg', format='svg', dpi=1200, bbox_inches='tight')
```
Investigate the connection between the endowment's value and the housing prices in Cambridge, controlling for growth of the population.
```
x = pd.to_numeric(endowment['Value ($B)']).as_matrix()
y = pd.to_numeric(housing_data['cambridge']).as_matrix()
z1 = pd.to_numeric(demographic_data['c_white']).as_matrix()
z2 = pd.to_numeric(demographic_data['c_poc']).as_matrix()
X = CausalDataFrame({'x': x, 'y': y, 'z1': z1, 'z2': z2})
endow_housing = grapher(x, y, "Cambridge", "The Correlation Between Endowment and Housing Prices", "Housing Prices in Cambridge", "Endowment ($B)", "endow_housing")
causal_endow_housing = X.zplot(x='x', y='y', z=['z1', 'z2'], z_types={'z1': 'c', 'z2': 'c'}, kind='line', title='The Controlled Correlation Between Endowment (Billions of Dollars) \n and Housing Prices', color="#D2232A")
causal_endow_housing.set(xlabel="Endowment", ylabel="Median Housing Prices in Cambridge ($)")
fig = causal_endow_housing.get_figure()
ax = plt.gca()
ax.set_frame_on(False)
ax.get_yaxis().set_visible(False)
ax.legend_.remove()
fig.savefig('images/housing_endow.svg', format='svg', dpi=1200, bbox_inches='tight')
```
Investigate the connection between the endowment's value and the rent prices in Cambridge, controlling for growth of the population.
```
x = pd.to_numeric(endowment['Value ($B)']).as_matrix()
y = pd.to_numeric(rent_data['cambridge']).as_matrix()
z1 = pd.to_numeric(demographic_data['c_white']).as_matrix()
z2 = pd.to_numeric(demographic_data['c_poc']).as_matrix()
X = CausalDataFrame({'x': x, 'y': y, 'z1': z1, 'z2': z2})
endow_rent = grapher(x, y, "Cambridge", "The Correlation Between Endowment and Rent", "Rent in Cambridge", "Endowment ($B)", "endow_rent")
causal_endow_rent = X.zplot(x='x', y='y', z=['z1', 'z2'], z_types={'z1': 'c', 'z2': 'c'}, kind='line', title='The Controlled Correlation Between Endowment and Rent')
causal_endow_rent.set(xlabel="Endowment ($)", ylabel="Rent in Cambridge ($)")
fig = causal_endow_rent.get_figure()
ax = plt.gca()
ax.set_frame_on(False)
ax.get_yaxis().set_visible(False)
ax.legend_.remove()
fig.savefig('images/rent_endow.svg', format='svg', dpi=1200, bbox_inches='tight')
```
Investigate the connection between the amount Harvard pays the city of Cambridge per year (PILOT) and the rent prices in Cambridge, controlling for growth of the population.
```
x = pd.to_numeric(expander['Payments to City']).as_matrix()
y = pd.to_numeric(rent_data['cambridge']).as_matrix()
# Remove the last two elements of the other arrays – PILOT data is not sufficient otherwise.
y = y[:-2].copy()
z1 = pd.to_numeric(demographic_data['c_white']).as_matrix()
z1 = z1[:-2].copy()
z2 = pd.to_numeric(demographic_data['c_poc']).as_matrix()
z2 = z2[:-2].copy()
X = CausalDataFrame({'x': x, 'y': y, 'z1': z1, 'z2': z2})
pilot_rent = grapher(x, y, "Cambridge", "The Correlation Between Harvard's PILOT and Rent", "Rent in Cambridge", "PILOT ($)", "pilot_rent")
causal_endow_rent = X.zplot(x='x', y='y', z=['z1', 'z2'], z_types={'z1': 'c', 'z2': 'c'}, kind='line')
```
# Laboratory 02
## Requirements
For the second part of the exercises you will need the `wikipedia` package. On Windows machines, use the following command in the Anaconda Prompt (`Start --> Anaconda --> Anaconda Prompt`):
conda install -c conda-forge wikipedia
This command should work with other Anaconda environments (OSX, Linux).
If you are using virtualenv directly instead of Anaconda, the following command installs it in your virtualenv:
pip install wikipedia
or
sudo pip install wikipedia
installs it system-wide.
You are encouraged to reuse functions that you defined in earlier exercises.
## 1.1 Define a function that takes a sequence as its input and returns whether the sequence is symmetric. A sequence is symmetric if it is equal to its reverse.
```
def is_symmetric(l):
# TODO
assert(is_symmetric([1]) == True)
assert(is_symmetric([]) == True)
assert(is_symmetric([1, 2, 3, 1]) == False)
assert(is_symmetric([1, "foo", "bar", "foo", 1]) == True)
assert(is_symmetric("abcba") == True)
```
## 1.2 Define a function that takes a sequence and an integer $k$ as its input and returns the $k$ largest elements. Do not use the built-in `max` function. Do not change the original sequence. If $k$ is not specified, return one element in a list.
```
def k_largest(l, k=1):
pass
l = [-1, 0, 3, 2]
assert(k_largest(l) == [3])
assert(k_largest(l, 2) == [2, 3] or k_largest(l, 2) == [3, 2])
```
## \*1.3 Add an optional `key` argument that works analogously to the built-in `sorted`'s key argument.
## 1.4 Define a function that takes a matrix as an input represented as a list of lists (you can assume that the input is a valid matrix). Return its transpose without changing the original matrix.
```
def transpose(M):
# TODO
m1 = [[1, 2, 3], [4, 5, 6]]
m2 = [[1, 4], [2, 5], [3, 6]]
assert(transpose(m1) == m2)
assert(transpose(transpose(m1)) == m1)
```
## 2.1 Define a function that takes a string as its input and return a dictionary with the character frequencies.
```
def char_freq(s):
# TODO
assert(char_freq("aba") == {"a": 2, "b": 1})
```
## 2.2 Add an optional `skip_symbols` to the `char_freq` function. `skip_symbols` is the set of symbols that should be excluded from the frequency dictionary. If this argument is not specified, the function should include every symbol.
```
def char_freq_with_skip(s, skip_symbols=None):
# TODO
assert(char_freq_with_skip("ab.abc?", skip_symbols=".?") == {"a": 2, "b": 2, "c": 1})
```
## 2.3 Define a function that computes word frequencies in a text.
```
def word_freq(s):
# TODO
s = "the green tea and the black tea"
assert(word_freq(s) == {"the": 2, "tea": 2, "green": 1, "black": 1, "and": 1})
```
## 2.4 Define a function that counts the uppercase letters in a string.
```
def count_upper_case(s):
# TODO
assert(count_upper_case("A") == 1)
assert(count_upper_case("abA bcCa") == 2)
```
## 2.5 Define a function that takes two strings and decides whether they are anagrams. A string is an anagram of another string if its letters can be rearranged so that it equals the other string.
For example:
```
abc -- bac
aabb -- abab
```
Counter examples:
```
abc -- aabc
abab -- aaab
```
```
def anagram(s1, s2):
# TODO
assert(anagram("abc", "bac") == True)
assert(anagram("aabb", "abab") == True)
assert(anagram("abab", "aaab") == False)
```
## 2.6 Define a sentence splitter function that takes a string and splits it into a list of sentences. A `.` ends a sentence only if it is followed by whitespace (`str.isspace`) or by the end of the string. See the examples below.
```
def sentence_splitter(s):
# TODO
assert(sentence_splitter("A.b. acd.") == ['A.b', 'acd'])
assert(sentence_splitter("A. b. acd.") == ['A', 'b', 'acd'])
```
## Wikipedia module
The following exercises use the `wikipedia` package. The basic usage is illustrated below.
The documentation is available [here](https://pypi.python.org/pypi/wikipedia/).
Searching for pages:
```
import wikipedia
results = wikipedia.search("Budapest")
results
```
Downloading an article:
```
article = wikipedia.page("Budapest")
article.summary[:100]
```
The content attribute contains the full text:
```
type(article.content), len(article.content)
```
By default the module queries the English Wikipedia. The language can be changed in the following way:
```
wikipedia.set_lang("fr")
wikipedia.search("Budapest")
fr_article = wikipedia.page("Budapest")
fr_article.summary[:100]
```
## 3.0 Change the language back to English and test the package with a few other pages.
## 3.1 Download 4-5 arbitrary pages from the English Wikipedia (they should exceed 100000 characters combined) and compute the word frequencies using your previously defined function(s). Print the most common 20 words in the following format (the example is not the correct answer):
```
unintelligent <TAB> 123456
moribund <TAB> 123451
...
```
The words and their frequency are separated by TABS and no additional whitespace should be added.
## 3.2 Repeat the same exercise for your native language if it denotes word boundaries with spaces. If it doesn't choose an arbitrary language other than English.
## 3.3 Define a function that takes a string and returns its bigram frequencies as a dictionary.
Character bigrams are pairs of adjacent characters. For example, the word `apple` contains the following bigrams: `ap, pp, pl, le`.
They are used for language modeling.
## 3.4 Using your previous English collection compute bigram frequencies.
What are the 10 most common and 10 least common bigrams?
## \*3.5 Define a function that takes two parameters: a string and an integer N and returns the N-gram frequencies of the string. For $N=2$ the function works the same as in the previous example.
Try the function for $N=1..5$. How many unique N-grams are in your collection?
## 3.6 Compute the same statistics for your native language.
# Project2 - Host program
```
import numpy as np
import matplotlib.pyplot as plt
import time
from pynq import Overlay
import pynq.lib.dma
from pynq import Xlnk
from pynq import MMIO
o1 = Overlay('/home/xilinx/jupyter_notebooks/detector/detector.bit')
# Download your bitstream to FPGA
t_before_bitstream = time.time()
o1.download()
t_after_bitstream = time.time()
print(t_after_bitstream - t_before_bitstream, 'seconds to program bitstream')
# The handles below come from the original phase-detector template ('ol' is never
# defined in this notebook); the detector overlay is accessed through o1.detector.*
# further down, so they are left here commented out for reference only.
# dmaIR = ol.streamPh.dma_I_R       # First DMA
# dmaQT = ol.streamPh.dma_Q_T       # Second DMA
# ph_ip = ol.streamPh.phasedetector # Your IP
xlnk = Xlnk() # Contiguous Memory Allocator (CMA)
length = 1024
# Open input/output files
fI = open('input_i.dat','r')
fQ = open('input_q.dat','r')
fG = open('out_gold.dat', 'r')
# Allocate regular numpy arrays to store input and output
inp_I = np.empty([length,], dtype=np.float32)
inp_Q = np.empty([length,], dtype=np.float32)
golden_R = np.empty([length,], dtype=np.float32)
golden_T = np.empty([length,], dtype=np.float32)
# Store data into arrays
for i in range (0, length):
golden_R[i], golden_T[i] = [np.float32(x) for x in next(fG).split()]
inp_I[i] = np.float32(next(fI))
inp_Q[i] = np.float32(next(fQ))
plt.plot(golden_R)
plt.plot(golden_T)
print("Golden thetas at the R peaks are:\n {}\n {}\n {}\n {}\n {}\n {}\n {}\n {}\n {}\n {}\n {}\n {}\n {}\n {}\n {}\n {}\n".format(golden_T[31],golden_T[63],golden_T[95],golden_T[127],golden_T[159],golden_T[191],golden_T[223],golden_T[255],golden_T[287],golden_T[319],golden_T[351],golden_T[383],golden_T[415],golden_T[447],golden_T[479],golden_T[511]))
```
## Complete the following block
```
# Allocate CMA array for DMA
xlnk = Xlnk()
# Copy regular numpy arrays to CMA arrays
dma1 = o1.detector.axi_dma_0
dma2 = o1.detector.axi_dma_1
# Write length using MMIO (we got the address from Vivado)
detector_ip = MMIO(0x43c00000,0x10000)
size = 1024
in_buffer1 = xlnk.cma_array(shape=(size,), dtype=np.float32)
in_buffer2 = xlnk.cma_array(shape=(size,), dtype=np.float32)
out_bufferT = xlnk.cma_array(shape=(size,), dtype=np.float32)
out_bufferR = xlnk.cma_array(shape=(size,), dtype=np.float32)
fI = open('input_i.dat','r')
fQ = open('input_q.dat','r')
fG = open('out_gold.dat', 'r')
for i in range (0, size):
in_buffer1[i] = np.float32(next(fI))
in_buffer2[i] = np.float32(next(fQ))
detector_ip.write(0x10,size)
t_start = time.time()
# Begin data transfer from/to DMA
dma1.sendchannel.transfer(in_buffer1)
dma2.sendchannel.transfer(in_buffer2)
dma1.recvchannel.transfer(out_bufferT)
dma2.recvchannel.transfer(out_bufferR)
dma1.sendchannel.wait()
dma2.sendchannel.wait()
dma1.recvchannel.wait()
dma2.recvchannel.wait()
t_stop = time.time()
print(t_stop - t_start, 'seconds to execute on hardware')
plt.plot(out_bufferR)
plt.plot(out_bufferT)
print("Output thetas at the R peaks are:\n {}\n {}\n {}\n {}\n {}\n {}\n {}\n {}\n {}\n {}\n {}\n {}\n {}\n {}\n {}\n {}\n".format(out_bufferT[31],out_bufferT[63],out_bufferT[95],out_bufferT[127],out_bufferT[159],out_bufferT[191],out_bufferT[223],out_bufferT[255],out_bufferT[287],out_bufferT[319],out_bufferT[351],out_bufferT[383],out_bufferT[415],out_bufferT[447],out_bufferT[479],out_bufferT[511]))
# Free the CMA buffers only after the results have been read back
in_buffer1.close()
in_buffer2.close()
out_bufferR.close()
out_bufferT.close()
```
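As an optional sanity check (not part of the original host program), here is a minimal sketch that compares the hardware output against the golden data loaded earlier. It has to run before the CMA buffers are freed, so place it right after the DMA transfers complete:
```
import numpy as np
# maximum absolute error between the hardware results and the golden reference
print('max |R error| =', np.max(np.abs(np.array(out_bufferR) - golden_R)))
print('max |theta error| =', np.max(np.abs(np.array(out_bufferT) - golden_T)))
```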
<img src="https://github.com/Microsoft/sqlworkshops/blob/master/graphics/solutions-microsoft-logo-small.png?raw=true" alt="Microsoft">
<br>
# SQL Server 2019 Big Data Clusters
## Using Spark For Machine Learning
In this tutorial you will learn how to work with Spark Jobs in a SQL Server big data cluster.
Wide World Importers has refrigerated trucks to deliver temperature-sensitive products. These are high-profit, high-expense items. In the past, there have been failures in the cooling systems, and the primary culprit has been the deep-cycle batteries used in the system.
WWI began replacing the batteries every three months as a preventative measure, but this has a high cost. Recently, the taxes on recycling batteries have increased dramatically. The CEO has asked the Data Science team to investigate creating a Predictive Maintenance system that tells the maintenance staff more accurately how long a battery will last, rather than relying on a flat 3-month cycle.
The trucks have sensors that transmit data to a file location, and the trips are also logged. In this Jupyter Notebook, you'll create, train and store a Machine Learning model using scikit-learn, so that it can be deployed to multiple hosts.
```
import pickle
import pandas as pd
import numpy as np
import datetime as dt
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
```
First, download the sensor data from the location where the trucks transmit it, and load it into a pandas DataFrame.
```
df = pd.read_csv('https://cs7a9736a9346a1x44c6xb00.blob.core.windows.net/backups/training-formatted.csv', header=0)
df.dropna()
print(df.shape)
print(list(df.columns))
```
After examining the data, the Data Science team selects certain columns that they believe are highly predictive of the battery life.
```
# Select the features used for predicting battery life
x = df.iloc[:,1:74]
x = x.iloc[:,np.r_[2:7, 9:73]]
x = x.interpolate()
# Select the labels only (the measured battery life)
y = df.iloc[:,0].values.flatten()
print('Interpolation Complete')
# Examine the features selected
print(list(x.columns))
```
The lead Data Scientist believes that a standard regression algorithm would produce the best predictions.
```
# Train a regression model
from sklearn.ensemble import GradientBoostingRegressor
model = GradientBoostingRegressor()
model.fit(x,y)
# Try making a single prediction and observe the result
model.predict(x.iloc[0:1])
```
After the model is trained, test it on labeled data.
```
# read the test data into a pandas DataFrame
test_data = pd.read_csv('https://cs7a9736a9346a1x44c6xb00.blob.core.windows.net/backups/fleet-formatted.csv', header=0)
test_data.dropna()
# prepare the test data (dropping unused columns)
test_data = test_data.drop(columns=["Car_ID", "Battery_Age"])
test_data = test_data.iloc[:,np.r_[2:7, 9:73]]
test_data.rename(columns={'Twelve_hourly_temperature_forecast_for_next_31_days_reversed': 'Twelve_hourly_temperature_history_for_last_31_days_before_death_last_recording_first'}, inplace=True)
# make the battery life predictions for each of the vehicles in the test data
battery_life_predictions = model.predict(test_data)
# examine the prediction
battery_life_predictions
# prepare one data frame that includes predictions for each vehicle
scored_data = test_data
scored_data["Estimated_Battery_Life"] = battery_life_predictions
df_scored = spark.createDataFrame(scored_data)
# Optionally, write out the scored data:
# df_scored.coalesce(1).write.option("header", "true").csv("/pdm")
```
Once you are satisfied with the model, you can save it using the `pickle` library for deployment to other systems.
```
pickle_file = open('/tmp/pdm.pkl', 'wb')
pickle.dump(model, pickle_file)
import os
print(os.getcwd())
os.listdir('//tmp/')
```
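To double-check that the serialized file is usable, here is a minimal sketch (not part of the original notebook) that reloads the pickled model and repeats the single prediction made earlier:
```
# make sure the dump above has been flushed to disk before reloading
pickle_file.close()
with open('/tmp/pdm.pkl', 'rb') as f:
    reloaded_model = pickle.load(f)
print(reloaded_model.predict(x.iloc[0:1]))
```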
**(Optional)**
You could export this model and <a href="https://azure.microsoft.com/en-us/services/sql-database-edge/" target="_blank">run it at the edge or in SQL Server directly</a>. Here's an example of what that code could look like:
<pre>
DECLARE @query_string nvarchar(max) -- Query Truck Data
SET @query_string='
SELECT ['Trip_Length_Mean', 'Trip_Length_Sigma', 'Trips_Per_Day_Mean', 'Trips_Per_Day_Sigma', 'Battery_Rated_Cycles', 'Alternator_Efficiency', 'Car_Has_EcoStart', 'Twelve_hourly_temperature_history_for_last_31_days_before_death_last_recording_first', 'Sensor_Reading_1', 'Sensor_Reading_2', 'Sensor_Reading_3', 'Sensor_Reading_4', 'Sensor_Reading_5', 'Sensor_Reading_6', 'Sensor_Reading_7', 'Sensor_Reading_8', 'Sensor_Reading_9', 'Sensor_Reading_10', 'Sensor_Reading_11', 'Sensor_Reading_12', 'Sensor_Reading_13', 'Sensor_Reading_14', 'Sensor_Reading_15', 'Sensor_Reading_16', 'Sensor_Reading_17', 'Sensor_Reading_18', 'Sensor_Reading_19', 'Sensor_Reading_20', 'Sensor_Reading_21', 'Sensor_Reading_22', 'Sensor_Reading_23', 'Sensor_Reading_24', 'Sensor_Reading_25', 'Sensor_Reading_26', 'Sensor_Reading_27', 'Sensor_Reading_28', 'Sensor_Reading_29', 'Sensor_Reading_30', 'Sensor_Reading_31', 'Sensor_Reading_32', 'Sensor_Reading_33', 'Sensor_Reading_34', 'Sensor_Reading_35', 'Sensor_Reading_36', 'Sensor_Reading_37', 'Sensor_Reading_38', 'Sensor_Reading_39', 'Sensor_Reading_40', 'Sensor_Reading_41', 'Sensor_Reading_42', 'Sensor_Reading_43', 'Sensor_Reading_44', 'Sensor_Reading_45', 'Sensor_Reading_46', 'Sensor_Reading_47', 'Sensor_Reading_48', 'Sensor_Reading_49', 'Sensor_Reading_50', 'Sensor_Reading_51', 'Sensor_Reading_52', 'Sensor_Reading_53', 'Sensor_Reading_54', 'Sensor_Reading_55', 'Sensor_Reading_56', 'Sensor_Reading_57', 'Sensor_Reading_58', 'Sensor_Reading_59', 'Sensor_Reading_60', 'Sensor_Reading_61']
FROM Truck_Sensor_Readings'
EXEC [dbo].[PredictBattLife] 'pdm', @query_string;
</pre>
# Description
* Merging raw reads with PEAR
## Setting variables
```
## Where are your raw sequences found?
seqdir = '/home/backup_files/raw_reads/hempmicrobiome.Sam.Ali.SmartLab.2018/'
## What directory do you want to work in and keep all subsequent files in?
workdir = '/home/sam/notebooks/hemp_microbiome/data/ITS_OTUs/merge_demult/'
## What is the name of your forward and reverse reads?
readFile1 = 'read1.ITS.fq.gz'
readFile2 = 'read2.ITS.fq.gz'
## What name do you want included for all subsequent files?
name = 'hemp_ITS'
```
# Init
```
import screed
from glob import glob
import matplotlib.pyplot as plt
import numpy as np
from mpld3 import enable_notebook
import screed
import pandas as pd
import os
%matplotlib inline
%load_ext rpy2.ipython
%%R
library(ggplot2)
library(dplyr)
# Move into working directory and if it doesn't exist, make it
if not os.path.isdir(workdir):
os.makedirs(workdir)
%cd $workdir
```
## Uncompress the fastq files
```
output1 = os.path.join(workdir, "forward.fastq")
seqfile1 = os.path.join(seqdir, readFile1)
!cd $workdir; \
pigz -k -d -p 20 -c $seqfile1 > $output1
output2 = os.path.join(workdir, "reverse.fastq")
seqfile2 = os.path.join(seqdir, readFile2)
!cd $workdir; \
pigz -k -d -p 20 -c $seqfile2 > $output2
```
# Merging
```
!cd $workdir; \
pear -m 600 -j 20 \
-f forward.fastq \
-r reverse.fastq \
-o pear_merged_$name
```
# Making a screed db of merged reads
```
pear_merged_file = !echo "pear_merged_"$name".assembled.fastq"
pear_merged_file = pear_merged_file[0]
os.chdir(workdir)
screed.read_fastq_sequences(pear_merged_file)
pear_merged_file += '_screed'
fqdb = screed.ScreedDB(pear_merged_file)
pear_merged_file
lengths = []
for read in fqdb.itervalues():
lengths.append((len(read["sequence"])))
fig = plt.figure()
ax = fig.add_subplot(111)
h = ax.hist(np.array(lengths), bins=50)
xl = ax.set_xlabel("Sequence Length, nt")
yl = ax.set_ylabel("Count")
fig.set_size_inches((10,6))
print ('Number of reads: {}'.format(len(lengths)))
```
## Quality stats on merged reads
```
def qualStats(sourceDir, fileName):
outFile = fileName + '_qualStats'
!cd $sourceDir; \
fastx_quality_stats -i $fileName -o $outFile -Q 33
return outFile
qualStatsRes = qualStats(workdir, 'pear_merged_'+name+'.assembled.fastq')
%%R -i workdir -i qualStatsRes
setwd(workdir)
# reading in qual-stats files
tbl.r12 = read.delim(qualStatsRes, sep='\t')
rownames(tbl.r12) = 1:nrow(tbl.r12)
%%R -w 800 -h 300
# smooth curve on median qual values
ggplot(tbl.r12, aes(x=column, y=med, ymin=Q1, ymax=Q3)) +
geom_smooth(se=FALSE, method='auto') +
geom_linerange(alpha=0.3) +
labs(x='position', y='median quality score') +
theme_bw() +
theme(
text = element_text(size=16)
)
```
## Clean up
Remove the temporary files made during this process. These are really big files that you no longer need. If you are worried about having to redo something, you can skip this step and clean up manually at the very end. A sketch of this cleanup step is shown below.
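The original notebook does not show the cleanup commands, so here is a minimal sketch (assuming the standard PEAR output file names) that removes the uncompressed inputs and the unassembled/discarded reads while keeping the assembled file and its screed database:
```
# remove the large intermediate files; keep pear_merged_<name>.assembled.fastq and its screed db
tmp_files = ['forward.fastq',
             'reverse.fastq',
             'pear_merged_' + name + '.unassembled.forward.fastq',
             'pear_merged_' + name + '.unassembled.reverse.fastq',
             'pear_merged_' + name + '.discarded.fastq']
for f in tmp_files:
    path = os.path.join(workdir, f)
    if os.path.exists(path):
        os.remove(path)
```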
# Lyon Parking Analysis
The objective of this analysis is to use the data collected between April 2017 and August 2017 to classify the behavior of parking lots in the city of Lyon, France.
After finishing an Android application during the 2016-2017 academic year at my university, I was too curious not to use the data I had obtained for a project of my own. All data was collected from https://data.grandlyon.com/, an open-data platform of the Grand Lyon metropole which aims to encourage citizen participation in the development of the city and in the creation of new services. The methodology applied here is based on [Jake VanderPlas' analysis of Seattle Work Habits](http://jakevdp.github.io/blog/2015/07/23/learning-seattles-work-habits-from-bicycle-counts/).
There are two datasets concerning the parking lots:
- a dataset containing the names, coordinates and other relevant static information about 1000 parking lots in the city;
- a dataset containing the real-time number of available places for 42 parking lots (this number has been increasing).
The data was stored in a SQL database, and here we will use Machine Learning methods from the scikit-learn library to visualize and classify the parking lots.
Before going further, most of the non-essential information was removed from the .sql file. Some entries for the number of available places were filled with "No data available", due to closures or possibly to communication problems between the database and the parking lot.
## The data
We'll analyse the real-time data of 43 parking lots, our objects of study. The information we have is the number of available places roughly every 5 minutes. However, the total capacity of each parking lot varies considerably, and this greatly affects the number of arriving cars and the number of occupied places per hour.
It's more interesting to focus on the following question: at what moment of the day does each parking lot have its peak of arriving cars?
We expect that the answer to this question, on the other hand, does not depend on the capacity.
Let's create a class that we'll use to manage each parking lot.
```
import time
import json
import urllib.request
import sqlite3
from datetime import datetime
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
#connecting to the sql file
conn =sqlite3.connect('parking20171024-16h.sqlite')
#creating a cursor
cursor = conn.cursor()
#each parking will be associated to an Parking object.
class Parking:
def __init__(self,number):
self.__state = []
self.__time = []
self.__capacity = 0
self.__name = number
#selecting all rows from the sql table
cursor.execute("SELECT * from {}".format(number))
data = cursor.fetchall()
i = 0
for row in data:
dateString = row[1] #the index 0 of each row contains the moment of last update
if i == 0:
self.__capacity= int(row[3])
try:
if i>=0:
if len(self.__time)==0 or self.__time[-1] != datetime.strptime(dateString,'%Y-%m-%d %H:%M:%S'):
self.__time.append(datetime.strptime(dateString,'%Y-%m-%d %H:%M:%S'))
state = row[2].split(' ')[0] #because the data was in the format "xxx places diponibles"
if (state.isdigit()):
state = int(state)
else:
state = self.__state[-1]
self.__state.append(state)
except ValueError:
continue
i+=1
return
def getTimeTable(self):
return self.__time
def getStateTable(self):
return self.__state
def getCapacity(self):
return self.__capacity
def getPeakHours(self):
dateList = [] # the list of days that can be associated to a peak hour for a given Parking.
peakHours = [] # the moment of peak in hours
momentBefore = self.__time[3] #the first 2 rows were quite noisy
availabilityBefore = self.__state[3]
arrivingCars = 0
maxValue = 0 #maxValue of arriving cars for each day (changes everytime we find an hour with more arriving cars than before)
maxDelta = 0 #difference of disponible places for each row of the database.
maxDeltaMinute = 0 #minute of maximum delta
peakHour = None
for i in range(4,len(self.__time)):
moment = self.__time[i]
availability = self.__state[i]
if moment.day != momentBefore.day and moment<datetime(2017,9,29,0,0,0): #the data after 29 september was also quite noisy
if peakHour != None and peakHour>5:
dateList.append(datetime(momentBefore.year,momentBefore.month,momentBefore.day))
peakHours.append(peakHour)
maxValue = 0
maxDelta = 0
arrivingCars = 0
elif moment<datetime(2017,9,29,0,0,0):
if moment.hour == momentBefore.hour:
if availability<availabilityBefore:
delta = availabilityBefore - availability
arrivingCars += delta
if delta>maxDelta:
maxDelta = delta
maxDeltaMinute = momentBefore.minute
if moment.hour != momentBefore.hour:
#verify if momentBefore.hour corresponds to the peak hour
if arrivingCars>maxValue:
maxValue = arrivingCars
peakHour = momentBefore.hour + maxDeltaMinute/60
arrivingCars = 0
maxDelta = 0
availabilityBefore = availability
momentBefore = moment
return (dateList,peakHours)
#unfortunately, some hard-coding was necessary. The tables had their names as the numbers in full in french.
parking_names = ['un','soixantedixneuf','deux','trois','quatre','cinq', 'six','sept','huit','dix','onze','douze','treize','quatorze','seize','dixhuit','dixneuf','vingt','vingtun','vingtdeux','vingttrois','vingtcinq','vingtsept','vingthuit','trenteneuf','quarante','quarante_un','quarantetrois','quarantequatre','quarantecinq','quarantesept','quarantehuit','quaranteneuf','cinquante','cinquantedeux','cinquantetrois','cinquante_quatre','cinquante_cinq', 'centcinq','centquatre','centsept','quatrevingtsept']
parking_numbers = [1,79,2,3,4,5,6,7,8,10,11,12,13,14,16,18,19,20,21,22,23,25,27,28,39,40,41,43,44,45,47,48,49,50,52,53,54,55,105,104,107,87]
parkings = []
for name in parking_names:
parkings.append(Parking(name))
import pandas as pd
#Create a pandas table relating each state to one specific hour
(timeTable108,stateTable108) = Parking('centhuit').getPeakHours()
print(len(stateTable108))
df = pd.DataFrame({ 'Day':timeTable108,'108':stateTable108})
df.set_index('Day', inplace = True)
for i in range(len(parking_names)):
(timeTable,stateTable) = parkings[i].getPeakHours()
dfTemp = pd.DataFrame({'Day':timeTable,parking_numbers[i]:stateTable})
dfTemp.set_index('Day', inplace = True)
df = pd.concat([df, dfTemp], axis=1)
df.head()
df.shape
df = df.fillna(df.mean()) #placing all unavailable data with the parking mean value
df.shape
```
## Extracting information from the data
Our pandas table associates each index (a date) with 43 peak hours (one for each parking lot). It's possible to see this the other way around:
we can associate to each parking lot a peak hour for each day.
```
transposed = df.transpose()
#transposed = transposed.fillna(transposed.mean(1))
X = transposed.values
X.shape
```
We now have 43 vectors of 77 dimensions as our data.
We'll use a method called Principal Component Analysis (PCA) to project all these vectors into 2 dimensions so we can visualize and analyse them more easily. If you want to understand how PCA works, here's a good video: https://www.youtube.com/watch?v=_UVHneBUBW0
```
from sklearn.decomposition import PCA
Xpca = PCA(0.65).fit_transform(X)
Xpca.shape
colors = X.mean(1)
plt.scatter(Xpca[:, 0], Xpca[:, 1], c=colors,
cmap='cubehelix')
plt.colorbar(label='Peak Hour (Mean)');
plt.show()
```
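Note that `PCA(0.65)` keeps as many components as are needed to explain 65% of the variance, which here comes out to 2. A small sketch (not in the original notebook) to inspect how many components were kept and how much variance each explains:
```
pca = PCA(0.65).fit(X)
print(pca.n_components_)              # number of components kept
print(pca.explained_variance_ratio_)  # fraction of variance explained by each
```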
We can start to see that the parking lots with peak hours near 17h tend to cluster together, and so do the parking lots with a mean peak hour around 8h.
However, there's an outlier point at the top left of the image. We can use an outlier detection method to remove it from our data and then apply PCA again.
```
from sklearn import cluster, datasets, mixture,ensemble
import seaborn; seaborn.set()
isolationForest = ensemble.IsolationForest(n_estimators=(43-4), contamination = 0/43)
isolationForest.fit(Xpca)
outlierPrediction = isolationForest.predict(Xpca)
plt.scatter(Xpca[:, 0], Xpca[:, 1], c=outlierPrediction,cmap='cubehelix')
plt.show()
transposed['Cluster'] = outlierPrediction
transposedIF = transposed.loc[transposed['Cluster'] == 1]
transposedIF.head()
Xif = transposedIF.values
Xpcaif = PCA(0.8).fit_transform(Xif)
mean = Xif.mean(1)
plt.scatter(Xpcaif[:, 0], Xpcaif[:, 1], c=mean,
cmap='cubehelix')
plt.colorbar(label='Peak Hour (mean)');
plt.show()
```
As we have only 43 points to plot, the image is not as clean as it could be. However, it's possible to see points with early peak hours clustering on the left, points with late peak hours clustering on the right, and some others in between the two. We can try to separate the points into 3 different clusters. Let the machine speak for itself.
```
import warnings
from sklearn import cluster, datasets, mixture
import seaborn; seaborn.set()
nbClusters = 3
gmm = mixture.GaussianMixture(n_components=nbClusters, covariance_type='full')
with warnings.catch_warnings():
warnings.filterwarnings(
"ignore",
message="the number of connected components of the " +
"connectivity matrix is [0-9]{1,2}" +
" > 1. Completing it to avoid stopping the tree early.",
category=UserWarning)
warnings.filterwarnings(
"ignore",
message="Graph is not fully connected, spectral embedding" +
" may not work as expected.",
category=UserWarning)
gmm.fit(Xpcaif)
clusterPrediction = gmm.predict(Xpcaif)
plt.scatter(Xpcaif[:, 0], Xpcaif[:, 1], c=clusterPrediction,cmap='cubehelix')
plt.show()
```
## Visualizing the data
Is it reasonable to cluster the 42 parking lots into 3 groups?
The following figure plots the mean peak hour of each cluster for each day. Some parking lots are more frequented at night, some are more likely to have their peak hours near 8h, and some near 13h.
```
transposedIF['Cluster'] = clusterPrediction
#df = df.join(transposed['Cluster'], on=df.index, lsuffix ='_left', rsuffix = '_right')
#df.drop(df.transpose().columns[0])
transposedIF = transposedIF.drop(transposedIF.columns[0], axis = 1)
df.iloc[15:,:]
import matplotlib.dates as mdates
x = transposedIF.transpose().index.values.tolist()[:-8]
c0 = transposedIF.loc[transposedIF['Cluster'] == 0].transpose().mean(1).values.tolist()[:-8]
c1 = transposedIF.loc[transposedIF['Cluster'] == 1].transpose().mean(1).values.tolist()[:-8]
c2 = transposedIF.loc[transposedIF['Cluster'] == 2].transpose().mean(1).values.tolist()[:-8]
for i in range(len(x)):
x[i] = datetime.strptime(str(x[i]),'%Y-%m-%d %H:%M:%S').date()
plt.gca().xaxis.set_major_formatter(mdates.DateFormatter('%Y/%m/%d'))
plt.gca().xaxis.set_major_locator(mdates.DayLocator(interval = 7))
cluster0, = plt.plot(x,c0, label = 'Cluster 0')
cluster1, = plt.plot(x,c1, label = 'Cluster 1')
cluster2, = plt.plot(x,c2, label = 'Cluster 2')
plt.legend(handles=[cluster0,cluster1,cluster2])
plt.gcf().autofmt_xdate()
plt.savefig('meanhourbycluster.png', bbox_inches='tight')
plt.show()
```
If we look closely at July, we will notice that there's a day where the peak hour of clusters 2 and 0 is considerably later than average.
```
x[c2.index(max(c2))]
```
It's Bastille Day in France!
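The same lookup can be repeated for cluster 0 (a quick check that is not in the original notebook) to see whether its latest mean peak hour falls on the same day:
```
# day on which cluster 0 has its latest mean peak hour
x[c0.index(max(c0))]
```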
```
by_hour = transposedIF.groupby(['Cluster', transposedIF.index]).mean()
by_hour
import os
import folium
import json
from folium.plugins import MarkerCluster
jsonFile = open('infocomplete.txt')
jsonFileString = jsonFile.read()
dataParkingsStringList = jsonFileString[3:].split(',\n')
listJsonData = []
for jsondata in dataParkingsStringList:
dataDict = json.loads(jsondata)
listJsonData.append(dataDict)
l2 = []
l1 = []
l0 = []
for index, row in transposedIF.iterrows():
if row['Cluster'] == 2:
l2.append(str(index))
if row['Cluster'] == 1:
l1.append(str(index))
if row['Cluster'] == 0:
l0.append(str(index))
print(l0,l1,l2)
m = folium.Map(location=[45.7484600,4.8467100], zoom_start=14)
for dictData in listJsonData:
if dictData['properties']['idparkingcriter'] in l2:
[lon, lat] = dictData['geometry']['coordinates']
popupText = "ID : " + dictData['properties']['idparkingcriter'] + ". Capacity: " + dictData['properties']['capacite'] + ". Fermeture: " + dictData['properties']['fermeture']
marker = folium.Marker(
location=[lat, lon],
popup=popupText,
icon=folium.Icon(color='red')
)
m.add_child(marker)
if dictData['properties']['idparkingcriter'] in l1:
[lon, lat] = dictData['geometry']['coordinates']
popupText = "ID : " + dictData['properties']['idparkingcriter'] + ". Capacity: " + dictData['properties']['capacite'] + ". Fermeture: " + dictData['properties']['fermeture']
marker = folium.Marker(
location=[lat, lon],
popup=popupText,
icon=folium.Icon(color='green')
)
m.add_child(marker)
if dictData['properties']['idparkingcriter'] in l0:
[lon, lat] = dictData['geometry']['coordinates']
popupText = "ID : " + dictData['properties']['idparkingcriter'] + ". Capacity: " + dictData['properties']['capacite'] + ". Fermeture: " + dictData['properties']['fermeture']
marker = folium.Marker(
location=[lat, lon],
popup=popupText
)
m.add_child(marker)
m.save(os.path.join('ClusterParking.html'))
m
```
# Chapter 6. Handling Text
You can view this notebook in the Jupyter notebook viewer (nbviewer.jupyter.org) or run it in Google Colab (colab.research.google.com).
<table align="left">
<td>
<a target="_blank" href="https://nbviewer.jupyter.org/github/rickiepark/machine-learning-with-python-cookbook/blob/master/06.ipynb"><img src="https://jupyter.org/assets/main-logo.svg" width="28" />View in the Jupyter notebook viewer</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/rickiepark/machine-learning-with-python-cookbook/blob/master/06.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
</table>
## 6.1 Cleaning Text
```
# Create some text.
text_data = [" Interrobang. By Aishwarya Henriette ",
             "Parking And Going. By Karl Gautier",
             " Today Is The night. By Jarek Prakash "]
# Strip the whitespace.
strip_whitespace = [string.strip() for string in text_data]
# Show the text.
strip_whitespace
# Remove the periods.
remove_periods = [string.replace(".", "") for string in strip_whitespace]
# Show the text.
remove_periods
# Create a function.
def capitalizer(string: str) -> str:
    return string.upper()
# Apply the function.
[capitalizer(string) for string in remove_periods]
# Import the library.
import re
# Create a function.
def replace_letters_with_X(string: str) -> str:
    return re.sub(r"[a-zA-Z]", "X", string)
# Apply the function.
[replace_letters_with_X(string) for string in remove_periods]
```
## 6.2 Parsing and Cleaning HTML
```
# Import the library.
from bs4 import BeautifulSoup
# Create some sample HTML code.
html = """
<div class='full_name'><span style='font-weight:bold'>
Masego</span> Azra</div>"
"""
# Parse the HTML.
soup = BeautifulSoup(html, "lxml")
# Find the div with the class "full_name" and show its text.
soup.find("div", { "class" : "full_name" }).text
```
## 6.3 Removing Punctuation
```
# Import the libraries.
import unicodedata
import sys
# Create some text.
text_data = ['Hi!!!! I. Love. This. Song....',
             '10000% Agree!!!! #LoveIT',
             'Right?!?!']
# Create a dictionary of punctuation characters.
punctuation = dict.fromkeys(i for i in range(sys.maxunicode)
                            if unicodedata.category(chr(i)).startswith('P'))
# Remove the punctuation from the strings.
[string.translate(punctuation) for string in text_data]
```
## 6.4 Tokenizing Text
```
# Download the punkt tokenizer data.
import nltk
nltk.download('punkt')
# Import the library.
from nltk.tokenize import word_tokenize
# Create some text.
string = "The science of today is the technology of tomorrow"
# Tokenize into words.
word_tokenize(string)
# Import the library.
from nltk.tokenize import sent_tokenize
# Create some text.
string = "The science of today is the technology of tomorrow. Tomorrow is today."
# Tokenize into sentences.
sent_tokenize(string)
```
## 6.5 Removing Stop Words
```
# Download the stop word data.
import nltk
nltk.download('stopwords')
# Import the library.
from nltk.corpus import stopwords
# Create word tokens.
tokenized_words = ['i',
                   'am',
                   'going',
                   'to',
                   'go',
                   'to',
                   'the',
                   'store',
                   'and',
                   'park']
# Load the stop words.
stop_words = stopwords.words('english')
# Remove the stop words.
[word for word in tokenized_words if word not in stop_words]
# Show a few of the stop words.
stop_words[:5]
stopwords.abspath
```
## Appendix
```
from sklearn.feature_extraction.text import ENGLISH_STOP_WORDS
len(ENGLISH_STOP_WORDS), len(stop_words)
list(ENGLISH_STOP_WORDS)[:5]
```
## 6.6 Stemming Words
```
# Import the library.
from nltk.stem.porter import PorterStemmer
# Create word tokens.
tokenized_words = ['i', 'am', 'humbled', 'by', 'this', 'traditional', 'meeting']
# Create the stemmer.
porter = PorterStemmer()
# Apply the stemmer.
[porter.stem(word) for word in tokenized_words]
```
## 6.7 Tagging Parts of Speech
```
# Download the tagger.
import nltk
nltk.download('averaged_perceptron_tagger')
# Import the libraries.
from nltk import pos_tag
from nltk import word_tokenize
# Create some text.
text_data = "Chris loved outdoor running"
# Use the pre-trained part-of-speech tagger.
text_tagged = pos_tag(word_tokenize(text_data))
# Show the parts of speech.
text_tagged
# Filter the words by tag.
[word for word, tag in text_tagged if tag in ['NN','NNS','NNP','NNPS'] ]
from sklearn.preprocessing import MultiLabelBinarizer
# Create some text.
tweets = ["I am eating a burrito for breakfast",
          "Political science is an amazing field",
          "San Francisco is an awesome city"]
# Create an empty list.
tagged_tweets = []
# Tag each word in each tweet.
for tweet in tweets:
    tweet_tag = nltk.pos_tag(word_tokenize(tweet))
    tagged_tweets.append([tag for word, tag in tweet_tag])
# Use one-hot encoding to convert the tags into features.
one_hot_multi = MultiLabelBinarizer()
one_hot_multi.fit_transform(tagged_tweets)
# Show the feature names.
one_hot_multi.classes_
# Download the Brown corpus.
import nltk
nltk.download('brown')
# Import the libraries.
from nltk.corpus import brown
from nltk.tag import UnigramTagger
from nltk.tag import BigramTagger
from nltk.tag import TrigramTagger
# Get the text from the Brown corpus, split into sentences.
sentences = brown.tagged_sents(categories='news')
# Use 4,000 sentences for training and 623 for testing.
train = sentences[:4000]
test = sentences[4000:]
# Create the backoff tagger objects.
unigram = UnigramTagger(train)
bigram = BigramTagger(train, backoff=unigram)
trigram = TrigramTagger(train, backoff=bigram)
# Check the accuracy.
trigram.evaluate(test)
```
## Appendix
```
# If you are running on Colab, uncomment the following line and run it.
#!pip install konlpy
from konlpy.tag import Okt
okt = Okt()
text = '태양계는 지금으로부터 약 46억 년 전, 거대한 분자 구름의 일부분이 중력 붕괴를 일으키면서 형성되었다'
okt.pos(text)
okt.morphs(text)
okt.nouns(text)
```
## 6.8 Encoding Text as a Bag of Words
```
# Import the libraries.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
# Create some text.
text_data = np.array(['I love Brazil. Brazil!',
                      'Sweden is best',
                      'Germany beats both'])
# Create a bag-of-words feature matrix.
count = CountVectorizer()
bag_of_words = count.fit_transform(text_data)
# Show the feature matrix.
bag_of_words
bag_of_words.toarray()
# Show the feature names.
count.get_feature_names()
# Create a feature matrix with options.
count_2gram = CountVectorizer(ngram_range=(1,2),
                              stop_words="english",
                              vocabulary=['brazil'])
bag = count_2gram.fit_transform(text_data)
# Show the feature matrix.
bag.toarray()
# Show the 1-grams and 2-grams.
count_2gram.vocabulary_
```
## 6.9 Weighting Word Importance
```
# Import the libraries.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
# Create some text.
text_data = np.array(['I love Brazil. Brazil!',
                      'Sweden is best',
                      'Germany beats both'])
# Create a tf-idf feature matrix.
tfidf = TfidfVectorizer()
feature_matrix = tfidf.fit_transform(text_data)
# Show the tf-idf feature matrix.
feature_matrix
# Show the tf-idf feature matrix as a dense array.
feature_matrix.toarray()
# Show the feature names.
tfidf.vocabulary_
```
<a href="https://colab.research.google.com/github/pachterlab/kallistobustools/blob/master/tutorials/docs/tutorials/download_data/data_download.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Data downloading
This tutorial provides information on where to find single-cell RNA-seq data, and how to download it for processing with the **kallisto | bustools** workflow.
## Databases
There are multiple databases that are important repositories for sequencing data and metadata, and that are relevant for obtaining single-cell RNA-seq data. For each archive we provide an example of how the data is organized and how to download it.
* **[Biological Project Library](https://bigd.big.ac.cn/bioproject/)** (BioProject): The Biological Project Library organizes metadata for research projects involving genomic data types. This repository, which was started in 2016, is similar to the Gene Expression Omnibus. As an example, the data from the paper [Peng et al. 2019](https://www.nature.com/articles/s41422-019-0195-y) is organized under project accession [PRJCA001063](https://bigd.big.ac.cn/bioproject/browse/PRJCA001063). Each single-cell RNA-seq dataset has a “BioSample accession”, e.g. [SAMC047103](https://bigd.big.ac.cn/biosample/browse/SAMC047103). A further link to the Genome Sequencing Archive provides access to FASTQ files.
* **[Genome Sequence Archive](http://gsa.big.ac.cn/)** (GSA): This repository contains reads for projects in FASTQ format. For example, reads for [SAMC047103](https://bigd.big.ac.cn/biosample/browse/SAMC047103) from the [PRJCA001063](https://bigd.big.ac.cn/bioproject/browse/PRJCA001063) in the BioProject repository are accessible under accession [CRA001160](https://bigd.big.ac.cn/gsa/browse/CRA001160). A specific run accession, e.g. [CRR034516](https://bigd.big.ac.cn/gsa/browse/CRA001160/CRR034516) provides direct access to FASTQ files.
* **[Gene Expression Omnibus](https://www.ncbi.nlm.nih.gov/geo/)** (GEO): The Gene Expression Omnibus is a repository for [MIAME (Minimum Information about a Microarray Experiment)](https://www.ncbi.nlm.nih.gov/geo/info/MIAME.html) compliant data. While the MIAME standards were established during a time when gene expression data was primarily collected with microarrays, the standards also apply to sequencing data and the GEO repository hosts project metadata for both types of research projects. As an example, the project link for the paper [Wolock et al. 2019](https://www.sciencedirect.com/science/article/pii/S2211124719307971) is [GSE132151](https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE132151). Most papers refer to their data via GEO accessions, so GEO is a useful repository for searching for data from projects.
* **[European Nucleotide Archive](https://www.ebi.ac.uk/ena)** (ENA): The ENA provides access to nucleotide sequences associated with genomic projects. In the case of [GSE132151](https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE132151) mentioned above, the nucleotide sequences are at [PRJNA546231](https://www.ebi.ac.uk/ena/data/view/PRJNA546231). The ENA provides direct access to FASTQ files from the project page. It also links to NCBI Sequence Read Archive format data.
* **[Sequence Read Archive](https://www.ncbi.nlm.nih.gov/sra)** (SRA): The SRA is a sequence repository for genomic data. Files are stored in SRA format, which must be downloaded and converted to FASTQ format prior to pre-processing using the `fasterq-dump` program available as part of [SRA tools](https://github.com/ncbi/sra-tools/wiki/HowTo:-fasterq-dump). For example, the data in [Rossi et al., 2019](https://science.sciencemag.org/content/364/6447/1271) can be located in the SRA via [GEO](https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE130597), then the [SRA](https://www.ncbi.nlm.nih.gov/sra?term=SRP194426), and finally a sequence data page for one of the runs, [SRX5779290](https://trace.ncbi.nlm.nih.gov/Traces/sra/?run=SRR9000493), has information about the traces (reads). The SRA tools operate directly on SRA accessions (a short download sketch follows this list).
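As a minimal, hedged sketch of the SRA-to-FASTQ conversion (it assumes `prefetch` and `fasterq-dump` from SRA tools are installed and on the `PATH`; the `fastq/` output directory name is arbitrary), the run accession above could be fetched from a notebook cell like this:

```
!prefetch SRR9000493
!fasterq-dump SRR9000493 --split-files --outdir fastq/
```

The resulting FASTQ files in `fastq/` can then be pre-processed like any other local files.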
## Searching
The [sra-explorer](https://ewels.github.io/sra-explorer/) website is an effective and easy-to-use utility for searching the SRA and for downloading files. The utility finds SRA entries by keywords or accession numbers and produces links to the FASTQs and to commands for downloading them.
## Streaming
Single-cell RNA-seq data from sequence repositories can be streamed into `kb`, making possible a workflow that does not require saving files to disk prior to pre-processing. For example, the `kb count` command shown below can be used to stream data directly from a URL:
__Note__: Streaming is not supported on Windows.
### Install `kb`
```
!pip install --quiet kb-python
```
### Download a pre-built mouse index
The only required file that must be locally stored on disk prior to pre-processing is the index, which is why we download it here.
```
%%time
!kb ref -d mouse -i index.idx -g t2g.txt
%%time
!kb count -i index.idx -g t2g.txt -x 10xv2 --h5ad -t 2 \
https://caltech.box.com/shared/static/w9ww8et5o029s2e3usjzpbq8lpot29rh.gz \
https://caltech.box.com/shared/static/ql00zyvqnpy7bf8ogdoe9zfy907guzy9.gz
```
Notebook to investigate the negative salinity error that occurred in the 3 April 2015 Nowcast
```
from __future__ import division
import matplotlib.pyplot as plt
import netCDF4 as nc
import numpy as np
from salishsea_tools import viz_tools
%matplotlib inline
grid = nc.Dataset('../../NEMO-forcing/grid/bathy_meter_SalishSea2.nc')
bathy = grid.variables['Bathymetry'][:]
fig, ax = plt.subplots(1, 1, figsize=(10, 8))
viz_tools.set_aspect(ax)
mesh = ax.pcolormesh(bathy, cmap='winter_r')
fig.colorbar(mesh)
ax.plot(173,696,'ok')
fig, ax = plt.subplots(1, 1, figsize=(10, 8))
viz_tools.set_aspect(ax)
cmap = plt.get_cmap('winter_r')
cmap.set_bad('burlywood')
mesh = ax.pcolormesh(bathy, cmap=cmap)
fig.colorbar(mesh)
plt.axis((150, 250, 650, 750))
ax.plot(173,696,'ok')
```
The point is right off the south edge of Savary Island, so this is probably not a river problem.
```
data = nc.Dataset('/data/dlatorne/MEOPAR/SalishSea/nowcast/03apr15/SalishSea_1h_20150403_20150403_grid_T.nc')
salinity = data.variables['vosaline'][:]
surfaceheight = data.variables['sossheig'][:]
print(salinity.shape)
# mask salinity
m = salinity == 0
sal = np.ma.array(salinity, mask=m)
m = surfaceheight == 0
eta = np.ma.array(surfaceheight, mask=m)
fig, ax = plt.subplots(1, 1, figsize=(10, 8))
viz_tools.set_aspect(ax)
mesh = ax.pcolormesh(sal[17,0,650:750,150:250], cmap=cmap)
fig.colorbar(mesh)
fig, ax = plt.subplots(1, 1, figsize=(10, 8))
viz_tools.set_aspect(ax)
mesh = ax.pcolormesh(eta[17,650:750,150:250], cmap=cmap)
fig.colorbar(mesh)
atmos = nc.Dataset('/ocean/sallen/allen/research/MEOPAR/Operational/ops_y2015m04d03.nc')
precip = atmos.variables['precip'][:]
u_wind = atmos.variables['u_wind'][:]
v_wind = atmos.variables['v_wind'][:]
print(precip.shape)
fig, ax = plt.subplots(1, 1, figsize=(10, 8))
mesh = ax.pcolormesh(precip[19])
ax.set_xlim((0,256))
ax.set_ylim((0,266))
fig.colorbar(mesh)
fig, ax = plt.subplots(1, 1, figsize=(10, 8))
mesh = ax.pcolormesh(u_wind[19])
ax.set_xlim((0,256))
ax.set_ylim((0,266))
fig.colorbar(mesh)
fig, ax = plt.subplots(1, 1, figsize=(10, 8))
mesh = ax.pcolormesh(v_wind[19])
ax.set_xlim((0,256))
ax.set_ylim((0,266))
fig.colorbar(mesh)
print(u_wind[19,10,10], v_wind[19,10,10])
print(u_wind[19].max(), u_wind[19].min())
print(v_wind[19].max(), v_wind[19].min())
```
So the u_wind values at hour 19 are all NaNs.
```
fig, ax = plt.subplots(1, 1, figsize=(10, 8))
mesh = ax.pcolormesh(u_wind[18])
ax.set_xlim((0,256))
ax.set_ylim((0,266))
fig.colorbar(mesh)
fig, ax = plt.subplots(1, 1, figsize=(10, 8))
mesh = ax.pcolormesh(u_wind[20])
ax.set_xlim((0,256))
ax.set_ylim((0,266))
fig.colorbar(mesh)
```
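To confirm which forecast hours are affected without plotting each one, a quick check (reusing the `u_wind` array loaded above) is:

```
import numpy as np

# Hours in the operational file whose u_wind field is entirely NaN
u = np.asarray(u_wind)
bad_hours = np.flatnonzero(np.all(np.isnan(u), axis=(1, 2)))
print(bad_hours)
```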
```
#hide
# default_exp script
```
# Script - command line interfaces
> A fast way to turn your python function into a script.
Part of [fast.ai](https://www.fast.ai)'s toolkit for delightful developer experiences.
## Overview
Sometimes, you want to create a quick script, either for yourself, or for others. But in Python, that involves a whole lot of boilerplate and ceremony, especially if you want to support command line arguments, provide help, and other niceties. You can use [argparse](https://docs.python.org/3/library/argparse.html) for this purpose, which comes with Python, but it's complex and verbose.
`fastcore.script` makes life easier. There are much fancier modules to help you write scripts (we recommend [Python Fire](https://github.com/google/python-fire), and [Click](https://click.palletsprojects.com/en/7.x/) is also popular), but fastcore.script is very fast and very simple. In fact, it's <50 lines of code! Basically, it's just a little wrapper around `argparse` that uses modern Python features and some thoughtful defaults to get rid of the boilerplate.
For full details, see the [docs](https://fastcore.script.fast.ai) for `core`.
## Example
Here's a complete example (available in `examples/test_fastcore.py`):
```python
from fastcore.script import *
@call_parse
def main(msg:Param("The message", str),
upper:Param("Convert to uppercase?", store_true)):
"Print `msg`, optionally converting to uppercase"
print(msg.upper() if upper else msg)
```
If you copy that into a file and run it, you'll see:
```
$ examples/test_fastcore.py --help
usage: test_fastcore.py [-h] [--upper] [--pdb PDB] [--xtra XTRA] msg
Print `msg`, optionally converting to uppercase
positional arguments:
msg The message
optional arguments:
-h, --help show this help message and exit
--upper Convert to uppercase? (default: False)
--pdb PDB Run in pdb debugger (default: False)
--xtra XTRA Parse for additional args (default: '')
```
As you see, we didn't need any `if __name__ == "__main__"`, we didn't have to parse arguments, we just wrote a function, added a decorator to it, and added some annotations to our function's parameters. As a bonus, we can also use this function directly from a REPL such as Jupyter Notebook - it's not just for command line scripts!
## Param annotations
Each parameter in your function should have an annotation `Param(...)` (as in the example above). You can pass the following when calling `Param`: `help`,`type`,`opt`,`action`,`nargs`,`const`,`choices`,`required` . Except for `opt`, all of these are just passed directly to `argparse`, so you have all the power of that module at your disposal. Generally you'll want to pass at least `help` (since this is provided as the help string for that parameter) and `type` (to ensure that you get the type of data you expect). `opt` is a bool that defines whether a param is optional or required (positional) - but you'll generally not need to set this manually, because fastcore.script will set it for you automatically based on *default* values.
You should provide a default (after the `=`) for any *optional* parameters. If you don't provide a default for a parameter, then it will be a *positional* parameter.
## setuptools scripts
There's a really nice feature of pip/setuptools that lets you create commandline scripts directly from functions, makes them available in the `PATH`, and even makes your scripts cross-platform (e.g. in Windows it creates an exe). fastcore.script supports this feature too. The trick to making a function available as a script is to add a `console_scripts` section to your setup file, of the form: `script_name=module:function_name`. E.g. in this case we use: `test_fastcore.script=fastcore.script.test_cli:main`. With this, you can then just type `test_fastcore.script` at any time, from any directory, and your script will be called (once it's installed using one of the methods below).
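For illustration only, a minimal hand-written `setup.py` wiring for the example above might look like the sketch below (nbdev normally generates the equivalent for you, so treat the package layout and names here as assumptions):

```python
from setuptools import setup, find_packages

setup(
    name='fastcore',
    packages=find_packages(),
    entry_points={
        'console_scripts': [
            # script name on the left, "module:function" on the right
            'test_fastcore.script=fastcore.script.test_cli:main',
        ],
    },
)
```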
You don't actually have to write a `setup.py` yourself. Instead, just use [nbdev](https://nbdev.fast.ai). Then modify `settings.ini` as appropriate for your module/script. To install your script directly, you can type `pip install -e .`. Your script, when installed this way (it's called an [editable install](http://codumentary.blogspot.com/2014/11/python-tip-of-year-pip-install-editable.html)), will automatically be up to date even if you edit it - there's no need to reinstall it after editing. With nbdev you can even make your module and script available for installation directly from pip and conda by running `make release`.
## API details
```
from fastcore.test import *
#export
import inspect,functools
import argparse
from fastcore.imports import *
from fastcore.utils import *
#export
def store_true():
"Placeholder to pass to `Param` for `store_true` action"
pass
#export
def store_false():
"Placeholder to pass to `Param` for `store_false` action"
pass
#export
def bool_arg(v):
"Use as `type` for `Param` to get `bool` behavior"
return str2bool(v)
#export
def clean_type_str(x:str):
x = str(x)
x = re.sub("(class|function|__main__\.|\ at.*)", '', x)
x = re.sub("(<|>|'|\ )", '', x) # spl characters
return x
class Test: pass
test_eq(clean_type_str(argparse.ArgumentParser), 'argparse.ArgumentParser')
test_eq(clean_type_str(Test), 'Test')
test_eq(clean_type_str(int), 'int')
test_eq(clean_type_str(float), 'float')
test_eq(clean_type_str(store_false), 'store_false')
#export
class Param:
"A parameter in a function used in `anno_parser` or `call_parse`"
def __init__(self, help=None, type=None, opt=True, action=None, nargs=None, const=None,
choices=None, required=None, default=None):
if type==store_true: type,action,default=None,'store_true' ,False
if type==store_false: type,action,default=None,'store_false',True
store_attr()
def set_default(self, d):
if self.default is None:
if d==inspect.Parameter.empty: self.opt = False
else: self.default = d
if self.default is not None: self.help += f" (default: {self.default})"
@property
def pre(self): return '--' if self.opt else ''
@property
def kwargs(self): return {k:v for k,v in self.__dict__.items()
if v is not None and k!='opt' and k[0]!='_'}
def __repr__(self):
if self.help is None and self.type is None: return ""
if self.help is None and self.type is not None: return f"{clean_type_str(self.type)}"
if self.help is not None and self.type is None: return f"<{self.help}>"
if self.help is not None and self.type is not None: return f"{clean_type_str(self.type)} <{self.help}>"
test_eq(repr(Param("Help goes here")), '<Help goes here>')
test_eq(repr(Param("Help", int)), 'int <Help>')
test_eq(repr(Param(help=None, type=int)), 'int')
test_eq(repr(Param(help=None, type=None)), '')
```
Each parameter in your function should have an annotation `Param(...)`. You can pass the following when calling `Param`: `help`,`type`,`opt`,`action`,`nargs`,`const`,`choices`,`required` (i.e. it takes the same parameters as `argparse.ArgumentParser.add_argument`, plus `opt`). Except for `opt`, all of these are just passed directly to `argparse`, so you have all the power of that module at your disposal. Generally you'll want to pass at least `help` (since this is provided as the help string for that parameter) and `type` (to ensure that you get the type of data you expect).
`opt` is a bool that defines whether a param is optional or required (positional) - but you'll generally not need to set this manually, because fastcore.script will set it for you automatically based on *default* values. You should provide a default (after the `=`) for any *optional* parameters. If you don't provide a default for a parameter, then it will be a *positional* parameter.
Param's `__repr__` also allows for more informative function annotation when looking up the function's doc using shift+tab. You see the type annotation (if there is one) and the accompanying help documentation with it.
```
def f(required:Param("Required param", int),
a:Param("param 1", bool_arg),
b:Param("param 2", str)="test"):
"my docs"
...
f?
p = Param(help="help", type=int)
p.set_default(1)
test_eq(p.kwargs, {'help': 'help (default: 1)', 'type': int, 'default': 1})
#export
def anno_parser(func, prog=None, from_name=False):
"Look at params (annotated with `Param`) in func and return an `ArgumentParser`"
p = argparse.ArgumentParser(description=func.__doc__, prog=prog)
for k,v in inspect.signature(func).parameters.items():
param = func.__annotations__.get(k, Param())
param.set_default(v.default)
p.add_argument(f"{param.pre}{k}", **param.kwargs)
p.add_argument(f"--pdb", help="Run in pdb debugger (default: False)", action='store_true')
p.add_argument(f"--xtra", help="Parse for additional args (default: '')", type=str)
return p
```
This converts a function with parameter annotations of type `Param` into an `argparse.ArgumentParser` object. Function arguments with a default provided are optional, and other arguments are positional.
```
def f(required:Param("Required param", int),
a:Param("param 1", bool_arg),
b:Param("param 2", str)="test"):
"my docs"
...
p = anno_parser(f, 'progname')
p.print_help()
#export
def args_from_prog(func, prog):
"Extract args from `prog`"
if prog is None or '#' not in prog: return {}
if '##' in prog: _,prog = prog.split('##', 1)
progsp = prog.split("#")
args = {progsp[i]:progsp[i+1] for i in range(0, len(progsp), 2)}
for k,v in args.items():
t = func.__annotations__.get(k, Param()).type
if t: args[k] = t(v)
return args
```
Sometimes it's convenient to extract arguments from the actual name of the called program. `args_from_prog` will do this, assuming that names and values of the params are separated by a `#`. Optionally there can also be a prefix separated by `##` (double hash).
```
exp = {'a': False, 'b': 'baa'}
test_eq(args_from_prog(f, 'foo##a#0#b#baa'), exp)
test_eq(args_from_prog(f, 'a#0#b#baa'), exp)
#export
SCRIPT_INFO = SimpleNamespace(func=None)
#export
def call_parse(func):
"Decorator to create a simple CLI from `func` using `anno_parser`"
mod = inspect.getmodule(inspect.currentframe().f_back)
if not mod: return func
@functools.wraps(func)
def _f(*args, **kwargs):
mod = inspect.getmodule(inspect.currentframe().f_back)
if not mod: return func(*args, **kwargs)
if not SCRIPT_INFO.func and mod.__name__=="__main__": SCRIPT_INFO.func = func.__name__
p = anno_parser(func)
args = p.parse_args().__dict__
xtra = otherwise(args.pop('xtra', ''), eq(1), p.prog)
tfunc = trace(func) if args.pop('pdb', False) else func
tfunc(**merge(args, args_from_prog(func, xtra)))
if mod.__name__=="__main__":
setattr(mod, func.__name__, _f)
SCRIPT_INFO.func = func.__name__
return _f()
else: return _f
@call_parse
def test_add(a:Param("param a", int), b:Param("param 1",int)): return a + b
```
`call_parse` decorated functions work as regular functions and also as command-line interface functions.
```
test_eq(test_add(1,2), 3)
```
This is the main way to use `fastcore.script`; decorate your function with `call_parse`, add `Param` annotations as shown above, and it can then be used as a script.
## Export -
```
#hide
from nbdev.export import notebook2script
notebook2script()
```
<img align="left" src="https://lever-client-logos.s3.amazonaws.com/864372b1-534c-480e-acd5-9711f850815c-1524247202159.png" width=200>
<br></br>
## *Data Science Unit 4 Sprint 4*
# Sprint Challenge
### RNNs, CNNs, GANS, and AutoML
In this Sprint Challenge, you'll explore some of the cutting edge of Data Science. *Caution* - these approaches can be pretty heavy computationally. All problems are designed to be completed with 5-10 minutes of run time on most machines. If your approach takes longer, please double check your work.
## Part 1 - RNNs
Use an RNN to fit a classification model on tweets to distinguish between tweets from any two accounts. The following code sample illustrates how to access data from an account (no API auth needed, uses [twitterscraper](https://github.com/taspinar/twitterscraper)):
```
!pip install twitterscraper
from twitterscraper import query_tweets
austen_tweets = query_tweets('from:austen', 1000)
len(austen_tweets)
austen_tweets[0].text
```
Your Tasks:
* Select two twitter accounts to gather data from
* Use twitterscraper to get ~1,000 tweets from each account
* Encode the characters to a sequence of integers for the model
* Get the data into the appropriate shape/format, including labels and a train/test split
* Use Keras to fit a predictive model, classifying tweets as being from one account or the other
* Report your overall score and accuracy
For reference, the [Keras IMDB classification example](https://github.com/keras-team/keras/blob/master/examples/imdb_lstm.py) will be useful, as well as the RNN code we used in class.
Note - focus on getting a running model, not on maximizing accuracy with extreme data sizes or epoch numbers. Fit a baseline model based on tweet text. Only revisit to push accuracy or incorporate additional features if you get everything else done!
```
elon_tweets = query_tweets('from:elonmusk', 1000)
len(elon_tweets)
for i, j in enumerate(austen_tweets):
if i < 10:
print(austen_tweets[i].text)
print("-"*100)
for i, j in enumerate(elon_tweets):
if i < 10:
print(elon_tweets[i].text)
```
### Encode to integers
```
# get all tweet texts
both_tweets = ''
for i in austen_tweets:
both_tweets = both_tweets + i.text
for i in elon_tweets:
both_tweets = both_tweets + i.text
# Convert all tweet texts to numeric
chars = list(set(both_tweets))
char_indices = dict((c, i) for i, c in enumerate(chars))
# Convert austen tweet to numeric set based on all tweets
austen_tweets_num = []
for i, j in enumerate(austen_tweets):
num_list = [char_indices[char] for char in j.text]
austen_tweets_num.append(num_list)
# Convert elon tweet to numeric set based on all tweets
elon_tweets_num = []
for i, j in enumerate(elon_tweets):
num_list = [char_indices[char] for char in j.text]
elon_tweets_num.append(num_list)
print(len(austen_tweets_num), len(elon_tweets_num))
print("-"*100)
print(austen_tweets_num[1:3])
print("-"*100)
print(elon_tweets_num[1:3])
```
### Run Model
```
# Convert to np arrays for machine learning
import numpy as np
# dtype=object because the tweets have different lengths (padding happens later)
X = np.array(austen_tweets_num + elon_tweets_num, dtype=object)
austen_y = np.zeros((len(austen_tweets_num),), dtype=int)
elon_y = np.ones((len(elon_tweets_num),), dtype=int)
y = np.concatenate((austen_y,elon_y), axis=0)
X.shape, y.shape
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.layers import LSTM
from keras.layers.embeddings import Embedding
from keras.preprocessing import sequence
max_features = 2000
maxlen = 80
epochs = 10
batch_size = 20
print('Pad sequences (samples x time)')
X_train = sequence.pad_sequences(X_train, maxlen=maxlen)
X_test = sequence.pad_sequences(X_test, maxlen=maxlen)
print('x_train shape:', X_train.shape)
print('x_test shape:', X_test.shape)
model = Sequential()
model.add(Embedding(max_features, 128))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
```
### Result
```
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=epochs, batch_size=batch_size)
```
## Part 2 - CNNs
Time to play "find the frog!" Use Keras and ResNet50 to detect which of the following images contain frogs:
```
!pip install google_images_download
from google_images_download import google_images_download
response = google_images_download.googleimagesdownload()
arguments = {'keywords': "animal pond", "limit": 5, "print_urls": True}
absolute_image_paths = response.download(arguments)
```
At the time of writing at least a few do, but since the internet changes - it is possible your 5 won't. You can easily verify yourself, and (once you have working code) increase the number of images you pull to be more sure of getting a frog. Your goal is to validly run ResNet50 on the input images - don't worry about tuning or improving the model.
*Hint:* ResNet 50 doesn't just return "frog". The three labels it has for frogs are bullfrog, tree frog, and tailed frog.
Stretch goal - also check for fish.
```
for i in absolute_image_paths[0]['animal pond']:
print(i.split("/")[-1:])
from keras.applications.resnet50 import ResNet50
from keras.preprocessing import image
from keras.applications.resnet50 import preprocess_input, decode_predictions
import numpy as np

def process_img_path(img_path):
    return image.load_img(img_path, target_size=(224, 224))

def img_contains_frog(img):
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)
    model = ResNet50(weights='imagenet')
    features = model.predict(x)
    results = decode_predictions(features, top=3)[0]
    print(results)
    # Check all top-3 predictions before deciding, instead of returning on the first entry
    for entry in results:
        if 'frog' in entry[1]:
            return entry[2], 'Frog is in this picture'
    return 'Frog is not in this picture'
for i in absolute_image_paths[0]['animal pond']:
print(i.split("/")[-1:])
print(img_contains_frog(process_img_path(i)))
print("-"*50)
```
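For the fish stretch goal, one hedged sketch (the helper name `img_contains_fish` is ours, and matching the substring `'fish'` in the predicted label is an assumption about which ImageNet classes count as fish) is:

```
import numpy as np
from keras.applications.resnet50 import ResNet50, preprocess_input, decode_predictions
from keras.preprocessing import image

def img_contains_fish(img):
    # Reuse the same preprocessing as the frog checker
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    model = ResNet50(weights='imagenet')
    results = decode_predictions(model.predict(x), top=10)[0]
    # True if any top-10 label name mentions fish (e.g. goldfish)
    return any('fish' in label for _, label, _ in results)
```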
## Part 3 - AutoML
Use [TPOT](https://epistasislab.github.io/tpot/) to fit a predictive model for the King County housing data, with `price` as the target output variable.
```
!pip install tpot
import pandas as pd
url = "https://raw.githubusercontent.com/ryanleeallred/datasets/master/kc_house_data.csv"
df = pd.read_csv(url)
df.head()
print(df.shape)
df.isnull().sum()
```
As with previous questions, your goal is to run TPOT successfully and report its error at the end. Also, in the interest of time, feel free to choose small `generations=1` and `population_size=10` parameters, so your pipeline runs efficiently. You will want to be able to iterate and test.
*Hint:* You will have to drop and/or type coerce at least a few variables to get things working. It's fine to err on the side of dropping to get things running - as long as you still get a valid model with reasonable predictive power.
```
from tpot import TPOTRegressor
from sklearn.model_selection import train_test_split
X = df.drop(['price','id','date'], axis=1).values
X_train, X_test, y_train, y_test = train_test_split(X, df['price'].values, test_size=0.2)
```
### Result
```
tpot = TPOTRegressor(generations=2, population_size=10, verbosity=2)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
```
## Part 4 - More...
Answer the following questions, with a target audience of a fellow Data Scientist:
* What do you consider your strongest area as a Data Scientist?
  * My strongest areas as a Data Scientist are domain knowledge (business & healthcare) and applying machine learning models to solve real-world problems.
* What area of Data Science would you most like to learn more about and why?
  * I would probably spend more time learning about natural language processing and cognitive computing development, since they are applicable to most industries.
* Where do you think Data Science will be in 5 years?
  * Similar to a boom and bust cycle, I feel that data science is in a booming phase where its popularity is trending up. I hope the good times continue for at least 10 years so we can build better and more diverse technology as more people get into the field.
A few sentences per answer is fine. Only elaborate if time allows. Use markdown to format your answers.
Thank you for your hard work, and congratulations!! You've learned a lot, and you should proudly call yourself a Data Scientist.
# About this Notebook
Temporal Regularized Matrix Factorization (TRMF) is an effective tool for imputing missing data within a given multivariate time series and forecasting time series with missing values. This approach is from the following literature:
> Hsiang-Fu Yu, Nikhil Rao, Inderjit S. Dhillon, 2016. [**Temporal regularized matrix factorization for high-dimensional time series prediction**](http://www.cs.utexas.edu/~rofuyu/papers/tr-mf-nips.pdf). 30th Conference on Neural Information Processing Systems (*NIPS 2016*), Barcelona, Spain.
**Acknowledgement**: We would like to thank
- Antony Masso Lussier (HEC Montreal)
for providing helpful suggestions and discussion. Thank you!
## Quick Run
This notebook is publicly available for any use as part of our data imputation project. Please click [**transdim**](https://github.com/xinychen/transdim).
## Data Organization: Matrix Structure
In this post, we consider a dataset of $m$ discrete time series $\boldsymbol{y}_{i}\in\mathbb{R}^{f},i\in\left\{1,2,...,m\right\}$. The time series may have missing elements. We express the spatio-temporal dataset as a matrix $Y\in\mathbb{R}^{m\times f}$ with $m$ rows (e.g., locations) and $f$ columns (e.g., discrete time intervals),
$$Y=\left[ \begin{array}{cccc} y_{11} & y_{12} & \cdots & y_{1f} \\ y_{21} & y_{22} & \cdots & y_{2f} \\ \vdots & \vdots & \ddots & \vdots \\ y_{m1} & y_{m2} & \cdots & y_{mf} \\ \end{array} \right]\in\mathbb{R}^{m\times f}.$$
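To make this concrete, here is a tiny illustrative example (the numbers are made up): three series observed over five intervals, with missing entries encoded as zeros, which is also how the sparse matrices are represented later in this notebook.

```python
import numpy as np

# m = 3 series (rows) observed over f = 5 intervals (columns); zeros mark missing entries.
Y = np.array([[35.1, 36.0,  0.0, 34.2, 33.8],
              [20.4,  0.0, 21.7, 22.0,  0.0],
              [50.3, 49.9, 48.7,  0.0, 47.5]])
print(Y.shape)  # (3, 5), i.e., (m, f)
```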
## TRMF model
Temporal Regularized Matrix Factorization (TRMF) is an approach for incorporating temporal dependencies into the commonly used matrix factorization model. The temporal dependencies are described explicitly among the temporal factors ${\boldsymbol{x}_t}$. Such an approach takes the form:
$$\boldsymbol{x}_{t}\approx\sum_{l\in\mathcal{L}}\boldsymbol{\theta}_{l}\circledast\boldsymbol{x}_{t-l},$$
where this autoregressive (AR) structure is specified by a lag set $\mathcal{L}=\left\{l_1,l_2,...,l_d\right\}$ (e.g., $\mathcal{L}=\left\{1,2,144\right\}$) and weights $\boldsymbol{\theta}_{l}\in\mathbb{R}^{r},\forall l$, and we further define
$$\mathcal{R}_{AR}\left(X\mid \mathcal{L},\Theta,\eta\right)=\frac{1}{2}\sum_{t=l_d+1}^{f}\left(\boldsymbol{x}_{t}-\sum_{l\in\mathcal{L}}\boldsymbol{\theta}_{l}\circledast\boldsymbol{x}_{t-l}\right)^\top\left(\boldsymbol{x}_{t}-\sum_{l\in\mathcal{L}}\boldsymbol{\theta}_{l}\circledast\boldsymbol{x}_{t-l}\right)+\frac{\eta}{2}\sum_{t=1}^{f}\boldsymbol{x}_{t}^\top\boldsymbol{x}_{t}.$$
Thus, TRMF-AR is given by solving
$$\min_{W,X,\Theta}\frac{1}{2}\underbrace{\sum_{(i,t)\in\Omega}\left(y_{it}-\boldsymbol{w}_{i}^T\boldsymbol{x}_{t}\right)^2}_{\text{sum of squared residual errors}}+\lambda_{w}\underbrace{\mathcal{R}_{w}\left(W\right)}_{W-\text{regularizer}}+\lambda_{x}\underbrace{\mathcal{R}_{AR}\left(X\mid \mathcal{L},\Theta,\eta\right)}_{\text{AR-regularizer}}+\lambda_{\theta}\underbrace{\mathcal{R}_{\theta}\left(\Theta\right)}_{\Theta-\text{regularizer}}$$
where $\mathcal{R}_{w}\left(W\right)=\frac{1}{2}\sum_{i=1}^{m}\boldsymbol{w}_{i}^\top\boldsymbol{w}_{i}$ and $\mathcal{R}_{\theta}\left(\Theta\right)=\frac{1}{2}\sum_{l\in\mathcal{L}}\boldsymbol{\theta}_{l}^\top\boldsymbol{\theta}_{l}$ are regularization terms.
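Before moving to the full solver, the following minimal NumPy sketch evaluates the AR regularizer $\mathcal{R}_{AR}\left(X\mid \mathcal{L},\Theta,\eta\right)$ directly from its definition. It assumes `X` of shape `(f, rank)`, `theta` of shape `(d, rank)`, and `time_lags` of shape `(d,)`, matching the conventions of the code below; it is only an illustration, not part of the TRMF implementation.

```python
import numpy as np

def ar_regularizer(X, theta, time_lags, eta):
    """Evaluate R_AR(X | L, Theta, eta) for X (f, rank), theta (d, rank), time_lags (d,)."""
    f = X.shape[0]
    max_lag = np.max(time_lags)
    res = 0.0
    for t in range(max_lag, f):  # 0-based analogue of t = l_d + 1, ..., f
        # residual x_t - sum over lags of theta_l (elementwise) x_{t-l}
        r = X[t, :] - np.einsum('ij, ij -> j', theta, X[t - time_lags, :])
        res += 0.5 * r @ r
    return res + 0.5 * eta * np.sum(X ** 2)
```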
### Define TRMF model with `Numpy`
Looking at the optimization problem of the TRMF model above, we categorize the variables within this model as **parameters** (i.e., `init_para` in the TRMF function) and **hyperparameters** (i.e., `init_hyper`); a small sketch of both dictionaries follows this list.
- **Parameters** include spatial matrix $W$, temporal matrix $X$, and AR coefficients $\Theta$.
- **Hyperparameters** include weight parameters on some regularizers, i.e., $\lambda_w$, $\lambda_x$, $\lambda_\theta$, and $\eta$.
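The sketch below mirrors how the experiment cells later in this notebook construct these two dictionaries; the dimensions here are illustrative only.

```python
import numpy as np

dim1, dim2, rank = 214, 8784, 10          # illustrative sizes, not a specific data set
time_lags = np.array([1, 2, 144])
d = time_lags.shape[0]

## Parameters: random initialization of W, X, and theta
init_para = {"W": 0.1 * np.random.rand(dim1, rank),
             "X": 0.1 * np.random.rand(dim2, rank),
             "theta": 0.1 * np.random.rand(d, rank)}

## Hyperparameters: regularization weights and eta
init_hyper = {"lambda_w": 500, "lambda_x": 500, "lambda_theta": 500, "eta": 0.03}
```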
### How to understand Python code of TRMF?
#### Update spatial matrix $W$
We write Python code for updating the spatial matrix as follows,
```python
for i in range(dim1):
pos0 = np.where(sparse_mat[i, :] != 0)
Xt = X[pos0[0], :]
vec0 = Xt.T @ sparse_mat[i, pos0[0]]
mat0 = inv(Xt.T @ Xt + lambda_w * np.eye(rank))
W[i, :] = mat0 @ vec0
```
To better understand this code, let us see what happens in each line. Recall that the equation for updating $W$ is
$$\boldsymbol{w}_{i} \Leftarrow\left(\sum_{t:(i, t) \in \Omega} \boldsymbol{x}_{t} \boldsymbol{x}_{t}^{T}+\lambda_{w} I\right)^{-1} \sum_{t:(i, t) \in \Omega} y_{i t} \boldsymbol{x}_{t}$$
from the optimization problem:
$$\min _{W} \frac{1}{2} \underbrace{\sum_{(i, t) \in \Omega}\left(y_{i t}-\boldsymbol{w}_{i}^{T} \boldsymbol{x}_{t}\right)^{2}}_{\text {sum of squared residual errors }}+\frac{1}{2} \lambda_{w} \underbrace{\sum_{i=1}^{m} \boldsymbol{w}_{i}^{T} \boldsymbol{w}_{i}}_{\text{sum of squared entries}}.$$
As can be seen,
- `vec0 = Xt.T @ sparse_mat[i, pos0[0]]` corresponds to $$\sum_{t:(i, t) \in \Omega} y_{i t} \boldsymbol{x}_{t}.$$
- `mat0 = inv(Xt.T @ Xt + lambda_w * np.eye(rank))` corresponds to $$\left(\sum_{t:(i, t) \in \Omega} \boldsymbol{x}_{t} \boldsymbol{x}_{t}^{T}+\lambda_{w} I\right)^{-1}.$$
- `W[i, :] = mat0 @ vec0` corresponds to the update:
$$\boldsymbol{w}_{i} \Leftarrow\left(\sum_{t:(i, t) \in \Omega} \boldsymbol{x}_{t} \boldsymbol{x}_{t}^{T}+\lambda_{w} I\right)^{-1} \sum_{t:(i, t) \in \Omega} y_{i t} \boldsymbol{x}_{t}.$$
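As a practical aside, the same row update can be computed without forming the matrix inverse explicitly; the sketch below (not the notebook's code) solves the corresponding normal equations with `np.linalg.solve`, which is generally the numerically preferable choice.

```python
import numpy as np

def update_w_row(Xt, y_obs, lambda_w):
    """Xt: temporal factors at observed time points (n_obs, rank); y_obs: observed entries of row i."""
    rank = Xt.shape[1]
    A = Xt.T @ Xt + lambda_w * np.eye(rank)
    b = Xt.T @ y_obs
    return np.linalg.solve(A, b)  # same w_i as inv(A) @ b, but numerically more stable
```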
#### Update temporal matrix $X$
We write Python code for updating the temporal matrix as follows,
```python
for t in range(dim2):
pos0 = np.where(sparse_mat[:, t] != 0)
Wt = W[pos0[0], :]
Mt = np.zeros((rank, rank))
Nt = np.zeros(rank)
if t < np.max(time_lags):
Pt = np.zeros((rank, rank))
Qt = np.zeros(rank)
else:
Pt = np.eye(rank)
Qt = np.einsum('ij, ij -> j', theta, X[t - time_lags, :])
if t < dim2 - np.min(time_lags):
if t >= np.max(time_lags) and t < dim2 - np.max(time_lags):
index = list(range(0, d))
else:
index = list(np.where((t + time_lags >= np.max(time_lags)) & (t + time_lags < dim2)))[0]
for k in index:
Ak = theta[k, :]
Mt += np.diag(Ak ** 2)
theta0 = theta.copy()
theta0[k, :] = 0
Nt += np.multiply(Ak, X[t + time_lags[k], :]
- np.einsum('ij, ij -> j', theta0, X[t + time_lags[k] - time_lags, :]))
vec0 = Wt.T @ sparse_mat[pos0[0], t] + lambda_x * Nt + lambda_x * Qt
mat0 = inv(Wt.T @ Wt + lambda_x * Mt + lambda_x * Pt + lambda_x * eta * np.eye(rank))
X[t, :] = mat0 @ vec0
```
This code is more involved. Let us first see the optimization problem for deriving a closed-form update of $X$:
$$\min_{W,X,\Theta}\frac{1}{2}\underbrace{\sum_{(i,t)\in\Omega}\left(y_{it}-\boldsymbol{w}_{i}^T\boldsymbol{x}_{t}\right)^2}_{\text{sum of squared residual errors}}+\underbrace{\frac{1}{2}\lambda_{x}\sum_{t=l_d+1}^{f}\left(\boldsymbol{x}_{t}-\sum_{l\in\mathcal{L}}\boldsymbol{\theta}_{l}\circledast\boldsymbol{x}_{t-l}\right)^\top\left(\boldsymbol{x}_{t}-\sum_{l\in\mathcal{L}}\boldsymbol{\theta}_{l}\circledast\boldsymbol{x}_{t-l}\right)+\frac{1}{2}\lambda_{x}\eta\sum_{t=1}^{f}\boldsymbol{x}_{t}^\top\boldsymbol{x}_{t}}_{\text{AR-term}}+\underbrace{\frac{1}{2}\lambda_{\theta}\sum_{l\in\mathcal{L}}\boldsymbol{\theta}_{l}^\top\boldsymbol{\theta}_{l}}_{\Theta-\text{term}}.$$
- For $t=1,...,l_d$, update of $X$ is
$$\boldsymbol{x}_{t} \Leftarrow\left(\sum_{i:(i, t) \in \Omega} \boldsymbol{w}_{i} \boldsymbol{w}_{i}^{T}+\lambda_{x} \eta I\right)^{-1} \sum_{i:(i, t) \in \Omega} y_{i t} \boldsymbol{w}_{i}.$$
- For $t=l_d+1,...,f$, update of $X$ is
$${\boldsymbol{x}_{t}\Leftarrow\left(\sum_{i:(i,t)\in\Omega}\boldsymbol{w}_{i}\boldsymbol{w}_{i}^{T}+\lambda_xI+\lambda_x\sum_{h\in\mathcal{L},t+h \leq T}\text{diag}(\boldsymbol{\theta}_{h}\circledast\boldsymbol{\theta}_{h})+\lambda_x\eta I\right)^{-1}}{\left(\sum_{i:(i,t)\in\Omega}y_{it}\boldsymbol{w}_{i}+\lambda_x\sum_{l\in\mathcal{L}}\boldsymbol{\theta}_{l}\circledast\boldsymbol{x}_{t-l}+\lambda_x\sum_{h\in\mathcal{L},t+h \leq T}\boldsymbol{\theta}_{h}\circledast\boldsymbol{\psi}_{t+h}\right)}.$$
Then, as can be seen,
- `Mt += np.diag(Ak ** 2)` corresponds to $$\sum_{h\in\mathcal{L},t+h \leq T}\text{diag}(\boldsymbol{\theta}_{h}\circledast\boldsymbol{\theta}_{h}).$$
- `Nt += np.multiply(Ak, X[t + time_lags[k], :] - np.einsum('ij, ij -> j', theta0, X[t + time_lags[k] - time_lags, :]))` corresponds to $$\sum_{h\in\mathcal{L},t+h \leq T}\boldsymbol{\theta}_{h}\circledast\boldsymbol{\psi}_{t+h}.$$
- `Qt = np.einsum('ij, ij -> j', theta, X[t - time_lags, :])` corresponds to $$\sum_{l\in\mathcal{L}}\boldsymbol{\theta}_{l}\circledast\boldsymbol{x}_{t-l}.$$
- `X[t, :] = mat0 @ vec0` corresponds to the update of $X$.
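To make the `einsum` in `Qt` concrete, the tiny check below (with illustrative shapes) confirms that `np.einsum('ij, ij -> j', theta, X[t - time_lags, :])` is exactly the lag-weighted sum $\sum_{l\in\mathcal{L}}\boldsymbol{\theta}_{l}\circledast\boldsymbol{x}_{t-l}$ written as an explicit loop.

```python
import numpy as np

rank = 4
time_lags = np.array([1, 2, 6])
theta = np.random.rand(len(time_lags), rank)
X = np.random.rand(50, rank)
t = 20

qt_einsum = np.einsum('ij, ij -> j', theta, X[t - time_lags, :])
qt_loop = sum(theta[k, :] * X[t - time_lags[k], :] for k in range(len(time_lags)))
print(np.allclose(qt_einsum, qt_loop))  # True
```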
#### Update AR coefficients $\Theta$
We write Python code for updating the AR coefficients $\Theta$ as follows,
```python
for k in range(d):
theta0 = theta.copy()
theta0[k, :] = 0
mat0 = np.zeros((dim2 - np.max(time_lags), rank))
for L in range(d):
mat0 += X[np.max(time_lags) - time_lags[L] : dim2 - time_lags[L] , :] @ np.diag(theta0[L, :])
VarPi = X[np.max(time_lags) : dim2, :] - mat0
var1 = np.zeros((rank, rank))
var2 = np.zeros(rank)
for t in range(np.max(time_lags), dim2):
B = X[t - time_lags[k], :]
var1 += np.diag(np.multiply(B, B))
var2 += np.diag(B) @ VarPi[t - np.max(time_lags), :]
theta[k, :] = inv(var1 + lambda_theta * np.eye(rank) / lambda_x) @ var2
```
To better understand this code, let us see what happens in each line. Recall that the equation for updating $\theta$ is
$$
\boldsymbol{\theta}_{h}\Leftarrow\left(\sum_{t=l_d+1}^{f}\text{diag}(\boldsymbol{x}_{t-h}\circledast \boldsymbol{x}_{t-h})+\frac{\lambda_{\theta}}{\lambda_x}I\right)^{-1}\left(\sum_{t=l_d+1}^{f}{\boldsymbol{\pi}_{t}^{h}}\circledast \boldsymbol{x}_{t-h}\right)
$$
where $\boldsymbol{\pi}_{t}^{h}=\boldsymbol{x}_{t}-\sum_{l\in\mathcal{L},l\neq h}\boldsymbol{\theta}_{l}\circledast\boldsymbol{x}_{t-l}$ from the optimization problem:
$$
\min_{\Theta}\frac{1}{2}\lambda_{x}\underbrace{\sum_{t=l_d+1}^{f}\left(\boldsymbol{x}_{t}-\sum_{l\in\mathcal{L}}\boldsymbol{\theta}_{l}\circledast\boldsymbol{x}_{t-l}\right)^\top\left(\boldsymbol{x}_{t}-\sum_{l\in\mathcal{L}}\boldsymbol{\theta}_{l}\circledast\boldsymbol{x}_{t-l}\right)}_{\text{sum of squared residual errors}}+\frac{1}{2}\lambda_{\theta}\underbrace{\sum_{l\in\mathcal{L}}\boldsymbol{\theta}_{l}^\top\boldsymbol{\theta}_{l}}_{\text{sum of squared entries}}
$$
As can be seen,
- `mat0 += X[np.max(time_lags) - time_lags[L] : dim2 - time_lags[L] , :] @ np.diag(theta0[L, :])` corresponds to $$\sum_{l\in\mathcal{L},l\neq h}\boldsymbol{\theta}_{l}\circledast\boldsymbol{x}_{t-l}.$$
- `var1 += np.diag(np.multiply(B, B))` corresponds to $$\sum_{t=l_d+1}^{f}\text{diag}(\boldsymbol{x}_{t-h}\circledast \boldsymbol{x}_{t-h}).$$
- `var2 += np.diag(B) @ VarPi[t - np.max(time_lags), :]` corresponds to $$\sum_{t=l_d+1}^{f}{\boldsymbol{\pi}_{t}^{h}}\circledast \boldsymbol{x}_{t-h}.$$
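A small note on the code above: multiplying a vector by `np.diag(B)` only scales it elementwise, so `np.diag(B) @ v` and `B * v` give the same result (the latter avoids building the diagonal matrix). The check below is just an illustration.

```python
import numpy as np

B = np.random.rand(5)
v = np.random.rand(5)
print(np.allclose(np.diag(B) @ v, B * v))  # True
```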
```
import numpy as np
from numpy.linalg import inv as inv
def TRMF(dense_mat, sparse_mat, init_para, init_hyper, time_lags, maxiter):
"""Temporal Regularized Matrix Factorization, TRMF."""
## Initialize parameters
W = init_para["W"]
X = init_para["X"]
theta = init_para["theta"]
## Set hyperparameters
lambda_w = init_hyper["lambda_w"]
lambda_x = init_hyper["lambda_x"]
lambda_theta = init_hyper["lambda_theta"]
eta = init_hyper["eta"]
dim1, dim2 = sparse_mat.shape
pos_train = np.where(sparse_mat != 0)
pos_test = np.where((dense_mat != 0) & (sparse_mat == 0))
binary_mat = sparse_mat.copy()
binary_mat[pos_train] = 1
d, rank = theta.shape
for it in range(maxiter):
## Update spatial matrix W
for i in range(dim1):
pos0 = np.where(sparse_mat[i, :] != 0)
Xt = X[pos0[0], :]
vec0 = Xt.T @ sparse_mat[i, pos0[0]]
mat0 = inv(Xt.T @ Xt + lambda_w * np.eye(rank))
W[i, :] = mat0 @ vec0
## Update temporal matrix X
for t in range(dim2):
pos0 = np.where(sparse_mat[:, t] != 0)
Wt = W[pos0[0], :]
Mt = np.zeros((rank, rank))
Nt = np.zeros(rank)
if t < np.max(time_lags):
Pt = np.zeros((rank, rank))
Qt = np.zeros(rank)
else:
Pt = np.eye(rank)
Qt = np.einsum('ij, ij -> j', theta, X[t - time_lags, :])
if t < dim2 - np.min(time_lags):
if t >= np.max(time_lags) and t < dim2 - np.max(time_lags):
index = list(range(0, d))
else:
index = list(np.where((t + time_lags >= np.max(time_lags)) & (t + time_lags < dim2)))[0]
for k in index:
Ak = theta[k, :]
Mt += np.diag(Ak ** 2)
theta0 = theta.copy()
theta0[k, :] = 0
Nt += np.multiply(Ak, X[t + time_lags[k], :]
- np.einsum('ij, ij -> j', theta0, X[t + time_lags[k] - time_lags, :]))
vec0 = Wt.T @ sparse_mat[pos0[0], t] + lambda_x * Nt + lambda_x * Qt
mat0 = inv(Wt.T @ Wt + lambda_x * Mt + lambda_x * Pt + lambda_x * eta * np.eye(rank))
X[t, :] = mat0 @ vec0
## Update AR coefficients theta
for k in range(d):
theta0 = theta.copy()
theta0[k, :] = 0
mat0 = np.zeros((dim2 - np.max(time_lags), rank))
for L in range(d):
mat0 += X[np.max(time_lags) - time_lags[L] : dim2 - time_lags[L] , :] @ np.diag(theta0[L, :])
VarPi = X[np.max(time_lags) : dim2, :] - mat0
var1 = np.zeros((rank, rank))
var2 = np.zeros(rank)
for t in range(np.max(time_lags), dim2):
B = X[t - time_lags[k], :]
var1 += np.diag(np.multiply(B, B))
var2 += np.diag(B) @ VarPi[t - np.max(time_lags), :]
theta[k, :] = inv(var1 + lambda_theta * np.eye(rank) / lambda_x) @ var2
mat_hat = W @ X.T
mape = np.sum(np.abs(dense_mat[pos_test] - mat_hat[pos_test])
/ dense_mat[pos_test]) / dense_mat[pos_test].shape[0]
rmse = np.sqrt(np.sum((dense_mat[pos_test] - mat_hat[pos_test]) ** 2)/dense_mat[pos_test].shape[0])
if (it + 1) % 200 == 0:
print('Iter: {}'.format(it + 1))
print('Imputation MAPE: {:.6}'.format(mape))
print('Imputation RMSE: {:.6}'.format(rmse))
print()
```
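Before running the real experiments, the following cell is a minimal synthetic smoke test of the `TRMF` function defined above. The data is random, so the reported errors are meaningless; it only checks that the function runs end to end with small dimensions.

```python
import numpy as np

np.random.seed(0)
m, f, rank = 20, 300, 5
time_lags = np.array([1, 2, 10])
d = time_lags.shape[0]

dense_mat = np.random.rand(m, f) + 1.0      # strictly positive "ground truth"
mask = np.random.rand(m, f) > 0.2           # keep roughly 80% of the entries
sparse_mat = dense_mat * mask

init_para = {"W": 0.1 * np.random.rand(m, rank),
             "X": 0.1 * np.random.rand(f, rank),
             "theta": 0.1 * np.random.rand(d, rank)}
init_hyper = {"lambda_w": 10, "lambda_x": 10, "lambda_theta": 10, "eta": 0.03}

TRMF(dense_mat, sparse_mat, init_para, init_hyper, time_lags, maxiter=200)
```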
## Missing Data Imputation
In the following, we apply the TRMF function defined above to the task of missing data imputation on the following spatiotemporal multivariate time series data sets (organized as matrices):
- **Guangzhou data set**: [Guangzhou urban traffic speed data set](https://doi.org/10.5281/zenodo.1205228).
- **Birmingham data set**: [Birmingham parking data set](https://archive.ics.uci.edu/ml/datasets/Parking+Birmingham).
- **Hangzhou data set**: [Hangzhou metro passenger flow data set](https://doi.org/10.5281/zenodo.3145403).
- **Seattle data set**: [Seattle freeway traffic speed data set](https://github.com/zhiyongc/Seattle-Loop-Data).
The original data sets have been adapted for our experiments and are now available in the `datasets` folder.
### Experiments on Guangzhou Data Set
```
import scipy.io
tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/tensor.mat')
tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']
dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.2
# =============================================================================
### Random missing (RM) scenario
### Set the RM scenario by:
binary_mat = (np.round(random_tensor + 0.5 - missing_rate)
.reshape([random_tensor.shape[0], random_tensor.shape[1] * random_tensor.shape[2]]))
# =============================================================================
sparse_mat = np.multiply(dense_mat, binary_mat)
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 80
time_lags = np.array([1, 2, 144])
d = time_lags.shape[0]
## Initialize parameters
W = 0.1 * np.random.rand(dim1, rank)
X = 0.1 * np.random.rand(dim2, rank)
theta = 0.1 * np.random.rand(d, rank)
init_para = {"W": W, "X": X, "theta": theta}
## Set hyperparameters
lambda_w = 500
lambda_x = 500
lambda_theta = 500
eta = 0.03
init_hyper = {"lambda_w": lambda_w, "lambda_x": lambda_x, "lambda_theta": lambda_theta, "eta": eta}
maxiter = 1000
TRMF(dense_mat, sparse_mat, init_para, init_hyper, time_lags, maxiter)
end = time.time()
print('Running time: %d seconds'%(end - start))
import scipy.io
tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/tensor.mat')
tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']
dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.4
# =============================================================================
### Random missing (RM) scenario
### Set the RM scenario by:
binary_mat = (np.round(random_tensor + 0.5 - missing_rate)
.reshape([random_tensor.shape[0], random_tensor.shape[1] * random_tensor.shape[2]]))
# =============================================================================
sparse_mat = np.multiply(dense_mat, binary_mat)
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 80
time_lags = np.array([1, 2, 144])
d = time_lags.shape[0]
## Initialize parameters
W = 0.1 * np.random.rand(dim1, rank)
X = 0.1 * np.random.rand(dim2, rank)
theta = 0.1 * np.random.rand(d, rank)
init_para = {"W": W, "X": X, "theta": theta}
## Set hyperparameters
lambda_w = 500
lambda_x = 500
lambda_theta = 500
eta = 0.03
init_hyper = {"lambda_w": lambda_w, "lambda_x": lambda_x, "lambda_theta": lambda_theta, "eta": eta}
maxiter = 1000
TRMF(dense_mat, sparse_mat, init_para, init_hyper, time_lags, maxiter)
end = time.time()
print('Running time: %d seconds'%(end - start))
import scipy.io
tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/tensor.mat')
tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']
dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.2
# =============================================================================
### Non-random missing (NM) scenario
### Set the NM scenario by:
binary_tensor = np.zeros(tensor.shape)
for i1 in range(tensor.shape[0]):
for i2 in range(tensor.shape[1]):
binary_tensor[i1, i2, :] = np.round(random_matrix[i1, i2] + 0.5 - missing_rate)
binary_mat = binary_tensor.reshape([binary_tensor.shape[0], binary_tensor.shape[1] * binary_tensor.shape[2]])
# =============================================================================
sparse_mat = np.multiply(dense_mat, binary_mat)
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 10
time_lags = np.array([1, 2, 144])
d = time_lags.shape[0]
## Initialize parameters
W = 0.1 * np.random.rand(dim1, rank)
X = 0.1 * np.random.rand(dim2, rank)
theta = 0.1 * np.random.rand(d, rank)
init_para = {"W": W, "X": X, "theta": theta}
## Set hyperparameters
lambda_w = 500
lambda_x = 500
lambda_theta = 500
eta = 0.03
init_hyper = {"lambda_w": lambda_w, "lambda_x": lambda_x, "lambda_theta": lambda_theta, "eta": eta}
maxiter = 1000
TRMF(dense_mat, sparse_mat, init_para, init_hyper, time_lags, maxiter)
end = time.time()
print('Running time: %d seconds'%(end - start))
import scipy.io
tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/tensor.mat')
tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']
dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.4
# =============================================================================
### Non-random missing (NM) scenario
### Set the NM scenario by:
binary_tensor = np.zeros(tensor.shape)
for i1 in range(tensor.shape[0]):
for i2 in range(tensor.shape[1]):
binary_tensor[i1, i2, :] = np.round(random_matrix[i1, i2] + 0.5 - missing_rate)
binary_mat = binary_tensor.reshape([binary_tensor.shape[0], binary_tensor.shape[1] * binary_tensor.shape[2]])
# =============================================================================
sparse_mat = np.multiply(dense_mat, binary_mat)
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 10
time_lags = np.array([1, 2, 144])
d = time_lags.shape[0]
## Initialize parameters
W = 0.1 * np.random.rand(dim1, rank)
X = 0.1 * np.random.rand(dim2, rank)
theta = 0.1 * np.random.rand(d, rank)
init_para = {"W": W, "X": X, "theta": theta}
## Set hyperparameters
lambda_w = 500
lambda_x = 500
lambda_theta = 500
eta = 0.03
init_hyper = {"lambda_w": lambda_w, "lambda_x": lambda_x, "lambda_theta": lambda_theta, "eta": eta}
maxiter = 1000
TRMF(dense_mat, sparse_mat, init_para, init_hyper, time_lags, maxiter)
end = time.time()
print('Running time: %d seconds'%(end - start))
```
**Experiment results** of missing data imputation using TRMF:
| scenario |`rank`|`Lambda_w`|`Lambda_x`|`Lambda_theta`|`eta`|`maxiter`| mape | rmse |
|:----------|-----:|---------:|---------:|-------------:|----:|--------:|----------:|----------:|
|**20%, RM**| 80 | 500 | 500 | 500 | 0.03| 1000 | **0.0747**| **3.1424**|
|**40%, RM**| 80 | 500 | 500 | 500 | 0.03| 1000 | **0.0776**| **3.2536**|
|**20%, NM**| 10 | 500 | 500 | 500 | 0.03| 1000 | **0.1024**| **4.2710**|
|**40%, NM**| 10 | 500 | 500 | 500 | 0.03| 1000 | **0.1037**| **4.3713**|
### Experiments on Birmingham Data Set
```
import scipy.io
tensor = scipy.io.loadmat('../datasets/Birmingham-data-set/tensor.mat')
tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('../datasets/Birmingham-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('../datasets/Birmingham-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']
dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.1
# =============================================================================
### Random missing (RM) scenario
### Set the RM scenario by:
binary_mat = (np.round(random_tensor + 0.5 - missing_rate)
.reshape([random_tensor.shape[0], random_tensor.shape[1] * random_tensor.shape[2]]))
# =============================================================================
sparse_mat = np.multiply(dense_mat, binary_mat)
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 30
time_lags = np.array([1, 2, 18])
d = time_lags.shape[0]
## Initialize parameters
W = 0.1 * np.random.rand(dim1, rank)
X = 0.1 * np.random.rand(dim2, rank)
theta = 0.1 * np.random.rand(d, rank)
init_para = {"W": W, "X": X, "theta": theta}
## Set hyperparameters
lambda_w = 100
lambda_x = 100
lambda_theta = 100
eta = 0.01
init_hyper = {"lambda_w": lambda_w, "lambda_x": lambda_x, "lambda_theta": lambda_theta, "eta": eta}
maxiter = 1000
TRMF(dense_mat, sparse_mat, init_para, init_hyper, time_lags, maxiter)
end = time.time()
print('Running time: %d seconds'%(end - start))
import scipy.io
tensor = scipy.io.loadmat('../datasets/Birmingham-data-set/tensor.mat')
tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('../datasets/Birmingham-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('../datasets/Birmingham-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']
dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.3
# =============================================================================
### Random missing (RM) scenario
### Set the RM scenario by:
binary_mat = (np.round(random_tensor + 0.5 - missing_rate)
.reshape([random_tensor.shape[0], random_tensor.shape[1] * random_tensor.shape[2]]))
# =============================================================================
sparse_mat = np.multiply(dense_mat, binary_mat)
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 30
time_lags = np.array([1, 2, 18])
d = time_lags.shape[0]
## Initialize parameters
W = 0.1 * np.random.rand(dim1, rank)
X = 0.1 * np.random.rand(dim2, rank)
theta = 0.1 * np.random.rand(d, rank)
init_para = {"W": W, "X": X, "theta": theta}
## Set hyperparameters
lambda_w = 100
lambda_x = 100
lambda_theta = 100
eta = 0.01
init_hyper = {"lambda_w": lambda_w, "lambda_x": lambda_x, "lambda_theta": lambda_theta, "eta": eta}
maxiter = 1000
TRMF(dense_mat, sparse_mat, init_para, init_hyper, time_lags, maxiter)
end = time.time()
print('Running time: %d seconds'%(end - start))
import scipy.io
tensor = scipy.io.loadmat('../datasets/Birmingham-data-set/tensor.mat')
tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('../datasets/Birmingham-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('../datasets/Birmingham-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']
dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.1
# =============================================================================
### Non-random missing (NM) scenario
### Set the NM scenario by:
binary_tensor = np.zeros(tensor.shape)
for i1 in range(tensor.shape[0]):
for i2 in range(tensor.shape[1]):
binary_tensor[i1, i2, :] = np.round(random_matrix[i1, i2] + 0.5 - missing_rate)
binary_mat = binary_tensor.reshape([binary_tensor.shape[0], binary_tensor.shape[1] * binary_tensor.shape[2]])
# =============================================================================
sparse_mat = np.multiply(dense_mat, binary_mat)
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 10
time_lags = np.array([1, 2, 18])
d = time_lags.shape[0]
## Initialize parameters
W = 0.1 * np.random.rand(dim1, rank)
X = 0.1 * np.random.rand(dim2, rank)
theta = 0.1 * np.random.rand(d, rank)
init_para = {"W": W, "X": X, "theta": theta}
## Set hyperparameters
lambda_w = 100
lambda_x = 100
lambda_theta = 100
eta = 0.01
init_hyper = {"lambda_w": lambda_w, "lambda_x": lambda_x, "lambda_theta": lambda_theta, "eta": eta}
maxiter = 1000
TRMF(dense_mat, sparse_mat, init_para, init_hyper, time_lags, maxiter)
end = time.time()
print('Running time: %d seconds'%(end - start))
import scipy.io
tensor = scipy.io.loadmat('../datasets/Birmingham-data-set/tensor.mat')
tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('../datasets/Birmingham-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('../datasets/Birmingham-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']
dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.3
# =============================================================================
### Non-random missing (NM) scenario
### Set the NM scenario by:
binary_tensor = np.zeros(tensor.shape)
for i1 in range(tensor.shape[0]):
for i2 in range(tensor.shape[1]):
binary_tensor[i1, i2, :] = np.round(random_matrix[i1, i2] + 0.5 - missing_rate)
binary_mat = binary_tensor.reshape([binary_tensor.shape[0], binary_tensor.shape[1] * binary_tensor.shape[2]])
# =============================================================================
sparse_mat = np.multiply(dense_mat, binary_mat)
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 10
time_lags = np.array([1, 2, 18])
d = time_lags.shape[0]
## Initialize parameters
W = 0.1 * np.random.rand(dim1, rank)
X = 0.1 * np.random.rand(dim2, rank)
theta = 0.1 * np.random.rand(d, rank)
init_para = {"W": W, "X": X, "theta": theta}
## Set hyperparameters
lambda_w = 100
lambda_x = 100
lambda_theta = 100
eta = 0.01
init_hyper = {"lambda_w": lambda_w, "lambda_x": lambda_x, "lambda_theta": lambda_theta, "eta": eta}
maxiter = 1000
TRMF(dense_mat, sparse_mat, init_para, init_hyper, time_lags, maxiter)
end = time.time()
print('Running time: %d seconds'%(end - start))
```
**Experiment results** of missing data imputation using TRMF:
| scenario |`rank`|`Lambda_w`|`Lambda_x`|`Lambda_theta`|`eta`|`maxiter`| mape | rmse |
|:----------|-----:|---------:|---------:|-------------:|----:|--------:|----------:|----------:|
|**10%, RM**| 30 | 100 | 100 | 100 | 0.01| 1000 | **0.0277**|**10.5701**|
|**30%, RM**| 30 | 100 | 100 | 100 | 0.01| 1000 | **0.0369**|**21.8022** |
|**10%, NM**| 10 | 100 | 100 | 100 | 0.01| 1000 | **0.1274**|**29.4629**|
|**30%, NM**| 10 | 100 | 100 | 100 | 0.01| 1000 | **0.1635**|**85.9752**|
### Experiments on Hangzhou Data Set
```
import scipy.io
tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/tensor.mat')
tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('../datasets/Hangzhou-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']
dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.2
# =============================================================================
### Random missing (RM) scenario
### Set the RM scenario by:
binary_mat = (np.round(random_tensor + 0.5 - missing_rate)
.reshape([random_tensor.shape[0], random_tensor.shape[1] * random_tensor.shape[2]]))
# =============================================================================
sparse_mat = np.multiply(dense_mat, binary_mat)
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 50
time_lags = np.array([1, 2, 108])
d = time_lags.shape[0]
## Initialize parameters
W = 0.1 * np.random.rand(dim1, rank)
X = 0.1 * np.random.rand(dim2, rank)
theta = 0.1 * np.random.rand(d, rank)
init_para = {"W": W, "X": X, "theta": theta}
## Set hyperparameters
lambda_w = 500
lambda_x = 500
lambda_theta = 500
eta = 0.03
init_hyper = {"lambda_w": lambda_w, "lambda_x": lambda_x, "lambda_theta": lambda_theta, "eta": eta}
maxiter = 1000
TRMF(dense_mat, sparse_mat, init_para, init_hyper, time_lags, maxiter)
end = time.time()
print('Running time: %d seconds'%(end - start))
import scipy.io
tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/tensor.mat')
tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('../datasets/Hangzhou-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']
dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.4
# =============================================================================
### Random missing (RM) scenario
### Set the RM scenario by:
binary_mat = (np.round(random_tensor + 0.5 - missing_rate)
.reshape([random_tensor.shape[0], random_tensor.shape[1] * random_tensor.shape[2]]))
# =============================================================================
sparse_mat = np.multiply(dense_mat, binary_mat)
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 50
time_lags = np.array([1, 2, 108])
d = time_lags.shape[0]
## Initialize parameters
W = 0.1 * np.random.rand(dim1, rank)
X = 0.1 * np.random.rand(dim2, rank)
theta = 0.1 * np.random.rand(d, rank)
init_para = {"W": W, "X": X, "theta": theta}
## Set hyperparameters
lambda_w = 500
lambda_x = 500
lambda_theta = 500
eta = 0.03
init_hyper = {"lambda_w": lambda_w, "lambda_x": lambda_x, "lambda_theta": lambda_theta, "eta": eta}
maxiter = 1000
TRMF(dense_mat, sparse_mat, init_para, init_hyper, time_lags, maxiter)
end = time.time()
print('Running time: %d seconds'%(end - start))
import scipy.io
tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/tensor.mat')
tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('../datasets/Hangzhou-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']
dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.2
# =============================================================================
### Non-random missing (NM) scenario
### Set the NM scenario by:
binary_tensor = np.zeros(tensor.shape)
for i1 in range(tensor.shape[0]):
for i2 in range(tensor.shape[1]):
binary_tensor[i1, i2, :] = np.round(random_matrix[i1, i2] + 0.5 - missing_rate)
binary_mat = binary_tensor.reshape([binary_tensor.shape[0], binary_tensor.shape[1] * binary_tensor.shape[2]])
# =============================================================================
sparse_mat = np.multiply(dense_mat, binary_mat)
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 10
time_lags = np.array([1, 2, 108])
d = time_lags.shape[0]
## Initialize parameters
W = 0.1 * np.random.rand(dim1, rank)
X = 0.1 * np.random.rand(dim2, rank)
theta = 0.1 * np.random.rand(d, rank)
init_para = {"W": W, "X": X, "theta": theta}
## Set hyperparameters
lambda_w = 500
lambda_x = 500
lambda_theta = 500
eta = 0.03
init_hyper = {"lambda_w": lambda_w, "lambda_x": lambda_x, "lambda_theta": lambda_theta, "eta": eta}
maxiter = 1000
TRMF(dense_mat, sparse_mat, init_para, init_hyper, time_lags, maxiter)
end = time.time()
print('Running time: %d seconds'%(end - start))
import scipy.io
tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/tensor.mat')
tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('../datasets/Hangzhou-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']
dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.4
# =============================================================================
### Non-random missing (NM) scenario
### Set the NM scenario by:
binary_tensor = np.zeros(tensor.shape)
for i1 in range(tensor.shape[0]):
for i2 in range(tensor.shape[1]):
binary_tensor[i1, i2, :] = np.round(random_matrix[i1, i2] + 0.5 - missing_rate)
binary_mat = binary_tensor.reshape([binary_tensor.shape[0], binary_tensor.shape[1] * binary_tensor.shape[2]])
# =============================================================================
sparse_mat = np.multiply(dense_mat, binary_mat)
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 10
time_lags = np.array([1, 2, 108])
d = time_lags.shape[0]
## Initialize parameters
W = 0.1 * np.random.rand(dim1, rank)
X = 0.1 * np.random.rand(dim2, rank)
theta = 0.1 * np.random.rand(d, rank)
init_para = {"W": W, "X": X, "theta": theta}
## Set hyperparameters
lambda_w = 500
lambda_x = 500
lambda_theta = 500
eta = 0.03
init_hyper = {"lambda_w": lambda_w, "lambda_x": lambda_x, "lambda_theta": lambda_theta, "eta": eta}
maxiter = 1000
TRMF(dense_mat, sparse_mat, init_para, init_hyper, time_lags, maxiter)
end = time.time()
print('Running time: %d seconds'%(end - start))
```
**Experiment results** of missing data imputation using TRMF:
| scenario |`rank`|`Lambda_w`|`Lambda_x`|`Lambda_theta`|`eta`|`maxiter`| mape | rmse |
|:----------|-----:|---------:|---------:|-------------:|----:|--------:|----------:|----------:|
|**20%, RM**| 50 | 1000 | 1000 | 1000 | 0.03| 1000 | **0.2131**|**37.0673**|
|**40%, RM**| 50 | 1000 | 1000 | 1000 | 0.03| 1000 | **0.2289**|**38.15**|
|**20%, NM**| 10 | 1000 | 500 | 500 | 0.03| 1000 | **0.2607**|**40.0598**|
|**40%, NM**| 10 | 1000 | 500 | 500 | 0.03| 1000 | **0.2732**|**39.7538**|
### Experiments on Seattle Data Set
```
import pandas as pd
dense_mat = pd.read_csv('../datasets/Seattle-data-set/mat.csv', index_col = 0)
RM_mat = pd.read_csv('../datasets/Seattle-data-set/RM_mat.csv', index_col = 0)
dense_mat = dense_mat.values
RM_mat = RM_mat.values
missing_rate = 0.2
# =============================================================================
### Random missing (RM) scenario
### Set the RM scenario by:
binary_mat = np.round(RM_mat + 0.5 - missing_rate)
# =============================================================================
sparse_mat = np.multiply(dense_mat, binary_mat)
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 50
time_lags = np.array([1, 2, 288])
d = time_lags.shape[0]
## Initialize parameters
W = 0.1 * np.random.rand(dim1, rank)
X = 0.1 * np.random.rand(dim2, rank)
theta = 0.1 * np.random.rand(d, rank)
init_para = {"W": W, "X": X, "theta": theta}
## Set hyperparameters
lambda_w = 500
lambda_x = 500
lambda_theta = 500
eta = 0.03
init_hyper = {"lambda_w": lambda_w, "lambda_x": lambda_x, "lambda_theta": lambda_theta, "eta": eta}
maxiter = 1000
TRMF(dense_mat, sparse_mat, init_para, init_hyper, time_lags, maxiter)
end = time.time()
print('Running time: %d seconds'%(end - start))
import pandas as pd
dense_mat = pd.read_csv('../datasets/Seattle-data-set/mat.csv', index_col = 0)
RM_mat = pd.read_csv('../datasets/Seattle-data-set/RM_mat.csv', index_col = 0)
dense_mat = dense_mat.values
RM_mat = RM_mat.values
missing_rate = 0.4
# =============================================================================
### Random missing (RM) scenario
### Set the RM scenario by:
binary_mat = np.round(RM_mat + 0.5 - missing_rate)
# =============================================================================
sparse_mat = np.multiply(dense_mat, binary_mat)
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 50
time_lags = np.array([1, 2, 288])
d = time_lags.shape[0]
## Initialize parameters
W = 0.1 * np.random.rand(dim1, rank)
X = 0.1 * np.random.rand(dim2, rank)
theta = 0.1 * np.random.rand(d, rank)
init_para = {"W": W, "X": X, "theta": theta}
## Set hyperparameters
lambda_w = 500
lambda_x = 500
lambda_theta = 500
eta = 0.03
init_hyper = {"lambda_w": lambda_w, "lambda_x": lambda_x, "lambda_theta": lambda_theta, "eta": eta}
maxiter = 1000
TRMF(dense_mat, sparse_mat, init_para, init_hyper, time_lags, maxiter)
end = time.time()
print('Running time: %d seconds'%(end - start))
import pandas as pd
dense_mat = pd.read_csv('../datasets/Seattle-data-set/mat.csv', index_col = 0)
NM_mat = pd.read_csv('../datasets/Seattle-data-set/NM_mat.csv', index_col = 0)
dense_mat = dense_mat.values
NM_mat = NM_mat.values
missing_rate = 0.2
# =============================================================================
### Non-random missing (NM) scenario
### Set the NM scenario by:
binary_tensor = np.zeros((dense_mat.shape[0], 28, 288))
for i1 in range(binary_tensor.shape[0]):
for i2 in range(binary_tensor.shape[1]):
binary_tensor[i1, i2, :] = np.round(NM_mat[i1, i2] + 0.5 - missing_rate)
# =============================================================================
sparse_mat = np.multiply(dense_mat, binary_tensor.reshape([dense_mat.shape[0], dense_mat.shape[1]]))
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 10
time_lags = np.array([1, 2, 288])
d = time_lags.shape[0]
## Initialize parameters
W = 0.1 * np.random.rand(dim1, rank)
X = 0.1 * np.random.rand(dim2, rank)
theta = 0.1 * np.random.rand(d, rank)
init_para = {"W": W, "X": X, "theta": theta}
## Set hyperparameters
lambda_w = 500
lambda_x = 500
lambda_theta = 500
eta = 0.03
init_hyper = {"lambda_w": lambda_w, "lambda_x": lambda_x, "lambda_theta": lambda_theta, "eta": eta}
maxiter = 1000
TRMF(dense_mat, sparse_mat, init_para, init_hyper, time_lags, maxiter)
end = time.time()
print('Running time: %d seconds'%(end - start))
import pandas as pd
dense_mat = pd.read_csv('../datasets/Seattle-data-set/mat.csv', index_col = 0)
NM_mat = pd.read_csv('../datasets/Seattle-data-set/NM_mat.csv', index_col = 0)
dense_mat = dense_mat.values
NM_mat = NM_mat.values
missing_rate = 0.4
# =============================================================================
### Non-random missing (NM) scenario
### Set the NM scenario by:
binary_tensor = np.zeros((dense_mat.shape[0], 28, 288))
for i1 in range(binary_tensor.shape[0]):
for i2 in range(binary_tensor.shape[1]):
binary_tensor[i1, i2, :] = np.round(NM_mat[i1, i2] + 0.5 - missing_rate)
# =============================================================================
sparse_mat = np.multiply(dense_mat, binary_tensor.reshape([dense_mat.shape[0], dense_mat.shape[1]]))
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 10
time_lags = np.array([1, 2, 288])
d = time_lags.shape[0]
## Initialize parameters
W = 0.1 * np.random.rand(dim1, rank)
X = 0.1 * np.random.rand(dim2, rank)
theta = 0.1 * np.random.rand(d, rank)
init_para = {"W": W, "X": X, "theta": theta}
## Set hyperparameters
lambda_w = 500
lambda_x = 500
lambda_theta = 500
eta = 0.03
init_hyper = {"lambda_w": lambda_w, "lambda_x": lambda_x, "lambda_theta": lambda_theta, "eta": eta}
maxiter = 1000
TRMF(dense_mat, sparse_mat, init_para, init_hyper, time_lags, maxiter)
end = time.time()
print('Running time: %d seconds'%(end - start))
```
**Experiment results** of missing data imputation using TRMF:
| scenario |`rank`|`Lambda_w`|`Lambda_x`|`Lambda_theta`|`eta`|`maxiter`| mape | rmse |
|:----------|-----:|---------:|---------:|-------------:|----:|--------:|----------:|----------:|
|**20%, RM**| 50 | 1000 | 1000 | 1000 | 0.03| 1000 | **0.0596**| **3.7148**|
|**40%, RM**| 50 | 1000 | 1000 | 1000 | 0.03| 1000 | **0.0616**| **3.7928**|
|**20%, NM**| 10 | 1000 | 500 | 500 | 0.03| 1000 | **0.0912**| **5.2626**|
|**40%, NM**| 10 | 1000 | 500 | 500 | 0.03| 1000 | **0.0919**| **5.2995**|
|
github_jupyter
|
for i in range(dim1):
pos0 = np.where(sparse_mat[i, :] != 0)
Xt = X[pos0[0], :]
vec0 = Xt.T @ sparse_mat[i, pos0[0]]
mat0 = inv(Xt.T @ Xt + lambda_w * np.eye(rank))
W[i, :] = mat0 @ vec0
for t in range(dim2):
pos0 = np.where(sparse_mat[:, t] != 0)
Wt = W[pos0[0], :]
Mt = np.zeros((rank, rank))
Nt = np.zeros(rank)
if t < np.max(time_lags):
Pt = np.zeros((rank, rank))
Qt = np.zeros(rank)
else:
Pt = np.eye(rank)
Qt = np.einsum('ij, ij -> j', theta, X[t - time_lags, :])
if t < dim2 - np.min(time_lags):
if t >= np.max(time_lags) and t < dim2 - np.max(time_lags):
index = list(range(0, d))
else:
index = list(np.where((t + time_lags >= np.max(time_lags)) & (t + time_lags < dim2)))[0]
for k in index:
Ak = theta[k, :]
Mt += np.diag(Ak ** 2)
theta0 = theta.copy()
theta0[k, :] = 0
Nt += np.multiply(Ak, X[t + time_lags[k], :]
- np.einsum('ij, ij -> j', theta0, X[t + time_lags[k] - time_lags, :]))
vec0 = Wt.T @ sparse_mat[pos0[0], t] + lambda_x * Nt + lambda_x * Qt
mat0 = inv(Wt.T @ Wt + lambda_x * Mt + lambda_x * Pt + lambda_x * eta * np.eye(rank))
X[t, :] = mat0 @ vec0
for k in range(d):
theta0 = theta.copy()
theta0[k, :] = 0
mat0 = np.zeros((dim2 - np.max(time_lags), rank))
for L in range(d):
mat0 += X[np.max(time_lags) - time_lags[L] : dim2 - time_lags[L] , :] @ np.diag(theta0[L, :])
VarPi = X[np.max(time_lags) : dim2, :] - mat0
var1 = np.zeros((rank, rank))
var2 = np.zeros(rank)
for t in range(np.max(time_lags), dim2):
B = X[t - time_lags[k], :]
var1 += np.diag(np.multiply(B, B))
var2 += np.diag(B) @ VarPi[t - np.max(time_lags), :]
theta[k, :] = inv(var1 + lambda_theta * np.eye(rank) / lambda_x) @ var2
import numpy as np
from numpy.linalg import inv as inv
def TRMF(dense_mat, sparse_mat, init_para, init_hyper, time_lags, maxiter):
"""Temporal Regularized Matrix Factorization, TRMF."""
## Initialize parameters
W = init_para["W"]
X = init_para["X"]
theta = init_para["theta"]
## Set hyperparameters
lambda_w = init_hyper["lambda_w"]
lambda_x = init_hyper["lambda_x"]
lambda_theta = init_hyper["lambda_theta"]
eta = init_hyper["eta"]
dim1, dim2 = sparse_mat.shape
pos_train = np.where(sparse_mat != 0)
pos_test = np.where((dense_mat != 0) & (sparse_mat == 0))
binary_mat = sparse_mat.copy()
binary_mat[pos_train] = 1
d, rank = theta.shape
for it in range(maxiter):
## Update spatial matrix W
for i in range(dim1):
pos0 = np.where(sparse_mat[i, :] != 0)
Xt = X[pos0[0], :]
vec0 = Xt.T @ sparse_mat[i, pos0[0]]
mat0 = inv(Xt.T @ Xt + lambda_w * np.eye(rank))
W[i, :] = mat0 @ vec0
## Update temporal matrix X
for t in range(dim2):
pos0 = np.where(sparse_mat[:, t] != 0)
Wt = W[pos0[0], :]
Mt = np.zeros((rank, rank))
Nt = np.zeros(rank)
if t < np.max(time_lags):
Pt = np.zeros((rank, rank))
Qt = np.zeros(rank)
else:
Pt = np.eye(rank)
Qt = np.einsum('ij, ij -> j', theta, X[t - time_lags, :])
if t < dim2 - np.min(time_lags):
if t >= np.max(time_lags) and t < dim2 - np.max(time_lags):
index = list(range(0, d))
else:
index = list(np.where((t + time_lags >= np.max(time_lags)) & (t + time_lags < dim2)))[0]
for k in index:
Ak = theta[k, :]
Mt += np.diag(Ak ** 2)
theta0 = theta.copy()
theta0[k, :] = 0
Nt += np.multiply(Ak, X[t + time_lags[k], :]
- np.einsum('ij, ij -> j', theta0, X[t + time_lags[k] - time_lags, :]))
vec0 = Wt.T @ sparse_mat[pos0[0], t] + lambda_x * Nt + lambda_x * Qt
mat0 = inv(Wt.T @ Wt + lambda_x * Mt + lambda_x * Pt + lambda_x * eta * np.eye(rank))
X[t, :] = mat0 @ vec0
## Update AR coefficients theta
for k in range(d):
theta0 = theta.copy()
theta0[k, :] = 0
mat0 = np.zeros((dim2 - np.max(time_lags), rank))
for L in range(d):
mat0 += X[np.max(time_lags) - time_lags[L] : dim2 - time_lags[L] , :] @ np.diag(theta0[L, :])
VarPi = X[np.max(time_lags) : dim2, :] - mat0
var1 = np.zeros((rank, rank))
var2 = np.zeros(rank)
for t in range(np.max(time_lags), dim2):
B = X[t - time_lags[k], :]
var1 += np.diag(np.multiply(B, B))
var2 += np.diag(B) @ VarPi[t - np.max(time_lags), :]
theta[k, :] = inv(var1 + lambda_theta * np.eye(rank) / lambda_x) @ var2
mat_hat = W @ X.T
mape = np.sum(np.abs(dense_mat[pos_test] - mat_hat[pos_test])
/ dense_mat[pos_test]) / dense_mat[pos_test].shape[0]
rmse = np.sqrt(np.sum((dense_mat[pos_test] - mat_hat[pos_test]) ** 2)/dense_mat[pos_test].shape[0])
if (it + 1) % 200 == 0:
print('Iter: {}'.format(it + 1))
print('Imputation MAPE: {:.6}'.format(mape))
print('Imputation RMSE: {:.6}'.format(rmse))
print()
import scipy.io
tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/tensor.mat')
tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']
dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.2
# =============================================================================
### Random missing (RM) scenario
### Set the RM scenario by:
binary_mat = (np.round(random_tensor + 0.5 - missing_rate)
.reshape([random_tensor.shape[0], random_tensor.shape[1] * random_tensor.shape[2]]))
# =============================================================================
sparse_mat = np.multiply(dense_mat, binary_mat)
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 80
time_lags = np.array([1, 2, 144])
d = time_lags.shape[0]
## Initialize parameters
W = 0.1 * np.random.rand(dim1, rank)
X = 0.1 * np.random.rand(dim2, rank)
theta = 0.1 * np.random.rand(d, rank)
init_para = {"W": W, "X": X, "theta": theta}
## Set hyparameters
lambda_w = 500
lambda_x = 500
lambda_theta = 500
eta = 0.03
init_hyper = {"lambda_w": lambda_w, "lambda_x": lambda_x, "lambda_theta": lambda_theta, "eta": eta}
maxiter = 1000
TRMF(dense_mat, sparse_mat, init_para, init_hyper, time_lags, maxiter)
end = time.time()
print('Running time: %d seconds'%(end - start))
import scipy.io
tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/tensor.mat')
tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']
dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.4
# =============================================================================
### Random missing (RM) scenario
### Set the RM scenario by:
binary_mat = (np.round(random_tensor + 0.5 - missing_rate)
.reshape([random_tensor.shape[0], random_tensor.shape[1] * random_tensor.shape[2]]))
# =============================================================================
sparse_mat = np.multiply(dense_mat, binary_mat)
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 80
time_lags = np.array([1, 2, 144])
d = time_lags.shape[0]
## Initialize parameters
W = 0.1 * np.random.rand(dim1, rank)
X = 0.1 * np.random.rand(dim2, rank)
theta = 0.1 * np.random.rand(d, rank)
init_para = {"W": W, "X": X, "theta": theta}
## Set hyparameters
lambda_w = 500
lambda_x = 500
lambda_theta = 500
eta = 0.03
init_hyper = {"lambda_w": lambda_w, "lambda_x": lambda_x, "lambda_theta": lambda_theta, "eta": eta}
maxiter = 1000
TRMF(dense_mat, sparse_mat, init_para, init_hyper, time_lags, maxiter)
end = time.time()
print('Running time: %d seconds'%(end - start))
import scipy.io
tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/tensor.mat')
tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']
dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.2
# =============================================================================
### Non-random missing (NM) scenario
### Set the NM scenario by:
binary_tensor = np.zeros(tensor.shape)
for i1 in range(tensor.shape[0]):
for i2 in range(tensor.shape[1]):
binary_tensor[i1, i2, :] = np.round(random_matrix[i1, i2] + 0.5 - missing_rate)
binary_mat = binary_tensor.reshape([binary_tensor.shape[0], binary_tensor.shape[1] * binary_tensor.shape[2]])
# =============================================================================
sparse_mat = np.multiply(dense_mat, binary_mat)
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 10
time_lags = np.array([1, 2, 144])
d = time_lags.shape[0]
## Initialize parameters
W = 0.1 * np.random.rand(dim1, rank)
X = 0.1 * np.random.rand(dim2, rank)
theta = 0.1 * np.random.rand(d, rank)
init_para = {"W": W, "X": X, "theta": theta}
## Set hyparameters
lambda_w = 500
lambda_x = 500
lambda_theta = 500
eta = 0.03
init_hyper = {"lambda_w": lambda_w, "lambda_x": lambda_x, "lambda_theta": lambda_theta, "eta": eta}
maxiter = 1000
TRMF(dense_mat, sparse_mat, init_para, init_hyper, time_lags, maxiter)
end = time.time()
print('Running time: %d seconds'%(end - start))
import scipy.io
tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/tensor.mat')
tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']
dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.4
# =============================================================================
### Non-random missing (NM) scenario
### Set the NM scenario by:
binary_tensor = np.zeros(tensor.shape)
for i1 in range(tensor.shape[0]):
for i2 in range(tensor.shape[1]):
binary_tensor[i1, i2, :] = np.round(random_matrix[i1, i2] + 0.5 - missing_rate)
binary_mat = binary_tensor.reshape([binary_tensor.shape[0], binary_tensor.shape[1] * binary_tensor.shape[2]])
# =============================================================================
sparse_mat = np.multiply(dense_mat, binary_mat)
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 10
time_lags = np.array([1, 2, 144])
d = time_lags.shape[0]
## Initialize parameters
W = 0.1 * np.random.rand(dim1, rank)
X = 0.1 * np.random.rand(dim2, rank)
theta = 0.1 * np.random.rand(d, rank)
init_para = {"W": W, "X": X, "theta": theta}
## Set hyparameters
lambda_w = 500
lambda_x = 500
lambda_theta = 500
eta = 0.03
init_hyper = {"lambda_w": lambda_w, "lambda_x": lambda_x, "lambda_theta": lambda_theta, "eta": eta}
maxiter = 1000
TRMF(dense_mat, sparse_mat, init_para, init_hyper, time_lags, maxiter)
end = time.time()
print('Running time: %d seconds'%(end - start))
import scipy.io
tensor = scipy.io.loadmat('../datasets/Birmingham-data-set/tensor.mat')
tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('../datasets/Birmingham-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('../datasets/Birmingham-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']
dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.1
# =============================================================================
### Random missing (RM) scenario
### Set the RM scenario by:
binary_mat = (np.round(random_tensor + 0.5 - missing_rate)
.reshape([random_tensor.shape[0], random_tensor.shape[1] * random_tensor.shape[2]]))
# =============================================================================
sparse_mat = np.multiply(dense_mat, binary_mat)
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 30
time_lags = np.array([1, 2, 18])
d = time_lags.shape[0]
## Initialize parameters
W = 0.1 * np.random.rand(dim1, rank)
X = 0.1 * np.random.rand(dim2, rank)
theta = 0.1 * np.random.rand(d, rank)
init_para = {"W": W, "X": X, "theta": theta}
## Set hyparameters
lambda_w = 100
lambda_x = 100
lambda_theta = 100
eta = 0.01
init_hyper = {"lambda_w": lambda_w, "lambda_x": lambda_x, "lambda_theta": lambda_theta, "eta": eta}
maxiter = 1000
TRMF(dense_mat, sparse_mat, init_para, init_hyper, time_lags, maxiter)
end = time.time()
print('Running time: %d seconds'%(end - start))
import scipy.io
tensor = scipy.io.loadmat('../datasets/Birmingham-data-set/tensor.mat')
tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('../datasets/Birmingham-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('../datasets/Birmingham-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']
dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.3
# =============================================================================
### Random missing (RM) scenario
### Set the RM scenario by:
binary_mat = (np.round(random_tensor + 0.5 - missing_rate)
.reshape([random_tensor.shape[0], random_tensor.shape[1] * random_tensor.shape[2]]))
# =============================================================================
sparse_mat = np.multiply(dense_mat, binary_mat)
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 30
time_lags = np.array([1, 2, 18])
d = time_lags.shape[0]
## Initialize parameters
W = 0.1 * np.random.rand(dim1, rank)
X = 0.1 * np.random.rand(dim2, rank)
theta = 0.1 * np.random.rand(d, rank)
init_para = {"W": W, "X": X, "theta": theta}
## Set hyparameters
lambda_w = 100
lambda_x = 100
lambda_theta = 100
eta = 0.01
init_hyper = {"lambda_w": lambda_w, "lambda_x": lambda_x, "lambda_theta": lambda_theta, "eta": eta}
maxiter = 1000
TRMF(dense_mat, sparse_mat, init_para, init_hyper, time_lags, maxiter)
end = time.time()
print('Running time: %d seconds'%(end - start))
import scipy.io
tensor = scipy.io.loadmat('../datasets/Birmingham-data-set/tensor.mat')
tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('../datasets/Birmingham-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('../datasets/Birmingham-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']
dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.1
# =============================================================================
### Non-random missing (NM) scenario
### Set the NM scenario by:
binary_tensor = np.zeros(tensor.shape)
for i1 in range(tensor.shape[0]):
for i2 in range(tensor.shape[1]):
binary_tensor[i1, i2, :] = np.round(random_matrix[i1, i2] + 0.5 - missing_rate)
binary_mat = binary_tensor.reshape([binary_tensor.shape[0], binary_tensor.shape[1] * binary_tensor.shape[2]])
# =============================================================================
sparse_mat = np.multiply(dense_mat, binary_mat)
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 10
time_lags = np.array([1, 2, 18])
d = time_lags.shape[0]
## Initialize parameters
W = 0.1 * np.random.rand(dim1, rank)
X = 0.1 * np.random.rand(dim2, rank)
theta = 0.1 * np.random.rand(d, rank)
init_para = {"W": W, "X": X, "theta": theta}
## Set hyparameters
lambda_w = 100
lambda_x = 100
lambda_theta = 100
eta = 0.01
init_hyper = {"lambda_w": lambda_w, "lambda_x": lambda_x, "lambda_theta": lambda_theta, "eta": eta}
maxiter = 1000
TRMF(dense_mat, sparse_mat, init_para, init_hyper, time_lags, maxiter)
end = time.time()
print('Running time: %d seconds'%(end - start))
import scipy.io
tensor = scipy.io.loadmat('../datasets/Birmingham-data-set/tensor.mat')
tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('../datasets/Birmingham-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('../datasets/Birmingham-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']
dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.3
# =============================================================================
### Non-random missing (NM) scenario
### Set the NM scenario by:
binary_tensor = np.zeros(tensor.shape)
for i1 in range(tensor.shape[0]):
for i2 in range(tensor.shape[1]):
binary_tensor[i1, i2, :] = np.round(random_matrix[i1, i2] + 0.5 - missing_rate)
binary_mat = binary_tensor.reshape([binary_tensor.shape[0], binary_tensor.shape[1] * binary_tensor.shape[2]])
# =============================================================================
sparse_mat = np.multiply(dense_mat, binary_mat)
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 10
time_lags = np.array([1, 2, 18])
d = time_lags.shape[0]
## Initialize parameters
W = 0.1 * np.random.rand(dim1, rank)
X = 0.1 * np.random.rand(dim2, rank)
theta = 0.1 * np.random.rand(d, rank)
init_para = {"W": W, "X": X, "theta": theta}
## Set hyperparameters
lambda_w = 100
lambda_x = 100
lambda_theta = 100
eta = 0.01
init_hyper = {"lambda_w": lambda_w, "lambda_x": lambda_x, "lambda_theta": lambda_theta, "eta": eta}
maxiter = 1000
TRMF(dense_mat, sparse_mat, init_para, init_hyper, time_lags, maxiter)
end = time.time()
print('Running time: %d seconds'%(end - start))
import scipy.io
tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/tensor.mat')
tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('../datasets/Hangzhou-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']
dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.2
# =============================================================================
### Random missing (RM) scenario
### Set the RM scenario by:
binary_mat = (np.round(random_tensor + 0.5 - missing_rate)
.reshape([random_tensor.shape[0], random_tensor.shape[1] * random_tensor.shape[2]]))
# =============================================================================
sparse_mat = np.multiply(dense_mat, binary_mat)
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 50
time_lags = np.array([1, 2, 108])
d = time_lags.shape[0]
## Initialize parameters
W = 0.1 * np.random.rand(dim1, rank)
X = 0.1 * np.random.rand(dim2, rank)
theta = 0.1 * np.random.rand(d, rank)
init_para = {"W": W, "X": X, "theta": theta}
## Set hyperparameters
lambda_w = 500
lambda_x = 500
lambda_theta = 500
eta = 0.03
init_hyper = {"lambda_w": lambda_w, "lambda_x": lambda_x, "lambda_theta": lambda_theta, "eta": eta}
maxiter = 1000
TRMF(dense_mat, sparse_mat, init_para, init_hyper, time_lags, maxiter)
end = time.time()
print('Running time: %d seconds'%(end - start))
import scipy.io
tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/tensor.mat')
tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('../datasets/Hangzhou-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']
dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.4
# =============================================================================
### Random missing (RM) scenario
### Set the RM scenario by:
binary_mat = (np.round(random_tensor + 0.5 - missing_rate)
.reshape([random_tensor.shape[0], random_tensor.shape[1] * random_tensor.shape[2]]))
# =============================================================================
sparse_mat = np.multiply(dense_mat, binary_mat)
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 50
time_lags = np.array([1, 2, 108])
d = time_lags.shape[0]
## Initialize parameters
W = 0.1 * np.random.rand(dim1, rank)
X = 0.1 * np.random.rand(dim2, rank)
theta = 0.1 * np.random.rand(d, rank)
init_para = {"W": W, "X": X, "theta": theta}
## Set hyperparameters
lambda_w = 500
lambda_x = 500
lambda_theta = 500
eta = 0.03
init_hyper = {"lambda_w": lambda_w, "lambda_x": lambda_x, "lambda_theta": lambda_theta, "eta": eta}
maxiter = 1000
TRMF(dense_mat, sparse_mat, init_para, init_hyper, time_lags, maxiter)
end = time.time()
print('Running time: %d seconds'%(end - start))
import scipy.io
tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/tensor.mat')
tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('../datasets/Hangzhou-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']
dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.2
# =============================================================================
### Non-random missing (NM) scenario
### Set the NM scenario by:
binary_tensor = np.zeros(tensor.shape)
for i1 in range(tensor.shape[0]):
for i2 in range(tensor.shape[1]):
binary_tensor[i1, i2, :] = np.round(random_matrix[i1, i2] + 0.5 - missing_rate)
binary_mat = binary_tensor.reshape([binary_tensor.shape[0], binary_tensor.shape[1] * binary_tensor.shape[2]])
# =============================================================================
sparse_mat = np.multiply(dense_mat, binary_mat)
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 10
time_lags = np.array([1, 2, 108])
d = time_lags.shape[0]
## Initialize parameters
W = 0.1 * np.random.rand(dim1, rank)
X = 0.1 * np.random.rand(dim2, rank)
theta = 0.1 * np.random.rand(d, rank)
init_para = {"W": W, "X": X, "theta": theta}
## Set hyperparameters
lambda_w = 500
lambda_x = 500
lambda_theta = 500
eta = 0.03
init_hyper = {"lambda_w": lambda_w, "lambda_x": lambda_x, "lambda_theta": lambda_theta, "eta": eta}
maxiter = 1000
TRMF(dense_mat, sparse_mat, init_para, init_hyper, time_lags, maxiter)
end = time.time()
print('Running time: %d seconds'%(end - start))
import scipy.io
tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/tensor.mat')
tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('../datasets/Hangzhou-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']
dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.4
# =============================================================================
### Non-random missing (NM) scenario
### Set the NM scenario by:
binary_tensor = np.zeros(tensor.shape)
for i1 in range(tensor.shape[0]):
for i2 in range(tensor.shape[1]):
binary_tensor[i1, i2, :] = np.round(random_matrix[i1, i2] + 0.5 - missing_rate)
binary_mat = binary_tensor.reshape([binary_tensor.shape[0], binary_tensor.shape[1] * binary_tensor.shape[2]])
# =============================================================================
sparse_mat = np.multiply(dense_mat, binary_mat)
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 10
time_lags = np.array([1, 2, 108])
d = time_lags.shape[0]
## Initialize parameters
W = 0.1 * np.random.rand(dim1, rank)
X = 0.1 * np.random.rand(dim2, rank)
theta = 0.1 * np.random.rand(d, rank)
init_para = {"W": W, "X": X, "theta": theta}
## Set hyperparameters
lambda_w = 500
lambda_x = 500
lambda_theta = 500
eta = 0.03
init_hyper = {"lambda_w": lambda_w, "lambda_x": lambda_x, "lambda_theta": lambda_theta, "eta": eta}
maxiter = 1000
TRMF(dense_mat, sparse_mat, init_para, init_hyper, time_lags, maxiter)
end = time.time()
print('Running time: %d seconds'%(end - start))
import pandas as pd
dense_mat = pd.read_csv('../datasets/Seattle-data-set/mat.csv', index_col = 0)
RM_mat = pd.read_csv('../datasets/Seattle-data-set/RM_mat.csv', index_col = 0)
dense_mat = dense_mat.values
RM_mat = RM_mat.values
missing_rate = 0.2
# =============================================================================
### Random missing (RM) scenario
### Set the RM scenario by:
binary_mat = np.round(RM_mat + 0.5 - missing_rate)
# =============================================================================
sparse_mat = np.multiply(dense_mat, binary_mat)
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 50
time_lags = np.array([1, 2, 288])
d = time_lags.shape[0]
## Initialize parameters
W = 0.1 * np.random.rand(dim1, rank)
X = 0.1 * np.random.rand(dim2, rank)
theta = 0.1 * np.random.rand(d, rank)
init_para = {"W": W, "X": X, "theta": theta}
## Set hyperparameters
lambda_w = 500
lambda_x = 500
lambda_theta = 500
eta = 0.03
init_hyper = {"lambda_w": lambda_w, "lambda_x": lambda_x, "lambda_theta": lambda_theta, "eta": eta}
maxiter = 1000
TRMF(dense_mat, sparse_mat, init_para, init_hyper, time_lags, maxiter)
end = time.time()
print('Running time: %d seconds'%(end - start))
import pandas as pd
dense_mat = pd.read_csv('../datasets/Seattle-data-set/mat.csv', index_col = 0)
RM_mat = pd.read_csv('../datasets/Seattle-data-set/RM_mat.csv', index_col = 0)
dense_mat = dense_mat.values
RM_mat = RM_mat.values
missing_rate = 0.4
# =============================================================================
### Random missing (RM) scenario
### Set the RM scenario by:
binary_mat = np.round(RM_mat + 0.5 - missing_rate)
# =============================================================================
sparse_mat = np.multiply(dense_mat, binary_mat)
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 50
time_lags = np.array([1, 2, 288])
d = time_lags.shape[0]
## Initialize parameters
W = 0.1 * np.random.rand(dim1, rank)
X = 0.1 * np.random.rand(dim2, rank)
theta = 0.1 * np.random.rand(d, rank)
init_para = {"W": W, "X": X, "theta": theta}
## Set hyperparameters
lambda_w = 500
lambda_x = 500
lambda_theta = 500
eta = 0.03
init_hyper = {"lambda_w": lambda_w, "lambda_x": lambda_x, "lambda_theta": lambda_theta, "eta": eta}
maxiter = 1000
TRMF(dense_mat, sparse_mat, init_para, init_hyper, time_lags, maxiter)
end = time.time()
print('Running time: %d seconds'%(end - start))
import pandas as pd
dense_mat = pd.read_csv('../datasets/Seattle-data-set/mat.csv', index_col = 0)
NM_mat = pd.read_csv('../datasets/Seattle-data-set/NM_mat.csv', index_col = 0)
dense_mat = dense_mat.values
NM_mat = NM_mat.values
missing_rate = 0.2
# =============================================================================
### Non-random missing (NM) scenario
### Set the NM scenario by:
binary_tensor = np.zeros((dense_mat.shape[0], 28, 288))
for i1 in range(binary_tensor.shape[0]):
for i2 in range(binary_tensor.shape[1]):
binary_tensor[i1, i2, :] = np.round(NM_mat[i1, i2] + 0.5 - missing_rate)
# =============================================================================
sparse_mat = np.multiply(dense_mat, binary_tensor.reshape([dense_mat.shape[0], dense_mat.shape[1]]))
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 10
time_lags = np.array([1, 2, 288])
d = time_lags.shape[0]
## Initialize parameters
W = 0.1 * np.random.rand(dim1, rank)
X = 0.1 * np.random.rand(dim2, rank)
theta = 0.1 * np.random.rand(d, rank)
init_para = {"W": W, "X": X, "theta": theta}
## Set hyperparameters
lambda_w = 500
lambda_x = 500
lambda_theta = 500
eta = 0.03
init_hyper = {"lambda_w": lambda_w, "lambda_x": lambda_x, "lambda_theta": lambda_theta, "eta": eta}
maxiter = 1000
TRMF(dense_mat, sparse_mat, init_para, init_hyper, time_lags, maxiter)
end = time.time()
print('Running time: %d seconds'%(end - start))
import pandas as pd
dense_mat = pd.read_csv('../datasets/Seattle-data-set/mat.csv', index_col = 0)
NM_mat = pd.read_csv('../datasets/Seattle-data-set/NM_mat.csv', index_col = 0)
dense_mat = dense_mat.values
NM_mat = NM_mat.values
missing_rate = 0.4
# =============================================================================
### Non-random missing (NM) scenario
### Set the NM scenario by:
binary_tensor = np.zeros((dense_mat.shape[0], 28, 288))
for i1 in range(binary_tensor.shape[0]):
for i2 in range(binary_tensor.shape[1]):
binary_tensor[i1, i2, :] = np.round(NM_mat[i1, i2] + 0.5 - missing_rate)
# =============================================================================
sparse_mat = np.multiply(dense_mat, binary_tensor.reshape([dense_mat.shape[0], dense_mat.shape[1]]))
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 10
time_lags = np.array([1, 2, 288])
d = time_lags.shape[0]
## Initialize parameters
W = 0.1 * np.random.rand(dim1, rank)
X = 0.1 * np.random.rand(dim2, rank)
theta = 0.1 * np.random.rand(d, rank)
init_para = {"W": W, "X": X, "theta": theta}
## Set hyperparameters
lambda_w = 500
lambda_x = 500
lambda_theta = 500
eta = 0.03
init_hyper = {"lambda_w": lambda_w, "lambda_x": lambda_x, "lambda_theta": lambda_theta, "eta": eta}
maxiter = 1000
TRMF(dense_mat, sparse_mat, init_para, init_hyper, time_lags, maxiter)
end = time.time()
print('Running time: %d seconds'%(end - start))
# RadarCOVID-Report
## Data Extraction
```
import datetime
import json
import logging
import os
import shutil
import tempfile
import textwrap
import uuid
import matplotlib.pyplot as plt
import matplotlib.ticker
import numpy as np
import pandas as pd
import pycountry
import retry
import seaborn as sns
%matplotlib inline
current_working_directory = os.environ.get("PWD")
if current_working_directory:
os.chdir(current_working_directory)
sns.set()
matplotlib.rcParams["figure.figsize"] = (15, 6)
extraction_datetime = datetime.datetime.utcnow()
extraction_date = extraction_datetime.strftime("%Y-%m-%d")
extraction_previous_datetime = extraction_datetime - datetime.timedelta(days=1)
extraction_previous_date = extraction_previous_datetime.strftime("%Y-%m-%d")
extraction_date_with_hour = datetime.datetime.utcnow().strftime("%Y-%m-%d@%H")
current_hour = datetime.datetime.utcnow().hour
are_today_results_partial = current_hour != 23
```
### Constants
```
from Modules.ExposureNotification import exposure_notification_io
spain_region_country_code = "ES"
germany_region_country_code = "DE"
default_backend_identifier = spain_region_country_code
backend_generation_days = 7 * 2
daily_summary_days = 7 * 4 * 3
daily_plot_days = 7 * 4
tek_dumps_load_limit = daily_summary_days + 1
```
### Parameters
```
environment_backend_identifier = os.environ.get("RADARCOVID_REPORT__BACKEND_IDENTIFIER")
if environment_backend_identifier:
report_backend_identifier = environment_backend_identifier
else:
report_backend_identifier = default_backend_identifier
report_backend_identifier
environment_enable_multi_backend_download = \
os.environ.get("RADARCOVID_REPORT__ENABLE_MULTI_BACKEND_DOWNLOAD")
if environment_enable_multi_backend_download:
report_backend_identifiers = None
else:
report_backend_identifiers = [report_backend_identifier]
report_backend_identifiers
environment_invalid_shared_diagnoses_dates = \
os.environ.get("RADARCOVID_REPORT__INVALID_SHARED_DIAGNOSES_DATES")
if environment_invalid_shared_diagnoses_dates:
invalid_shared_diagnoses_dates = environment_invalid_shared_diagnoses_dates.split(",")
else:
invalid_shared_diagnoses_dates = []
invalid_shared_diagnoses_dates
```
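All three parameters above are taken from environment variables, so the report can be re-run for another backend or with known-bad dates excluded without touching the notebook. A minimal sketch of how they might be set (the values below are only illustrative):
```
import os
# Hypothetical values - they must be set before the parameter cells above run.
os.environ["RADARCOVID_REPORT__BACKEND_IDENTIFIER"] = "DE"            # report on the German backend
os.environ["RADARCOVID_REPORT__ENABLE_MULTI_BACKEND_DOWNLOAD"] = "1"  # any non-empty value enables it
os.environ["RADARCOVID_REPORT__INVALID_SHARED_DIAGNOSES_DATES"] = "2020-12-24,2020-12-25"
```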
### COVID-19 Cases
```
report_backend_client = \
exposure_notification_io.get_backend_client_with_identifier(
backend_identifier=report_backend_identifier)
@retry.retry(tries=10, delay=10, backoff=1.1, jitter=(0, 10))
def download_cases_dataframe():
return pd.read_csv("https://raw.githubusercontent.com/owid/covid-19-data/master/public/data/owid-covid-data.csv")
confirmed_df_ = download_cases_dataframe()
confirmed_df_.iloc[0]
confirmed_df = confirmed_df_.copy()
confirmed_df = confirmed_df[["date", "new_cases", "iso_code"]]
confirmed_df.rename(
columns={
"date": "sample_date",
"iso_code": "country_code",
},
inplace=True)
def convert_iso_alpha_3_to_alpha_2(x):
try:
return pycountry.countries.get(alpha_3=x).alpha_2
except Exception as e:
logging.info(f"Error converting country ISO Alpha 3 code '{x}': {repr(e)}")
return None
confirmed_df["country_code"] = confirmed_df.country_code.apply(convert_iso_alpha_3_to_alpha_2)
confirmed_df.dropna(inplace=True)
confirmed_df["sample_date"] = pd.to_datetime(confirmed_df.sample_date, dayfirst=True)
confirmed_df["sample_date"] = confirmed_df.sample_date.dt.strftime("%Y-%m-%d")
confirmed_df.sort_values("sample_date", inplace=True)
confirmed_df.tail()
confirmed_days = pd.date_range(
start=confirmed_df.iloc[0].sample_date,
end=extraction_datetime)
confirmed_days_df = pd.DataFrame(data=confirmed_days, columns=["sample_date"])
confirmed_days_df["sample_date_string"] = \
confirmed_days_df.sample_date.dt.strftime("%Y-%m-%d")
confirmed_days_df.tail()
def sort_source_regions_for_display(source_regions: list) -> list:
if report_backend_identifier in source_regions:
source_regions = [report_backend_identifier] + \
list(sorted(set(source_regions).difference([report_backend_identifier])))
else:
source_regions = list(sorted(source_regions))
return source_regions
report_source_regions = report_backend_client.source_regions_for_date(
date=extraction_datetime.date())
report_source_regions = sort_source_regions_for_display(
source_regions=report_source_regions)
report_source_regions
def get_cases_dataframe(source_regions_for_date_function, columns_suffix=None):
source_regions_at_date_df = confirmed_days_df.copy()
source_regions_at_date_df["source_regions_at_date"] = \
source_regions_at_date_df.sample_date.apply(
lambda x: source_regions_for_date_function(date=x))
source_regions_at_date_df.sort_values("sample_date", inplace=True)
source_regions_at_date_df["_source_regions_group"] = source_regions_at_date_df. \
source_regions_at_date.apply(lambda x: ",".join(sort_source_regions_for_display(x)))
source_regions_at_date_df.tail()
#%%
source_regions_for_summary_df_ = \
source_regions_at_date_df[["sample_date", "_source_regions_group"]].copy()
source_regions_for_summary_df_.rename(columns={"_source_regions_group": "source_regions"}, inplace=True)
source_regions_for_summary_df_.tail()
#%%
confirmed_output_columns = ["sample_date", "new_cases", "covid_cases"]
confirmed_output_df = pd.DataFrame(columns=confirmed_output_columns)
for source_regions_group, source_regions_group_series in \
source_regions_at_date_df.groupby("_source_regions_group"):
source_regions_set = set(source_regions_group.split(","))
confirmed_source_regions_set_df = \
confirmed_df[confirmed_df.country_code.isin(source_regions_set)].copy()
confirmed_source_regions_group_df = \
confirmed_source_regions_set_df.groupby("sample_date").new_cases.sum() \
.reset_index().sort_values("sample_date")
confirmed_source_regions_group_df = \
confirmed_source_regions_group_df.merge(
confirmed_days_df[["sample_date_string"]].rename(
columns={"sample_date_string": "sample_date"}),
how="right")
confirmed_source_regions_group_df["new_cases"] = \
confirmed_source_regions_group_df["new_cases"].clip(lower=0)
confirmed_source_regions_group_df["covid_cases"] = \
confirmed_source_regions_group_df.new_cases.rolling(7, min_periods=0).mean().round()
confirmed_source_regions_group_df = \
confirmed_source_regions_group_df[confirmed_output_columns]
confirmed_source_regions_group_df = confirmed_source_regions_group_df.replace(0, np.nan)
confirmed_source_regions_group_df.fillna(method="ffill", inplace=True)
confirmed_source_regions_group_df = \
confirmed_source_regions_group_df[
confirmed_source_regions_group_df.sample_date.isin(
source_regions_group_series.sample_date_string)]
confirmed_output_df = confirmed_output_df.append(confirmed_source_regions_group_df)
result_df = confirmed_output_df.copy()
result_df.tail()
#%%
result_df.rename(columns={"sample_date": "sample_date_string"}, inplace=True)
result_df = confirmed_days_df[["sample_date_string"]].merge(result_df, how="left")
result_df.sort_values("sample_date_string", inplace=True)
result_df.fillna(method="ffill", inplace=True)
result_df.tail()
#%%
result_df[["new_cases", "covid_cases"]].plot()
if columns_suffix:
result_df.rename(
columns={
"new_cases": "new_cases_" + columns_suffix,
"covid_cases": "covid_cases_" + columns_suffix},
inplace=True)
return result_df, source_regions_for_summary_df_
confirmed_eu_df, source_regions_for_summary_df = get_cases_dataframe(
report_backend_client.source_regions_for_date)
confirmed_es_df, _ = get_cases_dataframe(
lambda date: [spain_region_country_code],
columns_suffix=spain_region_country_code.lower())
```
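Note that `covid_cases` above is not the raw daily count but a rounded 7-day rolling mean of `new_cases`, which smooths out day-of-week reporting artefacts. A minimal illustration with made-up numbers:
```
import pandas as pd
new_cases = pd.Series([100, 120, 80, 60, 40, 300, 0])             # made-up daily counts
covid_cases = new_cases.rolling(7, min_periods=0).mean().round()  # same smoothing as above
print(covid_cases.tolist())  # [100.0, 110.0, 100.0, 90.0, 80.0, 117.0, 100.0]
```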
### Extract API TEKs
```
raw_zip_path_prefix = "Data/TEKs/Raw/"
base_backend_identifiers = [report_backend_identifier]
multi_backend_exposure_keys_df = \
exposure_notification_io.download_exposure_keys_from_backends(
backend_identifiers=report_backend_identifiers,
generation_days=backend_generation_days,
fail_on_error_backend_identifiers=base_backend_identifiers,
save_raw_zip_path_prefix=raw_zip_path_prefix)
multi_backend_exposure_keys_df["region"] = multi_backend_exposure_keys_df["backend_identifier"]
multi_backend_exposure_keys_df.rename(
columns={
"generation_datetime": "sample_datetime",
"generation_date_string": "sample_date_string",
},
inplace=True)
multi_backend_exposure_keys_df.head()
early_teks_df = multi_backend_exposure_keys_df[
multi_backend_exposure_keys_df.rolling_period < 144].copy()
early_teks_df["rolling_period_in_hours"] = early_teks_df.rolling_period / 6
early_teks_df[early_teks_df.sample_date_string != extraction_date] \
.rolling_period_in_hours.hist(bins=list(range(24)))
early_teks_df[early_teks_df.sample_date_string == extraction_date] \
.rolling_period_in_hours.hist(bins=list(range(24)))
multi_backend_exposure_keys_df = multi_backend_exposure_keys_df[[
"sample_date_string", "region", "key_data"]]
multi_backend_exposure_keys_df.head()
active_regions = \
multi_backend_exposure_keys_df.groupby("region").key_data.nunique().sort_values().index.unique().tolist()
active_regions
multi_backend_summary_df = multi_backend_exposure_keys_df.groupby(
["sample_date_string", "region"]).key_data.nunique().reset_index() \
.pivot(index="sample_date_string", columns="region") \
.sort_index(ascending=False)
multi_backend_summary_df.rename(
columns={"key_data": "shared_teks_by_generation_date"},
inplace=True)
multi_backend_summary_df.rename_axis("sample_date", inplace=True)
multi_backend_summary_df = multi_backend_summary_df.fillna(0).astype(int)
multi_backend_summary_df = multi_backend_summary_df.head(backend_generation_days)
multi_backend_summary_df.head()
def compute_keys_cross_sharing(x):
teks_x = x.key_data_x.item()
common_teks = set(teks_x).intersection(x.key_data_y.item())
common_teks_fraction = len(common_teks) / len(teks_x)
return pd.Series(dict(
common_teks=common_teks,
common_teks_fraction=common_teks_fraction,
))
multi_backend_exposure_keys_by_region_df = \
multi_backend_exposure_keys_df.groupby("region").key_data.unique().reset_index()
multi_backend_exposure_keys_by_region_df["_merge"] = True
multi_backend_exposure_keys_by_region_combination_df = \
multi_backend_exposure_keys_by_region_df.merge(
multi_backend_exposure_keys_by_region_df, on="_merge")
multi_backend_exposure_keys_by_region_combination_df.drop(
columns=["_merge"], inplace=True)
if multi_backend_exposure_keys_by_region_combination_df.region_x.nunique() > 1:
multi_backend_exposure_keys_by_region_combination_df = \
multi_backend_exposure_keys_by_region_combination_df[
multi_backend_exposure_keys_by_region_combination_df.region_x !=
multi_backend_exposure_keys_by_region_combination_df.region_y]
multi_backend_exposure_keys_cross_sharing_df = \
multi_backend_exposure_keys_by_region_combination_df \
.groupby(["region_x", "region_y"]) \
.apply(compute_keys_cross_sharing) \
.reset_index()
multi_backend_cross_sharing_summary_df = \
multi_backend_exposure_keys_cross_sharing_df.pivot_table(
values=["common_teks_fraction"],
columns="region_x",
index="region_y",
aggfunc=lambda x: x.item())
multi_backend_cross_sharing_summary_df
multi_backend_without_active_region_exposure_keys_df = \
multi_backend_exposure_keys_df[multi_backend_exposure_keys_df.region != report_backend_identifier]
multi_backend_without_active_region = \
multi_backend_without_active_region_exposure_keys_df.groupby("region").key_data.nunique().sort_values().index.unique().tolist()
multi_backend_without_active_region
exposure_keys_summary_df = multi_backend_exposure_keys_df[
    multi_backend_exposure_keys_df.region == report_backend_identifier].copy()
exposure_keys_summary_df.drop(columns=["region"], inplace=True)
exposure_keys_summary_df = \
exposure_keys_summary_df.groupby(["sample_date_string"]).key_data.nunique().to_frame()
exposure_keys_summary_df = \
exposure_keys_summary_df.reset_index().set_index("sample_date_string")
exposure_keys_summary_df.sort_index(ascending=False, inplace=True)
exposure_keys_summary_df.rename(columns={"key_data": "shared_teks_by_generation_date"}, inplace=True)
exposure_keys_summary_df.head()
```
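The cross-sharing table above reports, for each ordered backend pair (A, B), the fraction of A's TEKs that also appear in B's dump. A toy illustration of that ratio with made-up keys:
```
# Made-up TEKs for two backends A and B.
teks_a = ["k1", "k2", "k3", "k4"]
teks_b = ["k3", "k4", "k5"]
common_teks = set(teks_a).intersection(teks_b)
common_teks_fraction = len(common_teks) / len(teks_a)
print(common_teks_fraction)  # 0.5 -> half of A's TEKs are also served by B
```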
### Dump API TEKs
```
tek_list_df = multi_backend_exposure_keys_df[
["sample_date_string", "region", "key_data"]].copy()
tek_list_df["key_data"] = tek_list_df["key_data"].apply(str)
tek_list_df.rename(columns={
"sample_date_string": "sample_date",
"key_data": "tek_list"}, inplace=True)
tek_list_df = tek_list_df.groupby(
["sample_date", "region"]).tek_list.unique().reset_index()
tek_list_df["extraction_date"] = extraction_date
tek_list_df["extraction_date_with_hour"] = extraction_date_with_hour
tek_list_path_prefix = "Data/TEKs/"
tek_list_current_path = tek_list_path_prefix + "Current/RadarCOVID-TEKs.json"
tek_list_daily_path = tek_list_path_prefix + f"Daily/RadarCOVID-TEKs-{extraction_date}.json"
tek_list_hourly_path = tek_list_path_prefix + f"Hourly/RadarCOVID-TEKs-{extraction_date_with_hour}.json"
for path in [tek_list_current_path, tek_list_daily_path, tek_list_hourly_path]:
os.makedirs(os.path.dirname(path), exist_ok=True)
tek_list_base_df = tek_list_df[tek_list_df.region == report_backend_identifier]
tek_list_base_df.drop(columns=["extraction_date", "extraction_date_with_hour"]).to_json(
tek_list_current_path,
lines=True, orient="records")
tek_list_base_df.drop(columns=["extraction_date_with_hour"]).to_json(
tek_list_daily_path,
lines=True, orient="records")
tek_list_base_df.to_json(
tek_list_hourly_path,
lines=True, orient="records")
tek_list_base_df.head()
```
### Load TEK Dumps
```
import glob
def load_extracted_teks(mode, region=None, limit=None) -> pd.DataFrame:
extracted_teks_df = pd.DataFrame(columns=["region"])
file_paths = list(reversed(sorted(glob.glob(tek_list_path_prefix + mode + "/RadarCOVID-TEKs-*.json"))))
if limit:
file_paths = file_paths[:limit]
for file_path in file_paths:
logging.info(f"Loading TEKs from '{file_path}'...")
iteration_extracted_teks_df = pd.read_json(file_path, lines=True)
extracted_teks_df = extracted_teks_df.append(
iteration_extracted_teks_df, sort=False)
extracted_teks_df["region"] = \
extracted_teks_df.region.fillna(spain_region_country_code).copy()
if region:
extracted_teks_df = \
extracted_teks_df[extracted_teks_df.region == region]
return extracted_teks_df
daily_extracted_teks_df = load_extracted_teks(
mode="Daily",
region=report_backend_identifier,
limit=tek_dumps_load_limit)
daily_extracted_teks_df.head()
exposure_keys_summary_df_ = daily_extracted_teks_df \
.sort_values("extraction_date", ascending=False) \
.groupby("sample_date").tek_list.first() \
.to_frame()
exposure_keys_summary_df_.index.name = "sample_date_string"
exposure_keys_summary_df_["tek_list"] = \
exposure_keys_summary_df_.tek_list.apply(len)
exposure_keys_summary_df_ = exposure_keys_summary_df_ \
.rename(columns={"tek_list": "shared_teks_by_generation_date"}) \
.sort_index(ascending=False)
exposure_keys_summary_df = exposure_keys_summary_df_
exposure_keys_summary_df.head()
```
### Daily New TEKs
```
tek_list_df = daily_extracted_teks_df.groupby("extraction_date").tek_list.apply(
lambda x: set(sum(x, []))).reset_index()
tek_list_df = tek_list_df.set_index("extraction_date").sort_index(ascending=True)
tek_list_df.head()
def compute_teks_by_generation_and_upload_date(date):
day_new_teks_set_df = tek_list_df.copy().diff()
try:
day_new_teks_set = day_new_teks_set_df[
day_new_teks_set_df.index == date].tek_list.item()
except ValueError:
day_new_teks_set = None
if pd.isna(day_new_teks_set):
day_new_teks_set = set()
day_new_teks_df = daily_extracted_teks_df[
daily_extracted_teks_df.extraction_date == date].copy()
day_new_teks_df["shared_teks"] = \
day_new_teks_df.tek_list.apply(lambda x: set(x).intersection(day_new_teks_set))
day_new_teks_df["shared_teks"] = \
day_new_teks_df.shared_teks.apply(len)
day_new_teks_df["upload_date"] = date
day_new_teks_df.rename(columns={"sample_date": "generation_date"}, inplace=True)
day_new_teks_df = day_new_teks_df[
["upload_date", "generation_date", "shared_teks"]]
day_new_teks_df["generation_to_upload_days"] = \
(pd.to_datetime(day_new_teks_df.upload_date) -
pd.to_datetime(day_new_teks_df.generation_date)).dt.days
day_new_teks_df = day_new_teks_df[day_new_teks_df.shared_teks > 0]
return day_new_teks_df
shared_teks_generation_to_upload_df = pd.DataFrame()
for upload_date in daily_extracted_teks_df.extraction_date.unique():
shared_teks_generation_to_upload_df = \
shared_teks_generation_to_upload_df.append(
compute_teks_by_generation_and_upload_date(date=upload_date))
shared_teks_generation_to_upload_df \
.sort_values(["upload_date", "generation_date"], ascending=False, inplace=True)
shared_teks_generation_to_upload_df.tail()
today_new_teks_df = \
shared_teks_generation_to_upload_df[
shared_teks_generation_to_upload_df.upload_date == extraction_date].copy()
today_new_teks_df.tail()
if not today_new_teks_df.empty:
today_new_teks_df.set_index("generation_to_upload_days") \
.sort_index().shared_teks.plot.bar()
generation_to_upload_period_pivot_df = \
shared_teks_generation_to_upload_df[
["upload_date", "generation_to_upload_days", "shared_teks"]] \
.pivot(index="upload_date", columns="generation_to_upload_days") \
.sort_index(ascending=False).fillna(0).astype(int) \
.droplevel(level=0, axis=1)
generation_to_upload_period_pivot_df.head()
new_tek_df = tek_list_df.diff().tek_list.apply(
lambda x: len(x) if not pd.isna(x) else None).to_frame().reset_index()
new_tek_df.rename(columns={
"tek_list": "shared_teks_by_upload_date",
"extraction_date": "sample_date_string",}, inplace=True)
new_tek_df.tail()
shared_teks_uploaded_on_generation_date_df = shared_teks_generation_to_upload_df[
shared_teks_generation_to_upload_df.generation_to_upload_days == 0] \
[["upload_date", "shared_teks"]].rename(
columns={
"upload_date": "sample_date_string",
"shared_teks": "shared_teks_uploaded_on_generation_date",
})
shared_teks_uploaded_on_generation_date_df.head()
estimated_shared_diagnoses_df = shared_teks_generation_to_upload_df \
.groupby(["upload_date"]).shared_teks.max().reset_index() \
.sort_values(["upload_date"], ascending=False) \
.rename(columns={
"upload_date": "sample_date_string",
"shared_teks": "shared_diagnoses",
})
invalid_shared_diagnoses_dates_mask = \
estimated_shared_diagnoses_df.sample_date_string.isin(invalid_shared_diagnoses_dates)
estimated_shared_diagnoses_df[invalid_shared_diagnoses_dates_mask] = 0
estimated_shared_diagnoses_df.head()
```
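The `.diff()` call above works because subtracting Python sets is a set difference, so each extraction date ends up with the TEKs that appear in its dump but not in the previous one, i.e. the keys uploaded that day. The underlying idea on made-up keys:
```
# Made-up dumps from two consecutive extraction dates.
previous_dump = {"k1", "k2"}
todays_dump = {"k1", "k2", "k3", "k4"}
new_teks_today = todays_dump - previous_dump
print(new_teks_today)  # {'k3', 'k4'} (order may vary) -> TEKs first seen in today's dump
```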
### Hourly New TEKs
```
hourly_extracted_teks_df = load_extracted_teks(
mode="Hourly", region=report_backend_identifier, limit=25)
hourly_extracted_teks_df.head()
hourly_new_tek_count_df = hourly_extracted_teks_df \
.groupby("extraction_date_with_hour").tek_list. \
apply(lambda x: set(sum(x, []))).reset_index().copy()
hourly_new_tek_count_df = hourly_new_tek_count_df.set_index("extraction_date_with_hour") \
.sort_index(ascending=True)
hourly_new_tek_count_df["new_tek_list"] = hourly_new_tek_count_df.tek_list.diff()
hourly_new_tek_count_df["new_tek_count"] = hourly_new_tek_count_df.new_tek_list.apply(
lambda x: len(x) if not pd.isna(x) else 0)
hourly_new_tek_count_df.rename(columns={
"new_tek_count": "shared_teks_by_upload_date"}, inplace=True)
hourly_new_tek_count_df = hourly_new_tek_count_df.reset_index()[[
"extraction_date_with_hour", "shared_teks_by_upload_date"]]
hourly_new_tek_count_df.head()
hourly_summary_df = hourly_new_tek_count_df.copy()
hourly_summary_df.set_index("extraction_date_with_hour", inplace=True)
hourly_summary_df = hourly_summary_df.fillna(0).astype(int).reset_index()
hourly_summary_df["datetime_utc"] = pd.to_datetime(
hourly_summary_df.extraction_date_with_hour, format="%Y-%m-%d@%H")
hourly_summary_df.set_index("datetime_utc", inplace=True)
hourly_summary_df = hourly_summary_df.tail(-1)
hourly_summary_df.head()
```
### Official Statistics
```
import requests
import pandas.io.json
official_stats_response = requests.get("https://radarcovid.covid19.gob.es/kpi/statistics/basics")
official_stats_response.raise_for_status()
official_stats_df_ = pandas.io.json.json_normalize(official_stats_response.json())
official_stats_df = official_stats_df_.copy()
official_stats_df["date"] = pd.to_datetime(official_stats_df["date"], dayfirst=True)
official_stats_df.head()
official_stats_column_map = {
"date": "sample_date",
"applicationsDownloads.totalAcummulated": "app_downloads_es_accumulated",
"communicatedContagions.totalAcummulated": "shared_diagnoses_es_accumulated",
}
accumulated_suffix = "_accumulated"
accumulated_values_columns = \
list(filter(lambda x: x.endswith(accumulated_suffix), official_stats_column_map.values()))
interpolated_values_columns = \
list(map(lambda x: x[:-len(accumulated_suffix)], accumulated_values_columns))
official_stats_df = \
official_stats_df[official_stats_column_map.keys()] \
.rename(columns=official_stats_column_map)
official_stats_df["extraction_date"] = extraction_date
official_stats_df.head()
official_stats_path = "Data/Statistics/Current/RadarCOVID-Statistics.json"
previous_official_stats_df = pd.read_json(official_stats_path, orient="records", lines=True)
previous_official_stats_df["sample_date"] = pd.to_datetime(previous_official_stats_df["sample_date"], dayfirst=True)
official_stats_df = official_stats_df.append(previous_official_stats_df)
official_stats_df.head()
official_stats_df = official_stats_df[~(official_stats_df.shared_diagnoses_es_accumulated == 0)]
official_stats_df.sort_values("extraction_date", ascending=False, inplace=True)
official_stats_df.drop_duplicates(subset=["sample_date"], keep="first", inplace=True)
official_stats_df.head()
official_stats_stored_df = official_stats_df.copy()
official_stats_stored_df["sample_date"] = official_stats_stored_df.sample_date.dt.strftime("%Y-%m-%d")
official_stats_stored_df.to_json(official_stats_path, orient="records", lines=True)
official_stats_df.drop(columns=["extraction_date"], inplace=True)
official_stats_df = confirmed_days_df.merge(official_stats_df, how="left")
official_stats_df.sort_values("sample_date", ascending=False, inplace=True)
official_stats_df.head()
official_stats_df[accumulated_values_columns] = \
official_stats_df[accumulated_values_columns] \
.astype(float).interpolate(limit_area="inside")
official_stats_df[interpolated_values_columns] = \
official_stats_df[accumulated_values_columns].diff(periods=-1)
official_stats_df.drop(columns="sample_date", inplace=True)
official_stats_df.head()
```
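The official API only exposes accumulated counters, so the cell above first interpolates the days without a data point and then takes `diff(periods=-1)` (the frame is sorted newest-first) to recover per-day values. The same transformation on made-up numbers:
```
import pandas as pd
# Accumulated counter sorted newest-first, with one missing day in the middle.
accumulated = pd.Series([150.0, None, 100.0], index=["2020-12-03", "2020-12-02", "2020-12-01"])
accumulated = accumulated.astype(float).interpolate(limit_area="inside")  # fills 2020-12-02 with 125.0
daily = accumulated.diff(periods=-1)  # 25.0, 25.0, NaN (no earlier day to subtract for the oldest row)
```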
### Data Merge
```
result_summary_df = exposure_keys_summary_df.merge(
new_tek_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = result_summary_df.merge(
shared_teks_uploaded_on_generation_date_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = result_summary_df.merge(
estimated_shared_diagnoses_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = result_summary_df.merge(
official_stats_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = confirmed_eu_df.tail(daily_summary_days).merge(
result_summary_df, on=["sample_date_string"], how="left")
result_summary_df.head()
result_summary_df = confirmed_es_df.tail(daily_summary_days).merge(
result_summary_df, on=["sample_date_string"], how="left")
result_summary_df.head()
result_summary_df["sample_date"] = pd.to_datetime(result_summary_df.sample_date_string)
result_summary_df = result_summary_df.merge(source_regions_for_summary_df, how="left")
result_summary_df.set_index(["sample_date", "source_regions"], inplace=True)
result_summary_df.drop(columns=["sample_date_string"], inplace=True)
result_summary_df.sort_index(ascending=False, inplace=True)
result_summary_df.head()
with pd.option_context("mode.use_inf_as_na", True):
result_summary_df = result_summary_df.fillna(0).astype(int)
result_summary_df["teks_per_shared_diagnosis"] = \
(result_summary_df.shared_teks_by_upload_date / result_summary_df.shared_diagnoses).fillna(0)
result_summary_df["shared_diagnoses_per_covid_case"] = \
(result_summary_df.shared_diagnoses / result_summary_df.covid_cases).fillna(0)
result_summary_df["shared_diagnoses_per_covid_case_es"] = \
(result_summary_df.shared_diagnoses_es / result_summary_df.covid_cases_es).fillna(0)
result_summary_df.head(daily_plot_days)
def compute_aggregated_results_summary(days) -> pd.DataFrame:
aggregated_result_summary_df = result_summary_df.copy()
aggregated_result_summary_df["covid_cases_for_ratio"] = \
aggregated_result_summary_df.covid_cases.mask(
aggregated_result_summary_df.shared_diagnoses == 0, 0)
aggregated_result_summary_df["covid_cases_for_ratio_es"] = \
aggregated_result_summary_df.covid_cases_es.mask(
aggregated_result_summary_df.shared_diagnoses_es == 0, 0)
aggregated_result_summary_df = aggregated_result_summary_df \
.sort_index(ascending=True).fillna(0).rolling(days).agg({
"covid_cases": "sum",
"covid_cases_es": "sum",
"covid_cases_for_ratio": "sum",
"covid_cases_for_ratio_es": "sum",
"shared_teks_by_generation_date": "sum",
"shared_teks_by_upload_date": "sum",
"shared_diagnoses": "sum",
"shared_diagnoses_es": "sum",
}).sort_index(ascending=False)
with pd.option_context("mode.use_inf_as_na", True):
aggregated_result_summary_df = aggregated_result_summary_df.fillna(0).astype(int)
aggregated_result_summary_df["teks_per_shared_diagnosis"] = \
(aggregated_result_summary_df.shared_teks_by_upload_date /
aggregated_result_summary_df.covid_cases_for_ratio).fillna(0)
aggregated_result_summary_df["shared_diagnoses_per_covid_case"] = \
(aggregated_result_summary_df.shared_diagnoses /
aggregated_result_summary_df.covid_cases_for_ratio).fillna(0)
aggregated_result_summary_df["shared_diagnoses_per_covid_case_es"] = \
(aggregated_result_summary_df.shared_diagnoses_es /
aggregated_result_summary_df.covid_cases_for_ratio_es).fillna(0)
return aggregated_result_summary_df
aggregated_result_with_7_days_window_summary_df = compute_aggregated_results_summary(days=7)
aggregated_result_with_7_days_window_summary_df.head()
last_7_days_summary = aggregated_result_with_7_days_window_summary_df.to_dict(orient="records")[1]
last_7_days_summary
aggregated_result_with_14_days_window_summary_df = compute_aggregated_results_summary(days=13)
last_14_days_summary = aggregated_result_with_14_days_window_summary_df.to_dict(orient="records")[1]
last_14_days_summary
```
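For reference, the two key ratios computed above are plain per-row quotients; with, say, 1000 reported COVID-19 cases, 20 estimated shared diagnoses and 240 uploaded TEKs on a given day (made-up numbers):
```
covid_cases = 1000
shared_diagnoses = 20
shared_teks_by_upload_date = 240
teks_per_shared_diagnosis = shared_teks_by_upload_date / shared_diagnoses  # 12.0 TEKs per shared diagnosis
shared_diagnoses_per_covid_case = shared_diagnoses / covid_cases           # 0.02 -> 2% usage ratio
```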
## Report Results
```
display_column_name_mapping = {
"sample_date": "Sample\u00A0Date\u00A0(UTC)",
"source_regions": "Source Countries",
"datetime_utc": "Timestamp (UTC)",
"upload_date": "Upload Date (UTC)",
"generation_to_upload_days": "Generation to Upload Period in Days",
"region": "Backend",
"region_x": "Backend\u00A0(A)",
"region_y": "Backend\u00A0(B)",
"common_teks": "Common TEKs Shared Between Backends",
"common_teks_fraction": "Fraction of TEKs in Backend (A) Available in Backend (B)",
"covid_cases": "COVID-19 Cases (Source Countries)",
"shared_teks_by_generation_date": "Shared TEKs by Generation Date (Source Countries)",
"shared_teks_by_upload_date": "Shared TEKs by Upload Date (Source Countries)",
"shared_teks_uploaded_on_generation_date": "Shared TEKs Uploaded on Generation Date (Source Countries)",
"shared_diagnoses": "Shared Diagnoses (Source Countries – Estimation)",
"teks_per_shared_diagnosis": "TEKs Uploaded per Shared Diagnosis (Source Countries)",
"shared_diagnoses_per_covid_case": "Usage Ratio (Source Countries)",
"covid_cases_es": "COVID-19 Cases (Spain)",
"app_downloads_es": "App Downloads (Spain – Official)",
"shared_diagnoses_es": "Shared Diagnoses (Spain – Official)",
"shared_diagnoses_per_covid_case_es": "Usage Ratio (Spain)",
}
summary_columns = [
"covid_cases",
"shared_teks_by_generation_date",
"shared_teks_by_upload_date",
"shared_teks_uploaded_on_generation_date",
"shared_diagnoses",
"teks_per_shared_diagnosis",
"shared_diagnoses_per_covid_case",
"covid_cases_es",
"app_downloads_es",
"shared_diagnoses_es",
"shared_diagnoses_per_covid_case_es",
]
summary_percentage_columns = [
"shared_diagnoses_per_covid_case_es",
"shared_diagnoses_per_covid_case",
]
```
### Daily Summary Table
```
result_summary_df_ = result_summary_df.copy()
result_summary_df = result_summary_df[summary_columns]
result_summary_with_display_names_df = result_summary_df \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping)
result_summary_with_display_names_df
```
### Daily Summary Plots
```
result_plot_summary_df = result_summary_df.head(daily_plot_days)[summary_columns] \
.droplevel(level=["source_regions"]) \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping)
summary_ax_list = result_plot_summary_df.sort_index(ascending=True).plot.bar(
title=f"Daily Summary",
rot=45, subplots=True, figsize=(15, 30), legend=False)
ax_ = summary_ax_list[0]
ax_.get_figure().tight_layout()
ax_.get_figure().subplots_adjust(top=0.95)
_ = ax_.set_xticklabels(sorted(result_plot_summary_df.index.strftime("%Y-%m-%d").tolist()))
for percentage_column in summary_percentage_columns:
percentage_column_index = summary_columns.index(percentage_column)
summary_ax_list[percentage_column_index].yaxis \
.set_major_formatter(matplotlib.ticker.PercentFormatter(1.0))
```
### Daily Generation to Upload Period Table
```
display_generation_to_upload_period_pivot_df = \
generation_to_upload_period_pivot_df \
.head(backend_generation_days)
display_generation_to_upload_period_pivot_df \
.head(backend_generation_days) \
.rename_axis(columns=display_column_name_mapping) \
.rename_axis(index=display_column_name_mapping)
fig, generation_to_upload_period_pivot_table_ax = plt.subplots(
figsize=(12, 1 + 0.6 * len(display_generation_to_upload_period_pivot_df)))
generation_to_upload_period_pivot_table_ax.set_title(
"Shared TEKs Generation to Upload Period Table")
sns.heatmap(
data=display_generation_to_upload_period_pivot_df
.rename_axis(columns=display_column_name_mapping)
.rename_axis(index=display_column_name_mapping),
fmt=".0f",
annot=True,
ax=generation_to_upload_period_pivot_table_ax)
generation_to_upload_period_pivot_table_ax.get_figure().tight_layout()
```
### Hourly Summary Plots
```
hourly_summary_ax_list = hourly_summary_df \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.plot.bar(
title=f"Last 24h Summary",
rot=45, subplots=True, legend=False)
ax_ = hourly_summary_ax_list[-1]
ax_.get_figure().tight_layout()
ax_.get_figure().subplots_adjust(top=0.9)
_ = ax_.set_xticklabels(sorted(hourly_summary_df.index.strftime("%Y-%m-%d@%H").tolist()))
```
### Publish Results
```
github_repository = os.environ.get("GITHUB_REPOSITORY")
if github_repository is None:
github_repository = "pvieito/Radar-STATS"
github_project_base_url = "https://github.com/" + github_repository
display_formatters = {
display_column_name_mapping["teks_per_shared_diagnosis"]: lambda x: f"{x:.2f}" if x != 0 else "",
display_column_name_mapping["shared_diagnoses_per_covid_case"]: lambda x: f"{x:.2%}" if x != 0 else "",
display_column_name_mapping["shared_diagnoses_per_covid_case_es"]: lambda x: f"{x:.2%}" if x != 0 else "",
}
general_columns = \
list(filter(lambda x: x not in display_formatters, display_column_name_mapping.values()))
general_formatter = lambda x: f"{x}" if x != 0 else ""
display_formatters.update(dict(map(lambda x: (x, general_formatter), general_columns)))
daily_summary_table_html = result_summary_with_display_names_df \
.head(daily_plot_days) \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.to_html(formatters=display_formatters)
multi_backend_summary_table_html = multi_backend_summary_df \
.head(daily_plot_days) \
.rename_axis(columns=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.rename_axis(index=display_column_name_mapping) \
.to_html(formatters=display_formatters)
def format_multi_backend_cross_sharing_fraction(x):
if pd.isna(x):
return "-"
elif round(x * 100, 1) == 0:
return ""
else:
return f"{x:.1%}"
multi_backend_cross_sharing_summary_table_html = multi_backend_cross_sharing_summary_df \
.rename_axis(columns=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.rename_axis(index=display_column_name_mapping) \
.to_html(
classes="table-center",
formatters=display_formatters,
float_format=format_multi_backend_cross_sharing_fraction)
multi_backend_cross_sharing_summary_table_html = \
multi_backend_cross_sharing_summary_table_html \
.replace("<tr>","<tr style=\"text-align: center;\">")
extraction_date_result_summary_df = \
result_summary_df[result_summary_df.index.get_level_values("sample_date") == extraction_date]
extraction_date_result_hourly_summary_df = \
hourly_summary_df[hourly_summary_df.extraction_date_with_hour == extraction_date_with_hour]
covid_cases = \
extraction_date_result_summary_df.covid_cases.item()
shared_teks_by_generation_date = \
extraction_date_result_summary_df.shared_teks_by_generation_date.item()
shared_teks_by_upload_date = \
extraction_date_result_summary_df.shared_teks_by_upload_date.item()
shared_diagnoses = \
extraction_date_result_summary_df.shared_diagnoses.item()
teks_per_shared_diagnosis = \
extraction_date_result_summary_df.teks_per_shared_diagnosis.item()
shared_diagnoses_per_covid_case = \
extraction_date_result_summary_df.shared_diagnoses_per_covid_case.item()
shared_teks_by_upload_date_last_hour = \
extraction_date_result_hourly_summary_df.shared_teks_by_upload_date.sum().astype(int)
display_source_regions = ", ".join(report_source_regions)
if len(report_source_regions) == 1:
display_brief_source_regions = report_source_regions[0]
else:
display_brief_source_regions = f"{len(report_source_regions)} 🇪🇺"
def get_temporary_image_path() -> str:
return os.path.join(tempfile.gettempdir(), str(uuid.uuid4()) + ".png")
def save_temporary_plot_image(ax):
if isinstance(ax, np.ndarray):
ax = ax[0]
media_path = get_temporary_image_path()
ax.get_figure().savefig(media_path)
return media_path
def save_temporary_dataframe_image(df):
import dataframe_image as dfi
df = df.copy()
df_styler = df.style.format(display_formatters)
media_path = get_temporary_image_path()
dfi.export(df_styler, media_path)
return media_path
summary_plots_image_path = save_temporary_plot_image(
ax=summary_ax_list)
summary_table_image_path = save_temporary_dataframe_image(
df=result_summary_with_display_names_df)
hourly_summary_plots_image_path = save_temporary_plot_image(
ax=hourly_summary_ax_list)
multi_backend_summary_table_image_path = save_temporary_dataframe_image(
df=multi_backend_summary_df)
generation_to_upload_period_pivot_table_image_path = save_temporary_plot_image(
ax=generation_to_upload_period_pivot_table_ax)
```
### Save Results
```
report_resources_path_prefix = "Data/Resources/Current/RadarCOVID-Report-"
result_summary_df.to_csv(
report_resources_path_prefix + "Summary-Table.csv")
result_summary_df.to_html(
report_resources_path_prefix + "Summary-Table.html")
hourly_summary_df.to_csv(
report_resources_path_prefix + "Hourly-Summary-Table.csv")
multi_backend_summary_df.to_csv(
report_resources_path_prefix + "Multi-Backend-Summary-Table.csv")
multi_backend_cross_sharing_summary_df.to_csv(
report_resources_path_prefix + "Multi-Backend-Cross-Sharing-Summary-Table.csv")
generation_to_upload_period_pivot_df.to_csv(
report_resources_path_prefix + "Generation-Upload-Period-Table.csv")
_ = shutil.copyfile(
summary_plots_image_path,
report_resources_path_prefix + "Summary-Plots.png")
_ = shutil.copyfile(
summary_table_image_path,
report_resources_path_prefix + "Summary-Table.png")
_ = shutil.copyfile(
hourly_summary_plots_image_path,
report_resources_path_prefix + "Hourly-Summary-Plots.png")
_ = shutil.copyfile(
multi_backend_summary_table_image_path,
report_resources_path_prefix + "Multi-Backend-Summary-Table.png")
_ = shutil.copyfile(
generation_to_upload_period_pivot_table_image_path,
report_resources_path_prefix + "Generation-Upload-Period-Table.png")
```
### Publish Results as JSON
```
def generate_summary_api_results(df: pd.DataFrame) -> list:
api_df = df.reset_index().copy()
api_df["sample_date_string"] = \
api_df["sample_date"].dt.strftime("%Y-%m-%d")
api_df["source_regions"] = \
api_df["source_regions"].apply(lambda x: x.split(","))
return api_df.to_dict(orient="records")
summary_api_results = \
generate_summary_api_results(df=result_summary_df)
today_summary_api_results = \
generate_summary_api_results(df=extraction_date_result_summary_df)[0]
summary_results = dict(
backend_identifier=report_backend_identifier,
source_regions=report_source_regions,
extraction_datetime=extraction_datetime,
extraction_date=extraction_date,
extraction_date_with_hour=extraction_date_with_hour,
last_hour=dict(
shared_teks_by_upload_date=shared_teks_by_upload_date_last_hour,
shared_diagnoses=0,
),
today=today_summary_api_results,
last_7_days=last_7_days_summary,
last_14_days=last_14_days_summary,
daily_results=summary_api_results)
summary_results = \
json.loads(pd.Series([summary_results]).to_json(orient="records"))[0]
with open(report_resources_path_prefix + "Summary-Results.json", "w") as f:
json.dump(summary_results, f, indent=4)
```
### Publish on README
```
with open("Data/Templates/README.md", "r") as f:
readme_contents = f.read()
readme_contents = readme_contents.format(
extraction_date_with_hour=extraction_date_with_hour,
github_project_base_url=github_project_base_url,
daily_summary_table_html=daily_summary_table_html,
multi_backend_summary_table_html=multi_backend_summary_table_html,
multi_backend_cross_sharing_summary_table_html=multi_backend_cross_sharing_summary_table_html,
display_source_regions=display_source_regions)
with open("README.md", "w") as f:
f.write(readme_contents)
```
### Publish on Twitter
```
enable_share_to_twitter = os.environ.get("RADARCOVID_REPORT__ENABLE_PUBLISH_ON_TWITTER")
github_event_name = os.environ.get("GITHUB_EVENT_NAME")
if enable_share_to_twitter and github_event_name == "schedule" and \
(shared_teks_by_upload_date_last_hour or not are_today_results_partial):
import tweepy
twitter_api_auth_keys = os.environ["RADARCOVID_REPORT__TWITTER_API_AUTH_KEYS"]
twitter_api_auth_keys = twitter_api_auth_keys.split(":")
auth = tweepy.OAuthHandler(twitter_api_auth_keys[0], twitter_api_auth_keys[1])
auth.set_access_token(twitter_api_auth_keys[2], twitter_api_auth_keys[3])
api = tweepy.API(auth)
summary_plots_media = api.media_upload(summary_plots_image_path)
summary_table_media = api.media_upload(summary_table_image_path)
generation_to_upload_period_pivot_table_image_media = api.media_upload(generation_to_upload_period_pivot_table_image_path)
media_ids = [
summary_plots_media.media_id,
summary_table_media.media_id,
generation_to_upload_period_pivot_table_image_media.media_id,
]
if are_today_results_partial:
today_addendum = " (Partial)"
else:
today_addendum = ""
def format_shared_diagnoses_per_covid_case(value) -> str:
if value == 0:
return "–"
return f"≤{value:.2%}"
display_shared_diagnoses_per_covid_case = \
format_shared_diagnoses_per_covid_case(value=shared_diagnoses_per_covid_case)
display_last_14_days_shared_diagnoses_per_covid_case = \
format_shared_diagnoses_per_covid_case(value=last_14_days_summary["shared_diagnoses_per_covid_case"])
display_last_14_days_shared_diagnoses_per_covid_case_es = \
format_shared_diagnoses_per_covid_case(value=last_14_days_summary["shared_diagnoses_per_covid_case_es"])
status = textwrap.dedent(f"""
#RadarCOVID – {extraction_date_with_hour}
Today{today_addendum}:
- Uploaded TEKs: {shared_teks_by_upload_date:.0f} ({shared_teks_by_upload_date_last_hour:+d} last hour)
- Shared Diagnoses: ≤{shared_diagnoses:.0f}
- Usage Ratio: {display_shared_diagnoses_per_covid_case}
Last 14 Days:
- Usage Ratio (Estimation): {display_last_14_days_shared_diagnoses_per_covid_case}
- Usage Ratio (Official): {display_last_14_days_shared_diagnoses_per_covid_case_es}
Info: {github_project_base_url}#documentation
""")
status = status.encode(encoding="utf-8")
api.update_status(status=status, media_ids=media_ids)
```
|
github_jupyter
|
import datetime
import json
import logging
import os
import shutil
import tempfile
import textwrap
import uuid
import matplotlib.pyplot as plt
import matplotlib.ticker
import numpy as np
import pandas as pd
import pycountry
import retry
import seaborn as sns
%matplotlib inline
current_working_directory = os.environ.get("PWD")
if current_working_directory:
os.chdir(current_working_directory)
sns.set()
matplotlib.rcParams["figure.figsize"] = (15, 6)
extraction_datetime = datetime.datetime.utcnow()
extraction_date = extraction_datetime.strftime("%Y-%m-%d")
extraction_previous_datetime = extraction_datetime - datetime.timedelta(days=1)
extraction_previous_date = extraction_previous_datetime.strftime("%Y-%m-%d")
extraction_date_with_hour = datetime.datetime.utcnow().strftime("%Y-%m-%d@%H")
current_hour = datetime.datetime.utcnow().hour
are_today_results_partial = current_hour != 23
from Modules.ExposureNotification import exposure_notification_io
spain_region_country_code = "ES"
germany_region_country_code = "DE"
default_backend_identifier = spain_region_country_code
backend_generation_days = 7 * 2
daily_summary_days = 7 * 4 * 3
daily_plot_days = 7 * 4
tek_dumps_load_limit = daily_summary_days + 1
environment_backend_identifier = os.environ.get("RADARCOVID_REPORT__BACKEND_IDENTIFIER")
if environment_backend_identifier:
report_backend_identifier = environment_backend_identifier
else:
report_backend_identifier = default_backend_identifier
report_backend_identifier
environment_enable_multi_backend_download = \
os.environ.get("RADARCOVID_REPORT__ENABLE_MULTI_BACKEND_DOWNLOAD")
if environment_enable_multi_backend_download:
report_backend_identifiers = None
else:
report_backend_identifiers = [report_backend_identifier]
report_backend_identifiers
environment_invalid_shared_diagnoses_dates = \
os.environ.get("RADARCOVID_REPORT__INVALID_SHARED_DIAGNOSES_DATES")
if environment_invalid_shared_diagnoses_dates:
invalid_shared_diagnoses_dates = environment_invalid_shared_diagnoses_dates.split(",")
else:
invalid_shared_diagnoses_dates = []
invalid_shared_diagnoses_dates
report_backend_client = \
exposure_notification_io.get_backend_client_with_identifier(
backend_identifier=report_backend_identifier)
@retry.retry(tries=10, delay=10, backoff=1.1, jitter=(0, 10))
def download_cases_dataframe():
return pd.read_csv("https://raw.githubusercontent.com/owid/covid-19-data/master/public/data/owid-covid-data.csv")
confirmed_df_ = download_cases_dataframe()
confirmed_df_.iloc[0]
confirmed_df = confirmed_df_.copy()
confirmed_df = confirmed_df[["date", "new_cases", "iso_code"]]
confirmed_df.rename(
columns={
"date": "sample_date",
"iso_code": "country_code",
},
inplace=True)
def convert_iso_alpha_3_to_alpha_2(x):
try:
return pycountry.countries.get(alpha_3=x).alpha_2
except Exception as e:
logging.info(f"Error converting country ISO Alpha 3 code '{x}': {repr(e)}")
return None
confirmed_df["country_code"] = confirmed_df.country_code.apply(convert_iso_alpha_3_to_alpha_2)
confirmed_df.dropna(inplace=True)
confirmed_df["sample_date"] = pd.to_datetime(confirmed_df.sample_date, dayfirst=True)
confirmed_df["sample_date"] = confirmed_df.sample_date.dt.strftime("%Y-%m-%d")
confirmed_df.sort_values("sample_date", inplace=True)
confirmed_df.tail()
confirmed_days = pd.date_range(
start=confirmed_df.iloc[0].sample_date,
end=extraction_datetime)
confirmed_days_df = pd.DataFrame(data=confirmed_days, columns=["sample_date"])
confirmed_days_df["sample_date_string"] = \
confirmed_days_df.sample_date.dt.strftime("%Y-%m-%d")
confirmed_days_df.tail()
def sort_source_regions_for_display(source_regions: list) -> list:
if report_backend_identifier in source_regions:
source_regions = [report_backend_identifier] + \
list(sorted(set(source_regions).difference([report_backend_identifier])))
else:
source_regions = list(sorted(source_regions))
return source_regions
report_source_regions = report_backend_client.source_regions_for_date(
date=extraction_datetime.date())
report_source_regions = sort_source_regions_for_display(
source_regions=report_source_regions)
report_source_regions
def get_cases_dataframe(source_regions_for_date_function, columns_suffix=None):
source_regions_at_date_df = confirmed_days_df.copy()
source_regions_at_date_df["source_regions_at_date"] = \
source_regions_at_date_df.sample_date.apply(
lambda x: source_regions_for_date_function(date=x))
source_regions_at_date_df.sort_values("sample_date", inplace=True)
source_regions_at_date_df["_source_regions_group"] = source_regions_at_date_df. \
source_regions_at_date.apply(lambda x: ",".join(sort_source_regions_for_display(x)))
source_regions_at_date_df.tail()
#%%
source_regions_for_summary_df_ = \
source_regions_at_date_df[["sample_date", "_source_regions_group"]].copy()
source_regions_for_summary_df_.rename(columns={"_source_regions_group": "source_regions"}, inplace=True)
source_regions_for_summary_df_.tail()
#%%
confirmed_output_columns = ["sample_date", "new_cases", "covid_cases"]
confirmed_output_df = pd.DataFrame(columns=confirmed_output_columns)
for source_regions_group, source_regions_group_series in \
source_regions_at_date_df.groupby("_source_regions_group"):
source_regions_set = set(source_regions_group.split(","))
confirmed_source_regions_set_df = \
confirmed_df[confirmed_df.country_code.isin(source_regions_set)].copy()
confirmed_source_regions_group_df = \
confirmed_source_regions_set_df.groupby("sample_date").new_cases.sum() \
.reset_index().sort_values("sample_date")
confirmed_source_regions_group_df = \
confirmed_source_regions_group_df.merge(
confirmed_days_df[["sample_date_string"]].rename(
columns={"sample_date_string": "sample_date"}),
how="right")
confirmed_source_regions_group_df["new_cases"] = \
confirmed_source_regions_group_df["new_cases"].clip(lower=0)
confirmed_source_regions_group_df["covid_cases"] = \
confirmed_source_regions_group_df.new_cases.rolling(7, min_periods=0).mean().round()
confirmed_source_regions_group_df = \
confirmed_source_regions_group_df[confirmed_output_columns]
confirmed_source_regions_group_df = confirmed_source_regions_group_df.replace(0, np.nan)
confirmed_source_regions_group_df.fillna(method="ffill", inplace=True)
confirmed_source_regions_group_df = \
confirmed_source_regions_group_df[
confirmed_source_regions_group_df.sample_date.isin(
source_regions_group_series.sample_date_string)]
confirmed_output_df = confirmed_output_df.append(confirmed_source_regions_group_df)
result_df = confirmed_output_df.copy()
result_df.tail()
#%%
result_df.rename(columns={"sample_date": "sample_date_string"}, inplace=True)
result_df = confirmed_days_df[["sample_date_string"]].merge(result_df, how="left")
result_df.sort_values("sample_date_string", inplace=True)
result_df.fillna(method="ffill", inplace=True)
result_df.tail()
#%%
result_df[["new_cases", "covid_cases"]].plot()
if columns_suffix:
result_df.rename(
columns={
"new_cases": "new_cases_" + columns_suffix,
"covid_cases": "covid_cases_" + columns_suffix},
inplace=True)
return result_df, source_regions_for_summary_df_
confirmed_eu_df, source_regions_for_summary_df = get_cases_dataframe(
report_backend_client.source_regions_for_date)
confirmed_es_df, _ = get_cases_dataframe(
lambda date: [spain_region_country_code],
columns_suffix=spain_region_country_code.lower())
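# Download Temporary Exposure Keys (TEKs) from the configured backends and keep the sample date, region and key data.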
raw_zip_path_prefix = "Data/TEKs/Raw/"
base_backend_identifiers = [report_backend_identifier]
multi_backend_exposure_keys_df = \
exposure_notification_io.download_exposure_keys_from_backends(
backend_identifiers=report_backend_identifiers,
generation_days=backend_generation_days,
fail_on_error_backend_identifiers=base_backend_identifiers,
save_raw_zip_path_prefix=raw_zip_path_prefix)
multi_backend_exposure_keys_df["region"] = multi_backend_exposure_keys_df["backend_identifier"]
multi_backend_exposure_keys_df.rename(
columns={
"generation_datetime": "sample_datetime",
"generation_date_string": "sample_date_string",
},
inplace=True)
multi_backend_exposure_keys_df.head()
early_teks_df = multi_backend_exposure_keys_df[
multi_backend_exposure_keys_df.rolling_period < 144].copy()
early_teks_df["rolling_period_in_hours"] = early_teks_df.rolling_period / 6
early_teks_df[early_teks_df.sample_date_string != extraction_date] \
.rolling_period_in_hours.hist(bins=list(range(24)))
early_teks_df[early_teks_df.sample_date_string == extraction_date] \
.rolling_period_in_hours.hist(bins=list(range(24)))
multi_backend_exposure_keys_df = multi_backend_exposure_keys_df[[
"sample_date_string", "region", "key_data"]]
multi_backend_exposure_keys_df.head()
active_regions = \
multi_backend_exposure_keys_df.groupby("region").key_data.nunique().sort_values().index.unique().tolist()
active_regions
multi_backend_summary_df = multi_backend_exposure_keys_df.groupby(
["sample_date_string", "region"]).key_data.nunique().reset_index() \
.pivot(index="sample_date_string", columns="region") \
.sort_index(ascending=False)
multi_backend_summary_df.rename(
columns={"key_data": "shared_teks_by_generation_date"},
inplace=True)
multi_backend_summary_df.rename_axis("sample_date", inplace=True)
multi_backend_summary_df = multi_backend_summary_df.fillna(0).astype(int)
multi_backend_summary_df = multi_backend_summary_df.head(backend_generation_days)
multi_backend_summary_df.head()
def compute_keys_cross_sharing(x):
teks_x = x.key_data_x.item()
common_teks = set(teks_x).intersection(x.key_data_y.item())
common_teks_fraction = len(common_teks) / len(teks_x)
return pd.Series(dict(
common_teks=common_teks,
common_teks_fraction=common_teks_fraction,
))
multi_backend_exposure_keys_by_region_df = \
multi_backend_exposure_keys_df.groupby("region").key_data.unique().reset_index()
multi_backend_exposure_keys_by_region_df["_merge"] = True
multi_backend_exposure_keys_by_region_combination_df = \
multi_backend_exposure_keys_by_region_df.merge(
multi_backend_exposure_keys_by_region_df, on="_merge")
multi_backend_exposure_keys_by_region_combination_df.drop(
columns=["_merge"], inplace=True)
if multi_backend_exposure_keys_by_region_combination_df.region_x.nunique() > 1:
multi_backend_exposure_keys_by_region_combination_df = \
multi_backend_exposure_keys_by_region_combination_df[
multi_backend_exposure_keys_by_region_combination_df.region_x !=
multi_backend_exposure_keys_by_region_combination_df.region_y]
multi_backend_exposure_keys_cross_sharing_df = \
multi_backend_exposure_keys_by_region_combination_df \
.groupby(["region_x", "region_y"]) \
.apply(compute_keys_cross_sharing) \
.reset_index()
multi_backend_cross_sharing_summary_df = \
multi_backend_exposure_keys_cross_sharing_df.pivot_table(
values=["common_teks_fraction"],
columns="region_x",
index="region_y",
aggfunc=lambda x: x.item())
multi_backend_cross_sharing_summary_df
multi_backend_without_active_region_exposure_keys_df = \
multi_backend_exposure_keys_df[multi_backend_exposure_keys_df.region != report_backend_identifier]
multi_backend_without_active_region = \
multi_backend_without_active_region_exposure_keys_df.groupby("region").key_data.nunique().sort_values().index.unique().tolist()
multi_backend_without_active_region
exposure_keys_summary_df = multi_backend_exposure_keys_df[
multi_backend_exposure_keys_df.region == report_backend_identifier]
exposure_keys_summary_df.drop(columns=["region"], inplace=True)
exposure_keys_summary_df = \
exposure_keys_summary_df.groupby(["sample_date_string"]).key_data.nunique().to_frame()
exposure_keys_summary_df = \
exposure_keys_summary_df.reset_index().set_index("sample_date_string")
exposure_keys_summary_df.sort_index(ascending=False, inplace=True)
exposure_keys_summary_df.rename(columns={"key_data": "shared_teks_by_generation_date"}, inplace=True)
exposure_keys_summary_df.head()
tek_list_df = multi_backend_exposure_keys_df[
["sample_date_string", "region", "key_data"]].copy()
tek_list_df["key_data"] = tek_list_df["key_data"].apply(str)
tek_list_df.rename(columns={
"sample_date_string": "sample_date",
"key_data": "tek_list"}, inplace=True)
tek_list_df = tek_list_df.groupby(
["sample_date", "region"]).tek_list.unique().reset_index()
tek_list_df["extraction_date"] = extraction_date
tek_list_df["extraction_date_with_hour"] = extraction_date_with_hour
tek_list_path_prefix = "Data/TEKs/"
tek_list_current_path = tek_list_path_prefix + "Current/RadarCOVID-TEKs.json"
tek_list_daily_path = tek_list_path_prefix + f"Daily/RadarCOVID-TEKs-{extraction_date}.json"
tek_list_hourly_path = tek_list_path_prefix + f"Hourly/RadarCOVID-TEKs-{extraction_date_with_hour}.json"
for path in [tek_list_current_path, tek_list_daily_path, tek_list_hourly_path]:
os.makedirs(os.path.dirname(path), exist_ok=True)
tek_list_base_df = tek_list_df[tek_list_df.region == report_backend_identifier]
tek_list_base_df.drop(columns=["extraction_date", "extraction_date_with_hour"]).to_json(
tek_list_current_path,
lines=True, orient="records")
tek_list_base_df.drop(columns=["extraction_date_with_hour"]).to_json(
tek_list_daily_path,
lines=True, orient="records")
tek_list_base_df.to_json(
tek_list_hourly_path,
lines=True, orient="records")
tek_list_base_df.head()
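# Reload previously extracted TEK dumps (Daily/Hourly JSON files) to build the historical series.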
import glob
def load_extracted_teks(mode, region=None, limit=None) -> pd.DataFrame:
extracted_teks_df = pd.DataFrame(columns=["region"])
file_paths = list(reversed(sorted(glob.glob(tek_list_path_prefix + mode + "/RadarCOVID-TEKs-*.json"))))
if limit:
file_paths = file_paths[:limit]
for file_path in file_paths:
logging.info(f"Loading TEKs from '{file_path}'...")
iteration_extracted_teks_df = pd.read_json(file_path, lines=True)
extracted_teks_df = extracted_teks_df.append(
iteration_extracted_teks_df, sort=False)
extracted_teks_df["region"] = \
extracted_teks_df.region.fillna(spain_region_country_code).copy()
if region:
extracted_teks_df = \
extracted_teks_df[extracted_teks_df.region == region]
return extracted_teks_df
daily_extracted_teks_df = load_extracted_teks(
mode="Daily",
region=report_backend_identifier,
limit=tek_dumps_load_limit)
daily_extracted_teks_df.head()
exposure_keys_summary_df_ = daily_extracted_teks_df \
.sort_values("extraction_date", ascending=False) \
.groupby("sample_date").tek_list.first() \
.to_frame()
exposure_keys_summary_df_.index.name = "sample_date_string"
exposure_keys_summary_df_["tek_list"] = \
exposure_keys_summary_df_.tek_list.apply(len)
exposure_keys_summary_df_ = exposure_keys_summary_df_ \
.rename(columns={"tek_list": "shared_teks_by_generation_date"}) \
.sort_index(ascending=False)
exposure_keys_summary_df = exposure_keys_summary_df_
exposure_keys_summary_df.head()
tek_list_df = daily_extracted_teks_df.groupby("extraction_date").tek_list.apply(
lambda x: set(sum(x, []))).reset_index()
tek_list_df = tek_list_df.set_index("extraction_date").sort_index(ascending=True)
tek_list_df.head()
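# Estimate newly shared TEKs per upload date by diffing consecutive daily dumps, then split them by generation date.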
def compute_teks_by_generation_and_upload_date(date):
day_new_teks_set_df = tek_list_df.copy().diff()
try:
day_new_teks_set = day_new_teks_set_df[
day_new_teks_set_df.index == date].tek_list.item()
except ValueError:
day_new_teks_set = None
if pd.isna(day_new_teks_set):
day_new_teks_set = set()
day_new_teks_df = daily_extracted_teks_df[
daily_extracted_teks_df.extraction_date == date].copy()
day_new_teks_df["shared_teks"] = \
day_new_teks_df.tek_list.apply(lambda x: set(x).intersection(day_new_teks_set))
day_new_teks_df["shared_teks"] = \
day_new_teks_df.shared_teks.apply(len)
day_new_teks_df["upload_date"] = date
day_new_teks_df.rename(columns={"sample_date": "generation_date"}, inplace=True)
day_new_teks_df = day_new_teks_df[
["upload_date", "generation_date", "shared_teks"]]
day_new_teks_df["generation_to_upload_days"] = \
(pd.to_datetime(day_new_teks_df.upload_date) -
pd.to_datetime(day_new_teks_df.generation_date)).dt.days
day_new_teks_df = day_new_teks_df[day_new_teks_df.shared_teks > 0]
return day_new_teks_df
shared_teks_generation_to_upload_df = pd.DataFrame()
for upload_date in daily_extracted_teks_df.extraction_date.unique():
shared_teks_generation_to_upload_df = \
shared_teks_generation_to_upload_df.append(
compute_teks_by_generation_and_upload_date(date=upload_date))
shared_teks_generation_to_upload_df \
.sort_values(["upload_date", "generation_date"], ascending=False, inplace=True)
shared_teks_generation_to_upload_df.tail()
today_new_teks_df = \
shared_teks_generation_to_upload_df[
shared_teks_generation_to_upload_df.upload_date == extraction_date].copy()
today_new_teks_df.tail()
if not today_new_teks_df.empty:
today_new_teks_df.set_index("generation_to_upload_days") \
.sort_index().shared_teks.plot.bar()
generation_to_upload_period_pivot_df = \
shared_teks_generation_to_upload_df[
["upload_date", "generation_to_upload_days", "shared_teks"]] \
.pivot(index="upload_date", columns="generation_to_upload_days") \
.sort_index(ascending=False).fillna(0).astype(int) \
.droplevel(level=0, axis=1)
generation_to_upload_period_pivot_df.head()
new_tek_df = tek_list_df.diff().tek_list.apply(
lambda x: len(x) if not pd.isna(x) else None).to_frame().reset_index()
new_tek_df.rename(columns={
"tek_list": "shared_teks_by_upload_date",
"extraction_date": "sample_date_string",}, inplace=True)
new_tek_df.tail()
shared_teks_uploaded_on_generation_date_df = shared_teks_generation_to_upload_df[
shared_teks_generation_to_upload_df.generation_to_upload_days == 0] \
[["upload_date", "shared_teks"]].rename(
columns={
"upload_date": "sample_date_string",
"shared_teks": "shared_teks_uploaded_on_generation_date",
})
shared_teks_uploaded_on_generation_date_df.head()
estimated_shared_diagnoses_df = shared_teks_generation_to_upload_df \
.groupby(["upload_date"]).shared_teks.max().reset_index() \
.sort_values(["upload_date"], ascending=False) \
.rename(columns={
"upload_date": "sample_date_string",
"shared_teks": "shared_diagnoses",
})
invalid_shared_diagnoses_dates_mask = \
estimated_shared_diagnoses_df.sample_date_string.isin(invalid_shared_diagnoses_dates)
estimated_shared_diagnoses_df[invalid_shared_diagnoses_dates_mask] = 0
estimated_shared_diagnoses_df.head()
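# Hourly view: TEKs newly observed in each hourly dump over roughly the last 24 hours.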
hourly_extracted_teks_df = load_extracted_teks(
mode="Hourly", region=report_backend_identifier, limit=25)
hourly_extracted_teks_df.head()
hourly_new_tek_count_df = hourly_extracted_teks_df \
.groupby("extraction_date_with_hour").tek_list. \
apply(lambda x: set(sum(x, []))).reset_index().copy()
hourly_new_tek_count_df = hourly_new_tek_count_df.set_index("extraction_date_with_hour") \
.sort_index(ascending=True)
hourly_new_tek_count_df["new_tek_list"] = hourly_new_tek_count_df.tek_list.diff()
hourly_new_tek_count_df["new_tek_count"] = hourly_new_tek_count_df.new_tek_list.apply(
lambda x: len(x) if not pd.isna(x) else 0)
hourly_new_tek_count_df.rename(columns={
"new_tek_count": "shared_teks_by_upload_date"}, inplace=True)
hourly_new_tek_count_df = hourly_new_tek_count_df.reset_index()[[
"extraction_date_with_hour", "shared_teks_by_upload_date"]]
hourly_new_tek_count_df.head()
hourly_summary_df = hourly_new_tek_count_df.copy()
hourly_summary_df.set_index("extraction_date_with_hour", inplace=True)
hourly_summary_df = hourly_summary_df.fillna(0).astype(int).reset_index()
hourly_summary_df["datetime_utc"] = pd.to_datetime(
hourly_summary_df.extraction_date_with_hour, format="%Y-%m-%d@%H")
hourly_summary_df.set_index("datetime_utc", inplace=True)
hourly_summary_df = hourly_summary_df.tail(-1)
hourly_summary_df.head()
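# Official RadarCOVID statistics (app downloads and communicated contagions), merged with the locally stored history and interpolated.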
import requests
import pandas.io.json
official_stats_response = requests.get("https://radarcovid.covid19.gob.es/kpi/statistics/basics")
official_stats_response.raise_for_status()
official_stats_df_ = pandas.io.json.json_normalize(official_stats_response.json())
official_stats_df = official_stats_df_.copy()
official_stats_df["date"] = pd.to_datetime(official_stats_df["date"], dayfirst=True)
official_stats_df.head()
official_stats_column_map = {
"date": "sample_date",
"applicationsDownloads.totalAcummulated": "app_downloads_es_accumulated",
"communicatedContagions.totalAcummulated": "shared_diagnoses_es_accumulated",
}
accumulated_suffix = "_accumulated"
accumulated_values_columns = \
list(filter(lambda x: x.endswith(accumulated_suffix), official_stats_column_map.values()))
interpolated_values_columns = \
list(map(lambda x: x[:-len(accumulated_suffix)], accumulated_values_columns))
official_stats_df = \
official_stats_df[official_stats_column_map.keys()] \
.rename(columns=official_stats_column_map)
official_stats_df["extraction_date"] = extraction_date
official_stats_df.head()
official_stats_path = "Data/Statistics/Current/RadarCOVID-Statistics.json"
previous_official_stats_df = pd.read_json(official_stats_path, orient="records", lines=True)
previous_official_stats_df["sample_date"] = pd.to_datetime(previous_official_stats_df["sample_date"], dayfirst=True)
official_stats_df = official_stats_df.append(previous_official_stats_df)
official_stats_df.head()
official_stats_df = official_stats_df[~(official_stats_df.shared_diagnoses_es_accumulated == 0)]
official_stats_df.sort_values("extraction_date", ascending=False, inplace=True)
official_stats_df.drop_duplicates(subset=["sample_date"], keep="first", inplace=True)
official_stats_df.head()
official_stats_stored_df = official_stats_df.copy()
official_stats_stored_df["sample_date"] = official_stats_stored_df.sample_date.dt.strftime("%Y-%m-%d")
official_stats_stored_df.to_json(official_stats_path, orient="records", lines=True)
official_stats_df.drop(columns=["extraction_date"], inplace=True)
official_stats_df = confirmed_days_df.merge(official_stats_df, how="left")
official_stats_df.sort_values("sample_date", ascending=False, inplace=True)
official_stats_df.head()
official_stats_df[accumulated_values_columns] = \
official_stats_df[accumulated_values_columns] \
.astype(float).interpolate(limit_area="inside")
official_stats_df[interpolated_values_columns] = \
official_stats_df[accumulated_values_columns].diff(periods=-1)
official_stats_df.drop(columns="sample_date", inplace=True)
official_stats_df.head()
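# Assemble the daily summary: cases, shared TEKs, estimated and official shared diagnoses, and per-case ratios.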
result_summary_df = exposure_keys_summary_df.merge(
new_tek_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = result_summary_df.merge(
shared_teks_uploaded_on_generation_date_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = result_summary_df.merge(
estimated_shared_diagnoses_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = result_summary_df.merge(
official_stats_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = confirmed_eu_df.tail(daily_summary_days).merge(
result_summary_df, on=["sample_date_string"], how="left")
result_summary_df.head()
result_summary_df = confirmed_es_df.tail(daily_summary_days).merge(
result_summary_df, on=["sample_date_string"], how="left")
result_summary_df.head()
result_summary_df["sample_date"] = pd.to_datetime(result_summary_df.sample_date_string)
result_summary_df = result_summary_df.merge(source_regions_for_summary_df, how="left")
result_summary_df.set_index(["sample_date", "source_regions"], inplace=True)
result_summary_df.drop(columns=["sample_date_string"], inplace=True)
result_summary_df.sort_index(ascending=False, inplace=True)
result_summary_df.head()
with pd.option_context("mode.use_inf_as_na", True):
result_summary_df = result_summary_df.fillna(0).astype(int)
result_summary_df["teks_per_shared_diagnosis"] = \
(result_summary_df.shared_teks_by_upload_date / result_summary_df.shared_diagnoses).fillna(0)
result_summary_df["shared_diagnoses_per_covid_case"] = \
(result_summary_df.shared_diagnoses / result_summary_df.covid_cases).fillna(0)
result_summary_df["shared_diagnoses_per_covid_case_es"] = \
(result_summary_df.shared_diagnoses_es / result_summary_df.covid_cases_es).fillna(0)
result_summary_df.head(daily_plot_days)
def compute_aggregated_results_summary(days) -> pd.DataFrame:
aggregated_result_summary_df = result_summary_df.copy()
aggregated_result_summary_df["covid_cases_for_ratio"] = \
aggregated_result_summary_df.covid_cases.mask(
aggregated_result_summary_df.shared_diagnoses == 0, 0)
aggregated_result_summary_df["covid_cases_for_ratio_es"] = \
aggregated_result_summary_df.covid_cases_es.mask(
aggregated_result_summary_df.shared_diagnoses_es == 0, 0)
aggregated_result_summary_df = aggregated_result_summary_df \
.sort_index(ascending=True).fillna(0).rolling(days).agg({
"covid_cases": "sum",
"covid_cases_es": "sum",
"covid_cases_for_ratio": "sum",
"covid_cases_for_ratio_es": "sum",
"shared_teks_by_generation_date": "sum",
"shared_teks_by_upload_date": "sum",
"shared_diagnoses": "sum",
"shared_diagnoses_es": "sum",
}).sort_index(ascending=False)
with pd.option_context("mode.use_inf_as_na", True):
aggregated_result_summary_df = aggregated_result_summary_df.fillna(0).astype(int)
aggregated_result_summary_df["teks_per_shared_diagnosis"] = \
(aggregated_result_summary_df.shared_teks_by_upload_date /
aggregated_result_summary_df.covid_cases_for_ratio).fillna(0)
aggregated_result_summary_df["shared_diagnoses_per_covid_case"] = \
(aggregated_result_summary_df.shared_diagnoses /
aggregated_result_summary_df.covid_cases_for_ratio).fillna(0)
aggregated_result_summary_df["shared_diagnoses_per_covid_case_es"] = \
(aggregated_result_summary_df.shared_diagnoses_es /
aggregated_result_summary_df.covid_cases_for_ratio_es).fillna(0)
return aggregated_result_summary_df
aggregated_result_with_7_days_window_summary_df = compute_aggregated_results_summary(days=7)
aggregated_result_with_7_days_window_summary_df.head()
last_7_days_summary = aggregated_result_with_7_days_window_summary_df.to_dict(orient="records")[1]
last_7_days_summary
aggregated_result_with_14_days_window_summary_df = compute_aggregated_results_summary(days=13)
last_14_days_summary = aggregated_result_with_14_days_window_summary_df.to_dict(orient="records")[1]
last_14_days_summary
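# Report rendering: human-readable column names, daily/hourly plots and summary tables.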
display_column_name_mapping = {
"sample_date": "Sample\u00A0Date\u00A0(UTC)",
"source_regions": "Source Countries",
"datetime_utc": "Timestamp (UTC)",
"upload_date": "Upload Date (UTC)",
"generation_to_upload_days": "Generation to Upload Period in Days",
"region": "Backend",
"region_x": "Backend\u00A0(A)",
"region_y": "Backend\u00A0(B)",
"common_teks": "Common TEKs Shared Between Backends",
"common_teks_fraction": "Fraction of TEKs in Backend (A) Available in Backend (B)",
"covid_cases": "COVID-19 Cases (Source Countries)",
"shared_teks_by_generation_date": "Shared TEKs by Generation Date (Source Countries)",
"shared_teks_by_upload_date": "Shared TEKs by Upload Date (Source Countries)",
"shared_teks_uploaded_on_generation_date": "Shared TEKs Uploaded on Generation Date (Source Countries)",
"shared_diagnoses": "Shared Diagnoses (Source Countries – Estimation)",
"teks_per_shared_diagnosis": "TEKs Uploaded per Shared Diagnosis (Source Countries)",
"shared_diagnoses_per_covid_case": "Usage Ratio (Source Countries)",
"covid_cases_es": "COVID-19 Cases (Spain)",
"app_downloads_es": "App Downloads (Spain – Official)",
"shared_diagnoses_es": "Shared Diagnoses (Spain – Official)",
"shared_diagnoses_per_covid_case_es": "Usage Ratio (Spain)",
}
summary_columns = [
"covid_cases",
"shared_teks_by_generation_date",
"shared_teks_by_upload_date",
"shared_teks_uploaded_on_generation_date",
"shared_diagnoses",
"teks_per_shared_diagnosis",
"shared_diagnoses_per_covid_case",
"covid_cases_es",
"app_downloads_es",
"shared_diagnoses_es",
"shared_diagnoses_per_covid_case_es",
]
summary_percentage_columns = [
"shared_diagnoses_per_covid_case_es",
"shared_diagnoses_per_covid_case",
]
result_summary_df_ = result_summary_df.copy()
result_summary_df = result_summary_df[summary_columns]
result_summary_with_display_names_df = result_summary_df \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping)
result_summary_with_display_names_df
result_plot_summary_df = result_summary_df.head(daily_plot_days)[summary_columns] \
.droplevel(level=["source_regions"]) \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping)
summary_ax_list = result_plot_summary_df.sort_index(ascending=True).plot.bar(
title=f"Daily Summary",
rot=45, subplots=True, figsize=(15, 30), legend=False)
ax_ = summary_ax_list[0]
ax_.get_figure().tight_layout()
ax_.get_figure().subplots_adjust(top=0.95)
_ = ax_.set_xticklabels(sorted(result_plot_summary_df.index.strftime("%Y-%m-%d").tolist()))
for percentage_column in summary_percentage_columns:
percentage_column_index = summary_columns.index(percentage_column)
summary_ax_list[percentage_column_index].yaxis \
.set_major_formatter(matplotlib.ticker.PercentFormatter(1.0))
display_generation_to_upload_period_pivot_df = \
generation_to_upload_period_pivot_df \
.head(backend_generation_days)
display_generation_to_upload_period_pivot_df \
.head(backend_generation_days) \
.rename_axis(columns=display_column_name_mapping) \
.rename_axis(index=display_column_name_mapping)
fig, generation_to_upload_period_pivot_table_ax = plt.subplots(
figsize=(12, 1 + 0.6 * len(display_generation_to_upload_period_pivot_df)))
generation_to_upload_period_pivot_table_ax.set_title(
"Shared TEKs Generation to Upload Period Table")
sns.heatmap(
data=display_generation_to_upload_period_pivot_df
.rename_axis(columns=display_column_name_mapping)
.rename_axis(index=display_column_name_mapping),
fmt=".0f",
annot=True,
ax=generation_to_upload_period_pivot_table_ax)
generation_to_upload_period_pivot_table_ax.get_figure().tight_layout()
hourly_summary_ax_list = hourly_summary_df \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.plot.bar(
title=f"Last 24h Summary",
rot=45, subplots=True, legend=False)
ax_ = hourly_summary_ax_list[-1]
ax_.get_figure().tight_layout()
ax_.get_figure().subplots_adjust(top=0.9)
_ = ax_.set_xticklabels(sorted(hourly_summary_df.index.strftime("%Y-%m-%d@%H").tolist()))
github_repository = os.environ.get("GITHUB_REPOSITORY")
if github_repository is None:
github_repository = "pvieito/Radar-STATS"
github_project_base_url = "https://github.com/" + github_repository
display_formatters = {
display_column_name_mapping["teks_per_shared_diagnosis"]: lambda x: f"{x:.2f}" if x != 0 else "",
display_column_name_mapping["shared_diagnoses_per_covid_case"]: lambda x: f"{x:.2%}" if x != 0 else "",
display_column_name_mapping["shared_diagnoses_per_covid_case_es"]: lambda x: f"{x:.2%}" if x != 0 else "",
}
general_columns = \
list(filter(lambda x: x not in display_formatters, display_column_name_mapping.values()))
general_formatter = lambda x: f"{x}" if x != 0 else ""
display_formatters.update(dict(map(lambda x: (x, general_formatter), general_columns)))
daily_summary_table_html = result_summary_with_display_names_df \
.head(daily_plot_days) \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.to_html(formatters=display_formatters)
multi_backend_summary_table_html = multi_backend_summary_df \
.head(daily_plot_days) \
.rename_axis(columns=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.rename_axis(index=display_column_name_mapping) \
.to_html(formatters=display_formatters)
def format_multi_backend_cross_sharing_fraction(x):
if pd.isna(x):
return "-"
elif round(x * 100, 1) == 0:
return ""
else:
return f"{x:.1%}"
multi_backend_cross_sharing_summary_table_html = multi_backend_cross_sharing_summary_df \
.rename_axis(columns=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.rename_axis(index=display_column_name_mapping) \
.to_html(
classes="table-center",
formatters=display_formatters,
float_format=format_multi_backend_cross_sharing_fraction)
multi_backend_cross_sharing_summary_table_html = \
multi_backend_cross_sharing_summary_table_html \
.replace("<tr>","<tr style=\"text-align: center;\">")
extraction_date_result_summary_df = \
result_summary_df[result_summary_df.index.get_level_values("sample_date") == extraction_date]
extraction_date_result_hourly_summary_df = \
hourly_summary_df[hourly_summary_df.extraction_date_with_hour == extraction_date_with_hour]
covid_cases = \
extraction_date_result_summary_df.covid_cases.item()
shared_teks_by_generation_date = \
extraction_date_result_summary_df.shared_teks_by_generation_date.item()
shared_teks_by_upload_date = \
extraction_date_result_summary_df.shared_teks_by_upload_date.item()
shared_diagnoses = \
extraction_date_result_summary_df.shared_diagnoses.item()
teks_per_shared_diagnosis = \
extraction_date_result_summary_df.teks_per_shared_diagnosis.item()
shared_diagnoses_per_covid_case = \
extraction_date_result_summary_df.shared_diagnoses_per_covid_case.item()
shared_teks_by_upload_date_last_hour = \
extraction_date_result_hourly_summary_df.shared_teks_by_upload_date.sum().astype(int)
display_source_regions = ", ".join(report_source_regions)
if len(report_source_regions) == 1:
display_brief_source_regions = report_source_regions[0]
else:
display_brief_source_regions = f"{len(report_source_regions)} 🇪🇺"
def get_temporary_image_path() -> str:
return os.path.join(tempfile.gettempdir(), str(uuid.uuid4()) + ".png")
def save_temporary_plot_image(ax):
if isinstance(ax, np.ndarray):
ax = ax[0]
media_path = get_temporary_image_path()
ax.get_figure().savefig(media_path)
return media_path
def save_temporary_dataframe_image(df):
import dataframe_image as dfi
df = df.copy()
df_styler = df.style.format(display_formatters)
media_path = get_temporary_image_path()
dfi.export(df_styler, media_path)
return media_path
summary_plots_image_path = save_temporary_plot_image(
ax=summary_ax_list)
summary_table_image_path = save_temporary_dataframe_image(
df=result_summary_with_display_names_df)
hourly_summary_plots_image_path = save_temporary_plot_image(
ax=hourly_summary_ax_list)
multi_backend_summary_table_image_path = save_temporary_dataframe_image(
df=multi_backend_summary_df)
generation_to_upload_period_pivot_table_image_path = save_temporary_plot_image(
ax=generation_to_upload_period_pivot_table_ax)
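# Persist report resources (CSV/HTML tables, PNG images and the JSON summary) for publication.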
report_resources_path_prefix = "Data/Resources/Current/RadarCOVID-Report-"
result_summary_df.to_csv(
report_resources_path_prefix + "Summary-Table.csv")
result_summary_df.to_html(
report_resources_path_prefix + "Summary-Table.html")
hourly_summary_df.to_csv(
report_resources_path_prefix + "Hourly-Summary-Table.csv")
multi_backend_summary_df.to_csv(
report_resources_path_prefix + "Multi-Backend-Summary-Table.csv")
multi_backend_cross_sharing_summary_df.to_csv(
report_resources_path_prefix + "Multi-Backend-Cross-Sharing-Summary-Table.csv")
generation_to_upload_period_pivot_df.to_csv(
report_resources_path_prefix + "Generation-Upload-Period-Table.csv")
_ = shutil.copyfile(
summary_plots_image_path,
report_resources_path_prefix + "Summary-Plots.png")
_ = shutil.copyfile(
summary_table_image_path,
report_resources_path_prefix + "Summary-Table.png")
_ = shutil.copyfile(
hourly_summary_plots_image_path,
report_resources_path_prefix + "Hourly-Summary-Plots.png")
_ = shutil.copyfile(
multi_backend_summary_table_image_path,
report_resources_path_prefix + "Multi-Backend-Summary-Table.png")
_ = shutil.copyfile(
generation_to_upload_period_pivot_table_image_path,
report_resources_path_prefix + "Generation-Upload-Period-Table.png")
def generate_summary_api_results(df: pd.DataFrame) -> list:
api_df = df.reset_index().copy()
api_df["sample_date_string"] = \
api_df["sample_date"].dt.strftime("%Y-%m-%d")
api_df["source_regions"] = \
api_df["source_regions"].apply(lambda x: x.split(","))
return api_df.to_dict(orient="records")
summary_api_results = \
generate_summary_api_results(df=result_summary_df)
today_summary_api_results = \
generate_summary_api_results(df=extraction_date_result_summary_df)[0]
summary_results = dict(
backend_identifier=report_backend_identifier,
source_regions=report_source_regions,
extraction_datetime=extraction_datetime,
extraction_date=extraction_date,
extraction_date_with_hour=extraction_date_with_hour,
last_hour=dict(
shared_teks_by_upload_date=shared_teks_by_upload_date_last_hour,
shared_diagnoses=0,
),
today=today_summary_api_results,
last_7_days=last_7_days_summary,
last_14_days=last_14_days_summary,
daily_results=summary_api_results)
summary_results = \
json.loads(pd.Series([summary_results]).to_json(orient="records"))[0]
with open(report_resources_path_prefix + "Summary-Results.json", "w") as f:
json.dump(summary_results, f, indent=4)
with open("Data/Templates/README.md", "r") as f:
readme_contents = f.read()
readme_contents = readme_contents.format(
extraction_date_with_hour=extraction_date_with_hour,
github_project_base_url=github_project_base_url,
daily_summary_table_html=daily_summary_table_html,
multi_backend_summary_table_html=multi_backend_summary_table_html,
multi_backend_cross_sharing_summary_table_html=multi_backend_cross_sharing_summary_table_html,
display_source_regions=display_source_regions)
with open("README.md", "w") as f:
f.write(readme_contents)
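# Optionally publish a summary status with media on Twitter when running as a scheduled job.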
enable_share_to_twitter = os.environ.get("RADARCOVID_REPORT__ENABLE_PUBLISH_ON_TWITTER")
github_event_name = os.environ.get("GITHUB_EVENT_NAME")
if enable_share_to_twitter and github_event_name == "schedule" and \
(shared_teks_by_upload_date_last_hour or not are_today_results_partial):
import tweepy
twitter_api_auth_keys = os.environ["RADARCOVID_REPORT__TWITTER_API_AUTH_KEYS"]
twitter_api_auth_keys = twitter_api_auth_keys.split(":")
auth = tweepy.OAuthHandler(twitter_api_auth_keys[0], twitter_api_auth_keys[1])
auth.set_access_token(twitter_api_auth_keys[2], twitter_api_auth_keys[3])
api = tweepy.API(auth)
summary_plots_media = api.media_upload(summary_plots_image_path)
summary_table_media = api.media_upload(summary_table_image_path)
generation_to_upload_period_pivot_table_image_media = api.media_upload(generation_to_upload_period_pivot_table_image_path)
media_ids = [
summary_plots_media.media_id,
summary_table_media.media_id,
generation_to_upload_period_pivot_table_image_media.media_id,
]
if are_today_results_partial:
today_addendum = " (Partial)"
else:
today_addendum = ""
def format_shared_diagnoses_per_covid_case(value) -> str:
if value == 0:
return "–"
return f"≤{value:.2%}"
display_shared_diagnoses_per_covid_case = \
format_shared_diagnoses_per_covid_case(value=shared_diagnoses_per_covid_case)
display_last_14_days_shared_diagnoses_per_covid_case = \
format_shared_diagnoses_per_covid_case(value=last_14_days_summary["shared_diagnoses_per_covid_case"])
display_last_14_days_shared_diagnoses_per_covid_case_es = \
format_shared_diagnoses_per_covid_case(value=last_14_days_summary["shared_diagnoses_per_covid_case_es"])
status = textwrap.dedent(f"""
#RadarCOVID – {extraction_date_with_hour}
Today{today_addendum}:
- Uploaded TEKs: {shared_teks_by_upload_date:.0f} ({shared_teks_by_upload_date_last_hour:+d} last hour)
- Shared Diagnoses: ≤{shared_diagnoses:.0f}
- Usage Ratio: {display_shared_diagnoses_per_covid_case}
Last 14 Days:
- Usage Ratio (Estimation): {display_last_14_days_shared_diagnoses_per_covid_case}
- Usage Ratio (Official): {display_last_14_days_shared_diagnoses_per_covid_case_es}
Info: {github_project_base_url}#documentation
""")
status = status.encode(encoding="utf-8")
api.update_status(status=status, media_ids=media_ids)
# Convergence
Training convergence figures.
```
import pickle
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
from sys_simulator.general import load_with_pickle, sns_confidence_interval_plot
from copy import deepcopy
import os
EXP_NAME = 'convergencia'
# ddpg
ALGO_NAME = 'ddpg'
filepath = "/home/lucas/dev/sys-simulator-2/data/ddpg/script8/20210523-163328/log.pickle"
# dql
# ALGO_NAME = 'dql'
# filepath = "D:\Dev/sys-simulator-2\DATA\DQL\SCRIPT52\\20210508-144816\\log.pickle"
# a2c
# ALGO_NAME = 'a2c'
# filepath = "D:\\Dev\\sys-simulator-2\\data\\a2c\\script16\\20210509-134816\\log.pickle"
# output path
OUTPUT_PATH = f'/home/lucas/dev/sys-simulator-2/figs/{EXP_NAME}/{ALGO_NAME}'
file = open(filepath, 'rb')
data = pickle.load(file)
file.close()
data.keys()
EVAL_EVERY = data['eval_every']
EVAL_EVERY
d_train = data['train_bags']
d_test = data['test_bags']
d_train.keys()
xx = d_test['mue_sinrs']
xx = np.array(xx)
xx.shape
xx = d_train['mue_sinrs']
xx = np.array(xx)
xx.shape
mue_sinrs = deepcopy(d_train['mue_sinrs'])
mue_sinrs.append(d_test['mue_sinrs'])
mue_sinrs = np.array(mue_sinrs)
mue_sinrs.shape
d2d_sinrs = deepcopy(d_train['d2d_sinrs'])
d2d_sinrs.append(d_test['d2d_sinrs'])
d2d_sinrs = np.array(d2d_sinrs)
d2d_sinrs.shape
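# MUE availability: samples where the MUE SINR meets the 6 dB protection threshold.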
sinr_threshold = 6
mue_avail = deepcopy(mue_sinrs >= sinr_threshold)
mue_avail.shape
rewards = deepcopy(d_train['rewards'])
rewards.append(d_test['rewards'])
rewards = np.array(rewards)
rewards.shape
```
## Fonts config
```
x_font = {
'family': 'serif',
'color': 'black',
'weight': 'normal',
'size': 16,
}
y_font = {
'family': 'serif',
'color': 'black',
'weight': 'normal',
'size': 16,
}
ticks_font = {
'fontfamily': 'serif',
'fontsize': 13
}
legends_font = {
'size': 13,
'family': 'serif'
}
```
## Ticks
```
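# Six evenly spaced x-tick positions covering the whole training horizon, in environment steps.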
x_ticks = [i*EVAL_EVERY*(mue_sinrs.shape[0]-1)/5 for i in range(6)]
```
## MUE SINR
```
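# x-axis: repeat each evaluation-round index over its samples, then scale by EVAL_EVERY to get step counts.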
x = np.ones(mue_sinrs.shape)
for i in range(mue_sinrs.shape[0]):
x[i, :] = i
x = x.reshape(-1)
x *= EVAL_EVERY
mu = mue_sinrs.mean(axis=1)
std = mue_sinrs.std(axis=1)
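# Half-width of a normal-approximation 95% confidence interval: 1.96 * std / sqrt(n_samples).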
conf95 = 1.96 * std / np.sqrt(mue_sinrs.shape[1])
plt.figure()
plt.plot(mu)
plt.plot(std)
plt.plot(conf95)
plt.show()
plt.figure(figsize=(10,7))
sns.lineplot(x=x, y=mue_sinrs.reshape(-1))
plt.xlabel('Steps', fontdict=x_font)
plt.ylabel('Average MUE SINR [dB]', fontdict=y_font)
plt.xticks(x_ticks, **ticks_font)
plt.yticks(**ticks_font)
fig_name = 'mue-sinr'
svg_path = f'{OUTPUT_PATH}/{fig_name}.svg'
eps_path = f'{OUTPUT_PATH}/{fig_name}.eps'
print(svg_path)
# save fig
plt.savefig(svg_path)
os.system(f'magick convert {svg_path} {eps_path}')
plt.show()
```
## D2D SINR
```
y_font
plt.figure(figsize=(10,7))
sns.lineplot(x=x, y=d2d_sinrs[:,:,0].reshape(-1), label='Device 1')
sns.lineplot(x=x, y=d2d_sinrs[:,:,1].reshape(-1), label='Device 2')
plt.xlabel('Steps', fontdict=x_font)
plt.ylabel('Average D2D SINR [dB]', fontdict=y_font)
plt.xticks(x_ticks, **ticks_font)
plt.yticks(**ticks_font)
plt.legend(prop=legends_font)
fig_name = 'd2d-sinr'
svg_path = f'{OUTPUT_PATH}/{fig_name}.svg'
eps_path = f'{OUTPUT_PATH}/{fig_name}.eps'
print(svg_path)
# save fig
plt.savefig(svg_path)
os.system(f'magick convert {svg_path} {eps_path}')
plt.show()
```
## MUE availability
```
plt.figure(figsize=(10,7))
sns.lineplot(x=x, y=mue_avail.reshape(-1))
plt.xlabel('Steps', fontdict=x_font)
plt.ylabel('Average MUE availability', fontdict=y_font)
plt.xticks(x_ticks, **ticks_font)
plt.yticks([0., .2, .4, .6, .8, .9, .95, 1.], **ticks_font)
fig_name = 'mue-availability'
svg_path = f'{OUTPUT_PATH}/{fig_name}.svg'
eps_path = f'{OUTPUT_PATH}/{fig_name}.eps'
print(svg_path)
# save fig
plt.savefig(svg_path)
os.system(f'magick convert {svg_path} {eps_path}')
plt.show()
```
## Rewards
```
plt.figure(figsize=(10,7))
if ALGO_NAME != 'ddpg':
sns.lineplot(x=x, y=rewards[:,:,0].reshape(-1), label='Device 1')
sns.lineplot(x=x, y=rewards[:,:,1].reshape(-1), label='Device 2')
else:
sns.lineplot(x=x, y=rewards.reshape(-1))
plt.xlabel('Steps', fontdict=x_font)
plt.ylabel('Average Rewards', fontdict=y_font)
plt.xticks(x_ticks, **ticks_font)
plt.yticks(**ticks_font)
plt.legend(prop=legends_font)
fig_name = 'rewards'
svg_path = f'{OUTPUT_PATH}/{fig_name}.svg'
eps_path = f'{OUTPUT_PATH}/{fig_name}.eps'
print(svg_path)
# save fig
plt.savefig(svg_path)
os.system(f'magick convert {svg_path} {eps_path}')
plt.show()
```
```
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
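# Interactive payoff diagrams for forwards, options and structured products, driven by ipywidgets.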
class ValidationError(Exception):
pass
def check_order(*args):
def validate(fields):
for smaller, greater in zip(args[:-1], args[1:]):
if fields[greater] < fields[smaller]:
raise ValidationError("%s must not be smaller than %s" % (greater, smaller))
return validate
def range_validator(name, minimum, maximum):
def validate(fields):
if not minimum <= fields[name] <= maximum:
raise ValidationError("%s must be between %s and %s" % (name, minimum, maximum))
return validate
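# A Field is one user-editable product parameter: name, default value, optional help text, and a hide flag (hidden fields are not drawn as strike lines).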
class Field(object):
def __init__(self, name, default=0, help=None, hide=False):
self.name = name
self.default = default
self.help = help or ''
self.hide = hide
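# Base class: subclasses declare fields() and validators(), and implement payoff(spot); barriers() lists discontinuity levels for plotting.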
class Product(object):
@classmethod
def fields(cls):
return []
@classmethod
def field_dict(cls):
return {f.name: f for f in cls.fields()}
def __init__(self, **kwargs):
self.validate(kwargs)
fields = {f.name: f for f in self.fields()}
self.fields = {f.name: f.default for f in fields.values()}
for name, value in kwargs.items():
if name not in fields:
raise ValueError("Field '%s' is not recognized" % name)
self.fields[name] = value
@classmethod
def validators(cls):
def check_positive(fields):
for name, value in fields.items():
if value <= 0:
raise ValidationError("%s must be positive" % name)
return [check_positive]
@classmethod
def validate(cls, fields):
for validator in cls.validators():
validator(fields)
def barriers(self):
return []
def payoff(self, spot):
raise NotImplementedError
class Forward(Product):
@classmethod
def fields(cls):
return [Field('price', default=100)]
def payoff(self, spot):
return spot - self.fields['price']
class VanillaOption(Product):
@classmethod
def fields(cls):
return [Field('strike', default=100)]
class Call(VanillaOption):
def payoff(self, spot):
return np.maximum(0.0, spot - self.fields['strike'])
class Put(VanillaOption):
def payoff(self, spot):
return np.maximum(0.0, self.fields['strike'] - spot)
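# Mixin for products whose short leg is scaled by a leverage ratio (validated to lie in [0, 2]).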
class LeveragedProduct(Product):
@classmethod
def fields(cls):
return [Field('leverage_ratio', default=1, hide=True)]
@classmethod
def validators(cls):
return super().validators() + [range_validator('leverage_ratio', 0, 2)]
def leverage(self):
return self.fields['leverage_ratio']
class CallSpread(LeveragedProduct):
@classmethod
def fields(cls):
return [Field('long_call', default=100), Field('short_call', default=120)] + super().fields()
@classmethod
def validators(cls):
return super().validators() + [check_order('long_call', 'short_call')]
def payoff(self, spot):
return Call(strike=self.fields['long_call']).payoff(spot) \
- self.leverage() * Call(strike=self.fields['short_call']).payoff(spot)
class PutSpread(LeveragedProduct):
@classmethod
def fields(cls):
return [Field('long_put', default=120), Field('short_put', default=100)] + super().fields()
@classmethod
def validators(cls):
return super().validators() + [check_order('short_put', 'long_put')]
def payoff(self, spot):
return Put(strike=self.fields['long_put']).payoff(spot) \
- self.leverage() * Put(strike=self.fields['short_put']).payoff(spot)
class Collar(Product):
@classmethod
def fields(cls):
return [Field('call_strike', default=120), Field('put_strike', default=100)]
@classmethod
def validators(cls):
return super().validators() + [check_order('put_strike', 'call_strike')]
def payoff(self, spot):
return Call(strike=self.fields['call_strike']).payoff(spot) \
- Put(strike=self.fields['put_strike']).payoff(spot)
class Seagull(Product):
@classmethod
def fields(cls):
return [Field('short_put', default=100), Field('long_call', default=110), Field('upper_strike', default=120)]
@classmethod
def validators(cls):
return super().validators() + [check_order('short_put', 'long_call', 'upper_strike')]
def payoff(self, spot):
return Call(strike=self.fields['long_call']).payoff(spot) \
- Put(strike=self.fields['short_put']).payoff(spot) \
- Call(strike=self.fields['upper_strike']).payoff(spot)
class EnhancedForward(Product):
@classmethod
def fields(cls):
return [Field('strike', default=100), Field('upper_strike', default=120)]
@classmethod
def validators(cls):
return super().validators() + [check_order('strike', 'upper_strike')]
def payoff(self, spot):
return Forward(price=self.fields['strike']).payoff(spot) \
- Call(strike=self.fields['upper_strike']).payoff(spot)
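# Barrier products expose their barrier levels so payoff discontinuities can be drawn separately.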
class BarrierOption(Product):
@classmethod
def barrier_names(cls):
return {'barrier'}
def barriers(self):
barrier_names = self.barrier_names()
return [value for name, value in self.fields.items() if name in barrier_names]
class KOForward(LeveragedProduct, BarrierOption):
@classmethod
def fields(cls):
return [Field('strike', default=100), Field('barrier', default=120)] + super().fields()
@classmethod
def validators(cls):
return super().validators() + [check_order('strike', 'barrier')]
def payoff(self, spot):
return (Call(strike=self.fields['strike']).payoff(spot)
- self.leverage() * Put(strike=self.fields['strike']).payoff(spot)) * (
spot < self.fields['barrier'])
class ForwardExtra(BarrierOption):
@classmethod
def fields(cls):
return [Field('strike', default=120), Field('barrier', default=100)]
@classmethod
def validators(cls):
return super().validators() + [check_order('barrier', 'strike')]
def payoff(self, spot):
return Forward(price=self.fields['strike']).payoff(spot) \
* np.logical_or(spot < self.fields['barrier'], self.fields['strike'] < spot)
class CollarExtra(BarrierOption):
@classmethod
def fields(cls):
return [Field('call_strike', default=120), Field('put_strike', default=110), Field('barrier', default=100)]
@classmethod
def validators(cls):
return super().validators() + [check_order('barrier', 'put_strike', 'call_strike')]
def payoff(self, spot):
return Collar(call_strike=self.fields['call_strike'], put_strike=self.fields['put_strike']).payoff(spot) \
* np.logical_or(spot < self.fields['barrier'], self.fields['put_strike'] < spot)
from sys import stderr
import ipywidgets as widgets
from IPython.display import display
products = [Forward, Collar, Call, Put, CallSpread, PutSpread, Seagull, EnhancedForward, KOForward, ForwardExtra, CollarExtra]
products = {p.__name__: p for p in products}
plt.style.use('seaborn-whitegrid')
PLOT_SAMPLES = 1000
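# Plotting helpers: the payoff curve is drawn in segments, with dotted jumps at barrier discontinuities and dashed vertical lines at visible strikes.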
def plot_main(ax, x, y, is_dotted=False):
blue = plt.rcParams['axes.prop_cycle'].by_key()['color'][0]
ax.plot(x, y, lw=3, linestyle=':' if is_dotted else '-', color=blue, zorder=1000, label="Payoff")
def plot_payoff(product, size, **kwargs):
try:
product = product(**kwargs)
except ValidationError as ex:
stderr.write('%s\n'% ex)
return
field_dict = product.field_dict()
min_value = max(kwargs.values())
max_value = 0
for key, value in kwargs.items():
if field_dict[key].hide:
continue
min_value = min(min_value, value)
max_value = max(max_value, value)
spots = np.linspace(min_value - size, max_value + size, PLOT_SAMPLES)
payoff_values = product.payoff(spots)
fig = plt.figure(figsize=(12, 12))
ax = plt.axes()
for key, value in kwargs.items():
if field_dict[key].hide:
continue
ax.axvline(x=value, color='black', linestyle='--', alpha=0.5, label=key, lw=2)
ax.text(value, max(payoff_values), '%s = %s ' % (key, value),
rotation=90, va='top', ha='right', fontsize='x-large', color='black', alpha=0.5)
# Draw discontinuities with dashed lines
blue = plt.rcParams['axes.prop_cycle'].by_key()['color'][0]
last_idx = 0
for barrier in product.barriers():
barrier_lower_idx = np.max(np.argwhere(spots < barrier))
barrier_upper_idx = np.min(np.argwhere(spots > barrier))
assert last_idx <= barrier_lower_idx
plot_main(ax, spots[last_idx:barrier_lower_idx], payoff_values[last_idx:barrier_lower_idx])
plot_main(ax, [barrier, barrier], [payoff_values[barrier_lower_idx], payoff_values[barrier_upper_idx]],
is_dotted=True)
last_idx = barrier_upper_idx
plot_main(ax, spots[last_idx:], payoff_values[last_idx:])
ax.set_aspect('equal')
ax.set_xlim(min(spots), max(spots))
ax.tick_params(labelsize='x-large')
plt.title('Payoff function for a %s option' % product.__class__.__name__, size='xx-large')
ax.set_xlabel('Spot price at maturity', size='x-large')
ax.set_ylabel('Payoff', size='x-large')
plt.show()
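# ipywidgets front-end: selecting a product rebuilds its numeric inputs and re-plots the payoff.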
def show_widgets():
setup = True
field_widget = []
def print_price(**kwargs):
product = product_widget.value
plot_payoff(product, 20, **kwargs)
def product_changed(product):
fields = {f.name: widgets.FloatText(description=f.name, value=f.default) for f in product.fields()}
if field_widget:
field_widget[0].close()
new_i = widgets.interactive(print_price, **fields)
display(new_i)
field_widget[:] = [new_i]
product_widget = widgets.Select(options=products, description='Product:')
product_i = widgets.interactive(product_changed, product=product_widget)
display(product_i)
setup = False
show_widgets()
```
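The widget front-end is only a thin wrapper around `plot_payoff`, which can also be called directly. A minimal usage sketch (the parameter values are illustrative; the field names match the `CallSpread` definition above):
```
# Minimal usage sketch: render one payoff diagram without the ipywidgets UI.
# The keyword arguments are the CallSpread fields declared above; the values are illustrative.
plot_payoff(CallSpread, 20, long_call=100, short_call=120, leverage_ratio=1)
```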
product_i = widgets.interactive(product_changed, product=product_widget)
display(product_i)
setup = False
show_widgets()
| 0.772144 | 0.410461 |
# In Practice: Time-Series Analysis of Monitoring Data with Deep Learning
With growing health awareness and a rising number of people who exercise regularly, wearable devices such as activity trackers are becoming widespread. Because information such as heart rate can be collected from sensor devices and used to monitor health in real time, applications in the healthcare domain have been increasing in recent years. In February 2018, Cardiogram and the University of California published the results of a joint study reporting that applying deep learning to heart-rate data can predict prediabetes with high accuracy, which attracted considerable attention. Sensor devices themselves also keep improving (the Apple Watch Series 4, for example, ships with an ECG recording feature), so increasingly detailed measurements can be obtained. Against this background, efforts to collect and analyze monitoring data and to connect them to health management are expected to become ever more active.
In this chapter, we tackle the problem of detecting arrhythmia from electrocardiogram (ECG) waveform data.
## Environment Setup
This chapter uses the following libraries.
* Cupy
* Chainer
* Scipy
* Matplotlib
* Seaborn
* Pandas
* WFDB
* Scikit-learn
* Imbalanced-learn
Run the cell below (Shift + Enter) to install the required packages.
```
!apt -y -q install tree
!pip install wfdb==2.2.1 scikit-learn==0.20.1 imbalanced-learn==0.4.3
```
Once installation is complete, run the following cell to import each library and check its version.
```
import os
import random
import numpy as np
import chainer
import scipy
import pandas as pd
import matplotlib
import seaborn as sn
import wfdb
import sklearn
import imblearn
chainer.print_runtime_info()
print("Scipy: ", scipy.__version__)
print("Pandas: ", pd.__version__)
print("Matplotlib: ", matplotlib.__version__)
print("Seaborn: ", sn.__version__)
print("WFDB: ", wfdb.__version__)
print("Scikit-learn: ", sklearn.__version__)
print("Imbalanced-learn: ", imblearn.__version__)
```
To make the results in this chapter reproducible, we also fix the random seeds as introduced in Chapter 4 (4.2.4.4).
(This setting is not strictly required for the computations that follow.)
```
def reset_seed(seed=42):
random.seed(seed)
np.random.seed(seed)
if chainer.cuda.available:
chainer.cuda.cupy.random.seed(seed)
reset_seed(42)
```
## Electrocardiograms (ECG) and Arrhythmia Diagnosis
An **electrocardiogram (ECG)** is a recording, taken from the body surface, of the electrical signals transmitted through the cardiac conduction system to make the heart muscle contract in a coordinated, rhythmic way. ECG testing is widely used in the diagnosis of arrhythmia and ischemic heart disease [[1](https://en.wikipedia.org/wiki/Electrocardiography), [2](https://www.ningen-dock.jp/wp/wp-content/uploads/2013/09/d4bb55fcf01494e251d315b76738ab40.pdf)].
A standard ECG consists of 12 leads: six limb leads, namely the bipolar leads ($Ⅰ$, $Ⅱ$, $Ⅲ$) and the unipolar leads ($aV_R$, $aV_L$, $aV_F$), and six chest leads ($V_1$, $V_2$, $V_3$, $V_4$, $V_5$, $V_6$). When screening for arrhythmia in particular, diagnosis is generally based on the $Ⅱ$ and $V_1$ leads.
When the heart is working normally, the ECG shows a regular waveform known as **normal sinus rhythm (NSR)**.
Specifically, it consists of the following three main waves,
1. **P wave**: atrial depolarization (excitation of the atria)
1. **QRS complex**: ventricular depolarization (excitation of the ventricles)
1. **T wave**: ventricular repolarization (recovery of the ventricles)
which appear in this order, producing a waveform like the one shown below.

(Adapted from [[1](https://en.wikipedia.org/wiki/Electrocardiography)])
When this regular pattern is disturbed and the rhythm is judged to be abnormal, arrhythmia or other conditions are suspected, and a diagnosis is made.
## Dataset
Here we use the [MIT-BIH Arrhythmia Database (mitdb)](https://www.physionet.org/physiobank/database/mitdb/), a well-known public ECG dataset.
It contains 48 records collected from 47 patients; each record file stores about 30 minutes of two-lead ($II$, $V_1$) signal data, and every R-wave peak is annotated. (See [here](https://www.physionet.org/physiobank/database/html/mitdbdir/intro.htm) for details of the data and annotations.)
The database is maintained by [PhysioNet](https://www.physionet.org/), which provides a Python package for downloading and reading the data, so we use it to obtain the dataset.
```
dataset_root = './dataset'
download_dir = os.path.join(dataset_root, 'download')
```
First, let's download the mitdb database.
Note: if an error occurs during execution, simply run the cell again.
```
wfdb.dl_database('mitdb', dl_dir=download_dir)
```
When the download completes successfully, the message `Finished downloading files` is displayed.
Let's check the list of files.
```
print(sorted(os.listdir(download_dir)))
```
The number in each file name is the record ID. Each record has three kinds of files:
- `.dat` : signal (binary format)
- `.atr` : annotations (binary format)
- `.hea` : header (required to read the binary files)
## Data Preprocessing
This section describes the **data preprocessing** that reads the downloaded files and converts them into the input format for the machine learning model.
The preprocessing proceeds in the following steps.
1. Split the record IDs into training / evaluation sets in advance
    - Of the 48 records,
        - IDs 102, 104, 107, and 217 are excluded because their signals contain paced beats.
        - ID 114 is excluded this time because part of its waveform is inverted.
        - IDs 201 and 202 come from the same patient, so 202 is excluded.
    - The remaining 42 records are split into training and test sets (the split follows [[3](https://ieeexplore.ieee.org/document/1306572)]).
1. Load the signal files (.dat)
    - Each file stores the $Ⅱ$-lead and $V_1$-lead signals, but only the $Ⅱ$ lead is used here.
    - The sampling frequency is 360 Hz, so 360 values are recorded per second.
1. Load the annotation files (.atr)
    - Obtain the position of each R-wave peak (positions) and its label (symbols).
1. Normalize the signals
    - Transform each signal to zero mean and unit variance.
1. Segment the signals
    - Cut out a 2-second window (1 second before and 1 second after) centered on each R-wave peak.
1. Assign labels to the segments
    - The label attached to each R-wave peak is aggregated according to the table below (*); in this analysis only segments labeled as normal beats (Normal) or ventricular ectopic beats (VEB) are used for training and evaluation.
(*) This follows the standard recommended by the Association for the Advancement of Medical Instrumentation (AAMI) ([[3](https://ieeexplore.ieee.org/document/1306572)]), which groups the beat labels into five broad classes.

(Adapted from [[4](https://arxiv.org/abs/1810.04121)])
First, run the cell below to define the data preprocessing class.
The preprocessing class defines the following member functions:
* `__init__()` (constructor) : initializes variables, the train/test split rule, and the label aggregation rule
* `_load_data()` : loads the signal and annotations
* `_normalize_signal()` : scales the signal according to the `method` option
* `_segment_data()` : cuts the loaded signal and annotations into fixed-width (`window_size`) segments
* `preprocess_dataset()` : builds the training and test datasets
* `_preprocess_dataset_core()` : the main routine called inside `preprocess_dataset()`
```
class BaseECGDatasetPreprocessor(object):
def __init__(
self,
dataset_root,
window_size=720, # 2 seconds
):
self.dataset_root = dataset_root
self.download_dir = os.path.join(self.dataset_root, 'download')
self.window_size = window_size
self.sample_rate = 360.
# split list
self.train_record_list = [
'101', '106', '108', '109', '112', '115', '116', '118', '119', '122',
'124', '201', '203', '205', '207', '208', '209', '215', '220', '223', '230'
]
self.test_record_list = [
'100', '103', '105', '111', '113', '117', '121', '123', '200', '210',
'212', '213', '214', '219', '221', '222', '228', '231', '232', '233', '234'
]
# annotation
self.labels = ['N', 'V']
self.valid_symbols = ['N', 'L', 'R', 'e', 'j', 'V', 'E']
self.label_map = {
'N': 'N', 'L': 'N', 'R': 'N', 'e': 'N', 'j': 'N',
'V': 'V', 'E': 'V'
}
def _load_data(
self,
base_record,
channel=0 # [0, 1]
):
record_name = os.path.join(self.download_dir, str(base_record))
# read dat file
signals, fields = wfdb.rdsamp(record_name)
assert fields['fs'] == self.sample_rate
# read annotation file
annotation = wfdb.rdann(record_name, 'atr')
symbols = annotation.symbol
positions = annotation.sample
return signals[:, channel], symbols, positions
def _normalize_signal(
self,
signal,
method='std'
):
if method == 'minmax':
# Min-Max scaling
min_val = np.min(signal)
max_val = np.max(signal)
return (signal - min_val) / (max_val - min_val)
elif method == 'std':
# Zero mean and unit variance
signal = (signal - np.mean(signal)) / np.std(signal)
return signal
else:
raise ValueError("Invalid method: {}".format(method))
def _segment_data(
self,
signal,
symbols,
positions
):
X = []
y = []
sig_len = len(signal)
for i in range(len(symbols)):
start = positions[i] - self.window_size // 2
end = positions[i] + self.window_size // 2
if symbols[i] in self.valid_symbols and start >= 0 and end <= sig_len:
segment = signal[start:end]
assert len(segment) == self.window_size, "Invalid length"
X.append(segment)
y.append(self.labels.index(self.label_map[symbols[i]]))
return np.array(X), np.array(y)
def preprocess_dataset(
self,
normalize=True
):
# preprocess training dataset
self._preprocess_dataset_core(self.train_record_list, "train", normalize)
# preprocess test dataset
self._preprocess_dataset_core(self.test_record_list, "test", normalize)
def _preprocess_dataset_core(
self,
record_list,
mode="train",
normalize=True
):
Xs, ys = [], []
save_dir = os.path.join(self.dataset_root, 'preprocessed', mode)
for i in range(len(record_list)):
signal, symbols, positions = self._load_data(record_list[i])
if normalize:
signal = self._normalize_signal(signal)
X, y = self._segment_data(signal, symbols, positions)
Xs.append(X)
ys.append(y)
os.makedirs(save_dir, exist_ok=True)
np.save(os.path.join(save_dir, "X.npy"), np.vstack(Xs))
np.save(os.path.join(save_dir, "y.npy"), np.concatenate(ys))
```
Specify the root directory for saving the data (dataset_root) and run `preprocess_dataset()`; the preprocessed data is then saved as NumPy arrays in the designated location.
```
BaseECGDatasetPreprocessor(dataset_root).preprocess_dataset()
```
After running it, confirm that the following files have been saved:
* train/X.npy : training signals
* train/y.npy : training labels
* test/X.npy : evaluation signals
* test/y.npy : evaluation labels
```
!tree ./dataset/preprocessed
```
Next, let's load the saved files and inspect their contents.
```
X_train = np.load(os.path.join(dataset_root, 'preprocessed', 'train', 'X.npy'))
y_train = np.load(os.path.join(dataset_root, 'preprocessed', 'train', 'y.npy'))
X_test = np.load(os.path.join(dataset_root, 'preprocessed', 'test', 'X.npy'))
y_test = np.load(os.path.join(dataset_root, 'preprocessed', 'test', 'y.npy'))
```
The number of samples in each dataset is as follows:
* Training: 47,738 samples
* Evaluation: 45,349 samples
Each signal is represented as a 2 (sec) * 360 (Hz) = 720-dimensional vector.
```
print("X_train.shape = ", X_train.shape, " \t y_train.shape = ", y_train.shape)
print("X_test.shape = ", X_test.shape, " \t y_test.shape = ", y_test.shape)
```
Each label is represented as an index:
* 0 : normal beat (Normal)
* 1 : ventricular ectopic beat (VEB)
Let's count the number of samples for each label in the training dataset.
```
uniq_train, counts_train = np.unique(y_train, return_counts=True)
print("y_train count each labels: ", dict(zip(uniq_train, counts_train)))
```
We count the samples per label in the evaluation data in the same way.
```
uniq_test, counts_test = np.unique(y_test, return_counts=True)
print("y_test count each labels: ", dict(zip(uniq_test, counts_test)))
```
In both the training and evaluation data, VEB samples make up less than 10% of the total, and the vast majority are normal beats.
Next, let's visualize normal-beat and VEB signals.
```
%matplotlib inline
import matplotlib.pyplot as plt
```
The figure below shows an example of a normal beat.
You can see the P wave, QRS complex, and T wave appearing at regular intervals.
```
idx_n = np.where(y_train == 0)[0]
plt.plot(X_train[idx_n[0]])
```
The VEB waveform, on the other hand, is irregular: the shape of the R-wave peak and the distance between peaks differ from the normal example.
```
idx_s = np.where(y_train == 1)[0]
plt.plot(X_train[idx_s[0]])
```
The goal of this chapter is to build a model that captures the characteristics of the ECG signal well and predicts normal versus abnormal beats with high accuracy on new waveform samples.
The next section explains how to build such a model using deep learning.
## Time-Series Analysis with Deep Learning
### Training
First, we define a dataset class for loading the preprocessed data prepared in the previous section into Chainer.
```
class ECGDataset(chainer.dataset.DatasetMixin):
def __init__(
self,
path
):
if os.path.isfile(os.path.join(path, 'X.npy')):
self.X = np.load(os.path.join(path, 'X.npy'))
else:
raise FileNotFoundError("{}/X.npy not found.".format(path))
if os.path.isfile(os.path.join(path, 'y.npy')):
self.y = np.load(os.path.join(path, 'y.npy'))
else:
raise FileNotFoundError("{}/y.npy not found.".format(path))
def __len__(self):
return len(self.X)
def get_example(self, i):
return self.X[None, i].astype(np.float32), self.y[i]
```
Next, we define the network architecture used for training (and prediction).
Here we use the same architecture as **ResNet34**, a CNN-based network well known from image recognition tasks [[5](https://arxiv.org/abs/1512.03385)].
However, since the input signal is a 1-D array, we use 1-D convolutions, as in the genome analysis of the previous chapter, instead of the 2-D convolutions commonly used in image analysis.
```
import chainer.functions as F
import chainer.links as L
from chainer import reporter
from chainer import Variable
class BaseBlock(chainer.Chain):
def __init__(
self,
channels,
stride=1,
dilate=1
):
self.stride = stride
super(BaseBlock, self).__init__()
with self.init_scope():
self.c1 = L.ConvolutionND(1, None, channels, 3, stride, dilate, dilate=dilate)
self.c2 = L.ConvolutionND(1, None, channels, 3, 1, dilate, dilate=dilate)
if stride > 1:
self.cd = L.ConvolutionND(1, None, channels, 1, stride, 0)
self.b1 = L.BatchNormalization(channels)
self.b2 = L.BatchNormalization(channels)
def __call__(self, x):
h = F.relu(self.b1(self.c1(x)))
if self.stride > 1:
res = self.cd(x)
else:
res = x
h = res + self.b2(self.c2(h))
return F.relu(h)
class ResBlock(chainer.Chain):
def __init__(
self,
channels,
n_block,
dilate=1
):
self.n_block = n_block
super(ResBlock, self).__init__()
with self.init_scope():
self.b0 = BaseBlock(channels, 2, dilate)
for i in range(1, n_block):
bx = BaseBlock(channels, 1, dilate)
setattr(self, 'b{}'.format(str(i)), bx)
def __call__(self, x):
h = self.b0(x)
for i in range(1, self.n_block):
h = getattr(self, 'b{}'.format(str(i)))(h)
return h
class ResNet34(chainer.Chain):
def __init__(self):
super(ResNet34, self).__init__()
with self.init_scope():
self.conv1 = L.ConvolutionND(1, None, 64, 7, 2, 3)
self.bn1 = L.BatchNormalization(64)
self.resblock0 = ResBlock(64, 3)
self.resblock1 = ResBlock(128, 4)
self.resblock2 = ResBlock(256, 6)
self.resblock3 = ResBlock(512, 3)
self.fc = L.Linear(None, 2)
def __call__(self, x):
h = F.relu(self.bn1(self.conv1(x)))
h = F.max_pooling_nd(h, 3, 2)
for i in range(4):
h = getattr(self, 'resblock{}'.format(str(i)))(h)
h = F.average(h, axis=2)
h = self.fc(h)
return h
class Classifier(chainer.Chain):
def __init__(
self,
predictor,
lossfun=F.softmax_cross_entropy
):
super(Classifier, self).__init__()
with self.init_scope():
self.predictor = predictor
self.lossfun = lossfun
def __call__(self, *args):
assert len(args) >= 2
x = args[:-1]
t = args[-1]
y = self.predictor(*x)
# loss
loss = self.lossfun(y, t)
with chainer.no_backprop_mode():
# other metrics
accuracy = F.accuracy(y, t)
# reporter
reporter.report({'loss': loss}, self)
reporter.report({'accuracy': accuracy}, self)
return loss
def predict(self, x):
with chainer.function.no_backprop_mode(), chainer.using_config('train', False):
x = Variable(self.xp.asarray(x, dtype=self.xp.float32))
y = self.predictor(x)
return y
```
As preparation for running the training, we define the following functions:
- `create_train_dataset()` : wraps the training data in the `ECGDataset` class
- `create_trainer()` : performs the setup required for training and builds a Trainer object
```
from chainer import optimizers
from chainer.optimizer import WeightDecay
from chainer.iterators import MultiprocessIterator
from chainer import training
from chainer.training import extensions
from chainer.training import triggers
from chainer.backends.cuda import get_device_from_id
def create_train_dataset(root_path):
train_path = os.path.join(root_path, 'preprocessed', 'train')
train_dataset = ECGDataset(train_path)
return train_dataset
def create_trainer(
batchsize, train_dataset, nb_epoch=1,
device=0, lossfun=F.softmax_cross_entropy
):
# setup model
model = ResNet34()
train_model = Classifier(model, lossfun=lossfun)
# use Adam optimizer
optimizer = optimizers.Adam(alpha=0.001)
optimizer.setup(train_model)
optimizer.add_hook(WeightDecay(0.0001))
# setup iterator
train_iter = MultiprocessIterator(train_dataset, batchsize)
# define updater
updater = training.StandardUpdater(train_iter, optimizer, device=device)
# setup trainer
stop_trigger = (nb_epoch, 'epoch')
trainer = training.trainer.Trainer(updater, stop_trigger)
logging_attributes = [
'epoch', 'iteration',
'main/loss', 'main/accuracy'
]
trainer.extend(
extensions.LogReport(logging_attributes, trigger=(2000 // batchsize, 'iteration'))
)
trainer.extend(
extensions.PrintReport(logging_attributes)
)
trainer.extend(
extensions.ExponentialShift('alpha', 0.75, optimizer=optimizer),
trigger=(4000 // batchsize, 'iteration')
)
return trainer
```
Training is now set up, so we call these functions to create the trainer.
```
train_dataset = create_train_dataset(dataset_root)
trainer = create_trainer(256, train_dataset, nb_epoch=1, device=0)
```
Now let's start training. (It takes about 1 minute 30 seconds.)
```
%time trainer.run()
```
If training proceeds without problems, main/accuracy should reach around 0.99 (99%).
### Evaluation
To apply the trained model to the evaluation data and check its classification performance, we define the following functions:
- `create_test_dataset()` : loads the evaluation data
- `predict()` : runs inference and returns arrays of ground-truth and predicted labels
- `print_confusion_matrix()` : prints the table known as the confusion matrix from the predictions
- `print_scores()` : prints evaluation metrics computed from the predictions
```
from chainer import cuda
from sklearn.metrics import classification_report
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
def create_test_dataset(root_path):
test_path = os.path.join(root_path, 'preprocessed', 'test')
test_dataset = ECGDataset(test_path)
return test_dataset
def predict(trainer, test_dataset, batchsize, device=-1):
model = trainer.updater.get_optimizer('main').target
ys = []
ts = []
for i in range(len(test_dataset) // batchsize + 1):
if i == len(test_dataset) // batchsize:
X, t = zip(*test_dataset[i*batchsize: len(test_dataset)])
else:
X, t = zip(*test_dataset[i*batchsize:(i+1)*batchsize])
X = cuda.to_gpu(np.array(X), device)
y = model.predict(X)
y = cuda.to_cpu(y.data.argmax(axis=1))
ys.append(y)
ts.append(np.array(t))
return np.concatenate(ts), np.concatenate(ys)
def print_confusion_matrix(y_true, y_pred):
labels = sorted(list(set(y_true)))
target_names = ['Normal', 'VEB']
cmx = confusion_matrix(y_true, y_pred, labels=labels)
df_cmx = pd.DataFrame(cmx, index=target_names, columns=target_names)
plt.figure(figsize = (5,3))
sn.heatmap(df_cmx, annot=True, annot_kws={"size": 18}, fmt="d", cmap='Blues')
plt.show()
def print_scores(y_true, y_pred):
target_names = ['Normal', 'VEB']
print(classification_report(y_true, y_pred, target_names=target_names))
print("accuracy: ", accuracy_score(y_true, y_pred))
```
We prepare the evaluation dataset,
```
test_dataset = create_test_dataset(dataset_root)
```
and run predictions on it. (Prediction takes about 17 seconds.)
```
%time y_true_test, y_pred_test = predict(trainer, test_dataset, 256, 0)
```
Now let's examine the prediction results.
First, we build the **confusion matrix**, a table that summarizes the classification results. With rows as ground-truth labels and columns as predicted labels, each cell counts the following:
* top left: samples that are actually normal beats and were predicted as normal
* top right: samples that are actually normal beats but were predicted as VEB
* bottom left: samples that are actually VEB but were predicted as normal
* bottom right: samples that are actually VEB and were predicted as VEB
```
print_confusion_matrix(y_true_test, y_pred_test)
```
Next, let's display the evaluation metrics computed from the predictions.
Pay particular attention to the following scores (their formulas are summarized right after these lists):
* Precision: of the samples predicted as a given class (Normal or VEB), the fraction whose ground-truth label is that same class
* Recall: of the samples whose ground-truth label is a given class (Normal or VEB), the fraction that were predicted as that class
* F1-score: the harmonic mean of precision and recall
* Accuracy: of all samples (Normal and VEB), the fraction that were predicted correctly
Several averaged scores are also shown below the per-class scores; they mean the following:
* micro avg: scores computed from the confusion matrix as a whole, without distinguishing classes; each value equals the accuracy
* macro avg: the simple (unweighted) average of the per-class scores
* weighted avg: the average of the per-class scores weighted by the number of samples in each class
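For reference, writing $TP$, $FP$, $FN$, and $TN$ for the confusion-matrix counts with VEB taken as the positive class, these scores correspond to the standard definitions below.
$$
\mathrm{Precision} = \frac{TP}{TP + FP},\quad
\mathrm{Recall} = \frac{TP}{TP + FN},\quad
F_1 = \frac{2\,\mathrm{Precision}\cdot\mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}},\quad
\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}
$$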
```
print_scores(y_true_test, y_pred_test)
```
The prediction scores for normal beats, which make up most of the data, are high, while the scores for VEB, which has few samples, tend to be lower. This tendency is often observed with imbalanced data such as this dataset, where the class proportions are heavily skewed.
The next section introduces several ways to improve the prediction model, starting with how to deal with this class imbalance problem.
## Toward Better Accuracy
In this section we look for changes that improve accuracy by adjusting various aspects of the model built in the previous section, such as the dataset, the objective function, the network, and the preprocessing.
When analyzing data with machine learning, it is often unclear in advance which changes will actually improve accuracy, so some trial and error is unavoidable. Trying methods at random is not a good strategy, however; it is important to choose candidate approaches based on the characteristics of the dataset.
Let us start with the class imbalance problem raised in the previous section.
### Handling Class-Imbalanced Data
As mentioned above, it is well known that when a model is trained on **class-imbalanced data**, its predictions tend to be biased toward the majority class and accuracy on the minority classes can suffer. At the same time, in real-world tasks (including this dataset) it is often essential to accurately detect the few abnormal samples hidden among many normal ones. Several strategies exist for training a model with a focus on detecting the minority class.
Specifically, they include:
1. **Sampling**
    - Sample from the imbalanced dataset to build a dataset with balanced class proportions.
        - **Undersampling**: reduce the number of majority (normal) samples.
        - **Oversampling**: augment the minority (abnormal) samples.
1. **Reweighting the loss function**
    - Use a small penalty when a normal sample is misclassified as abnormal and a large penalty when an abnormal sample is misclassified as normal.
    - For example, use the inverse of the class frequencies as weights.
1. **Changing the objective (loss) function**
    - Introduce an objective that improves the prediction score for abnormal samples.
1. **Anomaly detection**
    - Assume a distribution for the normal samples and treat samples that deviate sufficiently from it as abnormal.
This section works through examples of 1 (sampling) and 3 (changing the objective function); a small sketch of option 2 follows right after this list.
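Option 2 is not used in the experiments below, but a minimal sketch is shown here for completeness. It assumes the `counts_train` array computed earlier with `np.unique` and uses the `class_weight` argument of Chainer's `F.softmax_cross_entropy`; treat it as an illustration rather than part of the original pipeline.
```
import numpy as np
import chainer.functions as F
from chainer.backends.cuda import get_array_module

def make_weighted_loss(counts):
    # Inverse-frequency class weights, normalized so that the average weight is 1.
    w = 1.0 / np.asarray(counts, dtype=np.float32)
    w = w / w.mean()
    def weighted_softmax_cross_entropy(x, t):
        xp = get_array_module(t)  # keep the weight array on the same device as the labels
        return F.softmax_cross_entropy(x, t, class_weight=xp.asarray(w))
    return weighted_softmax_cross_entropy

# Usage (hypothetical): pass it to create_trainer() in place of the default loss.
# lossfun = make_weighted_loss(counts_train)
# trainer = create_trainer(256, train_dataset, nb_epoch=1, device=0, lossfun=lossfun)
```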
#### Sampling
We combine **undersampling** and **oversampling** to remove the imbalance from the dataset.
Here the sampling proceeds in the following steps.
1. Undersample the normal-beat samples down to one quarter (all VEB samples are kept)
    * We use simple random sampling. Because of its randomness, samples that are important for classification (those near the decision boundary with the VEB samples) may be removed.
    * Several methods exist to mitigate this problem of random sampling, but we do not use them here.
1. Oversample the VEB samples until they match the number of normal-beat samples left after undersampling
    * We use SMOTE (Synthetic Minority Over-sampling TEchnique).
    * The simplest approach, duplicating data at random, easily leads to overfitting. SMOTE mitigates this by generating new points at random positions between each VEB sample and its neighboring VEB samples and adding them to the data.
To perform the sampling, we define a `SampledECGDataset` class.
We also provide a `create_sampled_train_dataset()` function that loads this class and builds the training dataset object.
```
from imblearn.datasets import make_imbalance
from imblearn.over_sampling import SMOTE
class SampledECGDataset(ECGDataset):
def __init__(
self,
path
):
super(SampledECGDataset, self).__init__(path)
_, counts = np.unique(self.y, return_counts=True)
self.X, self.y = make_imbalance(
self.X, self.y,
sampling_strategy={0: counts[0]//4, 1: counts[1]}
)
smote = SMOTE(random_state=42)
self.X, self.y = smote.fit_sample(self.X, self.y)
def create_sampled_train_dataset(root_path):
train_path = os.path.join(root_path, 'preprocessed', 'train')
train_dataset = SampledECGDataset(train_path)
return train_dataset
train_dataset = create_sampled_train_dataset(dataset_root)
```
Now, as before, let's create a trainer and run the training. (It takes about 1 minute.)
```
trainer = create_trainer(256, train_dataset, nb_epoch=2, device=0)
%time trainer.run()
```
Once training is finished, run prediction on the evaluation data and check the accuracy.
```
%time y_true_test, y_pred_test = predict(trainer, test_dataset, 256, 0)
print_confusion_matrix(y_true_test, y_pred_test)
print_scores(y_true_test, y_pred_test)
```
Compare with the previous results and check whether the sampling has improved the detection of VEB samples (in particular, recall).
(Because of the randomness of the sampling and the dependence on the initial weights, accuracy is not guaranteed to improve.)
#### Changing the Loss Function
Next, we consider improving accuracy on the few abnormal samples by **changing the loss function**. Many loss functions focusing on minority-class accuracy have been proposed; here we use one of them, **focal loss**.
Focal loss was proposed in a paper on object detection in images [[6](https://arxiv.org/abs/1708.02002)]. In one-stage object detectors, only a handful of the many candidate regions actually contain an object, so the task is class-imbalanced and training does not progress well. Focal loss was proposed to address this problem and is defined by the following equation.
$$
FL(p_t) = - (1 - p_t)^{\gamma}\log(p_t)
$$
Here $p_t$ is the output (probability) of the softmax function. When $\gamma = 0$ this equals the ordinary softmax cross-entropy loss, while for $\gamma > 0$ the relative loss of clearly classifiable (easy) samples is reduced. As a result, training is expected to focus more on the samples that are hard to classify.
The figure below plots the loss against the predicted probability of the correct class and shows how the relative loss falls as $\gamma$ is varied.

(Adapted from [[6](https://arxiv.org/abs/1708.02002)])
Now let's define the focal loss function.
```
from chainer.backends.cuda import get_array_module
def focal_loss(x, t, class_num=2, gamma=0.5, eps=1e-6):
xp = get_array_module(t)
p = F.softmax(x)
p = F.clip(p, x_min=eps, x_max=1-eps)
log_p = F.log_softmax(x)
t_onehot = xp.eye(class_num)[t.ravel()]
loss_sce = -1 * t_onehot * log_p
loss_focal = F.sum(loss_sce * (1. - p) ** gamma, axis=1)
return F.mean(loss_focal)
```
We keep the same settings as the initial training (§8.5), without the data sampling from the previous subsection, and change only the loss function to focal loss.
```
train_dataset = create_train_dataset(dataset_root)
trainer = create_trainer(256, train_dataset, nb_epoch=1, device=0, lossfun=focal_loss)
```
Let's start training. (It takes about 1 minute 30 seconds.)
```
%time trainer.run()
```
Once training is finished, check the prediction results on the evaluation data.
```
%time y_true_test, y_pred_test = predict(trainer, test_dataset, 256, 0)
print_confusion_matrix(y_true_test, y_pred_test)
print_scores(y_true_test, y_pred_test)
```
Compare these results with those of the initial model.
(If you have time, also check how changing the value of $\gamma$ affects the predictions.)
### Changing the Network Architecture
Next, we consider **changing the network architecture** used for training.
Here we extend the ResNet34 architecture used at the beginning in two ways:
1. Replace the 1-D convolutions with **1-D dilated convolutions**
    - Dilated convolutions are expected to capture features over a wider range while keeping the number of parameters small (the same motivation as in the genome analysis chapter).
    - If long-range features are not important for the task, this may not improve accuracy (and in some cases may even reduce it).
1. Add a fully connected layer before the output layer and apply **Dropout**
    - Dropout is expected to improve the generalization of the model. However, since several studies ([[7](https://arxiv.org/abs/1506.02158v6)] among others) report that simply applying dropout right after convolutional layers does not improve generalization, we apply it to the fully connected layer instead.
Now let's define the network with these extensions. (The ResBlock class was already defined when building the initial model.)
```
class DilatedResNet34(chainer.Chain):
def __init__(self):
super(DilatedResNet34, self).__init__()
with self.init_scope():
self.conv1 = L.ConvolutionND(1, None, 64, 7, 2, 3)
self.bn1 = L.BatchNormalization(64)
self.resblock0 = ResBlock(64, 3, 1)
self.resblock1 = ResBlock(128, 4, 1)
self.resblock2 = ResBlock(256, 6, 2)
self.resblock3 = ResBlock(512, 3, 4)
self.fc1 = L.Linear(None, 512)
self.fc2 = L.Linear(None, 2)
def __call__(self, x):
h = F.relu(self.bn1(self.conv1(x)))
h = F.max_pooling_nd(h, 3, 2)
for i in range(4):
h = getattr(self, 'resblock{}'.format(str(i)))(h)
h = F.average(h, axis=2)
h = F.dropout(self.fc1(h), 0.5)
h = self.fc2(h)
return h
```
With the same settings as the initial training (§8.5), we switch the network architecture to `DilatedResNet34` and train.
```
def create_trainer(
batchsize, train_dataset, nb_epoch=1,
device=0, lossfun=F.softmax_cross_entropy
):
# setup model
model = DilatedResNet34()
train_model = Classifier(model, lossfun=lossfun)
# use Adam optimizer
optimizer = optimizers.Adam(alpha=0.001)
optimizer.setup(train_model)
optimizer.add_hook(WeightDecay(0.0001))
# setup iterator
train_iter = MultiprocessIterator(train_dataset, batchsize)
# define updater
updater = training.StandardUpdater(train_iter, optimizer, device=device)
# setup trainer
stop_trigger = (nb_epoch, 'epoch')
trainer = training.trainer.Trainer(updater, stop_trigger)
logging_attributes = [
'epoch', 'iteration',
'main/loss', 'main/accuracy'
]
trainer.extend(
extensions.LogReport(logging_attributes, trigger=(2000 // batchsize, 'iteration'))
)
trainer.extend(
extensions.PrintReport(logging_attributes)
)
trainer.extend(
extensions.ExponentialShift('alpha', 0.75, optimizer=optimizer),
trigger=(4000 // batchsize, 'iteration')
)
return trainer
train_dataset = create_train_dataset(dataset_root)
test_dataset = create_test_dataset(dataset_root)
trainer = create_trainer(256, train_dataset, nb_epoch=1, device=0)
```
Now let's start training as before. (It takes about 1 minute 30 seconds.)
```
%time trainer.run()
```
Once training is finished, run prediction on the evaluation data and check the accuracy.
```
%time y_true_test, y_pred_test = predict(trainer, test_dataset, 256, 0)
print_confusion_matrix(y_true_test, y_pred_test)
print_scores(y_true_test, y_pred_test)
```
Compare these results with those of the initial model.
### Evaluating the Effect of Denoising
Finally, we consider **removing the noise** contained in the ECG.
ECG waveforms may contain the following kinds of external noise [[8](http://www.iosrjournals.org/iosr-jece/papers/ICETEM/Vol.%201%20Issue%201/ECE%2006-40-44.pdf)]:
* High frequency
    * **Electromyogram noise**
        - Electrical activity of the muscles caused by body movement can leak into the ECG.
    * **Power line interference**
        - Alternating current can be coupled into the ECG through electrostatic induction.
        - Current flowing through power wiring also generates magnetic fields, and electromagnetic induction can couple alternating current into the recording.
    * **Additive white Gaussian noise**
        - White noise enters from a variety of environmental sources.
* Low frequency
    * **Baseline wandering**
        - Poor electrode contact, sweating, body movement, and so on can make the baseline drift slowly.
When analyzing an ECG, it is common to remove such noise in a preprocessing step so that abnormal waveforms such as tachycardia and bradycardia can be identified accurately.
There are several ways to remove noise; the simplest is to apply a linear filter. Here we try denoising with a Butterworth filter, one kind of linear filter.
We define a `DenoiseECGDatasetPreprocessor` class that adds signal denoising to `BaseECGDatasetPreprocessor`.
```
from scipy.signal import butter, lfilter
class DenoiseECGDatasetPreprocessor(BaseECGDatasetPreprocessor):
def __init__(
self,
dataset_root='./',
window_size=720
):
super(DenoiseECGDatasetPreprocessor, self).__init__(
dataset_root, window_size)
def _denoise_signal(
self,
signal,
btype='low',
cutoff_low=0.2,
cutoff_high=25.,
order=5
):
nyquist = self.sample_rate / 2.
if btype == 'band':
cut_off = (cutoff_low / nyquist, cutoff_high / nyquist)
elif btype == 'high':
cut_off = cutoff_low / nyquist
elif btype == 'low':
cut_off = cutoff_high / nyquist
else:
return signal
b, a = butter(order, cut_off, analog=False, btype=btype)
return lfilter(b, a, signal)
def _segment_data(
self,
signal,
symbols,
positions
):
X = []
y = []
sig_len = len(signal)
for i in range(len(symbols)):
start = positions[i] - self.window_size // 2
end = positions[i] + self.window_size // 2
if symbols[i] in self.valid_symbols and start >= 0 and end <= sig_len:
segment = signal[start:end]
assert len(segment) == self.window_size, "Invalid length"
X.append(segment)
y.append(self.labels.index(self.label_map[symbols[i]]))
return np.array(X), np.array(y)
def prepare_dataset(
self,
denoise=False,
normalize=True
):
if not os.path.isdir(self.download_dir):
self.download_data()
# prepare training dataset
self._prepare_dataset_core(self.train_record_list, "train", denoise, normalize)
# prepare test dataset
self._prepare_dataset_core(self.test_record_list, "test", denoise, normalize)
def _prepare_dataset_core(
self,
record_list,
mode="train",
denoise=False,
normalize=True
):
Xs, ys = [], []
save_dir = os.path.join(self.dataset_root, 'preprocessed', mode)
for i in range(len(record_list)):
signal, symbols, positions = self._load_data(record_list[i])
if denoise:
signal = self._denoise_signal(signal)
if normalize:
signal = self._normalize_signal(signal)
X, y = self._segment_data(signal, symbols, positions)
Xs.append(X)
ys.append(y)
os.makedirs(save_dir, exist_ok=True)
np.save(os.path.join(save_dir, "X.npy"), np.vstack(Xs))
np.save(os.path.join(save_dir, "y.npy"), np.concatenate(ys))
```
Applying a linear filter may make it easier for the model to capture the patterns of abnormal beats. Note, however, that information important for detecting abnormal beats may be removed as well.
Linear filters are broadly classified by their frequency response, that is, by which frequency bands they block. For example:
* **Low-pass filter**: passes only low-frequency components (blocks high frequencies)
* **High-pass filter**: passes only high-frequency components (blocks low frequencies)
* **Band-pass filter**: passes only components in a specific band (blocks both low and high frequencies)

(Adapted from [[9](https://en.wikipedia.org/wiki/Filter_%28signal_processing%29)])
In mitdb, low-frequency components below 0.1 Hz and high-frequency components above 100 Hz have already been removed with a band-pass filter, so here we additionally remove high-frequency noise with a 25 Hz low-pass Butterworth filter.
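To see what this filter does, its frequency response can be inspected with `scipy.signal.freqz`. The snippet below is only an illustration (it is not part of the preprocessing class) and reuses the same filter design as `_denoise_signal()`.
```
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import butter, freqz

fs = 360.  # sampling rate of mitdb
b, a = butter(5, 25. / (fs / 2.), btype='low')  # same design as in _denoise_signal()
w, h = freqz(b, a, worN=2048)
plt.plot(w * fs / (2 * np.pi), np.abs(h))  # convert rad/sample to Hz
plt.axvline(25., color='k', linestyle='--', label='25 Hz cutoff')
plt.xlabel('Frequency (Hz)')
plt.ylabel('Gain')
plt.legend()
plt.show()
```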
Now let's run the preprocessing with the denoising option enabled.
```
DenoiseECGDatasetPreprocessor(dataset_root).prepare_dataset(denoise=True)
```
Let's visualize a waveform after the high-frequency noise has been removed.
```
X_train_d = np.load(os.path.join(dataset_root, 'preprocessed', 'train', 'X.npy'))
plt.subplots(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(X_train[idx_n[0]])
plt.subplot(1, 2, 2)
plt.plot(X_train_d[idx_n[0]])
plt.show()
```
The left plot shows the waveform before filtering, the right plot the waveform after filtering.
You should be able to confirm that the fine oscillations have been removed.
As before, let's train on the denoised data. (It takes about 1 minute 30 seconds.)
```
train_dataset = create_train_dataset(dataset_root)
test_dataset = create_test_dataset(dataset_root)
trainer = create_trainer(256, train_dataset, nb_epoch=1, device=0)
%time trainer.run()
```
Once training is finished, run prediction on the evaluation data and check the accuracy.
```
%time y_true_test, y_pred_test = predict(trainer, test_dataset, 256, 0)
print_confusion_matrix(y_true_test, y_pred_test)
print_scores(y_true_test, y_pred_test)
```
Check how the prediction accuracy changed after removing the high-frequency noise.
## Conclusion
In this chapter we worked on arrhythmia detection using a public ECG dataset.
The main points we wanted to convey are:
1. The minimum background knowledge needed to analyze ECGs
1. A basic preprocessing pipeline for monitoring data
1. How to build a CNN-based model
1. How to adapt the training method and preprocessing to the characteristics of the dataset
We also tried various techniques to improve accuracy, but in real-world tasks it is rarely obvious in advance which changes will help, so you need to search, through trial and error, for an approach that fits the problem setting.
As further work, the following directions are worth considering, for example.
* Adding information
    * Feed the $V_1$-lead signal as input in addition to the $Ⅱ$-lead signal (done in [[10](https://www.kdd.org/kdd2018/files/deep-learning-day/DLDay18_paper_16.pdf)], among others).
* Improving the preprocessing
    * Changing the segment length
        * Use longer segments as input to capture longer-term waveform information ([[4](https://arxiv.org/abs/1810.04121)] uses 10-second segments).
        * The larger input may, conversely, make training harder.
    * Resampling (see the sketch after this list)
        * Lower the sampling frequency to capture longer-term waveform information ([[4](https://arxiv.org/abs/1810.04121)] downsamples to 180 Hz).
        * The coarser waveform may affect training.
        * Without proper preprocessing, a distortion called aliasing occurs.
        * (Shrinking the input before feeding it to the model is common practice in fields such as image analysis.)
* Adding labels
    * Add SVEB (supraventricular ectopic beats) and other classes in addition to Normal and VEB.
* Changing how labels are assigned
    * For example, when a segment contains peak labels other than normal, assign that label preferentially.
* Changing the model
    * Use deeper or alternative architectures such as ResNet50, ResNet101, or DenseNet121.
    * Add an RNN-based module (such as an LSTM) after the CNN to extract long-term features (done in [[4](https://arxiv.org/abs/1810.04121)], among others).
If you have the time, please give these a try.
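As a small illustration of the resampling idea (and only as a sketch, outside the preprocessing class), `scipy.signal.decimate` applies an anti-aliasing low-pass filter before downsampling. It assumes the `X_train` array loaded earlier is still in memory; a factor of 2 would bring 360 Hz down to 180 Hz as in [[4](https://arxiv.org/abs/1810.04121)].
```
from scipy.signal import decimate

# Downsample one 720-sample segment (2 s at 360 Hz) to 180 Hz.
# decimate() low-pass filters the signal first to avoid aliasing.
segment_180hz = decimate(X_train[0], q=2)
print(segment_180hz.shape)  # (360,)
```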
Recently, several studies have also appeared that analyze large-scale monitoring data collected by the authors themselves.
* In a joint study by Cardiogram and the University of California, heart-rate data collected from activity trackers was combined with deep learning to predict prediabetes (DeepHeart) [[11](https://arxiv.org/abs/1802.02511)].
* Andrew Ng's group at Stanford University built a model that classifies $14$ waveform classes from ECG records they collected themselves and compared it against physicians [[12](https://arxiv.org/abs/1707.01836)].
Since advances in devices make it easy to collect detailed measurements, such research is expected to become increasingly active.
This concludes the chapter on time-series analysis of monitoring data. Well done.
## References
1. **Electrocardiography** Wikipedia: The Free Encyclopedia. Wikimedia Foundation, Inc. 22 July 2004. Web. 10 Aug. 2004, [[Link](https://en.wikipedia.org/wiki/Electrocardiography)]
1. **ECG Screening Assessment Manual (心電図健診判定マニュアル)**, Japan Society of Ningen Dock, April 2014, [[Link](https://www.ningen-dock.jp/wp/wp-content/uploads/2013/09/d4bb55fcf01494e251d315b76738ab40.pdf)]
1. **Automatic classification of heartbeats using ECG morphology and heartbeat interval features**, Phillip de Chazal et al., June 2004, [[Link](https://ieeexplore.ieee.org/document/1306572)]
1. **Inter-Patient ECG Classification with Convolutional and Recurrent Neural Networks**, Li Guo et al., Sep 2018, [[Link](https://arxiv.org/abs/1810.04121)]
1. **Deep Residual Learning for Image Recognition**, Kaiming He et al., Dec 2015, [[Link](https://arxiv.org/abs/1512.03385)]
1. **Focal Loss for Dense Object Detection**, Tsung-Yi Lin et al., Aug 2017, [[Link](https://arxiv.org/abs/1708.02002)]
1. **Bayesian Convolutional Neural Networks with Bernoulli Approximate Variational Inference**, Yarin Gal et al., Jun 2015, [[Link](https://arxiv.org/abs/1506.02158v6)]
1. **Noise Analysis and Different Denoising Techniques of ECG Signal - A Survey**, Aswathy Velayudhan et al., ICETEM2016, [[Link](http://www.iosrjournals.org/iosr-jece/papers/ICETEM/Vol.%201%20Issue%201/ECE%2006-40-44.pdf)]
1. **Filter (signal processing)**, Wikipedia: The Free Encyclopedia. Wikimedia Foundation, Inc. 22 July 2004. Web. 10 Aug. 2004, [[Link](https://en.wikipedia.org/wiki/Filter_%28signal_processing%29)]
1. **Arrhythmia Detection from 2-lead ECG using Convolutional Denoising Autoencoders**, Keiichi Ochiai et al., KDD2018, [[Link](https://www.kdd.org/kdd2018/files/deep-learning-day/DLDay18_paper_16.pdf)]
1. **DeepHeart: Semi-Supervised Sequence Learning for Cardiovascular Risk Prediction**, Brandon Ballinger et al., Feb 2018, [[Link](https://arxiv.org/abs/1802.02511)]
1. **Cardiologist-Level Arrhythmia Detection with Convolutional Neural Networks**, Pranav Rajpurkar et al., Jul 2017, [[Link](https://arxiv.org/abs/1707.01836)]
## Importing the Data
```
# Constants
DATASET_DIR = './data/'
GLOVE_DIR = './glove.6B/'
SAVE_DIR = './'
import os
import pandas as pd
X = pd.read_csv(os.path.join(DATASET_DIR, 'training_set_rel3.tsv'), sep='\t', encoding='ISO-8859-1')
y = X['domain1_score']
X = X.dropna(axis=1)
X = X.drop(columns=['rater1_domain1', 'rater2_domain1'])
X.head()
```
Minimum and Maximum Scores for each essay set.
```
minimum_scores = [-1, 2, 1, 0, 0, 0, 0, 0, 0]
maximum_scores = [-1, 12, 6, 3, 3, 4, 4, 30, 60]
```
## Preprocessing the Data
We will preprocess all essays and convert them to feature vectors so that they can be fed into the RNN.
These are all helper functions used to clean the essays.
```
import numpy as np
import nltk
import re
from nltk.corpus import stopwords
from gensim.models import Word2Vec
def essay_to_wordlist(essay_v, remove_stopwords):
"""Remove the tagged labels and word tokenize the sentence."""
essay_v = re.sub("[^a-zA-Z]", " ", essay_v)
words = essay_v.lower().split()
if remove_stopwords:
stops = set(stopwords.words("english"))
words = [w for w in words if not w in stops]
return (words)
def essay_to_sentences(essay_v, remove_stopwords):
"""Sentence tokenize the essay and call essay_to_wordlist() for word tokenization."""
tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
raw_sentences = tokenizer.tokenize(essay_v.strip())
sentences = []
for raw_sentence in raw_sentences:
if len(raw_sentence) > 0:
sentences.append(essay_to_wordlist(raw_sentence, remove_stopwords))
return sentences
def makeFeatureVec(words, model, num_features):
"""Make Feature Vector from the words list of an Essay."""
featureVec = np.zeros((num_features,),dtype="float32")
num_words = 0.
index2word_set = set(model.wv.index2word)
for word in words:
if word in index2word_set:
num_words += 1
featureVec = np.add(featureVec,model[word])
featureVec = np.divide(featureVec,num_words)
return featureVec
def getAvgFeatureVecs(essays, model, num_features):
"""Main function to generate the word vectors for word2vec model."""
counter = 0
essayFeatureVecs = np.zeros((len(essays),num_features),dtype="float32")
for essay in essays:
essayFeatureVecs[counter] = makeFeatureVec(essay, model, num_features)
counter = counter + 1
return essayFeatureVecs
```
## Defining the model
Here we define a 2-layer LSTM model.
Note that instead of a sigmoid activation in the output layer we use ReLU, since the training labels are not normalised (see the alternative sketch below).
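As an alternative (not used in this notebook), the scores could be normalised to [0, 1] with the per-set minimum and maximum defined above and predicted with a sigmoid output instead. The sketch below assumes the `essay_set` column from the dataset; it is only an illustration.
```
import numpy as np

def normalize_scores(scores, essay_sets, minimum_scores, maximum_scores):
    """Scale each score into [0, 1] using the min/max of its essay set."""
    lo = np.array([minimum_scores[s] for s in essay_sets], dtype='float32')
    hi = np.array([maximum_scores[s] for s in essay_sets], dtype='float32')
    return (np.asarray(scores, dtype='float32') - lo) / (hi - lo)

# e.g. y_norm = normalize_scores(y, X['essay_set'], minimum_scores, maximum_scores)
# A Dense(1, activation='sigmoid') output layer would then predict in the same range.
```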
```
from keras.layers import Embedding, LSTM, Dense, Dropout, Lambda, Flatten
from keras.models import Sequential, load_model, model_from_config
import keras.backend as K
def get_model():
"""Define the model."""
model = Sequential()
model.add(LSTM(300, dropout=0.4, recurrent_dropout=0.4, input_shape=[1, 300], return_sequences=True))
model.add(LSTM(64, recurrent_dropout=0.4))
model.add(Dropout(0.5))
model.add(Dense(1, activation='relu'))
model.compile(loss='mean_squared_error', optimizer='rmsprop', metrics=['mae'])
model.summary()
return model
```
## Training Phase
Now we train the model on the dataset.
We will use 5-Fold Cross Validation and measure the Quadratic Weighted Kappa for each fold.
We will then calculate Average Kappa for all the folds.
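For reference, quadratic weighted kappa (which `cohen_kappa_score(..., weights='quadratic')` computes) compares the observed rating matrix $O$ with the expected matrix $E$ under quadratic disagreement weights:
$$
\kappa = 1 - \frac{\sum_{i,j} w_{ij}\, O_{ij}}{\sum_{i,j} w_{ij}\, E_{ij}},
\qquad w_{ij} = \frac{(i-j)^2}{(N-1)^2},
$$
where $N$ is the number of possible ratings.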
```
from sklearn.model_selection import KFold
from sklearn.linear_model import LinearRegression
from sklearn.metrics import cohen_kappa_score
cv = KFold(n_splits=5, shuffle=True)
results = []
y_pred_list = []
count = 1
for traincv, testcv in cv.split(X):
print("\n--------Fold {}--------\n".format(count))
X_test, X_train, y_test, y_train = X.iloc[testcv], X.iloc[traincv], y.iloc[testcv], y.iloc[traincv]
train_essays = X_train['essay']
test_essays = X_test['essay']
sentences = []
for essay in train_essays:
# Obtaining all sentences from the training essays.
sentences += essay_to_sentences(essay, remove_stopwords = True)
# Initializing variables for word2vec model.
num_features = 300
min_word_count = 40
num_workers = 4
context = 10
downsampling = 1e-3
print("Training Word2Vec Model...")
model = Word2Vec(sentences, workers=num_workers, size=num_features, min_count = min_word_count, window = context, sample = downsampling)
model.init_sims(replace=True)
model.wv.save_word2vec_format('word2vecmodel.bin', binary=True)
clean_train_essays = []
# Generate training and testing data word vectors.
for essay_v in train_essays:
clean_train_essays.append(essay_to_wordlist(essay_v, remove_stopwords=True))
trainDataVecs = getAvgFeatureVecs(clean_train_essays, model, num_features)
clean_test_essays = []
for essay_v in test_essays:
clean_test_essays.append(essay_to_wordlist( essay_v, remove_stopwords=True ))
testDataVecs = getAvgFeatureVecs( clean_test_essays, model, num_features )
trainDataVecs = np.array(trainDataVecs)
testDataVecs = np.array(testDataVecs)
    # Reshaping train and test vectors to 3 dimensions. (1 represents one timestep)
trainDataVecs = np.reshape(trainDataVecs, (trainDataVecs.shape[0], 1, trainDataVecs.shape[1]))
testDataVecs = np.reshape(testDataVecs, (testDataVecs.shape[0], 1, testDataVecs.shape[1]))
lstm_model = get_model()
lstm_model.fit(trainDataVecs, y_train, batch_size=64, epochs=50)
#lstm_model.load_weights('./model_weights/final_lstm.h5')
y_pred = lstm_model.predict(testDataVecs)
    # Save the model trained on the final (5th) fold.
if count == 5:
lstm_model.save('./model_weights/final_lstm.h5')
# Round y_pred to the nearest integer.
y_pred = np.around(y_pred)
    # Evaluate the predictions with the Quadratic Weighted Kappa metric.
result = cohen_kappa_score(y_test.values,y_pred,weights='quadratic')
print("Kappa Score: {}".format(result))
results.append(result)
count += 1
```
The average Quadratic Weighted Kappa across the five folds is about 0.961, which is a very strong result on this dataset.
```
print("Average Kappa score after a 5-fold cross validation: ",np.around(np.array(results).mean(),decimals=4))
```
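Since the final fold saves both `word2vecmodel.bin` and `final_lstm.h5`, a new essay can be scored at inference time using the preprocessing helpers defined earlier. A hedged sketch; the file paths and the essay text are placeholders:
```
# Illustrative inference sketch: load the saved word vectors and LSTM, then
# score one new essay. Reuses essay_to_wordlist() defined above; the essay
# text and file paths are placeholders, and at least one word is assumed to
# be in the Word2Vec vocabulary.
import numpy as np
from gensim.models import KeyedVectors
from keras.models import load_model

word_vectors = KeyedVectors.load_word2vec_format('word2vecmodel.bin', binary=True)
lstm_model = load_model('./model_weights/final_lstm.h5')

new_essay = "Replace this placeholder with the essay to be scored."
words = essay_to_wordlist(new_essay, remove_stopwords=True)

# Average the word vectors, mirroring makeFeatureVec() against KeyedVectors.
vecs = [word_vectors[w] for w in words if w in word_vectors]
features = np.mean(vecs, axis=0).reshape(1, 1, 300)

print(np.around(lstm_model.predict(features)))
```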