markdown | code | output | license | path | repo_name |
---|---|---|---|---|---|
ASSIGNMENT 1) Replicate the lesson code. I recommend that you [do not copy-paste](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit). Get caught up to where we got our example in class and then try to take things further. How close to "pixel perfect" can you make the lecture graph? Once you have something that you're proud of, share your graph in the cohort channel and move on to the second exercise. 2) Reproduce another example from [FiveThirtyEight's shared data repository](https://data.fivethirtyeight.com/).**WARNING**: There are a lot of very custom graphs and tables at the above link. I **highly** recommend not trying to reproduce any that look like a table of values or anything really different from the graph types that we are already familiar with. Search through the posts until you find a graph type that you are more or less familiar with: histogram, bar chart, stacked bar chart, line chart, [seaborn relplot](https://seaborn.pydata.org/generated/seaborn.relplot.html), etc. Recreating some of the graphics that 538 uses would be a lot easier in Adobe Photoshop/Illustrator than with matplotlib. - If you put in some time to find a graph that looks "easy" to replicate, you'll probably find that it's not as easy as you thought. - If you start with a graph that looks hard to replicate, you'll probably run up against a brick wall and be disappointed with your afternoon. | # Your Work Here
from IPython.display import display, Image
url = 'https://fivethirtyeight.com/wp-content/uploads/2017/09/mehtahickey-inconvenient-0830-1.png'
example = Image(url=url, width=400)
display(example)
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as num
import pandas as pd
plt.style.use('fivethirtyeight')
fake = pd.Series([38, 3, 2, 1, 2, 4, 6, 5, 5, 33], index=range(1, 11))
fake.plot.bar(color='#ed713a', width=0.9);
style_list = ['default', 'classic'] + sorted(style for style in plt.style.available if style != 'classic')
style_list
fake2 = pd.Series(
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
2, 2, 2,
3, 3, 3,
4, 4,
5, 5, 5,
6, 6, 6, 6,
7, 7, 7, 7, 7,
8, 8, 8, 8,
9, 9, 9, 9,
10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10,
])
fake2.head()
plt.style.use('fivethirtyeight')
fake2.value_counts().sort_index().plot.bar(color='#ed713a', width=0.9)
display(example)
fig = plt.figure(facecolor='black')
ax = fake2.value_counts().sort_index().plot.bar(color='#ed713a', width=0.9)
ax.set(facecolor='black')
plt.xlabel('Rating', color='white')
plt.ylabel('Percent of total votes', color='white')
display(example)
list(range(0, 50, 10))
fig = plt.figure(facecolor='white', figsize=(5, 4))
ax = fake.plot.bar(color='#fc7703', width=0.9)
ax.set(facecolor='white') | _____no_output_____ | MIT | Sam_Kumar_LS_DS_123_Make_Explanatory_Visualizations_Assignment.ipynb | sampath11/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling-OLD |
STRETCH OPTIONS 1) Reproduce one of the following using the matplotlib or seaborn libraries:- [thanksgiving-2015](https://fivethirtyeight.com/features/heres-what-your-part-of-america-eats-on-thanksgiving/) - [candy-power-ranking](https://fivethirtyeight.com/features/the-ultimate-halloween-candy-power-ranking/) - or another example of your choice! 2) Make more charts! Choose a chart you want to make, from [Visual Vocabulary - Vega Edition](http://ft.com/vocabulary). Find the chart in an example gallery of a Python data visualization library:- [Seaborn](http://seaborn.pydata.org/examples/index.html)- [Altair](https://altair-viz.github.io/gallery/index.html)- [Matplotlib](https://matplotlib.org/gallery.html)- [Pandas](https://pandas.pydata.org/pandas-docs/stable/visualization.html) Reproduce the chart. [Optionally, try the "Ben Franklin Method."](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) If you want, experiment and make changes. Take notes. Consider sharing your work with your cohort! | # More Work Here | _____no_output_____ | MIT | Sam_Kumar_LS_DS_123_Make_Explanatory_Visualizations_Assignment.ipynb | sampath11/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling-OLD |
Basic Operations on Images | Drawing on Images
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import cv2 | _____no_output_____ | MIT | 8.Open CV/Basics of open cv.ipynb | Jiggu07/FACE-MASK-RECOGNITION |
Create a black image which will act as a template. | image_blank = np.zeros(shape=(512,512,3),dtype=np.int16) | _____no_output_____ | MIT | 8.Open CV/Basics of open cv.ipynb | Jiggu07/FACE-MASK-RECOGNITION |
image_blank Display the black image. | plt.imshow(image_blank) | _____no_output_____ | MIT | 8.Open CV/Basics of open cv.ipynb | Jiggu07/FACE-MASK-RECOGNITION |
Function & Attributes The generalised function for drawing shapes on images is: cv2.<shape>(image, pt1, pt2, color, thickness), where <shape> is line, rectangle, etc. There are some common arguments which are passed to these functions:* The image on which shapes are to be drawn* The coordinates of the shape, from pt1 (top left) to pt2 (bottom right)* Color: the color of the shape that is to be drawn. It is passed as a tuple, e.g. (255,0,0). For grayscale, it will be the scale of brightness.* The thickness of the geometrical figure. 1. Straight Line Drawing a straight line across an image requires specifying the points through which the line will pass. | # Draw a diagonal magenta line with a thickness of 10 px
line_red = cv2.line(image_blank, (0, 0), (500, 500), (255, 0, 255), 10)
plt.imshow(line_red)
# Draw a diagonal green line with thickness of 5 px
line_green = cv2.line(image_blank,(0,0),(511,511),(0,255,0),5)
plt.imshow(line_green)
# Draw a diagonal green line with thickness of 10 px
line_green = cv2.line(image_blank,(0,0),(511,511),(0,255,0),10)
plt.imshow(line_green) | _____no_output_____ | MIT | 8.Open CV/Basics of open cv.ipynb | Jiggu07/FACE-MASK-RECOGNITION |
2. Rectangle For a rectangle, we need to specify the top left and the bottom right coordinates. | # Draw a green rectangle with a thickness of 25 px
rectangle = cv2.rectangle(image_blank, (0, 0), (510, 128), (0, 255, 0), 25)
plt.imshow(rectangle) | _____no_output_____ | MIT | 8.Open CV/Basics of open cv.ipynb | Jiggu07/FACE-MASK-RECOGNITION |
3. Circle | For a circle, we need to pass its center coordinates and radius value. Let us draw a circle inside the rectangle drawn above
img1 = cv2.circle(image_blank, (447, 0), 250, (255, 0, 0), 50)  # a thickness of -1 would draw a filled circle
plt.imshow(img1) | _____no_output_____ | MIT | 8.Open CV/Basics of open cv.ipynb | Jiggu07/FACE-MASK-RECOGNITION |
Writing on Images Adding text to images is similar to drawing shapes on them, but you need to specify certain arguments before doing so:* The text to be written* The coordinates of the text: the position given is the bottom-left corner of the text string* Font type and scale* Other attributes like color, thickness and line type. Normally the line type that is used is lineType = cv2.LINE_AA. | font = cv2.FONT_HERSHEY_SIMPLEX
text = cv2.putText(img1, 'Alok', (10, 500), font, 4, (255, 255, 255), 2)
plt.imshow(text)
These were the minor operations that can be done on images using OpenCV. Feel free to experiment with the shapes and text. | _____no_output_____ | MIT | 8.Open CV/Basics of open cv.ipynb | Jiggu07/FACE-MASK-RECOGNITION |
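The drawing and text sections above describe the generic `cv2.<shape>(image, pt1, pt2, color, thickness)` pattern and the `cv2.putText` arguments. Here is a small self-contained recap sketch (my own illustration, not part of the original notebook), using a fresh `uint8` canvas:

```python
import numpy as np
import cv2
import matplotlib.pyplot as plt

canvas = np.zeros((512, 512, 3), dtype=np.uint8)              # fresh black template
cv2.line(canvas, (0, 0), (511, 511), (0, 255, 0), 5)          # diagonal line, 5 px
cv2.rectangle(canvas, (50, 50), (300, 200), (255, 0, 0), 3)   # rectangle outline, 3 px
cv2.circle(canvas, (400, 400), 60, (0, 0, 255), -1)           # thickness -1 = filled circle
cv2.putText(canvas, 'demo', (60, 300), cv2.FONT_HERSHEY_SIMPLEX,
            2, (255, 255, 255), 2, cv2.LINE_AA)               # text anchored at its bottom-left corner
plt.imshow(canvas)
plt.show()
```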
- Install the Colab Code package which will let you run jupyter lab in colab using ngrok tunnel | !pip install colabcode | Collecting colabcode
Downloading https://files.pythonhosted.org/packages/5d/d5/4f9db2a4fe80f507c9c44c2cd4fd614234c1fe0d77e8f1101329997a19cd/colabcode-0.0.9-py3-none-any.whl
Collecting pyngrok>=5.0.0
Downloading https://files.pythonhosted.org/packages/ea/63/e086f165125e9bf2e71c0db2955911baaaa0af8947ab5c7b3771bdf4d4d5/pyngrok-5.0.0.tar.gz
Requirement already satisfied: PyYAML in /usr/local/lib/python3.6/dist-packages (from pyngrok>=5.0.0->colabcode) (3.13)
Building wheels for collected packages: pyngrok
Building wheel for pyngrok (setup.py) ... done
Created wheel for pyngrok: filename=pyngrok-5.0.0-cp36-none-any.whl size=18780 sha256=07ed17d9fab927c3428ea36d27ae42e2178a40a5a99a8cf46b800921362b6394
Stored in directory: /root/.cache/pip/wheels/95/df/23/af8dde08c3fcdc7b966adcacef48ab29aa3b0b1860df5d2b79
Successfully built pyngrok
Installing collected packages: pyngrok, colabcode
Successfully installed colabcode-0.0.9 pyngrok-5.0.0
| MIT | Jupyter_Lab_Colab.ipynb | debparth/Colab_tips_tricks |
- Paste your **Authorization Code** below in **authtoken** after signing up from [**here**](https://dashboard.ngrok.com/signup) - Click on the **XXXXXXXXX.ngrok.io** link below after running the cell | from colabcode import ColabCode
ColabCode(authtoken="", mount_drive=True, lab=True) | Code Server can be accessed on: NgrokTunnel: "http://9998767f9579.ngrok.io" -> "http://localhost:10000"
[2020-10-28T19:04:23.315Z] info Using user-data-dir ~/.local/share/code-server
[2020-10-28T19:04:23.321Z] info code-server 3.6.1 62735da69466a444561ab9b1115dc7c4d496d455
[2020-10-28T19:04:23.322Z] info Using config file ~/.config/code-server/config.yaml
[2020-10-28T19:04:23.327Z] info HTTP server listening on http://127.0.0.1:10000
[2020-10-28T19:04:23.327Z] info - No authentication
[2020-10-28T19:04:23.327Z] info - Not serving HTTPS
| MIT | Jupyter_Lab_Colab.ipynb | debparth/Colab_tips_tricks |
--- Day 3: Binary Diagnostic --- [](https://mybinder.org/v2/gh/oddrationale/AdventOfCode2021FSharp/main?urlpath=lab%2Ftree%2FDay03.ipynb) The submarine has been making some odd creaking noises, so you ask it to produce a diagnostic report just in case.The diagnostic report (your puzzle input) consists of a list of binary numbers which, when decoded properly, can tell you many useful things about the conditions of the submarine. The first parameter to check is the power consumption.You need to use the binary numbers in the diagnostic report to generate two new binary numbers (called the gamma rate and the epsilon rate). The power consumption can then be found by multiplying the gamma rate by the epsilon rate.Each bit in the gamma rate can be determined by finding the most common bit in the corresponding position of all numbers in the diagnostic report. For example, given the following diagnostic report:001001111010110101111010101111001111110010000110010001001010Considering only the first bit of each number, there are five 0 bits and seven 1 bits. Since the most common bit is 1, the first bit of the gamma rate is 1.The most common second bit of the numbers in the diagnostic report is 0, so the second bit of the gamma rate is 0.The most common value of the third, fourth, and fifth bits are 1, 1, and 0, respectively, and so the final three bits of the gamma rate are 110.So, the gamma rate is the binary number 10110, or 22 in decimal.The epsilon rate is calculated in a similar way; rather than use the most common bit, the least common bit from each position is used. So, the epsilon rate is 01001, or 9 in decimal. Multiplying the gamma rate (22) by the epsilon rate (9) produces the power consumption, 198.Use the binary numbers in your diagnostic report to calculate the gamma rate and epsilon rate, then multiply them together. What is the power consumption of the submarine? (Be sure to represent your answer in decimal, not binary.) | let input = File.ReadAllLines @"input/03.txt"
#!time
input
|> Seq.map (fun line -> line.ToCharArray() |> Seq.map (fun char -> char |> string |> int))
|> Seq.transpose
|> Seq.map (fun bits -> if (bits |> Seq.sum) * 2 > input.Length then (1, 0) else (0, 1))
|> Seq.fold (fun (gamma, epsilon) (g, e) -> gamma + string g, epsilon + string e) (String.Empty, String.Empty)
|> fun (gamma, epsilon) -> Convert.ToInt32(gamma, 2) * Convert.ToInt32(epsilon, 2) | _____no_output_____ | MIT | Day03.ipynb | oddrationale/AdventOfCode2021FSharp |
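To make the bit-counting idea from the puzzle description concrete, here is an illustrative sketch in Python (not part of the original F# notebook) that computes gamma and epsilon for the sample report given above:

```python
sample = ["00100", "11110", "10110", "10111", "10101", "01111",
          "00111", "11100", "10000", "11001", "00010", "01010"]

gamma, epsilon = "", ""
for column in zip(*sample):                     # one tuple of bits per position
    ones = sum(bit == "1" for bit in column)
    if ones * 2 > len(sample):                  # 1 is the most common bit
        gamma, epsilon = gamma + "1", epsilon + "0"
    else:
        gamma, epsilon = gamma + "0", epsilon + "1"

print(int(gamma, 2) * int(epsilon, 2))          # 22 * 9 = 198 for the sample report
```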
--- Part Two --- Next, you should verify the life support rating, which can be determined by multiplying the oxygen generator rating by the CO2 scrubber rating.Both the oxygen generator rating and the CO2 scrubber rating are values that can be found in your diagnostic report - finding them is the tricky part. Both values are located using a similar process that involves filtering out values until only one remains. Before searching for either rating value, start with the full list of binary numbers from your diagnostic report and consider just the first bit of those numbers. Then:Keep only numbers selected by the bit criteria for the type of rating value for which you are searching. Discard numbers which do not match the bit criteria.If you only have one number left, stop; this is the rating value for which you are searching.Otherwise, repeat the process, considering the next bit to the right.The bit criteria depends on which type of rating value you want to find:To find oxygen generator rating, determine the most common value (0 or 1) in the current bit position, and keep only numbers with that bit in that position. If 0 and 1 are equally common, keep values with a 1 in the position being considered.To find CO2 scrubber rating, determine the least common value (0 or 1) in the current bit position, and keep only numbers with that bit in that position. If 0 and 1 are equally common, keep values with a 0 in the position being considered.For example, to determine the oxygen generator rating value using the same example diagnostic report from above:Start with all 12 numbers and consider only the first bit of each number. There are more 1 bits (7) than 0 bits (5), so keep only the 7 numbers with a 1 in the first position: 11110, 10110, 10111, 10101, 11100, 10000, and 11001.Then, consider the second bit of the 7 remaining numbers: there are more 0 bits (4) than 1 bits (3), so keep only the 4 numbers with a 0 in the second position: 10110, 10111, 10101, and 10000.In the third position, three of the four numbers have a 1, so keep those three: 10110, 10111, and 10101.In the fourth position, two of the three numbers have a 1, so keep those two: 10110 and 10111.In the fifth position, there are an equal number of 0 bits and 1 bits (one each). So, to find the oxygen generator rating, keep the number with a 1 in that position: 10111.As there is only one number left, stop; the oxygen generator rating is 10111, or 23 in decimal.Then, to determine the CO2 scrubber rating value from the same example above:Start again with all 12 numbers and consider only the first bit of each number. There are fewer 0 bits (5) than 1 bits (7), so keep only the 5 numbers with a 0 in the first position: 00100, 01111, 00111, 00010, and 01010.Then, consider the second bit of the 5 remaining numbers: there are fewer 1 bits (2) than 0 bits (3), so keep only the 2 numbers with a 1 in the second position: 01111 and 01010.In the third position, there are an equal number of 0 bits and 1 bits (one each). So, to find the CO2 scrubber rating, keep the number with a 0 in that position: 01010.As there is only one number left, stop; the CO2 scrubber rating is 01010, or 10 in decimal.Finally, to find the life support rating, multiply the oxygen generator rating (23) by the CO2 scrubber rating (10) to get 230.Use the binary numbers in your diagnostic report to calculate the oxygen generator rating and CO2 scrubber rating, then multiply them together. What is the life support rating of the submarine? 
(Be sure to represent your answer in decimal, not binary.) | let mostCommonBit bits =
    match (bits |> Seq.sum) with
    | sum when sum * 2 >= (bits |> Seq.length) -> 1
    | _ -> 0

let leastCommonBit bits =
    match (bits |> Seq.sum) with
    | sum when sum * 2 >= (bits |> Seq.length) -> 0
    | _ -> 1
let filter bitCriteria pos (input: seq<string>) =
    let column =
        input
        |> Seq.map (fun line -> line.ToCharArray() |> Seq.map (fun char -> char |> string |> int))
        |> Seq.transpose
        |> Seq.item pos
        |> Seq.cache
    let filterBit =
        column
        |> bitCriteria
    let criteria =
        column
        |> Seq.map (fun bit -> bit = filterBit)
    if input |> Seq.length = 1 then
        input
    else
        input
        |> Seq.zip criteria
        |> Seq.filter (fun (c, _) -> c)
        |> Seq.map (fun (_, line) -> line)
#!time
let oxygenGeneratorRating =
    [0..input.[0].Length-1]
    |> Seq.fold (fun acc pos -> filter mostCommonBit pos acc) input
    |> fun result -> Convert.ToInt32(result |> Seq.exactlyOne, 2)

let co2ScrubbingRating =
    [0..input.[0].Length-1]
    |> Seq.fold (fun acc pos -> filter leastCommonBit pos acc) input
    |> fun result -> Convert.ToInt32(result |> Seq.exactlyOne, 2)

oxygenGeneratorRating * co2ScrubbingRating | _____no_output_____ | MIT | Day03.ipynb | oddrationale/AdventOfCode2021FSharp |
Getting Predictions and Prediction Explanations**Author**: Thodoris Petropoulos**Label**: Model Deployment ScopeThe scope of this notebook is to provide instructions on how to get predictions and prediction explanations out of a trained model using the Python API. BackgroundThe main ways you can get predictions out of DataRobot using Python would be the modeling API and the prediction API.**Modeling API**: You can use the modelling API if you use Python or R and there are multiple ways you can interact with it.**Prediction API**: Any project can be called with the Prediction API if you have prediction servers. This is a simple REST API. Click on a model in the UI, then "Deploy Model" and "Activate now". You'll have access to a Python code snippet to help you interact with it. You can also deploy the model through the python API.For the purposes of this tutorial, we will focus on the Modeling API. Note that this particular method of scoring utilizes modeling workers. This means that if someone is using these workers for modeling, your prediction is going to have to wait. This method of scoring is good for testing but not for deployment. For actual deployment, please deploy the model as a REST API through DataRobot's UI or through the API. Requirements- Python version 3.7.3- DataRobot API version 2.19.0. Small adjustments might be needed depending on the Python version and DataRobot API version you are using.Full documentation of the Python package can be found here: https://datarobot-public-api-client.readthedocs-hosted.comIt is assumed you already have a DataRobot Project object and a DataRobot Model object. Import Libraries | import datarobot as dr | _____no_output_____ | Apache-2.0 | Making Predictions/Python/Getting Predictions and Prediction Explanations.ipynb | hcchengithub/examples-for-data-scientists |
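The notebook assumes a DataRobot `Project` and `Model` object already exist. As a hedged illustration of how they are typically obtained with the modeling API (the endpoint, token, and IDs below are placeholders, and the exact call signatures should be checked against your installed `datarobot` client version):

```python
import datarobot as dr

# Placeholder values -- substitute your own endpoint, API token, and IDs.
dr.Client(endpoint='https://app.datarobot.com/api/v2', token='YOUR_API_TOKEN')

project = dr.Project.get('YOUR_PROJECT_ID')
model = dr.Model.get(project='YOUR_PROJECT_ID', model_id='YOUR_MODEL_ID')
```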
Requesting PredictionsBefore actually requesting predictions, you should upload the dataset you wish to predict via Project.upload_dataset. Previously uploaded datasets can be seen under Project.get_datasets. When uploading the dataset you can provide the path to a local file, a file object, raw file content, a pandas.DataFrame object, or the url to a publicly available dataset. | #Uploading prediction dataset
dataset_from_path = project.upload_dataset('path/file')
#Request predictions
predict_job = model.request_predictions(dataset_from_path.id)
#Waiting for prediction calculations
predictions = predict_job.get_result_when_complete()
predictions.head() | _____no_output_____ | Apache-2.0 | Making Predictions/Python/Getting Predictions and Prediction Explanations.ipynb | hcchengithub/examples-for-data-scientists |
Requesting Prediction ExplanationsIn order to create PredictionExplanations for a particular model and dataset, you must first Compute feature impact for the model via dr.Model.get_or_request_feature_impact() | model.get_or_request_feature_impact()
pei = dr.PredictionExplanationsInitialization.create(project.id, model.id)
#Wait for results of Prediction Explanations
pei.get_result_when_complete()
pe_job = dr.PredictionExplanations.create(project.id, model.id, dataset_from_path.id)
#Waiting for Job to Complete
pe = pe_job.get_result_when_complete()
df_pe = pe.get_all_as_dataframe()
df_pe.head() | _____no_output_____ | Apache-2.0 | Making Predictions/Python/Getting Predictions and Prediction Explanations.ipynb | hcchengithub/examples-for-data-scientists |
Time Series Projects CaveatsPrediction datasets are uploaded as normal predictions. However, when uploading a prediction dataset, a new parameter forecastPoint can be specified. The forecast point of a prediction dataset identifies the point in time relative which predictions should be generated, and if one is not specified when uploading a dataset, the server will choose the most recent possible forecast point. The forecast window specified when setting the partitioning options for the project determines how far into the future from the forecast point predictions should be calculated.**Important Note**:When uploading a dataset for Time Series projects scoring, you need to include the actual values from previous dates depending on the feature derivation setup. For example, if feature derivation window is -10 to -1 days and you want to forecast sales for the next 3 days, your dataset would look like this:| date | sales | Known_in_advance_feature ||------------|-------|--------------------------|| 01/01/2019 | 130 | AAA || 02/01/2019 | 123 | VVV || 03/01/2019 | 412 | BBB || 04/01/2019 | 321 | DDD || 05/01/2019 | 512 | DDD || 06/01/2019 | 623 | VVV || 07/01/2019 | 356 | CCC || 08/01/2019 | 133 | AAA || 09/01/2019 | 356 | CCC || 10/01/2019 | 654 | DDD || 11/01/2019 | | BBB || 12/01/2019 | | CCC || 13/01/2019 | | DDD |DataRobot will detect your forecast point as 10/01/2019 and then it will calculate lag features and make predictions for the missing dates. Getting Predictions from a DataRobot DeploymentIf you have used MLOps to deploy a model (DataRobot or Custom), you will have access to an API which you can call using an API Client. Below is a python script of an API Client. You can create your own API Client in the language of your choice! | """
Usage:
python datarobot-predict.py <input-file.csv>
This example uses the requests library which you can install with:
pip install requests
We highly recommend that you update SSL certificates with:
pip install -U urllib3[secure] certifi
"""
import sys
import json
import requests
API_URL = 'Find this in Deployment -> Overview -> Summary -> Endpoint'
API_KEY = 'YOUR_API_KEY'
DATAROBOT_KEY = 'Find this in Deployment -> Predictions -> Prediction API -> Single mode -> on top of the code sample'
DEPLOYMENT_ID = 'YOUR_DEPLOYMENT_ID'
MAX_PREDICTION_FILE_SIZE_BYTES = 52428800 # 50 MB
class DataRobotPredictionError(Exception):
    """Raised if there are issues getting predictions from DataRobot"""


def make_datarobot_deployment_predictions(data, deployment_id):
    """
    Make predictions on data provided using DataRobot deployment_id provided.
    See docs for details:
    https://app.eu.datarobot.com/docs/users-guide/predictions/api/new-prediction-api.html
    Parameters
    ----------
    data : str
        Feature1,Feature2
        numeric_value,string
    deployment_id : str
        The ID of the deployment to make predictions with.
    Returns
    -------
    Response schema:
    https://app.eu.datarobot.com/docs/users-guide/predictions/api/new-prediction-api.html#response-schema
    Raises
    ------
    DataRobotPredictionError if there are issues getting predictions from DataRobot
    """
    # Set HTTP headers. The charset should match the contents of the file.
    headers = {
        'Content-Type': 'text/plain; charset=UTF-8',
        'Authorization': 'Bearer {}'.format(API_KEY),
        'DataRobot-Key': DATAROBOT_KEY,
    }
    url = API_URL.format(deployment_id=deployment_id)
    # Make API request for predictions
    predictions_response = requests.post(
        url,
        data=data,
        headers=headers,
    )
    _raise_dataroboterror_for_status(predictions_response)
    # Return a Python dict following the schema in the documentation
    return predictions_response.json()


def _raise_dataroboterror_for_status(response):
    """Raise DataRobotPredictionError if the request fails along with the response returned"""
    try:
        response.raise_for_status()
    except requests.exceptions.HTTPError:
        err_msg = '{code} Error: {msg}'.format(
            code=response.status_code, msg=response.text)
        raise DataRobotPredictionError(err_msg)
def main(filename, deployment_id):
    """
    Return an exit code on script completion or error. Codes > 0 are errors to the shell.
    Also useful as a usage demonstration of
    `make_datarobot_deployment_predictions(data, deployment_id)`
    """
    if not filename:
        print(
            'Input file is required argument. '
            'Usage: python datarobot-predict.py <input-file.csv>')
        return 1
    data = open(filename, 'rb').read()
    data_size = sys.getsizeof(data)
    if data_size >= MAX_PREDICTION_FILE_SIZE_BYTES:
        print(
            'Input file is too large: {} bytes. '
            'Max allowed size is: {} bytes.'.format(
                data_size, MAX_PREDICTION_FILE_SIZE_BYTES))
        return 1
    try:
        predictions = make_datarobot_deployment_predictions(data, deployment_id)
    except DataRobotPredictionError as exc:
        print(exc)
        return 1
    print(json.dumps(predictions, indent=4))
    return 0


if __name__ == "__main__":
    filename = sys.argv[1]
    sys.exit(main(filename, DEPLOYMENT_ID))
| _____no_output_____ | Apache-2.0 | Making Predictions/Python/Getting Predictions and Prediction Explanations.ipynb | hcchengithub/examples-for-data-scientists |
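Returning to the time-series caveats above, here is a hedged sketch of how such a scoring dataset might be assembled with pandas. The column names follow the example table, and the future `sales` values are left empty (NaN) for DataRobot to forecast:

```python
import numpy as np
import pandas as pd

# Historical rows carry actual sales; future rows leave sales empty for prediction.
history = pd.DataFrame({
    'date': pd.date_range('2019-01-01', periods=10, freq='D'),
    'sales': [130, 123, 412, 321, 512, 623, 356, 133, 356, 654],
    'Known_in_advance_feature': ['AAA', 'VVV', 'BBB', 'DDD', 'DDD',
                                 'VVV', 'CCC', 'AAA', 'CCC', 'DDD'],
})
future = pd.DataFrame({
    'date': pd.date_range('2019-01-11', periods=3, freq='D'),
    'sales': np.nan,
    'Known_in_advance_feature': ['BBB', 'CCC', 'DDD'],
})
scoring_dataset = pd.concat([history, future], ignore_index=True)
scoring_dataset.to_csv('prediction_dataset.csv', index=False)
```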
Double Machine Learning: Summarized Data and Interpretability Double Machine Learning (DML) is an algorithm that applies arbitrary machine learning methods to fit the treatment and response, then uses a linear model to predict the response residuals from the treatment residuals. | %load_ext autoreload
%autoreload 2
# Helper imports
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
%matplotlib inline
import seaborn as sns | _____no_output_____ | BSD-3-Clause | notebooks/Weighted Double Machine Learning Examples.ipynb | lwschm/EconML |
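To make the residual-on-residual idea from the introduction concrete, here is a simplified, hedged sketch of the partialling-out step using plain scikit-learn (no summarized data, weighting, or the EconML estimators used below, so it is only illustrative):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
W = rng.binomial(1, .5, size=(5000, 5))                   # controls
T = rng.binomial(1, 1 / (1 + np.exp(-W[:, 0])))           # confounded binary treatment
y = 2.0 * T + W[:, 0] + rng.normal(size=5000)             # outcome, true effect = 2.0

# Residualize outcome and treatment on the controls with flexible ML models (out-of-fold).
y_res = y - cross_val_predict(RandomForestRegressor(n_estimators=50), W, y, cv=3)
t_res = T - cross_val_predict(RandomForestClassifier(n_estimators=50), W, T, cv=3,
                              method='predict_proba')[:, 1]

# Final stage: a linear regression of outcome residuals on treatment residuals.
theta = LinearRegression(fit_intercept=False).fit(t_res.reshape(-1, 1), y_res).coef_[0]
print(theta)  # should land near the true effect of 2.0
```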
Generating Raw Data | import scipy.special
np.random.seed(123)
n=10000 # number of raw samples
d=10 # number of binary features + 1
# Generating random segments aka binary features. We will use features 1,...,4 for heterogeneity.
# The rest for controls. Just as an example.
X = np.random.binomial(1, .5, size=(n, d))
# The first column of X is the treatment. Generating an imbalanced A/B test
X[:, 0] = np.random.binomial(1, scipy.special.expit(X[:, 1]))
# Generating an outcome with treatment effect heterogeneity. The first binary feature creates heterogeneity
# We also have confounding on the first variable. We also have heteroskedastic errors.
y = (-1 + 2 * X[:, 1]) * X[:, 0] + X[:, 1] + (1*X[:, 1] + 1)*np.random.normal(0, 1, size=(n,)) | _____no_output_____ | BSD-3-Clause | notebooks/Weighted Double Machine Learning Examples.ipynb | lwschm/EconML |
Creating Summarized Data For each segment, we split the data in two and create one summarized copy for each split. The summarized copy contains the number of samples that were summarized and the variance of the observations for the summarized copies. Optimally we would want two copies per segment, as I'm creating here, but with many segments the approach would work ok even with a single copy per segment. | from econml.tests.test_statsmodels import _summarize
X_sum = np.unique(X, axis=0)
n_sum = np.zeros(X_sum.shape[0])
# The _summarize function performs the summary operation and returns the summarized data
# For each segment we have two copies.
X1, X2, y1, y2, X1_sum, X2_sum, y1_sum, y2_sum, n1_sum, n2_sum, var1_sum, var2_sum = _summarize(X, y)
# We concatenate the two copies data
X_sum = np.vstack([X1_sum, X2_sum]) # first coordinate is treatment, the rest are features
y_sum = np.concatenate((y1_sum, y2_sum)) # outcome
n_sum = np.concatenate((n1_sum, n2_sum)) # number of summarized points
var_sum = np.concatenate((var1_sum, var2_sum)) # variance of the summarized points
splits = (np.arange(len(y1_sum)), np.arange(len(y1_sum), len(y_sum))) # indices of the two summarized copies | _____no_output_____ | BSD-3-Clause | notebooks/Weighted Double Machine Learning Examples.ipynb | lwschm/EconML |
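As a rough illustration of what the summarization step produces, here is a pandas sketch that reuses the `X` and `y` generated above (the `_summarize` helper is internal to EconML's test suite, so this only approximates the idea of one record per segment with a count and a variance):

```python
import pandas as pd

df_raw = pd.DataFrame(X, columns=[f'x{i}' for i in range(X.shape[1])])
df_raw['y'] = y

# One record per segment: mean outcome, number of raw rows, and outcome variance.
summary = (df_raw
           .groupby([f'x{i}' for i in range(X.shape[1])])['y']
           .agg(['mean', 'count', 'var'])
           .reset_index())
summary.head()
```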
Applying the LinearDML | from econml.sklearn_extensions.linear_model import WeightedLassoCV
from econml.dml import LinearDML
from sklearn.linear_model import LogisticRegressionCV
# One can replace model_y and model_t with any scikit-learn regressor and classifier correspondingly
# as long as it accepts the sample_weight keyword argument at fit time.
est = LinearDML(model_y=WeightedLassoCV(cv=3),
model_t=LogisticRegressionCV(cv=3),
discrete_treatment=True)
est.fit(y_sum, X_sum[:, 0], X=X_sum[:, 1:5], W=X_sum[:, 5:],
sample_weight=n_sum, sample_var=var_sum)
# Treatment Effect of particular segments
est.effect(np.array([[1, 0, 0, 0]])) # effect of segment with features [1, 0, 0, 0]
# Confidence interval for effect
est.effect_interval(np.array([[1, 0, 0, 0]]), alpha=.05) # effect of segment with features [1, 0, 0, 0]
# Getting the coefficients of the linear CATE model together with the corresponding feature names
print(np.array(list(zip(est.cate_feature_names(['A', 'B', 'C', 'D']), est.coef_)))) | [['A' '2.0151755553187574']
['B' '0.07589941486626034']
['C' '-0.026742049958516114']
['D' '-0.12871399676275952']]
| BSD-3-Clause | notebooks/Weighted Double Machine Learning Examples.ipynb | lwschm/EconML |
Non-Linear CATE Models with Polynomial Features | from econml.sklearn_extensions.linear_model import WeightedLassoCV
from econml.dml import LinearDML
from sklearn.linear_model import LogisticRegressionCV
from sklearn.preprocessing import PolynomialFeatures
# One can replace model_y and model_t with any scikit-learn regressor and classifier correspondingly
# as long as it accepts the sample_weight keyword argument at fit time.
est = LinearDML(model_y=WeightedLassoCV(cv=3),
model_t=LogisticRegressionCV(cv=3),
featurizer=PolynomialFeatures(degree=2, interaction_only=True, include_bias=False),
discrete_treatment=True)
est.fit(y_sum, X_sum[:, 0], X=X_sum[:, 1:5], W=X_sum[:, 5:],
sample_weight=n_sum, sample_var=var_sum)
# Getting the confidence intervals of the coefficients and the intercept of the CATE model
# together with the corresponding feature names.
feat_names = est.cate_feature_names(['A', 'B', 'C', 'D'])
point_int = est.intercept_
point = est.coef_
lower_int, upper_int = est.intercept__interval(alpha=0.01)
lower, upper = est.coef__interval(alpha=0.01)
yerr = np.zeros((2, point.shape[0]))
yerr[0, :] = point - lower
yerr[1, :] = upper - point
with sns.axes_style('darkgrid'):
    fig, ax = plt.subplots(1, 1)
    x = np.arange(1, 1 + len(point))
    plt.errorbar(np.concatenate(([0], x)), np.concatenate(([point_int], point)),
                 np.hstack([np.array([[point_int - lower_int], [upper_int - point_int]]), yerr]), fmt='o')
    ax.set_xticks(np.concatenate(([0], x)))
    ax.set_xticklabels([1] + list(feat_names), rotation='vertical', fontsize=18)
    ax.set_ylabel('coef')
    plt.show()
import itertools
# Getting the confidence intervals of the CATE(x) for different x vectors
fnames = np.array(['A', 'B', 'C', 'D'])
lst = list(itertools.product([0, 1], repeat=4))
point = []
lower = []
upper = []
feat_names = []
for x in lst:
    feat_names.append(" ".join(fnames[np.array(x) > 0]))
    x = np.array(x).reshape((1, -1))
    point.append(est.effect(x)[0])
    lb, ub = est.effect_interval(x, alpha=.01)
    lower.append(lb[0])
    upper.append(ub[0])
feat_names = np.array(feat_names)
point = np.array(point)
lower = np.array(lower)
upper = np.array(upper)
yerr = np.zeros((2, point.shape[0]))
yerr[0, :] = point - lower
yerr[1, :] = upper - point
with sns.axes_style('darkgrid'):
    fig, ax = plt.subplots(1, 1, figsize=(20, 5))
    x = np.arange(len(point))
    stat_sig = (lower > 0) | (upper < 0)
    plt.errorbar(x[stat_sig], point[stat_sig], yerr[:, stat_sig], fmt='o', label='stat_sig')
    plt.errorbar(x[~stat_sig], point[~stat_sig], yerr[:, ~stat_sig], fmt='o', color='red', label='insig')
    ax.set_xticks(x)
    ax.set_xticklabels(feat_names, rotation='vertical', fontsize=18)
    ax.set_ylabel('coef')
    plt.legend()
    plt.show() | _____no_output_____ | BSD-3-Clause | notebooks/Weighted Double Machine Learning Examples.ipynb | lwschm/EconML |
Non-Linear CATE Models with Forests | from econml.dml import CausalForestDML
from sklearn.ensemble import GradientBoostingRegressor, GradientBoostingClassifier
# One can replace model_y and model_t with any scikit-learn regressor and classifier correspondingly
# as long as it accepts the sample_weight keyword argument at fit time.
est = CausalForestDML(model_y=GradientBoostingRegressor(n_estimators=30, min_samples_leaf=30),
model_t=GradientBoostingClassifier(n_estimators=30, min_samples_leaf=30),
discrete_treatment=True,
n_estimators=1000,
min_samples_leaf=2,
min_impurity_decrease=0.001,
verbose=0, min_weight_fraction_leaf=.03)
est.fit(y_sum, X_sum[:, 0], X=X_sum[:, 1:5], W=X_sum[:, 5:],
sample_weight=n_sum, sample_var=None)
import itertools
# Getting the confidence intervals of the CATE(x) for different x vectors
fnames = np.array(['A', 'B', 'C', 'D'])
lst = list(itertools.product([0, 1], repeat=4))
point = []
lower = []
upper = []
feat_names = []
for x in lst:
    feat_names.append(" ".join(fnames[np.array(x) > 0]))
    x = np.array(x).reshape((1, -1))
    point.append(est.effect(x)[0])
    lb, ub = est.effect_interval(x, alpha=.01)
    lower.append(lb[0])
    upper.append(ub[0])
feat_names = np.array(feat_names)
point = np.array(point)
lower = np.array(lower)
upper = np.array(upper)
yerr = np.zeros((2, point.shape[0]))
yerr[0, :] = point - lower
yerr[1, :] = upper - point
with sns.axes_style('darkgrid'):
    fig, ax = plt.subplots(1, 1, figsize=(20, 5))
    x = np.arange(len(point))
    stat_sig = (lower > 0) | (upper < 0)
    plt.errorbar(x[stat_sig], point[stat_sig], yerr[:, stat_sig], fmt='o', label='stat_sig')
    plt.errorbar(x[~stat_sig], point[~stat_sig], yerr[:, ~stat_sig], fmt='o', color='red', label='insig')
    ax.set_xticks(x)
    ax.set_xticklabels(feat_names, rotation='vertical', fontsize=18)
    ax.set_ylabel('coef')
    plt.legend()
    plt.show() | _____no_output_____ | BSD-3-Clause | notebooks/Weighted Double Machine Learning Examples.ipynb | lwschm/EconML |
Tree Interpretation of the CATE Model | from econml.cate_interpreter import SingleTreeCateInterpreter
intrp = SingleTreeCateInterpreter(include_model_uncertainty=True, max_depth=2, min_samples_leaf=1)
# We interpret the CATE models behavior on the distribution of heterogeneity features
intrp.interpret(est, X_sum[:, 1:5])
# exporting to a dot file
intrp.export_graphviz(out_file='cate_tree.dot', feature_names=['A', 'B', 'C', 'D'])
# or we can directly render. Requires the graphviz python library
intrp.render(out_file='cate_tree', format='pdf', view=True, feature_names=['A', 'B', 'C', 'D'])
# or we can also plot inline with matplotlib. a bit uglier
plt.figure(figsize=(25, 5))
intrp.plot(feature_names=['A', 'B', 'C', 'D'], fontsize=12)
plt.show() | C:\ProgramData\Anaconda3\lib\site-packages\ipykernel_launcher.py:4: UserWarning: Matplotlib is currently using agg, which is a non-GUI backend, so cannot show the figure.
after removing the cwd from sys.path.
| BSD-3-Clause | notebooks/Weighted Double Machine Learning Examples.ipynb | lwschm/EconML |
Tree Based Treatment Policy Based on CATE Model | from econml.cate_interpreter import SingleTreePolicyInterpreter
intrp = SingleTreePolicyInterpreter(risk_level=0.05, max_depth=3, min_samples_leaf=1, min_impurity_decrease=.001)
# We find a tree based treatment policy based on the CATE model
# sample_treatment_costs is the cost of treatment. Policy will treat if effect is above this cost.
# It can also be an array that has a different cost for each sample. In case treating different segments
# has different cost.
intrp.interpret(est, X_sum[:, 1:5],
sample_treatment_costs=0)
# exporting to a dot file
intrp.export_graphviz(out_file='cate_tree.dot', feature_names=['A', 'B', 'C', 'D'])
# or we can directly render. Requires the graphviz python library
intrp.render(out_file='policy_tree', format='pdf', view=True, feature_names=['A', 'B', 'C', 'D'])
# or we can also plot inline with matplotlib. a bit uglier
plt.figure(figsize=(25, 5))
intrp.plot(feature_names=['A', 'B', 'C', 'D'], fontsize=14)
plt.show() | C:\ProgramData\Anaconda3\lib\site-packages\ipykernel_launcher.py:4: UserWarning: Matplotlib is currently using agg, which is a non-GUI backend, so cannot show the figure.
after removing the cwd from sys.path.
| BSD-3-Clause | notebooks/Weighted Double Machine Learning Examples.ipynb | lwschm/EconML |
Appendix: Amendment To make estimation even more precise, one should simply choose the two splits used during the crossfit part of Double Machine Learning so that each summarized copy of a segment ends up in a separate split. We can do this as follows: | from econml.sklearn_extensions.linear_model import WeightedLassoCV
from econml.dml import LinearDML
from sklearn.linear_model import LogisticRegressionCV
# One can replace model_y and model_t with any scikit-learn regressor and classifier correspondingly
# as long as it accepts the sample_weight keyword argument at fit time.
est = LinearDML(model_y=WeightedLassoCV(cv=3),
model_t=LogisticRegressionCV(cv=3),
discrete_treatment=True,
cv=[(splits[0], splits[1]), (splits[1], splits[0])]) # we input custom fold structure
est.fit(y_sum, X_sum[:, 0], X=X_sum[:, 1:5], W=X_sum[:, 5:],
sample_weight=n_sum, sample_var=var_sum)
# Treatment Effect of particular segments
est.effect(np.array([[1, 0, 0, 0]])) # effect of segment with features [1, 0, 0, 0]
# Confidence interval for effect
est.effect_interval(np.array([[1, 0, 0, 0]])) # effect of segment with features [1, 0, 0, 0] | _____no_output_____ | BSD-3-Clause | notebooks/Weighted Double Machine Learning Examples.ipynb | lwschm/EconML |
Working with remote analysis Columns in remote dataset: Classification, Year, Period, Period Desc., Aggregate Level, Is Leaf Code, Trade Flow Code, Trade Flow, Reporter Code, Reporter, Reporter ISO, Partner Code, Partner, Partner ISO, Commodity Code, Commodity, Qty Unit Code, Qty Unit, Qty, Netweight (kg), Trade Value (US$), Flag | # Our request was denied, we will get an error
private_dataset_ptr.get()
total_sum = 0
for row in private_dataset_ptr:
    if row[6] == 1:  # Trade Flow Code 1 == Import
        print("This row was imported")
        total_sum += row[-2]

print(f'The total value of all Canadian Imports in this dataset amounts to USD${total_sum}')
for i in private_dataset_ptr:
    print(i) | <syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13de11160>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x10454d5e0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13ddeb5b0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13e3ca670>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13d7f5ee0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13dbc8340>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13e3ca670>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13ddeba00>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13d7f84f0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13dbc81c0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13dbc8340>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13ddeba00>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13e3ca790>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13ddeba00>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x10454d5e0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13ddeb130>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13e3ca670>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13d7f5cd0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13e3ca670>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13e2289a0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13de11160>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13dbc8340>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13de11dc0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13dbc82e0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13de11670>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13dbc81c0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x10454d5e0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13ddeb130>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13d7f5cd0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13ddebb80>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13ddebfd0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13d7f84f0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13de112e0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13dbc82e0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13de11220>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13ddeb5b0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13d7f84f0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x10454d5e0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13dbc8340>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13ddeb130>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13ddeb5b0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13dbc8340>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13dbc82e0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13dbc8340>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13e3ca670>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13e2289a0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13e3ca670>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13dbc81c0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13d7f5ee0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13e2289a0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13dbc82e0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13e2289a0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13d7f5cd0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13ddeb5b0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13db56160>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x10454d5e0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13d7f5cd0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13ddeb130>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13d7f5cd0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13ddeba00>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13e3ca790>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13dbc81c0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13ddebb80>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13db56160>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13d7f5ee0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13de11dc0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13dbc82e0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13e3ca790>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13d7f84f0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13ddebfd0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13de11670>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13dbc8340>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13d7f84f0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13db56160>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13e3ca670>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13dbc81c0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13ddeba00>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13e3ca670>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x10454d5e0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13de11310>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13d7f84f0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13db56160>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13dbc82e0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13dbc8340>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13ddeba00>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13d7f5ee0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13e3ca670>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13d7f5ee0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13ddeb130>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13ddebfd0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13e3ca790>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13dbc82e0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13ddeba00>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13dbc8340>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13d7f5cd0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13d7f84f0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13ddeb130>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13d7f5ee0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13e2289a0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13db56160>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13e3ca790>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13dbc8340>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13d7f84f0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13e2289a0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13e3ca790>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13ddebb80>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13e3ca790>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13dbc82e0>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13dbc8340>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13ddeb130>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13dbc8340>
<syft.proxy.syft.core.tensor.tensor.TensorPointer object at 0x13e3ca790>
| Apache-2.0 | notebooks/Experimental/Ishan/ADP Demo/Old Versions/Final Demo DataScientist.ipynb | Noob-can-Compile/PySyft |
Distribution of publication count for Dmel TF genesFor each TF gene, count the number of *curated* publications, using data from GO and Monarch | import ontobio.golr.golr_associations as ga
# Fetch all Dmel TF genes
DNA_BINDING_TF = 'GO:0003700'
DMEL = 'NCBITaxon:7227'
tf_genes = ga.get_subjects_for_object(object=DNA_BINDING_TF, subject_taxon=DMEL)
len(tf_genes)
# Routine to go to GO and Monarch to fetch all annotations for a gene
def get_pubs_for_gene(g):
    # Monarch
    r = ga.search_associations(subject=g, rows=-1)
    pubs = set()
    for a in r['associations']:
        pl = a['publications']
        if pl is not None:
            pubs.update([p['id'] for p in pl if p['id'].startswith('PMID')])
    # GO
    r = ga.search_associations(subject=g, rows=-1, object_category='function')
    for a in r['associations']:
        pl = a['reference']
        if pl is not None:
            pubs.update([p for p in pl if p.startswith('PMID')])
    return pubs
len(get_pubs_for_gene(tf_genes[0]))
# find all gene,numberOfPub pairs
pairs = []
for g in tf_genes:
    np = len(get_pubs_for_gene(g))
    pairs.append((g, np))
# Check
vals = [np for _,np in pairs]
vals[0:5]
# Check
tf_genes_with_no_pubs = [g for g,np in pairs if np==0]
tf_genes_with_no_pubs
# genes with fewer than 5 pubs
[g for g,np in pairs if np < 5]
import matplotlib.pyplot as plt
%matplotlib inline
# Histogram
plt.hist(vals, bins=40)
plt.ylabel('No of genes')
plt.xlabel('No of pubs')
plt.show()
# Save results
import csv
with open('gene-pubs.csv', 'w', newline='') as csvfile:
    w = csv.writer(csvfile, delimiter=',')
    for g, np in pairs:
        w.writerow([g, np])
| _____no_output_____ | BSD-3-Clause | notebooks/TF_Pub_Analysis.ipynb | alliance-genome/ontobio |
1-5.1 Python Intro conditionals, type, and mathematics extended - **conditionals: `elif`**- **casting** - basic math operators -----> Student will be able to - **code more than two choices using `elif`** - **gather numeric input using type casting** - perform subtraction, multiplication and division operations in code Concepts conditional `elif`[]( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/a2ac5f4b-0400-4a60-91d5-d350c3cc0515/Unit1_Section5.1-elif.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/a2ac5f4b-0400-4a60-91d5-d350c3cc0515/Unit1_Section5.1-elif.vtt","srclang":"en","kind":"subtitles","label":"english"}]) a little review - **`if`** means "**if** a condition exists then do some task." **`if`** is usually followed by **`else`** - **`else`** means "**or else** after we have tested **if**, then do an alternative task" When there is a need to test for multiple conditions there is **`elif`**- **`elif`** statement follows **`if`**, and means **"else, if "** another condition exists do something else- **`elif`** can be used many times- **`else`** is used after the last test condition (**`if`** or **`elif`**) in psuedo code **If** it is raining bring an umbrella or **Else If** (`elif`) it is snowing bring a warm coat or **Else** go as usual Like **`else`**, the **`elif`** only executes when the previous conditional is False Examples | # [ ] review the code then run testing different inputs
# WHAT TO WEAR
weather = input("Enter weather (sunny, rainy, snowy): ")
if weather.lower() == "sunny":
print("Wear a t-shirt")
elif weather.lower() == "rainy":
print("Bring an umbrella and boots")
elif weather.lower() == "snowy":
print("Wear a warm coat and hat")
else:
print("Sorry, not sure what to suggest for", weather)
# [ ] review the code then run testing different inputs
# SECRET NUMBER GUESS
secret_num = "2"
guess = input("Enter a guess for the secret number (1-3): ")
if guess.isdigit() == False:
    print("Invalid: guess should only use digits")
elif guess == "1":
    print("Guess is too low")
elif guess == secret_num:
    print("Guess is right")
elif guess == "3":
    print("Guess is too high")
else:
    print(guess, "is not a valid guess (1-3)") | _____no_output_____ | MIT | Python Absolute Beginner/Module_3_4_Absolute_Beginner.ipynb | serah-wif/pythonteachingcode |
Task 1 Program: Shirt Sale Complete program using `if, elif, else`- Get user input for variable size (S, M, L)- reply with each shirt size and price (Small = \$ 6, Medium = \$ 7, Large = \$ 8)- if the reply is other than S, M, L, give a message for not available- *optional*: add additional sizes | # [ ] code and test SHIRT SALE
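# A sample solution sketch (added for illustration; prices follow the task description above):
size = input("Enter shirt size (S, M, L): ")
if size.upper() == "S":
    print("Small = $6")
elif size.upper() == "M":
    print("Medium = $7")
elif size.upper() == "L":
    print("Large = $8")
else:
    print("Sorry,", size, "is not available")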
| _____no_output_____ | MIT | Python Absolute Beginner/Module_3_4_Absolute_Beginner.ipynb | serah-wif/pythonteachingcode |
Concepts castingCasting is the conversion from one data type to another Such as converting from **`str`** to **`int`**.[]( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/4cbf7f96-9ddd-4962-88a8-71081d7d5ef6/Unit1_Section5.1-casting-input.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/4cbf7f96-9ddd-4962-88a8-71081d7d5ef6/Unit1_Section5.1-casting-input.vtt","srclang":"en","kind":"subtitles","label":"english"}]) `int()`the **`int()`** function can convert stings that represent whole counting numbers into integers and strip decimals to convert float numbers to integers- `int("1") = 1` the string representing the integer character `"1"`, cast to a number - `int(5.1) = 5` the decimal (float), `5.1`, truncated into a non-decimal (integer) - `int("5.1") = ValueError` `"5.1"` isn't a string representation of integer, `int()` can cast only strings representing integer values Example | weight1 = '60' # a string
weight2 = 170 # an integer
# add 2 integers
total_weight = int(weight1) + weight2
print(total_weight) | 230
| MIT | Python Absolute Beginner/Module_3_4_Absolute_Beginner.ipynb | serah-wif/pythonteachingcode |
Task 2 casting with `int()` & `str()` | str_num_1 = "11"
str_num_2 = "15"
int_num_3 = 10
# [ ] Add the 3 numbers as integers and print the result
str_num_1 = "11"
str_num_2 = "15"
int_num_3 = 10
# [ ] Add the 3 numbers as test strings and print the result
| _____no_output_____ | MIT | Python Absolute Beginner/Module_3_4_Absolute_Beginner.ipynb | serah-wif/pythonteachingcode |
Task 2 cont... Program: adding using `int` casting- **[ ]** initialize **`str_integer`** variable to a **string containing characters of an integer** (quotes) - **[ ]** initialize **`int_number`** variable with an **integer value** (no quotes)- **[ ]** initialize **`number_total`** variable and **add int_number + str_integer** using **`int`** casting- **[ ]** print the sum (**`number_total`**) | # [ ] code and test: adding using int casting
str_integer = "2"
int_number = 10
number_total = int(str_integer) + int_number
print(number_total)
| _____no_output_____ | MIT | Python Absolute Beginner/Module_3_4_Absolute_Beginner.ipynb | serah-wif/pythonteachingcode |
Concepts `input()` strings that represent numbers can be "cast" to integer values Example | # [ ] review and run code
student_age = input('enter student age (integer): ')
age_next_year = int(student_age) + 1
print('Next year student will be',age_next_year)
# [ ] review and run code
# cast to int at input
student_age = int(input('enter student age (integer): '))
age_in_decade = student_age + 10
print('In a decade the student will be', age_in_decade) | _____no_output_____ | MIT | Python Absolute Beginner/Module_3_4_Absolute_Beginner.ipynb | serah-wif/pythonteachingcode |
Task 3 Program: adding calculator- get input of 2 **integer** numbers - cast the input and print the input followed by the result - Output Example: **`9 + 13 = 22`** Optional: check if input .isdigit() before trying integer addition to avoid errors in casting invalid inputs | # [ ] code and test the adding calculator
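# A sample solution sketch (added for illustration), including the optional .isdigit() check:
num1 = input("Enter first integer: ")
num2 = input("Enter second integer: ")
if num1.isdigit() and num2.isdigit():
    print(num1, "+", num2, "=", int(num1) + int(num2))
else:
    print("Invalid input: please enter integers using digits only")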
| _____no_output_____ | MIT | Python Absolute Beginner/Module_3_4_Absolute_Beginner.ipynb | serah-wif/pythonteachingcode |
Read Data We run into memory issues using the code block below:

```python
data = pd.read_json('../data/17.04_association_data.json', orient='records', typ='frame', lines=True, numpy=True)
```

Thus, I have turned to another library to iteratively load the JSON file into memory. | # Convert the JSON data to a list of strings. I can then parse the strings
# using usjon later.
filename = '../data/17.04_association_data.json'
with open(filename, 'r+') as f:
    data = f.readlines()
data = [x.rstrip() for x in data]
len(data)
from pprint import pprint
pprint(json.loads(data[0])) | {'association_score': {'datasources': {'cancer_gene_census': 0.90830650599492,
'chembl': 0.825695006743774,
'europepmc': 0.30565916633482804,
'eva': 0.905780555555555,
'eva_somatic': 0.0,
'expression_atlas': 0.190300397794741,
'gene2phenotype': 1.0,
'gwas_catalog': 0.0,
'intogen': 0.0,
'phenodigm': 0.14626161111111102,
'reactome': 1,
'uniprot': 1,
'uniprot_literature': 1},
'datatypes': {'affected_pathway': 1.0,
'animal_model': 0.14626161111111102,
'genetic_association': 1.0,
'known_drug': 0.825695006743774,
'literature': 0.30565916633482804,
'rna_expression': 0.190300397794741,
'somatic_mutation': 0.90830650599492},
'overall': 1.0},
'disease': {'efo_info': {'label': 'neoplasm',
'path': [['EFO_0000616']],
'therapeutic_area': {'codes': [], 'labels': []}},
'id': 'EFO_0000616'},
'evidence_count': {'datasources': {'cancer_gene_census': 159.0,
'chembl': 91.0,
'europepmc': 4551.0,
'eva': 6.0,
'eva_somatic': 27.0,
'expression_atlas': 12.0,
'gene2phenotype': 1.0,
'gwas_catalog': 0.0,
'intogen': 19.0,
'phenodigm': 5.0,
'reactome': 26.0,
'uniprot': 37.0,
'uniprot_literature': 9.0},
'datatypes': {'affected_pathway': 26.0,
'animal_model': 5.0,
'genetic_association': 53.0,
'known_drug': 91.0,
'literature': 4551.0,
'rna_expression': 12.0,
'somatic_mutation': 205.0},
'total': 4943.0},
'id': 'ENSG00000121879-EFO_0000616',
'is_direct': True,
'target': {'gene_info': {'name': 'phosphatidylinositol-4,5-bisphosphate '
'3-kinase catalytic subunit alpha',
'symbol': 'PIK3CA'},
'id': 'ENSG00000121879'}}
| MIT | notebooks/01-data-exploration.ipynb | ericmjl/target-prediction |
From observation, I'm seeing that the `datatypes` key-value dictionary under the `association_score` data dictionary looks like the thing that is used for data analysis. On the other hand, there's an `evidence_count` thing as well - I think that one is the so-called "raw data". What was used in the paper should be the `association_score -> datatypes` dictionary. | from tqdm import tqdm
records = []
for d in tqdm(data):
# Get the datatype out.
d = json.loads(d)
record = d['association_score']['datatypes']
# Add the target symbol to the record.
record['target'] = d['target']['gene_info']['symbol']
record['target_id'] = d['target']['id']
# Add the disease ID to the record.
record['disease'] = d['disease']['id']
record['disease_efo_label'] = d['disease']['efo_info']['label']
records.append(record) | 100%|██████████| 2673321/2673321 [00:44<00:00, 60570.96it/s]
| MIT | notebooks/01-data-exploration.ipynb | ericmjl/target-prediction |
Let's write this to the "feather" format - it'll let us load the dataframe really quickly in other notebooks. | pd.DataFrame(records).to_feather('../data/association_score_data_types.feather') | _____no_output_____ | MIT | notebooks/01-data-exploration.ipynb | ericmjl/target-prediction |
Just to test, let's reload the dataframe. | df = pd.read_feather('../data/association_score_data_types.feather')
df.head() | _____no_output_____ | MIT | notebooks/01-data-exploration.ipynb | ericmjl/target-prediction |
Great! Sanity check passed :).

Exploratory Analysis. Let's go on to some exploratory analysis of the data. I'd like to first see how many of each target type is represented in the dataset.

In the paper, for each target, the GSK research group used a simple "mean" of all evidence strengths across all diseases for a given target. I wasn't very satisfied with this, as I'm concerned about variability across diseases. Thus, to start, I will begin with a "coefficient of variation" computation, which will give us a good measure of the spread relative to the mean. If the spread (measured by standard deviation) is greater than the mean, we should see CV > 1. Intuitively, I think this may indicate problems with using a simple mean.

To ensure that we don't get any `NaN` values after the computation, I will replace all zero-valued data with an infinitesimally small number, $ 10^{-6} $. | df_cv = df.replace(0, 1E-6).groupby('target').std() / df.replace(0, 1E-6).groupby('target').mean()
df_cv.sample(10) | _____no_output_____ | MIT | notebooks/01-data-exploration.ipynb | ericmjl/target-prediction |
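As a quick hand-made illustration of the "CV > 1" reading described above (an added toy example, not part of the original analysis):

```python
import pandas as pd

# a toy set of evidence scores whose spread exceeds its mean
toy = pd.Series([0.01, 0.02, 0.9])
cv = toy.std() / toy.mean()
print(cv)   # ~1.65 > 1, so a plain mean would hide how uneven the evidence is
```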
How many target-disease pairs are represented? | len(df) | _____no_output_____ | MIT | notebooks/01-data-exploration.ipynb | ericmjl/target-prediction |
How many unique targets are there? | len(df_cv) | _____no_output_____ | MIT | notebooks/01-data-exploration.ipynb | ericmjl/target-prediction |
And how many unique diseases are represented? | len(df.groupby('disease').mean())
# Theoretical number of target-disease pairs
len(df_cv) * len(df.groupby('disease').mean()) | _____no_output_____ | MIT | notebooks/01-data-exploration.ipynb | ericmjl/target-prediction |
If densely populated, there should be $ 31051 \times 8891 \approx 276 $ million unique combinations. However, we only have $ 2673321 \approx 2.6 $ million target-disease pairs represented. That means a very sparse dataset. Let's now do a simple count of the cells here:
- How many have non-zero values?
- Of those that have non-zero values:
  - How many have CV < 1?
  - How many have CV = 1?
  - How many have CV > 1? | # This is the number of cells that have nonzero values.
df_cv[df_cv != 0].count()   # count of non-null (i.e. nonzero) cells in each column
import matplotlib.pyplot as plt
import numpy as np
def ecdf(data):
x, y = np.sort(data), np.arange(1, len(data)+1) / len(data)
return x, y | _____no_output_____ | MIT | notebooks/01-data-exploration.ipynb | ericmjl/target-prediction |
Let's make an ECDF scatter plot of the non-zero data. We're still only interested in the coefficient of variation (CV). In the following plots, I will plot the ECDF of log10-transformed CV scores for each target. Recall that CV > 1 indicates variation greater than the mean. I would like to see what proportion of CV scores are greater than 1. | from matplotlib.gridspec import GridSpec
from scipy.stats import percentileofscore as pos
df_cv_nonzero = df_cv[df_cv != 0]
gs = GridSpec(2, 4)
fig = plt.figure(figsize=(12, 6))
for i, col in enumerate(df_cv.columns):
x, y = ecdf(df_cv_nonzero[col].dropna())
x = np.log10(x)
ax = fig.add_subplot(gs[i])
ax.scatter(x, y)
# What percentile is the value 0
zero_pos = pos(x, 0)
ax.set_title(f'{col}, {100 - np.round(zero_pos, 2)}%')
ax.vlines(x=0, ymin=0, ymax=1)
plt.tight_layout()
plt.show() | _____no_output_____ | MIT | notebooks/01-data-exploration.ipynb | ericmjl/target-prediction |
Customer Segmentation. In this notebook, we will perform customer segmentation. | import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from matplotlib import style
import plotly
import matplotlib.dates as mdates
from datetime import datetime, timedelta
import plotly.offline as pyoff
import plotly.graph_objs as go
#initiate visualization library for jupyter notebook
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
pyoff.init_notebook_mode(connected=True)
sns.set(style="ticks")
%matplotlib inline
import gc
import itertools
from datetime import datetime
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
warnings.simplefilter("ignore")
np.random.seed(157)
import os,sys
sys.path.append("../src/")
from plot_functions import *
customers_ = pd.read_csv("../files/olist_data/olist_customers_dataset.csv")
order_items_ = pd.read_csv("../files/olist_data/olist_order_items_dataset.csv")
order_payments_ = pd.read_csv("../files/olist_data/olist_order_payments_dataset.csv")
orders_ = pd.read_csv("../files/olist_data/olist_orders_dataset.csv")
products = pd.read_csv("../files/olist_data/olist_products_dataset.csv")
sellers = pd.read_csv("../files/olist_data/olist_sellers_dataset.csv")
geoloc = pd.read_csv("../files/olist_data/olist_geolocation_dataset.csv")
geoloc[geoloc[['geolocation_zip_code_prefix', 'geolocation_city', 'geolocation_state']].duplicated()]
# creating master dataframe
order_payments_.head()
df1 = order_payments_.merge(order_items_, on='order_id')
df2 = df1.merge(orders_, on='order_id')
df3 = df2.merge(products,on ='product_id')
df4 = df3.merge(sellers,on = 'seller_id')
df5 = df4.merge(geoloc,left_on = 'seller_zip_code_prefix',right_on='geolocation_zip_code_prefix')
df5.drop('seller_zip_code_prefix',axis = 1 ,inplace = True)
df = df5.merge(customers_, on='customer_id')
print(df.shape)
del customers_,order_items_,order_payments_,orders_,products,sellers,geoloc
del df1,df2,df3,df4,df5 # realasee memory
df.head()
df.info()
df.describe() | _____no_output_____ | MIT | notebooks/.ipynb_checkpoints/Customer-Segmentation-checkpoint.ipynb | gabedewitt/Ollist_End_to_End |
Data cleaning | # converting date columns to datetime
date_columns = ['shipping_limit_date', 'order_purchase_timestamp', 'order_approved_at', 'order_delivered_carrier_date', 'order_delivered_customer_date', 'order_estimated_delivery_date']
for col in date_columns:
df[col] = pd.to_datetime(df[col], format='%Y-%m-%d %H:%M:%S')
# cleaning up name columns
df['customer_city'] = df['customer_city'].str.title()
df['payment_type'] = df['payment_type'].str.replace('_', ' ').str.title()
# engineering new/essential columns
df['delivery_against_estimated'] = (df['order_estimated_delivery_date'] - df['order_delivered_customer_date']).dt.days
df['order_purchase_year'] = df.order_purchase_timestamp.apply(lambda x: x.year)
df['order_purchase_month'] = df.order_purchase_timestamp.apply(lambda x: x.month)
df['order_purchase_dayofweek'] = df.order_purchase_timestamp.apply(lambda x: x.dayofweek)
df['order_purchase_hour'] = df.order_purchase_timestamp.apply(lambda x: x.hour)
df['order_purchase_day'] = df['order_purchase_dayofweek'].map({0:'Mon',1:'Tue',2:'Wed',3:'Thu',4:'Fri',5:'Sat',6:'Sun'})
df['order_purchase_mon'] = df.order_purchase_timestamp.apply(lambda x: x.month).map({1:'Jan',2:'Feb',3:'Mar',4:'Apr',5:'May',6:'Jun',7:'Jul',8:'Aug',9:'Sep',10:'Oct',11:'Nov',12:'Dec'})
# Changing the month attribute for correct ordering
df['month_year'] = df['order_purchase_month'].astype(str).apply(lambda x: '0' + x if len(x) == 1 else x)
df['month_year'] = df['order_purchase_year'].astype(str) + '-' + df['month_year'].astype(str)
#creating year month column
df['month_y'] = df['order_purchase_timestamp'].map(lambda date: 100*date.year + date.month)
# displaying summary statistics of columns
df.describe(include='all')
# displaying missing value counts and corresponding percentage against total observations
missing_values = df.isnull().sum().sort_values(ascending = False)
percentage = (df.isnull().sum()/df.isnull().count()*100).sort_values(ascending = False)
pd.concat([missing_values, percentage], axis=1, keys=['Values', 'Percentage']).transpose()
# dropping missing values
df.dropna(inplace=True)
df.isnull().values.any() | _____no_output_____ | MIT | notebooks/.ipynb_checkpoints/Customer-Segmentation-checkpoint.ipynb | gabedewitt/Ollist_End_to_End |
Exploratory data analysis | n_customers = df['customer_unique_id'].nunique()
print('Unique customers: {}'.format(n_customers))
n_cities = df['customer_city'].nunique()
print('Unique cities: {}'.format(n_cities))
# City disctribution
df['customer_city'].value_counts().sort_values(ascending=False)
cities = df['customer_city'].value_counts().sort_values(ascending=False).head(50).to_frame()
plt.figure(figsize=(15,5))
sns.barplot(x = cities.index.values.flatten() ,y = cities.values.flatten())
plt.tight_layout()
plt.xticks(rotation = 90)
plt.title('Top 50 cidades')
plt.ylabel('Quantidade de Clientes');
# How many states are the customers from?
df['customer_state'].nunique()
cities = df['customer_city'].value_counts().sort_values(ascending=False).head(50).to_frame()
states = df['customer_state'].value_counts().sort_values(ascending=False)
plt.figure(figsize=(15,5))
sns.barplot(x = states.index.values.flatten() ,y = states.values.flatten())
plt.tight_layout()
plt.xticks(rotation = 90)
plt.title('Top Estados')
plt.ylabel('Quantidade de Clientes');
df_customer_dly = df.groupby(
'customer_unique_id',
as_index=False).agg({
'order_purchase_timestamp': 'min'
})
df_customer_dly.groupby('order_purchase_timestamp').count().cumsum().plot(figsize=(15,5))
plt.title('Customers cumulative')
plt.xlabel('Timeline')
plt.ylabel('Customer count cumulative');
# New customers count by day
ax = df_customer_dly.groupby('order_purchase_timestamp').count().head(10).plot(kind='bar', figsize=(15,5))
# set monthly locator
ax.xaxis.set_major_locator(mdates.MonthLocator(interval=1))
# set font and rotation for date tick labels
plt.gcf().autofmt_xdate()
plt.title('New customers per day')
plt.xlabel('Timeline')
plt.ylabel('Customer count'); | _____no_output_____ | MIT | notebooks/.ipynb_checkpoints/Customer-Segmentation-checkpoint.ipynb | gabedewitt/Ollist_End_to_End |
**Monthly Growth** | #calculate Revenue for each row and create a new dataframe with YearMonth - Revenue columns
df_revenue = df.groupby(['month_year'])['payment_value'].sum().reset_index()
df_revenue
#calculating the monthly revenue growth rate
# using pct_change() function to see monthly percentage change
df_revenue['MonthlyGrowth'] = df_revenue['payment_value'].pct_change()
df_revenue
#creating monthly active customers dataframe by counting unique Customer IDs
df_monthly_active = df.groupby('month_year')['customer_unique_id'].nunique().reset_index()
fig, ax = plt.subplots(figsize=(12, 6))
sns.set(palette='muted', color_codes=True, style='whitegrid')
bar_plot(x='month_year', y='customer_unique_id', df=df_monthly_active, value=True)
ax.tick_params(axis='x', labelrotation=90)
#creating monthly sales dataframe by counting orders
df_monthly_sales = df.groupby('month_year')['order_status'].count().reset_index()
fig, ax = plt.subplots(figsize=(12, 6))
sns.set(palette='muted', color_codes=True, style='whitegrid')
bar_plot(x='month_year', y='order_status', df=df_monthly_sales, value=True)
ax.tick_params(axis='x', labelrotation=90)
| _____no_output_____ | MIT | notebooks/.ipynb_checkpoints/Customer-Segmentation-checkpoint.ipynb | gabedewitt/Ollist_End_to_End |
**Average revenue per customer purchase** | # create a new dataframe for average revenue by taking the mean of it
df_monthly_order_avg = df.groupby('month_year')['payment_value'].mean().reset_index()
fig, ax = plt.subplots(figsize=(12, 6))
sns.set(palette='muted', color_codes=True, style='whitegrid')
bar_plot(x='month_year', y='payment_value', df=df_monthly_order_avg, value=True)
ax.tick_params(axis='x', labelrotation=90) | _____no_output_____ | MIT | notebooks/.ipynb_checkpoints/Customer-Segmentation-checkpoint.ipynb | gabedewitt/Ollist_End_to_End |
Customer Segmentation. The segmentation will classify customers based on purchase frequency, purchase volume, and money spent.
* How are customers segmented according to time since purchase, purchase volume, and the number of orders? | df['order_status'].value_counts()
df_customer = df[df['order_status']=='delivered']
# Setting reference day
df_customer['today'] = df_customer['order_purchase_timestamp'].max()
# Date deltas
df_customer['recency'] = df_customer['today'] - df['order_purchase_timestamp'] | _____no_output_____ | MIT | notebooks/.ipynb_checkpoints/Customer-Segmentation-checkpoint.ipynb | gabedewitt/Ollist_End_to_End |
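A sketch of how the remaining RFM pieces (frequency and monetary value) could be aggregated per customer alongside the recency computed above. This is an added illustration, not part of the original notebook; it assumes the `df_customer` frame defined above and a pandas version with named aggregation (>= 0.25):

```python
df_rfm = df_customer.groupby('customer_unique_id').agg(
    recency=('recency', 'min'),            # time since the most recent purchase
    frequency=('order_id', 'nunique'),     # number of distinct orders
    monetary=('payment_value', 'sum'),     # total amount spent
).reset_index()
df_rfm['recency_days'] = df_rfm['recency'].dt.days
df_rfm.head()
```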
Processing of a single image Loading the HDF5 file and converting to tiff | root_hdf5 = '../../pytorch/fake_images_TI/hdf5'
root_tiff = '../../pytorch/fake_images_TI/tiff'
root_postprocess_tiff = '../../pytorch/fake_images_TI/postprocess_tiff'
files_name = os.listdir(root_hdf5)
print(files_name)
for file_name in files_name:
file_path = os.path.join(root_hdf5, file_name)
f = h5py.File(file_path,'r')
my_array = f['data'][()]
img = my_array[0, 0, :, :, :].astype(np.float32)
file_name = file_name.split('.')[0]+".tiff"
# print(name)
file_path = os.path.join(root_tiff, file_name)
tifffile.imsave(file_path, img)
# print(img.shape) | _____no_output_____ | MIT | code/postprocess/Sample Postprocessing.ipynb | miniminisu/dcgan-code-cu-foam-3D |
Denoising and thresholding | files_name = os.listdir(root_tiff)
for file_name in files_name:
file_path = os.path.join(root_tiff, file_name)
im_in = tifffile.imread(file_path)
#apply single pixel denoising
im_in = median_filter(im_in, size=(3, 3, 3))
#cutaway outer noise area
#im_in = im_in[40:240, 40:240, 40:240]
#Normalize to range zero and one
im_in = im_in/255.
#Threshhold Image
threshold_global_otsu = threshold_otsu(im_in)
segmented_image = (im_in >= threshold_global_otsu).astype(np.int32)
#Store as postprocessed image
file_path = os.path.join(root_postprocess_tiff, file_name.split('.')[0]+'.tiff')
tifffile.imsave(file_path, segmented_image.astype(np.int32)) | _____no_output_____ | MIT | code/postprocess/Sample Postprocessing.ipynb | miniminisu/dcgan-code-cu-foam-3D |
Compute porosity | segmented_image = tifffile.imread("postprocessed_example.tiff")
porc = Counter(segmented_image.flatten())
print(porc)
porosity = porc[0]/float(porc[0]+porc[1])
print("Porosity of the sample: ", porosity) | Counter({1: 6425472, 0: 1574528})
Porosity of the sample: 0.196816
| MIT | code/postprocess/Sample Postprocessing.ipynb | miniminisu/dcgan-code-cu-foam-3D |
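An equivalent way to get the same porosity figure without `Counter` (a small added sketch, assuming the `segmented_image` array loaded above):

```python
import numpy as np

# porosity = fraction of pore (value 0) voxels in the segmented volume
porosity_np = np.mean(segmented_image == 0)
print("Porosity of the sample: ", porosity_np)
```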
Measuring atmospheric pressure In this exercise we used a Sony XPeria phone's sensors with the [PhyPhox](https://phyphox.org/) application to measure ambient air pressure on a sunny summer day in southern France. We begin our descent from the Jura mountains at Col de la Fausille and end down at a parking lot at CERN's Meyrin site in Switzerland. | # Let's load the relevant python modules.
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
# Load the relevant data.
baro = pd.read_csv("../../Data/barometri_fausille.csv")
# Take a look at the data.
baro.head()
# Since the data has only time and pressure, we'll calculate the height difference from known formulas
# and add the values in the dataframe.
maxp = max(baro["Pressure (hPa)"])
dp = list(maxp-baro["Pressure (hPa)"].copy())
baro["Height (m)"] = np.ones(len(baro))
# Near sea level, pressure drops by roughly 1.2 hPa per 10 m of altitude, hence the factor 10/1.2 below.
# Since we know our destination is elevated about 440 m from the sea level:
for i in range(0, len(dp)):
baro["Height (m)"][i] = dp[i]*(10/1.2) + 440
# Here's the plot vs. time. Can you tell where our car stood still?
fig, ax1 = plt.subplots()
fig.set_figwidth(20)
fig.set_figheight(5)
eka, = plt.plot(baro["Time (s)"],baro["Pressure (hPa)"], c = 'r', label = 'Pressure')
plt.title("Air pressure on the way down from Col de la Fausille \n", fontsize = 15)
plt.xlabel("Time (s)", fontsize = 15)
plt.ylabel("Pressure (hPa) \n", fontsize = 15)
ax2 = ax1.twinx()
ax2.set_ylabel('Height (m) \n', fontsize = 15)
toka, = plt.plot(baro["Time (s)"], baro["Height (m)"], c = 'b', label = 'Height')
axes=fig.gca()
axes.set_ylim(0,1500)
plt.legend([eka, toka],['Pressure', 'Height'], loc = 'lower right')
plt.show()
# Since time isn't physically very interesting here, let's try the height instead.
plt.figure(figsize = (20,5))
plt.plot(baro["Height (m)"], baro["Pressure (hPa)"])
plt.title("Air pressure \n", fontsize = 15)
plt.xlabel("Height (m)", fontsize = 15)
plt.ylabel("Pressure (hPa) \n", fontsize = 15)
plt.show() | _____no_output_____ | CC-BY-4.0 | Exercises-with-open-data/Other/barometer_col_de_la_fausille.ipynb | trongnghia00/cms |
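As a rough cross-check of the 10/1.2 factor used above (an added sketch, not part of the original exercise), the hypsometric relation $\Delta h \approx \frac{R_d T}{g} \frac{\Delta p}{p}$ gives roughly the same metres-per-hPa conversion near sea level:

```python
R_d = 287.05   # specific gas constant of dry air, J/(kg K)
g = 9.81       # gravitational acceleration, m/s^2
T = 288.0      # assumed mean air temperature, K (about 15 degrees C)
p = 1013.0     # reference pressure, hPa

metres_per_hPa = R_d * T / (g * p)
print(metres_per_hPa)   # ~8.3 m per hPa, i.e. ~1.2 hPa per 10 m
```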
[](http://introml.analyticsdojo.com)Introduction to Python - Groupby and Pivot Tablesintroml.analyticsdojo.com Groupby and Pivot Tables | !wget https://raw.githubusercontent.com/rpi-techfundamentals/spring2019-materials/master/input/train.csv
!wget https://raw.githubusercontent.com/rpi-techfundamentals/spring2019-materials/master/input/test.csv
import numpy as np
import pandas as pd
# Input data files are available in the "../input/" directory.
# Let's input them into a Pandas DataFrame
train = pd.read_csv("train.csv")
test = pd.read_csv("test.csv") | _____no_output_____ | MIT | site/_build/html/_sources/notebooks/02-intro-python/04-groupby.ipynb | rpi-techfundamentals/spring2020_website |
Groupby
- Often it is useful to see statistics by different classes.
- Can be used to examine different subpopulations | train.head()
print(train.dtypes)
#What does this tell us?
train.groupby(['Sex']).Survived.mean()
#What does this tell us?
train.groupby(['Sex','Pclass']).Survived.mean()
#What does this tell us? Here it doesn't look so clear. We could separate by set age ranges.
train.groupby(['Sex','Age']).Survived.mean() | _____no_output_____ | MIT | site/_build/html/_sources/notebooks/02-intro-python/04-groupby.ipynb | rpi-techfundamentals/spring2020_website |
Combining Multiple Operations
- *Splitting* the data into groups based on some criteria
- *Applying* a function to each group independently
- *Combining* the results into a data structure | s = train.groupby(['Sex','Pclass'], as_index=False).Survived.sum()
s['PerSurv'] = train.groupby(['Sex','Pclass'], as_index=False).Survived.mean().Survived
s['PerSurv']=s['PerSurv']*100
s['Count'] = train.groupby(['Sex','Pclass'], as_index=False).Survived.count().Survived
survived =s.Survived
s
#What does this tell us?
spmean=train.groupby(['Sex','Pclass']).Survived.mean()
spcount=train.groupby(['Sex','Pclass']).Survived.sum()
spsum=train.groupby(['Sex','Pclass']).Survived.count()
spsum | _____no_output_____ | MIT | site/_build/html/_sources/notebooks/02-intro-python/04-groupby.ipynb | rpi-techfundamentals/spring2020_website |
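As an added illustration of the *combining* step described above (not part of the original lesson), the three grouped series can be stitched into one summary frame:

```python
# spsum holds the group sizes (.count()), spcount the survivor totals (.sum()),
# and spmean the survival rates (.mean())
summary = pd.concat([spsum, spcount, spmean], axis=1)
summary.columns = ['Count', 'Survived', 'SurvivalRate']
summary
```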
Pivot Tables
- A pivot table is a data summarization tool, much easier than the syntax of groupby.
- It can be used to sum, sort, average, and count data over a pandas dataframe.
- Download and open the data in Excel to appreciate the ways that you can use Pivot Tables. | #Load it and create a pivot table.
from google.colab import files
files.download('train.csv')
#List the index and the functions you want to aggregate by.
pd.pivot_table(train,index=["Sex","Pclass"],values=["Survived"],aggfunc=['count','sum','mean',]) | _____no_output_____ | MIT | site/_build/html/_sources/notebooks/02-intro-python/04-groupby.ipynb | rpi-techfundamentals/spring2020_website |
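For comparison, the same summary can be produced with groupby/agg (an added sketch, not part of the original notebook):

```python
train.groupby(['Sex', 'Pclass'])['Survived'].agg(['count', 'sum', 'mean'])
```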
Notebook 2. In this notebook, we worked with the result dataset from Notebook 1 and computed rolling statistics (mean, difference, std, max, min) for a list of features over various time windows. This was the most time-consuming and computationally expensive part of the entire tutorial. We encountered some roadblocks and found some workarounds. Please see below for more details.

Outline
- [Define Rolling Features and Window Sizes](#Define-list-of-features-for-rolling-compute,-window-sizes)
- [Issues and Solutions](#What-issues-we-encountered-using-Pyspark-and-how-we-solved-them?)
- [Rolling Compute](#Rolling-Compute)
  - [Rolling Mean](#Rolling-Mean)
  - [Rolling Difference](#Rolling-Difference)
  - [Rolling Std](#Rolling-Std)
  - [Rolling Max](#Rolling-Max)
  - [Rolling Min](#Rolling-Min)
- [Join Results](#Join-result-dataset-from-the-five-rolling-compute-cells:) | import pyspark.sql.functions as F
import time
import subprocess
import sys
import os
import re
from pyspark import SparkConf
from pyspark import SparkContext
from pyspark import SQLContext
from pyspark.sql.types import *
from pyspark.sql.functions import col,udf,lag,date_add,explode,lit,concat,unix_timestamp
from pyspark.sql.dataframe import *
from pyspark.sql.window import Window
from pyspark.sql.types import DateType
from datetime import datetime, timedelta
from pyspark.sql import Row
start_time = time.time()
| _____no_output_____ | MIT | Notebook_2_FeatureEngineering_RollingCompute.ipynb | yallic/PySpark-Predictive-Maintenance |
Define list of features for rolling compute, window sizes | rolling_features = [
'warn_type1_total', 'warn_type2_total',
'pca_1_warn','pca_2_warn', 'pca_3_warn', 'pca_4_warn', 'pca_5_warn',
'pca_6_warn','pca_7_warn', 'pca_8_warn', 'pca_9_warn', 'pca_10_warn',
'pca_11_warn','pca_12_warn', 'pca_13_warn', 'pca_14_warn', 'pca_15_warn',
'pca_16_warn','pca_17_warn', 'pca_18_warn', 'pca_19_warn', 'pca_20_warn',
'problem_type_1', 'problem_type_2', 'problem_type_3','problem_type_4',
'problem_type_1_per_usage1','problem_type_2_per_usage1',
'problem_type_3_per_usage1','problem_type_4_per_usage1',
'problem_type_1_per_usage2','problem_type_2_per_usage2',
'problem_type_3_per_usage2','problem_type_4_per_usage2',
'fault_code_type_1_count', 'fault_code_type_2_count', 'fault_code_type_3_count', 'fault_code_type_4_count',
'fault_code_type_1_count_per_usage1','fault_code_type_2_count_per_usage1',
'fault_code_type_3_count_per_usage1', 'fault_code_type_4_count_per_usage1',
'fault_code_type_1_count_per_usage2','fault_code_type_2_count_per_usage2',
'fault_code_type_3_count_per_usage2', 'fault_code_type_4_count_per_usage2']
# lag window 3, 7, 14, 30, 90 days
lags = [3, 7, 14, 30, 90]
print(len(rolling_features))
| 46
| MIT | Notebook_2_FeatureEngineering_RollingCompute.ipynb | yallic/PySpark-Predictive-Maintenance |
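For a sense of scale (an added note, not from the original notebook), the full job creates 46 features x 5 windows x 5 rolling statistics = 1150 new columns, which is why the lineage issues discussed next become a real concern:

```python
# 46 rolling features, 5 lag windows, 5 rolling statistics (mean, diff, std, max, min)
print(len(rolling_features) * len(lags) * 5)   # 1150 new columns in total
```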
What issues we encountered using Pyspark and how we solved them?
- If the entire list of **46 features** and **5 time windows** was computed for **5 different types of rolling** (mean, difference, std, max, min) all in one go, we always ran into a "StackOverflow" error.
- It was because the lineage was too long and Spark could not handle it.
- We could either create a checkpoint and materialize it throughout the process,
- OR break the workload into chunks and save the result from each chunk as a parquet file.

A few things we found helpful:
- Before the rolling compute, save the upstream work as a parquet file in Notebook_1 ("Notebook_1_DataCleansing_FeatureEngineering"). It will speed up the whole process because we do not need to repeat all the previous steps. It will also help reduce the lineage.
- Print out the lag and feature name to track progress.
- Use the "htop" command from the terminal to keep track of how many CPUs are running for a particular task. For the rolling compute, we were considering two potential approaches: 1) use Spark clusters on HDInsight to perform the rolling compute in parallel; 2) use single-node Spark on a powerful VM. By looking at the htop dashboard, we saw all 32 cores running at the same time for a single task (for example, computing the rolling mean). So if we divided the workload onto multiple nodes and each node ran one type of rolling compute, the amount of time taken would be comparable with running everything sequentially on single-node Spark on a powerful machine.
- Materialize the intermediate results by either caching in memory or writing them as parquet files. We chose to save parquet files because we did not want to repeat the compute again in case cache() did not work or any part of the rolling compute failed.
- Why parquet? There are many reasons, just to name a few: parquet saves not only the data but also the schema, it is a preferred file format for Spark, and you are allowed to read only the data you need.

Rolling Compute Rolling Mean | %%time
# Load result dataset from Notebook #1
df = sqlContext.read.parquet('/mnt/resource/PysparkExample/notebook1_result.parquet')
for lag_n in lags:
wSpec = Window.partitionBy('deviceid').orderBy('date').rowsBetween(1-lag_n, 0)
for col_name in rolling_features:
df = df.withColumn(col_name+'_rollingmean_'+str(lag_n), F.avg(col(col_name)).over(wSpec))
print("Lag = %d, Column = %s" % (lag_n, col_name))
# Save the intermediate result for downstream work
df.write.mode('overwrite').parquet('/mnt/resource/PysparkExample/data_rollingmean.parquet')
 | Lag = 3, Column = warn_type1_total
Lag = 3, Column = warn_type2_total
... (the same progress line repeats for each of the 46 features in rolling_features at lags 3, 7, 14, 30 and 90; repetitive log lines omitted) ...
Lag = 90, Column = fault_code_type_4_count_per_usage2
CPU times: user 848 ms, sys: 279 ms, total: 1.13 s
Wall time: 28min 6s
| MIT | Notebook_2_FeatureEngineering_RollingCompute.ipynb | yallic/PySpark-Predictive-Maintenance |
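The checkpoint() alternative mentioned in the issues list above could look roughly like this (a minimal added sketch, not what the notebook actually ran; it assumes a live SparkContext `sc`, the same `sqlContext`, `lags` and `rolling_features` as above, and Spark >= 2.1):

```python
# Truncate the lineage after each lag so the query plan never grows unboundedly.
sc.setCheckpointDir('/mnt/resource/PysparkExample/checkpoints')

df_chk = sqlContext.read.parquet('/mnt/resource/PysparkExample/notebook1_result.parquet')
for lag_n in lags:
    wSpec = Window.partitionBy('deviceid').orderBy('date').rowsBetween(1-lag_n, 0)
    for col_name in rolling_features:
        df_chk = df_chk.withColumn(col_name+'_rollingmean_'+str(lag_n),
                                   F.avg(col(col_name)).over(wSpec))
    df_chk = df_chk.checkpoint(eager=True)   # materialize and cut the lineage after every lag
```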
Rolling Difference | %%time
# Load result dataset from Notebook #1
df = sqlContext.read.parquet('/mnt/resource/PysparkExample/notebook1_result.parquet')
for lag_n in lags:
wSpec = Window.partitionBy('deviceid').orderBy('date').rowsBetween(1-lag_n, 0)
for col_name in rolling_features:
df = df.withColumn(col_name+'_rollingdiff_'+str(lag_n), col(col_name)-F.avg(col(col_name)).over(wSpec))
print("Lag = %d, Column = %s" % (lag_n, col_name))
rollingdiff = df.select(['key'] + list(s for s in df.columns if "rollingdiff" in s))
# Save the intermediate result for downstream work
rollingdiff.write.mode('overwrite').parquet('/mnt/resource/PysparkExample/rollingdiff.parquet')
 | Lag = 3, Column = warn_type1_total
Lag = 3, Column = warn_type2_total
... (the same progress line repeats for each of the 46 features in rolling_features at lags 3, 7, 14, 30 and 90; repetitive log lines omitted) ...
Lag = 90, Column = fault_code_type_4_count_per_usage2
CPU times: user 1.18 s, sys: 383 ms, total: 1.56 s
Wall time: 45min 12s
| MIT | Notebook_2_FeatureEngineering_RollingCompute.ipynb | yallic/PySpark-Predictive-Maintenance |
Rolling Std | %%time
# Load result dataset from Notebook #1
df = sqlContext.read.parquet('/mnt/resource/PysparkExample/notebook1_result.parquet')
for lag_n in lags:
wSpec = Window.partitionBy('deviceid').orderBy('date').rowsBetween(1-lag_n, 0)
for col_name in rolling_features:
df = df.withColumn(col_name+'_rollingstd_'+str(lag_n), F.stddev(col(col_name)).over(wSpec))
print("Lag = %d, Column = %s" % (lag_n, col_name))
# There are some missing values for rollingstd features
rollingstd_features = list(s for s in df.columns if "rollingstd" in s)
df = df.fillna(0, subset=rollingstd_features)
rollingstd = df.select(['key'] + list(s for s in df.columns if "rollingstd" in s))
# Save the intermediate result for downstream work
rollingstd.write.mode('overwrite').parquet('/mnt/resource/PysparkExample/rollingstd.parquet')
 | Lag = 3, Column = warn_type1_total
Lag = 3, Column = warn_type2_total
... (the same progress line repeats for each of the 46 features in rolling_features at lags 3, 7, 14, 30 and 90; repetitive log lines omitted) ...
Lag = 90, Column = fault_code_type_4_count_per_usage2
CPU times: user 1.03 s, sys: 411 ms, total: 1.44 s
Wall time: 30min 16s
| MIT | Notebook_2_FeatureEngineering_RollingCompute.ipynb | yallic/PySpark-Predictive-Maintenance |
Rolling Max | %%time
# Load result dataset from Notebook #1
df = sqlContext.read.parquet('/mnt/resource/PysparkExample/notebook1_result.parquet')
for lag_n in lags:
wSpec = Window.partitionBy('deviceid').orderBy('date').rowsBetween(1-lag_n, 0)
for col_name in rolling_features:
df = df.withColumn(col_name+'_rollingmax_'+str(lag_n), F.max(col(col_name)).over(wSpec))
print("Lag = %d, Column = %s" % (lag_n, col_name))
rollingmax = df.select(['key'] + list(s for s in df.columns if "rollingmax" in s))
# Save the intermediate result for downstream work
rollingmax.write.mode('overwrite').parquet('/mnt/resource/PysparkExample/rollingmax.parquet')
 | Lag = 3, Column = warn_type1_total
Lag = 3, Column = warn_type2_total
... (the same progress line repeats for each of the 46 features per lag; the log is cut off partway through the 30-day lag) ...
Lag = 30, Column = pca_18_warn
Lag = 30, Column = pca_19_warn
Lag = 30, Column = pca_20_warn
Lag = 30, Column = problem_type_1
Lag = 30, Column = problem_type_2
Lag = 30, Column = problem_type_3
Lag = 30, Column = problem_type_4
Lag = 30, Column = problem_type_1_per_usage1
Lag = 30, Column = problem_type_2_per_usage1
Lag = 30, Column = problem_type_3_per_usage1
Lag = 30, Column = problem_type_4_per_usage1
Lag = 30, Column = problem_type_1_per_usage2
Lag = 30, Column = problem_type_2_per_usage2
Lag = 30, Column = problem_type_3_per_usage2
Lag = 30, Column = problem_type_4_per_usage2
Lag = 30, Column = fault_code_type_1_count
Lag = 30, Column = fault_code_type_2_count
Lag = 30, Column = fault_code_type_3_count
Lag = 30, Column = fault_code_type_4_count
Lag = 30, Column = fault_code_type_1_count_per_usage1
Lag = 30, Column = fault_code_type_2_count_per_usage1
Lag = 30, Column = fault_code_type_3_count_per_usage1
Lag = 30, Column = fault_code_type_4_count_per_usage1
Lag = 30, Column = fault_code_type_1_count_per_usage2
Lag = 30, Column = fault_code_type_2_count_per_usage2
Lag = 30, Column = fault_code_type_3_count_per_usage2
Lag = 30, Column = fault_code_type_4_count_per_usage2
Lag = 90, Column = warn_type1_total
Lag = 90, Column = warn_type2_total
Lag = 90, Column = pca_1_warn
Lag = 90, Column = pca_2_warn
Lag = 90, Column = pca_3_warn
Lag = 90, Column = pca_4_warn
Lag = 90, Column = pca_5_warn
Lag = 90, Column = pca_6_warn
Lag = 90, Column = pca_7_warn
Lag = 90, Column = pca_8_warn
Lag = 90, Column = pca_9_warn
Lag = 90, Column = pca_10_warn
Lag = 90, Column = pca_11_warn
Lag = 90, Column = pca_12_warn
Lag = 90, Column = pca_13_warn
Lag = 90, Column = pca_14_warn
Lag = 90, Column = pca_15_warn
Lag = 90, Column = pca_16_warn
Lag = 90, Column = pca_17_warn
Lag = 90, Column = pca_18_warn
Lag = 90, Column = pca_19_warn
Lag = 90, Column = pca_20_warn
Lag = 90, Column = problem_type_1
Lag = 90, Column = problem_type_2
Lag = 90, Column = problem_type_3
Lag = 90, Column = problem_type_4
Lag = 90, Column = problem_type_1_per_usage1
Lag = 90, Column = problem_type_2_per_usage1
Lag = 90, Column = problem_type_3_per_usage1
Lag = 90, Column = problem_type_4_per_usage1
Lag = 90, Column = problem_type_1_per_usage2
Lag = 90, Column = problem_type_2_per_usage2
Lag = 90, Column = problem_type_3_per_usage2
Lag = 90, Column = problem_type_4_per_usage2
Lag = 90, Column = fault_code_type_1_count
Lag = 90, Column = fault_code_type_2_count
Lag = 90, Column = fault_code_type_3_count
Lag = 90, Column = fault_code_type_4_count
Lag = 90, Column = fault_code_type_1_count_per_usage1
Lag = 90, Column = fault_code_type_2_count_per_usage1
Lag = 90, Column = fault_code_type_3_count_per_usage1
Lag = 90, Column = fault_code_type_4_count_per_usage1
Lag = 90, Column = fault_code_type_1_count_per_usage2
Lag = 90, Column = fault_code_type_2_count_per_usage2
Lag = 90, Column = fault_code_type_3_count_per_usage2
Lag = 90, Column = fault_code_type_4_count_per_usage2
CPU times: user 860 ms, sys: 316 ms, total: 1.18 s
Wall time: 24min 41s
| MIT | Notebook_2_FeatureEngineering_RollingCompute.ipynb | yallic/PySpark-Predictive-Maintenance |
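A note on the window definition used in these rolling cells (an added illustration, not part of the original notebook): `rowsBetween(1-lag_n, 0)` is a trailing window of exactly `lag_n` rows per device, ordered by date, i.e. the current row plus the `lag_n - 1` preceding rows. For example, with `lag_n = 3`:
from pyspark.sql import Window
from pyspark.sql import functions as F
from pyspark.sql.functions import col
# rowsBetween(1 - 3, 0) == rowsBetween(-2, 0): the current row and the two rows before it
wSpec_3 = Window.partitionBy('deviceid').orderBy('date').rowsBetween(-2, 0)
# rolling max of one column over that 3-row window (same pattern as the loop above)
example = df.withColumn('warn_type1_total_rollingmax_3', F.max(col('warn_type1_total')).over(wSpec_3))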
Rolling Min | %%time
# Load result dataset from Notebook #1
df = sqlContext.read.parquet('/mnt/resource/PysparkExample/notebook1_result.parquet')
for lag_n in lags:
wSpec = Window.partitionBy('deviceid').orderBy('date').rowsBetween(1-lag_n, 0)
for col_name in rolling_features:
df = df.withColumn(col_name+'_rollingmin_'+str(lag_n), F.min(col(col_name)).over(wSpec))
print("Lag = %d, Column = %s" % (lag_n, col_name))
rollingmin = df.select(['key'] + list(s for s in df.columns if "rollingmin" in s))
# Save the intermediate result for downstream work
rollingmin.write.mode('overwrite').parquet('/mnt/resource/PysparkExample/rollingmin.parquet')
| Lag = 3, Column = warn_type1_total
Lag = 3, Column = warn_type2_total
Lag = 3, Column = pca_1_warn
Lag = 3, Column = pca_2_warn
Lag = 3, Column = pca_3_warn
Lag = 3, Column = pca_4_warn
Lag = 3, Column = pca_5_warn
Lag = 3, Column = pca_6_warn
Lag = 3, Column = pca_7_warn
Lag = 3, Column = pca_8_warn
Lag = 3, Column = pca_9_warn
Lag = 3, Column = pca_10_warn
Lag = 3, Column = pca_11_warn
Lag = 3, Column = pca_12_warn
Lag = 3, Column = pca_13_warn
Lag = 3, Column = pca_14_warn
Lag = 3, Column = pca_15_warn
Lag = 3, Column = pca_16_warn
Lag = 3, Column = pca_17_warn
Lag = 3, Column = pca_18_warn
Lag = 3, Column = pca_19_warn
Lag = 3, Column = pca_20_warn
Lag = 3, Column = problem_type_1
Lag = 3, Column = problem_type_2
Lag = 3, Column = problem_type_3
Lag = 3, Column = problem_type_4
Lag = 3, Column = problem_type_1_per_usage1
Lag = 3, Column = problem_type_2_per_usage1
Lag = 3, Column = problem_type_3_per_usage1
Lag = 3, Column = problem_type_4_per_usage1
Lag = 3, Column = problem_type_1_per_usage2
Lag = 3, Column = problem_type_2_per_usage2
Lag = 3, Column = problem_type_3_per_usage2
Lag = 3, Column = problem_type_4_per_usage2
Lag = 3, Column = fault_code_type_1_count
Lag = 3, Column = fault_code_type_2_count
Lag = 3, Column = fault_code_type_3_count
Lag = 3, Column = fault_code_type_4_count
Lag = 3, Column = fault_code_type_1_count_per_usage1
Lag = 3, Column = fault_code_type_2_count_per_usage1
Lag = 3, Column = fault_code_type_3_count_per_usage1
Lag = 3, Column = fault_code_type_4_count_per_usage1
Lag = 3, Column = fault_code_type_1_count_per_usage2
Lag = 3, Column = fault_code_type_2_count_per_usage2
Lag = 3, Column = fault_code_type_3_count_per_usage2
Lag = 3, Column = fault_code_type_4_count_per_usage2
Lag = 7, Column = warn_type1_total
Lag = 7, Column = warn_type2_total
Lag = 7, Column = pca_1_warn
Lag = 7, Column = pca_2_warn
Lag = 7, Column = pca_3_warn
Lag = 7, Column = pca_4_warn
Lag = 7, Column = pca_5_warn
Lag = 7, Column = pca_6_warn
Lag = 7, Column = pca_7_warn
Lag = 7, Column = pca_8_warn
Lag = 7, Column = pca_9_warn
Lag = 7, Column = pca_10_warn
Lag = 7, Column = pca_11_warn
Lag = 7, Column = pca_12_warn
Lag = 7, Column = pca_13_warn
Lag = 7, Column = pca_14_warn
Lag = 7, Column = pca_15_warn
Lag = 7, Column = pca_16_warn
Lag = 7, Column = pca_17_warn
Lag = 7, Column = pca_18_warn
Lag = 7, Column = pca_19_warn
Lag = 7, Column = pca_20_warn
Lag = 7, Column = problem_type_1
Lag = 7, Column = problem_type_2
Lag = 7, Column = problem_type_3
Lag = 7, Column = problem_type_4
Lag = 7, Column = problem_type_1_per_usage1
Lag = 7, Column = problem_type_2_per_usage1
Lag = 7, Column = problem_type_3_per_usage1
Lag = 7, Column = problem_type_4_per_usage1
Lag = 7, Column = problem_type_1_per_usage2
Lag = 7, Column = problem_type_2_per_usage2
Lag = 7, Column = problem_type_3_per_usage2
Lag = 7, Column = problem_type_4_per_usage2
Lag = 7, Column = fault_code_type_1_count
Lag = 7, Column = fault_code_type_2_count
Lag = 7, Column = fault_code_type_3_count
Lag = 7, Column = fault_code_type_4_count
Lag = 7, Column = fault_code_type_1_count_per_usage1
Lag = 7, Column = fault_code_type_2_count_per_usage1
Lag = 7, Column = fault_code_type_3_count_per_usage1
Lag = 7, Column = fault_code_type_4_count_per_usage1
Lag = 7, Column = fault_code_type_1_count_per_usage2
Lag = 7, Column = fault_code_type_2_count_per_usage2
Lag = 7, Column = fault_code_type_3_count_per_usage2
Lag = 7, Column = fault_code_type_4_count_per_usage2
Lag = 14, Column = warn_type1_total
Lag = 14, Column = warn_type2_total
Lag = 14, Column = pca_1_warn
Lag = 14, Column = pca_2_warn
Lag = 14, Column = pca_3_warn
Lag = 14, Column = pca_4_warn
Lag = 14, Column = pca_5_warn
Lag = 14, Column = pca_6_warn
Lag = 14, Column = pca_7_warn
Lag = 14, Column = pca_8_warn
Lag = 14, Column = pca_9_warn
Lag = 14, Column = pca_10_warn
Lag = 14, Column = pca_11_warn
Lag = 14, Column = pca_12_warn
Lag = 14, Column = pca_13_warn
Lag = 14, Column = pca_14_warn
Lag = 14, Column = pca_15_warn
Lag = 14, Column = pca_16_warn
Lag = 14, Column = pca_17_warn
Lag = 14, Column = pca_18_warn
Lag = 14, Column = pca_19_warn
Lag = 14, Column = pca_20_warn
Lag = 14, Column = problem_type_1
Lag = 14, Column = problem_type_2
Lag = 14, Column = problem_type_3
Lag = 14, Column = problem_type_4
Lag = 14, Column = problem_type_1_per_usage1
Lag = 14, Column = problem_type_2_per_usage1
Lag = 14, Column = problem_type_3_per_usage1
Lag = 14, Column = problem_type_4_per_usage1
Lag = 14, Column = problem_type_1_per_usage2
Lag = 14, Column = problem_type_2_per_usage2
Lag = 14, Column = problem_type_3_per_usage2
Lag = 14, Column = problem_type_4_per_usage2
Lag = 14, Column = fault_code_type_1_count
Lag = 14, Column = fault_code_type_2_count
Lag = 14, Column = fault_code_type_3_count
Lag = 14, Column = fault_code_type_4_count
Lag = 14, Column = fault_code_type_1_count_per_usage1
Lag = 14, Column = fault_code_type_2_count_per_usage1
Lag = 14, Column = fault_code_type_3_count_per_usage1
Lag = 14, Column = fault_code_type_4_count_per_usage1
Lag = 14, Column = fault_code_type_1_count_per_usage2
Lag = 14, Column = fault_code_type_2_count_per_usage2
Lag = 14, Column = fault_code_type_3_count_per_usage2
Lag = 14, Column = fault_code_type_4_count_per_usage2
Lag = 30, Column = warn_type1_total
Lag = 30, Column = warn_type2_total
Lag = 30, Column = pca_1_warn
Lag = 30, Column = pca_2_warn
Lag = 30, Column = pca_3_warn
Lag = 30, Column = pca_4_warn
Lag = 30, Column = pca_5_warn
Lag = 30, Column = pca_6_warn
Lag = 30, Column = pca_7_warn
Lag = 30, Column = pca_8_warn
Lag = 30, Column = pca_9_warn
Lag = 30, Column = pca_10_warn
Lag = 30, Column = pca_11_warn
Lag = 30, Column = pca_12_warn
Lag = 30, Column = pca_13_warn
Lag = 30, Column = pca_14_warn
Lag = 30, Column = pca_15_warn
Lag = 30, Column = pca_16_warn
Lag = 30, Column = pca_17_warn
Lag = 30, Column = pca_18_warn
Lag = 30, Column = pca_19_warn
Lag = 30, Column = pca_20_warn
Lag = 30, Column = problem_type_1
Lag = 30, Column = problem_type_2
Lag = 30, Column = problem_type_3
Lag = 30, Column = problem_type_4
Lag = 30, Column = problem_type_1_per_usage1
Lag = 30, Column = problem_type_2_per_usage1
Lag = 30, Column = problem_type_3_per_usage1
Lag = 30, Column = problem_type_4_per_usage1
Lag = 30, Column = problem_type_1_per_usage2
Lag = 30, Column = problem_type_2_per_usage2
Lag = 30, Column = problem_type_3_per_usage2
Lag = 30, Column = problem_type_4_per_usage2
Lag = 30, Column = fault_code_type_1_count
Lag = 30, Column = fault_code_type_2_count
Lag = 30, Column = fault_code_type_3_count
Lag = 30, Column = fault_code_type_4_count
Lag = 30, Column = fault_code_type_1_count_per_usage1
Lag = 30, Column = fault_code_type_2_count_per_usage1
Lag = 30, Column = fault_code_type_3_count_per_usage1
Lag = 30, Column = fault_code_type_4_count_per_usage1
Lag = 30, Column = fault_code_type_1_count_per_usage2
Lag = 30, Column = fault_code_type_2_count_per_usage2
Lag = 30, Column = fault_code_type_3_count_per_usage2
Lag = 30, Column = fault_code_type_4_count_per_usage2
Lag = 90, Column = warn_type1_total
Lag = 90, Column = warn_type2_total
Lag = 90, Column = pca_1_warn
Lag = 90, Column = pca_2_warn
Lag = 90, Column = pca_3_warn
Lag = 90, Column = pca_4_warn
Lag = 90, Column = pca_5_warn
Lag = 90, Column = pca_6_warn
Lag = 90, Column = pca_7_warn
Lag = 90, Column = pca_8_warn
Lag = 90, Column = pca_9_warn
Lag = 90, Column = pca_10_warn
Lag = 90, Column = pca_11_warn
Lag = 90, Column = pca_12_warn
Lag = 90, Column = pca_13_warn
Lag = 90, Column = pca_14_warn
Lag = 90, Column = pca_15_warn
Lag = 90, Column = pca_16_warn
Lag = 90, Column = pca_17_warn
Lag = 90, Column = pca_18_warn
Lag = 90, Column = pca_19_warn
Lag = 90, Column = pca_20_warn
Lag = 90, Column = problem_type_1
Lag = 90, Column = problem_type_2
Lag = 90, Column = problem_type_3
Lag = 90, Column = problem_type_4
Lag = 90, Column = problem_type_1_per_usage1
Lag = 90, Column = problem_type_2_per_usage1
Lag = 90, Column = problem_type_3_per_usage1
Lag = 90, Column = problem_type_4_per_usage1
Lag = 90, Column = problem_type_1_per_usage2
Lag = 90, Column = problem_type_2_per_usage2
Lag = 90, Column = problem_type_3_per_usage2
Lag = 90, Column = problem_type_4_per_usage2
Lag = 90, Column = fault_code_type_1_count
Lag = 90, Column = fault_code_type_2_count
Lag = 90, Column = fault_code_type_3_count
Lag = 90, Column = fault_code_type_4_count
Lag = 90, Column = fault_code_type_1_count_per_usage1
Lag = 90, Column = fault_code_type_2_count_per_usage1
Lag = 90, Column = fault_code_type_3_count_per_usage1
Lag = 90, Column = fault_code_type_4_count_per_usage1
Lag = 90, Column = fault_code_type_1_count_per_usage2
Lag = 90, Column = fault_code_type_2_count_per_usage2
Lag = 90, Column = fault_code_type_3_count_per_usage2
Lag = 90, Column = fault_code_type_4_count_per_usage2
CPU times: user 870 ms, sys: 306 ms, total: 1.18 s
Wall time: 23min 27s
| MIT | Notebook_2_FeatureEngineering_RollingCompute.ipynb | yallic/PySpark-Predictive-Maintenance |
Join the result datasets from the five rolling compute cells:
- Joins in Spark are usually very slow, so it is better to reduce the number of partitions before the join.
- Check the number of partitions of each PySpark dataframe.
- **repartition vs coalesce**: if we only want to reduce the number of partitions, it is better to use `coalesce`, because `repartition` involves a full reshuffle, which is computationally more expensive and takes more time. | # Import result dataset
rollingmean = sqlContext.read.parquet('/mnt/resource/PysparkExample/data_rollingmean.parquet')
rollingdiff = sqlContext.read.parquet('/mnt/resource/PysparkExample/rollingdiff.parquet')
rollingstd = sqlContext.read.parquet('/mnt/resource/PysparkExample/rollingstd.parquet')
rollingmax = sqlContext.read.parquet('/mnt/resource/PysparkExample/rollingmax.parquet')
rollingmin = sqlContext.read.parquet('/mnt/resource/PysparkExample/rollingmin.parquet')
# Check the number of partitions for each dataset
print(rollingmean.rdd.getNumPartitions())
print(rollingdiff.rdd.getNumPartitions())
print(rollingstd.rdd.getNumPartitions())
print(rollingmax.rdd.getNumPartitions())
print(rollingmin.rdd.getNumPartitions())
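# Side note (added illustration, not from the original notebook): coalesce() only merges
# existing partitions (a narrow transformation, no full shuffle), whereas repartition()
# performs a full shuffle across the cluster, which is why coalesce is preferred below.
fewer_partitions = rollingmean.coalesce(4)      # cheap: no shuffle involved
print(fewer_partitions.rdd.getNumPartitions())  # at most 4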
%%time
# To make join faster, reduce the number of partitions (not necessarily to "1")
rollingmean = rollingmean.coalesce(1)
rollingdiff = rollingdiff.coalesce(1)
rollingstd = rollingstd.coalesce(1)
rollingmax = rollingmax.coalesce(1)
rollingmin = rollingmin.coalesce(1)
rolling_result = rollingmean.join(rollingdiff, 'key', 'inner')\
.join(rollingstd, 'key', 'inner')\
.join(rollingmax, 'key', 'inner')\
.join(rollingmin, 'key', 'inner')
## Write the final result as parquet file for downstream work in Notebook_3
rolling_result.write.mode('overwrite').parquet('/mnt/resource/PysparkExample/notebook2_result.parquet')
| CPU times: user 901 ms, sys: 303 ms, total: 1.2 s
Wall time: 1h 50min 38s
| MIT | Notebook_2_FeatureEngineering_RollingCompute.ipynb | yallic/PySpark-Predictive-Maintenance |
Are Graphs Unique?This notebook shows how to determine if each entry in the HydroNet dataset represents a unique graph. | %matplotlib inline
from matplotlib import pyplot as plt
from hydronet.data import graph_from_dict
from multiprocessing import Pool
from functools import partial
from tqdm import tqdm
import networkx as nx
import pandas as pd
import numpy as np | _____no_output_____ | Apache-2.0 | misc/are-unique-graphs-unique-clusters.ipynb | exalearn/hydronet |
Configuration | cluster_size = 18 | _____no_output_____ | Apache-2.0 | misc/are-unique-graphs-unique-clusters.ipynb | exalearn/hydronet |
Load in the Data: load a small dataset from disk | %%time
data = pd.read_json('../data/output/atomic_valid.json.gz', lines=True)
print(f'Loaded {len(data)} records') | Loaded 224018 records
CPU times: user 33.1 s, sys: 5.7 s, total: 38.8 s
Wall time: 38.8 s
| Apache-2.0 | misc/are-unique-graphs-unique-clusters.ipynb | exalearn/hydronet |
Find Pairs of Isomorphic GraphsAssess how many training records are isomorphic | data.query(f'n_waters=={cluster_size}', inplace=True)
print(f'Downselected to {len(data)} graphs') | Downselected to 5714 graphs
| Apache-2.0 | misc/are-unique-graphs-unique-clusters.ipynb | exalearn/hydronet |
Generate networkx objects for each | %%time
data['nx'] = data.apply(graph_from_dict, axis=1) | CPU times: user 1.66 s, sys: 170 ms, total: 1.83 s
Wall time: 1.83 s
| Apache-2.0 | misc/are-unique-graphs-unique-clusters.ipynb | exalearn/hydronet |
Compute which graphs are isomorphic | data.reset_index(inplace=True)
matches = [[] for _ in range(len(data))]
n_matches = 0
with Pool() as p:
for i, g in tqdm(enumerate(data['nx']), total=len(data)):
f = partial(nx.algorithms.is_isomorphic, g, node_match=dict.__eq__, edge_match=dict.__eq__)
is_match = p.map(f, data['nx'].iloc[i+1:])
for j, hit in enumerate(is_match):
if hit:
n_matches += 1
j_real = i + j + 1
matches[i].append(j_real)
matches[j_real].append(i)
print(f'Found {n_matches} pairs of isomorphic graphs') | 100%|██████████| 5714/5714 [26:46<00:00, 3.56it/s]
| Apache-2.0 | misc/are-unique-graphs-unique-clusters.ipynb | exalearn/hydronet |
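To make the matching criterion concrete (a minimal, hypothetical example added here; the attribute names are not taken from the HydroNet schema): `node_match=dict.__eq__` and `edge_match=dict.__eq__` require the node and edge attribute dictionaries to be equal, not just the topology.
import networkx as nx
g1 = nx.Graph()
g1.add_node(0, kind='donor')
g1.add_node(1, kind='acceptor')
g1.add_edge(0, 1, kind='hbond')
g2 = nx.Graph()
g2.add_node(0, kind='acceptor')
g2.add_node(1, kind='donor')
g2.add_edge(0, 1, kind='hbond')
# isomorphic: node 0 of g1 maps to node 1 of g2 with identical attribute dicts
print(nx.algorithms.is_isomorphic(g1, g2, node_match=dict.__eq__, edge_match=dict.__eq__))  # True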
Add to the dataframe for safe keeping | data['matches'] = matches
data['n_matches'] = data['matches'].apply(len) | _____no_output_____ | Apache-2.0 | misc/are-unique-graphs-unique-clusters.ipynb | exalearn/hydronet |
Assess Energy Differences between Isomorphic Graphs: we want to know how large these differences are. Does each graph represent a local minimum, or are the matched clusters actually very different in energy? | energy_diffs = []
for rid, row in data.query('n_matches>0').iterrows():
for m in row['matches']:
if m > rid:
energy_diffs.append(abs(row['energy'] - data.iloc[m]['energy']))
print(f'Maximum: {np.max(energy_diffs):.2e} kcal/mol')
print(f'Median: {np.percentile(energy_diffs, 50):.2e} kcal/mol')
print(f'Minimum: {np.min(energy_diffs):.2e} kcal/mol')
fig, ax = plt.subplots(figsize=(3.5, 2.5))
bins = np.logspace(-4, 1, 32)
ax.hist(energy_diffs, bins=bins)
ax.set_xscale('log')
ax.set_xlabel('$\Delta E$ (kcal/mol)')
ax.set_ylabel('Frequency')
fig.tight_layout()
fig.savefig(f'figures/energy-difference-isomorphic-graphs-size-{cluster_size}.png', dpi=320) | _____no_output_____ | Apache-2.0 | misc/are-unique-graphs-unique-clusters.ipynb | exalearn/hydronet |
For comparison, print out the range of energies across clusters | (data['energy'] - data['energy'].min()).describe() | _____no_output_____ | Apache-2.0 | misc/are-unique-graphs-unique-clusters.ipynb | exalearn/hydronet |
Interactive question answering with OpenVINO. This demo shows interactive question answering with OpenVINO. We use a [small BERT-large-like model](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/bert-small-uncased-whole-word-masking-squad-int8-0002) that was distilled from a larger BERT-large model and quantized to INT8 on the SQuAD v1.1 training set. The model comes from [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/). At the bottom of this notebook, you will see live inference results from your inputs. | import time
from urllib import parse
import numpy as np
from openvino.runtime import Core, Dimension
import html_reader as reader
import tokens_bert as tokens | _____no_output_____ | Apache-2.0 | notebooks/213-question-answering/213-question-answering.ipynb | mzegla/openvino_notebooks |
The model Download the modelWe use `omz_downloader`, which is a command-line tool from the `openvino-dev` package. `omz_downloader` automatically creates a directory structure and downloads the selected model. If the model is already downloaded, this step is skipped.You can download and use any of the following models: `bert-large-uncased-whole-word-masking-squad-0001`, `bert-large-uncased-whole-word-masking-squad-int8-0001`, `bert-small-uncased-whole-word-masking-squad-0001`, `bert-small-uncased-whole-word-masking-squad-0002`, `bert-small-uncased-whole-word-masking-squad-int8-0002`, just change the model name below. Any of these models are already converted to OpenVINO Intermediate Representation (IR), so there is no need to use `omz_converter`. | # directory where model will be downloaded
base_model_dir = "model"
# desired precision
precision = "FP16-INT8"
# model name as named in Open Model Zoo
model_name = "bert-small-uncased-whole-word-masking-squad-int8-0002"
model_path = f"model/intel/{model_name}/{precision}/{model_name}.xml"
model_weights_path = f"model/intel/{model_name}/{precision}/{model_name}.bin"
download_command = f"omz_downloader " \
f"--name {model_name} " \
f"--precision {precision} " \
f"--output_dir {base_model_dir} " \
f"--cache_dir {base_model_dir}"
! $download_command | _____no_output_____ | Apache-2.0 | notebooks/213-question-answering/213-question-answering.ipynb | mzegla/openvino_notebooks |
Load the model. Downloaded models are located in a fixed structure, which indicates vendor, model name and precision. Only a few lines of code are required to run the model. First, we create an Inference Engine object. Then we read the network architecture and model weights from the .xml and .bin files. Finally, we compile the network for the desired device. Because we use dynamic shapes, the code can only run on `CPU`. | # initialize inference engine
ie_core = Core()
# read the model and corresponding weights from file
model = ie_core.read_model(model=model_path, weights=model_weights_path)
# assign dynamic shapes to every input layer
for input_layer in model.inputs:
input_shape = input_layer.partial_shape
input_shape[1] = Dimension()
model.reshape({input_layer: input_shape})
# compile the model for the CPU
compiled_model = ie_core.compile_model(model=model, device_name="CPU")
# get input and output names of nodes
input_keys = list(compiled_model.inputs)
output_keys = list(compiled_model.outputs) | _____no_output_____ | Apache-2.0 | notebooks/213-question-answering/213-question-answering.ipynb | mzegla/openvino_notebooks |
Input keys are the names of the input nodes and output keys contain names of output nodes of the network. In the case of the BERT-large-like model, we have four inputs and two outputs. | [i.any_name for i in input_keys], [o.any_name for o in output_keys] | _____no_output_____ | Apache-2.0 | notebooks/213-question-answering/213-question-answering.ipynb | mzegla/openvino_notebooks |
ProcessingNLP models usually take a list of tokens as standard input. A token is a single word converted to some integer. To provide the proper input, we need the vocabulary for such mapping. We also define some special tokens like separators and a function to load the content from provided URLs. | # path to vocabulary file
vocab_file_path = "data/vocab.txt"
# create dictionary with words and their indices
vocab = tokens.load_vocab_file(vocab_file_path)
# define special tokens
cls_token = vocab["[CLS]"]
sep_token = vocab["[SEP]"]
# function to load text from given urls
def load_context(sources):
input_urls = []
paragraphs = []
for source in sources:
result = parse.urlparse(source)
if all([result.scheme, result.netloc]):
input_urls.append(source)
else:
paragraphs.append(source)
paragraphs.extend(reader.get_paragraphs(input_urls))
# produce one big context string
return "\n".join(paragraphs) | _____no_output_____ | Apache-2.0 | notebooks/213-question-answering/213-question-answering.ipynb | mzegla/openvino_notebooks |
PreprocessingThe main input (`input_ids`) to used BERT model consist of two parts: question tokens and context tokens separated by some special tokens. We also need to provide: `attention_mask`, which is a sequence of integer values representing the mask of valid values in the input; `token_type_ids`, which is a sequence of integer values representing the segmentation of the `input_ids` into question and context; `position_ids`, which is a sequence of integer values from 0 to length of input, extended by separation tokens, representing the position index for each input token. To know more about input, please read [this](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/bert-small-uncased-whole-word-masking-squad-int8-0002input). | # generator of a sequence of inputs
def prepare_input(question_tokens, context_tokens, input_keys):
input_ids = [cls_token] + question_tokens + [sep_token] + context_tokens + [sep_token]
# 1 for any index
attention_mask = [1] * len(input_ids)
# 0 for question tokens, 1 for context part
token_type_ids = [0] * (len(question_tokens) + 2) + [1] * (len(context_tokens) + 1)
# create input to feed the model
input_dict = {
"input_ids": np.array([input_ids], dtype=np.int32),
"attention_mask": np.array([attention_mask], dtype=np.int32),
"token_type_ids": np.array([token_type_ids], dtype=np.int32),
}
# some models require additional position_ids
if "position_ids" in [i_key.any_name for i_key in input_keys]:
position_ids = np.arange(len(input_ids))
input_dict["position_ids"] = np.array([position_ids], dtype=np.int32)
return input_dict | _____no_output_____ | Apache-2.0 | notebooks/213-question-answering/213-question-answering.ipynb | mzegla/openvino_notebooks |
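As a toy illustration of the resulting layout (hypothetical token ids, not taken from the real vocabulary): for a two-token question and a three-token context, `prepare_input` produces
# input_ids      = [CLS, q1, q2, SEP, c1, c2, c3, SEP]
# token_type_ids = [0,   0,  0,  0,   1,  1,  1,  1  ]
# attention_mask = [1,   1,  1,  1,   1,  1,  1,  1  ]
# position_ids   = [0,   1,  2,  3,   4,  5,  6,  7  ]   (only if the model requires them)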
PostprocessingThe results from the network are raw (logits). We need to use the softmax function to get the probability distribution. Then, we are looking for the best answer in the current part of the context (the highest score) and we return the score and the context range for the answer. | # based on https://github.com/openvinotoolkit/open_model_zoo/blob/bf03f505a650bafe8da03d2747a8b55c5cb2ef16/demos/common/python/openvino/model_zoo/model_api/models/bert.py#L163
def postprocess(output_start, output_end, question_tokens, context_tokens_start_end, input_size):
def get_score(logits):
out = np.exp(logits)
return out / out.sum(axis=-1)
# get start-end scores for context
score_start = get_score(output_start)
score_end = get_score(output_end)
# index of first context token in tensor
context_start_idx = len(question_tokens) + 2
# index of last+1 context token in tensor
context_end_idx = input_size - 1
# find product of all start-end combinations to find the best one
max_score, max_start, max_end = find_best_answer_window(start_score=score_start,
end_score=score_end,
context_start_idx=context_start_idx,
context_end_idx=context_end_idx)
# convert to context text start-end index
max_start = context_tokens_start_end[max_start][0]
max_end = context_tokens_start_end[max_end][1]
return max_score, max_start, max_end
# based on https://github.com/openvinotoolkit/open_model_zoo/blob/bf03f505a650bafe8da03d2747a8b55c5cb2ef16/demos/common/python/openvino/model_zoo/model_api/models/bert.py#L188
def find_best_answer_window(start_score, end_score, context_start_idx, context_end_idx):
context_len = context_end_idx - context_start_idx
score_mat = np.matmul(
start_score[context_start_idx:context_end_idx].reshape((context_len, 1)),
end_score[context_start_idx:context_end_idx].reshape((1, context_len)),
)
# reset candidates with end before start
score_mat = np.triu(score_mat)
# reset long candidates (>16 words)
score_mat = np.tril(score_mat, 16)
# find the best start-end pair
max_s, max_e = divmod(score_mat.flatten().argmax(), score_mat.shape[1])
max_score = score_mat[max_s, max_e]
return max_score, max_s, max_e | _____no_output_____ | Apache-2.0 | notebooks/213-question-answering/213-question-answering.ipynb | mzegla/openvino_notebooks |
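A small standalone example of the start/end search above (added illustration, not from the original notebook): the outer product of start and end scores is masked so that the end index is never before the start (`np.triu`) and the span is at most 16 tokens (`np.tril(..., 16)`), then the best cell is located with `divmod`.
import numpy as np
start = np.array([0.1, 0.7, 0.2])
end = np.array([0.2, 0.2, 0.6])
score_mat = np.tril(np.triu(np.outer(start, end)), 16)
max_s, max_e = divmod(score_mat.flatten().argmax(), score_mat.shape[1])
print(max_s, max_e, score_mat[max_s, max_e])  # 1 2 0.42 (approximately): the answer spans tokens 1..2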
Firstly, we need to create a list of tokens from the context and the question. Then, we are looking for the best answer in the context. The best answer should come with the highest score. | def get_best_answer(question, context, vocab, input_keys):
# convert context string to tokens
context_tokens, context_tokens_start_end = tokens.text_to_tokens(text=context.lower(),
vocab=vocab)
# convert question string to tokens
question_tokens, _ = tokens.text_to_tokens(text=question.lower(), vocab=vocab)
network_input = prepare_input(question_tokens, context_tokens, input_keys)
input_size = len(context_tokens) + len(question_tokens) + 3
# openvino inference
request = compiled_model.create_infer_request()
request.infer(inputs=network_input)
# postprocess the result getting the score and context range for the answer
score_start_end = postprocess(output_start=request.get_tensor(name="output_s").data[0],
output_end=request.get_tensor(name="output_e").data[0],
question_tokens=question_tokens,
context_tokens_start_end=context_tokens_start_end,
input_size=input_size)
# return the part of the context, which is already an answer
return context[score_start_end[1]:score_start_end[2]], score_start_end[0] | _____no_output_____ | Apache-2.0 | notebooks/213-question-answering/213-question-answering.ipynb | mzegla/openvino_notebooks |
Main Processing FunctionRun question answering on specific knowledge base and iterate through the questions. | def run_question_answering(sources):
print(f"Context: {sources}", flush=True)
context = load_context(sources)
if len(context) == 0:
print("Error: Empty context or outside paragraphs")
return
while True:
question = input()
# if no question - break
if question == "":
break
# measure processing time
start_time = time.perf_counter()
answer, score = get_best_answer(question=question, context=context, vocab=vocab, input_keys=input_keys)
end_time = time.perf_counter()
print(f"Question: {question}")
print(f"Answer: {answer}")
print(f"Score: {score:.2f}")
print(f"Time: {end_time - start_time:.2f}s") | _____no_output_____ | Apache-2.0 | notebooks/213-question-answering/213-question-answering.ipynb | mzegla/openvino_notebooks |
Run Run on local paragraphsChange sources to your own to answer your questions. You can use as many sources as you want. Usually, you need to wait a few seconds for the answer, but the longer the context, the longer the waiting time. The model is very limited and sensitive to the input. The answer can depend on whether there is a question mark at the end. The model will try to answer any of your questions even if there is no good answer in the context, so in that case, you can see random results.Sample source: Computational complexity theory paragraph (from [here](https://rajpurkar.github.io/SQuAD-explorer/explore/v2.0/dev/Computational_complexity_theory.html))Sample questions:- What is the term for a task that generally lends itself to being solved by a computer?- By what main attribute are computational problems classified utilizing computational complexity theory?- What branch of theoretical computer science deals with broadly classifying computational problems by difficulty and class of relationship?If you want to stop the processing, just enter an empty string.*Note: First, run the code below and then put your questions in the box.* | sources = ["Computational complexity theory is a branch of the theory of computation in theoretical computer "
"science that focuses on classifying computational problems according to their inherent difficulty, "
"and relating those classes to each other. A computational problem is understood to be a task that "
"is in principle amenable to being solved by a computer, which is equivalent to stating that the "
"problem may be solved by mechanical application of mathematical steps, such as an algorithm."]
run_question_answering(sources) | _____no_output_____ | Apache-2.0 | notebooks/213-question-answering/213-question-answering.ipynb | mzegla/openvino_notebooks |
Run on websitesYou can also provide URLs. Note that the context (knowledge base) is built from website paragraphs. If some information is outside the paragraphs, the algorithm won't be able to find it.Sample source: [OpenVINO wiki](https://en.wikipedia.org/wiki/OpenVINO)Sample questions:- What does OpenVINO mean?- What is the license for OpenVINO?- Where can you deploy OpenVINO code?If you want to stop the processing, just enter an empty string.*Note: First, run the code below and then put your questions in the box.* | sources = ["https://en.wikipedia.org/wiki/OpenVINO"]
run_question_answering(sources) | _____no_output_____ | Apache-2.0 | notebooks/213-question-answering/213-question-answering.ipynb | mzegla/openvino_notebooks |
Laboratorio 10 | import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import plot_confusion_matrix, classification_report, accuracy_score, recall_score, f1_score
%matplotlib inline
breast_cancer = load_breast_cancer()
X, y = breast_cancer.data, breast_cancer.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
target_names = breast_cancer.target_names | _____no_output_____ | MIT | labs/lab10.ipynb | AlvaroMarambio/mat281_portfolio |
Exercise 1 (1 pt.) Fit a logistic regression to the training data and report the _accuracy_ on the test data. Use the argument `n_jobs` equal to $-1$; if the solver still does not converge, increase the value of `max_iter`. Hint: remember that _accuracy_ is the default _score_ for scikit-learn classification models. | # no cross-validation is needed here: this logistic regression is fitted with fixed settings, so there is no hyperparameter grid to search
lr = LogisticRegression(max_iter=2100, n_jobs=-1)  # with about 2100 iterations the solver already converges to roughly the same value as with more iterations
lr.fit(X_train, y_train)
y_pred = lr.predict(X_test)
print(f"Logistic Regression accuracy: {accuracy_score(y_test, y_pred):0.2f}") #lr.score(X_test, y_test) | Logistic Regression accuracy: 0.98
| MIT | labs/lab10.ipynb | AlvaroMarambio/mat281_portfolio |
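A hedged side note (not required by the exercise): instead of raising `max_iter`, standardizing the features usually lets the default solver converge, for example with a small pipeline.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
lr_scaled = make_pipeline(StandardScaler(), LogisticRegression(n_jobs=-1))
lr_scaled.fit(X_train, y_train)
print(f"Scaled Logistic Regression accuracy: {lr_scaled.score(X_test, y_test):0.2f}")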
Exercise 2 (1 pt.) Use `GridSearchCV` with 5 _folds_ to find the best value of `n_neighbors` for a KNN model. | knn = KNeighborsClassifier()  # define the KNN model
knn_grid = {"n_neighbors": np.arange(2, 31)}  # define the hyperparameter grid
knn_cv = GridSearchCV(  # cross-validated grid search
    KNeighborsClassifier(),  # the model whose hyperparameters are tuned
    param_grid=knn_grid,  # the grid to search over
    cv=5,  # 5 folds
    n_jobs=-1  # use all available cores
)
knn_cv.fit(X_train, y_train)  # fit the grid search on the training data
knn_cv.best_params_
y_pred1 = knn_cv.predict(X_test)
#knn_cv.best_score_
#knn_cv.best_estimator_
knn_cv.best_params_
print(f"KNN accuracy: {accuracy_score(y_test, y_pred1):0.2f}") #Imorimir el acurracy del moejor modelo de knn con el mejor n_neighbors | KNN accuracy: 0.96
| MIT | labs/lab10.ipynb | AlvaroMarambio/mat281_portfolio |
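Before choosing between the two models (a hedged sanity check added here, not required by the lab): comparing both with cross-validation on the training set depends less on a single train/test split than the two accuracies printed above.
from sklearn.model_selection import cross_val_score
lr_scores = cross_val_score(LogisticRegression(max_iter=2100, n_jobs=-1), X_train, y_train, cv=5)
knn_scores = cross_val_score(knn_cv.best_estimator_, X_train, y_train, cv=5)
print(f"Logistic Regression CV accuracy: {lr_scores.mean():0.3f} +/- {lr_scores.std():0.3f}")
print(f"KNN CV accuracy: {knn_scores.mean():0.3f} +/- {knn_scores.std():0.3f}")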
Exercise 3 (1 pt.) Which model would you choose based on the previous results? Justify. __Answer:__ Of the two models, logistic regression is the better choice: it reaches an accuracy of 0.98 on the test set (with `max_iter=2100`), compared with 0.96 for the KNN model tuned with 5-fold cross-validation. Exercise 4 (1 pt.) For the model selected in the previous exercise: * Plot the confusion matrix (do not forget to use the original class names as the _labels_). * Print the classification report. | # keep the logistic regression model selected in Exercise 3
plot_confusion_matrix(lr, X_test, y_test, display_labels=target_names)  # show the real class names (malignant, benign) instead of 0 and 1
plt.show()
# use the y_pred computed earlier with the logistic regression model
print(classification_report(y_test, y_pred, target_names=breast_cancer.target_names)) | precision recall f1-score support
malignant 0.97 0.97 0.97 63
benign 0.98 0.98 0.98 108
accuracy 0.98 171
macro avg 0.97 0.97 0.97 171
weighted avg 0.98 0.98 0.98 171
| MIT | labs/lab10.ipynb | AlvaroMarambio/mat281_portfolio |
Chapter 11 - Machine Learning. Woo, machine learning, ML, AI! Data science is a lot of reformatting of business problems into data problems, and then collecting, cleaning, formatting, and restructuring data. ML is almost an afterthought, but it is an essential afterthought. The intro to this chapter is a good one. | # supervised models
# need to create some learning data
import random  # needed for random.random() in split_data below

def split_data(data, prob):
results = [], []
for row in data:
results[0 if random.random() < prob else 1].append(row)
return results
def train_test_split(x, y, test_pct):
data = zip(x,y)
train, test = split_data(data, 1 - test_pct)
x_train, y_train = zip(*train)
x_test, y_test = zip(*test)
return x_train, x_test, y_train, y_test
# then you can create your little model (pseudocode: SomeKindOfModel is a placeholder, not a real class)
model = SomeKindOfModel()
x_train, x_test, y_train, y_test = train_test_split(xs, ys, 0.33)
model.train(x_train, y_train)
performance = model.test(x_test, y_test)
| _____no_output_____ | MIT | DSFS Chapter 11 - Machine Learning.ipynb | Kladar/dsfs |
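A quick usage sketch of the helpers above (added illustration, not from the book): split a toy dataset roughly 75/25 and check the sizes.
xs = list(range(1000))
ys = [2 * x for x in xs]
x_train, x_test, y_train, y_test = train_test_split(xs, ys, 0.25)
print(len(x_train), len(x_test))  # roughly 750 and 250; varies because the split is random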
Models aren't necessarily graded on accuracy. If we said every person named Luke will not develop leukemia, we'd be right 98% of the time. See the book (or a Google search) for the confusion matrix, which describes true positives, true negatives, false positives (Type I errors), and false negatives (Type II errors). | def accuracy(tp, fp, fn, tn):
correct = tp + tn
total = tp+tn+fp+fn
return correct / total
print(accuracy(70, 4930, 13930, 981070))
# precision is accuracy of positive predictions
def precision(tp, fp, fn, tn):
return tp / (tp+fp)
print(precision(70,4930,13930,981070))
# recall is the fraction of the actual positives that the model identifies
def recall(tp, fp, fn, tn):
return tp / (tp + fn)
print(recall(70,4930,13930,981070))
# both precision and recall are terrible, so this is a terrible model
# sometimes these are combined into an f1 score
def f1_score(tp, fp, fn, tn):
p = precision(tp, fp, fn, tn)
r = recall(tp, fp, fn, tn)
return 2 * p * r / (p+r)
print(f1_score(70,4930,13930,981070)) | 0.00736842105263158
| MIT | DSFS Chapter 11 - Machine Learning.ipynb | Kladar/dsfs |
Annotating pathways in mouse single-cell clusters. This tutorial shows how to use the descartes_rpa module with scanpy-formatted data outside of Descartes. Data from the [Trajectory inference for hematopoiesis in mouse](https://scanpy-tutorials.readthedocs.io/en/latest/paga-paul15.html) tutorial will be used. | import scanpy as sc
adata = sc.datasets.paul15()
adata.X = adata.X.astype('float64') # this is not required and results will be comparable without it
sc.pp.recipe_zheng17(adata)
sc.tl.pca(adata, svd_solver='arpack')
sc.pp.neighbors(adata, n_neighbors=4, n_pcs=20)
sc.tl.leiden(adata) | _____no_output_____ | Apache-2.0 | demo/mouse_data.ipynb | reactome/descartes |
Since this dataset is from mouse (Mus musculus), we pass its species as input | from descartes_rpa import get_pathways_for_group
get_pathways_for_group(adata, groupby="paul15_clusters", species="Mus musculus") | /home/joao/miniconda3/envs/descartes-rpa/lib/python3.9/site-packages/scanpy/tools/_rank_genes_groups.py:419: RuntimeWarning: invalid value encountered in log2
self.stats[group_name, 'logfoldchanges'] = np.log2(
| Apache-2.0 | demo/mouse_data.ipynb | reactome/descartes |
We can look at the top 2 marker genes for each cluster | from descartes_rpa.pl import marker_genes
marker_genes(adata, n_genes=2) | WARNING: dendrogram data not found (using key=dendrogram_paul15_clusters). Running `sc.tl.dendrogram` with default parameters. For fine tuning it is recommended to run `sc.tl.dendrogram` independently.
WARNING: saving figure to file dotplot_marker_genes.pdf
| Apache-2.0 | demo/mouse_data.ipynb | reactome/descartes |
Also, we can look at the shared pathways between clusters | from descartes_rpa.pl import shared_pathways
shared_pathways(adata, clusters=["9GMP", "1Ery", "17Neu", "4Ery"])
from descartes_rpa import get_shared
get_shared(adata, clusters=["1Ery", "4Ery"])
from descartes_rpa.pl import pathways
adata.uns["pathways"].keys()
pathways(adata, "18Eos")
pathways(adata, "3Ery") | _____no_output_____ | Apache-2.0 | demo/mouse_data.ipynb | reactome/descartes |
Djibouti* Homepage of project: https://oscovida.github.io* Plots are explained at http://oscovida.github.io/plots.html* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Djibouti.ipynb) | import datetime
import time
start = datetime.datetime.now()
print(f"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}")
%config InlineBackend.figure_formats = ['svg']
from oscovida import *
overview("Djibouti", weeks=5);
overview("Djibouti");
compare_plot("Djibouti", normalise=True);
# load the data
cases, deaths = get_country_data("Djibouti")
# get population of the region for future normalisation:
inhabitants = population("Djibouti")
print(f'Population of "Djibouti": {inhabitants} people')
# compose into one table
table = compose_dataframe_summary(cases, deaths)
# show tables with up to 1000 rows
pd.set_option("max_rows", 1000)
# display the table
table | _____no_output_____ | CC-BY-4.0 | ipynb/Djibouti.ipynb | oscovida/oscovida.github.io |
Explore the data in your web browser- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Djibouti.ipynb)- and wait (~1 to 2 minutes)- Then press SHIFT+RETURN to advance code cell to code cell- See http://jupyter.org for more details on how to use Jupyter Notebook Acknowledgements:- Johns Hopkins University provides data for countries- Robert Koch Institute provides data for within Germany- Atlo Team for gathering and providing data from Hungary (https://atlo.team/koronamonitor/)- Open source and scientific computing community for the data tools- Github for hosting repository and html files- Project Jupyter for the Notebook and binder service- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))-------------------- | print(f"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and "
f"deaths at {fetch_deaths_last_execution()}.")
# to force a fresh download of data, run "clear_cache()"
print(f"Notebook execution took: {datetime.datetime.now()-start}")
| _____no_output_____ | CC-BY-4.0 | ipynb/Djibouti.ipynb | oscovida/oscovida.github.io |
Linear regression without scikit-learnIn this notebook, we introduce linear regression. Before presenting theavailable scikit-learn classes, we will provide some insights with a simpleexample. We will use a dataset that contains information about penguins. NoteIf you want a deeper overview regarding this dataset, you can refer to theAppendix - Datasets description section at the end of this MOOC. | import pandas as pd
penguins = pd.read_csv("../datasets/penguins_regression.csv")
penguins.head() | _____no_output_____ | CC-BY-4.0 | notebooks/linear_regression_without_sklearn.ipynb | brospars/scikit-learn-mooc |