Source: https://pbpython.com/bullet-graph.html

import matplotlib.pyplot as plt
import seaborn as sns
from matplotlib.ticker import FuncFormatter
%matplotlib inline
# display a palette of 5 shades of green
sns.palplot(sns.light_palette("green", 5))
# 8 different shades of purple in reverse order
sns.palplot(sns.light_palette("purple", 8, reverse=True))
# define the values we want to plot
limits = [80, 100, 150] # 3 ranges: 0-80, 81-100, 101-150
data_to_plot = ("Example 1", 105, 120)  # "Example 1" bar with a value of 105 and a target line at 120
# blues color palette
palette = sns.color_palette("Blues_r", len(limits))
# build the stacked bar chart of the ranges
fig, ax = plt.subplots()
ax.set_aspect('equal')
ax.set_yticks([1])
ax.set_yticklabels([data_to_plot[0]])
prev_limit = 0
for idx, lim in enumerate(limits):
ax.barh([1], lim-prev_limit, left=prev_limit, height=15, color=palette[idx])
prev_limit = lim
# Draw the value we're measuring
ax.barh([1], data_to_plot[1], color='black', height=5)
ax.axvline(data_to_plot[2], color="gray", ymin=0.10, ymax=0.9)
MIT | styling/bullet_graph.ipynb | TillMeineke/machine_learning
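The snippet above imports `FuncFormatter` but never uses it. As a sketch of how it could be applied, the bullet graph's x-axis ticks can be run through a custom formatter (the dollar formatting here is purely illustrative, not part of the original example):

```python
import matplotlib.pyplot as plt
from matplotlib.ticker import FuncFormatter

# Illustrative formatter: render tick values as whole numbers with a unit prefix.
def money(x, pos):
    return f"${x:,.0f}"

fig, ax = plt.subplots()
ax.xaxis.set_major_formatter(FuncFormatter(money))

# Calling the formatter directly shows the strings it will produce for ticks.
print(money(1500, 0))  # -> $1,500
```

`FuncFormatter` wraps any `(value, tick_position) -> str` function, so the same pattern works for percentages, thousands separators, or AQI labels.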
# Was Air Quality Affected in Countries or Regions Where COVID-19 was Most Prevalent?

**By: Arpit Jain, Maria Stella Vardanega, Tingting Cao, Christopher Chang, Mona Ma, Fusu Luo**

---

## Outline

I. Problem Definition & Data Source Description
   1. Project Objectives
   2. Data Source
   3. Dataset Preview

II. What are the most prevalent pollutants?

III. What were the pollutant levels in 2019 and 2020 globally, and their averages?
   1. Selecting data from 2019 and 2020 with air pollutant information
   2. Monthly air pollutant data from 2019
   3. Monthly air pollutant data from 2020

IV. What cities had the highest changes in pollutant air quality index during COVID-19?
   1. 10 cities with the most air quality index reduction for each pollutant
   2. Cities with more than a 50 percent AQI decrease and a 50-point AQI decrease for each air pollutant

V. Regression analysis on COVID-19 cases and pollutant Air Quality Index globally

VI. When were lockdowns implemented for each country?

VII. How did air quality change in countries with low COVID-19 cases (NZ, AUS, TW) and high COVID-19 cases (US, IT, CN)?
   1. Countries with high COVID cases
   2. Countries with low COVID cases

VIII. Conclusion

IX. Public Tableau Dashboards

---

## I. Problem Definition & Data Source Description

### 1. Project Objectives

Air pollution, one of the most serious environmental problems confronting our civilization, is the presence of toxic gases and particles in the air at levels that pose adverse effects on the global climate and public health. Exposure to elevated levels of air pollutants has been implicated in a diverse set of medical conditions including cardiovascular and respiratory mortality, lung cancer and autism. Air pollutants come from natural sources such as wildfires and volcanoes, and are also closely tied to human activities, from mobile sources (such as cars, buses and planes) and stationary sources (such as industrial factories, power plants and wood-burning fireplaces).
However, in the past year, the COVID-19 pandemic has caused unprecedented changes to our work, study and daily activities, which subsequently led to major reductions in air pollutant emissions. Our team took this opportunity to examine air quality over the past two years and look at how it was affected in countries and cities where the coronavirus was prevalent.

### 2. Data Source

**Data Source Description:** In this project, we downloaded worldwide air quality data for 2019 and 2020 from the Air Quality Open Data Platform (https://aqicn.org/data-platform/covid19/), which provides historical air quality index and meteorological data for more than 380 major cities across the world. We used 2019 air quality index data as a baseline to find the air quality changes during COVID in 2020. In addition, we joined the data with geographic location information from https://aqicn.org/data-platform/covid19/airquality-covid19-cities.json to get the air quality index for each pollutant at city level. According to the data provider, the data for each major city is based on the average (median) of several stations. The dataset provides the min, max, median and standard deviation for each air pollutant species in the form of an Air Quality Index (AQI), converted from raw concentrations based on the US Environmental Protection Agency (EPA) standard. The EPA lists the following criteria pollutants (https://www.epa.gov/criteria-air-pollutants/naaqs-table): Carbon Monoxide (CO), Nitrogen Dioxide (NO2), Ozone (O3), Particle Pollution (PM2.5 and PM10), and Sulfur Dioxide (SO2). For particle pollution, the numbers denote particle size: PM2.5 means particles 2.5 micrometers and smaller, while PM10 means particles 10 micrometers and smaller (https://www.epa.gov/pm-pollution/particulate-matter-pm-basics). Particle pollution typically includes dust, dirt, and smoke.
Our dataset covers most of the criteria pollutants (PM2.5, PM10, Ozone, SO2, NO2 and CO), plus meteorological parameters such as temperature, wind speed, dew point and relative humidity. Air quality index basics are shown in the figure below (source: https://www.airnow.gov/aqi/aqi-basics/).

### 3. Preview of the Dataset

%%bigquery
SELECT * FROM `ba775-team2-b2.AQICN.air_quality_data` LIMIT 10

MIT | docs/team-projects/Summer-2021/B2-Team2-Analyzing-Air-Quality-During-COVID19-Pandemic.ipynb | JQmiracle/Business-Analytics-Toolbox
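The EPA concentration-to-AQI conversion mentioned above is a piecewise linear interpolation over fixed breakpoints. As a minimal sketch (using the PM2.5 24-hour breakpoints published by the EPA at the time of this dataset, before the 2024 revision):

```python
# EPA AQI formula: AQI = (I_hi - I_lo) / (C_hi - C_lo) * (C - C_lo) + I_lo
# PM2.5 24-hour breakpoints (ug/m3) mapped to AQI category bounds.
PM25_BREAKPOINTS = [
    (0.0, 12.0, 0, 50),        # Good
    (12.1, 35.4, 51, 100),     # Moderate
    (35.5, 55.4, 101, 150),    # Unhealthy for Sensitive Groups
    (55.5, 150.4, 151, 200),   # Unhealthy
    (150.5, 250.4, 201, 300),  # Very Unhealthy
    (250.5, 350.4, 301, 400),  # Hazardous
    (350.5, 500.4, 401, 500),  # Hazardous
]

def pm25_aqi(conc):
    """Convert a raw PM2.5 concentration (ug/m3) to the US EPA AQI."""
    for c_lo, c_hi, i_lo, i_hi in PM25_BREAKPOINTS:
        if c_lo <= conc <= c_hi:
            return round((i_hi - i_lo) / (c_hi - c_lo) * (conc - c_lo) + i_lo)
    raise ValueError(f"concentration out of range: {conc}")

print(pm25_aqi(35.4))  # 100, the top of the Moderate band
```

The same formula applies to the other criteria pollutants, each with its own breakpoint table.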
---

## II. What are the most prevalent pollutants?

This question focuses on the prevalence of the pollutants. In this dataset, prevalence can be defined geographically: in how many distinct cities and countries each parameter was detected. To measure it, we counted the distinct cities and countries in which each parameter appears.

%%bigquery
SELECT
Parameter,COUNT(distinct(City)) AS number_of_city,
COUNT(distinct(Country)) AS number_of_country,string_agg(distinct(Country)) AS list_country
FROM `ba775-team2-b2.AQICN.air_quality_data`
GROUP BY Parameter
ORDER BY number_of_city DESC
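The same distinct-count aggregation can be sketched outside BigQuery in plain Python, assuming the rows are available as `(parameter, city, country)` tuples:

```python
from collections import defaultdict

def prevalence(rows):
    """Count distinct cities and countries per parameter, like the SQL above."""
    cities = defaultdict(set)
    countries = defaultdict(set)
    for parameter, city, country in rows:
        cities[parameter].add(city)
        countries[parameter].add(country)
    return sorted(
        ((p, len(cities[p]), len(countries[p])) for p in cities),
        key=lambda r: r[1], reverse=True,  # ORDER BY number_of_city DESC
    )

rows = [("pm25", "Boston", "US"), ("pm25", "Delhi", "IN"),
        ("no2", "Boston", "US"), ("pm25", "Boston", "US")]
print(prevalence(rows))  # [('pm25', 2, 2), ('no2', 1, 1)]
```

Sets deduplicate repeated (parameter, city) observations, mirroring `COUNT(DISTINCT ...)`.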
From the result, the top 6 parameters are meteorological. The first air pollutants (which can be harmful to public health and the environment) are PM2.5, followed by NO2 and PM10:

- PM2.5 has been detected in 548 cities and 92 countries.
- NO2 has been detected in 528 cities and 64 countries.
- PM10 has been detected in 527 cities and 71 countries.

We conclude that PM2.5, NO2 and PM10 are the most prevalent pollutants in the dataset. All of them are criteria pollutants designated by the EPA.

---

## III. What were the pollutant levels in 2019 and 2020 globally, and their averages?

The purpose of this question is to determine the air pollutant levels in 2019 and 2020; the 2019 levels serve as a baseline for 2020. In the previous question we observed the distinct parameters in the Air Quality database. Since the meteorological parameters are not needed for this project, we exclude them and focus only on the air pollutants. The first step is to create a table restricted to air pollutant records from 2019 and 2020. The next step is to select all the rows from each year for a given parameter and union them all, for all six parameters in both years.

### 1. Selecting data from 2019 and 2020 with air pollutant information

%%bigquery
SELECT Date, Country, City, lat as Latitude, lon as Longitude, pop as Population, Parameter as Pollutant, median as Pollutant_level
FROM `ba775-team2-b2.AQICN.air_quality_data`
WHERE (extract(year from date) = 2019 OR extract(year from date) = 2020) AND parameter IN ('co', 'o3','no2','so2','pm10',
'pm25')
ORDER BY Country, Date;
After filtering for only the air pollutants, we have about 1.9 million rows. From here we split the data into 2019 and 2020.

### 2. Monthly air pollutant data from 2019

%%bigquery
SELECT extract(month from date) Month, Parameter as Pollutant,Round(avg(median),2) as Avg_Pollutant_Level_2019
FROM `ba775-team2-b2.AQICN.air_quality_data`
WHERE extract(year from date) = 2019 AND parameter IN ('co')
GROUP BY Month, Parameter
UNION ALL
SELECT extract(month from date) Month, Parameter ,Round(avg(median),2)
FROM `ba775-team2-b2.AQICN.air_quality_data`
WHERE extract(year from date) = 2019 AND parameter IN ('o3')
GROUP BY Month, Parameter
UNION ALL
SELECT extract(month from date) Month, Parameter ,Round(avg(median),2)
FROM `ba775-team2-b2.AQICN.air_quality_data`
WHERE extract(year from date) = 2019 AND parameter IN ('no2')
GROUP BY Month, Parameter
UNION ALL
SELECT extract(month from date) Month, Parameter ,Round(avg(median),2)
FROM `ba775-team2-b2.AQICN.air_quality_data`
WHERE extract(year from date) = 2019 AND parameter IN ('so2')
GROUP BY Month, Parameter
UNION ALL
SELECT extract(month from date) Month, Parameter ,Round(avg(median),2)
FROM `ba775-team2-b2.AQICN.air_quality_data`
WHERE extract(year from date) = 2019 AND parameter IN ('pm10')
GROUP BY Month, Parameter
UNION ALL
SELECT extract(month from date) Month, Parameter ,Round(avg(median),2)
FROM `ba775-team2-b2.AQICN.air_quality_data`
WHERE extract(year from date) = 2019 AND parameter IN ('pm25')
GROUP BY Month, Parameter
ORDER BY Month;
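The monthly averaging done by the unioned query can be sketched in plain Python, assuming `(month, parameter, value)` records:

```python
from collections import defaultdict

def monthly_avg(rows):
    """Average pollutant level per (month, parameter), like the SQL above."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for month, parameter, value in rows:
        sums[(month, parameter)] += value
        counts[(month, parameter)] += 1
    # Round to 2 decimals, matching ROUND(AVG(median), 2) in the query.
    return {k: round(sums[k] / counts[k], 2) for k in sums}

rows = [(1, "co", 4.0), (1, "co", 6.0), (1, "o3", 20.0)]
print(monthly_avg(rows))  # {(1, 'co'): 5.0, (1, 'o3'): 20.0}
```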
This query computes the global monthly average level of each air pollutant. We repeat it for the 2020 data.

### 3. Monthly air pollutant data from 2020

%%bigquery
SELECT extract(month from date) Month, Parameter as Pollutant,Round(avg(median),2) as Avg_Pollutant_Level_2020
FROM `ba775-team2-b2.AQICN.air_quality_data`
WHERE extract(year from date) = 2020 AND parameter IN ('co')
GROUP BY Month, Parameter
UNION ALL
SELECT extract(month from date) Month, Parameter ,Round(avg(median),2)
FROM `ba775-team2-b2.AQICN.air_quality_data`
WHERE extract(year from date) = 2020 AND parameter IN ('o3')
GROUP BY Month, Parameter
UNION ALL
SELECT extract(month from date) Month, Parameter ,Round(avg(median),2)
FROM `ba775-team2-b2.AQICN.air_quality_data`
WHERE extract(year from date) = 2020 AND parameter IN ('no2')
GROUP BY Month, Parameter
UNION ALL
SELECT extract(month from date) Month, Parameter ,Round(avg(median),2)
FROM `ba775-team2-b2.AQICN.air_quality_data`
WHERE extract(year from date) = 2020 AND parameter IN ('so2')
GROUP BY Month, Parameter
UNION ALL
SELECT extract(month from date) Month, Parameter ,Round(avg(median),2)
FROM `ba775-team2-b2.AQICN.air_quality_data`
WHERE extract(year from date) = 2020 AND parameter IN ('pm10')
GROUP BY Month, Parameter
UNION ALL
SELECT extract(month from date) Month, Parameter ,Round(avg(median),2)
FROM `ba775-team2-b2.AQICN.air_quality_data`
WHERE extract(year from date) = 2020 AND parameter IN ('pm25')
GROUP BY Month, Parameter
ORDER BY Month;
Comparing the two years, there isn't a noticeable difference in global pollutant levels from 2019 to 2020, which leads to the hypothesis that pollutant-level changes are regional rather than global. It may also mean that whatever effects COVID-19 cases and lockdowns had were short-term enough that monthly global averages do not capture them. We can narrow the data down further by analyzing the periods when lockdowns were in effect in different countries, regions, and even cities.

---

## IV. What cities had the highest changes in pollutant air quality index during COVID-19?

In this question, we look for the cities with the most air quality improvement during COVID, and the cities with the longest stretches of substantial AQI reduction.

### 1. 10 cities with the most air quality index reduction for each pollutant

We create tables of the monthly average air quality index (AQI) for all pollutants at city level, using 2019 data as a baseline and computing AQI differences and percent differences. Negative difference values indicate an AQI decrease, corresponding to an air quality improvement; positive values indicate an AQI increase, corresponding to air quality deterioration.

%%bigquery
CREATE OR REPLACE TABLE AQICN.pollutant_diff_daily_aqi_less_than_500
AS
(
SELECT A.Date AS Date_2020,B.Date AS Date_2019,A.Country,A.City,A.lat,A.lon,A.Parameter,A.pop,A.median AS aqi_2020,B.median AS aqi_2019,(A.median-B.median) AS aqi_diff, ROUND((A.median-B.median)/B.median*100,2) AS aqi_percent_diff
FROM
(SELECT * FROM `ba775-team2-b2.AQICN.air_quality_data`
WHERE Parameter in ('pm25','pm10','o3','no2','co','so2') AND EXTRACT(Year FROM Date) = 2020 AND median > 0 AND median < 500) AS A
INNER JOIN
(SELECT * FROM `ba775-team2-b2.AQICN.air_quality_data`
WHERE Parameter in ('pm25','pm10','o3','no2','co','so2') AND EXTRACT(Year FROM Date) = 2019 AND median > 0 AND median < 500) AS B
ON A.City = B.City
WHERE EXTRACT(MONTH FROM A.Date) = EXTRACT(MONTH FROM B.Date) AND EXTRACT(DAY FROM A.Date) = EXTRACT(DAY FROM B.Date) AND A.Parameter = B.Parameter
ORDER BY City,Date_2020
)
%%bigquery
CREATE OR REPLACE TABLE AQICN.pollutant_diff_monthly_aqi
AS
SELECT EXTRACT(month FROM Date_2020) AS month_2020,EXTRACT(month FROM Date_2019) AS month_2019,
Country,City,lat,lon,Parameter,ROUND(AVG(aqi_2020),1) AS monthly_avg_aqi_2020,
ROUND(AVG(aqi_2019),1) AS monthly_avg_aqi_2019,(ROUND(AVG(aqi_2020),1)-ROUND(AVG(aqi_2019),1)) AS aqi_diff_monthly,
ROUND((AVG(aqi_2020)-AVG(aqi_2019))/AVG(aqi_2019)*100,2) AS aqi_percent_diff_monthly
FROM AQICN.pollutant_diff_daily_aqi_less_than_500
GROUP BY month_2020,month_2019,Country,City,lat,lon,Parameter
%%bigquery
SELECT *
FROM AQICN.pollutant_diff_monthly_aqi
ORDER BY Parameter,month_2020,Country
LIMIT 10
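The difference and percent-difference computation used in the table above can be sketched in plain Python, given a matched pair of 2020 and baseline 2019 AQI values:

```python
def aqi_change(aqi_2020, aqi_2019):
    """Return (absolute diff, percent diff) of 2020 AQI vs. the 2019 baseline.

    Negative values mean the air quality index dropped, i.e. air quality improved.
    """
    diff = aqi_2020 - aqi_2019
    percent_diff = round(diff / aqi_2019 * 100, 2)  # ROUND(..., 2) as in the SQL
    return diff, percent_diff

print(aqi_change(25, 50))  # (-25, -50.0): AQI halved, a 50% improvement
```

This mirrors the `aqi_diff` and `aqi_percent_diff` columns; the daily table's `median > 0` filter also guards the division by the 2019 value here.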
Order by monthly average percent AQI difference to find the top 10 cities with the most air quality index reduction for each pollutant:

%%bigquery
CREATE OR REPLACE TABLE AQICN.top_10_cites_most_pollutant_percent_diff_monthly
AS
(SELECT *
FROM AQICN.pollutant_diff_monthly_aqi
WHERE Parameter = 'co'
ORDER BY aqi_percent_diff_monthly
LIMIT 10)
UNION ALL
(SELECT *
FROM AQICN.pollutant_diff_monthly_aqi
WHERE Parameter = 'o3'
ORDER BY aqi_percent_diff_monthly
LIMIT 10)
UNION ALL
(SELECT *
FROM AQICN.pollutant_diff_monthly_aqi
WHERE Parameter = 'no2'
ORDER BY aqi_percent_diff_monthly
LIMIT 10)
UNION ALL
(SELECT *
FROM AQICN.pollutant_diff_monthly_aqi
WHERE Parameter = 'pm25'
ORDER BY aqi_percent_diff_monthly
LIMIT 10)
UNION ALL
(SELECT *
FROM AQICN.pollutant_diff_monthly_aqi
WHERE Parameter = 'pm10'
ORDER BY aqi_percent_diff_monthly
LIMIT 10)
UNION ALL
(SELECT *
FROM AQICN.pollutant_diff_monthly_aqi
WHERE Parameter = 'so2'
ORDER BY aqi_percent_diff_monthly
LIMIT 10)
%%bigquery
SELECT *
FROM AQICN.top_10_cites_most_pollutant_percent_diff_monthly
ORDER BY Parameter,aqi_percent_diff_monthly
LIMIT 10
Order by monthly average AQI difference to find the top 10 cities with the most air quality index reduction for each pollutant:

%%bigquery
CREATE OR REPLACE TABLE AQICN.top_10_cites_most_pollutant_diff_monthly
AS
(SELECT *
FROM AQICN.pollutant_diff_monthly_aqi
WHERE Parameter = 'pm25'
ORDER BY aqi_diff_monthly
LIMIT 10)
UNION ALL
(SELECT *
FROM AQICN.pollutant_diff_monthly_aqi
WHERE Parameter = 'o3'
ORDER BY aqi_diff_monthly
LIMIT 10)
UNION ALL
(SELECT *
FROM AQICN.pollutant_diff_monthly_aqi
WHERE Parameter = 'pm10'
ORDER BY aqi_diff_monthly
LIMIT 10)
UNION ALL
(SELECT *
FROM AQICN.pollutant_diff_monthly_aqi
WHERE Parameter = 'no2'
ORDER BY aqi_diff_monthly
LIMIT 10)
UNION ALL
(SELECT *
FROM AQICN.pollutant_diff_monthly_aqi
WHERE Parameter = 'so2'
ORDER BY aqi_diff_monthly
LIMIT 10)
UNION ALL
(SELECT *
FROM AQICN.pollutant_diff_monthly_aqi
WHERE Parameter = 'co'
ORDER BY aqi_diff_monthly
LIMIT 10)
%%bigquery
SELECT *
FROM AQICN.top_10_cites_most_pollutant_diff_monthly
ORDER BY Parameter,aqi_diff_monthly
LIMIT 10
### 2. Cities with more than a 50 percent and 50-point AQI decrease for each air pollutant

The higher the AQI, the unhealthier the air, especially for sensitive groups such as people with heart and lung disease, the elderly, and children. A major reduction, or percent reduction, in AQI sustained over a long period implies a strong air quality impact from the COVID pandemic.

%%bigquery
SELECT City,Country,Parameter,COUNT(*) AS num_month_mt_50_per_decrease FROM AQICN.pollutant_diff_monthly_aqi
WHERE aqi_percent_diff_monthly < -50 AND aqi_diff_monthly < -50
GROUP BY City,Country,Parameter
ORDER BY Parameter,COUNT(*) DESC
LIMIT 10
---

### Results

During the pandemic, the cities with the most air quality improvement in terms of percent AQI difference for each pollutant were:

- CO: Portland (United States), Talca (Chile) and Aguascalientes (Mexico)
- NO2: Qom (Iran), Middelburg (South Africa) and Butuan (Philippines)
- SO2: Athens (Greece), Mérida (Mexico) and San Luis Potosí (Mexico)
- Ozone: Aguascalientes (Mexico), Queens (United States) and The Bronx (United States)
- PM10: Gandhinagar (India), Hohhot (China) and Tel Aviv (Israel)
- PM2.5: Mérida (Mexico), Dushanbe (Tajikistan), Sarajevo (Bosnia and Herzegovina), Erzurum (Turkey), Qiqihar (China) and Gandhinagar (India)

Cities sustaining at least a 50% and 50-point AQI reduction for the longest time:

- CO: Portland (United States), 3 of 12 months
- NO2: Qom (Iran), 5 of 12 months
- O3: Aguascalientes (Mexico), 5 of 12 months
- PM2.5: several cities, including Kermanshah (Iran), Singapore, and Sydney and Canberra (Australia), 1 of 12 months
- PM10: Gandhinagar and Bhopal (India), 2 of 12 months
- SO2: Mérida (Mexico), 5 of 12 months

---

## V. Regression analysis on COVID-19 cases and pollutant Air Quality Index globally

The purpose of this part is to find the differences in AQI between 2019 and 2020, along with the percentage changes, for four parameters (CO, NO2, PM2.5 and O3), then join with the COVID confirmed-case table to run a regression between the AQI change and the new confirmed cases for each air pollutant.

%%bigquery
select A.month,A.month_n, A.country,A.parameter,round((B.avg_median_month- A.avg_median_month),2) as diff_avg,
(B.avg_median_month - A.avg_median_month)/A.avg_median_month as diff_perc
from
(SELECT FORMAT_DATETIME("%B", date) month,EXTRACT(year FROM date) year, EXTRACT(month FROM date) month_n, country,parameter,round(avg(median),2) as avg_median_month
FROM `AQICN.Arpit_Cleaned_Data2`
WHERE Parameter IN ('co','no2','o3','pm25') AND EXTRACT(year FROM date) = 2019
GROUP by 1,2,3,4,5
ORDER BY country, parameter) A
left join
(SELECT FORMAT_DATETIME("%B", date) month,EXTRACT(year FROM date) year, EXTRACT(month FROM date) month_n, country,parameter,round(avg(median),2) as avg_median_month
FROM `AQICN.Arpit_Cleaned_Data2`
WHERE Parameter IN ('co','no2','o3','pm25') AND EXTRACT(year FROM date) = 2020
GROUP by 1,2,3,4,5
ORDER BY country, parameter) B
using (month,country,parameter,month_n)
where A.avg_median_month >0
%%bigquery
select A.*,confirmed,B.country as country_name
from `all_para_20_19.all_para_20_19_diff` as A
inner join `covid_population.covid _pop` as B
on A.country = B.country_code2 and A.month = B.month and A.month_n = B.month_n
where B.year = 2020
order by A.country,A.month_n
Using BigQuery ML to fit a linear regression between `diff_avg` and confirmed cases for each parameter. The example below uses parameter = 'co', with x = confirmed and y = diff_avg (the AQI change).

%%bigquery
CREATE OR REPLACE MODEL `all_para_20_19.all_para_20_19_diff_covid_model`
# Specify options
OPTIONS
(model_type='linear_reg',
input_label_cols=['diff_avg']) AS
# Provide training data
SELECT
confirmed,
diff_avg
FROM
`all_para_20_19.all_para_20_19_diff_covid`
WHERE
parameter = 'co'
and diff_avg is not null
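Outside BigQuery ML, the same single-feature least-squares fit and its R² can be sketched in plain Python (the toy data here is illustrative, not drawn from the project tables):

```python
def linreg_r2(xs, ys):
    """Ordinary least squares fit y = a*x + b, returning (a, b, r2)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = mean_y - a * mean_x
    ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    return a, b, 1 - ss_res / ss_tot

# Perfectly linear toy data recovers slope 2, intercept 1, r2 = 1.
a, b, r2 = linreg_r2([1, 2, 3, 4], [3, 5, 7, 9])
print(round(a, 6), round(b, 6), round(r2, 6))
```

An R² near 0, as found below for the pollutant models, means the fitted line explains almost none of the variance in the AQI changes.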
Evaluating the model to find the R² score for each linear regression of monthly average AQI change against monthly new confirmed cases. The example below evaluates the country-level monthly average CO AQI change vs. monthly new confirmed COVID cases model:

%%bigquery
SELECT * FROM
ML.EVALUATE(
MODEL `all_para_20_19.all_para_20_19_diff_covid_model`, # Model name
# Table to evaluate against
(SELECT
confirmed,
diff_avg
FROM
`all_para_20_19.all_para_20_19_diff_covid`
WHERE
parameter = 'co'
and diff_avg is not null
)
)
We repeated the evaluation for the country-level monthly average PM2.5, NO2 and O3 AQI-change models. We also conducted a log transformation of the x-variable for the linear regression; the most correlated pair was the PM2.5 AQI change vs. LOG(confirmed cases). A visualization is shown below.

We can see overall AQI changes from 2019 to 2020. However, after running the regression for the four air pollutants, the model R² values are all below 0.02, indicating a weak linear relationship between air quality index changes and the number of new confirmed COVID cases. This result makes sense: complicated physical and chemical processes are involved in the formation and transport of air pollution, so factors such as weather, energy sources, and terrain also drive AQI changes. Moreover, a dramatic increase in new COVID cases may not reduce people's outdoor activity, especially once stay-at-home orders are partially lifted. Given this, we decided to study specific countries during their lockdown periods and examine the AQI changes there.

---

## VI. When were lockdowns implemented for each country?

Lockdown dates per country:

- China: Jan 23 - April 8, 2020 (Wuhan 76-day lockdown)
- USA: March 19 - April 7, 2020
- Italy: March 9 - May 18, 2020
- Taiwan: no lockdowns in 2020 (lockdown started in July 2021)
- Australia: March 18 - May/June 2020
- New Zealand: March 25 - May/June 2020

From the previous regression model we saw very little correlation between AQI and confirmed cases; one of the main reasons is that confirmed cases do not accurately capture human activity.
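The lockdown windows listed above can be captured in a small lookup used to filter dates per country. As a sketch: the open-ended Australia/New Zealand windows are generalized here to end-of-June, in line with the March-June window used in the queries.

```python
from datetime import date

# 2020 lockdown windows; AU/NZ end dates generalized to end of June.
LOCKDOWNS_2020 = {
    "CN": (date(2020, 1, 23), date(2020, 4, 8)),   # Wuhan 76-day lockdown
    "US": (date(2020, 3, 19), date(2020, 4, 7)),
    "IT": (date(2020, 3, 9), date(2020, 5, 18)),
    "AU": (date(2020, 3, 18), date(2020, 6, 30)),
    "NZ": (date(2020, 3, 25), date(2020, 6, 30)),
}

def in_lockdown(country, d):
    """True if date d falls inside the country's 2020 lockdown window."""
    window = LOCKDOWNS_2020.get(country)
    return window is not None and window[0] <= d <= window[1]

print(in_lockdown("CN", date(2020, 2, 15)))  # True
print(in_lockdown("TW", date(2020, 2, 15)))  # False: no 2020 lockdown in Taiwan
```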
To compensate, we narrowed the dates of our pollutant data to compare pollutant levels only during the 2019 and 2020 lockdown periods, for the countries where COVID-19 was most prevalent (China, USA, Italy) and those where it was not (Taiwan, Australia, New Zealand). Most lockdown periods ran from mid-March to April, May, or June, except for China's, which ran from late January until April 2020. To generalize the lockdown dates for countries other than China, the SQL query covers the beginning of March through the end of June. For China, the query covers January 23 to April 8, 2020, the Wuhan 76-day lockdown.

%%bigquery
SELECT country, date, parameter, AVG(median) AS air_quality
FROM `ba775-team2-b2.AQICN.air_quality_data`
WHERE date BETWEEN '2020-03-01' AND '2020-06-30'
AND country in ('US','IT','AU','NZ','TW')
GROUP BY country, parameter, date
ORDER BY date
%%bigquery
SELECT country, date, parameter, AVG(median) AS air_quality
FROM `ba775-team2-b2.AQICN.air_quality_data`
WHERE date BETWEEN '2020-01-23' AND '2020-04-08'
AND country = 'CN'
GROUP BY country, parameter, date
ORDER BY date
---

## VII. How did air quality change in countries with low COVID-19 cases (NZ, AUS, TW) and high COVID-19 cases (US, IT, CN)?

We answered this by creating separate tables covering the equivalent 2019 lockdown periods per country, then joining the two years on parameter and grouping by country and parameter to compute the percentage change in average pollution from 2019 to 2020 during the respective lockdown periods.

### 1. Countries with high COVID cases

%%bigquery
CREATE OR REPLACE TABLE AQICN.air_quality2019_Italy AS
SELECT country, parameter, median AS air_quality2019 FROM `ba775-team2-b2.AQICN.air_quality_data`
WHERE date BETWEEN '2019-03-09' AND '2019-05-18'
AND country = 'IT'
%%bigquery
SELECT a2020.country, a2020.parameter, AVG(a2020.median) AS air_quality2020, AVG(air_quality2019) AS air_quality2019,
(AVG(a2020.median)-AVG(air_quality2019))/AVG(air_quality2019) AS percentage_change
FROM `ba775-team2-b2.AQICN.air_quality_data` AS a2020
LEFT JOIN AQICN.air_quality2019_Italy AS a2019
USING(parameter)
WHERE a2020.date BETWEEN '2020-03-09' AND '2020-05-18'
AND a2020.country = 'IT'
AND Parameter in ('pm25','pm10','o3','no2','co','so2')
GROUP BY a2020.country, a2020.parameter
ORDER BY percentage_change
Here we can see that the only pollutant that decreased during the 2020 lockdown in Italy, compared to the same period in 2019, was NO2, which fell by 35.74%.

%%bigquery
CREATE OR REPLACE TABLE AQICN.air_quality2019_US AS
SELECT country, parameter, median AS air_quality2019 FROM `ba775-team2-b2.AQICN.air_quality_data`
WHERE date BETWEEN '2019-03-19' AND '2019-04-07'
AND country = 'US'
%%bigquery
SELECT a2020.country, a2020.parameter, AVG(a2020.median) AS air_quality2020, AVG(air_quality2019) AS air_quality2019,
(AVG(a2020.median)-AVG(air_quality2019))/AVG(air_quality2019) AS percentage_change
FROM `ba775-team2-b2.AQICN.air_quality_data` AS a2020
LEFT JOIN AQICN.air_quality2019_US AS a2019
USING(parameter)
WHERE a2020.date BETWEEN '2020-03-19' AND '2020-04-07'
AND a2020.country = 'US'
AND Parameter in ('pm25','pm10','o3','no2','co','so2')
GROUP BY a2020.country, a2020.parameter
ORDER BY percentage_change
In the United States, all pollutants decreased in 2020 compared to 2019. The largest changes were in O3, NO2 and SO2, which decreased by 36.69%, 30.22%, and 27.10% respectively. This suggests that the lockdowns during the COVID-19 pandemic may have reduced pollutant emissions in the United States.

%%bigquery
CREATE OR REPLACE TABLE AQICN.air_quality2019_China AS
SELECT country, parameter, median AS air_quality2019 FROM `ba775-team2-b2.AQICN.air_quality_data`
WHERE date BETWEEN '2019-01-23' AND '2019-04-08'
AND country = 'CN'
%%bigquery
SELECT a2020.country, a2020.parameter, AVG(a2020.median) AS air_quality2020, AVG(air_quality2019) AS air_quality2019,
(AVG(a2020.median)-AVG(air_quality2019))/AVG(air_quality2019) AS percentage_change
FROM `ba775-team2-b2.AQICN.air_quality_data` AS a2020
LEFT JOIN AQICN.air_quality2019_China AS a2019
USING(parameter)
WHERE a2020.date BETWEEN '2020-01-23' AND '2020-04-08'
AND a2020.country = 'CN'
AND Parameter in ('pm25','pm10','o3','no2','co','so2')
GROUP BY a2020.country, a2020.parameter
ORDER BY percentage_change
In China, most pollutants decreased in 2020 compared to the same period in 2019. The largest change was in NO2, which decreased by 30.88% year over year.

### 2. Countries with low COVID cases

%%bigquery
CREATE OR REPLACE TABLE AQICN.air_quality2019_Taiwan AS
SELECT country, parameter, median AS air_quality2019 FROM `ba775-team2-b2.AQICN.air_quality_data`
WHERE EXTRACT(month FROM date) = 07
AND EXTRACT(year FROM date) = 2019
AND country = 'TW'
%%bigquery
SELECT a2020.country, a2020.parameter, AVG(a2020.median) AS air_quality2020, AVG(air_quality2019) AS air_quality2019,
(AVG(a2020.median)-AVG(air_quality2019))/AVG(air_quality2019) AS percentage_change
FROM `ba775-team2-b2.AQICN.air_quality_data` AS a2020
LEFT JOIN AQICN.air_quality2019_Taiwan AS a2019
USING(parameter)
WHERE EXTRACT(month FROM a2020.date) = 07
AND EXTRACT(year FROM a2020.date) = 2020
AND a2020.country = 'TW'
AND Parameter in ('pm25','pm10','o3','no2','co','so2')
GROUP BY a2020.country, a2020.parameter
ORDER BY percentage_change | Query complete after 0.00s: 100%|██████████| 4/4 [00:00<00:00, 1830.37query/s]
Downloading: 100%|██████████| 6/6 [00:01<00:00, 4.02rows/s]
| MIT | docs/team-projects/Summer-2021/B2-Team2-Analyzing-Air-Quality-During-COVID19-Pandemic.ipynb | JQmiracle/Business-Analytics-Toolbox |
Taiwan, which did not experience lockdowns due to COVID-19, also shows a decrease in all pollutant levels. This contradicts our initial hypothesis that countries that experienced more COVID-19, and therefore more lockdowns, would have better air quality. | %%bigquery
CREATE OR REPLACE TABLE AQICN.air_quality2019_NZ AS
SELECT country, parameter, median AS air_quality2019 FROM `ba775-team2-b2.AQICN.air_quality_data`
WHERE date BETWEEN '2019-03-25' AND '2019-05-31'
AND country = 'NZ'
%%bigquery
SELECT a2020.country, a2020.parameter, AVG(a2020.median) AS air_quality2020, AVG(air_quality2019) AS air_quality2019,
(AVG(a2020.median)-AVG(air_quality2019))/AVG(air_quality2019) AS percentage_change
FROM `ba775-team2-b2.AQICN.air_quality_data` AS a2020
LEFT JOIN AQICN.air_quality2019_NZ AS a2019
USING(parameter)
WHERE a2020.date BETWEEN '2020-03-25' AND '2020-05-31'
AND Parameter in ('pm25','pm10','o3','no2','co','so2')
AND a2020.country = 'NZ'
GROUP BY a2020.country, a2020.parameter
ORDER BY percentage_change | Query complete after 0.00s: 100%|██████████| 4/4 [00:00<00:00, 2199.14query/s]
Downloading: 100%|██████████| 5/5 [00:01<00:00, 3.27rows/s]
| MIT | docs/team-projects/Summer-2021/B2-Team2-Analyzing-Air-Quality-During-COVID19-Pandemic.ipynb | JQmiracle/Business-Analytics-Toolbox |
New Zealand also shows a decrease in all pollutant levels. Nevertheless, New Zealand did go into lockdown for a period, and these numbers may reflect the reduced activity due to COVID-19 during that time compared to the equivalent period in 2019. | %%bigquery
CREATE OR REPLACE TABLE AQICN.air_quality2019_AUS AS
SELECT country, parameter, median AS air_quality2019 FROM `ba775-team2-b2.AQICN.air_quality_data`
WHERE date BETWEEN '2019-03-18' AND '2019-05-31'
AND country = 'AU'
%%bigquery
SELECT a2020.country, a2020.parameter, AVG(a2020.median) AS air_quality2020, AVG(air_quality2019) AS air_quality2019,
(AVG(a2020.median)-AVG(air_quality2019))/AVG(air_quality2019) AS percentage_change
FROM `ba775-team2-b2.AQICN.air_quality_data` AS a2020
LEFT JOIN AQICN.air_quality2019_AUS AS a2019
USING(parameter)
WHERE a2020.date BETWEEN '2020-03-18' AND '2020-05-31'
AND Parameter in ('pm25','pm10','o3','no2','co','so2')
AND a2020.country = 'AU'
GROUP BY a2020.country, a2020.parameter
ORDER BY percentage_change | Query complete after 0.00s: 100%|██████████| 4/4 [00:00<00:00, 2131.79query/s]
Downloading: 100%|██████████| 6/6 [00:01<00:00, 4.52rows/s]
| MIT | docs/team-projects/Summer-2021/B2-Team2-Analyzing-Air-Quality-During-COVID19-Pandemic.ipynb | JQmiracle/Business-Analytics-Toolbox |
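The year-over-year percentage change computed by the queries above can also be sketched in pandas. This is a minimal sketch on made-up data; the column names mirror the query output but are not the actual dataset schema:

```python
import pandas as pd

# Toy data mimicking the query result: median AQI per pollutant for two periods
# (all values are made up for illustration)
df = pd.DataFrame({
    "parameter": ["no2", "no2", "pm25", "pm25"],
    "year": [2019, 2020, 2019, 2020],
    "median": [30.0, 21.0, 50.0, 45.0],
})

avg = df.groupby(["parameter", "year"])["median"].mean().unstack("year")
avg["percentage_change"] = (avg[2020] - avg[2019]) / avg[2019]
print(avg.sort_values("percentage_change"))
```

Sorting by `percentage_change` ascending surfaces the pollutants with the largest decreases first, matching the `ORDER BY percentage_change` in the SQL.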
Basics- All values of a categorical variable are either in `categories` or `np.nan`.- Order is defined by the order of `categories`, not the lexical order of the values.- Internally, the data structure consists of a `categories` array and an integer array of `codes`, which point to the values in the `categories` array.- The memory usage of a categorical variable is proportional to the number of categories plus the length of the data, while that for an object dtype is a constant times the length of the data. As the number of categories approaches the length of the data, memory usage approaches that of object type.- Categories can be useful in the following scenarios: - To save memory (if number of categories small relative to number of rows) - If logical order differs from lexical order (e.g. 'small', 'medium', 'large') - To signal to libraries that column should be treated as a category (e.g. for plotting) General best practicesBased on [this](https://towardsdatascience.com/staying-sane-while-adopting-pandas-categorical-datatypes-78dbd19dcd8a) useful article.- Operate on category values rather than column elements. E.g. to rename categories use `df.catvar.cat.rename_categories(*args, **kwargs)`, if there is no `cat` method available, consider operating on categories directly with `df.catvar.cat.categories`.- Merging on categories: the two key things to remember are that 1) Pandas treats categorical variables with different categories as different data types, and 2) category merge keys will only be categories in the merged dataframe if they are of the same data types (i.e. have the same categories), otherwise they will be converted back to objects.- Grouping on categories: remember that by default we group on all categories, not just those present in the data. More often than not, you'll want to use `df.groupby(catvar, observed=True)` to only use categories observed in the data. | titanic = sns.load_dataset("titanic")
titanic.head(2) | _____no_output_____ | MIT | content/post/pandas-categories/pandas-categories.ipynb | fabiangunzinger/wowchemy |
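The memory claim in the Basics notes above can be checked directly. A quick sketch comparing an object column with its category equivalent (the series is synthetic):

```python
import pandas as pd

# A low-cardinality string column: category storage should be much smaller
s_obj = pd.Series(["First", "Second", "Third"] * 10_000, dtype="object")
s_cat = s_obj.astype("category")

obj_bytes = s_obj.memory_usage(deep=True)
cat_bytes = s_cat.memory_usage(deep=True)
print(f"object: {obj_bytes:,} bytes vs category: {cat_bytes:,} bytes")
```

With only three distinct values, the categorical stores 30,000 small integer codes plus a 3-element categories array, so it uses a fraction of the object column's memory.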
Operations I frequently use Renaming categories | titanic["class"].cat.rename_categories(str.upper)[:2] | _____no_output_____ | MIT | content/post/pandas-categories/pandas-categories.ipynb | fabiangunzinger/wowchemy |
Appending new categories | titanic["class"].cat.add_categories(["Fourth"]).cat.categories | _____no_output_____ | MIT | content/post/pandas-categories/pandas-categories.ipynb | fabiangunzinger/wowchemy |
Removing categories | titanic["class"].cat.remove_categories(["Third"]).cat.categories | _____no_output_____ | MIT | content/post/pandas-categories/pandas-categories.ipynb | fabiangunzinger/wowchemy |
Remove unused categories | titanic_small = titanic.iloc[:2]
titanic_small
titanic_small["class"].cat.remove_unused_categories().cat.categories | _____no_output_____ | MIT | content/post/pandas-categories/pandas-categories.ipynb | fabiangunzinger/wowchemy |
Remove and add categories simultaneously | titanic["class"].value_counts(dropna=False)
titanic["class"].cat.set_categories(["First", "Third", "Fourth"]).value_counts(
dropna=False
) | _____no_output_____ | MIT | content/post/pandas-categories/pandas-categories.ipynb | fabiangunzinger/wowchemy |
Using string and datetime accessorsThis works as expected, and if the number of distinct categories is small relative to the number of rows, then operating on the categories is faster (because under the hood, pandas applies the change to `categories` and constructs a new series (see [here](https://pandas.pydata.org/pandas-docs/stable/user_guide/categorical.htmlstring-and-datetime-accessors)) so no need to do this manually as I was inclined to). | cat_class = titanic["class"]
%timeit cat_class.str.contains('d')
str_class = titanic["class"].astype("object")
%timeit str_class.str.contains('d') | 149 µs ± 7.84 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
398 µs ± 16.3 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
| MIT | content/post/pandas-categories/pandas-categories.ipynb | fabiangunzinger/wowchemy |
Object creation Convert *sex* and *class* to the same categorical type, with categories being the union of all unique values of both columns. | cols = ["sex", "who"]
unique_values = np.unique(titanic[cols].to_numpy().ravel())
categories = pd.CategoricalDtype(categories=unique_values)
titanic[cols] = titanic[cols].astype(categories)
print(titanic.sex.cat.categories)
print(titanic.who.cat.categories)
# restore sex and who to object types
titanic[cols] = titanic[cols].astype("object") | _____no_output_____ | MIT | content/post/pandas-categories/pandas-categories.ipynb | fabiangunzinger/wowchemy |
Custom order | df = pd.DataFrame({"quality": ["good", "excellent", "very good"]})
df.sort_values("quality")
ordered_quality = pd.CategoricalDtype(["good", "very good", "excellent"], ordered=True)
df.quality = df.quality.astype(ordered_quality)
df.sort_values("quality") | _____no_output_____ | MIT | content/post/pandas-categories/pandas-categories.ipynb | fabiangunzinger/wowchemy |
Unique values | small_titanic = titanic.iloc[:2]
small_titanic | _____no_output_____ | MIT | content/post/pandas-categories/pandas-categories.ipynb | fabiangunzinger/wowchemy |
`Series.unique` returns values in order appearance, and only returns values that are present in the data. | small_titanic["class"].unique() | _____no_output_____ | MIT | content/post/pandas-categories/pandas-categories.ipynb | fabiangunzinger/wowchemy |
`Series.cat.categories` returns all category values. | small_titanic["class"].cat.categories | _____no_output_____ | MIT | content/post/pandas-categories/pandas-categories.ipynb | fabiangunzinger/wowchemy |
Machine Learning 1 Some Concepts | Mean Absolute Error (MAE) is the mean of the absolute value of the errors
Mean Squared Error (MSE) is the mean of the squared errors
Root Mean Squared Error (RMSE) is the square root of the mean of the squared errors
Comparing these metrics:
MAE is the easiest to understand because it's the average error.
MSE is more popular than MAE because MSE 'punishes' larger errors, which tends to be useful in the real world.
RMSE is even more popular than MSE because RMSE is interpretable in the 'y' units.
# to get the metrics
from sklearn import metrics
print('MAE:', metrics.mean_absolute_error(y_test, y_pred))
print('MSE:', metrics.mean_squared_error(y_test, y_pred))
print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, y_pred)))
# Different way, just for illustration
y_pred = linreg.predict(X_test)
from sklearn.metrics import mean_squared_error
MSE = mean_squared_error(y_test, y_pred)
print(MSE)
| _____no_output_____ | BSD-2-Clause | Learning Notes/Learning Notes ML - 1 Basics.ipynb | k21k/Python-Notes |
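As a tiny numeric check of the three metrics described above, computed by hand with numpy on made-up arrays (sklearn's functions return the same values):

```python
import numpy as np

y_true = np.array([3.0, 5.0, 2.0])
y_hat = np.array([2.0, 5.0, 4.0])  # errors: -1, 0, +2

mae = np.mean(np.abs(y_true - y_hat))   # (1 + 0 + 2) / 3 = 1.0
mse = np.mean((y_true - y_hat) ** 2)    # (1 + 0 + 4) / 3 ~= 1.667
rmse = np.sqrt(mse)

print(mae, mse, rmse)
```

Note how the error of 2 contributes 4 to MSE but only 2 to MAE; that is the 'punishing larger errors' effect.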
Predictions | # Lets say that the model inputs are
X = df[['Weight', 'Volume']]
y = df['CO2']
regr = linear_model.LinearRegression()
regr.fit(X, y)
# Simply do that for predicting the CO2 emission of a car where the weight is 2300kg, and the volume is 1300ccm:
predictedCO2 = regr.predict([[2300, 1300]])
print(predictedCO2)
| _____no_output_____ | BSD-2-Clause | Learning Notes/Learning Notes ML - 1 Basics.ipynb | k21k/Python-Notes |
OLS Regression | https://docs.w3cub.com/statsmodels/generated/statsmodels.regression.linear_model.ols.fit_regularized/
import statsmodels.api as sm  # needed for sm.OLS below
est = sm.OLS(y, X)
est = est.fit()
est.summary() | _____no_output_____ | BSD-2-Clause | Learning Notes/Learning Notes ML - 1 Basics.ipynb | k21k/Python-Notes |
Plotting Errors | # provided that y_test and y_pred have been called (example below)
# X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
# y_pred = linreg.predict(X_test)
sns.distplot((y_test-y_pred),bins=50) | _____no_output_____ | BSD-2-Clause | Learning Notes/Learning Notes ML - 1 Basics.ipynb | k21k/Python-Notes |
Interpretation of Outputs Multiple Linear Regression | Almost all the real-world problems that you are going to encounter will have more than two variables.
Linear regression involving multiple variables is called 'multiple linear regression' or multivariate linear regression.
The steps to perform multiple linear regression are almost similar to that of simple linear regression.
The difference lies in the evaluation.
You can use it to find out which factor has the highest impact on the predicted output and how different variables relate to each other. | _____no_output_____ | BSD-2-Clause | Learning Notes/Learning Notes ML - 1 Basics.ipynb | k21k/Python-Notes |
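A minimal sketch of the idea above, fitting a multivariate linear fit on synthetic data and inspecting the weights to see which factor has the highest impact (the data, coefficients, and feature count are all made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))  # two illustrative features
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

# Least-squares fit with an intercept column prepended
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("intercept:", coef[0], "weights:", coef[1:])
```

The first feature's recovered weight is near 3.0 and dominates the second's (near 0.5), so it has the highest impact on the predicted output, the same reading you would take from a fitted model's coefficients.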
Tensorflow Timeline Analysis on Model Zoo Benchmark between Intel optimized and stock TensorflowThis jupyter notebook will help you evaluate performance benefits from Intel-optimized Tensorflow on the level of Tensorflow operations via several pre-trained models from Intel Model Zoo. The notebook will show users a bar chart like the picture below for the Tensorflow operation level performance comparison. The red horizontal line represents the performance of Tensorflow operations from Stock Tensorflow, and the blue bars represent the speedup of Intel Tensorflow operations. The operations marked as "mkl-True" are accelerated by MKL-DNN a.k.a oneDNN, and users should be able to see a good speedup for those operations accelerated by MKL-DNN. > NOTE : Users need to get Tensorflow timeline json files from other Jupyter notebooks like benchmark_perf_comparison first to proceed this Jupyter notebook. The notebook will also show users two pie charts like the picture below for elapsed time percentage among different Tensorflow operations. Users can easily find the Tensorflow operation hotspots in these pie charts among Stock and Intel Tensorflow. Get Platform Information | from profiling.profile_utils import PlatformUtils
plat_utils = PlatformUtils()
plat_utils.dump_platform_info() | _____no_output_____ | Apache-2.0 | docs/notebooks/perf_analysis/benchmark_perf_timeline_analysis.ipynb | yinghu5/models |
Section 1: TensorFlow Timeline Analysis Prerequisites | !pip install cxxfilt
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import pandas as pd
pd.set_option('display.max_rows', 500)
pd.set_option('display.max_columns', 500)
pd.set_option('display.width', 1500) | _____no_output_____ | Apache-2.0 | docs/notebooks/perf_analysis/benchmark_perf_timeline_analysis.ipynb | yinghu5/models |
List out the Timeline folders First, list out all Timeline folders from previous runs. | import os
filenames= os.listdir (".")
result = []
keyword = "Timeline"
for filename in filenames:
if os.path.isdir(os.path.join(os.path.abspath("."), filename)):
if filename.find(keyword) != -1:
result.append(filename)
result.sort()
index =0
for folder in result:
print(" %d : %s " %(index, folder))
index+=1 | _____no_output_____ | Apache-2.0 | docs/notebooks/perf_analysis/benchmark_perf_timeline_analysis.ipynb | yinghu5/models |
Select a Timeline folder from previous runs ACTION: Please select one Timeline folder and change FdIndex accordingly | FdIndex = 3 | _____no_output_____ | Apache-2.0 | docs/notebooks/perf_analysis/benchmark_perf_timeline_analysis.ipynb | yinghu5/models |
List out all Timeline json files inside Timeline folder. | import os
TimelineFd = result[FdIndex]
print(TimelineFd)
datafiles = [TimelineFd +os.sep+ x for x in os.listdir(TimelineFd) if '.json' == x[-5:]]
print(datafiles)
if len(datafiles) == 0:
print("ERROR! No json file in the selected folder. Please select another folder.")
elif len(datafiles) == 1:
print("WARNING! There is only 1 json file in the selected folder. Please select another folder to proceed to Section 1.2.") | _____no_output_____ | Apache-2.0 | docs/notebooks/perf_analysis/benchmark_perf_timeline_analysis.ipynb | yinghu5/models |
> **Users can bypass below Section 1.1 and analyze performance among Stock and Intel TF by clicking the link : [Section 1_2](section_1_2).** Section 1.1: Performance Analysis for one TF Timeline result Step 1: Pick one of the Timeline files List out all the Timeline files first | index = 0
for file in datafiles:
print(" %d : %s " %(index, file))
index+=1 | _____no_output_____ | Apache-2.0 | docs/notebooks/perf_analysis/benchmark_perf_timeline_analysis.ipynb | yinghu5/models |
ACTION: Please select one timeline json file and change file_index accordingly | ## USER INPUT
file_index=0
fn = datafiles[file_index]
tfile_prefix = fn.split('_')[0]
tfile_postfix = fn.strip(tfile_prefix)[1:]
fn | _____no_output_____ | Apache-2.0 | docs/notebooks/perf_analysis/benchmark_perf_timeline_analysis.ipynb | yinghu5/models |
Step 2: Parse timeline into pandas format | from profiling.profile_utils import TFTimelinePresenter
tfp = TFTimelinePresenter(True)
timeline_pd = tfp.postprocess_timeline(tfp.read_timeline(fn))
timeline_pd = timeline_pd[timeline_pd['ph'] == 'X'] | _____no_output_____ | Apache-2.0 | docs/notebooks/perf_analysis/benchmark_perf_timeline_analysis.ipynb | yinghu5/models |
Step 3: Sum up the elapsed time of each TF operation | tfp.get_tf_ops_time(timeline_pd,fn,tfile_prefix) | _____no_output_____ | Apache-2.0 | docs/notebooks/perf_analysis/benchmark_perf_timeline_analysis.ipynb | yinghu5/models |
Step 4: Draw a bar chart for elapsed time of TF ops | filename= tfile_prefix +'_tf_op_duration_bar.png'
title_=tfile_prefix +'TF : op duration bar chart'
ax=tfp.summarize_barh(timeline_pd, 'arg_op', title=title_, topk=50, logx=True, figsize=(10,10))
tfp.show(ax,'bar') | _____no_output_____ | Apache-2.0 | docs/notebooks/perf_analysis/benchmark_perf_timeline_analysis.ipynb | yinghu5/models |
Step 5: Draw a pie chart for total time percentage of TF ops | filename= tfile_prefix +'_tf_op_duration_pie.png'
title_=tfile_prefix +'TF : op duration pie chart'
timeline_pd_known = timeline_pd[ ~timeline_pd['arg_op'].str.contains('unknown') ]
ax=tfp.summarize_pie(timeline_pd_known, 'arg_op', title=title_, topk=50, logx=True, figsize=(10,10))
tfp.show(ax,'pie')
ax.figure.savefig(filename,bbox_inches='tight') | _____no_output_____ | Apache-2.0 | docs/notebooks/perf_analysis/benchmark_perf_timeline_analysis.ipynb | yinghu5/models |
Section 1.2: Analyze TF Timeline results between Stock and Intel Tensorflow Speedup from MKL-DNN among different TF operations Step 1: Select one Intel and one Stock TF timeline files for analysis List out all timeline files in the selected folder | if len(datafiles) is 1:
print("ERROR! There is only 1 json file in the selected folder.")
print("Please select other Timeline folder from beginnning to proceed Section 1.2.")
for i in range(len(datafiles)):
print(" %d : %s " %(i, datafiles[i])) | _____no_output_____ | Apache-2.0 | docs/notebooks/perf_analysis/benchmark_perf_timeline_analysis.ipynb | yinghu5/models |
ACTION: Please select one timeline file as a perfomance baseline and the other as a comparison targetput the related index for your selected timeline file.In general, please put stock_timeline_xxxxx as the baseline. | # perfomance baseline
Baseline_Index=1
# comparison target
Comparison_Index=0 | _____no_output_____ | Apache-2.0 | docs/notebooks/perf_analysis/benchmark_perf_timeline_analysis.ipynb | yinghu5/models |
List out two selected timeline files | selected_datafiles = []
selected_datafiles.append(datafiles[Baseline_Index])
selected_datafiles.append(datafiles[Comparison_Index])
print(selected_datafiles) | _____no_output_____ | Apache-2.0 | docs/notebooks/perf_analysis/benchmark_perf_timeline_analysis.ipynb | yinghu5/models |
Step 2: Parsing timeline results into CSV files | %matplotlib agg
from profiling.profile_utils import TFTimelinePresenter
csvfiles=[]
tfp = TFTimelinePresenter(True)
for fn in selected_datafiles:
if fn.find('/') != -1:
fn_nofd=fn.split('/')[1]
else:
fn_nofd=fn
tfile_name= fn_nofd.split('.')[0]
tfile_prefix = fn_nofd.split('_')[0]
tfile_postfix = fn_nofd.strip(tfile_prefix)[1:]
csvpath = TimelineFd +os.sep+tfile_name+'.csv'
print(csvpath)
csvfiles.append(csvpath)
timeline_pd = tfp.postprocess_timeline(tfp.read_timeline(fn))
timeline_pd = timeline_pd[timeline_pd['ph'] == 'X']
tfp.get_tf_ops_time(timeline_pd,fn,tfile_prefix) | _____no_output_____ | Apache-2.0 | docs/notebooks/perf_analysis/benchmark_perf_timeline_analysis.ipynb | yinghu5/models |
Step 3: Pre-processing for the two CSV files | import os
import pandas as pd
csvarray=[]
for csvf in csvfiles:
print("read into pandas :",csvf)
a = pd.read_csv(csvf)
csvarray.append(a)
a = csvarray[0]
b = csvarray[1] | _____no_output_____ | Apache-2.0 | docs/notebooks/perf_analysis/benchmark_perf_timeline_analysis.ipynb | yinghu5/models |
Step 4: Merge two CSV files and caculate the speedup accordingly | import os
import pandas as pd
fdir='merged'
if not os.path.exists(fdir):
os.mkdir(fdir)
fpath=fdir+os.sep+'merged.csv'
merged=tfp.merge_two_csv_files(fpath,a,b)
merged | _____no_output_____ | Apache-2.0 | docs/notebooks/perf_analysis/benchmark_perf_timeline_analysis.ipynb | yinghu5/models |
Step 5: Draw a bar chart for elapsed time of TF ops among stock TF and Intel TF | %matplotlib inline
print(fpath)
tfp.plot_compare_bar_charts(fpath)
tfp.plot_compare_ratio_bar_charts(fpath, tags=['','oneDNN ops']) | _____no_output_____ | Apache-2.0 | docs/notebooks/perf_analysis/benchmark_perf_timeline_analysis.ipynb | yinghu5/models |
Step 6: Draw pie charts for elapsed time of TF ops among stock TF and Intel TF | tfp.plot_compare_pie_charts(fpath) | _____no_output_____ | Apache-2.0 | docs/notebooks/perf_analysis/benchmark_perf_timeline_analysis.ipynb | yinghu5/models |
diagrams.generic.network.Firewall
diagrams.generic.network.Router
diagrams.generic.network.Subnet
diagrams.generic.network.Switch
diagrams.generic.network.VPN
diagrams.generic.virtualization.Virtualbox
diagrams.generic.os.Windows | from diagrams import Cluster, Diagram
from diagrams.generic.network import Firewall
from diagrams.generic.network import Router
from diagrams.generic.network import Subnet
from diagrams.generic.network import Switch
from diagrams.generic.virtualization import Virtualbox
from diagrams.generic.os import Windows
graph_attr = {
"fontsize": "28",
"bgcolor": "grey"
}
with Diagram("My Network Automation with Python Lab", filename="MyLab", outformat="jpg", graph_attr=graph_attr, show=True):
my_computer = Windows("My Computer")
my_home_subnet = Subnet("My Home Subnet")
with Cluster("Virtualbox"):
lab_devs = [Switch("sw01"),
Switch("sw02"),
Switch("sw03 optional")]
my_computer >> my_home_subnet >> lab_devs
ls
rm my_lab.png
ls | MyLab.jpg Untitled.ipynb
| MIT | sec04-1_LabIntro/My_Lab_Diagram.ipynb | codered-by-ec-council/Network-Automation-in-Python |
The data is from a number of patients. The first 12 columns (age, an, ..., time) are features that should be used to predict the outcome in the last column (DEATH_EVENT). | # Loading some functionality you might find useful. You may want more than this...
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import metrics
from sklearn.model_selection import train_test_split, cross_val_score, GridSearchCV
from pandas.plotting import scatter_matrix
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier, plot_tree
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
# Downloading data
url = 'https://raw.githubusercontent.com/BoBernhardsson/frtn65_exam2022/main/data.csv'
data = pd.read_csv(url)
data.head()
# Picking out features and labels
X = data.iloc[:,:-1].values
y = data.iloc[:,-1].values
(X.shape,y.shape)
# select which features to use
X = data.drop(columns=['DEATH_EVENT'])
y = data.loc[:,'DEATH_EVENT'].values
# Creating some initial KNN models and evaluating accuracy (note: computed on the training data, so the scores are optimistic)
Ndata = data.shape[0]
for nr in range(1,10):
knnmodel = KNeighborsClassifier(n_neighbors = nr)
knnmodel.fit(X=X,y=y)
predictions = knnmodel.predict(X=X)
print('neighbors = {0}: accuracy = {1:.3f}'.format(nr,1-sum(abs(predictions-y))/Ndata))
features = ['age', 'cr', 'ej', 'pl', 'se1', 'se2', 'time']
features_categ = ['an','di', 'hi', 'sex', 'sm']
#scale the dataset
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import OneHotEncoder
from sklearn.compose import ColumnTransformer
numerical_preprocessor = StandardScaler()
#do one-hot encoding for categorical features
categorical_preprocessor = OneHotEncoder(handle_unknown="ignore")
preprocessor = ColumnTransformer([
('one-hot-encoder', categorical_preprocessor, features_categ),
('standard-scaler', numerical_preprocessor, features)])
from sklearn.pipeline import Pipeline
#try Random Forest
clf = Pipeline(steps = [('preprocessor', preprocessor), ('classifier', RandomForestClassifier(n_estimators=100,max_depth=3))])
#split the dataset into trainig and testing
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
clf.fit(x_train, np.ravel(y_train))
print(clf.score(x_train,np.ravel(y_train)))
print('model score: %.3f' % clf.score(x_test,np.ravel(y_test)))
#evaluate the model using cross-validation
score = cross_val_score(clf, X, np.ravel(y), cv=25)
print("%0.2f accuracy with a standard deviation of %0.2f" % (score.mean()*100, score.std()))
score_test = clf.score(x_test, np.ravel(y_test))
print('Test score: ', '{0:.4f}'.format(score_test*100))
clf = RandomForestClassifier()
#find the best model using GridSearch
param_grid = {
'n_estimators': [100, 1000],
'max_depth': [3, 4, 5],
}
search = GridSearchCV(clf, param_grid, cv=4, verbose=1,n_jobs=-1)
search.fit(x_train, np.ravel(y_train))
score = search.score(x_test, np.ravel(y_test))
print("Best CV score: {} using {}".format(search.best_score_, search.best_params_))
print("Test accuracy: {}".format(score))
randomForestModel = RandomForestClassifier(n_estimators=100,max_depth=5)
#evaluate using cross-validation
score=cross_val_score(randomForestModel, X, y, cv=20)
randomForestModel.fit(x_train,np.ravel(y_train))
print('Training score: ', randomForestModel.score(x_train,np.ravel(y_train)))
print('Test score: ', randomForestModel.score(x_test,np.ravel(y_test)))
#make a prediction and evaluate the performance
y_pred = randomForestModel.predict(x_test)
score_new = randomForestModel.score(x_test, y_test)
print('Test score: ', score_new)
import seaborn as sns
from sklearn import metrics
#confusion matrix
cm = metrics.confusion_matrix(y_test, y_pred)
plt.figure(figsize=(10,10))
sns.heatmap(cm, annot=True, fmt=".0f", linewidths=1, square = True);
plt.ylabel('Actual label');
plt.xlabel('Predicted label');
plt.title('Accuracy Score: {0}'.format(score.mean()), size = 15);
from sklearn import metrics
#AUC
metrics.plot_roc_curve(randomForestModel, x_test, y_test)
from sklearn.metrics import classification_report
print(classification_report(y_test, randomForestModel.predict(x_test)))
from pandas import DataFrame
feature_df = DataFrame(X.columns)  # use the model's feature columns (data.columns.delete(0) would misalign the names)
feature_df.columns = ['Features']
feature_df["Feature Importance"] = pd.Series(randomForestModel.feature_importances_)
#view feature importance according to Random Forest model
feature_df
#KNN model
clf = Pipeline(steps = [('preprocessor', preprocessor), ('classifier', KNeighborsClassifier(n_neighbors=3))])
clf.fit(x_train,np.ravel(y_train))
#evaluate the model using cross-validation
score = cross_val_score(clf, X, np.ravel(y), cv=25)
print("%0.2f accuracy with a standard deviation of %0.2f" % (score.mean()*100, score.std()))
score_test = clf.score(x_test, np.ravel(y_test))
print('Test score: ', '{0:.4f}'.format(score_test*100))
#make a prediction and evaluate the performance
y_pred = clf.predict(x_test)
score_new = clf.score(x_test, y_test)
print('Test score: ', score_new)
import seaborn as sns
from sklearn import metrics
#confusion matrix
cm = metrics.confusion_matrix(y_test, y_pred)
plt.figure(figsize=(10,10))
sns.heatmap(cm, annot=True, fmt=".0f", linewidths=1, square = True);
plt.ylabel('Actual label');
plt.xlabel('Predicted label');
plt.title('Accuracy Score: {0}'.format(score.mean()), size = 15);
from sklearn import metrics
#AUC
metrics.plot_roc_curve(clf, x_test, y_test)
from sklearn.metrics import classification_report
print(classification_report(y_test, clf.predict(x_test)))
#Boosting model
from sklearn.ensemble import GradientBoostingClassifier
#find the best learning rate
learning_rates = [0.05, 0.1, 0.25, 0.5, 0.75, 1]
for learning_rate in learning_rates:
gb = Pipeline(steps = [('preprocessor', preprocessor), ('Classifier', GradientBoostingClassifier(n_estimators=30, learning_rate = learning_rate, max_features=13, max_depth = 3, random_state = 0))])
gb.fit(x_train, y_train)
print("Learning rate: ", learning_rate)
print("Accuracy score (training): {0:.3f}".format(gb.score(x_train, y_train)))
print("Accuracy score (validation): {0:.3f}".format(gb.score(x_test, y_test)))
clf = Pipeline(steps = [('preprocessor', preprocessor), ('Classifier', GradientBoostingClassifier(n_estimators= 30, learning_rate = 0.25, max_features=13, max_depth = 3, random_state = 0))])
clf.fit(x_train,np.ravel(y_train))
#evaluate the models using cross-validation
from sklearn.model_selection import cross_val_score
scores = cross_val_score(clf, X, np.ravel(y), cv=25)
print("%0.2f accuracy with a standard deviation of %0.2f" % (scores.mean()*100, scores.std()))
score = clf.score(x_test, np.ravel(y_test))
print('Test score (Validation): ', '{0:.4f}'.format(score*100))
#make a prediction and evaluate the performance
y_pred = clf.predict(x_test)
score_test = clf.score(x_test, y_test)
print('Test score: ', score_test )
import seaborn as sns
from sklearn import metrics
#confusion matrix
cm = metrics.confusion_matrix(y_test, y_pred)
plt.figure(figsize=(10,10))
sns.heatmap(cm, annot=True, fmt=".0f", linewidths=1, square = True);
plt.ylabel('Actual label');
plt.xlabel('Predicted label');
plt.title('Accuracy Score: {0}'.format(score.mean()), size = 15);
from sklearn import metrics
#AUC
metrics.plot_roc_curve(clf, X, y)
from sklearn.metrics import classification_report
print(classification_report(y_test, clf.predict(x_test)))
| precision recall f1-score support
0 0.77 0.94 0.85 35
1 0.88 0.60 0.71 25
accuracy 0.80 60
macro avg 0.82 0.77 0.78 60
weighted avg 0.82 0.80 0.79 60
| MIT | SupervisedLearning/Problem2_SupervisedLearning_Gaiceanu.ipynb | TheodoraG/FRTN65 |
Death vs time: The boxplot below illustrates the relationship between death and the time elapsed between when the measurements were taken and the follow-up event, when the patient's health was checked (female=blue, male=orange). It is noted that a short follow-up time is strongly related to a high probability of death, for both sexes. An explanation could be that severely unhealthy patients were followed up earlier, based on medical expert decisions. | fig, ax = plt.subplots(figsize = (8, 8))
survive = data.loc[(data.DEATH_EVENT == 0)].time
death = data.loc[(data.DEATH_EVENT == 1)].time
print('time_survived = {:.1f}'.format(survive.mean()))
print('time_dead = {:.1f}'.format(death.mean()))
sns.boxplot(data = data, x = 'DEATH_EVENT', y = 'time', hue = 'sex', width = 0.4, ax = ax, fliersize = 3, palette=sns.color_palette("pastel"))
sns.stripplot(data = data, x = 'DEATH_EVENT', y = 'time', hue = 'sex', size = 3, palette=sns.color_palette())
ax.set(xlabel = 'DEATH', ylabel = "time [days] ", title = 'The relationship between death and time')
plt.show()
# If we want to drop time as a feature, we can use
Xnew = data.iloc[:,:-2].values
# select which features to use
Xnew = data.drop(columns=['DEATH_EVENT', 'time', 'sm', 'ej','age','cr','pl','se1','an','di','sex'])
y = data.loc[:,'DEATH_EVENT'].values
features = ['se2','hi']
#features_categ = ['an','di', 'hi', 'sex']
#scale the dataset
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import OneHotEncoder
from sklearn.compose import ColumnTransformer
numerical_preprocessor = StandardScaler()
#do one-hot encoding for categorical features
categorical_preprocessor = OneHotEncoder(handle_unknown="ignore")
preprocessor = ColumnTransformer([
#('one-hot-encoder', categorical_preprocessor, features_categ),
('standard-scaler', numerical_preprocessor, features)])
from sklearn.pipeline import Pipeline
#try Random Forest
clf = Pipeline(steps = [('preprocessor', preprocessor), ('classifier', RandomForestClassifier(n_estimators=1000,max_depth=3))])
#split the dataset into trainig and testing
x_train_new, x_test_new, y_train_new, y_test_new = train_test_split(Xnew, y, test_size=0.2, random_state=42)
clf.fit(x_train_new, np.ravel(y_train_new))
print(clf.score(x_train_new,np.ravel(y_train_new)))
print('model score: %.3f' % clf.score(x_test_new,np.ravel(y_test_new)))
clf = RandomForestClassifier()
param_grid = {
'n_estimators': [100, 1000],
'max_depth': [3, 4, 5],
}
search = GridSearchCV(clf, param_grid, cv=4, verbose=1,n_jobs=-1)
search.fit(x_train_new, np.ravel(y_train_new))
score = search.score(x_test_new, np.ravel(y_test_new))
print("Best CV score: {} using {}".format(search.best_score_, search.best_params_))
print("Test accuracy: {}".format(score))
randomForestModel = RandomForestClassifier(n_estimators=100,max_depth=4)
#cross-val score
from sklearn.model_selection import cross_val_score
score = cross_val_score(randomForestModel, Xnew, y, cv=20)
randomForestModel.fit(x_train_new,np.ravel(y_train_new))
print('Training score: ', randomForestModel.score(x_train_new,np.ravel(y_train_new)))
print('Test score: ', randomForestModel.score(x_test_new,np.ravel(y_test_new))) | Training score: 0.7364016736401674
Test score: 0.6166666666666667
| MIT | SupervisedLearning/Problem2_SupervisedLearning_Gaiceanu.ipynb | TheodoraG/FRTN65 |
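The preprocessing-plus-model workflow above can be condensed into a self-contained sketch; the synthetic data and parameter grid below are illustrative stand-ins, not the heart-failure dataset or the notebook's tuned values.

```python
# Minimal sketch of the Pipeline + GridSearchCV pattern used above, on
# synthetic data. Fitting the scaler inside the Pipeline keeps test rows out
# of the preprocessing statistics, so cross-validation does not leak.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # synthetic, nearly separable labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
pipe = Pipeline([('scaler', StandardScaler()),
                 ('clf', RandomForestClassifier(random_state=0))])
param_grid = {'clf__n_estimators': [50, 100], 'clf__max_depth': [3, 4]}
search = GridSearchCV(pipe, param_grid, cv=4)  # 2x2 candidates x 4 folds = 16 fits, then one refit
search.fit(X_tr, y_tr)
test_acc = search.score(X_te, y_te)
print(search.best_params_, round(test_acc, 3))
```

Note the `clf__` prefix: inside a Pipeline, grid-search parameters are addressed as `<step name>__<parameter>`.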
Load Data | print(df.shape)
df.head()
df.info() | <class 'pandas.core.frame.DataFrame'>
RangeIndex: 1083 entries, 0 to 1082
Data columns (total 18 columns):
label 1083 non-null int64
artist 1083 non-null object
album 1083 non-null object
genre 1083 non-null object
single_count 1083 non-null int64
freq_billboard 1083 non-null int64
freq_genius 1083 non-null int64
freq_theSource 1083 non-null int64
freq_xxl 1083 non-null int64
rating_AOTY 61 non-null float64
rating_meta 324 non-null float64
rating_pitch 220 non-null float64
twitter 1083 non-null int64
instagram 1083 non-null int64
facebook 1083 non-null int64
spotify 1083 non-null int64
soundcloud 1083 non-null int64
youtube 1083 non-null int64
dtypes: float64(3), int64(12), object(3)
memory usage: 152.4+ KB
| MIT | modeling/modeling-1-KNN-SGD.ipynb | lucaseo/content-worth-debut-artist-classification-project |
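The `df.info()` output above shows the critic-rating columns are sparsely populated (e.g. `rating_meta` has only 324 non-null values out of 1083 rows). A toy sketch of how such null counts are read off (toy frame, not the scraped artist data):

```python
# isnull().sum() gives the per-column null counts behind the sparse rating
# columns; a toy frame stands in for the real artist table.
import numpy as np
import pandas as pd

toy = pd.DataFrame({'rating_meta': [70.0, np.nan, 81.0],
                    'twitter': [100, 200, 300]})
null_counts = toy.isnull().sum().to_dict()
print(null_counts)
```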
**Note** - The online media article counts and the critic rating columns contain null values, so they cannot be fed to a decision tree model as-is and are excluded from the features. Data Preparation for Modeling Genres: `hiphop`, `R&B`, `Soul`, `Funk`, `Pop` | df = pd.get_dummies(df, columns=['genre'])
df.columns | _____no_output_____ | MIT | modeling/modeling-1-KNN-SGD.ipynb | lucaseo/content-worth-debut-artist-classification-project |
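A small sketch of what `pd.get_dummies` does to the `genre` column (toy rows with made-up genre labels):

```python
# get_dummies replaces a categorical column with one 0/1 indicator column per
# distinct level; the original 'genre' column is dropped.
import pandas as pd

toy = pd.DataFrame({'artist': ['a', 'b', 'c'],
                    'genre': ['hiphop', 'rnb', 'hiphop']})
toy = pd.get_dummies(toy, columns=['genre'])
print(sorted(toy.columns))
```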
Split train & test data | feature_names = ['single_count', 'freq_billboard',
'freq_genius', 'freq_theSource', 'freq_xxl',
'twitter', 'instagram', 'facebook',
'spotify', 'soundcloud', 'youtube',
'genre_funk', 'genre_hiphop', 'genre_pop', 'genre_rnb', 'genre_soul']
dfX = df[feature_names].copy()
dfy = df['label'].copy()
dfX.tail()
dfy.tail()
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(dfX, dfy, test_size=0.25, random_state=0) | _____no_output_____ | MIT | modeling/modeling-1-KNN-SGD.ipynb | lucaseo/content-worth-debut-artist-classification-project |
KNN | from sklearn.neighbors import KNeighborsClassifier
model = KNeighborsClassifier(n_neighbors=10).fit(X_train, y_train)
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test, model.predict(X_test))
from sklearn.metrics import classification_report
print(classification_report(y_test, model.predict(X_test)))
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
fpr, tpr, thresholds = roc_curve(y_test, model.predict_proba(X_test)[:, 1])
plt.plot(fpr, tpr, label="KNN")
plt.legend()
plt.plot([0, 1], [0, 1], 'k--', label="random guess")
plt.xlabel('False Positive Rate (Fall-Out)')
plt.ylabel('True Positive Rate (Recall)')
plt.title('Receiver operating characteristic example')
plt.show()
from sklearn.metrics import auc
auc(fpr, tpr) | _____no_output_____ | MIT | modeling/modeling-1-KNN-SGD.ipynb | lucaseo/content-worth-debut-artist-classification-project |
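The ROC/AUC computation above can be reproduced on toy data; the key point is that `roc_curve` consumes a continuous score (here the positive-class probability from `predict_proba`), not hard 0/1 predictions.

```python
# Self-contained sketch of the ROC/AUC computation above (synthetic data,
# evaluated in-sample for brevity).
import numpy as np
from sklearn.metrics import auc, roc_curve
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 2))
y = (X[:, 0] + 0.1 * rng.normal(size=300) > 0).astype(int)

model = KNeighborsClassifier(n_neighbors=10).fit(X, y)
fpr, tpr, thresholds = roc_curve(y, model.predict_proba(X)[:, 1])
roc_auc = auc(fpr, tpr)
print(round(roc_auc, 3))
```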
SGD | from sklearn.linear_model import SGDClassifier
model_SGD = SGDClassifier(random_state=0).fit(X_train, y_train)
confusion_matrix(y_train, model_SGD.predict(X_train))
confusion_matrix(y_test, model_SGD.predict(X_test))
print(classification_report(y_test, model_SGD.predict(X_test)))
# use decision_function scores rather than hard 0/1 predictions, so the ROC curve has more than one operating point
fpr, tpr, thresholds = roc_curve(y_test, model_SGD.decision_function(X_test))
plt.figure(figsize=(10, 10))
plt.plot(fpr, tpr, label="roc curve")
plt.legend()
plt.plot([0, 1], [0, 1], 'k--', label="random guess")
plt.xlabel('False Positive Rate (Fall-Out)')
plt.ylabel('True Positive Rate (Recall)')
plt.title('Receiver operating characteristic example')
plt.show()
auc(fpr, tpr) | _____no_output_____ | MIT | modeling/modeling-1-KNN-SGD.ipynb | lucaseo/content-worth-debut-artist-classification-project |
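One caveat worth showing on toy data: `SGDClassifier` with its default hinge loss has no `predict_proba`, and an ROC built from hard `predict()` labels collapses to a single operating point, while `decision_function()` scores trace a full curve.

```python
# Hedged sketch (synthetic data): compare the number of ROC points obtained
# from hard labels versus continuous decision-function scores.
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import roc_curve

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 2))
y = (X[:, 0] + 0.5 * rng.normal(size=400) > 0).astype(int)
model = SGDClassifier(random_state=0).fit(X, y)

fpr_hard, tpr_hard, _ = roc_curve(y, model.predict(X))
fpr_score, tpr_score, _ = roc_curve(y, model.decision_function(X))
print(len(fpr_hard), len(fpr_score))
```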
AmsterdamUMCdb - Freely Accessible ICU Database, version 1.0.2, March 2020. Copyright © 2003-2020 Amsterdam UMC - Amsterdam Medical Data Science. Vasopressors and inotropes: Shows the medication, if any, a patient received for artificially increasing blood pressure (vasopressors) or stimulating heart function (inotropes). Imports | %matplotlib inline
import amsterdamumcdb
import psycopg2
import pandas as pd
import numpy as np
import re
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import matplotlib as mpl
import io
from IPython.display import display, HTML, Markdown | _____no_output_____ | MIT | concepts/lifesupport/vasopressors_inotropes.ipynb | AKI-Group-ukmuenster/AmsterdamUMCdb |
Display settings | #matplotlib settings for image size
#needs to be in a different cell from %matplotlib inline
plt.style.use('seaborn-darkgrid')
plt.rcParams["figure.dpi"] = 288
plt.rcParams["figure.figsize"] = [16, 12]
plt.rcParams["font.size"] = 12
pd.options.display.max_columns = None
pd.options.display.max_rows = None
pd.options.display.max_colwidth = 1000 | _____no_output_____ | MIT | concepts/lifesupport/vasopressors_inotropes.ipynb | AKI-Group-ukmuenster/AmsterdamUMCdb |
Connection settings | #Modify config.ini in the root folder of the repository to change the settings to connect to your postgreSQL database
import configparser
import os
config = configparser.ConfigParser()
if os.path.isfile('../../config.ini'):
config.read('../../config.ini')
else:
config.read('../../config.SAMPLE.ini')
#Open a connection to the postgres database:
con = psycopg2.connect(database=config['psycopg2']['database'],
user=config['psycopg2']['username'], password=config['psycopg2']['password'],
host=config['psycopg2']['host'], port=config['psycopg2']['port'])
con.set_client_encoding('WIN1252') #Uses code page for Dutch accented characters.
con.set_session(autocommit=True)
cursor = con.cursor()
cursor.execute('SET SCHEMA \'amsterdamumcdb\''); #set search_path to amsterdamumcdb schema | _____no_output_____ | MIT | concepts/lifesupport/vasopressors_inotropes.ipynb | AKI-Group-ukmuenster/AmsterdamUMCdb |
Vasopressors and inotropes (from the drugitems table) | sql_vaso_ino = """
WITH vasopressor_inotropes AS (
SELECT
admissionid,
CASE
WHEN COUNT(*) > 0 THEN TRUE
ELSE FALSE
END AS vasopressors_inotropes_bool,
STRING_AGG(DISTINCT item, '; ') AS vasopressors_inotropes_given
FROM drugitems
WHERE
ordercategoryid = 65 -- continuous i.v. perfusor
AND itemid IN (
6818, -- Adrenaline (Epinefrine)
7135, -- Isoprenaline (Isuprel)
7178, -- Dobutamine (Dobutrex)
7179, -- Dopamine (Inotropin)
7196, -- Enoximon (Perfan)
7229, -- Noradrenaline (Norepinefrine)
12467, -- Terlipressine (Glypressin)
13490, -- Methyleenblauw IV (Methylthionide cloride)
19929 -- Fenylefrine
)
AND rate > 0.1
GROUP BY admissionid
)
SELECT
a.admissionid, location,
CASE
WHEN vi.vasopressors_inotropes_bool Then TRUE
ELSE FALSE
END AS vasopressors_inotropes_bool,
vasopressors_inotropes_given
FROM admissions a
LEFT JOIN vasopressor_inotropes vi ON
a.admissionid = vi.admissionid
"""
vaso_ino = pd.read_sql(sql_vaso_ino,con)
vaso_ino.tail() | _____no_output_____ | MIT | concepts/lifesupport/vasopressors_inotropes.ipynb | AKI-Group-ukmuenster/AmsterdamUMCdb |
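The filtering and aggregation done by the SQL above can be mirrored in pandas on a hypothetical miniature of the `drugitems` table (toy rows, not real admissions):

```python
# Flag admissions with any vasopressor/inotrope infusion above the rate
# threshold and aggregate the distinct item names, as the SQL does with
# STRING_AGG(DISTINCT item, '; ') and GROUP BY admissionid.
import pandas as pd

drugitems = pd.DataFrame({
    'admissionid': [1, 1, 2],
    'item': ['Noradrenaline (Norepinefrine)', 'Dopamine (Inotropin)', 'Noradrenaline (Norepinefrine)'],
    'rate': [0.5, 0.2, 0.05],
})
given = (drugitems[drugitems['rate'] > 0.1]
         .groupby('admissionid')['item']
         .agg(lambda s: '; '.join(sorted(s.unique()))))
print(given.to_dict())
```

Admission 2 disappears because its only infusion rate falls below the 0.1 threshold, matching the SQL `WHERE rate > 0.1` filter.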
#@title Calculation of mass transfer and hydrate inhibition of a wet gas by injection of methanol
#@markdown Demonstration of mass transfer calculation using the NeqSim software in Python
#@markdown <br><br>This document is part of the module ["Introduction to Gas Processing using NeqSim in Colab"](https://colab.research.google.com/github/EvenSol/NeqSim-Colab/blob/master/notebooks/examples_of_NeqSim_in_Colab.ipynb#scrollTo=_eRtkQnHpL70).
%%capture
!pip install neqsim
import neqsim
from neqsim.thermo.thermoTools import *
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
from neqsim.thermo import fluid, fluid_df
import pandas as pd
from neqsim.process import gasscrubber, clearProcess, run,nequnit, phasemixer, splitter, clearProcess, stream, valve, separator, compressor, runProcess, viewProcess, heater,saturator, mixer
plt.style.use('classic')
%matplotlib inline | _____no_output_____ | Apache-2.0 | notebooks/process/masstransferMeOH.ipynb | EvenSol/NeqSim-Colab |
|
Mass transfer calculations: Model for mass transfer calculation in NeqSim based on Solbraa (2002): https://ntnuopen.ntnu.no/ntnu-xmlui/handle/11250/231326 In the following calculations we assume a water-saturated gas that is mixed with pure liquid methanol. These phases are not in equilibrium when they enter the pipeline. When the gas and methanol liquid come in contact in the pipeline, methanol will vaporize into the gas, and water (and other components from the gas) will be absorbed into the liquid methanol. The focus of the following calculations will be to evaluate the mass transfer as a function of contact length with gas and methanol. It also evaluates the hydrate temperature of the gas leaving the pipe section. Figure 1: Illustration of mass transfer process. **The parameters for the model are:** Temperature and pressure of the pipe (mass transfer calculated at constant temperature and pressure). Length and diameter of pipe where gas and liquid will be in contact and mass transfer can occur. Flow rate of the gas in MSm3/day, flow rate of methanol (kg/hr). Calculation of composition of aqueous phase and gas leaving pipe section: In the following script we will simulate the composition of the gas leaving the pipe section at a given pipe length. | # Input parameters
pressure = 52.21 # bara
temperature = 15.2 #C
gasFlow = 1.23 #MSm3/day
methanolFlow = 6000.23 # kg/hr
pipelength = 10.0 #meter
pipeInnerDiameter = 0.5 #meter
# Create a gas-condensate fluid
feedgas = {'ComponentName': ["nitrogen","CO2","methane", "ethane" , "propane", "i-butane", "n-butane", "water", "methanol"],
'MolarComposition[-]': [0.01, 0.01, 0.8, 0.06, 0.01,0.005,0.005, 0.0, 0.0]
}
naturalgasFluid = fluid_df(pd.DataFrame(feedgas)).setModel("CPAs-SRK-EOS-statoil")
naturalgasFluid.setTotalFlowRate(gasFlow, "MSm3/day")
naturalgasFluid.setTemperature(temperature, "C")
naturalgasFluid.setPressure(pressure, "bara")
# Create a liquid methanol fluid
feedMeOH = {'ComponentName': ["nitrogen","CO2","methane", "ethane" , "propane", "i-butane", "n-butane", "water", "methanol"],
'MolarComposition[-]': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,1.0]
}
meOHFluid = fluid_df(pd.DataFrame(feedMeOH) ).setModel("CPAs-SRK-EOS-statoil")
meOHFluid.setTotalFlowRate(methanolFlow, "kg/hr");
meOHFluid.setTemperature(temperature, "C");
meOHFluid.setPressure(pressure, "bara");
clearProcess()
dryinjectiongas = stream(naturalgasFluid)
MeOHFeed = stream(meOHFluid)
watersaturator = saturator(dryinjectiongas)
waterSaturatedFeedGas = stream(watersaturator.getOutStream())
mainMixer = phasemixer("gas MeOH mixer")
mainMixer.addStream(waterSaturatedFeedGas)
mainMixer.addStream(MeOHFeed)
pipeline = nequnit(mainMixer.getOutStream(), equipment="pipeline", flowpattern="stratified") #alternative flow patterns are: stratified, annular and droplet
pipeline.setLength(pipelength)
pipeline.setID(pipeInnerDiameter)
scrubber = gasscrubber(pipeline.getOutStream())
gasFromScrubber = stream(scrubber.getGasOutStream())
aqueousFromScrubber = stream(scrubber.getLiquidOutStream())
run()
print('Composition of gas leaving pipe section after ', pipelength, ' meter')
printFrame(gasFromScrubber.getFluid())
print('Composition of aqueous phase leaving pipe section after ', pipelength, ' meter')
printFrame(aqueousFromScrubber.getFluid())
print('Interface contact area ', pipeline.getInterfacialArea(), ' m^2')
print('Volume fraction aqueous phase ', pipeline.getOutStream().getFluid().getVolumeFraction(1), ' -') | Composition of gas leaving pipe section after 10.0 meter
total gas
nitrogen 1.11418E-2 1.11418E-2 [mole fraction]
CO2 1.08558E-2 1.08558E-2 [mole fraction]
methane 8.89321E-1 8.89321E-1 [mole fraction]
ethane 6.59386E-2 6.59386E-2 [mole fraction]
propane 1.10391E-2 1.10391E-2 [mole fraction]
i-butane 5.4792E-3 5.4792E-3 [mole fraction]
n-butane 5.48529E-3 5.48529E-3 [mole fraction]
water 3.42883E-4 3.42883E-4 [mole fraction]
methanol 3.96384E-4 3.96384E-4 [mole fraction]
Density 4.51425E1 [kg/m^3]
PhaseFraction 1E0 [mole fraction]
MolarMass 1.8183E1 1.8183E1 [kg/kmol]
Z factor 8.7716E-1 [-]
Heat Capacity (Cp) 2.54484E0 [kJ/kg*K]
Heat Capacity (Cv) 1.65968E0 [kJ/kg*K]
Speed of Sound 3.96268E2 [m/sec]
Enthalpy -3.29472E1 -3.29472E1 [kJ/kg]
Entropy -1.62974E0 -1.62974E0 [kJ/kg*K]
JT coefficient 5.03748E-1 [K/bar]
Viscosity 1.22106E-5 [kg/m*sec]
Conductivity 3.74742E-2 [W/m*K]
SurfaceTension [N/m]
Pressure 52.21 [bar]
Temperature 288.34999999999997 [K]
Model CPAs-SRK-EOS-statoil -
Mixing Rule classic-CPA_T -
Stream -
Composition of aqueous phase leaving pipe section after 10.0 meter
total aqueous
nitrogen 1.53381E-4 1.53381E-4 [mole fraction]
CO2 3.28961E-3 3.28961E-3 [mole fraction]
methane 3.44647E-2 3.44647E-2 [mole fraction]
ethane 1.09255E-2 1.09255E-2 [mole fraction]
propane 1.28029E-3 1.28029E-3 [mole fraction]
i-butane 1.08241E-3 1.08241E-3 [mole fraction]
n-butane 1.01559E-3 1.01559E-3 [mole fraction]
water 8.18682E-4 8.18682E-4 [mole fraction]
methanol 9.4697E-1 9.4697E-1 [mole fraction]
Density 7.82709E2 [kg/m^3]
PhaseFraction 1E0 [mole fraction]
MolarMass 3.15665E1 3.15665E1 [kg/kmol]
Z factor 8.7826E-2 [-]
Heat Capacity (Cp) 2.26412E0 [kJ/kg*K]
Heat Capacity (Cv) 1.885E0 [kJ/kg*K]
Speed of Sound 1.07486E3 [m/sec]
Enthalpy -1.14087E3 -1.14087E3 [kJ/kg]
Entropy -3.37618E0 -3.37618E0 [kJ/kg*K]
JT coefficient -3.74052E-2 [K/bar]
Viscosity 5.85317E-4 [kg/m*sec]
Conductivity 5.89686E-1 [W/m*K]
SurfaceTension [N/m]
Pressure 52.21 [bar]
Temperature 288.34999999999997 [K]
Model CPAs-SRK-EOS-statoil -
Mixing Rule classic-CPA_T -
Stream -
Interface contact area 2.641129854675618 m^2
Volume fraction aqueous phase 0.011201474850165916 -
| Apache-2.0 | notebooks/process/masstransferMeOH.ipynb | EvenSol/NeqSim-Colab |
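As a rough plausibility check of the conditions above (back-of-envelope arithmetic only, not a NeqSim call; the compressibility Z is taken from the gas-phase results printed above, and standard conditions are assumed to be 15 C and 1 atm), the superficial gas velocity in the pipe can be estimated:

```python
# Convert the standard-condition gas rate to actual volume flow at line
# conditions via the real-gas correction, then divide by the pipe area.
import math

gasFlow_std = 1.23e6 / 86400.0          # 1.23 MSm3/day -> Sm3/s
P_std, T_std = 1.01325, 288.15          # assumed standard conditions: bar, K
P, T, Z = 52.21, 288.35, 0.877          # line pressure/temperature; Z from the results above
q_actual = gasFlow_std * (P_std / P) * (T / T_std) * Z   # m3/s at line conditions
area = math.pi * (0.5 / 2.0) ** 2       # pipe inner diameter 0.5 m
v_gas = q_actual / area
print(round(v_gas, 2))
```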
Calculation of hydrate equilibrium temperature of gas leaving pipe section: In the following script we will simulate the composition of the gas leaving the pipe section as well as the hydrate equilibrium temperature of this gas as a function of pipe length. | maxpipelength = 10.0
def hydtemps(length):
pipeline.setLength(length)
run();
return gasFromScrubber.getHydrateEquilibriumTemperature()-273.15
length = np.arange(0.01, maxpipelength, (maxpipelength)/10.0)
hydtem = [hydtemps(length2) for length2 in length]
plt.figure()
plt.plot(length, hydtem)
plt.xlabel('Length available for mass transfer [m]')
plt.ylabel('Hydrate eq.temperature [C]')
plt.title('Hydrate eq.temperature of gas leaving pipe section') | _____no_output_____ | Apache-2.0 | notebooks/process/masstransferMeOH.ipynb | EvenSol/NeqSim-Colab |
Amazon Forecast: predicting time-series at scaleForecasting is used in a variety of applications and business use cases: For example, retailers need to forecast the sales of their products to decide how much stock they need by location, Manufacturers need to estimate the number of parts required at their factories to optimize their supply chain, Businesses need to estimate their flexible workforce needs, Utilities need to forecast electricity consumption needs in order to attain an efficient energy network, and enterprises need to estimate their cloud infrastructure needs. Table of Contents* Step 0: [Setting up](setup)* Step 1: [Preparing the Datasets](prepare)* Step 2: [Importing the Data](import) * Step 2a: [Creating a Dataset Group](create) * Step 2b: [Creating a Target Dataset](target) * Step 2c: [Creating a Related Dataset](related) * Step 2d: [Update the Dataset Group](update) * Step 2e: [Creating a Target Time Series Dataset Import Job](targetImport) * Step 2f: [Creating a Related Time Series Dataset Import Job](relatedImport)* Step 3: [Choosing an Algorithm and Evaluating its Performance](algo) * Step 3a: [Choosing DeepAR+](DeepAR) * Step 3b: [Choosing Prophet](prophet)* Step 4: [Computing Error Metrics from Backtesting](error)* Step 5: [Creating a Forecast](forecast)* Step 6: [Querying the Forecasts](query)* Step 7: [Exporting the Forecasts](export)* Step 8: [Clearning up your Resources](cleanup) First let us setup Amazon ForecastThis section sets up the permissions and relevant endpoints. | %load_ext autoreload
%autoreload 2
from util.fcst_utils import *
import warnings
import boto3
import s3fs
plt.rcParams['figure.figsize'] = (15.0, 5.0)
warnings.filterwarnings('ignore') | _____no_output_____ | MIT-0 | notebooks/6.Incorporating_Related_Time_Series_dataset_to_your_Predictor.ipynb | ardilacarlosh/amazon-forecast-samples |
Although we have set the region to us-west-2 below, you can choose any of the 6 regions that the service is available in. | region = 'us-west-2'
bucket = 'bike-demo'
version = 'prod'
session = boto3.Session(region_name=region)
forecast = session.client(service_name='forecast')
forecast_query = session.client(service_name='forecastquery')
role_arn = get_or_create_role_arn() | _____no_output_____ | MIT-0 | notebooks/6.Incorporating_Related_Time_Series_dataset_to_your_Predictor.ipynb | ardilacarlosh/amazon-forecast-samples |
Overview: The above figure summarizes the key workflow of using Forecast. Step 1: Preparing the Datasets | bike_df = pd.read_csv("../data/train.csv", dtype = object)
bike_df.head()
bike_df['count'] = bike_df['count'].astype('float')
bike_df['workingday'] = bike_df['workingday'].astype('float') | _____no_output_____ | MIT-0 | notebooks/6.Incorporating_Related_Time_Series_dataset_to_your_Predictor.ipynb | ardilacarlosh/amazon-forecast-samples |
We take about two and a half weeks of hourly data for demonstration, chosen so that there is no missing data in the whole range. | bike_df_small = bike_df[-2*7*24-24*3:]
bike_df_small['item_id'] = "bike_12" | _____no_output_____ | MIT-0 | notebooks/6.Incorporating_Related_Time_Series_dataset_to_your_Predictor.ipynb | ardilacarlosh/amazon-forecast-samples |
Let us plot the time series first. | bike_df_small.plot(x='datetime', y='count', figsize=(15, 8)) | _____no_output_____ | MIT-0 | notebooks/6.Incorporating_Related_Time_Series_dataset_to_your_Predictor.ipynb | ardilacarlosh/amazon-forecast-samples |
We can see that the target time series seems to have a drop over weekends. Next let's plot both the target time series and the related time series that indicates whether today is a `workday` or not. More precisely, $r_t = 1$ if $t$ is a work day and 0 if not. | plt.figure(figsize=(15, 8))
ax = plt.gca()
bike_df_small.plot(x='datetime', y='count', ax=ax);
ax2 = ax.twinx()
bike_df_small.plot(x='datetime', y='workingday', color='red', ax=ax2); | _____no_output_____ | MIT-0 | notebooks/6.Incorporating_Related_Time_Series_dataset_to_your_Predictor.ipynb | ardilacarlosh/amazon-forecast-samples |
Notice that to use the related time series, we need to ensure that the related time series covers the whole target time series, as well as the future values as specified by the forecast horizon. More precisely, we need to make sure:```len(related time series) >= len(target time series) + forecast horizon```Basically, all items need to have data start at or before the item start date, and have data until the forecast horizon (i.e. the latest end date across all items + forecast horizon). Additionally, there should be no missing values in the related time series. The following picture illustrates the desired logic. For more details regarding how to prepare your Related Time Series dataset, please refer to the public documentation here. Suppose in this particular example, we wish to forecast for the next 24 hours, and thus we generate the following dataset. | target_df = bike_df_small[['item_id', 'datetime', 'count']][:-24]
rts_df = bike_df_small[['item_id', 'datetime', 'workingday']]
target_df.head(5) | _____no_output_____ | MIT-0 | notebooks/6.Incorporating_Related_Time_Series_dataset_to_your_Predictor.ipynb | ardilacarlosh/amazon-forecast-samples |
As we can see, the length of the related time series is equal to the length of the target time series plus the forecast horizon. | print(len(target_df), len(rts_df))
assert len(target_df) + 24 == len(rts_df), "length doesn't match" | _____no_output_____ | MIT-0 | notebooks/6.Incorporating_Related_Time_Series_dataset_to_your_Predictor.ipynb | ardilacarlosh/amazon-forecast-samples |
Next we check whether there are "holes" in the related time series. | assert len(rts_df) == len(pd.date_range(
start=list(rts_df['datetime'])[0],
end=list(rts_df['datetime'])[-1],
freq='H'
)), "missing entries in the related time series" | _____no_output_____ | MIT-0 | notebooks/6.Incorporating_Related_Time_Series_dataset_to_your_Predictor.ipynb | ardilacarlosh/amazon-forecast-samples |
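Beyond detecting that a gap exists, `pd.date_range` can also locate the missing timestamps; a sketch on a toy hourly series with one missing hour:

```python
# Build the full expected hourly index and take the set difference against the
# observed timestamps to list exactly which hours are missing.
import pandas as pd

ts = pd.to_datetime(['2021-01-01 00:00', '2021-01-01 01:00', '2021-01-01 03:00'])
full = pd.date_range(ts.min(), ts.max(), freq='H')
missing = full.difference(ts)
print(list(missing))
```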
Everything looks fine, and we plot both time series again. As can be seen, the related time series (indicator of whether the current day is a workday or not) is longer than the target time series. The binary working day indicator feature is a good example of a related time series, since it is known at all future time points. Other examples of related time series include holiday and promotion features. | plt.figure(figsize=(15, 10))
ax = plt.gca()
target_df.plot(x='datetime', y='count', ax=ax);
ax2 = ax.twinx()
rts_df.plot(x='datetime', y='workingday', color='red', ax=ax2);
target_df.to_csv("../data/bike_small.csv", index= False, header = False)
rts_df.to_csv("../data/bike_small_rts.csv", index= False, header = False)
s3 = session.client('s3')
account_id = boto3.client('sts').get_caller_identity().get('Account') | _____no_output_____ | MIT-0 | notebooks/6.Incorporating_Related_Time_Series_dataset_to_your_Predictor.ipynb | ardilacarlosh/amazon-forecast-samples |
If you don't have this bucket `amazon-forecast-data-{account_id}`, create it first on S3. | bucket_name = f"amazon-forecast-data-{account_id}"
key = "bike_small"
s3.upload_file(Filename="../data/bike_small.csv", Bucket = bucket_name, Key = f"{key}/bike.csv")
s3.upload_file(Filename="../data/bike_small_rts.csv", Bucket = bucket_name, Key = f"{key}/bike_rts.csv") | _____no_output_____ | MIT-0 | notebooks/6.Incorporating_Related_Time_Series_dataset_to_your_Predictor.ipynb | ardilacarlosh/amazon-forecast-samples |
Step 2. Importing the Data: Now we are ready to import the datasets into the Forecast service. Starting from the raw data, Amazon Forecast automatically extracts the dataset that is suitable for forecasting. As an example, a retailer normally records transaction records of the form (item, timestamp, demand). | project = "bike_rts_demo"
idx = 4
s3_data_path = f"s3://{bucket_name}/{key}" | _____no_output_____ | MIT-0 | notebooks/6.Incorporating_Related_Time_Series_dataset_to_your_Predictor.ipynb | ardilacarlosh/amazon-forecast-samples |
Below, we specify the key input data and forecast parameters. | freq = "H"
forecast_horizon = 24
timestamp_format = "yyyy-MM-dd HH:mm:ss"
delimiter = ',' | _____no_output_____ | MIT-0 | notebooks/6.Incorporating_Related_Time_Series_dataset_to_your_Predictor.ipynb | ardilacarlosh/amazon-forecast-samples |
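With `freq = "H"` and `forecast_horizon = 24`, the forecast covers exactly one day past the last target timestamp; a small sketch (the last timestamp below is hypothetical):

```python
# Enumerate the 24 hourly steps that follow a given last target timestamp.
import pandas as pd

last_target_ts = pd.Timestamp('2012-12-19 23:00:00')  # hypothetical last target point
horizon = 24
forecast_index = pd.date_range(last_target_ts, periods=horizon + 1, freq='H')[1:]
print(forecast_index[0], forecast_index[-1])
```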
Step 2a. Creating a Dataset Group: First let's create a dataset group and then update it later to add our datasets. | dataset_group = f"{project}_gp_{idx}"
dataset_arns = []
create_dataset_group_response = forecast.create_dataset_group(Domain="RETAIL",
DatasetGroupName=dataset_group,
DatasetArns=dataset_arns)
logging.info(f'Creating dataset group {dataset_group}')
dataset_group_arn = create_dataset_group_response['DatasetGroupArn']
forecast.describe_dataset_group(DatasetGroupArn=dataset_group_arn) | _____no_output_____ | MIT-0 | notebooks/6.Incorporating_Related_Time_Series_dataset_to_your_Predictor.ipynb | ardilacarlosh/amazon-forecast-samples |
Step 2b. Creating a Target Dataset: In this example, we will define a target time series. This is a required dataset to use the service. Below we specify the target time series name bike_rts_demo_ts_4. | ts_dataset_name = f"{project}_ts_{idx}"
print(ts_dataset_name) | _____no_output_____ | MIT-0 | notebooks/6.Incorporating_Related_Time_Series_dataset_to_your_Predictor.ipynb | ardilacarlosh/amazon-forecast-samples |
Next, we specify the schema of our dataset below. Make sure the order of the attributes (columns) matches the raw data in the files. We follow the same three attribute format as the above example. | ts_schema_val = [{"AttributeName": "item_id", "AttributeType": "string"},
{"AttributeName": "timestamp", "AttributeType": "timestamp"},
{"AttributeName": "demand", "AttributeType": "float"}]
ts_schema = {"Attributes": ts_schema_val}
logging.info(f'Creating target dataset {ts_dataset_name}')
response = forecast.create_dataset(Domain="RETAIL",
DatasetType='TARGET_TIME_SERIES',
DatasetName=ts_dataset_name,
DataFrequency=freq,
Schema=ts_schema
)
ts_dataset_arn = response['DatasetArn']
forecast.describe_dataset(DatasetArn=ts_dataset_arn) | _____no_output_____ | MIT-0 | notebooks/6.Incorporating_Related_Time_Series_dataset_to_your_Predictor.ipynb | ardilacarlosh/amazon-forecast-samples |
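Because the CSVs were written with `header=False`, it is the column order, not the column names, that must line up with the schema. A tiny checker sketch (the frame column names are the notebook's; the mapping logic is illustrative):

```python
# Pair each positional CSV column with the schema attribute it will be
# interpreted as, and make sure the counts agree.
ts_schema_val = [{"AttributeName": "item_id", "AttributeType": "string"},
                 {"AttributeName": "timestamp", "AttributeType": "timestamp"},
                 {"AttributeName": "demand", "AttributeType": "float"}]
csv_columns = ['item_id', 'datetime', 'count']  # column order in the exported frame

expected = [a["AttributeName"] for a in ts_schema_val]
assert len(expected) == len(csv_columns), "attribute count mismatch"
print(list(zip(csv_columns, expected)))
```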
Step 2c. Creating a Related Dataset: In this example, we will define a related time series. Specify the related time series name bike_rts_demo_rts_4. | rts_dataset_name = f"{project}_rts_{idx}"
print(rts_dataset_name) | _____no_output_____ | MIT-0 | notebooks/6.Incorporating_Related_Time_Series_dataset_to_your_Predictor.ipynb | ardilacarlosh/amazon-forecast-samples |
Specify the schema of your dataset here. Make sure the order of columns matches the raw data files. We follow the same three column format as the above example. | rts_schema_val = [{"AttributeName": "item_id", "AttributeType": "string"},
{"AttributeName": "timestamp", "AttributeType": "timestamp"},
{"AttributeName": "price", "AttributeType": "float"}]  # note: the workingday values are loaded under the RETAIL domain's 'price' attribute name
rts_schema = {"Attributes": rts_schema_val}
logging.info(f'Creating related dataset {rts_dataset_name}')
response = forecast.create_dataset(Domain="RETAIL",
DatasetType='RELATED_TIME_SERIES',
DatasetName=rts_dataset_name,
DataFrequency=freq,
Schema=rts_schema
)
rts_dataset_arn = response['DatasetArn']
forecast.describe_dataset(DatasetArn=rts_dataset_arn) | _____no_output_____ | MIT-0 | notebooks/6.Incorporating_Related_Time_Series_dataset_to_your_Predictor.ipynb | ardilacarlosh/amazon-forecast-samples |
Step 2d. Updating the dataset group with the datasets we created: You can have multiple datasets under the same dataset group. Update it with the datasets we created before. | dataset_arns = []
dataset_arns.append(ts_dataset_arn)
dataset_arns.append(rts_dataset_arn)
forecast.update_dataset_group(DatasetGroupArn=dataset_group_arn, DatasetArns=dataset_arns)
forecast.describe_dataset_group(DatasetGroupArn=dataset_group_arn) | _____no_output_____ | MIT-0 | notebooks/6.Incorporating_Related_Time_Series_dataset_to_your_Predictor.ipynb | ardilacarlosh/amazon-forecast-samples |
Step 2e. Creating a Target Time Series Dataset Import Job | ts_s3_data_path = f"{s3_data_path}/bike.csv"
ts_dataset_import_job_response = forecast.create_dataset_import_job(DatasetImportJobName=dataset_group,
DatasetArn=ts_dataset_arn,
DataSource= {
"S3Config" : {
"Path": ts_s3_data_path,
"RoleArn": role_arn
}
},
TimestampFormat=timestamp_format)
ts_dataset_import_job_arn=ts_dataset_import_job_response['DatasetImportJobArn']
status = wait(lambda: forecast.describe_dataset_import_job(DatasetImportJobArn=ts_dataset_import_job_arn))
assert status | _____no_output_____ | MIT-0 | notebooks/6.Incorporating_Related_Time_Series_dataset_to_your_Predictor.ipynb | ardilacarlosh/amazon-forecast-samples |
Step 2f. Creating a Related Time Series Dataset Import Job | rts_s3_data_path = f"{s3_data_path}/bike_rts.csv"
rts_dataset_import_job_response = forecast.create_dataset_import_job(DatasetImportJobName=dataset_group,
DatasetArn=rts_dataset_arn,
DataSource= {
"S3Config" : {
"Path": rts_s3_data_path,
"RoleArn": role_arn
}
},
TimestampFormat=timestamp_format)
rts_dataset_import_job_arn=rts_dataset_import_job_response['DatasetImportJobArn']
status = wait(lambda: forecast.describe_dataset_import_job(DatasetImportJobArn=rts_dataset_import_job_arn))
assert status | _____no_output_____ | MIT-0 | notebooks/6.Incorporating_Related_Time_Series_dataset_to_your_Predictor.ipynb | ardilacarlosh/amazon-forecast-samples |
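The `wait(...)` helper used above is assumed to come from `util.fcst_utils` (not shown here); its typical shape is a poll loop like the following simulated sketch, which makes no AWS calls:

```python
# Poll a describe call until the resource leaves CREATE_IN_PROGRESS; the
# sequence of statuses is simulated with an iterator.
import itertools

_states = itertools.chain(['CREATE_IN_PROGRESS', 'CREATE_IN_PROGRESS', 'ACTIVE'],
                          itertools.repeat('ACTIVE'))

def describe():
    return {'Status': next(_states)}

def wait_until_active(describe_fn, max_polls=10):
    for _ in range(max_polls):
        status = describe_fn()['Status']
        if status in ('ACTIVE', 'CREATE_FAILED'):
            return status == 'ACTIVE'
        # in real code: time.sleep(...) between polls
    return False

ok = wait_until_active(describe)
print(ok)
```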
Step 3. Choosing an algorithm and evaluating its performance: Once the datasets are specified with the corresponding schema, Amazon Forecast will automatically aggregate all the relevant pieces of information for each item, such as sales, price, promotions, as well as categorical attributes, and generate the desired dataset. Next, one can choose an algorithm (forecasting model) and evaluate how well this particular algorithm works on this dataset. The following graph gives a high-level overview of the forecasting models. Amazon Forecast provides several state-of-the-art forecasting algorithms including classic forecasting methods such as ETS, ARIMA, Prophet and deep learning approaches such as DeepAR+. Classical forecasting methods, such as Autoregressive Integrated Moving Average (ARIMA) or Exponential Smoothing (ETS), fit a single model to each individual time series, and then use that model to extrapolate the time series into the future. Amazon's Non-Parametric Time Series (NPTS) forecaster also fits a single model to each individual time series. Unlike the naive or seasonal naive forecasters that use a fixed time index (the previous index $T-1$ or the past season $T - \tau$) as the prediction for time step $T$, NPTS randomly samples a time index $t \in \{0, \dots T-1\}$ in the past to generate a sample for the current time step $T$. In many applications, you may encounter many similar time series across a set of cross-sectional units. Examples of such time series groupings are demand for different products, server loads, and requests for web pages. In this case, it can be beneficial to train a single model jointly over all of these time series. DeepAR+ takes this approach, outperforming the standard ARIMA and ETS methods when your dataset contains hundreds of related time series. The trained model can also be used for generating forecasts for new time series that are similar to the ones it has been trained on.
While deep learning approaches can outperform standard methods, this is only possible when there is sufficient data available for training. It is not the case, for example, when one trains a neural network on a time series that contains only a few dozen observations. Amazon Forecast provides the best of both worlds, allowing users to either choose a specific algorithm or let Amazon Forecast automatically perform model selection. How to evaluate a forecasting model? Before moving forward, let's first introduce the notion of *backtest* when evaluating forecasting models. The key difference between evaluating forecasting algorithms and standard ML applications is that we need to make sure that no future information gets used in the past. In other words, the procedure needs to be causal. In this notebook, let's compare the neural-network-based method DeepAR+ with Facebook's open-source Bayesian method Prophet. | algorithm_arn = 'arn:aws:forecast:::algorithm/' | _____no_output_____ | MIT-0 | notebooks/6.Incorporating_Related_Time_Series_dataset_to_your_Predictor.ipynb | ardilacarlosh/amazon-forecast-samples |
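The causality requirement above can be illustrated with a toy rolling backtest split (pure Python; these are not the service's actual backtest windows):

```python
# Each evaluation window only ever sees earlier data at training time, so no
# future information leaks into the past.
horizon = 24
n = 200
series = list(range(n))  # stand-in for a time-ordered target series

splits = []
for end in range(n - 2 * horizon, n - horizon + 1, horizon):
    train, test = series[:end], series[end:end + horizon]
    assert max(train) < min(test)  # causal: all training points precede the test window
    splits.append((len(train), len(test)))
print(splits)
```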
Step 3a. Choosing DeepAR+ | algorithm = 'Deep_AR_Plus'
algorithm_arn_deep_ar_plus = algorithm_arn + algorithm
predictor_name_deep_ar = f'{project}_{algorithm.lower()}_{idx}'
logging.info(f'[{predictor_name_deep_ar}] Creating predictor {predictor_name_deep_ar} ...')
create_predictor_response = forecast.create_predictor(PredictorName=predictor_name_deep_ar,
AlgorithmArn=algorithm_arn_deep_ar_plus,
ForecastHorizon=forecast_horizon,
PerformAutoML=False,
PerformHPO=False,
InputDataConfig= {"DatasetGroupArn": dataset_group_arn},
FeaturizationConfig= {"ForecastFrequency": freq}
)
predictor_arn_deep_ar = create_predictor_response['PredictorArn']
status = wait(lambda: forecast.describe_predictor(PredictorArn=predictor_arn_deep_ar))
assert status
forecast.describe_predictor(PredictorArn=predictor_arn_deep_ar)
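The `wait` helper used above is defined in the notebook's setup, outside this excerpt. A hand-rolled equivalent — the function name and defaults here are illustrative assumptions, not part of the SDK — would poll the describe call until the resource settles:

```python
import time

def wait_until_active(describe_fn, poll_seconds=10, timeout_seconds=3600):
    # Poll a Forecast describe_* call until creation finishes.
    # describe_fn: zero-argument callable returning the describe response dict.
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        status = describe_fn().get('Status', '')
        if status == 'ACTIVE':
            return True          # resource is ready to use
        if 'FAILED' in status:
            return False         # e.g. CREATE_FAILED
        time.sleep(poll_seconds)
    raise TimeoutError('resource did not become ACTIVE in time')
```

It would be invoked the same way as `wait` above, e.g. `wait_until_active(lambda: forecast.describe_predictor(PredictorArn=predictor_arn_deep_ar))`.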
Step 3b. Choosing Prophet | algorithm = 'Prophet'
algorithm_arn_prophet = algorithm_arn + algorithm
predictor_name_prophet = f'{project}_{algorithm.lower()}_{idx}'
algorithm_arn_prophet
logging.info(f'[{predictor_name_prophet}] Creating predictor {predictor_name_prophet} ...')
create_predictor_response = forecast.create_predictor(PredictorName=predictor_name_prophet,
AlgorithmArn=algorithm_arn_prophet,
ForecastHorizon=forecast_horizon,
PerformAutoML=False,
PerformHPO=False,
InputDataConfig= {"DatasetGroupArn": dataset_group_arn},
FeaturizationConfig= {"ForecastFrequency": freq}
)
predictor_arn_prophet = create_predictor_response['PredictorArn']
status = wait(lambda: forecast.describe_predictor(PredictorArn=predictor_arn_prophet))
assert status
forecast.describe_predictor(PredictorArn=predictor_arn_prophet)
Step 4. Computing Error Metrics from Backtesting. After creating the predictors, we can query the forecast accuracy given by the backtest scenario and gain a quantitative understanding of the performance of the algorithm. Such a process is iterative in nature during model development. When an algorithm with satisfactory performance is found, the customer can deploy the predictor into a production environment, and query the forecasts for a particular item to make business decisions. The figure below shows a sample plot of different quantile forecasts of a predictor. | logging.info('Done creating predictor. Getting accuracy numbers for DeepAR+ ...')
error_metrics_deep_ar_plus = forecast.get_accuracy_metrics(PredictorArn=predictor_arn_deep_ar)
error_metrics_deep_ar_plus
logging.info('Done creating predictor. Getting accuracy numbers for Prophet ...')
error_metrics_prophet = forecast.get_accuracy_metrics(PredictorArn=predictor_arn_prophet)
error_metrics_prophet
def extract_summary_metrics(metric_response, predictor_name):
df = pd.DataFrame(metric_response['PredictorEvaluationResults']
[0]['TestWindows'][0]['Metrics']['WeightedQuantileLosses'])
df['Predictor'] = predictor_name
return df
deep_ar_metrics = extract_summary_metrics(error_metrics_deep_ar_plus, "DeepAR")
prophet_metrics = extract_summary_metrics(error_metrics_prophet, "Prophet")
pd.concat([deep_ar_metrics, prophet_metrics]) \
    .pivot(index='Quantile', columns='Predictor', values='LossValue').plot.bar();
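The `WeightedQuantileLosses` being compared here are pinball losses normalized by total demand. A small reimplementation of the metric for illustration (my own sketch, not the service's code):

```python
import numpy as np

def weighted_quantile_loss(y_true, y_pred, tau):
    # tau-quantile (pinball) loss summed over all points, normalized
    # by total absolute demand -- a scale-free accuracy measure.
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    under = np.maximum(y_true - y_pred, 0.0)   # forecast came in too low
    over = np.maximum(y_pred - y_true, 0.0)    # forecast came in too high
    loss = 2.0 * np.sum(tau * under + (1.0 - tau) * over)
    return loss / np.sum(np.abs(y_true))

print(weighted_quantile_loss([10., 20., 30.], [12., 18., 33.], tau=0.5))  # 7/60
```

The asymmetric weighting is what makes quantile forecasts meaningful: at `tau=0.9`, under-forecasting is penalized nine times more than over-forecasting, so a well-calibrated p90 sits above the observations about 90% of the time.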
As we mentioned before, if you only have a handful of time series (in this case, only 1) with a small number of examples, the neural network models (DeepAR+) are not the best choice. Here, we clearly see that DeepAR+ behaves worse than Prophet in the case of a single time series. Step 5. Creating a Forecast. Next we re-train with the full dataset, and create the forecast. | logging.info(f"Done fetching accuracy numbers. Creating forecaster for DeepAR+ ...")
forecast_name_deep_ar = f'{project}_deep_ar_plus_{idx}'
create_forecast_response_deep_ar = forecast.create_forecast(ForecastName=forecast_name_deep_ar,
PredictorArn=predictor_arn_deep_ar)
forecast_arn_deep_ar = create_forecast_response_deep_ar['ForecastArn']
status = wait(lambda: forecast.describe_forecast(ForecastArn=forecast_arn_deep_ar))
assert status
forecast.describe_forecast(ForecastArn=forecast_arn_deep_ar)
logging.info(f"Done fetching accuracy numbers. Creating forecaster for Prophet ...")
forecast_name_prophet = f'{project}_prophet_{idx}'
create_forecast_response_prophet = forecast.create_forecast(ForecastName=forecast_name_prophet,
PredictorArn=predictor_arn_prophet)
forecast_arn_prophet = create_forecast_response_prophet['ForecastArn']
status = wait(lambda: forecast.describe_forecast(ForecastArn=forecast_arn_prophet))
assert status
forecast.describe_forecast(ForecastArn=forecast_arn_prophet)
Step 6. Querying the Forecasts | item_id = 'bike_12'
forecast_response_deep = forecast_query.query_forecast(
ForecastArn=forecast_arn_deep_ar,
Filters={"item_id": item_id})
forecast_response_prophet = forecast_query.query_forecast(ForecastArn=forecast_arn_prophet,
Filters={"item_id":item_id})
fname = '../data/bike_small.csv'
exact = load_exact_sol(fname, item_id)
plot_forecasts(forecast_response_deep, exact)
plt.title("DeepAR Forecast");
plot_forecasts(forecast_response_prophet, exact)
plt.title("Prophet Forecast");
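Each query response holds the predictions per quantile under `Forecast -> Predictions`: a list of `{Timestamp, Value}` points for each requested quantile (p10/p50/p90 by default). A small convenience helper — my own, not part of the SDK — to flatten one response into a DataFrame:

```python
import pandas as pd

def forecast_to_frame(query_response):
    # One column per quantile, indexed by forecast timestamp.
    preds = query_response['Forecast']['Predictions']
    df = pd.DataFrame({
        quantile: pd.Series({p['Timestamp']: p['Value'] for p in points})
        for quantile, points in preds.items()
    })
    df.index = pd.to_datetime(df.index)
    return df.sort_index()
```

Used as `forecast_to_frame(forecast_response_deep)`, this gives a tidy frame that is easy to join against the exact solution for plotting.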
Step 7. Exporting your Forecasts | forecast_export_name_deep_ar = f'{project}_forecast_export_deep_ar_plus_{idx}'
forecast_export_name_deep_ar_path = f"{s3_data_path}/{forecast_export_name_deep_ar}"
create_forecast_export_response_deep_ar = forecast.create_forecast_export_job(ForecastExportJobName=forecast_export_name_deep_ar,
ForecastArn=forecast_arn_deep_ar,
Destination={
"S3Config" : {
"Path": forecast_export_name_deep_ar_path,
"RoleArn": role_arn
}
})
forecast_export_arn_deep_ar = create_forecast_export_response_deep_ar['ForecastExportJobArn']
status = wait(lambda: forecast.describe_forecast_export_job(ForecastExportJobArn = forecast_export_arn_deep_ar))
assert status
forecast_export_name_prophet = f'{project}_forecast_export_prophet_{idx}'
forecast_export_name_prophet_path = f"{s3_data_path}/{forecast_export_name_prophet}"
create_forecast_export_response_prophet = forecast.create_forecast_export_job(ForecastExportJobName=forecast_export_name_prophet,
ForecastArn=forecast_arn_prophet,
Destination={
"S3Config" : {
"Path": forecast_export_name_prophet_path,
"RoleArn": role_arn
}
})
forecast_export_arn_prophet = create_forecast_export_response_prophet['ForecastExportJobArn']
status = wait(lambda: forecast.describe_forecast_export_job(ForecastExportJobArn = forecast_export_arn_prophet))
assert status
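An export job writes its results as a set of CSV part files under the S3 destination path. After copying them down (e.g. with `aws s3 sync`), they can be stitched back together; the helper below is an illustrative sketch, not part of the SDK:

```python
import glob
import os

import pandas as pd

def load_forecast_export(local_dir):
    # Concatenate every CSV part file the export job produced
    # into a single DataFrame, in deterministic (sorted) order.
    parts = sorted(glob.glob(os.path.join(local_dir, '*.csv')))
    return pd.concat((pd.read_csv(p) for p in parts), ignore_index=True)
```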
Step 8. Cleaning up your Resources. Once we have completed the above steps, we can start to clean up the resources we created. All delete jobs, except for `delete_dataset_group`, are asynchronous, so we have added the helpful `wait_till_delete` function. Resource limits are documented here. | # Delete forecast export for both algorithms
wait_till_delete(lambda: forecast.delete_forecast_export_job(ForecastExportJobArn = forecast_export_arn_deep_ar))
wait_till_delete(lambda: forecast.delete_forecast_export_job(ForecastExportJobArn = forecast_export_arn_prophet))
# Delete forecast for both algorithms
wait_till_delete(lambda: forecast.delete_forecast(ForecastArn = forecast_arn_deep_ar))
wait_till_delete(lambda: forecast.delete_forecast(ForecastArn = forecast_arn_prophet))
# Delete predictor for both algorithms
wait_till_delete(lambda: forecast.delete_predictor(PredictorArn = predictor_arn_deep_ar))
wait_till_delete(lambda: forecast.delete_predictor(PredictorArn = predictor_arn_prophet))
# Delete the target time series and related time series dataset import jobs
wait_till_delete(lambda: forecast.delete_dataset_import_job(DatasetImportJobArn=ts_dataset_import_job_arn))
wait_till_delete(lambda: forecast.delete_dataset_import_job(DatasetImportJobArn=rts_dataset_import_job_arn))
# Delete the target time series and related time series datasets
wait_till_delete(lambda: forecast.delete_dataset(DatasetArn=ts_dataset_arn))
wait_till_delete(lambda: forecast.delete_dataset(DatasetArn=rts_dataset_arn))
# Delete dataset group
forecast.delete_dataset_group(DatasetGroupArn=dataset_group_arn)
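`wait_till_delete` retries because deletes are asynchronous and a resource can still be "in use" while a dependent resource is being torn down. A hand-rolled equivalent (illustrative only; matching exceptions by class name is a simplification of how botocore's modeled error classes would normally be caught):

```python
import time

def wait_till_delete_sketch(delete_fn, poll_seconds=5, timeout_seconds=600):
    # Keep issuing the delete until the resource is confirmed gone.
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        try:
            delete_fn()
        except Exception as err:
            name = type(err).__name__
            if 'ResourceNotFound' in name:
                return              # already deleted: success
            if 'ResourceInUse' not in name:
                raise               # unexpected error: surface it
        time.sleep(poll_seconds)
    raise TimeoutError('delete did not complete in time')
```

It would be called just like the helper above, e.g. `wait_till_delete_sketch(lambda: forecast.delete_forecast(ForecastArn=forecast_arn_deep_ar))`.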