# Assignment 1: Auto Correct
Welcome to the first assignment of Course 2. This assignment will give you a chance to brush up on your Python and probability skills. In doing so, you will implement an auto-correct system that is both effective and useful.
## Outline
- [0. Overview](#0)
- [0.1 Edit Distance](#0-1)
- [1. Data Preprocessing](#1)
- [1.1 Exercise 1](#ex-1)
- [1.2 Exercise 2](#ex-2)
- [1.3 Exercise 3](#ex-3)
- [2. String Manipulation](#2)
- [2.1 Exercise 4](#ex-4)
- [2.2 Exercise 5](#ex-5)
- [2.3 Exercise 6](#ex-6)
- [2.4 Exercise 7](#ex-7)
- [3. Combining the edits](#3)
- [3.1 Exercise 8](#ex-8)
- [3.2 Exercise 9](#ex-9)
- [3.3 Exercise 10](#ex-10)
- [4. Minimum Edit Distance](#4)
- [4.1 Exercise 11](#ex-11)
- [5. Backtrace (Optional)](#5)
<a name='0'></a>
## 0. Overview
You use autocorrect every day on your cell phone and computer. In this assignment, you will explore what really goes on behind the scenes. Of course, the model you are about to implement is not identical to the one used in your phone, but it is still quite good.
By completing this assignment you will learn how to:
- Get a word count given a corpus
- Get a word probability in the corpus
- Manipulate strings
- Filter strings
- Implement Minimum edit distance to compare strings and to help find the optimal path for the edits.
- Understand how dynamic programming works
Similar systems are used everywhere.
- For example, if you type in the word **"I am lerningg"**, chances are very high that you meant to write **"learning"**, as shown in **Figure 1**.
<div style="width:image width px; font-size:100%; text-align:center;"><img src='auto-correct.png' alt="alternate text" width="width" height="height" style="width:300px;height:250px;" /> Figure 1 </div>
<a name='0-1'></a>
#### 0.1 Edit Distance
In this assignment, you will implement models that correct words that are 1 and 2 edit distances away.
- We say two words are n edit distance away from each other when we need n edits to change one word into another.
An edit could consist of one of the following options:
- Delete (remove a letter): ‘hat’ => ‘at, ha, ht’
- Switch (swap 2 adjacent letters): ‘eta’ => ‘eat, tea,...’
- Replace (change 1 letter to another): ‘jat’ => ‘hat, rat, cat, mat, ...’
- Insert (add a letter): ‘te’ => ‘the, ten, ate, ...’
You will be using the four methods above to implement an Auto-correct.
- To do so, you will need to compute probabilities that a certain word is correct given an input.
This auto-correct you are about to implement was first created by [Peter Norvig](https://en.wikipedia.org/wiki/Peter_Norvig) in 2007.
- His [original article](https://norvig.com/spell-correct.html) may be a useful reference for this assignment.
The goal of our spell check model is to compute the following probability:
$$P(c|w) = \frac{P(w|c)\times P(c)}{P(w)} \tag{Eqn-1}$$
The equation above is [Bayes Rule](https://en.wikipedia.org/wiki/Bayes%27_theorem).
- Equation 1 says that the probability of a word being correct, $P(c|w)$, is equal to the probability of having a certain word $w$ given that it is correct, $P(w|c)$, multiplied by the probability of being correct in general, $P(c)$, divided by the probability of that word $w$ appearing in general, $P(w)$.
- To compute equation 1, you will first import a data set and then create all the probabilities that you need using that data set.
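To make Eqn-1 concrete, here is a tiny sketch with made-up numbers (the probabilities below are hypothetical, not taken from any corpus). Since $P(w)$ is the same for every candidate correction $c$ of the same typed word $w$, comparing the numerators $P(w|c)\times P(c)$ is enough to rank the candidates.
```python
# Toy illustration of Eqn-1 with hypothetical probabilities.
# P(w) is identical for all candidates, so ranking by P(w|c) * P(c)
# gives the same ordering as ranking by P(c|w).
candidates = {
    # candidate c: (P(w|c), P(c))  -- made-up values for illustration
    "learning": (0.010, 0.0005),
    "leaning":  (0.002, 0.0001),
}
scores = {c: p_w_given_c * p_c for c, (p_w_given_c, p_c) in candidates.items()}
print(max(scores, key=scores.get))  # learning
```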
<a name='1'></a>
# Part 1: Data Preprocessing
```
import re
from collections import Counter
import numpy as np
import pandas as pd
```
As in any other machine learning task, the first thing you have to do is process your data set.
- Many courses load in pre-processed data for you.
- However, in the real world, when you build these NLP systems, you load the datasets and process them.
- So let's get some real world practice in pre-processing the data!
Your first task is to read in a file called **'shakespeare.txt'** which is found in your file directory. To look at this file you can go to `File ==> Open `.
<a name='ex-1'></a>
### Exercise 1
Implement the function `process_data` which
1) Reads in a corpus (text file)
2) Changes everything to lowercase
3) Returns a list of words.
#### Options and Hints
- If you would like more of a real-life practice, don't open the 'Hints' below (yet) and try searching the web to derive your answer.
- If you want a little help, click on the green "General Hints" section below.
- If you get stuck or are not getting the expected results, click on the green 'Detailed Hints' section to get hints for each step that you'll take to complete this function.
<details>
<summary>
<font size="3" color="darkgreen"><b>General Hints</b></font>
</summary>
<p>
General Hints to get started
<ul>
<li>Python <a href="https://docs.python.org/3/tutorial/inputoutput.html">input and output</a></li>
<li>Python <a href="https://docs.python.org/3/library/re.html" >'re' documentation </a> </li>
</ul>
</p>
</details>
<details>
<summary>
<font size="3" color="darkgreen"><b>Detailed Hints</b></font>
</summary>
<p>
Detailed hints if you're stuck
<ul>
<li>Use 'with' syntax to read a file</li>
<li>Decide whether to use 'read()' or 'readline()'. What's the difference?</li>
<li>Decide whether to use str.lower() or str.lowercase(). What is the difference?</li>
<li>Use re.findall(pattern, string)</li>
<li>Look for the "Raw String Notation" section in the Python 're' documentation to understand the difference between '\W', r'\W' and '\\W'. </li>
<li>For the pattern, decide between using '\s', '\w', '\s+' or '\w+'. What do you think are the differences?</li>
</ul>
</p>
</details>
```
# UNQ_C1 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: process_data
def process_data(file_name):
"""
Input:
A file_name which is found in your current directory. You just have to read it in.
Output:
words: a list containing all the words in the corpus (text file you read) in lower case.
"""
words = [] # return this variable correctly
### START CODE HERE ###
f = open(file_name, "r")
words = f.read().lower()
words = re.sub(r"[^a-zA-Z0-9]", " ", words)
words = words.split()
### END CODE HERE ###
return words
```
Note that in the following cell, the list of words is converted to a Python `set`. This eliminates any duplicate entries.
```
#DO NOT MODIFY THIS CELL
word_l = process_data('shakespeare.txt')
vocab = set(word_l) # this will be your new vocabulary
print(f"The first ten words in the text are: \n{word_l[0:10]}")
print(f"There are {len(vocab)} unique words in the vocabulary.")
```
#### Expected Output
```Python
The first ten words in the text are:
['o', 'for', 'a', 'muse', 'of', 'fire', 'that', 'would', 'ascend', 'the']
There are 6116 unique words in the vocabulary.
```
<a name='ex-2'></a>
### Exercise 2
Implement a `get_count` function that returns a dictionary
- The dictionary's keys are words
- The value for each word is the number of times that word appears in the corpus.
For example, given the following sentence: **"I am happy because I am learning"**, your dictionary should return the following:
<table style="width:20%">
<tr>
<td> <b>Key </b> </td>
<td> <b>Value </b> </td>
</tr>
<tr>
<td> I </td>
<td> 2</td>
</tr>
<tr>
<td>am</td>
<td>2</td>
</tr>
<tr>
<td>happy</td>
<td>1</td>
</tr>
<tr>
<td>because</td>
<td>1</td>
</tr>
<tr>
<td>learning</td>
<td>1</td>
</tr>
</table>
**Instructions**:
Implement a `get_count` which returns a dictionary where the key is a word and the value is the number of times the word appears in the list.
<details>
<summary>
<font size="3" color="darkgreen"><b>Hints</b></font>
</summary>
<p>
<ul>
<li>Try implementing this using a for loop and a regular dictionary. This may be good practice for similar coding interview questions</li>
<li>You can also use defaultdict instead of a regular dictionary, along with the for loop</li>
<li>Otherwise, to skip using a for loop, you can use Python's <a href="https://docs.python.org/3.7/library/collections.html#collections.Counter" > Counter class</a> </li>
</ul>
</p>
</details>
```
# UNQ_C2 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# UNIT TEST COMMENT: Candidate for Table Driven Tests
# GRADED FUNCTION: get_count
def get_count(word_l):
'''
Input:
word_l: a list of words representing the corpus.
Output:
word_count_dict: The wordcount dictionary where key is the word and value is its frequency.
'''
word_count_dict = {} # fill this with word counts
### START CODE HERE
word_count_dict = Counter(word_l)
### END CODE HERE ###
return word_count_dict
#DO NOT MODIFY THIS CELL
word_count_dict = get_count(word_l)
print(f"There are {len(word_count_dict)} key values pairs")
print(f"The count for the word 'thee' is {word_count_dict.get('thee',0)}")
```
#### Expected Output
```Python
There are 6116 key values pairs
The count for the word 'thee' is 240
```
<a name='ex-3'></a>
### Exercise 3
Given the dictionary of word counts, compute the probability that each word will appear if randomly selected from the corpus of words.
$$P(w_i) = \frac{C(w_i)}{M} \tag{Eqn-2}$$
where
$C(w_i)$ is the total number of times $w_i$ appears in the corpus.
$M$ is the total number of words in the corpus.
For example, the probability of the word 'am' in the sentence **'I am happy because I am learning'** is:
$$P(am) = \frac{C(w_i)}{M} = \frac {2}{7} \tag{Eqn-3}.$$
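As a quick sanity check before writing `get_probs`, the snippet below computes Eqn-3 directly for the toy sentence above (a minimal sketch, separate from the graded code):
```python
from collections import Counter

# Toy corpus from the example sentence above
toy_words = "I am happy because I am learning".lower().split()
counts = Counter(toy_words)
M = sum(counts.values())      # total number of words, M = 7
print(counts['am'] / M)       # 2/7 ≈ 0.2857
```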
**Instructions:** Implement the `get_probs` function, which gives you the probability that a word occurs in a sample. This returns a dictionary where the keys are words, and the value for each word is its probability in the corpus of words.
<details>
<summary>
<font size="3" color="darkgreen"><b>Hints</b></font>
</summary>
<p>
General advice
<ul>
<li> Use dictionary.values() </li>
<li> Use sum() </li>
<li> The cardinality (number of words in the corpus) should be equal to len(word_l). You will calculate this same number, but using the word count dictionary.</li>
</ul>
If you're using a for loop:
<ul>
<li> Use dictionary.keys() </li>
</ul>
If you're using a dictionary comprehension:
<ul>
<li>Use dictionary.items() </li>
</ul>
</p>
</details>
```
# UNQ_C3 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: get_probs
def get_probs(word_count_dict):
'''
Input:
word_count_dict: The wordcount dictionary where key is the word and value is its frequency.
Output:
probs: A dictionary where keys are the words and the values are the probability that a word will occur.
'''
probs = {} # return this variable correctly
### START CODE HERE ###
M = sum(word_count_dict.values(), 0)
for key in word_count_dict:
probs[key] = word_count_dict[key] / M
### END CODE HERE ###
return probs
#DO NOT MODIFY THIS CELL
probs = get_probs(word_count_dict)
print(f"Length of probs is {len(probs)}")
print(f"P('thee') is {probs['thee']:.4f}")
```
#### Expected Output
```Python
Length of probs is 6116
P('thee') is 0.0045
```
<a name='2'></a>
# Part 2: String Manipulations
Now, that you have computed $P(w_i)$ for all the words in the corpus, you will write a few functions to manipulate strings so that you can edit the erroneous strings and return the right spellings of the words. In this section, you will implement four functions:
* `delete_letter`: given a word, it returns all the possible strings that have **one character removed**.
* `switch_letter`: given a word, it returns all the possible strings that have **two adjacent letters switched**.
* `replace_letter`: given a word, it returns all the possible strings that have **one character replaced by another different letter**.
* `insert_letter`: given a word, it returns all the possible strings that have an **additional character inserted**.
#### List comprehensions
String and list manipulation in python will often make use of a python feature called [list comprehensions](https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions). The routines below will be described as using list comprehensions, but if you would rather implement them in another way, you are free to do so as long as the result is the same. Further, the following section will provide detailed instructions on how to use list comprehensions and how to implement the desired functions. If you are a python expert, feel free to skip the python hints and move to implementing the routines directly.
Python list comprehensions embed a looping structure inside of a list declaration, collapsing many lines of code into a single line. If you are not familiar with them, they may seem slightly out of order relative to for loops.
<div style="width:image width px; font-size:100%; text-align:center;"><img src='GenericListComp3.PNG' alt="alternate text" width="width" height="height" style="width:800px;height:400px;"/> Figure 2 </div>
The diagram above shows that the components of a list comprehension are the same components you would find in a typical for loop that appends to a list, but in a different order. With that in mind, we'll continue the specifics of this assignment. We will be very descriptive for the first function, `deletes()`, and less so in later functions as you become familiar with list comprehensions.
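As a quick reminder of the pattern shown in Figure 2, here is a minimal, generic sketch of a for loop that appends to a list and the equivalent list comprehension (not part of the graded code):
```python
# For-loop version
squares = []
for n in range(5):
    if n % 2 == 0:            # optional condition
        squares.append(n * n)

# Equivalent list comprehension: [expression for item in iterable if condition]
squares_lc = [n * n for n in range(5) if n % 2 == 0]

print(squares, squares_lc)    # [0, 4, 16] [0, 4, 16]
```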
<a name='ex-4'></a>
### Exercise 4
**Instructions for delete_letter():** Implement a `delete_letter()` function that, given a word, returns a list of strings with one character deleted.
For example, given the word **nice**, it would return the set: {'ice', 'nce', 'nic', 'nie'}.
**Step 1:** Create a list of 'splits'. This is all the ways you can split a word into Left and Right: For example,
'nice' is split into: `[('', 'nice'), ('n', 'ice'), ('ni', 'ce'), ('nic', 'e'), ('nice', '')]`
This is common to all four functions (delete, replace, switch, insert).
<div style="width:image width px; font-size:100%; text-align:center;"><img src='Splits1.PNG' alt="alternate text" width="width" height="height" style="width:650px;height:200px;" /> Figure 3 </div>
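A minimal sketch of Step 1 for the word 'nice' (it reproduces the splits listed above and in Figure 3):
```python
word = 'nice'
# All (Left, Right) splits of the word, including the empty strings at both ends
split_l = [(word[:i], word[i:]) for i in range(len(word) + 1)]
print(split_l)
# [('', 'nice'), ('n', 'ice'), ('ni', 'ce'), ('nic', 'e'), ('nice', '')]
```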
**Step 2:** This is specific to `delete_letter`. Here, we are generating all words that result from deleting one character.
This can be done in a single line with a list comprehension. You can make use of this type of syntax:
`[f(a,b) for a, b in splits if condition]`
For our 'nice' example you get:
['ice', 'nce', 'nie', 'nic']
<div style="width:image width px; font-size:100%; text-align:center;"><img src='ListComp2.PNG' alt="alternate text" width="width" height="height" style="width:550px;height:300px;" /> Figure 4 </div>
#### Levels of assistance
Try this exercise with these levels of assistance.
- We hope this will make it a meaningful experience, but not a frustrating one.
- Start with level 1, then move onto level 2, and 3 as needed.
- Level 1. Try to think this through and implement this yourself.
- Level 2. Click on the "Level 2 Hints" section for some hints to get started.
- Level 3. If you would prefer more guidance, please click on the "Level 3 Hints" cell for step by step instructions.
- If you are still stuck, look at the images in the "list comprehensions" section above.
<details>
<summary>
<font size="3" color="darkgreen"><b>Level 2 Hints</b></font>
</summary>
<p>
<ul>
<li>Use array slicing like my_string[0:2]</li>
<li>Use list comprehensions or for loops</li>
</ul>
</p>
</details>
<details>
<summary>
<font size="3" color="darkgreen"><b>Level 3 Hints</b></font>
</summary>
<p>
<ul>
<li>splits: Use array slicing, like my_str[0:2], to separate a string into two pieces.</li>
<li>Do this in a loop or list comprehension, so that you have a list of tuples.
<li> For example, "cake" can get split into "ca" and "ke". They're stored in a tuple ("ca","ke"), and the tuple is appended to a list. We'll refer to these as L and R, so the tuple is (L,R)</li>
<li>When choosing the range for your loop, if you input the word "cans" and generate the tuple ('cans',''), make sure to include an if statement to check the length of that right-side string (R) in the tuple (L,R) </li>
<li>deletes: Go through the list of tuples and combine the two strings together. You can use the + operator to combine two strings</li>
<li>When combining the tuples, make sure that you leave out a middle character.</li>
<li>Use array slicing to leave out the first character of the right substring.</li>
</ul>
</p>
</details>
```
# UNQ_C4 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# UNIT TEST COMMENT: Candidate for Table Driven Tests
# GRADED FUNCTION: deletes
def delete_letter(word, verbose=False):
'''
Input:
word: the string/word for which you will generate all possible words
in the vocabulary which have 1 missing character
Output:
delete_l: a list of all possible strings obtained by deleting 1 character from word
'''
delete_l = []
split_l = []
### START CODE HERE ###
split_l = [(word[:i], word[i:]) for i in range(len(word) + 1) if word[i:]]
delete_l = [L + R[1:] for L, R in split_l]
### END CODE HERE ###
if verbose: print(f"input word {word}, \nsplit_l = {split_l}, \ndelete_l = {delete_l}")
return delete_l
delete_word_l = delete_letter(word="cans",
verbose=True)
```
#### Expected Output
```CPP
Note: You might get a slightly different result with split_l
input word cans,
split_l = [('', 'cans'), ('c', 'ans'), ('ca', 'ns'), ('can', 's')],
delete_l = ['ans', 'cns', 'cas', 'can']
```
#### Note 1
- Notice how it has the extra tuple `('cans', '')`.
- This will be fine as long as you have checked the size of the right-side substring in tuple (L,R).
- Can you explain why this will give you the same result for the list of deletion strings (delete_l)?
```CPP
input word cans,
split_l = [('', 'cans'), ('c', 'ans'), ('ca', 'ns'), ('can', 's'), ('cans', '')],
delete_l = ['ans', 'cns', 'cas', 'can']
```
#### Note 2
If you end up getting the same word as your input word, like this:
```Python
input word cans,
split_l = [('', 'cans'), ('c', 'ans'), ('ca', 'ns'), ('can', 's'), ('cans', '')],
delete_l = ['ans', 'cns', 'cas', 'can', 'cans']
```
- Check how you set the `range`.
- See if you check the length of the string on the right-side of the split.
```
# test # 2
print(f"Number of outputs of delete_letter('at') is {len(delete_letter('at'))}")
```
#### Expected output
```CPP
Number of outputs of delete_letter('at') is 2
```
<a name='ex-5'></a>
### Exercise 5
**Instructions for switch_letter()**: Now implement a function that switches two letters in a word. It takes in a word and returns a list of all the possible switches of two letters **that are adjacent to each other**.
- For example, given the word 'eta', it returns {'eat', 'tea'}, but does not return 'ate'.
**Step 1:** is the same as in delete_letter()
**Step 2:** A list comprehension or for loop which forms strings by swapping adjacent letters. This is of the form:
`[f(L,R) for L, R in splits if condition]` where 'condition' will test the length of R in a given iteration. See below.
<div style="width:image width px; font-size:100%; text-align:center;"><img src='Switches1.PNG' alt="alternate text" width="width" height="height" style="width:600px;height:200px;"/> Figure 5 </div>
#### Levels of difficulty
Try this exercise with these levels of difficulty.
- Level 1. Try to think this through and implement this yourself.
- Level 2. Click on the "Level 2 Hints" section for some hints to get started.
- Level 3. If you would prefer more guidance, please click on the "Level 3 Hints" cell for step by step instructions.
<details>
<summary>
<font size="3" color="darkgreen"><b>Level 2 Hints</b></font>
</summary>
<p>
<ul>
<li>Use array slicing like my_string[0:2]</li>
<li>Use list comprehensions or for loops</li>
<li>To do a switch, think of the whole word as divided into 4 distinct parts. Write out 'cupcakes' on a piece of paper and see how you can split it into ('cupc', 'k', 'a', 'es')</li>
</ul>
</p>
</details>
<details>
<summary>
<font size="3" color="darkgreen"><b>Level 3 Hints</b></font>
</summary>
<p>
<ul>
<li>splits: Use array slicing, like my_str[0:2], to separate a string into two pieces.</li>
<li>Splitting is the same as for delete_letter</li>
<li>To perform the switch, go through the list of tuples and combine four strings together. You can use the + operator to combine strings</li>
<li>The four strings will be the left substring from the split tuple, followed by the first (index 1) character of the right substring, then the zero-th character (index 0) of the right substring, and then the remaining part of the right substring.</li>
<li>Unlike delete_letter, you will want to check that your right substring is at least a minimum length. To see why, review the previous hint bullet point (directly before this one).</li>
</ul>
</p>
</details>
```
# UNQ_C5 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# UNIT TEST COMMENT: Candidate for Table Driven Tests
# GRADED FUNCTION: switches
def switch_letter(word, verbose=False):
'''
Input:
word: input string
Output:
switches: a list of all possible strings with one adjacent character switched
'''
switch_l = []
split_l = []
### START CODE HERE ###
split_l = [(word[:i], word[i:]) for i in range(len(word) + 1) if word[i:]]
switch_l = [L + R[1] + R[0] + R[2:] for L, R in split_l if len(R) > 1]
### END CODE HERE ###
if verbose: print(f"Input word = {word} \nsplit_l = {split_l} \nswitch_l = {switch_l}")
return switch_l
switch_word_l = switch_letter(word="eta",
verbose=True)
```
#### Expected output
```Python
Input word = eta
split_l = [('', 'eta'), ('e', 'ta'), ('et', 'a')]
switch_l = ['tea', 'eat']
```
#### Note 1
You may get this:
```Python
Input word = eta
split_l = [('', 'eta'), ('e', 'ta'), ('et', 'a'), ('eta', '')]
switch_l = ['tea', 'eat']
```
- Notice how it has the extra tuple `('eta', '')`.
- This is also correct.
- Can you think of why this is the case?
#### Note 2
If you get an error
```Python
IndexError: string index out of range
```
- Please see if you have checked the length of the strings when switching characters.
```
# test # 2
print(f"Number of outputs of switch_letter('at') is {len(switch_letter('at'))}")
```
#### Expected output
```CPP
Number of outputs of switch_letter('at') is 1
```
<a name='ex-6'></a>
### Exercise 6
**Instructions for replace_letter()**: Now implement a function that takes in a word and returns a list of strings with one **replaced letter** from the original word.
**Step 1:** is the same as in `delete_letter()`
**Step 2:** A list comprehension or for loop which forms strings by replacing letters. This can be of the form:
`[f(a,b,c) for a, b in splits if condition for c in string]` Note the use of the second for loop.
It is expected in this routine that one or more of the replacements will include the original word. For example, replacing the first letter of 'ear' with 'e' will return 'ear'.
**Step 3:** Remove the original input letter from the output.
<details>
<summary>
<font size="3" color="darkgreen"><b>Hints</b></font>
</summary>
<p>
<ul>
<li>To remove a word from a list, first store its contents inside a set()</li>
<li>Use set.discard('the_word') to remove a word from a set (if the word does not exist in the set, it will not throw a KeyError). Using set.remove('the_word') throws a KeyError if the word does not exist in the set.</li>
</ul>
</p>
</details>
```
# UNQ_C6 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# UNIT TEST COMMENT: Candidate for Table Driven Tests
# GRADED FUNCTION: replaces
def replace_letter(word, verbose=False):
'''
Input:
word: the input string/word
Output:
replaces: a list of all possible strings where we replaced one letter from the original word.
'''
letters = 'abcdefghijklmnopqrstuvwxyz'
replace_l = []
split_l = []
### START CODE HERE ###
split_l = [(word[:i], word[i:]) for i in range(len(word) + 1) if word[i:]]
replace_set = set([L + i + R[1:] for L, R in split_l for i in letters])
replace_set.discard(word)
### END CODE HERE ###
# turn the set back into a list and sort it, for easier viewing
replace_l = sorted(list(replace_set))
if verbose: print(f"Input word = {word} \nsplit_l = {split_l} \nreplace_l {replace_l}")
return replace_l
replace_l = replace_letter(word='can',
verbose=True)
```
#### Expected Output
```Python
Input word = can
split_l = [('', 'can'), ('c', 'an'), ('ca', 'n')]
replace_l ['aan', 'ban', 'caa', 'cab', 'cac', 'cad', 'cae', 'caf', 'cag', 'cah', 'cai', 'caj', 'cak', 'cal', 'cam', 'cao', 'cap', 'caq', 'car', 'cas', 'cat', 'cau', 'cav', 'caw', 'cax', 'cay', 'caz', 'cbn', 'ccn', 'cdn', 'cen', 'cfn', 'cgn', 'chn', 'cin', 'cjn', 'ckn', 'cln', 'cmn', 'cnn', 'con', 'cpn', 'cqn', 'crn', 'csn', 'ctn', 'cun', 'cvn', 'cwn', 'cxn', 'cyn', 'czn', 'dan', 'ean', 'fan', 'gan', 'han', 'ian', 'jan', 'kan', 'lan', 'man', 'nan', 'oan', 'pan', 'qan', 'ran', 'san', 'tan', 'uan', 'van', 'wan', 'xan', 'yan', 'zan']
```
- Note how the input word 'can' should not be one of the output words.
#### Note 1
If you get something like this:
```Python
Input word = can
split_l = [('', 'can'), ('c', 'an'), ('ca', 'n'), ('can', '')]
replace_l ['aan', 'ban', 'caa', 'cab', 'cac', 'cad', 'cae', 'caf', 'cag', 'cah', 'cai', 'caj', 'cak', 'cal', 'cam', 'cao', 'cap', 'caq', 'car', 'cas', 'cat', 'cau', 'cav', 'caw', 'cax', 'cay', 'caz', 'cbn', 'ccn', 'cdn', 'cen', 'cfn', 'cgn', 'chn', 'cin', 'cjn', 'ckn', 'cln', 'cmn', 'cnn', 'con', 'cpn', 'cqn', 'crn', 'csn', 'ctn', 'cun', 'cvn', 'cwn', 'cxn', 'cyn', 'czn', 'dan', 'ean', 'fan', 'gan', 'han', 'ian', 'jan', 'kan', 'lan', 'man', 'nan', 'oan', 'pan', 'qan', 'ran', 'san', 'tan', 'uan', 'van', 'wan', 'xan', 'yan', 'zan']
```
- Notice how split_l has an extra tuple `('can', '')`, but the output is still the same, so this is okay.
#### Note 2
If you get something like this:
```Python
Input word = can
split_l = [('', 'can'), ('c', 'an'), ('ca', 'n'), ('can', '')]
replace_l ['aan', 'ban', 'caa', 'cab', 'cac', 'cad', 'cae', 'caf', 'cag', 'cah', 'cai', 'caj', 'cak', 'cal', 'cam', 'cana', 'canb', 'canc', 'cand', 'cane', 'canf', 'cang', 'canh', 'cani', 'canj', 'cank', 'canl', 'canm', 'cann', 'cano', 'canp', 'canq', 'canr', 'cans', 'cant', 'canu', 'canv', 'canw', 'canx', 'cany', 'canz', 'cao', 'cap', 'caq', 'car', 'cas', 'cat', 'cau', 'cav', 'caw', 'cax', 'cay', 'caz', 'cbn', 'ccn', 'cdn', 'cen', 'cfn', 'cgn', 'chn', 'cin', 'cjn', 'ckn', 'cln', 'cmn', 'cnn', 'con', 'cpn', 'cqn', 'crn', 'csn', 'ctn', 'cun', 'cvn', 'cwn', 'cxn', 'cyn', 'czn', 'dan', 'ean', 'fan', 'gan', 'han', 'ian', 'jan', 'kan', 'lan', 'man', 'nan', 'oan', 'pan', 'qan', 'ran', 'san', 'tan', 'uan', 'van', 'wan', 'xan', 'yan', 'zan']
```
- Notice how there are strings that are 1 letter longer than the original word, such as `cana`.
- Please check for the case when there is an empty string `''`, and if so, do not use that empty string when setting replace_l.
```
# test # 2
print(f"Number of outputs of switch_letter('at') is {len(switch_letter('at'))}")
```
#### Expected output
```CPP
Number of outputs of switch_letter('at') is 1
```
<a name='ex-7'></a>
### Exercise 7
**Instructions for insert_letter()**: Now implement a function that takes in a word and returns a list with a letter inserted at every offset.
**Step 1:** is the same as in `delete_letter()`
**Step 2:** This can be a list comprehension of the form:
`[f(a,b,c) for a, b in splits if condition for c in string]`
```
# UNQ_C7 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# UNIT TEST COMMENT: Candidate for Table Driven Tests
# GRADED FUNCTION: inserts
def insert_letter(word, verbose=False):
'''
Input:
word: the input string/word
Output:
inserts: a set of all possible strings with one new letter inserted at every offset
'''
letters = 'abcdefghijklmnopqrstuvwxyz'
insert_l = []
split_l = []
### START CODE HERE ###
split_l = [(word[:i], word[i:]) for i in range(len(word) + 1)]
insert_l = [L + i + R for L, R in split_l for i in letters]
### END CODE HERE ###
if verbose: print(f"Input word {word} \nsplit_l = {split_l} \ninsert_l = {insert_l}")
return insert_l
insert_l = insert_letter('at', True)
print(f"Number of strings output by insert_letter('at') is {len(insert_l)}")
```
#### Expected output
```Python
Input word at
split_l = [('', 'at'), ('a', 't'), ('at', '')]
insert_l = ['aat', 'bat', 'cat', 'dat', 'eat', 'fat', 'gat', 'hat', 'iat', 'jat', 'kat', 'lat', 'mat', 'nat', 'oat', 'pat', 'qat', 'rat', 'sat', 'tat', 'uat', 'vat', 'wat', 'xat', 'yat', 'zat', 'aat', 'abt', 'act', 'adt', 'aet', 'aft', 'agt', 'aht', 'ait', 'ajt', 'akt', 'alt', 'amt', 'ant', 'aot', 'apt', 'aqt', 'art', 'ast', 'att', 'aut', 'avt', 'awt', 'axt', 'ayt', 'azt', 'ata', 'atb', 'atc', 'atd', 'ate', 'atf', 'atg', 'ath', 'ati', 'atj', 'atk', 'atl', 'atm', 'atn', 'ato', 'atp', 'atq', 'atr', 'ats', 'att', 'atu', 'atv', 'atw', 'atx', 'aty', 'atz']
Number of strings output by insert_letter('at') is 78
```
#### Note 1
If you get a split_l like this:
```Python
Input word at
split_l = [('', 'at'), ('a', 't')]
insert_l = ['aat', 'bat', 'cat', 'dat', 'eat', 'fat', 'gat', 'hat', 'iat', 'jat', 'kat', 'lat', 'mat', 'nat', 'oat', 'pat', 'qat', 'rat', 'sat', 'tat', 'uat', 'vat', 'wat', 'xat', 'yat', 'zat', 'aat', 'abt', 'act', 'adt', 'aet', 'aft', 'agt', 'aht', 'ait', 'ajt', 'akt', 'alt', 'amt', 'ant', 'aot', 'apt', 'aqt', 'art', 'ast', 'att', 'aut', 'avt', 'awt', 'axt', 'ayt', 'azt']
Number of strings output by insert_letter('at') is 52
```
- Notice that split_l is missing the extra tuple ('at', ''). For insertion, we actually **WANT** this tuple.
- The function is not creating all the desired output strings.
- Check the range that you use for the for loop.
#### Note 2
If you see this:
```Python
Input word at
split_l = [('', 'at'), ('a', 't'), ('at', '')]
insert_l = ['aat', 'bat', 'cat', 'dat', 'eat', 'fat', 'gat', 'hat', 'iat', 'jat', 'kat', 'lat', 'mat', 'nat', 'oat', 'pat', 'qat', 'rat', 'sat', 'tat', 'uat', 'vat', 'wat', 'xat', 'yat', 'zat', 'aat', 'abt', 'act', 'adt', 'aet', 'aft', 'agt', 'aht', 'ait', 'ajt', 'akt', 'alt', 'amt', 'ant', 'aot', 'apt', 'aqt', 'art', 'ast', 'att', 'aut', 'avt', 'awt', 'axt', 'ayt', 'azt']
Number of strings output by insert_letter('at') is 52
```
- Even though you may have fixed the split_l so that it contains the tuple `('at', '')`, notice that you're still missing some output strings.
- Notice that it's missing strings such as 'ata', 'atb', 'atc' all the way to 'atz'.
- To fix this, make sure that when you set insert_l, you allow the use of the empty string `''`.
```
# test # 2
print(f"Number of outputs of insert_letter('at') is {len(insert_letter('at'))}")
```
#### Expected output
```CPP
Number of outputs of insert_letter('at') is 78
```
<a name='3'></a>
# Part 3: Combining the edits
Now that you have implemented the string manipulations, you will create two functions that, given a string, will return all the possible single and double edits on that string. These will be `edit_one_letter()` and `edit_two_letters()`.
<a name='3-1'></a>
## 3.1 Edit one letter
<a name='ex-8'></a>
### Exercise 8
**Instructions**: Implement the `edit_one_letter` function to get all the possible edits that are one edit away from a word. The edits consist of the replace, insert, delete, and optionally the switch operation. You should use the previous functions you have already implemented to complete this function. The 'switch' function is a less common edit function, so its use will be selected by an "allow_switches" input argument.
Note that those functions return *lists* while this function should return a *python set*. Utilizing a set eliminates any duplicate entries.
<details>
<summary>
<font size="3" color="darkgreen"><b>Hints</b></font>
</summary>
<p>
<ul>
<li> Each of the functions returns a list. You can combine lists using the `+` operator. </li>
<li> To get unique strings (avoid duplicates), you can use the set() function. </li>
</ul>
</p>
</details>
```
# UNQ_C8 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# UNIT TEST COMMENT: Candidate for Table Driven Tests
# GRADED FUNCTION: edit_one_letter
def edit_one_letter(word, allow_switches = True):
"""
Input:
word: the string/word for which we will generate all possible words that are one edit away.
Output:
edit_one_set: a set of words with one possible edit. Please return a set, not a list.
"""
edit_one_set = set()
### START CODE HERE ###
switch_l = switch_letter(word) if allow_switches else []
edit_one_set = set(insert_letter(word) + delete_letter(word) + replace_letter(word) + switch_l)
### END CODE HERE ###
return edit_one_set
tmp_word = "at"
tmp_edit_one_set = edit_one_letter(tmp_word)
# turn this into a list to sort it, in order to view it
tmp_edit_one_l = sorted(list(tmp_edit_one_set))
print(f"input word {tmp_word} \nedit_one_l \n{tmp_edit_one_l}\n")
print(f"The type of the returned object should be a set {type(tmp_edit_one_set)}")
print(f"Number of outputs from edit_one_letter('at') is {len(edit_one_letter('at'))}")
```
#### Expected Output
```CPP
input word at
edit_one_l
['a', 'aa', 'aat', 'ab', 'abt', 'ac', 'act', 'ad', 'adt', 'ae', 'aet', 'af', 'aft', 'ag', 'agt', 'ah', 'aht', 'ai', 'ait', 'aj', 'ajt', 'ak', 'akt', 'al', 'alt', 'am', 'amt', 'an', 'ant', 'ao', 'aot', 'ap', 'apt', 'aq', 'aqt', 'ar', 'art', 'as', 'ast', 'ata', 'atb', 'atc', 'atd', 'ate', 'atf', 'atg', 'ath', 'ati', 'atj', 'atk', 'atl', 'atm', 'atn', 'ato', 'atp', 'atq', 'atr', 'ats', 'att', 'atu', 'atv', 'atw', 'atx', 'aty', 'atz', 'au', 'aut', 'av', 'avt', 'aw', 'awt', 'ax', 'axt', 'ay', 'ayt', 'az', 'azt', 'bat', 'bt', 'cat', 'ct', 'dat', 'dt', 'eat', 'et', 'fat', 'ft', 'gat', 'gt', 'hat', 'ht', 'iat', 'it', 'jat', 'jt', 'kat', 'kt', 'lat', 'lt', 'mat', 'mt', 'nat', 'nt', 'oat', 'ot', 'pat', 'pt', 'qat', 'qt', 'rat', 'rt', 'sat', 'st', 't', 'ta', 'tat', 'tt', 'uat', 'ut', 'vat', 'vt', 'wat', 'wt', 'xat', 'xt', 'yat', 'yt', 'zat', 'zt']
The type of the returned object should be a set <class 'set'>
Number of outputs from edit_one_letter('at') is 129
```
<a name='3-2'></a>
## Part 3.2 Edit two letters
<a name='ex-9'></a>
### Exercise 9
Now you can generalize this to get two edits on a word. To do so, you would have to get all the possible edits on a single word and then, for each modified word, modify it again.
**Instructions**: Implement the `edit_two_letters` function that returns a set of words that are two edits away. Note that creating additional edits based on the `edit_one_letter` function may 'restore' some one-edits to zero or one edits. That is allowed here; it is accounted for in `get_corrections`.
<details>
<summary>
<font size="3" color="darkgreen"><b>Hints</b></font>
</summary>
<p>
<ul>
<li>You will likely want to take the union of two sets.</li>
<li>You can either use set.union() or use the '|' (or operator) to union two sets</li>
<li>See the documentation <a href="https://docs.python.org/2/library/sets.html" > Python sets </a> for examples of using operators or functions of the Python set.</li>
</ul>
</p>
</details>
```
# UNQ_C9 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# UNIT TEST COMMENT: Candidate for Table Driven Tests
# GRADED FUNCTION: edit_two_letters
def edit_two_letters(word, allow_switches = True):
'''
Input:
word: the input string/word
Output:
edit_two_set: a set of strings with all possible two edits
'''
edit_two_set = set()
### START CODE HERE ###
edit_one_set = edit_one_letter(word, allow_switches)
for edited_word in edit_one_set:
edit_two_set = edit_two_set.union(edit_one_letter(edited_word, allow_switches))
### END CODE HERE ###
return edit_two_set
tmp_edit_two_set = edit_two_letters("a")
tmp_edit_two_l = sorted(list(tmp_edit_two_set))
print(f"Number of strings with edit distance of two: {len(tmp_edit_two_l)}")
print(f"First 10 strings {tmp_edit_two_l[:10]}")
print(f"Last 10 strings {tmp_edit_two_l[-10:]}")
print(f"The data type of the returned object should be a set {type(tmp_edit_two_set)}")
print(f"Number of strings that are 2 edit distances from 'at' is {len(edit_two_letters('at'))}")
```
#### Expected Output
```CPP
Number of strings with edit distance of two: 2654
First 10 strings ['', 'a', 'aa', 'aaa', 'aab', 'aac', 'aad', 'aae', 'aaf', 'aag']
Last 10 strings ['zv', 'zva', 'zw', 'zwa', 'zx', 'zxa', 'zy', 'zya', 'zz', 'zza']
The data type of the returned object should be a set <class 'set'>
Number of strings that are 2 edit distances from 'at' is 7154
```
<a name='3-3'></a>
## Part 3.3: Suggest spelling corrections
Now you will use your `edit_two_letters` function to get a set of all the possible 2 edits on your word. You will then use those strings to get the most probable word you meant to type, i.e., your typing suggestion.
<a name='ex-10'></a>
### Exercise 10
**Instructions**: Implement `get_corrections`, which returns a list of zero to n possible suggestion tuples of the form (word, probability_of_word).
**Step 1:** Generate suggestions for a supplied word: You'll use the edit functions you have developed. The 'suggestion algorithm' should follow this logic:
* If the word is in the vocabulary, suggest the word.
* Otherwise, if there are suggestions from `edit_one_letter` that are in the vocabulary, use those.
* Otherwise, if there are suggestions from `edit_two_letters` that are in the vocabulary, use those.
* Otherwise, suggest the input word.
* The idea is that words generated from fewer edits are more likely than words with more edits.
Note:
- Edits of one or two letters may 'restore' strings to either zero or one edit. This algorithm accounts for this by preferentially selecting lower distance edits first.
#### Short circuit
In Python, logical operations such as `and` and `or` have two useful properties. They can operate on lists and they have ['short-circuit' behavior](https://docs.python.org/3/library/stdtypes.html). Try these:
```
# example of logical operation on lists or sets
print( [] and ["a","b"] )
print( [] or ["a","b"] )
#example of Short circuit behavior
val1 = ["Most","Likely"] or ["Less","so"] or ["least","of","all"] # selects first, does not evaluate remainder
print(val1)
val2 = [] or [] or ["least","of","all"] # continues evaluation until there is a non-empty list
print(val2)
```
The logical `or` could be used to implement the suggestion algorithm very compactly, as in the sketch below. Alternatively, if/else constructs could be used.
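For example, a minimal sketch of the short-circuit version of the Step 1 suggestion logic might look like this (it assumes `vocab` is a set and reuses the edit functions defined above; the graded function below can use explicit if statements instead):
```python
# Compact suggestion logic using short-circuit 'or' -- illustrative sketch only
def suggest(word, vocab):
    return (
        (word in vocab and [word])                # keep the word if it is already known
        or list(edit_one_letter(word) & vocab)    # else known words one edit away
        or list(edit_two_letters(word) & vocab)   # else known words two edits away
        or [word]                                 # else fall back to the input word
    )
```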
**Step 2**: Create a 'best_words' dictionary where the 'key' is a suggestion and the 'value' is the probability of that word in your vocabulary. If the word is not in the vocabulary, assign it a probability of 0.
**Step 3**: Select the n best suggestions. There may be fewer than n.
<details>
<summary>
<font size="3" color="darkgreen"><b>Hints</b></font>
</summary>
<p>
<ul>
<li>edit_one_letter and edit_two_letters return *python sets*. </li>
<li> Sets have a handy <a href="https://docs.python.org/2/library/sets.html" > set.intersection </a> feature</li>
<li>To find the keys that have the highest values in a dictionary, you can use the Counter dictionary to create a Counter object from a regular dictionary. Then you can use Counter.most_common(n) to get the n most common keys.
</li>
<li>To find the intersection of two sets, you can use set.intersection or the & operator.</li>
<li>If you are not as familiar with short circuit syntax (as shown above), feel free to use if else statements instead.</li>
<li>To use an if statement to check if a set is empty, use 'if not x:' syntax </li>
</ul>
</p>
</details>
```
# UNQ_C10 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# UNIT TEST COMMENT: Candidate for Table Driven Tests
# GRADED FUNCTION: get_corrections
def get_corrections(word, probs, vocab, n=2, verbose = False):
'''
Input:
word: a user entered string to check for suggestions
probs: a dictionary that maps each word to its probability in the corpus
vocab: a set containing all the vocabulary
n: number of possible word corrections you want returned in the dictionary
Output:
n_best: a list of tuples with the most probable n corrected words and their probabilities.
'''
suggestions = []
n_best = []
### START CODE HERE ###
word_probs = {}
if word in vocab:
word_probs[word] = probs[word]
if len(word_probs) == 0:
one_edit_set = edit_one_letter(word, True)
for edited_word in one_edit_set:
if edited_word in vocab:
word_probs[edited_word] = probs[edited_word]
if len(word_probs) == 0:
two_edits_set = edit_two_letters(word, True)
for edited_word in two_edits_set:
if edited_word in vocab:
word_probs[edited_word] = probs[edited_word]
n_best = Counter(word_probs).most_common(n)
for best_word, prob in n_best:
suggestions.append(best_word)
### END CODE HERE ###
if verbose: print("entered word = ", word, "\nsuggestions = ", suggestions)
return n_best
# Test your implementation - feel free to try other words for my_word
my_word = 'dys'
tmp_corrections = get_corrections(my_word, probs, vocab, 2, verbose=True) # keep verbose=True
for i, word_prob in enumerate(tmp_corrections):
print(f"word {i}: {word_prob[0]}, probability {word_prob[1]:.6f}")
# CODE REVIEW COMMENT: using "tmp_corrections" instead of "cors". "cors" is not defined
print(f"data type of corrections {type(tmp_corrections)}")
```
#### Expected Output
- Note: This expected output is for `my_word = 'dys'`. Also, keep `verbose=True`
```CPP
entered word = dys
suggestions = {'days', 'dye'}
word 0: days, probability 0.000410
word 1: dye, probability 0.000019
data type of corrections <class 'list'>
```
<a name='4'></a>
# Part 4: Minimum Edit distance
Now that you have implemented your auto-correct, how do you evaluate the similarity between two strings? For example, 'waht' and 'what'.
Also, how do you efficiently find the shortest path to go from the word 'waht' to the word 'what'?
You will implement a dynamic programming system that will tell you the minimum number of edits required to convert a string into another string.
<a name='4-1'></a>
### Part 4.1 Dynamic Programming
Dynamic programming breaks a problem down into subproblems which can be combined to form the final solution. Here, given a source string source[0..i] and a target string target[0..j], we will compute the edit distance for every pair of prefixes (i, j). To do this efficiently, we will use a table to store the previously computed distances and use them to calculate the distances for longer substrings.
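As a tiny, generic illustration of this idea (deliberately unrelated to edit distance), the sketch below fills a table of subproblem results bottom-up so that each new entry only reuses entries computed earlier, exactly the way each cell $D[i,j]$ below reuses $D[i-1,j]$, $D[i,j-1]$, and $D[i-1,j-1]$:
```python
# Generic bottom-up dynamic programming sketch: Fibonacci numbers with a table.
def fib_table(n):
    table = [0, 1] + [0] * max(0, n - 1)           # table[k] will hold fib(k)
    for k in range(2, n + 1):
        table[k] = table[k - 1] + table[k - 2]     # reuse previously computed subproblems
    return table[n]

print(fib_table(10))  # 55
```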
You have to create a matrix and update each element in the matrix as follows:
$$\text{Initialization}$$
\begin{align}
D[0,0] &= 0 \\
D[i,0] &= D[i-1,0] + del\_cost(source[i]) \tag{4}\\
D[0,j] &= D[0,j-1] + ins\_cost(target[j]) \\
\end{align}
$$\text{Per Cell Operations}$$
\begin{align}
\\
D[i,j] =min
\begin{cases}
D[i-1,j] + del\_cost\\
D[i,j-1] + ins\_cost\\
D[i-1,j-1] + \left\{\begin{matrix}
rep\_cost; & if src[i]\neq tar[j]\\
0 ; & if src[i]=tar[j]
\end{matrix}\right.
\end{cases}
\tag{5}
\end{align}
So converting the source word **play** to the target word **stay**, using an insert cost of 1, a delete cost of 1, and a replace cost of 2, would give you the following table:
<table style="width:20%">
<tr>
<td> <b> </b> </td>
<td> <b># </b> </td>
<td> <b>s </b> </td>
<td> <b>t </b> </td>
<td> <b>a </b> </td>
<td> <b>y </b> </td>
</tr>
<tr>
<td> <b> # </b></td>
<td> 0</td>
<td> 1</td>
<td> 2</td>
<td> 3</td>
<td> 4</td>
</tr>
<tr>
<td> <b> p </b></td>
<td> 1</td>
<td> 2</td>
<td> 3</td>
<td> 4</td>
<td> 5</td>
</tr>
<tr>
<td> <b> l </b></td>
<td>2</td>
<td>3</td>
<td>4</td>
<td>5</td>
<td>6</td>
</tr>
<tr>
<td> <b> a </b></td>
<td>3</td>
<td>4</td>
<td>5</td>
<td>4</td>
<td>5</td>
</tr>
<tr>
<td> <b> y </b></td>
<td>4</td>
<td>5</td>
<td>6</td>
<td>5</td>
<td>4</td>
</tr>
</table>
The operations used in this algorithm are 'insert', 'delete', and 'replace'. These correspond to the functions that you defined earlier: insert_letter(), delete_letter() and replace_letter(). switch_letter() is not used here.
The diagram below describes how to initialize the table. Each entry in D[i,j] represents the minimum cost of converting string source[0:i] to string target[0:j]. The first column is initialized to represent the cumulative cost of deleting the source characters to convert string "EER" to "". The first row is initialized to represent the cumulative cost of inserting the target characters to convert from "" to "NEAR".
<div style="width:image width px; font-size:100%; text-align:center;"><img src='EditDistInit4.PNG' alt="alternate text" width="width" height="height" style="width:1000px;height:400px;"/> Figure 6 Initializing Distance Matrix</div>
Filling in the remainder of the table utilizes the 'Per Cell Operations' in equation (5) above. Note that the diagram below shows, in light grey, some of the 3 sub-calculations for each cell. Only the 'min' of those values is stored in the table by the `min_edit_distance()` function.
<div style="width:image width px; font-size:100%; text-align:center;"><img src='EditDistFill2.PNG' alt="alternate text" width="width" height="height" style="width:800px;height:400px;"/> Figure 7 Filling Distance Matrix</div>
Note that the formula for $D[i,j]$ shown in the image is equivalent to:
\begin{align}
\\
D[i,j] =min
\begin{cases}
D[i-1,j] + del\_cost\\
D[i,j-1] + ins\_cost\\
D[i-1,j-1] + \left\{\begin{matrix}
rep\_cost; & if src[i]\neq tar[j]\\
0 ; & if src[i]=tar[j]
\end{matrix}\right.
\end{cases}
\tag{5}
\end{align}
The variable `sub_cost` (for substitution cost) is the same as `rep_cost`, the replacement cost. We will stick with the term "replace" whenever possible.
Below are some examples of cells where replacement is used. This also shows the minimum path from the lower right final position where "EER" has been replaced by "NEAR" back to the start. This provides a starting point for the optional 'backtrace' algorithm below.
<div style="width:image width px; font-size:100%; text-align:center;"><img src='EditDistExample1.PNG' alt="alternate text" width="width" height="height" style="width:1200px;height:400px;"/> Figure 8 Examples Distance Matrix</div>
<a name='ex-11'></a>
### Exercise 11
Again, the word "substitution" appears in the figure, but think of this as "replacement".
**Instructions**: Implement the function below to get the minimum amount of edits required given a source string and a target string.
<details>
<summary>
<font size="3" color="darkgreen"><b>Hints</b></font>
</summary>
<p>
<ul>
<li>The range(start, stop, step) function excludes 'stop' from its output</li>
</ul>
</p>
</details>
```
# UNQ_C11 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: min_edit_distance
def min_edit_distance(source, target, ins_cost = 1, del_cost = 1, rep_cost = 2):
'''
Input:
source: a string corresponding to the string you are starting with
target: a string corresponding to the string you want to end with
ins_cost: an integer setting the insert cost
del_cost: an integer setting the delete cost
rep_cost: an integer setting the replace cost
Output:
D: a matrix of len(source)+1 by len(target)+1 containing minimum edit distances
med: the minimum edit distance (med) required to convert the source string to the target
'''
# use deletion and insert cost as 1
m = len(source)
n = len(target)
#initialize cost matrix with zeros and dimensions (m+1,n+1)
D = np.zeros((m+1, n+1), dtype=int)
### START CODE HERE (Replace instances of 'None' with your code) ###
# Fill in column 0, from row 1 to row m, both inclusive
for row in range(1,m+1): # Replace None with the proper range
D[row,0] = row
# Fill in row 0, for all columns from 1 to n, both inclusive
for col in range(1,n+1): # Replace None with the proper range
D[0,col] = col
# Loop through row 1 to row m, both inclusive
for row in range(1,m+1):
# Loop through column 1 to column n, both inclusive
for col in range(1,n+1):
# Initialize r_cost to the 'replace' cost that is passed into this function
r_cost = rep_cost
# Check to see if source character at the previous row
# matches the target character at the previous column,
if source[row-1] == target[col-1]:
# Update the replacement cost to 0 if source and target are the same
r_cost = 0
# Update the cost at row, col based on previous entries in the cost matrix
# Refer to the equation calculate for D[i,j] (the minimum of three calculated costs)
D[row,col] = min((D[row-1, col] + del_cost), (D[row, col-1] + ins_cost), (D[row-1,col-1] + r_cost))
# Set the minimum edit distance with the cost found at row m, column n
med = D[m,n]
### END CODE HERE ###
return D, med
#DO NOT MODIFY THIS CELL
# testing your implementation
source = 'play'
target = 'stay'
matrix, min_edits = min_edit_distance(source, target)
print("minimum edits: ",min_edits, "\n")
idx = list('#' + source)
cols = list('#' + target)
df = pd.DataFrame(matrix, index=idx, columns= cols)
print(df)
```
**Expected Results:**
```CPP
minimum edits: 4
# s t a y
# 0 1 2 3 4
p 1 2 3 4 5
l 2 3 4 5 6
a 3 4 5 4 5
y 4 5 6 5 4
```
```
#DO NOT MODIFY THIS CELL
# testing your implementation
source = 'eer'
target = 'near'
matrix, min_edits = min_edit_distance(source, target)
print("minimum edits: ",min_edits, "\n")
idx = list(source)
idx.insert(0, '#')
cols = list(target)
cols.insert(0, '#')
df = pd.DataFrame(matrix, index=idx, columns= cols)
print(df)
```
**Expected Results**
```CPP
minimum edits: 3
# n e a r
# 0 1 2 3 4
e 1 2 1 2 3
e 2 3 2 3 4
r 3 4 3 4 3
```
We can now test several of our routines at once:
```
source = "eer"
targets = edit_one_letter(source,allow_switches = False) #disable switches since min_edit_distance does not include them
for t in targets:
_, min_edits = min_edit_distance(source, t,1,1,1) # set ins, del, sub costs all to one
if min_edits != 1: print(source, t, min_edits)
```
**Expected Results**
```CPP
(empty)
```
The 'replace()' routine utilizes all letters a-z, one of which returns the original word.
```
source = "eer"
targets = edit_two_letters(source,allow_switches = False) #disable switches since min_edit_distance does not include them
for t in targets:
_, min_edits = min_edit_distance(source, t,1,1,1) # set ins, del, sub costs all to one
if min_edits != 2 and min_edits != 1: print(source, t, min_edits)
```
**Expected Results**
```CPP
eer eer 0
```
We have to allow single edits here because some two_edits will restore a single edit.
# Submission
Make sure you submit your assignment before you modify anything below
<a name='5'></a>
# Part 5: Optional - Backtrace
Once you have computed your matrix using minimum edit distance, how would you find the shortest path from the top-left corner to the bottom-right corner?
Note that you could use the backtrace algorithm. Try to find the shortest path given the matrix that your `min_edit_distance` function returned.
You can use these [lecture slides on minimum edit distance](https://web.stanford.edu/class/cs124/lec/med.pdf) by Dan Jurafsky to learn about the algorithm for backtrace.
```
# Experiment with back trace - insert your code here
```
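If you want a starting point, here is one possible sketch (an assumption about how a backtrace could be written, not the official solution): start at the bottom-right cell of the matrix returned by `min_edit_distance`, and repeatedly move to whichever neighboring cell (diagonal, up, or left) could have produced the current value, recording the corresponding operation.
```python
def backtrace(D, source, target, ins_cost=1, del_cost=1, rep_cost=2):
    """Recover one minimum-cost edit path from the distance matrix D."""
    i, j = len(source), len(target)
    path = []
    while i > 0 or j > 0:
        # Prefer the diagonal move (keep or replace) when it explains D[i, j]
        if i > 0 and j > 0:
            r_cost = 0 if source[i - 1] == target[j - 1] else rep_cost
            if D[i, j] == D[i - 1, j - 1] + r_cost:
                if r_cost == 0:
                    path.append('keep ' + source[i - 1])
                else:
                    path.append(f'replace {source[i - 1]} -> {target[j - 1]}')
                i, j = i - 1, j - 1
                continue
        # Otherwise try a deletion (move up); else it must be an insertion (move left)
        if i > 0 and D[i, j] == D[i - 1, j] + del_cost:
            path.append(f'delete {source[i - 1]}')
            i -= 1
        else:
            path.append(f'insert {target[j - 1]}')
            j -= 1
    return path[::-1]  # operations ordered from the start of the word to the end

matrix, _ = min_edit_distance('eer', 'near')
print(backtrace(matrix, 'eer', 'near'))
# e.g. ['insert n', 'keep e', 'replace e -> a', 'keep r']
```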
#### References
- Dan Jurafsky - Speech and Language Processing - Textbook
- This auto-correct explanation was first done by Peter Norvig in 2007
---
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
#from sklearn.metrics import precision_recall_curve
from sklearn.metrics import plot_precision_recall_curve  # used below to plot the precision-recall curve
from sklearn.metrics import average_precision_score
%matplotlib inline
```
# demonstration of vectorizer in Scikit learn
```
# word counts
# list of text documents
text_test = ["This is a test document","this is a second text","and here is a third text", "this is a dum text!"]
# create the transform
vectorizer_test = CountVectorizer()
# tokenize and build vocab
vectorizer_test.fit(text_test)
# summarize
print(vectorizer_test.vocabulary_)
# encode document
vector_test = vectorizer_test.transform(text_test)
# show encoded vector
print(vector_test.toarray())
```
# Spam/Ham dataset
```
# import data from TSV
sms_data=pd.read_csv('SMSSpamCollection.txt', sep='\t')
sms_data.head()
# create a new column with inferred class
def f(row):
if row['Label'] == "ham":
val = 1
else:
val = 0
return val
sms_data['Class'] = sms_data.apply(f, axis=1)
sms_data.head()
vectorizer = CountVectorizer(
analyzer = 'word',
lowercase = True,
)
features = vectorizer.fit_transform(
sms_data['Text']
)
# split X and Y
X = features.toarray()
Y= sms_data['Class']
# split training and testing
X_train, X_test, Y_train, Y_test = train_test_split(X, Y,test_size=0.20, random_state=1)
log_model = LogisticRegression(penalty='l2', solver='lbfgs', class_weight='balanced')
log_model = log_model.fit(X_train, Y_train)
# make predictions
Y_pred = log_model.predict(X_test)
# make predictions
pred_df = pd.DataFrame({'Actual': Y_test, 'Predicted': Y_pred.flatten()})
pred_df.head()
# compute accuracy of the spam filter
print(accuracy_score(Y_test, Y_pred))
# compute precision and recall
average_precision = average_precision_score(Y_test, Y_pred)
print('Average precision-recall score: {0:0.2f}'.format(
average_precision))
# precision-recall curve
disp = plot_precision_recall_curve(log_model, X_test, Y_test)
disp.ax_.set_title('2-class Precision-Recall curve: '
'AP={0:0.2f}'.format(average_precision))
# look at the words learnt and their coefficients
coeff_df = pd.DataFrame({'coeffs': log_model.coef_.flatten(), 'Words': vectorizer.get_feature_names()}, )
# Words with highest coefficients -> predictive of 'Ham'
coeff_df.nlargest(10, 'coeffs')
# Words with highest coefficients -> predictive of 'Spam'
coeff_df.nsmallest(10, 'coeffs')
```
# Upload review dataset
```
# import data from CSV
rev_data=pd.read_csv('book_reviews.csv')
rev_data.head()
# fix issues with format text
reviews = rev_data['reviewText'].apply(lambda x: np.str_(x))
# create a new column with inferred class
# count words in texts
# split X and Y
# split training and testing
# fit model (this takes a while)
# make predictions
# compute accuracy of the sentiment analysis
# compute precision and recall
# precision-recall curve
# look at the words learnt and their coefficients
coeff_df = pd.DataFrame({'coeffs': log_model.coef_.flatten(), 'Words': vectorizer.get_feature_names()}, )
# Words with highest coefficients -> predictive of 'good reviews'
coeff_df.nlargest(10, 'coeffs')
# Words with highest coefficients -> predictive of 'bad reviews'
coeff_df.nsmallest(10, 'coeffs')
```
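A minimal sketch of how the placeholder steps above might be filled in, mirroring the spam/ham pipeline earlier in this notebook. The rating column name ('overall') and the 4-star threshold for a "good" review are assumptions, since the schema of book_reviews.csv is not shown here; adjust them to the real columns. The names `vectorizer` and `log_model` are reused so the coefficient inspection above can be re-run against this model.
```python
# Sketch only -- 'overall' is an assumed rating column name (adjust to the real schema)
rev_data['Class'] = (rev_data['overall'] >= 4).astype(int)   # assumed: 4-5 stars = good review

# count words in texts
vectorizer = CountVectorizer(analyzer='word', lowercase=True)
features = vectorizer.fit_transform(reviews)

# split X and Y, then training and testing
X = features.toarray()
Y = rev_data['Class']
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.20, random_state=1)

# fit model (this takes a while) and make predictions
log_model = LogisticRegression(penalty='l2', solver='lbfgs', class_weight='balanced')
log_model = log_model.fit(X_train, Y_train)
Y_pred = log_model.predict(X_test)

# accuracy and average precision of the sentiment classifier
print(accuracy_score(Y_test, Y_pred))
print('Average precision-recall score: {0:0.2f}'.format(average_precision_score(Y_test, Y_pred)))
```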
---
```
import json
from pathlib import Path
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import xgboost as xgb
from category_encoders import OneHotEncoder
from pandas_profiling import ProfileReport
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import FunctionTransformer, MinMaxScaler, StandardScaler
from sklearn_pandas import DataFrameMapper, gen_features
sns.set_style("white")
ROOT_PATH = Path("..")
df = pd.read_csv(ROOT_PATH / "data/raw/metadata.csv")
svc_ids = pd.read_json(ROOT_PATH / "data/raw/song_vs_call.json").squeeze()
svc_df = df.loc[df.id.isin(svc_ids)].copy()
with open(ROOT_PATH / "data/processed/svc_split.json") as svc_split_file:
svc_split = json.load(svc_split_file)
train_ids = svc_split["train_ids"]
test_ids = svc_split["test_ids"]
# Add response variable
type_col = svc_df.type.str.lower().str.replace(" ", "").str.split(",")
filtered_type_col = type_col.apply(lambda l: set(l) - {"call", "song"})
svc_df["pred"] = type_col.apply(lambda l: "call" in l).astype(int)
# Add gender feature
def filter_gender(labels):
if "male" in labels:
return "male"
elif "female" in labels:
return "female"
else:
return np.nan
svc_df["gender"] = type_col.apply(filter_gender)
# Add age feature
def filter_age(labels):
if "adult" in labels:
return "adult"
elif "juvenile" in labels:
return "juvenile"
else:
return np.nan
svc_df["age"] = type_col.apply(filter_age)
# Set up date and time columns
svc_df.date = pd.to_datetime(svc_df.date, format="%Y-%m-%d", errors="coerce")
svc_df.time = pd.to_datetime(svc_df.time, format="%H:%M", errors="coerce")
svc_df["month"] = svc_df.date.dt.month
svc_df["day"] = svc_df.date.dt.day
svc_df["hour"] = svc_df.time.dt.hour
svc_df["minute"] = svc_df.time.dt.minute
profile = ProfileReport(svc_df, title="Metadata Report")
fname = "metadata_report.html"
profile.to_file(f"assets/{fname}")
keep_cols = [
"id",
"gen",
"sp",
"ssp",
"en",
"lat",
"lng",
"time",
"date",
"gender",
"age",
"month",
"day",
"hour",
"minute",
]
X_df, y_df = (
svc_df.reindex(columns=keep_cols).copy(),
svc_df.reindex(columns=["id", "pred"]).copy(),
)
dt_feats = gen_features(
columns=[["month"], ["day"], ["hour"], ["minute"]],
classes=[
SimpleImputer,
{"class": FunctionTransformer, "func": np.sin},
MinMaxScaler,
],
)
n_components = 50
feature_mapper = DataFrameMapper(
[
("id", None),
(["gen"], OneHotEncoder(drop_invariant=True, use_cat_names=True)),
(["sp"], OneHotEncoder(drop_invariant=True, use_cat_names=True)),
(["en"], OneHotEncoder(drop_invariant=True, use_cat_names=True)),
(["lat"], [SimpleImputer(), StandardScaler()]), # gaussian
(["lng"], [SimpleImputer(), MinMaxScaler()]), # bi-modal --> MinMaxScaler
# TODO: maybe later look into converting month / day into days since start of year
(
["month"],
[
SimpleImputer(),
FunctionTransformer(lambda X: np.sin((X - 1) * 2 * np.pi / 12)),
StandardScaler(), # gaussian
],
),
(
["day"],
[
SimpleImputer(),
FunctionTransformer(lambda X: np.sin(X * 2 * np.pi / 31)),
MinMaxScaler(), # uniform
],
),
# TODO: maybe later look into converting hour / minute into seconds since start of day
(
["hour"],
[
SimpleImputer(),
FunctionTransformer(lambda X: np.sin(X * 2 * np.pi / 24)),
StandardScaler(), # gaussian
],
),
(
["minute"],
[
SimpleImputer(),
FunctionTransformer(lambda X: np.sin(X * 2 * np.pi / 60)),
MinMaxScaler(), # uniform
],
),
],
df_out=True,
)
X_feat_df = feature_mapper.fit_transform(X_df, y_df["pred"])
X_train, X_test = (
X_feat_df[X_feat_df.id.isin(train_ids)].drop(columns=["id"]),
X_feat_df[X_feat_df.id.isin(test_ids)].drop(columns=["id"]),
)
y_train, y_test = (
y_df[y_df.id.isin(train_ids)].drop(columns=["id"]).squeeze(),
y_df[y_df.id.isin(test_ids)].drop(columns=["id"]).squeeze(),
)
lr = LogisticRegression()
lr.fit(X_train, y_train)
print(lr.score(X_test, y_test))
xgb_clf = xgb.XGBClassifier()
eval_set = [(X_train, y_train), (X_test, y_test)]
xgb_clf.fit(
X_train, y_train, eval_metric=["error", "logloss"], eval_set=eval_set, verbose=False
)
print(xgb_clf.score(X_test, y_test))
# Loss
# retrieve performance metrics
results = xgb_clf.evals_result()
epochs = len(results["validation_0"]["error"])
x_axis = range(0, epochs)
# plot log loss
f, ax = plt.subplots()
ax.plot(x_axis, results["validation_0"]["logloss"], label="Train")
ax.plot(x_axis, results["validation_1"]["logloss"], label="Test")
ax.legend()
sns.despine(f, ax)
plt.ylabel("Log Loss")
plt.xlabel("Epochs")
plt.title("Log Loss vs Epochs (XGBoost Meta Model)")
f.tight_layout()
fig_name = "svc_meta_xgb_loss.png"
f.savefig(f"assets/{fig_name}", dpi=150);
xgb_train_acc = xgb_clf.score(X_train, y_train)
xgb_test_acc = xgb_clf.score(X_test, y_test)
print(f"Train Loss: {results['validation_0']['logloss'][-1]:.3f}")
print(f"Test Loss: {results['validation_1']['logloss'][-1]:.3f}")
print(f"Train Accuracy: {xgb_train_acc:.3f}")
print(f"Test Accuracy: {xgb_test_acc:.3f}")
# Feature importance
# how it is calculated: https://stats.stackexchange.com/a/422480/311774
N_feats = 10
feat_imp = pd.Series(
xgb_clf.feature_importances_, index=X_train.columns, name="imp"
).nlargest(N_feats)
f, ax = plt.subplots()
ax.bar(feat_imp.index, feat_imp)
sns.despine(f, ax, bottom=True)
plt.setp(ax.get_xticklabels(), rotation=75)
ax.set_xlabel("Feature Name")
ax.set_ylabel("Importance")
ax.set_title(f"Importance vs Feature for the {N_feats} Largest Importances")
f.tight_layout()
fig_name = "svc_meta_xgb_imp.png"
f.savefig(f"assets/{fig_name}", dpi=150);
```
|
github_jupyter
|
```
'''
This notebook analyzes splicing and cleavage using LRS data.
Figures 6 and S7
'''
import os
import re
import numpy as np
import pandas as pd
from pandas.api.types import CategoricalDtype
import mygene
import scipy
from plotnine import *
import warnings
warnings.filterwarnings('ignore')
import matplotlib
matplotlib.rcParams['pdf.fonttype'] = 42 # export pdfs with editable font types in Illustrator
# Link to annotation of all TES from UCSC Genome Browser
TES_all = pd.read_csv('mm10_TES.bed', # downloaded from UCSC table browser, all genes last coordinate only
delimiter = '\t',
names = ['chr', 'start', 'end', 'name', 'score', 'strand'])
# Link to active TSSs from PRO-seq
active_TSS = pd.read_csv('../annotation_files/active_TSS_PROseq_150_counts_mm10_VM20.txt', delimiter = '\t', header = 0)
# Links to input data: BED12 files that have been separated by splicing status
dataFiles = [
'../Figure_2_S3/all_spliced_reads.bed',
'../Figure_2_S3/partially_spliced_reads.bed',
'../Figure_2_S3/all_unspliced_reads.bed',
]
# Read in file with PROseq read counts downstream of the PAS associated with each intron transcript ID
proseq_cts_all = pd.read_csv('../annotation_files/200916_Untreated_10readcutoff_PROseq_Intronic_vs_Flanking_Exon_Signal.txt', sep = '\t')
# First, filter TES coordinates for only those that come from actively transcribed TSS's in MEL cells (as defined by PRO-seq)
TES_all['ENSMUST.ID'] = TES_all.name.str.split('_').str[0] # create column with txid from name column
TES = pd.merge(TES_all, active_TSS[['txname']], left_on = 'ENSMUST.ID', right_on = 'txname', how = 'inner', copy = False)
# Generate a file for doing bedtools coverage with a window around the TES
# set window length for generating window around splice sites
upstream = 100
downstream = 1000
# make a window around TES using defined window length
TES.loc[TES['strand'] == '+', 'window_start'] = (TES['start'] - upstream)
TES.loc[TES['strand'] == '-', 'window_start'] = (TES['start'] - downstream)
TES.loc[TES['strand'] == '+', 'window_end'] = (TES['end'] + downstream)
TES.loc[TES['strand'] == '-', 'window_end'] = (TES['end'] + upstream)
# convert window start and end coordinates to integers rather than floats
TES['window_start'] = TES['window_start'].astype(np.int64)
TES['window_end'] = TES['window_end'].astype(np.int64)
out_cols = ['chr', 'window_start', 'window_end', 'name', 'score', 'strand']
TES.to_csv('TES_window.bed', columns = out_cols, sep = '\t', index = False, header = False)
# Calculate coverage over the TES window region using bedtools coverage (IN A TERMINAL WINDOW)
################################################################################################################
# bedtools coverage -s -d -a TES_window.bed -b ../Figure_2/all_spliced_reads.bed > all_spliced_cov.txt
# bedtools coverage -s -d -a TES_window.bed -b ../Figure_2/partially_spliced_reads.bed > partially_spliced_cov.txt
# bedtools coverage -s -d -a TES_window.bed -b ../Figure_2/all_unspliced_reads.bed > all_unspliced_cov.txt
################################################################################################################
# Define a function to read in output of bedtools coverage, rearrange columns and compute coverage in bins over a TES window
def get_coverage(file):
filestring = file.split('/')[-1].split('_')[0:2]
sample = '_'.join(filestring) # get sample ID from file name
f = pd.read_csv(file, compression = 'gzip', sep = '\t', names = ['chr', 'start', 'end', 'name', 'score', 'strand', 'position', 'count'])
f_grouped = f.groupby(['strand', 'position']).agg({'count':'sum'}) # group by position and strand, sum all counts
tmp = f_grouped.unstack(level='strand') # separate plus and minus strand counts
tmp_plus = tmp['count', '+'].to_frame() # convert both + and - strand series to dataframes
tmp_minus = tmp['count', '-'].to_frame()
tmp_minus = tmp_minus[::-1] # reverse order of the entries in the minus strand df
tmp_minus['new_position'] = list(range(1,1102,1)) # reset the position to run 1-1101 for the minus strand so it matches the plus strand (flipped)
df = pd.merge(tmp_plus, tmp_minus, left_index = True, right_on = 'new_position')
df['total_count'] = df['count', '+'] + df['count', '-']
df = df[['new_position', 'total_count']] # drop separate count columns for each strand
df['rel_position'] = range(-100,1001,1) # add relative position around TES
TES_val = df['total_count'].values[1] # get the coverage near the window start (~100 nt upstream of the TES/PAS), used as the normalization reference
df['TES_pos_count'] = TES_val
df['normalized_count'] = df['total_count'] / df['TES_pos_count'] # normalize coverage to the upstream reference position
df['sample'] = sample # add sample identifier
return df # return dataframe with position around TES ('normalized_count') and relative position around TES ('rel_position')
# get coverage for all_spliced, partially_spliced, and all_unspliced reads (each of these is slow)
df_cov_all_spliced = get_coverage('all_spliced_cov.txt.gz')
df_cov_partially_spliced = get_coverage('partially_spliced_cov.txt.gz')
df_cov_all_unspliced = get_coverage('all_unspliced_cov.txt.gz')
# concat all coverage dataframes together
df = pd.concat([df_cov_all_spliced, df_cov_partially_spliced, df_cov_all_unspliced])
# save coverage df
df.to_csv('coverage_matrix.txt', sep = '\t', index = False, header = True)
```
### Figure 6C
```
# plot read coverage past PAS
my_colours = ['#43006A', '#FBC17D', '#81176D']
plt_PAS_coverage = (ggplot
(data = df, mapping=aes( x = 'rel_position', y = 'normalized_count', colour = 'sample')) +
geom_line(size = 2, stat = 'identity') +
scale_colour_manual(values = my_colours) +
theme_linedraw(base_size = 12) +
xlab('Position relative to PAS [nt]') +
ylim(0.6,1.05) +
xlim(-100, 250) +
ylab('Read Coverage normalized to 100 nt before PAS'))
plt_PAS_coverage
# plot PROseq coverage downstream of TES
# NOTE: THIS INPUT FILE IS FROM CLAUDIA WITHOUT INTRON IDS
proseq_tes = pd.read_csv('Figure 6C_Log2 transformed PRO-seq Signal aroundTES Violin Plot Test.txt', sep = '\t')
proseq_tes.columns = ['<0.6', '0.6-0.79', '0.8-0.99', '1']
proseq_tes_long = proseq_tes.melt(value_vars = ['<0.6', '0.6-0.79', '0.8-0.99', '1'], value_name = 'PROseq_counts', var_name = 'CoSE')
cat_type = CategoricalDtype(categories=['<0.6', '0.6-0.79', '0.8-0.99', '1'], ordered=True) # turn category column into a category variable in order to control order of plotting
proseq_tes_long['CoSE'] = proseq_tes_long['CoSE'].astype(cat_type)
```
### Figure S7D
```
my_colours = ['#FA8657', '#FBC17D', '#81176D', '#43006A']
plt_cose_TESPROseq = (ggplot
(data=proseq_tes_long, mapping=aes( x='CoSE', y = 'PROseq_counts', fill = 'CoSE')) +
geom_violin(width = 0.8) +
geom_boxplot(width = 0.3, fill = 'white', alpha = 0.4) +
theme_linedraw(base_size = 12) +
theme(axis_text_x=element_text(rotation=45, hjust=1)) +
# theme(figure_size = (2.5,4)) +
theme(figure_size = (3,4)) +
ylab('PROseq Read Counts TES to +1 kb (log2)') +
ylim(1, 15) +
# scale_y_log10(limits = (0.000001, 5)) +
# scale_y_log10() +
scale_fill_manual(values = my_colours)
)
plt_cose_TESPROseq
# Combine reads that have been classified by splicing status into a single file, adding a new column to record splicing status
alldata = []
for file in dataFiles:
# df = pd.read_csv(file, delimiter = '\t', names = ['chr', 'start', 'end', 'name', 'score', 'strand', 'readStart', 'readEnd', 'rgb', 'blocks', 'blockSizes', 'blockStarts'])
df = pd.read_csv(file, delimiter = '\t', names = ['chr', 'start', 'end', 'name', 'score', 'strand', 'readStart', 'readEnd', 'rgb', 'blocks', 'blockSizes', 'blockStarts', 'status', 'treatment'])
# splicing_status = file.split('/')[2]
# df['status'] = splicing_status
alldata.append(df)
data = pd.concat(alldata)
# Define a function to get the 5' end coordinates for each read
def get_5end_coord(df):
plus = df.loc[df['strand'] == '+']
minus = df.loc[df['strand'] == '-']
columns = ['chr', 'start', 'end', 'name', 'score', 'strand', 'readStart', 'readEnd', 'rgb', 'blocks', 'blockSizes', 'blockStarts', 'status', 'treatment']
plus['end'] = plus['start'] + 1
plus_out = plus[columns]
minus['start'] = minus['end'] - 1
minus_out = minus[columns]
out = pd.concat([plus_out, minus_out])
out.to_csv('data_combined_5end.bed', sep = '\t', index = False, header = False)
# Create a BED file with 5' end coordinate for combined long read data with splicing status classification
get_5end_coord(data)
# Bedtools intersect 5' end of reads with active transcripts - write entire read a (IN A TERMINAL WINDOW)
################################################################################################################
# bedtools intersect -wo -s -a data_combined_5end.bed -b ../annotation_files/active_transcripts.bed > fiveEnd_intersect_active_transcripts.txt
################################################################################################################
# Read in result of bedtools intersect: r_ indicates read info, t_ indicates transcript annotation info
intersect = pd.read_csv('fiveEnd_intersect_active_transcripts.txt',
delimiter = '\t',
names = ['r_chr', 'r_fiveEnd_start', 'r_fiveEnd_end', 'r_name', 'r_score', 'r_strand', 'r_readStart', 'r_readEnd', 'r_rgb', 'r_blocks', 'r_blockSizes', 'r_blockStarts', 'splicing_status', 'treatment', 't_chr', 't_start', 't_end', 't_name', 't_score', 't_strand', 'overlaps'])
# For each row, compare whether or not the readEnd is past the transcript end; if so, add 1 to cleavage status
distance_past_PAS = 50 # set cutoff distance for the read end to be past the annotated PAS
intersect_plus = intersect.loc[intersect['r_strand'] == "+"]
intersect_minus = intersect.loc[intersect['r_strand'] == "-"]
conditions_plus = [intersect_plus['r_readEnd'] > intersect_plus['t_end'].astype(int) + distance_past_PAS,
intersect_plus['r_readEnd'] <= intersect_plus['t_end'].astype(int) + distance_past_PAS]
conditions_minus = [intersect_minus['r_readStart'] < intersect_minus['t_start'].astype(int) - distance_past_PAS,
intersect_minus['r_readStart'] >= intersect_minus['t_start'].astype(int) - distance_past_PAS]
outputs = [1,0]
intersect_plus['uncleaved'] = np.select(conditions_plus, outputs, np.NaN)
intersect_minus['uncleaved'] = np.select(conditions_minus, outputs, np.NaN)
i = pd.concat([intersect_plus, intersect_minus])
g = i.groupby('r_name').agg({'uncleaved':'sum', # count the number of reads with 3' end past transcript PAS
'splicing_status':'first', # record splicing status for each read
                         'treatment':'first', # record treatment condition for each read
'overlaps':'sum'}) # count the total number of transcript overlaps
g['cleavage_ratio'] = g['uncleaved']/g['overlaps'] # calculate how many times a transcript is called as uncleaved for all of the transcript annotations that it overlaps
g['cleavage_status'] = np.where(g['cleavage_ratio'] ==1,'uncleaved', 'cleaved') # only classify a read as "uncleaved" if the 3' end is past the PAS for ALL transcript annotations that it overlaps with
# Calculate fraction of reads that are in each splicing category for cleaved/uncleaved reads
total_uncleaved = len(g.loc[g['cleavage_status'] == 'uncleaved'])
total_cleaved = len(g.loc[g['cleavage_status'] == 'cleaved'])
all_spliced_cleaved = len(g.loc[(g['splicing_status'] == 'all_spliced') & (g['cleavage_status'] == 'cleaved')])
partially_spliced_cleaved = len(g.loc[(g['splicing_status'] == 'partially_spliced') & (g['cleavage_status'] == 'cleaved')])
all_unspliced_cleaved = len(g.loc[(g['splicing_status'] == 'all_unspliced') & (g['cleavage_status'] == 'cleaved')])
all_spliced_uncleaved = len(g.loc[(g['splicing_status'] == 'all_spliced') & (g['cleavage_status'] == 'uncleaved')])
partially_spliced_uncleaved = len(g.loc[(g['splicing_status'] == 'partially_spliced') & (g['cleavage_status'] == 'uncleaved')])
all_unspliced_uncleaved = len(g.loc[(g['splicing_status'] == 'all_unspliced') & (g['cleavage_status'] == 'uncleaved')])
data_list = [['uncleaved', 'all_spliced', all_spliced_uncleaved, total_uncleaved],
['uncleaved', 'partially_spliced', partially_spliced_uncleaved, total_uncleaved],
['uncleaved', 'all_unspliced', all_unspliced_uncleaved, total_uncleaved],
['cleaved', 'all_spliced', all_spliced_cleaved, total_cleaved],
['cleaved', 'partially_spliced', partially_spliced_cleaved, total_cleaved],
['cleaved', 'all_unspliced', all_unspliced_cleaved, total_cleaved]]
# Create the pandas DataFrame
df = pd.DataFrame(data_list, columns = ['cleavage_status', 'splicing_status', 'count', 'total'])
df['fraction'] = df['count']/df['total']
df
print('Number of Cleaved reads: ' + str(total_cleaved))
print('Number of Uncleaved reads: ' + str(total_uncleaved))
```
### Figure 6C
```
my_colours = ['#43006A', '#FBC17D', '#81176D']
plt_splicing_cleavage_fraction = (ggplot
(data=df, mapping=aes(x='cleavage_status', y='fraction', fill = 'splicing_status')) +
geom_bar(stat = 'identity', position = 'stack', colour = 'black') +
theme_linedraw(base_size = 12) +
theme(figure_size = (3,6)) +
xlab("3' End Cleavage Status") +
ylab('Fraction of Long Reads') +
scale_fill_manual(values = my_colours)
)
plt_splicing_cleavage_fraction
```
### Save output figures
```
plt_splicing_cleavage_fraction.save('fraction_uncleaved_unspliced_reads.pdf') # Fig 6C
plt_PAS_coverage.save('coverage_downstream_PAS_splicing_status.pdf') # Fig 6D
plt_cose_TESPROseq.save('PROseq_counts_TES_by_CoSE.pdf') # Fig S7D
```
|
github_jupyter
|
<a href="https://colab.research.google.com/github/iamsoroush/DeepEEGAbstractor/blob/master/cv_hmdd_4s_proposed_gap.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
#@title # Clone the repository and upgrade Keras {display-mode: "form"}
!git clone https://github.com/iamsoroush/DeepEEGAbstractor.git
!pip install --upgrade keras
#@title # Imports {display-mode: "form"}
import os
import pickle
import sys
sys.path.append('DeepEEGAbstractor')
import numpy as np
from src.helpers import CrossValidator
from src.models import SpatioTemporalWFB, TemporalWFB, TemporalDFB, SpatioTemporalDFB
from src.dataset import DataLoader, Splitter, FixedLenGenerator
from google.colab import drive
drive.mount('/content/gdrive')
#@title # Set data path {display-mode: "form"}
#@markdown ---
#@markdown Type in the folder in your google drive that contains numpy _data_ folder:
parent_dir = 'soroush'#@param {type:"string"}
gdrive_path = os.path.abspath(os.path.join('gdrive/My Drive', parent_dir))
data_dir = os.path.join(gdrive_path, 'data')
cv_results_dir = os.path.join(gdrive_path, 'cross_validation')
if not os.path.exists(cv_results_dir):
os.mkdir(cv_results_dir)
print('Data directory: ', data_dir)
print('Cross validation results dir: ', cv_results_dir)
#@title ## Set Parameters
batch_size = 80
epochs = 50
k = 10
t = 10
instance_duration = 4 #@param {type:"slider", min:3, max:10, step:0.5}
instance_overlap = 1 #@param {type:"slider", min:0, max:3, step:0.5}
sampling_rate = 256 #@param {type:"number"}
n_channels = 20 #@param {type:"number"}
task = 'hmdd'
data_mode = 'cross_subject'
#@title ## Spatio-Temporal WFB
model_name = 'ST-WFB-GAP'
train_generator = FixedLenGenerator(batch_size=batch_size,
duration=instance_duration,
overlap=instance_overlap,
sampling_rate=sampling_rate,
is_train=True)
test_generator = FixedLenGenerator(batch_size=8,
duration=instance_duration,
overlap=instance_overlap,
sampling_rate=sampling_rate,
is_train=False)
params = {'task': task,
'data_mode': data_mode,
'main_res_dir': cv_results_dir,
'model_name': model_name,
'epochs': epochs,
'train_generator': train_generator,
'test_generator': test_generator,
't': t,
'k': k,
'channel_drop': True}
validator = CrossValidator(**params)
dataloader = DataLoader(data_dir,
task,
data_mode,
sampling_rate,
instance_duration,
instance_overlap)
data, labels = dataloader.load_data()
input_shape = (sampling_rate * instance_duration,
n_channels)
model_obj = SpatioTemporalWFB(input_shape,
model_name=model_name)
scores = validator.do_cv(model_obj,
data,
labels)
#@title ## Temporal WFB
model_name = 'T-WFB-GAP'
train_generator = FixedLenGenerator(batch_size=batch_size,
duration=instance_duration,
overlap=instance_overlap,
sampling_rate=sampling_rate,
is_train=True)
test_generator = FixedLenGenerator(batch_size=8,
duration=instance_duration,
overlap=instance_overlap,
sampling_rate=sampling_rate,
is_train=False)
params = {'task': task,
'data_mode': data_mode,
'main_res_dir': cv_results_dir,
'model_name': model_name,
'epochs': epochs,
'train_generator': train_generator,
'test_generator': test_generator,
't': t,
'k': k,
'channel_drop': True}
validator = CrossValidator(**params)
dataloader = DataLoader(data_dir,
task,
data_mode,
sampling_rate,
instance_duration,
instance_overlap)
data, labels = dataloader.load_data()
input_shape = (sampling_rate * instance_duration,
n_channels)
model_obj = TemporalWFB(input_shape,
model_name=model_name)
scores = validator.do_cv(model_obj,
data,
labels)
#@title ## Spatio-Temporal DFB
model_name = 'ST-DFB-GAP'
train_generator = FixedLenGenerator(batch_size=batch_size,
duration=instance_duration,
overlap=instance_overlap,
sampling_rate=sampling_rate,
is_train=True)
test_generator = FixedLenGenerator(batch_size=8,
duration=instance_duration,
overlap=instance_overlap,
sampling_rate=sampling_rate,
is_train=False)
params = {'task': task,
'data_mode': data_mode,
'main_res_dir': cv_results_dir,
'model_name': model_name,
'epochs': epochs,
'train_generator': train_generator,
'test_generator': test_generator,
't': t,
'k': k,
'channel_drop': True}
validator = CrossValidator(**params)
dataloader = DataLoader(data_dir,
task,
data_mode,
sampling_rate,
instance_duration,
instance_overlap)
data, labels = dataloader.load_data()
input_shape = (sampling_rate * instance_duration,
n_channels)
model_obj = SpatioTemporalDFB(input_shape,
model_name=model_name)
scores = validator.do_cv(model_obj,
data,
labels)
#@title ## Spatio-Temporal DFB (Normalized Kernels)
model_name = 'ST-DFB-NK-GAP'
train_generator = FixedLenGenerator(batch_size=batch_size,
duration=instance_duration,
overlap=instance_overlap,
sampling_rate=sampling_rate,
is_train=True)
test_generator = FixedLenGenerator(batch_size=8,
duration=instance_duration,
overlap=instance_overlap,
sampling_rate=sampling_rate,
is_train=False)
params = {'task': task,
'data_mode': data_mode,
'main_res_dir': cv_results_dir,
'model_name': model_name,
'epochs': epochs,
'train_generator': train_generator,
'test_generator': test_generator,
't': t,
'k': k,
'channel_drop': True}
validator = CrossValidator(**params)
dataloader = DataLoader(data_dir,
task,
data_mode,
sampling_rate,
instance_duration,
instance_overlap)
data, labels = dataloader.load_data()
input_shape = (sampling_rate * instance_duration,
n_channels)
model_obj = SpatioTemporalDFB(input_shape,
model_name=model_name,
normalize_kernels=True)
scores = validator.do_cv(model_obj,
data,
labels)
#@title ## Temporal DFB
model_name = 'T-DFB-GAP'
train_generator = FixedLenGenerator(batch_size=batch_size,
duration=instance_duration,
overlap=instance_overlap,
sampling_rate=sampling_rate,
is_train=True)
test_generator = FixedLenGenerator(batch_size=8,
duration=instance_duration,
overlap=instance_overlap,
sampling_rate=sampling_rate,
is_train=False)
params = {'task': task,
'data_mode': data_mode,
'main_res_dir': cv_results_dir,
'model_name': model_name,
'epochs': epochs,
'train_generator': train_generator,
'test_generator': test_generator,
't': t,
'k': k,
'channel_drop': True}
validator = CrossValidator(**params)
dataloader = DataLoader(data_dir,
task,
data_mode,
sampling_rate,
instance_duration,
instance_overlap)
data, labels = dataloader.load_data()
input_shape = (sampling_rate * instance_duration,
n_channels)
model_obj = TemporalDFB(input_shape,
model_name=model_name)
scores = validator.do_cv(model_obj,
data,
labels)
#@title ## Temporal DFB (Normalized Kernels)
model_name = 'T-DFB-NK-GAP'
train_generator = FixedLenGenerator(batch_size=batch_size,
duration=instance_duration,
overlap=instance_overlap,
sampling_rate=sampling_rate,
is_train=True)
test_generator = FixedLenGenerator(batch_size=8,
duration=instance_duration,
overlap=instance_overlap,
sampling_rate=sampling_rate,
is_train=False)
params = {'task': task,
'data_mode': data_mode,
'main_res_dir': cv_results_dir,
'model_name': model_name,
'epochs': epochs,
'train_generator': train_generator,
'test_generator': test_generator,
't': t,
'k': k,
'channel_drop': True}
validator = CrossValidator(**params)
dataloader = DataLoader(data_dir,
task,
data_mode,
sampling_rate,
instance_duration,
instance_overlap)
data, labels = dataloader.load_data()
input_shape = (sampling_rate * instance_duration,
n_channels)
model_obj = TemporalDFB(input_shape,
model_name=model_name,
normalize_kernels=True)
scores = validator.do_cv(model_obj,
data,
labels)
```
|
github_jupyter
|
# Gallery of examples

Here you can browse a gallery of examples using EinsteinPy in the form of Jupyter notebooks.
## [Visualizing advance of perihelion of a test particle in Schwarzschild space-time](docs/source/examples/Visualizing_advance_of_perihelion_of_a_test_particle_in_Schwarzschild_space-time.ipynb)
[](docs/source/examples/Visualizing_advance_of_perihelion_of_a_test_particle_in_Schwarzschild_space-time.ipynb)
## [Animations in EinsteinPy](docs/source/examples/Animations_in_EinsteinPy.ipynb)
[](docs/source/examples/Animations_in_EinsteinPy.ipynb)
## [Analysing Earth using EinsteinPy!](docs/source/examples/Analysing_Earth_using_EinsteinPy!.ipynb)
[](docs/source/examples/Analysing_Earth_using_EinsteinPy!.ipynb)
## [Symbolically Understanding Christoffel Symbol and Riemann Metric Tensor using EinsteinPy](docs/source/examples/Symbolically_Understanding_Christoffel_Symbol_and_Riemann_Curvature_Tensor_using_EinsteinPy.ipynb)
[](docs/source/examples/Symbolically_Understanding_Christoffel_Symbol_and_Riemann_Curvature_Tensor_using_EinsteinPy.ipynb)
## [Visualizing frame dragging in Kerr spacetime](docs/source/examples/Visualizing_frame_dragging_in_Kerr_spacetime.ipynb)
[](docs/source/examples/Visualizing_frame_dragging_in_Kerr_spacetime.ipynb)
## [Visualizing event horizon and ergosphere of Kerr black hole](docs/source/examples/Visualizing_event_horizon_and_ergosphere_of_Kerr_black_hole.ipynb)
[](docs/source/examples/Visualizing_event_horizon_and_ergosphere_of_Kerr_black_hole.ipynb)
## [Spatial Hypersurface Embedding for Schwarzschild Space-Time!](docs/source/examples/Plotting_spacial_hypersurface_embedding_for_schwarzschild_spacetime.ipynb)
[](docs/source/examples/Plotting_spacial_hypersurface_embedding_for_schwarzschild_spacetime.ipynb)
## [Playing with Contravariant and Covariant Indices in Tensors(Symbolic)](docs/source/examples/Playing_with_Contravariant_and_Covariant_Indices_in_Tensors(Symbolic).ipynb)
[](docs/source/examples/Playing_with_Contravariant_and_Covariant_Indices_in_Tensors(Symbolic).ipynb)
## [Ricci Tensor and Scalar Curvature calculations using Symbolic module](docs/source/examples/Ricci_Tensor_and_Scalar_Curvature_symbolic_calculation.ipynb)
[](docs/source/examples/Ricci_Tensor_and_Scalar_Curvature_symbolic_calculation.ipynb)
<center><em>Gregorio Ricci-Curbastro</em></center>
## [Weyl Tensor calculations using Symbolic module](docs/source/examples/Weyl_Tensor_symbolic_calculation.ipynb)
[](docs/source/examples/Weyl_Tensor_symbolic_calculation.ipynb)
<center><em>Hermann Weyl</em></center>
## [Einstein Tensor calculations using Symbolic module](docs/source/examples/Einstein_Tensor_symbolic_calculation.ipynb)
[](docs/source/examples/Einstein_Tensor_symbolic_calculation.ipynb)
|
github_jupyter
|
# Third Project - Automated Repair
## Overview
### The Task
For the first two submissions we asked you to implement a _Debugger_ as well as an _Input Reducer_. Both of these tools are used to help the developer to locate bugs and then manually fix them.
In this project, you will implement a technique of automatic code repair. To do so, you will extend the `Repairer` introduced in the [Repairing Code Automatically](https://www.debuggingbook.org/beta/html/Repairer.html) chapter of [The Debugging Book](https://www.debuggingbook.org/beta).
Your own `Repairer` should automatically generate _repair suggestions_ for the faulty functions we provide later in this notebook. This can be achieved, for instance, by changing various components of the mutator, changing the debugger, or changing the reduction algorithm. However, you are neither required to make all these changes nor required to limit yourself to the changes proposed here.
### The Submission
The time frame for this project is **3 weeks**, and the deadline is **February 5th, 23:59**.
The submission should be in the form of a Jupyter notebook, and you are expected to hand in the submission as a .zip archive. The notebook should, apart from the code itself, also provide sufficient explanations and reasoning (in markdown cells) behind the decisions that led to the solution provided. Projects that do not include explanations cannot get more than **15 points**.
```
import bookutils
# ignore
from typing import Any, Callable, Optional, Tuple, List
```
## A Faulty Function and How to Repair It
Before discussing how to use and extend the Repairer, we first start by introducing a new (and highly complex) function that is supposed to return the larger of two values.
```
def larger(x: Any, y: Any) -> Any:
if x < y:
return x
return y
```
Unfortunately, we introduced a bug which makes the function behave the exact opposite way as it is supposed to:
```
larger(1, 3)
```
To fix this issue, we could try to debug it, using the tools we have already seen. However, given the complexity of the function under test (sorry for the irony), we might want to automatically repair the function, using the *Repairer* introduced in *The Debugging Book*.
To do so, we first need to define a set of test cases, which help the *Repairer* in fixing the function.
```
def larger_testcase() -> Tuple[int, int]:
x = random.randrange(100)
y = random.randrange(100)
return x, y
def larger_test(x: Any, y: Any) -> None:
m = larger(x, y)
assert m == max(x, y), f"expected {max(x, y)}, but got {m}"
import math
import random
random.seed(42)
```
Let us generate a random test case for our function:
```
larger_input = larger_testcase()
print(larger_input)
```
and then feed it into our `larger_test()`:
```
from ExpectError import ExpectError
with ExpectError():
larger_test(*larger_input)
```
As expected, we got an error – the `larger()` function has a defect.
For a complete test suite, we need a set of passing and failing tests. To be sure we have both, we create functions which produce dedicated inputs:
```
def larger_passing_testcase() -> Tuple:
while True:
try:
x, y = larger_testcase()
larger_test(x, y)
return x, y
except AssertionError:
pass
def larger_failing_testcase() -> Tuple:
while True:
try:
x, y = larger_testcase()
larger_test(x, y)
except AssertionError:
return x, y
passing_input = larger_passing_testcase()
print(passing_input)
```
With `passing_input`, our `larger()` function produces a correct result, and its test function does not fail.
```
larger_test(*passing_input)
failing_input = larger_failing_testcase()
print(failing_input)
```
While `failing_input` leads to an error:
```
with ExpectError():
larger_test(*failing_input)
```
With the above defined functions, we can now start to create a number of passing and failing tests:
```
TESTS = 100
LARGER_PASSING_TESTCASES = [larger_passing_testcase()
for i in range(TESTS)]
LARGER_FAILING_TESTCASES = [larger_failing_testcase()
for i in range(TESTS)]
from StatisticalDebugger import OchiaiDebugger
```
Next, let us use _statistical debugging_ to identify likely faulty locations. The `OchiaiDebugger` ranks individual code lines by how strongly their execution correlates with failures: lines executed in many failing runs but in few passing runs receive the highest suspiciousness scores.
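As a point of reference (our own summary, not tied to the book's exact implementation), the Ochiai suspiciousness of a code location $s$ is commonly computed as
\begin{equation*}
\textit{ochiai}(s) = \frac{\textit{failed}(s)}{\sqrt{\textit{totalfailed} \times \bigl(\textit{failed}(s) + \textit{passed}(s)\bigr)}}
\end{equation*}
where $\textit{failed}(s)$ and $\textit{passed}(s)$ count the failing and passing runs that execute $s$, and $\textit{totalfailed}$ is the total number of failing runs.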
```
larger_debugger = OchiaiDebugger()
for x, y in LARGER_PASSING_TESTCASES + LARGER_FAILING_TESTCASES:
with larger_debugger:
larger_test(x, y)
```
Given the results of statistical debugging, we can now use the *Repairer* introduced in the book to repair our function. Here we use the default implementation which is initialized with the simple _StatementMutator_ mutator.
```
from Repairer import Repairer
from Repairer import ConditionMutator, CrossoverOperator
from Repairer import DeltaDebugger
repairer = Repairer(larger_debugger, log=True)
best_tree, fitness = repairer.repair() # type: ignore
```
The *Repairer* successfully produced a fix.
## Implementation
As stated above, the goal of this project is to implement a repairer capable of producing a fix to the functions defined in *the Evaluation section*, as well as secret functions.
To do this, you need to work with the _Repairer_ class from the book.
The _Repairer_ class is very configurable, so that it is easy to update and plug in various components: the fault localization (pass a different debugger that is a subclass of `DifferenceDebugger`), the mutation operator (set `mutator_class` to a subclass of `StatementMutator`), the crossover operator (set `crossover_class` to a subclass of `CrossoverOperator`), and the reduction algorithm (set `reducer_class` to a subclass of `Reducer`). You may change any of these components or make other changes at will.
For us to be able to test your implementation, you will have to implement the `debug_and_repair()` function defined here.
```
from bookutils import print_content, show_ast
def debug_and_repair(f: Callable,
testcases: List[Tuple[Any, ...]],
test_function: Callable,
log: bool = False) -> Optional[str]:
'''
Debugs a function with the given testcases and the test_function
and tries to repair it afterwards.
Parameters
----------
f : function
The faulty function, to be debugged and repaired
testcases : List
A list that includes test inputs for the function under test
test_function : function
A function that takes the test inputs and tests whether the
function under test produces the correct output.
log: bool
Turn logging on/off.
Returns
-------
str
The repaired version of f as a string.
'''
# TODO: implement this function
return None
```
The function `debug_and_repair()` is the function that needs to implement everything. We will provide you with the function to be repaired, as well as testcases for this function and a test-function. Let us show you how the function can be used and should behave:
```
random.seed(42)
import ast
def simple_debug_and_repair(f: Callable,
testcases: List[Tuple[Any, ...]],
test_function: Callable,
log: bool = False) -> str:
'''
Debugs a function with the given testcases and the test_function
and tries to repair it afterwards.
Parameters
----------
f : function
The faulty function, to be debugged and repaired
testcases : List
A list that includes test inputs for the function under test
test_function : function
A function, that takes the test inputs and tests whether the
function under test produces the correct output.
log: bool
Turn logging on/off.
Returns
-------
str
The repaired version of f as a string.
'''
debugger = OchiaiDebugger()
for i in testcases:
with debugger:
test_function(*i) # Ensure that you use *i here.
repairer = Repairer(debugger,
mutator_class=ConditionMutator,
crossover_class=CrossoverOperator,
reducer_class=DeltaDebugger,
log=log)
# Ensure that you specify a sufficient number of
# iterations to evolve.
best_tree, fitness = repairer.repair(iterations=100) # type: ignore
return ast.unparse(best_tree)
```
Here we again used the _Ochiai_ statistical debugger and the _Repairer_ described in [The Debugging Book](https://www.debuggingbook.org/beta/html/Repairer.html). In contrast to the initial example, now we used another type of mutator – `ConditionMutator`. It can successfully fix the `larger` function as well.
```
repaired = simple_debug_and_repair(larger,
LARGER_PASSING_TESTCASES +
LARGER_FAILING_TESTCASES,
larger_test, False)
print_content(repaired, '.py')
```
Although `simple_debug_and_repair` produced a correct solution for our example, it does not generalize to other functions.
So your task is to create the `debug_and_repair()` function which can be applied on any faulty function.
Apart from the function `debug_and_repair()`, you may of course implement your own classes. Make sure, however, that you are using these classes within `debug_and_repair()`. Also, keep in mind to tune the _iterations_ parameter of the `Repairer` so that it has a sufficient number of generations to evolve. As it may take too much time to find a solution for an ill-programmed repairer (e.g., consider an infinite `while` loop introduced in the fix), we impose a _10-minute timeout_ for each repair.
## Evaluation
Having you implement `debug_and_repair()` allows us to easily test your implementation by calling the function with its respective arguments and testing the correctness of its output. In this section, we will provide you with some test cases as well as the testing framework for this project. This will help you to assess the quality of your work.
We define functions as well as test-case generators for these functions. The functions given here should be considered as **public tests**. If you pass all public tests, without hard-coding the solutions, you are guaranteed to achieve **10 points**. The secret tests for the remaining 10 must-have-points have similar defects.
### Factorial
The first function we implement is a _factorial_ function. It is supposed to compute the following formula:
\begin{equation*}
n! = \textit{factorial}(n) = \prod_{k=1}^{n}k, \quad \text{for $k\geq 1$}
\end{equation*}
Here we define three faulty functions `factorial1`, `factorial2`, and `factorial3` that are supposed to compute the factorial.
```
def factorial1(n): # type: ignore
res = 1
for i in range(1, n):
res *= i
return res
```
At first sight, `factorial1` looks to be correctly implemented; still, it produces the wrong answer:
```
factorial1(5)
```
while the correct value for 5! is 120.
```
def factorial_testcase() -> int:
n = random.randrange(100)
return n
def factorial1_test(n: int) -> None:
m = factorial1(n)
assert m == math.factorial(n)
def factorial_passing_testcase() -> Tuple:
while True:
try:
n = factorial_testcase()
factorial1_test(n)
return (n,)
except AssertionError:
pass
def factorial_failing_testcase() -> Tuple:
while True:
try:
n = factorial_testcase()
factorial1_test(n)
except AssertionError:
return (n,)
FACTORIAL_PASSING_TESTCASES_1 = [factorial_passing_testcase() for i in range(TESTS)]
FACTORIAL_FAILING_TESTCASES_1 = [factorial_failing_testcase() for i in range(TESTS)]
```
As we can see, our simple Repairer cannot produce a fix. (Or more precisely, the "fix" it produces is pretty much pointless.)
```
repaired = \
simple_debug_and_repair(factorial1,
FACTORIAL_PASSING_TESTCASES_1 +
FACTORIAL_FAILING_TESTCASES_1,
factorial1_test, True)
```
The problem is that the current `Repairer` does not provide a suitable mutation to change the right part of the code.
How can we repair this? Consider extending the `StatementMutator` operator such that it can mutate various parts of the code, such as ranges, arithmetic operations, variable names, etc. (As a reference for how to do that, look at the `ConditionMutator` class.)
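To give a concrete feel for this kind of rewrite — as a minimal sketch that is deliberately independent of the book's `StatementMutator`/`ConditionMutator` API — the following `ast.NodeTransformer` randomly swaps arithmetic operators in a function body. The `ArithmeticMutator` class, the operator pairs, and the 50% probability are hypothetical choices for illustration; a full solution would embed such rewrites in a proper mutator subclass that the `Repairer` can drive.
```
import ast
import random

class ArithmeticMutator(ast.NodeTransformer):
    """Illustrative AST rewriter (a hypothetical helper, not the book's API):
    occasionally swaps arithmetic operators, e.g. * <-> +."""

    OP_SWAPS = {ast.Mult: ast.Add, ast.Add: ast.Mult,
                ast.Sub: ast.Add, ast.Div: ast.Mult}

    def mutate_op(self, node):
        swap = self.OP_SWAPS.get(type(node.op))
        if swap is not None and random.random() < 0.5:
            node.op = swap()  # e.g. turn `res *= i` into `res += i`
        return node

    def visit_BinOp(self, node):
        self.generic_visit(node)  # mutate nested expressions first
        return self.mutate_op(node)

    def visit_AugAssign(self, node):
        self.generic_visit(node)
        return self.mutate_op(node)

# Apply one round of mutation to a (correct) factorial implementation
source = """
def fact(n):
    res = 1
    for i in range(1, n + 1):
        res *= i
    return res
"""
mutated_tree = ArithmeticMutator().visit(ast.parse(source))
ast.fix_missing_locations(mutated_tree)
print(ast.unparse(mutated_tree))
```
A mutator working along these lines — extended to also adjust `range()` bounds or variable names — could produce exactly the kind of small rewrites needed to repair the factorial variants above.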
The next faulty function is `factorial2()`:
```
def factorial2(n): # type: ignore
i = 1
for i in range(1, n + 1):
i *= i
return i
```
Again, it outputs the incorrect answer:
```
factorial2(5)
def factorial2_test(n: int) -> None:
m = factorial2(n)
assert m == math.factorial(n)
def factorial_passing_testcase() -> Tuple: # type: ignore
while True:
try:
n = factorial_testcase()
factorial2_test(n)
return (n,)
except AssertionError:
pass
def factorial_failing_testcase() -> Tuple: # type: ignore
while True:
try:
n = factorial_testcase()
factorial2_test(n)
except AssertionError:
return (n,)
FACTORIAL_PASSING_TESTCASES_2 = [factorial_passing_testcase()
for i in range(TESTS)]
FACTORIAL_FAILING_TESTCASES_2 = [factorial_failing_testcase()
for i in range(TESTS)]
```
The third faulty function is `factorial3()`:
```
def factorial3(n): # type: ignore
res = 1
for i in range(1, n + 1):
res += i
return res
factorial3(5)
def factorial3_test(n: int) -> None:
m = factorial3(n)
assert m == math.factorial(n)
def factorial_passing_testcase() -> Tuple: # type: ignore
while True:
try:
n = factorial_testcase()
factorial3_test(n)
return (n,)
except AssertionError:
pass
def factorial_failing_testcase() -> Tuple: # type: ignore
while True:
try:
n = factorial_testcase()
factorial3_test(n)
except AssertionError:
return (n,)
FACTORIAL_PASSING_TESTCASES_3 = [factorial_passing_testcase()
for i in range(TESTS)]
FACTORIAL_FAILING_TESTCASES_3 = [factorial_failing_testcase()
for i in range(TESTS)]
```
### Middle
The following faulty function is the already well known _Middle_ function, though with another defect.
```
def middle(x, y, z): # type: ignore
if x < x:
if y < z:
return y
if x < z:
return z
return x
if x < z:
return x
if y < z:
return z
return y
```
It should return the second largest number of the input, but it does not:
```
middle(2, 3, 1)
def middle_testcase() -> Tuple:
x = random.randrange(10)
y = random.randrange(10)
z = random.randrange(10)
return x, y, z
def middle_test(x: int, y: int, z: int) -> None:
m = middle(x, y, z)
assert m == sorted([x, y, z])[1]
def middle_passing_testcase() -> Tuple:
while True:
try:
x, y, z = middle_testcase()
middle_test(x, y, z)
return x, y, z
except AssertionError:
pass
def middle_failing_testcase() -> Tuple:
while True:
try:
x, y, z = middle_testcase()
middle_test(x, y, z)
except AssertionError:
return x, y, z
MIDDLE_PASSING_TESTCASES = [middle_passing_testcase()
for i in range(TESTS)]
MIDDLE_FAILING_TESTCASES = [middle_failing_testcase()
for i in range(TESTS)]
```
### Power
The power function should implement the following formula:
\begin{equation*}
\textit{power}(x, n) = x^n, \quad \text{for $x\geq 0$ and $n \geq 0$}
\end{equation*}
```
def power(x, n): # type: ignore
res = 1
for i in range(0, x):
res *= n
return res
```
However, this `power()` function either has an uncommon interpretation of $x^n$ – or it is simply wrong:
```
power(2, 5)
```
We go with the simpler explanation that `power()` is wrong. The correct value, of course, should be $2^5 = 32$.
```
def power_testcase() -> Tuple:
x = random.randrange(100)
n = random.randrange(100)
return x, n
def power_test(x: int, n: int) -> None:
m = power(x, n)
assert m == pow(x, n)
def power_passing_testcase() -> Tuple:
while True:
try:
x, n = power_testcase()
power_test(x, n)
return x, n
except AssertionError:
pass
def power_failing_testcase() -> Tuple:
while True:
try:
x, n = power_testcase()
power_test(x, n)
except AssertionError:
return x, n
POWER_PASSING_TESTCASES = [power_passing_testcase()
for i in range(TESTS)]
POWER_FAILING_TESTCASES = [power_failing_testcase()
for i in range(TESTS)]
```
### Tester Class
To make it convenient to test your solution we provide a testing framework:
```
import re
class Test:
def __init__(self,
function: Callable,
testcases: List,
test_function: Callable,
assert_function: Callable) -> None:
self.function = function
self.testcases = testcases
self.test_function = test_function
self.assert_function = assert_function
def run(self, repair_function: Callable) -> None:
repaired = repair_function(self.function,
self.testcases,
self.test_function)
repaired = re.sub(self.function.__name__, 'foo', repaired)
exec(repaired, globals())
for test in self.testcases:
res = foo(*test) # type: ignore
assert res == self.assert_function(*test)
def middle_assert(x, y, z): # type: ignore
return sorted([x, y, z])[1]
test0 = Test(factorial1, FACTORIAL_PASSING_TESTCASES_1 + FACTORIAL_FAILING_TESTCASES_1, factorial1_test, math.factorial)
test1 = Test(factorial2, FACTORIAL_PASSING_TESTCASES_2 + FACTORIAL_FAILING_TESTCASES_2, factorial2_test, math.factorial)
test2 = Test(factorial3, FACTORIAL_PASSING_TESTCASES_3 + FACTORIAL_FAILING_TESTCASES_3, factorial3_test, math.factorial)
test3 = Test(middle, MIDDLE_PASSING_TESTCASES + MIDDLE_FAILING_TESTCASES, middle_test, middle_assert)
test4 = Test(power, POWER_PASSING_TESTCASES + POWER_FAILING_TESTCASES, power_test, pow)
tests = [test0, test1, test2, test3, test4]
class Tester:
def __init__(self, function: Callable, tests: List) -> None:
self.function = function
self.tests = tests
random.seed(42) # We use this seed for our evaluation; don't change it.
def run_tests(self) -> None:
for test in self.tests:
try:
test.run(self.function)
print(f'Test {test.function.__name__}: OK')
except AssertionError:
print(f'Test {test.function.__name__}: Failed')
tester = Tester(simple_debug_and_repair, tests) # TODO: replace simple_debug_and_repair by your debug_and_repair function
tester.run_tests()
```
By executing the `Tester` as shown above, you can assess the quality of your repairing approach, by testing your own `debug_and_repair()` function.
## Grading
Your project will be graded by _automated tests_. The tests are executed in the same manner as shown above.
In total there are **20 points** + **10 bonus points** to be awarded for this project. **20 points** for the must-haves, **10 bonus points** for may-haves.
### Must-Haves (20 points)
Must-haves include an implementation of the `debug_and_repair` function in a way that it automatically repairs faulty functions given sufficiently large test suites.
**10 points** are awarded for passing the tests in this notebook. Each passing test being worth two points.
**10 points** are awarded for passing secret tests.
### May-Haves (10 points)
May-haves will also be tested with secret tests, and award **2 points** each. The may-have features for this project are a more robust implementation that is able to cope with a wider range of defects (a sketch of one way to guard against hanging runs follows this list):
* Infinite loops
* Infinite recursion (`RecursionError` in Python)
* Type errors (`TypeError` in Python)
* Undefined identifiers (`NameError` in Python)
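One way to cope with hanging candidates (infinite loops or runaway recursion) is to bound the time each test run may take. The helper below is only a sketch: the name `run_with_timeout` and the one-second default are illustrative and not part of the required interface, and it assumes the test function is defined at module level so a child process can run it.
```
import multiprocessing

def run_with_timeout(test_function, args, timeout=1.0):
    """Run `test_function(*args)` in a child process; treat runs that
    exceed `timeout` seconds as failures. Sketch only: any failure in
    the child (non-zero exit code) is converted into an AssertionError."""
    proc = multiprocessing.Process(target=test_function, args=args)
    proc.start()
    proc.join(timeout)
    if proc.is_alive():              # e.g. an infinite loop in the candidate
        proc.terminate()
        proc.join()
        raise AssertionError(f"test timed out after {timeout} seconds")
    if proc.exitcode != 0:           # the test raised inside the child
        raise AssertionError(f"test failed (exit code {proc.exitcode})")
```
Inside `debug_and_repair()`, such a guard could wrap every call to the provided test function; a `RecursionError` or other uncaught exception makes the child exit with a non-zero code and is therefore also caught. Note that on platforms where `multiprocessing` uses the "spawn" start method, the test function must be importable from a module.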
### General Rules
You need to achieve at least **10 points** to be awarded any points at all.
Tests must be passed without hard-coding results, otherwise no points are awarded.
Your code needs to be sufficiently documented in order to achieve points!
|
github_jupyter
|
# Sonic The Hedgehog 1 with Advantage Actor Critic
## Step 1: Import the libraries
```
import time
import retro
import random
import torch
import numpy as np
from collections import deque
import matplotlib.pyplot as plt
from IPython.display import clear_output
import math
%matplotlib inline
import sys
sys.path.append('../../')
from algos.agents import A2CAgent
from algos.models import ActorCnn, CriticCnn
from algos.preprocessing.stack_frame import preprocess_frame, stack_frame
```
## Step 2: Create our environment
Initialize the environment in the code cell below.
```
env = retro.make(game='SonicTheHedgehog-Genesis', state='GreenHillZone.Act1', scenario='contest')
env.seed(0)
# if gpu is to be used
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("Device: ", device)
```
## Step 3: Viewing our Environment
```
print("The size of frame is: ", env.observation_space.shape)
print("No. of Actions: ", env.action_space.n)
env.reset()
plt.figure()
plt.imshow(env.reset())
plt.title('Original Frame')
plt.show()
possible_actions = {
# No Operation
0: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
# Left
1: [0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0],
# Right
2: [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0],
# Left, Down
3: [0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0],
# Right, Down
4: [0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0],
# Down
5: [0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0],
# Down, B
6: [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0],
# B
7: [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
}
```
### Execute the code cell below to play Sonic with a random policy.
```
def random_play():
score = 0
env.reset()
for i in range(200):
env.render()
action = possible_actions[np.random.randint(len(possible_actions))]
state, reward, done, _ = env.step(action)
score += reward
if done:
print("Your Score at end of game is: ", score)
break
env.reset()
env.render(close=True)
random_play()
```
## Step 4: Preprocessing Frame
```
plt.figure()
plt.imshow(preprocess_frame(env.reset(), (1, -1, -1, 1), 84), cmap="gray")
plt.title('Pre Processed image')
plt.show()
```
## Step 5: Stacking Frame
```
def stack_frames(frames, state, is_new=False):
frame = preprocess_frame(state, (1, -1, -1, 1), 84)
frames = stack_frame(frames, frame, is_new)
return frames
```
## Step 6: Creating our Agent
```
INPUT_SHAPE = (4, 84, 84)
ACTION_SIZE = len(possible_actions)
SEED = 0
GAMMA = 0.99 # discount factor
ALPHA= 0.0001 # Actor learning rate
BETA = 0.0005 # Critic learning rate
UPDATE_EVERY = 100 # how often to update the network
agent = A2CAgent(INPUT_SHAPE, ACTION_SIZE, SEED, device, GAMMA, ALPHA, BETA, UPDATE_EVERY, ActorCnn, CriticCnn)
```
## Step 7: Watching untrained agent play
```
env.viewer = None
# watch an untrained agent
state = stack_frames(None, env.reset(), True)
for j in range(200):
env.render(close=False)
action, _, _ = agent.act(state)
next_state, reward, done, _ = env.step(possible_actions[action])
state = stack_frames(state, next_state, False)
if done:
env.reset()
break
env.render(close=True)
```
## Step 8: Loading Agent
Uncomment line to load a pretrained agent
```
start_epoch = 0
scores = []
scores_window = deque(maxlen=20)
```
## Step 9: Train the Agent with Actor Critic
```
def train(n_episodes=1000):
"""
Params
======
n_episodes (int): maximum number of training episodes
"""
for i_episode in range(start_epoch + 1, n_episodes+1):
state = stack_frames(None, env.reset(), True)
score = 0
# Punish the agent for not moving forward
prev_state = {}
steps_stuck = 0
timestamp = 0
while timestamp < 10000:
action, log_prob, entropy = agent.act(state)
next_state, reward, done, info = env.step(possible_actions[action])
score += reward
timestamp += 1
# Punish the agent for standing still for too long.
if (prev_state == info):
steps_stuck += 1
else:
steps_stuck = 0
prev_state = info
if (steps_stuck > 20):
reward -= 1
next_state = stack_frames(state, next_state, False)
agent.step(state, log_prob, entropy, reward, done, next_state)
state = next_state
if done:
break
scores_window.append(score) # save most recent score
scores.append(score) # save most recent score
clear_output(True)
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(np.arange(len(scores)), scores)
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.show()
print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="")
return scores
scores = train(1000)
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(np.arange(len(scores)), scores)
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.show()
```
## Step 10: Watch a Smart Agent!
```
env.viewer = None
# watch the trained agent
state = stack_frames(None, env.reset(), True)
for j in range(10000):
env.render(close=False)
action, _, _ = agent.act(state)
next_state, reward, done, _ = env.step(possible_actions[action])
state = stack_frames(state, next_state, False)
if done:
env.reset()
break
env.render(close=True)
```
|
github_jupyter
|
# Object-Oriented Simulation
Up to this point we have been using Python generators and shared resources as the building blocks for simulations of complex systems. This can be effective, particularly if the individual agents do not require access to the internal state of other agents. But there are situations where the action of an agent depends on the state or properties of another agent in the simulation. For example, consider this discussion question from the Grocery store checkout example:
>Suppose we were to change one or more of the lanes to express lanes which handle only customers with a small number of items, say five or fewer. How would you expect this to change average waiting time? This is a form of prioritization ... are there other prioritizations that you might consider?
The customer action depends on the item limit parameter associated with a checkout lane. This is a case where the action of one agent depends on a property of another. The shared resources built into the SimPy library provide some functionality in this regard, but how do we add this to the simulations we write?
The good news is that Python offers a rich array of object oriented programming features well suited to this purpose. The SimPy documentation provides excellent examples of how to create Python objects for use in SimPy. The bad news is that object oriented programming in Python -- while straightforward compared to many other programming languages -- constitutes a steep learning curve for students unfamiliar with the core concepts.
Fortunately, since the introduction of Python 3.7 in 2018, the standard libraries for Python have included a simplified method for creating and using Python classes. Using [dataclass](https://realpython.com/python-data-classes/), it is easy to create objects for SimPy simulations that retain the benefits of object oriented programming without all of the coding overhead.
The purpose of this notebook is to introduce the use of `dataclass` in creating SimPy simulations. To the best of the author's knowledge, this is a novel use of `dataclass` and the only example of which the author is aware.
## Installations and imports
```
!pip install simpy
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import random
import simpy
import pandas as pd
from dataclasses import dataclass
import sys
print(sys.version)
```
Additional imports are from the `dataclasses` library that has been part of the standard Python distribution since version 3.7. Here we import `dataclass` and `field`.
```
from dataclasses import dataclass, field
```
## Introduction to `dataclass`
Tutorials and additional documentation:
* [The Ultimate Guide to Data Classes in Python 3.7](https://realpython.com/python-data-classes/): Tutorial article from RealPython.com
* [dataclasses — Data Classes](https://docs.python.org/3/library/dataclasses.html): Official Python documentation.
* [Data Classes in Python](https://towardsdatascience.com/data-classes-in-python-8d1a09c1294b): Tutorial from TowardsDataScience.com
### Creating a `dataclass`
A `dataclass` defines a new class of Python objects. A `dataclass` object takes care of several routine things that you would otherwise have to code, such as creating instances of an object, testing for equality, and other aspects.
As an example, the following cell shows how to define a dataclass corresponding to a hypothetical Student object. The Student object maintains data associated with instances of a student. The dataclass also defines a function associated with the object.
```
from dataclasses import dataclass
@dataclass
class Student():
name: str
graduation_class: int
dorm: str
def print_name(self):
print(f"{self.name} (Class of {self.graduation_class})")
```
Let's create an instance of the Student object.
```
sam = Student("Sam Jones", 2024, "Alumni")
```
Let's see how the `print_name()` function works.
```
sam.print_name()
```
The next cell shows how to create a list of students, and how to iterate over a list of students.
```
# create a list of students
students = [
Student("Sam Jones", 2024, "Alumni"),
Student("Becky Smith", 2023, "Howard"),
]
# iterate over the list of students to print all of their names
for student in students:
student.print_name()
print(student.dorm)
```
Here are a few details you need to use `dataclass` effectively:
* The `class` statement is the standard statement for creating a new class of Python objects. The preceding `@dataclass` is a Python 'decorator'. Decorators are Python functions that modify the behavior of subsequent statements. In this case, the `@dataclass` decorator modifies `class` to provide a streamlined syntax for implementing classes.
* By convention, Python class names begin with a capital letter. In this case, `Student` is the class name.
* The lines following the class statement declare parameters that will be used by the new class. The parameters can be specified when you create an instance of the dataclass.
* Each parameter is followed by a type 'hint'. Commonly used type hints are `int`, `float`, `bool`, and `str`. Use `Any` (from the `typing` module) if you don't know or can't specify a particular type. Type hints are used by type-checking tools and ignored by the Python interpreter.
* Following the parameters, write any functions or generators that you may wish to define for the new class. To access variables unique to an instance of the class, precede the parameter name with `self`.
### Specifying parameter values
There are different ways of specifying the parameter values assigned to an instance of a dataclass. Here are three particular methods:
* Specify the parameter value when creating a new instance. This is what was done in the Student example above.
* Provide a default values determined when the dataclass is defined.
* Provide a default_factory method to create a parameter value when an instance of the dataclass is created.
#### Specifying a parameter value when creating a new instance
Parameter values can be specified when creating an instance of a dataclass. The parameter values can be specified by position or by name as shown below.
```
from dataclasses import dataclass
@dataclass
class Student():
name: str
graduation_year: int
dorm: str
def print_name(self):
print(f"{self.name} (Class of {self.graduation_year})")
sam = Student("Sam Jones", 2031, "Alumni")
sam.print_name()
gilda = Student(name="Gilda Radner", graduation_year=2030, dorm="Howard")
gilda.print_name()
```
#### Setting default parameter values
Setting a default value for a parameter can save extra typing or coding. More importantly, setting default values makes it easier to maintain and adapt code for other applications, and is a convenient way to handle missing data.
There are two ways to set default parameter values. For str, int, float, bool, tuple (the immutable types in Python), a default value can be set using `=` as shown in the next cell.
```
from dataclasses import dataclass
@dataclass
class Student():
name: str = None
graduation_year: int = None
dorm: str = None
def print_name(self):
print(f"{self.name} (Class of {self.graduation_year})")
jdoe = Student(name="John Doe", dorm="Alumni")
jdoe.print_name()
```
Default parameter values are restricted to 'immutable' types. This technical restriction eliminates the error-prone practice of using mutable objects, such as lists, as defaults. The difficulty with setting defaults for mutable objects is that all instances of the dataclass share the same value. If one instance of the object changes that value, then all other instances are affected. This leads to unpredictable behavior, and is a particularly nasty bug to uncover and fix.
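To see the pitfall this restriction guards against, here is a small sketch using a plain (non-dataclass) class, where a single class-level list ends up shared by every instance:
```
class PlainStudent:
    majors = []                     # one list object, shared by all instances

    def add_major(self, major):
        self.majors.append(major)   # mutates the shared class-level list

a = PlainStudent()
b = PlainStudent()
a.add_major("Math")
print(b.majors)                     # ['Math'] -- b silently sees a's change
```
A dataclass that tried to declare `majors: list = []` would refuse to be defined at all, raising a `ValueError` that points you toward `default_factory` instead.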
There are two ways to provide defaults for mutable parameters such as lists, sets, dictionaries, or arbitrary Python objects.
The more direct way is to specify a function for constructing the default parameter value, using the `field` statement with the `default_factory` option. The default_factory function is called when a new instance of the dataclass is created; it must take no arguments and must return a value that will be assigned to the designated parameter. Here's an example.
```
from dataclasses import dataclass, field
@dataclass
class Student():
name: str = None
graduation_year: int = None
dorm: str = None
majors: list = field(default_factory=list)
def print_name(self):
print(f"{self.name} (Class of {self.graduation_year})")
def print_majors(self):
for n, major in enumerate(self.majors):
print(f" {n+1}. {major}")
jdoe = Student(name="John Doe", dorm="Alumni", majors=["Math", "Chemical Engineering"])
jdoe.print_name()
jdoe.print_majors()
Student().print_majors()
```
#### Initializing a dataclass with `__post_init__(self)`
Frequently there are additional steps to complete when creating a new instance of a dataclass. For that purpose, a dataclass may contain an optional function with the special name `__post_init__(self)`. If present, that function is run automatically after a new instance is created. This feature is demonstrated in the following reimplementation of the grocery store checkout operation.
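As a quick standalone illustration ahead of that reimplementation, here is a small sketch in which `__post_init__` fills in a derived attribute; the `email` field and the `example.edu` domain are hypothetical additions, not part of the original example:
```
from dataclasses import dataclass, field

@dataclass
class Student():
    name: str = None
    graduation_year: int = None
    email: str = field(init=False, default=None)   # derived value, not passed by the caller

    def __post_init__(self):
        # runs automatically after the generated __init__ finishes
        if self.name is not None:
            self.email = self.name.lower().replace(" ", ".") + "@example.edu"

print(Student("Sam Jones", 2031).email)
```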
## Using `dataclass` with Simpy
### Step 0. A simple model
To demonstrate the use of classes in SimPy simulations, let's begin with a simple model of a clock using generators.
```
import simpy
def clock(id="", t_step=1.0):
while True:
print(id, env.now)
yield env.timeout(t_step)
env = simpy.Environment()
env.process(clock("A"))
env.process(clock("B", 1.5))
env.run(until=5.0)
```
### Step 1. Embed the generator inside of a class
As a first step, we rewrite the generator as a Python dataclass named `Clock`. The parameters are given default values, and the generator is incorporated within the Clock object. Note the use of `self` to refer to parameters specific to an instance of the class.
```
import simpy
from dataclasses import dataclass
@dataclass
class Clock():
id: str = ""
t_step: float = 1.0
def process(self):
while True:
print(self.id, env.now)
yield env.timeout(self.t_step)
env = simpy.Environment()
env.process(Clock("A").process())
env.process(Clock("B", 1.5).process())
env.run(until=5)
```
### Step 2. Eliminate (if possible) global variables
Our definition of clock requires the simulation environment to have a specific name `env`, and assumes env is a global variable. That's generally not a good coding practice because it imposes an assumption on any user of the class, and exposes the internal coding of the class. A much better practice is to use class parameters to pass this data through a well defined interface to the class.
```
import simpy
from dataclasses import dataclass
@dataclass
class Clock():
env: simpy.Environment
id: str = ""
t_step: float = 1.0
def process(self):
while True:
print(self.id, self.env.now)
yield self.env.timeout(self.t_step)
env = simpy.Environment()
env.process(Clock(env, "A").process())
env.process(Clock(env, "B", 1.5).process())
env.run(until=10)
```
### Step 3. Encapsulate initializations inside __post_init__
```
import simpy
from dataclasses import dataclass
@dataclass
class Clock():
env: simpy.Environment
id: str = ""
t_step: float = 1.0
def __post_init__(self):
self.env.process(self.process())
def process(self):
while True:
print(self.id, self.env.now)
yield self.env.timeout(self.t_step)
env = simpy.Environment()
Clock(env, "A")
Clock(env, "B", 1.5)
env.run(until=5)
```
## Grocery Store Model
Let's review our model of the grocery store checkout operation. There are multiple checkout lanes, each with potentially different characteristics. With generators we were able to implement differences in the time required to scan items. But another parameter, a limit on the number of items that can be checked out in a lane, required a new global list, because a bare generator gives us no way to attach and later access that parameter. This is where classes become important building blocks for creating more complex simulations.
Our new strategy will be to encapsulate the generator inside a dataclass object. Here's what we'll ask each class definition to do:
* Create a parameter corresponding to the simulation environment. This makes our classes reusable in other simulations by eliminating a reference to a global variable.
* Create parameters with reasonable default values.
* Initialize any objects used within the class.
* Register the class generator with the simulation environment.
```
from dataclasses import dataclass
# create simulation models
@dataclass
class Checkout():
env: simpy.Environment
lane: simpy.Store = None
t_item: float = 1/10
item_limit: int = 25
t_payment: float = 2.0
def __post_init__(self):
self.lane = simpy.Store(self.env)
self.env.process(self.process())
def process(self):
while True:
customer_id, cart, enter_time = yield self.lane.get()
wait_time = env.now - enter_time
yield env.timeout(self.t_payment + cart*self.t_item)
customer_log.append([customer_id, cart, enter_time, wait_time, env.now])
@dataclass
class CustomerGenerator():
env: simpy.Environment
rate: float = 1.0
customer_id: int = 1
def __post_init__(self):
self.env.process(self.process())
def process(self):
while True:
yield env.timeout(random.expovariate(self.rate))
cart = random.randint(1, 25)
available_checkouts = [checkout for checkout in checkouts if cart <= checkout.item_limit]
checkout = min(available_checkouts, key=lambda checkout: len(checkout.lane.items))
yield checkout.lane.put([self.customer_id, cart, env.now])
self.customer_id += 1
def lane_logger(t_sample=0.1):
while True:
lane_log.append([env.now] + [len(checkout.lane.items) for checkout in checkouts])
yield env.timeout(t_sample)
# create simulation environment
env = simpy.Environment()
# create simulation objects (agents)
CustomerGenerator(env)
checkouts = [
Checkout(env, t_item=1/5, item_limit=25),
Checkout(env, t_item=1/5, item_limit=25),
Checkout(env, item_limit=5),
Checkout(env),
Checkout(env),
]
env.process(lane_logger())
# run process
customer_log = []
lane_log = []
env.run(until=600)
def visualize():
# extract lane data
lane_df = pd.DataFrame(lane_log, columns = ["time"] + [f"lane {n}" for n in range(0, len(checkouts))])
lane_df = lane_df.set_index("time")
customer_df = pd.DataFrame(customer_log, columns = ["customer id", "cart items", "enter", "wait", "leave"])
customer_df["elapsed"] = customer_df["leave"] - customer_df["enter"]
# compute kpi's
print(f"Average waiting time = {customer_df['wait'].mean():5.2f} minutes")
print(f"\nAverage lane queue \n{lane_df.mean()}")
print(f"\nOverall aaverage lane queue \n{lane_df.mean().mean():5.4f}")
# plot results
fig, ax = plt.subplots(3, 1, figsize=(12, 7))
ax[0].plot(lane_df)
ax[0].set_xlabel("time / min")
ax[0].set_title("length of checkout lanes")
ax[0].legend(lane_df.columns)
ax[1].bar(customer_df["customer id"], customer_df["wait"])
ax[1].set_xlabel("customer id")
ax[1].set_ylabel("minutes")
ax[1].set_title("customer waiting time")
ax[2].bar(customer_df["customer id"], customer_df["elapsed"])
ax[2].set_xlabel("customer id")
ax[2].set_ylabel("minutes")
ax[2].set_title("total elapsed time")
plt.tight_layout()
visualize()
```
## Customers as agents
```
from dataclasses import dataclass
# create simulation models
@dataclass
class Checkout():
env: simpy.Environment
lane: simpy.Store = None
t_item: float = 1/10
item_limit: int = 25
t_payment: float = 2.0
def __post_init__(self):
self.lane = simpy.Store(self.env)
self.env.process(self.process())
def process(self):
while True:
customer_id, cart, enter_time = yield self.lane.get()
wait_time = env.now - enter_time
yield env.timeout(self.t_payment + cart*self.t_item)
customer_log.append([customer_id, cart, enter_time, wait_time, env.now])
@dataclass
class CustomerGenerator():
env: simpy.Environment
rate: float = 1.0
customer_id: int = 1
def __post_init__(self):
self.env.process(self.process())
def process(self):
while True:
yield env.timeout(random.expovariate(self.rate))
Customer(self.env, self.customer_id)
self.customer_id += 1
@dataclass
class Customer():
env: simpy.Environment
id: int = 0
def __post_init__(self):
self.cart = random.randint(1, 25)
self.env.process(self.process())
def process(self):
available_checkouts = [checkout for checkout in checkouts if self.cart <= checkout.item_limit]
checkout = min(available_checkouts, key=lambda checkout: len(checkout.lane.items))
yield checkout.lane.put([self.id, self.cart, env.now])
def lane_logger(t_sample=0.1):
while True:
lane_log.append([env.now] + [len(checkout.lane.items) for checkout in checkouts])
yield env.timeout(t_sample)
# create simulation environment
env = simpy.Environment()
# create simulation objects (agents)
CustomerGenerator(env)
checkouts = [
Checkout(env, t_item=1/5, item_limit=25),
Checkout(env, t_item=1/5, item_limit=25),
Checkout(env, item_limit=5),
Checkout(env),
Checkout(env),
]
env.process(lane_logger())
# run process
customer_log = []
lane_log = []
env.run(until=600)
visualize()
```
## Creating Smart Objects
```
from dataclasses import dataclass, field
import pandas as pd
# create simulation models
@dataclass
class Checkout():
lane: simpy.Store
t_item: float = 1/10
    item_limit: int = 25
    t_payment: float = 2.0   # fixed time to process a payment
def process(self):
while True:
customer_id, cart, enter_time = yield self.lane.get()
wait_time = env.now - enter_time
            yield env.timeout(self.t_payment + cart*self.t_item)
customer_log.append([customer_id, cart, enter_time, wait_time, env.now])
@dataclass
class CustomerGenerator():
rate: float = 1.0
customer_id: int = 1
def process(self):
while True:
yield env.timeout(random.expovariate(self.rate))
cart = random.randint(1, 25)
available_checkouts = [checkout for checkout in checkouts if cart <= checkout.item_limit]
checkout = min(available_checkouts, key=lambda checkout: len(checkout.lane.items))
yield checkout.lane.put([self.customer_id, cart, env.now])
self.customer_id += 1
@dataclass
class LaneLogger():
lane_log: list = field(default_factory=list) # this creates a variable that can be modified
t_sample: float = 0.1
lane_df: pd.DataFrame = field(default_factory=pd.DataFrame)
def process(self):
while True:
self.lane_log.append([env.now] + [len(checkout.lane.items) for checkout in checkouts])
yield env.timeout(self.t_sample)
def report(self):
        self.lane_df = pd.DataFrame(self.lane_log, columns = ["time"] + [f"lane {n}" for n in range(len(checkouts))])
self.lane_df = self.lane_df.set_index("time")
print(f"\nAverage lane queue \n{self.lane_df.mean()}")
print(f"\nOverall average lane queue \n{self.lane_df.mean().mean():5.4f}")
def plot(self):
        self.lane_df = pd.DataFrame(self.lane_log, columns = ["time"] + [f"lane {n}" for n in range(len(checkouts))])
self.lane_df = self.lane_df.set_index("time")
fig, ax = plt.subplots(1, 1, figsize=(12, 3))
ax.plot(self.lane_df)
ax.set_xlabel("time / min")
ax.set_title("length of checkout lanes")
ax.legend(self.lane_df.columns)
# create simulation environment
env = simpy.Environment()
# create simulation objects (agents)
customer_generator = CustomerGenerator()
checkouts = [
Checkout(simpy.Store(env), t_item=1/5),
Checkout(simpy.Store(env), t_item=1/5),
Checkout(simpy.Store(env), item_limit=5),
Checkout(simpy.Store(env)),
Checkout(simpy.Store(env)),
]
lane_logger = LaneLogger()
# register agents
env.process(customer_generator.process())
for checkout in checkouts:
env.process(checkout.process())
env.process(lane_logger.process())
# run process
env.run(until=600)
# plot results
lane_logger.report()
lane_logger.plot()
```
|
github_jupyter
|
# 01 - Introduction to numpy: why does numpy exist?
You might have read somewhere that Python is "slow" in comparison to some other languages. While generally true, this statement has little meaning without context. As a scripting language (e.g. for simplifying tasks such as file renaming, data download, etc.), python is fast enough. For *numerical computations* (like the computations done by an atmospheric model or by a machine learning algorithm), "pure" Python is indeed very slow. Fortunately, there is a way to overcome this problem!
In this chapter we are going to explain why the [numpy](http://numpy.org) library was created. Numpy is the fundamental library which transformed the general purpose python language into a scientific language like Matlab, R or IDL.
Before introducing numpy, we will discuss some of the differences between python and compiled languages widely used in scientific software development (like C and FORTRAN).
## Why is python "slow"?
In the next unit about numbers, we'll learn that the memory consumption of a python ``int`` is larger than the memory needed to store the binary number alone. This overhead in memory consumption is due to the nature of python data types, which are all **objects**. We've already learned that these objects come with certain "services".
Everything is an object in Python. Yes, even functions are objects! Let me prove it to you:
```
def useful_function(a, b):
"""This function adds two objects together.
Parameters
----------
a : an object
b : another object
Returns
-------
The sum of the two
"""
return a + b
type(useful_function)
print(useful_function.__doc__)
```
Functions are objects of type ``function``, and one of their attributes (``__doc__``) gives us access to their **docstring**. During the course of the semester you are going to learn how to use more and more of these object features, and hopefully you are going to like them more and more (at least this is what happened to me).
Now, why does this make python "slow"? Well, in simple terms, these "services" tend to increase the complexity and the number of operations an interpreter has to perform when running a program. More specialized languages will be less flexible than python, but will be faster at running specialized operations and be less memory hungry (because they don't need this overhead of flexible memory on top of every object).
Python's high-level of abstraction (i.e. python's flexibility) makes it slower than its lower-level counterparts like C or FORTRAN. But, why is that so?
## Dynamically versus statically typed languages
Python is a so-called **dynamically typed** language, which means that the **type** of a variable is determined by the interpreter *at run time*. To understand what that means in practice, let's have a look at the following code snippet:
```
a = 2
b = 3
c = a + b
```
The line ``c = a + b`` is valid python syntax. The *operation* that has to be applied by the ``+`` operator, however, depends on the type of the variables to be added. Remember what happens when adding two lists for example:
```
a = [2]
b = [3]
a + b
```
In this simple example it would be theoretically possible for the interpreter to predict which operation to apply beforehand (by parsing all lines of code prior to the action). In most cases, however, this is impossible: for example, a function taking arguments does not know beforehand the type of the arguments it will receive.
Languages which assess the type of variables *at run time* are called [dynamically typed programming languages](https://en.wikipedia.org/wiki/Category:Dynamically_typed_programming_languages). Matlab, Python or R are examples falling in this category.
**Statically typed languages**, however, require the *programmer* to provide the type of variables while writing the code. Here is an example of a program written in C:
```c
#include <stdio.h>
int main ()
{
int a = 2;
int b = 3;
int c = a + b;
printf ("Sum of two numbers : %d \n", c);
}
```
The major difference with the Python code above is that the programmer indicated the type of the variables when they are assigned. Variable type definition in the code script is an integral part of the C syntax. This applies to the variables themselves, but also to the output of computations. This is a fundamental difference to python, and comes with several advantages. Static typing usually results in code that executes faster: since the program knows the exact data types that are in use, it can predict the memory consumption of operations beforehand and produce optimized machine code. Another advantage is code documentation: the statement ``int c = a + b`` makes it clear that we are adding two numbers while the python equivalent ``c = a + b`` could produce a number, a string, a list, etc.
## Compiled versus interpreted languages
Statically typed languages often require **compilation**. To run the C code snippet I had to create a new text file (``example.c``), write the code, compile it (``$ gcc -o myprogram example.c``), before finally being able to execute it (``$ ./myprogram``).
[gcc](https://en.wikipedia.org/wiki/GNU_Compiler_Collection) is the compiler I used to translate the C source code (a text file) to a low level language (machine code) in order to create an **executable** (``myprogram``). Later changes to the source code require a new compilation step for the changes to take effect.
Because of this "edit-compile-run" cycle, compiled languages are not interactive: in the C language, there is no equivalent to python's command line interpreter. Compiling complex programs can take up to several hours in some extreme cases. This compilation time, however, is usually associated with faster execution times: as mentioned earlier, the compiler's task is to optimize the program for memory consumption by source code analysis. Often, a compiled program is optimized for the machine architecture it is compiled onto. Like interpreters, there can be different compilers for the same language. They differ in the optimization steps they undertake to make the program faster, and in their support of various hardware architectures.
**Interpreters** do not require compilation: they analyze the code at run time. The following code for example is syntactically correct:
```
def my_func(a, b):
return a + b
```
but the *execution* of this code results in a `TypeError` when the variables have the wrong type:
```
my_func(1, '2')
```
The interpreter cannot detect these errors before runtime: they happen when the variables are finally added together, not when they are created.
**Parenthesis I: python bytecode**
When executing a python program from the command line, the CPython interpreter creates a hidden directory called ``__pycache__``. This directory contains [bytecode](https://en.wikipedia.org/wiki/Bytecode) files, which are your python source code files translated to binary files. This is an optimization step which makes subsequent executions of the program run faster. While this conversion step is sometimes called "compilation", it should not be mistaken with a C-program compilation: indeed, python bytecode still needs an interpreter to run, while compiled executables can be run without C interpreter.
**Parenthesis II: static typing and compilation**
Statically typed languages are often compiled, and dynamically typed languages are often interpreted. While this is a good rule of thumb, this is not always true and the vast landscape of programming languages contains many exceptions. This lecture is only a very short introduction to these concepts: you'll have to refer to more advanced computer science lectures if you want to learn about these topics in more detail.
## Here comes numpy
Let's summarize the two previous chapters:
- Python is flexible, interactive and slow
- C is less flexible, non-interactive and fast
This is a simplification, but not far from the truth.
Now, let's add another obstacle to using python for science: the built-in `list` data type in python is mostly useless for arithmetic or vector computations. Indeed, to add two lists together element-wise (a behavior that you would expect as a scientist), you must write:
```
def add_lists(A, B):
"""Element-wise addition of two lists."""
return [a + b for a, b in zip(A, B)]
add_lists([1, 2, 3], [4, 5, 6])
```
The numpy equivalent is much more intuitive and straightforward:
```
import numpy as np
def add_arrays(A, B):
return np.add(A, B)
add_arrays([1, 2, 3], [4, 5, 6])
```
Let's see which of the two functions runs faster:
```
n = 10
A = np.random.randn(n)
B = np.random.randn(n)
%timeit add_lists(A, B)
%timeit add_arrays(A, B)
```
Numpy is approximately 5-6 times faster.
```{exercise}
Repeat the performance test with n=100 and n=10000. How does the performance scale with the size of the array? Now repeat the test but make the input arguments ``A`` and ``B`` *lists* instead of numpy arrays. How is the performance comparison in this case? Why?
```
Why is numpy so much faster than pure python? One of the major reasons is **vectorization**, which is the process of applying mathematical operations to *all* elements of an array ("vector") at once instead of looping through them like we would do in pure python. "for loops" in python are slow because for each addition, python has to:
- access the elements a and b in the lists A and B
- check the type of both a and b
- apply the ``+`` operator on the data they store
- store the result.
Numpy skips the first two steps and does them only once before the actual operation. What does numpy know about the addition operation that the pure python version can't infer?
- the type of all numbers to add
- the type of the output
- the size of the output array (same as input)
Numpy can use this information to optimize the computation, but this isn't possible without trade-offs. See the following for example:
```
add_lists([1, 'foo'], [3, 'bar']) # works fine
add_arrays([1, 'foo'], [3, 'bar']) # raises a TypeError
```
$\rightarrow$ **numpy can only be that fast because the input and output data types are uniform and known before the operation**.
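A quick way to see this uniformity at work (a small sketch, not part of the original text): numpy silently settles on a single dtype for the whole array, upcasting or coercing elements as needed.
```
import numpy as np

print(np.array([1, 2, 3]).dtype)      # an integer dtype (int64 on most platforms)
print(np.array([1, 2.5, 3]).dtype)    # float64 -- the integers are upcast
print(np.array([1, 'foo']).dtype)     # a string dtype -- everything is coerced to text
```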
Internally, numpy achieves vectorization by relying on a lower-level, statically typed and compiled language: C! At the time of writing, about 35% of the [numpy codebase](https://github.com/numpy/numpy) is written in C/C++. The rest of the codebase offers an interface (a "layer") between python and the internal C code.
As a result, numpy has to be *compiled* at installation. Most users do not notice this compilation step anymore (recent pip and conda installations are shipped with pre-compiled binaries), but installing numpy used to require several minutes on my laptop when I started to learn python myself.
## Take home points
- The process of "type checking" may occur either at compile-time (statically typed language) or at runtime (dynamically typed language). These terms are not usually used in a strict sense.
- Statically typed languages are often compiled, while dynamically typed languages are interpreted.
- There is a trade-off between the flexibility of a language and its speed: static typing allows programs to be optimized at compilation time, thus allowing them to run faster. But writing code in a statically typed language is slower, and interactive data exploration is hardly possible at all.
- When speed matters, python allows you to use compiled libraries behind a python interface. numpy uses C under the hood to optimize its computations.
- numpy arrays use a contiguous block of memory of homogeneous data type. This allows for faster memory access and easy vectorization of mathematical operations (see the short check after this list).
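As a small check of that last point (a sketch, not from the original text), the array metadata shows the homogeneous, contiguous layout directly:
```
import numpy as np

a = np.arange(5, dtype=np.float64)
print(a.dtype, a.itemsize, a.nbytes)    # float64 8 40: five elements of 8 bytes each
print(a.flags['C_CONTIGUOUS'])          # True: the elements sit next to each other in memory
```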
## Further reading
I highly recommend having a look at the first part of Jake Vanderplas' blog post, [Why python is slow](https://jakevdp.github.io/blog/2014/05/09/why-python-is-slow/) (up to the "hacking" part). It provides more details and a good visual illustration of the ``c = a + b`` example. The second part is a more involved read, but very interesting too!
## Addendum: is python really *that* slow?
The short answer is: yes, python is slower than a number of other languages. You'll find many benchmarks online illustrating it.
Is it bad?
No. Jake Vanderplas (a well known contributor of the scientific python community) [writes](https://jakevdp.github.io/blog/2014/05/09/why-python-is-slow/#So-Why-Use-Python?):
*As well, it comes down to this: dynamic typing makes Python easier to use than C. It's extremely flexible and forgiving, this flexibility leads to efficient use of development time, and on those occasions that you really need the optimization of C or Fortran, Python offers easy hooks into compiled libraries. It's why Python use within many scientific communities has been continually growing. With all that put together, Python ends up being an extremely efficient language for the overall task of doing science with code.*
It's the flexibility and readability of python that makes it so popular. Python is the language of choice for major actors like [instagram](https://www.youtube.com/watch?v=66XoCk79kjM) or [spotify](https://labs.spotify.com/2013/03/20/how-we-use-python-at-spotify/), and it has become the high-level interface to highly optimized machine learning libraries like [TensorFlow](https://github.com/tensorflow/tensorflow) or [Torch](http://pytorch.org/).
For a scientist, writing code efficiently is *much* more important than writing efficient code. Or [is it](https://xkcd.com/1205/)?
|
github_jupyter
|
<img src="https://github.com/pmservice/ai-openscale-tutorials/raw/master/notebooks/images/banner.png" align="left" alt="banner">
# Working with Watson Machine Learning - Quality Monitor and Feedback Logging
### Contents
- [1.0 Install Python Packages](#setup)
- [2.0 Configure Credentials](#credentials)
- [3.0 OpenScale configuration](#openscale)
- [4.0 Get Subscriptions](#subscription)
- [5.0 Quality monitoring and Feedback logging](#quality)
# 1.0 Install Python Packages <a name="setup"></a>
```
!rm -rf /home/spark/shared/user-libs/python3.6*
!pip install --upgrade ibm-ai-openscale==2.2.1 --no-cache | tail -n 1
!pip install --upgrade watson-machine-learning-client-V4==1.0.55 | tail -n 1
!pip install --upgrade pyspark==2.3 | tail -n 1
```
### Action: restart the kernel!
# 2.0 Configure credentials <a name="credentials"></a>
```
import warnings
warnings.filterwarnings('ignore')
```
The url for `WOS_CREDENTIALS` is the url of the CP4D cluster, i.e. `https://zen-cpd-zen.apps.com`. `username` and `password` must be ones that have cluster admin privileges.
```
WOS_CREDENTIALS = {
"url": "https://zen-cpd-zen.cp4d-2bef4fc2b2-0001.us-south.containers.appdomain.cloud",
"username": "*****",
"password": "*****"
}
WML_CREDENTIALS = WOS_CREDENTIALS.copy()
WML_CREDENTIALS['instance_id']='openshift'
WML_CREDENTIALS['version']='2.5.0'
%store -r MODEL_NAME
%store -r DEPLOYMENT_NAME
%store -r DEFAULT_SPACE
```
# 3.0 Configure OpenScale <a name="openscale"></a>
The notebook will now import the necessary libraries and configure OpenScale
```
from watson_machine_learning_client import WatsonMachineLearningAPIClient
import json
wml_client = WatsonMachineLearningAPIClient(WML_CREDENTIALS)
from ibm_ai_openscale import APIClient4ICP
from ibm_ai_openscale.engines import *
from ibm_ai_openscale.utils import *
from ibm_ai_openscale.supporting_classes import PayloadRecord, Feature
from ibm_ai_openscale.supporting_classes.enums import *
ai_client = APIClient4ICP(WOS_CREDENTIALS)
ai_client.version
```
# 4.0 Get Subscription <a name="subscription"></a>
```
subscription = None
if subscription is None:
subscriptions_uids = ai_client.data_mart.subscriptions.get_uids()
for sub in subscriptions_uids:
if ai_client.data_mart.subscriptions.get_details(sub)['entity']['asset']['name'] == MODEL_NAME:
print("Found existing subscription.")
subscription = ai_client.data_mart.subscriptions.get(sub)
if subscription is None:
print("No subscription found. Please run openscale-initial-setup.ipynb to configure.")
```
### Set Deployment UID
```
wml_client.set.default_space(DEFAULT_SPACE)
wml_deployments = wml_client.deployments.get_details()
deployment_uid = None
for deployment in wml_deployments['resources']:
print(deployment['entity']['name'])
if DEPLOYMENT_NAME == deployment['entity']['name']:
deployment_uid = deployment['metadata']['guid']
break
print(deployment_uid)
```
# 5.0 Quality monitoring and Feedback logging
<a name="quality"></a>
## 5.1 Enable quality monitoring
The code below enables monitors once the payload logging table has been set up. First, it turns on the quality (accuracy) monitor and sets an alert threshold of 70%. OpenScale will show an alert on the dashboard if the model accuracy measurement (area under the curve, in the case of a binary classifier) falls below this threshold.
The second parameter supplied, `min_records`, specifies the minimum number of feedback records OpenScale needs before it calculates a new measurement. The quality monitor runs hourly, but the accuracy reading in the dashboard will not change until an additional 50 feedback records have been added, via the user interface, the Python client, or the supplied feedback endpoint.
```
subscription.quality_monitoring.enable(threshold=0.7, min_records=50)
```
## 5.2 Feedback logging
The code below downloads and stores enough feedback data to meet the minimum threshold so that OpenScale can calculate a new accuracy measurement. It then kicks off the accuracy monitor. The monitors run hourly, or can be initiated via the Python API, the REST API, or the graphical user interface.
```
!rm additional_feedback_data.json
!wget https://raw.githubusercontent.com/IBM/monitor-ibm-cloud-pak-with-watson-openscale/master/data/additional_feedback_data.json
with open('additional_feedback_data.json') as feedback_file:
additional_feedback_data = json.load(feedback_file)
subscription.feedback_logging.store(additional_feedback_data['data'])
subscription.feedback_logging.show_table()
run_details = subscription.quality_monitoring.run(background_mode=False)
subscription.quality_monitoring.show_table()
%matplotlib inline
quality_pd = subscription.quality_monitoring.get_table_content(format='pandas')
quality_pd.plot.barh(x='id', y='value');
ai_client.data_mart.get_deployment_metrics()
```
## Congratulations!
You have finished the hands-on lab for IBM Watson OpenScale. You can now view the OpenScale dashboard by going to the CPD `Home` page and clicking `Services`. Choose the `OpenScale` tile and click the menu to `Open`. Click on the tile for the model you've created to see the Quality monitor. Click on the time-series graph to get detailed information on transactions during a specific time window.
OpenScale shows model performance over time. You have two options to keep data flowing to your OpenScale graphs:
* Download, configure and schedule the [model feed notebook](https://raw.githubusercontent.com/emartensibm/german-credit/master/german_credit_scoring_feed.ipynb). This notebook can be set up with your WML credentials, and scheduled to provide a consistent flow of scoring requests to your model, which will appear in your OpenScale monitors.
* Re-run this notebook. Running this notebook from the beginning will delete and re-create the model and deployment, and re-create the historical data. Please note that the payload and measurement logs for the previous deployment will continue to be stored in your datamart, and can be deleted if necessary.
## Authors
Eric Martens is a technical specialist with expertise in the analysis and description of business processes and their translation into functional and non-functional IT requirements. He acts as the interpreter between the worlds of IT and business.
Lukasz Cmielowski, PhD, is an Automation Architect and Data Scientist at IBM with a track record of developing enterprise-level applications that substantially increase clients' ability to turn data into actionable knowledge.
Zilu (Peter) Tang is a cognitive developer with expertise in deep learning and enterprise AI solutions, from Watson OpenScale to many other cutting-edge IBM research projects.
|
github_jupyter
|
This notebook first collects all the stats obtained in the initial exploration.
The result is a big table, indexed by subset, neuron, structure, and optimization.
# result:
I will use k9cX + k6s2 + vanilla as my basis.
```
import h5py
import numpy as np
import os.path
from functools import partial
from collections import OrderedDict
import pandas as pd
pd.options.display.max_rows = 100
pd.options.display.max_columns = 100
from scipy.stats import pearsonr
# get number of parameters.
from tang_jcompneuro import dir_dictionary
from tang_jcompneuro.cnn_exploration_pytorch import get_num_params
num_param_dict = get_num_params()
def print_relevant_models():
for x, y in num_param_dict.items():
if x.startswith('k9c') and 'k6s2max' in x and x.endswith('vanilla'):
print(x, y)
print_relevant_models()
def generic_call_back(name, obj, env):
if isinstance(obj, h5py.Dataset):
arch, dataset, subset, neuron_idx, opt = name.split('/')
assert dataset == 'MkA_Shape'
neuron_idx = int(neuron_idx)
corr_this = obj.attrs['corr']
if corr_this.dtype != np.float32:
# this will get hit by my code.
assert corr_this == 0.0
env['result'].append(
{
'subset': subset,
'neuron': neuron_idx,
'arch': arch,
'opt': opt,
'corr': corr_this,
'time': obj.attrs['time'],
# 'num_param': num_param_dict[arch],
}
)
def collect_all_data():
cnn_explore_dir = os.path.join(dir_dictionary['models'], 'cnn_exploration')
env = {'result': []}
count = 0
for root, dirs, files in os.walk(cnn_explore_dir):
for f in files:
if f.lower().endswith('.hdf5'):
count += 1
if count % 100 == 0:
print(count)
f_check = os.path.join(root, f)
with h5py.File(f_check, 'r') as f_metric:
f_metric.visititems(partial(generic_call_back, env=env))
result = pd.DataFrame(env['result'], columns=['subset', 'neuron', 'arch', 'opt', 'corr', 'time'])
result = result.set_index(['subset', 'neuron', 'arch', 'opt'], verify_integrity=True)
print(count)
return result
all_data = collect_all_data()
# 66 (arch) x 35 (opt) x 2 (subsets) x 14 (neurons per subset)
assert all_data.shape == (64680, 2)
%matplotlib inline
import matplotlib.pyplot as plt
def check_run_time():
# check time. as long as it's fast, it's fine.
time_all = all_data['time'].values
plt.close('all')
plt.hist(time_all, bins=100)
plt.show()
print(time_all.min(), time_all.max(),
np.median(time_all), np.mean(time_all))
print(np.sort(time_all)[::-1][:50])
check_run_time()
# seems that it's good to check those with more than 100 sec.
def check_long_ones():
long_runs = all_data[all_data['time']>=100]
return long_runs
# typically, long cases are from adam.
# I'm not sure whether these numbers are accurate, but maybe let's ignore them for now.
check_long_ones()
# I think it's easier to analyze per data set.
def study_one_subset(df_this_only_corr):
# this df_this_only_corr should be a series.
# with (neuron, arch, opt) as the (multi) index.
# first, I want to know how good my opt approximation is.
#
# I will show two ways.
# first, use my opt approximation to replace the best
# one for every combination of neuron and arch.
# show scatter plot, pearsonr, as well as how much performance is lost.
#
# second, I want to see, if for each neuron I choose the best architecture,
# how much performance is lost.
#
# there are actually two ways to choose best architecture.
# a) one is, best one is chosen based on the exact version of loss.
# b) another one is, best one is chosen separately.
#
# by the last plot in _examine_opt (second, b)), you can see that,
    # given enough architectures to choose from, these optimization methods can achieve near optimal performance.
a = _examine_opt(df_this_only_corr)
    # ok. then, I'd like to check architectures.
# here, I will use these arch's performance on the approx version.
_examine_arch(a)
def _examine_arch(df_neuron_by_arch):
    # mark the input as tmp_stuff.
# then you can run things like
# tmp_stuff.T.mean(axis=1).sort_values()
# or tmp_stuff.T.median(axis=1).sort_values()
# my finding is that k9cX_nobn_k6s2max_vanilla
# where X is number of channels often perform best.
# essentially, I can remove those k13 stuff.
# also, dropout and factored works poorly.
# so remove them as well.
# k6s2 stuff may not be that evident.
# so I will examine that next.
print(df_neuron_by_arch.T.mean(axis=1).sort_values(ascending=False).iloc[:10])
print(df_neuron_by_arch.T.median(axis=1).sort_values(ascending=False).iloc[:10])
columns = df_neuron_by_arch.columns
    columns_to_preserve = [x for x in columns if x.startswith('k9c') and x.endswith('vanilla')]
    df_neuron_by_arch = df_neuron_by_arch[columns_to_preserve]
print(df_neuron_by_arch.T.mean(axis=1).sort_values(ascending=False))
print(df_neuron_by_arch.T.median(axis=1).sort_values(ascending=False))
# just search 'k6s2max' in the output, and see that most of them are on top.
def show_stuff(x1, x2, figsize=(10, 10), title='',
xlabel=None, ylabel=None):
plt.close('all')
plt.figure(figsize=figsize)
plt.scatter(x1, x2, s=5)
plt.xlim(0,1)
plt.ylim(0,1)
if xlabel is not None:
plt.xlabel(xlabel)
if ylabel is not None:
plt.ylabel(ylabel)
plt.plot([0,1], [0,1], linestyle='--', color='r')
plt.title(title + 'corr {:.2f}'.format(pearsonr(x1,x2)[0]))
plt.axis('equal')
plt.show()
def _extract_max_value_from_neuron_by_arch_stuff(neuron_by_arch_stuff: np.ndarray, max_idx=None):
assert isinstance(neuron_by_arch_stuff, np.ndarray)
n_neuron, n_arch = neuron_by_arch_stuff.shape
if max_idx is None:
max_idx = np.argmax(neuron_by_arch_stuff, axis=1)
assert max_idx.shape == (n_neuron,)
best_perf_per_neuron = neuron_by_arch_stuff[np.arange(n_neuron), max_idx]
assert best_perf_per_neuron.shape == (n_neuron, )
# OCD, sanity check.
for neuron_idx in range(n_neuron):
assert best_perf_per_neuron[neuron_idx] == neuron_by_arch_stuff[neuron_idx, max_idx[neuron_idx]]
return neuron_by_arch_stuff[np.arange(n_neuron), max_idx], max_idx
def _examine_opt(df_this_only_corr):
# seems that best opt can be approximated by max(1e-3L2_1e-3L2_adam002_mse, 1e-4L2_1e-3L2_adam002_mse,
# '1e-3L2_1e-3L2_sgd_mse', '1e-4L2_1e-3L2_sgd_mse')
# let's see how well that goes.
# this is by running code like
# opt_var = all_data['corr'].xs('OT', level='subset').unstack('arch').unstack('neuron').median(axis=1).sort_values()
# where you can replace OT with all,
# median with mean.
# and check by eye.
# notice that mean and median may give pretty different results.
opt_approxer = (
'1e-3L2_1e-3L2_adam002_mse', '1e-4L2_1e-3L2_adam002_mse',
'1e-3L2_1e-3L2_sgd_mse', '1e-4L2_1e-3L2_sgd_mse'
)
opt_in_columns = df_this_only_corr.unstack('opt')
opt_best = opt_in_columns.max(axis=1).values
assert np.all(opt_best > 0)
opt_best_approx = np.asarray([df_this_only_corr.unstack('opt')[x].values for x in opt_approxer]).max(axis=0)
assert opt_best.shape == opt_best_approx.shape
# compute how much is lost.
preserved_performance = opt_best_approx.mean()/opt_best.mean()
print('preserved performance', preserved_performance)
show_stuff(opt_best, opt_best_approx, (8, 8), 'approx vs. exact, all arch, all neurons, ',
'exact', 'approx')
both_exact_and_opt = pd.DataFrame(OrderedDict([('exact', opt_best), ('approx', opt_best_approx)]),
index = opt_in_columns.index.copy())
both_exact_and_opt.columns.name = 'opt_type'
best_arch_performance_exact, max_idx = _extract_max_value_from_neuron_by_arch_stuff(both_exact_and_opt['exact'].unstack('arch').values)
best_arch_performance_approx, _ = _extract_max_value_from_neuron_by_arch_stuff(both_exact_and_opt['approx'].unstack('arch').values, max_idx)
best_arch_performance_own_idx, _ = _extract_max_value_from_neuron_by_arch_stuff(both_exact_and_opt['approx'].unstack('arch').values)
assert best_arch_performance_exact.shape == best_arch_performance_approx.shape
#return best_arch_performance_exact, best_arch_performance_approx
show_stuff(best_arch_performance_exact, best_arch_performance_approx, (6, 6),
'approx vs. exact, best arch (determined by exact), all neurons, ',
'exact', 'approx')
show_stuff(best_arch_performance_exact, best_arch_performance_own_idx, (6, 6),
'approx vs. exact, best arch (determined by each), all neurons, ',
'exact', 'approx')
return both_exact_and_opt['approx'].unstack('arch')
tmp_stuff = study_one_subset(all_data['corr'].xs('OT', level='subset'))
```
|
github_jupyter
|
# Prescient Tutorial
## Getting Started
This is a tutorial to demonstrate the basic functionality of Prescient. Please follow the installation instructions in the [README](https://github.com/grid-parity-exchange/Prescient/blob/master/README.md) before proceeding. This tutorial will assume we are using the CBC MIP solver; however, we will point out where one could use a different solver (CPLEX, Gurobi, Xpress).
## RTS-GMLC
We will use the RTS-GMLC test system as a demonstration. Prescient comes with a translator for the RTS-GMLC system data, which is publicly available [here](https://github.com/GridMod/RTS-GMLC). To find out more about the RTS-GMLC system, or if you use the RTS-GMLC system in published research, please see or cite the [RTS-GMLC paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8753693&isnumber=4374138&tag=1).
## IMPORTANT NOTE
In the near future, the dev team will allow more direct reading of data in the "RTS-GMLC" format into the simulator. In the past, we have created one-off scripts for each data set to put them in the format required by the populator.
### Downloading the RTS-GMLC data
```
# first, we'll use the built-in function to download the RTS-GMLC system to Prescicent/downloads/rts_gmlc
import prescient.downloaders.rts_gmlc as rts_downloader
# the download function has the path Prescient/downloads/rts_gmlc hard-coded.
# All it does is a 'git clone' of the RTS-GMLC repo
rts_downloader.download()
# we should be able to see the RTS-GMLC data now
import os
rts_gmlc_dir = rts_downloader.rts_download_path+os.sep+'RTS-GMLC'
print(rts_gmlc_dir)
os.listdir(rts_gmlc_dir)
```
### Converting RTS-GMLC data into the format for the "populator"
```
# first thing we'll do is to create a *.dat file template for the "static" data, e.g.,
# branches, buses, generators, to Prescicent/downloads/rts_gmlc/templates/rts_with_network_template_hotstart.dat
from prescient.downloaders.rts_gmlc_prescient.rtsgmlc_to_dat import write_template
write_template(rts_gmlc_dir=rts_gmlc_dir,
file_name=rts_downloader.rts_download_path+os.sep+'templates'+os.sep+'rts_with_network_template_hotstart.dat')
# next, we'll convert the included time-series data into input for the populator
# (this step can take a while because we set up an entire year's worth of data)
from prescient.downloaders.rts_gmlc_prescient.process_RTS_GMLC_data import create_timeseries
create_timeseries(rts_downloader.rts_download_path)
# Lastly, Prescient comes with some pre-made scripts and templates to help get up-and-running with RTS-GMLC.
# This function just puts those in rts_downloader.rts_download_path from
# Prescient/prescient/downloaders/rts_gmlc_prescient/runners
rts_downloader.copy_templates()
os.listdir(rts_downloader.rts_download_path)
```
NOTE: the above steps are completely automated in the `__main__` function of Prescient/prescient/downloaders/rts_gmlc.py
### Running the populator
Below we'll show how the populator is set up by the scripts above and subsequently run.
```
# we'll work in the directory we've set up now for
# running the populator and simulator
# If prescient is properly installed, this could be
# a directory anywhere on your system
os.chdir(rts_downloader.rts_download_path)
os.getcwd()
# helper for displaying *.txt files in jupyter
def print_file(file_n):
'''prints file contents to the screen'''
with open(file_n, 'r') as f:
for l in f:
print(l.strip())
```
Generally, one would call `runner.py populate_with_network_deterministic.txt` to set-up the data for the simulator. We'll give a brief overview below as to how that is orchestrated.
```
print_file('populate_with_network_deterministic.txt')
```
First, notice the `command/exec` line, which tells `runner.py` which command to execute. These `*.txt` files could be replaced with bash scripts, or run from the command line directly. In this case,
`populator.py --start-date 2020-07-10 --end-date 2020-07-16 --source-file sources_with_network.txt --output-directory deterministic_with_network_scenarios --scenario-creator-options-file deterministic_scenario_creator_with_network.txt
--traceback`
would give the same result. The use of the `*.txt` files enables saving these complex commands in a cross-platform compatible manner.
The `--start-date` and `--end-date` options specify the date range for which we'll generate simulator input. The `--output-directory` option gives the path (relative in this case) where the simulator input (the output of this script) should go. The `--sources-file` and `--scenario-creator-options-file` options point to other `*.txt` files.
#### --scenario-creator-options-file
```
print_file('deterministic_scenario_creator_with_network.txt')
```
This file points the `scenario_creator` to the templates created/copied above, which store the "static" Prescient data; e.g., `--scenario-template-file` points to the bus/branch/generator data. The `--tree-template-file` is deprecated at this point, pending re-introduction of stochastic unit commitment capabilities.
```
# This prints out the files entire contents, just to look at.
# See if you can find the set "NondispatchableGenerators"
print_file('templates/rts_with_network_template_hotstart.dat')
```
#### --sources-file
```
print_file('sources_with_network.txt')
```
This file connects each "Source" (e.g., `122_HYDRO_1`) in the file `templates/rts_with_network_template_hotstart.dat` to the `*.csv` files generated above for both load and renewable generation. Other things controlled here are whether a renewable resource is dispatchable at all.
```
# You could also run 'runner.py populate_with_network_deterministic.txt' from the command line
import prescient.scripts.runner as runner
runner.run('populate_with_network_deterministic.txt')
```
This creates the "input deck" for July 10, 2020 -- July 16, 2020 for the simulator in the ouput directory `determinstic_with_network_scenarios`.
```
sorted(os.listdir('deterministic_with_network_scenarios'+os.sep+'pyspdir_twostage'))
```
Inside each of these directories are the `*.dat` files specifying the simulation for each day.
```
sorted(os.listdir('deterministic_with_network_scenarios'+os.sep+'pyspdir_twostage'+os.sep+'2020-07-10'))
```
`Scenario_actuals.dat` contains the "actuals" for the day, which is used for the SCED problems, and `Scenario_forecast.dat` contains the "forecasts" for the day. The other `*.dat` files are hold-overs from stochastic mode.
`scenarios.csv` has forecast and actuals data for every uncertain generator in an easy-to-process format.
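As a quick peek at that file (a hedged sketch: the path assumes the populator run above, and the exact columns depend on the Prescient version):
```
import os
import pandas as pd

scenario_csv = os.path.join('deterministic_with_network_scenarios', 'pyspdir_twostage',
                            '2020-07-10', 'scenarios.csv')
scenarios = pd.read_csv(scenario_csv)
print(scenarios.columns.tolist())   # which generators/quantities are included
print(scenarios.head())
```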
### Running the simulator
Below we show how to set-up and run the simulator.
Below is the contents of the included `simulate_with_network_deterministic.txt`:
```
print_file('simulate_with_network_deterministic.txt')
```
Description of the options included are as follows:
- `--data-directory`: Where the source data is (same as outputs for the populator).
- `--simulate-out-of-sample`: This option directs the simulator to use different forecasts from actuals. Without it, the simulation is run with forecasts equal to actuals.
- `--run-sced-with-persistent-forecast-errors`: This option directs the simulator to use forecasts (adjusted by the current forecast error) for sced look-ahead periods, instead of using the actuals for sced look-ahead periods.
- `--output-directory`: Where to write the output data.
- `--run-deterministic-ruc`: Directs the simulator to run a deterministic (as opposed to stochastic) unit commitment problem. Required for now, as stochastic unit commitment is currently deprecated.
- `--start-date`: Day to start the simulation on. Must be in the data-directory.
- `--num-days`: Number of days to simulate, including the start date. All days must be included in the data-directory.
- `--sced-horizon`: Number of look-ahead periods (in hours) for the real-time economic dispatch problem.
- `--traceback`: If enabled, the simulator will print a trace if it failed.
- `--random-seed`: Unused currently.
- `--output-sced-initial-conditions`: Prints the initial conditions for the economic dispatch problem to the screen.
- `--output-sced-demands`: Prints the demands for the economic dispatch problem to the screen.
- `--output-sced-solutions`: Prints the solution for the economic dispatch problem to the screen.
- `--output-ruc-initial-conditions`: Prints the initial conditions for the unit commitment problem to the screen.
- `--output-ruc-solutions`: Prints the commitment solution for the unit commitment problem to the screen.
- `--output-ruc-dispatches`: Prints the dispatch solution for the unit commitment problem to the screen.
- `--output-solver-logs`: Prints the logs from the optimization solver (CBC, CPLEX, Gurobi, Xpress) to the screen.
- `--ruc-mipgap`: Optimality gap to use for the unit commitment problem. The default of 1% is used here -- it can often be tightened for commercial solvers.
- `--symbolic-solver-labels`: If set, `symbolic_solver_labels` is used when writing optimization models from Pyomo to the solver. Only useful for low-level debugging.
- `--reserve-factor`: If set, overwrites any basic reserve factor included in the test data.
- `--deterministic-ruc-solver`: The optimization solver ('cbc', 'cplex', 'gurobi', 'xpress') used for the unit commitment problem.
- `--sced-solver`: The optimization solver ('cbc', 'cplex', 'gurobi', 'xpress') used for the economic dispatch problem.
Other options not included in this file, which may be useful:
- `--compute-market-settlements`: (True/False) If enabled, solves a day-ahead pricing problem (in addition to the real-time pricing problem) and computes generator revenue based on day-ahead and real-time prices.
- `--day-ahead-pricing`: ('LMP', 'ELMP', 'aCHP') Specifies the type of day-ahead price to use. Default is 'aCHP'.
- `--price-threashold`: The maximum value for the energy price ($/MWh). Useful for when market settlements are computed to avoid very large LMP values when load shedding occurs.
- `--reserve-price-threashold`: The maximum value for the reserve price (\$/MW). Useful for when market settlements are computed to avoid very large LMP values when reserve shortfall occurs.
- `--deterministic-ruc-solver-options`: Options to pass into the unit commitment solver (specific to the solver used) for every unit commitment solve.
- `--sced-solver-options`: Options to pass into the economic dispatch solve (specific to the solver used) for every economic dispatch solve.
- `--plugin`: Path to a Python module to modify Prescient behavior.
```
# You could also run 'runner.py simulate_with_network_deterministic.txt' from the command line
# This runs a week of RTS-GMLC, which with the open-source cbc solver will take several (~12) minutes
import prescient.scripts.runner as runner
runner.run('simulate_with_network_deterministic.txt')
```
### Analyzing results
Summary and detailed `*.csv` files are written to the specified output directory (in this case, `deterministic_with_network_simulation_output`).
```
sorted(os.listdir('deterministic_with_network_simulation_output/'))
```
Below we give a brief description of the contents of each file; a short sketch after the list shows one way to load the summary output with pandas.
- `bus_detail.csv`: Detailed results (demand, LMP, etc.) by bus.
- `daily_summary.csv`: Summary results by day. Demand, renewables data, costs, load shedding/over generation, etc.
- `hourly_gen_summary.csv`: Gives total thermal headroom and data on reserves (shortfall, price) by hour.
- `hourly_summary.csv`: Summary results by hour. Similar to `daily_summary.csv`.
- `line_detail.csv`: Detailed results (flow in MW) by transmission line.
- `overall_simulation_output.csv`: Summary results for the entire simulation run. Similar to `daily_summary.csv`.
- `plots`: Directory containing stackgraphs for every day of the simulation.
- `renewables_detail.csv`: Detailed results (output, curtailment) by renewable generator.
- `runtimes.csv`: Runtimes for each economic dispatch problem.
- `thermal_detail.csv`: Detailed results (dispatch, commitment, costs) per thermal generator.
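Here is a minimal sketch of loading one of these summary files with pandas (the path assumes the output directory used above; the available columns depend on the Prescient version):
```
import os
import pandas as pd

out_dir = 'deterministic_with_network_simulation_output'
daily = pd.read_csv(os.path.join(out_dir, 'daily_summary.csv'))
print(daily.columns.tolist())   # inspect which summary quantities were written
print(daily.head())
```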
Generally, the first thing to look at as a sanity check is the stackgraphs:
```
dates = [f'2020-07-1{i}' for i in range(0,7)]
from IPython.display import Image
for date in dates:
display(Image('deterministic_with_network_simulation_output'+os.sep+'plots'+os.sep+'stackgraph_'+date+'.png',
width=500))
```
Due to the non-deterministic nature of most MIP solvers, your results may be slightly different than mine. For my simulation, two things stand out:
1. The load-shedding at the end of the day (hour 23) on July 12th.
2. The renewables curtailed the evening of July 15th into the morning of July 16th.
For this tutorial, let's hypothesize about the cause of (2). Often renewables are curtailed either because of a binding transmission constraint, or because some or all of the thermal generators are operating at minimum power. Let's investigate the first possibility.
#### Examining Loaded Transmission Lines
```
import pandas as pd
# load in the output data for the lines
line_flows = pd.read_csv('deterministic_with_network_simulation_output'+os.sep+'line_detail.csv', index_col=[0,1,2,3])
# load in the source data for the lines
line_attributes = pd.read_csv('RTS-GMLC'+os.sep+'RTS_Data'+os.sep+'SourceData'+os.sep+'branch.csv', index_col=0)
# get the line limits
line_limits = line_attributes['Cont Rating']
# get a series of flows
line_flows = line_flows['Flow']
line_flows
# rename the line_limits to match the
# index of line_flows
line_limits.index.name = "Line"
line_limits
lines_relative_flow = line_flows/line_limits
lines_near_limits_time = lines_relative_flow[ (lines_relative_flow > 0.99) | (lines_relative_flow < -0.99) ]
lines_near_limits_time
```
As we can see, near the end of the day on July 15th and the beginning of the day July 16th, several transmission constraints are binding, which correspond exactly to the periods of renewables curtailment in the stackgraphs above.
|
github_jupyter
|
# Prediction with random forests
Random forests are bagging models that do not require much *fine tuning* to obtain decent performance. Moreover, these methods are more resistant to overfitting than some other methods.
```
from google.colab import drive
drive.mount('/content/drive')
dossier_donnees = "/content/drive/My Drive/projet_info_Ensae"
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV
from sklearn import metrics
from sklearn.model_selection import GridSearchCV
import numpy as np
from matplotlib import pyplot as plt
```
## Reading the train and validation data
```
donnees = pd.read_csv(dossier_donnees + "/donnees_model/donnees_train.csv", index_col = 1)
donnees_validation = pd.read_csv(dossier_donnees + "/donnees_model/donnees_validation.csv", index_col = 1)
donnees.drop(columns= "Unnamed: 0", inplace = True)
donnees_validation.drop(columns= "Unnamed: 0", inplace = True)
```
We will make a few small modifications to the data:
- The variables `arrondissement`, `pp`, and `mois` will be treated as categorical variables
- The variable `datemut` will be dropped
- The variable `sbati_squa` is removed, following the recommendations of Maître Wenceslas Sanchez.
```
donnees["arrondissement"] = donnees["arrondissement"].astype("object")
donnees["pp"] = donnees["pp"].astype("object")
donnees["mois"] = donnees["datemut"].str[5:7].astype("object")
donnees_train = donnees.drop(columns = ["nblot", "nbpar", "nblocmut", "nblocdep","datemut","sbati_squa"])
donnees_validation["arrondissement"] = donnees_validation["arrondissement"].astype("object")
donnees_validation["pp"] = donnees_validation["pp"].astype("object")
donnees_validation["mois"] = donnees_validation["datemut"].str[5:7].astype("object")
donnees_validation.drop(columns = ["nblot", "nbpar", "nblocmut", "nblocdep","datemut","sbati_squa"], inplace = True)
donnees_train.rename(columns = {"valfoncact2" : "valfoncact"}, inplace = True)
donnees_validation.rename(columns = {"valfoncact2" : "valfoncact"}, inplace = True)
def preparation(table):
    # restrict to properties within a given price range
table = table[(table["valfoncact"] > 1e5) & (table["valfoncact"] < 3*(1e6))]
    # add raw (count) versions of the household and individual share variables
men_brut = table.loc[:, "Men":"Men_mais"].apply(lambda x : x*table["Men"], axis = 0).add_suffix("_brut")
ind_brut = table.loc[:, "Ind_0_3":"Ind_80p"].apply(lambda x : x*table["Ind"], axis = 0).add_suffix("_brut")
table = pd.concat([table, men_brut, ind_brut],axis = 1)
table_X = table.drop(columns = ["valfoncact"]).to_numpy()
table_Y = table["valfoncact"].to_numpy()
nom = table.drop(columns = ["valfoncact"]).columns
return(table_X,table_Y,nom)
donnees_validation_prep_X,donnees_validation_prep_Y,nom = preparation(donnees_validation)
donnees_train_prep_X,donnees_train_prep_Y,nom = preparation(donnees_train)
```
## Modeling
```
rf = RandomForestRegressor(random_state=42,n_jobs = -1)
```
For the number of variables tried at each split, Breiman [2000] recommends using $\sqrt{p}$ in regression problems, where $p$ is the number of covariates. Here $p$ is 67, so $\sqrt{p} \approx 8$; we therefore try 8 as well as 6 and 16.
The number of trees (`n_estimators`) is *a priori* not the most decisive factor for random forest performance beyond a certain threshold. We try *conventional* values here.
To control the depth of each CART tree, we use the minimum number of observations required in each leaf (`min_samples_leaf`): the larger it is, the smaller the tree. Note that the trees are not pruned here.
```
param_grid = {
'n_estimators': [100,200,500,1000],
'max_features': [6,8,16],
'min_samples_leaf' : [1,2,5,10]
}
rf_grid_search = GridSearchCV(estimator = rf, param_grid = param_grid,cv = 3, verbose=2, n_jobs = -1)
rf_grid_search.fit(donnees_train_prep_X, donnees_train_prep_Y)
print(rf_grid_search.best_params_)
rf2 = RandomForestRegressor(random_state=42,n_jobs = -1, max_features = 16, min_samples_leaf= 2, n_estimators= 1000)
rf2.fit(donnees_train_prep_X,donnees_train_prep_Y)
pred = rf2.predict(donnees_validation_prep_X)
np.sqrt(metrics.mean_squared_error(donnees_validation_prep_Y,pred))
```
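Since `RandomizedSearchCV` is imported above but not used, a cheaper alternative to the exhaustive grid could be sketched as follows. This is a minimal sketch that reuses the `rf` estimator and `param_grid` defined above; `n_iter=10` and `random_state=42` are illustrative choices, not values from the original study.
```
# minimal sketch: randomized search over the same parameter space
# (assumes rf, param_grid and donnees_train_prep_X/Y are defined above)
rf_random_search = RandomizedSearchCV(
    estimator=rf,
    param_distributions=param_grid,
    n_iter=10,        # sample 10 combinations instead of all 48
    cv=3,
    random_state=42,
    n_jobs=-1,
    verbose=2,
)
rf_random_search.fit(donnees_train_prep_X, donnees_train_prep_Y)
print(rf_random_search.best_params_)
```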
## Visualizing variable importance
To identify the variables that matter most for the prediction, we use the feature importances. Here this is an impurity-based importance, computed from the decrease in node impurity (the Gini decrease in the classification case).
```
sorted_idx = rf2.feature_importances_.argsort()
plt.figure(figsize=(10,15))
plt.barh(nom[sorted_idx], rf2.feature_importances_[sorted_idx])
plt.xlabel("Random Forest Feature Importance")
```
The most important variables for the prediction are:
- sbati: the floor area of the property
- pp: the number of rooms
- nv_par_hab: the standard of living per inhabitant of the 200-metre grid cell in which the property is located
- Men_mai: share of households living in houses
- arrondissement: the arrondissement in which the property is located
- Men_prop: share of owner-occupier households
- Ind_80p: share of people over 80
- Ind_65_79: share of people aged 65 to 79
- Men_mai_brut: raw number of households living in houses
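As a cross-check on the impurity-based importances above, which can be biased toward variables with many distinct values, here is a minimal sketch using scikit-learn's permutation importance on the validation set. It assumes `rf2`, `nom`, and the validation arrays defined above; `n_repeats=5` is an illustrative choice.
```
from sklearn.inspection import permutation_importance

# minimal sketch: permutation importance computed on the validation data
perm = permutation_importance(
    rf2,
    donnees_validation_prep_X,
    donnees_validation_prep_Y,
    n_repeats=5,
    random_state=42,
    n_jobs=-1,
)
perm_sorted_idx = perm.importances_mean.argsort()
plt.figure(figsize=(10, 15))
plt.barh(nom[perm_sorted_idx], perm.importances_mean[perm_sorted_idx])
plt.xlabel("Permutation Feature Importance (validation set)")
```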
|
github_jupyter
|
# Deutsch-Jozsa and Grover with Aqua
The Aqua library in Qiskit implements some common algorithms so that they can be used without needing to program the circuits for each case. In this notebook, we will show how we can use the Deutsch-Jozsa and Grover algorithms.
## Deutsch-Jozsa
To use the Deutsch-Jozsa algorithm, we need to import some extra packages in addition to the ones we have been using.
```
%matplotlib inline
from qiskit import *
from qiskit.visualization import *
from qiskit.tools.monitor import *
from qiskit.aqua import *
from qiskit.aqua.components.oracles import *
from qiskit.aqua.algorithms import *
```
To specify the elements of the Deutsch-Jozsa algorithm, we must use an oracle (the function that we need to test to see if it is constant or balanced). Aqua offers the possibility of defining this oracle at a high level, without giving the actual quantum gates, with *TruthTableOracle*.
*TruthTableOracle* receives a string of zeroes and ones of length $2^n$ that sets the value of the oracle for each of the $2^n$ binary strings in lexicographical order. For example, the string 0101 defines a boolean function that is 0 on 00 and 10 but 1 on 01 and 11 (and is thus balanced).
```
oracle = TruthTableOracle("0101")
oracle.construct_circuit().draw(output='mpl')
```
Once we have defined the oracle, we can easily create an instance of the Deutsch-Jozsa algorithm and draw the circuit.
```
dj = DeutschJozsa(oracle)
dj.construct_circuit(measurement=True).draw(output='mpl')
```
Obviously, we could execute this circuit on any backend. However, Aqua specifies some extra elements in addition to the circuit, such as how the results are to be interpreted.
To execute a quantum algorithm in Aqua, we need to pass it a *QuantumInstance* (which includes the backend and possibly other settings), and the algorithm will use it as many times as needed. The result will include information about the execution and, in the case of Deutsch-Jozsa, the final verdict.
```
backend = Aer.get_backend('qasm_simulator')
quantum_instance = QuantumInstance(backend)
result = dj.run(quantum_instance)
print(result)
```
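As a side note, the *QuantumInstance* can also carry execution settings such as the number of shots. Here is a minimal sketch reusing the `backend` and `dj` objects defined above; the 1024-shot value is only an illustrative choice.
```
# minimal sketch: a QuantumInstance with an explicit shot count
quantum_instance_1024 = QuantumInstance(backend, shots=1024)
result_1024 = dj.run(quantum_instance_1024)
print("The function is", result_1024['result'])
```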
Let us check that it also works with constant functions.
```
oracle2 = TruthTableOracle('00000000')
dj2 = DeutschJozsa(oracle2)
result = dj2.run(quantum_instance)
print("The function is",result['result'])
```
# Grover
As in the case of Deutsch-Jozsa, for the Aqua implementation of Grover's algorithm we need to provide an oracle. We can also specify the number of iterations.
```
backend = Aer.get_backend('qasm_simulator')
oracle3 = TruthTableOracle('0001')
g = Grover(oracle3, iterations=1)
```
The execution is similar to that of Deutsch-Jozsa.
```
result = g.run(quantum_instance)
print(result)
```
It can also be interesting to use oracles constructed from logical expressions.
```
expression = '(x | y) & (~y | z) & (~x | ~z | w) & (~x | y | z | ~w)'
oracle4 = LogicalExpressionOracle(expression)
g2 = Grover(oracle4, iterations = 3)
result = g2.run(quantum_instance)
print(result)
```
If we do not know the number of solutions, or if we do not want to specify the number of iterations, we can use the incremental mode, which allows us to find a solution in time $O(\sqrt{N})$.
```
backend = Aer.get_backend('qasm_simulator')
expression2 = '(x & y & z & w) | (~x & ~y & ~z & ~w)'
#expression2 = '(x & y) | (~x & ~y)'
oracle5 = LogicalExpressionOracle(expression2, optimization = True)
g3 = Grover(oracle5, incremental = True)
result = g3.run(quantum_instance)
print(result)
```
|
github_jupyter
|
# Facial Keypoint Detection
This project will be all about defining and training a convolutional neural network to perform facial keypoint detection, and using computer vision techniques to transform images of faces. The first step in any challenge like this will be to load and visualize the data you'll be working with.
Let's take a look at some examples of images and corresponding facial keypoints.
<img src='images/key_pts_example.png' width=50% height=50%/>
Facial keypoints (also called facial landmarks) are the small magenta dots shown on each of the faces in the image above. In each training and test image, there is a single face and **68 keypoints, with coordinates (x, y), for that face**. These keypoints mark important areas of the face: the eyes, corners of the mouth, the nose, etc. These keypoints are relevant for a variety of tasks, such as face filters, emotion recognition, pose recognition, and so on. Here they are, numbered, and you can see that specific ranges of points match different portions of the face.
<img src='images/landmarks_numbered.jpg' width=30% height=30%/>
---
## Load and Visualize Data
The first step in working with any dataset is to become familiar with your data; you'll need to load in the images of faces and their keypoints and visualize them! This set of image data has been extracted from the [YouTube Faces Dataset](https://www.cs.tau.ac.il/~wolf/ytfaces/), which consists of short videos of people taken from YouTube. These videos have been fed through some processing steps and turned into sets of image frames containing one face and the associated keypoints.
#### Training and Testing Data
This facial keypoints dataset consists of 5770 color images. All of these images are separated into either a training or a test set of data.
* 3462 of these images are training images, for you to use as you create a model to predict keypoints.
* 2308 are test images, which will be used to test the accuracy of your model.
The information about the images and keypoints in this dataset are summarized in CSV files, which we can read in using `pandas`. Let's read the training CSV and get the annotations in an (N, 2) array where N is the number of keypoints and 2 is the dimension of the keypoint coordinates (x, y).
---
First, before we do anything, we have to load in our image data. This data is stored in a zip file and, in the cell below, we access it by its URL and unzip the data into a `/data/` directory that is separate from the workspace home directory.
```
# -- DO NOT CHANGE THIS CELL -- #
!mkdir /data
!wget -P /data/ https://s3.amazonaws.com/video.udacity-data.com/topher/2018/May/5aea1b91_train-test-data/train-test-data.zip
!unzip -n /data/train-test-data.zip -d /data
# import the required libraries
import glob
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import cv2
```
Then, let's load in our training data and display some stats about that data to make sure it's been loaded in correctly!
```
key_pts_frame = pd.read_csv('/data/training_frames_keypoints.csv')
n = 0
image_name = key_pts_frame.iloc[n, 0]
key_pts = key_pts_frame.iloc[n, 1:].values
key_pts = key_pts.astype('float').reshape(-1, 2)
print('Image name: ', image_name)
print('Landmarks shape: ', key_pts.shape)
print('First 4 key pts: {}'.format(key_pts[:4]))
# print out some stats about the training data
print('Number of images: ', key_pts_frame.shape[0])
```
## Look at some images
Below, is a function `show_keypoints` that takes in an image and keypoints and displays them. As you look at this data, **note that these images are not all of the same size**, and neither are the faces! To eventually train a neural network on these images, we'll need to standardize their shape.
```
def show_keypoints(image, key_pts):
"""Show image with keypoints"""
plt.imshow(image)
plt.scatter(key_pts[:, 0], key_pts[:, 1], s=20, marker='.', c='m')
# Display a few different types of images by changing the index n
# select an image by index in our data frame
n = 0
image_name = key_pts_frame.iloc[n, 0]
key_pts = key_pts_frame.iloc[n, 1:].values
key_pts = key_pts.astype('float').reshape(-1, 2)
plt.figure(figsize=(5, 5))
show_keypoints(mpimg.imread(os.path.join('/data/training/', image_name)), key_pts)
plt.show()
```
## Dataset class and Transformations
To prepare our data for training, we'll be using PyTorch's Dataset class. Much of this code is a modified version of what can be found in the [PyTorch data loading tutorial](http://pytorch.org/tutorials/beginner/data_loading_tutorial.html).
#### Dataset class
``torch.utils.data.Dataset`` is an abstract class representing a
dataset. This class will allow us to load batches of image/keypoint data, and uniformly apply transformations to our data, such as rescaling and normalizing images for training a neural network.
Your custom dataset should inherit ``Dataset`` and override the following
methods:
- ``__len__`` so that ``len(dataset)`` returns the size of the dataset.
- ``__getitem__`` to support the indexing such that ``dataset[i]`` can
be used to get the i-th sample of image/keypoint data.
Let's create a dataset class for our face keypoints dataset. We will
read the CSV file in ``__init__`` but leave the reading of images to
``__getitem__``. This is memory efficient because all the images are not
stored in the memory at once but read as required.
A sample of our dataset will be a dictionary
``{'image': image, 'keypoints': key_pts}``. Our dataset will take an
optional argument ``transform`` so that any required processing can be
applied on the sample. We will see the usefulness of ``transform`` in the
next section.
```
from torch.utils.data import Dataset, DataLoader
class FacialKeypointsDataset(Dataset):
"""Face Landmarks dataset."""
def __init__(self, csv_file, root_dir, transform=None):
"""
Args:
csv_file (string): Path to the csv file with annotations.
root_dir (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.key_pts_frame = pd.read_csv(csv_file)
self.root_dir = root_dir
self.transform = transform
def __len__(self):
return len(self.key_pts_frame)
def __getitem__(self, idx):
image_name = os.path.join(self.root_dir,
self.key_pts_frame.iloc[idx, 0])
image = mpimg.imread(image_name)
# if image has an alpha color channel, get rid of it
if(image.shape[2] == 4):
image = image[:,:,0:3]
        key_pts = self.key_pts_frame.iloc[idx, 1:].values
key_pts = key_pts.astype('float').reshape(-1, 2)
sample = {'image': image, 'keypoints': key_pts}
if self.transform:
sample = self.transform(sample)
return sample
```
Now that we've defined this class, let's instantiate the dataset and display some images.
```
# Construct the dataset
face_dataset = FacialKeypointsDataset(csv_file='/data/training_frames_keypoints.csv',
root_dir='/data/training/')
# print some stats about the dataset
print('Length of dataset: ', len(face_dataset))
# Display a few of the images from the dataset
num_to_display = 3
for i in range(num_to_display):
# define the size of images
fig = plt.figure(figsize=(20,10))
# randomly select a sample
rand_i = np.random.randint(0, len(face_dataset))
sample = face_dataset[rand_i]
# print the shape of the image and keypoints
print(i, sample['image'].shape, sample['keypoints'].shape)
ax = plt.subplot(1, num_to_display, i + 1)
ax.set_title('Sample #{}'.format(i))
# Using the same display function, defined earlier
show_keypoints(sample['image'], sample['keypoints'])
```
## Transforms
Now, the images above are not of the same size, and neural networks often expect images that are standardized; a fixed size, with a normalized range for color ranges and coordinates, and (for PyTorch) converted from numpy lists and arrays to Tensors.
Therefore, we will need to write some pre-processing code.
Let's create four transforms:
- ``Normalize``: to convert a color image to grayscale values with a range of [0,1] and normalize the keypoints to be in a range of about [-1, 1]
- ``Rescale``: to rescale an image to a desired size.
- ``RandomCrop``: to crop an image randomly.
- ``ToTensor``: to convert numpy images to torch images.
We will write them as callable classes instead of simple functions so
that parameters of the transform need not be passed every time it's
called. For this, we just need to implement the ``__call__`` method and
(if we require parameters to be passed in) the ``__init__`` method.
We can then use a transform like this:
tx = Transform(params)
transformed_sample = tx(sample)
Observe below how these transforms are generally applied to both the image and its keypoints.
```
import torch
from torchvision import transforms, utils
# transforms
class Normalize(object):
"""Convert a color image to grayscale and normalize the color range to [0,1]."""
def __call__(self, sample):
image, key_pts = sample['image'], sample['keypoints']
image_copy = np.copy(image)
key_pts_copy = np.copy(key_pts)
# convert image to grayscale
image_copy = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
# scale color range from [0, 255] to [0, 1]
image_copy= image_copy/255.0
# scale keypoints to be centered around 0 with a range of [-1, 1]
        # mean = 100, std = 50, so pts should be (pts - 100)/50
key_pts_copy = (key_pts_copy - 100)/50.0
return {'image': image_copy, 'keypoints': key_pts_copy}
class Rescale(object):
"""Rescale the image in a sample to a given size.
Args:
output_size (tuple or int): Desired output size. If tuple, output is
matched to output_size. If int, smaller of image edges is matched
to output_size keeping aspect ratio the same.
"""
def __init__(self, output_size):
assert isinstance(output_size, (int, tuple))
self.output_size = output_size
def __call__(self, sample):
image, key_pts = sample['image'], sample['keypoints']
h, w = image.shape[:2]
if isinstance(self.output_size, int):
if h > w:
new_h, new_w = self.output_size * h / w, self.output_size
else:
new_h, new_w = self.output_size, self.output_size * w / h
else:
new_h, new_w = self.output_size
new_h, new_w = int(new_h), int(new_w)
img = cv2.resize(image, (new_w, new_h))
# scale the pts, too
key_pts = key_pts * [new_w / w, new_h / h]
return {'image': img, 'keypoints': key_pts}
class RandomCrop(object):
"""Crop randomly the image in a sample.
Args:
output_size (tuple or int): Desired output size. If int, square crop
is made.
"""
def __init__(self, output_size):
assert isinstance(output_size, (int, tuple))
if isinstance(output_size, int):
self.output_size = (output_size, output_size)
else:
assert len(output_size) == 2
self.output_size = output_size
def __call__(self, sample):
image, key_pts = sample['image'], sample['keypoints']
h, w = image.shape[:2]
new_h, new_w = self.output_size
top = np.random.randint(0, h - new_h)
left = np.random.randint(0, w - new_w)
image = image[top: top + new_h,
left: left + new_w]
key_pts = key_pts - [left, top]
return {'image': image, 'keypoints': key_pts}
class ToTensor(object):
"""Convert ndarrays in sample to Tensors."""
def __call__(self, sample):
image, key_pts = sample['image'], sample['keypoints']
# if image has no grayscale color channel, add one
if(len(image.shape) == 2):
# add that third color dim
image = image.reshape(image.shape[0], image.shape[1], 1)
# swap color axis because
# numpy image: H x W x C
# torch image: C X H X W
image = image.transpose((2, 0, 1))
return {'image': torch.from_numpy(image),
'keypoints': torch.from_numpy(key_pts)}
```
## Test out the transforms
Let's test these transforms out to make sure they behave as expected. As you look at each transform, note that, in this case, **order does matter**. For example, you cannot crop an image to a size larger than the original image (and the original images vary in size!), but, if you first rescale the original image, you can then crop it to any size smaller than the rescaled size.
```
# test out some of these transforms
rescale = Rescale(100)
crop = RandomCrop(50)
composed = transforms.Compose([Rescale(250),
RandomCrop(224)])
# apply the transforms to a sample image
test_num = 500
sample = face_dataset[test_num]
fig = plt.figure()
for i, tx in enumerate([rescale, crop, composed]):
transformed_sample = tx(sample)
ax = plt.subplot(1, 3, i + 1)
plt.tight_layout()
ax.set_title(type(tx).__name__)
show_keypoints(transformed_sample['image'], transformed_sample['keypoints'])
plt.show()
```
## Create the transformed dataset
Apply the transforms in order to get grayscale images of the same shape. Verify that your transform works by printing out the shape of the resulting data (printing out a few examples should show you a consistent tensor size).
```
# define the data transform
# order matters! i.e. rescaling should come before a smaller crop
data_transform = transforms.Compose([Rescale(250),
RandomCrop(224),
Normalize(),
ToTensor()])
# create the transformed dataset
transformed_dataset = FacialKeypointsDataset(csv_file='/data/training_frames_keypoints.csv',
root_dir='/data/training/',
transform=data_transform)
# print some stats about the transformed data
print('Number of images: ', len(transformed_dataset))
# make sure the sample tensors are the expected size
for i in range(5):
sample = transformed_dataset[i]
print(i, sample['image'].size(), sample['keypoints'].size())
```
## Data Iteration and Batching
Right now, we are iterating over this data using a ``for`` loop, but we are missing out on a lot of PyTorch's dataset capabilities, specifically the abilities to:
- Batch the data
- Shuffle the data
- Load the data in parallel using ``multiprocessing`` workers.
``torch.utils.data.DataLoader`` is an iterator which provides all these
features, and we'll see this in use in the *next* notebook, Notebook 2, when we load data in batches to train a neural network!
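As a preview, here is a minimal sketch of how such a loader could wrap the `transformed_dataset` defined above; the batch size and worker count are illustrative choices, not values prescribed by the project.
```
# minimal sketch: batching the transformed dataset with a DataLoader
train_loader = DataLoader(transformed_dataset,
                          batch_size=10,
                          shuffle=True,
                          num_workers=0)

# inspect the first couple of batches
for batch_i, batch in enumerate(train_loader):
    print(batch_i, batch['image'].size(), batch['keypoints'].size())
    if batch_i == 1:
        break
```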
---
## Ready to Train!
Now that you've seen how to load and transform our data, you're ready to build a neural network to train on this data.
In the next notebook, you'll be tasked with creating a CNN for facial keypoint detection.
|
github_jupyter
|
<a href="https://colab.research.google.com/github/enakai00/rl_book_solutions/blob/master/Chapter05/Exercise_5_12.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Exercise 5.12 : Solution
```
import numpy as np
from numpy import random
import copy
track_img1 = '''
##############
# G
# G
# G
# G
# G
# G
# #######
# #
# #
# #
# #
# #
# #
# #
# #
# #
# #
# #
# #
# #
# #
# #
# #
# #
# #
# #
# #
# #
# #
# #
#SSSSSS#
'''
track_img2 = '''
#################
### G
# G
# G
# G
# G
# G
# G
# G
# ##
# ###
# #
# ##
# #
# #
# #
# #
# #
# #
# #
# #
# #
# #
# #
# #
# #
# #
# #
# #
# #
#SSSSSSSSSSSSSSSSSSSSSSS#
'''
def get_track(track_img):
x_max = max(map(len, track_img.split('\n')))
track = []
for line in track_img.split('\n'):
if line == '':
continue
line += ' ' * x_max
track.append(list(line)[:x_max])
return np.array(track)
class Car:
def __init__(self, track):
self.path = []
self.track = track
self._restart()
def _restart(self):
self.vx, self.vy = 0, 0
self.y = len(self.track) - 1
while True:
self.x = random.randint(len(self.track[self.y]))
if self.track[self.y][self.x] == 'S':
break
def get_state(self):
return self.x, self.y, self.vx, self.vy
def get_result(self):
result = np.copy(self.track)
for (x, y, vx, vy, ax, ay) in self.path:
result[y][x] = '+'
return result
def _action(self, ax, ay):
_vx = self.vx + ax
_vy = self.vy + ay
if _vx in range(5):
self.vx = _vx
if _vy in range(5):
self.vy = _vy
def move(self, ax, ay, noise):
self.path.append((self.x, self.y, self.vx, self.vy, ax, ay))
if noise and random.random() < 0.1:
self._action(0, 0)
else:
self._action(ax, ay)
for _ in range(self.vy):
self.y -= 1
if self.track[self.y][self.x] == 'G':
return True
if self.track[self.y][self.x] == '#':
self._restart()
for _ in range(self.vx):
self.x += 1
if self.track[self.y][self.x] == 'G':
return True
if self.track[self.y][self.x] == '#':
self._restart()
return False
def trial(car, policy, epsilon = 0.1, noise=True):
for _ in range(10000):
x, y, vx, vy = car.get_state()
state = "{:02},{:02}:{:02},{:02}".format(x, y, vx, vy)
if state not in policy.keys():
policy[state] = (0, 0)
ax, ay = policy[state]
if random.random() < epsilon:
ax, ay = random.randint(-1, 2, 2)
finished = car.move(ax, ay, noise)
if finished:
break
def optimal_action(q, x, y, vx, vy):
optimal = (0, 0)
q_max = 0
initial = True
for ay in range(-1, 2):
for ax in range(-1, 2):
sa = "{:02},{:02}:{:02},{:02}:{:02},{:02}".format(x, y, vx, vy, ax, ay)
if sa not in q.keys():
q[sa] = -10**10
if initial or q[sa] > q_max:
q_max = q[sa]
optimal = (ax, ay)
initial = False
return optimal
def run_sampling(track, num=100000):
policy_t = {}
policy_b = {}
q = {}
c = {}
epsilon = 0.1
for i in range(num):
if i % 2000 == 0:
print ('.', end='')
car = Car(track)
trial(car, policy_b, epsilon, noise=True)
g = 0
w = 1
path = car.path
path.reverse()
for x, y, vx, vy, ax, ay in path:
state = "{:02},{:02}:{:02},{:02}".format(x, y, vx, vy)
sa = "{:02},{:02}:{:02},{:02}:{:02},{:02}".format(x, y, vx, vy, ax, ay)
action = (ax, ay)
g += -1 # Reward = -1 for each step
if sa not in c.keys():
c[sa] = 0
c[sa] += w
if sa not in q.keys():
q[sa] = 0
q[sa] += w*(g-q[sa])/c[sa]
policy_t[state] = optimal_action(q, x, y, vx, vy)
if policy_t[state] != action:
break
w = w / (1 - epsilon + epsilon/9)
# b(a|s) = (1 - epsilon) + epsilon / 9
# 1 - epsilon : chosen with the greedy policy
# epsilon / 9 : chosen with the random policy
        policy_b = copy.copy(policy_t) # Update the behavior policy
return policy_t
track = get_track(track_img1)
policy_t = run_sampling(track, 200000)
for _ in range(3):
car = Car(track)
trial(car, policy_t, 0, noise=False)
result = car.get_result()
for line in result:
print (''.join(line))
print ()
track = get_track(track_img2)
policy_t = run_sampling(track, 200000)
for _ in range(3):
car = Car(track)
trial(car, policy_t, 0, noise=False)
result = car.get_result()
for line in result:
print (''.join(line))
print ()
```
|
github_jupyter
|
```
%matplotlib inline
%load_ext autoreload
%autoreload 2
import sys
from pathlib import Path
sys.path.append(str(Path.cwd().parent))
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import plotting
from statsmodels.tsa.stattools import adfuller
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
from scipy.signal import detrend
%matplotlib inline
from load_dataset import Dataset
dataset = Dataset('../data/dataset/')
# take a time series of monthly alcohol sales
ts = dataset["alcohol_sales.csv"]
ts, ts_test = ts[:250], ts[250:]
ts.plot()
```
## Box-Jenkins
```
# as we can see, it has a roughly linear trend, heteroscedasticity, and a seasonal period of 12 (months)
# first remove the heteroscedasticity with a simple log transform
ts_log = np.log(ts)
plotting.plot_ts(ts_log)
# Now choose the differencing orders d, D
# lowercase d is chosen so that differencing the series d times makes it stationary
# this kind of differencing usually removes trends
# capital D is chosen so that, if plain differencing did not give stationarity,
# we can apply seasonal differencing D times until the series becomes stationary
# to start, simply difference the series once
ts_log.diff().plot()
# in this case the series still shows seasonality
plot_acf(ts_log.diff().dropna());
# let's try seasonal differencing
ts_log.diff(12).plot()
plot_acf(ts_log.diff(12).dropna());
# already better
# let's see what the Dickey-Fuller test says
# we see that we still cannot reject the null hypothesis
adfuller(ts_log.diff(12).dropna())[1]
# then let's combine seasonal and ordinary differencing
ts_log.diff(12).diff().plot()
plot_acf(ts_log.diff(12).diff().dropna(), lags=40);
adfuller(ts_log.diff(12).diff().dropna())[1]
# great, the stationarity verdict is confirmed, (d, D) = (1, 1)
# now let's work out the parameters q, Q, p, P
ts_flat = ts_log.diff(12).diff().dropna()
ts_flat.plot()
# to find the parameters q, Q, p, P, plot the autocorrelation and partial autocorrelation
# in the plots we see a sharp cutoff in the partial autocorrelation and a gradual decay of the full autocorrelation,
# so the series can be described by a (p, d, 0), (P, D, 0) model; hence q = 0, Q = 0
plot_acf(ts_flat.dropna());
plot_pacf(ts_flat, lags=50);
# now find the parameters p, P
# lowercase p is the last non-seasonal lag that lies above the confidence interval
# here that is p = 2; similarly, we see no seasonal spikes among the seasonal lags,
# so P = 0, i.e. (p, P) = (2, 0)
plot_pacf(ts_flat, lags=50);
# now let's fit a SARIMA with these parameters
from statsmodels.tsa.statespace import sarimax
pdq = (2, 1, 0)
PDQ = (0, 1, 0, 12)
model = sarimax.SARIMAX(ts_log, order=pdq, seasonal_order=PDQ)
res = model.fit()
preds = res.forecast(69)
plotting.plot_ts(ts_log, preds)
# transform back to the original scale
plotting.plot_ts(np.exp(ts_log), np.exp(preds), ts_test)
# the result looks quite good!
# to double-check, let's analyze the residuals
res = (np.exp(preds) - ts_test)
res.plot()
plot_acf(res, lags=40);
```
## Auto arima
```
from pmdarima import auto_arima
model = auto_arima(
ts_log, start_p=0, start_q=0,
max_p=3, max_q=3, m=12,
start_P=0, start_Q=0, seasonal=True,
d=1, D=1, trace=True,
error_action='ignore',
suppress_warnings=True,
stepwise=True
)
```
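Once `auto_arima` has selected a model, it can forecast directly. Below is a minimal sketch, assuming the `model` object returned above, the held-out `ts_test`, and the same custom `plotting` helper used earlier in this notebook.
```
# minimal sketch: forecast with the model selected by auto_arima
preds_auto = model.predict(n_periods=len(ts_test))

# back-transform from the log scale and compare with the held-out data
preds_auto = pd.Series(np.exp(preds_auto), index=ts_test.index)
plotting.plot_ts(np.exp(ts_log), preds_auto, ts_test)
```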
|
github_jupyter
|
```
import pandas as pd
import matplotlib.pyplot as plt
from os import listdir, mkdir
import numpy as np
from shutil import copy2
# reproducible randomness
from numpy.random import RandomState
df = pd.read_csv('./training_solutions_rev1.csv')
print(df.count()) # show total number of samples
print(df.head()) # showcase data set
df_irregular = df[['GalaxyID','Class8.4']] # Irregular (1.1)
condition_irregular = df_irregular['Class8.4']>0.2 # >?% vote YES
df_irregular1 = df_irregular[condition_irregular]
print(df_irregular1.count())
# get irregular galaxy ID
irregular_id = list(df_irregular1['GalaxyID'])
print(irregular_id[:10])
GALAXY_ORIG_FOLDER = './images_training_rev1/'
import cv2
img = cv2.imread(GALAXY_ORIG_FOLDER+'%d.jpg' % irregular_id[0],0)
plt.imshow(img)
plt.show()
img = cv2.imread(GALAXY_ORIG_FOLDER+'%d.jpg' % irregular_id[1],0)
plt.imshow(img)
plt.show()
img = cv2.imread(GALAXY_ORIG_FOLDER+'%d.jpg' % irregular_id[2],0)
plt.imshow(img)
plt.show()
df_odd = df[['GalaxyID','Class6.1']] # Irregular (1.1)
condition_odd = df_odd['Class6.1']>0.78 # >?% vote YES
df_odd1 = df_odd[condition_odd]
print(df_odd1.count())
# get irregular galaxy ID
odd_id = list(df_odd1['GalaxyID'])
print(odd_id[:10])
GALAXY_ORIG_FOLDER = './images_training_rev1/'
import cv2
img = cv2.imread(GALAXY_ORIG_FOLDER+'%d.jpg' % odd_id[0],0)
plt.imshow(img)
plt.show()
img = cv2.imread(GALAXY_ORIG_FOLDER+'%d.jpg' % odd_id[1],0)
plt.imshow(img)
plt.show()
img = cv2.imread(GALAXY_ORIG_FOLDER+'%d.jpg' % odd_id[2],0)
plt.imshow(img)
plt.show()
df_elliptical = df[['GalaxyID','Class1.1']] # Smooth (1.1)
condition_elliptical = df_elliptical['Class1.1']>0.9 # >?% vote YES
df_elliptical1 = df_elliptical[condition_elliptical]
print(df_elliptical1.count())
df_sprial = df[['GalaxyID','Class4.1']] # Spiral Arm(4.1)
condition = df_sprial['Class4.1']>0.9 # >?% vote YES
df_sprial1 = df_sprial[condition]
print(df_sprial1.count())
# get ellipitical galaxy ID
elliptical_id = list(df_elliptical1['GalaxyID'])
print(elliptical_id[:10])
# get sprial galaxy ID
sprial_id = list(df_sprial1['GalaxyID'])
print(sprial_id[:10])
# select the same of samples (reproducibility ensured)
prng = RandomState(1234567890)
num_samples = 2500
num_split = int(0.8 * num_samples)
assert(num_samples <= len(elliptical_id) and num_samples <= len(sprial_id))
elliptical_selected_idx = prng.choice(len(elliptical_id), num_samples)
sprial_selected_idx = prng.choice(len(sprial_id), num_samples)
elliptical_selected_id = []
sprial_selected_id = []
for idx in elliptical_selected_idx:
elliptical_selected_id.append(elliptical_id[idx])
for idx in sprial_selected_idx:
sprial_selected_id.append(sprial_id[idx])
prng.shuffle(elliptical_selected_id)
prng.shuffle(sprial_selected_id)
train_elliptical_id, test_elliptical_id = elliptical_selected_id[:num_split],elliptical_selected_id[num_split:]
train_sprial_id, test_sprial_id = sprial_selected_id[:num_split],sprial_selected_id[num_split:]
with open('./train_simple.txt', 'w+') as f:
for i in train_elliptical_id:
f.write('%d,0\n' % i)
for i in train_sprial_id:
f.write('%d,1\n' % i)
with open('./test_simple.txt', 'w+') as f:
for i in test_elliptical_id:
f.write('%d,0\n' % i)
for i in test_sprial_id:
f.write('%d,1\n' % i)
GALAXY_ORIG_FOLDER = './images_training_rev1/'
import cv2
img = cv2.imread(GALAXY_ORIG_FOLDER+'%d.jpg' % train_elliptical_id[0],0)
plt.imshow(img)
plt.show()
img = cv2.imread(GALAXY_ORIG_FOLDER+'%d.jpg' % train_sprial_id[0],0)
plt.imshow(img)
plt.show()
```
|
github_jupyter
|
# Ray RLlib - Overview
© 2019-2020, Anyscale. All Rights Reserved

## Join Us at Ray Summit 2020!
Join us for the [_free_ Ray Summit 2020 virtual conference](https://events.linuxfoundation.org/ray-summit/?utm_source=dean&utm_medium=embed&utm_campaign=ray_summit&utm_content=anyscale_academy), September 30 - October 1, 2020. We have an amazing lineup of luminary keynote speakers and breakout sessions on the Ray ecosystem, third-party Ray libraries, and applications of Ray in the real world.

## About This Tutorial
This tutorial, part of [Anyscale Academy](https://anyscale.com/academy), introduces the broad topic of _reinforcement learning_ (RL) and [RLlib](https://ray.readthedocs.io/en/latest/rllib.html), Ray's comprehensive RL library.

The lessons in this tutorial use different _environments_ from [OpenAI Gym](https://gym.openai.com/) to illustrate how to train _policies_.
See the instructions in the [README](../README.md) for setting up your environment to use this tutorial.
Go [here](../Overview.ipynb) for an overview of all tutorials.
## Tutorial Sections
Because of the breadth of RL this tutorial is divided into several sections. See below for a recommended _learning plan_.
### Introduction to Reinforcement Learning and RLlib
| | Lesson | Description |
| :- | :----- | :---------- |
| 00 | [Ray RLlib Overview](00-Ray-RLlib-Overview.ipynb) | Overview of this tutorial, including all the sections. (This file.) |
| 01 | [Introduction to Reinforcement Learning](01-Introduction-to-Reinforcement-Learning.ipynb) | A quick introduction to the concepts of reinforcement learning. You can skim or skip this lesson if you already understand RL concepts. |
| 02 | [Introduction to RLlib](02-Introduction-to-RLlib.ipynb) | An overview of RLlib, its goals and the capabilities it provides. |
| | [RL References](References-Reinforcement-Learning.ipynb) | References on reinforcement learning. |
Exercise solutions for this introduction can be found [here](solutions/Ray-RLlib-Solutions.ipynb).
### Multi-Armed Bandits
_Multi-Armed Bandits_ (MABs) are a special kind of RL problem that have broad and growing applications. They are also an excellent platform for investigating the important _exploitation vs. exploration tradeoff_ at the heart of RL. The term _multi-armed bandit_ is inspired by the slot machines in casinos, so called _one-armed bandits_, but where a machine might have more than one arm.
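To make the exploitation vs. exploration tradeoff concrete before diving into the lessons, here is a minimal, framework-free sketch of an epsilon-greedy bandit (plain NumPy rather than RLlib; the arm payouts and parameter values are illustrative only):
```
import numpy as np

rng = np.random.default_rng(0)
true_means = [0.2, 0.5, 0.7]      # unknown payout rate of each "arm"
counts = np.zeros(3)              # how often each arm has been pulled
estimates = np.zeros(3)           # running estimate of each arm's value
epsilon = 0.1                     # fraction of the time we explore

for step in range(1000):
    if rng.random() < epsilon:
        arm = int(rng.integers(3))          # explore: pick a random arm
    else:
        arm = int(np.argmax(estimates))     # exploit: pick the current best arm
    reward = float(rng.random() < true_means[arm])
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print(estimates)  # should approach the true means, with arm 2 pulled most often
```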
| | Lesson | Description |
| :- | :----- | :---------- |
| 00 | [Multi-Armed-Bandits Overview](multi-armed-bandits/00-Multi-Armed-Bandits-Overview.ipynb) | Overview of this set of lessons. |
| 01 | [Introduction to Multi-Armed Bandits](multi-armed-bandits/01-Introduction-to-Multi-Armed-Bandits.ipynb) | A quick introduction to the concepts of multi-armed bandits (MABs) and how they fit in the spectrum of RL problems. |
| 02 | [Exploration vs. Exploitation Strategies](multi-armed-bandits/02-Exploration-vs-Exploitation-Strategies.ipynb) | A deeper look at algorithms that balance exploration vs. exploitation, the key challenge for efficient solutions. Much of this material is technical and can be skipped in a first reading, but skim the first part of this lesson at least. |
| 03 | [Simple Multi-Armed Bandit](multi-armed-bandits/03-Simple-Multi-Armed-Bandit.ipynb) | A simple example of a multi-armed bandit to illustrate the core ideas. |
| 04 | [Linear Upper Confidence Bound](multi-armed-bandits/04-Linear-Upper-Confidence-Bound.ipynb) | One popular algorithm for exploration vs. exploitation is _Upper Confidence Bound_. This lesson shows how to use a linear version in RLlib. |
| 05 | [Linear Thompson Sampling](multi-armed-bandits/05-Linear-Thompson-Sampling.ipynb) | Another popular algorithm for exploration vs. exploitation is _Thompson Sampling_. This lesson shows how to use a linear version in RLlib. |
| 06 | [Market Example](multi-armed-bandits/06-Market-Example.ipynb) | A simplified real-world example of MABs, finding the optimal stock and bond investment strategy. |
Exercise solutions for the bandits section of the tutorial can be found [here](multi-armed-bandits/solutions/Multi-Armed-Bandits-Solutions.ipynb).
### Explore Reinforcement Learning and RLlib
This section dives into more details about RL and using RLlib. It is best studied after going through the MAB material.
| | Lesson | Description |
| :- | :----- | :---------- |
| 00 | [Explore RLlib Overview](explore-rllib/00-Explore-RLlib-Overview.ipynb) | Overview of this set of lessons. |
| 01 | [Application - Cart Pole](explore-rllib/01-Application-Cart-Pole.ipynb) | The best starting place for learning how to use RL, in this case to train a moving car to balance a vertical pole. Based on the `CartPole-v1` environment from OpenAI Gym, combined with RLlib. |
| 02 | [Application: Bipedal Walker](explore-rllib/02-Bipedal-Walker.ipynb) | Train a two-legged robot simulator. This is an optional lesson, due to the longer compute times required, but fun to try. |
| 03 | [Custom Environments and Reward Shaping](explore-rllib/03-Custom-Environments-Reward-Shaping.ipynb) | How to customize environments and rewards for your applications. |
Some additional examples you might explore can be found in the `extras` folder:
| Lesson | Description |
| :----- | :---------- |
| [Extra: Application - Mountain Car](explore-rllib/extras/Extra-Application-Mountain-Car.ipynb) | Based on the `MountainCar-v0` environment from OpenAI Gym. |
| [Extra: Application - Taxi](explore-rllib/extras/Extra-Application-Taxi.ipynb) | Based on the `Taxi-v3` environment from OpenAI Gym. |
| [Extra: Application - Frozen Lake](explore-rllib/extras/Extra-Application-Frozen-Lake.ipynb) | Based on the `FrozenLake-v0` environment from OpenAI Gym. |
In addition, exercise solutions for this "exploration" section of the tutorial can be found [here](explore-rllib/solutions/Ray-RLlib-Solutions.ipynb).
### RecSys: Recommender System
This section applies RL to the problem of building a recommender system, a state-of-the-art technique that addresses many of the limitations of older approaches.
| | Lesson | Description |
| :- | :----- | :---------- |
| 00 | [RecSys: Recommender System Overview](recsys/00-RecSys-Overview.ipynb) | Overview of this set of lessons. |
| 01 | [Recsys: Recommender System](recsys/01-Recsys.ipynb) | An example that builds a recommender system using reinforcement learning. |
The [Custom Environments and Reward Shaping](explore-rllib/03-Custom-Environments-Reward-Shaping.ipynb) lesson from _Explore RLlib_ might be useful background for this section.
For earlier versions of some of these tutorials, see [`rllib_exercises`](https://github.com/ray-project/tutorial/blob/master/rllib_exercises/rllib_colab.ipynb) in the original [github.com/ray-project/tutorial](https://github.com/ray-project/tutorial) project.
## Learning Plan
We recommend the following _learning plan_ for working through the lessons:
Start with the introduction material for RL and RLlib:
* [Ray RLlib Overview](00-Ray-RLlib-Overview.ipynb) - This file
* [Introduction to Reinforcement Learning](01-Introduction-to-Reinforcement-Learning.ipynb)
* [Introduction to RLlib](02-Introduction-to-RLlib.ipynb)
Then study several of the lessons for multi-armed bandits, starting with these lessons:
* [Multi-Armed-Bandits Overview](multi-armed-bandits/00-Multi-Armed-Bandits-Overview.ipynb)
* [Introduction to Multi-Armed Bandits](multi-armed-bandits/01-Introduction-to-Multi-Armed-Bandits.ipynb)
* [Exploration vs. Exploitation Strategies](multi-armed-bandits/02-Exploration-vs-Exploitation-Strategies.ipynb): Skim at least the first part of this lesson.
* [Simple Multi-Armed Bandit](multi-armed-bandits/03-Simple-Multi-Armed-Bandit.ipynb)
As time permits, study one or both of the following lessons:
* [Linear Upper Confidence Bound](multi-armed-bandits/04-Linear-Upper-Confidence-Bound.ipynb)
* [Linear Thompson Sampling](multi-armed-bandits/05-Linear-Thompson-Sampling.ipynb)
Then finish with this more complete example:
* [Market Example](multi-armed-bandits/06-Market-Example.ipynb)
Next, return to the "exploration" lessons under `explore-rllib` and work through as many of the following lessons as time permits:
* [Application: Cart Pole](explore-rllib/01-Application-Cart-Pole.ipynb): Further exploration of the popular `CartPole` example.
* [Application: Bipedal Walker](explore-rllib/02-Bipedal-Walker.ipynb): A nontrivial, but simplified robot simulator.
* [Custom Environments and Reward Shaping](explore-rllib/03-Custom-Environments-Reward-Shaping.ipynb): More about creating custom environments for your problem. Also, finetuning the rewards to ensure sufficient exploration.
Other examples that use different OpenAI Gym environments are provided for your use in the `extras` directory:
* [Extra: Application - Mountain Car](explore-rllib/extras/Extra-Application-Mountain-Car.ipynb)
* [Extra: Application - Taxi](explore-rllib/extras/Extra-Application-Taxi.ipynb)
* [Extra: Application - Frozen Lake](explore-rllib/extras/Extra-Application-Frozen-Lake.ipynb)
Finally, the [references](References-Reinforcement-Learning.ipynb) collect useful books, papers, blog posts, and other available tutorial materials.
## Getting Help
* The [#tutorial channel](https://ray-distributed.slack.com/archives/C011ML23W5B) on the [Ray Slack](https://ray-distributed.slack.com). [Click here](https://forms.gle/9TSdDYUgxYs8SA9e8) to join.
* [Email](mailto:[email protected])
Find an issue? Please report it!
* [GitHub issues](https://github.com/anyscale/academy/issues)
## Give Us Feedback!
Let us know what you like and don't like about this RL and RLlib tutorial.
* [Survey](https://forms.gle/D2Lo4K5tkcqsWeKU8)
|
github_jupyter
|
# 🌋 Quick Feature Tour
[](https://colab.research.google.com/github/RelevanceAI/RelevanceAI-readme-docs/blob/v2.0.0/docs/getting-started/_notebooks/RelevanceAI-ReadMe-Quick-Feature-Tour.ipynb)
### 1. Set up Relevance AI
Get started using our RelevanceAI SDK and use of [Vectorhub](https://hub.getvectorai.com/)'s [CLIP model](https://hub.getvectorai.com/model/text_image%2Fclip) for encoding.
```
# remove `!` if running the line in a terminal
!pip install -U RelevanceAI[notebook]==2.0.0
# remove `!` if running the line in a terminal
!pip install -U vectorhub[clip]
```
Follow the signup flow and get your credentials below; otherwise, you can sign up/log in and find your credentials in the settings [here](https://auth.relevance.ai/signup/?callback=https%3A%2F%2Fcloud.relevance.ai%2Flogin%3Fredirect%3Dcli-api)

```
from relevanceai import Client
"""
You can sign up/login and find your credentials here: https://cloud.relevance.ai/sdk/api
Once you have signed up, click on the value under `Activation token` and paste it here
"""
client = Client()
```

### 2. Create a dataset and insert data
Use one of our sample datasets to upload into your own project!
```
import pandas as pd
from relevanceai.utils.datasets import get_ecommerce_dataset_clean
# Retrieve our sample dataset. - This comes in the form of a list of documents.
documents = get_ecommerce_dataset_clean()
pd.DataFrame.from_dict(documents).head()
ds = client.Dataset("quickstart")
ds.insert_documents(documents)
```
See your dataset in the dashboard

### 3. Encode data and upload vectors into your new dataset
Encode a new product image vector using [Vectorhub's](https://hub.getvectorai.com/) `Clip2Vec` models and update your dataset with the resulting vectors. Please refer to [Vectorhub](https://github.com/RelevanceAI/vectorhub) for more details.
```
from vectorhub.bi_encoders.text_image.torch import Clip2Vec
model = Clip2Vec()
# Set the default encode to encoding an image
model.encode = model.encode_image
documents = model.encode_documents(fields=["product_image"], documents=documents)
ds.upsert_documents(documents=documents)
ds.schema
```
Monitor your vectors in the dashboard

### 4. Run clustering on your vectors
Run clustering on your vectors to better understand your data!
You can view your clusters in our clustering dashboard following the link which is provided after the clustering is finished!
```
from sklearn.cluster import KMeans
cluster_model = KMeans(n_clusters=10)
ds.cluster(cluster_model, ["product_image_clip_vector_"])
```
You can see the new `_cluster_` field that is added to your document schema.
Clustering results are uploaded back to the dataset as an additional field.
The default `alias` of the cluster will be `kmeans_<k>`.
```
ds.schema
```
See your cluster centers in the dashboard

### 5. Run a vector search
Encode your query and find your image results!
Here our query is just a simple vector query, but our search comes with out of the box support for features such as multi-vector, filters, facets and traditional keyword matching to combine with your vector search. You can read more about how to construct a multivector query with those features [here](https://docs.relevance.ai/docs/vector-search-prerequisites).
See your search results on the dashboard here https://cloud.relevance.ai/sdk/search.
```
query = "gifts for the holidays"
query_vector = model.encode(query)
multivector_query = [{"vector": query_vector, "fields": ["product_image_clip_vector_"]}]
results = ds.vector_search(multivector_query=multivector_query, page_size=10)
```
See your multi-vector search results in the dashboard

Want to quickly create some example applications with Relevance AI? Check out some other guides below!
- [Text-to-image search with OpenAI's CLIP](https://docs.relevance.ai/docs/quickstart-text-to-image-search)
- [Hybrid Text search with Universal Sentence Encoder using Vectorhub](https://docs.relevance.ai/docs/quickstart-text-search)
- [Text search with Universal Sentence Encoder Question Answer from Google](https://docs.relevance.ai/docs/quickstart-question-answering)
|
github_jupyter
|

### <center> **Chukwuemeka Mba-Kalu** </center> <center> **Joseph Onwughalu** </center>
### <center> **An Analysis of the Brazilian Economy between 2000 and 2012** </center>
#### <center> Final Project In Partial Fulfillment of the Course Requirements </center> <center> [**Data Bootcamp**](http://nyu.data-bootcamp.com/) </center>
##### <center> Stern School of Business, NYU Spring 2017 </center> <center> **May 12, 2017** </center>
### The Brazilian Economy
In this project we examine in detail different complexities of Brazil’s growth between the years 2000-2012. During this period, Brazil set an example for many of the major emerging economies in Latin America, Africa, and Asia.
From 2000 to 2012, Brazil was one of the fastest growing major economies in the world. It is the 8th largest economy in the world, with its GDP totalling 2.2 trillion dollars and GDP per capita at 10,308 dollars. While designing this project, we were interested in finding out more about the main drivers of the Brazilian economy. Specifically, we aim to look at specific trends and indicators that directly affect economic growth, especially in fast-growing countries such as Brazil. These include household consumption and its effect on GDP, and bilateral aid and investment flows and their effect on GDP per capita growth. We also aim to view the effects of economic growth on climate change and public health by observing percentage changes in carbon emissions and specific indicators like the mortality rate.
We will be looking at generally accepted economic concepts and trends, making some hypotheses, and comparing our hypotheses to the Brazil data we have. Did Brazil follow these trends on its path to economic growth?
### Methodology - Data Acquisition
All the data we are using in this project was acquired from the World Bank and can be accessed and downloaded from the [website](www.WorldBank.org). By going on the website and searching for “World data report,” we were given access to information submitted by the respective countries. By clicking “[Brazil](http://databank.worldbank.org/data/reports.aspx?source=2&country=BRA),” we are shown several economic indicators and their respective data over the 2000-2012 period, which we downloaded as an Excel file. We picked more than 20 metrics to include in our data, such as:
* Population
* GDP (current US Dollars)
* Household final consumption expenditure, etc. (% of GDP)
* General government final consumption expenditure (current US Dollars)
* Life expectancy at birth, total (years)
For all of our analysis and data we will be looking at the 2000-2012 time period and have filtered the spreadsheets accordingly to reflect this information.
```
# Inportant Packages
import pandas as pd
import matplotlib.pyplot as plt
import sys
import datetime as dt
print('Python version is:', sys.version)
print('Pandas version:', pd.__version__)
print('Date:', dt.date.today())
```
### Reading in and Cleaning up the Data
We downloaded our [data](http://databank.worldbank.org/data/AjaxDownload/FileDownloadHandler.ashx?filename=67fd49af-3b41-4515-b248-87b045e61886.zip&filetype=CSV&language=en&displayfile=Data_Extract_From_World_Development_Indicators.zip) in xlsx format, retained and renamed the important columns, and deleted rows without enough data. We also transposed the table to make it easier to plot diagrams.
```
path = 'C:\\Users\\emeka_000\\Desktop\\Bootcamp_Emeka.xlsx'
odata = pd.read_excel(path,
usecols = ['Series Name','2000 [YR2000]', '2001 [YR2001]', '2002 [YR2002]',
'2003 [YR2003]', '2004 [YR2004]', '2005 [YR2005]', '2006 [YR2006]',
'2007 [YR2007]', '2008 [YR2008]', '2009 [YR2009]', '2010 [YR2010]',
'2011 [YR2011]', '2012 [YR2012]']
) #retained only the necessary columns
odata.columns = ['Metric', '2000', '2001', '2002', '2003', '2004', '2005', '2006', '2007', '2008',
'2009', '2010', '2011', '2012'] #easier column names
odata = odata.drop([20, 21, 22, 23, 24]) ##delete NaN values
odata = odata.transpose() #transpose to make diagram easier
odata #data with metrics description for the chart below
data = pd.read_excel(path,
usecols = ['2000 [YR2000]', '2001 [YR2001]', '2002 [YR2002]',
'2003 [YR2003]', '2004 [YR2004]', '2005 [YR2005]', '2006 [YR2006]',
'2007 [YR2007]', '2008 [YR2008]', '2009 [YR2009]', '2010 [YR2010]',
'2011 [YR2011]', '2012 [YR2012]']
) #same data but modified for pandas edits
data.columns = ['2000', '2001', '2002', '2003', '2004', '2005', '2006', '2007', '2008',
'2009', '2010', '2011', '2012'] #all columns are now string
data = data.transpose() #data used for the rest of the project
```
### GDP Growth and GDP Growth Rate in Brazil
To demonstrate Brazil's strong economic growth between 2000 and 2012, here are a few charts illustrating Brazil's GDP growth.
Gross domestic product (GDP) is the monetary value of all the finished goods and services produced within a country's borders in a specific time period. Though GDP is usually calculated on an annual basis, it can be calculated on a quarterly basis as well. GDP includes all private and public consumption, government outlays, investments and exports minus imports that occur within a defined territory. Put simply, GDP is a broad measurement of a nation’s overall economic activity.
GDP per Capita is a measure of the total output of a country that takes gross domestic product (GDP) and divides it by the number of people in the country.
Read more on [Investopedia](http://www.investopedia.com/terms/g/gdp.asp#ixzz4gjgzo4Ri)
```
data[4].plot(kind = 'line', #line plot
title = 'Brazil Yearly GDP (2000-2012) (current US$)', #title
fontsize=15,
color='Green',
linewidth=4, #width of plot line
figsize=(20,5),).title.set_size(20) #set figure size and title size
plt.xlabel("Year").set_size(15)
plt.ylabel("GDP (current US$) * 1e12").set_size(15) #set x and y axis, with their sizes
data[6].plot(kind = 'line',
title = 'Brazil Yearly GDP Per Capita (2000-2012) (current US$)',
fontsize=15,
color='blue',
linewidth=4,
figsize=(20,5)).title.set_size(20)
plt.xlabel("Year").set_size(15)
plt.ylabel("GDP per capita (current US$)").set_size(15)
data[5].plot(kind = 'line',
title = 'Brazil Yearly GDP Growth (2000-2012) (%)',
fontsize=15,
color='red',
linewidth=4,
figsize=(20,5)).title.set_size(20)
plt.xlabel("Year").set_size(15)
plt.ylabel("GDP Growth (%)").set_size(15)
```
#### GDP Growth vs. GDP Growth Rate
While Brazil's GDP grew quite consistently over the 12 years, its GDP growth rate was not steady, turning negative during the 2008 financial crisis.
### Hypothesis: Household Consumption vs. Foreign Aid
Our hypothesis is that household consumption is a bigger driver of the Brazilian economy than foreign aid. With their rising incomes, Brazilians are expected to be empowered with larger disposable incomes to spend on goods and services. Foreign aid, on the other hand, might not filter down to the masses for spending.
```
fig, ax1 = plt.subplots(figsize = (20,5))
y1 = data[8]
y2 = data[4]
ax2 = ax1.twinx()
ax1.plot(y1, 'green') #household consumption
ax2.plot(y2, 'blue') #GDP growth
plt.title("Household Consumption (% of GDP) vs. GDP").set_size(20)
```
#### Actual: Household Consumption
GDP comprises household consumption, net investment, government spending, and net exports; an increase or decrease in any of these components affects overall GDP. The data shows that despite household consumption decreasing as a % of GDP, GDP was growing. We found this a little strange and difficult to understand. One explanation for this phenomenon could be that as emerging market economies continue to expand, there is an increased shift towards investment and government spending.
The blue line represents GDP growth and the green line represents Household Consumption.
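For reference, the expenditure identity behind this reasoning is

$$GDP = C + I + G + (X - M)$$

where $C$ is household consumption, $I$ investment, $G$ government spending, and $X - M$ net exports.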
```
fig, ax1 = plt.subplots(figsize = (20, 5))
y1 = data[11]
y2 = data[4]
ax2 = ax1.twinx()
ax1.plot(y1, 'red') #Net official development assistance
ax2.plot(y2, 'blue') #GDP growth
plt.title("Foreign Aid vs. GDP").set_size(20)
```
#### Actual: Foreign Aid
Regarding foreign aid, one would expect that decreases in aid reduce economic growth, because many developing countries rely on it as a crucial resource. The data shows a positive correlation for Brazil. While household spending was not a major driver of Brazil's GDP growth, foreign aid played a big role. We will now explore how foreign direct investment and government spending can affect economic growth.
The blue line represents GDP growth and the red line represents Foreign Aid.
### Hypothesis: Foreign Direct Investment vs. Government Spending
For emerging market economies, the general trend is that governments contribute a significant proportion of GDP. Given that Brazil experienced growth between the years 2000-2012, it is expected that a consequence was increased foreign direct investment. Naturally, we would like to compare the increases in government spending with this foreign direct investment and see which generally contributed more to the GDP growth of the country.
Our hypothesis is that the increased foreign direct investment was a bigger contributor to the GDP growth than government spending. With increased globalisation, we expect many multinationals and investors started business operations in Brazil due to its large, fast-growing market.
```
fig, ax1 = plt.subplots(figsize = (20, 5))
y1 = data[2]
y2 = data[4]
ax2 = ax1.twinx()
ax1.plot(y1, 'yellow') #foreign direct investment
ax2.plot(y2, 'blue') #GDP growth
plt.title("Foreign Direct Investment (Inflows) (% of GDP) vs. GDP").set_size(20)
```
#### Actual: Foreign Direct Investment
Contrary to popular belief and economic concepts, increased foreign direct investment did not act as a major contributor to the GDP growth Brazil experienced. There is no clear general trend or correlation between FDI and GDP growth.
```
fig, ax1 = plt.subplots(figsize = (20, 5))
y1 = data[14]
y2 = data[4]
ax2 = ax1.twinx()
ax1.plot(y1, 'purple') #government spending
ax2.plot(y2, 'blue') #GDP growth
plt.title("Government Spending vs. GDP").set_size(20)
```
#### Actual: Government Spending
It is clear that government spending is positively correlated with the total GDP growth Brazil experienced. We believe that this was the major driver of Brazil's growth.
### Hypothesis: Population Growth and GDP per capita
Brazil's population continued to increase during the 2000-2012 period. As mentioned earlier, Brazil's GDP was also growing over the same period. Given that GDP per capita is a useful indicator of the standard of living in a country, we wanted to see whether the increasing population was negating the effects of increased economic growth.
Our hypothesis is that even though the population was growing, GDP per capita generally increased at a higher rate and, all else equal, living standards in Brazil improved. This finding would show that GDP was growing at a faster rate than the population.
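In growth-rate terms, this claim can be written as the standard approximation

$$g_{\text{GDP per capita}} \approx g_{\text{GDP}} - g_{\text{population}},$$

so as long as GDP grows faster than the population, GDP per capita rises.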
```
data.plot.scatter(x = 5, y = 0,
title = 'Population Growth vs. GDP Growth',
figsize=(20,5)).title.set_size(20)
plt.xlabel("GDP Growth Rate").set_size(15)
plt.ylabel("Population Growth Rate").set_size(15)
```
#### Actual: Population Growth
There is no correlation between the population growth rate and the overall GDP growth rate. The general GDP rate already accounts for population increases and decreases.
```
data.plot.scatter(x = 6, y = 0,
title = 'Population Growth vs. GDP per Capita',
figsize=(20,5)).title.set_size(20)
plt.xlabel("GDP per Capita").set_size(15)
plt.ylabel("Population Growth Rate").set_size(15)
```
#### Actual: Population Growth and GDP per Capita
The population growth rate has a negative correlation with GDP per capita. Our explanation is that, as economies advance, the birth rate is expected to decrease. This generally causes the population growth rate to fall and GDP per capita to rise.
### Hypothesis: Renewable Energy Expenditures and C02 Emissions
What one would expect is that as a country’s economy grows, its investments in renewable energy methods would increase as well. Such actions should lead to a decrease in CO2 emissions as cleaner energy processes are being applied. Our hypothesis disagrees with this.
We believe that despite significant increases in renewable energy expenditures, driven by increased incomes and a larger, more diversified economy, there will still be more than proportionate increases in CO2 emissions. Testing this hypothesis will help us understand why this is or is not the case.
```
data[15].plot(kind = 'bar',
title = 'Renewable energy consumption (% of total) (2000-2012)',
fontsize=15,
color='green',
linewidth=4,
figsize=(20,5)).title.set_size(20)
data[12].plot(kind = 'bar',
title = 'CO2 emissions from liquid fuel consumption (2000-2012)',
fontsize=15,
color='red',
linewidth=4,
figsize=(20,5)).title.set_size(20)
data[13].plot(kind = 'bar',
title = 'CO2 emissions from gaseous fuel consumption (2000-2012)',
fontsize=15,
color='blue',
linewidth=4,
figsize=(20,5)).title.set_size(20)
```
#### Actual: Renewable Energy Consumption vs. CO2 Emissions
As countries continue to grow their economies, it is expected that people's incomes will continue to rise. Increased disposable incomes should lead to cleaner energy consumption, but, as our hypothesis states, CO2 emissions still continue to rise. This could be due to population growth, as more people consume carbon-intensive goods and products.
### Hypothesis: Health Expenditures and Life Expectancy
There should be a positive correlation between health expenditures and life expectancy. Naturally, the more a country spends on healthcare, the higher its life expectancy ought to be. Our hypothesis agrees with this, and we'd like to test it. If it turns out that increases in health expenditure positively affect life expectancy, then we can attribute the increase to an improved economy that allows individuals, organisations and institutions to spend more on health.
```
data.plot.scatter(x = 7, y = 19, #scatter plot
title = 'Health Expenditures vs. Life Expectancy',
figsize=(20,5)).title.set_size(20)
plt.xlabel("Health Expenditures").set_size(15)
plt.ylabel("Life Expectancy").set_size(15)
```
#### Actual: Health Expenditures and Life Expectancy
As expected, there is a positive correlation between health expenditures and life expectancy in Brazil. This matches the natural expectation that as a country spends more on healthcare services, products and research, life expectancy should increase as those investments improve health outcomes.
### Conclusion
When we first started working on this project, we wanted to analyze some of the generally accepted economic concepts we've learned in our four years at Stern. Using a previously booming emerging market economy like Brazil as a test subject, we put these economic metrics to the test. Some metrics contributed to increased economic growth, and some indicators also show that the economic growth played a big role in society.
We started with specific hypotheses of what we expected to happen before running the data. While there were some findings that met our expectations, we came across some surprising information that made us realize that economies aren’t completely systematic and will vary in functioning.
Although household spending and foreign direct investment were generally increasing, we did not find a direct correlation between their growth and the GDP growth rate. Our conclusion instead was that foreign aid and government spending were two of the major drivers of GDP growth during the years 2000 - 2012.
|
github_jupyter
|
```
# Allow us to load `open_cp` without installing
import sys, os.path
sys.path.insert(0, os.path.abspath(os.path.join("..", "..")))
```
# Chicago data
The data can be downloaded from https://catalog.data.gov/dataset/crimes-2001-to-present-398a4 (see the module docstring of `open_cp.sources.chicago`). See also https://data.cityofchicago.org/Public-Safety/Crimes-2001-to-present/ijzp-q8t2
In this notebook, we quickly look at the data, check that the data agrees between both sources, and demo some of the library features provided for loading the data.
```
import open_cp.sources.chicago as chicago
import geopandas as gpd
import sys, os, csv, lzma
filename = os.path.join("..", "..", "open_cp", "sources", "chicago.csv")
filename_all = os.path.join("..", "..", "open_cp", "sources", "chicago_all.csv.xz")
filename_all1 = os.path.join("..", "..", "open_cp", "sources", "chicago_all1.csv.xz")
```
Let us look at the snapshot of the last year, vs the total dataset. The data appears to be the same, though the exact format changes.
```
with open(filename, "rt") as file:
reader = csv.reader(file)
print(next(reader))
print(next(reader))
with lzma.open(filename_all, "rt") as file:
reader = csv.reader(file)
print(next(reader))
print(next(reader))
```
As well as loading data directly into a `TimedPoints` class, we can process a sub-set of the data to GeoJSON, or straight to a geopandas dataframe (if geopandas is installed).
```
geo_data = chicago.load_to_GeoJSON()
geo_data[0]
frame = chicago.load_to_geoDataFrame()
frame.head()
```
## Explore with QGIS
We can save the dataframe to a shape-file which can be viewed in e.g. QGIS.
To explore the spatial-distribution, I would recommend using an interactive GIS package. Using QGIS (free and open source) you can easily add a basemap using GoogleMaps or OpenStreetMap, etc. See http://maps.cga.harvard.edu/qgis/wkshop/basemap.php
I found this to be slightly buggy. On Windows with QGIS 2.18.7, I found that the following worked:
- First open the `chicago.shp` file produced from the line above.
- Select the Coordinate reference system "WGS 84 / EPSG:4326"
- Now go to the menu "Web" -> "OpenLayers plugin" -> Whatever
- The projection should change to EPSG:3857. The basemap will obscure the point map, so in the "Layers Panel" drag the basemap to the bottom.
- Selecting EPSG:3857 at import time doesn't seem to work (which is different from the instructions!)
```
# On my Windows install, if I don't do this, I get a GDAL error in
# the Jupyter console, and the resulting ".prj" file is empty.
# This isn't critical, but it confuses QGIS, and you end up having to
# choose a projection when loading the shape-file.
import os
os.environ["GDAL_DATA"] = "C:\\Users\\Matthew\\Anaconda3\\Library\\share\\gdal\\"
frame.to_file("chicago")
```
# A geoPandas example
Let's use the "generator of GeoJSON" option shown above to pick out only BURGLARY crimes from the 2001-- dataset (which is too large to easily load into a dataframe in one go).
```
with lzma.open(filename_all, "rt") as file:
features = [ event for event in chicago.generate_GeoJSON_Features(file, type="all")
if event["properties"]["crime"] == "THEFT" ]
frame = gpd.GeoDataFrame.from_features(features)
frame.crs = {"init":"EPSG:4326"} # Lon/Lat native coords
frame.head()
frame.to_file("chicago_all_theft")
with lzma.open(filename_all, "rt") as file:
features = [ event for event in chicago.generate_GeoJSON_Features(file, type="all")
if event["properties"]["crime"] == "BURGLARY" ]
frame = gpd.GeoDataFrame.from_features(features)
frame.crs = {"init":"EPSG:4326"} # Lon/Lat native coords
frame.head()
frame.to_file("chicago_all_burglary")
frame["type"].unique()
frame["location"].unique()
```
Upon loading into QGIS to visualise, we find that the 2001 data seems to be geocoded in a different way... The events are not on the road, and the distribution looks less artificial. Let's extract the 2001 burglary data, and then all the 2001 data, and save.
```
with lzma.open(filename_all, "rt") as file:
features = [ event for event in chicago.generate_GeoJSON_Features(file, type="all")
if event["properties"]["timestamp"].startswith("2001") ]
frame = gpd.GeoDataFrame.from_features(features)
frame.crs = {"init":"EPSG:4326"} # Lon/Lat native coords
frame.head()
frame.to_file("chicago_2001")
```
# Explore rounding errors
We check the following:
- The X and Y COORDINATES fields (which, as we'll see in a different notebook, are the longitude / latitude coordinates projected to EPSG:3435, in feet) are always whole numbers.
- The longitude and latitude data contains at most 9 decimal places of accuracy.
In the other notebook, we look at map projections. The data is most consistent with the longitude / latitude coordinates being the primary source, and the X/Y projected coordinates being computed and rounded to the nearest integer.
```
longs, lats = [], []
xcs, ycs = [], []
with open(filename, "rt") as file:
reader = csv.reader(file)
header = next(reader)
print(header)
for row in reader:
if len(row[14]) > 0:
longs.append(row[14])
lats.append(row[15])
xcs.append(row[12])
ycs.append(row[13])
set(len(x) for x in longs), set(len(x) for x in lats)
any(x.find('.') >= 0 for x in xcs), any(y.find('.') >= 0 for y in ycs)
```
# Repeated data
Mostly the "case" assignment is unique, but there are a few exceptions to this.
```
import collections
with lzma.open(filename_all, "rt") as file:
c = collections.Counter( event["properties"]["case"] for event in
chicago.generate_GeoJSON_Features(file, type="all") )
multiples = set( key for key in c if c[key] > 1 )
len(multiples)
with lzma.open(filename_all, "rt") as file:
data = gpd.GeoDataFrame.from_features(
event for event in chicago.generate_GeoJSON_Features(file, type="all")
if event["properties"]["case"] in multiples
)
len(data), len(data.case.unique())
```
|
github_jupyter
|
# Build and Evaluate a Linear Risk model
Welcome to the first assignment in Course 2!
## Outline
- [1. Import Packages](#1)
- [2. Load Data](#2)
- [3. Explore the Dataset](#3)
- [4. Mean-Normalize the Data](#4)
- [Exercise 1](#Ex-1)
- [5. Build the Model](#5)
- [Exercise 2](#Ex-2)
- [6. Evaluate the Model Using the C-Index](#6)
- [Exercise 3](#Ex-3)
- [7. Evaluate the Model on the Test Set](#7)
- [8. Improve the Model](#8)
- [Exercise 4](#Ex-4)
- [9. Evaluate the Improved Model](#9)
## Overview of the Assignment
In this assignment, you'll build a risk score model for retinopathy in diabetes patients using logistic regression.
As we develop the model, we will learn about the following topics:
- Data preprocessing
- Log transformations
- Standardization
- Basic Risk Models
- Logistic Regression
- C-index
- Interaction Terms
### Diabetic Retinopathy
Retinopathy is an eye condition that causes changes to the blood vessels in the part of the eye called the retina.
This often leads to vision changes or blindness.
Diabetic patients are known to be at high risk for retinopathy.
### Logistic Regression
Logistic regression is an appropriate analysis to use for predicting the probability of a binary outcome. In our case, this would be the probability of having or not having diabetic retinopathy.
Logistic Regression is one of the most commonly used algorithms for binary classification. It is used to find the best fitting model to describe the relationship between a set of features (also referred to as input, independent, predictor, or explanatory variables) and a binary outcome label (also referred to as an output, dependent, or response variable). Logistic regression has the property that the output prediction is always in the range $[0,1]$. Sometimes this output is used to represent a probability from 0%-100%, but for straight binary classification, the output is converted to either $0$ or $1$ depending on whether it is below or above a certain threshold, usually $0.5$.
It may be confusing that the term regression appears in the name even though logistic regression is actually a classification algorithm, but that's just a name it was given for historical reasons.
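To make the $[0,1]$ output and the $0.5$ threshold concrete, here is a small standalone sketch (not part of the assignment) that applies the logistic (sigmoid) function to some hypothetical linear scores and converts the resulting probabilities to class labels:
```
import numpy as np

def sigmoid(z):
    # Squashes any real-valued score into the (0, 1) interval
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical linear scores (w·x + b) for three patients
linear_scores = np.array([-2.0, 0.3, 1.7])

probabilities = sigmoid(linear_scores)            # values in (0, 1)
predictions = (probabilities >= 0.5).astype(int)  # 0/1 labels using the 0.5 threshold

print(probabilities)  # approximately [0.119 0.574 0.846]
print(predictions)    # [0 1 1]
```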
<a name='1'></a>
## 1. Import Packages
We'll first import all the packages that we need for this assignment.
- `numpy` is the fundamental package for scientific computing in python.
- `pandas` is what we'll use to manipulate our data.
- `matplotlib` is a plotting library.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```
<a name='2'></a>
## 2. Load Data
First we will load in the dataset that we will use for training and testing our model.
- Run the next cell to load the data that is stored in csv files.
- There is a function `load_data` which randomly generates data, but for consistency, please use the data from the csv files.
```
from utils import load_data
# This function creates randomly generated data
# X, y = load_data(6000)
# For stability, load data from files that were generated using the load_data
X = pd.read_csv('X_data.csv',index_col=0)
y_df = pd.read_csv('y_data.csv',index_col=0)
y = y_df['y']
```
`X` and `y` are Pandas DataFrames that hold the data for 6,000 diabetic patients.
<a name='3'></a>
## 3. Explore the Dataset
The features (`X`) include the following fields:
* Age: (years)
* Systolic_BP: Systolic blood pressure (mmHg)
* Diastolic_BP: Diastolic blood pressure (mmHg)
* Cholesterol: (mg/DL)
We can use the `head()` method to display the first few records of each.
```
X.head()
```
The target (`y`) is an indicator of whether or not the patient developed retinopathy.
* y = 1 : patient has retinopathy.
* y = 0 : patient does not have retinopathy.
```
y.head()
```
Before we build a model, let's take a closer look at the distribution of our training data. To do this, we will split the data into train and test sets using a 75/25 split.
For this, we can use the built in function provided by sklearn library. See the documentation for [sklearn.model_selection.train_test_split](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html).
```
from sklearn.model_selection import train_test_split
X_train_raw, X_test_raw, y_train, y_test = train_test_split(X, y, train_size=0.75, random_state=0)
```
Plot the histograms of each column of `X_train` below:
```
for col in X.columns:
X_train_raw.loc[:, col].hist()
plt.title(col)
plt.show()
```
As we can see, the distributions are generally bell shaped, but with a slight rightward skew.
Many statistical models assume that the data is normally distributed, forming a symmetric Gaussian bell shape (with no skew) more like the example below.
```
from scipy.stats import norm
data = np.random.normal(50,12, 5000)
fitting_params = norm.fit(data)
norm_dist_fitted = norm(*fitting_params)
t = np.linspace(0,100, 100)
plt.hist(data, bins=60, density=True)
plt.plot(t, norm_dist_fitted.pdf(t))
plt.title('Example of Normally Distributed Data')
plt.show()
```
We can transform our data to be closer to a normal distribution by removing the skew. One way to remove the skew is by applying the log function to the data.
Let's plot the log of the feature variables to see that it produces the desired effect.
```
for col in X_train_raw.columns:
np.log(X_train_raw.loc[:, col]).hist()
plt.title(col)
plt.show()
```
We can see that the data is more symmetric after taking the log.
<a name='4'></a>
## 4. Mean-Normalize the Data
Let's now transform our data so that the distributions are closer to standard normal distributions.
First we will remove some of the skew from the distribution by using the log transformation.
Then we will "standardize" the distribution so that it has a mean of zero and standard deviation of 1. Recall that a standard normal distribution has mean of zero and standard deviation of 1.
<a name='Ex-1'></a>
### Exercise 1
* Write a function that first removes some of the skew in the data, and then standardizes the distribution so that for each data point $x$,
$$\overline{x} = \frac{x - mean(x)}{std(x)}$$
* Keep in mind that we want to pretend that the test data is "unseen" data.
* This implies that it is unavailable to us for the purpose of preparing our data, and so we do not want to consider it when evaluating the mean and standard deviation that we use in the above equation. Instead we want to calculate these values using the training data alone, but then use them for standardizing both the training and the test data.
* For a further discussion on the topic, see this article ["Why do we need to re-use training parameters to transform test data"](https://sebastianraschka.com/faq/docs/scale-training-test.html).
#### Note
- For the sample standard deviation, please calculate the unbiased estimator:
$$s = \sqrt{\frac{\sum_{i=1}^n(x_{i} - \bar{x})^2}{n-1}}$$
- In other words, if you use numpy, set the degrees of freedom `ddof` to 1.
- For pandas, the default `ddof` is already set to 1.
<details>
<summary>
<font size="3" color="darkgreen"><b>Hints</b></font>
</summary>
<p>
<ul>
<li> When working with Pandas DataFrames, you can use the aggregation functions <code>mean</code> and <code>std</code>. Note that in order to apply an aggregation function separately for each row or each column, you'll set the axis parameter to either <code>0</code> or <code>1</code>. One produces the aggregation along columns and the other along rows, but it is easy to get them confused. So experiment with each option below to see which one you should use to get an average for each column in the dataframe.
<code>
avg = df.mean(axis=0)
avg = df.mean(axis=1)
</code>
</li>
<br></br>
<li>Remember to use <b>training</b> data statistics when standardizing both the training and the test data.</li>
</ul>
</p>
</details>
```
# UNQ_C1 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def make_standard_normal(df_train, df_test):
"""
In order to make the data closer to a normal distribution, take log
transforms to reduce the skew.
Then standardize the distribution with a mean of zero and standard deviation of 1.
Args:
df_train (dataframe): unnormalized training data.
df_test (dataframe): unnormalized test data.
Returns:
df_train_normalized (dataframe): normalized training data.
df_test_normalized (dataframe): normalized test data.
"""
### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
# Remove skew by applying the log function to the train set, and to the test set
df_train_unskewed = np.log(df_train)
df_test_unskewed = np.log(df_test)
#calculate the mean and standard deviation of the training set
mean = df_train_unskewed.mean(axis=0)
stdev = df_train_unskewed.std(axis=0)
# standardize the training set
df_train_standardized = (df_train_unskewed - mean) / stdev
# standardize the test set (see instructions and hints above)
df_test_standardized = (df_test_unskewed - mean) / stdev
### END CODE HERE ###
return df_train_standardized, df_test_standardized
```
#### Test Your Work
```
# test
tmp_train = pd.DataFrame({'field1': [1,2,10], 'field2': [4,5,11]})
tmp_test = pd.DataFrame({'field1': [1,3,10], 'field2': [4,6,11]})
tmp_train_transformed, tmp_test_transformed = make_standard_normal(tmp_train,tmp_test)
print(f"Training set transformed field1 has mean {tmp_train_transformed['field1'].mean(axis=0):.4f} and standard deviation {tmp_train_transformed['field1'].std(axis=0):.4f} ")
print(f"Test set transformed, field1 has mean {tmp_test_transformed['field1'].mean(axis=0):.4f} and standard deviation {tmp_test_transformed['field1'].std(axis=0):.4f}")
print(f"Skew of training set field1 before transformation: {tmp_train['field1'].skew(axis=0):.4f}")
print(f"Skew of training set field1 after transformation: {tmp_train_transformed['field1'].skew(axis=0):.4f}")
print(f"Skew of test set field1 before transformation: {tmp_test['field1'].skew(axis=0):.4f}")
print(f"Skew of test set field1 after transformation: {tmp_test_transformed['field1'].skew(axis=0):.4f}")
```
#### Expected Output:
```CPP
Training set transformed field1 has mean -0.0000 and standard deviation 1.0000
Test set transformed, field1 has mean 0.1144 and standard deviation 0.9749
Skew of training set field1 before transformation: 1.6523
Skew of training set field1 after transformation: 1.0857
Skew of test set field1 before transformation: 1.3896
Skew of test set field1 after transformation: 0.1371
```
#### Transform training and test data
Use the function that you just implemented to make the data distribution closer to a standard normal distribution.
```
X_train, X_test = make_standard_normal(X_train_raw, X_test_raw)
```
After transforming the training and test sets, we'll expect the training set to be centered at zero with a standard deviation of $1$.
We will avoid observing the test set during model training in order to avoid biasing the model training process, but let's have a look at the distributions of the transformed training data.
```
for col in X_train.columns:
X_train[col].hist()
plt.title(col)
plt.show()
```
<a name='5'></a>
## 5. Build the Model
Now we are ready to build the risk model by training logistic regression with our data.
<a name='Ex-2'></a>
### Exercise 2
* Implement the `lr_model` function to build a model using logistic regression with the `LogisticRegression` class from `sklearn`.
* See the documentation for [sklearn.linear_model.LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html#sklearn.linear_model.LogisticRegression.fit).
<details>
<summary>
<font size="3" color="darkgreen"><b>Hints</b></font>
</summary>
<p>
<ul>
<li>You can leave all the parameters to their default values when constructing an instance of the <code>sklearn.linear_model.LogisticRegression</code> class. If you get a warning message regarding the <code>solver</code> parameter, however, you may want to specify that particular one explicitly with <code>solver='lbfgs'</code>.
</li>
<br></br>
</ul>
</p>
</details>
```
# UNQ_C2 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def lr_model(X_train, y_train):
### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
# import the LogisticRegression class
from sklearn.linear_model import LogisticRegression
# create the model object
model = LogisticRegression()
# fit the model to the training data
model.fit(X_train,y_train)
### END CODE HERE ###
#return the fitted model
return model
```
#### Test Your Work
Note: the `predict` method returns the model prediction *after* converting it from a value in the $[0,1]$ range to a $0$ or $1$ depending on whether it is below or above $0.5$.
```
# Test
tmp_model = lr_model(X_train[0:3], y_train[0:3] )
print(tmp_model.predict(X_train[4:5]))
print(tmp_model.predict(X_train[5:6]))
```
#### Expected Output:
```CPP
[1.]
[1.]
```
Now that we've tested our model, we can go ahead and build it. Note that the `lr_model` function also fits the model to the training data.
```
model_X = lr_model(X_train, y_train)
```
<a name='6'></a>
## 6. Evaluate the Model Using the C-index
Now that we have a model, we need to evaluate it. We'll do this using the c-index.
* The c-index measures the discriminatory power of a risk score.
* Intuitively, a higher c-index indicates that the model's prediction is in agreement with the actual outcomes of a pair of patients.
* The formula for the c-index is
$$ \mbox{cindex} = \frac{\mbox{concordant} + 0.5 \times \mbox{ties}}{\mbox{permissible}} $$
* A permissible pair is a pair of patients who have different outcomes.
* A concordant pair is a permissible pair in which the patient with the higher risk score also has the worse outcome.
* A tie is a permissible pair where the patients have the same risk score.
<a name='Ex-3'></a>
### Exercise 3
* Implement the `cindex` function to compute c-index.
* `y_true` is the array of actual patient outcomes, 0 if the patient does not eventually get the disease, and 1 if the patient eventually gets the disease.
* `scores` is the risk score of each patient. These provide relative measures of risk, so they can be any real numbers. By convention, they are always non-negative.
* Here is an example of input data and how to interpret it:
```Python
y_true = [0,1]
scores = [0.45, 1.25]
```
* There are two patients. Index 0 of each array is associated with patient 0. Index 1 is associated with patient 1.
* Patient 0 does not have the disease in the future (`y_true` is 0), and based on past information, has a risk score of 0.45.
* Patient 1 has the disease at some point in the future (`y_true` is 1), and based on past information, has a risk score of 1.25.
```
# UNQ_C3 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def cindex(y_true, scores):
'''
Input:
y_true (np.array): a 1-D array of true binary outcomes (values of zero or one)
0: patient does not get the disease
1: patient does get the disease
scores (np.array): a 1-D array of corresponding risk scores output by the model
Output:
c_index (float): (concordant pairs + 0.5*ties) / number of permissible pairs
'''
n = len(y_true)
assert len(scores) == n
concordant = 0
permissible = 0
ties = 0
### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
# use two nested for loops to go through all unique pairs of patients
for i in range(n):
for j in range(i+1, n): #choose the range of j so that j>i
# Check if the pair is permissible (the patient outcomes are different)
if y_true[i] != y_true[j]:
# Count the pair if it's permissible
permissible =permissible + 1
# For permissible pairs, check if they are concordant or are ties
# check for ties in the score
if scores[i] == scores[j]:
# count the tie
ties = ties + 1
# if it's a tie, we don't need to check patient outcomes, continue to the top of the for loop.
continue
# case 1: patient i doesn't get the disease, patient j does
if y_true[i] == 0 and y_true[j] == 1:
# Check if patient i has a lower risk score than patient j
if scores[i] < scores[j]:
# count the concordant pair
concordant = concordant + 1
# Otherwise if patient i has a higher risk score, it's not a concordant pair.
# Already checked for ties earlier
# case 2: patient i gets the disease, patient j does not
if y_true[i] == 1 and y_true[j] == 0:
# Check if patient i has a higher risk score than patient j
if scores[i] > scores[j]:
#count the concordant pair
concordant = concordant + 1
# Otherwise if patient i has a lower risk score, it's not a concordant pair.
# We already checked for ties earlier
# calculate the c-index using the count of permissible pairs, concordant pairs, and tied pairs.
c_index = (concordant + (0.5 * ties)) / permissible
### END CODE HERE ###
return c_index
```
#### Test Your Work
You can use the following test cases to make sure your implementation is correct.
```
# test
y_true = np.array([1.0, 0.0, 0.0, 1.0])
# Case 1
scores = np.array([0, 1, 1, 0])
print('Case 1 Output: {}'.format(cindex(y_true, scores)))
# Case 2
scores = np.array([1, 0, 0, 1])
print('Case 2 Output: {}'.format(cindex(y_true, scores)))
# Case 3
scores = np.array([0.5, 0.5, 0.0, 1.0])
print('Case 3 Output: {}'.format(cindex(y_true, scores)))
cindex(y_true, scores)
```
#### Expected Output:
```CPP
Case 1 Output: 0.0
Case 2 Output: 1.0
Case 3 Output: 0.875
```
#### Note
Please check your implementation of the for loops.
- There is a way to make a mistake in the for loops that cannot be caught with unit tests.
- Bonus: Can you think of what this error could be, and why it can't be caught by unit tests?
<a name='7'></a>
## 7. Evaluate the Model on the Test Set
Now, you can evaluate your trained model on the test set.
To get the predicted probabilities, we use the `predict_proba` method. This method will return the result from the model *before* it is converted to a binary 0 or 1. For each input case, it returns an array of two values which represent the probabilities for the negative case (the patient does not get the disease) and the positive case (the patient gets the disease).
```
scores = model_X.predict_proba(X_test)[:, 1]
c_index_X_test = cindex(y_test.values, scores)
print(f"c-index on test set is {c_index_X_test:.4f}")
```
#### Expected output:
```CPP
c-index on test set is 0.8182
```
Let's plot the coefficients to see which variables (patient features) are having the most effect. You can access the model coefficients by using `model.coef_`
```
coeffs = pd.DataFrame(data = model_X.coef_, columns = X_train.columns)
coeffs.T.plot.bar(legend=None);
```
### Question:
> __Which three variables have the largest impact on the model's predictions?__
<a name='8'></a>
## 8. Improve the Model
You can try to improve your model by including interaction terms.
* An interaction term is the product of two variables.
* For example, if we have data
$$ x = [x_1, x_2]$$
* We could add the product so that:
$$ \hat{x} = [x_1, x_2, x_1*x_2]$$
<a name='Ex-4'></a>
### Exercise 4
Write code below to add all interactions between every pair of variables to the training and test datasets.
```
# UNQ_C4 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def add_interactions(X):
"""
Add interaction terms between columns to dataframe.
Args:
X (dataframe): Original data
Returns:
X_int (dataframe): Original data with interaction terms appended.
"""
features = X.columns
m = len(features)
X_int = X.copy(deep=True)
### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
# 'i' loops through all features in the original dataframe X
for i in range(m):
# get the name of feature 'i'
feature_i_name = features[i]
# get the data for feature 'i'
feature_i_data = X[feature_i_name]
# choose the index of column 'j' to be greater than column i
for j in range(i+1, m):
# get the name of feature 'j'
feature_j_name = features[j]
# get the data for feature j'
feature_j_data = X[feature_j_name]
# create the name of the interaction feature by combining both names
# example: "apple" and "orange" are combined to be "apple_x_orange"
feature_i_j_name = feature_i_name+"_x_"+feature_j_name
# Multiply the data for feature 'i' and feature 'j'
# store the result as a column in dataframe X_int
X_int[feature_i_j_name] = feature_i_data * feature_j_data
### END CODE HERE ###
return X_int
```
#### Test Your Work
Run the cell below to check your implementation.
```
print("Original Data")
print(X_train.loc[:, ['Age', 'Systolic_BP']].head())
print("Data w/ Interactions")
print(add_interactions(X_train.loc[:, ['Age', 'Systolic_BP']].head()))
```
#### Expected Output:
```CPP
Original Data
Age Systolic_BP
1824 -0.912451 -0.068019
253 -0.302039 1.719538
1114 2.576274 0.155962
3220 1.163621 -2.033931
2108 -0.446238 -0.054554
Data w/ Interactions
Age Systolic_BP Age_x_Systolic_BP
1824 -0.912451 -0.068019 0.062064
253 -0.302039 1.719538 -0.519367
1114 2.576274 0.155962 0.401800
3220 1.163621 -2.033931 -2.366725
2108 -0.446238 -0.054554 0.024344
```
Once you have correctly implemented `add_interactions`, use it to make transformed version of `X_train` and `X_test`.
```
X_train_int = add_interactions(X_train)
X_test_int = add_interactions(X_test)
```
<a name='9'></a>
## 9. Evaluate the Improved Model
Now we can train the new and improved version of the model.
```
model_X_int = lr_model(X_train_int, y_train)
```
Let's evaluate our new model on the test set.
```
scores_X = model_X.predict_proba(X_test)[:, 1]
c_index_X_test = cindex(y_test.values, scores_X)
scores_X_int = model_X_int.predict_proba(X_test_int)[:, 1]
c_index_X_int_test = cindex(y_test.values, scores_X_int)
print(f"c-index on test set without interactions is {c_index_X_test:.4f}")
print(f"c-index on test set with interactions is {c_index_X_int_test:.4f}")
```
You should see that the model with interaction terms performs a bit better than the model without interactions.
Now let's take another look at the model coefficients to try and see which variables made a difference. Plot the coefficients and report which features seem to be the most important.
```
int_coeffs = pd.DataFrame(data = model_X_int.coef_, columns = X_train_int.columns)
int_coeffs.T.plot.bar();
```
### Questions:
> __Which variables are most important to the model?__<br>
> __Have the relevant variables changed?__<br>
> __What does it mean when the coefficients are positive or negative?__<br>
You may notice that Age, Systolic_BP, and Cholesterol have a positive coefficient. This means that a higher value in these three features leads to a higher prediction probability for the disease. You also may notice that the interaction of Age x Cholesterol has a negative coefficient. This means that a higher value for the Age x Cholesterol product reduces the prediction probability for the disease.
To understand the effect of interaction terms, let's compare the output of the model we've trained on sample cases with and without the interaction. Run the cell below to choose an index and look at the features corresponding to that case in the training set.
```
index = 3432
case = X_train_int.iloc[index, :]
print(case)
```
We can see that this patient has above-average Age and Cholesterol. We can now see what the model would have predicted without the interaction by zeroing out the value of the Age_x_Cholesterol interaction term.
```
new_case = case.copy(deep=True)
new_case.loc["Age_x_Cholesterol"] = 0
new_case
print(f"Output with interaction: \t{model_X_int.predict_proba([case.values])[:, 1][0]:.4f}")
print(f"Output without interaction: \t{model_X_int.predict_proba([new_case.values])[:, 1][0]:.4f}")
```
#### Expected output
```CPP
Output with interaction: 0.9448
Output without interaction: 0.9965
```
We see that the model is less confident in its prediction with the interaction term than without (the prediction value is lower when including the interaction term). With the interaction term, the model has adjusted for the fact that the effect of high cholesterol becomes less important for older patients compared to younger patients.
|
github_jupyter
|
# CHEM 1000 - Spring 2022
Prof. Geoffrey Hutchison, University of Pittsburgh
## 1. Functions and Coordinate Sets
Chapter 1 in [*Mathematical Methods for Chemists*](http://sites.bu.edu/straub/mathematical-methods-for-molecular-science/)
By the end of this session, you should be able to:
- Handle 2D polar and 3D spherical coordinates
- Understand area elements in 2D polar coordinates
- Understand volume elements in 3D spherical coordinates
### X/Y Cartesian 2D Coordinates
We've already been using the x/y 2D Cartesian coordinate set to plot functions.
Beyond `sympy`, we're going to use two new modules:
- `numpy` which lets us create and handle arrays of numbers
- `matplotlib` which lets us plot things
It's a little bit more complicated. For now, you can just consider these as **demos**. We'll go into code (and make our own plots) in the next recitation period.
```
# import numpy
# the "as np" part is giving a shortcut so we can write "np.function()" instead of "numpy.function()"
# (saving typing is nice)
import numpy as np
# similarly, we import matplotlib's 'pyplot' module
# and "as plt" means we can use "plt.show" instead of "matplotlib.pyplot.show()"
import matplotlib.pyplot as plt
# insert any graphs into our notebooks directly
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
# once we've done that import (once) - we just need to create our x/y values
x = np.arange(0, 4*np.pi, 0.1) # start, stop, resolution
y = np.sin(x) # creates an array with sin() of all the x values
plt.plot(x,y)
plt.show()
```
Sometimes, we need to get areas in the Cartesian xy system, but this is very easy - we simply multiply an increment in x ($dx$) and an increment in y ($dy$).
(Image from [*Mathematical Methods for Chemists*](http://sites.bu.edu/straub/mathematical-methods-for-molecular-science/))
<img src="../images/cartesian-area.png" width="400" />
### Polar (2D) Coordinates
Of course, not all functions work well in xy Cartesian coordinates. A function should produce one y value for any x value. Thus, a circle isn't easily represented as $y = f(x)$.
Instead, polar coordinates use radius $r$ and angle $\theta$. (Image from [*Mathematical Methods for Chemists*](http://sites.bu.edu/straub/mathematical-methods-for-molecular-science/))
<img src="../images/cartesian-polar.png" width="343" />
As a reminder, we can interconvert x,y into r, theta:
$$
r = \sqrt{x^2 + y^2}
$$
$$
\theta = \arctan \frac{y}{x} = \tan^{-1} \frac{y}{x}
$$
```
x = 3.0
y = 1.0
r = np.sqrt(x**2 + y**2)
theta = np.arctan(y / x)
print('r =', round(r, 4), 'theta = ', round(theta, 4))
```
Okay, we can't express a circle as an easy $y = f(x)$ expression. Can we do that in polar coordinates? Sure. The radius will be constant, and theta will go from $0 .. 2\pi$.
```
theta = np.arange(0, 2*np.pi, 0.01) # set up an array of angles from 0 to 2π in 0.01 rad steps
# create a function r(theta) = 1.5 .. a constant
r = np.full(theta.size, 1.5)
# create a new polar plot
ax = plt.subplot(111, projection='polar')
ax.plot(theta, r, color='blue')
ax.set_rmax(3)
ax.set_rticks([1, 2]) # Less radial ticks
ax.set_rlabel_position(22.5) # Move radial labels away from plotted line
ax.grid(True)
plt.show()
```
Anything else? Sure - we can create spirals, etc. that are parametric functions in the XY Cartesian coordinates.
```
r = np.arange(0, 2, 0.01) # set up an array of radii from 0 to 2 with 0.01 resolution
# this is a function theta(r) = 2π * r
theta = 2 * np.pi * r # set up an array of theta angles - spiraling outward .. from 0 to 2*2pi = 4pi
# create a polar plot
ax = plt.subplot(111, projection='polar')
ax.plot(theta, r, color='red')
ax.set_rmax(3)
ax.set_rticks([1, 2]) # Less radial ticks
ax.set_rlabel_position(22.5) # Move radial labels away from plotted line
ax.grid(True)
plt.show()
```
Just like with xy Cartesian, we will eventually need to consider the area of functions in polar coordinates. (Image from [*Mathematical Methods for Chemists*](http://sites.bu.edu/straub/mathematical-methods-for-molecular-science/))
<img src="../images/polar_area.png" width=375 />
Note that the area depends on the radius. Even if we sweep out the same $\Delta r$ and $\Delta \theta$, an area further out from the center is larger:
```
# create a polar plot
ax = plt.subplot(111, projection='polar')
# first arc at r = 1.0
r1 = np.full(20, 1.0)
theta1 = np.linspace(1.0, 1.3, 20)
ax.plot(theta1, r1)
# second arc at r = 1.2
r2 = np.full(20, 1.2)
theta2 = np.linspace(1.0, 1.3, 20)
ax.plot(theta2, r2)
# first radial line at theta = 1.0 radians
r3 = np.linspace(1.0, 1.2, 20)
theta3 = np.full(20, 1.0)
ax.plot(theta3, r3)
# second radial line at theta = 1.3 radians
r4 = np.linspace(1.0, 1.2, 20)
theta4 = np.full(20, 1.3)
ax.plot(theta4, r4)
# smaller box
# goes from r = 0.4-> 0.6
# sweeps out theta = 1.0-1.3 radians
r5 = np.full(20, 0.4)
r6 = np.full(20, 0.6)
r7 = np.linspace(0.4, 0.6, 20)
r8 = np.linspace(0.4, 0.6, 20)
ax.plot(theta1, r5)
ax.plot(theta2, r6)
ax.plot(theta3, r7)
ax.plot(theta4, r8)
ax.set_rmax(1.5)
ax.set_rticks([0.5, 1, 1.5]) # Less radial ticks
ax.set_rlabel_position(-22.5) # Move radial labels away from plotted line
ax.grid(True)
plt.show()
```
Thus the area element will be $r dr d\theta$. While it's not precisely rectangular, the increments are very small and it's a reasonable approximation.
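As a quick numeric sanity check (a sketch, not part of the text), we can compare the exact area of the two "boxes" drawn above with the $r\,\Delta r\,\Delta\theta$ approximation. Both sweep $\Delta r = 0.2$ and $\Delta\theta = 0.3$ rad, but the outer box is clearly larger:
```
# Exact area of a polar "box" with r in [r1, r2] and angular sweep dtheta:
#   A = 0.5 * (r2**2 - r1**2) * dtheta
# Approximation using the mid-radius:  A ≈ r_mid * dr * dtheta
dr, dtheta = 0.2, 0.3

for r1, r2 in [(1.0, 1.2), (0.4, 0.6)]:
    exact = 0.5 * (r2**2 - r1**2) * dtheta
    approx = 0.5 * (r1 + r2) * dr * dtheta   # r_mid = (r1 + r2) / 2
    print("r in [%.1f, %.1f]: exact = %.4f, r*dr*dtheta = %.4f" % (r1, r2, exact, approx))
```
The outer box (r from 1.0 to 1.2) comes out roughly twice as large as the inner one (r from 0.4 to 0.6), even though $\Delta r$ and $\Delta\theta$ are identical.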
### 3D Cartesian Coordinates
Of course there are many times when we need to express functions like:
$$ z = f(x,y) $$
These are a standard extension of 2D Cartesian coordinates, and so the volume is simply defined as that of a rectangular solid.
<img src="../images/cartesian-volume.png" width="360" />
```
from sympy import symbols
from sympy.plotting import plot3d
x, y = symbols('x y')
plot3d(-0.5 * (x**2 + y**2), (x, -3, 3), (y, -3, 3))
```
### 3D Spherical Coordinates
Much like in two dimensions, we sometimes need to use spherical coordinates (atoms are spherical, after all).
<div class="alert alert-block alert-danger">
**WARNING** Some math courses use a different [convention](https://en.wikipedia.org/wiki/Spherical_coordinate_system#Conventions) than chemistry and physics.
- Physics and chemistry use $(r, \theta, \varphi)$ where $\theta$ is the angle down from the z-axis (e.g., latitude)
- Some math courses use $\theta$ as the angle in the XY 2D plane.
</div>
(Image from [*Mathematical Methods for Chemists*](http://sites.bu.edu/straub/mathematical-methods-for-molecular-science/))
<img src="../images/spherical.png" width="330" />
Where:
- $r$ is the radius, from 0 to $\infty$
- $\theta$ is the angle down from the z-axis
- e.g., think of N/S latitude on the Earth's surface: from 0° at the N pole to 90° (π/2) at the equator and 180° (π) at the S pole
- $\varphi$ is the angle in the $xy$ plane
- e.g., think of E/W longitude on the Earth: from 0° to 360° (0 to 2π)
We can interconvert xyz and $r\theta\varphi$
$$x = r\sin \theta \cos \varphi$$
$$y = r\sin \theta \sin \varphi$$
$$z = r \cos \theta$$
Or vice-versa:
$$
\begin{array}{l}r=\sqrt{x^{2}+y^{2}+z^{2}} \\ \theta=\arccos \left(\frac{z}{r}\right)=\cos ^{-1}\left(\frac{z}{r}\right) \\ \varphi=\tan ^{-1}\left(\frac{y}{x}\right)\end{array}
$$
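As a quick sanity check (in the same spirit as the 2D example above), we can convert a point from Cartesian to spherical coordinates and back; `np.arctan2` is used instead of `np.arctan` so the correct quadrant of $\varphi$ is chosen automatically:
```
import numpy as np

x, y, z = 1.0, 2.0, 2.0

# Cartesian -> spherical
r = np.sqrt(x**2 + y**2 + z**2)
theta = np.arccos(z / r)      # angle down from the z-axis
phi = np.arctan2(y, x)        # angle in the xy-plane

# Spherical -> Cartesian (should recover the original point)
x2 = r * np.sin(theta) * np.cos(phi)
y2 = r * np.sin(theta) * np.sin(phi)
z2 = r * np.cos(theta)

print('r =', round(r, 4), 'theta =', round(theta, 4), 'phi =', round(phi, 4))
print('x, y, z recovered:', round(x2, 4), round(y2, 4), round(z2, 4))
```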
The code below might look a little complicated. That's okay. I've added comments for the different sections and each line.
You don't need to understand all of it - it's intended to plot the function:
$$ r = |\cos(\theta^2) | $$
```
# import some matplotlib modules for 3D and color scales
import mpl_toolkits.mplot3d.axes3d as axes3d
import matplotlib.colors as mcolors
cmap = plt.get_cmap('jet') # pick a red-to-blue color map
fig = plt.figure() # create a figure
ax = fig.add_subplot(1,1,1, projection='3d') # set up some axes for a 3D projection
# We now set up the grid for evaluating our function
# particularly the angle portion of the spherical coordinates
theta = np.linspace(0, np.pi, 100)
phi = np.linspace(0, 2*np.pi, 100)
THETA, PHI = np.meshgrid(theta, phi)
# here's the function to plot
R = np.abs(np.cos(THETA**2))
# now convert R(phi, theta) to x, y, z coordinates to plot
X = R * np.sin(THETA) * np.cos(PHI)
Y = R * np.sin(THETA) * np.sin(PHI)
Z = R * np.cos(THETA)
# set up some colors based on the Z range .. from red to blue
norm = mcolors.Normalize(vmin=Z.min(), vmax=Z.max())
# plot the surface
plot = ax.plot_surface(X, Y, Z, rstride=1, cstride=1, facecolors=cmap(norm(Z)),
linewidth=0, antialiased=True, alpha=0.4) # no lines, smooth graphics, semi-transparent
plt.show()
```
The volume element in spherical coordinates is a bit tricky, since the distances depend on the radius and angles:
(Image from [*Mathematical Methods for Chemists*](http://sites.bu.edu/straub/mathematical-methods-for-molecular-science/))
$$ dV = r^2 \sin \theta \, dr \, d\theta \, d\varphi$$
<img src="../images/spherical-volume.png" width="414" />
-------
This notebook is from Prof. Geoffrey Hutchison, University of Pittsburgh
https://github.com/ghutchis/chem1000
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a>
|
github_jupyter
|
Before you turn this problem in, make sure everything runs as expected. First, **restart the kernel** (in the menubar, select Kernel$\rightarrow$Restart) and then **run all cells** (in the menubar, select Cell$\rightarrow$Run All).
Make sure you fill in any place that says `YOUR CODE HERE` or "YOUR ANSWER HERE", as well as your name and collaborators below:
```
NAME = ""
COLLABORATORS = ""
```
---
<!--NOTEBOOK_HEADER-->
*This notebook contains material from [PyRosetta](https://RosettaCommons.github.io/PyRosetta.notebooks);
content is available [on Github](https://github.com/RosettaCommons/PyRosetta.notebooks.git).*
<!--NAVIGATION-->
< [Jupyter Notebooks, Python, and Google Colaboratory](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/01.02-Notebooks-Python-Colab.ipynb) | [Contents](toc.ipynb) | [Index](index.ipynb) | [Introduction to PyRosetta](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/02.00-Introduction-to-PyRosetta.ipynb) ><p><a href="https://colab.research.google.com/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/01.03-FAQ.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open in Google Colaboratory"></a>
# Frequently Asked Questions/Troubleshooting Tips
**Q1: I went through the setup instructions, but importing PyRosetta causes an error.**
Few things you could try:
- Make sure that you ran every cell in the `PyRosetta Google Drive Setup` notebook (Chapter 1.01) before importing PyRosetta.
- Make sure that you installed the correct version of PyRosetta. It has to be the Linux version or it won't work. If you did happen to use an incorrect version, just delete the wrong PyRosetta package from your Google Drive, upload the correct one, delete the `prefix` folder, and try running Chapter 1.01 again.
- Make sure that the directory tree is correct. The `PyRosetta` folder should be in the _top directory_ in your Google Drive, as well as the `notebooks` or `student-notebooks` folder. Your notebooks, including Chapter 01.01 should reside in either `notebooks` or `student-notebooks`.
**Q2: The `make-student-nb.bash` script doesn't work.**
This script automatically synchronizes the `notebooks` and `student-notebooks` folders. All changes should be made in `notebooks`. The script relies on the `nbgrader` module and the `nbpages` module. Make sure you've installed these before running the script (you might even have to update these modules).
If you just want to update the Table of Contents or Keywords files, you can just use the `nbpages` command alone.
If nbgrader is the problem, you might have to run the command: `nbgrader update .`
<!--NAVIGATION-->
< [Jupyter Notebooks, Python, and Google Colaboratory](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/01.02-Notebooks-Python-Colab.ipynb) | [Contents](toc.ipynb) | [Index](index.ipynb) | [Introduction to PyRosetta](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/02.00-Introduction-to-PyRosetta.ipynb) ><p><a href="https://colab.research.google.com/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/01.03-FAQ.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open in Google Colaboratory"></a>
|
github_jupyter
|
# Example for an analytical solution of weakly scattering sphere in Python
In this example, the analytical solution for a weakly scattering sphere, based on Anderson, V. C., "Sound scattering from a fluid sphere",
J. Acoust. Soc. America, 22 (4), pp. 426-431, July 1950, is computed in its original form and in the simplified version from Jech et al. (2015).
For the original Anderson (1950) version, several variables need to be defined to compute the target strength (TS):
- Define the sphere
- radius (Radius) in m
- range (Range) which is the distance from the sound emitting device to the center of the sphere (m)
- density of the sphere (Rho_b) (kg/$m^3$)
- sound velocity inside the sphere (c_b) (m/s)
- Define the surrounding fluid
- sound velocity in the surrounding fluid (c_w)(m/s)
- density of the surrounding fluid (kg/$m^3$)
- Define the plane wave
- Frequency (f) (Hz)
- Scattering angle relative to the travelling direction of the incident wave (rad)
Example for a 1 cm sphere with a density and sound velocity contrast of 1.0025, at a range of 10 m (well outside of the nearfield), at a frequency of 200 kHz, with an assumed sound velocity of 1500 m/s and a density of 1026 kg/m^3 for the surrounding fluid, measured at 90 degrees (i.e. 1.571 rad):
```
from fluid_sphere import *
import time #just to time the execution of the script
#Define variables
c_w = 1500
f = 200000
c_b = 1.0025*1500
Range = 10
Radius = 0.01
Rho_w = 1026
Rho_b = 1.0025 * Rho_w
Theta = 1.571
#get TS
TS = fluid_sphere(f=f,Radius=Radius, Range=Range,Rho_w=Rho_w,Rho_b=Rho_b,Theta=Theta,c_w=c_w,c_b=c_b)
print("TS for the sphere is %.2f dB"%TS)
```
TS can easily be computed for a range of frequencies, here from 1 to 300 kHz in 0.5 kHz steps:
```
freqs = np.arange(1,300,0.5)*1000
start = time.perf_counter()
TS = [fluid_sphere(f=x,Radius=Radius, Range=Range,Rho_w=Rho_w,Rho_b=Rho_b,Theta=Theta,c_w=c_w,c_b=c_b) for x in freqs]
end = time.perf_counter()
tel_0 = end - start
print("Evaluating the TS took %.2f seconds"%tel_0)
```
Plot the results:
```
import matplotlib.pyplot as plt
plt.rcParams.update({'font.size': 16})
fig, ax = plt.subplots(figsize=(12,6))
ax.plot(freqs/1000, TS)
plt.xlabel("Frequency [kHz]")
plt.ylabel("TS [dB re 1$m^2$]")
plt.title("TS for a sphere with a=%.1f cm @ range=%.1f m"%(Radius*100,Range))
plt.show()
```
Similarly for the simplified version, the sound velocity in the external fluid (c, m/s), the frequency (f, Hz), the density and sound velocity contrasts (g and h), distance to the sphere (r, m), the radius of the sphere (a,m) and the density of the surrounding fluid (rho, kg m^-3) need to be defined.
```
f = 200000
r = 10
a = 0.01
c = 1500
h = 1.0025
g = 1.0025
rho = 1026
TS = fluid_sphere_simple(f,r,a,c,h,g,rho)
print("TS for the sphere is %.2f dB"%TS)
start = time.perf_counter()
TS = [fluid_sphere_simple(f=x,r=r,a=a,c=c,h=h,g=g,rho=rho) for x in freqs]
end = time.perf_counter()
tel_1 = end - start
print("Evaluating the TS took %.2f seconds"%tel_1)
plt.rcParams.update({'font.size': 16})
fig, ax = plt.subplots(figsize=(12,6))
ax.plot(freqs/1000, TS)
plt.xlabel("Frequency [kHz]")
plt.ylabel("TS [dB re 1$m^2$]")
plt.title("TS for a sphere with a=%.1f cm @ range=%.1f m"%(Radius*100,Range))
plt.show()
if tel_1 < tel_0:
print("The simplified method (%.2f s) evaluated the TS %.2f s faster then "
"the original Anderson (1950) method (%.2f s)"%(tel_1,np.abs(tel_1-tel_0),tel_0))
else:
print("The simplified method (%.2f s) evaluated the TS %.2f s slower then "
"the original Anderson (1950) method (%.2f s)"%(tel_1,np.abs(tel_1-tel_0),tel_0))
```
Both methods should deliver the same results for backscatter at a 90 degree angle, but the original Anderson (1950) method is more flexible in terms of scattering angle and could easily be extended to include shear and surface pressure, or be validated for a solid sphere.
|
github_jupyter
|
# Zipline Pipeline
### Introduction
On any given trading day, the entire universe of stocks consists of thousands of securities. Usually, you will not be interested in investing in all the stocks in the entire universe, but rather, you will likely select only a subset of these to invest in. For example, you may only want to invest in stocks that have a 10-day average closing price of \$10.00 or less. Or you may only want to invest in the top 500 securities ranked by some factor.
In order to avoid spending a lot of time doing data wrangling to select only the securities you are interested in, people often use **pipelines**. In general, a pipeline is a placeholder for a series of data operations used to filter and rank data according to some factor or factors.
In this notebook, you will learn how to work with the **Zipline Pipeline**. Zipline is an open-source algorithmic trading simulator developed by *Quantopian*. We will learn how to use the Zipline Pipeline to filter stock data according to factors.
### Install Packages
```
!conda install -y -c Quantopian zipline
import sys
!{sys.executable} -m pip install -r requirements.txt
```
# Loading Data with Zipline
Before we build our pipeline with Zipline, we will first see how we can load the stock data we are going to use into Zipline. Zipline uses **Data Bundles** to make it easy to use different data sources. A data bundle is a collection of pricing data, adjustment data, and an asset database. Zipline employs data bundles to preload data used to run backtests and store data for future runs. Zipline comes with a few data bundles by default but it also has the ability to ingest new bundles. The first step to using a data bundle is to ingest the data. Zipline's ingestion process will start by downloading the data or by loading data files from your local machine. It will then pass the data to a set of writer objects that converts the original data to Zipline’s internal format (`bcolz` for pricing data, and `SQLite` for split/merger/dividend data) that has been optimized for speed. This new data is written to a standard location that Zipline can find. By default, the new data is written to a subdirectory of `ZIPLINE_ROOT/data/<bundle>`, where `<bundle>` is the name given to the bundle ingested and the subdirectory is named with the current date. This allows Zipline to look at older data and run backtests on older copies of the data. Running a backtest with an old ingestion makes it easier to reproduce backtest results later.
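For reference only (the Quotemedia bundle used in this notebook has already been ingested in the Udacity workspace), ingesting a bundle that has been registered, for example in `~/.zipline/extension.py`, is normally a one-line command. The cell below is a sketch and is left commented out:
```
# Run once from a terminal (or prefix with "!" in a notebook cell) after the bundle is registered:
#   zipline ingest -b eod-quotemedia
#
# List the bundles Zipline knows about, along with their ingestion timestamps:
#   zipline bundles
#
# The same ingestion can also be triggered from Python:
#   from zipline.data import bundles
#   bundles.ingest('eod-quotemedia')
```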
In this notebook, we will be using stock data from **Quotemedia**. In the Udacity Workspace you will find that the stock data from Quotemedia has already been ingested into Zipline. Therefore, in the code below we will use Zipline's `bundles.load()` function to load our previously ingested stock data from Quotemedia. In order to use the `bundles.load()` function we first need to do a couple of things. First, we need to specify the name of the bundle previously ingested. In this case, the name of the Quotemedia data bundle is `eod-quotemedia`:
```
# Specify the bundle name
bundle_name = 'eod-quotemedia'
```
Second, we need to register the data bundle and its ingest function with Zipline, using the `bundles.register()` function. The ingest function is responsible for loading the data into memory and passing it to a set of writer objects provided by Zipline to convert the data to Zipline’s internal format. Since the original Quotemedia data was contained in `.csv` files, we will use the `csvdir_equities()` function to generate the ingest function for our Quotemedia data bundle. In addition, since Quotemedia's `.csv` files contained daily stock data, we will set the time frame for our ingest function, to `daily`.
```
from zipline.data import bundles
from zipline.data.bundles.csvdir import csvdir_equities
# Create an ingest function
ingest_func = csvdir_equities(['daily'], bundle_name)
# Register the data bundle and its ingest function
bundles.register(bundle_name, ingest_func);
```
Once our data bundle and ingest function are registered, we can load our data using the `bundles.load()` function. Since this function loads our previously ingested data, we need to set `ZIPLINE_ROOT` to the path of the most recent ingested data. The most recent data is located in the `cwd/../../data/project_4_eod/` directory, where `cwd` is the current working directory. We will specify this location using the `os.environ[]` command.
```
import os
# Set environment variable 'ZIPLINE_ROOT' to the path where the most recent data is located
os.environ['ZIPLINE_ROOT'] = os.path.join(os.getcwd(),'project_4_eod')
# Load the data bundle
bundle_data = bundles.load(bundle_name)
```
# Building an Empty Pipeline
Once we have loaded our data, we can start building our Zipline pipeline. We begin by creating an empty Pipeline object using Zipline's `Pipeline` class. A Pipeline object represents a collection of named expressions to be compiled and executed by a Pipeline Engine. The `Pipeline(columns=None, screen=None)` class takes two optional parameters, `columns` and `screen`. The `columns` parameter is a dictionary used to indicate the initial columns to use, and the `screen` parameter is used to set up a screen to exclude unwanted data.
In the code below we will create a `screen` for our pipeline using Zipline's built-in `.AverageDollarVolume()` class. We will use the `.AverageDollarVolume()` class to produce a 60-day Average Dollar Volume of closing prices for every stock in our universe. We then use the `.top(10)` attribute to specify that we want to filter down our universe each day to just the top 10 assets. Therefore, this screen will act as a filter to exclude data from our stock universe each day. The average dollar volume is a good first pass filter to avoid illiquid assets.
```
from zipline.pipeline import Pipeline
from zipline.pipeline.factors import AverageDollarVolume
# Create a screen for our Pipeline
universe = AverageDollarVolume(window_length = 60).top(10)
# Create an empty Pipeline with the given screen
pipeline = Pipeline(screen = universe)
```
In the code above we have named our Pipeline object `pipeline` so that we can identify it later when we make computations. Remember a Pipeline is an object that represents computations we would like to perform every day. A freshly-constructed pipeline, like the one we just created, is empty. This means it doesn’t yet know how to compute anything, and it won’t produce any values if we ask for its outputs. In the sections below, we will see how to provide our Pipeline with expressions to compute.
# Factors and Filters
The `.AverageDollarVolume()` class used above is an example of a factor. In this section we will take a look at two types of computations that can be expressed in a pipeline: **Factors** and **Filters**. In general, factors and filters represent functions that produce a value from an asset in a moment in time, but are distinguished by the types of values they produce. Let's start by looking at factors.
### Factors
In general, a **Factor** is a function from an asset at a particular moment of time to a numerical value. A simple example of a factor is the most recent price of a security. Given a security and a specific moment in time, the most recent price is a number. Another example is the 10-day average trading volume of a security. Factors are most commonly used to assign values to securities which can then be combined with filters or other factors. The fact that you can combine multiple factors makes it easy for you to form new custom factors that can be as complex as you like. For example, constructing a Factor that computes the average of two other Factors can be simply illustrated using the pseudocode below:
```python
f1 = factor1(...)
f2 = factor2(...)
average = (f1 + f2) / 2.0
```
### Filters
In general, a **Filter** is a function from an asset at a particular moment in time to a boolean value (True or False). An example of a filter is a function indicating whether a security's price is below \$5. Given a security and a specific moment in time, this evaluates to either **True** or **False**. Filters are most commonly used for selecting sets of securities to include or exclude from your stock universe. Filters are usually applied using comparison operators, such as <, <=, !=, ==, >, >=.
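For instance, a minimal sketch of the \$5 filter mentioned above, built by comparing a factor against a constant (this assumes the `USEquityPricing` dataset introduced later in this notebook):
```python
from zipline.pipeline.data import USEquityPricing

# The latest closing price is a Factor; comparing it to a scalar produces a Filter
latest_close = USEquityPricing.close.latest
under_5 = latest_close < 5.0
```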
# Viewing the Pipeline as a Diagram
Zipline's Pipeline class comes with the attribute `.show_graph()` that allows you to render the Pipeline as a Directed Acyclic Graph (DAG). This graph is specified using the DOT language, and consequently we need a DOT graph layout program to view the rendered image. In the code below, we will use the Graphviz package to render the graph produced by the `.show_graph()` attribute. Graphviz is an open-source package for drawing graphs specified in DOT language scripts.
```
import graphviz
# Render the pipeline as a DAG
pipeline.show_graph()
```
Right now, our pipeline is empty and only contains a screen. Therefore, when we render our `pipeline`, we only see the diagram of our `screen`:
```python
AverageDollarVolume(window_length = 60).top(10)
```
By default, the `.AverageDollarVolume()` class uses the `USEquityPricing` dataset, containing daily trading prices and volumes, to compute the average dollar volume:
```python
average_dollar_volume = np.nansum(close_price * volume, axis=0) / len(close_price)
```
The top of the diagram reflects the fact that the `.AverageDollarVolume()` class gets its inputs (closing price and volume) from the `USEquityPricing` dataset. The bottom of the diagram shows that the output is determined by the expression `x_0 <= 10`. This expression reflects the fact that we used `.top(10)` as a filter in our `screen`. We refer to each box in the diagram as a Term.
# Datasets and Dataloaders
One of the features of Zipline's Pipeline is that it separates the actual source of the stock data from the abstract description of that dataset. Therefore, Zipline employs **DataSets** and **Loaders** for those datasets. `DataSets` are just abstract collections of sentinel values describing the columns/types for a particular dataset, while a `loader` is an object which, given a request for a particular chunk of a dataset, can actually get the requested data. For example, the loader used for the `USEquityPricing` dataset is the `USEquityPricingLoader` class. The `USEquityPricingLoader` class will delegate the loading of baselines and adjustments to lower-level subsystems that know how to get the pricing data in the default formats used by Zipline (`bcolz` for pricing data, and `SQLite` for split/merger/dividend data). As we saw in the beginning of this notebook, data bundles automatically convert the stock data into `bcolz` and `SQLite` formats. It is important to note that the `USEquityPricingLoader` class can also be used to load daily OHLCV data from other datasets, not just from the `USEquityPricing` dataset. Similarly, it is also possible to write different loaders for the same dataset and use those instead of the default loader. Zipline contains lots of other loaders to allow you to load data from different datasets.
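As a small sketch of what a `DataSet` looks like from the user's side, `USEquityPricing` simply exposes abstract column sentinels whose values are supplied by a loader at run time:
```python
from zipline.pipeline.data import USEquityPricing

# Abstract column sentinels of the USEquityPricing DataSet; a loader fills in the values
USEquityPricing.open
USEquityPricing.high
USEquityPricing.low
USEquityPricing.close
USEquityPricing.volume
```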
In the code below, we will use `USEquityPricingLoader` to create a loader from a `bcolz` equity pricing directory and a `SQLite` adjustments path. Its two arguments determine where the pricing data and the adjustment data are read from. Since we will be using the Quotemedia data bundle, we will pass `bundle_data.equity_daily_bar_reader` and `bundle_data.adjustment_reader` as the pricing and adjustment readers, respectively.
```
from zipline.pipeline.loaders import USEquityPricingLoader
# Set the dataloader
pricing_loader = USEquityPricingLoader(bundle_data.equity_daily_bar_reader, bundle_data.adjustment_reader)
```
# Pipeline Engine
Zipline employs computation engines for executing Pipelines. In the code below we will use Zipline's `SimplePipelineEngine()` class as the engine to execute our pipeline. The `SimplePipelineEngine(get_loader, calendar, asset_finder)` class associates the chosen data loader with the corresponding dataset and a trading calendar. The `get_loader` parameter must be a callable function that is given a loadable term and returns a `PipelineLoader` to use to retrieve the raw data for that term in the pipeline. In our case, we will be using the `pricing_loader` defined above; we therefore create a function called `choose_loader` that returns our `pricing_loader`. The function also checks that the data being requested corresponds to OHLCV data, otherwise it returns an error. The `calendar` parameter must be a `DatetimeIndex` array of dates to consider as trading days when computing a range between a fixed `start_date` and `end_date`. In our case, we will be using the same trading days as those used by the NYSE. We will use Zipline's `get_calendar('NYSE')` function to retrieve the trading days used by the NYSE. We then use the `.all_sessions` attribute to get the `DatetimeIndex` from our `trading_calendar` and pass it to the `calendar` parameter. Finally, the `asset_finder` parameter determines which assets are in the top-level universe of our stock data at any point in time. Since we are using the Quotemedia data bundle, we set this parameter to the `bundle_data.asset_finder`.
```
from zipline.utils.calendars import get_calendar
from zipline.pipeline.data import USEquityPricing
from zipline.pipeline.engine import SimplePipelineEngine
# Define the function for the get_loader parameter
def choose_loader(column):
if column not in USEquityPricing.columns:
raise Exception('Column not in USEquityPricing')
return pricing_loader
# Set the trading calendar
trading_calendar = get_calendar('NYSE')
# Create a Pipeline engine
engine = SimplePipelineEngine(get_loader = choose_loader,
calendar = trading_calendar.all_sessions,
asset_finder = bundle_data.asset_finder)
```
# Running a Pipeline
Once we have chosen our engine we are ready to run or execute our pipeline. We can run our pipeline by using the `.run_pipeline()` attribute of the `SimplePipelineEngine` class. In particular, the `SimplePipelineEngine.run_pipeline(pipeline, start_date, end_date)` implements the following algorithm for executing pipelines:
1. Build a dependency graph of all terms in the `pipeline`. In this step, the graph is sorted topologically to determine the order in which we can compute the terms.
2. Ask our AssetFinder for a “lifetimes matrix”, which should contain, for each date between `start_date` and `end_date`, a boolean value for each known asset indicating whether the asset existed on that date.
3. Compute each term in the dependency order determined in step 1, caching the results in a dictionary so that they can be fed into future terms.
4. For each date, determine the number of assets passing the `pipeline` screen. The sum, $N$, of all these values is the total number of rows in our output Pandas Dataframe, so we pre-allocate an output array of length $N$ for each factor in terms.
5. Fill in the arrays allocated in step 4 by copying computed values from our output cache into the corresponding rows.
6. Stick the values computed in step 5 into a Pandas DataFrame and return it.
In the code below, we run our pipeline for a single day, so our `start_date` and `end_date` will be the same. We then print some information about our `pipeline_output`.
```
import pandas as pd
# Set the start and end dates
start_date = pd.Timestamp('2016-01-05', tz = 'utc')
end_date = pd.Timestamp('2016-01-05', tz = 'utc')
# Run our pipeline for the given start and end dates
pipeline_output = engine.run_pipeline(pipeline, start_date, end_date)
# We print information about the pipeline output
print('The pipeline output has type:', type(pipeline_output), '\n')
# We print whether the pipeline output is a MultiIndex Dataframe
print('Is the pipeline output a MultiIndex Dataframe:', isinstance(pipeline_output.index, pd.core.index.MultiIndex), '\n')
# If the pipeline output is a MultiIndex Dataframe we print the two levels of the index
if isinstance(pipeline_output.index, pd.core.index.MultiIndex):
# We print the index level 0
print('Index Level 0:\n\n', pipeline_output.index.get_level_values(0), '\n')
# We print the index level 1
print('Index Level 1:\n\n', pipeline_output.index.get_level_values(1), '\n')
```
We can see above that the return value of `.run_pipeline()` is a `MultiIndex` Pandas DataFrame containing a row for each asset that passed our pipeline’s screen. We can also see that the 0th level of the index contains the date and the 1st level of the index contains the tickers. In general, the returned Pandas DataFrame will also contain a column for each factor and filter we add to the pipeline using `Pipeline.add()`. At this point we haven't added any factors or filters to our pipeline, consequently, the Pandas Dataframe will have no columns. In the following sections we will see how to add factors and filters to our pipeline.
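Because the output is indexed by `(date, asset)`, ordinary pandas `MultiIndex` selection applies. For example, a small sketch of pulling out all the assets that passed the screen on a single date:
```python
# Select every row for a single date from the MultiIndex output (pandas sketch)
single_day_output = pipeline_output.loc[start_date]
single_day_output.head()
```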
# Get Tickers
We saw in the previous section, that the tickers of the stocks that passed our pipeline’s screen are contained in the 1st level of the index. Therefore, we can use the Pandas `.get_level_values(1).values.tolist()` method to get the tickers of those stocks and save them to a list.
```
# Get the values in index level 1 and save them to a list
universe_tickers = pipeline_output.index.get_level_values(1).values.tolist()
# Display the tickers
universe_tickers
```
# Get Data
Now that we have the tickers for the stocks that passed our pipeline’s screen, we can get the historical stock data for those tickers from our data bundle. In order to get the historical data we need to use Zipline's `DataPortal` class. A `DataPortal` is an interface to all of the data that a Zipline simulation needs. In the code below, we will create a `DataPortal` and `get_pricing` function to get historical stock prices for our tickers.
We have already seen most of the parameters used below when we create the `DataPortal`, so we won't explain them again here. The only new parameter is `first_trading_day`. The `first_trading_day` parameter is a `pd.Timestamp` indicating the first trading day for the simulation. We will set the first trading day to the first trading day in the data bundle. For more information on the `DataPortal` class see the [Zipline documentation](https://www.zipline.io/appendix.html?highlight=dataportal#zipline.data.data_portal.DataPortal)
```
from zipline.data.data_portal import DataPortal
# Create a data portal
data_portal = DataPortal(bundle_data.asset_finder,
trading_calendar = trading_calendar,
first_trading_day = bundle_data.equity_daily_bar_reader.first_trading_day,
equity_daily_reader = bundle_data.equity_daily_bar_reader,
adjustment_reader = bundle_data.adjustment_reader)
```
Now that we have created a `data_portal` we will create a helper function, `get_pricing`, that gets the historical data from the `data_portal` for a given set of `start_date` and `end_date`. The `get_pricing` function takes various parameters:
```python
def get_pricing(data_portal, trading_calendar, assets, start_date, end_date, field='close')
```
The first two parameters, `data_portal` and `trading_calendar`, have already been defined above. The third parameter, `assets`, is a list of tickers. In our case we will use the tickers from the output of our pipeline, namely, `universe_tickers`. The fourth and fifth parameters are strings specifying the `start_date` and `end_date`. The function converts these two strings into Timestamps with a Custom Business Day frequency. The last parameter, `field`, is a string used to indicate which field to return. In our case we want to get the closing price, so we set `field='close'`.
The function returns the historical stock price data using the `.get_history_window()` attribute of the `DataPortal` class. This attribute returns a Pandas Dataframe containing the requested history window with the data fully adjusted. The `bar_count` parameter is an integer indicating the number of days to return. The number of days determines the number of rows of the returned dataframe. Both the `frequency` and `data_frequency` parameters are strings that indicate the frequency of the data to query, *i.e.* whether the data is in `daily` or `minute` intervals.
```
def get_pricing(data_portal, trading_calendar, assets, start_date, end_date, field='close'):
# Set the given start and end dates to Timestamps. The frequency string C is used to
# indicate that a CustomBusinessDay DateOffset is used
end_dt = pd.Timestamp(end_date, tz='UTC', freq='C')
start_dt = pd.Timestamp(start_date, tz='UTC', freq='C')
# Get the locations of the start and end dates
end_loc = trading_calendar.closes.index.get_loc(end_dt)
start_loc = trading_calendar.closes.index.get_loc(start_dt)
# return the historical data for the given window
return data_portal.get_history_window(assets=assets, end_dt=end_dt, bar_count=end_loc - start_loc,
frequency='1d',
field=field,
data_frequency='daily')
# Get the historical data for the given window
historical_data = get_pricing(data_portal, trading_calendar, universe_tickers,
start_date='2011-01-05', end_date='2016-01-05')
# Display the historical data
historical_data
```
# Date Alignment
When the pipeline returns a row labeled with a date such as `2016-01-07`, that row only includes data that was known before the **market open** on `2016-01-07`. As such, if you ask for the latest known values on each day, it will return the closing price from the day before, labeled with the date `2016-01-07`. All factor values are assumed to be computed prior to the open on the labeled day, using only data known before that point in time.
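As a hedged pandas sketch of what this alignment implies in practice, a factor value labeled with date `t` should be evaluated against returns realized *after* `t` rather than the return ending on `t` (here `prices` stands for a dates-by-tickers DataFrame of closing prices, such as `historical_data` above):
```python
# Alignment sketch: `prices` is assumed to be a dates x tickers DataFrame of closes
daily_returns = prices.pct_change()        # return from the close of t-1 to the close of t
forward_returns = daily_returns.shift(-1)  # return realized over the day after t
# A factor value labeled with date t (computed before t's open) should be compared
# against forward_returns.loc[t], not daily_returns.loc[t].
```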
# Adding Factors and Filters
Now that you know how build a pipeline and execute it, in this section we will see how we can add factors and filters to our pipeline. These factors and filters will determine the computations we want our pipeline to compute each day.
We can add both factors and filters to our pipeline using the `.add(column, name)` method of the `Pipeline` class. The `column` parameter represents the factor or filter to add to the pipeline. The `name` parameter is a string that determines the name of the column in the output Pandas Dataframe for that factor or filter. As mentioned earlier, each factor and filter will appear as a column in the output dataframe of our pipeline. Let's start by adding a factor to our pipeline.
### Factors
In the code below, we will use Zipline's built-in `SimpleMovingAverage` factor to create a factor that computes the 15-day mean closing price of securities. We will then add this factor to our pipeline and use `.show_graph()` to see a diagram of our pipeline with the factor added.
```
from zipline.pipeline.factors import SimpleMovingAverage
# Create a factor that computes the 15-day mean closing price of securities
mean_close_15 = SimpleMovingAverage(inputs = [USEquityPricing.close], window_length = 15)
# Add the factor to our pipeline
pipeline.add(mean_close_15, '15 Day MCP')
# Render the pipeline as a DAG
pipeline.show_graph()
```
In the diagram above we can clearly see the factor we have added. Now, we can run our pipeline again and see its output. The pipeline is run in exactly the same way we did before.
```
# Set starting and end dates
start_date = pd.Timestamp('2014-01-06', tz='utc')
end_date = pd.Timestamp('2016-01-05', tz='utc')
# Run our pipeline for the given start and end dates
output = engine.run_pipeline(pipeline, start_date, end_date)
# Display the pipeline output
output.head()
```
We can see that our output dataframe now contains a column with the name `15 Day MCP`, which is the name we gave to our factor before. This output dataframe from our pipeline gives us the 15-day mean closing price of the securities that passed our `screen`.
### Filters
Filters are created and added to the pipeline in the same way as factors. In the code below, we create a filter that returns `True` whenever the 15-day average closing price is above \$100. Remember, a filter produces a `True` or `False` value for each security every day. We will then add this filter to our pipeline and use `.show_graph()` to see a diagram of our pipeline with the filter added.
```
# Create a Filter that returns True whenever the 15-day average closing price is above $100
high_mean = mean_close_15 > 100
# Add the filter to our pipeline
pipeline.add(high_mean, 'High Mean')
# Render the pipeline as a DAG
pipeline.show_graph()
```
In the diagram above we can clearly see the filter we have added. Now, we can run our pipeline again and see its output. The pipeline is run in exactly the same way we did before.
```
# Set starting and end dates
start_date = pd.Timestamp('2014-01-06', tz='utc')
end_date = pd.Timestamp('2016-01-05', tz='utc')
# Run our pipeline for the given start and end dates
output = engine.run_pipeline(pipeline, start_date, end_date)
# Display the pipeline output
output.head()
```
We can see that our output dataframe now contains two columns, one for the filter and one for the factor. The new column has the name `High Mean`, which is the name we gave to our filter before. Notice that the filter column only contains Boolean values, where only the securities with a 15-day average closing price above \$100 have `True` values.
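Since the `High Mean` column is Boolean, it can be used directly as a pandas mask; a small sketch:
```python
# Keep only the (date, asset) rows where the 'High Mean' filter evaluated to True
above_100 = output[output['High Mean']]
above_100.head()
```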
# Network waterfall generation
```
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
from math import sqrt
import re, bisect
from colorama import Fore
```
## Select input file and experiment ID (~10 experiments per file)
- ./startup : Application startup
- ./startup_and_click : Application startup + click (single user interaction)
- ./multiclick : Application startup + clicks (multiple user interactions)
### (*critical flows* and performance metrics available for *startup* and *startup_and_click* datasets)
```
##example (YouTube)
FNAME = "./startup/com.google.android.youtube_bursts.txt"
EXPID = 1
```
## Load experiment data and plot waterfall
```
##load experiment data
d_exps = load_experiments(FNAME)
df = d_exps[EXPID]
print_head(FNAME)
##plot waterfall
plot_waterfall(df, fvolume=None, title=FNAME, fname_png="output_waterfall.png")
```
## A small library for plotting waterfalls, based on matplotlib
```
def load_experiments(fname):
df = pd.read_csv(fname, sep = ' ', low_memory=False)
## split the single file in multiple dataframes based on experiment id
d = {}
for expid in df['expId'].unique():
df_tmp = df[df['expId'] == expid].copy()
df_tmp = df_tmp.sort_values(by='t_start')
cat = pd.Categorical(df_tmp['flow'], ordered=False)
cat = cat.reorder_categories(df_tmp['flow'].unique())
df_tmp.loc[:, 'flowid'] = cat.codes
d[expid] = df_tmp
return d
def _get_reference_times(df):
tdt = df['TDT'].values[0]
aft = df['AFT'].values[0]
x_max = 0.5+max(df['t_end'].max(), aft, tdt)
return {
'tdt' : tdt,
'aft' : aft,
'x_max' : x_max}
def _get_max_time(df):
x_max = 0.5+df['t_end'].max()
return {'x_max' : x_max}
def _get_lines_burst(df, x_lim=None):
lines_burst = []
lines_burst_widths = []
for flowid, x_start, x_end, burst_bytes in df[['flowid', 't_start', 't_end', 'KB']].values:
if x_lim is None:
lines_burst.append([(x_start, flowid), (x_end, flowid)])
width = min(13, 2*sqrt(burst_bytes))
width = max(1, width)
lines_burst_widths.append(width)
else:
el = [(x_lim[0], flowid), (x_lim[1], flowid)]
if el not in lines_burst:
lines_burst.append(el)
return lines_burst, lines_burst_widths
def _plot_aft_tdt_reference(ax, tdt, aft, no_legend=False):
tdt_label = "TDT = " + str(tdt)[0:5]
aft_label = "AFT = " + str(aft)[0:5]
if no_legend:
tdt_label = None
aft_label = None
ax.axvline(x=tdt, color="green", label=tdt_label, linewidth=2) #, ax = ax)
ax.axvline(x=aft, color="purple", label=aft_label, linewidth=2) #, ax = ax)
lgd = ax.legend(bbox_to_anchor=[1, 1])
def _plot_bursts(ax, df, lines_flow,
lines_burst=None,
lines_burst_critical=None,
flow_kwargs={},
burst_kwargs={},
burst_critical_kwargs={},
title=None):
## flow lines
ax.add_collection(mpl.collections.LineCollection(lines_flow, **flow_kwargs))
## burst lines
if lines_burst is not None:
ax.add_collection(mpl.collections.LineCollection(lines_burst, **burst_kwargs))
if lines_burst_critical is not None:
ax.add_collection(mpl.collections.LineCollection(lines_burst_critical, **burst_critical_kwargs))
if 'AFT' in df and 'TDT' in df:
d_times = _get_reference_times(df)
## vertical reference lines
_plot_aft_tdt_reference(ax, tdt=d_times['tdt'], aft=d_times['aft'])
else:
d_times = _get_max_time(df)
## axis lim
x_max = d_times['x_max']
y_max = len(lines_flow)+1
ax.set_ylim((-1, y_max))
ax.set_xlim((0, x_max))
chess_lines = [[(0, y),(x_max, y)] for y in range(0, y_max, 2)]
ax.add_collection(mpl.collections.LineCollection(chess_lines, linewidths=10, color='gray', alpha=0.1))
## ticks
ax.yaxis.set_major_locator(mpl.ticker.MultipleLocator(1))
ax.tick_params(axis='y', length=0)
## y-labels (clipping the long ones)
labels = df[['flow', 'flowid']].sort_values(by='flowid').drop_duplicates()['flow'].values
ax.set_yticklabels(['',''] + list(labels))
## remove borders
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_visible(False)
ax.spines['left'].set_visible(False)
## grid
ax.grid(axis='x', alpha=0.3)
ax.legend().remove()
def _plot_volume(ax, df, title=None, fvolume=None):
## get times
if 'AFT' in df and 'TDT' in df:
d_times = _get_reference_times(df)
else:
d_times = _get_max_time(df)
if fvolume!=None:
x=[]
y=[]
for line in open(fvolume):
x.append(float(line[0:-1].split(' ')[0]))
y.append(float(line[0:-1].split(' ')[1]))
ax.step(x, y, color='gray', where='post', label='')
else:
# get volume cumulate
df_tmp = df.copy()
df_tmp = df_tmp.sort_values(by='t_end')
df_tmp.loc[:, 'KB_cdf'] = df_tmp['KB'].cumsum() / df_tmp['KB'].sum()
ax.step(x=df_tmp['t_end'], y=df_tmp['KB_cdf'], color='gray', where='post', label='')
## remove border
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_visible(False)
ax.spines['left'].set_visible(False)
if 'AFT' in df and 'TDT' in df:
_plot_aft_tdt_reference(ax, tdt=d_times['tdt'], aft=d_times['aft'], no_legend=False)
ax.tick_params(labeltop=True, labelbottom=False, length=0.1, axis='x', direction='out')
ax.set_xlim((0, d_times['x_max']))
## grid
ax.grid(axis='x', alpha=0.3)
ax.yaxis.set_major_locator(mpl.ticker.MultipleLocator(0.5))
ax.set_ylabel('CDF Volume')
## title
if title is not None:
ax.set_title(title, pad=20)
def print_head(fname):
print("TDT = Transport Delivery Time\nAFT = Above-the-Fold Time")
if 'multiclick' not in fname:
print(Fore.RED + 'Critical flows')
print(Fore.BLUE + 'Non-Critical flows')
def plot_waterfall(df, fvolume=None, title=None, fname_png=None): #, ax=None):
## first start and end of each flow
df_tmp = df.groupby('flowid').agg({'t_start':'min', 't_end':'max'})
## ..and create lines
lines_flow = [ [(x_start, y), (x_end, y)] for y, (x_start, x_end) in zip(df_tmp.index, df_tmp.values) ]
## lines for each burst
lines_burst, lines_burst_widths = _get_lines_burst(df[df['KB'] > 0])
## lines for each critical burst (if any info on critical domains in the input file)
if 'critical' in df:
lines_burst_critical, lines_burst_widths_critical = _get_lines_burst(df[(df['critical']) & (df['KB'] > 0)])
else:
lines_burst_critical, lines_burst_widths_critical = [], []
######################
fig_height = max(5, 0.25*len(lines_flow))
fig = plt.figure(figsize=(8, fig_height))
gs = mpl.gridspec.GridSpec(nrows=2, ncols=1, hspace=0.1, height_ratios=[1,3])
ax0 = plt.subplot(gs[0])
ax1 = plt.subplot(gs[1])
_plot_volume(ax0, df, title, fvolume)
_plot_bursts(ax1, df, lines_flow, lines_burst, lines_burst_critical,
flow_kwargs={
'linewidths':2,
'color': 'gray',
'linestyle' : (0, (1, 1)),
'alpha':0.7},
burst_kwargs={
'linewidths' : lines_burst_widths,
'color': 'blue'},
burst_critical_kwargs={
'linewidths':lines_burst_widths_critical,
'color': 'red'})
## add click timestamps (if any)
if 'clicks' in df:
for click_t in df['clicks'].values[0][1:-1].split(', '):
if float(click_t)<40 and float(click_t)>35:
continue
plt.axvline(x=float(click_t), color="grey", linestyle="--")
if fname_png is not None:
plt.savefig(fname_png, bbox_inches='tight')
```
# Machine Learning for Telecom with Naive Bayes
# Introduction
Machine Learning for CallDisconnectReason is a notebook which demonstrates exploration of the dataset and CallDisconnectReason classification with the Spark ML Naive Bayes algorithm.
```
from pyspark.sql.types import *
from pyspark.sql import SparkSession
from sagemaker import get_execution_role
import sagemaker_pyspark
role = get_execution_role()
# Configure Spark to use the SageMaker Spark dependency jars
jars = sagemaker_pyspark.classpath_jars()
classpath = ":".join(sagemaker_pyspark.classpath_jars())
spark = SparkSession.builder.config("spark.driver.extraClassPath", classpath)\
.master("local[*]").getOrCreate()
```
S3 Select enables applications to retrieve only a subset of data from an object by using simple SQL expressions. By using S3 Select to retrieve only the data you need, you can achieve drastic performance increases – in many cases as much as a 400% improvement.
- _We first read a parquet compressed format of CDR dataset using s3select which has already been processed by Glue._
```
cdr_start_loc = "<%CDRStartFile%>"
cdr_stop_loc = "<%CDRStopFile%>"
cdr_start_sample_loc = "<%CDRStartSampleFile%>"
cdr_stop_sample_loc = "<%CDRStopSampleFile%>"
df = spark.read.format("s3select").parquet(cdr_stop_sample_loc)
df.createOrReplaceTempView("cdr")
durationDF = spark.sql("SELECT _c13 as CallServiceDuration FROM cdr where _c0 = 'STOP'")
durationDF.count()
```
# Exploration of Data
- _We see how we can explore and visualize the dataset used for processing. Here we create a bar chart representation of CallServiceDuration from CDR dataset._
```
import matplotlib.pyplot as plt
durationpd = durationDF.toPandas().astype(int)
durationpd.plot(kind='bar',stacked=True,width=1)
```
- _We can represent the data and visualize with a box plot. The box extends from the lower to upper quartile values of the data, with a line at the median._
```
color = dict(boxes='DarkGreen', whiskers='DarkOrange',
medians='DarkBlue', caps='Gray')
durationpd.plot.box(color=color, sym='r+')
from pyspark.sql.functions import col
durationDF = durationDF.withColumn("CallServiceDuration", col("CallServiceDuration").cast(DoubleType()))
```
- _We can represent the data and visualize the data with histograms partitioned in different bins._
```
import matplotlib.pyplot as plt
bins, counts = durationDF.select('CallServiceDuration').rdd.flatMap(lambda x: x).histogram(durationDF.count())
plt.hist(bins[:-1], bins=bins, weights=counts,color=['green'])
sqlDF = spark.sql("SELECT _c2 as Accounting_ID, _c19 as Calling_Number,_c20 as Called_Number, _c14 as CallDisconnectReason FROM cdr where _c0 = 'STOP'")
sqlDF.show()
```
# Featurization
```
from pyspark.ml.feature import StringIndexer
accountIndexer = StringIndexer(inputCol="Accounting_ID", outputCol="AccountingIDIndex")
accountIndexer.setHandleInvalid("skip")
tempdf1 = accountIndexer.fit(sqlDF).transform(sqlDF)
callingNumberIndexer = StringIndexer(inputCol="Calling_Number", outputCol="Calling_NumberIndex")
callingNumberIndexer.setHandleInvalid("skip")
tempdf2 = callingNumberIndexer.fit(tempdf1).transform(tempdf1)
calledNumberIndexer = StringIndexer(inputCol="Called_Number", outputCol="Called_NumberIndex")
calledNumberIndexer.setHandleInvalid("skip")
tempdf3 = calledNumberIndexer.fit(tempdf2).transform(tempdf2)
from pyspark.ml.feature import StringIndexer
# Convert target into numerical categories
labelIndexer = StringIndexer(inputCol="CallDisconnectReason", outputCol="label")
labelIndexer.setHandleInvalid("skip")
from pyspark.sql.functions import rand
trainingFraction = 0.75;
testingFraction = (1-trainingFraction);
seed = 1234;
trainData, testData = tempdf3.randomSplit([trainingFraction, testingFraction], seed=seed);
# CACHE TRAIN AND TEST DATA
trainData.cache()
testData.cache()
trainData.count(),testData.count()
```
# Analyzing the label distribution
- We analyze the distribution of our target labels using a histogram where 16 represents Normal_Call_Clearing.
```
import matplotlib.pyplot as plt
negcount = trainData.filter("CallDisconnectReason != 16").count()
poscount = trainData.filter("CallDisconnectReason == 16").count()
negfrac = 100*float(negcount)/float(negcount+poscount)
posfrac = 100*float(poscount)/float(poscount+negcount)
ind = [0.0,1.0]
frac = [negfrac,posfrac]
width = 0.35
plt.title('Label Distribution')
plt.bar(ind, frac, width, color='r')
plt.xlabel("CallDisconnectReason")
plt.ylabel('Percentage share')
plt.xticks(ind,['0.0','1.0'])
plt.show()
import matplotlib.pyplot as plt
negcount = testData.filter("CallDisconnectReason != 16").count()
poscount = testData.filter("CallDisconnectReason == 16").count()
negfrac = 100*float(negcount)/float(negcount+poscount)
posfrac = 100*float(poscount)/float(poscount+negcount)
ind = [0.0,1.0]
frac = [negfrac,posfrac]
width = 0.35
plt.title('Label Distribution')
plt.bar(ind, frac, width, color='r')
plt.xlabel("CallDisconnectReason")
plt.ylabel('Percentage share')
plt.xticks(ind,['0.0','1.0'])
plt.show()
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.feature import VectorAssembler
vecAssembler = VectorAssembler(inputCols=["AccountingIDIndex","Calling_NumberIndex", "Called_NumberIndex"], outputCol="features")
```
__Spark ML Naive Bayes__:
Naive Bayes is a simple multiclass classification algorithm with the assumption of independence between every pair of features. Naive Bayes can be trained very efficiently. Within a single pass over the training data, it computes the conditional probability distribution of each feature given the label, and then it applies Bayes' theorem to compute the conditional probability distribution of the label given an observation and uses it for prediction.
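Written out, the multinomial Naive Bayes prediction for a feature vector $(x_1, \dots, x_n)$ combines the class prior with the per-feature conditional probabilities and picks the most probable class:
$$ P(y \mid x_1, \dots, x_n) \propto P(y) \prod_{i=1}^{n} P(x_i \mid y) $$
$$ \hat{y} = \underset{y}{\operatorname{arg\,max}} \; P(y) \prod_{i=1}^{n} P(x_i \mid y) $$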
- _We use Spark ML Naive Bayes Algorithm and spark Pipeline to train the data set._
```
from pyspark.ml.classification import NaiveBayes
from pyspark.ml.clustering import KMeans
from pyspark.ml import Pipeline
# Train a NaiveBayes model
nb = NaiveBayes(smoothing=1.0, modelType="multinomial")
# Chain labelIndexer, vecAssembler and the NaiveBayes model in a Pipeline
pipeline = Pipeline(stages=[labelIndexer,vecAssembler, nb])
# Run stages in pipeline and train model
model = pipeline.fit(trainData)
# Run inference on the test data and show some results
predictions = model.transform(testData)
predictions.printSchema()
predictions.show()
predictiondf = predictions.select("label", "prediction", "probability")
pddf_pred = predictions.toPandas()
pddf_pred
```
- _We use Scatter plot for visualization and represent the dataset._
```
import matplotlib.pyplot as plt
import numpy as np
# Set the size of the plot
plt.figure(figsize=(14,7))
# Create a colormap
colormap = np.array(['red', 'lime', 'black'])
# Plot CDR
plt.subplot(1, 2, 1)
plt.scatter(pddf_pred.Calling_NumberIndex, pddf_pred.Called_NumberIndex, c=pddf_pred.prediction)
plt.title('CallDetailRecord')
plt.show()
```
# Evaluation
```
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
evaluator = MulticlassClassificationEvaluator(labelCol="label", predictionCol="prediction",
metricName="accuracy")
accuracy = evaluator.evaluate(predictiondf)
print(accuracy)
```
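The same evaluator class can report other multiclass metrics by changing `metricName`; for example, a short sketch of the weighted F1 score on the same predictions:
```python
from pyspark.ml.evaluation import MulticlassClassificationEvaluator

# Evaluate the weighted F1 score using the same predictions dataframe
f1_evaluator = MulticlassClassificationEvaluator(labelCol="label", predictionCol="prediction",
                                                 metricName="f1")
print(f1_evaluator.evaluate(predictiondf))
```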
# Confusion Matrix
```
from sklearn.metrics import confusion_matrix
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sn
outdataframe = predictiondf.select("prediction", "label")
pandadf = outdataframe.toPandas()
npmat = pandadf.values
# Column 0 of the selected dataframe holds the prediction and column 1 the true label
labels = npmat[:,1]
predicted_label = npmat[:,0]
cnf_matrix = confusion_matrix(labels, predicted_label)
import numpy as np
def plot_confusion_matrix(cm,
target_names,
title='Confusion matrix',
cmap=None,
normalize=True):
import matplotlib.pyplot as plt
import numpy as np
import itertools
accuracy = np.trace(cm) / float(np.sum(cm))
misclass = 1 - accuracy
if cmap is None:
cmap = plt.get_cmap('Blues')
plt.figure(figsize=(8, 6))
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
if target_names is not None:
tick_marks = np.arange(len(target_names))
plt.xticks(tick_marks, target_names, rotation=45)
plt.yticks(tick_marks, target_names)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
thresh = cm.max() / 1.5 if normalize else cm.max() / 2
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
if normalize:
plt.text(j, i, "{:0.4f}".format(cm[i, j]),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
else:
plt.text(j, i, "{:,}".format(cm[i, j]),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('label')
plt.xlabel('Predicted \naccuracy={:0.4f}; misclass={:0.4f}'.format(accuracy, misclass))
plt.show()
plot_confusion_matrix(cnf_matrix,
normalize = False,
target_names = ['Positive', 'Negative'],
title = "Confusion Matrix")
from pyspark.mllib.evaluation import MulticlassMetrics
# Create (prediction, label) pairs
predictionAndLabel = predictiondf.select("prediction", "label").rdd
# Generate confusion matrix
metrics = MulticlassMetrics(predictionAndLabel)
print(metrics.confusionMatrix())
```
# Cross Validation
```
from pyspark.ml.tuning import ParamGridBuilder, CrossValidator
# Create ParamGrid and Evaluator for Cross Validation
paramGrid = ParamGridBuilder().addGrid(nb.smoothing, [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]).build()
cvEvaluator = MulticlassClassificationEvaluator(metricName="accuracy")
# Run Cross-validation
cv = CrossValidator(estimator=pipeline, estimatorParamMaps=paramGrid, evaluator=cvEvaluator)
cvModel = cv.fit(trainData)
# Make predictions on testData. cvModel uses the bestModel.
cvPredictions = cvModel.transform(testData)
cvPredictions.select("label", "prediction", "probability").show()
# Evaluate bestModel found from Cross Validation
evaluator.evaluate(cvPredictions)
```
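A hedged sketch for inspecting the search: `CrossValidatorModel` stores the average metric for each parameter combination in `avgMetrics`, in the same order as the parameter grid, so the smoothing values can be compared directly:
```python
# Average cross-validated accuracy for each smoothing value in the grid
for params, metric in zip(paramGrid, cvModel.avgMetrics):
    print({p.name: v for p, v in params.items()}, '->', metric)
```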
# Recommending products with RetailRocket event logs
This IPython notebook illustrates the usage of the [ctpfrec](https://github.com/david-cortes/ctpfrec/) Python package for _Collaborative Topic Poisson Factorization_ in recommender systems based on sparse count data using the [RetailRocket](https://www.kaggle.com/retailrocket/ecommerce-dataset) dataset, consisting of event logs (view, add to cart, purchase) from an online catalog of products plus anonymized text descriptions of items.
Collaborative Topic Poisson Factorization is a probabilistic model that tries to jointly factorize the user-item interaction matrix along with item-word text descriptions (as bag-of-words) of the items by the product of lower dimensional matrices. The package can also extend this model to add user attributes in the same format as the items’.
Compared to competing methods such as BPR (Bayesian Personalized Ranking) or weighted-implicit NMF (non-negative matrix factorization of the non-probabilistic type that uses squared loss), it only requires iterating over the data for which an interaction was observed and not over data for which no interaction was observed (i.e. it doesn’t iterate over items not clicked by a user), thus being more scalable, and at the same time producing better results when fit to sparse count data (in general). Same for the word counts of items.
The implementation here is based on the paper _Content-based recommendations with poisson factorization (Gopalan, P.K., Charlin, L. and Blei, D., 2014)_.
For a similar package for explicit feedback data see also [cmfrec](https://github.com/david-cortes/cmfrec/). For Poisson factorization without side information see [hpfrec](https://github.com/david-cortes/hpfrec/).
**Small note: if the TOC here is not clickable or the math symbols don't show properly, try visualizing this same notebook from nbviewer following [this link](http://nbviewer.jupyter.org/github/david-cortes/ctpfrec/blob/master/example/ctpfrec_retailrocket.ipynb).**
** *
## Sections
* [1. Model description](#p1)
* [2. Loading and processing the dataset](#p2)
* [3. Fitting the model](#p3)
* [4. Common sense checks](#p4)
* [5. Comparison to model without item information](#p5)
* [6. Making recommendations](#p6)
* [7. References](#p7)
** *
<a id="p1"></a>
## 1. Model description
The model consists in producing a low-rank non-negative matrix factorization of the item-word matrix (a.k.a. bag-of-words, a matrix where each row represents an item and each column a word, with entries containing the number of times each word appeared in an item’s text, ideally with some pre-processing on the words such as stemming or lemmatization) by the product of two lower-rank matrices
$$ W_{iw} \approx \Theta_{ik} \beta_{wk}^T $$
along with another low-rank matrix factorization of the user-item activity matrix (a matrix where each entry corresponds to how many times each user interacted with each item) that shares the same item-factor matrix above plus an offset based on user activity and not based on items’ words
$$ Y_{ui} \approx \eta_{uk} (\Theta_{ik} + \epsilon_{ik})^T $$
These matrices are assumed to come from a generative process as follows:
* Items:
$$ \beta_{wk} \sim Gamma(a,b) $$
$$ \Theta_{ik} \sim Gamma(c,d)$$
$$ W_{iw} \sim Poisson(\Theta_{ik} \beta_{wk}^T) $$
_(Where $W$ is the item-word count matrix, $k$ is the number of latent factors, $i$ is the number of items, $w$ is the number of words)_
* User-Item interactions
$$ \eta_{uk} \sim Gamma(e,f) $$
$$ \epsilon_{ik} \sim Gamma(g,h) $$
$$ Y_{ui} \sim Poisson(\eta_{uk} (\Theta_{ik} + \epsilon_{ik})^T) $$
_(Where $u$ is the number of users, $Y$ is the user-item interaction matrix)_
The model is fit using mean-field variational inference with coordinate ascent. For more details see the paper in the references.
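As a shape-only NumPy sketch of the factorizations above (this illustrates the dimensions of the matrices, not the variational fitting procedure; the Gamma parameters are arbitrary placeholders rather than the model's priors):
```python
import numpy as np

# Shape-only sketch of W ~ Poisson(Theta beta^T) and Y ~ Poisson(eta (Theta + epsilon)^T)
n_users, n_items, n_words, k = 1000, 500, 2000, 70
eta     = np.random.gamma(0.3, 1.0, size=(n_users, k))  # user preferences
theta   = np.random.gamma(0.3, 1.0, size=(n_items, k))  # item topics, shared with the word model
epsilon = np.random.gamma(0.3, 1.0, size=(n_items, k))  # item offsets not explained by the text
beta    = np.random.gamma(0.3, 1.0, size=(n_words, k))  # word loadings

expected_W = theta @ beta.T              # (items x words) expected word counts
expected_Y = eta @ (theta + epsilon).T   # (users x items) expected interaction counts
```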
** *
<a id="p2"></a>
## 2. Loading and processing the data
Reading and concatenating the data. First the event logs:
```
import numpy as np, pandas as pd
events = pd.read_csv("events.csv")
events.head()
events.event.value_counts()
```
In order to put all user-item interactions in one scale, I will arbitrarily assign values as follows:
* View: +1
* Add to basket: +3
* Purchase: +3
Thus, if a user clicks an item, that `(user, item)` pair will have `value=1`, if she later adds it to cart and purchases it, will have `value=7` (plus any other views of the same item), and so on.
The reasoning behind this scale is that the distributions of counts and sums of counts still seem to follow a nice exponential distribution with these values, but different values might give better results in terms of the models fit to them.
```
%matplotlib inline
equiv = {
'view':1,
'addtocart':3,
'transaction':3
}
events['count']=events.event.map(equiv)
events.groupby('visitorid')['count'].sum().value_counts().hist(bins=200)
events = events.groupby(['visitorid','itemid'])['count'].sum().to_frame().reset_index()
events.rename(columns={'visitorid':'UserId', 'itemid':'ItemId', 'count':'Count'}, inplace=True)
events.head()
```
Now creating a train and test split. For simplicity purposes and in order to be able to make a fair comparison with a model that doesn't use item descriptions, I will try to only take users that had >= 3 items in the training data, and items that had >= 3 users.
Given the lack of user attributes and the fact that it will be compared later to a model without side information, the test set will only have users from the training data, but it's also possible to use user attributes if they follow the same format as the items', in which case the model can also recommend items to new users.
In order to compare it later to a model without items' text, I will also filter out the test set to have only items that were in the training set. **This is however not a model limitation, as it can also recommend items that have descriptions but no user interactions**.
```
from sklearn.model_selection import train_test_split
events_train, events_test = train_test_split(events, test_size=.2, random_state=1)
del events
## In order to find users and items with at least 3 interactions each,
## it's easier and faster to use a simple heuristic that first filters according to one criteria,
## then, according to the other, and repeats.
## Finding a real subset of the data in which each item has strictly >= 3 users,
## and each user has strictly >= 3 items, is a harder graph partitioning or optimization
## problem. For a similar example of finding such subsets see also:
## http://nbviewer.ipython.org/github/david-cortes/datascienceprojects/blob/master/optimization/dataset_splitting.ipynb
users_filter_out = events_train.groupby('UserId')['ItemId'].agg(lambda x: len(tuple(x)))
users_filter_out = np.array(users_filter_out.index[users_filter_out < 3])
items_filter_out = events_train.loc[~np.in1d(events_train.UserId, users_filter_out)].groupby('ItemId')['UserId'].agg(lambda x: len(tuple(x)))
items_filter_out = np.array(items_filter_out.index[items_filter_out < 3])
users_filter_out = events_train.loc[~np.in1d(events_train.ItemId, items_filter_out)].groupby('UserId')['ItemId'].agg(lambda x: len(tuple(x)))
users_filter_out = np.array(users_filter_out.index[users_filter_out < 3])
events_train = events_train.loc[~np.in1d(events_train.UserId.values, users_filter_out)]
events_train = events_train.loc[~np.in1d(events_train.ItemId.values, items_filter_out)]
events_test = events_test.loc[np.in1d(events_test.UserId.values, events_train.UserId.values)]
events_test = events_test.loc[np.in1d(events_test.ItemId.values, events_train.ItemId.values)]
print(events_train.shape)
print(events_test.shape)
```
Now processing the text descriptions of the items:
```
iteminfo = pd.read_csv("item_properties_part1.csv")
iteminfo2 = pd.read_csv("item_properties_part2.csv")
iteminfo = iteminfo.append(iteminfo2, ignore_index=True)
iteminfo.head()
```
The items' descriptions contain many fields and have a mixture of words and numbers. The numeric variables, as per the documentation, are prefixed with an "n" and have three-digit decimal precision - I will exclude them here since this model is insensitive to numeric attributes such as price. The words are already lemmatized, and since we only have their IDs, it's not possible to do any other pre-processing on them.
Although the descriptions don't say anything about it, looking at the contents and the lengths of the different fields, here I will assume that the field $283$ is the product title and the field $888$ is the product description. I will just concatenate them to obtain an overall item text, but there might be better ways of doing this (such as having different IDs for the same word when it appears in the title or the body, or multiplying those in the title by some number, etc.)
As the descriptions vary over time, I will only take the most recent version for each item:
```
iteminfo = iteminfo.loc[iteminfo.property.isin(('888','283'))]
iteminfo = iteminfo.loc[iteminfo.groupby(['itemid','property'])['timestamp'].idxmax()]
iteminfo.reset_index(drop=True, inplace=True)
iteminfo.head()
```
**Note that for simplicity I am completely ignoring the categories (these are easily incorporated e.g. by adding a count of +1 for each category to which an item belongs) and important factors such as the price. I am also completely ignoring all the other fields.**
```
from sklearn.feature_extraction.text import CountVectorizer
from scipy.sparse import coo_matrix
import re
def concat_fields(x):
x = list(x)
out = x[0]
for i in x[1:]:
out += " " + i
return out
class NonNumberTokenizer(object):
def __init__(self):
pass
def __call__(self, txt):
return [i for i in txt.split(" ") if bool(re.search("^\d", i))]
iteminfo = iteminfo.groupby('itemid')['value'].agg(lambda x: concat_fields(x))
t = CountVectorizer(tokenizer=NonNumberTokenizer(), stop_words=None,
dtype=np.int32, strip_accents=None, lowercase=False)
bag_of_words = t.fit_transform(iteminfo)
bag_of_words = coo_matrix(bag_of_words)
bag_of_words = pd.DataFrame({
'ItemId' : iteminfo.index[bag_of_words.row],
'WordId' : bag_of_words.col,
'Count' : bag_of_words.data
})
del iteminfo
bag_of_words.head()
```
In this case, I will not filter it out by only items that were in the training set, as other items can still be used to get better latent factors.
** *
<a id="p3"></a>
## 3. Fitting the model
Fitting the model - note that I'm using some enhancements (passed as arguments to the class constructor) over the original version in the paper:
* Standardizing item counts so as not to favor items with longer descriptions.
* Initializing $\Theta$ and $\beta$ through hierarchical Poisson factorization instead of latent Dirichlet allocation.
* Using a small step size for the updates of the parameters obtained from hierarchical Poisson factorization at the beginning, which then grows to one with increasing iteration numbers (informally, this somewhat "preserves" these fits while the user parameters are adjusted to the already-fit item parameters - then, once the user parameters are defined in terms of them, the item and word parameters start changing too).
I'll be also fitting two slightly different models: one that takes (and can make recommendations for) all the items for which there are either descriptions or user clicks, and another that uses all the items for which there are descriptions to initialize the item-related parameters but discards the ones without clicks (can only make recommendations for items that users have clicked).
For more information about the parameters and what they do, see the online documentation:
[http://ctpfrec.readthedocs.io](http://ctpfrec.readthedocs.io)
```
print(events_train.shape)
print(events_test.shape)
print(bag_of_words.shape)
%%time
from ctpfrec import CTPF
recommender_all_items = CTPF(k=70, step_size=lambda x: 1-1/np.sqrt(x+1),
standardize_items=True, initialize_hpf=True, reindex=True,
missing_items='include', allow_inconsistent_math=True, random_seed=1)
recommender_all_items.fit(counts_df=events_train.copy(), words_df=bag_of_words.copy())
%%time
recommender_clicked_items_only = CTPF(k=70, step_size=lambda x: 1-1/np.sqrt(x+1),
standardize_items=True, initialize_hpf=True, reindex=True,
missing_items='exclude', allow_inconsistent_math=True, random_seed=1)
recommender_clicked_items_only.fit(counts_df=events_train.copy(), words_df=bag_of_words.copy())
```
Most of the time here was spent in fitting the model to items that no user in the training set had clicked. If using instead a random initialization, it would have taken a lot less time to fit this model (there would be only a fraction of the items - see above time spent in each procedure), but the results are slightly worse.
_Disclaimer: this notebook was run on a Google cloud server with Skylake CPU using 8 cores, and memory usage tops at around 6GB of RAM for the first model (including all the objects loaded before). In a desktop computer, it would take a bit longer to fit._
** *
<a id="p4"></a>
## 4. Common sense checks
There are many different metrics to evaluate recommendation quality in implicit datasets, but all of them have their drawbacks. The idea of this notebook is to illustrate the package usage and not to introduce and compare evaluation metrics, so I will only perform some common sense checks on the test data.
For implementations of evaluation metrics for implicit recommendations see other packages such as [lightFM](https://github.com/lyst/lightfm).
As some common sense checks, the predictions should:
* Be higher for this non-zero hold-out sample than for random items.
* Produce a good discrimination between random items and those in the hold-out sample (very related to the first point).
* Be correlated with the number of events per user-item pair in the hold-out sample.
* Follow an exponential distribution rather than a normal or some other symmetric distribution.
Here I'll check these four conditions:
#### Model with all items
```
events_test['Predicted'] = recommender_all_items.predict(user=events_test.UserId, item=events_test.ItemId)
events_test['RandomItem'] = np.random.choice(events_train.ItemId.unique(), size=events_test.shape[0])
events_test['PredictedRandom'] = recommender_all_items.predict(user=events_test.UserId,
item=events_test.RandomItem)
print("Average prediction for combinations in test set: ", events_test.Predicted.mean())
print("Average prediction for random combinations: ", events_test.PredictedRandom.mean())
from sklearn.metrics import roc_auc_score
was_clicked = np.r_[np.ones(events_test.shape[0]), np.zeros(events_test.shape[0])]
score_model = np.r_[events_test.Predicted.values, events_test.PredictedRandom.values]
roc_auc_score(was_clicked[~np.isnan(score_model)], score_model[~np.isnan(score_model)])
np.corrcoef(events_test.Count[~events_test.Predicted.isnull()], events_test.Predicted[~events_test.Predicted.isnull()])[0,1]
import matplotlib.pyplot as plt
%matplotlib inline
_ = plt.hist(events_test.Predicted, bins=200)
plt.xlim(0,5)
plt.show()
```
#### Model with clicked items only
```
events_test['Predicted'] = recommender_clicked_items_only.predict(user=events_test.UserId, item=events_test.ItemId)
events_test['PredictedRandom'] = recommender_clicked_items_only.predict(user=events_test.UserId,
item=events_test.RandomItem)
print("Average prediction for combinations in test set: ", events_test.Predicted.mean())
print("Average prediction for random combinations: ", events_test.PredictedRandom.mean())
was_clicked = np.r_[np.ones(events_test.shape[0]), np.zeros(events_test.shape[0])]
score_model = np.r_[events_test.Predicted.values, events_test.PredictedRandom.values]
roc_auc_score(was_clicked, score_model)
np.corrcoef(events_test.Count, events_test.Predicted)[0,1]
_ = plt.hist(events_test.Predicted, bins=200)
plt.xlim(0,5)
plt.show()
```
** *
<a id="p5"></a>
## 5. Comparison to model without item information
A natural benchmark for this model is a Poisson factorization model without any item side information - here I'll do the comparison against a _Hierarchical Poisson factorization_ model using the same metrics as above:
```
%%time
from hpfrec import HPF
recommender_no_sideinfo = HPF(k=70)
recommender_no_sideinfo.fit(events_train.copy())
events_test_comp = events_test.copy()
events_test_comp['Predicted'] = recommender_no_sideinfo.predict(user=events_test_comp.UserId, item=events_test_comp.ItemId)
events_test_comp['PredictedRandom'] = recommender_no_sideinfo.predict(user=events_test_comp.UserId,
item=events_test_comp.RandomItem)
print("Average prediction for combinations in test set: ", events_test_comp.Predicted.mean())
print("Average prediction for random combinations: ", events_test_comp.PredictedRandom.mean())
was_clicked = np.r_[np.ones(events_test_comp.shape[0]), np.zeros(events_test_comp.shape[0])]
score_model = np.r_[events_test_comp.Predicted.values, events_test_comp.PredictedRandom.values]
roc_auc_score(was_clicked, score_model)
np.corrcoef(events_test_comp.Count, events_test_comp.Predicted)[0,1]
```
As can be seen, adding the side information and widening the catalog to include more items using only their text descriptions (no clicks) results in an improvement over all 3 metrics, especially the correlation with the number of clicks.
More important than that, however, is its ability to make recommendations from a far wider catalog of items, which in practice can make a much larger difference in recommendation quality than an improvement in typical offline metrics.
** *
<a id="p6"></a>
## 6. Making recommendations
The package provides a simple API for making predictions and Top-N recommended lists. These Top-N lists can be made among all items, or across some user-provided subset only, and you can choose to discard items with which the user had already interacted in the training set.
Here I will:
* Pick a random user with a reasonably long event history.
* See which items would the model recommend to them among those which he has not yet clicked.
* Compare it with the recommended list from the model without item side information.
Unfortunately, since all the data is anonymized, it's not possible to make a qualitative evaluation of the results by looking at the recommended lists as it is in other datasets.
```
users_many_events = events_train.groupby('UserId')['ItemId'].agg(lambda x: len(tuple(x)))
users_many_events = np.array(users_many_events.index[users_many_events > 20])
np.random.seed(1)
chosen_user = np.random.choice(users_many_events)
chosen_user
%%time
recommender_all_items.topN(chosen_user, n=20)
```
*(These numbers represent the IDs of the items being recommended as they appeared in the `events_train` data frame)*
```
%%time
recommender_clicked_items_only.topN(chosen_user, n=20)
%%time
recommender_no_sideinfo.topN(chosen_user, n=20)
```
** *
<a id="p7"></a>
## 7. References
* Gopalan, Prem K., Laurent Charlin, and David Blei. "Content-based recommendations with poisson factorization." Advances in Neural Information Processing Systems. 2014.
```
import numpy as np
import seaborn as sns
import pandas as pd
import matplotlib.pyplot as plt
plt.rcParams['axes.titlesize'] = 20
plt.rcParams['axes.titleweight'] = 10
```
## 1. Dataset Read
```
df = pd.read_csv("haberman.csv")
df.head()
```
## 2. Basic Analysis
```
print("No. of features are in given dataset : {} \n".format(len(df.columns[:-1])))
print("Features are : {} \n".format(list(df.columns)[:-1]))
print("Target Feature is : {}".format(df.columns[-1]))
df.info() # Note: No null entries
df.describe() # basic stats about df
# !pip install statsmodels
#Median, Quantiles, Percentiles, IQR.
print("\nMedians:")
print(np.median(df["nodes"]))
#Median with an outlier
print(np.median(np.append(df["age"],50)));
print(np.median(df["year"]))
print("\nQuantiles:")
print(np.percentile(df["nodes"],np.arange(0, 100, 25)))
print(np.percentile(df["age"],np.arange(0, 100, 25)))
print(np.percentile(df["year"], np.arange(0, 100, 25)))
print("\n90th Percentiles:")
print(np.percentile(df["nodes"],90))
print(np.percentile(df["age"],90))
print(np.percentile(df["year"], 90))
from statsmodels import robust
print ("\nMedian Absolute Deviation")
print(robust.mad(df["nodes"]))
print(robust.mad(df["age"]))
print(robust.mad(df["year"]))
print("No of datapoint is in each feature : {} \n".format(df.size / 4))
print("No of classes and datapoint is in dataset :\n{}\n".format(df.status.value_counts()))
```
## 3. Insights into the survival rate of cancer patients at the Haberman hospital after surgery
```
plt.figure(figsize=(15,8));
sns.set(style='whitegrid');
plt.rcParams['axes.titlesize'] = 15
plt.rcParams['axes.titleweight'] = 50
sns.FacetGrid(df, hue='status', size=4) \
.map(plt.scatter, 'age', 'nodes') \
.add_legend();
plt.title("AGE - NODES SCATTER PLOT");
plt.show();
```
`Observation :`
1. The two classes overlap heavily in these features
2. A higher number of lymph nodes indicates a higher survival risk
### 3.1 Multivariate Analysis
```
plt.close();
sns.set_style("whitegrid");
sns.pairplot(df, hue="status", size=4, vars=['year','age', 'nodes'], diag_kind='kde', kind='scatter');
plt.show()
```
`observation : `
1. All features are tightly overlapped
2. Older patients with a higher number of lymph nodes have a higher survival risk
### 3.2 Univariate Analysis
#### Histogram, CDF, PDF
```
sns.set_style("whitegrid");
plt.figure(figsize=(15,8))
plt.rcParams['axes.titlesize'] = 20
plt.rcParams['axes.titleweight'] = 10
sns.distplot(df.loc[df['status'] == 1].nodes , bins=20, label='survived', color='Green');
sns.distplot(df.loc[df['status'] == 2].nodes , bins=20, label='unsurvived', color='Red');
plt.legend();
plt.title("NODES DISTRIBUTION OVER TARGET");
```
`observation :`
1. Patients with more than 10 nodes have a higher survival risk;
2. patients with fewer than 10 nodes have a lower survival risk
```
sns.set_style("whitegrid");
plt.figure(figsize=(15,8))
plt.rcParams['axes.titlesize'] = 20
plt.rcParams['axes.titleweight'] = 10
sns.distplot(df.loc[df['status'] == 1].age , bins=20, label='survived', color='Green');
sns.distplot(df.loc[df['status'] == 2].age , bins=20, label='unsurvived', color='Red');
plt.legend();
plt.title("AGE DISTRIBUTION OVER TARGET");
```
`observation : `
1. Patients aged between 42 and 55 have a slightly higher survival risk;
```
sns.set_style("whitegrid");
plt.figure(figsize=(15,8))
plt.rcParams['axes.titlesize'] = 20
plt.rcParams['axes.titleweight'] = 10
sns.distplot(df.loc[df['status'] == 1].year , bins=20, label='survived', color='Green')
sns.distplot(df.loc[df['status'] == 2].year , bins=20, label='unsurvived', color='Red')
plt.legend()
plt.title("YEAR DISTRIBUTION OVER TARGET");
```
`Observation:`
1. Patients operated on between 1958 and mid-1963, and between 1966 and 1968, mostly survived.
2. Patients operated on between 1963 and 1966 had a higher survival risk.
```
sns.set_style("whitegrid");
plt.figure(figsize=(10,6))
plt.rcParams['axes.titlesize'] = 20
plt.rcParams['axes.titleweight'] = 10
count, bin_edges = np.histogram(df.loc[df['status'] == 1].nodes, bins=10, density=True)
nodes_pdf = count / sum(count)
nodes_cdf = np.cumsum(nodes_pdf)
plt.plot(bin_edges[1:],nodes_pdf, color='green', marker='o', linestyle='dashed')
plt.plot(bin_edges[1:],nodes_cdf, color='black', marker='o', linestyle='dashed')
count, bin_edges = np.histogram(df.loc[df['status'] == 1].age, bins=10, density=True)
age_pdf = count / sum(count)
age_cdf = np.cumsum(age_pdf)
plt.plot(bin_edges[1:],age_pdf, color='red', marker='o', linestyle='dotted')
plt.plot(bin_edges[1:],age_cdf, color='black', marker='o', linestyle='dotted')
count, bin_edges = np.histogram(df.loc[df['status'] == 1].year, bins=10, density=True)
year_pdf = count / sum(count)
year_cdf = np.cumsum(year_pdf)
plt.plot(bin_edges[1:],year_pdf, color='blue', marker='o', linestyle='solid')
plt.plot(bin_edges[1:],year_cdf, color='black', marker='o', linestyle='solid')
plt.title("SURVIVED PATIENTS PDF & CDF")
plt.legend(["nodes_pdf","nodes_cdf", "age_pdf", "age_cdf", "year_pdf", "year_cdf"])
plt.show();
```
`Observation:`
1. If nodes < 10, about 82% of patients survived; otherwise the chance of survival is roughly 10%.
2. Among patients aged 45-65, roughly 18% more survived than in the other age groups.
```
sns.set_style("whitegrid");
plt.figure(figsize=(10,6))
plt.rcParams['axes.titlesize'] = 20
plt.rcParams['axes.titleweight'] = 10
count, bin_edges = np.histogram(df.loc[df['status'] == 2].nodes, bins=10, density=True)
nodes_pdf = count / sum(count)
nodes_cdf = np.cumsum(nodes_pdf)
plt.plot(bin_edges[1:],nodes_pdf, color='green', marker='o', linestyle='dashed')
plt.plot(bin_edges[1:],nodes_cdf, color='black', marker='o', linestyle='dashed')
count, bin_edges = np.histogram(df.loc[df['status'] == 2].age, bins=10, density=True)
age_pdf = count / sum(count)
age_cdf = np.cumsum(age_pdf)
plt.plot(bin_edges[1:],age_pdf, color='red', marker='o', linestyle='dotted')
plt.plot(bin_edges[1:],age_cdf, color='black', marker='o', linestyle='dotted')
count, bin_edges = np.histogram(df.loc[df['status'] == 2].year, bins=10, density=True)
year_pdf = count / sum(count)
year_cdf = np.cumsum(year_pdf)
plt.plot(bin_edges[1:],year_pdf, color='blue', marker='o', linestyle='solid')
plt.plot(bin_edges[1:],year_cdf, color='black', marker='o', linestyle='solid')
plt.title("UNSURVIVED PATIENTS PDF & CDF")
plt.legend(["nodes_pdf","nodes_cdf", "age_pdf", "age_cdf", "year_pdf", "year_cdf"])
plt.show();
```
`Observation:`
1. Patients with more than 20 nodes have roughly a 97% non-survival rate.
2. Patients aged between 38 and 48 have roughly a 20% non-survival rate.
```
sns.set_style("whitegrid");
g = sns.catplot(x="status", y="nodes", hue="status", data=df, kind="box", height=4, aspect=.7)
g.fig.set_figwidth(10)
g.fig.set_figheight(5)
g.fig.suptitle('[BOX PLOT] NODES OVER STATUS', fontsize=20)
g.add_legend()
plt.show()
```
`Observation:` (checked numerically below)
1. The 75th percentile of the lymph node count among survivors is below 5.
2. For non-survivors, the 25th percentile is about 2 and the 75th percentile is about 12.
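These percentile claims can be checked directly from the data; a minimal sketch using the `df` loaded above:
```
# Per-class summary of the lymph node counts (count, mean, quartiles, ...)
print(df.groupby("status")["nodes"].describe())
```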
```
sns.set_style("whitegrid");
g = sns.catplot(x="status", y="nodes", hue="status", data=df, kind="violin", height=4, aspect=.7);
g.fig.set_figwidth(10)
g.fig.set_figheight(5)
g.add_legend()
g.fig.suptitle('[VIOLIN PLOT] NODES OVER STATUS', fontsize=20)
plt.show()
```
`Observation:`
1. The first violin clearly shows that patients with lymph node counts close to zero mostly survived; the whiskers span roughly 0-7.
2. The second violin shows that patients with counts far from zero mostly did not survive; the whiskers span roughly 0-20, with the bulk between 0 and 12.
```
plt.rcParams['axes.titlesize'] = 20
plt.rcParams['axes.titleweight'] = 10
sns.jointplot(x='age',y='nodes',data=df,kind='kde')
plt.suptitle("JOINT_PLOT FOR NODES - AGE",fontsize=20)
plt.show()
```
`Observation:`
1. Long-term survival is concentrated in the age range 47-60 with axillary node counts of 0-3.
### 3.3 BAR_PLOTS [SUMMARIZATION]
```
plt.figure(figsize=(15,8));
sns.set(style="whitegrid");
# sns.FacetGrid(df, hue='status')
# Draw a nested barplot to show survival for class and sex
g = sns.catplot(x="nodes", y="status", data=df,
height=6, kind="bar", palette="muted");
# g.despine(left=True)
g.set_ylabels("survival probability");
g.fig.set_figwidth(15);
g.fig.set_figheight(8.27);
g.fig.suptitle("Survival rate for node vise [0-2]", fontsize=20);
plt.figure(figsize=(15,8))
sns.set(style="whitegrid");
g = sns.catplot(x="age", y="status", data=df,
height=6, kind="bar", palette="muted");
g.set_ylabels("survival probability");
g.fig.set_figwidth(15)
g.fig.set_figheight(8.27)
g.fig.suptitle("Survival rate for age vise [0-2]", fontsize=20);
```
## 4. Conclusion
`1. A patient's age and operation year alone are not deciding factors for survival. Still, people younger than 35 have a better chance of survival.`
`2. Survival chance is inversely proportional to the number of positive axillary nodes. We also saw that the absence of positive axillary nodes cannot always guarantee survival.`
`3. Classifying the survival status of a new patient from the given features is a difficult task because the data is imbalanced (the class proportions are checked below).`
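As a quick check of the class imbalance mentioned in the conclusion, the class proportions can be computed directly (a small sketch using the `df` loaded above):
```
# Fraction of patients in each survival class
print(df["status"].value_counts(normalize=True))
```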
# Continuous Control
---
In this notebook, you will learn how to use the Unity ML-Agents environment for the second project of the [Deep Reinforcement Learning Nanodegree](https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893) program.
### 1. Start the Environment
We begin by importing the necessary packages. If the code cell below returns an error, please revisit the project instructions to double-check that you have installed [Unity ML-Agents](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md) and [NumPy](http://www.numpy.org/).
```
import torch
import numpy as np
import pandas as pd
from collections import deque
from unityagents import UnityEnvironment
import random
import matplotlib.pyplot as plt
%matplotlib inline
from ddpg_agent import Agent
```
Next, we will start the environment! **_Before running the code cell below_**, change the `file_name` parameter to match the location of the Unity environment that you downloaded.
- **Mac**: `"path/to/Reacher.app"`
- **Windows** (x86): `"path/to/Reacher_Windows_x86/Reacher.exe"`
- **Windows** (x86_64): `"path/to/Reacher_Windows_x86_64/Reacher.exe"`
- **Linux** (x86): `"path/to/Reacher_Linux/Reacher.x86"`
- **Linux** (x86_64): `"path/to/Reacher_Linux/Reacher.x86_64"`
- **Linux** (x86, headless): `"path/to/Reacher_Linux_NoVis/Reacher.x86"`
- **Linux** (x86_64, headless): `"path/to/Reacher_Linux_NoVis/Reacher.x86_64"`
For instance, if you are using a Mac, then you downloaded `Reacher.app`. If this file is in the same folder as the notebook, then the line below should appear as follows:
```
env = UnityEnvironment(file_name="Reacher.app")
```
```
env = UnityEnvironment(file_name="Reacher1.app")
```
Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.
```
# get the default brain
brain_name = env.brain_names[0]
brain = env.brains[brain_name]
```
### 2. Examine the State and Action Spaces
In this environment, a double-jointed arm can move to target locations. A reward of `+0.1` is provided for each step that the agent's hand is in the goal location. Thus, the goal of your agent is to maintain its position at the target location for as many time steps as possible.
The observation space consists of `33` variables corresponding to position, rotation, velocity, and angular velocities of the arm. Each action is a vector with four numbers, corresponding to torque applicable to two joints. Every entry in the action vector must be a number between `-1` and `1`.
Run the code cell below to print some information about the environment.
```
# reset the environment
env_info = env.reset(train_mode=True)[brain_name]
# number of agents
num_agents = len(env_info.agents)
print('Number of agents:', num_agents)
# size of each action
action_size = brain.vector_action_space_size
print('Size of each action:', action_size)
# examine the state space
states = env_info.vector_observations
state_size = states.shape[1]
print('There are {} agents. Each observes a state with length: {}'.format(states.shape[0], state_size))
print('The state for the first agent looks like:', states[0])
```
### 3. Take Random Actions in the Environment
In the next code cell, you will learn how to use the Python API to control the agent and receive feedback from the environment.
Once this cell is executed, you will watch the agent's performance as it selects an action at random at each time step. A window should pop up that allows you to observe the agent as it moves through the environment.
Of course, as part of the project, you'll have to change the code so that the agent is able to use its experience to gradually choose better actions when interacting with the environment!
```
env_info = env.reset(train_mode=False)[brain_name] # reset the environment
states = env_info.vector_observations # get the current state (for each agent)
scores = np.zeros(num_agents) # initialize the score (for each agent)
while True:
actions = np.random.randn(num_agents, action_size) # select an action (for each agent)
actions = np.clip(actions, -1, 1) # all actions between -1 and 1
    env_info = env.step(actions)[brain_name]        # send all actions to the environment
next_states = env_info.vector_observations # get next state (for each agent)
rewards = env_info.rewards # get reward (for each agent)
dones = env_info.local_done # see if episode finished
scores += env_info.rewards # update the score (for each agent)
states = next_states # roll over states to next time step
if np.any(dones): # exit loop if episode finished
break
print('Total score (averaged over agents) this episode: {}'.format(np.mean(scores)))
```
### 4. It's Your Turn!
Now it's your turn to train your own agent to solve the environment! When training the environment, set `train_mode=True`, so that the line for resetting the environment looks like the following:
```python
env_info = env.reset(train_mode=True)[brain_name]
```
```
agent = Agent(state_size=state_size, action_size=action_size,
n_agents=num_agents, random_seed=42)
def plot_scores(scores, rolling_window=10, save_fig=False):
"""Plot scores and optional rolling mean using specified window."""
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(np.arange(len(scores)), scores)
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.title(f'scores')
rolling_mean = pd.Series(scores).rolling(rolling_window).mean()
plt.plot(rolling_mean);
if save_fig:
plt.savefig(f'figures_scores.png', bbox_inches='tight', pad_inches=0)
def ddpg(n_episodes=10000, max_t=1000, print_every=100):
scores_deque = deque(maxlen=print_every)
scores = []
for i_episode in range(1, n_episodes+1):
env_info = env.reset(train_mode=True)[brain_name]
states = env_info.vector_observations
agent.reset()
score = np.zeros(num_agents)
for t in range(max_t):
actions = agent.act(states)
env_info = env.step(actions)[brain_name]
next_states = env_info.vector_observations # get next state (for each agent)
rewards = env_info.rewards # get reward (for each agent)
dones = env_info.local_done # see if episode finished
agent.step(states, actions, rewards, next_states, dones)
states = next_states
score += rewards
if any(dones):
break
scores_deque.append(np.mean(score))
scores.append(np.mean(score))
print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_deque)), end="")
torch.save(agent.actor_local.state_dict(), './weights/checkpoint_actor.pth')
torch.save(agent.critic_local.state_dict(), './weights/checkpoint_critic.pth')
if i_episode % print_every == 0:
print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_deque)))
plot_scores(scores)
if np.mean(scores_deque) >= 30.0:
print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode - print_every, np.mean(scores_deque)))
torch.save(agent.actor_local.state_dict(), './weights/checkpoint_actor.pth')
torch.save(agent.critic_local.state_dict(), './weights/checkpoint_critic.pth')
break
return scores
scores = ddpg()
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(np.arange(1, len(scores)+1), scores)
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.show()
plot_scores(scores)
```
When finished, you can close the environment.
```
env.close()
```
# Introduction to Python part IV (And a discussion of linear transformations)
## Activity 1: Discussion of linear transformations
* Orthogonality also plays a key role in understanding linear transformations. How can we understand linear transformations in terms of a composition of rotations and diagonal matrices? There are two specific matrix factorizations that arise this way, can you name them and describe the conditions in which they are applicable?
* What is a linear inverse problem? What conditions guarantee a solution?
* What is a pseudo-inverse? How is this related to an orthogonal projection? How is this related to the linear inverse problem? (A short NumPy sketch follows this list.)
* What is a weighted norm and what is a weighted pseudo-norm?
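As a concrete, hedged illustration of the SVD and pseudo-inverse ideas above, here is a small NumPy sketch (the matrix entries are arbitrary examples, not part of the activity data):
```
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0], [0.0, 1.0]])   # 3x2 matrix with full column rank
U, s, Vt = np.linalg.svd(A, full_matrices=False)      # A = U @ diag(s) @ Vt: rotation * scaling * rotation
print(np.allclose(A, U @ np.diag(s) @ Vt))            # True: the factorization reconstructs A

b = np.array([1.0, 0.0, 2.0])
A_pinv = np.linalg.pinv(A)                            # Moore-Penrose pseudo-inverse
x_hat = A_pinv @ b                                    # least-squares solution of the linear inverse problem A x ~ b
# A @ x_hat is the orthogonal projection of b onto the range of A, the same answer lstsq gives
print(np.allclose(A @ x_hat, A @ np.linalg.lstsq(A, b, rcond=None)[0]))
```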
## Activity 2: Basic data analysis and manipulation
```
import numpy as np
```
### Exercise 1:
Arrays can be concatenated and stacked on top of one another, using NumPy’s `vstack` and `hstack` functions for vertical and horizontal stacking, respectively.
```
A = np.array([[1,2,3], [4,5,6], [7, 8, 9]])
print('A = ')
print(A)
B = np.hstack([A, A])
print('B = ')
print(B)
C = np.vstack([A, A])
print('C = ')
print(C)
```
Write some additional code that slices the first and last columns of A, and stacks them into a 3x2 array. Make sure to print the results to verify your solution.
Note a ‘gotcha’ with array indexing is that singleton dimensions are dropped by default. That means `A[:, 0]` is a one dimensional array, which won’t stack as desired. To preserve singleton dimensions, the index itself can be a slice or array. For example, `A[:, :1]` returns a two dimensional array with one singleton dimension (i.e. a column vector).
```
D = np.hstack((A[:, :1], A[:, -1:]))
print('D = ')
print(D)
```
An alternative way to achieve the same result is to use NumPy's `delete` function to remove the second column of A. Search the documentation for `np.delete` to find the syntax for constructing such an array; one possible solution is sketched below.
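For reference, here is one way the `np.delete` approach might look (a sketch, reusing the array `A` defined above; check the documentation for the exact argument meanings):
```
# Remove the second column (index 1) along the column axis
E = np.delete(A, 1, axis=1)
print('E = ')
print(E)
```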
### Exercise 2:
The patient data is longitudinal in the sense that each row represents a series of observations relating to one individual. This means that the change in inflammation over time is a meaningful concept. Let’s find out how to calculate changes in the data contained in an array with NumPy.
The `np.diff` function takes an array and returns the differences between two successive values. Let’s use it to examine the changes each day across the first week of patient 3 from our inflammation dataset.
```
patient3_week1 = data[3, :7]
print(patient3_week1)
```
Calling `np.diff(patient3_week1)` would do the following calculations
`[ 0 - 0, 2 - 0, 0 - 2, 4 - 0, 2 - 4, 2 - 2 ]`
and return the 6 difference values in a new array.
```
np.diff(patient3_week1)
```
Note that the array of differences is shorter by one element (length 6).
When calling `np.diff` with a multi-dimensional array, an axis argument may be passed to the function to specify which axis to process. When applying `np.diff` to our 2D inflammation array data, which axis would we specify? Take the differences in the appropriate axis and compute a basic summary of the differences with our standard statistics above.
If the shape of an individual data file is (60, 40) (60 rows and 40 columns), what is the shape of the array after you run the `np.diff` function and why?
How would you find the largest change in inflammation for each patient? Does it matter if the change in inflammation is an increase or a decrease?
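One possible way to answer these questions (a sketch, assuming `data` is the (60, 40) inflammation array loaded earlier in the lesson):
```
daily_change = np.diff(data, axis=1)   # differences along days, shape (60, 39)
print(daily_change.shape)
print(np.mean(daily_change), np.std(daily_change), np.max(daily_change), np.min(daily_change))
# Largest change per patient; wrap in np.abs(...) first if decreases should count too
print(np.max(daily_change, axis=1))
```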
## Summary of key points
Some of the key takeaways from this activity are the following:
* Import a library into a program using import libraryname.
* Use the numpy library to work with arrays in Python.
* The expression `array.shape` gives the shape of an array.
* Use `array[x, y]` to select a single element from a 2D array.
* Array indices start at 0, not 1.
* Use `low:high` to specify a slice that includes the indices from `low` to `high-1`.
* Use `# some kind of explanation` to add comments to programs.
* Use `np.mean(array)`, `np.std(array)`, `np.quantile(array)`, `np.max(array)`, and `np.min(array)` to calculate simple statistics.
* Use `sp.mode(array)` to compute additional statistics.
* Use `np.mean(array, axis=0)` or `np.mean(array, axis=1)` to calculate statistics across the specified axis.
# Face Detection
Face detection means, as the name suggests, finding human faces in an image. It is a classic object detection problem in computer vision. Classic algorithms such as Viola-Jones have long been built into OpenCV and were for a time the default way to do face detection with OpenCV. The newly released OpenCV 4.5.4, however, ships a brand-new face detector based on a neural network. This notebook shows how to use it.
## Setup
First, load the required packages and check the OpenCV version.
If you have not installed OpenCV yet, you can install it with the following command:
```bash
pip install opencv-python
```
```
import cv2
from PIL import Image
print(f"你需要OpenCV 4.5.4或者更高版本。当前版本为:{cv2.__version__}")
```
Download the model file from the address below and place it in the current directory.
Model download: https://github.com/ShiqiYu/libfacedetection.train/tree/master/tasks/task1/onnx
The current directory is:
```
!pwd
```
## Building the Detector
The detector is constructed with `FaceDetectorYN_create`, which takes three required parameters:
- `model`: path to the ONNX model
- `config`: configuration (optional when using an ONNX model)
- `input_size`: size of the input image. If the input size is unknown at construction time, it can be set before running detection.
```
face_detector = cv2.FaceDetectorYN_create("yunet.onnx", "", (0, 0))
print("检测器构建完成。")
```
## Running Detection
Once the detector is built, faces can be detected with the `detect` method. Note that if the input size was not specified at construction time, it can be set before the call with the `setInputSize` method.
```
# Load the image to be detected. Image credit: @anyataylorjoy on Instagram
image = cv2.imread("queen.jpg")
# Get the image size and configure the detector accordingly
height, width, _ = image.shape
face_detector.setInputSize((width, height))
# Run detection
result, faces = face_detector.detect(image)
print("Detection finished.")
```
## Drawing the Detection Results
First, print the raw detection results.
```
print(faces)
```
The result is a nested array. The outer dimension is the number of detections, i.e. how many faces were found. Each detection contains 15 numbers, whose meanings are listed below.
| Index | Meaning |
| --- | --- |
| 0 | face box x |
| 1 | face box y |
| 2 | face box width |
| 3 | face box height |
| 4 | left pupil x |
| 5 | left pupil y |
| 6 | right pupil x |
| 7 | right pupil y |
| 8 | nose tip x |
| 9 | nose tip y |
| 10 | left mouth corner x |
| 11 | left mouth corner y |
| 12 | right mouth corner x |
| 13 | right mouth corner y |
| 14 | face confidence score |
Next, draw these coordinates on the image one by one.
### Drawing the Face Box
OpenCV provides the `rectangle` and `circle` functions for drawing boxes and dots on an image. First, draw the face box with `rectangle`.
```
# Take the first detection and cast the coordinates to integers for drawing.
face = faces[0].astype(int)
# Get the position and size of the face box.
x, y, w, h = face[:4]
# Draw the box on the image.
image_with_marks = cv2.rectangle(image, (x, y), (x+w, y+h), (255, 255, 255))
# Show the result
display(Image.fromarray(cv2.cvtColor(image_with_marks, cv2.COLOR_BGR2RGB)))
```
### Drawing the Facial Landmarks
```
# Draw the pupils
left_eye_x, left_eye_y, right_eye_x, right_eye_y = face[4:8]
cv2.circle(image_with_marks, (left_eye_x, left_eye_y), 2, (0, 255, 0), -1)
cv2.circle(image_with_marks, (right_eye_x, right_eye_y), 2, (0, 255, 0), -1)
# Draw the nose tip
nose_x, nose_y = face[8:10]
cv2.circle(image_with_marks, (nose_x, nose_y), 2, (0, 255, 0), -1)
# Draw the mouth corners
mouth_left_x, mouth_left_y, mouth_right_x, mouth_right_y = face[10:14]
cv2.circle(image_with_marks, (mouth_left_x, mouth_left_y), 2, (0, 255, 0), -1)
cv2.circle(image_with_marks, (mouth_right_x, mouth_right_y), 2, (0, 255, 0), -1)
# Show the result
display(Image.fromarray(cv2.cvtColor(image_with_marks, cv2.COLOR_BGR2RGB)))
```
## Performance Test
A face detector will often be used in real-time scenarios, where speed is a factor that cannot be ignored. The code below measures how fast the new detector runs on the current device.
```
tm = cv2.TickMeter()
for _ in range(1000):
tm.start()
_ = face_detector.detect(image)
tm.stop()
print(f"检测速度:{tm.getFPS():.0f} FPS")
```
## Summary
The face detector shipped with OpenCV 4.5.4 uses a neural-network-based approach. Compared with the earlier Viola-Jones-based solution, it also returns the locations of the facial landmarks. It is worth considering as the default face detection option.
<a href="https://colab.research.google.com/github/Eurus-Holmes/PyTorch-Tutorials/blob/master/Training_a__Classifier.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
%matplotlib inline
```
Training a Classifier
=====================
This is it. You have seen how to define neural networks, compute loss and make
updates to the weights of the network.
Now you might be thinking,
What about data?
----------------
Generally, when you have to deal with image, text, audio or video data,
you can use standard python packages that load data into a numpy array.
Then you can convert this array into a ``torch.*Tensor``.
- For images, packages such as Pillow, OpenCV are useful
- For audio, packages such as scipy and librosa
- For text, either raw Python or Cython based loading, or NLTK and
SpaCy are useful
Specifically for vision, we have created a package called
``torchvision``, that has data loaders for common datasets such as
Imagenet, CIFAR10, MNIST, etc. and data transformers for images, viz.,
``torchvision.datasets`` and ``torch.utils.data.DataLoader``.
This provides a huge convenience and avoids writing boilerplate code.
For this tutorial, we will use the CIFAR10 dataset.
It has the classes: ‘airplane’, ‘automobile’, ‘bird’, ‘cat’, ‘deer’,
‘dog’, ‘frog’, ‘horse’, ‘ship’, ‘truck’. The images in CIFAR-10 are of
size 3x32x32, i.e. 3-channel color images of 32x32 pixels in size.
Training an image classifier
----------------------------
We will do the following steps in order:
1. Load and normalize the CIFAR10 training and test datasets using
``torchvision``
2. Define a Convolutional Neural Network
3. Define a loss function
4. Train the network on the training data
5. Test the network on the test data
# 1. Loading and normalizing CIFAR10
----
Using ``torchvision``, it’s extremely easy to load CIFAR10.
```
import torch
import torchvision
import torchvision.transforms as transforms
```
The outputs of torchvision datasets are PILImage images of range [0, 1].
We transform them to Tensors of normalized range [-1, 1].
```
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
shuffle=True, num_workers=2)
testset = torchvision.datasets.CIFAR10(root='./data', train=False,
download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
shuffle=False, num_workers=2)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
```
Let us show some of the training images, for fun.
```
import matplotlib.pyplot as plt
import numpy as np
# functions to show an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
# get some random training images
dataiter = iter(trainloader)
images, labels = next(dataiter)
# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(4)))
```
# 2. Define a Convolutional Neural Network
----
Copy the neural network from the Neural Networks section before and modify it to
take 3-channel images (instead of 1-channel images as it was defined).
```
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
net = Net()
```
# 3. Define a Loss function and optimizer
----
Let's use a Classification Cross-Entropy loss and SGD with momentum.
```
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
```
# 4. Train the network
----
This is when things start to get interesting.
We simply have to loop over our data iterator, and feed the inputs to the
network and optimize.
```
for epoch in range(2): # loop over the dataset multiple times
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
# get the inputs
inputs, labels = data
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
if i % 2000 == 1999: # print every 2000 mini-batches
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 2000))
running_loss = 0.0
print('Finished Training')
```
# 5. Test the network on the test data
----
We have trained the network for 2 passes over the training dataset.
But we need to check if the network has learnt anything at all.
We will check this by predicting the class label that the neural network
outputs, and checking it against the ground-truth. If the prediction is
correct, we add the sample to the list of correct predictions.
Okay, first step. Let us display an image from the test set to get familiar.
```
dataiter = iter(testloader)
images, labels = next(dataiter)
# print images
imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))
```
Okay, now let us see what the neural network thinks these examples above are:
```
outputs = net(images)
```
The outputs are energies for the 10 classes.
Higher the energy for a class, the more the network
thinks that the image is of the particular class.
So, let's get the index of the highest energy:
```
_, predicted = torch.max(outputs, 1)
print('Predicted: ', ' '.join('%5s' % classes[predicted[j]]
for j in range(4)))
```
The results seem pretty good.
Let us look at how the network performs on the whole dataset.
```
correct = 0
total = 0
with torch.no_grad():
for data in testloader:
images, labels = data
outputs = net(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total))
```
That looks waaay better than chance, which is 10% accuracy (randomly picking
a class out of 10 classes).
Seems like the network learnt something.
Hmmm, what are the classes that performed well, and the classes that did
not perform well:
```
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
with torch.no_grad():
for data in testloader:
images, labels = data
outputs = net(images)
_, predicted = torch.max(outputs, 1)
c = (predicted == labels).squeeze()
for i in range(4):
label = labels[i]
class_correct[label] += c[i].item()
class_total[label] += 1
for i in range(10):
print('Accuracy of %5s : %2d %%' % (
classes[i], 100 * class_correct[i] / class_total[i]))
```
Okay, so what next?
How do we run these neural networks on the GPU?
Training on GPU
----------------
Just like how you transfer a Tensor on to the GPU, you transfer the neural
net onto the GPU.
Let's first define our device as the first visible cuda device if we have
CUDA available:
```
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# Assume that we are on a CUDA machine, then this should print a CUDA device:
print(device)
```
The rest of this section assumes that `device` is a CUDA device.
Then these methods will recursively go over all modules and convert their
parameters and buffers to CUDA tensors:
`net.to(device)`
Remember that you will have to send the inputs and targets at every step
to the GPU too:
`inputs, labels = inputs.to(device), labels.to(device)`
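Putting these pieces together, a minimal sketch of the GPU-enabled training step (reusing the `net`, `criterion`, `optimizer`, and `trainloader` defined above) could look like this:
```
net.to(device)  # move the model's parameters and buffers to the GPU

for inputs, labels in trainloader:
    inputs, labels = inputs.to(device), labels.to(device)  # move the batch as well
    optimizer.zero_grad()
    outputs = net(inputs)
    loss = criterion(outputs, labels)
    loss.backward()
    optimizer.step()
```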
Why don't I notice a MASSIVE speedup compared to CPU? Because your network
is really small.
**Exercise:** Try increasing the width of your network (argument 2 of
the first ``nn.Conv2d``, and argument 1 of the second ``nn.Conv2d`` –
they need to be the same number), see what kind of speedup you get.
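One possible way to widen the network for this exercise (a sketch, not the tutorial's reference solution; the channel count 16 is an arbitrary choice):
```
class WideNet(nn.Module):
    def __init__(self):
        super(WideNet, self).__init__()
        self.conv1 = nn.Conv2d(3, 16, 5)   # widened: argument 2 of the first conv
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(16, 16, 5)  # argument 1 must match the first conv's output
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

wide_net = WideNet().to(device)
```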
**Goals achieved**:
- Understanding PyTorch's Tensor library and neural networks at a high level.
- Train a small neural network to classify images
Training on multiple GPUs
-------------------------
If you want to see even more MASSIVE speedup using all of your GPUs,
please check out :doc:`data_parallel_tutorial`.
Where do I go next?
-------------------
- `Train neural nets to play video games`
- `Train a state-of-the-art ResNet network on imagenet`
- `Train a face generator using Generative Adversarial Networks`
- `Train a word-level language model using Recurrent LSTM networks`
- `More examples`
- `More tutorials`
- `Discuss PyTorch on the Forums`
- `Chat with other users on Slack`
# Direct Outcome Prediction Model
Also known as standardization
```
%matplotlib inline
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor
from causallib.datasets import load_smoking_weight
from causallib.estimation import Standardization, StratifiedStandardization
from causallib.evaluation import OutcomeEvaluator
```
#### Data:
The effect of quitting smoking on weight loss.
Data example is taken from [Hernan and Robins Causal Inference Book](https://www.hsph.harvard.edu/miguel-hernan/causal-inference-book/)
```
data = load_smoking_weight()
data.X.join(data.a).join(data.y).head()
```
## "Standard" Standardization
A single model is trained with the treatment assignment as an additional feature.
During inference, the model assigns a treatment value for all samples,
thus predicting the potential outcome of all samples.
```
std = Standardization(LinearRegression())
std.fit(data.X, data.a, data.y)
```
##### Outcome Prediction
The model can be used to predict individual outcomes:
The potential outcome under each intervention
```
ind_outcomes = std.estimate_individual_outcome(data.X, data.a)
ind_outcomes.head()
```
The model can also be used to predict population outcomes
by aggregating the individual outcome predictions (e.g., mean or median),
via the `agg_func` argument, which defaults to `'mean'`.
```
median_pop_outcomes = std.estimate_population_outcome(data.X, data.a, agg_func="median")
median_pop_outcomes.rename("median", inplace=True)
mean_pop_outcomes = std.estimate_population_outcome(data.X, data.a, agg_func="mean")
mean_pop_outcomes.rename("mean", inplace=True)
pop_outcomes = mean_pop_outcomes.to_frame().join(median_pop_outcomes)
pop_outcomes
```
##### Effect Estimation
Similarly, effect estimation can be done at either the individual or the population level, depending on the outcomes provided.
Population level effect using population outcomes:
```
std.estimate_effect(mean_pop_outcomes[1], mean_pop_outcomes[0])
```
Population level effect using individual outcome, but asking for aggregation (default behaviour):
```
std.estimate_effect(ind_outcomes[1], ind_outcomes[0], agg="population")
```
Individual-level effect using individual outcomes:
Since we're using a binary treatment with a linear regression in a single pooled standardization model,
the difference is the same for all individuals and is equal to the coefficient of the treatment variable.
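In symbols, for a binary treatment $a$ and covariates $x$, the fitted linear model gives
$$\hat{y}(a, x) = \hat\beta_0 + \hat\beta_a\, a + \hat\beta^\top x \quad\Rightarrow\quad \hat{y}(1, x) - \hat{y}(0, x) = \hat\beta_a,$$
which is the treatment coefficient printed from `std.learner.coef_` in the next cell (a worked restatement of the claim above, not an additional causallib feature).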
```
print(std.learner.coef_[0])
std.estimate_effect(ind_outcomes[1], ind_outcomes[0], agg="individual").head()
```
Multiple types of effect are also supported:
```
std.estimate_effect(ind_outcomes[1], ind_outcomes[0],
agg="individual", effect_types=["diff", "ratio"]).head()
```
### Treatment one-hot encoded
For multi-treatment cases, where treatments are coded as 0, 1, 2, ... but have no ordinal interpretation,
it is possible to make the model encode the treatment assignment vector as a one-hot matrix.
```
std = Standardization(LinearRegression(), encode_treatment=True)
std.fit(data.X, data.a, data.y)
pop_outcomes = std.estimate_population_outcome(data.X, data.a, agg_func="mean")
std.estimate_effect(mean_pop_outcomes[1], mean_pop_outcomes[0])
```
## Stratified Standardization
While standardization can be viewed as a **"completely pooled"** estimator,
since it fits both treatment groups together,
Stratified Standardization can be viewed as a **"completely unpooled"** one,
since it stratifies the dataset by treatment value and learns a separate model for each treatment group.
```
std = StratifiedStandardization(LinearRegression())
std.fit(data.X, data.a, data.y)
```
Checking the core `learner` we can see that it actually has two models, indexed by the treatment value:
```
std.learner
```
We can apply same analysis as above.
```
pop_outcomes = std.estimate_population_outcome(data.X, data.a, agg_func="mean")
std.estimate_effect(mean_pop_outcomes[1], mean_pop_outcomes[0])
```
We can see that internally, when asking for some potential outcome,
the model simply applies the model trained on the group of that treatment:
```
potential_outcome = std.estimate_individual_outcome(data.X, data.a)[1]
direct_prediction = std.learner[1].predict(data.X)
(potential_outcome == direct_prediction).all()
```
#### Providing complex scheme of learners
When supplying a single learner to the standardization above,
the model simply duplicates it for each treatment value.
However, it is possible to specify a different model for each treatment value explicitly.
For example, in cases where the treated are harder to model than the untreated
(because, say, of the background of those chosen to be treated),
it is possible to specify them with a more expressive model:
```
learner = {0: LinearRegression(),
1: GradientBoostingRegressor()}
std = StratifiedStandardization(learner)
std.fit(data.X, data.a, data.y)
std.learner
ind_outcomes = std.estimate_individual_outcome(data.X, data.a)
ind_outcomes.head()
std.estimate_effect(ind_outcomes[1], ind_outcomes[0])
```
## Evaluation
#### Simple evaluation
```
plots = ["common_support", "continuous_accuracy"]
evaluator = OutcomeEvaluator(std)
evaluator._regression_metrics.pop("msle")  # msle applies a log transform, which fails on our negative values
results = evaluator.evaluate_simple(data.X, data.a, data.y, plots=plots)
```
The results are shown for each treatment group separately as well as combined:
```
results.scores
```
#### Thorough evaluation
```
plots=["common_support", "continuous_accuracy", "residuals"]
evaluator = OutcomeEvaluator(Standardization(LinearRegression()))
results = evaluator.evaluate_cv(data.X, data.a, data.y,
plots=plots)
results.scores
results.models
```
# Accessing data in a DataSet
After a measurement is completed all the acquired data and metadata around it is accessible via a `DataSet` object. This notebook presents the useful methods and properties of the `DataSet` object which enable convenient access to the data, parameters information, and more. For general overview of the `DataSet` class, refer to [DataSet class walkthrough](DataSet-class-walkthrough.ipynb).
## Preparation: a DataSet from a dummy Measurement
In order to obtain a `DataSet` object, we are going to run a `Measurement` storing some dummy data (see [Dataset Context Manager](Dataset%20Context%20Manager.ipynb) notebook for more details).
```
import tempfile
import os
import numpy as np
import qcodes
from qcodes import initialise_or_create_database_at, \
load_or_create_experiment, Measurement, Parameter, \
Station
from qcodes.dataset.plotting import plot_dataset
db_path = os.path.join(tempfile.gettempdir(), 'data_access_example.db')
initialise_or_create_database_at(db_path)
exp = load_or_create_experiment(experiment_name='greco', sample_name='draco')
x = Parameter(name='x', label='Voltage', unit='V',
set_cmd=None, get_cmd=None)
t = Parameter(name='t', label='Time', unit='s',
set_cmd=None, get_cmd=None)
y = Parameter(name='y', label='Voltage', unit='V',
set_cmd=None, get_cmd=None)
y2 = Parameter(name='y2', label='Current', unit='A',
set_cmd=None, get_cmd=None)
q = Parameter(name='q', label='Qredibility', unit='$',
set_cmd=None, get_cmd=None)
meas = Measurement(exp=exp, name='fresco')
meas.register_parameter(x)
meas.register_parameter(t)
meas.register_parameter(y, setpoints=(x, t))
meas.register_parameter(y2, setpoints=(x, t))
meas.register_parameter(q) # a standalone parameter
x_vals = np.linspace(-4, 5, 50)
t_vals = np.linspace(-500, 1500, 25)
with meas.run() as datasaver:
for xv in x_vals:
for tv in t_vals:
yv = np.sin(2*np.pi*xv)*np.cos(2*np.pi*0.001*tv) + 0.001*tv
y2v = np.sin(2*np.pi*xv)*np.cos(2*np.pi*0.001*tv + 0.5*np.pi) - 0.001*tv
datasaver.add_result((x, xv), (t, tv), (y, yv), (y2, y2v))
q_val = np.max(yv) - np.min(y2v) # a meaningless value
datasaver.add_result((q, q_val))
dataset = datasaver.dataset
```
For the sake of demonstrating what kind of data we've produced, let's use `plot_dataset` to make some default plots of the data.
```
plot_dataset(dataset)
```
## DataSet identification
Before we dive into what's in the `DataSet`, let's briefly note how a `DataSet` is identified.
```
dataset.captured_run_id
dataset.exp_name
dataset.sample_name
dataset.name
```
## Parameters in the DataSet
In this section we are getting information about the parameters stored in the given `DataSet`.
> Why is that important? Let's jump into *data*!
As it turns out, "arrays of numbers" alone are not enough to reason about a given `DataSet`. Even coming up with a reasonable default plot, which is what `plot_dataset` does, requires information about the `DataSet`'s parameters. In this notebook, we first have a detailed look at what is stored about parameters and how to work with this information. After that, we will cover data access methods.
### Run description
Every dataset comes with a "description" (aka "run description"):
```
dataset.description
```
The description, an instance of `RunDescriber` object, is intended to describe the details of a dataset. In the future releases of QCoDeS it will likely be expanded. At the moment, it only contains an `InterDependencies_` object under its `interdeps` attribute - which stores all the information about the parameters of the `DataSet`.
Let's look into this `InterDependencies_` object.
### Interdependencies
`Interdependencies_` object inside the run description contains information about all the parameters that are stored in the `DataSet`. Subsections below explain how the individual information about the parameters as well as their relationships are captured in the `Interdependencies_` object.
```
interdeps = dataset.description.interdeps
interdeps
```
#### Dependencies, inferences, standalones
Information about every parameter is stored in the form of `ParamSpecBase` objects, and the relationships between parameters are captured via the `dependencies`, `inferences`, and `standalones` attributes.
For example, the dataset that we are inspecting contains no inferences, one standalone parameter `q`, and two dependent parameters `y` and `y2`, both of which depend on the independent parameters `x` and `t`:
```
interdeps.inferences
interdeps.standalones
interdeps.dependencies
```
`dependencies` is a dictionary of `ParamSpecBase` objects. The keys are dependent parameters (those which depend on other parameters), and the corresponding values in the dictionary are tuples of independent parameters that the dependent parameter in the key depends on. Colloquially, each key-value pair of the `dependencies` dictionary is sometimes referred to as a "parameter tree".
`inferences` follows the same structure as `dependencies`.
`standalones` is a set - an unordered collection of `ParamSpecBase` objects representing "standalone" parameters, the ones which do not depend on other parameters, and no other parameter depends on them.
#### ParamSpecBase objects
`ParamSpecBase` object contains all the necessary information about a given parameter, for example, its `name` and `unit`:
```
ps = list(interdeps.dependencies.keys())[0]
print(f'Parameter {ps.name!r} is in {ps.unit!r}')
```
`paramspecs` property returns a tuple of `ParamSpecBase`s for all the parameters contained in the `Interdependencies_` object:
```
interdeps.paramspecs
```
Here's a trivial example of iterating through dependent parameters of the `Interdependencies_` object and extracting information about them from the `ParamSpecBase` objects:
```
for d in interdeps.dependencies.keys():
print(f'Parameter {d.name!r} ({d.label}, {d.unit}) depends on:')
for i in interdeps.dependencies[d]:
print(f'- {i.name!r} ({i.label}, {i.unit})')
```
#### Other useful methods and properties
The `Interdependencies_` object has a few useful properties and methods which make it easy to work with it and with other `Interdependencies_` and `ParamSpecBase` objects.
For example, `non_dependencies` returns a tuple of all dependent parameters together with standalone parameters:
```
interdeps.non_dependencies
```
The `what_depends_on` method finds which parameters depend on a given parameter:
```
t_ps = interdeps.paramspecs[2]
t_deps = interdeps.what_depends_on(t_ps)
print(f'Following parameters depend on {t_ps.name!r} ({t_ps.label}, {t_ps.unit}):')
for t_dep in t_deps:
print(f'- {t_dep.name!r} ({t_dep.label}, {t_dep.unit})')
```
### Shortcuts to important parameters
For the frequently needed groups of parameters, `DataSet` object itself provides convenient methods and properties.
For example, use `dependent_parameters` property to get only dependent parameters of a given `DataSet`:
```
dataset.dependent_parameters
```
This is equivalent to:
```
tuple(dataset.description.interdeps.dependencies.keys())
```
### Note on inferences
Inferences between parameters are a feature that has not yet been used within QCoDeS. The initial concepts around `DataSet` included them in order to link parameters that are not directly dependent on each other in the way "dependencies" are. It is very likely that "inferences" will eventually be deprecated and removed.
### Note on ParamSpec's
> `ParamSpec`s originate from QCoDeS versions prior to `0.2.0` and for now are kept for backwards compatibility. `ParamSpec`s are completely superseded by `InterDependencies_`/`ParamSpecBase` bundle and will likely be deprecated in future versions of QCoDeS together with the `DataSet` methods/properties that return `ParamSpec`s objects.
In addition to the `Interdependencies_` object, `DataSet` also holds `ParamSpec` objects (not to be confused with `ParamSpecBase` objects from above). Similar to `Interdependencies_` object, the `ParamSpec` objects hold information about parameters and their interdependencies but in a different way: for a given parameter, `ParamSpec` object itself contains information on names of parameters that it depends on, while for the `InterDependencies_`/`ParamSpecBase`s this information is stored only in the `InterDependencies_` object.
`DataSet` exposes `paramspecs` property and `get_parameters()` method, both of which return `ParamSpec` objects of all the parameters of the dataset, and are not recommended for use:
```
dataset.paramspecs
dataset.get_parameters()
dataset.parameters
```
To give an example of what it takes to work with `ParamSpec` objects as opposed to `Interdependencies_` object, here's a function that one needs to write in order to find standalone `ParamSpec`s from a given list of `ParamSpec`s:
```
def get_standalone_parameters(paramspecs):
all_independents = set(spec.name
for spec in paramspecs
if len(spec.depends_on_) == 0)
used_independents = set(d for spec in paramspecs for d in spec.depends_on_)
standalones = all_independents.difference(used_independents)
return tuple(ps for ps in paramspecs if ps.name in standalones)
all_parameters = dataset.get_parameters()
standalone_parameters = get_standalone_parameters(all_parameters)
standalone_parameters
```
## Getting data from DataSet
In this section methods for retrieving the actual data from the `DataSet` are discussed.
### `get_parameter_data` - the workhorse
`DataSet` provides one main method of accessing data - `get_parameter_data`. It returns data for groups of dependent-parameter-and-its-independent-parameters in the form of a nested dictionary of `numpy` arrays:
```
dataset.get_parameter_data()
```
#### Avoid excessive calls to loading data
Note that this call actually reads the data of the `DataSet`, and for a `DataSet` with a lot of data it can take a noticeable amount of time. Hence, it is recommended to limit the number of times the same data gets loaded in order to speed up the user's code.
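A minimal sketch of this pattern, assuming the nested-dictionary layout shown above (load once, then reuse the in-memory result):
```
loaded = dataset.get_parameter_data()   # single read from the DataSet
y_values = loaded["y"]["y"]             # reuse the dictionary instead of reloading
y2_values = loaded["y2"]["y2"]
```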
#### Loading data of selected parameters
Sometimes data only for a particular parameter or parameters needs to be loaded. For example, let's assume that after inspecting the `InterDependencies_` object from `dataset.description.interdeps`, we concluded that we want to load data of the `q` parameter and the `y2` parameter. In order to do that, we just pass the names of these parameters, or their `ParamSpecBase`s to `get_parameter_data` call:
```
q_param_spec = list(interdeps.standalones)[0]
q_param_spec
y2_param_spec = interdeps.non_dependencies[-1]
y2_param_spec
dataset.get_parameter_data(q_param_spec, y2_param_spec)
```
### `get_data_as_pandas_dataframe` - for `pandas` fans
`DataSet` also provides a convenience method for `pandas` users - `get_data_as_pandas_dataframe`. It returns data for groups of dependent-parameter-and-its-independent-parameters in the form of a dictionary of `pandas.DataFrame`s:
```
dfs = dataset.get_data_as_pandas_dataframe()
# For the sake of making this article more readable,
# we will print the contents of the `dfs` dictionary
# manually by calling `.head()` on each of the DataFrames
for parameter_name, df in dfs.items():
print(f"DataFrame for parameter {parameter_name}")
print("-----------------------------")
print(f"{df.head()!r}")
print("")
```
Similar to `get_parameter_data`, `get_data_as_pandas_dataframe` also supports retrieving data for a given parameter(s), as well as `start`/`stop` arguments.
`get_data_as_pandas_dataframe` is implemented based on `get_parameter_data`, hence the performance considerations mentioned above for `get_parameter_data` apply to `get_data_as_pandas_dataframe` as well.
For more details on `get_data_as_pandas_dataframe` refer to [Working with pandas and xarray article](Working-With-Pandas-and-XArray.ipynb).
### Data extraction into "other" formats
If the user desires to export a QCoDeS `DataSet` into a format that is not readily supported by `DataSet` methods, we recommend using `get_data_as_pandas_dataframe` first and then converting the resulting `DataFrame`s into the desired format. This is because the `pandas` package already implements converting a `DataFrame` to various popular formats including comma-separated text files (`.csv`), HDF (`.hdf5`), xarray, Excel (`.xls`, `.xlsx`), and more; refer to the [Working with pandas and xarray article](Working-With-Pandas-and-XArray.ipynb) and the [`pandas` documentation](https://pandas.pydata.org/pandas-docs/stable/reference/frame.html#serialization-io-conversion) for more information.
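For example, a hedged sketch of exporting one parameter's data to a CSV file via pandas (the file name is arbitrary):
```
dfs = dataset.get_data_as_pandas_dataframe()
dfs["y"].to_csv("y_data.csv")   # any other pandas-supported format works similarly
```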
Nevertheless, `DataSet` also provides the following convenient method:
* `DataSet.write_data_to_text_file`
Refer to the docstring of this method for more information on how to use it.
### Not recommended data access methods
The following three methods of accessing data in a dataset are not recommended for use and will be deprecated soon:
* `DataSet.get_data`
* `DataSet.get_values`
* `DataSet.get_setpoints`
## Interacting with CerebralCortex Data
Cerebral Cortex is MD2K's big data cloud tool designed to support population-scale data analysis, visualization, model development, and intervention design for mobile-sensor data. It provides the ability to do machine learning model development on population scale datasets and provides interoperable interfaces for aggregation of diverse data sources.
This page provides an overview of the core Cerebral Cortex operations to familiarize you with how to discover and interact with the different sources of data that may be contained within the system.
_Note:_ While some of these examples show generated data, they are designed to function on real-world mCerebrum data; the signal generators were built to facilitate testing and evaluation of the Cerebral Cortex platform by individuals who cannot access the original datasets or do not wish to collect data before evaluating the system.
## Setting Up Environment
This notebook does not, by itself, contain the runtime environment necessary to run Cerebral Cortex. The following commands will download and install the required tools, frameworks, and datasets.
```
import importlib, sys, os
from os.path import expanduser
sys.path.insert(0, os.path.abspath('..'))
DOWNLOAD_USER_DATA=False
ALL_USERS=False #this will only work if DOWNLOAD_USER_DATA=True
IN_COLAB = 'google.colab' in sys.modules
MD2K_JUPYTER_NOTEBOOK = "MD2K_JUPYTER_NOTEBOOK" in os.environ
if (get_ipython().__class__.__name__=="ZMQInteractiveShell"): IN_JUPYTER_NOTEBOOK = True
JAVA_HOME_DEFINED = "JAVA_HOME" in os.environ
SPARK_HOME_DEFINED = "SPARK_HOME" in os.environ
PYSPARK_PYTHON_DEFINED = "PYSPARK_PYTHON" in os.environ
PYSPARK_DRIVER_PYTHON_DEFINED = "PYSPARK_DRIVER_PYTHON" in os.environ
HAVE_CEREBRALCORTEX_KERNEL = importlib.util.find_spec("cerebralcortex") is not None
SPARK_VERSION = "3.1.2"
SPARK_URL = "https://archive.apache.org/dist/spark/spark-"+SPARK_VERSION+"/spark-"+SPARK_VERSION+"-bin-hadoop2.7.tgz"
SPARK_FILE_NAME = "spark-"+SPARK_VERSION+"-bin-hadoop2.7.tgz"
CEREBRALCORTEX_KERNEL_VERSION = "3.3.14"
DATA_PATH = expanduser("~")
if DATA_PATH[:-1]!="/":
DATA_PATH+="/"
USER_DATA_PATH = DATA_PATH+"cc_data/"
if MD2K_JUPYTER_NOTEBOOK:
print("Java, Spark, and CerebralCortex-Kernel are installed and paths are already setup.")
else:
SPARK_PATH = DATA_PATH+"spark-"+SPARK_VERSION+"-bin-hadoop2.7/"
if(not HAVE_CEREBRALCORTEX_KERNEL):
print("Installing CerebralCortex-Kernel")
!pip -q install cerebralcortex-kernel==$CEREBRALCORTEX_KERNEL_VERSION
else:
print("CerebralCortex-Kernel is already installed.")
if not JAVA_HOME_DEFINED:
if not os.path.exists("/usr/lib/jvm/java-8-openjdk-amd64/") and not os.path.exists("/usr/lib/jvm/java-11-openjdk-amd64/"):
print("\nInstalling/Configuring Java")
!sudo apt update
!sudo apt-get install -y openjdk-8-jdk-headless
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64/"
elif os.path.exists("/usr/lib/jvm/java-8-openjdk-amd64/"):
print("\nSetting up Java path")
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64/"
elif os.path.exists("/usr/lib/jvm/java-11-openjdk-amd64/"):
print("\nSetting up Java path")
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-11-openjdk-amd64/"
else:
print("JAVA is already installed.")
if (IN_COLAB or IN_JUPYTER_NOTEBOOK) and not MD2K_JUPYTER_NOTEBOOK:
if SPARK_HOME_DEFINED:
print("SPARK is already installed.")
elif not os.path.exists(SPARK_PATH):
print("\nSetting up Apache Spark ", SPARK_VERSION)
!pip -q install findspark
import pyspark
spark_installation_path = os.path.dirname(pyspark.__file__)
import findspark
findspark.init(spark_installation_path)
if not os.getenv("PYSPARK_PYTHON"):
os.environ["PYSPARK_PYTHON"] = os.popen('which python3').read().replace("\n","")
if not os.getenv("PYSPARK_DRIVER_PYTHON"):
os.environ["PYSPARK_DRIVER_PYTHON"] = os.popen('which python3').read().replace("\n","")
else:
print("SPARK is already installed.")
else:
raise SystemExit("Please check your environment configuration at: https://github.com/MD2Korg/CerebralCortex-Kernel/")
if DOWNLOAD_USER_DATA:
if not os.path.exists(USER_DATA_PATH):
if ALL_USERS:
print("\nDownloading all users' data.")
!rm -rf $USER_DATA_PATH
!wget -q http://mhealth.md2k.org/images/datasets/cc_data.tar.bz2 && tar -xf cc_data.tar.bz2 -C $DATA_PATH && rm cc_data.tar.bz2
else:
print("\nDownloading a user's data.")
!rm -rf $USER_DATA_PATH
!wget -q http://mhealth.md2k.org/images/datasets/s2_data.tar.bz2 && tar -xf s2_data.tar.bz2 -C $DATA_PATH && rm s2_data.tar.bz2
else:
print("Data already exist. Please remove folder", USER_DATA_PATH, "if you want to download the data again")
```
# Import Your Own Data
mCerebrum is not the only way to collect and load data into *Cerebral Cortex*. It is possible to import your own structured datasets into the platform. This example will demonstrate how to load existing data and subsequently how to read it back from Cerebral Cortex through the same mechanisms you have been utilizing. Additionally, it demonstrates how to write a custom data transformation function to manipulate data and produce a smoothed result which can then be visualized.
## Initialize the system
```
from cerebralcortex.kernel import Kernel
CC = Kernel(cc_configs="default", study_name="default", new_study=True)
```
# Import Data
Cerebral Cortex provides a set of predefined data import routines that fit typical use cases. The most common is the CSV data parser, `csv_data_parser`. These parsers are easy to write and can be extended to support most types of data. Additionally, the data importer, `import_data`, needs to be brought into this notebook so that we can start the data import process.
The `import_data` method requires several parameters, which are discussed below; a rough example call is sketched after the parameter list.
- `cc_config`: The path to the configuration files for Cerebral Cortex; this is the same folder that you would utilize for the `Kernel` initialization
- `input_data_dir`: The path to where the data to be imported is located; in this example, `sample_data` is available in the file/folder browser on the left and you should explore the files located inside of it
- `user_id`: The universally unique identifier (UUID) that owns the data to be imported into the system
- `data_file_extension`: The type of files to be considered for import
- `data_parser`: The parser method (such as `csv_data_parser`) that defines how to interpret the data samples on a per-line basis
- `gen_report`: A simple True/False value that controls if a report is printed to the screen when complete
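Based on the parameters listed above, a call to the importer might look roughly like the sketch below. The module path, folder paths, and user UUID are placeholders and assumptions; check them against your CerebralCortex installation before use.
```
from cerebralcortex.data_importer import import_data, csv_data_parser  # assumed module path

import_data(cc_config="/path/to/cc/config/",       # same folder used for the Kernel
            input_data_dir="sample_data/",
            user_id="00000000-0000-0000-0000-000000000000",  # placeholder UUID
            data_file_extension=".csv",
            data_parser=csv_data_parser,
            gen_report=True)
```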
### Download sample data
```
sample_file = DATA_PATH+"data.csv"
!wget -q https://raw.githubusercontent.com/MD2Korg/CerebralCortex/master/jupyter_demo/sample_data/data.csv -O $sample_file
iot_stream = CC.read_csv(file_path=sample_file, stream_name="some-sample-iot-stream", column_names=["timestamp", "some_vals", "version", "user"])
```
## View Imported Data
```
iot_stream.show(4)
```
## Document Data
```
from cerebralcortex.core.metadata_manager.stream.metadata import Metadata, DataDescriptor, ModuleMetadata
stream_metadata = Metadata()
stream_metadata.set_name("iot-data-stream").set_description("This is randomly generated data for demo purposes.") \
.add_dataDescriptor(
DataDescriptor().set_name("timestamp").set_type("datetime").set_attribute("description", "UTC timestamp of data point collection.")) \
.add_dataDescriptor(
DataDescriptor().set_name("some_vals").set_type("float").set_attribute("description", \
"Random values").set_attribute("range", \
"Data is between 0 and 1.")) \
.add_dataDescriptor(
DataDescriptor().set_name("version").set_type("int").set_attribute("description", "version of the data")) \
.add_dataDescriptor(
DataDescriptor().set_name("user").set_type("string").set_attribute("description", "user id")) \
.add_module(ModuleMetadata().set_name("cerebralcortex.data_importer").set_attribute("url", "hhtps://md2k.org").set_author(
"Nasir Ali", "[email protected]"))
iot_stream.metadata = stream_metadata
```
## View Metadata
```
iot_stream.metadata
```
## How to write an algorithm
This section provides an example of how to write a simple smoothing algorithm and apply it to the data that was just imported.
### Import the necessary modules
```
from pyspark.sql.functions import pandas_udf, PandasUDFType
from pyspark.sql.types import StructField, StructType, StringType, FloatType, TimestampType, IntegerType
from pyspark.sql.functions import minute, second, mean, window
from pyspark.sql import functions as F
import numpy as np
```
### Define the Schema
This schema defines what the computation module will return to the execution context for each row or window in the datastream.
```
# column name and return data type
# acceptable data types for schema are - "null", "string", "binary", "boolean",
# "date", "timestamp", "decimal", "double", "float", "byte", "integer",
# "long", "short", "array", "map", "structfield", "struct"
schema="timestamp timestamp, some_vals double, version int, user string, vals_avg double"
```
### Write a user defined function
The user-defined function (UDF) is one of two mechanisms available for distributed data processing within the Apache Spark framework. In this case, we are computing a simple windowed average.
```
def smooth_algo(key, df):
# key contains all the grouped column values
# In this example, grouped columns are (userID, version, window{start, end})
# For example, if you want to get the start and end time of a window, you can
# get both values by calling key[2]["start"] and key[2]["end"]
some_vals_mean = df["some_vals"].mean()
df["vals_avg"] = some_vals_mean
return df
```
## Run the smoothing algorithm on imported data
The smoothing algorithm is applied to the datastream by calling the `compute` method, passing the UDF along with the return `schema`. The `windowDuration` parameter specifies the size of the time windows into which the data is segmented before the algorithm is applied. Notice that when the next cell is run, the operation completes nearly instantaneously; this is due to the lazy evaluation aspects of the Spark framework. When you then run the cell that shows the data, the algorithm is applied to the whole dataset before the results are displayed on the screen.
```
smooth_stream = iot_stream.compute(smooth_algo, schema=schema, windowDuration=10)
smooth_stream.show(truncate=False)
```
## Visualize data
These are two plots that show the original and smoothed data to visually check how the algorithm transformed the data.
```
from cerebralcortex.plotting.basic.plots import plot_timeseries
plot_timeseries(iot_stream)
plot_timeseries(smooth_stream)
```
|
github_jupyter
|
```
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from tqdm import tqdm
%matplotlib inline
from torch.utils.data import Dataset, DataLoader
import torch
import torchvision
import torch.nn as nn
import torch.optim as optim
from torch.nn import functional as F
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)
```
# Generate dataset
```
y = np.random.randint(0,10,5000)
idx= []
for i in range(10):
print(i,sum(y==i))
idx.append(y==i)
x = np.zeros((5000,2))
x[idx[0],:] = np.random.multivariate_normal(mean = [4,6.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[0]))
x[idx[1],:] = np.random.multivariate_normal(mean = [5.5,6],cov=[[0.01,0],[0,0.01]],size=sum(idx[1]))
x[idx[2],:] = np.random.multivariate_normal(mean = [4.5,4.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[2]))
# x[idx[0],:] = np.random.multivariate_normal(mean = [5,5],cov=[[0.1,0],[0,0.1]],size=sum(idx[0]))
# x[idx[1],:] = np.random.multivariate_normal(mean = [6,6],cov=[[0.1,0],[0,0.1]],size=sum(idx[1]))
# x[idx[2],:] = np.random.multivariate_normal(mean = [5.5,6.5],cov=[[0.1,0],[0,0.1]],size=sum(idx[2]))
x[idx[3],:] = np.random.multivariate_normal(mean = [3,3.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[3]))
x[idx[4],:] = np.random.multivariate_normal(mean = [2.5,5.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[4]))
x[idx[5],:] = np.random.multivariate_normal(mean = [3.5,8],cov=[[0.01,0],[0,0.01]],size=sum(idx[5]))
x[idx[6],:] = np.random.multivariate_normal(mean = [5.5,8],cov=[[0.01,0],[0,0.01]],size=sum(idx[6]))
x[idx[7],:] = np.random.multivariate_normal(mean = [7,6.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[7]))
x[idx[8],:] = np.random.multivariate_normal(mean = [6.5,4.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[8]))
x[idx[9],:] = np.random.multivariate_normal(mean = [5,3],cov=[[0.01,0],[0,0.01]],size=sum(idx[9]))
for i in range(10):
plt.scatter(x[idx[i],0],x[idx[i],1],label="class_"+str(i))
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
foreground_classes = {'class_0','class_1', 'class_2'}
background_classes = {'class_3','class_4', 'class_5', 'class_6','class_7', 'class_8', 'class_9'}
fg_class = np.random.randint(0,3)
fg_idx = np.random.randint(0,9)
a = []
for i in range(9):
if i == fg_idx:
b = np.random.choice(np.where(idx[fg_class]==True)[0],size=1)
a.append(x[b])
print("foreground "+str(fg_class)+" present at " + str(fg_idx))
else:
bg_class = np.random.randint(3,10)
b = np.random.choice(np.where(idx[bg_class]==True)[0],size=1)
a.append(x[b])
print("background "+str(bg_class)+" present at " + str(i))
a = np.concatenate(a,axis=0)
print(a.shape)
print(fg_class , fg_idx)
a.shape
np.reshape(a,(18,1))
a=np.reshape(a,(3,6))
plt.imshow(a)
desired_num = 3000
mosaic_list =[]
mosaic_label = []
fore_idx=[]
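# Each mosaic sample consists of 9 two-dimensional points: one point drawn from
# a randomly chosen foreground class (0-2) placed at a random position fg_idx,
# and the remaining 8 points drawn from random background classes (3-9).
# The label is the foreground class; fore_idx records where it was placed.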
for j in range(desired_num):
fg_class = np.random.randint(0,3)
fg_idx = np.random.randint(0,9)
a = []
for i in range(9):
if i == fg_idx:
b = np.random.choice(np.where(idx[fg_class]==True)[0],size=1)
a.append(x[b])
# print("foreground "+str(fg_class)+" present at " + str(fg_idx))
else:
bg_class = np.random.randint(3,10)
b = np.random.choice(np.where(idx[bg_class]==True)[0],size=1)
a.append(x[b])
# print("background "+str(bg_class)+" present at " + str(i))
a = np.concatenate(a,axis=0)
mosaic_list.append(np.reshape(a,(18,1)))
mosaic_label.append(fg_class)
fore_idx.append(fg_idx)
mosaic_list = np.concatenate(mosaic_list,axis=1).T
# print(mosaic_list)
print(np.shape(mosaic_label))
print(np.shape(fore_idx))
class MosaicDataset(Dataset):
"""MosaicDataset dataset."""
def __init__(self, mosaic_list, mosaic_label, fore_idx):
"""
Args:
csv_file (string): Path to the csv file with annotations.
root_dir (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.mosaic = mosaic_list
self.label = mosaic_label
self.fore_idx = fore_idx
def __len__(self):
return len(self.label)
def __getitem__(self, idx):
return self.mosaic[idx] , self.label[idx], self.fore_idx[idx]
batch = 250
msd = MosaicDataset(mosaic_list, mosaic_label , fore_idx)
train_loader = DataLoader( msd,batch_size= batch ,shuffle=True)
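# "Where" network: scores each of the 9 two-dimensional points with a small MLP
# (self.helper), turns the 9 scores into attention weights (alphas) via softmax,
# and returns the alpha-weighted average of the points along with the alphas.
# The separate "What" network defined further below classifies the averaged
# point into one of the 3 foreground classes.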
class Wherenet(nn.Module):
def __init__(self):
super(Wherenet,self).__init__()
self.linear1 = nn.Linear(2,50)
self.linear2 = nn.Linear(50,50)
self.linear3 = nn.Linear(50,1)
def forward(self,z):
x = torch.zeros([batch,9],dtype=torch.float64)
y = torch.zeros([batch,2], dtype=torch.float64)
#x,y = x.to("cuda"),y.to("cuda")
for i in range(9):
x[:,i] = self.helper(z[:,2*i:2*i+2])[:,0]
#print(k[:,0].shape,x[:,i].shape)
x = F.softmax(x,dim=1) # alphas
x1 = x[:,0]
for i in range(9):
x1 = x[:,i]
#print()
y = y+torch.mul(x1[:,None],z[:,2*i:2*i+2])
return y , x
def helper(self,x):
x = F.relu(self.linear1(x))
x = F.relu(self.linear2(x))
x = self.linear3(x)
return x
trainiter = iter(train_loader)
input1,labels1,index1 = next(trainiter)
where = Wherenet().double()
where = where
out_where,alphas = where(input1)
out_where.shape,alphas.shape
class Whatnet(nn.Module):
def __init__(self):
super(Whatnet,self).__init__()
self.linear1 = nn.Linear(2,50)
self.linear2 = nn.Linear(50,3)
# self.linear3 = nn.Linear(8,3)
def forward(self,x):
x = F.relu(self.linear1(x))
#x = F.relu(self.linear2(x))
x = self.linear2(x)
return x
what = Whatnet().double()
# what(out_where)
test_data_required = 1000
mosaic_list_test =[]
mosaic_label_test = []
fore_idx_test=[]
for j in range(test_data_required):
fg_class = np.random.randint(0,3)
fg_idx = np.random.randint(0,9)
a = []
for i in range(9):
if i == fg_idx:
b = np.random.choice(np.where(idx[fg_class]==True)[0],size=1)
a.append(x[b])
# print("foreground "+str(fg_class)+" present at " + str(fg_idx))
else:
bg_class = np.random.randint(3,10)
b = np.random.choice(np.where(idx[bg_class]==True)[0],size=1)
a.append(x[b])
# print("background "+str(bg_class)+" present at " + str(i))
a = np.concatenate(a,axis=0)
mosaic_list_test.append(np.reshape(a,(18,1)))
mosaic_label_test.append(fg_class)
fore_idx_test.append(fg_idx)
mosaic_list_test = np.concatenate(mosaic_list_test,axis=1).T
print(mosaic_list_test.shape)
test_data = MosaicDataset(mosaic_list_test,mosaic_label_test,fore_idx_test)
test_loader = DataLoader( test_data,batch_size= batch ,shuffle=False)
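# Bookkeeping counters used during training/testing:
# "focus" is the position the where-network attends to most (argmax of alphas).
# focus_true_pred_true (FTPT) counts samples where the focus is the true
# foreground position AND the class prediction is correct; the other three
# counters cover the remaining focus/prediction combinations.
# argmax_more/less_than_half track how peaked the attention distribution is.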
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
col1=[]
col2=[]
col3=[]
col4=[]
col5=[]
col6=[]
col7=[]
col8=[]
col9=[]
col10=[]
col11=[]
col12=[]
col13=[]
criterion = nn.CrossEntropyLoss()
optimizer_where = optim.SGD(where.parameters(), lr=0.01, momentum=0.9)
optimizer_what = optim.SGD(what.parameters(), lr=0.01, momentum=0.9)
nos_epochs = 250
train_loss=[]
test_loss =[]
train_acc = []
test_acc = []
loss_curi = []
for epoch in range(nos_epochs): # loop over the dataset multiple times
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
running_loss = 0.0
cnt=0
ep_lossi = []
iteration = desired_num // batch
#training data set
for i, data in enumerate(train_loader):
inputs , labels , fore_idx = data
#inputs,labels,fore_idx = inputs.to(device),labels.to(device),fore_idx.to(device)
# zero the parameter gradients
optimizer_what.zero_grad()
optimizer_where.zero_grad()
avg_inp,alphas = where(inputs)
outputs = what(avg_inp)
_, predicted = torch.max(outputs.data, 1)
loss = criterion(outputs, labels)
loss.backward()
optimizer_what.step()
optimizer_where.step()
running_loss += loss.item()
if cnt % 6 == 5: # print every 6 mini-batches
print('[%d, %5d] loss: %.3f' %(epoch + 1, cnt + 1, running_loss / 6))
ep_lossi.append(running_loss/6)
running_loss = 0.0
cnt=cnt+1
if epoch % 1 == 0:
for j in range (batch):
focus = torch.argmax(alphas[j])
if(alphas[j][focus] >= 0.5):
argmax_more_than_half +=1
else:
argmax_less_than_half +=1
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true +=1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false +=1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false +=1
loss_curi.append(np.mean(ep_lossi)) #loss per epoch
if (np.mean(ep_lossi) <= 0.01):
break
if epoch % 1 == 0:
col1.append(epoch)
col2.append(argmax_more_than_half)
col3.append(argmax_less_than_half)
col4.append(focus_true_pred_true)
col5.append(focus_false_pred_true)
col6.append(focus_true_pred_false)
col7.append(focus_false_pred_false)
#************************************************************************
#testing data set
with torch.no_grad():
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
for data in test_loader:
inputs, labels , fore_idx = data
#inputs,labels,fore_idx = inputs.to(device),labels.to(device),fore_idx.to(device)
# print(inputs.shape, labels.shape)
avg_inp,alphas = where(inputs)
outputs = what(avg_inp)
_, predicted = torch.max(outputs.data, 1)
for j in range (batch):
focus = torch.argmax(alphas[j])
if(alphas[j][focus] >= 0.5):
argmax_more_than_half +=1
else:
argmax_less_than_half +=1
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true +=1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false +=1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false +=1
col8.append(argmax_more_than_half)
col9.append(argmax_less_than_half)
col10.append(focus_true_pred_true)
col11.append(focus_false_pred_true)
col12.append(focus_true_pred_false)
col13.append(focus_false_pred_false)
torch.save(where.state_dict(),"where_model_epoch"+str(epoch)+".pt")
torch.save(what.state_dict(),"what_model_epoch"+str(epoch)+".pt")
print('Finished Training')
# torch.save(where.state_dict(),"where_model_epoch"+str(nos_epochs)+".pt")
# torch.save(what.state_dict(),"what_model_epoch"+str(epoch)+".pt")
columns = ["epochs", "argmax > 0.5" ,"argmax < 0.5", "focus_true_pred_true", "focus_false_pred_true", "focus_true_pred_false", "focus_false_pred_false" ]
df_train = pd.DataFrame()
df_test = pd.DataFrame()
df_train[columns[0]] = col1
df_train[columns[1]] = col2
df_train[columns[2]] = col3
df_train[columns[3]] = col4
df_train[columns[4]] = col5
df_train[columns[5]] = col6
df_train[columns[6]] = col7
df_test[columns[0]] = col1
df_test[columns[1]] = col8
df_test[columns[2]] = col9
df_test[columns[3]] = col10
df_test[columns[4]] = col11
df_test[columns[5]] = col12
df_test[columns[6]] = col13
df_train
plt.plot(col1,col2, label='argmax > 0.5')
plt.plot(col1,col3, label='argmax < 0.5')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.xlabel("epochs")
plt.ylabel("training data")
plt.title("On Training set")
plt.show()
plt.plot(col1,col4, label ="focus_true_pred_true ")
plt.plot(col1,col5, label ="focus_false_pred_true ")
plt.plot(col1,col6, label ="focus_true_pred_false ")
plt.plot(col1,col7, label ="focus_false_pred_false ")
plt.title("On Training set")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.xlabel("epochs")
plt.ylabel("training data")
plt.show()
df_test
plt.plot(col1,col8, label='argmax > 0.5')
plt.plot(col1,col9, label='argmax < 0.5')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.xlabel("epochs")
plt.ylabel("Testing data")
plt.title("On Testing set")
plt.show()
plt.plot(col1,col10, label ="focus_true_pred_true ")
plt.plot(col1,col11, label ="focus_false_pred_true ")
plt.plot(col1,col12, label ="focus_true_pred_false ")
plt.plot(col1,col13, label ="focus_false_pred_false ")
plt.title("On Testing set")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.xlabel("epochs")
plt.ylabel("Testing data")
plt.show()
print(x[0])
for i in range(9):
print(x[0,2*i:2*i+2])
correct = 0
total = 0
count = 0
flag = 1
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
with torch.no_grad():
for data in train_loader:
inputs , labels , fore_idx = data
#inputs,labels,fore_idx = inputs.to(device),labels.to(device),fore_idx.to(device)
# zero the parameter gradients
optimizer_what.zero_grad()
optimizer_where.zero_grad()
avg_inp,alphas = where(inputs)
outputs = what(avg_inp)
_, predicted = torch.max(outputs.data, 1)
for j in range(labels.size(0)):
count += 1
focus = torch.argmax(alphas[j])
if alphas[j][focus] >= 0.5 :
argmax_more_than_half += 1
else:
argmax_less_than_half += 1
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true += 1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false += 1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false += 1
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 3000 train images: %d %%' % ( 100 * correct / total))
print("total correct", correct)
print("total train set images", total)
print("focus_true_pred_true %d =============> FTPT : %d %%" % (focus_true_pred_true , (100 * focus_true_pred_true / total) ) )
print("focus_false_pred_true %d =============> FFPT : %d %%" % (focus_false_pred_true, (100 * focus_false_pred_true / total) ) )
print("focus_true_pred_false %d =============> FTPF : %d %%" %( focus_true_pred_false , ( 100 * focus_true_pred_false / total) ) )
print("focus_false_pred_false %d =============> FFPF : %d %%" % (focus_false_pred_false, ( 100 * focus_false_pred_false / total) ) )
print("argmax_more_than_half ==================> ",argmax_more_than_half)
print("argmax_less_than_half ==================> ",argmax_less_than_half)
print(count)
print("="*100)
correct = 0
total = 0
count = 0
flag = 1
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
with torch.no_grad():
for data in test_loader:
inputs , labels , fore_idx = data
#inputs,labels,fore_idx = inputs.to(device),labels.to(device),fore_idx.to(device)
# zero the parameter gradients
optimizer_what.zero_grad()
optimizer_where.zero_grad()
avg_inp,alphas = where(inputs)
outputs = what(avg_inp)
_, predicted = torch.max(outputs.data, 1)
for j in range(labels.size(0)):
focus = torch.argmax(alphas[j])
if alphas[j][focus] >= 0.5 :
argmax_more_than_half += 1
else:
argmax_less_than_half += 1
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true += 1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false += 1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false += 1
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 1000 test images: %d %%' % (
100 * correct / total))
print("total correct", correct)
print("total train set images", total)
print("focus_true_pred_true %d =============> FTPT : %d %%" % (focus_true_pred_true , (100 * focus_true_pred_true / total) ) )
print("focus_false_pred_true %d =============> FFPT : %d %%" % (focus_false_pred_true, (100 * focus_false_pred_true / total) ) )
print("focus_true_pred_false %d =============> FTPF : %d %%" %( focus_true_pred_false , ( 100 * focus_true_pred_false / total) ) )
print("focus_false_pred_false %d =============> FFPF : %d %%" % (focus_false_pred_false, ( 100 * focus_false_pred_false / total) ) )
print("argmax_more_than_half ==================> ",argmax_more_than_half)
print("argmax_less_than_half ==================> ",argmax_less_than_half)
```
|
github_jupyter
|
```
from fastai.vision.all import *
from moving_mnist.models.conv_rnn import *
from moving_mnist.data import *
if torch.cuda.is_available():
torch.cuda.set_device(1)
print(torch.cuda.get_device_name())
```
# Train Example:
We will predict:
- `n_in`: 5 images
- `n_out`: 5 images
- `n_obj`: up to 3 objects
```
DATA_PATH = Path.cwd()/'data'
ds = MovingMNIST(DATA_PATH, n_in=5, n_out=5, n_obj=[1,2,3])
train_tl = TfmdLists(range(7500), ImageTupleTransform(ds))
valid_tl = TfmdLists(range(100), ImageTupleTransform(ds))
dls = DataLoaders.from_dsets(train_tl, valid_tl, bs=32,
after_batch=[Normalize.from_stats(imagenet_stats[0][0],
imagenet_stats[1][0])]).cuda()
loss_func = StackLoss(MSELossFlat())
```
Left: Input, Right: Target
```
dls.show_batch()
b = dls.one_batch()
explode_types(b)
```
`StackUnstack` takes care of stacking the list of images into a fat tensor and unstacking them at the end; we will need to modify our loss function to take a list of tensors as input and target.
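To make the "list of tensors" requirement concrete, here is a minimal sketch of such a wrapper; `ListStackLoss` is a hypothetical name and is not the library's actual `StackLoss` implementation.
```
# Minimal sketch (assumption: not the library's actual StackLoss) of a loss
# wrapper that stacks per-frame prediction/target lists before a base loss.
import torch

class ListStackLoss:
    def __init__(self, base_loss):
        self.base_loss = base_loss          # e.g. an elementwise/flattened MSE
    def __call__(self, preds, targs):
        # preds and targs are lists/tuples of tensors, one per output frame
        return self.base_loss(torch.stack(preds, dim=1), torch.stack(targs, dim=1))

# Usage sketch: loss = ListStackLoss(torch.nn.MSELoss())
```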
## Simple model
```
model = StackUnstack(SimpleModel())
```
As the `ImageSeq` is a `tuple` of images, we will need to stack them to compute loss.
```
learn = Learner(dls, model, loss_func=loss_func, cbs=[]).to_fp16()
```
I have a weird bug: if I use `nn.LeakyReLU` after running `learn.lr_find()`, the model does not train (the loss gets stuck).
```
x,y = dls.one_batch()
learn.lr_find()
learn.fit_one_cycle(10, 1e-4)
p,t = learn.get_preds()
```
As you can see, the result is a list of 5 tensors with 100 samples each.
```
len(p), p[0].shape
def show_res(t, idx):
im_seq = ImageSeq.create([t[i][idx] for i in range(5)])
im_seq.show(figsize=(8,4));
k = random.randint(0,100)
show_res(t,k)
show_res(p,k)
```
## A bigger Decoder
We will pass:
- `blur`: to use blur on the upsampling path (this is done by using a pooling layer and replication)
- `attn`: to include a self-attention layer in the decoder
```
model2 = StackUnstack(SimpleModel(szs=[16,64,96], act=partial(nn.LeakyReLU, 0.2, inplace=True),blur=True, attn=True))
```
We have to reduce batch size as the self attention layer is heavy.
```
dls = DataLoaders.from_dsets(train_tl, valid_tl, bs=8,
after_batch=[Normalize.from_stats(imagenet_stats[0][0],
imagenet_stats[1][0])]).cuda()
learn2 = Learner(dls, model2, loss_func=loss_func, cbs=[]).to_fp16()
learn2.lr_find()
learn2.fit_one_cycle(10, 1e-4)
p,t = learn2.get_preds()
```
As you can see, the result is a list of 5 tensors with 100 samples each.
```
len(p), p[0].shape
def show_res(t, idx):
im_seq = ImageSeq.create([t[i][idx] for i in range(5)])
im_seq.show(figsize=(8,4));
k = random.randint(0,100)
show_res(t,k)
show_res(p,k)
```
|
github_jupyter
|
```
from __future__ import print_function
import keras
from keras.models import Sequential, Model, load_model
import keras.backend as K
import tensorflow as tf
import pandas as pd
import os
import pickle
import numpy as np
import scipy.sparse as sp
import scipy.io as spio
import isolearn.io as isoio
from scipy.stats import pearsonr
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import matplotlib.colors as colors
import matplotlib as mpl
from matplotlib.text import TextPath
from matplotlib.patches import PathPatch, Rectangle
from matplotlib.font_manager import FontProperties
from matplotlib import gridspec
from matplotlib.ticker import FormatStrFormatter
from aparent.data.aparent_data_plasmid_legacy import load_data
from analyze_aparent_conv_layers_helpers import *
#Load random MPRA data
file_path = '../data/random_mpra_legacy/combined_library/processed_data_lifted/'
plasmid_gens = load_data(batch_size=32, valid_set_size=1000, test_set_size=40000, kept_libraries=[22], canonical_pas=True, no_dse_canonical_pas=True, file_path=file_path)
#Load legacy APARENT model (lifted from theano)
model_name = 'aparent_theano_legacy_30_31_34'#_pasaligned
save_dir = os.path.join(os.getcwd(), '../saved_models/legacy_models')
model_path = os.path.join(save_dir, model_name + '.h5')
aparent_model = load_model(model_path)
#Create a new model that outputs the conv layer activation maps together with the isoform proportion
conv_layer_iso_model = Model(
inputs = aparent_model.inputs,
outputs = [
aparent_model.get_layer('iso_conv_layer_1').output,
aparent_model.get_layer('iso_out_layer_1').output
]
)
#Predict from test data generator
iso_conv_1_out, iso_pred = conv_layer_iso_model.predict_generator(plasmid_gens['test'], workers=4, use_multiprocessing=True)
iso_conv_1_out = np.reshape(iso_conv_1_out, (iso_conv_1_out.shape[0], iso_conv_1_out.shape[1], iso_conv_1_out.shape[2]))
iso_pred = np.ravel(iso_pred[:, 1])
logodds_pred = np.log(iso_pred / (1. - iso_pred))
#Retrieve one-hot input sequences
onehot_seqs = np.concatenate([plasmid_gens['test'][i][0][0][:, 0, :, :] for i in range(len(plasmid_gens['test']))], axis=0)
#Mask for simple library (Alien1)
mask_seq = ('X' * 4) + ('N' * (45 + 6 + 45 + 6 + 45)) + ('X' * 27)
for j in range(len(mask_seq)) :
if mask_seq[j] == 'X' :
iso_conv_1_out[:, :, j] = 0
#Layer 1: Compute Max Activation Correlation maps and PWMs
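# For each conv filter k: find the position of maximal activation in every test
# sequence and, when that activation is positive, accumulate the corresponding
# one-hot subsequence into a position weight matrix (PWM); pwms_top does the
# same using only the n_samples most strongly activated sequences.
# r_vals then stores, per filter and position, the Pearson correlation between
# the filter activation and the predicted isoform log-odds.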
filter_width = 8
n_samples = 5000
pwms = np.zeros((iso_conv_1_out.shape[1], filter_width, 4))
pwms_top = np.zeros((iso_conv_1_out.shape[1], filter_width, 4))
for k in range(iso_conv_1_out.shape[1]) :
for i in range(iso_conv_1_out.shape[0]) :
max_j = np.argmax(iso_conv_1_out[i, k, :])
if iso_conv_1_out[i, k, max_j] > 0 :
pwms[k, :, :] += onehot_seqs[i, max_j: max_j+filter_width, :]
sort_index = np.argsort(np.max(iso_conv_1_out[:, k, :], axis=-1))[::-1]
for i in range(n_samples) :
max_j = np.argmax(iso_conv_1_out[sort_index[i], k, :])
if iso_conv_1_out[sort_index[i], k, max_j] > 0 :
pwms_top[k, :, :] += onehot_seqs[sort_index[i], max_j: max_j+filter_width, :]
pwms[k, :, :] /= np.expand_dims(np.sum(pwms[k, :, :], axis=-1), axis=-1)
pwms_top[k, :, :] /= np.expand_dims(np.sum(pwms_top[k, :, :], axis=-1), axis=-1)
r_vals = np.zeros((iso_conv_1_out.shape[1], iso_conv_1_out.shape[2]))
for k in range(iso_conv_1_out.shape[1]) :
for j in range(iso_conv_1_out.shape[2]) :
if np.any(iso_conv_1_out[:, k, j] > 0.) :
r_val, _ = pearsonr(iso_conv_1_out[:, k, j], logodds_pred)
r_vals[k, j] = r_val if not np.isnan(r_val) else 0
#Plot Max Activation PWMs and Correlation maps
n_filters_per_row = 5
n_rows = int(pwms.shape[0] / n_filters_per_row)
k = 0
for row_i in range(n_rows) :
f, ax = plt.subplots(2, n_filters_per_row, figsize=(2.5 * n_filters_per_row, 2), gridspec_kw = {'height_ratios':[3, 1]})
for kk in range(n_filters_per_row) :
plot_pwm_iso_logo(pwms_top, r_vals, k, ax[0, kk], ax[1, kk], seq_start=24, seq_end=95, cse_start=49)
k += 1
plt.tight_layout()
plt.show()
#Create a new model that outputs the conv layer activation maps together with the isoform proportion
conv_layer_iso_model = Model(
inputs = aparent_model.inputs,
outputs = [
aparent_model.get_layer('iso_conv_layer_2').output,
aparent_model.get_layer('iso_out_layer_1').output
]
)
#Predict from test data generator
iso_conv_2_out, iso_pred = conv_layer_iso_model.predict_generator(plasmid_gens['test'], workers=4, use_multiprocessing=True)
iso_conv_2_out = np.reshape(iso_conv_2_out, (iso_conv_2_out.shape[0], iso_conv_2_out.shape[1], iso_conv_2_out.shape[2]))
iso_pred = np.ravel(iso_pred[:, 1])
logodds_pred = np.log(iso_pred / (1. - iso_pred))
#Retrieve one-hot input sequences
onehot_seqs = np.concatenate([plasmid_gens['test'][i][0][0][:, 0, :, :] for i in range(len(plasmid_gens['test']))], axis=0)
#Layer 2: Compute Max Activation Correlation maps and PWMs
filter_width = 19
n_samples = 200
pwms = np.zeros((iso_conv_2_out.shape[1], filter_width, 4))
pwms_top = np.zeros((iso_conv_2_out.shape[1], filter_width, 4))
for k in range(iso_conv_2_out.shape[1]) :
for i in range(iso_conv_2_out.shape[0]) :
max_j = np.argmax(iso_conv_2_out[i, k, :])
if iso_conv_2_out[i, k, max_j] > 0 :
pwms[k, :, :] += onehot_seqs[i, max_j * 2: max_j * 2 + filter_width, :]
sort_index = np.argsort(np.max(iso_conv_2_out[:, k, :], axis=-1))[::-1]
for i in range(n_samples) :
max_j = np.argmax(iso_conv_2_out[sort_index[i], k, :])
if iso_conv_2_out[sort_index[i], k, max_j] > 0 :
pwms_top[k, :, :] += onehot_seqs[sort_index[i], max_j * 2: max_j * 2 + filter_width, :]
pwms[k, :, :] /= np.expand_dims(np.sum(pwms[k, :, :], axis=-1), axis=-1)
pwms_top[k, :, :] /= np.expand_dims(np.sum(pwms_top[k, :, :], axis=-1), axis=-1)
r_vals = np.zeros((iso_conv_2_out.shape[1], iso_conv_2_out.shape[2]))
for k in range(iso_conv_2_out.shape[1]) :
for j in range(iso_conv_2_out.shape[2]) :
if np.any(iso_conv_2_out[:, k, j] > 0.) :
r_val, _ = pearsonr(iso_conv_2_out[:, k, j], logodds_pred)
r_vals[k, j] = r_val if not np.isnan(r_val) else 0
#Plot Max Activation PWMs and Correlation maps
n_filters_per_row = 5
n_rows = int(pwms.shape[0] / n_filters_per_row)
k = 0
for row_i in range(n_rows) :
f, ax = plt.subplots(2, n_filters_per_row, figsize=(3 * n_filters_per_row, 2), gridspec_kw = {'height_ratios':[3, 1.5]})
for kk in range(n_filters_per_row) :
plot_pwm_iso_logo(pwms_top, r_vals, k, ax[0, kk], ax[1, kk], seq_start=12, seq_end=44)
k += 1
plt.tight_layout()
plt.show()
```
|
github_jupyter
|
***
***
# Introduction to Gradient Descent
The Idea Behind Gradient Descent
***
***
<img src='./img/stats/gradient_descent.gif' align = "middle" width = '400px'>
<img align="left" style="padding-right:10px;" width ="400px" src="./img/stats/gradient2.png">
**How do you find the fastest way down the mountain?**
- Suppose the mountain is covered in thick fog, so the path down cannot be seen;
- Assume the fall won't kill you!
- You can only use information about your immediate surroundings to find a path down.
- Take your current position as the reference point, find the steepest direction at that position, and walk downhill in that direction.
<img style="padding-right:10px;" width ="500px" src="./img/stats/gradient.png" align = 'right'>
**Gradient is the vector of partial derivatives**
One approach to maximizing a function is to
- pick a random starting point,
- compute the gradient,
- take a small step in the direction of the gradient, and
- repeat with a new starting point.
<img src='./img/stats/gd.webp' width = '700' align = 'middle'>
Let's represent the parameters as $\Theta$, the learning rate as $\alpha$, and the gradient as $\bigtriangledown J(\Theta)$.
Finding the best model is an optimization problem that either
- “minimizes the error of the model”
- “maximizes the likelihood of the data.”
We’ll frequently need to maximize (or minimize) functions.
- to find the input vector v that produces the largest (or smallest) possible value.
# Mathematics behind Gradient Descent
A simple mathematical intuition behind one of the commonly used optimisation algorithms in Machine Learning.
https://www.douban.com/note/713353797/
The cost or loss function:
$$Cost = \frac{1}{N} \sum_{i = 1}^N (Y' -Y)^2$$
<img src='./img/stats/x2.webp' width = '700' align = 'center'>
Parameters with small changes:
$$ m_1 = m_0 - \delta m, b_1 = b_0 - \delta b$$
The cost function J is a function of m and b:
$$J_{m, b} = \frac{1}{N} \sum_{i = 1}^N (Y' -Y)^2 = \frac{1}{N} \sum_{i = 1}^N Error_i^2$$
$$\frac{\partial J}{\partial m} = 2 Error \frac{\partial}{\partial m}Error$$
$$\frac{\partial J}{\partial b} = 2 Error \frac{\partial}{\partial b}Error$$
Let's fit the data with linear regression:
$$\frac{\partial}{\partial m}Error = \frac{\partial}{\partial m}(Y' - Y) = \frac{\partial}{\partial m}(mX + b - Y)$$
Since $X, b, Y$ are constant:
$$\frac{\partial}{\partial m}Error = X$$
$$\frac{\partial}{\partial b}Error = \frac{\partial}{\partial b}(Y' - Y) = \frac{\partial}{\partial b}(mX + b - Y)$$
Since $X, m, Y$ are constant:
$$\frac{\partial}{\partial b}Error = 1$$
Thus:
$$\frac{\partial J}{\partial m} = 2 * Error * X$$
$$\frac{\partial J}{\partial b} = 2 * Error$$
Let's drop the constant 2 and multiply by the learning rate $\alpha$, which determines how large a step to take, to get the update amounts:
$$\delta m = Error * X * \alpha$$
$$\delta b = Error * \alpha$$
Since $ m_1 = m_0 - \delta m, b_1 = b_0 - \delta b$:
$$ m_1 = m_0 - Error * X * \alpha$$
$$b_1 = b_0 - Error * \alpha$$
**Notice** that the intercept $b$ can be viewed as the coefficient for a constant feature $X = 1$. Thus, the above two equations are in essence the same.
Let's represent the parameters as $\Theta$, the learning rate as $\alpha$, and the gradient as $\bigtriangledown J(\Theta)$; then we have:
$$\Theta_1 = \Theta_0 - \alpha \bigtriangledown J(\Theta)$$
<img src='./img/stats/gd.webp' width = '800' align = 'center'>
Hence, to solve for the gradient, we iterate through our data points using our new $m$ and $b$ values and compute the partial derivatives.
This new gradient tells us
- the slope of our cost function at our current position
- the direction we should move to update our parameters.
- The size of our update is controlled by the learning rate.
```
import numpy as np
# Size of the points dataset.
m = 20
# Points x-coordinate and dummy value (x0, x1).
X0 = np.ones((m, 1))
X1 = np.arange(1, m+1).reshape(m, 1)
X = np.hstack((X0, X1))
# Points y-coordinate
y = np.array([3, 4, 5, 5, 2, 4, 7, 8, 11, 8, 12,
11, 13, 13, 16, 17, 18, 17, 19, 21]).reshape(m, 1)
# The Learning Rate alpha.
alpha = 0.01
def error_function(theta, X, y):
'''Error function J definition.'''
diff = np.dot(X, theta) - y
return (1./(2*m)) * np.dot(np.transpose(diff), diff)
def gradient_function(theta, X, y):
'''Gradient of the function J definition.'''
diff = np.dot(X, theta) - y
return (1./m) * np.dot(np.transpose(X), diff)
def gradient_descent(X, y, alpha):
'''Perform gradient descent.'''
theta = np.array([1, 1]).reshape(2, 1)
gradient = gradient_function(theta, X, y)
while not np.all(np.absolute(gradient) <= 1e-5):
theta = theta - alpha * gradient
gradient = gradient_function(theta, X, y)
return theta
# source:https://www.jianshu.com/p/c7e642877b0e
optimal = gradient_descent(X, y, alpha)
print('Optimal parameters Theta:', optimal[0][0], optimal[1][0])
print('Error function:', error_function(optimal, X, y)[0,0])
```
# This is the End!
# Estimating the Gradient
If f is a function of one variable, its derivative at a point x measures how f(x) changes when we make a very small change to x.
> It is defined as the limit of the difference quotients:
A difference quotient is the change in the dependent variable divided by the corresponding change in the independent variable.
```
def difference_quotient(f, x, h):
return (f(x + h) - f(x)) / h
```
For many functions it’s easy to exactly calculate derivatives.
For example, the square function `square(x) = x * x` has the derivative `derivative(x) = 2 * x`:
```
def square(x):
return x * x
def derivative(x):
return 2 * x
derivative_estimate = lambda x: difference_quotient(square, x, h=0.00001)
def sum_of_squares(v):
"""computes the sum of squared elements in v"""
return sum(v_i ** 2 for v_i in v)
# plot to show they're basically the same
import matplotlib.pyplot as plt
x = range(-10,10)
plt.plot(x, list(map(derivative, x)), 'rx') # red x
plt.plot(x, list(map(derivative_estimate, x)), 'b+') # blue +
plt.show()
```
When f is a function of many variables, it has multiple partial derivatives.
```
def partial_difference_quotient(f, v, i, h):
# add h to just the i-th element of v
w = [v_j + (h if j == i else 0)
for j, v_j in enumerate(v)]
return (f(w) - f(v)) / h
def estimate_gradient(f, v, h=0.00001):
return [partial_difference_quotient(f, v, i, h)
for i, _ in enumerate(v)]
```
# Using the Gradient
```
def step(v, direction, step_size):
"""move step_size in the direction from v"""
return [v_i + step_size * direction_i
for v_i, direction_i in zip(v, direction)]
def sum_of_squares_gradient(v):
return [2 * v_i for v_i in v]
from collections import Counter
from linear_algebra import distance, vector_subtract, scalar_multiply
from functools import reduce
import math, random
print("using the gradient")
# generate 3 numbers
v = [random.randint(-10,10) for i in range(3)]
print(v)
tolerance = 0.0000001
n = 0
while True:
gradient = sum_of_squares_gradient(v) # compute the gradient at v
if n%50 ==0:
print(v, sum_of_squares(v))
next_v = step(v, gradient, -0.01) # take a negative gradient step
if distance(next_v, v) < tolerance: # stop if we're converging
break
v = next_v # continue if we're not
n += 1
print("minimum v", v)
print("minimum value", sum_of_squares(v))
```
# Choosing the Right Step Size
Although the rationale for moving against the gradient is clear,
- how far to move is not.
- Indeed, choosing the right step size is more of an art than a science.
Methods:
1. Using a fixed step size
1. Gradually shrinking the step size over time
1. At each step, choosing the step size that minimizes the value of the objective function
```
step_sizes = [100, 10, 1, 0.1, 0.01, 0.001, 0.0001, 0.00001]
```
It is possible that certain step sizes will result in invalid inputs for our function.
So we’ll need to create a “safe apply” function
- returns infinity for invalid inputs:
- which should never be the minimum of anything
```
def safe(f):
"""define a new function that wraps f and return it"""
def safe_f(*args, **kwargs):
try:
return f(*args, **kwargs)
except:
return float('inf') # this means "infinity" in Python
return safe_f
```
# Putting It All Together
- **target_fn** that we want to minimize
- **gradient_fn**.
For example, the `target_fn` could represent the errors in a model as a function of its parameters, and we need to choose a starting value `theta_0` for those parameters.
```
def minimize_batch(target_fn, gradient_fn, theta_0, tolerance=0.000001):
"""use gradient descent to find theta that minimizes target function"""
step_sizes = [100, 10, 1, 0.1, 0.01, 0.001, 0.0001, 0.00001]
theta = theta_0 # set theta to initial value
target_fn = safe(target_fn) # safe version of target_fn
value = target_fn(theta) # value we're minimizing
while True:
gradient = gradient_fn(theta)
next_thetas = [step(theta, gradient, -step_size)
for step_size in step_sizes]
# choose the one that minimizes the error function
next_theta = min(next_thetas, key=target_fn)
next_value = target_fn(next_theta)
# stop if we're "converging"
if abs(value - next_value) < tolerance:
return theta
else:
theta, value = next_theta, next_value
# minimize_batch"
v = [random.randint(-10,10) for i in range(3)]
v = minimize_batch(sum_of_squares, sum_of_squares_gradient, v)
print("minimum v", v)
print("minimum value", sum_of_squares(v))
```
Sometimes we’ll instead want to maximize a function, which we can do by minimizing its negative
```
def negate(f):
"""return a function that for any input x returns -f(x)"""
return lambda *args, **kwargs: -f(*args, **kwargs)
def negate_all(f):
"""the same when f returns a list of numbers"""
return lambda *args, **kwargs: [-y for y in f(*args, **kwargs)]
def maximize_batch(target_fn, gradient_fn, theta_0, tolerance=0.000001):
return minimize_batch(negate(target_fn),
negate_all(gradient_fn),
theta_0,
tolerance)
```
Using the batch approach, each gradient step requires us to make a prediction and compute the gradient for the whole data set, which makes each step take a long time.
Error functions are additive
- The predictive error on the whole data set is simply the sum of the predictive errors for each data point.
When this is the case, we can instead apply a technique called **stochastic gradient descent**
- which computes the gradient (and takes a step) for only one point at a time.
- It cycles over our data repeatedly until it reaches a stopping point.
# Stochastic Gradient Descent
During each cycle, we’ll want to iterate through our data in a random order:
```
def in_random_order(data):
"""generator that returns the elements of data in random order"""
indexes = [i for i, _ in enumerate(data)] # create a list of indexes
random.shuffle(indexes) # shuffle them
for i in indexes: # return the data in that order
yield data[i]
```
This approach avoids circling around near a minimum forever
- whenever we stop getting improvements we’ll decrease the step size and eventually quit.
```
def minimize_stochastic(target_fn, gradient_fn, x, y, theta_0, alpha_0=0.01):
data = list(zip(x, y))
theta = theta_0 # initial guess
alpha = alpha_0 # initial step size
min_theta, min_value = None, float("inf") # the minimum so far
iterations_with_no_improvement = 0
# if we ever go 100 iterations with no improvement, stop
while iterations_with_no_improvement < 100:
value = sum( target_fn(x_i, y_i, theta) for x_i, y_i in data )
if value < min_value:
# if we've found a new minimum, remember it
# and go back to the original step size
min_theta, min_value = theta, value
iterations_with_no_improvement = 0
alpha = alpha_0
else:
# otherwise we're not improving, so try shrinking the step size
iterations_with_no_improvement += 1
alpha *= 0.9
# and take a gradient step for each of the data points
for x_i, y_i in in_random_order(data):
gradient_i = gradient_fn(x_i, y_i, theta)
theta = vector_subtract(theta, scalar_multiply(alpha, gradient_i))
return min_theta
def maximize_stochastic(target_fn, gradient_fn, x, y, theta_0, alpha_0=0.01):
return minimize_stochastic(negate(target_fn),
negate_all(gradient_fn),
x, y, theta_0, alpha_0)
print("using minimize_stochastic_batch")
x = list(range(101))
y = [3*x_i + random.randint(-10, 20) for x_i in x]
theta_0 = random.randint(-10,10)
v = minimize_stochastic(sum_of_squares, sum_of_squares_gradient, x, y, theta_0)
print("minimum v", v)
print("minimum value", sum_of_squares(v))
```
Scikit-learn has a Stochastic Gradient Descent module http://scikit-learn.org/stable/modules/sgd.html
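For example, a minimal sketch (with arbitrary parameter choices) of fitting noisy linear data, similar to the data above, with scikit-learn's `SGDRegressor`:
```
# Fit noisy linear data with scikit-learn's stochastic gradient descent regressor
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(0)
X = np.arange(101, dtype=float).reshape(-1, 1)
y = 3 * X.ravel() + rng.integers(-10, 20, size=101)

# Scale inputs: stochastic gradient descent is sensitive to feature scale
model = SGDRegressor(max_iter=1000, tol=1e-6)
model.fit(X / X.max(), y)
print("slope (on scaled x):", model.coef_, "intercept:", model.intercept_)
```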
|
github_jupyter
|
# What is probability? A simulated introduction
```
#Import packages
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
sns.set()
```
## Learning Objectives of Part 1
- To have an understanding of what "probability" means, in both Bayesian and Frequentist terms;
- To be able to simulate probability distributions that model real-world phenomena;
- To understand how probability distributions relate to data-generating **stories**.
## Probability
> To the pioneers such as Bernoulli, Bayes and Laplace, a probability represented a _degree-of-belief_ or plausibility; how much they thought that something was true, based on the evidence at hand. To the 19th century scholars, however, this seemed too vague and subjective an idea to be the basis of a rigorous mathematical theory. So they redefined probability as the _long-run relative frequency_ with which an event occurred, given (infinitely) many repeated (experimental) trials. Since frequencies can be measured, probability was now seen as an objective tool for dealing with _random_ phenomena.
-- _Data Analysis, A Bayesian Tutorial_, Sivia & Skilling (p. 9)
What type of random phenomena are we talking about here? One example is:
- Knowing that a website has a click-through rate (CTR) of 10%, we can calculate the probability of having 10 people, 9 people, 8 people ... and so on click through, upon drawing 10 people randomly from the population;
- But given the data of how many people click through, how can we calculate the CTR? And how certain can we be of this CTR? Or how likely is a particular CTR?
Science mostly asks questions of the second form above & Bayesian thinking provides a wonderful framework for answering such questions. Essentially Bayes' Theorem gives us a way of moving from the probability of the data given the model (written as $P(data|model)$) to the probability of the model given the data ($P(model|data)$).
We'll first explore questions of the 1st type using simulation: knowing the model, what is the probability of seeing certain data?
## Simulating probabilities
* Let's say that a website has a CTR of 50%, i.e. that 50% of people click through. If we picked 1000 people at random from the population, how likely would it be to find that a certain number of people click?
We can simulate this using `numpy`'s random number generator.
To do so, first note we can use `np.random.rand()` to randomly select floats between 0 and 1 (known as the _uniform distribution_). Below, we do so and plot a histogram:
```
# Draw 1,000 samples from uniform & plot results
x = np.random.rand(1000)
plt.hist(x, bins=20);
```
To then simulate the sampling from the population, we check whether each float was greater or less than 0.5. If less than or equal to 0.5, we say the person clicked.
```
# Computed how many people click
clicks = x <= 0.5
n_clicks = clicks.sum()
f"Number of clicks = {n_clicks}"
```
The proportion of people who clicked can be calculated as the total number of clicks over the number of people:
```
# Computed proportion of people who clicked
f"Proportion who clicked = {n_clicks/len(clicks)}"
```
**Discussion**: Did you get the same answer as your neighbour? If you did, why? If not, why not?
**Up for discussion:** Let's say that all you had was this data and you wanted to figure out the CTR (probability of clicking).
* What would your estimate be?
* Bonus points: how confident would you be of your estimate?
**Note:** Although, in the above, we have described _probability_ in two ways, we have not described it mathematically. We're not going to do so rigorously here, but we will say that _probability_ defines a function from the space of possibilities (in the above, the interval $[0,1]$) that describes how likely it is to get a particular point or region in that space. Mike Betancourt has an elegant [Introduction to Probability Theory (For Scientists and Engineers)](https://betanalpha.github.io/assets/case_studies/probability_theory.html) that I can recommend.
### Hands-on: clicking
Use random sampling to simulate how many people click when the CTR is 0.7. How many click? What proportion?
```
# Solution
clicks = x <= 0.7
n_clicks = clicks.sum()
print(f"Number of clicks = {n_clicks}")
print(f"Proportion who clicked = {n_clicks/len(clicks)}")
```
_Discussion point_: This model is known as the biased coin flip.
- Can you see why?
- Can it be used to model other phenomena?
### Galapagos finch beaks
You can also calculate such proportions with real-world data. Here we import a dataset of Finch beak measurements from the Galápagos islands. You can find the data [here](https://datadryad.org/resource/doi:10.5061/dryad.9gh90).
```
# Import and view head of data
df_12 = pd.read_csv('../data/finch_beaks_2012.csv')
df_12.head()
# Store lengths in a pandas series
lengths = df_12['blength']
```
* What proportion of birds have a beak length > 10 ?
```
p = sum(lengths > 10) / len(lengths)
p
```
**Note:** This is the proportion of birds that have beak length $>10$ in your empirical data, not the probability that any bird drawn from the population will have beak length $>10$.
### Proportion: A proxy for probability
As stated above, we have calculated a proportion, not a probability. As a proxy for the probability, we can simulate drawing random samples (with replacement) from the data seeing how many lengths are > 10 and calculating the proportion (commonly referred to as [hacker statistics](https://speakerdeck.com/jakevdp/statistics-for-hackers)):
```
n_samples = 10000
sum(np.random.choice(lengths, n_samples, replace=True) > 10) / n_samples
```
### Another way to simulate coin-flips
In the above, you have used the uniform distribution to sample from a series of biased coin flips. I want to introduce you to another distribution that you can also use to do so: the **binomial distribution**.
The **binomial distribution** with parameters $n$ and $p$ is defined as the probability distribution of
> the number of heads seen when flipping a coin $n$ times when with $p(heads)=p$.
**Note** that this distribution essentially tells the **story** of a general model in the following sense: if we believe that the underlying process generating the observed data has a binary outcome (affected by disease or not, head or not, 0 or 1, clicked through or not), and that one of the two outcomes occurs with probability $p$, then the probability of seeing a particular outcome is given by the **binomial distribution** with parameters $n$ and $p$.
Any process that matches the coin flip story is a Binomial process (note that you'll see such coin flips also referred to as Bernoulli trials in the literature). So we can also formulate the story of the Binomial distribution as
> the number $r$ of successes in $n$ Bernoulli trials with probability $p$ of success, is Binomially distributed.
We'll now use the binomial distribution to answer the same question as above:
* If P(heads) = 0.7 and you flip the coin ten times, how many heads will come up?
We'll also set the seed to ensure reproducible results.
```
# Set seed
np.random.seed(42)
# Simulate one run of flipping the biased coin 10 times
np.random.binomial(10,0.7)
```
### Simulating many times to get the distribution
In the above, we have simulated the scenario once. But this only tells us one potential outcome. To see how likely it is to get $n$ heads, for example, we need to simulate it a lot of times and check what proportion ended up with $n$ heads.
```
# Simulate 10,000 runs of flipping the biased coin 10 times
x = np.random.binomial(10, 0.7, size=10_000)
# Plot normalized histogram of results
plt.hist(x, density=True, bins=10);
```
* Group chat: what do you see in the above?
### Hands-on: Probabilities
- If I flip a biased coin ($P(H)=0.3$) 20 times, what is the probability of 5 or more heads?
```
# Calculate the probability of 5 or more heads for p=0.3
sum(np.random.binomial(20, 0.3, 10_000) >= 5) / 10_000
```
- If I flip a fair coin 20 times, what is the probability of 5 or more heads?
```
# Calculate the probability of 5 or more heads for p=0.5
sum(np.random.binomial(20, 0.5, 10_000) >= 5) / 10_000
```
- Plot the normalized histogram of number of heads of the following experiment: flipping a fair coin 10 times.
```
# Plot histogram
x = np.random.binomial(10, 0.5, 10_000)
plt.hist(x);
```
**Note:** you may have noticed that the _binomial distribution_ can take on only a finite number of values, whereas the _uniform distribution_ above can take on any number between $0$ and $1$. These are different enough cases to warrant special mention of this & two different names: the former is called a _probability mass function_ (PMF) and the latter a _probability density function_ (PDF). Time permitting, we may discuss some of the subtleties here. If not, all good texts will cover this. I like (Sivia & Skilling, 2006), among many others.
**Question:**
* Looking at the histogram, can you tell me the probability of seeing 4 or more heads?
Enter the ECDF.
## Empirical cumulative distribution functions (ECDFs)
An ECDF is, as an alternative to a histogram, a way to visualize univariate data that is rich in information. It allows you to visualize all of your data and, by doing so, avoids the very real problem of binning.
- can plot control plus experiment
- data plus model!
- many populations
- can see multimodality (though less pronounced) -- a mode becomes a point of inflexion!
- can read off so much: e.g. percentiles.
See Eric Ma's great post on ECDFS [here](https://ericmjl.github.io/blog/2018/7/14/ecdfs/) AND [this twitter thread](https://twitter.com/allendowney/status/1019171696572583936) (thanks, Allen Downey!).
So what is this ECDF?
**Definition:** In an ECDF, the x-axis is the range of possible values for the data & for any given x-value, the corresponding y-value is the proportion of data points less than or equal to that x-value.
Let's define a handy ECDF function that takes in data and outputs $x$ and $y$ data for the ECDF.
```
def ecdf(data):
"""Compute ECDF for a one-dimensional array of measurements."""
# Number of data points
n = len(data)
# x-data for the ECDF
x = np.sort(data)
# y-data for the ECDF
y = np.arange(1, n+1) / n
return x, y
```
### Hands-on: Plotting ECDFs
Plot the ECDF for the previous hands-on exercise. Read the answer to the following question off the ECDF: what is the probability of seeing 4 or more heads?
```
# Generate x- and y-data for the ECDF
x_flips, y_flips = ecdf(x)
# Plot the ECDF
plt.plot(x_flips, y_flips, marker=".")
```
## Probability distributions and their stories
**Credit:** Thank you to [Justin Bois](http://bois.caltech.edu/) for countless hours of discussion, work and collaboration on thinking about probability distributions and their stories. All of the following is inspired by Justin & his work, if not explicitly drawn from.
___
In the above, we saw that we could match data-generating processes with binary outcomes to the story of the binomial distribution.
> The Binomial distribution's story is as follows: the number $r$ of successes in $n$ Bernoulli trials with probability $p$ of success, is Binomially distributed.
There are many other distributions with stories also!
### Poisson processes and the Poisson distribution
In the book [Information Theory, Inference and Learning Algorithms](https://www.amazon.com/Information-Theory-Inference-Learning-Algorithms/dp/0521642981) David MacKay tells the tale of a town called Poissonville, in which the buses have an odd schedule. Standing at a bus stop in Poissonville, the amount of time you have to wait for a bus is totally independent of when the previous bus arrived. This means you could watch a bus drive off and another arrive almost instantaneously, or you could be waiting for hours.
Arrival of buses in Poissonville is what we call a Poisson process. The timing of the next event is completely independent of when the previous event happened. Many real-life processes behave in this way.
* natural births in a given hospital (there is a well-defined average number of natural births per year, and the timing of one birth is independent of the timing of the previous one);
* Landings on a website;
* Meteor strikes;
* Molecular collisions in a gas;
* Aviation incidents.
Any process that matches the buses in Poissonville **story** is a Poisson process.
The number of arrivals of a Poisson process in a given amount of time is Poisson distributed. The Poisson distribution has one parameter, the average number of arrivals in a given length of time. So, to match the story, we could consider the number of hits on a website in an hour with an average of six hits per hour. This is Poisson distributed.
```
# Generate Poisson-distributed data
samples = np.random.poisson(6, 10**6)
# Plot histogram
plt.hist(samples, bins=21);
```
**Question:** Does this look like anything to you?
In fact, the Poisson distribution is the limit of the Binomial distribution for low probability of success and large number of trials, that is, for rare events.
To see this, think about the stories. Picture this: you're doing a Bernoulli trial once a minute for an hour, each with a success probability of 0.05. We would do 60 trials, and the number of successes is Binomially distributed, and we would expect to get about 3 successes. This is just like the Poisson story of seeing 3 buses on average arrive in a given interval of time. Thus the Poisson distribution with arrival rate equal to np approximates a Binomial distribution for n Bernoulli trials with probability p of success (with n large and p small). This is useful because the Poisson distribution can be simpler to work with as it has only one parameter instead of two for the Binomial distribution.
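To see this numerically, here is a small simulation comparing the two stories directly, reusing the `np.random` sampling from above (the bin edges are an arbitrary choice):
```
# Compare Binomial(n=60, p=0.05) with Poisson(rate = n*p = 3)
n, p = 60, 0.05
binom_samples = np.random.binomial(n, p, size=10_000)
poisson_samples = np.random.poisson(n * p, size=10_000)

bins = np.arange(0, 14) - 0.5  # integer-centered bins
plt.hist(binom_samples, bins=bins, density=True, alpha=0.5, label='Binomial(60, 0.05)')
plt.hist(poisson_samples, bins=bins, density=True, alpha=0.5, label='Poisson(3)')
plt.legend();
```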
#### Hands-on: Poisson
Plot the ECDF of the Poisson-distributed data that you generated above.
```
# Generate x- and y-data for the ECDF
x_p, y_p = ecdf(samples)
# Plot the ECDF
plt.plot(x_p, y_p, marker=".");
```
#### Example Poisson distribution: field goals attempted per game
This section is explicitly taken from the great work of Justin Bois. You can find more [here](https://github.com/justinbois/dataframed-plot-examples/blob/master/lebron_field_goals.ipynb).
Let's first remind ourselves of the story behind the Poisson distribution.
> The number of arrivals of a Poisson processes in a given set time interval is Poisson distributed.
To quote Justin Bois:
> We could model field goal attempts in a basketball game using a Poisson distribution. When a player takes a shot is a largely stochastic process, being influenced by the myriad ebbs and flows of a basketball game. Some players shoot more than others, though, so there is a well-defined rate of shooting. Let's consider LeBron James's field goal attempts for the 2017-2018 NBA season.
First thing's first, the data ([from here](https://www.basketball-reference.com/players/j/jamesle01/gamelog/2018)):
```
fga = [19, 16, 15, 20, 20, 11, 15, 22, 34, 17, 20, 24, 14, 14,
24, 26, 14, 17, 20, 23, 16, 11, 22, 15, 18, 22, 23, 13,
18, 15, 23, 22, 23, 18, 17, 22, 17, 15, 23, 8, 16, 25,
18, 16, 17, 23, 17, 15, 20, 21, 10, 17, 22, 20, 20, 23,
17, 18, 16, 25, 25, 24, 19, 17, 25, 20, 20, 14, 25, 26,
29, 19, 16, 19, 18, 26, 24, 21, 14, 20, 29, 16, 9]
```
To show that LeBron's attempts are approximately Poisson distributed, you're now going to plot the ECDF and compare it with the ECDF of the Poisson distribution that has the mean of the data (technically, this is the maximum likelihood estimate).
#### Hands-on: Simulating Data Generating Stories
Generate the x and y values for the ECDF of LeBron's field attempt goals.
```
# Generate x & y data for ECDF
x_ecdf, y_ecdf = ecdf(fga)
```
Now we'll draw samples out of a Poisson distribution to get the theoretical ECDF, plot it with the ECDF of the data and see how they look.
```
# Number of times we simulate the model
n_reps = 1000
# Plot ECDF of data
plt.plot(x_ecdf, y_ecdf, '.', color='black');
# Plot ECDF of model
for _ in range(n_reps):
samples = np.random.poisson(np.mean(fga), size=len(fga))
x_theor, y_theor = ecdf(samples)
plt.plot(x_theor, y_theor, '.', alpha=0.01, color='lightgray');
# Label your axes
plt.xlabel('field goal attempts')
plt.ylabel('ECDF');
```
You can see from the ECDF that LeBron's field goal attempts per game are Poisson distributed.
### Exponential distribution
We've encountered a variety of named _discrete distributions_. There are also named _continuous distributions_, such as the Exponential distribution and the Normal (or Gaussian) distribution. To see what the story of the Exponential distribution is, let's return to Poissonville, in which the number of buses that will arrive per hour are Poisson distributed.
However, the waiting time between arrivals of a Poisson process are exponentially distributed.
So: the exponential distribution has the following story: the waiting time between arrivals of a Poisson process are exponentially distributed. It has a single parameter, the mean waiting time. This distribution is not peaked, as we can see from its PDF.
For an illustrative example, lets check out the time between all incidents involving nuclear power since 1974. It's a reasonable first approximation to expect incidents to be well-modeled by a Poisson process, which means the timing of one incident is independent of all others. If this is the case, the time between incidents should be Exponentially distributed.
To see if this story is credible, we can plot the ECDF of the data with the CDF that we'd get from an exponential distribution with the sole parameter, the mean, given by the mean inter-incident time of the data.
```
# Load nuclear power accidents data & create array of inter-incident times
df = pd.read_csv('../data/nuclear_power_accidents.csv')
df.Date = pd.to_datetime(df.Date)
df = df[df.Date >= pd.to_datetime('1974-01-01')]
inter_times = np.diff(np.sort(df.Date)).astype(float) / 1e9 / 3600 / 24
# Compute mean and sample from exponential
mean = np.mean(inter_times)
samples = np.random.exponential(mean, size=10000)
# Compute ECDFs for sample & model
x, y = ecdf(inter_times)
x_theor, y_theor = ecdf(samples)
# Plot sample & model ECDFs
plt.plot(x_theor, y_theor);
plt.plot(x, y, marker='.', linestyle='none');
```
We see that the data is close to being Exponentially distributed, which means that we can model the nuclear incidents as a Poisson process.
### Normal distribution
The Normal distribution, also known as the Gaussian or Bell Curve, appears everywhere. There are many reasons for this. One is the following:
> When doing repeated measurements, we expect them to be Normally distributed, owing to the many subprocesses that contribute to a measurement. This is because (a formulation of the Central Limit Theorem) **any quantity that emerges as the sum of a large number of subprocesses tends to be Normally distributed** provided none of the subprocesses is very broadly distributed.
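As a quick illustration of that statement (a minimal simulation sketch; the choice of 50 uniform subprocesses per measurement is arbitrary), summing many independent contributions already produces a bell-shaped histogram:
```
# Each "measurement" is the sum of 50 independent uniform subprocesses
sums = np.random.rand(10_000, 50).sum(axis=1)
plt.hist(sums, bins=30, density=True);
```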
Now it's time to see if this holds for the measurements from Michelson's famous speed-of-light experiment:
Below, I'll plot the histogram with a Gaussian curve fitted to it. Even if that looks good, though, that could be due to binning bias. So then you'll plot the ECDF of the data and the CDF of the model!
```
# Load data, plot histogram
import scipy.stats as st
df = pd.read_csv('../data/michelson_speed_of_light.csv')
df = df.rename(columns={'velocity of light in air (km/s)': 'c'})
c = df.c.values
x_s = np.linspace(299.6, 300.1, 400) * 1000
plt.plot(x_s, st.norm.pdf(x_s, c.mean(), c.std(ddof=1)))
plt.hist(c, bins=9, density=True)
plt.xlabel('speed of light (km/s)')
plt.ylabel('PDF');
```
#### Hands-on: Simulating Normal
```
# Get speed of light measurement + mean & standard deviation
michelson_speed_of_light = df.c.values
mean = np.mean(michelson_speed_of_light)
std = np.std(michelson_speed_of_light, ddof=1)
# Generate normal samples w/ mean, std of data
samples = np.random.normal(mean, std, size=10000)
# Generate data ECDF & model CDF
x, y = ecdf(michelson_speed_of_light)
x_theor, y_theor = ecdf(samples)
# Plot data & model (E)CDFs
plt.plot(x_theor, y_theor)
plt.plot(x, y, marker=".")
plt.xlabel('speed of light (km/s)')
plt.ylabel('CDF');
```
Some of you may ask: but is the data really Normal? I urge you to check out Allen Downey's post [_Are your data normal? Hint: no._](http://allendowney.blogspot.com/2013/08/are-my-data-normal.html)
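If you want to probe that question yourself, one option (not part of the original notebook, so treat it as a suggestion) is a formal normality test such as scipy's D'Agostino-Pearson test:
```
import scipy.stats as st

# Null hypothesis: the sample was drawn from a Normal distribution
statistic, p_value = st.normaltest(michelson_speed_of_light)
print('p-value: {:.3f}'.format(p_value))  # a small p-value is evidence against Normality
```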
# *Quick, Draw!* GAN
In this notebook, we use Generative Adversarial Network code (adapted from [Rowel Atienza's](https://github.com/roatienza/Deep-Learning-Experiments/blob/master/Experiments/Tensorflow/GAN/dcgan_mnist.py) under [MIT License](https://github.com/roatienza/Deep-Learning-Experiments/blob/master/LICENSE)) to create sketches in the style of humans who have played the [*Quick, Draw!* game](https://quickdraw.withgoogle.com) (data available [here](https://github.com/googlecreativelab/quickdraw-dataset) under [Creative Commons Attribution 4.0 license](https://creativecommons.org/licenses/by/4.0/)).
#### Load dependencies
```
# for data input and output:
import numpy as np
import os
# for deep learning:
import keras
from keras.models import Model
from keras.layers import Input, Dense, Conv2D, Dropout
from keras.layers import BatchNormalization, Flatten
from keras.layers import Activation
from keras.layers import Reshape # new!
from keras.layers import Conv2DTranspose, UpSampling2D # new!
from keras.optimizers import RMSprop # new!
# for plotting:
import pandas as pd
from matplotlib import pyplot as plt
%matplotlib inline
```
#### Load data
NumPy bitmap files are [here](https://console.cloud.google.com/storage/browser/quickdraw_dataset/full/numpy_bitmap) -- pick your own drawing category -- you don't have to pick *apples* :)
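If you don't have a bitmap file on disk yet, you can download one first. The public files live in the `quickdraw_dataset/full/numpy_bitmap` bucket linked above; the direct-download URL pattern below is an assumption based on that bucket path, and `apple` is just one example category.
```
import os
import urllib.request

category = 'apple'  # swap in any Quick, Draw! category you like
url = 'https://storage.googleapis.com/quickdraw_dataset/full/numpy_bitmap/{}.npy'.format(category)

os.makedirs('../quickdraw_data', exist_ok=True)
urllib.request.urlretrieve(url, '../quickdraw_data/{}.npy'.format(category))
```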
```
input_images = "../quickdraw_data/apple.npy"
data = np.load(input_images) # 28x28 (sound familiar?) grayscale bitmap in numpy .npy format; images are centered
data.shape
data[4242]
data = data/255
data = np.reshape(data,(data.shape[0],28,28,1)) # fourth dimension is color
img_w,img_h = data.shape[1:3]
data.shape
data[4242]
plt.imshow(data[4242,:,:,0], cmap='Greys')
```
#### Create discriminator network
```
def build_discriminator(depth=64, p=0.4):
# Define inputs
image = Input((img_w,img_h,1))
# Convolutional layers
conv1 = Conv2D(depth*1, 5, strides=2,
padding='same', activation='relu')(image)
conv1 = Dropout(p)(conv1)
conv2 = Conv2D(depth*2, 5, strides=2,
padding='same', activation='relu')(conv1)
conv2 = Dropout(p)(conv2)
conv3 = Conv2D(depth*4, 5, strides=2,
padding='same', activation='relu')(conv2)
conv3 = Dropout(p)(conv3)
conv4 = Conv2D(depth*8, 5, strides=1,
padding='same', activation='relu')(conv3)
conv4 = Flatten()(Dropout(p)(conv4))
# Output layer
prediction = Dense(1, activation='sigmoid')(conv4)
# Model definition
model = Model(inputs=image, outputs=prediction)
return model
discriminator = build_discriminator()
discriminator.summary()
discriminator.compile(loss='binary_crossentropy',
optimizer=RMSprop(lr=0.0008,
decay=6e-8,
clipvalue=1.0),
metrics=['accuracy'])
```
#### Create generator network
```
z_dimensions = 32
def build_generator(latent_dim=z_dimensions,
depth=64, p=0.4):
# Define inputs
noise = Input((latent_dim,))
# First dense layer
dense1 = Dense(7*7*depth)(noise)
dense1 = BatchNormalization(momentum=0.9)(dense1) # default momentum for moving average is 0.99
dense1 = Activation(activation='relu')(dense1)
dense1 = Reshape((7,7,depth))(dense1)
dense1 = Dropout(p)(dense1)
# De-Convolutional layers
conv1 = UpSampling2D()(dense1)
conv1 = Conv2DTranspose(int(depth/2),
kernel_size=5, padding='same',
activation=None,)(conv1)
conv1 = BatchNormalization(momentum=0.9)(conv1)
conv1 = Activation(activation='relu')(conv1)
conv2 = UpSampling2D()(conv1)
conv2 = Conv2DTranspose(int(depth/4),
kernel_size=5, padding='same',
activation=None,)(conv2)
conv2 = BatchNormalization(momentum=0.9)(conv2)
conv2 = Activation(activation='relu')(conv2)
conv3 = Conv2DTranspose(int(depth/8),
kernel_size=5, padding='same',
activation=None,)(conv2)
conv3 = BatchNormalization(momentum=0.9)(conv3)
conv3 = Activation(activation='relu')(conv3)
# Output layer
image = Conv2D(1, kernel_size=5, padding='same',
activation='sigmoid')(conv3)
# Model definition
model = Model(inputs=noise, outputs=image)
return model
generator = build_generator()
generator.summary()
```
#### Create adversarial network
```
z = Input(shape=(z_dimensions,))
img = generator(z)
discriminator.trainable = False
pred = discriminator(img)
adversarial_model = Model(z, pred)
adversarial_model.compile(loss='binary_crossentropy',
optimizer=RMSprop(lr=0.0004,
decay=3e-8,
clipvalue=1.0),
metrics=['accuracy'])
```
#### Train!
```
def train(epochs=2000, batch=128, z_dim=z_dimensions):
d_metrics = []
a_metrics = []
running_d_loss = 0
running_d_acc = 0
running_a_loss = 0
running_a_acc = 0
for i in range(epochs):
# sample real images:
real_imgs = np.reshape(
data[np.random.choice(data.shape[0],
batch,
replace=False)],
(batch,28,28,1))
# generate fake images:
fake_imgs = generator.predict(
np.random.uniform(-1.0, 1.0,
size=[batch, z_dim]))
# concatenate images as discriminator inputs:
x = np.concatenate((real_imgs,fake_imgs))
# assign y labels for discriminator:
y = np.ones([2*batch,1])
y[batch:,:] = 0
# train discriminator:
d_metrics.append(
discriminator.train_on_batch(x,y)
)
running_d_loss += d_metrics[-1][0]
running_d_acc += d_metrics[-1][1]
# adversarial net's noise input and "real" y:
noise = np.random.uniform(-1.0, 1.0,
size=[batch, z_dim])
y = np.ones([batch,1])
# train adversarial net:
a_metrics.append(
adversarial_model.train_on_batch(noise,y)
)
running_a_loss += a_metrics[-1][0]
running_a_acc += a_metrics[-1][1]
# periodically print progress & fake images:
if (i+1)%100 == 0:
print('Epoch #{}'.format(i))
log_mesg = "%d: [D loss: %f, acc: %f]" % \
(i, running_d_loss/i, running_d_acc/i)
log_mesg = "%s [A loss: %f, acc: %f]" % \
(log_mesg, running_a_loss/i, running_a_acc/i)
print(log_mesg)
noise = np.random.uniform(-1.0, 1.0,
size=[16, z_dim])
gen_imgs = generator.predict(noise)
plt.figure(figsize=(5,5))
for k in range(gen_imgs.shape[0]):
plt.subplot(4, 4, k+1)
plt.imshow(gen_imgs[k, :, :, 0],
cmap='gray')
plt.axis('off')
plt.tight_layout()
plt.show()
return a_metrics, d_metrics
a_metrics_complete, d_metrics_complete = train()
ax = pd.DataFrame(
{
'Adversarial': [metric[0] for metric in a_metrics_complete],
'Discriminator': [metric[0] for metric in d_metrics_complete],
}
).plot(title='Training Loss', logy=True)
ax.set_xlabel("Epochs")
ax.set_ylabel("Loss")
ax = pd.DataFrame(
{
'Adversarial': [metric[1] for metric in a_metrics_complete],
'Discriminator': [metric[1] for metric in d_metrics_complete],
}
).plot(title='Training Accuracy')
ax.set_xlabel("Epochs")
ax.set_ylabel("Accuracy")
```
<a href="https://colab.research.google.com/github/AaronGe88inTHU/dreye-thu/blob/master/DataGenerator.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
from google.colab import drive
drive.mount('/content/drive')
import numpy as np
import cv2
from PIL import Image
import tensorflow as tf
import os, glob
from matplotlib import pyplot as plt
import tarfile
import shutil
def target_generate(file_path):
cmd_str = os.path.join(file_path,"*.png")
    file_list = glob.glob(cmd_str)  # use the pattern built from file_path instead of a hard-coded path
file_list = sorted(file_list, key=lambda name: int(name.split("/")[-1][:-4]))#int(name[:-4]))
print("{0} images need to be processed! in {1}".format(len(file_list), cmd_str))
batch_size = 20
image_arrays = []
crop_arrays = []
batches = int(np.floor(len(file_list) / batch_size))
mod = len(file_list) % batch_size
for ii in range(batches):
image_batch = []
crop_batch = []
for jj in range(batch_size):
im = np.array(Image.open(file_list[ii * batch_size + jj]))
im = cv2.resize(im, dsize=(448, 448), interpolation=cv2.INTER_CUBIC)
cp = im[167:279, 167:279]
image_batch.append(im)
crop_batch.append(cp)
print("{} images resized!".format(batch_size))
print("{} images cropped!".format(batch_size))
image_batch = np.array(image_batch)
crop_batch = np.array(crop_batch)
image_arrays.extend(image_batch)
crop_arrays.extend(crop_batch)
#print(len(image_arrays), len(crop_arrays))#plt.imshow(np.array(images[0], dtype=np.int32))
image_batch = []
crop_batch = []
    for jj in range(batches * batch_size, batches * batch_size + mod):  # process only the remaining images
im = np.array(Image.open(file_list[jj]))
im = cv2.resize(im, dsize=(448, 448), interpolation=cv2.INTER_CUBIC)
cp = im[167:279, 167:279]
image_batch.append(im)
crop_batch.append(cp)
print("{} images resized!".format(mod))
print("{} images cropped!".format(mod))
image_batch = np.array(image_batch)
    image_arrays.extend(image_batch)
crop_batch = np.array(crop_batch)
crop_arrays.extend(crop_batch)
image_arrays = np.array(image_arrays)
crop_arrays = np.array(crop_arrays)
#print(image_arrays.shape, crop_arrays.shape)
resize_path = os.path.join(file_path,"resize")
crop_path = os.path.join(file_path,"crop")
os.mkdir(resize_path)
os.mkdir(crop_path)
for ii in range(image_arrays.shape[0]):
im = Image.fromarray(image_arrays[ii])
im.save(os.path.join(resize_path,"{}.png".format(ii)))
im = Image.fromarray(crop_arrays[ii])
im.save(os.path.join(crop_path,"{}.png".format(ii)))
print("Saved successfully!")
#target_generate('/content/drive/My Drive/dreye/ImagePreprocess')
path_name = os.path.join("/content/drive/My Drive/dreye/ImagePreprocess/", "resize")
tar = tarfile.open(path_name+".tar.gz", "w:gz")
tar.add(path_name, arcname="resize")
tar.close()
shutil.rmtree(path_name)
path_name = os.path.join("/content/drive/My Drive/dreye/ImagePreprocess/", "crop")
tar2 = tarfile.open(path_name+".tar.gz", "w:gz")
tar2.add(path_name, arcname="crop")
tar2.close()
shutil.rmtree(path_name)
```
# Running the speech-to-text service
```
import azure.cognitiveservices.speech as speechsdk
# Creates an instance of a speech config with specified subscription key and service region.
# Replace with your own subscription key and region identifier from here: https://aka.ms/speech/sdkregion
speech_key, service_region = "196f2f318dc744049eafb9cf89631e42", "southcentralus"
speech_config = speechsdk.SpeechConfig(subscription=speech_key, region=service_region)
# Creates an audio configuration that points to an audio file.
# Replace with your own audio filename.
audio_filename = "narration.wav"
audio_input = speechsdk.audio.AudioConfig(filename=audio_filename)
# Creates a recognizer with the given settings
speech_recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_input)
print("Recognizing first result...")
# Starts speech recognition, and returns after a single utterance is recognized. The end of a
# single utterance is determined by listening for silence at the end or until a maximum of 15
# seconds of audio is processed. The task returns the recognition text as result.
# Note: Since recognize_once() returns only a single utterance, it is suitable only for single
# shot recognition like command or query.
# For long-running multi-utterance recognition, use start_continuous_recognition() instead.
result = speech_recognizer.recognize_once()
# Checks result.
if result.reason == speechsdk.ResultReason.RecognizedSpeech:
print("Recognized: {}".format(result.text))
elif result.reason == speechsdk.ResultReason.NoMatch:
print("No speech could be recognized: {}".format(result.no_match_details))
elif result.reason == speechsdk.ResultReason.Canceled:
cancellation_details = result.cancellation_details
print("Speech Recognition canceled: {}".format(cancellation_details.reason))
if cancellation_details.reason == speechsdk.CancellationReason.Error:
print("Error details: {}".format(cancellation_details.error_details))
```
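The comments above note that `recognize_once()` handles only a single utterance. For longer audio you can use the SDK's continuous recognition API instead; the cell below is a minimal sketch that reuses the `speech_config` and `audio_input` from above and simply polls until the session stops.
```
import time

done = False

def stop_cb(evt):
    # Signal that the recognition session has finished (or was canceled)
    global done
    done = True

speech_recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_input)
speech_recognizer.recognized.connect(lambda evt: print("Recognized: {}".format(evt.result.text)))
speech_recognizer.session_stopped.connect(stop_cb)
speech_recognizer.canceled.connect(stop_cb)

speech_recognizer.start_continuous_recognition()
while not done:
    time.sleep(0.5)
speech_recognizer.stop_continuous_recognition()
```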
# Running the text-to-speech service
## Synthesizing text to speech
```
import azure.cognitiveservices.speech as speechsdk
# Creates an instance of a speech config with specified subscription key and service region.
# Replace with your own subscription key and service region (e.g., "westus").
speech_key, service_region = "196f2f318dc744049eafb9cf89631e42", "southcentralus"
speech_config = speechsdk.SpeechConfig(subscription=speech_key, region=service_region)
# Creates a speech synthesizer using the default speaker as audio output.
speech_synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
# Receives a text from console input.
print("Type some text that you want to speak...")
text = input()
# Synthesizes the received text to speech.
# The synthesized speech is expected to be heard on the speaker with this line executed.
result = speech_synthesizer.speak_text_async(text).get()
# Checks result.
if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
print("Speech synthesized to speaker for text [{}]".format(text))
elif result.reason == speechsdk.ResultReason.Canceled:
cancellation_details = result.cancellation_details
print("Speech synthesis canceled: {}".format(cancellation_details.reason))
if cancellation_details.reason == speechsdk.CancellationReason.Error:
if cancellation_details.error_details:
print("Error details: {}".format(cancellation_details.error_details))
print("Did you update the subscription info?")
```
## Synthesizing text to an audio file
```
import azure.cognitiveservices.speech as speechsdk
# Replace with your own subscription key and region identifier from here: https://aka.ms/speech/sdkregion
speech_key, service_region = "196f2f318dc744049eafb9cf89631e42", "southcentralus"
speech_config = speechsdk.SpeechConfig(subscription=speech_key, region=service_region)
# Creates an audio configuration that points to an audio file.
# Replace with your own audio filename.
audio_filename = "helloworld.wav"
audio_output = speechsdk.audio.AudioOutputConfig(filename=audio_filename)
# Creates a synthesizer with the given settings
speech_synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config, audio_config=audio_output)
# Synthesizes the text to speech.
# Replace with your own text.
text = "Hello world!"
result = speech_synthesizer.speak_text_async(text).get()
# Checks result.
if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
print("Speech synthesized to [{}] for text [{}]".format(audio_filename, text))
elif result.reason == speechsdk.ResultReason.Canceled:
cancellation_details = result.cancellation_details
print("Speech synthesis canceled: {}".format(cancellation_details.reason))
if cancellation_details.reason == speechsdk.CancellationReason.Error:
if cancellation_details.error_details:
print("Error details: {}".format(cancellation_details.error_details))
print("Did you update the subscription info?")
```
# Running the speech-to-translated-text service
```
import azure.cognitiveservices.speech as speechsdk
speech_key, service_region = "196f2f318dc744049eafb9cf89631e42", "southcentralus"
def translate_speech_to_text():
# Creates an instance of a speech translation config with specified subscription key and service region.
# Replace with your own subscription key and region identifier from here: https://aka.ms/speech/sdkregion
translation_config = speechsdk.translation.SpeechTranslationConfig(subscription=speech_key, region=service_region)
# Sets source and target languages.
# Replace with the languages of your choice, from list found here: https://aka.ms/speech/sttt-languages
fromLanguage = 'en-US'
    toLanguage = 'de'  # look up the target text-language code; the lookup below line 33 also needs to be changed to match
translation_config.speech_recognition_language = fromLanguage
translation_config.add_target_language(toLanguage)
# Creates a translation recognizer using and audio file as input.
recognizer = speechsdk.translation.TranslationRecognizer(translation_config=translation_config)
# Starts translation, and returns after a single utterance is recognized. The end of a
# single utterance is determined by listening for silence at the end or until a maximum of 15
# seconds of audio is processed. It returns the recognized text as well as the translation.
# Note: Since recognize_once() returns only a single utterance, it is suitable only for single
# shot recognition like command or query.
# For long-running multi-utterance recognition, use start_continuous_recognition() instead.
print("Say something...")
result = recognizer.recognize_once()
# Check the result
if result.reason == speechsdk.ResultReason.TranslatedSpeech:
print("RECOGNIZED '{}': {}".format(fromLanguage, result.text))
print("TRANSLATED into {}: {}".format(toLanguage, result.translations['de']))
elif result.reason == speechsdk.ResultReason.RecognizedSpeech:
print("RECOGNIZED: {} (text could not be translated)".format(result.text))
elif result.reason == speechsdk.ResultReason.NoMatch:
print("NOMATCH: Speech could not be recognized: {}".format(result.no_match_details))
elif result.reason == speechsdk.ResultReason.Canceled:
print("CANCELED: Reason={}".format(result.cancellation_details.reason))
if result.cancellation_details.reason == speechsdk.CancellationReason.Error:
print("CANCELED: ErrorDetails={}".format(result.cancellation_details.error_details))
translate_speech_to_text()
```
# Running the speech-to-multilingual-translated-text service
```
import azure.cognitiveservices.speech as speechsdk
speech_key, service_region = "196f2f318dc744049eafb9cf89631e42", "southcentralus"
def translate_speech_to_text():
# Creates an instance of a speech translation config with specified subscription key and service region.
# Replace with your own subscription key and region identifier from here: https://aka.ms/speech/sdkregion
translation_config = speechsdk.translation.SpeechTranslationConfig(subscription=speech_key, region=service_region)
# Sets source and target languages.
# Replace with the languages of your choice, from list found here: https://aka.ms/speech/sttt-languages
fromLanguage = 'en-US'
translation_config.speech_recognition_language = fromLanguage
translation_config.add_target_language('de')
translation_config.add_target_language('fr')
# Creates a translation recognizer using and audio file as input.
recognizer = speechsdk.translation.TranslationRecognizer(translation_config=translation_config)
# Starts translation, and returns after a single utterance is recognized. The end of a
# single utterance is determined by listening for silence at the end or until a maximum of 15
# seconds of audio is processed. It returns the recognized text as well as the translation.
# Note: Since recognize_once() returns only a single utterance, it is suitable only for single
# shot recognition like command or query.
# For long-running multi-utterance recognition, use start_continuous_recognition() instead.
print("Say something...")
result = recognizer.recognize_once()
# Check the result
if result.reason == speechsdk.ResultReason.TranslatedSpeech:
print("RECOGNIZED '{}': {}".format(fromLanguage, result.text))
print("TRANSLATED into {}: {}".format('de', result.translations['de']))
print("TRANSLATED into {}: {}".format('fr', result.translations['fr']))
elif result.reason == speechsdk.ResultReason.RecognizedSpeech:
print("RECOGNIZED: {} (text could not be translated)".format(result.text))
elif result.reason == speechsdk.ResultReason.NoMatch:
print("NOMATCH: Speech could not be recognized: {}".format(result.no_match_details))
elif result.reason == speechsdk.ResultReason.Canceled:
print("CANCELED: Reason={}".format(result.cancellation_details.reason))
if result.cancellation_details.reason == speechsdk.CancellationReason.Error:
print("CANCELED: ErrorDetails={}".format(result.cancellation_details.error_details))
translate_speech_to_text()
```
# Running the speech-to-multilingual-speech service
```
import azure.cognitiveservices.speech as speechsdk
speech_key, service_region = "196f2f318dc744049eafb9cf89631e42", "southcentralus"
def translate_speech_to_speech():
# Creates an instance of a speech translation config with specified subscription key and service region.
# Replace with your own subscription key and region identifier from here: https://aka.ms/speech/sdkregion
translation_config = speechsdk.translation.SpeechTranslationConfig(subscription=speech_key, region=service_region)
# Sets source and target languages.
# Replace with the languages of your choice, from list found here: https://aka.ms/speech/sttt-languages
fromLanguage = 'en-US'
toLanguage = 'de'
translation_config.speech_recognition_language = fromLanguage
translation_config.add_target_language(toLanguage)
# Sets the synthesis output voice name.
# Replace with the languages of your choice, from list found here: https://aka.ms/speech/tts-languages
translation_config.voice_name = "de-DE-Hedda"
# Creates a translation recognizer using and audio file as input.
recognizer = speechsdk.translation.TranslationRecognizer(translation_config=translation_config)
# Prepare to handle the synthesized audio data.
def synthesis_callback(evt):
size = len(evt.result.audio)
print('AUDIO SYNTHESIZED: {} byte(s) {}'.format(size, '(COMPLETED)' if size == 0 else ''))
recognizer.synthesizing.connect(synthesis_callback)
# Starts translation, and returns after a single utterance is recognized. The end of a
# single utterance is determined by listening for silence at the end or until a maximum of 15
# seconds of audio is processed. It returns the recognized text as well as the translation.
# Note: Since recognize_once() returns only a single utterance, it is suitable only for single
# shot recognition like command or query.
# For long-running multi-utterance recognition, use start_continuous_recognition() instead.
print("Say something...")
result = recognizer.recognize_once()
# Check the result
if result.reason == speechsdk.ResultReason.TranslatedSpeech:
print("RECOGNIZED '{}': {}".format(fromLanguage, result.text))
print("TRANSLATED into {}: {}".format(toLanguage, result.translations['de']))
elif result.reason == speechsdk.ResultReason.RecognizedSpeech:
print("RECOGNIZED: {} (text could not be translated)".format(result.text))
elif result.reason == speechsdk.ResultReason.NoMatch:
print("NOMATCH: Speech could not be recognized: {}".format(result.no_match_details))
elif result.reason == speechsdk.ResultReason.Canceled:
print("CANCELED: Reason={}".format(result.cancellation_details.reason))
if result.cancellation_details.reason == speechsdk.CancellationReason.Error:
print("CANCELED: ErrorDetails={}".format(result.cancellation_details.error_details))
translate_speech_to_speech()
```
# Running the text language detection service
```
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient
key = "bdb7d45b308f4851bd1b8cae9a1d3453"
endpoint = "https://test0524.cognitiveservices.azure.com/"
text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))
documents = [
"This document is written in English.",
"Este es un document escrito en Español.",
"这是一个用中文写的文件",
"Dies ist ein Dokument in deutsche Sprache.",
"Detta är ett dokument skrivet på engelska."
]
result = text_analytics_client.detect_language(documents)
for idx, doc in enumerate(result):
if not doc.is_error:
print("Document text: {}".format(documents[idx]))
print("Language detected: {}".format(doc.primary_language.name))
print("ISO6391 name: {}".format(doc.primary_language.iso6391_name))
print("Confidence score: {}\n".format(doc.primary_language.confidence_score))
if doc.is_error:
print(doc.id, doc.error)
```
# Running the key phrase extraction service
```
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient
key = "bdb7d45b308f4851bd1b8cae9a1d3453"
endpoint = "https://test0524.cognitiveservices.azure.com/"
text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))
documents = [
"Redmond is a city in King County, Washington, United States, located 15 miles east of Seattle.",
"I need to take my cat to the veterinarian.",
"I will travel to South America in the summer.",
]
result = text_analytics_client.extract_key_phrases(documents)
for doc in result:
if not doc.is_error:
print(doc.key_phrases)
if doc.is_error:
print(doc.id, doc.error)
```
# Running the entity recognition service
## Entity recognition
```
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient
key = "bdb7d45b308f4851bd1b8cae9a1d3453"
endpoint = "https://test0524.cognitiveservices.azure.com/"
text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))
documents = [
"Microsoft was founded by Bill Gates and Paul Allen.",
"I had a wonderful trip to Seattle last week.",
"I visited the Space Needle 2 times.",
]
result = text_analytics_client.recognize_entities(documents)
docs = [doc for doc in result if not doc.is_error]
for idx, doc in enumerate(docs):
print("\nDocument text: {}".format(documents[idx]))
for entity in doc.entities:
print("Entity: \t", entity.text, "\tCategory: \t", entity.category,
"\tConfidence Score: \t", entity.confidence_score)
```
## Entity linking
```
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient
key = "bdb7d45b308f4851bd1b8cae9a1d3453"
endpoint = "https://test0524.cognitiveservices.azure.com/"
text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))
documents = [
"Microsoft moved its headquarters to Bellevue, Washington in January 1979.",
"Steve Ballmer stepped down as CEO of Microsoft and was succeeded by Satya Nadella.",
"Microsoft superó a Apple Inc. como la compañía más valiosa que cotiza en bolsa en el mundo.",
]
result = text_analytics_client.recognize_linked_entities(documents)
docs = [doc for doc in result if not doc.is_error]
for idx, doc in enumerate(docs):
print("Document text: {}\n".format(documents[idx]))
for entity in doc.entities:
print("Entity: {}".format(entity.name))
print("Url: {}".format(entity.url))
print("Data Source: {}".format(entity.data_source))
for match in entity.matches:
print("Confidence Score: {}".format(match.confidence_score))
print("Entity as appears in request: {}".format(match.text))
print("------------------------------------------")
```
# Running the text translation service
```
# -*- coding: utf-8 -*-
import os, requests, uuid, json
subscription_key = 'ab93f8c61e174973818ac06706a5a5d5' # your key
endpoint = 'https://api.cognitive.microsofttranslator.com/'
# key_var_name = 'TRANSLATOR_TEXT_SUBSCRIPTION_KEY'
# if not key_var_name in os.environ:
# raise Exception('Please set/export the environment variable: {}'.format(key_var_name))
# subscription_key = os.environ[key_var_name]
# endpoint_var_name = 'TRANSLATOR_TEXT_ENDPOINT'
# if not endpoint_var_name in os.environ:
# raise Exception('Please set/export the environment variable: {}'.format(endpoint_var_name))
# endpoint = os.environ[endpoint_var_name]
path = '/translate?api-version=3.0'
# Output language setting
params = '&to=de&to=it'
constructed_url = endpoint + path + params
headers = {
'Ocp-Apim-Subscription-Key': subscription_key,
'Content-type': 'application/json',
'X-ClientTraceId': str(uuid.uuid4())
}
body = [{
'text': 'Hello World!'
}]
request = requests.post(constructed_url, headers=headers, json=body)
response = request.json()
print(json.dumps(response, sort_keys=True, indent=4,
ensure_ascii=False, separators=(',', ': ')))
```
# Running the LUIS intent recognition service
## REST API
```
import requests
try:
key = '8286e59fe6f54ab9826222300bbdcb11' # your Runtime key
endpoint = 'westus.api.cognitive.microsoft.com' # such as 'your-resource-name.api.cognitive.microsoft.com'
appId = 'df67dcdb-c37d-46af-88e1-8b97951ca1c2'
utterance = 'turn on all lights'
headers = {
}
params ={
'query': utterance,
'timezoneOffset': '0',
'verbose': 'true',
'show-all-intents': 'true',
'spellCheck': 'false',
'staging': 'false',
'subscription-key': key
}
r = requests.get(f'https://{endpoint}/luis/prediction/v3.0/apps/{appId}/slots/production/predict',headers=headers, params=params)
print(r.json())
except Exception as e:
print(f'{e}')
```
## SDK
```
from azure.cognitiveservices.language.luis.runtime import LUISRuntimeClient
from msrest.authentication import CognitiveServicesCredentials
import datetime, json, os, time
# Use public app ID or replace with your own trained and published app's ID
# to query your own app
# public appID = 'df67dcdb-c37d-46af-88e1-8b97951ca1c2'
luisAppID = 'dcb2cb33-dee6-46c1-a3a6-28e266d159e0'
runtime_key = '8286e59fe6f54ab9826222300bbdcb11'
runtime_endpoint = 'https://westus.api.cognitive.microsoft.com/'
# production or staging
luisSlotName = 'production'
# Instantiate a LUIS runtime client
clientRuntime = LUISRuntimeClient(runtime_endpoint, CognitiveServicesCredentials(runtime_key))
def predict(app_id, slot_name):
request = { "query" : "hi, show me lovely baby pictures" }
# Note be sure to specify, using the slot_name parameter, whether your application is in staging or \
# production.
response = clientRuntime.prediction.get_slot_prediction(app_id=app_id, slot_name=slot_name, \
prediction_request=request)
print("Top intent: {}".format(response.prediction.top_intent))
print("Sentiment: {}".format (response.prediction.sentiment))
print("Intents: ")
for intent in response.prediction.intents:
print("\t{}".format (json.dumps (intent)))
print("Entities: {}".format (response.prediction.entities))
predict(luisAppID, luisSlotName)
```
```
import numpy as np
import gym
import k3d
from ratelimiter import RateLimiter
from k3d.platonic import Cube
from time import time
rate_limiter = RateLimiter(max_calls=4, period=1)
env = gym.make('CartPole-v0')
observation = env.reset()
plot = k3d.plot(grid_auto_fit=False, camera_auto_fit=False, grid=(-1,-1,-1,1,1,1))
joint_positions = np.array([observation[0], 0, 0], dtype=np.float32)
pole_positions = joint_positions + np.array([np.sin(observation[2]), 0, np.cos(observation[2])], dtype=np.float32)
cart = Cube(origin=joint_positions, size=0.1).mesh
cart.scaling = [1, 0.5, 1]
joint = k3d.points(np.mean(cart.vertices[[0,2,4,6]], axis=0), point_size=0.03, color=0xff00, shader='mesh')
pole = k3d.line(vertices=np.array([joint.positions, pole_positions]), shader='mesh', color=0xff0000)
box = cart.vertices
mass = k3d.points(pole_positions, point_size=0.03, color=0xff0000, shader='mesh')
plot += pole + cart + joint + mass
plot.display()
for i_episode in range(20):
observation = env.reset()
for t in range(100):
with rate_limiter:
joint_positions = np.array([observation[0], 0, 0], dtype=np.float32)
pole_positions = joint_positions + np.array([np.sin(observation[2]), 0, np.cos(observation[2])], dtype=np.float32)
cart.vertices = box + joint_positions
joint.positions = np.mean(cart.vertices[[0,2,4,6]], axis=0)
pole.vertices = [joint.positions, pole_positions]
mass.positions = pole_positions
action = env.action_space.sample()
observation, reward, done, info = env.step(action)
if done:
break
plot.display()
for i_episode in range(20):
observation = env.reset()
for t in range(100):
joint_positions = np.array([observation[0], 0, 0], dtype=np.float32)
pole_positions = joint_positions + np.array([np.sin(observation[2]), 0, np.cos(observation[2])], dtype=np.float32)
with rate_limiter:
cart.vertices = box + joint_positions
joint.positions = np.mean(cart.vertices[[0,2,4,6]], axis=0)
pole.vertices = [joint.positions, pole_positions]
mass.positions = pole_positions
action = env.action_space.sample()
observation, reward, done, info = env.step(action)
if done:
break
max_calls, period = 3, 1
call_time = period/max_calls
for i_episode in range(20):
observation = env.reset()
for t in range(100):
joint_positions = np.array([observation[0], 0, 0], dtype=np.float32)
pole_positions = joint_positions + np.array([np.sin(observation[2]), 0, np.cos(observation[2])], dtype=np.float32)
time_stamp2 = time()
if t>0:
d = time_stamp2 - time_stamp1
if d < call_time:
cart.vertices = box + joint_positions
joint.positions = np.mean(cart.vertices[[0,2,4,6]], axis=0)
pole.vertices = [joint.positions, pole_positions]
mass.positions = pole_positions
if t==0:
cart.vertices = box + joint_positions
joint.positions = np.mean(cart.vertices[[0,2,4,6]], axis=0)
pole.vertices = [joint.positions, pole_positions]
mass.positions = pole_positions
time_stamp1 = time()
action = env.action_space.sample()
observation, reward, done, info = env.step(action)
if done:
break
max_calls, period = 3, 1
call_time = period/max_calls
i = 1
all_it_time = 0
cache = []
iterator = []
for i_episode in range(20):
cache.append([])
observation = env.reset()
for t in range(100):
ts1 = time()
joint_positions = np.array([observation[0], 0, 0], dtype=np.float32)
pole_positions = joint_positions + np.array([np.sin(observation[2]), 0, np.cos(observation[2])], dtype=np.float32)
# [cart.vertices, joint.positions, pole.vertices, mass.positions]
cache[i_episode].append([box + joint_positions, np.mean((box + joint_positions)[[0,2,4,6]], axis=0),
[np.mean((box + joint_positions)[[0,2,4,6]], axis=0), pole_positions],
pole_positions])
if all_it_time > call_time*i:
i += 1
iterator = iter(iterator)
element = next(iterator)
cart.vertices = element[0]
joint.positions = element[1]
pole.vertices = element[2]
mass.positions = element[3]
action = env.action_space.sample()
observation, reward, done, info = env.step(action)
ts2 = time()
it_time = ts2 - ts1
all_it_time += it_time
if done:
break
temp_list = []
to_pull = t//max_calls
if max_calls > t:
to_pull = 1
for j in range(max_calls):
temp_list.append(cache[i_episode][to_pull*i])
iterator = list(iterator) + temp_list
del cache
# Replay the cached frames at a limited rate
playback_limiter = RateLimiter(max_calls=max_calls, period=period)
for element in iterator:
    with playback_limiter:
        cart.vertices = element[0]
        joint.positions = element[1]
        pole.vertices = element[2]
        mass.positions = element[3]
plot.display()
```
```
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import pickle
import numpy as np
import cv2
from moviepy.editor import VideoFileClip
import math
import glob
class Left_Right:
last_L_points = []
last_R_points = []
def __init__(self, last_L_points, last_R_points):
self.last_L_points = last_L_points
self.last_R_points = last_R_points
calib_image = mpimg.imread(r'C:\Users\pramo\Documents\Project4\camera_cal\calibration1.jpg')
plt.imshow(calib_image)
objp = np.zeros((6*9,3), np.float32)
objp[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1,2)
# Arrays to store object points and image points from all the images.
objpoints = [] # 3d points in real world space
imgpoints = [] # 2d points in image plane.
# Make a list of calibration images
images = glob.glob(r'C:\Users\pramo\Documents\Project4\camera_cal\calibration*.jpg')
show_images = []
# Step through the list and search for chessboard corners
for fname in images:
img = cv2.imread(fname)
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# Find the chessboard corners
ret, corners = cv2.findChessboardCorners(gray, (9,6),None)
# If found, add object points, image points
if ret == True:
objpoints.append(objp)
imgpoints.append(corners)
# Draw and display the corners
img = cv2.drawChessboardCorners(img, (9,6), corners, ret)
show_images.append (img)
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)
dist_pickle = {}
dist_pickle["mtx"] = mtx
dist_pickle["dist"] = dist
pickle.dump( dist_pickle, open( "wide_dist_pickle.p", "wb" ) )
fig=plt.figure(figsize=(20, 20))
columns = 2
rows = 10
for i in range(len(show_images)):
j= i+1
img = show_images[i].squeeze()
fig.add_subplot(rows, columns, j)
plt.imshow(img, cmap="gray")
plt.show()
dist_pickle = pickle.load( open( "wide_dist_pickle.p", "rb" ) )
mtx = dist_pickle["mtx"]
dist = dist_pickle["dist"]
def cal_undistort(img, mtx, dist):
return cv2.undistort(img, mtx, dist, None, mtx)
def gray_image(img):
thresh = (200, 220)
gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
binary = np.zeros_like(gray)
binary[(gray > thresh[0]) & (gray <= thresh[1])] = 1
return binary
def abs_sobel_img(img, orient='x', thresh_min=0, thresh_max=255):
gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
if orient == 'x':
abs_sobel = np.absolute(cv2.Sobel(gray , cv2.CV_64F, 1, 0))
if orient == 'y':
abs_sobel = np.absolute(cv2.Sobel(gray , cv2.CV_64F, 0, 1))
scaled_sobel = np.uint8(255*abs_sobel/np.max(abs_sobel))
abs_sobel_output = np.zeros_like(scaled_sobel)
abs_sobel_output[(scaled_sobel >= thresh_min) & (scaled_sobel <= thresh_max)] = 1
return abs_sobel_output
def hls_select(img, thresh_min=0, thresh_max=255):
hls = cv2.cvtColor(img, cv2.COLOR_RGB2HLS)
s_channel = hls[:,:,2]
binary_output = np.zeros_like(s_channel)
binary_output[(s_channel > thresh_min) & (s_channel <= thresh_max)] = 1
return binary_output
#hls_binary = hls_select(image, thresh=(90, 255))
def wrap_transform(img, inverse ='TRUE'):
img_size = (img.shape[1], img.shape[0])
src = np.float32(
[[(img_size[0] / 2) - 55, img_size[1] / 2 + 100],
[((img_size[0] / 6) - 10), img_size[1]],
[(img_size[0] * 5 / 6) + 60, img_size[1]],
[(img_size[0] / 2 + 55), img_size[1] / 2 + 100]])
dst = np.float32(
[[(img_size[0] / 4), 0],
[(img_size[0] / 4), img_size[1]],
[(img_size[0] * 3 / 4), img_size[1]],
[(img_size[0] * 3 / 4), 0]])
if inverse == 'FALSE':
M = cv2.getPerspectiveTransform(src, dst)
if inverse == 'TRUE':
M = cv2.getPerspectiveTransform(dst, src)
return cv2.warpPerspective(img, M, img_size, flags=cv2.INTER_LINEAR)
def combined_image(img):
undisort_image = cal_undistort(img, mtx, dist)
W_image = wrap_transform(undisort_image, inverse ='FALSE')
grayimage = gray_image(W_image )
sobelx = abs_sobel_img(W_image,'x', 20, 100)
s_binary = hls_select(W_image, 150, 255)
color_binary = np.dstack(( np.zeros_like(sobelx), sobelx, s_binary)) * 255
combined_binary = np.zeros_like(sobelx)
combined_binary[(s_binary == 1) | (sobelx == 1) | (grayimage == 1)] = 1
return undisort_image, sobelx, s_binary, combined_binary, color_binary, W_image
img = cv2.imread(r'C:\Users\pramo\Documents\Project4\camera_cal\calibration2.jpg')
undisort_image = cal_undistort(img, mtx, dist)
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 10))
f.tight_layout()
ax1.imshow(img)
ax1.set_title('Original Image', fontsize=10)
ax2.imshow(undisort_image, cmap="gray")
ax2.set_title('undisort_image', fontsize=10)
plt.subplots_adjust(left=0., right=1, top=0.9, bottom=0.)
img = cv2.imread(r'C:\Users\pramo\Documents\Project4\test_images\test6.jpg')
undisort_image, sobelx, s_binary, combined_binary, color_binary, W_image = combined_image(img)
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 10))
f.tight_layout()
ax1.imshow(img)
ax1.set_title('Original Image', fontsize=10)
ax2.imshow(undisort_image, cmap="gray")
ax2.set_title('undisort_image', fontsize=10)
plt.subplots_adjust(left=0., right=1, top=0.9, bottom=0.)
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 10))
f.tight_layout()
ax1.imshow(sobelx)
ax1.set_title('sobelx', fontsize=10)
ax2.imshow(s_binary, cmap="gray")
ax2.set_title('s_binary', fontsize=10)
plt.subplots_adjust(left=0., right=1, top=0.9, bottom=0.)
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 10))
f.tight_layout()
ax1.imshow(color_binary)
ax1.set_title('color_binary', fontsize=10)
ax2.imshow(combined_binary, cmap="gray")
ax2.set_title('combined_binary', fontsize=10)
plt.subplots_adjust(left=0., right=1, top=0.9, bottom=0.)
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 10))
f.tight_layout()
ax1.imshow(W_image)
ax1.set_title('W_image', fontsize=10)
ax2.imshow(img)
ax2.set_title('Original Image', fontsize=10)
plt.subplots_adjust(left=0., right=1, top=0.9, bottom=0.)
def hist(img):
return np.sum(img[img.shape[0]//2:,:], axis=0)
def find_lane_pixels(binary_warped, image_show = True):
# Take a histogram of the bottom half of the image
histogram = hist(binary_warped)
# Create an output image to draw on and visualize the result
out_img = np.dstack((binary_warped, binary_warped, binary_warped))
# Find the peak of the left and right halves of the histogram
# These will be the starting point for the left and right lines
midpoint = np.int(histogram.shape[0]//2)
leftx_base = np.argmax(histogram[:midpoint])
rightx_base = np.argmax(histogram[midpoint:]) + midpoint
# HYPERPARAMETERS
# Choose the number of sliding windows
nwindows = 8
# Set the width of the windows +/- margin
margin = 150
# Set minimum number of pixels found to recenter window
minpix = 50
# Set height of windows - based on nwindows above and image shape
window_height = np.int(binary_warped.shape[0]//nwindows)
# Identify the x and y positions of all nonzero pixels in the image
nonzero = binary_warped.nonzero()
nonzeroy = np.array(nonzero[0])
nonzerox = np.array(nonzero[1])
# Current positions to be updated later for each window in nwindows
leftx_current = leftx_base
rightx_current = rightx_base
# Create empty lists to receive left and right lane pixel indices
left_lane_inds = []
right_lane_inds = []
# Step through the windows one by one
for window in range(nwindows):
# Identify window boundaries in x and y (and right and left)
win_y_low = binary_warped.shape[0] - (window+1)*window_height
win_y_high = binary_warped.shape[0] - window*window_height
win_xleft_low = leftx_current - margin
win_xleft_high = leftx_current + margin
win_xright_low = rightx_current - margin
win_xright_high = rightx_current + margin
# Draw the windows on the visualization image
cv2.rectangle(out_img,(win_xleft_low,win_y_low),
(win_xleft_high,win_y_high),(0,255,0), 2)
cv2.rectangle(out_img,(win_xright_low,win_y_low),
(win_xright_high,win_y_high),(0,255,0), 2)
# Identify the nonzero pixels in x and y within the window #
good_left_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) &
(nonzerox >= win_xleft_low) & (nonzerox < win_xleft_high)).nonzero()[0]
good_right_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) &
(nonzerox >= win_xright_low) & (nonzerox < win_xright_high)).nonzero()[0]
# Append these indices to the lists
left_lane_inds.append(good_left_inds)
right_lane_inds.append(good_right_inds)
# If you found > minpix pixels, recenter next window on their mean position
if len(good_left_inds) > minpix:
leftx_current = np.int(np.mean(nonzerox[good_left_inds]))
if len(good_right_inds) > minpix:
rightx_current = np.int(np.mean(nonzerox[good_right_inds]))
# Concatenate the arrays of indices (previously was a list of lists of pixels)
try:
left_lane_inds = np.concatenate(left_lane_inds)
right_lane_inds = np.concatenate(right_lane_inds)
except ValueError:
# Avoids an error if the above is not implemented fully
pass
# Extract left and right line pixel positions
leftx = nonzerox[left_lane_inds]
lefty = nonzeroy[left_lane_inds]
rightx = nonzerox[right_lane_inds]
righty = nonzeroy[right_lane_inds]
left_fit = np.polyfit(lefty, leftx, 2)
right_fit = np.polyfit(righty, rightx, 2)
if image_show == True:
#Generate x and y values for plotting
ploty = np.linspace(0, binary_warped.shape[0]-1, binary_warped.shape[0] )
try:
left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]
right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2]
except TypeError:
#Avoids an error if `left` and `right_fit` are still none or incorrect
print('The function failed to fit a line!')
left_fitx = 1*ploty**2 + 1*ploty
right_fitx = 1*ploty**2 + 1*ploty
## Visualization ##
# Colors in the left and right lane regions
out_img[lefty, leftx] = [255, 0, 0]
out_img[righty, rightx] = [0, 0, 255]
if image_show == True:
return out_img, left_fit, right_fit
else:
return left_fit, right_fit
images = glob.glob(r'C:\Users\pramo\Documents\Project4\test_images\test*.jpg')
show_images = []
# Step through the list and search for chessboard corners
for fname in images:
img = cv2.imread(fname)
undisort_image, sobelx, s_binary, combined_binary, color_binary, W_image = combined_image(img)
outImage, left_fit, right_fit = find_lane_pixels(combined_binary, image_show = True)
show_images.append (outImage)
fig=plt.figure(figsize=(20, 20))
columns = 2
rows = 4
for i in range(len(show_images)):
j= i+1
img = show_images[i].squeeze()
fig.add_subplot(rows, columns, j)
plt.imshow(img)
plt.show()
def fit_poly(img_shape, left_fitn, right_fitn):
# Generate x and y values for plotting
ploty = np.linspace(0, img_shape[0]-1, img_shape[0])
### TO-DO: Calc both polynomials using ploty, left_fit and right_fit ###
left_fitx = left_fitn[0]*ploty**2 + left_fitn[1]*ploty + left_fitn[2]
right_fitx = right_fitn[0]*ploty**2 + right_fitn[1]*ploty + right_fitn[2]
return left_fitx, right_fitx, ploty
def fit_polynomial(binary_warped, left_fit, right_fit, image_show = True):
# Find our lane pixels first
margin = 10
nonzero = binary_warped.nonzero()
nonzeroy = np.array(nonzero[0])
nonzerox = np.array(nonzero[1])
left_lane_inds = ((nonzerox > (left_fit[0]*(nonzeroy**2) + left_fit[1]*nonzeroy +
left_fit[2] - margin)) & (nonzerox < (left_fit[0]*(nonzeroy**2) +
left_fit[1]*nonzeroy + left_fit[2] + margin)))
right_lane_inds = ((nonzerox > (right_fit[0]*(nonzeroy**2) + right_fit[1]*nonzeroy +
right_fit[2] - margin)) & (nonzerox < (right_fit[0]*(nonzeroy**2) +
right_fit[1]*nonzeroy + right_fit[2] + margin)))
# Again, extract left and right line pixel positions
leftx = nonzerox[left_lane_inds]
lefty = nonzeroy[left_lane_inds]
rightx = nonzerox[right_lane_inds]
righty = nonzeroy[right_lane_inds]
# Color in left and right line pixels
left_fitn = np.polyfit(lefty, leftx, 2)
right_fitn = np.polyfit(righty, rightx, 2)
if image_show == True:
left_fitx, right_fitx, ploty = fit_poly(binary_warped.shape, left_fitn, right_fitn)
out_img = np.dstack((binary_warped, binary_warped, binary_warped))*255
window_img = np.zeros_like(out_img)
out_img[nonzeroy[left_lane_inds], nonzerox[left_lane_inds]] = [255, 0, 0]
out_img[nonzeroy[right_lane_inds], nonzerox[right_lane_inds]] = [0, 0, 255]
# Generate a polygon to illustrate the search window area
# And recast the x and y points into usable format for cv2.fillPoly()
left_line_window1 = np.array([np.transpose(np.vstack([left_fitx-margin, ploty]))])
left_line_window2 = np.array([np.flipud(np.transpose(np.vstack([left_fitx+margin,
ploty])))])
left_line_pts = np.hstack((left_line_window1, left_line_window2))
right_line_window1 = np.array([np.transpose(np.vstack([right_fitx-margin, ploty]))])
right_line_window2 = np.array([np.flipud(np.transpose(np.vstack([right_fitx+margin,
ploty])))])
right_line_pts = np.hstack((right_line_window1, right_line_window2))
# Draw the lane onto the warped blank image
cv2.fillPoly(window_img, np.int_([left_line_pts]), (0,255, 0))
cv2.fillPoly(window_img, np.int_([right_line_pts]), (0,255, 0))
out_img = cv2.addWeighted(out_img, 1, window_img, 0.3, 0)
## End visualization steps ##
if image_show == True:
return out_img
else:
return left_fitn, right_fitn
img = cv2.imread(r'C:\Users\pramo\Documents\Project4\test_images\test8.jpg')
undisort_image, sobelx, s_binary, combined_binary, color_binary, W_image = combined_image(img)
outImage = fit_polynomial(combined_binary, left_fit, right_fit, image_show = True)
plt.imshow(outImage)
def center(X_pointL, X_pointR):
mid_pointx = (X_pointL + X_pointR)/2
image_mid_pointx = 640
dist = distance(mid_pointx, image_mid_pointx)
dist = dist*(3.7/700)
return dist, mid_pointx
def distance(pointL, pointR):
return math.sqrt((pointL - pointR)**2)
def measure_curvature_pixels(img_shape, left_fit, right_fit):
ym_per_pix = 30/720 # meters per pixel in y dimension
xm_per_pix = 3.7/700 # meters per pixel in x dimension
# Start by generating our fake example data
# Make sure to feed in your real data instead in your project!
leftx, rightx, ploty = fit_poly(img_shape, left_fit, right_fit)
leftx = leftx[::-1] # Reverse to match top-to-bottom in y
rightx = rightx[::-1] # Reverse to match top-to-bottom in y
first_element_L = leftx[-720]
first_element_R = rightx[-720]
center_dist, mid_pointx = center(first_element_L, first_element_R)
# Fit a second order polynomial to pixel positions in each fake lane line
# Fit new polynomials to x,y in world space
left_fit_cr = np.polyfit(ploty*ym_per_pix, leftx*xm_per_pix, 2)
right_fit_cr = np.polyfit(ploty*ym_per_pix, rightx*xm_per_pix, 2)
# Define y-value where we want radius of curvature
# We'll choose the maximum y-value, corresponding to the bottom of the image
y_eval = np.max(ploty)
# Calculation of R_curve (radius of curvature)
left_curverad = ((1 + (2*left_fit_cr[0]*y_eval*ym_per_pix + left_fit_cr[1])**2)**1.5) / np.absolute(2*left_fit_cr[0])
right_curverad = ((1 + (2*right_fit_cr[0]*y_eval*ym_per_pix + right_fit_cr[1])**2)**1.5) / np.absolute(2*right_fit_cr[0])
return left_curverad, right_curverad, center_dist, mid_pointx
def Sanity_Check(img_shape, left_fit, right_fit):
xm_per_pix = 3.7/700 # meters per pixel in x dimension
ploty = np.linspace(0, 719, num=720)
left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]
right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2]
left_fitx, right_fitx, ploty = fit_poly(img_shape, left_fit, right_fit)
left_fitx = left_fitx[::-1] # Reverse to match top-to-bottom in y
right_fitx = right_fitx[::-1] # Reverse to match top-to-bottom in y
last_element_L = left_fitx[-1]
last_element_R = right_fitx [-1]
#print(last_element_L)
mid_element_L = left_fitx[-360]
mid_element_R = right_fitx [-360]
first_element_L = left_fitx[-720]
first_element_R = right_fitx [-720]
b_dist = (distance(last_element_L, last_element_R)*xm_per_pix)
m_dist = (distance(mid_element_L, mid_element_R)*xm_per_pix)
t_dist = (distance(first_element_L, first_element_R)*xm_per_pix)
return b_dist, m_dist, t_dist
def draw_poly(u_imag, binary_warped, left_fit, right_fit):
warp_zero = np.zeros_like(binary_warped).astype(np.uint8)
color_warp = np.dstack((warp_zero, warp_zero, warp_zero))
# Recast the x and y points into usable format for cv2.fillPoly()
left_fitx, right_fitx, ploty = fit_poly(binary_warped.shape, left_fit, right_fit)
pts_left = np.array([np.transpose(np.vstack([left_fitx, ploty]))])
pts_right = np.array([np.flipud(np.transpose(np.vstack([right_fitx, ploty])))])
pts = np.hstack((pts_left, pts_right))
pts_left = np.array([pts_left], np.int32)
pts_right = np.array([pts_right], np.int32)
# Draw the lane onto the warped blank image
cv2.fillPoly(color_warp, np.int_([pts]), (0,255, 0))
cv2.polylines(color_warp, pts_left, 0, (255,0,0), 40)
cv2.polylines(color_warp, pts_right, 0, (255,0,0), 40)
# Warp the blank back to original image space using inverse perspective matrix (Minv)
un_warped = wrap_transform(color_warp, inverse = 'TRUE')
# Combine the result with the original image
out_img = cv2.addWeighted(u_imag, 1, un_warped, 0.3, 0)
return out_img
image_show = False
def process_image(image):
left_fit = []
right_fit = []
undisort_image, sobelx, s_binary, combined_binary, color_binary, W_image = combined_image(image)
if len(Left_Right.last_L_points) == 0 or len(Left_Right.last_R_points) == 0:
left_fit, right_fit = find_lane_pixels(combined_binary, image_show = False)
else:
left_fit = Left_Right.last_L_points
right_fit = Left_Right.last_R_points
left_fit, right_fit = fit_polynomial(combined_binary, left_fit, right_fit, image_show = False)
b_dist, m_dist, t_dist = Sanity_Check(combined_binary.shape, left_fit, right_fit)
mean = (b_dist + m_dist + t_dist)/3
#print (t_dist)
if (3.8 > mean > 3.1) and (3.5 > t_dist > 3.1):
Left_Right.last_L_points = left_fit
Left_Right.last_R_points = right_fit
else:
left_fit = Left_Right.last_L_points
right_fit = Left_Right.last_R_points
L_curvature, R_Curvature, center_dist, mid_pointx = measure_curvature_pixels(combined_binary.shape, left_fit, right_fit)
curvature = (L_curvature + R_Curvature)/2
result = draw_poly(undisort_image, combined_binary, left_fit, right_fit)
TEXT = 'Center Curvature = %f(m)' %curvature
font = cv2.FONT_HERSHEY_SIMPLEX
cv2.putText(result, TEXT, (50,50), font, 1, (0, 255, 0), 2)
if (mid_pointx > 640):
TEXT = 'Away from center = %f(m - To Right)' %center_dist
font = cv2.FONT_HERSHEY_SIMPLEX
cv2.putText(result, TEXT, (50,100), font, 1, (0, 255, 0), 2)
else:
TEXT = 'Away from center = %f(m - To Left)' %center_dist
font = cv2.FONT_HERSHEY_SIMPLEX
cv2.putText(result, TEXT, (50,100), font, 1, (0, 255, 0), 2)
return result
img = cv2.imread(r'C:\Users\pramo\Documents\Project4\test_images\test8.jpg')
outImage = process_image(img)
plt.imshow(outImage)
output = 'project_video_out.mp4'
clip = VideoFileClip('project_video.mp4')
yellow_clip = clip.fl_image(process_image)
yellow_clip.write_videofile(output, audio=False)
```
# Getting started
## Installing Python
It is recommended that you install the full Anaconda Python 3.8 distribution, as it sets up your Python environment together with a number of commonly used packages that you'll need during this course. A guide on installing Anaconda can be found here: https://docs.anaconda.com/anaconda/install/. NB: You don't have to install the optional extras, such as the PyCharm editor.
For more instructions, take a look at: https://github.com/uvacreate/2021-coding-the-humanities/blob/master/setup.md.
If you completed all the steps and you have Python and Jupyter notebooks installed, open this file again as a notebook and continue with the content below. Good luck and have fun! 🎉
# Hello World
This notebook contains some code to allow you to check if everything runs as intended.
[Jupyter notebooks](https://jupyter.org) contain cells of Python code, or text written in [markdown](https://www.markdownguide.org/getting-started/). This cell for instance contains text written in markdown syntax. You can edit it by double clicking on it. You can create new cells using the "+" (top right bar), and you can run cells to 'execute' the markdown syntax they contain and see what happens.
The other type of cell contains Python code and needs to be executed. You can do this either by clicking on the cell and then on the play button at the top of the window, or by pressing `shift + ENTER`. Try this with the next cell, and you'll see the result of your first line of Python.
**For a more extended revision of these materials, see http://www.karsdorp.io/python-course (Chapter 1).**
```
# It is customary for your first program to print Hello World! This is how you do it in Python.
print("Hello World!")
# You can comment your code using '#'. What you write afterwards won't be interpreted as code.
# This comes in handy if you want to comment on smaller bits of your code. Or if you want to
# add a TODO for yourself to remind you that some code needs to be added or revised.
```
The code you write is executed from a certain *working directory* (we will see more when doing input/output).
You can access your working directory by using a *package* (bundle of Python code which does something for you) part of the so-called Python standard library: `os` (a package to interact with the operating system).
```
import os # we first import the package
os.getcwd() # we then can use some of its functionalities. In this case, we get the current working directory (cwd)
```
## Python versions

It is important that you at least run a version of Python that is being supported with security updates. Currently (Spring 2021), this means Python 3.6 or higher. You can see all current versions and their support dates on the [Python website](https://www.python.org/downloads/)
For this course it is recommended to have Python 3.8 installed, since every Python version adds, but sometimes also changes, functionality. If you recently installed Python through [Anaconda](https://www.anaconda.com/products/individual#), you're most likely running version 3.8!
Let's check the Python version you are using by importing the `sys` package. Try running the next cell and see its output.
```
import sys
print(sys.executable) # the path where the Python executable is located
print(sys.version) # its version
print(sys.version_info)
```
You now printed the version of Python you have installed.
You can also check the version of a package via its `__version__` property. A common package for working with tabular data is `pandas` (more on this package later). You can import the package and make it referenceable by a shorthand name by doing:
```
import pandas as pd # now 'pd' is the shorthand for the 'pandas' package
```
NB: Is this raising an error? Look further down for a (possible) explanation!
Now the `pandas` package can be called by typing `pd`. The version number of packages is usually stored in a _magic attribute_ or a _dunder_ (=double underscore) called `__version__`.
```
pd.__version__
```
The code above printed something without using the `print()` statement. Let's do the same, but this time by using a `print()` statement.
```
print(pd.__version__)
```
Can you spot the difference? Why do you think this is? What kind of datatype do you think the version number is? And what kind of datatype can be printed on your screen? We'll go over these differences and the involved datatypes during the first lecture and seminar.
If you want to know more about a (built-in) function of Python, you can check its manual online. The information on the `print()` function can be found in the manual for [built-in functions](https://docs.python.org/3.8/library/functions.html#print)
More on datatypes later on.
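If you want to check this for yourself, the short sketch below (assuming you already ran the `import pandas as pd` cell above) shows that the version number is stored as a string, and that `print()` writes it without the quotes you see when a cell simply echoes the value.
```python
version = pd.__version__
print(type(version))   # the version number is a str, not a number
print(version)         # print() writes the text itself, without surrounding quotes
```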
### Exercise
Try printing your own name using the `print()` function.
```
# TODO: print your own name
# TODO: print your own name and your age on one line
```
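If you are unsure where to start, here is a minimal sketch of one possible solution; the name and age are placeholders for your own.
```python
print("Ada Lovelace")        # your own name
print("Ada Lovelace", 36)    # name and age on one line; print() separates arguments with a space
```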
If all of the above cells were executed without any errors, you're clear to go!
However, if you did get an error, you should start debugging. Most of the time, the errors returned by Python are quite meaningful. Perhaps you got this message when trying to import the `pandas` package:
```python
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-26-981caee58ba7> in <module>
----> 1 import pandas as pd
ModuleNotFoundError: No module named 'pandas'
```
If you go over this error message, you can see:
1. The type of error, in this example `ModuleNotFoundError` with some extra explanation
2. The location in your code where the error occurred or was _raised_, indicated with the ----> arrow
In this case, you do not have this (external) package installed in your Python installation. Have you installed the full Anaconda package? You can resolve this error by installing the package from Python's package index ([PyPI](https://pypi.org/)), which is like a store for Python packages you can use in your code.
To install the `pandas` package (if missing), run in a cell:
```python
pip install pandas
```
Or to update the `pandas` package you already have installed:
```python
pip install pandas -U
```
Try this in the cell below!
```
# Try either installing or updating (if there is an update) your pandas package
# your code here
```
If you face other errors, then Google (or DuckDuckGo etc.) is your friend. You'll see tons of questions on Python-related problems on websites such as Stack Overflow. It's tempting to simply copy-paste a coding pattern from there into your own code, but if you do, make sure you fully understand what is going on. Also, in the assignments in this course, we ask you to:
1. Specify a URL or source of the website/book you got your copied code from
2. Explain in a _short_ text or through comments by line what the copied code is doing
This will be repeated during the lectures.
However, if you're still stuck, you can open a discussion in our [Canvas course](https://canvas.uva.nl/courses/22381/discussion_topics). You're also very much invited to engage in threads on the discussion board of others and help them out. Debugging, solving, and explaining these coding puzzles for sure makes you a better programmer!
# Basic stuff
The code below does some basic things using Python. Please check whether you know what it does and, if not, whether you can still figure it out. If this is all new to you, just work through the rest of this notebook by executing each cell and try to understand what happens.

The [first notebook](https://github.com/uvacreate/2021-coding-the-humanities/blob/master/notebooks/1_Basics.ipynb) that we're discussing in class is paced more slowly. You can already take a look at it if you want to work ahead. We'll be repeating the concepts below, and more.
If you think you already master these 'Python basics' and the material from the first notebook, then get into contact with us for some more challenging exercises!
## Variables and operations
```
a = 2
b = a
# Or, assign two variables at the same time
c, d = 10, 20
c
b += c
# Just typing a variable name in the Python interpreter (= terminal/shell/cell) also returns/prints its value
a
# Now, what's the value of b?
b
# Why the double equals sign? How is this different from the assignment b = a above?
a == b
# Because the ≠ sign is hard to find on your keyboard
a != b
s = "Hello World!"
print(s)
s[-1]
s[:5]
s[6:]
s[6:-1]
s
words = ["A", "list", "of", "strings"]
words
letters = list(s) # Names in green are reserved by Python: avoid using them as variable names
letters
```
If you have accidentally bound a value to the name of a built-in function of Python, you can undo this by restarting your 'kernel' in Jupyter Notebook. Click `Kernel` and then `Restart` in the bar at the top of the screen. This makes Python lose its memory of previously declared variables, which also means that you must re-run all cells again if you need their executions and outcomes.
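As a small illustration (and an alternative to restarting the kernel), the sketch below deliberately shadows the built-in `list` and then removes that binding again with `del`:
```python
list = ["oops"]    # accidentally reuse the name of the built-in list type
# list("abc")      # would now fail: TypeError: 'list' object is not callable
del list           # remove our variable, so the name refers to the built-in again
list("abc")        # works again and returns ['a', 'b', 'c']
```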
```
# Sets are unordered collections of unique elements
unique_letters = set(letters)
unique_letters
# Variables have a certain data type.
# Python is very flexible with allowing you to assign variables to data as you like
# If you need a certain data type, you need to check it explicitly
type(s)
print("If you forgot the value of variable 'a':", a)
type(a)
type(2.3)
type("Hello")
type(letters)
type(unique_letters)
```
#### Exercise
1. Create variables of each type: integer, float, text, list, and set.
2. Try using mathematical operators such as `+ - * / **` on the numerical datatypes (integer and float)
3. Print their value as a string
```
# Your code here
```
Hint: You can insert more cells by going to `Insert` and then `Insert Cell Above/Below` in this Jupyter Notebook.
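If you get stuck, the sketch below shows one possible way to approach the exercise; the values are arbitrary examples.
```python
an_integer = 3
a_float = 2.5
a_text = "coding the humanities"
a_list = [1, 2, 3]
a_set = {1, 2, 3}

# Mathematical operators on the numerical datatypes
print(an_integer + a_float)   # 5.5
print(an_integer * a_float)   # 7.5
print(an_integer / 2)         # 1.5 (division always returns a float)
print(an_integer ** 2)        # 9

# Print the values as strings
print(str(an_integer), str(a_float), str(a_list), str(a_set))
```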
|
github_jupyter
|
```
!pip install efficientnet
#import the libraries needed
import pandas as pd
import numpy as np
import os
import cv2
from tqdm import tqdm_notebook as tqdm
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from keras_preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense
import efficientnet.tfkeras as efn
import warnings
warnings.filterwarnings("ignore")
current_path = r'C:\Users\nguyent2\Desktop\Kaggle-Four-Shapes-Classification-Challenge\Kaggle Dataset\shapes'
circle_paths = os.listdir(os.path.join(current_path, 'circle'))
square_paths = os.listdir(os.path.join(current_path, 'square'))
star_paths = os.listdir(os.path.join(current_path, 'star'))
triangle_paths = os.listdir(os.path.join(current_path, 'triangle'))
print(f'We got {len(circle_paths)} circles, {len(square_paths)} squares, {len(star_paths)} stars, and {len(triangle_paths)} triangles' )
circles = pd.DataFrame()
squares = pd.DataFrame()
stars = pd.DataFrame()
triangles = pd.DataFrame()
# Build one dataframe per class: the image path plus a one-hot label over the four shape columns
for n, fname in enumerate(tqdm(circle_paths)):
    circles.loc[n, 'path'] = os.path.join(current_path, 'circle', fname)
    circles.loc[n, 'circle'] = 1
    circles.loc[n, 'square'] = 0
    circles.loc[n, 'star'] = 0
    circles.loc[n, 'triangle'] = 0
for n, fname in enumerate(tqdm(square_paths)):
    squares.loc[n, 'path'] = os.path.join(current_path, 'square', fname)
    squares.loc[n, 'circle'] = 0
    squares.loc[n, 'square'] = 1
    squares.loc[n, 'star'] = 0
    squares.loc[n, 'triangle'] = 0
for n, fname in enumerate(tqdm(star_paths)):
    stars.loc[n, 'path'] = os.path.join(current_path, 'star', fname)
    stars.loc[n, 'circle'] = 0
    stars.loc[n, 'square'] = 0
    stars.loc[n, 'star'] = 1
    stars.loc[n, 'triangle'] = 0
for n, fname in enumerate(tqdm(triangle_paths)):
    triangles.loc[n, 'path'] = os.path.join(current_path, 'triangle', fname)
    triangles.loc[n, 'circle'] = 0
    triangles.loc[n, 'square'] = 0
    triangles.loc[n, 'star'] = 0
    triangles.loc[n, 'triangle'] = 1
data = pd.concat([circles, squares, stars, triangles], axis=0).sample(frac=1.0, random_state=42).reset_index(drop=True)
plt.figure(figsize=(16,16))
for i in range(36):
plt.subplot(6,6,i+1)
img = cv2.imread(data.path[i])
plt.imshow(img)
plt.title(data.iloc[i, 1:].astype(float).idxmax())  # name of the column holding the 1 in the one-hot label
plt.axis('off')
train, test = train_test_split(data, test_size=.3, random_state=42)
train.shape, test.shape
example = train.sample(n=1).reset_index(drop=True)
example_data_gen = ImageDataGenerator(
rescale=1./255,
horizontal_flip=True,
vertical_flip=True,
)
example_gen = example_data_gen.flow_from_dataframe(example,
target_size=(200,200),
x_col="path",
y_col=['circle', 'square', 'star','triangle'],
class_mode='raw',
shuffle=False,
batch_size=32)
plt.figure(figsize=(20, 20))
for i in range(0, 9):
plt.subplot(3, 3, i+1)
for X_batch, _ in example_gen:
image = X_batch[0]
plt.imshow(image)
plt.axis('off')
break
test_data_gen= ImageDataGenerator(rescale=1./255)
train_data_gen= ImageDataGenerator(
rescale=1./255,
horizontal_flip=True,
vertical_flip=True,
)
train_generator=train_data_gen.flow_from_dataframe(train,
target_size=(200,200),
x_col="path",
y_col=['circle','square', 'star','triangle'],
class_mode='raw',
shuffle=False,
batch_size=32)
test_generator=test_data_gen.flow_from_dataframe(test,
target_size=(200,200),
x_col="path",
y_col=['circle', 'square','star','triangle'],
class_mode='raw',
shuffle=False,
batch_size=1)
def get_model():
base_model = efn.EfficientNetB0(weights='imagenet', include_top=False, pooling='avg', input_shape=(200, 200, 3))
x = base_model.output
predictions = Dense(4, activation='softmax')(x)
model = Model(inputs=base_model.input, outputs=predictions)
model.compile(optimizer='adam', loss='categorical_crossentropy',metrics=['accuracy'])
return model
model = get_model()
model.fit_generator(train_generator,
epochs=1,
steps_per_epoch=int(np.ceil(train_generator.n / 32)),
)
model.evaluate(test_generator)
pred_test = np.argmax(model.predict(test_generator, verbose=1), axis=1)
plt.figure(figsize=(24,24))
for i in range(100):
plt.subplot(10,10,i+1)
img = cv2.imread(test.reset_index(drop=True).path[i])
plt.imshow(img)
plt.title(test.reset_index(drop=True).iloc[0,1:].index[pred_test[i]])
plt.axis('off')
```
|
github_jupyter
|
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import math
import seaborn as sns
from sklearn import datasets
from sklearn import metrics
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score, f1_score, precision_score, recall_score
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
# !gdown --id '1S9iwczSf6KL5jMSmU20SXKCSD3BUx4o_' --output level-6.csv #GMPR_genus
!gdown --id '1q0yp1iM66BKvqee46bOuSZYwl_SJCTp0' --output level-6.csv #GMPR_species
train = pd.read_csv("level-6.csv")
train.head()
train.info()
from sklearn.preprocessing import LabelEncoder
labelencoder = LabelEncoder()
train["Diagnosis"] = labelencoder.fit_transform(train["Diagnosis"])
# test["Diagnosis"] = labelencoder.fit_transform(test["Diagnosis"])
# for i in range(len(train)):
# if train["Diagnosis"][i] == 'Cancer':
# train["Diagnosis"][i] = str(1)
# else:
# train["Diagnosis"][i] = str(0)
train
not_select = ["index", "Diagnosis"]
train_select = train.drop(not_select,axis=1)
df_final_select = train_select
```
# Random Forest Classifier
```
#Use RandomForestClassifier to predict Cancer
x = df_final_select
y = train["Diagnosis"]
# y = np.array(y,dtype=int)
X_train,X_test,y_train,y_test = train_test_split(x,y,test_size=0.2,random_state=0)
#RandomForest
rfc = RandomForestClassifier(n_estimators=1000)
rfc.fit(X_train,y_train)
y_predict = rfc.predict(X_test)
score_rfc = rfc.score(X_test,y_test)
score_rfc_train = rfc.score(X_train,y_train)
print("train_accuracy = ",score_rfc_train*100," %")
print("val_accuracy = ",score_rfc*100," %")
mat = confusion_matrix(y_test, y_predict)
sns.heatmap(mat.T, square=True, annot=True, fmt='d', cbar=False)
plt.xlabel('true label')
plt.ylabel('predicted label')
score_recall = recall_score(y_test, y_predict, average=None)
f1score = f1_score(y_test, y_predict, average="macro")
precisionscore = precision_score(y_test, y_predict, average=None)
auc_roc = roc_auc_score(y_test, y_predict)
print("precision = ",precisionscore)
print("recall = ",score_recall)
print("auc_roc = ",auc_roc)
print("f1_score = ",f1score)
with open('RF_result.csv','w') as f:
f.write('Precision_Normal,Precision_Cancer,Recall_Normal,Recall_Cancer,Auc_Score,F1_Score,')
f.write('\n')
f.write(str(precisionscore[0])+','+str(precisionscore[1])+','+str(score_recall[0])+','+str(score_recall[1])+','+str(auc_roc)+','+str(f1score))
```
|
github_jupyter
|
```
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '3'
from tensor2tensor.data_generators import problem
from tensor2tensor.data_generators import text_problems
from tensor2tensor.data_generators import translate
from tensor2tensor.layers import common_attention
from tensor2tensor.utils import registry
from tensor2tensor import problems
import tensorflow as tf
import os
import logging
import sentencepiece as spm
import transformer_tag
from tensor2tensor.layers import modalities
vocab = 'sp10m.cased.t5.model'
sp = spm.SentencePieceProcessor()
sp.Load(vocab)
class Encoder:
def __init__(self, sp):
self.sp = sp
self.vocab_size = sp.GetPieceSize() + 100
def encode(self, s):
return self.sp.EncodeAsIds(s)
def decode(self, ids, strip_extraneous = False):
return self.sp.DecodeIds(list(ids))
d = [
{'class': 0, 'Description': 'PAD', 'salah': '', 'betul': ''},
{
'class': 1,
'Description': 'kesambungan subwords',
'salah': '',
'betul': '',
},
{
'class': 2,
'Description': 'tiada kesalahan',
'salah': '',
'betul': '',
},
{
'class': 3,
'Description': 'kesalahan frasa nama, Perkara yang diterangkan mesti mendahului "penerang"',
'salah': 'Cili sos',
'betul': 'sos cili',
},
{
'class': 4,
'Description': 'kesalahan kata jamak',
'salah': 'mereka-mereka',
'betul': 'mereka',
},
{
'class': 5,
'Description': 'kesalahan kata penguat',
'salah': 'sangat tinggi sekali',
'betul': 'sangat tinggi',
},
{
'class': 6,
'Description': 'kata adjektif dan imbuhan "ter" tanpa penguat.',
'salah': 'Sani mendapat markah yang tertinggi sekali.',
'betul': 'Sani mendapat markah yang tertinggi.',
},
{
'class': 7,
'Description': 'kesalahan kata hubung',
'salah': 'Sally sedang membaca bila saya tiba di rumahnya.',
'betul': 'Sally sedang membaca apabila saya tiba di rumahnya.',
},
{
'class': 8,
'Description': 'kesalahan kata bilangan',
'salah': 'Beribu peniaga tidak membayar cukai pendapatan.',
'betul': 'Beribu-ribu peniaga tidak membayar cukai pendapatan',
},
{
'class': 9,
'Description': 'kesalahan kata sendi',
'salah': 'Umar telah berpindah daripada sekolah ini bulan lalu.',
'betul': 'Umar telah berpindah dari sekolah ini bulan lalu.',
},
{
'class': 10,
'Description': 'kesalahan penjodoh bilangan',
'salah': 'Setiap orang pelajar',
'betul': 'Setiap pelajar.',
},
{
'class': 11,
'Description': 'kesalahan kata ganti diri',
'salah': 'Pencuri itu telah ditangkap. Beliau dibawa ke balai polis.',
'betul': 'Pencuri itu telah ditangkap. Dia dibawa ke balai polis.',
},
{
'class': 12,
'Description': 'kesalahan ayat pasif',
'salah': 'Cerpen itu telah dikarang oleh saya.',
'betul': 'Cerpen itu telah saya karang.',
},
{
'class': 13,
'Description': 'kesalahan kata tanya',
'salah': 'Kamu berasal dari manakah ?',
'betul': 'Kamu berasal dari mana ?',
},
{
'class': 14,
'Description': 'kesalahan tanda baca',
'salah': 'Kamu berasal dari manakah .',
'betul': 'Kamu berasal dari mana ?',
},
{
'class': 15,
'Description': 'kesalahan kata kerja tak transitif',
'salah': 'Dia kata kepada saya',
'betul': 'Dia berkata kepada saya',
},
{
'class': 16,
'Description': 'kesalahan kata kerja transitif',
'salah': 'Dia suka baca buku',
'betul': 'Dia suka membaca buku',
},
{
'class': 17,
'Description': 'penggunaan kata yang tidak tepat',
'salah': 'Tembuk Besar negeri Cina dibina oleh Shih Huang Ti.',
'betul': 'Tembok Besar negeri Cina dibina oleh Shih Huang Ti',
},
]
class Tatabahasa:
def __init__(self, d):
self.d = d
self.kesalahan = {i['Description']: no for no, i in enumerate(self.d)}
self.reverse_kesalahan = {v: k for k, v in self.kesalahan.items()}
self.vocab_size = len(self.d)
def encode(self, s):
return [self.kesalahan[i] for i in s]
def decode(self, ids, strip_extraneous = False):
return [self.reverse_kesalahan[i] for i in ids]
@registry.register_problem
class Grammar(text_problems.Text2TextProblem):
"""grammatical error correction."""
def feature_encoders(self, data_dir):
encoder = Encoder(sp)
t = Tatabahasa(d)
return {'inputs': encoder, 'targets': encoder, 'targets_error_tag': t}
def hparams(self, defaults, model_hparams):
super(Grammar, self).hparams(defaults, model_hparams)
if 'use_error_tags' not in model_hparams:
model_hparams.add_hparam('use_error_tags', True)
if 'middle_prediction' not in model_hparams:
model_hparams.add_hparam('middle_prediction', False)
if 'middle_prediction_layer_factor' not in model_hparams:
model_hparams.add_hparam('middle_prediction_layer_factor', 2)
if 'ffn_in_prediction_cascade' not in model_hparams:
model_hparams.add_hparam('ffn_in_prediction_cascade', 1)
if 'error_tag_embed_size' not in model_hparams:
model_hparams.add_hparam('error_tag_embed_size', 12)
if model_hparams.use_error_tags:
defaults.modality[
'targets_error_tag'
] = modalities.ModalityType.SYMBOL
error_tag_vocab_size = self._encoders[
'targets_error_tag'
].vocab_size
defaults.vocab_size['targets_error_tag'] = error_tag_vocab_size
def example_reading_spec(self):
data_fields, _ = super(Grammar, self).example_reading_spec()
data_fields['targets_error_tag'] = tf.VarLenFeature(tf.int64)
return data_fields, None
@property
def approx_vocab_size(self):
return 32100
@property
def is_generate_per_split(self):
return False
@property
def dataset_splits(self):
return [
{'split': problem.DatasetSplit.TRAIN, 'shards': 200},
{'split': problem.DatasetSplit.EVAL, 'shards': 1},
]
DATA_DIR = os.path.expanduser('t2t-tatabahasa/data')
TMP_DIR = os.path.expanduser('t2t-tatabahasa/tmp')
TRAIN_DIR = os.path.expanduser('t2t-tatabahasa/train-base')
PROBLEM = 'grammar'
t2t_problem = problems.problem(PROBLEM)
MODEL = 'transformer_tag'
HPARAMS = 'transformer_base'
from tensor2tensor.utils.trainer_lib import create_run_config, create_experiment
from tensor2tensor.utils.trainer_lib import create_hparams
from tensor2tensor.utils import registry
from tensor2tensor import models
from tensor2tensor import problems
from tensor2tensor.utils import trainer_lib
X = tf.placeholder(tf.int32, [None, None], name = 'x_placeholder')
Y = tf.placeholder(tf.int32, [None, None], name = 'y_placeholder')
targets_error_tag = tf.placeholder(tf.int32, [None, None], 'error_placeholder')
X_seq_len = tf.count_nonzero(X, 1, dtype=tf.int32)
maxlen_decode = tf.reduce_max(X_seq_len)
x = tf.expand_dims(tf.expand_dims(X, -1), -1)
y = tf.expand_dims(tf.expand_dims(Y, -1), -1)
targets_error_tag_ = tf.expand_dims(tf.expand_dims(targets_error_tag, -1), -1)
features = {
"inputs": x,
"targets": y,
"target_space_id": tf.constant(1, dtype=tf.int32),
'targets_error_tag': targets_error_tag,
}
Modes = tf.estimator.ModeKeys
hparams = trainer_lib.create_hparams(HPARAMS, data_dir=DATA_DIR, problem_name=PROBLEM)
hparams.filter_size = 3072
hparams.hidden_size = 768
hparams.num_heads = 12
hparams.num_hidden_layers = 8
hparams.vocab_divisor = 128
hparams.dropout = 0.1
hparams.max_length = 256
# LM
hparams.label_smoothing = 0.0
hparams.shared_embedding_and_softmax_weights = False
hparams.eval_drop_long_sequences = True
hparams.max_length = 256
hparams.multiproblem_mixing_schedule = 'pretrain'
# tpu
hparams.symbol_modality_num_shards = 1
hparams.attention_dropout_broadcast_dims = '0,1'
hparams.relu_dropout_broadcast_dims = '1'
hparams.layer_prepostprocess_dropout_broadcast_dims = '1'
model = registry.model(MODEL)(hparams, Modes.PREDICT)
# logits = model(features)
# logits
# sess = tf.InteractiveSession()
# sess.run(tf.global_variables_initializer())
# l = sess.run(logits, feed_dict = {X: [[10,10, 10, 10,10,1],[10,10, 10, 10,10,1]],
# Y: [[10,10, 10, 10,10,1],[10,10, 10, 10,10,1]],
# targets_error_tag: [[10,10, 10, 10,10,1],
# [10,10, 10, 10,10,1]]})
features = {
"inputs": x,
"target_space_id": tf.constant(1, dtype=tf.int32),
}
with tf.variable_scope(tf.get_variable_scope(), reuse = False):
fast_result = model._greedy_infer(features, maxlen_decode)
result_seq = tf.identity(fast_result['outputs'], name = 'greedy')
result_tag = tf.identity(fast_result['outputs_tag'], name = 'tag_greedy')
from tensor2tensor.layers import common_layers
def accuracy_per_sequence(predictions, targets, weights_fn = common_layers.weights_nonzero):
padded_predictions, padded_labels = common_layers.pad_with_zeros(predictions, targets)
weights = weights_fn(padded_labels)
padded_labels = tf.to_int32(padded_labels)
padded_predictions = tf.to_int32(padded_predictions)
not_correct = tf.to_float(tf.not_equal(padded_predictions, padded_labels)) * weights
axis = list(range(1, len(padded_predictions.get_shape())))
correct_seq = 1.0 - tf.minimum(1.0, tf.reduce_sum(not_correct, axis=axis))
return tf.reduce_mean(correct_seq)
def padded_accuracy(predictions, targets, weights_fn = common_layers.weights_nonzero):
padded_predictions, padded_labels = common_layers.pad_with_zeros(predictions, targets)
weights = weights_fn(padded_labels)
padded_labels = tf.to_int32(padded_labels)
padded_predictions = tf.to_int32(padded_predictions)
n = tf.to_float(tf.equal(padded_predictions, padded_labels)) * weights
d = tf.reduce_sum(weights)
return tf.reduce_sum(n) / d
acc_seq = padded_accuracy(result_seq, Y)
acc_tag = padded_accuracy(result_tag, targets_error_tag)
ckpt_path = tf.train.latest_checkpoint(os.path.join(TRAIN_DIR))
ckpt_path
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
var_lists = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)
saver = tf.train.Saver(var_list = var_lists)
saver.restore(sess, ckpt_path)
import pickle
with open('../pure-text/dataset-tatabahasa.pkl', 'rb') as fopen:
data = pickle.load(fopen)
encoder = Encoder(sp)
def get_xy(row, encoder):
x, y, tag = [], [], []
for i in range(len(row[0])):
t = encoder.encode(row[0][i][0])
y.extend(t)
t = encoder.encode(row[1][i][0])
x.extend(t)
tag.extend([row[1][i][1]] * len(t))
# EOS
x.append(1)
y.append(1)
tag.append(0)
return x, y, tag
import numpy as np
x, y, tag = get_xy(data[10], encoder)
e = encoder.encode('Pilih mana jurusan yang sesuai dengan kebolehan anda dalam peperiksaan Sijil Pelajaran Malaysia semasa memohon kemasukan ke institusi pengajian tinggi.') + [1]
r = sess.run(fast_result,
feed_dict = {X: [e]})
r['outputs_tag']
encoder.decode(r['outputs'][0].tolist())
encoder.decode(x)
encoder.decode(y)
hparams.problem.example_reading_spec()[0]
def parse(serialized_example):
data_fields = hparams.problem.example_reading_spec()[0]
features = tf.parse_single_example(
serialized_example, features = data_fields
)
for k in features.keys():
features[k] = features[k].values
return features
dataset = tf.data.TFRecordDataset('t2t-tatabahasa/data/grammar-dev-00000-of-00001')
dataset = dataset.map(parse, num_parallel_calls=32)
dataset = dataset.padded_batch(32,
padded_shapes = {
'inputs': tf.TensorShape([None]),
'targets': tf.TensorShape([None]),
'targets_error_tag': tf.TensorShape([None])
},
padding_values = {
'inputs': tf.constant(0, dtype = tf.int64),
'targets': tf.constant(0, dtype = tf.int64),
'targets_error_tag': tf.constant(0, dtype = tf.int64),
})
dataset = dataset.make_one_shot_iterator().get_next()
dataset
seqs, tags = [], []
index = 0
while True:
try:
d = sess.run(dataset)
s, t = sess.run([acc_seq, acc_tag], feed_dict = {X:d['inputs'],
Y: d['targets'],
targets_error_tag: d['targets_error_tag']})
seqs.append(s)
tags.append(t)
print(f'done {index}')
index += 1
except tf.errors.OutOfRangeError:  # reached the end of the evaluation dataset
break
np.mean(seqs), np.mean(tags)
saver = tf.train.Saver(tf.trainable_variables())
saver.save(sess, 'transformertag-base/model.ckpt')
strings = ','.join(
[
n.name
for n in tf.get_default_graph().as_graph_def().node
if ('Variable' in n.op
or 'Placeholder' in n.name
or 'greedy' in n.name
or 'tag_greedy' in n.name
or 'x_placeholder' in n.name
or 'self/Softmax' in n.name)
and 'adam' not in n.name
and 'beta' not in n.name
and 'global_step' not in n.name
and 'modality' not in n.name
and 'Assign' not in n.name
]
)
strings.split(',')
def freeze_graph(model_dir, output_node_names):
if not tf.gfile.Exists(model_dir):
raise AssertionError(
"Export directory doesn't exists. Please specify an export "
'directory: %s' % model_dir
)
checkpoint = tf.train.get_checkpoint_state(model_dir)
input_checkpoint = checkpoint.model_checkpoint_path
absolute_model_dir = '/'.join(input_checkpoint.split('/')[:-1])
output_graph = absolute_model_dir + '/frozen_model.pb'
clear_devices = True
with tf.Session(graph = tf.Graph()) as sess:
saver = tf.train.import_meta_graph(
input_checkpoint + '.meta', clear_devices = clear_devices
)
saver.restore(sess, input_checkpoint)
output_graph_def = tf.graph_util.convert_variables_to_constants(
sess,
tf.get_default_graph().as_graph_def(),
output_node_names.split(','),
)
with tf.gfile.GFile(output_graph, 'wb') as f:
f.write(output_graph_def.SerializeToString())
print('%d ops in the final graph.' % len(output_graph_def.node))
freeze_graph('transformertag-base', strings)
def load_graph(frozen_graph_filename):
with tf.gfile.GFile(frozen_graph_filename, 'rb') as f:
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
with tf.Graph().as_default() as graph:
tf.import_graph_def(graph_def)
return graph
g = load_graph('transformertag-base/frozen_model.pb')
x = g.get_tensor_by_name('import/x_placeholder:0')
greedy = g.get_tensor_by_name('import/greedy:0')
tag_greedy = g.get_tensor_by_name('import/tag_greedy:0')
test_sess = tf.InteractiveSession(graph = g)
test_sess.run([greedy, tag_greedy], feed_dict = {x:d['inputs']})
import tensorflow as tf
from tensorflow.tools.graph_transforms import TransformGraph
from glob import glob
tf.set_random_seed(0)
import tensorflow_text
import tf_sentencepiece
transforms = ['add_default_attributes',
'remove_nodes(op=Identity, op=CheckNumerics, op=Dropout)',
'fold_constants(ignore_errors=true)',
'fold_batch_norms',
'fold_old_batch_norms',
'quantize_weights(fallback_min=-10, fallback_max=10)',
'strip_unused_nodes',
'sort_by_execution_order']
pb = 'transformertag-base/frozen_model.pb'
input_graph_def = tf.GraphDef()
with tf.gfile.FastGFile(pb, 'rb') as f:
input_graph_def.ParseFromString(f.read())
transformed_graph_def = TransformGraph(input_graph_def,
['x_placeholder'],
['greedy', 'tag_greedy'], transforms)
with tf.gfile.GFile(f'{pb}.quantized', 'wb') as f:
f.write(transformed_graph_def.SerializeToString())
g = load_graph('transformertag-base/frozen_model.pb.quantized')
x = g.get_tensor_by_name('import/x_placeholder:0')
greedy = g.get_tensor_by_name('import/greedy:0')
tag_greedy = g.get_tensor_by_name('import/tag_greedy:0')
test_sess = tf.InteractiveSession(graph = g)
test_sess.run([greedy, tag_greedy], feed_dict = {x:d['inputs']})
```
|
github_jupyter
|
# Preprocess "ROC Stories" for Story Completion
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os
import glob
import pandas as pd
DATAPATH = '/path/to/ROCStories'
ROCstory_spring2016 = pd.read_csv(os.path.join(DATAPATH, "ROCStories__spring2016 - ROCStories_spring2016.csv"))
ROCstory_winter2017 = pd.read_csv(os.path.join(DATAPATH, "ROCStories_winter2017 - ROCStories_winter2017.csv"))
ROCstory_train = pd.concat([ROCstory_spring2016, ROCstory_winter2017])
len(ROCstory_train["storyid"].unique())
stories = ROCstory_train.loc[:, "sentence1":"sentence5"].values
```
## Train, Dev, Test
```
from sklearn.model_selection import train_test_split
train_and_dev, test_stories = train_test_split(stories, test_size=0.1)
train_stories, dev_stories = train_test_split(train_and_dev, test_size=1/9)
len(train_stories), len(dev_stories), len(test_stories)
```
### dev
```
import numpy as np
np.random.seed(1234)
dev_missing_indexes = np.random.randint(low=0, high=5, size=len(dev_stories))
dev_stories_with_missing = []
for st, mi in zip(dev_stories, dev_missing_indexes):
missing_sentence = st[mi]
remain_sentences = np.delete(st, mi)
dev_stories_with_missing.append([remain_sentences[0],
remain_sentences[1],
remain_sentences[2],
remain_sentences[3],
mi, missing_sentence])
dev_df = pd.DataFrame(dev_stories_with_missing,
columns=['stories_with_missing_sentence1',
'stories_with_missing_sentence2',
'stories_with_missing_sentence3',
'stories_with_missing_sentence4',
'missing_id', 'missing_sentence'])
dev_df.to_csv("./data/rocstories_completion_dev.csv", index=False)
```
### test
```
test_missing_indexes = np.random.randint(low=0, high=5, size=len(test_stories))
test_stories_with_missing = []
for st, mi in zip(test_stories, test_missing_indexes):
missing_sentence = st[mi]
remain_sentences = np.delete(st, mi)
test_stories_with_missing.append([remain_sentences[0],
remain_sentences[1],
remain_sentences[2],
remain_sentences[3],
mi, missing_sentence])
test_df = pd.DataFrame(test_stories_with_missing,
columns=['stories_with_missing_sentence1',
'stories_with_missing_sentence2',
'stories_with_missing_sentence3',
'stories_with_missing_sentence4',
'missing_id', 'missing_sentence'])
test_df.to_csv("./data/rocstories_completion_test.csv", index=False)
```
### train
```
train_df = pd.DataFrame(train_stories,
columns=['sentence1',
'sentence2',
'sentence3',
'sentence4',
'sentence5'])
train_df.to_csv("./data/rocstories_completion_train.csv", index=False)
```
## load saved data
```
train_df2 = pd.read_csv("./data/rocstories_completion_train.csv")
# train_df2.head()
dev_df2 = pd.read_csv("./data/rocstories_completion_dev.csv")
# dev_df2.head()
test_df2 = pd.read_csv("./data/rocstories_completion_test.csv")
# test_df2.head()
dev_df2.missing_id.value_counts()
test_df2.missing_id.value_counts()
```
### mini size dataset
```
train_mini, train_else = train_test_split(train_df, test_size=0.9)
len(train_mini)
train_mini.to_csv("./data/rocstories_completion_train_mini.csv", index=False)
dev_mini, dev_else = train_test_split(dev_df, test_size=0.9)
len(dev_mini)
dev_mini.to_csv("./data/rocstories_completion_dev_mini.csv", index=False)
```
|
github_jupyter
|
# Part 2: Introduction to Umami and the `Residual` Class
Umami is a package for calculating metrics for use with Earth surface dynamics models. This notebook is the second notebook in a three-part introduction to using umami.
## Scope of this tutorial
Before starting this tutorial, you should have completed [Part 1: Introduction to Umami and the `Metric` Class](IntroductionToMetric.ipynb).
In this tutorial you will learn the basic principles behind using the `Residual` class to compare models and data using terrain statistics.
If you have comments or questions about the notebooks, the best place to get help is through [GitHub Issues](https://github.com/TerrainBento/umami/issues).
To begin this example, we will import the required python packages.
```
import warnings
warnings.filterwarnings('ignore')
from io import StringIO
import numpy as np
from landlab import RasterModelGrid, imshow_grid
from umami import Residual
```
## Step 1 Create grids
Unlike the first notebook, here we need to compare a model and data. We will create two grids, the `model_grid` and the `data_grid`, each with a field called `topographic__elevation` attached to it. Both are of size (10x10). The `data_grid` slopes to the south-west, while the `model_grid` has some additional noise added to it.
First, we construct and plot the `data_grid`.
```
data_grid = RasterModelGrid((10, 10))
data_z = data_grid.add_zeros("node", "topographic__elevation")
data_z += data_grid.x_of_node + data_grid.y_of_node
imshow_grid(data_grid, data_z)
```
Next, we construct and plot `model_grid`. It differs only in that it has random noise added to the core nodes.
```
np.random.seed(42)
model_grid = RasterModelGrid((10, 10))
model_z = model_grid.add_zeros("node", "topographic__elevation")
model_z += model_grid.x_of_node + model_grid.y_of_node
model_z[model_grid.core_nodes] += np.random.randn(model_grid.core_nodes.size)
imshow_grid(model_grid, model_z)
```
We can difference the two grids to see how they differ. As expected, it looks like normally distributed noise.
```
imshow_grid(model_grid, data_z - model_z, cmap="seismic")
```
This example shows a difference map with 64 residuals on it. A more realistic application with a much larger domain would have tens of thousands. Methods of model analysis such as calibration and sensitivity analysis need model output, such as the topography shown here, to be distilled into a smaller number of values. This is the task that umami facilitates.
## Step 2: Construct an umami `Residual`
Similar to constructing a `Metric`, a residual is specified by a dictionary or YAML-style input file.
Here we repeat some of the content of the prior notebook:
Each calculation gets its own unique name (the key in the dictionary), and is associated with a value, a dictionary specifying exactly what should be calculated. The only value of the dictionary required by all umami calculations is `_func`, which indicates which of the [`umami.calculations`](https://umami.readthedocs.io/en/latest/umami.calculations.html) will be performed. Subsequent elements of this dictionary are the required inputs to the calculation function and are described in their documentation.
Note that some calculations listed in the [`umami.calculations`](https://umami.readthedocs.io/en/latest/umami.calculations.html) submodule are valid for both the umami `Metric` and `Residual` classes, while others are for `Residual`s only (the `Metric` class was covered in [Part 1](IntroductionToMetric.ipynb) of this notebook series).
The order that calculations are listed is read in as an [OrderedDict](https://docs.python.org/3/library/collections.html#collections.OrderedDict) and retained as the "calculation order".
In our example we will use the following dictionary:
```python
residuals = {
"me": {
"_func": "aggregate",
"method": "mean",
"field": "topographic__elevation"
},
"ep10": {
"_func": "aggregate",
"method": "percentile",
"field": "topographic__elevation",
"q": 10
}
}
```
This specifies calculation of the mean of `topographic__elevation` (to be called "me") and the 10th percentile `topographic__elevation` (called "ep10"). The equivalent portion of a YAML input file would look like:
```yaml
residuals:
me:
_func: aggregate
method: mean
field: topographic__elevation
ep10:
_func: aggregate
method: percentile
field: topographic__elevation
q: 10
```
The following code constructs the `Residual`. Note that the only difference from the prior notebook is that instead of specifying only one grid, here we provide two. Under the hood umami checks that the grids are compatible and will raise errors if they are not.
```
residuals = {
"me": {
"_func": "aggregate",
"method": "mean",
"field": "topographic__elevation"
},
"ep10": {
"_func": "aggregate",
"method": "percentile",
"field": "topographic__elevation",
"q": 10
}
}
residual = Residual(model_grid, data_grid, residuals=residuals)
```
To calculate the residuals, run the `calculate` bound method.
```
residual.calculate()
```
Just like the `Metric` class, the `Residual` has some useful methods and attributes.
`residual.names` gives the names as a list, in calculation order.
```
residual.names
```
`residual.values` gives the values as a list, in calculation order.
```
residual.values
```
And a function is available to get the value of a given metric.
```
residual.value("me")
```
## Step 5: Write output
The methods for writing output available in `Metric` are also provided by `Residual`.
```
out = StringIO()
residual.write_residuals_to_file(out, style="dakota")
file_contents = out.getvalue().splitlines()
for line in file_contents:
print(line.strip())
out = StringIO()
residual.write_residuals_to_file(out, style="yaml")
file_contents = out.getvalue().splitlines()
for line in file_contents:
print(line.strip())
```
# Next steps
Now that you have a sense for how the `Metric` and `Residual` classes are used, try the next notebook: [Part 3: Other IO options (using umami without Landlab or terrainbento)](OtherIO_options.ipynb).
|
github_jupyter
|
```
import pickle as pk
import pandas as pd
%pylab inline
y_dic = pk.load(open("labelDic.cPickle","rb"))
X_dic = pk.load(open("vectorDicGDIpair.cPickle","rb"))
df = pd.read_csv('dida_v2_full.csv', index_col=0).replace('CO', 1).replace('TD', 0).replace('UK', -1)
rd = np.vectorize(lambda x: round(x * 10)/10)
essA_changed = {}
essB_changed = {}
recA_changed = {}
recB_changed = {}
path_changed = {}
deef_changed = {}
for ddid in X_dic:
x1 = rd(array(X_dic[ddid])[ [2, 3, 6, 7, 8] ])
x2 = rd(array(df.loc[ddid])[ [2, 3, 6, 7, 9, 12] ])
if x1[0] != x2[0]: recA_changed[ddid] = (x1[0], x2[0])
if x1[1] != x2[1]: essA_changed[ddid] = (x1[1], x2[1])
if x1[2] != x2[2]: recB_changed[ddid] = (x1[2], x2[2])
if x1[3] != x2[3]: essB_changed[ddid] = (x1[3], x2[3])
if x1[4] != x2[4]: path_changed[ddid] = (x1[4], x2[4])
if y_dic[ddid] != x2[5]: deef_changed[ddid] = (y_dic[ddid], x2[5])
print(essA_changed)
print('Essentiality gene A lost: ' + ', '.join(sorted(essA_changed.keys())))
print('Essentiality gene B lost: ' + ', '.join(sorted(essB_changed.keys())))
print('Recessiveness gene A changed: dd207, 1.00 -> 0.15')
df_sapiens = pd.read_csv('Mus musculus_consolidated.csv').drop(['locus', 'datasets', 'datasetIDs', 'essentiality status'], 1)
df_sapiens.head()
genes = []
for k in df['Pair']:
g1, g2 = k.split('/')
if g1 not in genes:
genes.append(g1)
if g2 not in genes:
genes.append(g2)
genes = sorted(genes)
lookup_ess = {}
for line in array(df_sapiens):
name, ess = line
if type(name) is float: continue;
lookup_ess[name.upper()] = ess
import pickle
pathway_pickle = open('ess_pickle', 'wb')
pickle.dump(lookup_ess, pathway_pickle)
result_s = {}
for g in genes:
if g in lookup_ess:
result_s[g] = lookup_ess[g]
else:
result_s[g] = 'N/A'
print(g, 'not found.')
for key in result_s:
x = result_s[key]
if x == 'Essential':
result_s[key] = 1
elif x == 'Nonessential':
result_s[key] = 0
new_essA, new_essB = [], []
for pair in df['Pair']:
g1, g2 = pair.split('/')
new_essA.append(result_s[g1])
new_essB.append(result_s[g2])
new_essA = array(new_essA)
new_essB = array(new_essB)
df2 = pd.read_csv('dida_v2_full.csv', index_col=0)
new_essA[new_essA == 'N/A'] = 0.67
new_essB[new_essB == 'N/A'] = 0.62
df2['EssA'] = new_essA
df2['EssB'] = new_essB
df2.to_csv('dida_v2_full_newess.csv')
pd.read_csv('dida_v2_full_newess.csv', index_col=0)
new_essA[new_essA == 'N/A'] = 0
mean(array(new_essA).astype(int))
new_essB[new_essB == 'N/A'] = 0
mean(array(new_essB).astype(int))
for g in result_s:
print(g + ',' + str(result_s[g]))  # str() because the essentiality value is an integer for known genes
```
|
github_jupyter
|
# Operations on word vectors
Welcome to your first assignment of this week!
Because word embeddings are very computationally expensive to train, most ML practitioners will load a pre-trained set of embeddings.
**After this assignment you will be able to:**
- Load pre-trained word vectors, and measure similarity using cosine similarity
- Use word embeddings to solve word analogy problems such as Man is to Woman as King is to ______.
- Modify word embeddings to reduce their gender bias
Let's get started! Run the following cell to load the packages you will need.
```
import numpy as np
from w2v_utils import *
```
Next, let's load the word vectors. For this assignment, we will use 50-dimensional GloVe vectors to represent words. Run the following cell to load the `word_to_vec_map`.
```
words, word_to_vec_map = read_glove_vecs('data/glove.6B.50d.txt')
```
You've loaded:
- `words`: set of words in the vocabulary.
- `word_to_vec_map`: dictionary mapping words to their GloVe vector representation.
You've seen that one-hot vectors do not do a good job of capturing which words are similar. GloVe vectors provide much more useful information about the meaning of individual words. Let's now see how you can use GloVe vectors to decide how similar two words are.
# 1 - Cosine similarity
To measure how similar two words are, we need a way to measure the degree of similarity between two embedding vectors for the two words. Given two vectors $u$ and $v$, cosine similarity is defined as follows:
$$\text{CosineSimilarity(u, v)} = \frac {u . v} {||u||_2 ||v||_2} = cos(\theta) \tag{1}$$
where $u.v$ is the dot product (or inner product) of two vectors, $||u||_2$ is the norm (or length) of the vector $u$, and $\theta$ is the angle between $u$ and $v$. This similarity depends on the angle between $u$ and $v$. If $u$ and $v$ are very similar, their cosine similarity will be close to 1; if they are dissimilar, the cosine similarity will take a smaller value.
<img src="images/cosine_sim.png" style="width:800px;height:250px;">
<caption><center> **Figure 1**: The cosine of the angle between two vectors is a measure of how similar they are</center></caption>
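Before implementing the function, you can convince yourself of this behavior with two toy vectors. This small sketch uses plain NumPy directly, not the graded function below:
```python
import numpy as np

u = np.array([1.0, 1.0])
v = np.array([2.0, 2.0])   # points in the same direction as u
w = np.array([1.0, -1.0])  # perpendicular to u

print(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))  # close to 1.0 (very similar)
print(np.dot(u, w) / (np.linalg.norm(u) * np.linalg.norm(w)))  # 0.0 (unrelated directions)
```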
**Exercise**: Implement the function `cosine_similarity()` to evaluate similarity between word vectors.
**Reminder**: The norm of $u$ is defined as $ ||u||_2 = \sqrt{\sum_{i=1}^{n} u_i^2}$
```
# GRADED FUNCTION: cosine_similarity
def cosine_similarity(u, v):
"""
Cosine similarity reflects the degree of similarity between u and v
Arguments:
u -- a word vector of shape (n,)
v -- a word vector of shape (n,)
Returns:
cosine_similarity -- the cosine similarity between u and v defined by the formula above.
"""
distance = 0.0
### START CODE HERE ###
# Compute the dot product between u and v (≈1 line)
dot = np.dot(u,v)
# Compute the L2 norm of u (≈1 line)
norm_u = np.linalg.norm(u)
# Compute the L2 norm of v (≈1 line)
norm_v = np.linalg.norm(v)
# Compute the cosine similarity defined by formula (1) (≈1 line)
cosine_similarity = dot/(norm_u*norm_v)
### END CODE HERE ###
return cosine_similarity
father = word_to_vec_map["father"]
mother = word_to_vec_map["mother"]
ball = word_to_vec_map["ball"]
crocodile = word_to_vec_map["crocodile"]
france = word_to_vec_map["france"]
italy = word_to_vec_map["italy"]
paris = word_to_vec_map["paris"]
rome = word_to_vec_map["rome"]
print("cosine_similarity(father, mother) = ", cosine_similarity(father, mother))
print("cosine_similarity(ball, crocodile) = ",cosine_similarity(ball, crocodile))
print("cosine_similarity(france - paris, rome - italy) = ",cosine_similarity(france - paris, rome - italy))
```
**Expected Output**:
<table>
<tr>
<td>
**cosine_similarity(father, mother)** =
</td>
<td>
0.890903844289
</td>
</tr>
<tr>
<td>
**cosine_similarity(ball, crocodile)** =
</td>
<td>
0.274392462614
</td>
</tr>
<tr>
<td>
**cosine_similarity(france - paris, rome - italy)** =
</td>
<td>
-0.675147930817
</td>
</tr>
</table>
After you get the correct expected output, please feel free to modify the inputs and measure the cosine similarity between other pairs of words! Playing around the cosine similarity of other inputs will give you a better sense of how word vectors behave.
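For example, you could compare a few more pairs; this is just a sketch, and the exact numbers you get depend on the GloVe vectors loaded above.
```python
print("cosine_similarity(king - queen, man - woman) = ",
      cosine_similarity(word_to_vec_map["king"] - word_to_vec_map["queen"],
                        word_to_vec_map["man"] - word_to_vec_map["woman"]))
print("cosine_similarity(paris, london) = ",
      cosine_similarity(word_to_vec_map["paris"], word_to_vec_map["london"]))
```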
## 2 - Word analogy task
In the word analogy task, we complete the sentence <font color='brown'>"*a* is to *b* as *c* is to **____**"</font>. An example is <font color='brown'> '*man* is to *woman* as *king* is to *queen*' </font>. In detail, we are trying to find a word *d*, such that the associated word vectors $e_a, e_b, e_c, e_d$ are related in the following manner: $e_b - e_a \approx e_d - e_c$. We will measure the similarity between $e_b - e_a$ and $e_d - e_c$ using cosine similarity.
**Exercise**: Complete the code below to be able to perform word analogies!
```
# GRADED FUNCTION: complete_analogy
def complete_analogy(word_a, word_b, word_c, word_to_vec_map):
"""
Performs the word analogy task as explained above: a is to b as c is to ____.
Arguments:
word_a -- a word, string
word_b -- a word, string
word_c -- a word, string
word_to_vec_map -- dictionary that maps words to their corresponding vectors.
Returns:
best_word -- the word such that v_b - v_a is close to v_best_word - v_c, as measured by cosine similarity
"""
# convert words to lower case
word_a, word_b, word_c = word_a.lower(), word_b.lower(), word_c.lower()
### START CODE HERE ###
# Get the word embeddings v_a, v_b and v_c (≈1-3 lines)
e_a = word_to_vec_map.get(word_a)
e_b = word_to_vec_map.get(word_b)
e_c = word_to_vec_map.get(word_c)
# e_a, e_b, e_c = word_to_vec_map[word_a], word_to_vec_map[word_b], word_to_vec_map[word_c]
### END CODE HERE ###
words = word_to_vec_map.keys()
max_cosine_sim = -100 # Initialize max_cosine_sim to a large negative number
best_word = None # Initialize best_word with None, it will help keep track of the word to output
# loop over the whole word vector set
for w in words:
# to avoid best_word being one of the input words, pass on them.
if w in [word_a, word_b, word_c] :
continue
### START CODE HERE ###
# Compute cosine similarity between the vector (e_b - e_a) and the vector ((w's vector representation) - e_c) (≈1 line)
cosine_sim = cosine_similarity(np.subtract(e_b,e_a), np.subtract(word_to_vec_map.get(w),e_c))
# If the cosine_sim is more than the max_cosine_sim seen so far,
# then: set the new max_cosine_sim to the current cosine_sim and the best_word to the current word (≈3 lines)
if cosine_sim > max_cosine_sim:
max_cosine_sim = cosine_sim
best_word = w
### END CODE HERE ###
return best_word
```
Run the cell below to test your code, this may take 1-2 minutes.
```
triads_to_try = [('italy', 'italian', 'spain'), ('india', 'delhi', 'japan'), ('man', 'woman', 'boy'), ('small', 'smaller', 'large')]
for triad in triads_to_try:
print ('{} -> {} :: {} -> {}'.format( *triad, complete_analogy(*triad,word_to_vec_map)))
```
**Expected Output**:
<table>
<tr>
<td>
**italy -> italian** ::
</td>
<td>
spain -> spanish
</td>
</tr>
<tr>
<td>
**india -> delhi** ::
</td>
<td>
japan -> tokyo
</td>
</tr>
<tr>
<td>
**man -> woman ** ::
</td>
<td>
boy -> girl
</td>
</tr>
<tr>
<td>
**small -> smaller ** ::
</td>
<td>
large -> larger
</td>
</tr>
</table>
Once you get the correct expected output, please feel free to modify the input cells above to test your own analogies. Try to find some other analogy pairs that do work, but also find some where the algorithm doesn't give the right answer: For example, you can try small->smaller as big->?.
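As a starting point, the sketch below runs a few extra triads, including the suggested small->smaller as big->?; the triads chosen here are arbitrary, and you should inspect the outputs yourself, since the algorithm will not get all of them right.
```python
more_triads = [('small', 'smaller', 'big'), ('spain', 'madrid', 'germany'), ('good', 'better', 'bad')]
for triad in more_triads:
    print('{} -> {} :: {} -> {}'.format(*triad, complete_analogy(*triad, word_to_vec_map)))
```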
### Congratulations!
You've come to the end of this assignment. Here are the main points you should remember:
- Cosine similarity is a good way to compare the similarity between pairs of word vectors. (Though L2 distance works too.)
- For NLP applications, using a pre-trained set of word vectors from the internet is often a good way to get started.
Even though you have finished the graded portions, we recommend you take a look too at the rest of this notebook.
Congratulations on finishing the graded portions of this notebook!
## 3 - Debiasing word vectors (OPTIONAL/UNGRADED)
In the following exercise, you will examine gender biases that can be reflected in a word embedding, and explore algorithms for reducing the bias. In addition to learning about the topic of debiasing, this exercise will also help hone your intuition about what word vectors are doing. This section involves a bit of linear algebra, though you can probably complete it even without being an expert in linear algebra, and we encourage you to give it a shot. This portion of the notebook is optional and is not graded.
Let's first see how the GloVe word embeddings relate to gender. You will first compute a vector $g = e_{woman}-e_{man}$, where $e_{woman}$ represents the word vector corresponding to the word *woman*, and $e_{man}$ represents the word vector corresponding to the word *man*. The resulting vector $g$ roughly encodes the concept of "gender". (You might get a more accurate representation if you compute $g_1 = e_{mother}-e_{father}$, $g_2 = e_{girl}-e_{boy}$, etc. and average over them. But just using $e_{woman}-e_{man}$ will give good enough results for now.)
```
g = word_to_vec_map['woman'] - word_to_vec_map['man']
print(g)
```
Now, you will consider the cosine similarity of different words with $g$. Consider what a positive value of similarity means vs a negative cosine similarity.
```
print ('List of names and their similarities with constructed vector:')
# girls and boys name
name_list = ['john', 'marie', 'sophie', 'ronaldo', 'priya', 'rahul', 'danielle', 'reza', 'katy', 'yasmin']
for w in name_list:
print (w, cosine_similarity(word_to_vec_map[w], g))
```
As you can see, female first names tend to have a positive cosine similarity with our constructed vector $g$, while male first names tend to have a negative cosine similarity. This is not surprising, and the result seems acceptable.
But let's try with some other words.
```
print('Other words and their similarities:')
word_list = ['lipstick', 'guns', 'science', 'arts', 'literature', 'warrior','doctor', 'tree', 'receptionist',
'technology', 'fashion', 'teacher', 'engineer', 'pilot', 'computer', 'singer']
for w in word_list:
print (w, cosine_similarity(word_to_vec_map[w], g))
```
Do you notice anything surprising? It is astonishing how these results reflect certain unhealthy gender stereotypes. For example, "computer" is closer to "man" while "literature" is closer to "woman". Ouch!
We'll see below how to reduce the bias of these vectors, using an algorithm due to [Bolukbasi et al., 2016](https://arxiv.org/abs/1607.06520). Note that some word pairs such as "actor"/"actress" or "grandmother"/"grandfather" should remain gender specific, while other words such as "receptionist" or "technology" should be neutralized, i.e. not be gender-related. You will have to treat these two types of words differently when debiasing.
### 3.1 - Neutralize bias for non-gender specific words
The figure below should help you visualize what neutralizing does. If you're using a 50-dimensional word embedding, the 50-dimensional space can be split into two parts: the bias-direction $g$, and the remaining 49 dimensions, which we'll call $g_{\perp}$. In linear algebra, we say that the 49-dimensional $g_{\perp}$ is perpendicular (or "orthogonal") to $g$, meaning it is at 90 degrees to $g$. The neutralization step takes a vector such as $e_{receptionist}$ and zeros out the component in the direction of $g$, giving us $e_{receptionist}^{debiased}$.
Even though $g_{\perp}$ is 49 dimensional, given the limitations of what we can draw on a screen, we illustrate it using a 1 dimensional axis below.
<img src="images/neutral.png" style="width:800px;height:300px;">
<caption><center> **Figure 2**: The word vector for "receptionist" represented before and after applying the neutralize operation. </center></caption>
**Exercise**: Implement `neutralize()` to remove the bias of words such as "receptionist" or "scientist". Given an input embedding $e$, you can use the following formulas to compute $e^{debiased}$:
$$e^{bias\_component} = \frac{e \cdot g}{||g||_2^2} * g\tag{2}$$
$$e^{debiased} = e - e^{bias\_component}\tag{3}$$
If you are an expert in linear algebra, you may recognize $e^{bias\_component}$ as the projection of $e$ onto the direction $g$. If you're not an expert in linear algebra, don't worry about this.
<!--
**Reminder**: a vector $u$ can be split into two parts: its projection over a vector-axis $v_B$ and its projection over the axis orthogonal to $v$:
$$u = u_B + u_{\perp}$$
where : $u_B = $ and $ u_{\perp} = u - u_B $
!-->
```
def neutralize(word, g, word_to_vec_map):
"""
Removes the bias of "word" by projecting it on the space orthogonal to the bias axis.
This function ensures that gender neutral words are zero in the gender subspace.
Arguments:
word -- string indicating the word to debias
g -- numpy-array of shape (50,), corresponding to the bias axis (such as gender)
word_to_vec_map -- dictionary mapping words to their corresponding vectors.
Returns:
e_debiased -- neutralized word vector representation of the input "word"
"""
### START CODE HERE ###
# Select word vector representation of "word". Use word_to_vec_map. (≈ 1 line)
e = word_to_vec_map[word]
# Compute e_biascomponent using the formula given above. (≈ 1 line)
e_biascomponent = (np.dot(e,g)/np.linalg.norm(g)**2)*g
# Neutralize e by subtracting e_biascomponent from it
# e_debiased should be equal to its orthogonal projection. (≈ 1 line)
e_debiased = e-e_biascomponent
### END CODE HERE ###
return e_debiased
e = "receptionist"
print("cosine similarity between " + e + " and g, before neutralizing: ", cosine_similarity(word_to_vec_map["receptionist"], g))
e_debiased = neutralize("receptionist", g, word_to_vec_map)
print("cosine similarity between " + e + " and g, after neutralizing: ", cosine_similarity(e_debiased, g))
```
**Expected Output**: The second result is essentially 0, up to numerical round-off (on the order of $10^{-17}$).
<table>
<tr>
<td>
**cosine similarity between receptionist and g, before neutralizing:** :
</td>
<td>
0.330779417506
</td>
</tr>
<tr>
<td>
**cosine similarity between receptionist and g, after neutralizing:** :
</td>
<td>
-3.26732746085e-17
</td>
</tr>
</table>
### 3.2 - Equalization algorithm for gender-specific words
Next, let's see how debiasing can also be applied to word pairs such as "actress" and "actor." Equalization is applied to pairs of words that you might want to differ only through the gender property. As a concrete example, suppose that "actress" is closer to "babysit" than "actor." By applying neutralization to "babysit" we can reduce the gender stereotype associated with babysitting. But this still does not guarantee that "actor" and "actress" are equidistant from "babysit." The equalization algorithm takes care of this.
The key idea behind equalization is to make sure that a particular pair of words are equidistant from the 49-dimensional $g_\perp$. The equalization step also ensures that the two equalized vectors are now the same distance from $e_{receptionist}^{debiased}$, or from any other word that has been neutralized. In pictures, this is how equalization works:
<img src="images/equalize10.png" style="width:800px;height:400px;">
The derivation of the linear algebra to do this is a bit more complex. (See Bolukbasi et al., 2016 for details.) But the key equations are:
$$ \mu = \frac{e_{w1} + e_{w2}}{2}\tag{4}$$
$$ \mu_{B} = \frac {\mu \cdot \text{bias_axis}}{||\text{bias_axis}||_2^2} *\text{bias_axis}
\tag{5}$$
$$\mu_{\perp} = \mu - \mu_{B} \tag{6}$$
$$ e_{w1B} = \frac {e_{w1} \cdot \text{bias_axis}}{||\text{bias_axis}||_2^2} *\text{bias_axis}
\tag{7}$$
$$ e_{w2B} = \frac {e_{w2} \cdot \text{bias_axis}}{||\text{bias_axis}||_2^2} *\text{bias_axis}
\tag{8}$$
$$e_{w1B}^{corrected} = \sqrt{ |{1 - ||\mu_{\perp} ||^2_2} |} * \frac{e_{\text{w1B}} - \mu_B} {|(e_{w1} - \mu_{\perp}) - \mu_B|} \tag{9}$$
$$e_{w2B}^{corrected} = \sqrt{ |{1 - ||\mu_{\perp} ||^2_2} |} * \frac{e_{\text{w2B}} - \mu_B} {|(e_{w2} - \mu_{\perp}) - \mu_B|} \tag{10}$$
$$e_1 = e_{w1B}^{corrected} + \mu_{\perp} \tag{11}$$
$$e_2 = e_{w2B}^{corrected} + \mu_{\perp} \tag{12}$$
**Exercise**: Implement the function below. Use the equations above to get the final equalized version of the pair of words. Good luck!
```
def equalize(pair, bias_axis, word_to_vec_map):
"""
Debias gender specific words by following the equalize method described in the figure above.
Arguments:
pair -- pair of strings of gender specific words to debias, e.g. ("actress", "actor")
bias_axis -- numpy-array of shape (50,), vector corresponding to the bias axis, e.g. gender
word_to_vec_map -- dictionary mapping words to their corresponding vectors
Returns
e_1 -- word vector corresponding to the first word
e_2 -- word vector corresponding to the second word
"""
### START CODE HERE ###
# Step 1: Select word vector representation of "word". Use word_to_vec_map. (≈ 2 lines)
w1, w2 = pair[0],pair[1]
e_w1, e_w2 = word_to_vec_map[w1],word_to_vec_map[w2]
# Step 2: Compute the mean of e_w1 and e_w2 (≈ 1 line)
mu = (e_w1 + e_w2)/2
# Step 3: Compute the projections of mu over the bias axis and the orthogonal axis (≈ 2 lines)
mu_B = (np.dot(mu,bias_axis)/np.linalg.norm(bias_axis)**2)*bias_axis
mu_orth = mu-mu_B
# Step 4: Use equations (7) and (8) to compute e_w1B and e_w2B (≈2 lines)
e_w1B = (np.dot(e_w1,bias_axis)/np.linalg.norm(bias_axis)**2)*bias_axis
e_w2B = (np.dot(e_w2,bias_axis)/np.linalg.norm(bias_axis)**2)*bias_axis
# Step 5: Adjust the Bias part of e_w1B and e_w2B using the formulas (9) and (10) given above (≈2 lines)
corrected_e_w1B = np.sqrt(np.abs(1-np.linalg.norm(mu_orth)**2))*((e_w1B - mu_B)/np.abs((e_w1-mu_orth)-mu_B))
corrected_e_w2B = np.sqrt(np.abs(1-np.linalg.norm(mu_orth)**2))*((e_w2B - mu_B)/np.abs((e_w2-mu_orth)-mu_B))
# Step 6: Debias by equalizing e1 and e2 to the sum of their corrected projections (≈2 lines)
e1 = corrected_e_w1B + mu_orth
e2 = corrected_e_w2B + mu_orth
### END CODE HERE ###
return e1, e2
print("cosine similarities before equalizing:")
print("cosine_similarity(word_to_vec_map[\"man\"], gender) = ", cosine_similarity(word_to_vec_map["man"], g))
print("cosine_similarity(word_to_vec_map[\"woman\"], gender) = ", cosine_similarity(word_to_vec_map["woman"], g))
print()
e1, e2 = equalize(("man", "woman"), g, word_to_vec_map)
print("cosine similarities after equalizing:")
print("cosine_similarity(e1, gender) = ", cosine_similarity(e1, g))
print("cosine_similarity(e2, gender) = ", cosine_similarity(e2, g))
```
**Expected Output**:
cosine similarities before equalizing:
<table>
<tr>
<td>
**cosine_similarity(word_to_vec_map["man"], gender)** =
</td>
<td>
-0.117110957653
</td>
</tr>
<tr>
<td>
**cosine_similarity(word_to_vec_map["woman"], gender)** =
</td>
<td>
0.356666188463
</td>
</tr>
</table>
cosine similarities after equalizing:
<table>
<tr>
<td>
**cosine_similarity(e1, gender)** =
</td>
<td>
-0.700436428931
</td>
</tr>
<tr>
<td>
**cosine_similarity(e2, gender)** =
</td>
<td>
0.700436428931
</td>
</tr>
</table>
Please feel free to play with the input words in the cell above, to apply equalization to other pairs of words.
These debiasing algorithms are very helpful for reducing bias, but are not perfect and do not eliminate all traces of bias. For example, one weakness of this implementation was that the bias direction $g$ was defined using only the pair of words _woman_ and _man_. As discussed earlier, if $g$ were defined by computing $g_1 = e_{woman} - e_{man}$; $g_2 = e_{mother} - e_{father}$; $g_3 = e_{girl} - e_{boy}$; and so on and averaging over them, you would obtain a better estimate of the "gender" dimension in the 50 dimensional word embedding space. Feel free to play with such variants as well.
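As a rough sketch of that averaged-direction variant (an editorial addition, not part of the graded assignment; it assumes `word_to_vec_map`, `g`, and `cosine_similarity` from the cells above, and that all the listed words are in the vocabulary):
```
# Sketch: estimate the gender direction from several female/male pairs instead of
# the single (woman, man) pair, then compare it to the original g.
pairs = [("woman", "man"), ("mother", "father"), ("girl", "boy")]
diffs = [word_to_vec_map[f] - word_to_vec_map[m] for f, m in pairs]
g_avg = np.mean(diffs, axis=0)

# g_avg can be passed to the neutralizing/equalizing steps above in place of g.
print("cosine_similarity(g_avg, g) = ", cosine_similarity(g_avg, g))
```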
### Congratulations
You have come to the end of this notebook, and have seen a lot of the ways that word vectors can be used as well as modified.
Congratulations on finishing this notebook!
**References**:
- The debiasing algorithm is from Bolukbasi et al., 2016, [Man is to Computer Programmer as Woman is to
Homemaker? Debiasing Word Embeddings](https://papers.nips.cc/paper/6228-man-is-to-computer-programmer-as-woman-is-to-homemaker-debiasing-word-embeddings.pdf)
- The GloVe word embeddings were due to Jeffrey Pennington, Richard Socher, and Christopher D. Manning. (https://nlp.stanford.edu/projects/glove/)
# Introduction #
In this exercise, you'll work through several applications of PCA to the [*Ames*](https://www.kaggle.com/c/house-prices-advanced-regression-techniques/data) dataset.
Run this cell to set everything up!
```
# Setup feedback system
from learntools.core import binder
binder.bind(globals())
from learntools.feature_engineering_new.ex5 import *
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.decomposition import PCA
from sklearn.feature_selection import mutual_info_regression
from sklearn.model_selection import cross_val_score
from xgboost import XGBRegressor
# Set Matplotlib defaults
plt.style.use("seaborn-whitegrid")
plt.rc("figure", autolayout=True)
plt.rc(
"axes",
labelweight="bold",
labelsize="large",
titleweight="bold",
titlesize=14,
titlepad=10,
)
def apply_pca(X, standardize=True):
# Standardize
if standardize:
X = (X - X.mean(axis=0)) / X.std(axis=0)
# Create principal components
pca = PCA()
X_pca = pca.fit_transform(X)
# Convert to dataframe
component_names = [f"PC{i+1}" for i in range(X_pca.shape[1])]
X_pca = pd.DataFrame(X_pca, columns=component_names)
# Create loadings
loadings = pd.DataFrame(
pca.components_.T, # transpose the matrix of loadings
columns=component_names, # so the columns are the principal components
index=X.columns, # and the rows are the original features
)
return pca, X_pca, loadings
def plot_variance(pca, width=8, dpi=100):
# Create figure
fig, axs = plt.subplots(1, 2)
n = pca.n_components_
grid = np.arange(1, n + 1)
# Explained variance
evr = pca.explained_variance_ratio_
axs[0].bar(grid, evr)
axs[0].set(
xlabel="Component", title="% Explained Variance", ylim=(0.0, 1.0)
)
# Cumulative Variance
cv = np.cumsum(evr)
axs[1].plot(np.r_[0, grid], np.r_[0, cv], "o-")
axs[1].set(
xlabel="Component", title="% Cumulative Variance", ylim=(0.0, 1.0)
)
# Set up figure
fig.set(figwidth=8, dpi=100)
return axs
def make_mi_scores(X, y):
X = X.copy()
for colname in X.select_dtypes(["object", "category"]):
X[colname], _ = X[colname].factorize()
# All discrete features should now have integer dtypes
discrete_features = [pd.api.types.is_integer_dtype(t) for t in X.dtypes]
mi_scores = mutual_info_regression(X, y, discrete_features=discrete_features, random_state=0)
mi_scores = pd.Series(mi_scores, name="MI Scores", index=X.columns)
mi_scores = mi_scores.sort_values(ascending=False)
return mi_scores
def score_dataset(X, y, model=XGBRegressor()):
# Label encoding for categoricals
for colname in X.select_dtypes(["category", "object"]):
X[colname], _ = X[colname].factorize()
# Metric for Housing competition is RMSLE (Root Mean Squared Log Error)
score = cross_val_score(
model, X, y, cv=5, scoring="neg_mean_squared_log_error",
)
score = -1 * score.mean()
score = np.sqrt(score)
return score
df = pd.read_csv("../input/fe-course-data/ames.csv")
```
Let's choose a few features that are highly correlated with our target, `SalePrice`.
```
features = [
"GarageArea",
"YearRemodAdd",
"TotalBsmtSF",
"GrLivArea",
]
print("Correlation with SalePrice:\n")
print(df[features].corrwith(df.SalePrice))
```
We'll rely on PCA to untangle the correlational structure of these features and suggest relationships that might be usefully modeled with new features.
Run this cell to apply PCA and extract the loadings.
```
X = df.copy()
y = X.pop("SalePrice")
X = X.loc[:, features]
# `apply_pca`, defined above, reproduces the code from the tutorial
pca, X_pca, loadings = apply_pca(X)
print(loadings)
```
# 1) Interpret Component Loadings
Look at the loadings for components `PC1` and `PC3`. Can you think of a description of what kind of contrast each component has captured? After you've thought about it, run the next cell for a solution.
```
# View the solution (Run this cell to receive credit!)
q_1.check()
```
-------------------------------------------------------------------------------
Your goal in this question is to use the results of PCA to discover one or more new features that improve the performance of your model. One option is to create features inspired by the loadings, like we did in the tutorial. Another option is to use the components themselves as features (that is, add one or more columns of `X_pca` to `X`).
# 2) Create New Features
Add one or more new features to the dataset `X`. For a correct solution, get a validation score below 0.140 RMSLE. (If you get stuck, feel free to use the `hint` below!)
```
X = df.copy()
y = X.pop("SalePrice")
# YOUR CODE HERE: Add new features to X.
# ____
score = score_dataset(X, y)
print(f"Your score: {score:.5f} RMSLE")
# Check your answer
q_2.check()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_2.hint()
#_COMMENT_IF(PROD)_
q_2.solution()
#%%RM_IF(PROD)%%
X = df.copy()
y = X.pop("SalePrice")
X["Feature1"] = X.GrLivArea - X.TotalBsmtSF
score = score_dataset(X, y)
print(f"Your score: {score:.5f} RMSLE")
q_2.assert_check_failed()
#%%RM_IF(PROD)%%
# Solution 1: Inspired by loadings
X = df.copy()
y = X.pop("SalePrice")
X["Feature1"] = X.GrLivArea + X.TotalBsmtSF
X["Feature2"] = X.YearRemodAdd * X.TotalBsmtSF
score = score_dataset(X, y)
print(f"Your score: {score:.5f} RMSLE")
# Solution 2: Uses components
X = df.copy()
y = X.pop("SalePrice")
X = X.join(X_pca)
score = score_dataset(X, y)
print(f"Your score: {score:.5f} RMSLE")
q_2.assert_check_passed()
```
-------------------------------------------------------------------------------
The next question explores a way you can use PCA to detect outliers in the dataset (meaning, data points that are unusually extreme in some way). Outliers can have a detrimental effect on model performance, so it's good to be aware of them in case you need to take corrective action. PCA in particular can show you anomalous *variation* which might not be apparent from the original features: neither small houses nor houses with large basements are unusual, but it is unusual for small houses to have large basements. That's the kind of thing a principal component can show you.
Run the next cell to show distribution plots for each of the principal components you created above.
```
sns.catplot(
y="value",
col="variable",
data=X_pca.melt(),
kind='boxen',
sharey=False,
col_wrap=2,
);
```
As you can see, in each of the components there are several points lying at the extreme ends of the distributions -- outliers, that is.
Now run the next cell to see those houses that sit at the extremes of a component:
```
# You can change PC1 to PC2, PC3, or PC4
component = "PC1"
idx = X_pca[component].sort_values(ascending=False).index
df.loc[idx, ["SalePrice", "Neighborhood", "SaleCondition"] + features]
```
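If you would rather have a numeric flag than scan the sorted table, here is a small sketch (an editorial addition, not part of the graded exercise; the z-score cutoff of 3.0 is an arbitrary choice):
```
# Sketch: flag rows that are extreme on a chosen component using a simple z-score.
component = "PC1"
z = (X_pca[component] - X_pca[component].mean()) / X_pca[component].std()
outlier_idx = z[z.abs() > 3.0].index
df.loc[outlier_idx, ["SalePrice", "Neighborhood", "SaleCondition"] + features]
```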
# 3) Outlier Detection
Do you notice any patterns in the extreme values? Does it seem like the outliers are coming from some special subset of the data?
After you've thought about your answer, run the next cell for the solution and some discussion.
```
# View the solution (Run this cell to receive credit!)
q_3.check()
```
# Keep Going #
[**Apply target encoding**](#$NEXT_NOTEBOOK_URL$) to give a boost to categorical features.
<img width="10%" alt="Naas" src="https://landen.imgix.net/jtci2pxwjczr/assets/5ice39g4.png?w=160"/>
# LinkedIn - Send posts feed to gsheet
<a href="https://app.naas.ai/user-redirect/naas/downloader?url=https://raw.githubusercontent.com/jupyter-naas/awesome-notebooks/master/LinkedIn/LinkedIn_Send_posts_feed_to_gsheet.ipynb" target="_parent"><img src="https://naasai-public.s3.eu-west-3.amazonaws.com/open_in_naas.svg"/></a>
**Tags:** #linkedin #profile #post #stats #naas_drivers #automation #content #googlesheets
**Author:** [Florent Ravenel](https://www.linkedin.com/in/florent-ravenel/)
## Input
### Import libraries
```
from naas_drivers import linkedin, gsheet
import naas
import pandas as pd
```
### Setup LinkedIn
👉 <a href='https://www.notion.so/LinkedIn-driver-Get-your-cookies-d20a8e7e508e42af8a5b52e33f3dba75'>How to get your cookies?</a>
```
# LinkedIn cookies
LI_AT = "AQEDARCNSioDe6wmAAABfqF-HR4AAAF-xYqhHlYAtSu7EZZEpFer0UZF-GLuz2DNSz4asOOyCRxPGFjenv37irMObYYgxxxxxxx"
JSESSIONID = "ajax:12XXXXXXXXXXXXXXXXX"
# Linkedin profile url
PROFILE_URL = "https://www.linkedin.com/in/xxxxxx/"
# Number of posts updated in the Google Sheet (this avoids requesting the entire database)
LIMIT = 10
```
### Setup your Google Sheet
👉 Get your spreadsheet URL<br>
👉 Share your gsheet with our service account to connect: [email protected]<br>
👉 Create your sheet before sending data into it
```
# Spreadsheet URL
SPREADSHEET_URL = "https://docs.google.com/spreadsheets/d/XXXXXXXXXXXXXXXXXXXX"
# Sheet name
SHEET_NAME = "LK_POSTS_FEED"
```
### Setup Naas
```
naas.scheduler.add(cron="0 8 * * *")
#-> To delete your scheduler, please uncomment the line below and execute this cell
# naas.scheduler.delete()
```
## Model
### Get data from Google Sheet
```
df_gsheet = gsheet.connect(SPREADSHEET_URL).get(sheet_name=SHEET_NAME)
df_gsheet
```
### Get new posts and update last posts stats
```
def get_new_posts(df_gsheet, key, limit=LIMIT, sleep=False):
posts = []
if len(df_gsheet) > 0:
posts = df_gsheet[key].unique()
else:
df_posts_feed = linkedin.connect(LI_AT, JSESSIONID).profile.get_posts_feed(PROFILE_URL, limit=-1, sleep=sleep)
return df_posts_feed
    # Get the most recent posts and merge them with the rows already in the sheet
    df_posts_feed = linkedin.connect(LI_AT, JSESSIONID).profile.get_posts_feed(PROFILE_URL, limit=limit, sleep=sleep)
df_new = pd.concat([df_posts_feed, df_gsheet]).drop_duplicates(key, keep="first")
return df_new
df_new = get_new_posts(df_gsheet, "POST_URL", limit=LIMIT)
df_new
```
## Output
### Send to Google Sheet
```
gsheet.connect(SPREADSHEET_URL).send(df_new,
sheet_name=SHEET_NAME,
append=False)
```
```
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
def N_single_qubit_gates_req_Rot(N_system_qubits, set_size):
return (2*N_system_qubits+1)*(set_size-1)
def N_CNOT_gates_req_Rot(N_system_qubits, set_size):
return 2*(N_system_qubits-1)*(set_size-1)
def N_cV_gates_req_LCU(N_system_qubits, set_size):
Na=np.ceil(np.log2(set_size))
return (N_system_qubits*((2**Na) -1)) *(set_size-1)
def N_CNOT_gates_req_LCU(N_system_qubits, set_size):
Na=np.ceil(np.log2(set_size))
return ((2**Na) -2) *(set_size-1)
## better
# O(2 N_system) change of basis single qubit gates
# O(2 [N_system-1]) CNOT gates
# 2 * Hadamard gates
# 1 m-controlled Toffoli gate!
## overall reduction = 16m-32
## requiring (m-2) garbage bits --> ALWAYS PRESENT IN SYSTEM REGISTER!!!
def N_single_qubit_gates_req_LCU_new_Decomp(N_system_qubits, set_size):
change_of_basis = 2*N_system_qubits
H_gates = 2
return (change_of_basis+H_gates)*(set_size-1)
def N_CNOT_gates_req_LCU_new_Decomp(N_system_qubits, set_size):
cnot_Gates = 2*(N_system_qubits-1)
Na=np.ceil(np.log2(set_size))
## perez gates
N_perez_gates = 4*(Na-2)
N_CNOT_in_perez = N_perez_gates*1
N_cV_gates_in_perez = N_perez_gates*3
    if ((16*Na-32)!=(N_CNOT_in_perez+N_cV_gates_in_perez)).any():
raise ValueError('16m-32 is the expected decomposition!')
# if np.array_equal((16*Na-32), (N_CNOT_in_perez+N_cV_gates_in_perez)):
# raise ValueError('16m-32 is the expected decomposition!')
return ((cnot_Gates+N_CNOT_in_perez)*(set_size-1)) , (N_cV_gates_in_perez*(set_size-1))
x_nsets=np.arange(2,200,1)
# Data for plotting
N_system_qubits=4
y_rot_single=N_single_qubit_gates_req_Rot(N_system_qubits, x_nsets)
y_rot_CNOT = N_CNOT_gates_req_Rot(N_system_qubits, x_nsets)
y_LCU_cV=N_cV_gates_req_LCU(N_system_qubits, x_nsets)
y_LCU_CNOT = N_CNOT_gates_req_LCU(N_system_qubits, x_nsets)
y_LCU_single_NEW=N_single_qubit_gates_req_LCU_new_Decomp(N_system_qubits, x_nsets)
y_LCU_CNOT_NEW, y_LCU_cV_NEW = N_CNOT_gates_req_LCU_new_Decomp(N_system_qubits, x_nsets)
%matplotlib notebook
fig, ax = plt.subplots()
ax.plot(x_nsets, y_rot_single, color='b', label='Single qubit gates - Sequence of Rotations')
ax.plot(x_nsets, y_rot_CNOT, color='r', linestyle='--', label='CNOT gates - Sequence of Rotations')
ax.plot(x_nsets, y_LCU_cV, color='g', label='c-$V$ and c-$V^{\dagger}$ gates - LCU')
ax.plot(x_nsets, y_LCU_CNOT, color='k', label='CNOT gates - LCU', linestyle='--')
ax.plot(x_nsets, y_LCU_single_NEW, color='yellow', label='Single qubit gates - LCU_new')
ax.plot(x_nsets, y_LCU_CNOT_NEW, color='teal', linestyle='--', label='CNOT gates - LCU_new')
ax.plot(x_nsets, y_LCU_cV_NEW, color='slategrey', label='cV gates - LCU_new')
ax.set(xlabel='$|S_{l}|$ (size of clique)', ylabel='Number of gates')
# ,title='Scaling of methods')
ax.grid()
plt.legend()
# # http://akuederle.com/matplotlib-zoomed-up-inset
# from mpl_toolkits.axes_grid1.inset_locator import zoomed_inset_axes, inset_axes
# # axins = zoomed_inset_axes(ax, 40, loc='center') # zoom-factor: 2.5, location: upper-left
# axins = inset_axes(ax, 2,1 , loc='center',bbox_to_anchor=(0.4, 0.55),bbox_transform=ax.figure.transFigure) # no zoom
# axins.plot(x_nsets, y_rot_single, color='b')
# axins.plot(x_nsets, y_rot_CNOT, color='r', linestyle='--')
# axins.plot(x_nsets, y_LCU_cV, color='g')
# axins.plot(x_nsets, y_LCU_CNOT, color='k', linestyle='--')
# x1, x2, y1, y2 = 2, 5, 0, 50 # specify the limits
# axins.set_xlim(x1, x2) # apply the x-limits
# axins.set_ylim(y1, y2) # apply the y-limits
# # axins.set_yticks(np.arange(0, 100, 20))
# plt.yticks(visible=True)
# plt.xticks(visible=True)
# from mpl_toolkits.axes_grid1.inset_locator import mark_inset
# mark_inset(ax, axins, loc1=2, loc2=4, fc="none", ec="0.5") # loc here is which corner zoom goes to!
# fig.savefig("test.png")
plt.show()
# %matplotlib notebook
# fig, ax = plt.subplots()
# ax.plot(x_nsets, y_rot_single, color='b', label='Single qubit gates - Sequence of Rotations')
# ax.plot(x_nsets, y_rot_CNOT, color='r', label='CNOT gates - Sequence of Rotations')
# ax.plot(x_nsets, y_LCU_single, color='g', label='c-$V$ and c-$V^{\dagger}$ gates - LCU')
# ax.plot(x_nsets, y_LCU_CNOT, color='k', label='CNOT gates - LCU', linestyle='--')
# ax.set(xlabel='$|S_{l}|$ (size of clique)', ylabel='Number of gates',
# title='Scaling of methods')
# ax.grid()
# plt.legend()
# # http://akuederle.com/matplotlib-zoomed-up-inset
# from mpl_toolkits.axes_grid1.inset_locator import zoomed_inset_axes
# axins = zoomed_inset_axes(ax, 40, loc='center') # zoom-factor: 2.5, location: upper-left
# axins.plot(x_nsets, y_rot_single, color='b')
# axins.plot(x_nsets, y_rot_CNOT, color='r')
# axins.plot(x_nsets, y_LCU_single, color='g')
# axins.plot(x_nsets, y_LCU_CNOT, color='k', linestyle='--')
# x1, x2, y1, y2 = 2, 4, 0, 100 # specify the limits
# axins.set_xlim(x1, x2) # apply the x-limits
# axins.set_ylim(y1, y2) # apply the y-limits
# # axins.set_yticks(np.arange(0, 100, 20))
# plt.yticks(visible=True)
# plt.xticks(visible=True)
# from mpl_toolkits.axes_grid1.inset_locator import mark_inset
# mark_inset(ax, axins, loc1=2, loc2=4, fc="none", ec="0.5") # loc here is which corner zoom goes to!
# # fig.savefig("test.png")
# plt.show()
# Data for plotting
N_system_qubits=10 # < ---- CHANGED
y_rot_single=N_single_qubit_gates_req_Rot(N_system_qubits, x_nsets)
y_rot_CNOT = N_CNOT_gates_req_Rot(N_system_qubits, x_nsets)
y_LCU_cV=N_cV_gates_req_LCU(N_system_qubits, x_nsets)
y_LCU_CNOT = N_CNOT_gates_req_LCU(N_system_qubits, x_nsets)
y_LCU_single_NEW=N_single_qubit_gates_req_LCU_new_Decomp(N_system_qubits, x_nsets)
y_LCU_CNOT_NEW, y_LCU_cV_NEW = N_CNOT_gates_req_LCU_new_Decomp(N_system_qubits, x_nsets)
%matplotlib notebook
fig, ax = plt.subplots()
ax.plot(x_nsets, y_rot_single, color='b', label='Single qubit gates - Sequence of Rotations')
ax.plot(x_nsets, y_rot_CNOT, color='r', label='CNOT gates - Sequence of Rotations')
ax.plot(x_nsets, y_LCU_cV, color='g', label='c-$V$ and c-$V^{\dagger}$ gates - LCU')
ax.plot(x_nsets, y_LCU_CNOT, color='k', label='CNOT gates - LCU', linestyle='--')
ax.plot(x_nsets, y_LCU_single_NEW, color='yellow', label='Single qubit gates - LCU_new')
ax.plot(x_nsets, y_LCU_CNOT_NEW, color='teal', linestyle='--', label='CNOT gates - LCU_new')
ax.plot(x_nsets, y_LCU_cV_NEW, color='slategrey', label='cV gates - LCU_new')
ax.set(xlabel='$|S_{l}|$ (size of clique)', ylabel='Number of gates')
# ,title='Scaling of methods')
ax.grid()
plt.legend()
# # # http://akuederle.com/matplotlib-zoomed-up-inset
# # from mpl_toolkits.axes_grid1.inset_locator import zoomed_inset_axes, inset_axes
# # # axins = zoomed_inset_axes(ax, 40, loc='center') # zoom-factor: 2.5, location: upper-left
# axins = inset_axes(ax, 2,1 , loc='center',bbox_to_anchor=(0.4, 0.55),bbox_transform=ax.figure.transFigure) # no zoom
# axins.plot(x_nsets, y_rot_single, color='b')
# axins.plot(x_nsets, y_rot_CNOT, color='r')
# axins.plot(x_nsets, y_LCU_cV, color='g')
# axins.plot(x_nsets, y_LCU_CNOT, color='k', linestyle='--')
# x1, x2, y1, y2 = 2, 3, 0, 50 # specify the limits
# axins.set_xlim(x1, x2) # apply the x-limits
# axins.set_ylim(y1, y2) # apply the y-limits
# axins.set_xticks(np.arange(2, 4, 1))
# plt.yticks(visible=True)
# plt.xticks(visible=True)
# from mpl_toolkits.axes_grid1.inset_locator import mark_inset
# mark_inset(ax, axins, loc1=2, loc2=4, fc="none", ec="0.5") # loc here is which corner zoom goes to!
# # fig.savefig("test.png")
plt.show()
# Data for plotting
N_system_qubits=100 # < ---- CHANGED
y_rot_single=N_single_qubit_gates_req_Rot(N_system_qubits, x_nsets)
y_rot_CNOT = N_CNOT_gates_req_Rot(N_system_qubits, x_nsets)
y_LCU_cV=N_cV_gates_req_LCU(N_system_qubits, x_nsets)
y_LCU_CNOT = N_CNOT_gates_req_LCU(N_system_qubits, x_nsets)
%matplotlib notebook
fig, ax = plt.subplots()
ax.plot(x_nsets, y_rot_single, color='b', label='Single qubit gates - Sequence of Rotations', linewidth=3)
ax.plot(x_nsets, y_rot_CNOT, color='r', label='CNOT gates - Sequence of Rotations')
ax.plot(x_nsets, y_LCU_cV, color='g', label='c-$V$ and c-$V^{\dagger}$ gates - LCU')
ax.plot(x_nsets, y_LCU_CNOT, color='k', label='CNOT gates - LCU', linestyle='--')
ax.set(xlabel='$|S_{l}|$ (size of clique)', ylabel='Number of gates')
# ,title='Scaling of methods')
ax.grid()
plt.legend()
# http://akuederle.com/matplotlib-zoomed-up-inset
from mpl_toolkits.axes_grid1.inset_locator import zoomed_inset_axes, inset_axes
# axins = zoomed_inset_axes(ax, 40, loc='center') # zoom-factor: 2.5, location: upper-left
axins = inset_axes(ax, 2,1 , loc='center',bbox_to_anchor=(0.4, 0.55),bbox_transform=ax.figure.transFigure) # no zoom
axins.plot(x_nsets, y_rot_single, color='b', linewidth=2)
axins.plot(x_nsets, y_rot_CNOT, color='r')
axins.plot(x_nsets, y_LCU_cV, color='g')
axins.plot(x_nsets, y_LCU_CNOT, color='k', linestyle='--')
x1, x2, y1, y2 = 1.5, 3, 90, 500 # specify the limits
axins.set_xlim(x1, x2) # apply the x-limits
axins.set_ylim(y1, y2) # apply the y-limits
# axins.set_yticks(np.arange(0, 100, 20))
plt.yticks(visible=True)
plt.xticks(visible=True)
from mpl_toolkits.axes_grid1.inset_locator import mark_inset
mark_inset(ax, axins, loc1=2, loc2=4, fc="none", ec="0.5") # loc here is which corner zoom goes to!
# fig.savefig("test.png")
plt.show()
# Data for plotting
N_system_qubits=1
y_rot_single=N_single_qubit_gates_req_Rot(N_system_qubits, x_nsets)
y_rot_CNOT = N_CNOT_gates_req_Rot(N_system_qubits, x_nsets)
y_LCU_cV=N_cV_gates_req_LCU(N_system_qubits, x_nsets)
y_LCU_CNOT = N_CNOT_gates_req_LCU(N_system_qubits, x_nsets)
%matplotlib notebook
fig, ax = plt.subplots()
ax.plot(x_nsets, y_rot_single, color='b', label='Single qubit gates - Sequence of Rotations')
ax.plot(x_nsets, y_rot_CNOT, color='r', linestyle='--', label='CNOT gates - Sequence of Rotations')
ax.plot(x_nsets, y_LCU_cV, color='g', label='c-$V$ and c-$V^{\dagger}$ gates - LCU')
ax.plot(x_nsets, y_LCU_CNOT, color='k', label='CNOT gates - LCU', linestyle='--')
ax.set(xlabel='$|S_{l}|$ (size of clique)', ylabel='Number of gates')
# ,title='Scaling of methods')
ax.grid()
plt.legend()
# http://akuederle.com/matplotlib-zoomed-up-inset
from mpl_toolkits.axes_grid1.inset_locator import zoomed_inset_axes, inset_axes
# axins = zoomed_inset_axes(ax, 40, loc='center') # zoom-factor: 2.5, location: upper-left
axins = inset_axes(ax, 2,1 , loc='center',bbox_to_anchor=(0.4, 0.55),bbox_transform=ax.figure.transFigure) # no zoom
axins.plot(x_nsets, y_rot_single, color='b')
axins.plot(x_nsets, y_rot_CNOT, color='r', linestyle='--')
axins.plot(x_nsets, y_LCU_cV, color='g')
axins.plot(x_nsets, y_LCU_CNOT, color='k', linestyle='--')
x1, x2, y1, y2 = 2, 3, -5, 10 # specify the limits
axins.set_xlim(x1, x2) # apply the x-limits
axins.set_ylim(y1, y2) # apply the y-limits
# axins.set_yticks(np.arange(0, 100, 20))
axins.set_xticks(np.arange(2, 4, 1))
plt.yticks(visible=True)
plt.xticks(visible=True)
from mpl_toolkits.axes_grid1.inset_locator import mark_inset
mark_inset(ax, axins, loc1=2, loc2=4, fc="none", ec="0.5") # loc here is which corner zoom goes to!
# fig.savefig("test.png")
plt.show()
# Data for plotting
N_system_qubits=5
x_nsets=2
print(N_single_qubit_gates_req_Rot(N_system_qubits, x_nsets))
print(N_CNOT_gates_req_Rot(N_system_qubits, x_nsets))
print('###')
print(N_cV_gates_req_LCU(N_system_qubits, x_nsets))
print(N_CNOT_gates_req_LCU(N_system_qubits, x_nsets))
print(4)
### results for |S_l|=2
X_no_system_qubits=np.arange(1,11,1)
x_nsets=2
y_rot_single=N_single_qubit_gates_req_Rot(X_no_system_qubits, x_nsets)
y_rot_CNOT = N_CNOT_gates_req_Rot(X_no_system_qubits, x_nsets)
y_LCU_cV=N_cV_gates_req_LCU(X_no_system_qubits, x_nsets)
# y_LCU_CNOT = N_CNOT_gates_req_LCU(X_no_system_qubits, x_nsets)
y_LCU_CNOT=np.zeros(len(X_no_system_qubits))
single_qubit_LCU_gates=np.array([4 for _ in range(len(X_no_system_qubits))])
%matplotlib notebook
fig, ax = plt.subplots()
ax.plot(X_no_system_qubits, y_rot_single, color='b', label='Single qubit gates - Sequence of Rotations')
ax.plot(X_no_system_qubits, y_rot_CNOT, color='r', linestyle='-', label='CNOT gates - Sequence of Rotations')
ax.plot(X_no_system_qubits, y_LCU_cV, color='g', label='single controlled $\sigma$ gates - LCU')
ax.plot(X_no_system_qubits, y_LCU_CNOT, color='k', label='CNOT gates - LCU', linestyle='-')
ax.plot(X_no_system_qubits, single_qubit_LCU_gates, color='m', label='Single qubit gates - LCU', linestyle='-')
ax.set(xlabel='$N_{s}$', ylabel='Number of gates')
# ,title='Scaling of methods')
ax.set_xticks(X_no_system_qubits)
ax.grid()
plt.legend()
# # http://akuederle.com/matplotlib-zoomed-up-inset
# from mpl_toolkits.axes_grid1.inset_locator import zoomed_inset_axes, inset_axes
# # axins = zoomed_inset_axes(ax, 40, loc='center') # zoom-factor: 2.5, location: upper-left
# axins = inset_axes(ax, 2,1 , loc='center',bbox_to_anchor=(0.4, 0.55),bbox_transform=ax.figure.transFigure) # no zoom
# axins.plot(x_nsets, y_rot_single, color='b')
# axins.plot(x_nsets, y_rot_CNOT, color='r', linestyle='--')
# axins.plot(x_nsets, y_LCU_cV, color='g')
# axins.plot(x_nsets, y_LCU_CNOT, color='k', linestyle='--')
# x1, x2, y1, y2 = 2, 3, -5, 10 # specify the limits
# axins.set_xlim(x1, x2) # apply the x-limits
# axins.set_ylim(y1, y2) # apply the y-limits
# # axins.set_yticks(np.arange(0, 100, 20))
# axins.set_xticks(np.arange(2, 4, 1))
# plt.yticks(visible=True)
# plt.xticks(visible=True)
# from mpl_toolkits.axes_grid1.inset_locator import mark_inset
# mark_inset(ax, axins, loc1=2, loc2=4, fc="none", ec="0.5") # loc here is which corner zoom goes to!
# fig.savefig("test.png")
plt.show()
V = ((1j+1)/2)*np.array([[1,-1j],[-1j, 1]], dtype=complex)
from functools import reduce
zero=np.array([[1],[0]])
one=np.array([[0],[1]])
identity=np.eye(2)
X=np.array([[0,1], [1,0]])
CNOT= np.kron(np.outer(one, one), X)+np.kron(np.outer(zero, zero), identity)
###
I_one_V = reduce(np.kron, [identity, np.kron(np.outer(one, one), V)+np.kron(np.outer(zero, zero), identity)])
###
zero_zero=np.kron(zero,zero)
zero_one=np.kron(zero,one)
one_zero=np.kron(one,zero)
one_one=np.kron(one,one)
one_I_V = np.kron(np.outer(zero_zero, zero_zero), identity)+np.kron(np.outer(zero_one, zero_one), identity)+ \
np.kron(np.outer(one_zero, one_zero), V)+np.kron(np.outer(one_one, one_one), V)
###
CNOT_I=reduce(np.kron, [CNOT, identity])
##
I_one_Vdag = reduce(np.kron, [identity, np.kron(np.outer(one, one), V.conj().transpose())+np.kron(np.outer(zero, zero), identity)])
##
# compose the Peres circuit with matrix products (the right-most factor acts first)
perez_gate = I_one_Vdag @ CNOT_I @ one_I_V @ I_one_V
##check
# peres = TOF(x0,x1,x2) CNOT(x0, x1)
zero_zero=np.kron(zero,zero)
zero_one=np.kron(zero,one)
one_zero=np.kron(one,zero)
one_one=np.kron(one,one)
TOF = np.kron(np.outer(zero_zero, zero_zero), identity)+np.kron(np.outer(zero_one, zero_one), identity)+ \
np.kron(np.outer(one_zero, one_zero), identity)+np.kron(np.outer(one_one, one_one), X)
CNOT_I = reduce(np.kron, [CNOT, identity])
checker = CNOT_I @ TOF   # CNOT applied after the Toffoli
print(np.allclose(checker, perez_gate))
print(perez_gate)
```
<a href="https://colab.research.google.com/github/prithwis/KKolab/blob/main/KK_B2_Hadoop_and_Hive.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
<br>
<hr>
[Prithwis Mukerjee](http://www.linkedin.com/in/prithwis)<br>
#Hive with Hadoop
This notebook has all the codes / commands required to install Hadoop and Hive <br>
##Acknowledgements
Hadoop Installation from [Anjaly Sam's Github Repository](https://github.com/anjalysam/Hadoop) <br>
Hive Installation from [PhoenixNAP](https://phoenixnap.com/kb/install-hive-on-ubuntu) website
#1 Hadoop
Hadoop is a pre-requisite for Hive <br>
## 1.1 Download, Install Hadoop
```
# The default JVM available at /usr/lib/jvm/java-11-openjdk-amd64/ works for Hadoop
# But gives errors with Hive https://stackoverflow.com/questions/54037773/hive-exception-class-jdk-internal-loader-classloadersappclassloader-cannot
# Hence this JVM needs to be installed
!apt-get update > /dev/null
!apt-get install openjdk-8-jdk-headless -qq > /dev/null
# Download the latest version of Hadoop
# Change the version number in this and subsequent cells
#
!wget https://downloads.apache.org/hadoop/common/hadoop-3.3.0/hadoop-3.3.0.tar.gz
# Unzip it
# the tar command with the -x flag to extract, -z to uncompress, -v for verbose output, and -f to specify that we’re extracting from a file
!tar -xzf hadoop-3.3.0.tar.gz
#copy hadoop file to user/local
!mv hadoop-3.3.0/ /usr/local/
```
## 1.2 Set Environment Variables
```
#To find the default Java path
!readlink -f /usr/bin/java | sed "s:bin/java::"
!ls /usr/lib/jvm/
#To set java path, go to /usr/local/hadoop-3.3.0/etc/hadoop/hadoop-env.sh then
#. . . export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64/ . . .
#we have used a simpler alternative route using os.environ - it works
import os
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64" # default is changed
#os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-11-openjdk-amd64/"
os.environ["HADOOP_HOME"] = "/usr/local/hadoop-3.3.0/"
!echo $PATH
# Add Hadoop BIN to PATH
# get current_path from output of previous command
current_path = '/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tools/node/bin:/tools/google-cloud-sdk/bin:/opt/bin'
new_path = current_path+':/usr/local/hadoop-3.3.0/bin/'
os.environ["PATH"] = new_path
```
## 1.3 Test Hadoop Installation
```
#Running Hadoop - Test RUN, not doing anything at all
#!/usr/local/hadoop-3.3.0/bin/hadoop
# UNCOMMENT the following line if you want to make sure that Hadoop is alive!
#!hadoop
# Testing Hadoop with PI generating sample program, should calculate value of pi = 3.14157500000000000000
# pi example
#Uncomment the following line if you want to test Hadoop with pi example
#!hadoop jar /usr/local/hadoop-3.3.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.0.jar pi 16 100000
```
#2 Hive
## 2.1 Download, Install HIVE
```
# Download and Unzip the correct version and unzip
!wget https://downloads.apache.org/hive/hive-3.1.2/apache-hive-3.1.2-bin.tar.gz
!tar xzf apache-hive-3.1.2-bin.tar.gz
```
## 2.2 Set Environment *Variables*
```
# Make sure that the version number is correct and is as downloaded
os.environ["HIVE_HOME"] = "/content/apache-hive-3.1.2-bin"
!echo $HIVE_HOME
!echo $PATH
# current_path is set from output of previous command
current_path = '/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tools/node/bin:/tools/google-cloud-sdk/bin:/opt/bin:/usr/local/hadoop-3.3.0/bin/'
new_path = current_path+':/content/apache-hive-3.1.2-bin/bin'
os.environ["PATH"] = new_path
!echo $PATH
!echo $JAVA_HOME
!echo $HADOOP_HOME
!echo $HIVE_HOME
```
## 2.3 Set up HDFS Directories
```
!hdfs dfs -mkdir /tmp
!hdfs dfs -chmod g+w /tmp
#!hdfs dfs -ls /
!hdfs dfs -mkdir -p /content/warehouse
!hdfs dfs -chmod g+w /content/warehouse
#!hdfs dfs -ls /content/
```
## 2.4 Initialise HIVE - note and fix errors
```
# TYPE this command, do not copy and paste. Non printing characters cause havoc
# There will be two errors, that we will fix
# The following command surfaces the errors that we fix in the next cells
!schematool -initSchema -dbType derby
```
### 2.4.1 Fix One Warning, One Error
The SLF4J binding is duplicated, so we need to locate both copies and remove one <br>
The Guava jar version is too low and needs to be replaced
```
# locate multiple instances of slf4j ...
!ls $HADOOP_HOME/share/hadoop/common/lib/*slf4j*
!ls $HIVE_HOME/lib/*slf4j*
# removed the logging jar from Hive, retaining the Hadoop jar
!mv /content/apache-hive-3.1.2-bin/lib/log4j-slf4j-impl-2.10.0.jar ./
# the guava jar needs to be above v20
# https://stackoverflow.com/questions/45247193/nosuchmethoderror-com-google-common-base-preconditions-checkargumentzljava-lan
!ls $HIVE_HOME/lib/gu*
# the one available with Hadoop is better, v 27
!ls $HADOOP_HOME/share/hadoop/hdfs/lib/gu*
# Remove the Hive Guava and replace with Hadoop Guava
!mv $HIVE_HOME/lib/guava-19.0.jar ./
!cp $HADOOP_HOME/share/hadoop/hdfs/lib/guava-27.0-jre.jar $HIVE_HOME/lib/
```
## 2.5 Initialize HIVE
```
# Type this command, don't copy-paste
# Non printing characters inside the command will give totally illogical errors
!schematool -initSchema -dbType derby
```
## 2.6 Test HIVE
1. Create database
2. Create table
3. Insert data
4. Retrieve data
using command line options as [given here](https://cwiki.apache.org/confluence/display/hive/languagemanual+cli#).
```
!hive -e "create database if not exists praxisDB;"
!hive -e "show databases"
!hive -database praxisdb -e "create table if not exists emp (name string, age int)"
!hive -database praxisdb -e "show tables"
!hive -database praxisdb -e "insert into emp values ('naren', 70)"
!hive -database praxisdb -e "insert into emp values ('aditya', 49)"
!hive -database praxisdb -e "select * from emp"
# Silent Mode
!hive -S -database praxisdb -e "select * from emp"
```
## 2.7 Bulk Data Load from CSV file
```
#drop table
!hive -database praxisDB -e 'DROP table if exists eCommerce'
#create table
# Invoice Date is being treated as a STRING because input data is not correctly formatted
!hive -database praxisDB -e " \
CREATE TABLE eCommerce ( \
InvoiceNo varchar(10), \
StockCode varchar(10), \
Description varchar(50), \
Quantity int, \
InvoiceDate string, \
UnitPrice decimal(6,2), \
CustomerID varchar(10), \
Country varchar(15) \
) row format delimited fields terminated by ','; \
"
!hive -database praxisdb -e "describe eCommerce"
```
This data may not be clean and may have commas embedded in some field values of the CSV file. To see how clean it actually is, look at this notebook: [Spark SQLContext HiveContext](https://github.com/prithwis/KKolab/blob/main/KK_C1_SparkSQL_SQLContext_HiveContext.ipynb)
```
#Data as CSV file
!gdown https://drive.google.com/uc?id=1JJH24ZZaiJrEKValD--UtyFcWl7UanwV # 2% data ~ 10K rows
!gdown https://drive.google.com/uc?id=1g7mJ0v4fkERW0HWc1eq-SHs_jvQ0N2Oe # 100% data ~ 500K rows
#remove the CRLF character from the end of the row if it exists
!sed 's/\r//' /content/eCommerce_Full_2021.csv > datafile.csv
#!sed 's/\r//' /content/eCommerce_02PC_2021.csv > datafile.csv
# remove the first line containing headers from the file
!sed -i -e "1d" datafile.csv
!head datafile.csv
# delete all rows from table
!hive -database praxisdb -e 'TRUNCATE TABLE eCommerce'
# LOAD
!hive -database praxisdb -e "LOAD DATA LOCAL INPATH 'datafile.csv' INTO TABLE eCommerce"
!hive -S -database praxisdb -e "select count(*) from eCommerce"
!hive -S -database praxisdb -e "select * from eCommerce limit 30"
```
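If the row count or the column values look off, the embedded-comma issue mentioned above is the usual culprit. The sketch below (an editorial addition; it assumes `datafile.csv` produced in the cell above and the 8-column layout of the `eCommerce` table) counts lines whose naive comma split does not yield exactly 8 fields, which mirrors how Hive's simple `fields terminated by ','` parsing sees them:
```
# Sketch: count rows that a naive comma split breaks into more (or fewer) than 8 fields.
bad = 0
with open("datafile.csv") as f:
    for line in f:
        if len(line.rstrip("\n").split(",")) != 8:
            bad += 1
print(bad, "rows do not split into exactly 8 comma-separated fields")
```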
#Chronobooks <br>
<hr>
Chronotantra and Chronoyantra are two science fiction novels that explore the collapse of human civilisation on Earth and then its rebirth and reincarnation both on Earth as well as on the distant worlds of Mars, Titan and Enceladus. But is it the human civilisation that is being reborn? Or is it some other sentience that is revealing itself.
If you have an interest in AI and found this material useful, you may consider buying these novels, in paperback or kindle, from [http://bit.ly/chronobooks](http://bit.ly/chronobooks)
# what's the neuron yield across probes, experimenters and recording sites?
Anne Urai & Nate Miska, 2020
```
# GENERAL THINGS FOR COMPUTING AND PLOTTING
import pandas as pd
import numpy as np
import os, sys, time
import scipy as sp
# visualisation
import matplotlib.pyplot as plt
import seaborn as sns
# ibl specific things
import datajoint as dj
from ibl_pipeline import reference, subject, action, acquisition, data, behavior
from ibl_pipeline.analyses import behavior as behavior_analysis
ephys = dj.create_virtual_module('ephys', 'ibl_ephys')
figpath = os.path.join(os.path.expanduser('~'), 'Data/Figures_IBL')
```
## 1. neuron yield per lab and Npix probe over time
Replicates https://github.com/int-brain-lab/analysis/blob/master/python/probe_performance_over_sessions.py using DJ
```
probe_insertions = ephys.ProbeInsertion * ephys.DefaultCluster.Metrics * subject.SubjectLab \
* (acquisition.SessionProject
& 'session_project = "ibl_neuropixel_brainwide_01"') \
* behavior_analysis.SessionTrainingStatus
probe_insertions = probe_insertions.proj('probe_serial_number', 'probe_model_name', 'lab_name', 'metrics',
'good_enough_for_brainwide_map',
session_date='DATE(session_start_time)')
clusts = probe_insertions.fetch(format='frame').reset_index()
# put metrics into df columns from the blob (feature request: can these be added as attributes instead?)
for kix, k in enumerate(['ks2_label']):
tmp_var = []
for id, c in clusts.iterrows():
if k in c['metrics'].keys():
tmp = c['metrics'][k]
else:
tmp = np.nan
tmp_var.append(tmp)
clusts[k] = tmp_var
# hofer and mrsic-flogel probes are shared
clusts['lab_name'] = clusts['lab_name'].str.replace('mrsicflogellab','swclab')
clusts['lab_name'] = clusts['lab_name'].str.replace('hoferlab','swclab')
clusts.lab_name.unique()
clusts['probe_name'] = clusts['lab_name'] + ', ' + clusts['probe_model_name'] + ': ' + clusts['probe_serial_number']
clusts_summ = clusts.groupby(['lab_name', 'probe_name', 'session_start_time', 'ks2_label'])['session_date'].count().reset_index()
# use recording session number instead of date
clusts_summ['recording'] = clusts_summ.groupby(['probe_name']).cumcount() + 1
sns.set(style="ticks", context="paper")
g, axes = plt.subplots(6,6,figsize=(18,20))
for probe, ax in zip(clusts_summ.probe_name.unique(), axes.flatten()):
df = clusts_summ[clusts_summ.probe_name==probe].groupby(['session_start_time','ks2_label']).session_date.sum()
df.unstack().plot.barh(ax=ax, stacked=True, legend=False, colormap='Pastel2')
ax.set_title(probe, fontsize=12)
ax.axvline(x=60, color='seagreen', linestyle="--")
ax.set_yticks([])
ax.set_ylabel('')
ax.set_ylim([-1, np.max([max(ax.get_ylim()), 10])])
ax.set_xlim([0, 1000])
axes.flatten()[-1].set_axis_off()
sns.despine(trim=True)
plt.tight_layout()
plt.xlabel('Number of KS2 neurons')
plt.ylabel('Recording session')
g.savefig(os.path.join(figpath, 'probe_yield_oversessions.pdf'))
```
## 2. what is the overall yield of sessions, neurons etc?
```
## overall distribution of neurons per session
g = sns.FacetGrid(data=clusts_summ, hue='ks2_label', palette='Set2')
g.map(sns.distplot, "session_date", bins=np.arange(10, 500, 15), hist=True, rug=False, kde=False).add_legend()
for ax in g.axes.flatten():
ax.axvline(x=60, color='seagreen', linestyle="--")
g.set_xlabels('Number of KS2 neurons')
g.set_ylabels('Number of sessions')
g.savefig(os.path.join(figpath, 'probe_yield_allrecs.pdf'))
print('TOTAL YIELD SO FAR:')
clusts.groupby(['ks2_label'])['ks2_label'].count()
## overall distribution of neurons per session
g = sns.FacetGrid(data=clusts_summ, hue='ks2_label', col_wrap=4, col='lab_name', palette='Set2')
g.map(sns.distplot, "session_date", bins=np.arange(10, 500, 15), hist=True, rug=False, kde=False).add_legend()
for ax in g.axes.flatten():
ax.axvline(x=60, color='seagreen', linestyle="--")
g.set_xlabels('Number of KS2 neurons')
g.set_ylabels('Number of sessions')
#g.savefig(os.path.join(figpath, 'probe_yield_allrecs_perlab.pdf'))
## overall number of sessions that meet criteria for behavior and neural yield
sessions = clusts.loc[clusts.ks2_label == 'good', :].groupby(['lab_name', 'subject_uuid', 'session_start_time',
'good_enough_for_brainwide_map'])['cluster_id'].count().reset_index()
sessions['enough_neurons'] = (sessions['cluster_id'] > 60)
ct = sessions.groupby(['good_enough_for_brainwide_map', 'enough_neurons'])['cluster_id'].count().reset_index()
print('total nr of sessions: %d'%ct.cluster_id.sum())
pd.pivot_table(ct, columns=['good_enough_for_brainwide_map'], values=['cluster_id'], index=['enough_neurons'])
#sessions.describe()
# pd.pivot_table(df, values='cluster_id', index=['lab_name'],
# columns=['enough_neurons'], aggfunc=np.sum)
# check that this pandas wrangling is correct...
ephys_sessions = acquisition.Session * subject.Subject * subject.SubjectLab \
* (acquisition.SessionProject
& 'session_project = "ibl_neuropixel_brainwide_01"') \
* behavior_analysis.SessionTrainingStatus \
& ephys.ProbeInsertion & ephys.DefaultCluster.Metrics
ephys_sessions = ephys_sessions.fetch(format='frame').reset_index()
# ephys_sessions
# ephys_sessions.groupby(['good_enough_for_brainwide_map'])['session_start_time'].describe()
# which sessions do *not* show good enough behavior?
ephys_sessions.loc[ephys_sessions.good_enough_for_brainwide_map == 0, :].groupby([
'lab_name', 'subject_nickname', 'session_start_time'])['session_start_time'].unique()
# per lab, what's the drop-out due to behavior?
ephys_sessions['good_enough_for_brainwide_map'] = ephys_sessions['good_enough_for_brainwide_map'].astype(int)
ephys_sessions.groupby(['lab_name'])['good_enough_for_brainwide_map'].describe()
ephys_sessions['good_enough_for_brainwide_map'].describe()
# per lab, what's the dropout due to yield?
sessions['enough_neurons'] = sessions['enough_neurons'].astype(int)
sessions.groupby(['lab_name'])['enough_neurons'].describe()
## also show the total number of neurons, only from good behavior sessions
probe_insertions = ephys.ProbeInsertion * ephys.DefaultCluster.Metrics * subject.SubjectLab \
* (acquisition.SessionProject
& 'session_project = "ibl_neuropixel_brainwide_01"') \
* (behavior_analysis.SessionTrainingStatus &
'good_enough_for_brainwide_map = 1')
probe_insertions = probe_insertions.proj('probe_serial_number', 'probe_model_name', 'lab_name', 'metrics',
'good_enough_for_brainwide_map',
session_date='DATE(session_start_time)')
clusts = probe_insertions.fetch(format='frame').reset_index()
# put metrics into df columns from the blob (feature request: can these be added as attributes instead?)
for kix, k in enumerate(['ks2_label']):
tmp_var = []
for id, c in clusts.iterrows():
if k in c['metrics'].keys():
tmp = c['metrics'][k]
else:
tmp = np.nan
tmp_var.append(tmp)
clusts[k] = tmp_var
# hofer and mrsic-flogel probes are shared
clusts['lab_name'] = clusts['lab_name'].str.replace('mrsicflogellab','swclab')
clusts['lab_name'] = clusts['lab_name'].str.replace('hoferlab','swclab')
clusts.lab_name.unique()
clusts['probe_name'] = clusts['lab_name'] + ', ' + clusts['probe_model_name'] + ': ' + clusts['probe_serial_number']
clusts_summ = clusts.groupby(['lab_name', 'probe_name', 'session_start_time', 'ks2_label'])['session_date'].count().reset_index()
# use recording session number instead of date
clusts_summ['recording'] = clusts_summ.groupby(['probe_name']).cumcount() + 1
## overall distribution of neurons per session
g = sns.FacetGrid(data=clusts_summ, hue='ks2_label', palette='Set2')
g.map(sns.distplot, "session_date", bins=np.arange(10, 500, 15), hist=True, rug=False, kde=False).add_legend()
for ax in g.axes.flatten():
ax.axvline(x=60, color='seagreen', linestyle="--")
g.set_xlabels('Number of KS2 neurons')
g.set_ylabels('Number of sessions')
g.savefig(os.path.join(figpath, 'probe_yield_allrecs_goodsessions.pdf'))
print('TOTAL YIELD (from good sessions) SO FAR:')
clusts.groupby(['ks2_label'])['ks2_label'].count()
```
## 3. how does probe yield in the repeated site differ between mice/experimenters?
```
probes_rs = (ephys.ProbeTrajectory & 'insertion_data_source = "Planned"'
& 'x BETWEEN -2400 AND -2100' & 'y BETWEEN -2100 AND -1900' & 'theta BETWEEN 14 AND 16')
clust = ephys.DefaultCluster * ephys.DefaultCluster.Metrics * probes_rs * subject.SubjectLab() * subject.Subject()
clust = clust.proj('cluster_amp', 'cluster_depth', 'firing_rate', 'subject_nickname', 'lab_name','metrics',
'x', 'y', 'theta', 'phi', 'depth')
clusts = clust.fetch(format='frame').reset_index()
clusts['col_name'] = clusts['lab_name'] + ', ' + clusts['subject_nickname']
# put metrics into df columns from the blob
for kix, k in enumerate(clusts['metrics'][0].keys()):
tmp_var = []
for id, c in clusts.iterrows():
if k in c['metrics'].keys():
tmp = c['metrics'][k]
else:
tmp = np.nan
tmp_var.append(tmp)
clusts[k] = tmp_var
clusts
sns.set(style="ticks", context="paper")
g, axes = plt.subplots(1,1,figsize=(4,4))
df = clusts.groupby(['col_name', 'ks2_label']).ks2_label.count()
df.unstack().plot.barh(ax=axes, stacked=True, legend=True, colormap='Pastel2')
axes.axvline(x=60, color='seagreen', linestyle="--")
axes.set_ylabel('')
sns.despine(trim=True)
plt.xlabel('Number of KS2 neurons')
g.savefig(os.path.join(figpath, 'probe_yield_rs.pdf'))
## firing rate as a function of depth
print('plotting')
g = sns.FacetGrid(data=clusts, col='col_name', col_wrap=4, hue='ks2_label',
palette='Pastel2', col_order=sorted(clusts.col_name.unique()))
g.map(sns.scatterplot, "firing_rate", "cluster_depth", alpha=0.5).add_legend()
g.set_titles('{col_name}')
g.set_xlabels('Firing rate (spks/s)')
g.set_ylabels('Depth')
plt.tight_layout()
sns.despine(trim=True)
g.savefig(os.path.join(figpath, 'neurons_rsi_firingrate.pdf'))
```
# Measurement Error Mitigation
## Introduction
The measurement calibration is used to mitigate measurement errors.
The main idea is to prepare all $2^n$ basis input states and compute the probability of measuring counts in the other basis states.
From these calibrations, it is possible to correct the average results of another experiment of interest. This notebook gives examples for how to use the ``ignis.mitigation.measurement`` module.
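In matrix form (a brief editorial aside, with notation not used elsewhere in this notebook): if $C_{\text{ideal}}$ is the vector of true counts and $M$ is the calibration matrix whose entry $M_{ij}$ is the probability of measuring basis state $i$ when state $j$ was prepared, then the measured counts are modeled as
$$ C_{\text{noisy}} = M \, C_{\text{ideal}}, \qquad C_{\text{ideal}} \approx M^{-1} C_{\text{noisy}}, $$
and mitigation amounts to inverting $M$ (or solving the corresponding constrained least-squares problem), which is what the fitters and filters below do.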
```
# Import general libraries (needed for functions)
import numpy as np
import time
# Import Qiskit classes
import qiskit
from qiskit import QuantumRegister, QuantumCircuit, ClassicalRegister, Aer
from qiskit.providers.aer import noise
from qiskit.tools.visualization import plot_histogram
# Import measurement calibration functions
from qiskit.ignis.mitigation.measurement import (complete_meas_cal, tensored_meas_cal,
CompleteMeasFitter, TensoredMeasFitter)
```
## 3 Qubit Example of the Calibration Matrices
Assume that we would like to generate a calibration matrix for the 3 qubits Q2, Q3 and Q4 in a 5-qubit Quantum Register [Q0,Q1,Q2,Q3,Q4].
Since we have 3 qubits, there are $2^3=8$ possible quantum states.
## Generating Measurement Calibration Circuits
First, we generate a list of measurement calibration circuits for the full Hilbert space.
Each circuit creates a basis state.
If there are $n=3$ qubits, then we get $2^3=8$ calibration circuits.
The following function **complete_meas_cal** returns a list **meas_calibs** of `QuantumCircuit` objects containing the calibration circuits,
and a list **state_labels** of the calibration state labels.
The input to this function can be given in one of the following three forms:
- **qubit_list:** A list of qubits to perform the measurement correction on, or:
- **qr (QuantumRegister):** A quantum register, or:
- **cr (ClassicalRegister):** A classical register.
In addition, one can provide a string **circlabel**, which is added at the beginning of the circuit names for unique identification.
For example, in our case, the input is a 5-qubit `QuantumRegister` containing the qubits Q2,Q3,Q4:
```
# Generate the calibration circuits
qr = qiskit.QuantumRegister(5)
qubit_list = [2,3,4]
meas_calibs, state_labels = complete_meas_cal(qubit_list=qubit_list, qr=qr, circlabel='mcal')
```
Print the $2^3=8$ state labels (for the 3 qubits Q2,Q3,Q4):
```
state_labels
```
## Computing the Calibration Matrix
If we do not apply any noise, then the calibration matrix is expected to be the $8 \times 8$ identity matrix.
```
# Execute the calibration circuits without noise
backend = qiskit.Aer.get_backend('qasm_simulator')
job = qiskit.execute(meas_calibs, backend=backend, shots=1000)
cal_results = job.result()
# The calibration matrix without noise is the identity matrix
meas_fitter = CompleteMeasFitter(cal_results, state_labels, circlabel='mcal')
print(meas_fitter.cal_matrix)
```
Assume that we apply some noise model from Qiskit Aer to the 5 qubits,
then the calibration matrix will have most of its mass on the main diagonal, with some additional 'noise'.
Alternatively, we can execute the calibration circuits using an IBMQ provider.
```
# Generate a noise model for the 5 qubits
noise_model = noise.NoiseModel()
for qi in range(5):
read_err = noise.errors.readout_error.ReadoutError([[0.9, 0.1],[0.25,0.75]])
noise_model.add_readout_error(read_err, [qi])
# Execute the calibration circuits
backend = qiskit.Aer.get_backend('qasm_simulator')
job = qiskit.execute(meas_calibs, backend=backend, shots=1000, noise_model=noise_model)
cal_results = job.result()
# Calculate the calibration matrix with the noise model
meas_fitter = CompleteMeasFitter(cal_results, state_labels, qubit_list=qubit_list, circlabel='mcal')
print(meas_fitter.cal_matrix)
# Plot the calibration matrix
meas_fitter.plot_calibration()
```
## Analyzing the Results
We would like to compute the total measurement fidelity, and the measurement fidelity for a specific qubit, for example, Q0.
Since the on-diagonal elements of the calibration matrix are the probabilities of measuring state 'x' given preparation of state 'x',
the trace of this matrix, divided by the number of basis states, gives the average assignment fidelity.
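In symbols (an editorial addition, with $A$ the calibration matrix and $2^n$ the number of basis states):
$$ F_{\text{avg}} = \frac{1}{2^n}\sum_{x} A_{xx} = \frac{\operatorname{Tr}(A)}{2^n} $$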
```
# What is the measurement fidelity?
print("Average Measurement Fidelity: %f" % meas_fitter.readout_fidelity())
# What is the measurement fidelity of Q0?
print("Average Measurement Fidelity of Q0: %f" % meas_fitter.readout_fidelity(
label_list = [['000','001','010','011'],['100','101','110','111']]))
```
## Applying the Calibration
We now perform another experiment and correct the measured results.
## Correct Measurement Noise on a 3Q GHZ State
As an example, we start with the 3-qubit GHZ state on the qubits Q2,Q3,Q4:
$$ \mid GHZ \rangle = \frac{\mid{000} \rangle + \mid{111} \rangle}{\sqrt{2}}$$
```
# Make a 3Q GHZ state
cr = ClassicalRegister(3)
ghz = QuantumCircuit(qr, cr)
ghz.h(qr[2])
ghz.cx(qr[2], qr[3])
ghz.cx(qr[3], qr[4])
ghz.measure(qr[2],cr[0])
ghz.measure(qr[3],cr[1])
ghz.measure(qr[4],cr[2])
```
We now execute the calibration circuits (with the noise model above):
```
job = qiskit.execute([ghz], backend=backend, shots=5000, noise_model=noise_model)
results = job.result()
```
We now compute the results without any error mitigation and with the mitigation, namely after applying the calibration matrix to the results.
There are two fitting methods for applying the calibration (if no method is defined, then 'least_squares' is used).
- **'pseudo_inverse'**, which is a direct inversion of the calibration matrix,
- **'least_squares'**, which constrains to have physical probabilities.
The raw data to be corrected can be given in a number of forms:
- Form1: A counts dictionary from results.get_counts,
- Form2: A list of counts of length=len(state_labels),
- Form3: A list of counts of length=M*len(state_labels) where M is an integer (e.g. for use with the tomography data),
- Form4: A qiskit Result (e.g. results as above).
```
# Results without mitigation
raw_counts = results.get_counts()
# Get the filter object
meas_filter = meas_fitter.filter
# Results with mitigation
mitigated_results = meas_filter.apply(results)
mitigated_counts = mitigated_results.get_counts(0)
```
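To compare the two fitting methods explicitly, the filter's `apply` call also accepts a `method` argument (a quick sketch; as noted above, `'least_squares'` is the default):
```
# Sketch: apply the same filter with each fitting method and compare the counts.
mitigated_pinv = meas_filter.apply(raw_counts, method='pseudo_inverse')
mitigated_lsq = meas_filter.apply(raw_counts, method='least_squares')
print(mitigated_pinv)
print(mitigated_lsq)
```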
We can now plot the results with and without error mitigation:
```
from qiskit.tools.visualization import *
plot_histogram([raw_counts, mitigated_counts], legend=['raw', 'mitigated'])
```
### Applying to a reduced subset of qubits
Consider now that we want to correct a 2Q Bell state, but we have the 3Q calibration matrix. We can reduce the matrix and build a new mitigation object.
```
# Make a 2Q Bell state between Q2 and Q4
cr = ClassicalRegister(2)
bell = QuantumCircuit(qr, cr)
bell.h(qr[2])
bell.cx(qr[2], qr[4])
bell.measure(qr[2],cr[0])
bell.measure(qr[4],cr[1])
job = qiskit.execute([bell], backend=backend, shots=5000, noise_model=noise_model)
results = job.result()
#build a fitter from the subset
meas_fitter_sub = meas_fitter.subset_fitter(qubit_sublist=[2,4])
#The calibration matrix is now in the space Q2/Q4
meas_fitter_sub.cal_matrix
# Results without mitigation
raw_counts = results.get_counts()
# Get the filter object
meas_filter_sub = meas_fitter_sub.filter
# Results with mitigation
mitigated_results = meas_filter_sub.apply(results)
mitigated_counts = mitigated_results.get_counts(0)
from qiskit.tools.visualization import *
plot_histogram([raw_counts, mitigated_counts], legend=['raw', 'mitigated'])
```
## Tensored mitigation
The calibration can be simplified if the error is known to be local. By "local error" we mean that the error can be tensored to subsets of qubits. In this case, less than $2^n$ states are needed for the computation of the calibration matrix.
Assume that the error acts locally on qubit 2 and the pair of qubits 3 and 4. Construct the calibration circuits by using the function `tensored_meas_cal`. Unlike before we need to explicitly divide the qubit list up into subset regions.
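For the pattern used below, `mit_pattern = [[2], [3, 4]]`, this locality assumption means the full calibration matrix factorizes as a tensor product (an editorial note, with the ordering matching the `np.kron` check at the end of this section):
$$ A \;=\; A^{(3,4)} \otimes A^{(2)} $$
so only $2^2 = 4$ calibration circuits are needed instead of $2^3 = 8$.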
```
# Generate the calibration circuits
qr = qiskit.QuantumRegister(5)
mit_pattern = [[2],[3,4]]
meas_calibs, state_labels = tensored_meas_cal(mit_pattern=mit_pattern, qr=qr, circlabel='mcal')
```
We now retrieve the names of the generated circuits. Note that in each label (of length 3), the least significant bit corresponds to qubit 2, the middle bit corresponds to qubit 3, and the most significant bit corresponds to qubit 4.
```
for circ in meas_calibs:
print(circ.name)
```
Let us elaborate on the circuit names. We see that there are only four circuits, instead of eight. The total number of required circuits is $2^m$ where $m$ is the number of qubits in the largest subset (here $m=2$).
Each basis state of qubits 3 and 4 appears exactly once. Only two basis states are required for qubit 2, so these are split equally across the four experiments. For example, state '0' of qubit 2 appears in state labels '000' and '010'.
We now execute the calibration circuits on an Aer simulator, using the same noise model as before. This noise is in fact local to qubits 3 and 4 separately, but assume that we don't know it, and that we only know that it is local for qubit 2.
```
# Generate a noise model for the 5 qubits
noise_model = noise.NoiseModel()
for qi in range(5):
read_err = noise.errors.readout_error.ReadoutError([[0.9, 0.1],[0.25,0.75]])
noise_model.add_readout_error(read_err, [qi])
# Execute the calibration circuits
backend = qiskit.Aer.get_backend('qasm_simulator')
job = qiskit.execute(meas_calibs, backend=backend, shots=5000, noise_model=noise_model)
cal_results = job.result()
meas_fitter = TensoredMeasFitter(cal_results, mit_pattern=mit_pattern)
```
The fitter provides two calibration matrices. One matrix is for qubit 2, and the other matrix is for qubits 3 and 4.
```
print(meas_fitter.cal_matrices)
```
We can look at the readout fidelities of the individual tensored components or qubits within a set:
```
#readout fidelity of Q2
print('Readout fidelity of Q2: %f'%meas_fitter.readout_fidelity(0))
#readout fidelity of Q3/Q4
print('Readout fidelity of Q3/4 space (e.g. mean assignment '
'\nfidelity of 00,10,01 and 11): %f'%meas_fitter.readout_fidelity(1))
#readout fidelity of Q3
print('Readout fidelity of Q3: %f'%meas_fitter.readout_fidelity(1,[['00','10'],['01','11']]))
```
Plot the individual calibration matrices:
```
# Plot the calibration matrix
print('Q2 Calibration Matrix')
meas_fitter.plot_calibration(0)
print('Q3/Q4 Calibration Matrix')
meas_fitter.plot_calibration(1)
# Make a 3Q GHZ state
cr = ClassicalRegister(3)
ghz = QuantumCircuit(qr, cr)
ghz.h(qr[2])
ghz.cx(qr[2], qr[3])
ghz.cx(qr[3], qr[4])
ghz.measure(qr[2],cr[0])
ghz.measure(qr[3],cr[1])
ghz.measure(qr[4],cr[2])
```
We now execute the calibration circuits (with the noise model above):
```
job = qiskit.execute([ghz], backend=backend, shots=5000, noise_model=noise_model)
results = job.result()
# Results without mitigation
raw_counts = results.get_counts()
# Get the filter object
meas_filter = meas_fitter.filter
# Results with mitigation
mitigated_results = meas_filter.apply(results)
mitigated_counts = mitigated_results.get_counts(0)
```
Plot the raw vs corrected state:
```
meas_filter = meas_fitter.filter
mitigated_results = meas_filter.apply(results)
mitigated_counts = mitigated_results.get_counts(0)
plot_histogram([raw_counts, mitigated_counts], legend=['raw', 'mitigated'])
```
As a check we should get the same answer if we build the full correction matrix from a tensor product of the subspace calibration matrices:
```
meas_calibs2, state_labels2 = complete_meas_cal([2,3,4])
meas_fitter2 = CompleteMeasFitter(None, state_labels2)
meas_fitter2.cal_matrix = np.kron(meas_fitter.cal_matrices[1],meas_fitter.cal_matrices[0])
meas_filter2 = meas_fitter2.filter
mitigated_results2 = meas_filter2.apply(results)
mitigated_counts2 = mitigated_results2.get_counts(0)
plot_histogram([raw_counts, mitigated_counts2], legend=['raw', 'mitigated'])
```
## Running Aqua Algorithms with Measurement Error Mitigation
To use measurement error mitigation when running quantum circuits as part of an Aqua algorithm, we need to include the respective measurement error fitting instance in the QuantumInstance. This object also holds the specifications for the chosen backend.
In the following, we illustrate measurement error mitigation of Aqua algorithms on the example of searching the ground state of a Hamiltonian with VQE.
First, we need to import the libraries that provide backends as well as the classes that are needed to run the algorithm.
```
# Import qiskit functions and libraries
from qiskit import Aer, IBMQ
from qiskit.circuit.library import TwoLocal
from qiskit.aqua import QuantumInstance
from qiskit.aqua.algorithms import VQE
from qiskit.aqua.components.optimizers import COBYLA
from qiskit.aqua.operators import X, Y, Z, I, CX, T, H, S, PrimitiveOp
from qiskit.providers.aer import noise
# Import error mitigation functions
from qiskit.ignis.mitigation.measurement import CompleteMeasFitter
```
Then, we initialize the instances that are required to execute the algorithm.
```
# Initialize Hamiltonian
h_op = (-1.0523732 * I^I) + \
(0.39793742 * I^Z) + \
(-0.3979374 * Z^I) + \
(-0.0112801 * Z^Z) + \
(0.18093119 * X^X)
# Initialize trial state
var_form = TwoLocal(h_op.num_qubits, ['ry', 'rz'], 'cz', reps=3, entanglement='full')
# Initialize optimizer
optimizer = COBYLA(maxiter=350)
# Initialize algorithm to find the ground state
vqe = VQE(h_op, var_form, optimizer)
```
Here, we choose the Aer `qasm_simulator` as backend and also add a custom noise model.
The application of an actual quantum backend provided by IBMQ is outlined in the commented code.
```
# Generate a noise model
noise_model = noise.NoiseModel()
for qi in range(h_op.num_qubits):
read_err = noise.errors.readout_error.ReadoutError([[0.8, 0.2],[0.1,0.9]])
noise_model.add_readout_error(read_err, [qi])
# Initialize the backend configuration using measurement error mitigation with a QuantumInstance
qi_noise_model_qasm = QuantumInstance(backend=Aer.get_backend('qasm_simulator'), noise_model=noise_model, shots=1000,
measurement_error_mitigation_cls=CompleteMeasFitter,
measurement_error_mitigation_shots=1000)
# Initialize your TOKEN and provider with
# provider = IBMQ.get_provider(...)
# qi_noise_model_ibmq = QuantumInstance(backend=provider.get_backend(backend_name), shots=8000,
#                       measurement_error_mitigation_cls=CompleteMeasFitter, measurement_error_mitigation_shots=8000)
```
Finally, we can run the algorithm and check the results.
```
# Run the algorithm
result = vqe.run(qi_noise_model_qasm)
print(result)
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
```
|
github_jupyter
|
# FINN - Functional Verification of End-to-End Flow
-----------------------------------------------------------------
**Important: This notebook depends on the tfc_end2end_example notebook, because we are using models that are available at intermediate steps in the end-to-end flow. So please make sure the needed .onnx files are generated to run this notebook.**
In this notebook, we will show how to take the intermediate results of the end-to-end tfc example and verify their functionality with different methods. In the following picture you can see the section in the end-to-end flow about the *Simulation & Emulation Flows*. Besides the methods in this notebook, there is another one that is covered in the Jupyter notebook [tfc_end2end_example](tfc_end2end_example.ipynb): remote execution. The remote execution allows functional verification directly on the PYNQ board, for details please have a look at the mentioned Jupyter notebook.
<img src="verification.png" alt="Drawing" style="width: 500px;"/>
We will use the following helper functions, `showSrc` to show source code of FINN library calls and `showInNetron` to show the ONNX model at the current transformation step. The Netron displays are interactive, but they only work when running the notebook actively and not on GitHub (i.e. if you are viewing this on GitHub you'll only see blank squares).
```
from finn.util.basic import make_build_dir
from finn.util.visualization import showSrc, showInNetron
build_dir = "/workspace/finn"
```
To verify the simulations, a "golden" output is calculated as a reference. This is calculated directly from the Brevitas model using PyTorch, by running some example data from the MNIST dataset through the trained model.
```
from pkgutil import get_data
import onnx
import onnx.numpy_helper as nph
import torch
from finn.util.test import get_test_model_trained
fc = get_test_model_trained("TFC", 1, 1)
raw_i = get_data("finn.data", "onnx/mnist-conv/test_data_set_0/input_0.pb")
input_tensor = onnx.load_tensor_from_string(raw_i)
input_brevitas = torch.from_numpy(nph.to_array(input_tensor)).float()
output_golden = fc.forward(input_brevitas).detach().numpy()
output_golden
```
## Simulation using Python <a id='simpy'></a>
If an ONNX model consists of [standard ONNX](https://github.com/onnx/onnx/blob/master/docs/Operators.md) nodes and/or FINN custom operations that do not belong to the fpgadataflow domain (backend $\neq$ "fpgadataflow"), the model can be checked for functionality using Python.
To simulate a standard ONNX node, [onnxruntime](https://github.com/microsoft/onnxruntime) is used. onnxruntime is an open-source tool developed by Microsoft to run standard ONNX nodes. For the FINN custom op nodes, execution functions are defined. The following is an example of the execution function of an XNOR popcount node.
```
from finn.custom_op.general.xnorpopcount import xnorpopcountmatmul
showSrc(xnorpopcountmatmul)
```
The function contains a description of the node's behaviour in Python and can thus calculate its result.
This execution function and onnxruntime are used when `execute_onnx` from `onnx_exec` is applied to the model. The model is then simulated node by node and the result is stored in a context dictionary, which contains the values of each tensor at the end of the execution. To get the result, only the output tensor has to be extracted.
The procedure is shown below. We take the model right before the nodes are converted into HLS layers and generate an input tensor to pass to the execution function. The input tensor is generated from the Brevitas example inputs.
```
import numpy as np
from finn.core.modelwrapper import ModelWrapper
input_dict = {"global_in": nph.to_array(input_tensor)}
model_for_sim = ModelWrapper(build_dir+"/tfc_w1a1_ready_for_hls_conversion.onnx")
import finn.core.onnx_exec as oxe
output_dict = oxe.execute_onnx(model_for_sim, input_dict)
output_pysim = output_dict[list(output_dict.keys())[0]]
if np.isclose(output_pysim, output_golden, atol=1e-3).all():
print("Results are the same!")
else:
print("The results are not the same!")
```
The result is compared with the theoretical "golden" value for verification.
## Simulation (cppsim) using C++
When dealing with HLS custom op nodes in FINN, the simulation using Python is no longer sufficient. After the nodes have been converted to HLS layers, the simulation using C++ can be used instead. To do this, the input tensor is stored in an .npy file, and C++ code is generated that reads the values from the .npy array, streams them to the corresponding finn-hlslib function and writes the result to a new .npy file. This in turn can be read in Python and processed further in the FINN flow. For this example we use the model after setting the folding factors in the HLS layers. Please be aware that this is not the full model but only the dataflow partition, so before executing it at the end of this section we have to integrate the model back into the parent model.
```
model_for_cppsim = ModelWrapper(build_dir+"/tfc_w1_a1_set_folding_factors.onnx")
```
To generate the code for this simulation and to build the executable, two transformations are used:
* `PrepareCppSim` which generates the C++ code for the corresponding HLS layer
* `CompileCppSim` which compiles the C++ code and stores the path to the executable
```
from finn.transformation.fpgadataflow.prepare_cppsim import PrepareCppSim
from finn.transformation.fpgadataflow.compile_cppsim import CompileCppSim
from finn.transformation.general import GiveUniqueNodeNames
model_for_cppsim = model_for_cppsim.transform(GiveUniqueNodeNames())
model_for_cppsim = model_for_cppsim.transform(PrepareCppSim())
model_for_cppsim = model_for_cppsim.transform(CompileCppSim())
```
When we take a look at the model using Netron, we can see that the transformations introduced new attributes.
```
model_for_cppsim.save(build_dir+"/tfc_w1_a1_for_cppsim.onnx")
showInNetron(build_dir+"/tfc_w1_a1_for_cppsim.onnx")
```
The following node attributes have been added:
* `code_gen_dir_cppsim` indicates the directory where the files for the simulation using C++ are stored
* `executable_path` specifies the path to the executable
We now take a closer look at the files that were generated:
```
from finn.custom_op.registry import getCustomOp
fc0 = model_for_cppsim.graph.node[1]
fc0w = getCustomOp(fc0)
code_gen_dir = fc0w.get_nodeattr("code_gen_dir_cppsim")
!ls {code_gen_dir}
```
Besides the .cpp file, the folder contains .h files with the weights and thresholds. The shell script contains the compile command and *node_model* is the executable generated by compilation. Comparing this with the `executable_path` node attribute, it can be seen that it specifies exactly the path to *node_model*.
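As a quick check (a minimal sketch reusing the `fc0w` wrapper and `code_gen_dir` from above), we can print both paths and compare them:
```
# The executable_path attribute should point at the node_model binary inside code_gen_dir
print(fc0w.get_nodeattr("executable_path"))
print(code_gen_dir + "/node_model")
```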
To simulate the model, the execution mode (`exec_mode`) must be set to "cppsim". This is done using the `SetExecMode` transformation.
```
from finn.transformation.fpgadataflow.set_exec_mode import SetExecMode
model_for_cppsim = model_for_cppsim.transform(SetExecMode("cppsim"))
model_for_cppsim.save(build_dir+"/tfc_w1_a1_for_cppsim.onnx")
```
Before the model can be executed using `execute_onnx`, we integrate the child model into the parent model. The function then reads the `exec_mode` and writes the input into the correct directory as a .npy file. To be able to read this in C++, there is an additional .hpp file ([npy2apintstream.hpp](https://github.com/Xilinx/finn/blob/master/src/finn/data/cpp/npy2apintstream.hpp)) in FINN, which uses cnpy to read .npy files and convert them into streams, or to read a stream and write it into a .npy file. [cnpy](https://github.com/rogersce/cnpy) is a helper library to read and write .npy and .npz formats in C++.
The result is again compared to the "golden" output.
```
parent_model = ModelWrapper(build_dir+"/tfc_w1_a1_dataflow_parent.onnx")
sdp_node = parent_model.graph.node[2]
child_model = build_dir + "/tfc_w1_a1_for_cppsim.onnx"
getCustomOp(sdp_node).set_nodeattr("model", child_model)
output_dict = oxe.execute_onnx(parent_model, input_dict)
output_cppsim = output_dict[list(output_dict.keys())[0]]
if np.isclose(output_cppsim, output_golden, atol=1e-3).all():
print("Results are the same!")
else:
print("The results are not the same!")
```
## Emulation (rtlsim) using PyVerilator
The emulation using [PyVerilator](https://github.com/maltanar/pyverilator) can be done after IP blocks are generated from the corresponding HLS layers. PyVerilator is a tool which makes it possible to simulate Verilog files using Verilator via a Python interface.
We have two ways to use rtlsim: one is to run the model node by node, as with the simulation methods above; alternatively, if the model is in the form of the dataflow partition, the part of the graph that consists of only HLS nodes can also be executed as a whole.
Because at the point where we want to grab and verify the model it is already in split form (a parent graph consisting of non-HLS layers and a child graph consisting only of HLS layers), we first have to reference the child graph within the parent graph. This is done using the node attribute `model` of the `StreamingDataflowPartition` node.
First we show the procedure for a child graph whose layers have individual IP blocks, then the procedure for a child graph that already has a stitched IP.
### Emulation of model node-by-node
The child model is loaded and the `exec_mode` for each node is set. To prepare the node-by-node emulation, the transformation `PrepareRTLSim` is applied to the child model. This transformation creates the emulation files for each node, which can then be used directly when calling `execute_onnx()`. After the transformation, each node has a new node attribute `rtlsim_so` containing the path to the corresponding emulation files. The model is then saved to a new .onnx file so that the changed model can be referenced in the parent model.
```
from finn.transformation.fpgadataflow.prepare_rtlsim import PrepareRTLSim
from finn.transformation.fpgadataflow.prepare_ip import PrepareIP
from finn.transformation.fpgadataflow.hlssynth_ip import HLSSynthIP
test_fpga_part = "xc7z020clg400-1"
target_clk_ns = 10
child_model = ModelWrapper(build_dir + "/tfc_w1_a1_set_folding_factors.onnx")
child_model = child_model.transform(GiveUniqueNodeNames())
child_model = child_model.transform(PrepareIP(test_fpga_part, target_clk_ns))
child_model = child_model.transform(HLSSynthIP())
child_model = child_model.transform(SetExecMode("rtlsim"))
child_model = child_model.transform(PrepareRTLSim())
child_model.save(build_dir + "/tfc_w1_a1_dataflow_child.onnx")
```
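As a quick check (a minimal sketch, assuming the first node of the child graph is one of the HLS layers), the new `rtlsim_so` attribute can be inspected through the custom-op wrapper:
```
# Print the path to the compiled emulation files of the first HLS node
node0 = getCustomOp(child_model.graph.node[0])
print(node0.get_nodeattr("rtlsim_so"))
```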
The next step is to load the parent model and set the node attribute `model` in the StreamingDataflowPartition node (`sdp_node`). Afterwards, the `exec_mode` is set for each node in the parent model.
```
# parent model
model_for_rtlsim = ModelWrapper(build_dir + "/tfc_w1_a1_dataflow_parent.onnx")
# reference child model
sdp_node = getCustomOp(model_for_rtlsim.graph.node[2])
sdp_node.set_nodeattr("model", build_dir + "/tfc_w1_a1_dataflow_child.onnx")
model_for_rtlsim = model_for_rtlsim.transform(SetExecMode("rtlsim"))
```
Because the necessary files for the emulation were already generated in the Jupyter notebook [tfc_end2end_example](tfc_end2end_example.ipynb), the model can be executed directly in the next step.
```
output_dict = oxe.execute_onnx(model_for_rtlsim, input_dict)
output_rtlsim = output_dict[list(output_dict.keys())[0]]
if np.isclose(output_rtlsim, output_golden, atol=1e-3).all():
print("Results are the same!")
else:
print("The results are not the same!")
```
### Emulation of stitched IP
Here we use the same procedure. First the child model is loaded, but in contrast to the layer-by-layer emulation, the metadata property `exec_mode` is set to "rtlsim" for the whole child model. When the model is integrated and executed in the last step, the Verilog files of the stitched IP of the child model are used.
```
from finn.transformation.fpgadataflow.insert_dwc import InsertDWC
from finn.transformation.fpgadataflow.insert_fifo import InsertFIFO
from finn.transformation.fpgadataflow.create_stitched_ip import CreateStitchedIP
child_model = ModelWrapper(build_dir + "/tfc_w1_a1_dataflow_child.onnx")
child_model = child_model.transform(InsertDWC())
child_model = child_model.transform(InsertFIFO())
child_model = child_model.transform(GiveUniqueNodeNames())
child_model = child_model.transform(PrepareIP(test_fpga_part, target_clk_ns))
child_model = child_model.transform(HLSSynthIP())
child_model = child_model.transform(CreateStitchedIP(test_fpga_part, target_clk_ns))
child_model = child_model.transform(PrepareRTLSim())
child_model.set_metadata_prop("exec_mode","rtlsim")
child_model.save(build_dir + "/tfc_w1_a1_dataflow_child.onnx")
# parent model
model_for_rtlsim = ModelWrapper(build_dir + "/tfc_w1_a1_dataflow_parent.onnx")
# reference child model
sdp_node = getCustomOp(model_for_rtlsim.graph.node[2])
sdp_node.set_nodeattr("model", build_dir + "/tfc_w1_a1_dataflow_child.onnx")
output_dict = oxe.execute_onnx(model_for_rtlsim, input_dict)
output_rtlsim = output_dict[list(output_dict.keys())[0]]
if np.isclose(output_rtlsim, output_golden, atol=1e-3).all():
print("Results are the same!")
else:
print("The results are not the same!")
```
|
github_jupyter
|
```
import tensorflow as tf
from tensorflow.compat.v1 import ConfigProto
from tensorflow.compat.v1 import InteractiveSession
config = ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.5
config.gpu_options.allow_growth = True
session = InteractiveSession(config=config)
# import the libraries as shown below
from tensorflow.keras.layers import Input, Lambda, Dense, Flatten
from tensorflow.keras.models import Model
from tensorflow.keras.applications.resnet50 import ResNet50
#from keras.applications.vgg16 import VGG16
from tensorflow.keras.applications.resnet50 import preprocess_input
from tensorflow.keras.preprocessing import image
from tensorflow.keras.preprocessing.image import ImageDataGenerator,load_img
from tensorflow.keras.models import Sequential
import numpy as np
from glob import glob
#import matplotlib.pyplot as plt
# re-size all the images to this
IMAGE_SIZE = [224, 224]
train_path = 'Datasets/train'
valid_path = 'Datasets/test'
# Import the ResNet50 model (instead of VGG16) and use it as the base for our classifier
# Here we will be using imagenet weights
resnet = ResNet50(input_shape=IMAGE_SIZE + [3], weights='imagenet', include_top=False)
# don't train existing weights
for layer in resnet.layers:
layer.trainable = False
# useful for getting number of output classes
folders = glob('Datasets/train/*')
# our layers - you can add more if you want
x = Flatten()(resnet.output)
prediction = Dense(len(folders), activation='softmax')(x)
# create a model object
model = Model(inputs=resnet.input, outputs=prediction)
# view the structure of the model
model.summary()
# tell the model what cost and optimization method to use
model.compile(
loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy']
)
# Use the Image Data Generator to import the images from the dataset
from tensorflow.keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(rescale = 1./255,
shear_range = 0.2,
zoom_range = 0.2,
horizontal_flip = True)
test_datagen = ImageDataGenerator(rescale = 1./255)
# Make sure you provide the same target size as initialized for the image size
training_set = train_datagen.flow_from_directory('Datasets/train',
target_size = (224, 224),
batch_size = 32,
class_mode = 'categorical')
test_set = test_datagen.flow_from_directory('Datasets/test',
target_size = (224, 224),
batch_size = 32,
class_mode = 'categorical')
# fit the model
# Run the cell. It will take some time to execute
r = model.fit_generator(
training_set,
validation_data=test_set,
epochs=10,
steps_per_epoch=len(training_set),
validation_steps=len(test_set)
)
import matplotlib.pyplot as plt
# plot the loss
plt.plot(r.history['loss'], label='train loss')
plt.plot(r.history['val_loss'], label='val loss')
plt.legend()
plt.show()
plt.savefig('LossVal_loss')
# plot the accuracy
plt.plot(r.history['accuracy'], label='train acc')
plt.plot(r.history['val_accuracy'], label='val acc')
plt.legend()
plt.show()
plt.savefig('AccVal_acc')
from tensorflow.keras.models import load_model
model.save('model_resnet50.h5')
y_pred = model.predict(test_set)
y_pred
import numpy as np
y_pred = np.argmax(y_pred, axis=1)
y_pred
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing import image
model=load_model('model_resnet50.h5')
from PIL import Image
img_data = np.random.random(size=(100, 100, 3))
img = tf.keras.preprocessing.image.array_to_img(img_data)
array = tf.keras.preprocessing.image.img_to_array(img)
img_data
img=image.load_img('Datasets/test/Covid/1-s2.0-S0929664620300449-gr2_lrg-a.jpg',target_size=(224,224))
x=image.img_to_array(img)
x
x.shape
x=x/255
import numpy as np
x=np.expand_dims(x,axis=0)
img_data=preprocess_input(x)
img_data.shape
model.predict(img_data)
a=np.argmax(model.predict(img_data), axis=1)
a==0
```
|
github_jupyter
|
```
from birdcall.data import *
from birdcall.metrics import *
from birdcall.ops import *
import torch
import torchvision
from torch import nn
import numpy as np
import pandas as pd
from pathlib import Path
import soundfile as sf
BS = 16
MAX_LR = 1e-3
classes = pd.read_pickle('data/classes.pkl')
splits = pd.read_pickle('data/all_splits.pkl')
all_train_items = pd.read_pickle('data/all_train_items.pkl')
train_items = np.array(all_train_items)[splits[0][0]].tolist()
val_items = np.array(all_train_items)[splits[0][1]].tolist()
from collections import defaultdict
class2train_items = defaultdict(list)
for cls_name, path, duration in train_items:
class2train_items[cls_name].append((path, duration))
train_ds = MelspecPoolDataset(class2train_items, classes, len_mult=50, normalize=False)
train_dl = torch.utils.data.DataLoader(train_ds, batch_size=BS, num_workers=NUM_WORKERS, pin_memory=True, shuffle=True)
val_items = [(classes.index(item[0]), item[1], item[2]) for item in val_items]
val_items_binned = bin_items_negative_class(val_items)
class Model(nn.Module):
def __init__(self):
super().__init__()
self.cnn = nn.Sequential(*list(torchvision.models.resnet34(True).children())[:-2])
self.classifier = nn.Sequential(*[
nn.Linear(512, 512), nn.ReLU(), nn.Dropout(p=0.5), nn.BatchNorm1d(512),
nn.Linear(512, 512), nn.ReLU(), nn.Dropout(p=0.5), nn.BatchNorm1d(512),
nn.Linear(512, len(classes))
])
def forward(self, x):
x = torch.log10(1 + x)
max_per_example = x.view(x.shape[0], -1).max(1)[0] # scaling to between 0 and 1
x /= max_per_example[:, None, None, None, None] # per example!
bs, im_num = x.shape[:2]
x = x.view(-1, x.shape[2], x.shape[3], x.shape[4])
x = self.cnn(x)
x = x.mean((2,3))
x = self.classifier(x)
x = x.view(bs, im_num, -1)
x = lme_pool(x)
return x
model = Model().cuda()
import torch.optim as optim
from sklearn.metrics import accuracy_score, f1_score
import time
criterion = nn.BCEWithLogitsLoss()
optimizer = optim.Adam(model.parameters(), 1e-3)
scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, 5)
sc_ds = SoundscapeMelspecPoolDataset(pd.read_pickle('data/soundscape_items.pkl'), classes)
sc_dl = torch.utils.data.DataLoader(sc_ds, batch_size=2*BS, num_workers=NUM_WORKERS, pin_memory=True)
t0 = time.time()
for epoch in range(260):
running_loss = 0.0
for i, data in enumerate(train_dl, 0):
model.train()
inputs, labels = data[0].cuda(), data[1].cuda()
optimizer.zero_grad()
outputs = model(inputs)
loss = criterion(outputs, labels)
if np.isnan(loss.item()):
raise Exception(f'!!! nan encountered in loss !!! epoch: {epoch}\n')
loss.backward()
optimizer.step()
scheduler.step()
running_loss += loss.item()
if epoch % 5 == 4:
model.eval();
preds = []
targs = []
for num_specs in val_items_binned.keys():
valid_ds = MelspecShortishValidatioDataset(val_items_binned[num_specs], classes)
valid_dl = torch.utils.data.DataLoader(valid_ds, batch_size=2*BS, num_workers=NUM_WORKERS, pin_memory=True)
with torch.no_grad():
for data in valid_dl:
inputs, labels = data[0].cuda(), data[1].cuda()
outputs = model(inputs)
preds.append(outputs.cpu().detach())
targs.append(labels.cpu().detach())
preds = torch.cat(preds)
targs = torch.cat(targs)
f1s = []
ts = []
for t in np.linspace(0.4, 1, 61):
f1s.append(f1_score(preds.sigmoid() > t, targs, average='micro'))
ts.append(t)
sc_preds = []
sc_targs = []
with torch.no_grad():
for data in sc_dl:
inputs, labels = data[0].cuda(), data[1].cuda()
outputs = model(inputs)
sc_preds.append(outputs.cpu().detach())
sc_targs.append(labels.cpu().detach())
sc_preds = torch.cat(sc_preds)
sc_targs = torch.cat(sc_targs)
sc_f1 = f1_score(sc_preds.sigmoid() > 0.5, sc_targs, average='micro')
sc_f1s = []
sc_ts = []
for t in np.linspace(0.4, 1, 61):
sc_f1s.append(f1_score(sc_preds.sigmoid() > t, sc_targs, average='micro'))
sc_ts.append(t)
f1 = f1_score(preds.sigmoid() > 0.5, targs, average='micro')
print(f'[{epoch + 1}, {(time.time() - t0)/60:.1f}] loss: {running_loss / (len(train_dl)-1):.3f}, f1: {max(f1s):.3f}, sc_f1: {max(sc_f1s):.3f}')
running_loss = 0.0
torch.save(model.state_dict(), f'models/{epoch+1}_lmepool_simple_minmax_log_{round(f1, 2)}.pth')
```
|
github_jupyter
|
# Homework: (Kaggle) Titanic Survival Prediction
https://www.kaggle.com/c/titanic
# Exercise 1
* Following the example, transform the Titanic ticket number ('Ticket') column with the three encodings (feature hashing / label encoding / target mean encoding),
then estimate the survival probability together with the other numeric columns
```
# All preparation before feature engineering (same as in the previous example)
import pandas as pd
import numpy as np
import copy, time
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
data_path = 'data/data2/'
df_train = pd.read_csv(data_path + 'titanic_train.csv')
df_test = pd.read_csv(data_path + 'titanic_test.csv')
train_Y = df_train['Survived']
ids = df_test['PassengerId']
df_train = df_train.drop(['PassengerId', 'Survived'] , axis=1)
df_test = df_test.drop(['PassengerId'] , axis=1)
df = pd.concat([df_train,df_test])
df.head()
# Keep only categorical (object) columns and store them in object_features
object_features = []
for dtype, feature in zip(df.dtypes, df.columns):
if dtype == 'object':
object_features.append(feature)
print(f'{len(object_features)} Object Features : {object_features}\n')
# Keep only the categorical columns
df = df[object_features]
df = df.fillna('None')
train_num = train_Y.shape[0]
df.head()
```
# Exercise 2
* Continuing from the previous question, which of the three encodings works best?
- Answer: In this example all three perform about the same; count encoding is slightly more accurate, but not noticeably so
```
# Baseline: label encoding + logistic regression
df_temp = pd.DataFrame()
for c in df.columns:
df_temp[c] = LabelEncoder().fit_transform(df[c])
train_X = df_temp[:train_num]
estimator = LogisticRegression()
print(cross_val_score(estimator, train_X, train_Y, cv=5).mean())
df_temp.head()
# Add count encoding of the 'Cabin' column
count_cabin = df.groupby('Cabin').size().reset_index()
count_cabin.columns = ['Cabin', 'Cabin_count']
count_df = pd.merge(df, count_cabin, on='Cabin', how='left')
df_temp['Cabin_count'] = count_df['Cabin_count']
train_X = df_temp[:train_Y.shape[0]]
train_X.head()
# 'Cabin' count encoding + logistic regression
cv = 5
LR = LogisticRegression()
mean_accuracy = cross_val_score(LR, train_X, train_Y, cv=cv).mean()
print('{}-fold cross validation average accuracy: {}'.format(cv, mean_accuracy))
# 'Cabin' feature hashing + logistic regression
cv=5
df_temp = pd.DataFrame()
for c in df.columns:
df_temp[c] = LabelEncoder().fit_transform(df[c])
df_temp['Cabin_hash'] = df['Cabin'].apply(lambda x: hash(x) % 10).reset_index()['Cabin']
train_X = df_temp[:train_Y.shape[0]]
LR = LogisticRegression()
mean_accuracy = cross_val_score(LR, train_X, train_Y, cv=cv).mean()
print('{}-fold cross validation average accuracy: {}'.format(cv, mean_accuracy))
# 'Cabin' count encoding + 'Cabin' feature hashing + logistic regression
cv=5
df_temp = pd.DataFrame()
for c in df.columns:
df_temp[c] = LabelEncoder().fit_transform(df[c])
df_temp['Cabin_hash']= df['Cabin'].apply(lambda x: hash(x) % 10).reset_index()['Cabin']
df_temp['Cabin_count'] = count_df['Cabin_count']
train_X = df_temp[:train_Y.shape[0]]
LR = LogisticRegression()
mean_accuracy = cross_val_score(LR, train_X, train_Y, cv=cv).mean()
print('{}-fold cross validation average accuracy: {}'.format(cv, mean_accuracy))
```
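Exercise 1 also mentions target mean encoding, which does not appear in the cells above. Below is a minimal, illustrative sketch for the 'Cabin' column, reusing `df`, `train_Y`, and `train_num` from the cells above; note that computing the mean over all training rows like this can leak the target, so the score is only indicative.
```
# 'Cabin' target mean encoding + logistic regression (illustrative sketch)
df_temp = pd.DataFrame()
for c in df.columns:
    df_temp[c] = LabelEncoder().fit_transform(df[c])
# Mean survival rate per Cabin value, computed from the training rows only
cabin_survival = pd.DataFrame({'Cabin': df['Cabin'].values[:train_num], 'Survived': train_Y.values})
cabin_mean = cabin_survival.groupby('Cabin')['Survived'].mean()
# Map onto the full table; Cabin values unseen in training fall back to the global mean
df_temp['Cabin_mean'] = df['Cabin'].map(cabin_mean).fillna(train_Y.mean()).values
train_X = df_temp[:train_num]
print(cross_val_score(LogisticRegression(), train_X, train_Y, cv=5).mean())
```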
|
github_jupyter
|
### What if we buy a share every day at the highest price?
```
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
symbols = ['ABBV','AAPL','ADBE','APD','BRK-B','COST','CTL','DRI','IRM','KIM','MA','MCD','NFLX','NVDA','SO','V','VLO']
dates = ['2018-01-01', '2018-12-31']
data_directory = './data/hist/'
plot_directory = './plot/hist/'
def get_ticker_data(symbol, start_date, end_date):
ticker = pd.read_csv(data_directory + symbol + '.csv')
ticker['Date'] = pd.to_datetime(ticker['Date'], format='%Y-%m-%d')
ticker = ticker[(ticker['Date'] >= pd.to_datetime(start_date, format='%Y-%m-%d'))
& (ticker['Date'] <= pd.to_datetime(end_date, format='%Y-%m-%d'))]
ticker['units'] = 1
# At the highest price
ticker['investment'] = ticker['units'] * ticker['High']
ticker['total_units'] = ticker['units'].cumsum()
ticker['total_investment'] = ticker['investment'].cumsum()
# At the lowest price
ticker['total_value'] = ticker['total_units'] * ticker['Low']
ticker['percent'] = ((ticker['total_value'] - ticker['total_investment'])/ ticker['total_investment']) * 100.0
return ticker
def get_ticker_data_adj(symbol, start_date, end_date):
ticker = pd.read_csv(data_directory + symbol + '.csv')
ticker['Date'] = pd.to_datetime(ticker['Date'], format='%Y-%m-%d')
ticker = ticker[(ticker['Date'] >= pd.to_datetime(start_date, format='%Y-%m-%d'))
& (ticker['Date'] <= pd.to_datetime(end_date, format='%Y-%m-%d'))]
ticker['units'] = 1
ticker['investment'] = ticker['units'] * ticker['Adj Close']
ticker['total_units'] = ticker['units'].cumsum()
ticker['total_investment'] = ticker['investment'].cumsum()
ticker['total_value'] = ticker['total_units'] * ticker['Adj Close']
ticker['percent'] = ((ticker['total_value'] - ticker['total_investment'])/ ticker['total_investment']) * 100.0
return ticker
for symbol in symbols:
ticker = get_ticker_data(symbol, *dates)
fig = plt.figure(figsize=(8, 6), dpi=80, facecolor='w', edgecolor='k')
# 1
plt.subplot(2, 1, 1)
plt.plot(ticker['Date'], ticker['total_investment'], color='b')
plt.plot(ticker['Date'], ticker['total_value'], color='r')
plt.title(symbol + ' Dates: ' + dates[0] + ' to ' + dates[1])
plt.ylabel('Values')
# 2
plt.subplot(2, 1, 2)
plt.plot(ticker['Date'], ticker['percent'], color='b')
plt.xlabel('Dates')
plt.ylabel('Percent')
plt.show()
#fig.savefig(plot_directory + symbol + '.pdf', bbox_inches='tight')
plt.figure(num=None, figsize=(8, 6), dpi=80, facecolor='w', edgecolor='k')
for symbol in symbols:
ticker = get_ticker_data(symbol, *dates)
plt.plot(ticker['Date'], ticker['percent'])
plt.xlabel('Dates')
plt.ylabel('Percent')
plt.legend(symbols)
plt.show()
fig, axs = plt.subplots(len(symbols), 1, sharex=True)
# Remove horizontal space between axes
fig.subplots_adjust(hspace=0)
for i in range(0, len(symbols)):
ticker = get_ticker_data(symbols[i], *dates)
# Plot each graph, and manually set the y tick values
axs[i].plot(ticker['Date'], ticker['percent'])
axs[i].set_ylim(-200, 800)
axs[i].legend([symbols[i]])
print(type(axs[i]))
```
|
github_jupyter
|
# Self-Driving Car Engineer Nanodegree
## Project: **Finding Lane Lines on the Road**
***
In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.
Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.
In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) that can be used to guide the writing process. Completing both the code in the Ipython notebook and the writeup template will cover all of the [rubric points](https://review.udacity.com/#!/rubrics/322/view) for this project.
---
Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.
**Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".**
---
**The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Transform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.**
---
<figure>
<img src="line-segments-example.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p>
</figcaption>
</figure>
<p></p>
<figure>
<img src="laneLines_thirdPass.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p>
</figcaption>
</figure>
**Run the cell below to import some packages. If you get an `import error` for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, see [this forum post](https://carnd-forums.udacity.com/cq/viewquestion.action?spaceKey=CAR&id=29496372&questionTitle=finding-lanes---import-cv2-fails-even-though-python-in-the-terminal-window-has-no-problem-with-import-cv2) for more troubleshooting tips.**
## Import Packages
```
#importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
%matplotlib inline
```
## Read in an Image
```
#reading in an image
image = mpimg.imread('test_images/solidWhiteRight.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')
```
## Ideas for Lane Detection Pipeline
**Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:**
`cv2.inRange()` for color selection
`cv2.fillPoly()` for regions selection
`cv2.line()` to draw lines on an image given endpoints
`cv2.addWeighted()` to coadd / overlay two images
`cv2.cvtColor()` to grayscale or change color
`cv2.imwrite()` to output images to file
`cv2.bitwise_and()` to apply a mask to an image
**Check out the OpenCV documentation to learn about these and discover even more awesome functionality!**
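For orientation, here is a minimal sketch of how a few of these calls fit together, using the `image` loaded above; the color thresholds and region vertices are only illustrative placeholders, not tuned values:
```
# Keep only near-white pixels, then black out everything outside a rough triangular region
white_mask = cv2.inRange(image, np.array([200, 200, 200]), np.array([255, 255, 255]))
height, width = white_mask.shape
region = np.zeros_like(white_mask)
triangle = np.array([[(50, height), (width - 50, height), (width // 2, height // 2)]], dtype=np.int32)
cv2.fillPoly(region, triangle, 255)
masked = cv2.bitwise_and(white_mask, region)
plt.imshow(masked, cmap='gray')
```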
## Helper Functions
Below are some helper functions to help get you started. They should look familiar from the lesson!
```
import math
def grayscale(img):
"""Applies the Grayscale transform
This will return an image with only one color channel
but NOTE: to see the returned image as grayscale
(assuming your grayscaled image is called 'gray')
you should call plt.imshow(gray, cmap='gray')"""
return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Or use BGR2GRAY if you read an image with cv2.imread()
# return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
def canny(img, low_threshold, high_threshold):
"""Applies the Canny transform"""
return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
"""Applies a Gaussian Noise kernel"""
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img, vertices):
"""
Applies an image mask.
Only keeps the region of the image defined by the polygon
formed from `vertices`. The rest of the image is set to black.
"""
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
#returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
def draw_lines(img, lines, roi_top, roi_bottom, min_slope, max_slope, color=[255, 0, 0], thickness=2):
"""
NOTE: this is the function you might want to use as a starting point once you want to
average/extrapolate the line segments you detect to map out the full
extent of the lane (going from the result shown in raw-lines-example.mp4
to that shown in P1_example.mp4).
Think about things like separating line segments by their
slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
line vs. the right line. Then, you can average the position of each of
the lines and extrapolate to the top and bottom of the lane.
This function draws `lines` with `color` and `thickness`.
Lines are drawn on the image inplace (mutates the image).
If you want to make the lines semi-transparent, think about combining
this function with the weighted_img() function below
"""
#Initialize variables
sum_fit_left = 0
sum_fit_right = 0
number_fit_left = 0
number_fit_right = 0
for line in lines:
for x1,y1,x2,y2 in line:
#find the slope and offset of each line found (y=mx+b)
fit = np.polyfit((x1, x2), (y1, y2), 1)
#limit the slope to plausible left lane values and compute the mean slope/offset
if fit[0] >= min_slope and fit[0] <= max_slope:
sum_fit_left = fit + sum_fit_left
number_fit_left = number_fit_left + 1
#limit the slope to plausible right lane values and compute the mean slope/offset
if fit[0] >= -max_slope and fit[0] <= -min_slope:
sum_fit_right = fit + sum_fit_right
number_fit_right = number_fit_right + 1
#avoid division by 0
if number_fit_left > 0:
#Compute the mean of all fitted lines
mean_left_fit = sum_fit_left/number_fit_left
#Given two y points (bottom of image and top of region of interest), compute the x coordinates
x_top_left = int((roi_top - mean_left_fit[1])/mean_left_fit[0])
x_bottom_left = int((roi_bottom - mean_left_fit[1])/mean_left_fit[0])
#Draw the line
cv2.line(img, (x_bottom_left,roi_bottom), (x_top_left,roi_top), [255, 0, 0], 5)
else:
mean_left_fit = (0,0)
if number_fit_right > 0:
#Compute the mean of all fitted lines
mean_right_fit = sum_fit_right/number_fit_right
#Given two y points (bottom of image and top of region of interest), compute the x coordinates
x_top_right = int((roi_top - mean_right_fit[1])/mean_right_fit[0])
x_bottom_right = int((roi_bottom - mean_right_fit[1])/mean_right_fit[0])
#Draw the line
cv2.line(img, (x_bottom_right,roi_bottom), (x_top_right,roi_top), [255, 0, 0], 5)
else:
mean_right_fit = (0,0)
def hough_lines(img, roi_top, roi_bottom, min_slope, max_slope, rho, theta, threshold, min_line_len, max_line_gap):
"""
`img` should be the output of a Canny transform.
Returns an image with hough lines drawn.
"""
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
draw_lines(line_img, lines, roi_top, roi_bottom, min_slope, max_slope, color=[255, 0, 0], thickness=4)
return line_img
# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., λ=0.):
"""
`img` is the output of the hough_lines(), An image with lines drawn on it.
Should be a blank image (all black) with lines drawn on it.
`initial_img` should be the image before any processing.
The result image is computed as follows:
initial_img * α + img * β + λ
NOTE: initial_img and img must be the same shape!
"""
return cv2.addWeighted(initial_img, α, img, β, λ)
```
## Test Images
Build your pipeline to work on the images in the directory "test_images"
**You should make sure your pipeline works well on these images before you try the videos.**
```
import os
test_images = os.listdir("test_images/")
```
## Build a Lane Finding Pipeline
Build the pipeline and run your solution on all test_images. Make copies into the test_images directory, and you can use the images in your writeup report.
Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.
```
# TODO: Build your pipeline that will draw lane lines on the test_images
# then save them to the test_images directory.
def process_image1(img):
#Apply greyscale
gray_img = grayscale(img)
# Define a kernel size and Apply Gaussian blur
kernel_size = 5
blur_img = gaussian_blur(gray_img, kernel_size)
#Apply the Canny transform
low_threshold = 50
high_threshold = 150
canny_img = canny(blur_img, low_threshold, high_threshold)
#Region of interest (roi) horizontal percentages
roi_hor_perc_top_left = 0.4675
roi_hor_perc_top_right = 0.5375
roi_hor_perc_bottom_left = 0.11
roi_hor_perc_bottom_right = 0.95
#Region of interest vertical percentages
roi_vert_perc = 0.5975
#Apply a region of interest mask of the image
vertices = np.array([[(int(roi_hor_perc_bottom_left*img.shape[1]),img.shape[0]), (int(roi_hor_perc_top_left*img.shape[1]), int(roi_vert_perc*img.shape[0])), (int(roi_hor_perc_top_right*img.shape[1]), int(roi_vert_perc*img.shape[0])), (int(roi_hor_perc_bottom_right*img.shape[1]),img.shape[0])]], dtype=np.int32)
croped_img = region_of_interest(canny_img,vertices)
# Define the Hough img parameters
rho = 2 # distance resolution in pixels of the Hough grid
theta = np.pi/180 # angular resolution in radians of the Hough grid
threshold = 15 # minimum number of votes (intersections in Hough grid cell)
min_line_length = 40 # minimum number of pixels making up a line
max_line_gap = 20 # maximum gap in pixels between connectable line segments
min_slope = 0.5 # minimum line slope
max_slope = 0.8 # maximum line slope
# Apply the Hough transform to get an image and the lines
hough_img = hough_lines(croped_img, int(roi_vert_perc*img.shape[0]), img.shape[0], min_slope, max_slope, rho, theta, threshold, min_line_length, max_line_gap)
# Return the image of the lines blended with the original
return weighted_img(img, hough_img, 0.7, 1.0)
#prepare directory to receive processed images
newpath = 'test_images/processed'
if not os.path.exists(newpath):
os.makedirs(newpath)
for file in test_images:
# skip files starting with processed
if file.startswith('processed'):
continue
image = mpimg.imread('test_images/' + file)
processed_img = process_image1(image)
#Extract file name
base = os.path.splitext(file)[0]
#break
mpimg.imsave('test_images/processed/processed-' + base +'.png', processed_img, format = 'png', cmap = plt.cm.gray)
print("Processed ", file)
```
```
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
def process_image(img):
#Apply greyscale
gray_img = grayscale(img)
# Define a kernel size and Apply Gaussian blur
kernel_size = 5
blur_img = gaussian_blur(gray_img, kernel_size)
#Apply the Canny transform
low_threshold = 50
high_threshold = 150
canny_img = canny(blur_img, low_threshold, high_threshold)
#Region of interest (roi) horizontal percentages
roi_hor_perc_top_left = 0.4675
roi_hor_perc_top_right = 0.5375
roi_hor_perc_bottom_left = 0.11
roi_hor_perc_bottom_right = 0.95
#Region of interest vertical percentages
roi_vert_perc = 0.5975
#Apply a region of interest mask of the image
vertices = np.array([[(int(roi_hor_perc_bottom_left*img.shape[1]),img.shape[0]), (int(roi_hor_perc_top_left*img.shape[1]), int(roi_vert_perc*img.shape[0])), (int(roi_hor_perc_top_right*img.shape[1]), int(roi_vert_perc*img.shape[0])), (int(roi_hor_perc_bottom_right*img.shape[1]),img.shape[0])]], dtype=np.int32)
croped_img = region_of_interest(canny_img,vertices)
# Define the Hough img parameters
rho = 2 # distance resolution in pixels of the Hough grid
theta = np.pi/180 # angular resolution in radians of the Hough grid
threshold = 15 # minimum number of votes (intersections in Hough grid cell)
min_line_length = 40 # minimum number of pixels making up a line
max_line_gap = 20 # maximum gap in pixels between connectable line segments
min_slope = 0.5 # minimum line slope
max_slope = 0.8 # maximum line slope
# Apply the Hough transform to get an image and the lines
hough_img = hough_lines(croped_img, int(roi_vert_perc*img.shape[0]), img.shape[0], min_slope, max_slope, rho, theta, threshold, min_line_length, max_line_gap)
# Return the image of the lines blended with the original
return weighted_img(img, hough_img, 0.7, 1.0)
```
Let's try the one with the solid white lane on the right first ...
```
white_output = 'white.mp4'
clip1 = VideoFileClip("solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
```
Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
```
HTML("""
<video width="960" height="540" controls>
<source src="{0}" >
</video>
""".format(white_output))
```
## Improve the draw_lines() function
**At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".**
**Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.**
Now for the one with the solid yellow lane on the left. This one's more tricky!
```
yellow_output = 'yellow.mp4'
clip2 = VideoFileClip('solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(yellow_output))
```
## Writeup and Submission
If you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a [link](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) to the writeup template file.
## Optional Challenge
Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
```
challenge_output = 'extra.mp4'
clip2 = VideoFileClip('challenge.mp4')
challenge_clip = clip2.fl_image(process_image)
%time challenge_clip.write_videofile(challenge_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(challenge_output))
```
|
github_jupyter
|
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
```
# Regression: Predict fuel efficiency
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/beta/tutorials/keras/basic_regression"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/ru/beta/tutorials/keras/basic_regression.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/ru/beta/tutorials/keras/basic_regression.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/ru/beta/tutorials/keras/basic_regression.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
In a *regression* problem, we want to predict some continuous value, such as a price or a probability. Contrast this with a *classification* problem, where the goal is to pick a specific category from a limited list (for example, whether a picture shows an apple or an orange, i.e. recognizing which fruit is in the image).
This tutorial uses the classic [Auto MPG](https://archive.ics.uci.edu/ml/datasets/auto+mpg) dataset and builds a model that predicts the fuel efficiency of cars from the late 1970s and early 1980s. To do this, we provide the model with descriptions of many different cars from that period. These descriptions include attributes such as the number of cylinders, horsepower, engine displacement and weight.
This example uses the tf.keras API; see [the guide](https://www.tensorflow.org/guide/keras) for details.
```
# Install the seaborn library for pair plots
!pip install seaborn
from __future__ import absolute_import, division, print_function, unicode_literals
import pathlib
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
!pip install tensorflow==2.0.0-beta1
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
print(tf.__version__)
```
## The Auto MPG dataset
The dataset is available from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/).
### Get the data
First, download the dataset.
```
dataset_path = keras.utils.get_file("auto-mpg.data", "http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data")
dataset_path
```
Import it using the pandas library:
```
column_names = ['MPG','Cylinders','Displacement','Horsepower','Weight',
'Acceleration', 'Model Year', 'Origin']
raw_dataset = pd.read_csv(dataset_path, names=column_names,
na_values = "?", comment='\t',
sep=" ", skipinitialspace=True)
dataset = raw_dataset.copy()
dataset.tail()
```
### Prepare the data
The dataset contains a few unknown (missing) values.
```
dataset.isna().sum()
```
To keep this tutorial simple, drop those rows.
```
dataset = dataset.dropna()
```
The `"Origin"` column is really categorical, not numeric, so convert it to one-hot:
```
origin = dataset.pop('Origin')
dataset['USA'] = (origin == 1)*1.0
dataset['Europe'] = (origin == 2)*1.0
dataset['Japan'] = (origin == 3)*1.0
dataset.tail()
```
### Split the data into training and test sets
Now split the dataset into a training set and a test set.
We will use the test set for the final evaluation of our model.
```
train_dataset = dataset.sample(frac=0.8,random_state=0)
test_dataset = dataset.drop(train_dataset.index)
```
### Inspect the data
Have a look at the joint distribution of a few pairs of columns from the training set:
```
sns.pairplot(train_dataset[["MPG", "Cylinders", "Displacement", "Weight"]], diag_kind="kde")
```
Also look at the overall statistics:
```
train_stats = train_dataset.describe()
train_stats.pop("MPG")
train_stats = train_stats.transpose()
train_stats
```
### Separate features from labels
Separate the target values, or "labels", from the features. This label is the value the model will be trained to predict.
```
train_labels = train_dataset.pop('MPG')
test_labels = test_dataset.pop('MPG')
```
### Normalize the data
Look again at the `train_stats` block above and note how different the ranges of the individual features are.
It is good practice to normalize features that have different scales and ranges. Although the model *might* converge without feature normalization, doing so makes training more difficult and makes the resulting model dependent on the units chosen for the inputs.
Note: we intentionally compute these statistics from the training set only, and the same statistics are used to normalize the test set. We have to do this so that the test set comes from the same distribution the model was trained on.
```
def norm(x):
return (x - train_stats['mean']) / train_stats['std']
normed_train_data = norm(train_dataset)
normed_test_data = norm(test_dataset)
```
We will use this normalized data to train the model.
Caution: the statistics used to normalize the inputs (the mean and standard deviation) must be applied to any other data fed to the model, along with the one-hot encoding we did earlier. That includes the test data as well as any data the model is used with in production.
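For example (a minimal sketch repeating the step above for a single row), a raw example must always be passed through the same `norm` function before being fed to the model:
```
# Normalize one raw row with the training-set statistics
example = test_dataset[:1]
norm(example)
```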
## The model
### Build the model
Let's build our model. We will use a `Sequential` model with two densely connected hidden layers, and an output layer that returns a single continuous value. The model-building steps are wrapped in a `build_model` function, since we will create a second model later on.
```
def build_model():
model = keras.Sequential([
layers.Dense(64, activation='relu', input_shape=[len(train_dataset.keys())]),
layers.Dense(64, activation='relu'),
layers.Dense(1)
])
optimizer = tf.keras.optimizers.RMSprop(0.001)
model.compile(loss='mse',
optimizer=optimizer,
metrics=['mae', 'mse'])
return model
model = build_model()
```
### Inspect the model
Use the `.summary` method to print a simple description of the model.
```
model.summary()
```
Now try out the model. Take a batch of `10` examples from the training data and call `model.predict` on them.
```
example_batch = normed_train_data[:10]
example_result = model.predict(example_batch)
example_result
```
It seems to be working: the model produces a result of the expected shape and type.
### Train the model
Train the model for 1000 epochs, and record the training and validation accuracy in the `history` object.
```
# Display training progress by printing a single dot for each completed epoch
class PrintDot(keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs):
if epoch % 100 == 0: print('')
print('.', end='')
EPOCHS = 1000
history = model.fit(
normed_train_data, train_labels,
epochs=EPOCHS, validation_split = 0.2, verbose=0,
callbacks=[PrintDot()])
```
Visualize the model's training progress using the statistics stored in the `history` object.
```
hist = pd.DataFrame(history.history)
hist['epoch'] = history.epoch
hist.tail()
def plot_history(history):
hist = pd.DataFrame(history.history)
hist['epoch'] = history.epoch
plt.figure()
plt.xlabel('Epoch')
plt.ylabel('Mean Abs Error [MPG]')
plt.plot(hist['epoch'], hist['mae'],
label='Train Error')
plt.plot(hist['epoch'], hist['val_mae'],
label = 'Val Error')
plt.ylim([0,5])
plt.legend()
plt.figure()
plt.xlabel('Epoch')
plt.ylabel('Mean Square Error [$MPG^2$]')
plt.plot(hist['epoch'], hist['mse'],
label='Train Error')
plt.plot(hist['epoch'], hist['val_mse'],
label = 'Val Error')
plt.ylim([0,20])
plt.legend()
plt.show()
plot_history(history)
```
The resulting graph shows little improvement, or even degradation, of the validation error after roughly 100 training epochs. Let's update the model.fit call to stop training automatically when the validation loss stops improving. To do this we use the *EarlyStopping callback*, which checks the training metrics after every epoch. If a given number of epochs passes without improvement, it stops the training automatically.
You can learn more about this callback [here](https://www.tensorflow.org/versions/master/api_docs/python/tf/keras/callbacks/EarlyStopping).
```
model = build_model()
# The patience parameter is the number of epochs to check for improvement
early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=10)
history = model.fit(normed_train_data, train_labels, epochs=EPOCHS,
validation_split = 0.2, verbose=0, callbacks=[early_stop, PrintDot()])
plot_history(history)
```
The graph shows that the mean error on the validation data is about 2 MPG. Is that good or bad? That's for you to decide.
Let's see how the model performs on the **test** set, which we have not used at all during training. This check tells us what to expect from the model when we use it in the real world.
```
loss, mae, mse = model.evaluate(normed_test_data, test_labels, verbose=0)
print("Testing set Mean Abs Error: {:5.2f} MPG".format(mae))
```
### Make predictions
Finally, predict MPG values using data from the test set:
```
test_predictions = model.predict(normed_test_data).flatten()
plt.scatter(test_labels, test_predictions)
plt.xlabel('True Values [MPG]')
plt.ylabel('Predictions [MPG]')
plt.axis('equal')
plt.axis('square')
plt.xlim([0,plt.xlim()[1]])
plt.ylim([0,plt.ylim()[1]])
_ = plt.plot([-100, 100], [-100, 100])
```
It looks like our model makes reasonably good predictions. Let's look at the error distribution.
```
error = test_predictions - test_labels
plt.hist(error, bins = 25)
plt.xlabel("Prediction Error [MPG]")
_ = plt.ylabel("Count")
```
It is not quite Gaussian, but that is to be expected given how small the number of samples is.
## Conclusion
This notebook introduced several techniques for handling regression problems.
* Mean squared error (MSE) is a common loss function for regression problems (different loss functions are used for classification).
* Likewise, the evaluation metrics for regression differ from those used in classification. A common regression metric is mean absolute error (MAE).
* When numeric input features have values in different ranges, each feature should be scaled independently to the same range.
* If there is not much training data, prefer a small network with few hidden layers to avoid overfitting.
* Early stopping is a very useful technique for preventing overfitting.
|
github_jupyter
|
# Tutorial 3 of 3: Advanced Topics and Usage
**Learning Outcomes**
* Use different methods to add boundary pores to a network
* Manipulate network topology by adding and removing pores and throats
* Explore the ModelsDict design, including copying models between objects, and changing model parameters
* Write a custom pore-scale model and a custom Phase
* Access and manipulate objects associated with the network
* Combine multiple algorithms to predict relative permeability
## Build and Manipulate Network Topology
For the present tutorial, we'll keep the topology simple to help keep the focus on other aspects of OpenPNM.
```
import warnings
import numpy as np
import scipy as sp
import openpnm as op
%matplotlib inline
np.random.seed(10)
ws = op.Workspace()
ws.settings['loglevel'] = 40
np.set_printoptions(precision=4)
pn = op.network.Cubic(shape=[10, 10, 10], spacing=0.00006, name='net')
```
## Adding Boundary Pores
When performing transport simulations it is often useful to have 'boundary' pores attached to the surface(s) of the network where boundary conditions can be applied. When using the **Cubic** class, two methods are available for doing this: ``add_boundaries``, which is specific to the **Cubic** class, and ``add_boundary_pores``, which is a generic method that can also be used on other network types and which is inherited from **GenericNetwork**. The first method automatically adds boundaries to ALL six faces of the network and offsets them from the network by 1/2 of the value provided as the network ``spacing``. The second method provides total control over which boundary pores are created and where they are positioned, but requires the user to specify which pores the boundary pores should be attached to. Let's explore these two options:
```
pn.add_boundary_pores(labels=['top', 'bottom'])
```
Let's quickly visualize this network with the added boundaries:
```
#NBVAL_IGNORE_OUTPUT
fig = op.topotools.plot_connections(pn, c='r')
fig = op.topotools.plot_coordinates(pn, c='b', fig=fig)
fig.set_size_inches([10, 10])
```
### Adding and Removing Pores and Throats
OpenPNM uses a list-based data storage scheme for all properties, including topological connections. One of the benefits of this approach is that adding and removing pores and throats from the network is essentially as simple as adding or removing rows from the data arrays. The one exception to this 'simplicity' is that the ``'throat.conns'`` array must be treated carefully when trimming pores, so OpenPNM provides the ``extend`` and ``trim`` functions for adding and removing, respectively. To demonstrate, let's reduce the coordination number of the network to create a more random structure:
```
Ts = np.random.rand(pn.Nt) < 0.1 # Create a mask with ~10% of throats labeled True
op.topotools.trim(network=pn, throats=Ts) # Use mask to indicate which throats to trim
```
When the ``trim`` function is called, it automatically checks the health of the network afterwards, so logger messages might appear on the command line if problems were found such as isolated clusters of pores or pores with no throats. This health check is performed by calling the **Network**'s ``check_network_health`` method which returns a **HealthDict** containing the results of the checks:
```
a = pn.check_network_health()
print(a)
```
The **HealthDict** contains several lists including things like duplicate throats and isolated pores, but also a suggestion of which pores to trim to return the network to a healthy state. The **HealthDict** also has a ``health`` attribute that is ``False`` if any of the checks fail.
```
op.topotools.trim(network=pn, pores=a['trim_pores'])
```
Let's take another look at the network to see the trimmed pores and throats:
```
#NBVAL_IGNORE_OUTPUT
fig = op.topotools.plot_connections(pn, c='r')
fig = op.topotools.plot_coordinates(pn, c='b', fig=fig)
fig.set_size_inches([10, 10])
```
## Define Geometry Objects
The boundary pores we've added to the network should be treated a little bit differently. Specifically, they should have no volume or length (as they are not physically representative of real pores). To do this, we create two separate **Geometry** objects, one for internal pores and one for the boundaries:
```
Ps = pn.pores('*boundary', mode='not')
Ts = pn.throats('*boundary', mode='not')
geom = op.geometry.StickAndBall(network=pn, pores=Ps, throats=Ts, name='intern')
Ps = pn.pores('*boundary')
Ts = pn.throats('*boundary')
boun = op.geometry.Boundary(network=pn, pores=Ps, throats=Ts, name='boun')
```
The **StickAndBall** class is preloaded with the pore-scale models needed to calculate all the necessary size information (pore diameter, pore volume, throat length, throat diameter, etc.). The **Boundary** class is special and is only used for the boundary pores. In this class, geometrical properties are set to small fixed values so that they don't affect the simulation results.
## Define Multiple Phase Objects
In order to simulate relative permeability of air through a partially water-filled network, we need to create each **Phase** object. OpenPNM includes pre-defined classes for each of these common fluids:
```
air = op.phases.Air(network=pn)
water = op.phases.Water(network=pn)
water['throat.contact_angle'] = 110
water['throat.surface_tension'] = 0.072
```
### Aside: Creating a Custom Phase Class
In many cases you will want to create your own fluid, such as an oil or brine, which may be commonly used in your research. OpenPNM cannot predict all the possible scenarios, but luckily it is easy to create a custom **Phase** class as follows:
```
from openpnm.phases import GenericPhase
class Oil(GenericPhase):
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.add_model(propname='pore.viscosity',
model=op.models.misc.polynomial,
prop='pore.temperature',
a=[1.82082e-2, 6.51E-04, -3.48E-7, 1.11E-10])
self['pore.molecular_weight'] = 116 # g/mol
```
* Creating a **Phase** class basically involves placing a series of ``self.add_model`` commands within the ``__init__`` section of the class definition. This means that when the class is instantiated, all the models are added to *itself* (i.e. ``self``).
* ``**kwargs`` is a Python trick that captures all arguments in a *dict* called ``kwargs`` and passes them to another function that may need them. In this case they are passed to the ``__init__`` method of **Oil**'s parent by the ``super`` function. Specifically, things like ``name`` and ``network`` are expected.
* The above code block also stores the molecular weight of the oil as a constant value
* Adding models and constant values in this way could just as easily be done in a run script, but the advantage of defining a class is that it can be saved in a file (i.e. 'my_custom_phases') and reused in any project.
```
oil = Oil(network=pn)
print(oil)
```
## Define Physics Objects for Each Geometry and Each Phase
In tutorial #2 we created two **Physics** objects, one for each of the two **Geometry** objects used to handle the stratified layers. In this tutorial, the internal pores and the boundary pores each have their own **Geometry**, but there are two **Phases**, which also each require a unique **Physics**:
```
phys_water_internal = op.physics.GenericPhysics(network=pn, phase=water, geometry=geom)
phys_air_internal = op.physics.GenericPhysics(network=pn, phase=air, geometry=geom)
phys_water_boundary = op.physics.GenericPhysics(network=pn, phase=water, geometry=boun)
phys_air_boundary = op.physics.GenericPhysics(network=pn, phase=air, geometry=boun)
```
> To reiterate, *one* **Physics** object is required for each **Geometry** *AND* each **Phase**, so the number can grow to become annoying very quickly. Some useful tips for easing this situation are given below.
### Create a Custom Pore-Scale Physics Model
Perhaps the most distinguishing feature between pore-network modeling papers is the pore-scale physics models employed. Accordingly, OpenPNM was designed to allow for easy customization in this regard, so that you can create your own models to augment or replace the ones included in the OpenPNM *models* libraries. For demonstration, let's implement the capillary pressure model proposed by [Mason and Morrow in 1994](http://dx.doi.org/10.1006/jcis.1994.1402). They studied the entry pressure of non-wetting fluid into a throat formed by spheres, and found that the converging-diverging geometry increased the capillary pressure required to penetrate the throat. As a simple approximation they proposed $P_c = -2 \sigma \cdot cos(2/3 \theta) / R_t$
Pore-scale models are written as basic function definitions:
```
def mason_model(target, diameter='throat.diameter', theta='throat.contact_angle',
sigma='throat.surface_tension', f=0.6667):
proj = target.project
network = proj.network
phase = proj.find_phase(target)
Dt = network[diameter]
theta = phase[theta]
sigma = phase[sigma]
Pc = 4*sigma*np.cos(f*np.deg2rad(theta))/Dt
return Pc[phase.throats(target.name)]
```
Let's examine the components of above code:
* The function receives a ``target`` object as an argument. This indicates which object the results will be returned to.
* The ``f`` value is a scale factor that is applied to the contact angle. Mason and Morrow suggested a value of 2/3 as a decent fit to the data, but we'll make this an adjustable parameter with 2/3 as the default.
* Note that ``'throat.diameter'`` is actually a **Geometry** property, but it is retrieved via the network using the data exchange rules outlined in the second tutorial.
* All of the calculations are done for every throat in the network, but this pore-scale model may be assigned to a ``target`` like a **Physics** object, that is a subset of the full domain. As such, the last line extracts values from the ``Pc`` array for the location of ``target`` and returns just the subset.
* The actual values of the contact angle, surface tension, and throat diameter are NOT sent in as numerical arrays, but rather as dictionary keys to the arrays. There is one very important reason for this: if arrays had been sent, then re-running the model would use the same arrays and hence not use any updated values. By having access to dictionary keys, the model actually looks up the current values in each of the arrays whenever it is run.
* It is good practice to include the dictionary keys as arguments, such as ``theta='throat.contact_angle'``. This way the user can control where the contact angle is stored on the ``target`` object; a short sketch follows this list.
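For instance, a sketch of assigning the custom model while spelling out those keyword arguments explicitly (the values shown are simply the defaults from the signature above) could look like this:
```
# Sketch: assign mason_model and pass the dictionary keys and scale factor explicitly
# (these are just the defaults from the function signature above).
phys_water_internal.add_model(propname='throat.entry_pressure',
                              model=mason_model,
                              diameter='throat.diameter',
                              theta='throat.contact_angle',
                              sigma='throat.surface_tension',
                              f=0.6667)
```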
### Copy Models Between Physics Objects
As mentioned above, the need to specify a separate **Physics** object for each **Geometry** and **Phase** can become tedious. It is possible to *copy* the pore-scale models assigned to one object onto another object. First, let's assign the models we need to ``phys_water_internal``:
```
mod = op.models.physics.hydraulic_conductance.hagen_poiseuille
phys_water_internal.add_model(propname='throat.hydraulic_conductance',
model=mod)
phys_water_internal.add_model(propname='throat.entry_pressure',
model=mason_model)
```
Now make a copy of the ``models`` on ``phys_water_internal`` and apply it to all the other water **Physics** objects:
```
phys_water_boundary.models = phys_water_internal.models
```
The only 'gotcha' with this approach is that each of the **Physics** objects must be *regenerated* in order to place numerical values for all the properties into the data arrays:
```
phys_water_boundary.regenerate_models()
phys_air_internal.regenerate_models()
phys_air_boundary.regenerate_models()
```
### Adjust Pore-Scale Model Parameters
The pore-scale models are stored in a **ModelsDict** object that is itself stored under the ``models`` attribute of each object. This arrangement is somewhat convoluted, but it enables integrated storage of models on the objects to which they apply. The models on an object can be inspected with ``print(phys_water_internal)``, which shows a list of all the pore-scale properties that are computed by a model, along with some information about each model's *regeneration* mode.
Each model in the **ModelsDict** can be individually inspected by accessing it using the dictionary key corresponding to the *pore-property* that it calculates, e.g. ``print(phys_water_internal.models['throat.entry_pressure'])``. This shows a list of all the parameters associated with that model. It is possible to edit these parameters directly:
```
phys_water_internal.models['throat.entry_pressure']['f'] = 0.75 # Change value
phys_water_internal.regenerate_models() # Regenerate model with new 'f' value
```
More details about the **ModelsDict** and **ModelWrapper** classes can be found in the OpenPNM documentation.
## Perform Multiphase Transport Simulations
### Use the Built-In Drainage Algorithm to Generate an Invading Phase Configuration
```
inv = op.algorithms.Porosimetry(network=pn)
inv.setup(phase=water)
inv.set_inlets(pores=pn.pores(['top', 'bottom']))
inv.run()
```
* The inlet pores were set to both ``'top'`` and ``'bottom'`` using the ``pn.pores`` method. The algorithm applies to the entire network so the mapping of network pores to the algorithm pores is 1-to-1.
* The ``run`` method automatically generates a list of 25 capillary pressure points to test, but you can also specify more points, or exactly which points to test. See the method's documentation for details.
* Once the algorithm has been run, the resulting capillary pressure curve can be viewed with ``plot_drainage_curve``. If you'd prefer a table of data for plotting in your software of choice, you can use ``get_drainage_data``, which prints a table in the console.
### Set Pores and Throats to Invaded
After running, the ``inv`` object possesses an array containing the pressure at which each pore and throat was invaded, stored as ``'pore.invasion_pressure'`` and ``'throat.invasion_pressure'``. These arrays can be used to obtain a list of which pores and throats are invaded by water, using Boolean logic:
```
Pi = inv['pore.invasion_pressure'] < 5000
Ti = inv['throat.invasion_pressure'] < 5000
```
The resulting Boolean masks can be used to manually adjust the hydraulic conductivity of pores and throats based on their phase occupancy. The following lines set the water filled throats to near-zero conductivity for air flow:
```
Ts = phys_water_internal.map_throats(~Ti, origin=water)
phys_water_internal['throat.hydraulic_conductance'][Ts] = 1e-20
```
* The logic of these statements implicitly assumes that transport between two pores is only blocked if the throat is filled with the other phase, meaning that both pores could be filled and transport is still permitted. Another option would be to set the transport to near-zero if *either* or *both* of the pores are filled as well.
* The above approach can get complicated if there are several **Geometry** objects, and it is also a bit laborious. There is a pore-scale model for this under **Physics.models.multiphase** called ``conduit_conductance``. The term conduit refers to the path between two pores that includes 1/2 of each pore plus the connecting throat; a hedged sketch of assigning this model follows this list.
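A sketch of what assigning ``conduit_conductance`` might look like is shown below; the keyword arguments here are assumptions rather than a verified signature, so consult the OpenPNM documentation before using it:
```
# Hypothetical sketch only -- the keyword names are assumptions; check the OpenPNM docs.
mod = op.models.physics.multiphase.conduit_conductance
phys_water_internal.add_model(propname='throat.conduit_hydraulic_conductance',
                              model=mod,
                              throat_conductance='throat.hydraulic_conductance')
```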
### Calculate Relative Permeability of Each Phase
We are now ready to calculate the relative permeability of the domain under partially flooded conditions. Instantiate a **StokesFlow** object:
```
water_flow = op.algorithms.StokesFlow(network=pn, phase=water)
water_flow.set_value_BC(pores=pn.pores('left'), values=200000)
water_flow.set_value_BC(pores=pn.pores('right'), values=100000)
water_flow.run()
Q_partial, = water_flow.rate(pores=pn.pores('right'))
```
The *relative* permeability is the ratio of the water flow through the partially water-saturated medium to the flow through the fully water-saturated medium; hence we need the absolute permeability of water. This can be accomplished by *regenerating* the ``phys_water_internal`` object with ``phys_water_internal.regenerate_models()``, which recalculates the ``'throat.hydraulic_conductance'`` values and overwrites the near-zero values we entered manually after the ``inv`` simulation. We can then re-use the ``water_flow`` algorithm:
```
phys_water_internal.regenerate_models()
water_flow.run()
Q_full, = water_flow.rate(pores=pn.pores('right'))
```
And finally, the relative permeability can be found from:
```
K_rel = Q_partial/Q_full
print(f"Relative permeability: {K_rel:.5f}")
```
* The ratio of the flow rates gives the normalized relative permeability since all the domain size, viscosity and pressure differential terms cancel each other.
* To generate a full relative permeability curve, the above logic would be placed inside a for loop, with each pass increasing the pressure threshold used to obtain the list of invaded throats (``Ti``); a rough sketch follows this list.
* The saturation at each capillary pressure can be found by summing the pore and throat volumes of all the invaded pores and throats using ``Vp = geom['pore.volume'][Pi]`` and ``Vt = geom['throat.volume'][Ti]``.
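A rough sketch of such a loop, reusing only the objects created above (the pressure points are placeholders), might look like this:
```
# Rough sketch of a relative permeability curve loop; the pressure points are placeholders.
phys_water_internal.regenerate_models()
water_flow.run()
Q_full, = water_flow.rate(pores=pn.pores('right'))           # fully saturated reference

rel_perm = []
for Pc in [1000, 2000, 5000, 10000, 20000]:
    Ti = inv['throat.invasion_pressure'] < Pc                 # throats invaded by water at Pc
    phys_water_internal.regenerate_models()                   # reset conductances each pass
    Ts = phys_water_internal.map_throats(~Ti, origin=water)
    phys_water_internal['throat.hydraulic_conductance'][Ts] = 1e-20
    water_flow.run()
    Q_partial, = water_flow.rate(pores=pn.pores('right'))
    rel_perm.append(Q_partial/Q_full)
```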
|
github_jupyter
|
<a href="https://colab.research.google.com/github/DingLi23/s2search/blob/pipelining/pipelining/exp-cshc/exp-cshc_cshc_1w_ale_plotting.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
### Experiment Description
> This notebook is for experiment \<exp-cshc\> and data sample \<cshc\>.
### Initialization
```
%load_ext autoreload
%autoreload 2
import numpy as np, sys, os
in_colab = 'google.colab' in sys.modules
# fetching code and data (if you are using colab)
if in_colab:
!rm -rf s2search
!git clone --branch pipelining https://github.com/youyinnn/s2search.git
sys.path.insert(1, './s2search')
%cd s2search/pipelining/exp-cshc/
pic_dir = os.path.join('.', 'plot')
if not os.path.exists(pic_dir):
os.mkdir(pic_dir)
```
### Loading data
```
sys.path.insert(1, '../../')
import numpy as np, sys, os, pandas as pd
from getting_data import read_conf
from s2search_score_pdp import pdp_based_importance
sample_name = 'cshc'
f_list = [
'title', 'abstract', 'venue', 'authors',
'year',
'n_citations'
]
ale_xy = {}
ale_metric = pd.DataFrame(columns=['feature_name', 'ale_range', 'ale_importance', 'absolute mean'])
for f in f_list:
file = os.path.join('.', 'scores', f'{sample_name}_1w_ale_{f}.npz')
if os.path.exists(file):
nparr = np.load(file)
quantile = nparr['quantile']
ale_result = nparr['ale_result']
values_for_rug = nparr.get('values_for_rug')
ale_xy[f] = {
'x': quantile,
'y': ale_result,
'rug': values_for_rug,
'weird': ale_result[len(ale_result) - 1] > 20
}
if f != 'year' and f != 'n_citations':
ale_xy[f]['x'] = list(range(len(quantile)))
ale_xy[f]['numerical'] = False
else:
ale_xy[f]['xticks'] = quantile
ale_xy[f]['numerical'] = True
ale_metric.loc[len(ale_metric.index)] = [f, np.max(ale_result) - np.min(ale_result), pdp_based_importance(ale_result, f), np.mean(np.abs(ale_result))]
# print(len(ale_result))
print(ale_metric.sort_values(by=['ale_importance'], ascending=False))
print()
```
### ALE Plots
```
import matplotlib.pyplot as plt
import seaborn as sns
from matplotlib.ticker import MaxNLocator
categorical_plot_conf = [
{
'xlabel': 'Title',
'ylabel': 'ALE',
'ale_xy': ale_xy['title']
},
{
'xlabel': 'Abstract',
'ale_xy': ale_xy['abstract']
},
{
'xlabel': 'Authors',
'ale_xy': ale_xy['authors'],
# 'zoom': {
# 'inset_axes': [0.3, 0.3, 0.47, 0.47],
# 'x_limit': [89, 93],
# 'y_limit': [-1, 14],
# }
},
{
'xlabel': 'Venue',
'ale_xy': ale_xy['venue'],
# 'zoom': {
# 'inset_axes': [0.3, 0.3, 0.47, 0.47],
# 'x_limit': [89, 93],
# 'y_limit': [-1, 13],
# }
},
]
numerical_plot_conf = [
{
'xlabel': 'Year',
'ylabel': 'ALE',
'ale_xy': ale_xy['year'],
# 'zoom': {
# 'inset_axes': [0.15, 0.4, 0.4, 0.4],
# 'x_limit': [2019, 2023],
# 'y_limit': [1.9, 2.1],
# },
},
{
'xlabel': 'Citations',
'ale_xy': ale_xy['n_citations'],
# 'zoom': {
# 'inset_axes': [0.4, 0.65, 0.47, 0.3],
# 'x_limit': [-1000.0, 12000],
# 'y_limit': [-0.1, 1.2],
# },
},
]
def pdp_plot(confs, title):
fig, axes_list = plt.subplots(nrows=1, ncols=len(confs), figsize=(20, 5), dpi=100)
subplot_idx = 0
plt.suptitle(title, fontsize=20, fontweight='bold')
# plt.autoscale(False)
for conf in confs:
        axes = axes_list if len(confs) == 1 else axes_list[subplot_idx]
sns.rugplot(conf['ale_xy']['rug'], ax=axes, height=0.02)
axes.axhline(y=0, color='k', linestyle='-', lw=0.8)
axes.plot(conf['ale_xy']['x'], conf['ale_xy']['y'])
axes.grid(alpha = 0.4)
# axes.set_ylim([-2, 20])
axes.xaxis.set_major_locator(MaxNLocator(integer=True))
axes.yaxis.set_major_locator(MaxNLocator(integer=True))
if ('ylabel' in conf):
axes.set_ylabel(conf.get('ylabel'), fontsize=20, labelpad=10)
# if ('xticks' not in conf['ale_xy'].keys()):
# xAxis.set_ticklabels([])
axes.set_xlabel(conf['xlabel'], fontsize=16, labelpad=10)
if not (conf['ale_xy']['weird']):
if (conf['ale_xy']['numerical']):
axes.set_ylim([-1.5, 1.5])
pass
else:
axes.set_ylim([-7, 20])
pass
if 'zoom' in conf:
axins = axes.inset_axes(conf['zoom']['inset_axes'])
axins.xaxis.set_major_locator(MaxNLocator(integer=True))
axins.yaxis.set_major_locator(MaxNLocator(integer=True))
axins.plot(conf['ale_xy']['x'], conf['ale_xy']['y'])
axins.set_xlim(conf['zoom']['x_limit'])
axins.set_ylim(conf['zoom']['y_limit'])
axins.grid(alpha=0.3)
rectpatch, connects = axes.indicate_inset_zoom(axins)
connects[0].set_visible(False)
connects[1].set_visible(False)
connects[2].set_visible(True)
connects[3].set_visible(True)
subplot_idx += 1
pdp_plot(categorical_plot_conf, f"ALE for {len(categorical_plot_conf)} categorical features")
# plt.savefig(os.path.join('.', 'plot', f'{sample_name}-1wale-categorical.png'), facecolor='white', transparent=False, bbox_inches='tight')
pdp_plot(numerical_plot_conf, f"ALE for {len(numerical_plot_conf)} numerical features")
# plt.savefig(os.path.join('.', 'plot', f'{sample_name}-1wale-numerical.png'), facecolor='white', transparent=False, bbox_inches='tight')
```
|
github_jupyter
|
```
from torchvision.models import *
import wandb
from sklearn.model_selection import train_test_split
import os,cv2
import numpy as np
import matplotlib.pyplot as plt
from torch.optim import *
from torch.nn import *
import torch,torchvision
from tqdm import tqdm
device = 'cuda'
PROJECT_NAME = 'Musical-Instruments-Image-Classification'
def load_data():
data = []
labels = {}
labels_r = {}
idx = 0
for label in os.listdir('./data/'):
idx += 1
labels[label] = idx
labels_r[idx] = label
for folder in os.listdir('./data/'):
for file in os.listdir(f'./data/{folder}/'):
img = cv2.imread(f'./data/{folder}/{file}')
img = cv2.resize(img,(56,56))
img = img / 255.0
data.append([
img,
np.eye(labels[folder]+1,len(labels))[labels[folder]-1]
])
X = []
y = []
for d in data:
X.append(d[0])
y.append(d[1])
X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.125,shuffle=False)
X_train = torch.from_numpy(np.array(X_train)).to(device).view(-1,3,56,56).float()
y_train = torch.from_numpy(np.array(y_train)).to(device).float()
X_test = torch.from_numpy(np.array(X_test)).to(device).view(-1,3,56,56).float()
y_test = torch.from_numpy(np.array(y_test)).to(device).float()
return X,y,X_train,X_test,y_train,y_test,labels,labels_r,idx,data
X,y,X_train,X_test,y_train,y_test,labels,labels_r,idx,data = load_data()
# torch.save(labels_r,'labels_r.pt')
# torch.save(labels,'labels.pt')
# torch.save(X_train,'X_train.pth')
# torch.save(y_train,'y_train.pth')
# torch.save(X_test,'X_test.pth')
# torch.save(y_test,'y_test.pth')
# torch.save(labels_r,'labels_r.pth')
# torch.save(labels,'labels.pth')
def get_accuracy(model,X,y):
preds = model(X)
correct = 0
total = 0
for pred,yb in zip(preds,y):
pred = int(torch.argmax(pred))
yb = int(torch.argmax(yb))
if pred == yb:
correct += 1
total += 1
acc = round(correct/total,3)*100
return acc
def get_loss(model,X,y,criterion):
preds = model(X)
loss = criterion(preds,y)
return loss.item()
model = resnet18().to(device)
model.fc = Linear(512,len(labels))
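# Replace the final fully-connected layer so the output size matches the number of classes.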
criterion = MSELoss()
optimizer = Adam(model.parameters(),lr=0.001)
epochs = 100
batch_size = 32
wandb.init(project=PROJECT_NAME,name='baseline')
for _ in tqdm(range(epochs)):
for i in range(0,len(X_train),batch_size):
X_batch = X_train[i:i+batch_size]
y_batch = y_train[i:i+batch_size]
model.to(device)
preds = model(X_batch)
loss = criterion(preds,y_batch)
optimizer.zero_grad()
loss.backward()
optimizer.step()
model.eval()
torch.cuda.empty_cache()
wandb.log({'Loss':(get_loss(model,X_train,y_train,criterion)+get_loss(model,X_batch,y_batch,criterion))/2})
torch.cuda.empty_cache()
wandb.log({'Val Loss':get_loss(model,X_test,y_test,criterion)})
torch.cuda.empty_cache()
wandb.log({'Acc':(get_accuracy(model,X_train,y_train)+get_accuracy(model,X_batch,y_batch))/2})
torch.cuda.empty_cache()
wandb.log({'Val ACC':get_accuracy(model,X_test,y_test)})
torch.cuda.empty_cache()
model.train()
wandb.finish()
class Model(Module):
def __init__(self):
super().__init__()
self.max_pool2d = MaxPool2d((2,2),(2,2))
self.activation = ReLU()
self.conv1 = Conv2d(3,7,(5,5))
self.conv2 = Conv2d(7,14,(5,5))
self.conv2bn = BatchNorm2d(14)
self.conv3 = Conv2d(14,21,(5,5))
self.linear1 = Linear(21*3*3,256)
self.linear2 = Linear(256,512)
self.linear2bn = BatchNorm1d(512)
self.linear3 = Linear(512,256)
self.output = Linear(256,len(labels))
def forward(self,X):
preds = self.max_pool2d(self.activation(self.conv1(X)))
preds = self.max_pool2d(self.activation(self.conv2bn(self.conv2(preds))))
preds = self.max_pool2d(self.activation(self.conv3(preds)))
print(preds.shape)
preds = preds.view(-1,21*3*3)
preds = self.activation(self.linear1(preds))
preds = self.activation(self.linear2bn(self.linear2(preds)))
preds = self.activation(self.linear3(preds))
preds = self.output(preds)
return preds
model = Model().to(device)
criterion = MSELoss()
optimizer = Adam(model.parameters(),lr=0.001)
epochs = 100
batch_size = 32
wandb.init(project=PROJECT_NAME,name='baseline')
for _ in tqdm(range(epochs)):
for i in range(0,len(X_train),batch_size):
X_batch = X_train[i:i+batch_size]
y_batch = y_train[i:i+batch_size]
model.to(device)
preds = model(X_batch)
loss = criterion(preds,y_batch)
optimizer.zero_grad()
loss.backward()
optimizer.step()
model.eval()
torch.cuda.empty_cache()
wandb.log({'Loss':(get_loss(model,X_train,y_train,criterion)+get_loss(model,X_batch,y_batch,criterion))/2})
torch.cuda.empty_cache()
wandb.log({'Val Loss':get_loss(model,X_test,y_test,criterion)})
torch.cuda.empty_cache()
wandb.log({'Acc':(get_accuracy(model,X_train,y_train)+get_accuracy(model,X_batch,y_batch))/2})
torch.cuda.empty_cache()
wandb.log({'Val ACC':get_accuracy(model,X_test,y_test)})
torch.cuda.empty_cache()
model.train()
wandb.finish()
```
|
github_jupyter
|
# (Optional) Testing the Function Endpoint with your Own Audio Clips
Instead of using pre-recorded clips, this notebook shows you how to invoke the deployed Function
with your **own** audio clips.
In the cells below, we will use the [PyAudio library](https://pypi.org/project/PyAudio/) to record a short 1-second clip. We will then submit
that short clip to the Function endpoint on Oracle Functions. **Make sure PyAudio is installed on your laptop** before running this notebook.
The helper function defined below will record a 1-sec audio clip when executed. Speak into the microphone
of your computer and say one of the words `cat`, `eight`, `right`.
I'd recommend double-checking that you are not muted and that you are using the internal computer mic. No
headset.
```
# we will use pyaudio and wave in the
# bottom half of this notebook.
import pyaudio
import wave
# IPython.display is needed for audio playback at the end of this cell
import IPython.display as ipd
print(pyaudio.__version__)
def record_wave(duration=1.0, output_wave='./output.wav'):
    """Using the pyaudio library, this function will record an audio clip of a given duration.
Args:
- duration (float): duration of the recording in seconds
- output_wave (str) : filename of the wav file that contains your recording
Returns:
- frames : a list containing the recorded waveform
"""
# number of frames per buffer
frames_perbuff = 2048
# 16 bit int
format = pyaudio.paInt16
# mono sound
channels = 1
# Sampling rate -- CD quality (44.1 kHz). Standard
# for most recording devices.
sampling_rate = 44100
# frames contain the waveform data:
frames = []
# number of buffer chunks:
nchunks = int(duration * sampling_rate / frames_perbuff)
p = pyaudio.PyAudio()
stream = p.open(format=format,
channels=channels,
rate=sampling_rate,
input=True,
frames_per_buffer=frames_perbuff)
print("RECORDING STARTED ")
for i in range(0, nchunks):
data = stream.read(frames_perbuff)
frames.append(data)
print("RECORDING ENDED")
stream.stop_stream()
stream.close()
p.terminate()
# Write the audio clip to disk as a .wav file:
wf = wave.open(output_wave, 'wb')
wf.setnchannels(channels)
wf.setsampwidth(p.get_sample_size(format))
wf.setframerate(sampling_rate)
wf.writeframes(b''.join(frames))
wf.close()
# let's record your own, 1-sec clip
my_own_clip = "./my_clip.wav"
frames = record_wave(output_wave=my_own_clip)
# Playback
ipd.Audio("./my_clip.wav")
```
Looks good? Now let's try to send that clip to our model API endpoint. We will repeat the same process we adopted when we submitted pre-recorded clips.
```
# oci:
import oci
from oci.config import from_file
from oci import pagination
import oci.functions as functions
from oci.functions import FunctionsManagementClient, FunctionsInvokeClient
# json and librosa are used further down to prepare the request payload
import json
import librosa
# Lets specify the location of our OCI configuration file:
oci_config = from_file("/home/datascience/block_storage/.oci/config")
# Lets specify the compartment OCID, and the application + function names:
compartment_id = 'ocid1.compartment.oc1..aaaaaaaafl3avkal72rrwuy4m5rumpwh7r4axejjwq5hvwjy4h4uoyi7kzyq'
app_name = 'machine-learning-models'
fn_name = 'speech-commands'
fn_management_client = FunctionsManagementClient(oci_config)
app_result = pagination.list_call_get_all_results(
fn_management_client.list_applications,
compartment_id,
display_name=app_name
)
fn_result = pagination.list_call_get_all_results(
fn_management_client.list_functions,
app_result.data[0].id,
display_name=fn_name
)
invoke_client = FunctionsInvokeClient(oci_config, service_endpoint=fn_result.data[0].invoke_endpoint)
# here we need to be careful. `my_own_clip` was recorded at a 44.1 kHz sampling rate.
# Yet the training sample has data at a 16 kHz rate. To ensure that we feed data of the same
# size, we will downsample the data to a 16 kHz rate (sr=16000)
waveform, _ = librosa.load(my_own_clip, mono=True, sr=16000)
```
Below we call the deployed Function. Note that the first call could take 60 sec. or more; this is due to the cold start problem of Functions. Subsequent calls are much faster, typically < 1 sec.
```
%%time
resp = invoke_client.invoke_function(fn_result.data[0].id,
invoke_function_body=json.dumps({"input": waveform.tolist()}))
print(resp.data.text)
```
|
github_jupyter
|

<font size=3 color="midnightblue" face="arial">
<h1 align="center">Escuela de Ciencias Básicas, Tecnología e Ingeniería</h1>
</font>
<font size=3 color="navy" face="arial">
<h1 align="center">ECBTI</h1>
</font>
<font size=2 color="darkorange" face="arial">
<h1 align="center">Course:</h1>
</font>
<font size=2 color="navy" face="arial">
<h1 align="center">Introduction to the Python programming language</h1>
</font>
<font size=1 color="darkorange" face="arial">
<h1 align="center">February 2020</h1>
</font>
<h2 align="center">Session 08 - Working with JSON files</h2>
## Introduction
`JSON` (*JavaScript Object Notation*) is a lightweight data-interchange format that humans can easily read and write, and that computers can easily parse and generate. `JSON` is based on the [JavaScript](https://www.javascript.com/ 'JavaScript') programming language. It is a language-independent text format and can be used from `Python`, `Perl`, and many other languages. It is mainly used to transmit data between a server and web applications. `JSON` is built on two structures:
- A collection of name/value pairs, realized as an object, record, dictionary, hash table, keyed list, or associative array.
- An ordered list of values, realized as an array, vector, list, or sequence.
## JSON in Python
A number of packages support `JSON` in `Python`, such as [metamagic.json](https://pypi.org/project/metamagic.json/ 'metamagic.json'), [jyson](http://opensource.xhaus.com/projects/jyson/wiki 'jyson'), [simplejson](https://simplejson.readthedocs.io/en/latest/ 'simplejson'), [Yajl-Py](http://pykler.github.io/yajl-py/ 'Yajl-Py'), [ultrajson](https://github.com/esnme/ultrajson 'ultrajson'), and [json](https://docs.python.org/3.6/library/json.html 'json'). In this course we will use [json](https://docs.python.org/3.6/library/json.html 'json'), which is natively supported by `Python`. We can use [this site](https://jsonlint.com/ 'jsonlint'), which provides a `JSON` validation interface, to check our `JSON` data.
Below is an example of `JSON` data.
```
{
"nombre": "Jaime",
"apellido": "Perez",
"aficiones": ["correr", "ciclismo", "caminar"],
"edad": 35,
"hijos": [
{
"nombre": "Pedro",
"edad": 6
},
{
"nombre": "Alicia",
"edad": 8
}
]
}
```
As you can see, `JSON` supports primitive types (strings and numbers) as well as nested lists and objects.
Notice that this data representation is very similar to `Python` dictionaries.
```
{
"articulo": [
{
"id":"01",
"lenguaje": "JSON",
"edicion": "primera",
"autor": "Derrick Mwiti"
},
{
"id":"02",
"lenguaje": "Python",
"edicion": "segunda",
"autor": "Derrick Mwiti"
}
],
"blog":[
{
"nombre": "Datacamp",
"URL":"datacamp.com"
}
]
}
```
Let's rewrite it in a more familiar, compact form:
```
{"articulo":[{"id":"01","lenguaje": "JSON","edicion": "primera","author": "Derrick Mwiti"},
{"id":"02","lenguaje": "Python","edicion": "segunda","autor": "Derrick Mwiti"}],
"blog":[{"nombre": "Datacamp","URL":"datacamp.com"}]}
```
## Native `JSON` in `Python`
`Python` ships with a built-in package called `json` for encoding and decoding `JSON` data.
```
import json
```
## A bit of vocabulary
The process of encoding `JSON` is usually called serialization. The term refers to transforming data into a series of bytes (hence, serial) to be stored or transmitted across a network. You may also hear the term marshaling, but that is a separate discussion. Naturally, deserialization is the reciprocal process of decoding data that has been stored or delivered in the `JSON` standard.
What we are really talking about here is reading and writing. Think of it like this: encoding is for writing data to disk, while decoding is for reading data into memory.
### Serializing to `JSON`
What happens after a computer processes lots of information? It needs to take a data dump. Accordingly, the `json` library exposes the `dump()` method for writing data to files. There is also a `dumps()` method (pronounced "*dump-s*") for writing to a `Python` string.
Simple `Python` objects are translated to `JSON` according to a fairly intuitive conversion.
Let's compare the data types in `Python` and `JSON`.
|**Python** | **JSON** |
|:---------:|:----------------:|
|dict |object |
|list|array |
|tuple| array|
|str| string|
|int| number|
|float| number|
|True| true|
|False| false|
|None| null|
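For instance, a quick check of a few of these conversions (the expected output is shown as a comment):
```
import json

# tuple -> array, True -> true, None -> null, as in the table above
print(json.dumps({"tuple": (1, 2), "flag": True, "nothing": None}))
# {"tuple": [1, 2], "flag": true, "nothing": null}
```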
### Serialization example
Suppose we have a `Python` object in memory that looks something like this:
```
data = {
"president": {
"name": "Zaphod Beeblebrox",
"species": "Betelgeusian"
}
}
print(type(data))
```
It is critical that this information be saved to disk, so the task is to write it to a file.
Using `Python`'s context manager, you can create a file called `data_file.json` and open it in write mode. (`JSON` files conveniently end in a `.json` extension.)
```
with open("data_file.json", "w") as write_file:
json.dump(data, write_file)
```
Note that `dump()` takes two positional arguments:
1. the data object to be serialized, and
2. the file-like object the bytes will be written to.
Or, if you were inclined to keep using this serialized `JSON` data in your program, you could write it to a native `Python` `str` object.
```
json_string = json.dumps(data)
print(type(json_string))
```
Note that the file-like object is absent, since you are not writing to disk. Other than that, `dumps()` is just like `dump()`.
A `JSON` object has been created and is ready to be worked with.
### Some useful keyword arguments
Remember, `JSON` is meant to be easily readable by humans, but readable syntax isn't enough if it's all squished together. Besides, you probably have a programming style different from the one shown here, and you may find it easier to read code when it is formatted to your liking.
***NOTE:*** The `dump()` and `dumps()` methods use the same keyword arguments.
The first option most people want to change is whitespace. You can use the `indent` keyword argument to specify the indentation size for nested structures. Check out the difference for yourself using the data we defined above and running the following commands in a console:
```
json.dumps(data)
json.dumps(data, indent=4)
```
Another formatting option is the `separators` keyword argument. By default, it is a 2-tuple of the separator strings (`", "`, `": "`), but a common alternative for compact `JSON` is (`","`, `":"`). Look at the example `JSON` again to see where these separators come into play.
There are others, such as `sort_keys`. You can find the full list in the official [documentation](https://docs.python.org/3/library/json.html#basic-usage); a short illustration follows below.
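As a quick illustration, reusing the `data` dictionary defined above:
```
# Compact separators vs. pretty-printing with sorted keys (reusing `data` from above).
print(json.dumps(data, separators=(",", ":")))      # most compact form
print(json.dumps(data, indent=4, sort_keys=True))   # indented, keys in sorted order
```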
### Deserializing JSON
We have covered some very basic `JSON` so far; now it's time to whip it into shape. In the `json` library you will find `load()` and `loads()` for turning `JSON`-encoded data into `Python` objects.
Just like serialization, there is a simple conversion table for deserialization, although you can probably already guess what it looks like.
|**JSON** | **Python** |
|:---------:|:----------------:|
|object |dict |
|array |list|
|array|tuple |
|string|str |
|number|int |
|number|float |
|true|True |
|false|False |
|null|None |
Technically, this conversion isn't a perfect inverse of the serialization table. That basically means that if you encode an object now and then decode it again later, you may not get exactly the same object back. I imagine it's a bit like teleportation: break my molecules down over here and put them back together over there. Am I still the same person?
In reality, it's probably more like having one friend translate something into Japanese and another friend translate it back into English. In any case, the simplest example would be encoding a tuple and getting back a list after decoding, like so:
```
blackjack_hand = (8, "Q")
encoded_hand = json.dumps(blackjack_hand)
decoded_hand = json.loads(encoded_hand)
blackjack_hand == decoded_hand
type(blackjack_hand)
type(decoded_hand)
blackjack_hand == tuple(decoded_hand)
```
### Deserialization example
This time, imagine you have some data stored on disk that you would like to manipulate in memory. You will still use the context manager, but this time you will open the existing data file `data_file.json` in read mode.
```
with open("data_file.json", "r") as read_file:
data = json.load(read_file)
```
Things are fairly straightforward so far, but keep in mind that the result of this method could be any of the allowed data types from the conversion table. This only matters if you are loading data you haven't seen before. In most cases, the root object will be a dictionary or a list.
If you have pulled `JSON` data from another program, or otherwise obtained a string of `JSON`-formatted data in `Python`, you can easily deserialize it with `loads()`, which naturally loads from a string:
```
my_json_string = """{
"article": [
{
"id":"01",
"language": "JSON",
"edition": "first",
"author": "Derrick Mwiti"
},
{
"id":"02",
"language": "Python",
"edition": "second",
"author": "Derrick Mwiti"
}
],
"blog":[
{
"name": "Datacamp",
"URL":"datacamp.com"
}
]
}
"""
to_python = json.loads(my_json_string)
print(type(to_python))
```
Now we are working with pure `JSON`. What you do from here on is up to you, so pay close attention to what you want to do, what you actually do, and the result you get.
## A real-world example
For this introductory example we will use [JSONPlaceholder](https://jsonplaceholder.typicode.com/ "JSONPlaceholder"), an excellent source of fake `JSON` data for practice purposes.
First create a script file called `scratch.py`, or whatever you want to call it.
You will need to make an `API` request to the `JSONPlaceholder` service, so just use the `requests` package to do the heavy lifting. Add these imports at the top of your file:
```
import json
import requests
```
Now we will make a request to the `JSONPlaceholder` `API`. If you are not familiar with `requests`, it has a handy `json()` method that will do all the work, but you can practice using the `json` library to deserialize the `text` attribute of the response object. It should look something like this:
```
response = requests.get("https://jsonplaceholder.typicode.com/todos")
todos = json.loads(response.text)
```
To check that the above worked (or at least raised no errors), check the type of `todos` and then query the first 10 items of the list.
```
todos == response.json()
type(todos)
todos[:10]
len(todos)
```
You can see the structure of the data by viewing the endpoint in a browser, but here is a sample of part of it:
```
# part of the JSON file - a single TODO item
{
"userId": 1,
"id": 1,
"title": "delectus aut autem",
"completed": false
}
```
There are multiple users, each with a unique userId, and each task has a Boolean completed property. Can you determine which users have completed the most tasks?
```
# Map each userId to the number of completed TODOs for that user
todos_by_user = {}
# Increment the completed TODO count for each user.
for todo in todos:
if todo["completed"]:
try:
            # Increment the existing user's count.
todos_by_user[todo["userId"]] += 1
except KeyError:
            # This user has not been seen yet; start their count at 1.
todos_by_user[todo["userId"]] = 1
# Create a sorted list of (userId, num_complete) pairs.
top_users = sorted(todos_by_user.items(),
key=lambda x: x[1], reverse=True)
# Get the maximum number of completed TODOs.
max_complete = top_users[0][1]
# Create a list of all users who have completed the maximum number of TODOs.
users = []
for user, num_complete in top_users:
if num_complete < max_complete:
break
users.append(str(user))
max_users = " y ".join(users)
```
Now the `JSON` data can be manipulated like a normal `Python` object.
Running the script produces the following output:
```
s = "s" if len(users) > 1 else ""
print(f"user{s} {max_users} completed {max_complete} TODOs")
```
Next, we will create a `JSON` file that contains the completed *TODOs* for each of the users who completed the maximum number of *TODOs*.
All you need to do is filter `todos` and write the resulting list to a file. We will call the output file `filtered_data_file.json`. There are many ways to do this, but here is one:
```
# Define a function to filter out completed TODOs of users with the maximum number of completed TODOs.
def keep(todo):
is_complete = todo["completed"]
has_max_count = str(todo["userId"]) in users
return is_complete and has_max_count
# Write the filtered TODOs to a file.
with open("filtered_data_file.json", "w") as data_file:
filtered_todos = list(filter(keep, todos))
json.dump(filtered_todos, data_file, indent=2)
```
All of the data you don't need has been filtered out, and what you do need has been saved to a new file! Re-run the script and check `filtered_data_file.json` to verify everything worked. It will be in the same directory as `scratch.py` when you run it.
```
s = "s" if len(users) > 1 else ""
print(f"user{s} {max_users} completed {max_complete} TODOs")
```
So far we have covered the basics of manipulating `JSON` data. Now let's try to go a little deeper.
## Encoding and decoding custom `Python` objects
Let's look at an example class from a very famous game (Dungeons & Dragons). What happens when we try to serialize the `Elf` class from that application?
```
class Elf:
def __init__(self, level, ability_scores=None):
self.level = level
self.ability_scores = {
"str": 11, "dex": 12, "con": 10,
"int": 16, "wis": 14, "cha": 13
} if ability_scores is None else ability_scores
self.hp = 10 + self.ability_scores["con"]
elf = Elf(level=4)
json.dumps(elf)
```
`Python` tells you that `Elf` is not serializable.
Although the `json` module can handle most built-in `Python` types, it does not know how to encode custom data types by default. It's like trying to fit a square peg into a round hole: you need a buzzsaw and parental supervision.
## Simplifying data structures
How do we deal with more complex data structures? You could try to encode and decode the `JSON` "*by hand*", but there is a slightly cleverer solution that will save you some work. Instead of going straight from the custom data type to `JSON`, you can introduce an intermediate step.
All you need to do is represent the data in terms of the built-in types that `json` already understands. Essentially, you translate the more complex object into a simpler representation, which the `json` module then translates into `JSON`. It's like the transitive property in mathematics: if `A = B` and `B = C`, then `A = C`.
To get a feel for this, you will need a complex object to play with. You could use any custom class you like, but `Python` has a built-in type called `complex` for representing complex numbers, and it is not serializable by default.
```
z = 3 + 8j
type(z)
json.dumps(z)
```
A good question to ask when working with custom types is: what is the minimum amount of information needed to recreate this object? In the case of complex numbers, you only need to know the real and imaginary parts, which you can access as attributes on the `complex` object:
```
z.real
z.imag
```
Passing the same numbers to a `complex` constructor is enough to satisfy the `__eq__` comparison operator:
```
complex(3, 8) == z
```
Breaking custom data types down into their essential components is critical to both the serialization and deserialization processes.
## Encoding custom types
To translate a custom object into `JSON`, all you need to do is provide an encoding function to the `dump()` method's `default` parameter. The `json` module will call this function on any object that is not natively serializable. Here is a simple encoding function you can use for practice (see [here](https://www.programiz.com/python-programming/methods/built-in/isinstance "isinstance") for information about the `isinstance` function):
```
def encode_complex(z):
if isinstance(z, complex):
return (z.real, z.imag)
else:
type_name = z.__class__.__name__
raise TypeError(f"Object of type '{type_name}' is not JSON serializable")
```
Note that it is expected to raise a `TypeError` if it does not receive the kind of object it was expecting. This way, you avoid accidentally serializing any `Elves`. Now we can try encoding complex objects.
```
json.dumps(9 + 5j, default=encode_complex)
json.dumps(elf, default=encode_complex)
```
Why did we encode the complex number as a tuple? Is it the only option? Is it the best option? What would happen if we needed to decode the object later?
The other common approach is to subclass the standard `JSONEncoder` and override its `default()` method:
```
class ComplexEncoder(json.JSONEncoder):
def default(self, z):
if isinstance(z, complex):
return (z.real, z.imag)
else:
return super().default(z)
```
Instead of raising the `TypeError` yourself, you can simply let the base class handle it. You can use this directly in the `dump()` method via the `cls` parameter, or by creating an instance of the encoder and calling its `encode()` method:
```
json.dumps(2 + 5j, cls=ComplexEncoder)
encoder = ComplexEncoder()
encoder.encode(3 + 6j)
```
## Decoding custom types
While the real and imaginary parts of a complex number are absolutely necessary, they are actually not quite sufficient to recreate the object. This is what happens when you try to encode a complex number with `ComplexEncoder` and then decode the result:
```
complex_json = json.dumps(4 + 17j, cls=ComplexEncoder)
json.loads(complex_json)
```
All you get back is a list, and you would have to pass the values into a `complex` constructor if you wanted that complex object again. Recall the earlier comment about *teleportation*. What is missing is metadata, or information about the type of data you are encoding.
The question you really ought to ask is: what is the minimum amount of information that is both necessary and sufficient to recreate this object?
The `json` module expects all custom types to be expressed as objects in the `JSON` standard. For variety, you can create a `JSON` file this time, called `complex_data.json`, and add the following object representing a complex number:
```
# JSON
{
"__complex__": true,
"real": 42,
"imag": 36
}
```
Do you see the clever part? That "`__complex__`" key is the metadata we just talked about. It doesn't really matter what the associated value is. To get this little hack to work, all you need to do is verify that the key exists:
```
def decode_complex(dct):
if "__complex__" in dct:
return complex(dct["real"], dct["imag"])
return dct
```
If "`__complex__`" is not in the dictionary, you can just return the object and let the default decoder deal with it.
Every time the `load()` method attempts to parse an object, you are given the opportunity to intercede before the default decoder has its way with the data. You can do so by passing your decoding function to the `object_hook` parameter.
Now let's go back to where we were:
```
with open("complex_data.json") as complex_data:
data = complex_data.read()
z = json.loads(data, object_hook=decode_complex)
type(z)
```
While `object_hook` may feel like the counterpart to the `dump()` method's `default` parameter, the analogy really begins and ends there.
```
# JSON
[
{
"__complex__":true,
"real":42,
"imag":36
},
{
"__complex__":true,
"real":64,
"imag":11
}
]
```
This doesn't just work with a single object, either. Try putting this list of complex numbers into `complex_data.json` and running the script again:
```
with open("complex_data.json") as complex_data:
data = complex_data.read()
numbers = json.loads(data, object_hook=decode_complex)
```
If everything goes well, you will get a list of `complex` objects:
```
type(z)
numbers
```
## Wrapping up...
You can now wield the mighty power of `JSON` for any and all of your `Python` needs.
While the examples you have worked with here are certainly simplistic, they illustrate a workflow you can apply to more general tasks:
- Import the `json` package.
- Read the data with `load()` or `loads()`.
- Process the data.
- Write the altered data back out with `dump()` or `dumps()`.
What you do with the data once it has been loaded into memory depends on your use case. In general, the goal is to gather data from a source, extract useful information, and pass that information along or keep a record of it.
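As a minimal sketch of that workflow, assuming the `data_file.json` created earlier in this notebook (the output file name is a placeholder):
```
import json

# Load, process, and write back out -- the shape of the whole workflow above.
with open("data_file.json", "r") as f:
    data = json.load(f)

data["processed"] = True   # stand-in for whatever processing your use case needs

with open("data_file_out.json", "w") as f:
    json.dump(data, f, indent=4)
```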
|
github_jupyter
|
# Import Libraries
```
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize, sent_tokenize
```
# Sentences
```
sentence = [("the", "DT"), ("little", "JJ"), ("yellow", "JJ"),("dog", "NN"), ("barked", "VBD"), ("at", "IN"), ("the", "DT"), ("cat", "NN")]
sentence2 = "Four score and seven years ago our fathers brought forth on this continent, a new nation, conceived in Liberty, and dedicated to the proposition that all men are created equal"
sentence3 = "Four score and seven years ago our fathers brought forth on this continent, a new nation, conceived in Liberty, and dedicated to the proposition that all men are created equal. Now we are engaged in a great civil war, testing whether that nation, or any nation so conceived and so dedicated, can long endure. We are met on a great battle-field of that war. We have come to dedicate a portion of that field, as a final resting place for those who here gave their lives that that nation might live. It is altogether fitting and proper that we should do this. But, in a larger sense, we can not dedicate—we can not consecrate—we can not hallow—this ground. The brave men, living and dead, who struggled here, have consecrated it, far above our poor power to add or detract. The world will little note, nor long remember what we say here, but it can never forget what they did here. It is for us the living, rather, to be dedicated here to the unfinished work which they who fought here have thus far so nobly advanced. It is rather for us to be here dedicated to the great task remaining before us—that from these honored dead we take increased devotion to that cause for which they gave the last full measure of devotion—that we here highly resolve that these dead shall not have died in vain—that this nation, under God, shall have a new birth of freedom—and that government of the people, by the people, for the people, shall not perish from the earth."
```
# Regex Function
```
grammar = "NP: {<DT>?<JJ>*<NN>}"
c = nltk.RegexpParser(grammar)
result = c.parse(sentence)
print(result)
result.draw()
```
# Preprocessing - I
```
# Sentence read by Keerthivasan S M :D
stop_words = set(stopwords.words('english'))
l = list(sentence2.split(" "))
print(l)
# Geeks for Geeks
tokenized = sent_tokenize(sentence2)
for i in tokenized:
wordsList = nltk.word_tokenize(i)
wordsList = [w for w in wordsList if not w in stop_words]
tagged = nltk.pos_tag(wordsList)
```
# Regex Function - I
```
grammar = "NP: {<DT>?<JJ>*<NN>}"
c = nltk.RegexpParser(grammar)
result = c.parse(tagged)
print(result)
result.draw()
```
# Assignment
```
stop_words = set(stopwords.words('english'))
l = list(sentence3.split(". "))
print(l)
for i in range(len(l)):
l1 = l[i].split(" ")
tokenized = sent_tokenize(l[i])
for i in tokenized:
wordsList = nltk.word_tokenize(i)
wordsList = [w for w in wordsList if not w in stop_words]
tagged = nltk.pos_tag(wordsList)
grammar = "NP: {<DT>?<JJ>*<NN>}"
c = nltk.RegexpParser(grammar)
result = c.parse(tagged)
print(result)
print()
for i in range(len(l)):
l1 = l[i].split(" ")
tokenized = sent_tokenize(l[i])
for i in tokenized:
wordsList = nltk.word_tokenize(i)
wordsList = [w for w in wordsList if not w in stop_words]
tagged = nltk.pos_tag(wordsList)
grammar = "VP: {<MD>?<VB.*><NP|PP>}"
c = nltk.RegexpParser(grammar)
result = c.parse(tagged)
print(result)
print()
```
|
github_jupyter
|
```
%matplotlib inline
%load_ext autoreload
%autoreload 2
from __future__ import division
from __future__ import print_function
from __future__ import absolute_import
from __future__ import unicode_literals
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from scipy.stats import pearsonr
from scipy.misc import logsumexp
sns.set(color_codes=True)
def annotate_upper_left(ax, text, annotation_offset=(-50, 30)):
ax.annotate(text, xy=(0, 1), xycoords='axes fraction', fontsize=18,
xytext=annotation_offset, textcoords='offset points',
ha='left', va='top')
f = np.load('../output/hinge_results.npz')
temps = f['temps']
indices_to_remove = f['indices_to_remove']
actual_loss_diffs = f['actual_loss_diffs']
predicted_loss_diffs = f['predicted_loss_diffs']
influences = f['influences']
sns.set_style('white')
fontsize=14
fig, axs = plt.subplots(1, 4, sharex=False, sharey=False, figsize=(13, 3))
# Graph of approximations
x = np.arange(-5, 15, 0.01)
ts = [0.001, 0.01, 0.1]
y = 1 - x
y[y < 0] = 0
axs[0].plot(x, y, label='t=0 (Hinge)')
for t in ts:
# y = t * np.log(1 + np.exp(-(x-1)/t))
y = t * logsumexp(
np.vstack((np.zeros_like(x), -(x-1)/t)),
axis=0)
axs[0].plot(x, y, label='t=%s' % t)
axs[0].set_xlim((0.8, 1.2))
axs[0].set_xticks((0.8, 0.9, 1.0, 1.1, 1.2))
axs[0].set_ylim((0, 0.3))
axs[0].legend(fontsize=fontsize-4)
axs[0].set_xlabel('s', fontsize=fontsize)
axs[0].set_ylabel('SmoothHinge(s)', fontsize=fontsize)
# Hinge loss
ax_idx = 1
temp_idx = 0
smooth_influence_preds = influences[temp_idx, indices_to_remove[0, :]]
print(temps[temp_idx], pearsonr(actual_loss_diffs[0, :], smooth_influence_preds)[0])
axs[ax_idx].scatter(actual_loss_diffs[0, :], smooth_influence_preds)
max_value = 1.1 * np.max([np.max(np.abs(actual_loss_diffs[0, :])), np.max(np.abs(smooth_influence_preds))])
axs[ax_idx].set_xlim((-0.025, 0.025))
axs[ax_idx].set_ylim(-max_value,max_value)
axs[ax_idx].set_xlabel('Actual diff in loss', fontsize=fontsize)
axs[ax_idx].set_ylabel('Predicted diff in loss', fontsize=fontsize)
axs[ax_idx].plot([-0.025, 0.025], [-0.025, 0.025], 'k-', alpha=0.2, zorder=1)
axs[ax_idx].set_title('t=0 (Hinge)', fontsize=fontsize)
# t = 0.001
ax_idx = 2
temp_idx = 1
smooth_influence_preds = influences[temp_idx, indices_to_remove[0, :]]
print(temps[temp_idx], pearsonr(actual_loss_diffs[0, :], smooth_influence_preds)[0])
axs[ax_idx].scatter(actual_loss_diffs[0, :], smooth_influence_preds)
max_value = 1.1 * np.max([np.max(np.abs(actual_loss_diffs[0, :])), np.max(np.abs(smooth_influence_preds))])
axs[ax_idx].set_xlim((-0.025, 0.025))
axs[ax_idx].set_ylim((-0.025, 0.025))
axs[ax_idx].set_aspect('equal')
axs[ax_idx].set_xlabel('Actual diff in loss', fontsize=fontsize)
axs[ax_idx].plot([-0.025, 0.025], [-0.025, 0.025], 'k-', alpha=0.2, zorder=1)
axs[ax_idx].set_title('t=0.001', fontsize=fontsize)
# t = 0.1
ax_idx = 3
temp_idx = 2
smooth_influence_preds = influences[temp_idx, indices_to_remove[0, :]]
print(temps[temp_idx], pearsonr(actual_loss_diffs[0, :], smooth_influence_preds)[0])
axs[ax_idx].scatter(actual_loss_diffs[0, :], smooth_influence_preds)
max_value = 1.1 * np.max([np.max(np.abs(actual_loss_diffs[0, :])), np.max(np.abs(smooth_influence_preds))])
axs[ax_idx].set_xlim((-0.025, 0.025))
axs[ax_idx].set_ylim((-0.025, 0.025))
axs[ax_idx].set_aspect('equal')
axs[ax_idx].set_xlabel('Actual diff in loss', fontsize=fontsize)
axs[ax_idx].plot([-0.025, 0.025], [-0.025, 0.025], 'k-', alpha=0.2, zorder=1)
axs[ax_idx].set_title('t=0.1', fontsize=fontsize)
# plt.setp(axs[ax_idx].get_yticklabels(), visible=False)
def move_ax_right(ax, dist):
bbox = ax.get_position()
bbox.x0 += dist
bbox.x1 += dist
ax.set_position(bbox)
move_ax_right(axs[1], 0.05)
move_ax_right(axs[2], 0.06)
move_ax_right(axs[3], 0.07)
annotate_upper_left(axs[0], '(a)', (-50, 15))
annotate_upper_left(axs[1], '(b)', (-50, 15))
# plt.savefig(
# '../figs/fig-hinge.png',
# dpi=600, bbox_inches='tight')
```
|
github_jupyter
|
# Segregation Index Decomposition
## Table of Contents
* [Decomposition framework of the PySAL *segregation* module](#Decomposition-framework-of-the-PySAL-*segregation*-module)
* [Map of the composition of the Metropolitan area of Los Angeles](#Map-of-the-composition-of-the-Metropolitan-area-of-Los-Angeles)
* [Map of the composition of the Metropolitan area of New York](#Map-of-the-composition-of-the-Metropolitan-area-of-New-York)
* [Composition Approach (default)](#Composition-Approach-%28default%29)
* [Share Approach](#Share-Approach)
* [Dual Composition Approach](#Dual-Composition-Approach)
* [Inspecting a different index: Relative Concentration](#Inspecting-a-different-index:-Relative-Concentration)
This is a notebook that explains a step-by-step procedure to perform decomposition on comparative segregation measures.
First, let's import all the needed libraries.
```
import pandas as pd
import pickle
import numpy as np
import matplotlib.pyplot as plt
import segregation
from segregation.decomposition import DecomposeSegregation
```
In this example, we are going to use census data for which the user must download their own copy, following guidelines similar to those explained in https://github.com/spatialucr/geosnap/blob/master/examples/01_getting_started.ipynb, where you should download the full-count file for 2010. The zipped download will have a name that looks like `LTDB_Std_All_fullcount.zip`. After extracting the zipped content, the filepath of the data should look like this:
```
#filepath = '~/LTDB_Std_2010_fullcount.csv'
```
Then, we read the data:
```
df = pd.read_csv(filepath, encoding = "ISO-8859-1", sep = ",")
```
We are going to work with the non-Hispanic Black population variable (`nhblk10`) and the total population of each unit (`pop10`). So, let's read the map of all US census tracts and select some specific columns for the analysis:
```
# This file can be download here: https://drive.google.com/open?id=1gWF0OCn6xuR_WrEj7Ot2jY6KI2t6taIm
with open('data/tracts_US.pkl', 'rb') as input:
map_gpd = pickle.load(input)
map_gpd['INTGEOID10'] = pd.to_numeric(map_gpd["GEOID10"])
gdf_pre = map_gpd.merge(df, left_on = 'INTGEOID10', right_on = 'tractid')
gdf = gdf_pre[['GEOID10', 'geometry', 'pop10', 'nhblk10']]
```
In this notebook, we use US Metropolitan Statistical Areas (MSAs) (we also use the word 'cities' here to refer to them). So, let's read the correspondence table that relates each tract id to its corresponding metropolitan area...
```
# You can download this file here: https://drive.google.com/open?id=10HUUJSy9dkZS6m4vCVZ-8GiwH0EXqIau
with open('data/tract_metro_corresp.pkl', 'rb') as input:
tract_metro_corresp = pickle.load(input).drop_duplicates()
```
...and merge it with the previous data.
```
merged_gdf = gdf.merge(tract_metro_corresp, left_on = 'GEOID10', right_on = 'geoid10')
```
We now build the composition variable (`compo`), which is the frequency of the chosen group divided by the total population. Let's inspect the first rows of the data.
```
merged_gdf['compo'] = np.where(merged_gdf['pop10'] == 0, 0, merged_gdf['nhblk10'] / merged_gdf['pop10'])
merged_gdf.head()
```
Now, we choose two different metropolitan areas to compare their degree of segregation.
## Map of the composition of the Metropolitan area of Los Angeles
```
la_2010 = merged_gdf.loc[(merged_gdf.name == "Los Angeles-Long Beach-Anaheim, CA")]
la_2010.plot(column = 'compo', figsize = (10, 10), cmap = 'OrRd', legend = True)
plt.axis('off')
```
## Map of the composition of the Metropolitan area of New York
```
ny_2010 = merged_gdf.loc[(merged_gdf.name == 'New York-Newark-Jersey City, NY-NJ-PA')]
ny_2010.plot(column = 'compo', figsize = (20, 10), cmap = 'OrRd', legend = True)
plt.axis('off')
```
We first compare the Gini index of both cities. Let's import the `GiniSeg` class from `segregation`, fit both indexes, and check the difference in the point estimates.
```
from segregation.aspatial import GiniSeg
G_la = GiniSeg(la_2010, 'nhblk10', 'pop10')
G_ny = GiniSeg(ny_2010, 'nhblk10', 'pop10')
G_la.statistic - G_ny.statistic
```
Let's decompose this difference according to *Rey, S. et al., "Comparative Spatial Segregation Analytics", forthcoming*. You can check the options available for this decomposition below:
```
help(DecomposeSegregation)
```
## Composition Approach (default)
The difference of -0.10653 fitted previously can be decomposed into two components: the spatial component and the attribute component. Let's estimate both, respectively.
```
DS_composition = DecomposeSegregation(G_la, G_ny)
DS_composition.c_s
DS_composition.c_a
```
So, the first thing to notice is that the attribute component, i.e., the part given by a difference in the population structure (in this case, the composition), plays a more important role in the difference, since it has a higher absolute value.
The difference in the composition can be inspected in the plotting method with the type `cdfs`:
```
DS_composition.plot(plot_type = 'cdfs')
```
If your data is a GeoDataFrame, it is also possible to visualize the counterfactual compositions with the argument `plot_type = 'maps'`
The first and second contexts are Los Angeles and New York, respectively.
```
DS_composition.plot(plot_type = 'maps')
```
*Note that in all plotting methods, the title presents each component of the decomposition performed.*
## Share Approach
The share approach takes into consideration the share of each group in each city. Since this approach uses both the focus group share and the complementary group share to build the "counterfactual" total population of each unit, it is of interest to inspect all four of these CDFs.
*P.S.: The share is the population frequency of each group in each unit over the total population of that respective group.*
```
DS_share = DecomposeSegregation(G_la, G_ny, counterfactual_approach = 'share')
DS_share.plot(plot_type = 'cdfs')
```
We can see that the curves of the two contexts are closer to each other, which represents a drop in the importance of the population structure (attribute component) to -0.062. However, this component still outweighs the spatial component (-0.045) in terms of importance, judging by their absolute magnitudes.
```
DS_share.plot(plot_type = 'maps')
```
We can see that the counterfactual maps of the composition (outside of the main diagonal), in this case, are slightly different from the previous approach.
## Dual Composition Approach
The `dual_composition` approach is similar to the composition approach; however, it also uses the counterfactual composition of the CDF of the complementary group.
```
DS_dual = DecomposeSegregation(G_la, G_ny, counterfactual_approach = 'dual_composition')
DS_dual.plot(plot_type = 'cdfs')
```
It is possible to see that the component values are very similar, with only slight changes from the `composition` approach.
```
DS_dual.plot(plot_type = 'maps')
```
The counterfactual distributions are virtually the same as (but not equal to) those from the `composition` approach.
## Inspecting a different index: Relative Concentration
```
from segregation.spatial import RelativeConcentration
RCO_la = RelativeConcentration(la_2010, 'nhblk10', 'pop10')
RCO_ny = RelativeConcentration(ny_2010, 'nhblk10', 'pop10')
RCO_la.statistic - RCO_ny.statistic
RCO_DS_composition = DecomposeSegregation(RCO_la, RCO_ny)
RCO_DS_composition.c_s
RCO_DS_composition.c_a
```
It is possible to note that, in this case, the spatial component is playing a much more relevant role in the decomposition.
|
github_jupyter
|
# Given a budget of 30 million dollars (or less) and a genre, can I predict domestic gross using linear regression?
```
%matplotlib inline
import pickle
from pprint import pprint
import pandas as pd
import numpy as np
from dateutil.parser import parse
import math
# For plotting
import seaborn as sb
import matplotlib.pyplot as plt
# For linear regression
from patsy import dmatrices
from patsy import dmatrix
import statsmodels.api as sm
from sklearn.linear_model import LinearRegression
from sklearn.cross_validation import train_test_split
from sklearn import cross_validation
def perform_linear_regression(df, axes, title):
plot_data = df.sort('budget', ascending = True)
y, X = dmatrices('log_gross ~ budget', data = plot_data, return_type = 'dataframe')
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=10)
columns = ['budget']
#Patsy
model = sm.OLS(y_train,X_train)
fitted = model.fit()
r_squared = fitted.rsquared
pval = fitted.pvalues
params = fitted.params
#Plotting
axes.plot(X_train[columns], y_train, 'go')
#axes.plot(X_test[columns], y_test, 'yo')
#axes.plot(X_test[columns], fitted.predict(X_test), 'ro')
axes.plot(X[columns], fitted.predict(X), '-')
axes.set_title('{0} (Rsquared = {1:.2f}) p = {2:.2f} m = {3:.2f}'.format(title, r_squared, pval[1], np.exp(params[1])))
axes.set_xlabel('Budget')
axes.set_ylabel('ln(Gross)')
axes.set_ylim(0, 25)
return None
def perform_linear_regression1(df, axes, title):
plot_data = df.sort('budget', ascending = True)
y, X = dmatrices('log_gross ~ budget', data = plot_data, return_type = 'dataframe')
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=10)
columns = ['budget']
#Patsy
model = sm.OLS(y_train,X_train)
fitted = model.fit()
r_squared = fitted.rsquared
pval = fitted.pvalues
params = fitted.params
y_test = y_test
#Plotting
#axes.plot(X_train[columns], y_train, 'go')
axes.plot(X_test[columns], y_test, 'yo')
#axes.plot(X_test[columns], fitted.predict(X_test), 'ro')
axes.plot(X[columns], fitted.predict(X), '-')
axes.set_title('{0} (Rsquared = {1:.2f}) p = {2:.2f}'.format(title, r_squared, pval[1]))
axes.set_xlabel('Budget')
axes.set_ylabel('ln(Gross)')
axes.set_ylim(0, 25)
return None
def perform_linear_regression_all(df, axes, title):
plot_data = df.sort('budget', ascending = True)
y, X = dmatrices('log_gross ~ budget', data = plot_data, return_type = 'dataframe')
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=10)
columns = ['budget']
#Patsy
model = sm.OLS(y_train,X_train)
fitted = model.fit()
r_squared = fitted.rsquared
pval = fitted.pvalues
params = fitted.params
#Plotting
axes.plot(X_train[columns], y_train, 'go')
axes.plot(X_test[columns], y_test, 'yo')
axes.set_title('{0}'.format(title))
axes.set_xlabel('Budget')
axes.set_ylabel('ln(Gross)')
axes.set_ylim(0, 25)
return None
def create_genre_column(df, genre):
return df['genre'].apply(lambda x: 1 if genre in x else 0)
def get_genre_dataframes(df, genre):
columns = ['log_gross', 'gross', 'log_budget', 'budget', 'runtime']
df_out = df.copy()[df[genre] == 1][columns]
df_out['genre'] = genre
return df_out
```
### Load the movie dictionary
```
d = pickle.load(open('movie_dictionary.p'))
#Create a dataframe
df = pd.DataFrame.from_dict(d, orient = 'index')
```
### Clean the data and remove N/A's
Keep only movies with a positive runtime
```
df2 = df.copy()
df2 = df2[['gross', 'date', 'budget', 'genre', 'runtime']]
df2['gross'][df2.gross == 'N/A'] = np.nan
df2['budget'][df2.budget == 'N/A'] = np.nan
df2['date'][df2.date == 'N/A'] = np.nan
df2['genre'][df2.genre == 'N/A'] = np.nan
df2['genre'][df2.genre == 'Unknown'] = np.nan
df2['runtime'][df2.runtime == 'N/A'] = np.nan
df2 = df2[df2.date > parse('01-01-2005').date()]
df2 = df2[df2.runtime >= 0]
#df2 = df2[df2.budget <30]
df2 = df2.dropna()
```
For budget and gross, if data were missing, we could populate them with the mean across all movies (left commented out below, since rows with missing values were already dropped; a sketch of this alternative follows the cell)
```
#df2['budget'][df2['budget'].isnull()] = df2['budget'].mean()
#df2['gross'][df2['gross'].isnull()] = df2['gross'].mean()
df2['date'] = pd.to_datetime(df2['date'])
df2['year'] = df2['date'].apply(lambda x: x.year)
df2['gross'] = df2['gross'].astype(float)
df2['budget'] = df2['budget'].astype(float)
df2['runtime'] = df2['runtime'].astype(float)
df2['genre'] = df2['genre'].astype(str)
```
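A minimal sketch of that mean-imputation alternative with pandas, assuming the `df2` frame above with `budget` and `gross` already converted to floats (shown only as an alternative to `dropna()`):
```
# Hypothetical alternative to dropping rows: fill missing budget/gross with column means
df2['budget'] = df2['budget'].fillna(df2['budget'].mean())
df2['gross'] = df2['gross'].fillna(df2['gross'].mean())
```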
### Create some log columns
```
df2['log_runtime'] = df2['runtime'].apply(lambda x: np.log(x))
df2['log_budget'] = df2['budget'].apply(lambda x: np.log(x))
df2['log_gross'] = df2['gross'].apply(lambda x: np.log(x))
```
### How does the gross and budget data look? (Not normally distributed)
```
fig, axes = plt.subplots(nrows=1, ncols=2,figsize=(10,5))
df2['gross'].plot(ax = axes[0], kind = 'hist', title = 'Gross Histogram')
df2['budget'].plot(ax = axes[1], kind = 'hist', title = 'Budget Histogram')
```
### Looks more normally distributed now!
```
fig, axes = plt.subplots(nrows=1, ncols=2,figsize=(10,5))
df2['log_gross'].plot(ax = axes[0], kind = 'hist', title = 'log(Gross)')
df2['budget'].plot(ax = axes[1], kind = 'hist', title = 'Budget')
```
### Check top grossing genres
```
df2.groupby('genre')[['gross']].mean().sort('gross', ascending = True).plot(figsize = (10,10), kind = 'barh', legend = False, title = 'Mean Domestic Gross by Genre')
test = df2.groupby('genre')[['gross']].mean()
test['Count'] = df2.groupby('genre')[['gross']].count()
test.sort('gross', ascending = False)
```
### Check top genres by count
```
df2.groupby('genre')[['gross']].count().sort('gross', ascending = True).plot(figsize = (10,10), kind = 'barh', legend = False, title = 'Count of Movies by Genre')
```
### Create categories for top unique grossing genres
```
genre_list = ['Comedy', 'Drama', 'Horror', 'Romance', 'Thriller', 'Sci-Fi', 'Music', 'Action', 'Adventure', 'Historical', \
'Family', 'War', 'Sports', 'Crime', 'Animation']
for genre in genre_list:
df2[genre] = create_genre_column(df2, genre)
```
### Create a new column for genres that concatenates all the individual columns
```
df_comedy = get_genre_dataframes(df2, 'Comedy')
df_drama = get_genre_dataframes(df2, 'Drama')
df_horror = get_genre_dataframes(df2, 'Horror')
df_romance = get_genre_dataframes(df2, 'Romance')
df_thriller = get_genre_dataframes(df2, 'Thriller')
df_scifi = get_genre_dataframes(df2, 'Sci-Fi')
df_music = get_genre_dataframes(df2, 'Music')
df_action = get_genre_dataframes(df2, 'Action')
df_adventure = get_genre_dataframes(df2, 'Adventure')
df_historical = get_genre_dataframes(df2, 'Historical')
df_family = get_genre_dataframes(df2, 'Family')
df_war = get_genre_dataframes(df2, 'War')
df_sports = get_genre_dataframes(df2, 'Sports')
df_crime = get_genre_dataframes(df2, 'Crime')
df_animation = get_genre_dataframes(df2, 'Animation')
final_df = df_comedy.copy()
final_df = final_df.append(df_drama)
final_df = final_df.append(df_horror)
final_df = final_df.append(df_romance)
final_df = final_df.append(df_thriller)
final_df = final_df.append(df_scifi)
final_df = final_df.append(df_music)
final_df = final_df.append(df_action)
final_df = final_df.append(df_adventure)
final_df = final_df.append(df_historical)
final_df = final_df.append(df_family)
final_df = final_df.append(df_war)
final_df = final_df.append(df_sports)
final_df = final_df.append(df_crime)
final_df = final_df.append(df_animation)
final_df[['genre', 'budget', 'log_gross']].head()
final_df[['log_gross', 'genre']].groupby('genre').count().sort('log_gross', ascending = False).plot(kind = 'bar', legend = False, title = 'Counts of Movies by Genre')
temp = final_df[['gross', 'genre']].groupby('genre').mean()
temp['Count'] = final_df[['gross', 'genre']].groupby('genre').count()
temp.sort('gross',ascending = False)
temp = temp.rename(columns={'gross': 'Average Gross'})
temp.sort('Average Gross', ascending = False)
fig, axes = plt.subplots(nrows=4, ncols=4,figsize=(25,25))
perform_linear_regression_all(df_comedy, axes[0,0], 'Comedy')
perform_linear_regression_all(df_horror, axes[0,1], 'Horror')
perform_linear_regression_all(df_romance, axes[0,2], 'Romance')
perform_linear_regression_all(df_thriller, axes[0,3], 'Thriller')
perform_linear_regression_all(df_scifi, axes[1,0], 'Sci_Fi')
perform_linear_regression_all(df_music, axes[1,1], 'Music')
perform_linear_regression_all(df_action, axes[1,2], 'Action')
perform_linear_regression_all(df_adventure, axes[1,3], 'Adventure')
perform_linear_regression_all(df_historical, axes[2,0], 'Historical')
perform_linear_regression_all(df_family, axes[2,1], 'Family')
perform_linear_regression_all(df_war, axes[2,2], 'War')
perform_linear_regression_all(df_sports, axes[2,3], 'Sports')
perform_linear_regression_all(df_crime, axes[3,0], 'Crime')
perform_linear_regression_all(df_animation, axes[3,1], 'Animation')
perform_linear_regression_all(df_drama, axes[3,2], 'Drama')
fig, axes = plt.subplots(nrows=4, ncols=4,figsize=(25,25))
perform_linear_regression(df_comedy, axes[0,0], 'Comedy')
perform_linear_regression(df_horror, axes[0,1], 'Horror')
perform_linear_regression(df_romance, axes[0,2], 'Romance')
perform_linear_regression(df_thriller, axes[0,3], 'Thriller')
perform_linear_regression(df_scifi, axes[1,0], 'Sci_Fi')
perform_linear_regression(df_music, axes[1,1], 'Music')
perform_linear_regression(df_action, axes[1,2], 'Action')
perform_linear_regression(df_adventure, axes[1,3], 'Adventure')
perform_linear_regression(df_historical, axes[2,0], 'Historical')
perform_linear_regression(df_family, axes[2,1], 'Family')
perform_linear_regression(df_war, axes[2,2], 'War')
perform_linear_regression(df_sports, axes[2,3], 'Sports')
perform_linear_regression(df_crime, axes[3,0], 'Crime')
perform_linear_regression(df_animation, axes[3,1], 'Animation')
perform_linear_regression(df_drama, axes[3,2], 'Drama')
```
### Linear Regression
```
fig, axes = plt.subplots(nrows=4, ncols=4,figsize=(25,25))
perform_linear_regression1(df_comedy, axes[0,0], 'Comedy')
perform_linear_regression1(df_horror, axes[0,1], 'Horror')
perform_linear_regression1(df_romance, axes[0,2], 'Romance')
perform_linear_regression1(df_thriller, axes[0,3], 'Thriller')
perform_linear_regression1(df_scifi, axes[1,0], 'Sci-Fi')
perform_linear_regression1(df_music, axes[1,1], 'Music')
perform_linear_regression1(df_action, axes[1,2], 'Action')
perform_linear_regression1(df_adventure, axes[1,3], 'Adventure')
perform_linear_regression1(df_historical, axes[2,0], 'Historical')
perform_linear_regression1(df_family, axes[2,1], 'Family')
perform_linear_regression1(df_war, axes[2,2], 'War')
perform_linear_regression1(df_sports, axes[2,3], 'Sports')
perform_linear_regression1(df_crime, axes[3,0], 'Crime')
perform_linear_regression1(df_animation, axes[3,1], 'Animation')
perform_linear_regression1(df_drama, axes[3,2], 'Drama')
```
|
github_jupyter
|
## Import a model from ONNX and run using PyTorch
We demonstrate how to import a model from ONNX and convert it to PyTorch.
#### Imports
```
import os
import operator as op
import warnings; warnings.simplefilter(action='ignore', category=FutureWarning)
import numpy as np
import torch
from torch import nn
from torch.nn import functional as F
from torch.autograd import Variable
import onnx
import gamma
from gamma import convert, protobuf, utils
```
#### 1: Download the model
```
fpath = utils.get_file('https://s3.amazonaws.com/download.onnx/models/squeezenet.tar.gz')
onnx_model = onnx.load(os.path.join(fpath, 'squeezenet/model.onnx'))
inputs = [i.name for i in onnx_model.graph.input if
i.name not in {x.name for x in onnx_model.graph.initializer}]
outputs = [o.name for o in onnx_model.graph.output]
```
#### 2: Import into Gamma
```
graph = convert.from_onnx(onnx_model)
constants = {k for k, (v, i) in graph.items() if v['type'] == 'Constant'}
utils.draw(gamma.strip(graph, constants))
```
#### 3: Convert to PyTorch
```
make_node = gamma.make_node_attr
def torch_padding(params):
padding = params.get('pads', [0,0,0,0])
assert (padding[0] == padding[2]) and (padding[1] == padding[3])
return (padding[0], padding[1])
torch_ops = {
'Add': lambda params: op.add,
'Concat': lambda params: (lambda *xs: torch.cat(xs, dim=params['axis'])),
'Constant': lambda params: nn.Parameter(torch.FloatTensor(params['value'])),
'Dropout': lambda params: nn.Dropout(params.get('ratio', 0.5)).eval(), #.eval() sets to inference mode. where should this logic live?
'GlobalAveragePool': lambda params: nn.AdaptiveAvgPool2d(1),
'MaxPool': lambda params: nn.MaxPool2d(params['kernel_shape'], stride=params.get('strides', [1,1]),
padding=torch_padding(params),
dilation=params.get('dilations', [1,1])),
'Mul': lambda params: op.mul,
'Relu': lambda params: nn.ReLU(),
'Softmax': lambda params: nn.Softmax(dim=params.get('axis', 1)),
}
def torch_op(node, inputs):
if node['type'] in torch_ops:
op = torch_ops[node['type']](node['params'])
return make_node('Torch_op', {'op': op}, inputs)
return (node, inputs)
def torch_conv_node(params, x, w, b):
ko, ki, kh, kw = w.shape
group = params.get('group', 1)
ki *= group
conv = nn.Conv2d(ki, ko, (kh,kw),
stride=tuple(params.get('strides', [1,1])),
padding=torch_padding(params),
dilation=tuple(params.get('dilations', [1,1])),
groups=group)
conv.weight = nn.Parameter(torch.FloatTensor(w))
conv.bias = nn.Parameter(torch.FloatTensor(b))
return make_node('Torch_op', {'op': conv}, [x])
def convert_to_torch(graph):
v, _ = gamma.var, gamma.Wildcard
conv_pattern = {
v('conv'): make_node('Conv', v('params'), [v('x'), v('w'), v('b')]),
v('w'): make_node('Constant', {'value': v('w_val')}, []),
v('b'): make_node('Constant', {'value': v('b_val')}, [])
}
matches = gamma.search(conv_pattern, graph)
g = gamma.union(graph, {m[v('conv')]:
torch_conv_node(m[v('params')], m[v('x')], m[v('w_val')], m[v('b_val')])
for m in matches})
remove = {m[x] for m in matches for x in (v('w'), v('b'))}
g = {k: torch_op(v, i) for k, (v, i) in g.items() if k not in remove}
return g
def torch_graph(graph):
return gamma.FuncCache(lambda k: graph[k][0]['params']['op'](*[tg[x] for x in graph[k][1]]))
g = convert_to_torch(graph)
utils.draw(g)
```
#### 4: Load test example and check PyTorch output
```
def load_onnx_tensor(fname):
tensor = onnx.TensorProto()
with open(fname, 'rb') as f:
tensor.ParseFromString(f.read())
return protobuf.unwrap(tensor)
input_0 = load_onnx_tensor(os.path.join(fpath, 'squeezenet/test_data_set_0/input_0.pb'))
output_0 = load_onnx_tensor(os.path.join(fpath, 'squeezenet/test_data_set_0/output_0.pb'))
tg = torch_graph(g)
tg[inputs[0]] = Variable(torch.Tensor(input_0))
torch_outputs = tg[outputs[0]]
np.testing.assert_almost_equal(output_0, torch_outputs.data.numpy(), decimal=5)
print('Success!')
```
|
github_jupyter
|
# Building Simple Neural Networks
In this section you will:
* Import the MNIST dataset from Keras.
* Format the data so it can be used by a Sequential model with Dense layers.
* Split the dataset into training and test sections.
* Build a simple neural network using Keras Sequential model and Dense layers.
* Train that model.
* Evaluate the performance of that model.
While we are accomplishing these tasks, we will also stop to discuss important concepts:
* Splitting data into test and training sets.
* Training rounds, batch size, and epochs.
* Validation data vs test data.
* Examining results.
## Importing and Formatting the Data
Keras has several built-in datasets that are already well formatted and properly cleaned. These datasets are an invaluable learning resource. Collecting and processing datasets is a serious undertaking, and deep learning tactics perform poorly without large high quality datasets. We will be leveraging the [Keras built in datasets](https://keras.io/datasets/) extensively, and you may wish to explore them further on your own.
In this exercise, we will be focused on the MNIST dataset, which is a set of 70,000 images of handwritten digits each labeled with the value of the written digit. Additionally, the images have been split into training and test sets.
```
# For drawing the MNIST digits as well as plots to help us evaluate performance we
# will make extensive use of matplotlib
from matplotlib import pyplot as plt
# All of the Keras datasets are in keras.datasets
from tensorflow.keras.datasets import mnist
# Keras has already split the data into training and test data
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
# Training images is a list of 60,000 2D lists.
# Each 2D list is 28 by 28—the size of the MNIST pixel data.
# Each item in the 2D array is an integer from 0 to 255 representing its grayscale
# intensity where 0 means white, 255 means black.
print(len(training_images), training_images[0].shape)
# training_labels are a value between 0 and 9 indicating which digit is represented.
# The first item in the training data is a 5
print(len(training_labels), training_labels[0])
# Lets visualize the first 100 images from the dataset
for i in range(100):
ax = plt.subplot(10, 10, i+1)
ax.axis('off')
plt.imshow(training_images[i], cmap='Greys')
```
## Problems With This Data
There are (at least) two problems with this data as it is currently formatted, what do you think they are?
1. The input data is formatted as a 2D array, but our deep neural network needs the data as a 1D vector.
    * This is because of how deep neural networks are constructed; it is simply not possible to send anything but a vector as input.
* These vectors can be/represent anything, but from the computer's perspective they must be a 1D vector.
2. Our labels are numbers, but we're not performing regression. We need to use a 1-hot vector encoding for our labels.
* This is important because if we use the number values we would be training our network to think of these values as continuous.
* If the digit is supposed to be a 2, guessing 1 and guessing 9 are both equally wrong.
* Training the network with numbers would imply that a prediction of 1 would be "less wrong" than a prediction of 9, when in fact both are equally wrong.
### Fixing the data format
Luckily, this is a common problem and we can use two methods to fix the data: `numpy.reshape` and `keras.utils.to_categorical`. This is necessary because of how deep neural networks process data; there is no way to send 2D data to a `Sequential` model made of `Dense` layers.
```
from tensorflow.keras.utils import to_categorical
# Preparing the dataset
# Setup train and test splits
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
# 28 x 28 = 784, because that's the dimensions of the MNIST data.
image_size = 784
# Reshaping the training_images and test_images to lists of vectors with length 784
# instead of lists of 2D arrays. Same for the test_images
training_data = training_images.reshape(training_images.shape[0], image_size)
test_data = test_images.reshape(test_images.shape[0], image_size)
# [
# [1,2,3]
# [4,5,6]
# ]
# => [1,2,3,4,5,6]
# Just showing the changes...
print("training data: ", training_images.shape, " ==> ", training_data.shape)
print("test data: ", test_images.shape, " ==> ", test_data.shape)
# Create 1-hot encoded vectors using to_categorical
num_classes = 10 # Because it's how many digits we have (0-9)
# to_categorical takes a list of integers (our labels) and makes them into 1-hot vectors
training_labels = to_categorical(training_labels, num_classes)
test_labels = to_categorical(test_labels, num_classes)
# Recall that before this transformation, training_labels[0] was the value 5. Look now:
print(training_labels[0])
```
## Building a Deep Neural Network
Now that we've prepared our data, it's time to build a simple neural network. To start, we'll make a deep network with 3 layers—the input layer, a single hidden layer, and the output layer. In a deep neural network all the layers are 1-dimensional. The input layer has to match the shape of our input data, meaning it must have 784 nodes. Similarly, the output layer must match our labels, meaning it must have 10 nodes. We can choose the number of nodes in our hidden layer; I've chosen 32 arbitrarily.
```
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
# Sequential models are a series of layers applied linearly.
model = Sequential()
# The first layer must specify its input_shape.
# This is how the first two layers are added, the input layer and the hidden layer.
model.add(Dense(units=32, activation='sigmoid', input_shape=(image_size,)))
# This is how the output layer gets added, the 'softmax' activation function ensures
# that the sum of the values in the output nodes is 1. Softmax is very
# common in classification networks.
model.add(Dense(units=num_classes, activation='softmax'))
# This function provides useful text data for our network
model.summary()
```
## Compiling and Training a Model
Our model must be compiled and trained before it can make useful predictions. Models are trained with the training data and training labels. During this process Keras will use an optimizer, a loss function, and metrics of our choosing to repeatedly make predictions and receive corrections. The loss function is used to train the model; the metrics are only used for human evaluation of the model during and after training.
Training happens in a series of epochs, which are divided into a series of rounds. Each round the network will receive `batch_size` samples from the training data, make predictions, and receive one correction based on the errors in those predictions. In a single epoch, the model will look at every item in the training set __exactly once__, which means individual data points are sampled from the training data without replacement during each round of each epoch.
During training, the training data itself will be broken into two parts according to the `validation_split` parameter. The proportion that you specify will be left out of the training process and used to evaluate the accuracy of the model. This is done to preserve the test data, while still having a set of data left out in order to test against — and hopefully prevent — overfitting. At the end of each epoch, predictions will be made for all the items in the validation set, but those predictions won't adjust the weights in the model. If we add an early-stopping callback, training can also stop early once accuracy on the validation set stops improving, even if accuracy on the training set is still improving (a sketch follows the next cell).
```
# sgd stands for stochastic gradient descent.
# categorical_crossentropy is a common loss function used for categorical classification.
# accuracy is the percent of predictions that were correct.
model.compile(optimizer="sgd", loss='categorical_crossentropy', metrics=['accuracy'])
# The network will make predictions for 128 flattened images per correction.
# It will make a prediction on each item in the training set 5 times (5 epochs)
# And 10% of the data will be used as validation data.
history = model.fit(training_data, training_labels, batch_size=128, epochs=5, verbose=True, validation_split=.1)
```
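If you do want that optional early-stopping behavior, one way (a sketch, not part of the original training cell) is to pass Keras's `EarlyStopping` callback to `fit`:
```
from tensorflow.keras.callbacks import EarlyStopping

# Stop training when validation accuracy has not improved for 2 consecutive epochs,
# and restore the best weights seen so far.
early_stop = EarlyStopping(monitor='val_accuracy', patience=2, restore_best_weights=True)

history = model.fit(training_data, training_labels, batch_size=128, epochs=50,
                    verbose=True, validation_split=.1, callbacks=[early_stop])
```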
## Evaluating Our Model
Now that we've trained our model, we want to evaluate its performance. We're using the "test data" here, although in a serious experiment we would likely not have done nearly enough work to warrant the application of the test data. Instead, we would rely on the validation metrics as a proxy for our test results until we had models that we believed would perform well.
Once we evaluate our model on the test data, any subsequent changes we make would be based on what we learned from the test data. Meaning, we would have functionally incorporated information from the test set into our training procedure which could bias and even invalidate the results of our research. In a non-research setting the real test might be more like putting this feature into production.
Nevertheless, it is always wise to create a test set that is not used as an evaluative measure until the very end of an experimental lifecycle. That is, once you have a model that you believe __should__ generalize well to unseen data, you should evaluate it on the test data to test that hypothesis. If your model performs poorly on the test data, you'll have to reevaluate your model, training data, and procedure.
```
loss, accuracy = model.evaluate(test_data, test_labels, verbose=True)
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['training', 'validation'], loc='best')
plt.show()
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['training', 'validation'], loc='best')
plt.show()
print(f'Test loss: {loss:.3}')
print(f'Test accuracy: {accuracy:.3}')
```
## How Did Our Network Do?
* Why do we only have one value for test loss and test accuracy, but a chart over time for training and validation loss and accuracy?
* Our model was more accurate on the validation data than it was on the training data.
* Is this okay? Why or why not?
* What if our model had been more accurate on the training data than the validation data?
* Did our model get better during each epoch?
* If not: why might that be the case?
* If so: should we always expect this, where each epoch strictly improves training/validation accuracy/loss?
### Answers:
* Why do we only have one value for test loss and test accuracy, but a chart over time for training and validation loss and accuracy?
* __Because we only evaluate the test data once at the very end, but we evaluate training and validation scores once per epoch.__
* Our model was more accurate on the validation data than it was on the training data.
* Is this okay? Why or why not?
* __Yes, this is okay, and even good. When our validation scores are better than our training scores, it's a sign that we are probably not overfitting__
* What if our model had been more accurate on the training data than the validation data?
* __This would concern us, because it would suggest we are probably overfitting.__
* Did our model get better during each epoch?
* If not: why might that be the case?
* __Optimizers rely on the gradient to update our weights, but the 'function' we are optimizing (our neural network) is not a ground truth. A single batch, and even a complete epoch, may very well result in an adjustment that hurts overall performance.__
* If so: should we always expect this, where each epoch strictly improves training/validation accuracy/loss?
* __Not at all, see the above answer.__
## Look at Specific Results
Often, it can be illuminating to view specific results, both when the model is correct and when the model is wrong. Let's look at the images and our model's predictions for the first 16 samples in the test set.
```
from numpy import argmax
# Predicting once, then we can use these repeatedly in the next cell without recomputing the predictions.
predictions = model.predict(test_data)
# For pagination & style in second cell
page = 0
fontdict = {'color': 'black'}
# Repeatedly running this cell will page through the predictions
for i in range(16):
ax = plt.subplot(4, 4, i+1)
ax.axis('off')
plt.imshow(test_images[i + page], cmap='Greys')
prediction = argmax(predictions[i + page])
true_value = argmax(test_labels[i + page])
fontdict['color'] = 'black' if prediction == true_value else 'red'
plt.title("{}, {}".format(prediction, true_value), fontdict=fontdict)
page += 16
plt.tight_layout()
plt.show()
```
## Will A Different Network Perform Better?
Given what you know so far, use Keras to build and train another sequential model that you think will perform __better__ than the network we just built and trained. Then evaluate that model and compare its performance to our model. Remember to look at accuracy and loss for training and validation data over time, as well as test accuracy and loss.
```
# Your code here...
```
## Bonus questions: Go Further
Here are some questions to help you further explore the concepts in this lab.
* Does the original model, or your model, fail more often on a particular digit?
    * Write some code that charts the accuracy of our model's predictions on the test data by digit (one possible sketch appears after this list).
* Is there a clear pattern? If so, speculate about why that could be...
* Training for longer typically improves performance, up to a point.
* For a simple model, try training it for 20 epochs, and 50 epochs.
* Look at the charts of accuracy and loss over time, have you reached diminishing returns after 20 epochs? after 50?
* More complex networks require more training time, but can outperform simpler networks.
* Build a more complex model, with at least 3 hidden layers.
* Like before, train it for 5, 20, and 50 epochs.
* Evaluate the performance of the model against the simple model, and compare the total amount of time it took to train.
* Was the extra complexity worth the additional training time?
* Do you think your complex model would get even better with more time?
* A little perspective on this last point: Some models train for [__weeks to months__](https://openai.com/blog/ai-and-compute/).
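One possible sketch for the first bonus question, charting test accuracy by digit; it assumes the `predictions` array and the one-hot `test_labels` from the cells above:
```
import numpy as np

predicted_digits = np.argmax(predictions, axis=1)
true_digits = np.argmax(test_labels, axis=1)

# Accuracy for each digit 0-9: fraction of test images of that digit predicted correctly
per_digit_accuracy = [np.mean(predicted_digits[true_digits == d] == d) for d in range(10)]

plt.bar(range(10), per_digit_accuracy)
plt.xlabel('digit')
plt.ylabel('test accuracy')
plt.title('Test accuracy by digit')
plt.show()
```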
|
github_jupyter
|
```
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
import math
%matplotlib inline
```
# Volunteer 1
## 3M Littmann Data
```
image = Image.open('3Ms.bmp')
image
x = image.size[0]
y = image.size[1]
print(x)
print(y)
matrix = []
points = []
integrated_density = 0
for i in range(x):
matrix.append([])
for j in range(y):
matrix[i].append(image.getpixel((i,j)))
#integrated_density += image.getpixel((i,j))[1]
#points.append(image.getpixel((i,j))[1])
```
### Extract Red Line Position
```
redMax = 0
xStore = 0
yStore = 0
for xAxis in range(x):
for yAxis in range(y):
currentPoint = matrix[xAxis][yAxis]
if currentPoint[0] ==255 and currentPoint[1] < 10 and currentPoint[2] < 10:
redMax = currentPoint[0]
xStore = xAxis
yStore = yAxis
print(xStore, yStore)
```
- The redline position is located at y = 252.
### Extract Blue Points
```
redline_pos = 51
absMax = 0
littmannArr = []
points_vertical = []
theOne = 0
for xAxis in range(x):
for yAxis in range(y):
currentPoint = matrix[xAxis][yAxis]
# Pickup Blue points
if currentPoint[2] == 255 and currentPoint[0] < 220 and currentPoint[1] < 220:
points_vertical.append(yAxis)
#print(points_vertical)
# Choose the largest amplitude
for item in points_vertical:
if abs(item-redline_pos) > absMax:
absMax = abs(item-redline_pos)
theOne = item
littmannArr.append((theOne-redline_pos)*800)
absMax = 0
theOne = 0
points_vertical = []
fig = plt.figure()
s = fig.add_subplot(111)
s.plot(littmannArr, linewidth=0.6, color='blue')
```
# AusculPi Data
```
pathBase = 'C://Users//triti//OneDrive//Dowrun//Text//Manuscripts//Data//YangChuan//AusculPi//'
filename = 'Numpy_Array_File_2020-06-21_07_54_16.npy'
line = pathBase + filename
arr = np.load(line)
arr
arr.shape
fig = plt.figure()
s = fig.add_subplot(111)
s.plot(arr[0], linewidth=1.0, color='black')
fig = plt.figure()
s = fig.add_subplot(111)
s.plot(arr[:,100], linewidth=1.0, color='black')
start = 1830
end = 2350
start_adj = int(start * 2583 / 3000)
end_adj = int(end * 2583 / 3000)
fig = plt.figure()
s = fig.add_subplot(111)
s.plot(arr[start_adj:end_adj,460], linewidth=0.6, color='black')
fig = plt.figure()
s = fig.add_subplot(111)
s.plot(littmannArr, linewidth=0.6, color='blue')
asculArr = arr[start_adj:end_adj,460]
```
## Preprocess the two arrays
```
asculArr_processed = []
littmannArr_processed = []
for item in asculArr:
asculArr_processed.append(abs(item))
for item in littmannArr:
littmannArr_processed.append(abs(item))
fig = plt.figure()
s = fig.add_subplot(111)
s.plot(asculArr_processed, linewidth=0.6, color='black')
fig = plt.figure()
s = fig.add_subplot(111)
s.plot(littmannArr_processed, linewidth=0.6, color='blue')
fig = plt.figure()
s = fig.add_subplot(111)
s.plot(asculArr_processed[175:375], linewidth=1.0, color='black')
fig = plt.figure()
s = fig.add_subplot(111)
s.plot(littmannArr_processed[:200], linewidth=1.0, color='blue')
len(littmannArr)
len(asculArr)
```
### Correlation Coefficient
```
stats.pearsonr(asculArr_processed, littmannArr_processed)
stats.pearsonr(asculArr_processed[176:336], littmannArr_processed[:160])
fig = plt.figure()
s = fig.add_subplot(111)
s.plot(arr[start_adj:end_adj,460][176:336], linewidth=0.6, color='black')
fig = plt.figure()
s = fig.add_subplot(111)
s.plot(littmannArr[:160], linewidth=0.6, color='blue')
```
### Goodness of Fit
```
stats.chisquare(asculArr_processed[174:334], littmannArr_processed[:160])
def cosCalculate(a, b):
    # Cosine similarity between two equal-length sequences:
    # sum(a_i * b_i) / (sqrt(sum(a_i^2)) * sqrt(sum(b_i^2)))
    l = len(a)
    sumXY = 0
    sumXSquare = 0
    sumYSquare = 0
    for i in range(l):
        sumXY = sumXY + a[i]*b[i]
        sumXSquare = sumXSquare + a[i]**2
        sumYSquare = sumYSquare + b[i]**2
    cosValue = sumXY / (math.sqrt(sumXSquare) * math.sqrt(sumYSquare))
    return cosValue
cosCalculate(asculArr_processed[175:335], littmannArr_processed[:160])
```
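For reference, the quantity computed by `cosCalculate` above is the cosine similarity between the two rectified signals,
\begin{equation}
\cos\theta=\frac{\sum_i a_i b_i}{\sqrt{\sum_i a_i^2}\,\sqrt{\sum_i b_i^2}},
\end{equation}
which equals 1 when the two sequences are proportional to each other and moves toward 0 when they are unrelated.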
|
github_jupyter
|
```
%matplotlib inline
from pyvista import set_plot_theme
set_plot_theme('document')
```
Volumetric Analysis
===================
Calculate mass properties such as the volume or area of datasets
```
# sphinx_gallery_thumbnail_number = 4
import numpy as np
from pyvista import examples
```
Computing mass properties such as the volume or area of datasets in PyVista is quite easy using the `pyvista.DataSetFilters.compute_cell_sizes` filter and the `pyvista.DataSet.volume` property, available on all PyVista meshes.
Let's get started with a simple gridded mesh:
```
# Load a simple example mesh
dataset = examples.load_uniform()
dataset.set_active_scalars("Spatial Cell Data")
```
We can then calculate the volume of every cell in the mesh using the `.compute_cell_sizes` filter, which by default will add arrays for the volume and area to the cell data of the mesh.
```
# Compute volumes and areas
sized = dataset.compute_cell_sizes()
# Grab volumes for all cells in the mesh
cell_volumes = sized.cell_arrays["Volume"]
```
We can also compute the total volume of the mesh using the `.volume`
property:
```
# Compute the total volume of the mesh
volume = dataset.volume
```
Okay, awesome! But what if we have a dataset that we threshold, leaving two volumetric bodies in one dataset? Take this for example:
```
threshed = dataset.threshold_percent([0.15, 0.50], invert=True)
threshed.plot(show_grid=True, cpos=[-2, 5, 3])
```
We could then assign a classification array for the two bodies, compute the cell sizes, and then extract the volume of each body. Note that there is a simpler implementation of this below in the Splitting Volumes section.
```
# Create a classifying array to ID each body
rng = dataset.get_data_range()
cval = ((rng[1] - rng[0]) * 0.20) + rng[0]
classifier = threshed.cell_arrays["Spatial Cell Data"] > cval
# Compute cell volumes
sizes = threshed.compute_cell_sizes()
volumes = sizes.cell_arrays["Volume"]
# Split volumes based on classifier and get volumes!
idx = np.argwhere(classifier)
hvol = np.sum(volumes[idx])
idx = np.argwhere(~classifier)
lvol = np.sum(volumes[idx])
print(f"Low grade volume: {lvol}")
print(f"High grade volume: {hvol}")
print(f"Original volume: {dataset.volume}")
```
Or better yet, you could simply extract the largest volume from your thresholded dataset by passing `largest=True` to the `connectivity` filter or by using the `extract_largest` filter (both are equivalent).
```
# Grab the largest connected volume present
largest = threshed.connectivity(largest=True)
# or: largest = threshed.extract_largest()
# Get volume as numeric value
large_volume = largest.volume
# Display it!
largest.plot(show_grid=True, cpos=[-2, 5, 3])
```
------------------------------------------------------------------------
Splitting Volumes
=================
What if, instead, we wanted to split all the different connected bodies/volumes in a dataset like the one above? We could use the `pyvista.DataSetFilters.split_bodies` filter to extract all the different connected volumes in a dataset into blocks in a `pyvista.MultiBlock` dataset. For example, let's split the thresholded volume in the example above:
```
# Load a simple example mesh
dataset = examples.load_uniform()
dataset.set_active_scalars("Spatial Cell Data")
threshed = dataset.threshold_percent([0.15, 0.50], invert=True)
bodies = threshed.split_bodies()
for i, body in enumerate(bodies):
print(f"Body {i} volume: {body.volume:.3f}")
bodies.plot(show_grid=True, multi_colors=True, cpos=[-2, 5, 3])
```
------------------------------------------------------------------------
A Real Dataset
==============
Here is a realistic training dataset of fluvial channels in the
subsurface. This will threshold the channels from the dataset then
separate each significantly large body and compute the volumes for each!
Load up the data and threshold the channels:
```
data = examples.load_channels()
channels = data.threshold([0.9, 1.1])
```
Now extract all the different bodies and compute their volumes:
```
bodies = channels.split_bodies()
# Now remove all bodies with a small volume
for key in bodies.keys():
b = bodies[key]
vol = b.volume
if vol < 1000.0:
del bodies[key]
continue
# Now lets add a volume array to all blocks
b.cell_arrays["TOTAL VOLUME"] = np.full(b.n_cells, vol)
```
Print out the volumes for each body:
```
for i, body in enumerate(bodies):
print(f"Body {i:02d} volume: {body.volume:.3f}")
```
And visualize all the different volumes:
```
bodies.plot(scalars="TOTAL VOLUME", cmap="viridis", show_grid=True)
```
|
github_jupyter
|
# Cross-correlation example
Cross-correlation is defined by
\begin{equation}
R_{xy}(\tau)=\int_{-\infty}^{\infty}x(t)y(t+\tau)\mathrm{d} t
\tag{1}
\end{equation}
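For sampled signals, the integral in equation (1) is approximated by a sum over the samples,
\begin{equation}
R_{xy}[k]\approx\sum_{n} x[n]\,y[n+k],
\end{equation}
which, up to the sign convention used for the lag, is what `np.correlate` estimates in the code below.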
Consider a ship sailing through poorly known waters. To navigate safely, the ship needs some notion of the depth of the water column over which it sails. It is hard to inspect the water column visually, since light does not propagate well in water. However, we can use sound waves for this.

Thus, the ship is equipped with a sound source and a hydrophone. The source emits a signal into the water, $s(t)$, which propagates down to the bottom and is then reflected. The hydrophone, close to the sound source, picks up the direct sound, $s(t)$, and the reflection, a delayed and attenuated version of the emitted signal, $r_c s(t-\Delta)$. However, both signals are corrupted by noise, especially the reflection. Thus, the measured signals are:
\begin{equation}
x(t)=s(t) + n_x(t)
\end{equation}
\begin{equation}
y(t)=s(t) + r_c s(t-\Delta) + n_y(t)
\end{equation}
Let's start by looking at these signals.
```
# import the required libraries
import numpy as np # arrays
import matplotlib.pyplot as plt # plots
from scipy.stats import norm
from scipy import signal
plt.rcParams.update({'font.size': 14})
import IPython.display as ipd # to play signals
import sounddevice as sd
# Sampling frequency and time vector
fs = 1000
time = np.arange(0, 2, 1/fs)
Delta = 0.25
r_c = 0.5
# initialize the random number generator
#np.random.seed(0)
# signal s(t)
st = np.random.normal(loc = 0, scale = 1, size = len(time))
# Background noise
n_x = np.random.normal(loc = 0, scale = 0.1, size = len(time))
n_y = np.random.normal(loc = 0, scale = 1, size = len(time))
# Signals x(t) and y(t)
xt = st + n_x # The signal is completely contaminated by noise
yt = np.zeros(len(time)) + st + n_y # Initialize - the signal is completely contaminated by noise
yt[int(Delta*fs):] = yt[int(Delta*fs):] + r_c * st[:len(time)-int(Delta*fs)] # From a certain instant onward we have the reflection
# plot signal
plt.figure(figsize = (10, 6))
plt.subplot(2,1,1)
plt.plot(time, xt, linewidth = 1, color='b', alpha = 0.7)
plt.grid(linestyle = '--', which='both')
plt.title('Emitted signal contaminated by noise')
plt.ylabel(r'$x(t)$')
plt.xlabel('Time [s]')
plt.xlim((0, time[-1]))
plt.ylim((-5, 5))
plt.subplot(2,1,2)
plt.plot(time, yt, linewidth = 1, color='b', alpha = 0.7)
plt.grid(linestyle = '--', which='both')
plt.title('Recorded signal contaminated by noise')
plt.ylabel(r'$y(t)$')
plt.xlabel('Time [s]')
plt.xlim((0, time[-1]))
plt.ylim((-5, 5))
plt.tight_layout()
```
# How can we estimate the distance to the bottom?
Let's think about measuring the auto-correlation of $y(t)$ and the cross-correlation between $x(t)$ and $y(t)$. Try using the concept of expectation operators ($E[\cdot]$) to get some intuition about this. With them, you can show that
\begin{equation}
R_{yy}(\tau)=(1+r_{c}^{2})R_{ss}(\tau) + R_{n_y n_y}(\tau) + r_c R_{ss}(\tau-\Delta) + r_c R_{ss}(\tau+\Delta)
\end{equation}
\begin{equation}
R_{xy}(\tau)=R_{ss}(\tau) + r_c R_{ss}(\tau-\Delta)
\end{equation}
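As a quick sketch of the second result (assuming the noise terms have zero mean and are uncorrelated with $s(t)$ and with each other), expand the product inside the expectation:
\begin{equation}
R_{xy}(\tau)=E\left[\left(s(t)+n_x(t)\right)\left(s(t+\tau)+r_c\, s(t+\tau-\Delta)+n_y(t+\tau)\right)\right]
\end{equation}
\begin{equation}
=R_{ss}(\tau)+r_c R_{ss}(\tau-\Delta)+R_{s n_y}(\tau)+R_{n_x s}(\tau)+r_c R_{n_x s}(\tau-\Delta)+R_{n_x n_y}(\tau)
\end{equation}
The four terms involving noise are approximately zero, leaving only the two signal terms; the secondary peak of $R_{xy}$ at $\tau=\Delta$ is what reveals the travel time of the reflection.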
```
# Compute the auto-correlation and the cross-correlation
Ryy = np.correlate(yt, yt, mode = 'same')
Rxy = np.correlate(xt, yt, mode = 'same')
tau = np.linspace(-0.5*len(Rxy)/fs, 0.5*len(Rxy)/fs, len(Rxy))
#tau = np.linspace(0, len(Rxy)/fs, len(Rxy))
# plot the auto-correlation
plt.figure(figsize = (10, 3))
plt.plot(tau, Ryy/len(Ryy), linewidth = 1, color='b')
plt.grid(linestyle = '--', which='both')
plt.ylabel(r'$R_{yy}(\tau)$')
#plt.xlim((tau[0], tau[-1]))
plt.xlabel(r'$\tau$ [s]')
plt.tight_layout()
# plot the cross-correlation
plt.figure(figsize = (10, 3))
plt.plot(-tau,Rxy/len(Ryy), linewidth = 1, color='b')
plt.grid(linestyle = '--', which='both')
plt.ylabel(r'$R_{xy}(\tau)$')
#plt.xlim((tau[0], tau[-1]))
plt.xlabel(r'$\tau$ [s]')
plt.tight_layout()
```
# Knowing the speed of sound in water...
We can compute the distance, using $c_{a} = 1522$ [m/s] (the factor of $1/2$ in the code accounts for the two-way travel of the sound).
```
find_peak = np.where(np.logical_and(Rxy/len(Ryy) >= 0.2, Rxy/len(Ryy) <= 0.5))
lag = -tau[find_peak[0][0]]
distance = 0.5*1522*lag
print('The detected delay is: {:.2f} [s]'.format(lag))
print('The distance to the bottom is: {:.2f} [m]'.format(distance))
```
|
github_jupyter
|
```
#Python Basics
#Functions in Python
#Functions take some inputs, then they produce some outputs
#The functions are just a piece of code that you can reuse
#You can implement your functions, but in many cases, people reuse other people's functions
#in this case, it is important to know how the function works and how we can import functions
#Python has many built-in functions
#For example, the function "len" computes the length of a list
list1=[1,7,7.8, 9,3.9, 2, 8, 5.01, 6,2, 9, 11, 46, 91, 58, 2]
n=len(list1)
print("The length of list1 is ",n, "elements")
list2=[2, 8, 5.01, 6,2, 9]
m=len(list2)
print("The length of list2 is ",m, "elements")
#For example function"sum" which returns the total of all of the elements
list3=[10,74,798, 19,3.9, 12, 8, 5.01, 6,2, 19, 11, 246, 91, 58, 2.2]
n=sum(list3)
print("The total of list1 is ",n)
list4=[72, 98, 15.01, 16,2, 69.78]
m=sum(list4)
print("The total of list2 is ",m )
#METHODS are similar to functions
#sort vs sorted
#for example, we have two ways to sort a list: the "sort" method and the "sorted" function
#Assume we have a list, namely Num
Num=[10,74,798, 19,3.9, 12, 8, 5.01, 6,2, 19, 11, 246, 91, 58, 2.2]
#we can sort this variable using sorted function as follows
#sort vs sorted
Num_rating=sorted(Num)
print(Num_rating)
print(Num)
#So Num is not changed, but in the case of sort method the list itself is changed
#in this case no new list is created
Num=[10,74,798, 19,3.9, 12, 8, 5.01, 6,2, 19, 11, 246, 91, 58, 2.2]
print("Befor appling the sort method, Num has these values:", Num)
Num.sort()
print("After appling the sort method, Num has these values:", Num)
#Making our own functions in Python
#For making a function, def FunctionName(input):
def Add1(InputFunc):
OUT=InputFunc+15
return OUT
#We can reuse function Add1 among our program
print(Add1(3))
print(Add1(15))
Add1(3.144)
#Example
#a is the input of the function
#y is the output of the function
#Whenever function Time1 is called the output is calculated
def Time1(a):
y=a*15
return y
c=Time1(2)
print(c)
d=Time1(30)
print(d)
#Documenting a function using a """docstring"""
def Add1(InputFunc):
"""
ADD Function
"""
OUT=InputFunc+15
return OUT
#functions with multiple parameters
def MULTIPARA(a, b, c):
W1=a*b+ c
W2=(a+b)*c
return (W1,W2)
print(MULTIPARA(2,3,7))
#functions with multiple parameters
def Mu2(a, b):
W1=a*b+ c
W2=(a+b)*c
W3=15/a
W4=65/b
return (W1,W2,W3,W4)
print(Mu2(11,3))
def Mu3(a1, a2, a3, a4):
c1=a1*a2+ a4+23
c2=(a3+a1)*a2
c3=15/a3
c4=65/a4+8976*d
return (c1,c2,c3,c4)
print(Mu3(0.008,0.0454,0.0323, 0.00232))
#repeating a string for n times
def mu4(St, REPEAT):
OUT=St*REPEAT
return OUT
print(mu4("Michel Jackson", 2))
print(mu4("Michel Jackson", 3))
print(mu4("Michel Jackson", 4))
print(mu4("Michel Jackson", 5))
#In many cases a function does not have a return statement
#In these cases, Python will return the special value "None"
#Assume MBJ() function with no inputs
def MBJ():
print("M: Mohammad")
print("B:Behdad")
print("J:Jamshdi")
#Calling functions with no parameters
MBJ()
def ERROR():
print("There is something wrong in codes")
#Calling function with no parameters
ERROR()
#Function which does not do anything
def NOWORK():
pass
#Calling function NOWORK
print(NOWORK())
#this function returns "None"
#LOOPS in FUNCTIONS
#we can use loops in functions
#example
#Force [N] to Mass [kg] converter
def Forve2Mass(F):
for S,Val in enumerate(F):
print("The mass of number", S,"is measured: ", Val/9.8, "Kg")
Fl=[344, 46783, 5623, 6357]
Forve2Mass(Fl)
#Mass [kg] to Force [N] converter
def Mass2Force(M):
    for S,Val in enumerate(M):
        print("The force of item", S, "is: ", Val*9.8, "N")
        we=Val*9.8
        if (we>200):
            print("The above item is overweight")
M1=[54, 71, 59, 34, 21, 16, 15]
Mass2Force(M1)
#Collecting arguments
def AI_Methods(*names):
#The star * collects any number of positional arguments into a tuple
for name in names:
print(name)
#calling the function
AI_Methods("Deep Learning", "Machine Learning", "ANNs", "LSTM")
#or unpack a list with * (without the star, the whole list would be printed as a single item)
AI1=["Deep Learning", "Machine Learning", "ANNs", "LSTM"]
AI_Methods(*AI1)
#Local scope and global scope in functions
#Every variable defined within a function has local scope
#Every variable defined outside a function has global scope
#Local variables exist only inside the function, while global variables are visible in the whole program (the BODY)
#Assume we have Date as both a global and a local variable
def LOCAL(a):
Date=a+15
return(Date)
Date=1986
y=LOCAL(Date)
print(y)
#The difference is here, look at the output of the function
#Global Scope
print("Global Scope (BODY): ",Date)
#Local Scope
print("Local Scopes (Function): ", LOCAL(Date))
#if a variable is not defined in the function, the function uses its value from the global scope (the BODY)
#Let's look at variable a
a=1
def add(b):
return a+b
c=add(10)
print(c)
def f(c):
return sum(c)
f([11, 67])
#Using if/else Statements and Loops in Functions
# Function example
def type_of_album(artist, album, year_released):
print(artist, album, year_released)
if year_released > 1980:
return "Modern"
else:
return "Oldie"
x = type_of_album("Michael Jackson", "Thriller", 1980)
print(x)
```
|
github_jupyter
|
```
#Import section
import numpy as np
import cv2
import glob
import matplotlib.pyplot as plt
import pickle
%matplotlib inline
# Loading camera calibration coefficients(matrix and camera coefficients) from pickle file
def getCameraCalibrationCoefficientsFromPickleFile(filePath):
cameraCalibration = pickle.load( open(filePath, 'rb' ) )
mtx, dist = map(cameraCalibration.get, ('mtx', 'dist'))
return mtx, dist
def getTestImages(filePath):
# Load test images.
testImages = list(map(lambda imageFileName: (imageFileName, cv2.imread(imageFileName)), glob.glob(filePath)))
return testImages
def undistortImageAndGetHLS(image, mtx, dist):
# hlsOriginal = undistortAndHLS(originalImage, mtx, dist)
"""
Undistort the image with `mtx`, `dist` and convert it to HLS.
"""
undist = cv2.undistort(image, mtx, dist, None, mtx)
hls = cv2.cvtColor(undist, cv2.COLOR_RGB2HLS)
#extract HLS from the image
H = hls[:,:,0] #channels
L = hls[:,:,1]
S = hls[:,:,2]
return H, L, S
def thresh(yourChannel, threshMin = 0, threshMax = 255):
# Apply a threshold to the S channel
# thresh = (0, 160)
binary_output = np.zeros_like(yourChannel)
binary_output[(yourChannel >= threshMin) & (yourChannel <= threshMax)] = 1
# Return a binary image of threshold result
return binary_output
def applySobel(img, orient='x', sobel_kernel=3, thresh_min = 0, thresh_max = 255):
# Apply the following steps to img
# 1) Take the derivative in x or y given orient = 'x' or 'y'
sobel = 0
if orient == 'x':
sobel = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=sobel_kernel)
else:
sobel = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=sobel_kernel)
# 2) Take the absolute value of the derivative or gradient
abs_sobel = np.absolute(sobel)
# 3) Scale to 8-bit (0 - 255) then convert to type = np.uint8
scaled_sobel = np.uint8(255*abs_sobel/np.max(abs_sobel))
# 4) Create a mask of 1's where the scaled gradient magnitude is > thresh_min and < thresh_max
binary_output = thresh(scaled_sobel,thresh_min,thresh_max)
# 5) Return this mask as your binary_output image
return binary_output
def applyActionToImages(images, action):
return list(map(lambda img: (img[0], action(img[1])), images))
# Method to plot images on cols / rows
def showImages(images, cols = 4, rows = 5, figsize=(15,10), cmap = None):
imgLength = len(images)
fig, axes = plt.subplots(rows, cols, figsize=figsize)
indexes = range(cols * rows)
for ax, index in zip(axes.flat, indexes):
if index < imgLength:
imagePathName, image = images[index]
if cmap == None:
ax.imshow(image)
else:
ax.imshow(image, cmap=cmap)
ax.set_title(imagePathName)
ax.axis('off')
# Get camera matrix and distortion coefficient
mtx, dist = getCameraCalibrationCoefficientsFromPickleFile('./pickled_data/camera_calibration.p')
# Lambda action applied on all images
useSChannel = lambda img: undistortImageAndGetHLS(img, mtx, dist)[2]
# Get Test images
testImages = getTestImages('./test_images/*.jpg')
# Get all 'S' channels from all Test images
resultSChannel = applyActionToImages(testImages, useSChannel)
# Show our result
#showImages(resultSChannel, 2, 3, (15, 13), cmap='gray')
# Apply Sobel in 'x' direction and plot images
applySobelX = lambda img: applySobel(useSChannel(img), orient='x', thresh_min=10, thresh_max=160)
# Get all 'S' channels from all Test images
resultApplySobelX = applyActionToImages(testImages, applySobelX)
# Show our result
#showImages(resultApplySobelX, 2, 3, (15, 13), cmap='gray')
# Apply Sobel in 'y' direction and plot images
applySobelY = lambda img: applySobel(useSChannel(img), orient='y', thresh_min=10, thresh_max=160)
# Get all 'S' channels from all Test images
resultApplySobelY = applyActionToImages(testImages, applySobelY)
# Show our result
#showImages(resultApplySobelY, 2, 3, (15, 13), cmap='gray')
def mag_thresh(img, sobel_kernel=3, thresh_min = 0, thresh_max = 255):
# Apply the following steps to img
# 1) Take the gradient in x and y separately
sobelX = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=sobel_kernel)
sobelY = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=sobel_kernel)
# 2) Calculate the magnitude
gradmag = np.sqrt(sobelX**2 + sobelY**2)
# 3) Scale to 8-bit (0 - 255) and convert to type = np.uint8
scale_factor = np.max(gradmag)/255
gradmag = (gradmag/scale_factor).astype(np.uint8)
# 4) Create a binary mask where mag thresholds are met
binary_output = thresh(gradmag,thresh_min, thresh_max)
# 5) Return this mask as your binary_output image
return binary_output
# Apply Magnitude in 'x' and 'y' directions in order to calculate the magnitude of pixels and plot images
applyMagnitude = lambda img: mag_thresh(useSChannel(img), thresh_min=5, thresh_max=160)
# Apply the lambda function to all test images
resultMagnitudes = applyActionToImages(testImages, applyMagnitude)
# Show our result
#showImages(resultMagnitudes, 2, 3, (15, 13), cmap='gray')
# Define a function that applies Sobel x and y,
# then computes the direction of the gradient
# and applies a threshold.
def dir_threshold(img, sobel_kernel=3, thresh_min = 0, thresh_max = np.pi/2):
# 1) Take the gradient in x and y separately and
# Take the absolute value of the x and y gradients
sobelX = np.absolute(cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=sobel_kernel))
sobelY = np.absolute(cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=sobel_kernel))
# 2) Use np.arctan2(abs_sobely, abs_sobelx) to calculate the direction of the gradient
# sobelY / sobelX
gradientDirection = np.arctan2(sobelY, sobelX)
# 3) Create a binary mask where direction thresholds are met
binary_output = thresh(gradientDirection, thresh_min, thresh_max)
# 4) Return this mask as your binary_output image
return binary_output
# Apply direction of the gradient
applyDirection = lambda img: dir_threshold(useSChannel(img), thresh_min=0.79, thresh_max=1.20)
# Apply the lambda function to all test images
resultDirection = applyActionToImages(testImages, applyDirection)
# Show our result
#showImages(resultDirection, 2, 3, (15, 13), cmap='gray')
def combineGradients(img):
sobelX = applySobelX(img)
sobelY = applySobelY(img)
magnitude = applyMagnitude(img)
direction = applyDirection(img)
combined = np.zeros_like(sobelX)
combined[((sobelX == 1) & (sobelY == 1)) | ((magnitude == 1) & (direction == 1))] = 1
return combined
resultCombined = applyActionToImages(testImages, combineGradients)
# Show our result
#showImages(resultCombined, 2, 3, (15, 13), cmap='gray')
def show_compared_results():
titles = ['Apply Sobel X', 'Apply Sobel Y', 'Apply Magnitude', 'Apply Direction', 'Combined']
results = list(zip(resultApplySobelX, resultApplySobelY, resultMagnitudes, resultDirection, resultCombined))
# only 5 images
resultsAndTitle = list(map(lambda images: list(zip(titles, images)), results))[3:6]
flattenResults = [item for sublist in resultsAndTitle for item in sublist]
fig, axes = plt.subplots(ncols=5, nrows=len(resultsAndTitle), figsize=(25,10))
for ax, imageTuple in zip(axes.flat, flattenResults):
title, images = imageTuple
imagePath, img = images
ax.imshow(img, cmap='gray')
ax.set_title(imagePath + '\n' + title, fontsize=8)
ax.axis('off')
fig.subplots_adjust(hspace=0, wspace=0.05, bottom=0)
```
|
github_jupyter
|
```
import numpy as np
import re
import pandas as pd
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans, DBSCAN
from sklearn.neighbors import NearestNeighbors
from requests import get
import unicodedata
from bs4 import BeautifulSoup
import seaborn as sns
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import GradientBoostingClassifier
import xgboost as xgb
from sklearn.metrics import accuracy_score
import sys
# Note: the original reload(sys) / sys.setdefaultencoding('utf-8') calls are Python 2-only
# and are neither needed nor available in Python 3
%matplotlib inline
```
# Reading in the data
```
df = pd.read_csv('movie_metadata.csv')
df.head()
df.shape
def classify(col):
if col['imdb_score'] >= 0 and col['imdb_score'] < 4:
return 0
elif col['imdb_score'] >= 4 and col['imdb_score'] < 6:
return 1
elif col['imdb_score'] >= 6 and col['imdb_score'] < 7:
return 2
elif col['imdb_score'] >= 7 and col['imdb_score'] < 8:
return 3
elif col['imdb_score'] >= 8 and col['imdb_score'] <= 10:
return 4
df['Success'] = df.apply(classify, axis=1)
df.describe()
df.head()
```
# Filling NaNs with the median
```
def fill_nan(col):
df[col] = df[col].fillna(df[col].median())
cols = list(df.columns)
fill_nan(cols)
```
# Cleaning
```
def clean_backward_title(col):
    string = col.rstrip()[:-2]
    # In Python 3 the string is already unicode; normalize and strip accents
    return unicodedata.normalize('NFD', string).encode('ascii', 'ignore').decode('ascii')
df['movie_title'] = df['movie_title'].astype(str)
df['movie_title'] = df['movie_title'].apply(clean_backward_title)
df['movie_title']
```
# IMDb revenue scraping script. Redundant right now, but can be useful in other projects
```
# def revenue_parse(url, revenue_per_movie):
# url = url + 'business'
# response = get(url)
# html_soup = BeautifulSoup(response.text, 'html.parser')
# movie_containers = html_soup.find('div', {"id": "tn15content"})
# text_spend = movie_containers.text.split('\n')
# if 'Gross' in text_spend:
# gross_index = text_spend.index('Gross')
# rev = [int(i[1:].replace(',', '')) if i[1:].replace(',', '').isdigit() else -1 for i in re.findall(r'[$]\S*', text_spend[gross_index+1])]
# if len(rev) == 0:
# revenue_per_movie.append(-1)
# else:
# revenue_per_movie.append(max(rev))
# else:
# revenue_per_movie.append(-1)
# revenue_per_movie = []
# for i in df['url']:
# revenue_parse(i, revenue_per_movie)
```
# Describing the data to find the Missing values
```
df.describe()
```
# Normalizing or standardizing the data. Change the commenting as per your needs
```
col = list(df.describe().columns)
col.remove('Success')
sc = StandardScaler()
# sc = MinMaxScaler()
temp = sc.fit_transform(df[col])
df[col] = temp
df.head()
```
# PCA
```
pca = PCA(n_components=3)
df_pca = pca.fit_transform(df[col])
df_pca
pca.explained_variance_ratio_
df['pca_one'] = df_pca[:, 0]
df['pca_two'] = df_pca[:, 1]
df['pca_three'] = df_pca[:, 2]
plt.figure(figsize=(12,12))
plt.scatter(df['pca_one'][:50], df['pca_two'][:50], color=['orange', 'cyan', 'brown'], cmap='viridis')
for m, p1, p2 in zip(df['movie_title'][:50], df['pca_one'][:50], df['pca_two'][:50]):
plt.text(p1, p2, s=m, color=np.random.rand(3)*0.7)
```
# KMeans
```
km = KMeans(n_clusters = 5)
#P_fit = km.fit(df[['gross','imdb_score','num_critic_for_reviews','director_facebook_likes','actor_1_facebook_likes','movie_facebook_likes','actor_3_facebook_likes','actor_2_facebook_likes']])
P_fit = km.fit(df[['gross','imdb_score']])
P_fit.labels_
# colormap = {0:'red',1:'green',2:'blue'}
# lc = [colormap[c] for c in colormap]
# plt.scatter(df['pca_one'],df['pca_two'],c = lc)
df['cluster'] = P_fit.labels_
np.unique(P_fit.labels_)
for i in np.unique(P_fit.labels_):
temp = df[df['cluster'] == i]
plt.scatter(temp['gross'], temp['imdb_score'], color=np.random.rand(3)*0.7)
```
# DBSCAN
```
cols3 = ['director_facebook_likes','imdb_score']
#min_samples is taken as >= D+1 (D = number of features) and eps is estimated from the elbow in the k-distance graph (sketched below)
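#Illustrative sketch (an assumption, not part of the original notebook): the k-distance
#graph used to pick eps -- sort each point's distance to its k-th nearest neighbour and
#look for the elbow. NearestNeighbors is already imported at the top of this notebook.
nn_k = NearestNeighbors(n_neighbors=4).fit(df[cols3])
k_distances, _ = nn_k.kneighbors(df[cols3])
#column 0 is the point itself (distance 0), so the last column is the 3rd true neighbour
plt.plot(np.sort(k_distances[:, -1]))
plt.xlabel('points sorted by k-distance')
plt.ylabel('distance to 3rd nearest neighbour')
plt.show()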
db = DBSCAN(eps = .5, min_samples=3).fit(df[cols3])
len(db.core_sample_indices_)
df['cluster'] = db.labels_
colors = [plt.cm.Spectral(each) for each in np.linspace(0, 1, len(np.unique(db.labels_)))]
plt.figure(figsize= (12,12))
for i in np.unique(db.labels_):
temp = df[df['cluster'] == i]
plt.scatter(temp['director_facebook_likes'], temp['imdb_score'], color = np.random.rand(3)*0.7)
```
# Random Forest
```
features = col
features.remove('imdb_score')
features
X_train, X_test, y_train, y_test = train_test_split(df[features], df['Success'], test_size=0.2)
# rf = RandomForestClassifier(random_state=1, n_estimators=250, min_samples_split=8, min_samples_leaf=4)
# rf = GradientBoostingClassifier(random_state=0, n_estimators=250, min_samples_split=8,
# min_samples_leaf=4, learning_rate=0.1)
rf = xgb.XGBClassifier(n_estimators=250)
rf.fit(X_train, y_train)
predictions = rf.predict(X_test)
predictions = predictions.astype(int)
np.unique(predictions)
accuracy_score(y_test, predictions)
features.insert(0, 'imdb_score')
sns.heatmap(df[features].corr())
```
|
github_jupyter
|
# Import and convert Neo23x0 Sigma scripts
[email protected]
This notebook is a quick and dirty Sigma to Log Analytics converter.
It uses the modules from sigmac package to do the conversion.
Only a subset of the Sigma rules are convertible currently. Failure to convert
could be for one or more of these reasons:
- known limitations of the converter
- mismatch between the syntax expressible in Sigma and KQL
- data sources referenced in Sigma rules do not yet exist in Azure Sentinel
The sigmac tool is downloadable as a package from PyPi but since we are downloading
the rules from the repo, we also copy and import the package from the repo source.
After conversion you can use an interactive browser to step through the rules and
view (and copy/save) the KQL equivalents. You can also take the conversion results and
use them in another way (e.g. bulk save to files).
The notebook is all somewhat experimental and offered as-is without any guarantees.
## Download and unzip the Sigma repo
```
import requests
# Download the repo ZIP
sigma_git_url = 'https://github.com/Neo23x0/sigma/archive/master.zip'
r = requests.get(sigma_git_url)
from ipywidgets import widgets, Layout
import os
from pathlib import Path
def_path = Path.joinpath(Path(os.getcwd()), "sigma")
path_wgt = widgets.Text(value=str(def_path),
description='Path to extract to zipped repo files: ',
layout=Layout(width='50%'),
style={'description_width': 'initial'})
path_wgt
import zipfile
import io
repo_zip = io.BytesIO(r.content)
zip_archive = zipfile.ZipFile(repo_zip, mode='r')
zip_archive.extractall(path=path_wgt.value)
RULES_REL_PATH = 'sigma-master/rules'
rules_root = Path(path_wgt.value) / RULES_REL_PATH
```
### Check that we have the files
You should see a folder with folders such as application, apt, windows...
```
%ls {rules_root}
```
## Convert Sigma Files to Log Analytics Kql queries
```
# Read the Sigma YAML file paths into a dict and make a
# a copy for the target Kql queries
from pathlib import Path
from collections import defaultdict
import copy
def get_rule_files(rules_root):
file_dict = defaultdict(dict)
for file in Path(rules_root).resolve().rglob("*.yml"):
rel_path = Path(file).relative_to(rules_root)
path_key = '.'.join(rel_path.parent.parts)
file_dict[path_key][rel_path.name] = file
return file_dict
sigma_dict = get_rule_files(rules_root)
kql_dict = copy.deepcopy(sigma_dict)
# Add downloaded sigmac tool to sys.path and import Sigmac functions
import os
import sys
module_path = os.path.abspath(os.path.join('sigma/sigma-master/tools'))
if module_path not in sys.path:
sys.path.append(module_path)
from sigma.parser.collection import SigmaCollectionParser
from sigma.parser.exceptions import SigmaCollectionParseError, SigmaParseError
from sigma.configuration import SigmaConfiguration, SigmaConfigurationChain
from sigma.config.exceptions import SigmaConfigParseError, SigmaRuleFilterParseException
from sigma.filter import SigmaRuleFilter
import sigma.backends.discovery as backends
from sigma.backends.base import BackendOptions
from sigma.backends.exceptions import BackendError, NotSupportedError, PartialMatchError, FullMatchError
# Sigma to Log Analytics Conversion
import yaml
_LA_MAPPINGS = '''
fieldmappings:
Image: NewProcessName
ParentImage: ProcessName
ParentCommandLine: NO_MAPPING
'''
NOT_CONVERTIBLE = 'Not convertible'
def sigma_to_la(file_path):
with open(file_path, 'r') as input_file:
try:
sigmaconfigs = SigmaConfigurationChain()
sigmaconfig = SigmaConfiguration(_LA_MAPPINGS)
sigmaconfigs.append(sigmaconfig)
backend_options = BackendOptions(None, None)
backend = backends.getBackend('ala')(sigmaconfigs, backend_options)
parser = SigmaCollectionParser(input_file, sigmaconfigs, None)
results = parser.generate(backend)
kql_result = ''
for result in results:
kql_result += result
except (NotImplementedError, NotSupportedError):
kql_result = NOT_CONVERTIBLE
input_file.seek(0,0)
sigma_txt = input_file.read()
if not kql_result == NOT_CONVERTIBLE:
try:
kql_header = "\n".join(get_sigma_properties(sigma_txt))
kql_result = kql_header + "\n" + kql_result
except Exception as e:
print("exception reading sigma YAML: ", e)
print(sigma_txt, kql_result, sep='\n')
return sigma_txt, kql_result
sigma_keys = ['title', 'description', 'tags', 'status',
'author', 'logsource', 'falsepositives', 'level']
def get_sigma_properties(sigma_rule):
sigma_docs = yaml.load_all(sigma_rule, Loader=yaml.SafeLoader)
sigma_rule_dict = next(sigma_docs)
for prop in sigma_keys:
yield get_property(prop, sigma_rule_dict)
def get_property(name, sigma_rule_dict):
sig_prop = sigma_rule_dict.get(name, 'na')
if isinstance(sig_prop, dict):
sig_prop = ' '.join([f"{k}: {v}" for k, v in sig_prop.items()])
return f"// {name}: {sig_prop}"
_KQL_FILTERS = {
'date': ' | where TimeGenerated >= datetime({start}) and TimeGenerated <= datetime({end}) ',
'host': ' | where Computer has {host_name} '
}
def insert_at(source, insert, find_sub):
pos = source.find(find_sub)
if pos != -1:
return source[:pos] + insert + source[pos:]
else:
return source + insert
def add_filter_clauses(source, **kwargs):
if "{" in source or "}" in source:
source = ("// Warning: embedded braces in source. Please edit if necessary.\n"
+ source)
source = source.replace('{', '{{').replace('}', '}}')
if kwargs.get('host', False):
source = insert_at(source, _KQL_FILTERS['host'], '|')
if kwargs.get('date', False):
source = insert_at(source, _KQL_FILTERS['date'], '|')
return source
# Run the conversion
conv_counter = {}
for categ, sources in sigma_dict.items():
src_converted = 0
for file_name, file_path in sources.items():
sigma, kql = sigma_to_la(file_path)
kql_dict[categ][file_name] = (sigma, kql)
if not kql == NOT_CONVERTIBLE:
src_converted += 1
conv_counter[categ] = (len(sources), src_converted)
print("Conversion statistics")
print("-" * len("Conversion statistics"))
print('\n'.join([f'{categ}: rules: {counter[0]}, converted: {counter[1]}'
for categ, counter in conv_counter.items()]))
```
## Display the results in an interactive browser
```
from ipywidgets import widgets, Layout
# Browser Functions
def on_cat_value_change(change):
queries_w.options = kql_dict[change['new']].keys()
queries_w.value = queries_w.options[0]
def on_query_value_change(change):
if view_qry_check.value:
qry_text = kql_dict[sub_cats_w.value][queries_w.value][1]
if "Not convertible" not in qry_text:
qry_text = add_filter_clauses(qry_text,
date=add_date_filter_check.value,
host=add_host_filter_check.value)
query_text_w.value = qry_text.replace('|', '\n|')
orig_text_w.value = kql_dict[sub_cats_w.value][queries_w.value][0]
def on_view_query_value_change(change):
vis = 'visible' if view_qry_check.value else 'hidden'
on_query_value_change(None)
query_text_w.layout.visibility = vis
orig_text_w.layout.visibility = vis
# Function defs for ExecuteQuery cell below
def click_exec_hqry(b):
global qry_results
query_name = queries_w.value
query_cat = sub_cats_w.value
query_text = query_text_w.value
query_text = query_text.format(**qry_wgt.query_params)
disp_results(query_text)
def disp_results(query_text):
out_wgt.clear_output()
with out_wgt:
print("Running query...", end=' ')
qry_results = execute_kql_query(query_text)
print(f'done. {len(qry_results)} rows returned.')
display(qry_results)
exec_hqry_button = widgets.Button(description="Execute query..")
out_wgt = widgets.Output() #layout=Layout(width='100%', height='200px', visiblity='visible'))
exec_hqry_button.on_click(click_exec_hqry)
# Browser widget setup
categories = list(sorted(kql_dict.keys()))
sub_cats_w = widgets.Select(options=categories,
description='Category : ',
layout=Layout(width='30%', height='120px'),
style = {'description_width': 'initial'})
queries_w = widgets.Select(options = kql_dict[categories[0]].keys(),
description='Query : ',
layout=Layout(width='30%', height='120px'),
style = {'description_width': 'initial'})
query_text_w = widgets.Textarea(
value='',
description='Kql Query:',
    layout=Layout(width='100%', height='300px', visibility='hidden'),
disabled=False)
orig_text_w = widgets.Textarea(
value='',
description='Sigma Query:',
    layout=Layout(width='100%', height='250px', visibility='hidden'),
disabled=False)
query_text_w.layout.visibility = 'hidden'
orig_text_w.layout.visibility = 'hidden'
sub_cats_w.observe(on_cat_value_change, names='value')
queries_w.observe(on_query_value_change, names='value')
view_qry_check = widgets.Checkbox(description="View query", value=True)
add_date_filter_check = widgets.Checkbox(description="Add date filter", value=False)
add_host_filter_check = widgets.Checkbox(description="Add host filter", value=False)
view_qry_check.observe(on_view_query_value_change, names='value')
add_date_filter_check.observe(on_view_query_value_change, names='value')
add_host_filter_check.observe(on_view_query_value_change, names='value')
# view_qry_button.on_click(click_exec_hqry)
# display(exec_hqry_button);
vbox_opts = widgets.VBox([view_qry_check, add_date_filter_check, add_host_filter_check])
hbox = widgets.HBox([sub_cats_w, queries_w, vbox_opts])
vbox = widgets.VBox([hbox, orig_text_w, query_text_w])
on_view_query_value_change(None)
display(vbox)
```
## Click the `Execute query` button to run the currently display query
**Notes:**
- To run the queries, first authenticate to Log Analytics (scroll down and execute remaining cells in the notebook)
- If you added a date filter to the query set the date range below
```
from msticpy.nbtools.nbwidgets import QueryTime
qry_wgt = QueryTime(units='days', before=5, after=0, max_before=30, max_after=10)
vbox = widgets.VBox([exec_hqry_button, out_wgt])
display(vbox)
```
### Set Query Time bounds
```
qry_wgt.display()
```
### Authenticate to Azure Sentinel
```
def clean_kql_comments(query_string):
    """Strip // comments and newlines from a KQL query string."""
    import re
    return re.sub(r'(//[^\n]+)', '', query_string, flags=re.MULTILINE).replace('\n', '').strip()
def execute_kql_query(query_string):
if not query_string or len(query_string.strip()) == 0:
print('No query supplied')
return None
src_query = clean_kql_comments(query_string)
result = get_ipython().run_cell_magic('kql', line='', cell=src_query)
if result is not None and result.completion_query_info['StatusCode'] == 0:
results_frame = result.to_dataframe()
return results_frame
return []
import os
from msticpy.nbtools.wsconfig import WorkspaceConfig
from msticpy.nbtools import kql, GetEnvironmentKey
ws_config_file = 'config.json'
try:
ws_config = WorkspaceConfig(ws_config_file)
print('Found config file')
for cf_item in ['tenant_id', 'subscription_id', 'resource_group', 'workspace_id', 'workspace_name']:
print(cf_item, ws_config[cf_item])
except:
ws_config = None
ws_id = GetEnvironmentKey(env_var='WORKSPACE_ID',
prompt='Log Analytics Workspace Id:')
if ws_config:
ws_id.value = ws_config['workspace_id']
ws_id.display()
try:
WORKSPACE_ID = select_ws.value
except NameError:
try:
WORKSPACE_ID = ws_id.value
except NameError:
WORKSPACE_ID = None
if not WORKSPACE_ID:
raise ValueError('No workspace selected.')
kql.load_kql_magic()
%kql loganalytics://code().workspace(WORKSPACE_ID)
```
## Save All Converted Files
```
path_save_wgt = widgets.Text(value=str(def_path) + "_kql_out",
description='Path to save KQL files: ',
layout=Layout(width='50%'),
style={'description_width': 'initial'})
path_save_wgt
root = Path(path_save_wgt.value)
root.mkdir(exist_ok=True)
for categ, kql_files in kql_dict.items():
sub_dir = root.joinpath(categ)
for file_name, contents in kql_files.items():
kql_txt = contents[1]
if not kql_txt == NOT_CONVERTIBLE:
sub_dir.mkdir(exist_ok=True)
file_path = sub_dir.joinpath(file_name.replace('.yml', '.kql'))
with open(file_path, 'w') as output_file:
output_file.write(kql_txt)
print(f"Saved {file_path}")
```
|
github_jupyter
|
```
%matplotlib inline
import re
import time
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from numpy import nan
from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.support.wait import WebDriverWait
## create a pandas dataframe to store the scraped data
df = pd.DataFrame(
columns=['hotel', 'rating', 'distance', 'score', 'recommendation_ratio', 'review_number', 'lowest_price'])
## launch the webdriver: change my_path below to the location of your driver executable
#This notebook uses Chromedriver to launch Chrome; use Geckodriver for Firefox or Edgedriver for Edge
# headless mode
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--headless')
chrome_options.add_argument('--disable-gpu')
chrome_options.add_argument('--window-size=1920x1080') # indispensable
my_path = r"chromedriver.exe" # choose your own path
browser = webdriver.Chrome(chrome_options=chrome_options, executable_path=my_path) #webdriver.Chrome for chromedriver
browser.maximize_window()
def get_elements(xpath, attr = 'text', pattern = ''):
elements = browser.find_elements_by_xpath(xpath) # find the elements according to conditions stated in xpath
if (attr == 'text'):
res = list(map(lambda x: x.text, elements))
else:
res = list(map(lambda x: x.get_attribute(attr),elements))
return res
columns=['hotel', 'rating', 'distance', 'score', 'recommendation_ratio', 'review_number', 'lowest_price'];
df = pd.DataFrame(columns=columns)
place = '旺角'; # choose a place in HK (旺角 = Mong Kok)
url = r"http://hotels.ctrip.com/hotel/hong%20kong58/k1"+place;
try:
browser.get(url)
star3 = browser.find_element_by_id("star-3")
star4 = browser.find_element_by_id("star-4")
star5 = browser.find_element_by_id("star-5")
# choose hotels that >= 3 stars
ActionChains(browser).click(star3).perform()
ActionChains(browser).click(star4).perform()
ActionChains(browser).click(star5).perform()
time.sleep(4) # better way: WebDriverWait
from selenium.webdriver.support.wait import WebDriverWait
tst = WebDriverWait(browser, 5).until(
lambda x: x.find_element_by_link_text("下一页"))
clo =browser.find_element_by_id('appd_wrap_close')
ActionChains(browser).move_to_element(clo).click(clo).perform()
page = 0
while (tst.get_attribute('class') != 'c_down_nocurrent'): # until the last page
page += 1
# hotel brand
hotel_xpath = "//h2[@class='hotel_name']/a"
hotel = get_elements(hotel_xpath,'title')
hnum = len(hotel) # hotel numbers in current page
# hotel rating
rating_xpath = "//span[@class='hotel_ico']/span[starts-with(@class,'hotel_diamond')]"
rating = get_elements(rating_xpath,'class')
rating = [rating[i][-1:] for i in range(hnum)]
# distance
distance_xpath = "//p[@class='hotel_item_htladdress']/span[@class='dest_distance']"
distance = get_elements(distance_xpath)
distance_pattern = re.compile(r"\D+(\d+.\d+)\D+");
distance = list(map(lambda x: distance_pattern.match(x).group(1), distance))
# score
score_xpath = "//div[@class='hotelitem_judge_box']/a | //div[@class='hotelitem_judge_box']/span[@class='no_grade']"
score = get_elements(score_xpath,'title')
score_pattern = re.compile(r"\D+(\d+.\d+)\D+?");
score = list(map(lambda x: '/' if x=='暂无评分' or x=='' else score_pattern.match(x).group(1), score))
# recommendation
ratio_xpath = "//div[@class='hotelitem_judge_box']/a/span[@class='total_judgement_score']/span | //div[@class='hotelitem_judge_box']/span[@class='no_grade']"
ratio = get_elements(ratio_xpath)
# review
review_xpath = "//div[@class='hotelitem_judge_box']/a/span[@class='hotel_judgement']/span | //div[@class='hotelitem_judge_box']/span[@class='no_grade'] "
review = get_elements(review_xpath)
# lowest price
lowest_price_xpath = "//span[@class='J_price_lowList']"
price = get_elements(lowest_price_xpath)
rows = np.array([hotel, rating, distance, score, ratio, review, price]).T
dfrows = pd.DataFrame(rows,columns=columns)
df = df.append(dfrows,ignore_index=True)
ActionChains(browser).click(tst).perform() # next page
tst = WebDriverWait(browser, 10).until(
lambda x: x.find_element_by_link_text("下一页"))
print(tst.get_attribute('class'), page)
except Exception as e:
print(e.__doc__)
    print(e)  # Exception.message does not exist in Python 3
finally:
browser.quit()
## create a csv file in our working directory with our scraped data
df.to_csv(place+"_hotel.csv", index=False,encoding='utf_8_sig')
print('Scraping is done!')
df.score = pd.to_numeric(df.score, errors='coerce')
df.rating = pd.to_numeric(df.rating, errors='coerce')
#df.recommendation_ratio = pd.to_numeric(df.recommendation_ratio,errors='coerce')
df['distance']=pd.to_numeric(df['distance'], errors='coerce')
df.review_number = pd.to_numeric(df.review_number, errors='coerce')
df.lowest_price = pd.to_numeric(df.lowest_price,errors='coerce')
df=df.sort_values(by='distance')
df
def piepic():
plt.figure(num='Rpie',dpi=100)
labels = ['3 stars', '4 stars', '5 stars']
sizes = [df.rating[df.rating==k].count() for k in [3,4,5] ]
colors = ['gold', 'lightcoral', 'lightskyblue']
explode = (0.01, 0.01, 0.01) # explode 1st slice
def atxt(pct, allvals):
absolute = int(pct/100.*np.sum(allvals))
return "{:.1f}%\n({:d})".format(pct, absolute)
# Plot
plt.pie(sizes, labels=labels, colors=colors, explode=explode, autopct=lambda pct: atxt(pct, sizes),
shadow=True, startangle=140)
plt.legend(labels,
title="hotel rating")
plt.axis('equal')
plt.savefig('Rpie.jpg')
plt.show()
plt.close()
def DvPpic(): # distance vs price
plt.figure(num='DvP',dpi=100)
plt.plot(df.distance[df.rating==3],df.lowest_price[df.rating==3],'x-',label='3 stars')
plt.plot(df.distance[df.rating==4],df.lowest_price[df.rating==4],'*-',label='4 stars')
plt.plot(df.distance[df.rating==5],df.lowest_price[df.rating==5],'rD-',label='5 stars')
plt.legend()
plt.xlabel('Distance (km)')
plt.ylabel('Price (Yuan)')
plt.grid()
plt.title('Distance vs. Price')
plt.savefig('DvP.jpg')
plt.show()
plt.close()
def Pdensity():
plt.figure(num='Pdensity',dpi=100)
df.lowest_price[df.rating==3].plot(kind='density',label='3 stars')
df.lowest_price[df.rating==4].plot(kind='density',label='4 stars')
df.lowest_price[df.rating==5].plot(kind='density',label='5 stars')
plt.grid()
plt.legend()
plt.xlabel('Price (Yuan)')
plt.title('Distribution of Price')
plt.savefig('Pdensity.jpg')
plt.show()
def Sbox():
plt.figure(num='Sbox',dpi=200)
data = pd.concat([df.score[df.rating==3].rename('3 stars'),
df.score[df.rating==4].rename('4 stars'),
df.score[df.rating==5].rename('5 stars')],
axis=1)
data.plot.box()
plt.minorticks_on()
plt.ylabel('score')
# plt.grid(b=True, which='minor', color='r', linestyle='--')
plt.title('Boxplot of Scores')
plt.savefig('Sbox.jpg')
#data.plot.box()
piepic()
DvPpic()
Pdensity()
Sbox()
```
|
github_jupyter
|
```
%pushd ../../
%env CUDA_VISIBLE_DEVICES=3
import json
import logging
import os
import sys
import tempfile
from tqdm.auto import tqdm
import torch
import torchvision
from torchvision import transforms
from PIL import Image
import numpy as np
torch.cuda.set_device(0)
from netdissect import setting
segopts = 'netpqc'
segmodel, seglabels, _ = setting.load_segmenter(segopts)
class UnsupervisedImageFolder(torchvision.datasets.ImageFolder):
def __init__(self, root, transform=None, max_size=None, get_path=False):
self.temp_dir = tempfile.TemporaryDirectory()
os.symlink(root, os.path.join(self.temp_dir.name, 'dummy'))
root = self.temp_dir.name
super().__init__(root, transform=transform)
self.get_path = get_path
self.perm = None
if max_size is not None:
actual_size = super().__len__()
if actual_size > max_size:
self.perm = torch.randperm(actual_size)[:max_size].clone()
logging.info(f"{root} has {actual_size} images, downsample to {max_size}")
else:
logging.info(f"{root} has {actual_size} images <= max_size={max_size}")
def _find_classes(self, dir):
return ['./dummy'], {'./dummy': 0}
def __getitem__(self, key):
if self.perm is not None:
key = self.perm[key].item()
if isinstance(key, str):
path = key
else:
            path, target = self.samples[key]
sample = self.loader(path)
if self.transform is not None:
sample = self.transform(sample)
if self.get_path:
return sample, path
else:
return sample
def __len__(self):
if self.perm is not None:
return self.perm.size(0)
else:
return super().__len__()
len(seglabels)
class Sampler(torch.utils.data.Sampler):
def __init__(self, dataset, seg_path):
self.todos = []
for path, _ in dataset.samples:
k = os.path.splitext(os.path.basename(path))[0]
if not os.path.exists(os.path.join(seg_path, k + '.pth')):
self.todos.append(path)
def __len__(self):
return len(self.todos)
def __iter__(self):
yield from self.todos
transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])
def process(img_path, seg_path, device='cuda', batch_size=128, **kwargs):
os.makedirs(seg_path, exist_ok=True)
dataset = UnsupervisedImageFolder(img_path, transform=transform, get_path=True)
sampler = Sampler(dataset, seg_path)
loader = torch.utils.data.DataLoader(dataset, num_workers=24, batch_size=batch_size, pin_memory=True, sampler=sampler)
with torch.no_grad():
for x, paths in tqdm(loader):
segs = segmodel.segment_batch(x.to(device), **kwargs).detach().cpu()
for path, seg in zip(paths, segs):
k = os.path.splitext(os.path.basename(path))[0]
torch.save(seg, os.path.join(seg_path, k + '.pth'))
del segs
import glob
torch.backends.cudnn.benchmark=True
!ls churches/dome2tree
!ls notebooks/stats/churches/
process(
'/data/vision/torralba/ganprojects/placesgan/tracer/baselines/pyflow/dome2spire_all_256/naive',
'notebooks/stats/churches/dome2spire_all/naive',
batch_size=8,
)
process(
'/data/vision/torralba/ganprojects/placesgan/tracer/baselines/pyflow/dome2spire_all_256/poisson',
'notebooks/stats/churches/dome2spire_all/poisson',
batch_size=8,
)
process(
'/data/vision/torralba/ganprojects/placesgan/tracer/baselines/pyflow/dome2spire_all_256/laplace',
'notebooks/stats/churches/dome2spire_all/laplace',
batch_size=8,
)
process(
'/data/vision/torralba/distillation/gan_rewriting/results/ablations/stylegan-church-dome2tree-8-1-2001-0.0001-overfitdomes_filtered/images',
'churches/dome2tree/overfit',
batch_size=8)
process(
'/data/vision/torralba/ganprojects/placesgan/tracer/utils/samples/domes',
'churches/domes',
batch_size=12)
process(
'/data/vision/torralba/ganprojects/placesgan/tracer/utils/samples/dome2tree',
'churches/dome2tree/ours',
batch_size=8)
process(
'/data/vision/torralba/ganprojects/placesgan/tracer/utils/samples/dome2spire',
'churches/dome2spire/ours',
batch_size=8)
```
|
github_jupyter
|
# How do distributions transform under a change of variables ?
Kyle Cranmer, March 2016
```
%pylab inline --no-import-all
```
We are interested in understanding how distributions transform under a change of variables.
Let's start with a simple example. Think of a spinner like on a game of twister.
<!--<img src="http://cdn.krrb.com/post_images/photos/000/273/858/DSCN3718_large.jpg?1393271975" width=300 />-->
We flick the spinner and it stops. Let's call the angle of the pointer $x$. It seems a safe assumption that the distribution of $x$ is uniform between $[0,2\pi)$... so $p_x(x) = 1/(2\pi)$
Now let's say that we change variables to $y=\cos(x)$ (sorry if the names are confusing here, don't think about x- and y-coordinates, these are just names for generic variables). The question is this:
**what is the distribution of y?** Let's call it $p_y(y)$
Well it's easy to do with a simulation, let's try it out
```
# generate samples for x, evaluate y=cos(x)
n_samples = 100000
x = np.random.uniform(0,2*np.pi,n_samples)
y = np.cos(x)
# make a histogram of x
n_bins = 50
counts, bins, patches = plt.hist(x, bins=50, density=True, alpha=0.3)
plt.plot([0,2*np.pi], (1./2/np.pi, 1./2/np.pi), lw=2, c='r')
plt.xlim(0,2*np.pi)
plt.xlabel('x')
plt.ylabel('$p_x(x)$')
```
Ok, now let's make a histogram for $y=\cos(x)$
```
counts, y_bins, patches = plt.hist(y, bins=50, density=True, alpha=0.3)
plt.xlabel('y')
plt.ylabel('$p_y(y)$')
```
It's not uniform! Why is that? Let's look at the $x-y$ relationship
```
# make a scatter of x,y
plt.scatter(x[:300],y[:300]) #just the first 300 points
xtest = .2
plt.plot((-1,xtest),(np.cos(xtest),np.cos(xtest)), c='r')
plt.plot((xtest,xtest),(-1.5,np.cos(xtest)), c='r')
xtest = xtest+.1
plt.plot((-1,xtest),(np.cos(xtest),np.cos(xtest)), c='r')
plt.plot((xtest,xtest),(-1.5,np.cos(xtest)), c='r')
xtest = 2*np.pi-xtest
plt.plot((-1,xtest),(np.cos(xtest),np.cos(xtest)), c='g')
plt.plot((xtest,xtest),(-1.5,np.cos(xtest)), c='g')
xtest = xtest+.1
plt.plot((-1,xtest),(np.cos(xtest),np.cos(xtest)), c='g')
plt.plot((xtest,xtest),(-1.5,np.cos(xtest)), c='g')
xtest = np.pi/2
plt.plot((-1,xtest),(np.cos(xtest),np.cos(xtest)), c='r')
plt.plot((xtest,xtest),(-1.5,np.cos(xtest)), c='r')
xtest = xtest+.1
plt.plot((-1,xtest),(np.cos(xtest),np.cos(xtest)), c='r')
plt.plot((xtest,xtest),(-1.5,np.cos(xtest)), c='r')
xtest = 2*np.pi-xtest
plt.plot((-1,xtest),(np.cos(xtest),np.cos(xtest)), c='g')
plt.plot((xtest,xtest),(-1.5,np.cos(xtest)), c='g')
xtest = xtest+.1
plt.plot((-1,xtest),(np.cos(xtest),np.cos(xtest)), c='g')
plt.plot((xtest,xtest),(-1.5,np.cos(xtest)), c='g')
plt.ylim(-1.5,1.5)
plt.xlim(-1,7)
```
The two sets of vertical lines are both separated by $0.1$. The probability $P(a < x < b)$ must equal the probability $P( \cos(b) < y < \cos(a) )$. In this example there are two different values of $x$ that give the same $y$ (see green and red lines), so we need to take that into account. For now, let's just focus on the first part of the curve with $x<\pi$.
So we can write (this is the important equation):
\begin{equation}
\int_a^b p_x(x) dx = \int_{y_b}^{y_a} p_y(y) dy
\end{equation}
where $y_a = \cos(a)$ and $y_b = \cos(b)$.
and we can re-write the integral on the right by using a change of variables (pure calculus)
\begin{equation}
\int_a^b p_x(x) dx = \int_{y_b}^{y_a} p_y(y) dy = \int_a^b p_y(y(x)) \left| \frac{dy}{dx}\right| dx
\end{equation}
notice that the limits of integration and integration variable are the same for the left and right sides of the equation, so the integrands must be the same too. Therefore:
\begin{equation}
p_x(x) = p_y(y) \left| \frac{dy}{dx}\right|
\end{equation}
and equivalently
\begin{equation}
p_y(y) = p_x(x) \,/ \,\left| \, {dy}/{dx}\, \right |
\end{equation}
The factor $\left|\frac{dy}{dx} \right|$ is called a Jacobian. When it is large it is stretching the probability in $x$ over a large range of $y$, so it makes sense that it is in the denominator.
```
plt.plot((0.,1), (0,.3))
plt.plot((0.,1), (0,0), lw=2)
plt.plot((1.,1), (0,.3))
plt.ylim(-.1,.4)
plt.xlim(-.1,1.6)
plt.text(0.5,0.2, '1', color='b')
plt.text(0.2,0.03, 'x', color='black')
plt.text(0.5,-0.05, 'y=cos(x)', color='g')
plt.text(1.02,0.1, '$\sin(x)=\sqrt{1-y^2}$', color='r')
```
In our case:
\begin{equation}
\left|\frac{dy}{dx} \right| = \sin(x)
\end{equation}
Looking at the right-triangle above you can see $\sin(x)=\sqrt{1-y^2}$ and finally there will be an extra factor of 2 for $p_y(y)$ to take into account $x>\pi$. So we arrive at
\begin{equation}
p_y(y) = 2 \times \frac{1}{2 \pi} \frac{1}{\sin(x)} = \frac{1}{\pi} \frac{1}{\sin(\arccos(y))} = \frac{1}{\pi} \frac{1}{\sqrt{1-y^2}}
\end{equation}
Notice that when $y=\pm 1$ the pdf is diverging. This is called a [caustic](http://www.phikwadraat.nl/huygens_cusp_of_tea/) and you see them in your coffee and rainbows!
| | |
|---|---|
| <img src="http://www.nanowerk.com/spotlight/id19915_1.jpg" size=200 /> | <img src="http://www.ams.org/featurecolumn/images/february2009/caustic.gif" size=200> |
**Let's check our prediction**
```
counts, y_bins, patches = plt.hist(y, bins=50, density=True, alpha=0.3)
pdf_y = (1./np.pi)/np.sqrt(1.-y_bins**2)
plt.plot(y_bins, pdf_y, c='r', lw=2)
plt.ylim(0,5)
plt.xlabel('y')
plt.ylabel('$p_y(y)$')
```
Perfect!
## A trick using the cumulative distribution function (cdf) to generate random numbers
Let's consider a different variable transformation now -- it is a special one that we can use to our advantage.
\begin{equation}
y(x) = \textrm{cdf}(x) = \int_{-\infty}^x p_x(x') dx'
\end{equation}
Here's a plot of a distribution and cdf for a Gaussian.
(Note: the axes are different for the pdf and the cdf; see http://matplotlib.org/examples/api/two_scales.html)
```
from scipy.stats import norm
x_for_plot = np.linspace(-3,3, 30)
fig, ax1 = plt.subplots()
ax1.plot(x_for_plot, norm.pdf(x_for_plot), c='b')
ax1.set_ylabel('p(x)', color='b')
for tl in ax1.get_yticklabels():
tl.set_color('b')
ax2 = ax1.twinx()
ax2.plot(x_for_plot, norm.cdf(x_for_plot), c='r')
ax2.set_ylabel('cdf(x)', color='r')
for tl in ax2.get_yticklabels():
tl.set_color('r')
```
Ok, so let's use our result about how distributions transform under a change of variables to predict the distribution of $y=cdf(x)$. We need to calculate
\begin{equation}
\frac{dy}{dx} = \frac{d}{dx} \int_{-\infty}^x p_x(x') dx'
\end{equation}
Just like particles and anti-particles, when derivatives meet anti-derivatives they annihilate. So $\frac{dy}{dx} = p_x(x)$, which shouldn't be a surprise.. the slope of the cdf is the pdf.
So putting these together we find the distribution for $y$ is:
\begin{equation}
p_y(y) = p_x(x) \, / \, \frac{dy}{dx} = p_x(x) /p_x(x) = 1
\end{equation}
So it's just a uniform distribution from $[0,1]$, which is perfect for random numbers.
We can turn this around and generate a uniformly random number between $[0,1]$, take the inverse of the cdf and we should have the distribution we want for $x$.
Let's try it for a Gaussian. The inverse of the cdf for a Gaussian is called [ppf](http://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.stats.norm.html)
```
norm.ppf.__doc__
#check it out
norm.cdf(0), norm.ppf(0.5)
```
Ok, let's use CDF trick to generate Normally-distributed (aka Gaussian-distributed) random numbers
```
rand_cdf = np.random.uniform(0,1,10000)
rand_norm = norm.ppf(rand_cdf)
_ = plt.hist(rand_norm, bins=30, density=True, alpha=0.3)
plt.xlabel('x')
```
**Pros**: The great thing about this technique is it is very efficient. You only generate one random number per random $x$.
**Cons**: the downside is you need to know how to compute the inverse cdf for $p_x(x)$ and that can be difficult. It works for a distribution like a Gaussian, but for some random distribution this might be even more computationally expensive than the accept/reject approach. This approach also doesn't really work if your distribution is for more than one variable.
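For comparison, here is a minimal accept/reject sketch (an illustration, not from the original notebook; the standard-normal target and the uniform proposal on $[-4, 4]$ are assumptions). Roughly one in three proposals is accepted here, which is why the cdf trick is preferable whenever the inverse cdf is available.
```
# Minimal accept/reject sketch (illustrative; the target and proposal are assumptions)
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

proposals = np.random.uniform(-4, 4, 100000)   # uniform proposal q(x) on [-4, 4]
u = np.random.uniform(0, 1, 100000)
# accept x with probability p(x) / (M q(x)); with this envelope that ratio is p(x)/p(0)
accepted = proposals[u < norm.pdf(proposals) / norm.pdf(0)]

_ = plt.hist(accepted, bins=30, density=True, alpha=0.3)
x_plot = np.linspace(-4, 4, 100)
plt.plot(x_plot, norm.pdf(x_plot), c='r', lw=2)
plt.xlabel('x')
```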
## Going full circle
Ok, let's try it for our distribution of $y=\cos(x)$ above. We found
\begin{equation}
p_y(y) = \frac{1}{\pi} \frac{1}{\sqrt{1-y^2}}
\end{equation}
So the CDF is (see Wolfram alpha for [integral](http://www.wolframalpha.com/input/?i=integrate%5B1%2Fsqrt%5B1-x%5E2%5D%2FPi%5D) )
\begin{equation}
cdf(y') = \int_{-1}^{y'} \frac{1}{\pi} \frac{1}{\sqrt{1-y^2}} = \frac{1}{\pi}\arcsin(y') + C
\end{equation}
and we know that for $y'=-1$ the CDF must be 0, so the constant is $C=1/2$, i.e. $\textrm{cdf}(y') = \frac{1}{\pi}\arcsin(y') + \frac{1}{2}$. Remembering some trig, $\frac{1}{\pi}\arccos(y') = 1 - \textrm{cdf}(y')$, which works just as well for the trick below because $z$ and $1-z$ follow the same uniform distribution.
So to apply the trick, we need to generate uniformly random variables $z$ between 0 and 1, and then take the inverse of the cdf to get $y$. Ok, so what would that be:
\begin{equation}
y = \textrm{cdf}^{-1}(1-z) = \cos(\pi z)
\end{equation}
**Of course!** That's how we started in the first place: we began with a uniform $x$ in $[0,2\pi]$ and then defined $y=\cos(x)$. So we just worked backwards to get where we started. The only difference here is that we only use the first half of the curve, $\cos(x)$ for $x<\pi$.
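As a quick sanity check (a short sketch, not part of the original notebook), draw uniform $z$, map it through $y=\cos(\pi z)$ and compare the histogram with the prediction $p_y(y) = \frac{1}{\pi}\frac{1}{\sqrt{1-y^2}}$:
```
# Sanity check (illustrative): uniform z -> y = cos(pi*z) should reproduce p_y(y)
import numpy as np
import matplotlib.pyplot as plt

z = np.random.uniform(0, 1, 100000)
y_from_z = np.cos(np.pi * z)
counts, y_bins, patches = plt.hist(y_from_z, bins=50, density=True, alpha=0.3)
plt.plot(y_bins, (1. / np.pi) / np.sqrt(1. - y_bins**2), c='r', lw=2)  # predicted p_y(y)
plt.ylim(0, 5)
plt.xlabel('y')
plt.ylabel('$p_y(y)$')
```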
|
github_jupyter
|
```
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import glob
import os
from skimage.io import imread,imshow
from skimage.transform import resize
from sklearn.utils import shuffle
from tqdm import tqdm
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import InputLayer,Conv2D,MaxPool2D,BatchNormalization,Dropout,Flatten,Dense
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint, ReduceLROnPlateau
%matplotlib inline
from skimage.io import imread,imshow
from skimage.transform import resize
from sklearn.utils import shuffle
from tqdm import tqdm
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import InputLayer,Conv2D,MaxPool2D,BatchNormalization,Dropout,Flatten,Dense
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint, ReduceLROnPlateau
%matplotlib inline
# my Contribution: Reduce Dataset
train_dataset_0_all=glob.glob('../input/leukemia-classification/C-NMC_Leukemia/training_data/fold_0/all/*.bmp')
train_dataset_0_hem=glob.glob('../input/leukemia-classification/C-NMC_Leukemia/training_data/fold_0/hem/*.bmp')
train_dataset_1_all=glob.glob('../input/leukemia-classification/C-NMC_Leukemia/training_data/fold_1/all/*.bmp')
train_dataset_1_hem=glob.glob('../input/leukemia-classification/C-NMC_Leukemia/training_data/fold_1/hem/*.bmp')
valid_data=pd.read_csv('../input/leukemia-classification/C-NMC_Leukemia/validation_data/C-NMC_test_prelim_phase_data_labels.csv')
Test_dataset = glob.glob('../input/leukemia-classification/C-NMC_Leukemia/testing_data/C-NMC_test_final_phase_data')
a,b=len(train_dataset_0_all),len(train_dataset_1_all)
d=a+b
print('count:',d)
valid_data.head()
A=[]
H=[]
A.extend(train_dataset_0_all)
A.extend(train_dataset_1_all)
H.extend(train_dataset_0_hem)
H.extend(train_dataset_1_hem)
print(len(A))
print(len(H))
A=np.array(A)
H=np.array(H)
fig, ax = plt.subplots(nrows = 1, ncols = 5, figsize = (20,20))
for i in tqdm(range(0,5)):
rand=np.random.randint(len(A))
img=imread(A[rand])
img=resize(img,(128,128))
ax[i].imshow(img)
fig, ax = plt.subplots(nrows = 1, ncols = 5, figsize = (20,20))
for i in tqdm(range(0,5)):
rand=np.random.randint(len(H))
img=imread(H[rand])
img=resize(img,(128,128))
ax[i].imshow(img)
image=[]
label=[]
for i in tqdm(range(len(A))):
img=imread(A[i])
img=resize(img,(128,128))
image.append(img)
label.append(1)
for i in tqdm(range(len(H))):
    img=imread(H[i])
    img=resize(img,(128,128))
image.append(img)
label.append(0)
image=np.array(image)
label=np.array(label)
del A
del H
image, label = shuffle(image, label, random_state = 42)
fig, ax = plt.subplots(nrows = 1, ncols = 5, figsize = (20,20))
for i in tqdm(range(0,5)):
rand=np.random.randint(len(image))
ax[i].imshow(image[rand])
a=label[rand]
if a==1:
ax[i].set_title('diseased')
else:
ax[i].set_title('fine')
X=image
y=label
del image
del label
X_val = []
for image_name in valid_data.new_names:
# Loading images
img = imread('../input/leukemia-classification/C-NMC_Leukemia/validation_data/C-NMC_test_prelim_phase_data/' + image_name)
# Resizing
img = resize(img, (128,128))
# Appending them into list
X_val.append(img)
# Converting into array
X_val = np.array(X_val)
# Storing target values as well
y_val = valid_data.labels.values
train_datagen = ImageDataGenerator(horizontal_flip=True,
vertical_flip=True,
zoom_range = 0.2)
train_datagen.fit(X)
#Contribution: Made some changes in layers and add some hidden layers
model=Sequential()
model.add(InputLayer(input_shape=(128,128,3)))
model.add(Conv2D(filters=32,kernel_size=(3,3),padding='valid',activation='relu'))
model.add(BatchNormalization())
model.add(Dense(4))
model.add(MaxPool2D(pool_size=(2,2),padding='valid'))
model.add(Dropout(.2))
model.add(Conv2D(filters=64,kernel_size=(3,3),padding='valid',activation='relu'))
model.add(BatchNormalization())
model.add(Dense(2))
model.add(MaxPool2D(pool_size=(2,2),padding='valid'))
model.add(Dropout(.2))
model.add(Conv2D(filters=128,kernel_size=(3,3),padding='valid',activation='relu'))
model.add(BatchNormalization())
model.add(Dense(1))
model.add(MaxPool2D(pool_size=(2,2),padding='valid'))
model.add(Dropout(.2))
model.add(Conv2D(filters=256,kernel_size=(3,3),padding='valid',activation='relu')) #contribution
model.add(BatchNormalization())#contribution
model.add(Flatten())
model.add(Dense(units = 128, activation = 'relu')) #contribution
model.add(Dropout(0.3))
model.add(Dense(units = 64, activation = 'relu')) #contribution
model.add(Dropout(0.3))
model.add(Dense(units=1, activation='sigmoid'))
model.summary()
model.compile(optimizer='adam',loss='binary_crossentropy',metrics=['accuracy'])
filepath = './best_weights.hdf5'
earlystopping = EarlyStopping(monitor = 'val_accuracy',
mode = 'max' ,
patience = 15)
checkpoint = ModelCheckpoint(filepath,
monitor = 'val_accuracy',
mode='max',
save_best_only=True,
verbose = 1)
callback_list = [earlystopping, checkpoint]
len(X),len(X_val)
#contribution
lr_reduction = ReduceLROnPlateau(monitor='val_loss',
patience=10,
verbose=2,
factor=.75)
model_checkpoint= ModelCheckpoint("/best_result_checkpoint", monitor='val_loss', save_best_only=True, verbose=0)
#history1 = model.fit(train_datagen.flow(X, y, batch_size = 512),
#validation_data = (X_val, y_val),
# epochs = 2,
#verbose = 1,
# callbacks =[earlystopping])
#history2 = model.fit(train_datagen.flow(X, y, batch_size = 512),
# validation_data = (X_val, y_val),
# epochs = 4,
#verbose = 1,
#callbacks =[earlystopping])
#contribution: reduced the number of epochs and the batch size (the final layer uses a sigmoid activation)
history = model.fit(train_datagen.flow(X, y, batch_size = 212),
validation_data = (X_val, y_val),
epochs = 6,
verbose = 1,
callbacks =[earlystopping])
model.save('my_model.hdf5')  # equivalent to tf.keras.models.save_model without importing tensorflow directly
model.summary()
#my Contribution
print(history.history['accuracy'])
plt.plot(history.history['accuracy'],'--', label='accuracy on training set')
plt.plot(history.history['val_accuracy'], label='accuracy on validation set')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.show()
#my Contribution
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
import numpy as np
import matplotlib.pyplot as plt
# creating the dataset
data = {'Reference Accuracy':65, 'My Accuracy':97}
courses = list(data.keys())
values = list(data.values())
fig = plt.figure(figsize = (10, 5))
# creating the bar plot
plt.bar(courses, values, color ='maroon',
width = 0.4)
plt.xlabel("Comparison Between My Accuracy and Reference Accuracy ")
plt.ylabel("Accuracy")
plt.show()
```
References:
- https://www.kaggle.com/dimaorizz/kravtsov-lab7
- https://towardsdatascience.com/cnn-architectures-a-deep-dive-a99441d18049
- https://ieeexplore.ieee.org/document/9071471
- google.com/leukemia classification
|
github_jupyter
|
[<img src="https://deepnote.com/buttons/launch-in-deepnote-small.svg">](https://deepnote.com/launch?url=https%3A%2F%2Fgithub.com%2Fgordicaleksa%2Fget-started-with-JAX%2Fblob%2Fmain%2FTutorial_4_Flax_Zero2Hero_Colab.ipynb)
<a href="https://colab.research.google.com/github/gordicaleksa/get-started-with-JAX/blob/main/Tutorial_4_Flax_Zero2Hero_Colab.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Flax: From Zero to Hero!
This notebook heavily relies on the [official Flax docs](https://flax.readthedocs.io/en/latest/) and [examples](https://github.com/google/flax/blob/main/examples/) + some additional code/modifications, comments/notes, etc.
### Enter Flax - the basics ❤️
Before you jump into the Flax world I strongly recommend you check out my JAX tutorials, as I won't be covering the details of JAX here.
* (Tutorial 1) ML with JAX: From Zero to Hero ([video](https://www.youtube.com/watch?v=SstuvS-tVc0), [notebook](https://github.com/gordicaleksa/get-started-with-JAX/blob/main/Tutorial_1_JAX_Zero2Hero_Colab.ipynb))
* (Tutorial 2) ML with JAX: from Hero to Hero Pro+ ([video](https://www.youtube.com/watch?v=CQQaifxuFcs), [notebook](https://github.com/gordicaleksa/get-started-with-JAX/blob/main/Tutorial_2_JAX_HeroPro%2B_Colab.ipynb))
* (Tutorial 3) ML with JAX: Coding a Neural Network from Scratch in Pure JAX ([video](https://www.youtube.com/watch?v=6_PqUPxRmjY), [notebook](https://github.com/gordicaleksa/get-started-with-JAX/blob/main/Tutorial_3_JAX_Neural_Network_from_Scratch_Colab.ipynb))
That out of the way - let's start with the basics!
```
# Install Flax and JAX
!pip install --upgrade -q "jax[cuda11_cudnn805]" -f https://storage.googleapis.com/jax-releases/jax_releases.html
!pip install --upgrade -q git+https://github.com/google/flax.git
!pip install --upgrade -q git+https://github.com/deepmind/dm-haiku # Haiku is here just for comparison purposes
import jax
from jax import lax, random, numpy as jnp
# NN lib built on top of JAX developed by Google Research (Brain team)
# Flax was "designed for flexibility" hence the name (Flexibility + JAX -> Flax)
import flax
from flax.core import freeze, unfreeze
from flax import linen as nn # nn notation also used in PyTorch and in Flax's older API
from flax.training import train_state # a useful dataclass to keep train state
# DeepMind's NN JAX lib - just for comparison purposes, we're not learning Haiku here
import haiku as hk
# JAX optimizers - a separate lib developed by DeepMind
import optax
# Flax doesn't have its own data loading functions - we'll be using PyTorch dataloaders
from torchvision.datasets import MNIST
from torch.utils.data import DataLoader
# Python libs
import functools # useful utilities for functional programs
from typing import Any, Callable, Sequence, Optional
# Other important 3rd party libs
import numpy as np
import matplotlib.pyplot as plt
```
The goal of this notebook is to get you started with Flax!
I'll only cover the most essential parts of Flax (and Optax) - just as much as needed to get you started with training NNs!
```
# Let's start with the simplest model possible: a single feed-forward layer (linear regression)
model = nn.Dense(features=5)
# All of the Flax NN layers inherit from the Module class (similarly to PyTorch)
print(nn.Dense.__bases__)
```
So how can we do inference with this simple model? 2 steps: init and apply!
```
# Step 1: init
seed = 23
key1, key2 = random.split(random.PRNGKey(seed))
x = random.normal(key1, (10,)) # create a dummy input, a 10-dimensional random vector
# Initialization call - this gives us the actual model weights
# (remember JAX handles state externally!)
y, params = model.init_with_output(key2, x)
print(y)
print(jax.tree_map(lambda x: x.shape, params))
# Note1: automatic shape inference
# Note2: immutable structure (hence FrozenDict)
# Note3: init_with_output if you care, for whatever reason, about the output here
# Step 2: apply
y = model.apply(params, x) # this is how you run prediction in Flax, state is external!
print(y)
try:
y = model(x) # this doesn't work anymore (bye bye PyTorch syntax)
except Exception as e:
print(e)
# todo: a small coding exercise - let's contrast Flax with Haiku
#@title Haiku vs Flax solution
model = hk.transform(lambda x: hk.Linear(output_size=5)(x))
seed = 23
key1, key2 = random.split(random.PRNGKey(seed))
x = random.normal(key1, (10,)) # create a dummy input, a 10-dimensional random vector
params = model.init(key2, x)
out = model.apply(params, None, x)
print(out)
print(hk.Linear.__bases__)
```
All of this might (initially!) be overwhelming if you're used to the stateful, object-oriented paradigm.
What Flax offers is high performance and flexibility (similarly to JAX).
Here are some [benchmark numbers](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) from the HuggingFace team.

Now that we have an answer to "why should I learn Flax?" - let's start our descent into Flaxlandia!
### A toy example 🚚 - training a linear regression model
We'll first implement a pure-JAX approach and then we'll do it the Flax way.
```
# Defining a toy dataset
n_samples = 150
x_dim = 2 # putting small numbers here so that we can visualize the data easily
y_dim = 1
noise_amplitude = 0.1
# Generate (random) ground truth W and b
# Note: we could get W, b from a randomly initialized nn.Dense here; we stay closer to pure JAX for now
key, w_key, b_key = random.split(random.PRNGKey(seed), num=3)
W = random.normal(w_key, (x_dim, y_dim)) # weight
b = random.normal(b_key, (y_dim,)) # bias
# This is the structure that Flax expects (recall from the previous section!)
true_params = freeze({'params': {'bias': b, 'kernel': W}})
# Generate samples with additional noise
key, x_key, noise_key = random.split(key, num=3)
xs = random.normal(x_key, (n_samples, x_dim))
ys = jnp.dot(xs, W) + b
ys += noise_amplitude * random.normal(noise_key, (n_samples, y_dim))
print(f'xs shape = {xs.shape} ; ys shape = {ys.shape}')
# Let's visualize our data (becoming one with the data paradigm <3)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
assert xs.shape[-1] == 2 and ys.shape[-1] == 1 # low dimensional data so that we can plot it
ax.scatter(xs[:, 0], xs[:, 1], zs=ys)
# todo: exercise - let's show that our data lies on the 2D plane embedded in 3D
# option 1: analytic approach
# option 2: data-driven approach
def make_mse_loss(xs, ys):
def mse_loss(params):
"""Gives the value of the loss on the (xs, ys) dataset for the given model (params)."""
# Define the squared loss for a single pair (x,y)
def squared_error(x, y):
pred = model.apply(params, x)
# Inner product because 'y' could in general have more than one dimension
return jnp.inner(y-pred, y-pred) / 2.0
# Batched version via vmap
return jnp.mean(jax.vmap(squared_error)(xs, ys), axis=0)
return jax.jit(mse_loss) # and finally we jit the result (mse_loss is a pure function)
mse_loss = make_mse_loss(xs, ys)
value_and_grad_fn = jax.value_and_grad(mse_loss)
# Let's reuse the simple feed-forward layer since it trivially implements linear regression
model = nn.Dense(features=y_dim)
params = model.init(key, xs)
print(f'Initial params = {params}')
# Let's set some reasonable hyperparams
lr = 0.3
epochs = 20
log_period_epoch = 5
print('-' * 50)
for epoch in range(epochs):
loss, grads = value_and_grad_fn(params)
# SGD (closer to JAX again, but we'll progressively go towards how stuff is done in Flax)
params = jax.tree_multimap(lambda p, g: p - lr * g, params, grads)
if epoch % log_period_epoch == 0:
print(f'epoch {epoch}, loss = {loss}')
print('-' * 50)
print(f'Learned params = {params}')
print(f'Gt params = {true_params}')
```
Now let's do the same thing but this time with dedicated optimizers!
Enter DeepMind's optax! ❤️🔥
```
opt_sgd = optax.sgd(learning_rate=lr)
opt_state = opt_sgd.init(params) # always the same pattern - handling state externally
print(opt_state)
# todo: exercise - compare Adam's and SGD's states
params = model.init(key, xs) # let's start with fresh params again
for epoch in range(epochs):
loss, grads = value_and_grad_fn(params)
updates, opt_state = opt_sgd.update(grads, opt_state) # arbitrary optim logic!
params = optax.apply_updates(params, updates)
if epoch % log_period_epoch == 0:
print(f'epoch {epoch}, loss = {loss}')
# Note 1: as expected we get the same loss values
# Note 2: we'll later see more concise ways to handle all of these state components (hint: TrainState)
```
In this toy SGD example Optax may not seem that useful, but it's very powerful.
You can build arbitrary optimizers with arbitrary hyperparam schedules, chaining, param freezing, etc. You can check the [official docs here](https://optax.readthedocs.io/en/latest/).
```
#@title Optax Advanced Examples
# This cell won't "compile" (no ml_collections package) and serves just as an example
# Example from Flax (ImageNet example)
# https://github.com/google/flax/blob/main/examples/imagenet/train.py#L88
def create_learning_rate_fn(
config: ml_collections.ConfigDict,
base_learning_rate: float,
steps_per_epoch: int):
"""Create learning rate schedule."""
warmup_fn = optax.linear_schedule(
init_value=0., end_value=base_learning_rate,
transition_steps=config.warmup_epochs * steps_per_epoch)
cosine_epochs = max(config.num_epochs - config.warmup_epochs, 1)
cosine_fn = optax.cosine_decay_schedule(
init_value=base_learning_rate,
decay_steps=cosine_epochs * steps_per_epoch)
schedule_fn = optax.join_schedules(
schedules=[warmup_fn, cosine_fn],
boundaries=[config.warmup_epochs * steps_per_epoch])
return schedule_fn
tx = optax.sgd(
learning_rate=learning_rate_fn,
momentum=config.momentum,
nesterov=True,
)
# Example from Haiku (ImageNet example)
# https://github.com/deepmind/dm-haiku/blob/main/examples/imagenet/train.py#L116
def make_optimizer() -> optax.GradientTransformation:
"""SGD with nesterov momentum and a custom lr schedule."""
return optax.chain(
optax.trace(
decay=FLAGS.optimizer_momentum,
nesterov=FLAGS.optimizer_use_nesterov),
optax.scale_by_schedule(lr_schedule), optax.scale(-1))
```
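In contrast to the non-runnable snippet above, here is a small, runnable sketch of the chaining idea using only the `optax` and `jnp` imports from earlier (the hyperparameter values and the toy params/grads are made up):
```
# Chain gradient clipping with SGD whose learning rate follows a cosine schedule
schedule = optax.cosine_decay_schedule(init_value=0.1, decay_steps=100)
tx = optax.chain(
    optax.clip_by_global_norm(1.0),     # clip the global gradient norm to 1.0
    optax.sgd(learning_rate=schedule),  # lr decays over 100 update steps
)
toy_params = {'w': jnp.ones((3,))}
toy_grads = {'w': jnp.full((3,), 2.0)}
opt_state = tx.init(toy_params)
updates, opt_state = tx.update(toy_grads, opt_state)
toy_params = optax.apply_updates(toy_params, updates)
print(toy_params)
```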
Now let's go beyond these extremely simple models!
### Creating custom models ⭐
```
class MLP(nn.Module):
num_neurons_per_layer: Sequence[int] # data field (nn.Module is Python's dataclass)
def setup(self): # because dataclass is implicitly using the __init__ function... :')
self.layers = [nn.Dense(n) for n in self.num_neurons_per_layer]
def __call__(self, x):
activation = x
for i, layer in enumerate(self.layers):
activation = layer(activation)
if i != len(self.layers) - 1:
activation = nn.relu(activation)
return activation
x_key, init_key = random.split(random.PRNGKey(seed))
model = MLP(num_neurons_per_layer=[16, 8, 1]) # define an MLP model
x = random.uniform(x_key, (4,4)) # dummy input
params = model.init(init_key, x) # initialize via init
y = model.apply(params, x) # do a forward pass via apply
print(jax.tree_map(jnp.shape, params))
print(f'Output: {y}')
# todo: exercise - use @nn.compact pattern instead
# todo: check out https://realpython.com/python-data-classes/
```
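As a possible sketch for the `@nn.compact` exercise above (one of several valid rewrites of the same MLP, reusing `init_key` and `x` from the previous cell):
```
class CompactMLP(nn.Module):
    num_neurons_per_layer: Sequence[int]

    @nn.compact  # layers are declared inline in __call__, no setup() needed
    def __call__(self, x):
        for i, num_neurons in enumerate(self.num_neurons_per_layer):
            x = nn.Dense(num_neurons)(x)
            if i != len(self.num_neurons_per_layer) - 1:
                x = nn.relu(x)
        return x

# Same init/apply pattern as with the setup() version
compact_params = CompactMLP(num_neurons_per_layer=[16, 8, 1]).init(init_key, x)
print(jax.tree_map(jnp.shape, compact_params))
```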
Great!
Now that we know how to build more complex models, let's dive deeper and understand how the 'nn.Dense' module itself is designed.
#### Introducing "param"
```
class MyDenseImp(nn.Module):
num_neurons: int
weight_init: Callable = nn.initializers.lecun_normal()
bias_init: Callable = nn.initializers.zeros
@nn.compact
def __call__(self, x):
weight = self.param('weight', # parameter name (as it will appear in the FrozenDict)
self.weight_init, # initialization function, RNG passed implicitly through init fn
(x.shape[-1], self.num_neurons)) # shape info
bias = self.param('bias', self.bias_init, (self.num_neurons,))
return jnp.dot(x, weight) + bias
x_key, init_key = random.split(random.PRNGKey(seed))
model = MyDenseImp(num_neurons=3) # initialize the model
x = random.uniform(x_key, (4,4)) # dummy input
params = model.init(init_key, x) # initialize via init
y = model.apply(params, x) # do a forward pass via apply
print(jax.tree_map(jnp.shape, params))
print(f'Output: {y}')
# todo: exercise - check out the source code:
# https://github.com/google/flax/blob/main/flax/linen/linear.py
# https://github.com/google/jax/blob/main/jax/_src/nn/initializers.py#L150 <- to see why lecun_normal() vs zeros (no brackets)
from inspect import signature
# You can see it expects a PRNG key and it is passed implicitly through the init fn (same for zeros)
print(signature(nn.initializers.lecun_normal()))
```
So far we've only seen **trainable** params.
ML models often have variables that are part of the state but are not optimized via gradient descent.
Let's see how we can handle them using a simple (and contrived) example!
#### Introducing "variable"
*Note on terminology: variable is a broader term and it includes both params (trainable variables) as well as non-trainable vars.*
```
class BiasAdderWithRunningMean(nn.Module):
decay: float = 0.99
@nn.compact
def __call__(self, x):
is_initialized = self.has_variable('batch_stats', 'ema')
# 'batch_stats' is not an arbitrary name!
# Flax uses that name in its implementation of BatchNorm (hard-coded, probably not the best of designs?)
ema = self.variable('batch_stats', 'ema', lambda shape: jnp.zeros(shape), x.shape[1:])
# self.param will by default add this variable to 'params' collection (vs 'batch_stats' above)
# Again, a small idiosyncrasy here: we need to pass a key even though we don't actually use it...
bias = self.param('bias', lambda key, shape: jnp.zeros(shape), x.shape[1:])
if is_initialized:
# self.variable returns a reference hence .value
ema.value = self.decay * ema.value + (1.0 - self.decay) * jnp.mean(x, axis=0, keepdims=True)
return x - ema.value + bias
x_key, init_key = random.split(random.PRNGKey(seed))
model = BiasAdderWithRunningMean()
x = random.uniform(x_key, (10,4)) # dummy input
variables = model.init(init_key, x)
print(f'Multiple collections = {variables}') # we can now see a new collection 'batch_stats'
# We have to use mutable since regular params are not modified during the forward
# pass, but these variables are. We can't keep state internally (because JAX) so we have to return it.
y, updated_non_trainable_params = model.apply(variables, x, mutable=['batch_stats'])
print(updated_non_trainable_params)
# Let's see how we could train such model!
def update_step(opt, apply_fn, x, opt_state, params, non_trainable_params):
def loss_fn(params):
y, updated_non_trainable_params = apply_fn(
{'params': params, **non_trainable_params},
x, mutable=list(non_trainable_params.keys()))
loss = ((x - y) ** 2).sum() # not doing anything really, just for the demo purpose
return loss, updated_non_trainable_params
(loss, non_trainable_params), grads = jax.value_and_grad(loss_fn, has_aux=True)(params)
updates, opt_state = opt.update(grads, opt_state)
params = optax.apply_updates(params, updates)
return opt_state, params, non_trainable_params # all of these represent the state - ugly, for now
model = BiasAdderWithRunningMean()
x = jnp.ones((10,4)) # dummy input, using ones because it's easier to see what's going on
variables = model.init(random.PRNGKey(seed), x)
non_trainable_params, params = variables.pop('params')
del variables # delete variables to avoid wasting resources (this pattern is used in the official code)
sgd_opt = optax.sgd(learning_rate=0.1) # originally you'll see them use the 'tx' naming (from opTaX)
opt_state = sgd_opt.init(params)
for _ in range(3):
# We'll later see how TrainState abstraction will make this step much more elegant!
opt_state, params, non_trainable_params = update_step(sgd_opt, model.apply, x, opt_state, params, non_trainable_params)
print(non_trainable_params)
```
Let's go a level up in abstraction again now that we understand params and variables!
Certain layers like BatchNorm will use variables in the background.
Let's look at one last example that is conceptually about as complicated as Flax's idiosyncrasies get, while still staying high-level.
```
class DDNBlock(nn.Module):
"""Dense, dropout + batchnorm combo.
Contains trainable variables (params), non-trainable variables (batch stats),
and stochasticity in the forward pass (because of dropout).
"""
num_neurons: int
training: bool
@nn.compact
def __call__(self, x):
x = nn.Dense(self.num_neurons)(x)
x = nn.Dropout(rate=0.5, deterministic=not self.training)(x)
x = nn.BatchNorm(use_running_average=not self.training)(x)
return x
key1, key2, key3, key4 = random.split(random.PRNGKey(seed), 4)
model = DDNBlock(num_neurons=3, training=True)
x = random.uniform(key1, (3,4,4))
# New: because of Dropout we now have to include its unique key - kinda weird, but you get used to it
variables = model.init({'params': key2, 'dropout': key3}, x)
print(variables)
# And same here, everything else remains the same as the previous example
y, non_trainable_params = model.apply(variables, x, rngs={'dropout': key4}, mutable=['batch_stats'])
# Let's run these model variables during "evaluation":
eval_model = DDNBlock(num_neurons=3, training=False)
# Because training=False there is no stochasticity in the forward pass, nor do we update the stats
y = eval_model.apply(variables, x)
```
### A fully-fledged CNN on MNIST example in Flax! 💥
Modified from the official MNIST example: https://github.com/google/flax/tree/main/examples/mnist
We'll be using PyTorch dataloading instead of TFDS.
Let's start by defining a model:
```
class CNN(nn.Module): # lots of hardcoding, but it serves a purpose for a simple demo
@nn.compact
def __call__(self, x):
x = nn.Conv(features=32, kernel_size=(3, 3))(x)
x = nn.relu(x)
x = nn.avg_pool(x, window_shape=(2, 2), strides=(2, 2))
x = nn.Conv(features=64, kernel_size=(3, 3))(x)
x = nn.relu(x)
x = nn.avg_pool(x, window_shape=(2, 2), strides=(2, 2))
x = x.reshape((x.shape[0], -1)) # flatten
x = nn.Dense(features=256)(x)
x = nn.relu(x)
x = nn.Dense(features=10)(x)
x = nn.log_softmax(x)
return x
```
Let's add the data loading support in PyTorch!
I'll be reusing code from [tutorial #3](https://github.com/gordicaleksa/get-started-with-JAX/blob/main/Tutorial_3_JAX_Neural_Network_from_Scratch_Colab.ipynb):
```
def custom_transform(x):
# A couple of modifications here compared to tutorial #3 since we're using a CNN
# Input: (28, 28) uint8 [0, 255] torch.Tensor, Output: (28, 28, 1) float32 [0, 1] np array
return np.expand_dims(np.array(x, dtype=np.float32), axis=2) / 255.
def custom_collate_fn(batch):
"""Provides us with batches of numpy arrays and not PyTorch's tensors."""
transposed_data = list(zip(*batch))
labels = np.array(transposed_data[1])
imgs = np.stack(transposed_data[0])
return imgs, labels
mnist_img_size = (28, 28, 1)
batch_size = 128
train_dataset = MNIST(root='train_mnist', train=True, download=True, transform=custom_transform)
test_dataset = MNIST(root='test_mnist', train=False, download=True, transform=custom_transform)
train_loader = DataLoader(train_dataset, batch_size, shuffle=True, collate_fn=custom_collate_fn, drop_last=True)
test_loader = DataLoader(test_dataset, batch_size, shuffle=False, collate_fn=custom_collate_fn, drop_last=True)
# optimization - loading the whole dataset into memory
train_images = jnp.array(train_dataset.data)
train_lbls = jnp.array(train_dataset.targets)
# np.expand_dims converts the shape from (10000, 28, 28) -> (10000, 28, 28, 1)
# The training loop goes through the DataLoader, where custom_transform adds the channel dim;
# test_images below are fed to the model directly, so we add the channel dim here ourselves.
test_images = np.expand_dims(jnp.array(test_dataset.data), axis=3)
test_lbls = jnp.array(test_dataset.targets)
# Visualize a single image
imgs, lbls = next(iter(test_loader))
img = imgs[0].reshape(mnist_img_size)[:, :, 0]
gt_lbl = lbls[0]
print(gt_lbl)
plt.imshow(img); plt.show()
```
Great - we have our data pipeline ready and the model architecture defined.
Now let's define core training functions:
```
@jax.jit
def train_step(state, imgs, gt_labels):
def loss_fn(params):
logits = CNN().apply({'params': params}, imgs)
one_hot_gt_labels = jax.nn.one_hot(gt_labels, num_classes=10)
loss = -jnp.mean(jnp.sum(one_hot_gt_labels * logits, axis=-1))
return loss, logits
(_, logits), grads = jax.value_and_grad(loss_fn, has_aux=True)(state.params)
state = state.apply_gradients(grads=grads) # this is the whole update now! concise!
metrics = compute_metrics(logits=logits, gt_labels=gt_labels) # duplicating loss calculation but it's a bit cleaner
return state, metrics
@jax.jit
def eval_step(state, imgs, gt_labels):
logits = CNN().apply({'params': state.params}, imgs)
return compute_metrics(logits=logits, gt_labels=gt_labels)
def train_one_epoch(state, dataloader, epoch):
"""Train for 1 epoch on the training set."""
batch_metrics = []
for cnt, (imgs, labels) in enumerate(dataloader):
state, metrics = train_step(state, imgs, labels)
batch_metrics.append(metrics)
# Aggregate the metrics
batch_metrics_np = jax.device_get(batch_metrics) # pull from the accelerator onto host (CPU)
epoch_metrics_np = {
k: np.mean([metrics[k] for metrics in batch_metrics_np])
for k in batch_metrics_np[0]
}
return state, epoch_metrics_np
def evaluate_model(state, test_imgs, test_lbls):
"""Evaluate on the validation set."""
metrics = eval_step(state, test_imgs, test_lbls)
metrics = jax.device_get(metrics) # pull from the accelerator onto host (CPU)
metrics = jax.tree_map(lambda x: x.item(), metrics) # np.ndarray -> scalar
return metrics
# This one will keep things nice and tidy compared to our previous examples
def create_train_state(key, learning_rate, momentum):
cnn = CNN()
params = cnn.init(key, jnp.ones([1, *mnist_img_size]))['params']
sgd_opt = optax.sgd(learning_rate, momentum)
# TrainState is a simple built-in wrapper class that makes things a bit cleaner
return train_state.TrainState.create(apply_fn=cnn.apply, params=params, tx=sgd_opt)
def compute_metrics(*, logits, gt_labels):
one_hot_gt_labels = jax.nn.one_hot(gt_labels, num_classes=10)
loss = -jnp.mean(jnp.sum(one_hot_gt_labels * logits, axis=-1))
accuracy = jnp.mean(jnp.argmax(logits, -1) == gt_labels)
metrics = {
'loss': loss,
'accuracy': accuracy,
}
return metrics
# Finally let's define the high-level training/val loops
seed = 0 # needless to say, these should live in a config or be defined as flags
learning_rate = 0.1
momentum = 0.9
num_epochs = 2
batch_size = 32
train_state = create_train_state(jax.random.PRNGKey(seed), learning_rate, momentum)
for epoch in range(1, num_epochs + 1):
train_state, train_metrics = train_one_epoch(train_state, train_loader, epoch)
print(f"Train epoch: {epoch}, loss: {train_metrics['loss']}, accuracy: {train_metrics['accuracy'] * 100}")
test_metrics = evaluate_model(train_state, test_images, test_lbls)
print(f"Test epoch: {epoch}, loss: {test_metrics['loss']}, accuracy: {test_metrics['accuracy'] * 100}")
# todo: exercise - how would we go about adding dropout? What about BatchNorm? What would have to change?
```
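As a minimal sketch of the dropout half of the exercise above (the names `CNNWithDropout` and `train_step_with_dropout` are made up for illustration): the key change is that the model needs a `training` flag and `apply` needs a `'dropout'` RNG stream. Adding BatchNorm would additionally require `mutable=['batch_stats']` and carrying the updated stats alongside the `TrainState`, exactly as in the `DDNBlock` example earlier.
```
class CNNWithDropout(nn.Module):  # hypothetical variant for the exercise
    training: bool = True

    @nn.compact
    def __call__(self, x):
        x = nn.Conv(features=32, kernel_size=(3, 3))(x)
        x = nn.relu(x)
        x = x.reshape((x.shape[0], -1))  # flatten
        x = nn.Dropout(rate=0.5, deterministic=not self.training)(x)  # needs an RNG when training
        x = nn.Dense(features=10)(x)
        return nn.log_softmax(x)

@jax.jit
def train_step_with_dropout(state, imgs, gt_labels, dropout_key):
    def loss_fn(params):
        # New compared to train_step: the 'dropout' RNG stream must be provided
        logits = CNNWithDropout(training=True).apply(
            {'params': params}, imgs, rngs={'dropout': dropout_key})
        one_hot = jax.nn.one_hot(gt_labels, num_classes=10)
        return -jnp.mean(jnp.sum(one_hot * logits, axis=-1))
    loss, grads = jax.value_and_grad(loss_fn)(state.params)
    return state.apply_gradients(grads=grads), loss
```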
Bonus point: a walk-through of the "non-toy", distributed ImageNet CNN training example.
Head over to https://github.com/google/flax/tree/main/examples/imagenet
You'll keep seeing the same pattern/structure in all official Flax examples.
### Further learning resources 📚
Aside from the [official docs](https://flax.readthedocs.io/en/latest/) and [examples](https://github.com/google/flax/tree/main/examples) I found [HuggingFace's Flax examples](https://github.com/huggingface/transformers/tree/master/examples/flax) and the resources from their ["community week"](https://github.com/huggingface/transformers/tree/master/examples/research_projects/jax-projects) useful as well.
Finally, [source code](https://github.com/google/flax) is also your friend, as the library is still evolving.
### Connect with me ❤️
Last but not least I regularly post AI-related stuff (paper summaries, AI news, etc.) on my Twitter/LinkedIn. We also have an ever increasing Discord community (1600+ members at the time of writing this). If you care about any of these I encourage you to connect!
Social: <br/>
💼 LinkedIn - https://www.linkedin.com/in/aleksagordic/ <br/>
🐦 Twitter - https://twitter.com/gordic_aleksa <br/>
👨👩👧👦 Discord - https://discord.gg/peBrCpheKE <br/>
🙏 Patreon - https://www.patreon.com/theaiepiphany <br/>
Content: <br/>
📺 YouTube - https://www.youtube.com/c/TheAIEpiphany/ <br/>
📚 Medium - https://gordicaleksa.medium.com/ <br/>
💻 GitHub - https://github.com/gordicaleksa <br/>
📢 AI Newsletter - https://aiepiphany.substack.com/ <br/>
|
github_jupyter
|
```
import warnings
warnings.filterwarnings("ignore")
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
plt.style.use('ggplot')
import torch
print(torch.__version__)
import torch.nn as nn
import torch.optim as optim
import torch.utils.data as data_utils
from torch.utils.data import DataLoader, Dataset, Sampler
from torch.utils.data.dataloader import default_collate
from torch.utils.tensorboard import SummaryWriter
from pytorch_lightning.metrics import Accuracy
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
INPUT_SIZE = 36
HIDDEN_SIZE = 25
OUTPUT_SIZE = 5
LEARNING_RATE = 1e-2
EPOCHS = 400
BATCH_SIZE = 256
EMBEDDING_SIZE = 5
class CustomDataset(Dataset):
# Constructor where the dataset is read and preprocessed
def __init__(self):
X = pd.read_csv('./data/X_cat.csv', sep='\t', index_col=0)
target = pd.read_csv('./data/y_cat.csv', sep='\t', index_col=0, names=['status']) # header=-1,
weekday_columns = ['Weekday_0', 'Weekday_1', 'Weekday_2',
'Weekday_3', 'Weekday_4', 'Weekday_5', 'Weekday_6']
weekdays = np.argmax(X[weekday_columns].values, axis=1)
X.drop(weekday_columns, axis=1, inplace=True)
X['Weekday_cos'] = np.cos(2 * np.pi / 7.) * weekdays
X['Weekday_sin'] = np.sin(2 * np.pi / 7.) * weekdays
X['Hour_cos'] = np.cos(2 * np.pi / 24.) * X['Hour'].values
X['Hour_sin'] = np.sin(2 * np.pi / 24.) * X['Hour'].values
X['Month_cos'] = np.cos(2 * np.pi / 12.) * X['Month'].values
X['Month_sin'] = np.sin(2 * np.pi / 12.) * X['Month'].values
X['Gender'] = np.argmax(X[['Sex_Female', 'Sex_Male', 'Sex_Unknown']].values, axis=1)
X.drop(['Sex_Female', 'Sex_Male', 'Sex_Unknown'], axis=1, inplace=True)
print(X.shape)
print(X.head())
target = target.iloc[:, :].values
target[target == 'Died'] = 'Euthanasia'
le = LabelEncoder()
self.y = le.fit_transform(target)
self.X = X.values
self.columns = X.columns.values
self.embedding_column = 'Gender'
self.nrof_emb_categories = 3
self.numeric_columns = ['IsDog', 'Age', 'HasName', 'NameLength', 'NameFreq', 'MixColor', 'ColorFreqAsIs',
'ColorFreqBase', 'TabbyColor', 'MixBreed', 'Domestic', 'Shorthair', 'Longhair',
'Year', 'Day', 'Breed_Chihuahua Shorthair Mix', 'Breed_Domestic Medium Hair Mix',
'Breed_Domestic Shorthair Mix', 'Breed_German Shepherd Mix', 'Breed_Labrador Retriever Mix',
'Breed_Pit Bull Mix', 'Breed_Rare',
'SexStatus_Flawed', 'SexStatus_Intact', 'SexStatus_Unknown',
'Weekday_cos', 'Weekday_sin', 'Hour_cos', 'Hour_sin',
'Month_cos', 'Month_sin']
return
def __len__(self):
return len(self.X)
# Override the method that
# fetches a single observation from the dataset by index
def __getitem__(self, idx):
row = self.X[idx, :]
row = {col: torch.tensor(row[i]) for i, col in enumerate(self.columns)}
return row, self.y[idx]
class MLPNet(nn.Module):
def __init__(self, input_size, hidden_size, output_size, nrof_cat, emb_dim,
emb_columns, numeric_columns):
super(MLPNet, self).__init__()
self.emb_columns = emb_columns
self.numeric_columns = numeric_columns
self.emb_layer = torch.nn.Embedding(nrof_cat, emb_dim)
self.feature_bn = torch.nn.BatchNorm1d(input_size)
self.linear1 = torch.nn.Linear(input_size, hidden_size)
self.linear1.apply(self.init_weights)
self.bn1 = torch.nn.BatchNorm1d(hidden_size)
self.linear2 = torch.nn.Linear(hidden_size, hidden_size)
self.linear2.apply(self.init_weights)
self.bn2 = torch.nn.BatchNorm1d(hidden_size)
self.linear3 = torch.nn.Linear(hidden_size, output_size)
def init_weights(self, m):
if type(m) == nn.Linear:
torch.nn.init.xavier_uniform_(m.weight)  # in-place initializer (xavier_uniform without the underscore is deprecated)
# m.bias.data.fill_(0.001)
def forward(self, x):
emb_output = self.emb_layer(torch.tensor(x[self.emb_columns], dtype=torch.int64))
numeric_feats = torch.tensor(pd.DataFrame(x)[self.numeric_columns].values, dtype=torch.float32)
concat_input = torch.cat([numeric_feats, emb_output], dim=1)
output = self.feature_bn(concat_input)
output = self.linear1(output)
output = self.bn1(output)
output = torch.relu(output)
output = self.linear2(output)
output = self.bn2(output)
output = torch.relu(output)
output = self.linear3(output)
# Note: nn.CrossEntropyLoss applies log-softmax internally, so feeding it softmax outputs
# (as done here) still trains, but returning raw logits would be the more standard setup.
predictions = torch.softmax(output, dim=1)
return predictions
def run_train(model, train_loader):
step = 0
for epoch in range(EPOCHS):
model.train()
for features, label in train_loader:
# Reset gradients
optimizer.zero_grad()
output = model(features)
# Calculate error and backpropagate
loss = criterion(output, label)
loss.backward()
acc = accuracy(output, label).item()
# Update weights with gradients
optimizer.step()
step += 1
if step % 100 == 0:
print('EPOCH %d STEP %d : train_loss: %f train_acc: %f' %
(epoch, step, loss.item(), acc))
return step
animal_dataset = CustomDataset()
train_loader = data_utils.DataLoader(dataset=animal_dataset,
batch_size=BATCH_SIZE, shuffle=True)
model = MLPNet(INPUT_SIZE, HIDDEN_SIZE, OUTPUT_SIZE, animal_dataset.nrof_emb_categories,
EMBEDDING_SIZE,
animal_dataset.embedding_column, animal_dataset.numeric_columns)
criterion = nn.CrossEntropyLoss()
accuracy = Accuracy()
optimizer = optim.Adam(model.parameters(), lr=LEARNING_RATE)
step = run_train(model, train_loader)
```
|
github_jupyter
|
```
# Initialize Otter Grader
import otter
grader = otter.Notebook()
```

# In-class Assignment (Feb 9)
Run the following two cells to load the required modules and read the data.
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
df = pd.read_csv("Video_Games_Sales_cleaned_sampled.csv")
df.head(5)
```
## Exploring Data with Pandas
### Q1:
How many data points (rows) are there in this dataset? Store it in ```num_rows```.
<!--
BEGIN QUESTION
name: q1
manual: false
-->
```
# your code here
num_rows = df.shape[0] # SOLUTION
print(num_rows)
```
### Q2
What are the max and min values in Global Sales? What about the quartiles (25%, 50%, and 75%)? Can you answer this question with a one-liner?
<!--
BEGIN QUESTION
name: q2
manual: false
-->
```
# your code here
df["Global_Sales"].describe() # SOLUTION
```
### Q3
What are the unique genres and consoles that the dataset contains? Store them in ```genre_unique``` and ```console_unique```.
<!--
BEGIN QUESTION
name: q3
manual: false
-->
```
# your code here
genre_unique = df["Genre"].unique() # SOLUTION
console_unique = df["Console"].unique() # SOLUTION
print("All genres:", genre_unique)
print("All consoles:", console_unique)
```
### Q4
What are the top five games with the most global sales?
<!--
BEGIN QUESTION
name: q4
manual: false
-->
```
# your code here
df.sort_values(by="Global_Sales",ascending=False).head(5) # SOLUTION
```
### Q5 (Optional: Do it if you have enough time)
How many games in the dataset are developed by Nintendo? What are their names?
<!--
BEGIN QUESTION
name: q5
manual: false
-->
```
# your code here
# BEGIN SOLUTION
arr_name_by_nintendo = df.loc[df["Developer"] == "Nintendo","Name"]
print (arr_name_by_nintendo.nunique())
print (arr_name_by_nintendo.unique())
# END SOLUTION
```
## Linear Regression
Suppose that you want to regress the global sales on four features: Critic_Score, Critic_Count, User_Score, and User_Count.
The input matrix $X$ and the output $y$ are given to you below.
```
## No need for modification, just run this cell
X = df[['Critic_Score', 'Critic_Count', 'User_Score', 'User_Count']].values
y = df[['Global_Sales']].values
```
### Q6
Use train_test_split function in sklearn to split the dataset into training and test sets. Set 80% of the dataset aside for training and use the rest for testing. (set random_state=0)
<!--
BEGIN QUESTION
name: q6
manual: false
-->
```
# your code here
# BEGIN SOLUTION
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=0)
# END SOLUTION
```
### Q7
Train your linear regression model using the training set you obtained above. Then, store the coefficients and the intercept of your model in ```coefs``` and ```intercept```, respectively.
<!--
BEGIN QUESTION
name: q7
manual: false
-->
```
# your code here
# BEGIN SOLUTION NO PROMPT
model = LinearRegression()
model.fit(X_train,y_train)
# END SOLUTION
coefs = model.coef_ # SOLUTION
intercept = model.intercept_ # SOLUTION
print("Coefficients:", coefs)
print("Intercept:", intercept)
```
### Q8 (Optional: Do it if you have enough time.)
Compute the mean-squared-error of your model's prediction on the training and test sets and store them in ```train_error``` and ```test_error```, respectively.
<!--
BEGIN QUESTION
name: q8
manual: false
-->
```
# your code here
# BEGIN SOLUTION NO PROMPT
y_pred_train = model.predict(X_train)
y_pred_test = model.predict(X_test)
# END SOLUTION
train_error = mean_squared_error(y_train, y_pred_train) # SOLUTION
test_error = mean_squared_error(y_test, y_pred_test) # SOLUTION
print(train_error)
print(test_error)
```
# Submit
Make sure you have run all cells in your notebook in order before running the cell below, so that all images/graphs appear in the output.
**Please save before submitting!**
```
# Save your notebook first, then run this cell to create a pdf for your reference.
```
|
github_jupyter
|
# SLU10 - Classification: Exercise notebook
```
import pandas as pd
import numpy as np
```
In this notebook you will practice the following:
- What classification is for
- Logistic regression
- Cost function
- Binary classification
You thought that you would get away without implementing your own little Logistic Regression? Hah!
# Exercise 1. Implement the Logistic Function
*aka the sigmoid function*
As a very simple warmup, you will implement the logistic function. Let's keep this simple!
Here's a quick reminder of the formula:
$$\hat{p} = \frac{1}{1 + e^{-z}}$$
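For a quick numeric sanity check of the formula itself (plain NumPy, independent of the function you will implement below):

```python
import numpy as np
z = 1.2
print(1 / (1 + np.exp(-z)))  # ~0.7685, which rounds to the 0.77 expected further down
```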
**Complete here:**
```
def logistic_function(z):
"""
Implementation of the logistic function by hand
Args:
z (np.float64): a float
Returns:
proba (np.float64): the predicted probability for a given observation
"""
# define the numerator and the denominator and obtain the predicted probability
# clue: you can use np.exp()
numerator = None
denominator = None
proba = None
# YOUR CODE HERE
raise NotImplementedError()
return proba
z = 1.2
print('Predicted probability: %.2f' % logistic_function(z))
```
Expected output:
Predicted probability: 0.77
```
z = 3.4
assert np.isclose(np.round(logistic_function(z),2), 0.97)
z = -2.1
assert np.isclose(np.round(logistic_function(z),2), 0.11)
```
# Exercise 2: Make Predictions From Observations
The next step is to implement a function that receives observations and returns predicted probabilities.
For instance, remember that for an observation with two variables we have:
$$z = \beta_0 + \beta_1 x_1 + \beta_2 x_2$$
where $\beta_0$ is the intercept and $\beta_1, \beta_2$ are the coefficients.
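As a worked example of this formula with made-up numbers (intercept 0.5, coefficients 1.0 and -2.0):

```python
import numpy as np
beta = np.array([0.5, 1.0, -2.0])  # [intercept, beta_1, beta_2]
x = np.array([1.0, 0.5])
z = beta[0] + beta[1] * x[0] + beta[2] * x[1]  # 0.5 + 1.0 - 1.0 = 0.5
print(1 / (1 + np.exp(-z)))                    # ~0.62
```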
**Complete here:**
```
def predict_proba(x, coefficients):
"""
Implementation of a function that returns a predicted probability for a given data observation
Args:
x (np.array): a numpy array of shape (n,)
- n: number of variables
coefficients (np.array): a numpy array of shape (n + 1,)
- coefficients[0]: intercept
- coefficients[1:]: remaining coefficients
Returns:
proba (np.array): the predicted probability for a given data observation
"""
# start by assigning the intercept to z
# clue: the intercept is the first element of the list of coefficients
z = None
# YOUR CODE HERE
raise NotImplementedError()
# sum the remaining variable * coefficient products to z
# clue: the variables and coefficients indices are not exactly aligned, but correctly ordered
for i in range(None): # iterate through the observation variables (clue: you can use len())
z += None # multiply the variable value by its coefficient and add to z
# YOUR CODE HERE
raise NotImplementedError()
# obtain the predicted probability from z
# clue: we already implemented something that can give us that
proba = None
# YOUR CODE HERE
raise NotImplementedError()
return proba
x = np.array([0.2,2.32,1.3,3.2])
coefficients = np.array([2.1,0.22,-2, 0.4, 0.1])
print('Predicted probability: %.3f' % predict_proba(x, coefficients))
```
Expected output:
Predicted probability: 0.160
```
x = np.array([1,0,2,3.2])
coefficients = np.array([-0.2,2,-6, 1.2, -1])
assert np.isclose(np.round(predict_proba(x, coefficients),2), 0.73)
x = np.array([3.2,1.2,-1.2])
coefficients = np.array([-1.,3.1,-3,4])
assert np.isclose(np.round(predict_proba(x, coefficients),2), 0.63)
```
# Exercise 3: Compute the Cross-Entropy Cost Function
As you will implement stochastic gradient descent, you only have to do the following for each prediction:
$$H_{\hat{p}}(y) = - (y \log(\hat{p}) + (1-y) \log (1-\hat{p}))$$
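As a worked instance of the formula: for $y = 1$ the right-hand term vanishes and the loss reduces to $-\log(\hat{p})$, so $y = 1$, $\hat{p} = 0.7$ gives:

```python
import numpy as np
y, proba = 1, 0.7
print(-(y * np.log(proba) + (1 - y) * np.log(1 - proba)))  # ~0.357, as in the expected output below
```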
**Complete here:**
```
def cross_entropy(y, proba):
"""
Implementation of a function that returns the Cross-Entropy loss
Args:
y (np.int64): an integer
proba (np.float64): a float
Returns:
loss (np.float): a float with the resulting loss for a given prediction
"""
# compute the inner left side of the loss function (for when y == 1)
# clue: use np.log()
left = None
# YOUR CODE HERE
raise NotImplementedError()
# compute the inner right side of the loss function (for when y == 0)
right = None
# YOUR CODE HERE
raise NotImplementedError()
# compute the total loss
# clue: do not forget the minus sign
loss = None
# YOUR CODE HERE
raise NotImplementedError()
return loss
y = 1
proba = 0.7
print('Computed loss: %.3f' % cross_entropy(y, proba))
```
Expected output:
Computed loss: 0.357
```
y = 1
proba = 0.35
assert np.isclose(np.round(cross_entropy(y, proba),3), 1.050)
y = 1
proba = 0.77
assert np.isclose(np.round(cross_entropy(y, proba),3), 0.261)
```
# Exercise 4: Obtain the Optimized Coefficients
Now that the warmup is done, let's do the most interesting exercise. Here you will implement the optimized coefficients through Stochastic Gradient Descent.
Quick reminders:
$$H_{\hat{p}}(y) = - \frac{1}{N}\sum_{i=1}^{N} \left [{ y_i \ \log(\hat{p}_i) + (1-y_i) \ \log (1-\hat{p}_i)} \right ]$$
and
$$\beta_{0(t+1)} = \beta_{0(t)} - learning\_rate \frac{\partial H_{\hat{p}}(y)}{\partial \beta_{0(t)}}$$
$$\beta_{t+1} = \beta_t - learning\_rate \frac{\partial H_{\hat{p}}(y)}{\partial \beta_t}$$
which can be simplified to
$$\beta_{0(t+1)} = \beta_{0(t)} + learning\_rate \left [(y - \hat{p}) \ \hat{p} \ (1 - \hat{p})\right]$$
$$\beta_{t+1} = \beta_t + learning\_rate \left [(y - \hat{p}) \ \hat{p} \ (1 - \hat{p}) \ x \right]$$
You will have to initialize a numpy array full of zeros for the coefficients. If you have a training set $X$, you can initialize it this way:
```python
coefficients = np.zeros(X.shape[1]+1)
```
where the $+1$ is adding the intercept.
You will also iterate through the training set $X$ alongside their respective labels $Y$. To do so simultaneously you can do it this way:
```python
for x_sample, y_sample in zip(X, Y):
...
```
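Before you start, here is a single application of the simplified update rule above for one observation, with made-up numbers:

```python
import numpy as np
learning_rate, y, proba = 0.1, 1, 0.5
x_sample = np.array([2.0])
common = (y - proba) * proba * (1 - proba)                 # 0.125
intercept_update = learning_rate * common                  # 0.0125
coefficient_update = learning_rate * common * x_sample[0]  # 0.025
print(intercept_update, coefficient_update)
```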
**Complete here:**
```
def compute_coefficients(x_train, y_train, learning_rate = 0.1, n_epoch = 50, verbose = False):
"""
Implementation of a function that returns the optimized intercept and coefficients
Args:
x_train (np.array): a numpy array of shape (m, n)
m: number of training observations
n: number of variables
y_train (np.array): a numpy array of shape (m,)
learning_rate (np.float64): a float
n_epoch (np.int64): an integer of the number of full training cycles to perform on the training set
Returns:
coefficients (np.array): a numpy array of shape (n+1,)
"""
# initialize the coefficients array with zeros
# clue: use np.zeros()
coefficients = None
# YOUR CODE HERE
raise NotImplementedError()
# run the stochastic gradient descent algorithm n_epoch times and update the coefficients
for epoch in range(None): # iterate n_epoch times
loss = None # initialize the cross entropy loss with an empty list
for x, y in zip(None, None): # iterate through the training set observations and labels
proba = None # compute the predicted probability
loss.append(None) # compute the cross entropy loss and append it to the list
coefficients[0] += None # update the intercept
for i in range(None): # iterate through the observation variables (clue: use len())
coefficients[i + 1] += None # update each coefficient
loss = None # average the obtained cross entropies (clue: use np.mean())
# YOUR CODE HERE
raise NotImplementedError()
if((epoch%10==0) & verbose):
print('>epoch=%d, learning_rate=%.3f, error=%.3f' % (epoch, learning_rate, loss))
return coefficients
x_train = np.array([[1,2,3], [2,5,9], [3,1,4], [8,2,9]])
y_train = np.array([0,1,0,1])
learning_rate = 0.1
n_epoch = 50
coefficients = compute_coefficients(x_train, y_train, learning_rate=learning_rate, n_epoch=n_epoch, verbose=True)
print('Computed coefficients:')
print(coefficients)
```
Expected output:
>epoch=0, learning_rate=0.100, error=0.811
>epoch=10, learning_rate=0.100, error=0.675
>epoch=20, learning_rate=0.100, error=0.640
>epoch=30, learning_rate=0.100, error=0.606
>epoch=40, learning_rate=0.100, error=0.574
Computed coefficients:
[-0.82964483 0.02698239 -0.04632395 0.27761155]
```
x_train = np.array([[3,1,3], [1,0,9], [3,3,4], [2,-1,10]])
y_train = np.array([0,1,0,1])
learning_rate = 0.3
n_epoch = 100
coefficients = compute_coefficients(x_train, y_train, learning_rate=learning_rate, n_epoch=n_epoch, verbose=False)
assert np.allclose(coefficients, np.array([-0.25917811, -1.15128387, -0.85317139, 0.55286134]))
x_train = np.array([[3,-1,-2], [-6,9,3], [3,-1,4], [5,1,6]])
coefficients = compute_coefficients(x_train, y_train, learning_rate=learning_rate, n_epoch=n_epoch, verbose=False)
assert np.allclose(coefficients, np.array([-0.53111811, -0.16120628, 2.20202909, 0.27270437]))
```
# Exercise 5: Normalize Data
Just a quick and easy function to normalize the data. It is crucial that your variables are scaled to the range $[0, 1]$ (normalized) or standardized so that you can correctly analyse the logistic regression coefficients for your possible future employer.
You only have to implement this formula
$$ x_{normalized} = \frac{x - x_{min}}{x_{max} - x_{min}}$$
Don't forget that the `axis` argument is critical when obtaining the maximum, minimum and mean values! As you want to obtain the maximum and minimum values of each individual feature, you have to specify `axis=0`. Thus, if you wanted to obtain the maximum values of each feature of data $X$, you would do the following:
```python
X_max = np.max(X, axis=0)
```
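A tiny demonstration of what `axis=0` does (per-column rather than per-row aggregation):

```python
import numpy as np
X_demo = np.array([[1., 10.], [3., 20.], [2., 40.]])
print(np.min(X_demo, axis=0))  # per-column minima -> [ 1. 10.]
print(np.max(X_demo, axis=0))  # per-column maxima -> [ 3. 40.]
```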
**Complete here:**
```
def normalize_data(data):
"""
Implementation of a function that normalizes your data variables
Args:
data (np.array): a numpy array of shape (m, n)
m: number of observations
n: number of variables
Returns:
normalized_data (np.array): a numpy array of shape (m, n)
"""
# compute the numerator
# clue: use np.min()
numerator = None
# YOUR CODE HERE
raise NotImplementedError()
# compute the denominator
# clue: use np.max() and np.min()
denominator = None
# YOUR CODE HERE
raise NotImplementedError()
# obtain the normalized data
normalized_data = None
# YOUR CODE HERE
raise NotImplementedError()
return normalized_data
data = np.array([[9,5,2], [7,7,3], [2,2,11], [1,5,2], [10,1,3], [0,9,5]])
normalized_data = normalize_data(data)
print('Before normalization:')
print(data)
print('\n-------------------\n')
print('After normalization:')
print(normalized_data)
```
Expected output:
Before normalization:
[[ 9 5 2]
[ 7 7 3]
[ 2 2 11]
[ 1 5 2]
[10 1 3]
[ 0 9 5]]
-------------------
After normalization:
[[0.9 0.5 0. ]
[0.7 0.75 0.11111111]
[0.2 0.125 1. ]
[0.1 0.5 0. ]
[1. 0. 0.11111111]
[0. 1. 0.33333333]]
```
data = np.array([[9,5,2,6], [7,5,1,3], [2,2,11,1]])
normalized_data = normalize_data(data)
assert np.allclose(normalized_data, np.array([[1., 1., 0.1, 1.],[0.71428571, 1., 0., 0.4],[0., 0., 1., 0.]]))
data = np.array([[9,5,3,1], [1,3,1,3], [2,2,4,6]])
normalized_data = normalize_data(data)
assert np.allclose(normalized_data, np.array([[1., 1., 0.66666667, 0.],[0., 0.33333333, 0., 0.4],
[0.125, 0., 1., 1.]]))
```
# Exercise 6: Putting it All Together
The Wisconsin Breast Cancer Diagnostic dataset is another data science classic. It is the result of extracting breast cell nuclei characteristics to understand which of them are the most relevant for developing breast cancer.
Your quest is to first analyze this dataset using the materials that you've learned in the previous SLUs and then create a logistic regression model that can correctly distinguish cancer cells from healthy ones.
Dataset description:
1. Sample code number: id number
2. Clump Thickness
3. Uniformity of Cell Size
4. Uniformity of Cell Shape
5. Marginal Adhesion
6. Single Epithelial Cell Size
7. Bare Nuclei
8. Bland Chromatin
9. Normal Nucleoli
10. Mitoses
11. Class: (2 for benign, 4 for malignant) > We will modify to (0 for benign, 1 for malignant) for simplicity
The data is loaded for you below.
```
columns = ['Sample code number','Clump Thickness','Uniformity of Cell Size','Uniformity of Cell Shape',
'Marginal Adhesion','Single Epithelial Cell Size','Bare Nuclei','Bland Chromatin','Normal Nucleoli',
'Mitoses','Class']
data = pd.read_csv('data/breast-cancer-wisconsin.csv',names=columns, index_col=0)
data["Bare Nuclei"] = data["Bare Nuclei"].replace(['?'],np.nan)
data = data.dropna()
data["Bare Nuclei"] = data["Bare Nuclei"].map(int)
data.Class = data.Class.map(lambda x: 1 if x == 4 else 0)
X = data.drop('Class', axis=1).values
y_train = data.Class.values
```
You will also have to return several values, such as the number of cancer and healthy cells. To do so, remember that you can do masks in numpy arrays. If you had a numpy array of labels called `labels` and wanted to obtain the ones with label $3$, you would do the following:
```python
filtered_labels = labels[labels==3]
```
You will additionally be asked to obtain the number of correct cancer cell predictions. Imagine that you have a numpy array with the predictions called `predictions` and a numpy array with the correct labels called `labels` and you wanted to obtain the number of correct predictions of a label $4$. You would do the following:
```python
n_correct_predictions = labels[(labels==4) & (predictions==4)].shape[0]
```
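A small toy run of both masking patterns above:

```python
import numpy as np
labels = np.array([3, 1, 3, 4])
predictions = np.array([3, 3, 1, 4])
print(labels[labels == 3])                                  # -> [3 3]
print(labels[(labels == 4) & (predictions == 4)].shape[0])  # -> 1
```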
Also, don't forget to use these values for your logistic regression!
```
# Hyperparameters
learning_rate = 0.01
n_epoch = 100
# For validation
verbose = True
```
Now let's do this!
**Complete here:**
```
# STEP ONE: Initial analysis and data processing
# How many cells have cancer? (clue: use y_train)
n_cancer = None
# YOUR CODE HERE
raise NotImplementedError()
# How many cells are healthy? (clue: use y_train)
n_healthy = None
# YOUR CODE HERE
raise NotImplementedError()
# Normalize the training data X (clue: we have already implemented this)
x_train = None
# YOUR CODE HERE
raise NotImplementedError()
print("Number of cells with cancer: %i" % n_cancer)
print("\nThe last three normalized rows:")
print(x_train[-3:])
```
Expected output:
Number of cells with cancer: 239
The last three normalized rows:
[[0.44444444 1. 1. 0.22222222 0.66666667 0.22222222
0.77777778 1. 0.11111111 1. ]
[0.33333333 0.77777778 0.55555556 0.33333333 0.22222222 0.33333333
1. 0.55555556 0. 1. ]
[0.33333333 0.77777778 0.77777778 0.44444444 0.33333333 0.44444444
1. 0.33333333 0. 1. ]]
```
# STEP TWO: Model training and predictions
# What coefficients can we get? (clue: we have already implemented this)
# note: don't forget to use all the hyperparameters defined above
coefficients = None
# YOUR CODE HERE
raise NotImplementedError()
# Initialize the predicted probabilities list
probas = None
# YOUR CODE HERE
raise NotImplementedError()
# What are the predicted probabilities on the training data?
for x in None: # iterate through the training data x_train
probas.append(None) # append the list the predicted probability (clue: we already implemented this)
# YOUR CODE HERE
raise NotImplementedError()
# If we had to say whether a cells had breast cancer, what are the predictions?
# clue 1: Hard assign the predicted probabilities by rounding them to the nearest integer
# clue 2: use np.round()
preds = None
# YOUR CODE HERE
raise NotImplementedError()
print("\nThe last three coefficients:")
print(coefficients[-3:])
print("\nThe last three obtained probas:")
print(probas[-3:])
print("\nThe last three predictions:")
print(preds[-3:])
```
Expected output:
>epoch=0, learning_rate=0.010, error=0.617
>epoch=10, learning_rate=0.010, error=0.209
>epoch=20, learning_rate=0.010, error=0.143
>epoch=30, learning_rate=0.010, error=0.114
>epoch=40, learning_rate=0.010, error=0.097
>epoch=50, learning_rate=0.010, error=0.086
>epoch=60, learning_rate=0.010, error=0.077
>epoch=70, learning_rate=0.010, error=0.071
>epoch=80, learning_rate=0.010, error=0.066
>epoch=90, learning_rate=0.010, error=0.062
The last three coefficients:
[0.70702475 0.33306501 3.27480969]
The last three obtained probas:
[0.9679181578309998, 0.9356364708465178, 0.9482109014966041]
The last three predictions:
[1. 1. 1.]
```
# STEP THREE: Results analysis
# How many cells were predicted to have breast cancer? (clue: use preds and len() or .shape)
n_predicted_cancer = None
# YOUR CODE HERE
raise NotImplementedError()
# How many cells with cancer were correctly detected? (clue: use y_train, preds and len() or .shape)
n_correct_cancer_predictions = None
# YOUR CODE HERE
raise NotImplementedError()
print("Number of correct cancer predictions: %i" % n_correct_cancer_predictions)
```
Expected output:
Number of correct cancer predictions: 239
```
print('You have a dataset with %s cells with cancer and %s healthy cells. \n\n'
'After analysing the data and training your own logistic regression classifier you find out that it correctly '
'identified %s out of %s cancer cells which were all of them. You feel very lucky and happy. However, shortly '
'after you get somewhat suspicious after getting such amazing results. You feel that they should not be '
'that good, but you do not know how to be sure of it. This, because you trained and tested on the same '
'dataset, which does not seem right! You say to yourself that you will definitely give your best focus when '
'doing the next Small Learning Unit 11, which will tackle exactly that.' %
(n_cancer, n_healthy, n_predicted_cancer, n_correct_cancer_predictions))
assert np.allclose(probas[:3], np.array([0.05075437808498781, 0.30382227212694596, 0.05238389294132284]))
assert np.isclose(n_predicted_cancer, 239)
assert np.allclose(coefficients[:3], np.array([-3.22309346, 0.40712798, 0.80696792]))
assert np.isclose(n_correct_cancer_predictions, 239)
```
|
github_jupyter
|
# Railroad Diagrams
The code in this notebook helps with drawing syntax diagrams. It is a (slightly customized) copy of the [excellent library from Tab Atkins Jr.](https://github.com/tabatkins/railroad-diagrams), which unfortunately is not available as a Python package.
**Prerequisites**
* This notebook needs some understanding of advanced concepts in Python and graphics, notably
* classes
* the Python `with` statement
* Scalable Vector Graphics
```
import fuzzingbook_utils
import re
import io
import math  # needed by Path.arc_8 below
class C:
# Display constants
DEBUG = False # if true, writes some debug information into attributes
VS = 8 # minimum vertical separation between things. For a 3px stroke, must be at least 4
AR = 10 # radius of arcs
DIAGRAM_CLASS = 'railroad-diagram' # class to put on the root <svg>
STROKE_ODD_PIXEL_LENGTH = True # is the stroke width an odd (1px, 3px, etc) pixel length?
INTERNAL_ALIGNMENT = 'center' # how to align items when they have extra space. left/right/center
CHAR_WIDTH = 8.5 # width of each monospace character. play until you find the right value for your font
COMMENT_CHAR_WIDTH = 7 # comments are in smaller text by default
DEFAULT_STYLE = '''\
svg.railroad-diagram {
background-color:hsl(100,100%,100%);
}
svg.railroad-diagram path {
stroke-width:3;
stroke:black;
fill:rgba(0,0,0,0);
}
svg.railroad-diagram text {
font:bold 14px monospace;
text-anchor:middle;
}
svg.railroad-diagram text.label{
text-anchor:start;
}
svg.railroad-diagram text.comment{
font:italic 12px monospace;
}
svg.railroad-diagram rect{
stroke-width:3;
stroke:black;
fill:hsl(0,62%,82%);
}
'''
def e(text):
text = re.sub(r"&",'&', str(text))
text = re.sub(r"<",'<', str(text))
text = re.sub(r">",'>', str(text))
return str(text)
def determineGaps(outer, inner):
diff = outer - inner
if C.INTERNAL_ALIGNMENT == 'left':
return 0, diff
elif C.INTERNAL_ALIGNMENT == 'right':
return diff, 0
else:
return diff/2, diff/2
def doubleenumerate(seq):
length = len(list(seq))
for i,item in enumerate(seq):
yield i, i-length, item
def addDebug(el):
if not C.DEBUG:
return
el.attrs['data-x'] = "{0} w:{1} h:{2}/{3}/{4}".format(type(el).__name__, el.width, el.up, el.height, el.down)
class DiagramItem(object):
def __init__(self, name, attrs=None, text=None):
self.name = name
# up = distance it projects above the entry line
# height = distance between the entry/exit lines
# down = distance it projects below the exit line
self.height = 0
self.attrs = attrs or {}
self.children = [text] if text else []
self.needsSpace = False
def format(self, x, y, width):
raise NotImplementedError # Virtual
def addTo(self, parent):
parent.children.append(self)
return self
def writeSvg(self, write):
write(u'<{0}'.format(self.name))
for name, value in sorted(self.attrs.items()):
write(u' {0}="{1}"'.format(name, e(value)))
write(u'>')
if self.name in ["g", "svg"]:
write(u'\n')
for child in self.children:
if isinstance(child, DiagramItem):
child.writeSvg(write)
else:
write(e(child))
write(u'</{0}>'.format(self.name))
def __eq__(self, other):
return type(self) == type(other) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class Path(DiagramItem):
def __init__(self, x, y):
self.x = x
self.y = y
DiagramItem.__init__(self, 'path', {'d': 'M%s %s' % (x, y)})
def m(self, x, y):
self.attrs['d'] += 'm{0} {1}'.format(x,y)
return self
def l(self, x, y):
self.attrs['d'] += 'l{0} {1}'.format(x,y)
return self
def h(self, val):
self.attrs['d'] += 'h{0}'.format(val)
return self
def right(self, val):
return self.h(max(0, val))
def left(self, val):
return self.h(-max(0, val))
def v(self, val):
self.attrs['d'] += 'v{0}'.format(val)
return self
def down(self, val):
return self.v(max(0, val))
def up(self, val):
return self.v(-max(0, val))
def arc_8(self, start, dir):
# 1/8 of a circle
arc = C.AR
s2 = 1/math.sqrt(2) * arc
s2inv = (arc - s2)
path = "a {0} {0} 0 0 {1} ".format(arc, "1" if dir == 'cw' else "0")
sd = start+dir
if sd == 'ncw':
offset = [s2, s2inv]
elif sd == 'necw':
offset = [s2inv, s2]
elif sd == 'ecw':
offset = [-s2inv, s2]
elif sd == 'secw':
offset = [-s2, s2inv]
elif sd == 'scw':
offset = [-s2, -s2inv]
elif sd == 'swcw':
offset = [-s2inv, -s2]
elif sd == 'wcw':
offset = [s2inv, -s2]
elif sd == 'nwcw':
offset = [s2, -s2inv]
elif sd == 'nccw':
offset = [-s2, s2inv]
elif sd == 'nwccw':
offset = [-s2inv, s2]
elif sd == 'wccw':
offset = [s2inv, s2]
elif sd == 'swccw':
offset = [s2, s2inv]
elif sd == 'sccw':
offset = [s2, -s2inv]
elif sd == 'seccw':
offset = [s2inv, -s2]
elif sd == 'eccw':
offset = [-s2inv, -s2]
elif sd == 'neccw':
offset = [-s2, -s2inv]
path += " ".join(str(x) for x in offset)
self.attrs['d'] += path
return self
def arc(self, sweep):
x = C.AR
y = C.AR
if sweep[0] == 'e' or sweep[1] == 'w':
x *= -1
if sweep[0] == 's' or sweep[1] == 'n':
y *= -1
cw = 1 if sweep == 'ne' or sweep == 'es' or sweep == 'sw' or sweep == 'wn' else 0
self.attrs['d'] += 'a{0} {0} 0 0 {1} {2} {3}'.format(C.AR, cw, x, y)
return self
def format(self):
self.attrs['d'] += 'h.5'
return self
def __repr__(self):
return 'Path(%r, %r)' % (self.x, self.y)
def wrapString(value):
return value if isinstance(value, DiagramItem) else Terminal(value)
class Style(DiagramItem):
def __init__(self, css):
self.name = 'style'
self.css = css
self.height = 0
self.width = 0
self.needsSpace = False
def __repr__(self):
return 'Style(%r)' % self.css
def format(self, x, y, width):
return self
def writeSvg(self, write):
# Write included stylesheet as CDATA. See https://developer.mozilla.org/en-US/docs/Web/SVG/Element/style
cdata = u'/* <![CDATA[ */\n{css}\n/* ]]> */\n'.format(css=self.css)
write(u'<style>{cdata}</style>'.format(cdata=cdata))
class Diagram(DiagramItem):
def __init__(self, *items, **kwargs):
# Accepts a type=[simple|complex] kwarg
DiagramItem.__init__(self, 'svg', {'class': C.DIAGRAM_CLASS, 'xmlns': "http://www.w3.org/2000/svg"})
self.type = kwargs.get("type", "simple")
self.items = [wrapString(item) for item in items]
if items and not isinstance(items[0], Start):
self.items.insert(0, Start(self.type))
if items and not isinstance(items[-1], End):
self.items.append(End(self.type))
self.css = kwargs.get("css", C.DEFAULT_STYLE)
if self.css:
self.items.insert(0, Style(self.css))
self.up = 0
self.down = 0
self.height = 0
self.width = 0
for item in self.items:
if isinstance(item, Style):
continue
self.width += item.width + (20 if item.needsSpace else 0)
self.up = max(self.up, item.up - self.height)
self.height += item.height
self.down = max(self.down - item.height, item.down)
if self.items[0].needsSpace:
self.width -= 10
if self.items[-1].needsSpace:
self.width -= 10
self.formatted = False
def __repr__(self):
if self.css:
items = ', '.join(map(repr, self.items[2:-1]))
else:
items = ', '.join(map(repr, self.items[1:-1]))
pieces = [] if not items else [items]
if self.css != C.DEFAULT_STYLE:
pieces.append('css=%r' % self.css)
if self.type != 'simple':
pieces.append('type=%r' % self.type)
return 'Diagram(%s)' % ', '.join(pieces)
def format(self, paddingTop=20, paddingRight=None, paddingBottom=None, paddingLeft=None):
if paddingRight is None:
paddingRight = paddingTop
if paddingBottom is None:
paddingBottom = paddingTop
if paddingLeft is None:
paddingLeft = paddingRight
x = paddingLeft
y = paddingTop + self.up
g = DiagramItem('g')
if C.STROKE_ODD_PIXEL_LENGTH:
g.attrs['transform'] = 'translate(.5 .5)'
for item in self.items:
if item.needsSpace:
Path(x, y).h(10).addTo(g)
x += 10
item.format(x, y, item.width).addTo(g)
x += item.width
y += item.height
if item.needsSpace:
Path(x, y).h(10).addTo(g)
x += 10
self.attrs['width'] = self.width + paddingLeft + paddingRight
self.attrs['height'] = self.up + self.height + self.down + paddingTop + paddingBottom
self.attrs['viewBox'] = "0 0 {width} {height}".format(**self.attrs)
g.addTo(self)
self.formatted = True
return self
def writeSvg(self, write):
if not self.formatted:
self.format()
return DiagramItem.writeSvg(self, write)
def parseCSSGrammar(self, text):
token_patterns = {
'keyword': r"[\w-]+\(?",
'type': r"<[\w-]+(\(\))?>",
'char': r"[/,()]",
'literal': r"'(.)'",
'openbracket': r"\[",
'closebracket': r"\]",
'closebracketbang': r"\]!",
'bar': r"\|",
'doublebar': r"\|\|",
'doubleand': r"&&",
'multstar': r"\*",
'multplus': r"\+",
'multhash': r"#",
'multnum1': r"{\s*(\d+)\s*}",
'multnum2': r"{\s*(\d+)\s*,\s*(\d*)\s*}",
'multhashnum1': r"#{\s*(\d+)\s*}",
'multhashnum2': r"{\s*(\d+)\s*,\s*(\d*)\s*}"
}
class Sequence(DiagramItem):
def __init__(self, *items):
DiagramItem.__init__(self, 'g')
self.items = [wrapString(item) for item in items]
self.needsSpace = True
self.up = 0
self.down = 0
self.height = 0
self.width = 0
for item in self.items:
self.width += item.width + (20 if item.needsSpace else 0)
self.up = max(self.up, item.up - self.height)
self.height += item.height
self.down = max(self.down - item.height, item.down)
if self.items[0].needsSpace:
self.width -= 10
if self.items[-1].needsSpace:
self.width -= 10
addDebug(self)
def __repr__(self):
items = ', '.join(map(repr, self.items))
return 'Sequence(%s)' % items
def format(self, x, y, width):
leftGap, rightGap = determineGaps(width, self.width)
Path(x, y).h(leftGap).addTo(self)
Path(x+leftGap+self.width, y+self.height).h(rightGap).addTo(self)
x += leftGap
for i,item in enumerate(self.items):
if item.needsSpace and i > 0:
Path(x, y).h(10).addTo(self)
x += 10
item.format(x, y, item.width).addTo(self)
x += item.width
y += item.height
if item.needsSpace and i < len(self.items)-1:
Path(x, y).h(10).addTo(self)
x += 10
return self
class Stack(DiagramItem):
def __init__(self, *items):
DiagramItem.__init__(self, 'g')
self.items = [wrapString(item) for item in items]
self.needsSpace = True
self.width = max(item.width + (20 if item.needsSpace else 0) for item in self.items)
# pretty sure that space calc is totes wrong
if len(self.items) > 1:
self.width += C.AR*2
self.up = self.items[0].up
self.down = self.items[-1].down
self.height = 0
last = len(self.items) - 1
for i,item in enumerate(self.items):
self.height += item.height
if i > 0:
self.height += max(C.AR*2, item.up + C.VS)
if i < last:
self.height += max(C.AR*2, item.down + C.VS)
addDebug(self)
def __repr__(self):
items = ', '.join(repr(item) for item in self.items)
return 'Stack(%s)' % items
def format(self, x, y, width):
leftGap, rightGap = determineGaps(width, self.width)
Path(x, y).h(leftGap).addTo(self)
x += leftGap
xInitial = x
if len(self.items) > 1:
Path(x, y).h(C.AR).addTo(self)
x += C.AR
innerWidth = self.width - C.AR*2
else:
innerWidth = self.width
for i,item in enumerate(self.items):
item.format(x, y, innerWidth).addTo(self)
x += innerWidth
y += item.height
if i != len(self.items)-1:
(Path(x,y)
.arc('ne').down(max(0, item.down + C.VS - C.AR*2))
.arc('es').left(innerWidth)
.arc('nw').down(max(0, self.items[i+1].up + C.VS - C.AR*2))
.arc('ws').addTo(self))
y += max(item.down + C.VS, C.AR*2) + max(self.items[i+1].up + C.VS, C.AR*2)
x = xInitial + C.AR
if len(self.items) > 1:
Path(x, y).h(C.AR).addTo(self)
x += C.AR
Path(x, y).h(rightGap).addTo(self)
return self
class OptionalSequence(DiagramItem):
def __new__(cls, *items):
if len(items) <= 1:
return Sequence(*items)
else:
return super(OptionalSequence, cls).__new__(cls)
def __init__(self, *items):
DiagramItem.__init__(self, 'g')
self.items = [wrapString(item) for item in items]
self.needsSpace = False
self.width = 0
self.up = 0
self.height = sum(item.height for item in self.items)
self.down = self.items[0].down
heightSoFar = 0
for i,item in enumerate(self.items):
self.up = max(self.up, max(C.AR * 2, item.up + C.VS) - heightSoFar)
heightSoFar += item.height
if i > 0:
self.down = max(self.height + self.down, heightSoFar + max(C.AR*2, item.down + C.VS)) - self.height
itemWidth = item.width + (20 if item.needsSpace else 0)
if i == 0:
self.width += C.AR + max(itemWidth, C.AR)
else:
self.width += C.AR*2 + max(itemWidth, C.AR) + C.AR
addDebug(self)
def __repr__(self):
items = ', '.join(repr(item) for item in self.items)
return 'OptionalSequence(%s)' % items
def format(self, x, y, width):
leftGap, rightGap = determineGaps(width, self.width)
Path(x, y).right(leftGap).addTo(self)
Path(x + leftGap + self.width, y + self.height).right(rightGap).addTo(self)
x += leftGap
upperLineY = y - self.up
last = len(self.items) - 1
for i,item in enumerate(self.items):
itemSpace = 10 if item.needsSpace else 0
itemWidth = item.width + itemSpace
if i == 0:
# Upper skip
(Path(x,y)
.arc('se')
.up(y - upperLineY - C.AR*2)
.arc('wn')
.right(itemWidth - C.AR)
.arc('ne')
.down(y + item.height - upperLineY - C.AR*2)
.arc('ws')
.addTo(self))
# Straight line
(Path(x, y)
.right(itemSpace + C.AR)
.addTo(self))
item.format(x + itemSpace + C.AR, y, item.width).addTo(self)
x += itemWidth + C.AR
y += item.height
elif i < last:
# Upper skip
(Path(x, upperLineY)
.right(C.AR*2 + max(itemWidth, C.AR) + C.AR)
.arc('ne')
.down(y - upperLineY + item.height - C.AR*2)
.arc('ws')
.addTo(self))
# Straight line
(Path(x,y)
.right(C.AR*2)
.addTo(self))
item.format(x + C.AR*2, y, item.width).addTo(self)
(Path(x + item.width + C.AR*2, y + item.height)
.right(itemSpace + C.AR)
.addTo(self))
# Lower skip
(Path(x,y)
.arc('ne')
.down(item.height + max(item.down + C.VS, C.AR*2) - C.AR*2)
.arc('ws')
.right(itemWidth - C.AR)
.arc('se')
.up(item.down + C.VS - C.AR*2)
.arc('wn')
.addTo(self))
x += C.AR*2 + max(itemWidth, C.AR) + C.AR
y += item.height
else:
# Straight line
(Path(x, y)
.right(C.AR*2)
.addTo(self))
item.format(x + C.AR*2, y, item.width).addTo(self)
(Path(x + C.AR*2 + item.width, y + item.height)
.right(itemSpace + C.AR)
.addTo(self))
# Lower skip
(Path(x,y)
.arc('ne')
.down(item.height + max(item.down + C.VS, C.AR*2) - C.AR*2)
.arc('ws')
.right(itemWidth - C.AR)
.arc('se')
.up(item.down + C.VS - C.AR*2)
.arc('wn')
.addTo(self))
return self
class AlternatingSequence(DiagramItem):
def __new__(cls, *items):
if len(items) == 2:
return super(AlternatingSequence, cls).__new__(cls)
else:
raise Exception("AlternatingSequence takes exactly two arguments got " + len(items))
def __init__(self, *items):
DiagramItem.__init__(self, 'g')
self.items = [wrapString(item) for item in items]
self.needsSpace = False
arc = C.AR
vert = C.VS
first = self.items[0]
second = self.items[1]
arcX = 1 / math.sqrt(2) * arc * 2
arcY = (1 - 1 / math.sqrt(2)) * arc * 2
crossY = max(arc, vert)
crossX = (crossY - arcY) + arcX
firstOut = max(arc + arc, crossY/2 + arc + arc, crossY/2 + vert + first.down)
self.up = firstOut + first.height + first.up
secondIn = max(arc + arc, crossY/2 + arc + arc, crossY/2 + vert + second.up)
self.down = secondIn + second.height + second.down
self.height = 0
firstWidth = (20 if first.needsSpace else 0) + first.width
secondWidth = (20 if second.needsSpace else 0) + second.width
self.width = 2*arc + max(firstWidth, crossX, secondWidth) + 2*arc
addDebug(self)
def __repr__(self):
items = ', '.join(repr(item) for item in self.items)
return 'AlternatingSequence(%s)' % items
def format(self, x, y, width):
arc = C.AR
gaps = determineGaps(width, self.width)
Path(x,y).right(gaps[0]).addTo(self)
x += gaps[0]
Path(x+self.width, y).right(gaps[1]).addTo(self)
# bounding box
# Path(x+gaps[0], y).up(self.up).right(self.width).down(self.up+self.down).left(self.width).up(self.down).addTo(self)
first = self.items[0]
second = self.items[1]
# top
firstIn = self.up - first.up
firstOut = self.up - first.up - first.height
Path(x,y).arc('se').up(firstIn-2*arc).arc('wn').addTo(self)
first.format(x + 2*arc, y - firstIn, self.width - 4*arc).addTo(self)
Path(x + self.width - 2*arc, y - firstOut).arc('ne').down(firstOut - 2*arc).arc('ws').addTo(self)
# bottom
secondIn = self.down - second.down - second.height
secondOut = self.down - second.down
Path(x,y).arc('ne').down(secondIn - 2*arc).arc('ws').addTo(self)
second.format(x + 2*arc, y + secondIn, self.width - 4*arc).addTo(self)
Path(x + self.width - 2*arc, y + secondOut).arc('se').up(secondOut - 2*arc).arc('wn').addTo(self)
# crossover
arcX = 1 / math.sqrt(2) * arc * 2
arcY = (1 - 1 / math.sqrt(2)) * arc * 2
crossY = max(arc, C.VS)
crossX = (crossY - arcY) + arcX
crossBar = (self.width - 4*arc - crossX)/2
(Path(x+arc, y - crossY/2 - arc).arc('ws').right(crossBar)
.arc_8('n', 'cw').l(crossX - arcX, crossY - arcY).arc_8('sw', 'ccw')
.right(crossBar).arc('ne').addTo(self))
(Path(x+arc, y + crossY/2 + arc).arc('wn').right(crossBar)
.arc_8('s', 'ccw').l(crossX - arcX, -(crossY - arcY)).arc_8('nw', 'cw')
.right(crossBar).arc('se').addTo(self))
return self
class Choice(DiagramItem):
def __init__(self, default, *items):
DiagramItem.__init__(self, 'g')
assert default < len(items)
self.default = default
self.items = [wrapString(item) for item in items]
self.width = C.AR * 4 + max(item.width for item in self.items)
self.up = self.items[0].up
self.down = self.items[-1].down
self.height = self.items[default].height
for i, item in enumerate(self.items):
if i in [default-1, default+1]:
arcs = C.AR*2
else:
arcs = C.AR
if i < default:
self.up += max(arcs, item.height + item.down + C.VS + self.items[i+1].up)
elif i == default:
continue
else:
self.down += max(arcs, item.up + C.VS + self.items[i-1].down + self.items[i-1].height)
self.down -= self.items[default].height # already counted in self.height
addDebug(self)
def __repr__(self):
items = ', '.join(repr(item) for item in self.items)
return 'Choice(%r, %s)' % (self.default, items)
def format(self, x, y, width):
leftGap, rightGap = determineGaps(width, self.width)
# Hook up the two sides if self is narrower than its stated width.
Path(x, y).h(leftGap).addTo(self)
Path(x + leftGap + self.width, y + self.height).h(rightGap).addTo(self)
x += leftGap
innerWidth = self.width - C.AR * 4
default = self.items[self.default]
# Do the elements that curve above
above = self.items[:self.default][::-1]
if above:
distanceFromY = max(
C.AR * 2,
default.up
+ C.VS
+ above[0].down
+ above[0].height)
for i,ni,item in doubleenumerate(above):
Path(x, y).arc('se').up(distanceFromY - C.AR * 2).arc('wn').addTo(self)
item.format(x + C.AR * 2, y - distanceFromY, innerWidth).addTo(self)
Path(x + C.AR * 2 + innerWidth, y - distanceFromY + item.height).arc('ne') \
.down(distanceFromY - item.height + default.height - C.AR*2).arc('ws').addTo(self)
if ni < -1:
distanceFromY += max(
C.AR,
item.up
+ C.VS
+ above[i+1].down
+ above[i+1].height)
# Do the straight-line path.
Path(x, y).right(C.AR * 2).addTo(self)
self.items[self.default].format(x + C.AR * 2, y, innerWidth).addTo(self)
Path(x + C.AR * 2 + innerWidth, y+self.height).right(C.AR * 2).addTo(self)
# Do the elements that curve below
below = self.items[self.default + 1:]
if below:
distanceFromY = max(
C.AR * 2,
default.height
+ default.down
+ C.VS
+ below[0].up)
for i, item in enumerate(below):
Path(x, y).arc('ne').down(distanceFromY - C.AR * 2).arc('ws').addTo(self)
item.format(x + C.AR * 2, y + distanceFromY, innerWidth).addTo(self)
Path(x + C.AR * 2 + innerWidth, y + distanceFromY + item.height).arc('se') \
.up(distanceFromY - C.AR * 2 + item.height - default.height).arc('wn').addTo(self)
distanceFromY += max(
C.AR,
item.height
+ item.down
+ C.VS
+ (below[i + 1].up if i+1 < len(below) else 0))
return self
class MultipleChoice(DiagramItem):
def __init__(self, default, type, *items):
DiagramItem.__init__(self, 'g')
assert 0 <= default < len(items)
assert type in ["any", "all"]
self.default = default
self.type = type
self.needsSpace = True
self.items = [wrapString(item) for item in items]
self.innerWidth = max(item.width for item in self.items)
self.width = 30 + C.AR + self.innerWidth + C.AR + 20
self.up = self.items[0].up
self.down = self.items[-1].down
self.height = self.items[default].height
for i, item in enumerate(self.items):
if i in [default-1, default+1]:
minimum = 10 + C.AR
else:
minimum = C.AR
if i < default:
self.up += max(minimum, item.height + item.down + C.VS + self.items[i+1].up)
elif i == default:
continue
else:
self.down += max(minimum, item.up + C.VS + self.items[i-1].down + self.items[i-1].height)
self.down -= self.items[default].height # already counted in self.height
addDebug(self)
def __repr__(self):
items = ', '.join(map(repr, self.items))
return 'MultipleChoice(%r, %r, %s)' % (self.default, self.type, items)
def format(self, x, y, width):
leftGap, rightGap = determineGaps(width, self.width)
# Hook up the two sides if self is narrower than its stated width.
Path(x, y).h(leftGap).addTo(self)
Path(x + leftGap + self.width, y + self.height).h(rightGap).addTo(self)
x += leftGap
default = self.items[self.default]
# Do the elements that curve above
above = self.items[:self.default][::-1]
if above:
distanceFromY = max(
10 + C.AR,
default.up
+ C.VS
+ above[0].down
+ above[0].height)
for i,ni,item in doubleenumerate(above):
(Path(x + 30, y)
.up(distanceFromY - C.AR)
.arc('wn')
.addTo(self))
item.format(x + 30 + C.AR, y - distanceFromY, self.innerWidth).addTo(self)
(Path(x + 30 + C.AR + self.innerWidth, y - distanceFromY + item.height)
.arc('ne')
.down(distanceFromY - item.height + default.height - C.AR - 10)
.addTo(self))
if ni < -1:
distanceFromY += max(
C.AR,
item.up
+ C.VS
+ above[i+1].down
+ above[i+1].height)
# Do the straight-line path.
Path(x + 30, y).right(C.AR).addTo(self)
self.items[self.default].format(x + 30 + C.AR, y, self.innerWidth).addTo(self)
Path(x + 30 + C.AR + self.innerWidth, y + self.height).right(C.AR).addTo(self)
# Do the elements that curve below
below = self.items[self.default + 1:]
if below:
distanceFromY = max(
10 + C.AR,
default.height
+ default.down
+ C.VS
+ below[0].up)
for i, item in enumerate(below):
(Path(x+30, y)
.down(distanceFromY - C.AR)
.arc('ws')
.addTo(self))
item.format(x + 30 + C.AR, y + distanceFromY, self.innerWidth).addTo(self)
(Path(x + 30 + C.AR + self.innerWidth, y + distanceFromY + item.height)
.arc('se')
.up(distanceFromY - C.AR + item.height - default.height - 10)
.addTo(self))
distanceFromY += max(
C.AR,
item.height
+ item.down
+ C.VS
+ (below[i + 1].up if i+1 < len(below) else 0))
text = DiagramItem('g', attrs={"class": "diagram-text"}).addTo(self)
DiagramItem('title', text="take one or more branches, once each, in any order" if self.type=="any" else "take all branches, once each, in any order").addTo(text)
DiagramItem('path', attrs={
"d": "M {x} {y} h -26 a 4 4 0 0 0 -4 4 v 12 a 4 4 0 0 0 4 4 h 26 z".format(x=x+30, y=y-10),
"class": "diagram-text"
}).addTo(text)
DiagramItem('text', text="1+" if self.type=="any" else "all", attrs={
"x": x + 15,
"y": y + 4,
"class": "diagram-text"
}).addTo(text)
DiagramItem('path', attrs={
"d": "M {x} {y} h 16 a 4 4 0 0 1 4 4 v 12 a 4 4 0 0 1 -4 4 h -16 z".format(x=x+self.width-20, y=y-10),
"class": "diagram-text"
}).addTo(text)
DiagramItem('text', text=u"↺", attrs={
"x": x + self.width - 10,
"y": y + 4,
"class": "diagram-arrow"
}).addTo(text)
return self
class HorizontalChoice(DiagramItem):
def __new__(cls, *items):
if len(items) <= 1:
return Sequence(*items)
else:
return super(HorizontalChoice, cls).__new__(cls)
def __init__(self, *items):
DiagramItem.__init__(self, 'g')
self.items = [wrapString(item) for item in items]
allButLast = self.items[:-1]
middles = self.items[1:-1]
first = self.items[0]
last = self.items[-1]
self.needsSpace = False
self.width = (C.AR # starting track
+ C.AR*2 * (len(self.items)-1) # inbetween tracks
+ sum(x.width + (20 if x.needsSpace else 0) for x in self.items) #items
+ (C.AR if last.height > 0 else 0) # needs space to curve up
+ C.AR) #ending track
# Always exits at entrance height
self.height = 0
# All but the last have a track running above them
self._upperTrack = max(
C.AR*2,
C.VS,
max(x.up for x in allButLast) + C.VS
)
self.up = max(self._upperTrack, last.up)
# All but the first have a track running below them
# Last either straight-lines or curves up, so has different calculation
self._lowerTrack = max(
C.VS,
max(x.height+max(x.down+C.VS, C.AR*2) for x in middles) if middles else 0,
last.height + last.down + C.VS
)
if first.height < self._lowerTrack:
# Make sure there's at least 2*C.AR room between first exit and lower track
self._lowerTrack = max(self._lowerTrack, first.height + C.AR*2)
self.down = max(self._lowerTrack, first.height + first.down)
addDebug(self)
def format(self, x, y, width):
# Hook up the two sides if self is narrower than its stated width.
leftGap, rightGap = determineGaps(width, self.width)
Path(x, y).h(leftGap).addTo(self)
Path(x + leftGap + self.width, y + self.height).h(rightGap).addTo(self)
x += leftGap
first = self.items[0]
last = self.items[-1]
# upper track
upperSpan = (sum(x.width+(20 if x.needsSpace else 0) for x in self.items[:-1])
+ (len(self.items) - 2) * C.AR*2
- C.AR)
(Path(x,y)
.arc('se')
.up(self._upperTrack - C.AR*2)
.arc('wn')
.h(upperSpan)
.addTo(self))
# lower track
lowerSpan = (sum(x.width+(20 if x.needsSpace else 0) for x in self.items[1:])
+ (len(self.items) - 2) * C.AR*2
+ (C.AR if last.height > 0 else 0)
- C.AR)
lowerStart = x + C.AR + first.width+(20 if first.needsSpace else 0) + C.AR*2
(Path(lowerStart, y+self._lowerTrack)
.h(lowerSpan)
.arc('se')
.up(self._lowerTrack - C.AR*2)
.arc('wn')
.addTo(self))
# Items
for i, item in enumerate(self.items):
# input track
if i == 0:
(Path(x,y)
.h(C.AR)
.addTo(self))
x += C.AR
else:
(Path(x, y - self._upperTrack)
.arc('ne')
.v(self._upperTrack - C.AR*2)
.arc('ws')
.addTo(self))
x += C.AR*2
# item
itemWidth = item.width + (20 if item.needsSpace else 0)
item.format(x, y, itemWidth).addTo(self)
x += itemWidth
# output track
if i == len(self.items)-1:
if item.height == 0:
(Path(x,y)
.h(C.AR)
.addTo(self))
else:
(Path(x,y+item.height)
.arc('se')
.addTo(self))
elif i == 0 and item.height > self._lowerTrack:
# Needs to arc up to meet the lower track, not down.
if item.height - self._lowerTrack >= C.AR*2:
(Path(x, y+item.height)
.arc('se')
.v(self._lowerTrack - item.height + C.AR*2)
.arc('wn')
.addTo(self))
else:
# Not enough space to fit two arcs
# so just bail and draw a straight line for now.
(Path(x, y+item.height)
.l(C.AR*2, self._lowerTrack - item.height)
.addTo(self))
else:
(Path(x, y+item.height)
.arc('ne')
.v(self._lowerTrack - item.height - C.AR*2)
.arc('ws')
.addTo(self))
return self
def Optional(item, skip=False):
return Choice(0 if skip else 1, Skip(), item)
class OneOrMore(DiagramItem):
def __init__(self, item, repeat=None):
DiagramItem.__init__(self, 'g')
repeat = repeat or Skip()
self.item = wrapString(item)
self.rep = wrapString(repeat)
self.width = max(self.item.width, self.rep.width) + C.AR * 2
self.height = self.item.height
self.up = self.item.up
self.down = max(
C.AR * 2,
self.item.down + C.VS + self.rep.up + self.rep.height + self.rep.down)
self.needsSpace = True
addDebug(self)
def format(self, x, y, width):
leftGap, rightGap = determineGaps(width, self.width)
# Hook up the two sides if self is narrower than its stated width.
Path(x, y).h(leftGap).addTo(self)
Path(x + leftGap + self.width, y +self.height).h(rightGap).addTo(self)
x += leftGap
# Draw item
Path(x, y).right(C.AR).addTo(self)
self.item.format(x + C.AR, y, self.width - C.AR * 2).addTo(self)
Path(x + self.width - C.AR, y + self.height).right(C.AR).addTo(self)
# Draw repeat arc
distanceFromY = max(C.AR*2, self.item.height + self.item.down + C.VS + self.rep.up)
Path(x + C.AR, y).arc('nw').down(distanceFromY - C.AR * 2) \
.arc('ws').addTo(self)
self.rep.format(x + C.AR, y + distanceFromY, self.width - C.AR*2).addTo(self)
Path(x + self.width - C.AR, y + distanceFromY + self.rep.height).arc('se') \
.up(distanceFromY - C.AR * 2 + self.rep.height - self.item.height).arc('en').addTo(self)
return self
def __repr__(self):
return 'OneOrMore(%r, repeat=%r)' % (self.item, self.rep)
def ZeroOrMore(item, repeat=None, skip=False):
result = Optional(OneOrMore(item, repeat), skip)
return result
class Start(DiagramItem):
def __init__(self, type="simple", label=None):
DiagramItem.__init__(self, 'g')
if label:
self.width = max(20, len(label) * C.CHAR_WIDTH + 10)
else:
self.width = 20
self.up = 10
self.down = 10
self.type = type
self.label = label
addDebug(self)
def format(self, x, y, _width):
path = Path(x, y-10)
if self.type == "complex":
path.down(20).m(0, -10).right(self.width).addTo(self)
else:
path.down(20).m(10, -20).down(20).m(-10, -10).right(self.width).addTo(self)
if self.label:
DiagramItem('text', attrs={"x":x, "y":y-15, "style":"text-anchor:start"}, text=self.label).addTo(self)
return self
def __repr__(self):
return 'Start(type=%r, label=%r)' % (self.type, self.label)
class End(DiagramItem):
def __init__(self, type="simple"):
DiagramItem.__init__(self, 'path')
self.width = 20
self.up = 10
self.down = 10
self.type = type
addDebug(self)
def format(self, x, y, _width):
if self.type == "simple":
self.attrs['d'] = 'M {0} {1} h 20 m -10 -10 v 20 m 10 -20 v 20'.format(x, y)
elif self.type == "complex":
self.attrs['d'] = 'M {0} {1} h 20 m 0 -10 v 20'.format(x, y)
return self
def __repr__(self):
return 'End(type=%r)' % self.type
class Terminal(DiagramItem):
def __init__(self, text, href=None, title=None):
DiagramItem.__init__(self, 'g', {'class': 'terminal'})
self.text = text
self.href = href
self.title = title
self.width = len(text) * C.CHAR_WIDTH + 20
self.up = 11
self.down = 11
self.needsSpace = True
addDebug(self)
def __repr__(self):
return 'Terminal(%r, href=%r, title=%r)' % (self.text, self.href, self.title)
def format(self, x, y, width):
leftGap, rightGap = determineGaps(width, self.width)
# Hook up the two sides if self is narrower than its stated width.
Path(x, y).h(leftGap).addTo(self)
Path(x + leftGap + self.width, y).h(rightGap).addTo(self)
DiagramItem('rect', {'x': x + leftGap, 'y': y - 11, 'width': self.width,
'height': self.up + self.down, 'rx': 10, 'ry': 10}).addTo(self)
text = DiagramItem('text', {'x': x + width / 2, 'y': y + 4}, self.text)
if self.href is not None:
a = DiagramItem('a', {'xlink:href':self.href}, text).addTo(self)
text.addTo(a)
else:
text.addTo(self)
if self.title is not None:
DiagramItem('title', {}, self.title).addTo(self)
return self
class NonTerminal(DiagramItem):
def __init__(self, text, href=None, title=None):
DiagramItem.__init__(self, 'g', {'class': 'non-terminal'})
self.text = text
self.href = href
self.title = title
self.width = len(text) * C.CHAR_WIDTH + 20
self.up = 11
self.down = 11
self.needsSpace = True
addDebug(self)
def __repr__(self):
return 'NonTerminal(%r, href=%r, title=%r)' % (self.text, self.href, self.title)
def format(self, x, y, width):
leftGap, rightGap = determineGaps(width, self.width)
# Hook up the two sides if self is narrower than its stated width.
Path(x, y).h(leftGap).addTo(self)
Path(x + leftGap + self.width, y).h(rightGap).addTo(self)
DiagramItem('rect', {'x': x + leftGap, 'y': y - 11, 'width': self.width,
'height': self.up + self.down}).addTo(self)
text = DiagramItem('text', {'x': x + width / 2, 'y': y + 4}, self.text)
if self.href is not None:
a = DiagramItem('a', {'xlink:href':self.href}, text).addTo(self)
text.addTo(a)
else:
text.addTo(self)
if self.title is not None:
DiagramItem('title', {}, self.title).addTo(self)
return self
class Comment(DiagramItem):
def __init__(self, text, href=None, title=None):
DiagramItem.__init__(self, 'g')
self.text = text
self.href = href
self.title = title
self.width = len(text) * C.COMMENT_CHAR_WIDTH + 10
self.up = 11
self.down = 11
self.needsSpace = True
addDebug(self)
def __repr__(self):
return 'Comment(%r, href=%r, title=%r)' % (self.text, self.href, self.title)
def format(self, x, y, width):
leftGap, rightGap = determineGaps(width, self.width)
# Hook up the two sides if self is narrower than its stated width.
Path(x, y).h(leftGap).addTo(self)
Path(x + leftGap + self.width, y).h(rightGap).addTo(self)
text = DiagramItem('text', {'x': x + width / 2, 'y': y + 5, 'class': 'comment'}, self.text)
if self.href is not None:
a = DiagramItem('a', {'xlink:href':self.href}, text).addTo(self)
text.addTo(a)
else:
text.addTo(self)
if self.title is not None:
DiagramItem('title', {}, self.title).addTo(self)
return self
class Skip(DiagramItem):
def __init__(self):
DiagramItem.__init__(self, 'g')
self.width = 0
self.up = 0
self.down = 0
addDebug(self)
def format(self, x, y, width):
Path(x, y).right(width).addTo(self)
return self
def __repr__(self):
return 'Skip()'
def show_diagram(graph, log=False):
with io.StringIO() as f:
d = Diagram(graph)
if log:
print(d)
d.writeSvg(f.write)
mysvg = f.getvalue()
return mysvg
```
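A minimal usage sketch of the classes above, assuming the supporting definitions that appear earlier in this notebook (`DiagramItem`, `Path`, the constants object `C`, `wrapString`, `addDebug`, `determineGaps`, and the `io`/`math` imports) are available:
```
# Compose a small railroad item from the primitives defined above and render
# it to SVG via show_diagram(), which wraps the item in a Diagram for us.
item = Sequence(
    Terminal("if"),
    NonTerminal("condition"),
    Optional(NonTerminal("else-part"), skip=True),
)
svg = show_diagram(item)
print(svg[:80])  # peek at the start of the generated markup
```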
|
github_jupyter
|
```
# Copyright 2020 Google LLC. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# TensorFlow Cloud - Putting it all together
In this example, we will use all of the features outlined in the [Keras cloud guide](https://www.tensorflow.org/guide/keras/training_keras_models_on_cloud) to train a state-of-the-art model to classify dog breeds using feature extraction. Let's begin by installing TensorFlow Cloud and importing a few important packages.
## Setup
```
!pip install tensorflow-cloud
import datetime
import os
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow_cloud as tfc
import tensorflow_datasets as tfds
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import Model
```
### Cloud Configuration
In order to run TensorFlow Cloud from a Colab notebook, we'll need to upload our [authentication key](https://cloud.google.com/docs/authentication/getting-started) and specify our [Cloud storage bucket](https://cloud.google.com/storage/docs/creating-buckets) for image building and publishing.
```
if not tfc.remote():
from google.colab import files
key_upload = files.upload()
key_path = list(key_upload.keys())[0]
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = key_path
os.system(f"gcloud auth activate-service-account --key-file {key_path}")
GCP_BUCKET = "[your-bucket-name]" #@param {type:"string"}
```
## Model Creation
### Dataset preprocessing
We'll be loading our training data from TensorFlow Datasets:
```
(ds_train, ds_test), metadata = tfds.load(
"stanford_dogs",
split=["train", "test"],
shuffle_files=True,
with_info=True,
as_supervised=True,
)
NUM_CLASSES = metadata.features["label"].num_classes
```
Let's visualize this dataset:
```
print("Number of training samples: %d" % tf.data.experimental.cardinality(ds_train))
print("Number of test samples: %d" % tf.data.experimental.cardinality(ds_test))
print("Number of classes: %d" % NUM_CLASSES)
plt.figure(figsize=(10, 10))
for i, (image, label) in enumerate(ds_train.take(9)):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(image)
plt.title(int(label))
plt.axis("off")
```
Here we will resize and rescale our images to fit into our model's input, as well as create batches.
```
IMG_SIZE = 224
BATCH_SIZE = 64
BUFFER_SIZE = 2
size = (IMG_SIZE, IMG_SIZE)
ds_train = ds_train.map(lambda image, label: (tf.image.resize(image, size), label))
ds_test = ds_test.map(lambda image, label: (tf.image.resize(image, size), label))
def input_preprocess(image, label):
image = tf.keras.applications.resnet50.preprocess_input(image)
return image, label
ds_train = ds_train.map(
input_preprocess, num_parallel_calls=tf.data.experimental.AUTOTUNE
)
ds_train = ds_train.batch(batch_size=BATCH_SIZE, drop_remainder=True)
ds_train = ds_train.prefetch(tf.data.experimental.AUTOTUNE)
ds_test = ds_test.map(input_preprocess)
ds_test = ds_test.batch(batch_size=BATCH_SIZE, drop_remainder=True)
```
### Model Architecture
We're using ResNet50 pretrained on ImageNet, from the Keras Applications module.
```
inputs = tf.keras.layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3))
base_model = tf.keras.applications.ResNet50(
weights="imagenet", include_top=False, input_tensor=inputs
)
x = tf.keras.layers.GlobalAveragePooling2D()(base_model.output)
x = tf.keras.layers.Dropout(0.5)(x)
outputs = tf.keras.layers.Dense(NUM_CLASSES)(x)
model = tf.keras.Model(inputs, outputs)
base_model.trainable = False
```
### Callbacks using Cloud Storage
```
MODEL_PATH = "resnet-dogs"
checkpoint_path = os.path.join("gs://", GCP_BUCKET, MODEL_PATH, "save_at_{epoch}")
tensorboard_path = os.path.join(
"gs://", GCP_BUCKET, "logs", datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
)
callbacks = [
# TensorBoard will store logs for each epoch and graph performance for us.
keras.callbacks.TensorBoard(log_dir=tensorboard_path, histogram_freq=1),
# ModelCheckpoint will save models after each epoch for retrieval later.
keras.callbacks.ModelCheckpoint(checkpoint_path),
# EarlyStopping will terminate training when val_loss ceases to improve.
keras.callbacks.EarlyStopping(monitor="val_loss", patience=3),
]
model.compile(
optimizer=tf.keras.optimizers.Adam(learning_rate=1e-2),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=["accuracy"],
)
```
Here we use `tfc.remote()` to check whether the code is running on Cloud. When it isn't (i.e. when running locally in Colab), we train for a single epoch on a small subset of the data and skip the callbacks, which makes debugging quick and cheap before submitting the full job.
```
if tfc.remote():
epochs = 500
train_data = ds_train
test_data = ds_test
else:
epochs = 1
train_data = ds_train.take(5)
test_data = ds_test.take(5)
callbacks = None
model.fit(
train_data, epochs=epochs, callbacks=callbacks, validation_data=test_data, verbose=2
)
if tfc.remote():
SAVE_PATH = os.path.join("gs://", GCP_BUCKET, MODEL_PATH)
model.save(SAVE_PATH)
```
Our model requires two additional libraries. We'll create a `requirements.txt` which specifies those libraries:
```
requirements = ["tensorflow-datasets", "matplotlib"]
f = open("requirements.txt", 'w')
f.write('\n'.join(requirements))
f.close()
```
Let's add a job label so we can identify this job in the Cloud logs later:
```
job_labels = {"job":"resnet-dogs"}
```
### Train on Cloud
All that's left to do is run our model on Cloud. To recap, our `run()` call enables:
- A model that will be trained and stored on Cloud, including checkpoints
- TensorBoard callback logs that will be accessible through tensorboard.dev
- Specific Python library requirements that will be fulfilled
- Customizable job labels for log documentation
- Real-time streaming logs printed in Colab
- Deeply customizable machine configuration (ours will use two Tesla T4s)
- An automatic resolution of distribution strategy for this configuration
```
tfc.run(
requirements_txt="requirements.txt",
distribution_strategy="auto",
chief_config=tfc.MachineConfig(
cpu_cores=8,
memory=30,
accelerator_type=tfc.AcceleratorType.NVIDIA_TESLA_T4,
accelerator_count=2,
),
docker_config=tfc.DockerConfig(
image_build_bucket=GCP_BUCKET,
),
job_labels=job_labels,
stream_logs=True,
)
```
### Evaluate your model
We'll use the Cloud Storage paths we configured for the callbacks to upload the TensorBoard logs and to retrieve the saved model. TensorBoard logs can be used to monitor training performance in real time.
```
!tensorboard dev upload --logdir $tensorboard_path --name "ResNet Dogs"
if tfc.remote():
model = tf.keras.models.load_model(SAVE_PATH)
model.evaluate(test_data)
```
|
github_jupyter
|
# Distributed Training with Keras
## Import dependencies
```
import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow import keras
import os
print(tf.__version__)
```
## Dataset - Fashion MNIST
```
#datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True)
#mnist_train, mnist_test = datasets['train'], datasets['test']
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
```
## Define a distribution Strategy
```
strategy = tf.distribute.MirroredStrategy()
print('Number of devices: {}'.format(strategy.num_replicas_in_sync))
num_train_examples = len(train_images)#info.splits['train'].num_examples
print(num_train_examples)
num_test_examples = len(test_images) #info.splits['test'].num_examples
print(num_test_examples)
BUFFER_SIZE = 10000
BATCH_SIZE_PER_REPLICA = 64
BATCH_SIZE = BATCH_SIZE_PER_REPLICA * strategy.num_replicas_in_sync
#train_dataset = train_images.map(scale).cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
#eval_dataset = test_images.map(scale).batch(BATCH_SIZE)
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
with strategy.scope():
model = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation='relu'),
keras.layers.Dense(10)
])
model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer='adam',
metrics=['accuracy'])
# Define the checkpoint directory to store the checkpoints
checkpoint_dir = './training_checkpoints'
# Name of the checkpoint files
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}")
# Function for decaying the learning rate.
# You can define any decay function you need.
def decay(epoch):
if epoch < 3:
return 1e-3
elif epoch >= 3 and epoch < 7:
return 1e-4
else:
return 1e-5
class PrintLR(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs=None):
print('\nLearning rate for epoch {} is {}'.format(epoch + 1,
model.optimizer.lr.numpy()))
from tensorflow.keras.callbacks import ModelCheckpoint
#checkpoint = ModelCheckpoint(ckpt_model,
# monitor='val_accuracy',
# verbose=1,
# save_best_only=True,
# mode='max')
callbacks = [
tf.keras.callbacks.TensorBoard(log_dir='./logs'),
tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_prefix,
save_weights_only=True),
tf.keras.callbacks.LearningRateScheduler(decay),
PrintLR()
]
#model.fit(train_dataset, epochs=12, callbacks=callbacks)
history = model.fit(train_images, train_labels,validation_data=(test_images, test_labels),
epochs=15,callbacks=callbacks)
history.history.keys()
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Model Accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'])
plt.show()
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model Loss')
plt.ylabel('Loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'])
plt.show()
```
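The `ModelCheckpoint` callback above writes per-epoch weight files under `checkpoint_dir`. A minimal sketch of restoring the most recent checkpoint into a freshly built model and evaluating it, reusing `strategy`, `checkpoint_dir`, and the test split from the cells above:
```
# Rebuild the same architecture under the distribution strategy, then load the
# latest weights written by the ModelCheckpoint callback.
with strategy.scope():
    restored_model = keras.Sequential([
        keras.layers.Flatten(input_shape=(28, 28)),
        keras.layers.Dense(128, activation='relu'),
        keras.layers.Dense(10)
    ])
    restored_model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                           optimizer='adam',
                           metrics=['accuracy'])
    restored_model.load_weights(tf.train.latest_checkpoint(checkpoint_dir))

eval_loss, eval_acc = restored_model.evaluate(test_images, test_labels, verbose=2)
print('Restored model accuracy: {:.3f}'.format(eval_acc))
```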
|
github_jupyter
|
```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
```
Manual Principal Component Analysis
```
#Reading wine data
df_wine = pd.read_csv('https://archive.ics.uci.edu/ml/'
'machine-learning-databases/wine/wine.data',
header=None)
# In the data, the first column is the class label and
# the remaining 13 columns are the features
X,y = df_wine.iloc[:, 1:].values, df_wine.iloc[:, 0].values
#Splitting Data into training set and test set
#using scikit-learn
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=0.3,
stratify=y, random_state=0)
# Standardizing all the columns
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train_std = sc.fit_transform(X_train)
X_test_std = sc.transform(X_test)
# covariance matrix using numpy
cov_mat = np.cov(X_train_std.T)
# eigen pair
eigen_vals, eigen_vecs = np.linalg.eig(cov_mat)
print('\nEigenvalues \n%s' % eigen_vals[:3])
# only the first three eigenvalues are printed
# representing relative importance of features
tot = eigen_vals.sum()
var_exp = [(i/tot) for i in sorted(eigen_vals, reverse=True)]
cum_var_exp = np.cumsum(var_exp)
import matplotlib.pyplot as plt
plt.bar(range(1,14), var_exp, alpha=0.5, align='center',
label='Individual explained variance')
plt.step(range(1,14), cum_var_exp, where='mid',
label='Cumulative explained variance')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal component index')
plt.legend(loc='best')
plt.tight_layout()
plt.show()
# plots the explained variance ratio of the principal components
# Explained variance ratio = eigenvalue of one component / sum of all eigenvalues
# sorting the eigenpairs by decreasing order of the eigenvalues:
# list of (eigenvalue, eigenvector) tuples
eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:, i])
for i in range(len(eigen_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eigen_pairs.sort(key=lambda k:k[0], reverse=True)
# We take the first two principal components, which account for about 60% of the variance
w = np.hstack((eigen_pairs[0][1][:, np.newaxis],
eigen_pairs[1][1][:, np.newaxis]))
# w is projection matrix
print('Matrix W:\n', w)
# project the 13-dimensional standardized data onto 2 dimensions
X_train_pca = X_train_std.dot(w)
# Plotting the training data projected onto the first two principal components
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
for l,c,m in zip(np.unique(y_train), colors, markers):
plt.scatter(X_train_pca[y_train==l, 0],
X_train_pca[y_train==l, 1],
c = c, label=l, marker = m)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='best')
plt.tight_layout()
plt.show()
```
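In equation form, the two quantities computed above are the explained variance ratio of each principal component and the projection of the standardized training data onto the top two eigenvectors:

$$\text{explained variance ratio}_j = \frac{\lambda_j}{\sum_{k=1}^{13} \lambda_k}, \qquad \mathbf{X}'_{\text{train}} = \mathbf{X}_{\text{train,std}}\,\mathbf{W}$$

where $\lambda_j$ are the eigenvalues of the covariance matrix and $\mathbf{W}$ is the $13 \times 2$ projection matrix whose columns are the eigenvectors belonging to the two largest eigenvalues.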
Using scikit-learn
```
# Helper function to plot decision regions
from matplotlib.colors import ListedColormap
def plot_decision_regions(X, y, classifier, resolution=0.02):
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])
x1_min, x1_max = X[:, 0].min()-1, X[:, 0].max()+1
x2_min, x2_max = X[:, 1].min()-1, X[:, 1].max()+1
xx1,xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
np.arange(x2_min, x2_max, resolution))
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1,xx2,Z, alpha=0.4, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x = X[y==cl, 0],
y = X[y==cl, 1],
alpha = 0.6,
color = cmap(idx),
edgecolor='black',
marker=markers[idx],
label=cl)
# Plotting decision region of training set after applying PCA
from sklearn.linear_model import LogisticRegression
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
lr = LogisticRegression(multi_class='ovr',
random_state=1,
solver = 'lbfgs')
X_train_pca = pca.fit_transform(X_train_std)
X_test_pca = pca.transform(X_test_std)
lr.fit(X_train_pca, y_train)
plot_decision_regions(X_train_pca, y_train, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
plt.show()
# plotting decision regions of test data set after applying PCA
plot_decision_regions(X_test_pca, y_test, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
plt.show()
# finding explained variance ratio using scikit learn
pca1 = PCA(n_components=None)
X_train_pca1 = pca1.fit_transform(X_train_std)
pca1.explained_variance_ratio_
```
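scikit-learn can also choose the number of components for us: passing a float between 0 and 1 as `n_components` keeps just enough components to explain that fraction of the total variance. A short sketch, reusing `X_train_std` from above:
```
from sklearn.decomposition import PCA

# Keep the smallest number of components that explains at least 95% of the variance.
pca95 = PCA(n_components=0.95)
X_train_pca95 = pca95.fit_transform(X_train_std)
print('Components kept:', pca95.n_components_)
print('Cumulative explained variance: %.3f' % pca95.explained_variance_ratio_.sum())
```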
|
github_jupyter
|