###Markdown
# Bayes's Theorem

Think Bayes, Second Edition

Copyright 2020 Allen B. Downey

License: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)

In the previous chapter, we derived Bayes's Theorem:

$$P(A|B) = \frac{P(A) P(B|A)}{P(B)}$$

As an example, we used data from the General Social Survey and Bayes's Theorem to compute conditional probabilities. But since we had the complete dataset, we didn't really need Bayes's Theorem. It was easy enough to compute the left side of the equation directly, and no easier to compute the right side.

But often we don't have a complete dataset, and in that case Bayes's Theorem is more useful. In this chapter, we'll use it to solve several more challenging problems related to conditional probability.

## The Cookie Problem

We'll start with a thinly disguised version of an [urn problem](https://en.wikipedia.org/wiki/Urn_problem):

> Suppose there are two bowls of cookies.
>
> * Bowl 1 contains 30 vanilla cookies and 10 chocolate cookies.
>
> * Bowl 2 contains 20 vanilla cookies and 20 chocolate cookies.
>
> Now suppose you choose one of the bowls at random and, without looking, choose a cookie at random. If the cookie is vanilla, what is the probability that it came from Bowl 1?

What we want is the conditional probability that we chose from Bowl 1 given that we got a vanilla cookie, $P(B_1 | V)$.

But what we get from the statement of the problem is:

* The conditional probability of getting a vanilla cookie, given that we chose from Bowl 1, $P(V | B_1)$, and

* The conditional probability of getting a vanilla cookie, given that we chose from Bowl 2, $P(V | B_2)$.

Bayes's Theorem tells us how they are related:

$$P(B_1|V) = \frac{P(B_1)~P(V|B_1)}{P(V)}$$

The term on the left is what we want. The terms on the right are:

- $P(B_1)$, the probability that we chose Bowl 1, unconditioned by what kind of cookie we got. Since the problem says we chose a bowl at random, we assume $P(B_1) = 1/2$.

- $P(V|B_1)$, the probability of getting a vanilla cookie from Bowl 1, which is 3/4.

- $P(V)$, the probability of drawing a vanilla cookie from either bowl.

To compute $P(V)$, we can use the law of total probability:

$$P(V) = P(B_1)~P(V|B_1) ~+~ P(B_2)~P(V|B_2)$$

Plugging in the numbers from the statement of the problem, we have

$$P(V) = (1/2)~(3/4) ~+~ (1/2)~(1/2) = 5/8$$

We can also compute this result directly, like this:

* Since we had an equal chance of choosing either bowl and the bowls contain the same number of cookies, we had the same chance of choosing any cookie.

* Between the two bowls there are 50 vanilla and 30 chocolate cookies, so $P(V) = 5/8$.

Finally, we can apply Bayes's Theorem to compute the posterior probability of Bowl 1:

$$P(B_1|V) = (1/2)~(3/4)~/~(5/8) = 3/5$$

This example demonstrates one use of Bayes's theorem: it provides a way to get from $P(B|A)$ to $P(A|B)$. This strategy is useful in cases like this where it is easier to compute the terms on the right side than the term on the left.
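As a quick check, here is the same arithmetic in Python: a minimal sketch (not from the book) using `Fraction`, which we'll also use later in the chapter, to keep the values exact. The variable names are mine:
###Code
from fractions import Fraction

# Quantities given in the statement of the problem
p_b1 = Fraction(1, 2)           # P(B1): we chose a bowl at random
p_v_given_b1 = Fraction(3, 4)   # P(V|B1): 30 of 40 cookies are vanilla
p_b2 = Fraction(1, 2)           # P(B2)
p_v_given_b2 = Fraction(1, 2)   # P(V|B2): 20 of 40 cookies are vanilla

# Law of total probability: P(V) = P(B1) P(V|B1) + P(B2) P(V|B2)
p_v = p_b1 * p_v_given_b1 + p_b2 * p_v_given_b2

# Bayes's Theorem: P(B1|V) = P(B1) P(V|B1) / P(V)
p_b1 * p_v_given_b1 / p_v   # Fraction(3, 5); p_v is Fraction(5, 8)
###Output
_____no_output_____
###Markdown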
## Diachronic Bayes

There is another way to think of Bayes's theorem: it gives us a way to update the probability of a hypothesis, $H$, given some body of data, $D$.

This interpretation is "diachronic", which means "related to change over time"; in this case, the probability of the hypotheses changes as we see new data.

Rewriting Bayes's theorem with $H$ and $D$ yields:

$$P(H|D) = \frac{P(H)~P(D|H)}{P(D)}$$

In this interpretation, each term has a name:

- $P(H)$ is the probability of the hypothesis before we see the data, called the prior probability, or just **prior**.

- $P(H|D)$ is the probability of the hypothesis after we see the data, called the **posterior**.

- $P(D|H)$ is the probability of the data under the hypothesis, called the **likelihood**.

- $P(D)$ is the **total probability of the data**, under any hypothesis.

Sometimes we can compute the prior based on background information. For example, the cookie problem specifies that we choose a bowl at random with equal probability.

In other cases the prior is subjective; that is, reasonable people might disagree, either because they use different background information or because they interpret the same information differently.

The likelihood is usually the easiest part to compute. In the cookie problem, we are given the number of cookies in each bowl, so we can compute the probability of the data under each hypothesis.

Computing the total probability of the data can be tricky. It is supposed to be the probability of seeing the data under any hypothesis at all, but it can be hard to nail down what that means.

Most often we simplify things by specifying a set of hypotheses that are:

* Mutually exclusive, which means that only one of them can be true, and

* Collectively exhaustive, which means one of them must be true.

When these conditions apply, we can compute $P(D)$ using the law of total probability. For example, with two hypotheses, $H_1$ and $H_2$:

$$P(D) = P(H_1)~P(D|H_1) + P(H_2)~P(D|H_2)$$

And more generally, with any number of hypotheses:

$$P(D) = \sum_i P(H_i)~P(D|H_i)$$

The process in this section, using data and a prior probability to compute a posterior probability, is called a **Bayesian update**.
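To make the general recipe concrete, here is a minimal sketch (mine, not from the book) of a Bayesian update on plain Python lists; `bayesian_update` is a hypothetical helper name:
###Code
from fractions import Fraction

def bayesian_update(priors, likelihoods):
    """Return P(D) and the list of posterior probabilities."""
    # Numerators of Bayes's Theorem: P(H_i) P(D|H_i)
    unnorm = [p * l for p, l in zip(priors, likelihoods)]
    # Law of total probability: P(D) = sum_i P(H_i) P(D|H_i)
    prob_data = sum(unnorm)
    posteriors = [u / prob_data for u in unnorm]
    return prob_data, posteriors

# The cookie problem again: P(D) = 5/8, posteriors = [3/5, 2/5]
bayesian_update([Fraction(1, 2), Fraction(1, 2)],
                [Fraction(3, 4), Fraction(1, 2)])
###Output
_____no_output_____
###Markdown
## Bayes Tables

A convenient tool for doing a Bayesian update is a Bayes table. You can write a Bayes table on paper or use a spreadsheet, but in this section I'll use a Pandas `DataFrame`.

First I'll make an empty `DataFrame` with one row for each hypothesis: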
###Code
import pandas as pd
table = pd.DataFrame(index=['Bowl 1', 'Bowl 2'])
###Output
_____no_output_____
###Markdown
Now I'll add a column to represent the priors:
###Code
table['prior'] = 1/2, 1/2
table
###Output
_____no_output_____
###Markdown
And a column for the likelihoods:
###Code
table['likelihood'] = 3/4, 1/2
table
###Output
_____no_output_____
###Markdown
Here we see a difference from the previous method: we compute likelihoods for both hypotheses, not just Bowl 1:

* The chance of getting a vanilla cookie from Bowl 1 is 3/4.

* The chance of getting a vanilla cookie from Bowl 2 is 1/2.

You might notice that the likelihoods don't add up to 1. That's OK; each of them is a probability conditioned on a different hypothesis. There's no reason they should add up to 1 and no problem if they don't.

The next step is similar to what we did with Bayes's Theorem; we multiply the priors by the likelihoods:
###Code
table['unnorm'] = table['prior'] * table['likelihood']
table
###Output
_____no_output_____
###Markdown
I call the result `unnorm` because these values are the "unnormalized posteriors". Each of them is the product of a prior and a likelihood:

$$P(B_i)~P(D|B_i)$$

which is the numerator of Bayes's Theorem. If we add them up, we have

$$P(B_1)~P(D|B_1) + P(B_2)~P(D|B_2)$$

which is the denominator of Bayes's Theorem, $P(D)$.

So we can compute the total probability of the data like this:
###Code
prob_data = table['unnorm'].sum()
prob_data
###Output
_____no_output_____
###Markdown
Notice that we get 5/8, which is what we got by computing $P(D)$ directly.

And we can compute the posterior probabilities like this:
###Code
table['posterior'] = table['unnorm'] / prob_data
table
###Output
_____no_output_____
###Markdown
The posterior probability for Bowl 1 is 0.6, which is what we got using Bayes's Theorem explicitly. As a bonus, we also get the posterior probability of Bowl 2, which is 0.4.

When we add up the unnormalized posteriors and divide through, we force the posteriors to add up to 1. This process is called "normalization", which is why the total probability of the data is also called the "normalizing constant".

## The Dice Problem

A Bayes table can also solve problems with more than two hypotheses. For example:

> Suppose I have a box with a 6-sided die, an 8-sided die, and a 12-sided die. I choose one of the dice at random, roll it, and report that the outcome is a 1. What is the probability that I chose the 6-sided die?

In this example, there are three hypotheses with equal prior probabilities. The data is my report that the outcome is a 1.

If I chose the 6-sided die, the probability of the data is 1/6. If I chose the 8-sided die, the probability is 1/8, and if I chose the 12-sided die, it's 1/12.
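Before building the table, we can check the answer with the `bayesian_update` sketch from earlier (a hypothetical helper, not part of the book's code):
###Code
# P(D) = 1/8; posteriors = [4/9, 1/3, 2/9]
bayesian_update([Fraction(1, 3)] * 3,
                [Fraction(1, 6), Fraction(1, 8), Fraction(1, 12)])
###Output
_____no_output_____
###Markdown
Here's a Bayes table that uses integers to represent the hypotheses: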
###Code
table2 = pd.DataFrame(index=[6, 8, 12])
###Output
_____no_output_____
###Markdown
I'll use fractions to represent the prior probabilities and the likelihoods. That way they don't get rounded off to floating-point numbers.
###Code
from fractions import Fraction
table2['prior'] = Fraction(1, 3)
table2['likelihood'] = Fraction(1, 6), Fraction(1, 8), Fraction(1, 12)
table2
###Output
_____no_output_____
###Markdown
Once you have priors and likelihoods, the remaining steps are always the same, so I'll put them in a function:
###Code
def update(table):
"""Compute the posterior probabilities."""
table['unnorm'] = table['prior'] * table['likelihood']
prob_data = table['unnorm'].sum()
table['posterior'] = table['unnorm'] / prob_data
return prob_data
###Output
_____no_output_____
###Markdown
And call it like this:
###Code
prob_data = update(table2)
###Output
_____no_output_____
###Markdown
Here is the final Bayes table:
###Code
table2
###Output
_____no_output_____
###Markdown
The posterior probability of the 6-sided die is 4/9, which is a little more than the probabilities for the other dice, 3/9 and 2/9. Intuitively, the 6-sided die is the most likely because it had the highest likelihood of producing the outcome we saw.

## The Monty Hall Problem

Next we'll use a Bayes table to solve one of the most contentious problems in probability.

The Monty Hall problem is based on a game show called *Let's Make a Deal*. If you are a contestant on the show, here's how the game works:

* The host, Monty Hall, shows you three closed doors -- numbered 1, 2, and 3 -- and tells you that there is a prize behind each door.

* One prize is valuable (traditionally a car), the other two are less valuable (traditionally goats).

* The object of the game is to guess which door has the car. If you guess right, you get to keep the car.

Suppose you pick Door 1. Before opening the door you chose, Monty opens Door 3 and reveals a goat. Then Monty offers you the option to stick with your original choice or switch to the remaining unopened door. To maximize your chance of winning the car, should you stick with Door 1 or switch to Door 2?

To answer this question, we have to make some assumptions about the behavior of the host:

1. Monty always opens a door and offers you the option to switch.

2. He never opens the door you picked or the door with the car.

3. If you choose the door with the car, he chooses one of the other doors at random.

Under these assumptions, you are better off switching. If you stick, you win $1/3$ of the time. If you switch, you win $2/3$ of the time.

If you have not encountered this problem before, you might find that answer surprising. You would not be alone; many people have the strong intuition that it doesn't matter if you stick or switch. There are two doors left, they reason, so the chance that the car is behind Door 1 is 50%. But that is wrong.
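If you don't believe the $1/3$ versus $2/3$ claim, one way to convince yourself is simulation. Here is a quick Monte Carlo sketch (mine, not from the book; `play` is a hypothetical name) that plays the game many times under the assumptions above:
###Code
import random

def play(switch, trials=100_000):
    """Estimate the probability of winning if we stick or switch."""
    wins = 0
    for _ in range(trials):
        car = random.choice([1, 2, 3])
        pick = 1   # we always pick Door 1
        # Monty opens a door that is neither our pick nor the car
        opened = random.choice([d for d in (1, 2, 3)
                                if d != pick and d != car])
        if switch:
            # switch to the remaining unopened door
            pick = next(d for d in (1, 2, 3)
                        if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

play(switch=False), play(switch=True)   # roughly (0.33, 0.67)
###Output
_____no_output_____
###Markdown
To see why, it can help to use a Bayes table. We start with three hypotheses: the car might be behind Door 1, 2, or 3. According to the statement of the problem, the prior probability for each door is 1/3.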
###Code
table3 = pd.DataFrame(index=['Door 1', 'Door 2', 'Door 3'])
table3['prior'] = Fraction(1, 3)
table3
###Output
_____no_output_____
###Markdown
The data is that Monty opened Door 3 and revealed a goat. So let's consider the probability of the data under each hypothesis:

* If the car is behind Door 1, Monty chooses Door 2 or 3 at random, so the probability he opens Door 3 is $1/2$.

* If the car is behind Door 2, Monty has to open Door 3, so the probability of the data under this hypothesis is 1.

* If the car is behind Door 3, Monty does not open it, so the probability of the data under this hypothesis is 0.

Here are the likelihoods:
###Code
table3['likelihood'] = Fraction(1, 2), 1, 0
table3
###Output
_____no_output_____
###Markdown
Now that we have priors and likelihoods, we can use `update` to compute the posterior probabilities.
###Code
update(table3)
table3
###Output
_____no_output_____
###Markdown
After Monty opens Door 3, the posterior probability of Door 1 is $1/3$; the posterior probability of Door 2 is $2/3$. So you are better off switching from Door 1 to Door 2.

As this example shows, our intuition for probability is not always reliable. Bayes's Theorem can help by providing a divide-and-conquer strategy:

1. First, write down the hypotheses and the data.

2. Next, figure out the prior probabilities.

3. Finally, compute the likelihood of the data under each hypothesis.

The Bayes table does the rest.

## Summary

In this chapter we solved the Cookie Problem using Bayes's theorem explicitly and using a Bayes table. There's no real difference between these methods, but the Bayes table can make it easier to compute the total probability of the data, especially for problems with more than two hypotheses.

Then we solved the Dice Problem, which we will see again in the next chapter, and the Monty Hall problem, which you might hope you never see again.

If the Monty Hall problem makes your head hurt, you are not alone. But I think it demonstrates the power of Bayes's Theorem as a divide-and-conquer strategy for solving tricky problems. And I hope it provides some insight into *why* the answer is what it is.

When Monty opens a door, he provides information we can use to update our belief about the location of the car. Part of the information is obvious. If he opens Door 3, we know the car is not behind Door 3. But part of the information is more subtle. Opening Door 3 is more likely if the car is behind Door 2, and less likely if it is behind Door 1. So the data is evidence in favor of Door 2. We will come back to this notion of evidence in future chapters.
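One way to quantify that evidence is the ratio of the likelihoods (a quantity sometimes called a Bayes factor); this sketch is mine, not from the book:
###Code
from fractions import Fraction

# Likelihood of the data (Monty opens Door 3) under each hypothesis
like_door1 = Fraction(1, 2)   # car behind Door 1: he opens Door 2 or 3 at random
like_door2 = Fraction(1)      # car behind Door 2: he must open Door 3

# The data is twice as likely if the car is behind Door 2,
# so it shifts belief toward Door 2.
like_door2 / like_door1   # Fraction(2, 1)
###Output
_____no_output_____
###Markdown
In the next chapter we'll extend the Cookie Problem and the Dice Problem, and take the next step from basic probability to Bayesian statistics. But first, you might want to work on the exercises.

## Exercises

**Exercise:** Suppose you have two coins in a box. One is a normal coin with heads on one side and tails on the other, and one is a trick coin with heads on both sides. You choose a coin at random and see that one of the sides is heads. What is the probability that you chose the trick coin?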
###Code
# Solution
table4 = pd.DataFrame(index=['Normal', 'Trick'])
table4['prior'] = 1/2
table4['likelihood'] = 1/2, 1
update(table4)
table4
###Output
_____no_output_____
###Markdown
**Exercise:** Suppose you meet someone and learn that they have two children. You ask if either child is a girl and they say yes. What is the probability that both children are girls?

Hint: Start with four equally likely hypotheses.
###Code
# Solution
table5 = pd.DataFrame(index=['GG', 'GB', 'BG', 'BB'])
table5['prior'] = 1/4
table5['likelihood'] = 1, 1, 1, 0
update(table5)
table5
###Output
_____no_output_____
###Markdown
**Exercise:** There are many variations of the [Monty Hall problem](https://en.wikipedia.org/wiki/Monty_Hall_problem). For example, suppose Monty always chooses Door 2 if he can, and only chooses Door 3 if he has to (because the car is behind Door 2).

If you choose Door 1 and Monty opens Door 2, what is the probability the car is behind Door 3?

If you choose Door 1 and Monty opens Door 3, what is the probability the car is behind Door 2?
###Code
# Solution
# If the car is behind Door 1, Monty would always open Door 2
# If the car is behind Door 2, Monty would have opened Door 3
# If the car is behind Door 3, Monty would always open Door 2
table6 = pd.DataFrame(index=['Door 1', 'Door 2', 'Door 3'])
table6['prior'] = 1/3
table6['likelihood'] = 1, 0, 1
update(table6)
table6
# Solution
# If the car is behind Door 1, Monty would have opened Door 2
# If the car is behind Door 2, Monty would always open Door 3
# If the car is behind Door 3, Monty would have opened Door 2
table7 = pd.DataFrame(index=['Door 1', 'Door 2', 'Door 3'])
table7['prior'] = 1/3
table7['likelihood'] = 0, 1, 0
update(table7)
table7
###Output
_____no_output_____
###Markdown
**Exercise:** M&M's are small candy-coated chocolates that come in a variety of colors. Mars, Inc., which makes M&M's, changes the mixture of colors from time to time. In 1995, they introduced blue M&M's.

* In 1994, the color mix in a bag of plain M&M's was 30\% Brown, 20\% Yellow, 20\% Red, 10\% Green, 10\% Orange, 10\% Tan.

* In 1996, it was 24\% Blue, 20\% Green, 16\% Orange, 14\% Yellow, 13\% Red, 13\% Brown.

Suppose a friend of mine has two bags of M&M's, and he tells me that one is from 1994 and one from 1996. He won't tell me which is which, but he gives me one M&M from each bag. One is yellow and one is green. What is the probability that the yellow one came from the 1994 bag?

Hint: The trick to this question is to define the hypotheses and the data carefully.
###Code
# Solution
# Hypotheses:
# A: yellow from 94, green from 96
# B: yellow from 96, green from 94
table8 = pd.DataFrame(index=['A', 'B'])
table8['prior'] = 1/2
# A: P(yellow | 1994) * P(green | 1996) = 0.2 * 0.2
# B: P(yellow | 1996) * P(green | 1994) = 0.14 * 0.1
table8['likelihood'] = 0.2*0.2, 0.14*0.1
update(table8)
table8
###Output
_____no_output_____
###Markdown
Bayes's Theorem Think Bayes, Second EditionCopyright 2020 Allen B. DowneyLicense: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/) In the previous chapter, we derived Bayes's Theorem:$$P(A|B) = \frac{P(A) P(B|A)}{P(B)}$$As an example, we used data from the General Social Survey and Bayes's Theorem to compute conditional probabilities.But since we had the complete dataset, we didn't really need Bayes's Theorem.It was easy enough to compute the left side of the equation directly, and no easier to compute the right side.But often we don't have a complete dataset, and in that case Bayes's Theorem is more useful. In this chapter, we'll use it to solve several more challenging problems related to conditional probability. The Cookie ProblemWe'll start with a thinly disguised version of an [urn problem](https://en.wikipedia.org/wiki/Urn_problem):> Suppose there are two bowls of cookies.>> * Bowl 1 contains 30 vanilla cookies and 10 chocolate cookies. >> * Bowl 2 contains 20 vanilla cookies and 20 chocolate cookies.>> Now suppose you choose one of the bowls at random and, without looking, choose a cookie at random. If the cookie is vanilla, what is the probability that it came from Bowl 1?What we want is the conditional probability that we chose from Bowl 1 given that we got a vanilla cookie, $P(B_1 | V)$.But what we get from the statement of the problem is:* The conditional probability of getting a vanilla cookie, given that we chose from Bowl 1, $P(V | B_1)$ and* The conditional probability of getting a vanilla cookie, given that we chose from Bowl 2, $P(V | B_2)$. Bayes's Theorem tells us how they are related:$$P(B_1|V) = \frac{P(B_1)~P(V|B_1)}{P(V)}$$The term on the left is what we want. The terms on the right are:- $P(B_1)$, the probability that we chose Bowl 1, unconditioned by what kind of cookie we got. Since the problem says we chose a bowl at random, we assume $P(B_1) = 1/2$.- $P(V|B_1)$, the probability of getting a vanilla cookie from Bowl 1, which is 3/4.- $P(V)$, the probability of drawing a vanilla cookie from either bowl. To compute $P(V)$, we can use the law of total probability:$$P(V) = P(B_1)~P(V|B_1) ~+~ P(B_2)~P(V|B_2)$$Plugging in the numbers from the statement of the problem, we have$$P(V) = (1/2)~(3/4) ~+~ (1/2)~(1/2) = 5/8$$We can also compute this result directly, like this: * Since we had an equal chance of choosing either bowl and the bowls contain the same number of cookies, we had the same chance of choosing any cookie. * Between the two bowls there are 50 vanilla and 30 chocolate cookies, so $P(V) = 5/8$. Finally, we can apply Bayes's Theorem to compute the posterior probability of Bowl 1:$$P(B_1|V) = (1/2)~(3/4)~/~(5/8) = 3/5$$This example demonstrates one use of Bayes's theorem: it provides away to get from $P(B|A)$ to $P(A|B)$. This strategy is useful in cases like this where it is easier to compute the terms on the right side than the term on the left. 
Diachronic BayesThere is another way to think of Bayes's theorem: it gives us a way toupdate the probability of a hypothesis, $H$, given some body of data, $D$.This interpretation is "diachronic", which means "related to change over time"; in this case, the probability of the hypotheses changes as we see new data.Rewriting Bayes's theorem with $H$ and $D$ yields:$$P(H|D) = \frac{P(H)~P(D|H)}{P(D)}$$In this interpretation, each term has a name:- $P(H)$ is the probability of the hypothesis before we see the data, called the prior probability, or just **prior**.- $P(H|D)$ is the probability of the hypothesis after we see the data, called the **posterior**.- $P(D|H)$ is the probability of the data under the hypothesis, called the **likelihood**.- $P(D)$ is the **total probability of the data**, under any hypothesis.Sometimes we can compute the prior based on background information. For example, the cookie problem specifies that we choose a bowl at random with equal probability.In other cases the prior is subjective; that is, reasonable people might disagree, either because they use different background information or because they interpret the same information differently.The likelihood is usually the easiest part to compute. In the cookieproblem, we are given the number of cookies in each bowl, so we can compute the probability of the data under each hypothesis. Computing the total probability of the data can be tricky. It is supposed to be the probability of seeing the data under any hypothesis at all, but it can be hard to nail down what that means.Most often we simplify things by specifying a set of hypotheses thatare:* Mutually exclusive, which means that only one of them can be true, and* Collectively exhaustive, which means one of them must be true.When these conditions apply, we can compute $P(D)$ using the law of total probability. For example, with two hypotheses, $H_1$ and $H_2$:$$P(D) = P(H_1)~P(D|H_1) + P(H_2)~P(D|H_2)$$And more generally, with any number of hypotheses:$$P(D) = \sum_i P(H_i)~P(D|H_i)$$The process in this section, using data and a prior probability to compute a posterior probability, is called a **Bayesian update**. Bayes TablesA convenient tool for doing a Bayesian update is a Bayes table.You can write a Bayes table on paper or use a spreadsheet, but in this section I'll use a Pandas `DataFrame`.First I'll make empty `DataFrame` with one row for each hypothesis:
###Code
import pandas as pd
table = pd.DataFrame(index=['Bowl 1', 'Bowl 2'])
###Output
_____no_output_____
###Markdown
Now I'll add a column to represent the priors:
###Code
table['prior'] = 1/2, 1/2
table
###Output
_____no_output_____
###Markdown
And a column for the likelihoods:
###Code
table['likelihood'] = 3/4, 1/2
table
###Output
_____no_output_____
###Markdown
Here we see a difference from the previous method: we compute likelihoods for both hypotheses, not just Bowl 1:* The chance of getting a vanilla cookie from Bowl 1 is 3/4.* The chance of getting a vanilla cookie from Bowl 2 is 1/2.You might notice that the likelihoods don't add up to 1. That's OK; each of them is a probability conditioned on a different hypothesis.There's no reason they should add up to 1 and no problem if they don't.The next step is similar to what we did with Bayes's Theorem; we multiply the priors by the likelihoods:
###Code
table['unnorm'] = table['prior'] * table['likelihood']
table
###Output
_____no_output_____
###Markdown
I call the result `unnorm` because these values are the "unnormalized posteriors". Each of them is the product of a prior and a likelihood:$$P(B_i)~P(D|B_i)$$which is the numerator of Bayes's Theorem. If we add them up, we have$$P(B_1)~P(D|B_1) + P(B_2)~P(D|B_2)$$which is the denominator of Bayes's Theorem, $P(D)$.So we can compute the total probability of the data like this:
###Code
prob_data = table['unnorm'].sum()
prob_data
###Output
_____no_output_____
###Markdown
Notice that we get 5/8, which is what we got by computing $P(D)$ directly.And we can compute the posterior probabilities like this:
###Code
table['posterior'] = table['unnorm'] / prob_data
table
###Output
_____no_output_____
###Markdown
The posterior probability for Bowl 1 is 0.6, which is what we got using Bayes's Theorem explicitly.As a bonus, we also get the posterior probability of Bowl 2, which is 0.4.When we add up the unnormalized posteriors and divide through, we force the posteriors to add up to 1. This process is called "normalization", which is why the total probability of the data is also called the "normalizing constant". The Dice ProblemA Bayes table can also solve problems with more than two hypotheses. For example:> Suppose I have a box with a 6-sided die, an 8-sided die, and a 12-sided die. I choose one of the dice at random, roll it, and report that the outcome is a 1. What is the probability that I chose the 6-sided die?In this example, there are three hypotheses with equal priorprobabilities. The data is my report that the outcome is a 1. If I chose the 6-sided die, the probability of the data is1/6. If I chose the 8-sided die, the probability is 1/8, and if I chose the 12-sided die, it's 1/12.Here's a Bayes table that uses integers to represent the hypotheses:
###Code
table2 = pd.DataFrame(index=[6, 8, 12])
###Output
_____no_output_____
###Markdown
I'll use fractions to represent the prior probabilities and the likelihoods. That way they don't get rounded off to floating-point numbers.
###Code
from fractions import Fraction
table2['prior'] = Fraction(1, 3)
table2['likelihood'] = Fraction(1, 6), Fraction(1, 8), Fraction(1, 12)
table2
###Output
_____no_output_____
###Markdown
Once you have priors and likelhoods, the remaining steps are always the same, so I'll put them in a function:
###Code
def update(table):
"""Compute the posterior probabilities."""
table['unnorm'] = table['prior'] * table['likelihood']
prob_data = table['unnorm'].sum()
table['posterior'] = table['unnorm'] / prob_data
return prob_data
###Output
_____no_output_____
###Markdown
And call it like this.
###Code
prob_data = update(table2)
###Output
_____no_output_____
###Markdown
Here is the final Bayes table:
###Code
table2
###Output
_____no_output_____
###Markdown
The posterior probability of the 6-sided die is 4/9, which is a little more than the probabilities for the other dice, 3/9 and 2/9.Intuitively, the 6-sided die is the most likely because it had the highest likelihood of producing the outcome we saw. The Monty Hall ProblemNext we'll use a Bayes table to solve one of the most contentious problems in probability.The Monty Hall problem is based on a game show called *Let's Make a Deal*. If you are a contestant on the show, here's how the game works:* The host, Monty Hall, shows you three closed doors -- numbered 1, 2, and 3 -- and tells you that there is a prize behind each door.* One prize is valuable (traditionally a car), the other two are less valuable (traditionally goats).* The object of the game is to guess which door has the car. If you guess right, you get to keep the car.Suppose you pick Door 1. Before opening the door you chose, Monty opens Door 3 and reveals a goat. Then Monty offers you the option to stick with your original choice or switch to the remaining unopened door. To maximize your chance of winning the car, should you stick with Door 1 or switch to Door 2?To answer this question, we have to make some assumptions about the behavior of the host:1. Monty always opens a door and offers you the option to switch.2. He never opens the door you picked or the door with the car.3. If you choose the door with the car, he chooses one of the other doors at random.Under these assumptions, you are better off switching. If you stick, you win $1/3$ of the time. If you switch, you win $2/3$ of the time.If you have not encountered this problem before, you might find thatanswer surprising. You would not be alone; many people have the strongintuition that it doesn't matter if you stick or switch. There are twodoors left, they reason, so the chance that the car is behind Door A is 50%. But that is wrong.To see why, it can help to use a Bayes table. We start with threehypotheses: the car might be behind Door 1, 2, or 3. According to thestatement of the problem, the prior probability for each door is 1/3.
###Code
table3 = pd.DataFrame(index=['Door 1', 'Door 2', 'Door 3'])
table3['prior'] = Fraction(1, 3)
table3
###Output
_____no_output_____
###Markdown
The data is that Monty opened Door 3 and revealed a goat. So let'sconsider the probability of the data under each hypothesis:* If the car is behind Door 1, Monty chooses Door 2 or 3 at random, so the probability he opens Door 3 is $1/2$.* If the car is behind Door 2, Monty has to open Door 3, so the probability of the data under this hypothesis is 1.* If the car is behind Door 3, Monty does not open it, so the probability of the data under this hypothesis is 0.Here are the likelihoods.
###Code
table3['likelihood'] = Fraction(1, 2), 1, 0
table3
###Output
_____no_output_____
###Markdown
Now that we have priors and likelihoods, we can use `update` to compute the posterior probabilities.
###Code
update(table3)
table3
###Output
_____no_output_____
###Markdown
After Monty opens Door 3, the posterior probability of Door 1 is $1/3$;the posterior probability of Door 2 is $2/3$.So you are better off switching from Door 1 to Door 2. As this example shows, our intuition for probability is not alwaysreliable. Bayes's Theorem can help by providing a divide-and-conquer strategy:1. First, write down the hypotheses and the data.2. Next, figure out the prior probabilities.3. Finally, compute the likelihood of the data under each hypothesis.The Bayes table does the rest. SummaryIn this chapter we solved the Cookie Problem using Bayes's theorem explicitly and using a Bayes table.There's no real difference between these methods, but the Bayes table can make it easier to compute the total probability of the data, especially for problems with more than two hypotheses.Then we solved the Dice Problem, which we will see again in the next chapter, and the Monty Hall problem, which you might hope you never see again.If the Monty Hall problem makes your head hurt, you are not alone. But I think it demonstrates the power of Bayes's Theorem as a divide-and-conquer strategy for solving tricky problems. And I hope it provides some insight into *why* the answer is what it is.When Monty opens a door, he provides information we can use to update our belief about the location of the car. Part of the information is obvious. If he opens Door 3, we know the car is not behind Door 3. But part of the information is more subtle. Opening Door 3 is more likely if the car is behind Door 2, and less likely if it is behind Door 1. So the data is evidence in favor of Door 2. We will come back to this notion of evidence in future chapters.In the next chapter we'll extend the Cookie Problem and the Dice Problem, and take the next step from basic probability to Bayesian statistics.But first, you might want to work on the exercises. Exercises **Exercise:** Suppose you have two coins in a box.One is a normal coin with heads on one side and tails on the other, and one is a trick coin with heads on both sides. You choose a coin at random and see that one of the sides is heads.What is the probability that you chose the trick coin?
###Code
# Solution
table4 = pd.DataFrame(index=['Normal', 'Trick'])
table4['prior'] = 1/2
table4['likelihood'] = 1/2, 1
update(table4)
table4
###Output
_____no_output_____
###Markdown
**Exercise:** Suppose you meet someone and learn that they have two children.You ask if either child is a girl and they say yes.What is the probability that both children are girls?Hint: Start with four equally likely hypotheses.
###Code
# Solution
table5 = pd.DataFrame(index=['GG', 'GB', 'BG', 'BB'])
table5['prior'] = 1/4
table5['likelihood'] = 1, 1, 1, 0
update(table5)
table5
###Output
_____no_output_____
###Markdown
**Exercise:** There are many variations of the [Monty Hall problem](https://en.wikipedia.org/wiki/Monty_Hall_problem). For example, suppose Monty always chooses Door 2 if he can, andonly chooses Door 3 if he has to (because the car is behind Door 2).If you choose Door 1 and Monty opens Door 2, what is the probability the car is behind Door 3?If you choose Door 1 and Monty opens Door 3, what is the probability the car is behind Door 2?
###Code
# Solution
# If the car is behind Door 1, Monty would always open Door 2
# If the car was behind Door 2, Monty would have opened Door 3
# If the car is behind Door 3, Monty would always open Door 2
table6 = pd.DataFrame(index=['Door 1', 'Door 2', 'Door 3'])
table6['prior'] = 1/3
table6['likelihood'] = 1, 0, 1
update(table6)
table6
# Solution
# If the car is behind Door 1, Monty would have opened Door 2
# If the car is behind Door 2, Monty would always open Door 3
# If the car is behind Door 3, Monty would have opened Door 2
table7 = pd.DataFrame(index=['Door 1', 'Door 2', 'Door 3'])
table7['prior'] = 1/3
table7['likelihood'] = 0, 1, 0
update(table7)
table7
###Output
_____no_output_____
###Markdown
**Exercise:** M&M's are small candy-coated chocolates that come in a variety of colors. Mars, Inc., which makes M&M's, changes the mixture of colors from time to time.In 1995, they introduced blue M&M's. * In 1994, the color mix in a bag of plain M&M's was 30\% Brown, 20\% Yellow, 20\% Red, 10\% Green, 10\% Orange, 10\% Tan. * In 1996, it was 24\% Blue , 20\% Green, 16\% Orange, 14\% Yellow, 13\% Red, 13\% Brown.Suppose a friend of mine has two bags of M&M's, and he tells methat one is from 1994 and one from 1996. He won't tell me which iswhich, but he gives me one M&M from each bag. One is yellow andone is green. What is the probability that the yellow one camefrom the 1994 bag?Hint: The trick to this question is to define the hypotheses and the data carefully.
###Code
# Solution
# Hypotheses:
# A: yellow from 94, green from 96
# B: yellow from 96, green from 94
table8 = pd.DataFrame(index=['A', 'B'])
table8['prior'] = 1/2
table8['likelihood'] = 0.2*0.2, 0.14*0.1
update(table8)
table8
###Output
_____no_output_____
###Markdown
Bayes's Theorem Think Bayes, Second EditionCopyright 2020 Allen B. DowneyLicense: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/) In the previous chapter, we derived Bayes's Theorem:$$P(A|B) = \frac{P(A) P(B|A)}{P(B)}$$As an example, we used data from the General Social Survey and Bayes's Theorem to compute conditional probabilities.But since we had the complete dataset, we didn't really need Bayes's Theorem.It was easy enough to compute the left side of the equation directly, and no easier to compute the right side.But often we don't have a complete dataset, and in that case Bayes's Theorem is more useful. In this chapter, we'll use it to solve several more challenging problems related to conditional probability. The Cookie ProblemWe'll start with a thinly disguised version of an [urn problem](https://en.wikipedia.org/wiki/Urn_problem):> Suppose there are two bowls of cookies.>> * Bowl 1 contains 30 vanilla cookies and 10 chocolate cookies. >> * Bowl 2 contains 20 vanilla cookies and 20 chocolate cookies.>> Now suppose you choose one of the bowls at random and, without looking, choose a cookie at random. If the cookie is vanilla, what is the probability that it came from Bowl 1?What we want is the conditional probability that we chose from Bowl 1 given that we got a vanilla cookie, $P(B_1 | V)$.But what we get from the statement of the problem is:* The conditional probability of getting a vanilla cookie, given that we chose from Bowl 1, $P(V | B_1)$ and* The conditional probability of getting a vanilla cookie, given that we chose from Bowl 2, $P(V | B_2)$. Bayes's Theorem tells us how they are related:$$P(B_1|V) = \frac{P(B_1)~P(V|B_1)}{P(V)}$$The term on the left is what we want. The terms on the right are:- $P(B_1)$, the probability that we chose Bowl 1, unconditioned by what kind of cookie we got. Since the problem says we chose a bowl at random, we assume $P(B_1) = 1/2$.- $P(V|B_1)$, the probability of getting a vanilla cookie from Bowl 1, which is 3/4.- $P(V)$, the probability of drawing a vanilla cookie from either bowl. To compute $P(V)$, we can use the law of total probability:$$P(V) = P(B_1)~P(V|B_1) ~+~ P(B_2)~P(V|B_2)$$Plugging in the numbers from the statement of the problem, we have$$P(V) = (1/2)~(3/4) ~+~ (1/2)~(1/2) = 5/8$$We can also compute this result directly, like this: * Since we had an equal chance of choosing either bowl and the bowls contain the same number of cookies, we had the same chance of choosing any cookie. * Between the two bowls there are 50 vanilla and 30 chocolate cookies, so $P(V) = 5/8$. Finally, we can apply Bayes's Theorem to compute the posterior probability of Bowl 1:$$P(B_1|V) = (1/2)~(3/4)~/~(5/8) = 3/5$$This example demonstrates one use of Bayes's theorem: it provides away to get from $P(B|A)$ to $P(A|B)$. This strategy is useful in cases like this where it is easier to compute the terms on the right side than the term on the left. 
Diachronic BayesThere is another way to think of Bayes's theorem: it gives us a way toupdate the probability of a hypothesis, $H$, given some body of data, $D$.This interpretation is "diachronic", which means "related to change over time"; in this case, the probability of the hypotheses changes as we see new data.Rewriting Bayes's theorem with $H$ and $D$ yields:$$P(H|D) = \frac{P(H)~P(D|H)}{P(D)}$$In this interpretation, each term has a name:- $P(H)$ is the probability of the hypothesis before we see the data, called the prior probability, or just **prior**.- $P(H|D)$ is the probability of the hypothesis after we see the data, called the **posterior**.- $P(D|H)$ is the probability of the data under the hypothesis, called the **likelihood**.- $P(D)$ is the **total probability of the data**, under any hypothesis.Sometimes we can compute the prior based on background information. For example, the cookie problem specifies that we choose a bowl at random with equal probability.In other cases the prior is subjective; that is, reasonable people might disagree, either because they use different background information or because they interpret the same information differently.The likelihood is usually the easiest part to compute. In the cookieproblem, we are given the number of cookies in each bowl, so we can compute the probability of the data under each hypothesis. Computing the total probability of the data can be tricky. It is supposed to be the probability of seeing the data under any hypothesis at all, but it can be hard to nail down what that means.Most often we simplify things by specifying a set of hypotheses thatare:* Mutually exclusive, which means that only one of them can be true, and* Collectively exhaustive, which means one of them must be true.When these conditions apply, we can compute $P(D)$ using the law of total probability. For example, with two hypotheses, $H_1$ and $H_2$:$$P(D) = P(H_1)~P(D|H_1) + P(H_2)~P(D|H_2)$$And more generally, with any number of hypotheses:$$P(D) = \sum_i P(H_i)~P(D|H_i)$$The process in this section, using data and a prior probability to compute a posterior probability, is called a **Bayesian update**. Bayes TablesA convenient tool for doing a Bayesian update is a Bayes table.You can write a Bayes table on paper or use a spreadsheet, but in this section I'll use a Pandas `DataFrame`.First I'll make empty `DataFrame` with one row for each hypothesis:
###Code
import pandas as pd
table = pd.DataFrame(index=['Bowl 1', 'Bowl 2'])
###Output
_____no_output_____
###Markdown
Now I'll add a column to represent the priors:
###Code
table['prior'] = 1/2, 1/2
table
###Output
_____no_output_____
###Markdown
And a column for the likelihoods:
###Code
table['likelihood'] = 3/4, 1/2
table
###Output
_____no_output_____
###Markdown
Here we see a difference from the previous method: we compute likelihoods for both hypotheses, not just Bowl 1:* The chance of getting a vanilla cookie from Bowl 1 is 3/4.* The chance of getting a vanilla cookie from Bowl 2 is 1/2.You might notice that the likelihoods don't add up to 1. That's OK; each of them is a probability conditioned on a different hypothesis.There's no reason they should add up to 1 and no problem if they don't.The next step is similar to what we did with Bayes's Theorem; we multiply the priors by the likelihoods:
###Code
table['unnorm'] = table['prior'] * table['likelihood']
table
###Output
_____no_output_____
###Markdown
I call the result `unnorm` because these values are the "unnormalized posteriors". Each of them is the product of a prior and a likelihood:$$P(B_i)~P(D|B_i)$$which is the numerator of Bayes's Theorem. If we add them up, we have$$P(B_1)~P(D|B_1) + P(B_2)~P(D|B_2)$$which is the denominator of Bayes's Theorem, $P(D)$.So we can compute the total probability of the data like this:
###Code
prob_data = table['unnorm'].sum()
prob_data
###Output
_____no_output_____
###Markdown
Notice that we get 5/8, which is what we got by computing $P(D)$ directly.And we can compute the posterior probabilities like this:
###Code
table['posterior'] = table['unnorm'] / prob_data
table
###Output
_____no_output_____
###Markdown
The posterior probability for Bowl 1 is 0.6, which is what we got using Bayes's Theorem explicitly.As a bonus, we also get the posterior probability of Bowl 2, which is 0.4.When we add up the unnormalized posteriors and divide through, we force the posteriors to add up to 1. This process is called "normalization", which is why the total probability of the data is also called the "normalizing constant". The Dice ProblemA Bayes table can also solve problems with more than two hypotheses. For example:> Suppose I have a box with a 6-sided die, an 8-sided die, and a 12-sided die. I choose one of the dice at random, roll it, and report that the outcome is a 1. What is the probability that I chose the 6-sided die?In this example, there are three hypotheses with equal priorprobabilities. The data is my report that the outcome is a 1. If I chose the 6-sided die, the probability of the data is1/6. If I chose the 8-sided die, the probability is 1/8, and if I chose the 12-sided die, it's 1/12.Here's a Bayes table that uses integers to represent the hypotheses:
###Code
table2 = pd.DataFrame(index=[6, 8, 12])
###Output
_____no_output_____
###Markdown
I'll use fractions to represent the prior probabilities and the likelihoods. That way they don't get rounded off to floating-point numbers.
###Code
from fractions import Fraction
table2['prior'] = Fraction(1, 3)
table2['likelihood'] = Fraction(1, 6), Fraction(1, 8), Fraction(1, 12)
table2
###Output
_____no_output_____
###Markdown
Once you have priors and likelhoods, the remaining steps are always the same, so I'll put them in a function:
###Code
def update(table):
"""Compute the posterior probabilities."""
table['unnorm'] = table['prior'] * table['likelihood']
prob_data = table['unnorm'].sum()
table['posterior'] = table['unnorm'] / prob_data
return prob_data
###Output
_____no_output_____
###Markdown
And call it like this.
###Code
prob_data = update(table2)
###Output
_____no_output_____
###Markdown
Here is the final Bayes table:
###Code
table2
###Output
_____no_output_____
###Markdown
The posterior probability of the 6-sided die is 4/9, which is a little more than the probabilities for the other dice, 3/9 and 2/9.Intuitively, the 6-sided die is the most likely because it had the highest likelihood of producing the outcome we saw. The Monty Hall ProblemNext we'll use a Bayes table to solve one of the most contentious problems in probability.The Monty Hall problem is based on a game show called *Let's Make a Deal*. If you are a contestant on the show, here's how the game works:* The host, Monty Hall, shows you three closed doors -- numbered 1, 2, and 3 -- and tells you that there is a prize behind each door.* One prize is valuable (traditionally a car), the other two are less valuable (traditionally goats).* The object of the game is to guess which door has the car. If you guess right, you get to keep the car.Suppose you pick Door 1. Before opening the door you chose, Monty opens Door 3 and reveals a goat. Then Monty offers you the option to stick with your original choice or switch to the remaining unopened door. To maximize your chance of winning the car, should you stick with Door 1 or switch to Door 2?To answer this question, we have to make some assumptions about the behavior of the host:1. Monty always opens a door and offers you the option to switch.2. He never opens the door you picked or the door with the car.3. If you choose the door with the car, he chooses one of the other doors at random.Under these assumptions, you are better off switching. If you stick, you win $1/3$ of the time. If you switch, you win $2/3$ of the time.If you have not encountered this problem before, you might find thatanswer surprising. You would not be alone; many people have the strongintuition that it doesn't matter if you stick or switch. There are twodoors left, they reason, so the chance that the car is behind Door A is 50%. But that is wrong.To see why, it can help to use a Bayes table. We start with threehypotheses: the car might be behind Door 1, 2, or 3. According to thestatement of the problem, the prior probability for each door is 1/3.
###Code
table3 = pd.DataFrame(index=['Door 1', 'Door 2', 'Door 3'])
table3['prior'] = Fraction(1, 3)
table3
###Output
_____no_output_____
###Markdown
The data is that Monty opened Door 3 and revealed a goat. So let'sconsider the probability of the data under each hypothesis:* If the car is behind Door 1, Monty chooses Door 2 or 3 at random, so the probability he opens Door 3 is $1/2$.* If the car is behind Door 2, Monty has to open Door 3, so the probability of the data under this hypothesis is 1.* If the car is behind Door 3, Monty does not open it, so the probability of the data under this hypothesis is 0.Here are the likelihoods.
###Code
table3['likelihood'] = Fraction(1, 2), 1, 0
table3
###Output
_____no_output_____
###Markdown
Now that we have priors and likelihoods, we can use `update` to compute the posterior probabilities.
###Code
update(table3)
table3
###Output
_____no_output_____
###Markdown
After Monty opens Door 3, the posterior probability of Door 1 is $1/3$;the posterior probability of Door 2 is $2/3$.So you are better off switching from Door 1 to Door 2. As this example shows, our intuition for probability is not alwaysreliable. Bayes's Theorem can help by providing a divide-and-conquer strategy:1. First, write down the hypotheses and the data.2. Next, figure out the prior probabilities.3. Finally, compute the likelihood of the data under each hypothesis.The Bayes table does the rest. SummaryIn this chapter we solved the Cookie Problem using Bayes's theorem explicitly and using a Bayes table.There's no real difference between these methods, but the Bayes table can make it easier to compute the total probability of the data, especially for problems with more than two hypotheses.Then we solved the Dice Problem, which we will see again in the next chapter, and the Monty Hall problem, which you might hope you never see again.If the Monty Hall problem makes your head hurt, you are not alone. But I think it demonstrates the power of Bayes's Theorem as a divide-and-conquer strategy for solving tricky problems. And I hope it provides some insight into *why* the answer is what it is.When Monty opens a door, he provides information we can use to update our belief about the location of the car. Part of the information is obvious. If he opens Door 3, we know the car is not behind Door 3. But part of the information is more subtle. Opening Door 3 is more likely if the car is behind Door 2, and less likely if it is behind Door 1. So the data is evidence in favor of Door 2. We will come back to this notion of evidence in future chapters.In the next chapter we'll extend the Cookie Problem and the Dice Problem, and take the next step from basic probability to Bayesian statistics.But first, you might want to work on the exercises. Exercises **Exercise:** Suppose you have two coins in a box.One is a normal coin with heads on one side and tails on the other, and one is a trick coin with heads on both sides. You choose a coin at random and see that one of the sides is heads.What is the probability that you chose the trick coin?
###Code
# Solution
table4 = pd.DataFrame(index=['Normal', 'Trick'])
table4['prior'] = 1/2
table4['likelihood'] = 1/2, 1
update(table4)
table4
###Output
_____no_output_____
###Markdown
**Exercise:** Suppose you meet someone and learn that they have two children.You ask if either child is a girl and they say yes.What is the probability that both children are girls?Hint: Start with four equally likely hypotheses.
###Code
# Solution
table5 = pd.DataFrame(index=['GG', 'GB', 'BG', 'BB'])
table5['prior'] = 1/4
table5['likelihood'] = 1, 1, 1, 0
update(table5)
table5
###Output
_____no_output_____
###Markdown
**Exercise:** There are many variations of the [Monty Hall problem](https://en.wikipedia.org/wiki/Monty_Hall_problem). For example, suppose Monty always chooses Door 2 if he can, andonly chooses Door 3 if he has to (because the car is behind Door 2).If you choose Door 1 and Monty opens Door 2, what is the probability the car is behind Door 3?If you choose Door 1 and Monty opens Door 3, what is the probability the car is behind Door 2?
###Code
# Solution
# If the car is behind Door 1, Monty would always open Door 2
# If the car was behind Door 2, Monty would have opened Door 3
# If the car is behind Door 3, Monty would always open Door 2
table6 = pd.DataFrame(index=['Door 1', 'Door 2', 'Door 3'])
table6['prior'] = 1/3
table6['likelihood'] = 1, 0, 1
update(table6)
table6
# Solution
# If the car is behind Door 1, Monty would have opened Door 2
# If the car is behind Door 2, Monty would always open Door 3
# If the car is behind Door 3, Monty would have opened Door 2
table7 = pd.DataFrame(index=['Door 1', 'Door 2', 'Door 3'])
table7['prior'] = 1/3
table7['likelihood'] = 0, 1, 0
update(table7)
table7
###Output
_____no_output_____
###Markdown
Bayes's Theorem Think Bayes, Second EditionCopyright 2020 Allen B. DowneyLicense: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/) In the previous chapter, we derived Bayes's Theorem:$$P(A|B) = \frac{P(A) P(B|A)}{P(B)}$$As an example, we used data from the General Social Survey and Bayes's Theorem to compute conditional probabilities.But since we had the complete dataset, we didn't really need Bayes's Theorem.It was easy enough to compute the left side of the equation directly, and no easier to compute the right side.But often we don't have a complete dataset, and in that case Bayes's Theorem is more useful. In this chapter, we'll use it to solve several more challenging problems related to conditional probability. The Cookie ProblemWe'll start with a thinly disguised version of an [urn problem](https://en.wikipedia.org/wiki/Urn_problem):> Suppose there are two bowls of cookies.>> * Bowl 1 contains 30 vanilla cookies and 10 chocolate cookies. >> * Bowl 2 contains 20 vanilla cookies and 20 chocolate cookies.>> Now suppose you choose one of the bowls at random and, without looking, choose a cookie at random. If the cookie is vanilla, what is the probability that it came from Bowl 1?What we want is the conditional probability that we chose from Bowl 1 given that we got a vanilla cookie, $P(B_1 | V)$.But what we get from the statement of the problem is:* The conditional probability of getting a vanilla cookie, given that we chose from Bowl 1, $P(V | B_1)$ and* The conditional probability of getting a vanilla cookie, given that we chose from Bowl 2, $P(V | B_2)$. Bayes's Theorem tells us how they are related:$$P(B_1|V) = \frac{P(B_1)~P(V|B_1)}{P(V)}$$The term on the left is what we want. The terms on the right are:- $P(B_1)$, the probability that we chose Bowl 1, unconditioned by what kind of cookie we got. Since the problem says we chose a bowl at random, we assume $P(B_1) = 1/2$.- $P(V|B_1)$, the probability of getting a vanilla cookie from Bowl 1, which is 3/4.- $P(V)$, the probability of drawing a vanilla cookie from either bowl. To compute $P(V)$, we can use the law of total probability:$$P(V) = P(B_1)~P(V|B_1) ~+~ P(B_2)~P(V|B_2)$$Plugging in the numbers from the statement of the problem, we have$$P(V) = (1/2)~(3/4) ~+~ (1/2)~(1/2) = 5/8$$We can also compute this result directly, like this: * Since we had an equal chance of choosing either bowl and the bowls contain the same number of cookies, we had the same chance of choosing any cookie. * Between the two bowls there are 50 vanilla and 30 chocolate cookies, so $P(V) = 5/8$. Finally, we can apply Bayes's Theorem to compute the posterior probability of Bowl 1:$$P(B_1|V) = (1/2)~(3/4)~/~(5/8) = 3/5$$This example demonstrates one use of Bayes's theorem: it provides away to get from $P(B|A)$ to $P(A|B)$. This strategy is useful in cases like this where it is easier to compute the terms on the right side than the term on the left. 
Diachronic BayesThere is another way to think of Bayes's theorem: it gives us a way toupdate the probability of a hypothesis, $H$, given some body of data, $D$.This interpretation is "diachronic", which means "related to change over time"; in this case, the probability of the hypotheses changes as we see new data.Rewriting Bayes's theorem with $H$ and $D$ yields:$$P(H|D) = \frac{P(H)~P(D|H)}{P(D)}$$In this interpretation, each term has a name:- $P(H)$ is the probability of the hypothesis before we see the data, called the prior probability, or just **prior**.- $P(H|D)$ is the probability of the hypothesis after we see the data, called the **posterior**.- $P(D|H)$ is the probability of the data under the hypothesis, called the **likelihood**.- $P(D)$ is the **total probability of the data**, under any hypothesis.Sometimes we can compute the prior based on background information. For example, the cookie problem specifies that we choose a bowl at random with equal probability.In other cases the prior is subjective; that is, reasonable people might disagree, either because they use different background information or because they interpret the same information differently.The likelihood is usually the easiest part to compute. In the cookieproblem, we are given the number of cookies in each bowl, so we can compute the probability of the data under each hypothesis. Computing the total probability of the data can be tricky. It is supposed to be the probability of seeing the data under any hypothesis at all, but it can be hard to nail down what that means.Most often we simplify things by specifying a set of hypotheses thatare:* Mutually exclusive, which means that only one of them can be true, and* Collectively exhaustive, which means one of them must be true.When these conditions apply, we can compute $P(D)$ using the law of total probability. For example, with two hypotheses, $H_1$ and $H_2$:$$P(D) = P(H_1)~P(D|H_1) + P(H_2)~P(D|H_2)$$And more generally, with any number of hypotheses:$$P(D) = \sum_i P(H_i)~P(D|H_i)$$The process in this section, using data and a prior probability to compute a posterior probability, is called a **Bayesian update**. Bayes TablesA convenient tool for doing a Bayesian update is a Bayes table.You can write a Bayes table on paper or use a spreadsheet, but in this section I'll use a Pandas `DataFrame`.First I'll make empty `DataFrame` with one row for each hypothesis:
###Code
import pandas as pd
table = pd.DataFrame(index=['Bowl 1', 'Bowl 2'])
###Output
_____no_output_____
###Markdown
Now I'll add a column to represent the priors:
###Code
table['prior'] = 1/2, 1/2
table
###Output
_____no_output_____
###Markdown
And a column for the likelihoods:
###Code
table['likelihood'] = 3/4, 1/2
table
###Output
_____no_output_____
###Markdown
Here we see a difference from the previous method: we compute likelihoods for both hypotheses, not just Bowl 1:* The chance of getting a vanilla cookie from Bowl 1 is 3/4.* The chance of getting a vanilla cookie from Bowl 2 is 1/2.You might notice that the likelihoods don't add up to 1. That's OK; each of them is a probability conditioned on a different hypothesis.There's no reason they should add up to 1 and no problem if they don't.The next step is similar to what we did with Bayes's Theorem; we multiply the priors by the likelihoods:
###Code
table['unnorm'] = table['prior'] * table['likelihood']
table
###Output
_____no_output_____
###Markdown
I call the result `unnorm` because these values are the "unnormalized posteriors". Each of them is the product of a prior and a likelihood:$$P(B_i)~P(D|B_i)$$which is the numerator of Bayes's Theorem. If we add them up, we have$$P(B_1)~P(D|B_1) + P(B_2)~P(D|B_2)$$which is the denominator of Bayes's Theorem, $P(D)$. So we can compute the total probability of the data like this:
###Code
prob_data = table['unnorm'].sum()
prob_data
###Output
_____no_output_____
###Markdown
Notice that we get 5/8, which is what we got by computing $P(D)$ directly. And we can compute the posterior probabilities like this:
###Code
table['posterior'] = table['unnorm'] / prob_data
table
###Output
_____no_output_____
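###Markdown
As a sanity check, we can confirm that the posterior probabilities add up to 1 by summing the column we just computed:
###Code
table['posterior'].sum()
###Output
_____no_output_____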
###Markdown
The posterior probability for Bowl 1 is 0.6, which is what we got using Bayes's Theorem explicitly. As a bonus, we also get the posterior probability of Bowl 2, which is 0.4. When we add up the unnormalized posteriors and divide through, we force the posteriors to add up to 1. This process is called "normalization", which is why the total probability of the data is also called the "normalizing constant". The Dice ProblemA Bayes table can also solve problems with more than two hypotheses. For example:> Suppose I have a box with a 6-sided die, an 8-sided die, and a 12-sided die. I choose one of the dice at random, roll it, and report that the outcome is a 1. What is the probability that I chose the 6-sided die? In this example, there are three hypotheses with equal prior probabilities. The data is my report that the outcome is a 1. If I chose the 6-sided die, the probability of the data is 1/6. If I chose the 8-sided die, the probability is 1/8, and if I chose the 12-sided die, it's 1/12. Here's a Bayes table that uses integers to represent the hypotheses:
###Code
table2 = pd.DataFrame(index=[6, 8, 12])
###Output
_____no_output_____
###Markdown
I'll use fractions to represent the prior probabilities and the likelihoods. That way they don't get rounded off to floating-point numbers.
###Code
from fractions import Fraction
table2['prior'] = Fraction(1, 3)
table2['likelihood'] = Fraction(1, 6), Fraction(1, 8), Fraction(1, 12)
table2
###Output
_____no_output_____
###Markdown
Once you have priors and likelihoods, the remaining steps are always the same, so I'll put them in a function:
###Code
def update(table):
    """Compute the posterior probabilities.

    table: DataFrame with priors and likelihoods
    returns: total probability of the data
    """
    table['unnorm'] = table['prior'] * table['likelihood']
    prob_data = table['unnorm'].sum()
    table['posterior'] = table['unnorm'] / prob_data
    return prob_data
###Output
_____no_output_____
###Markdown
And call it like this:
###Code
prob_data = update(table2)
print(prob_data)
###Output
1/8
###Markdown
The total probability of the data is $1/8$. And here is the final Bayes table:
###Code
table2
###Output
_____no_output_____
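###Markdown
To see where these numbers come from, we can write out the arithmetic. The unnormalized posteriors are $$\frac{1}{3} \cdot \frac{1}{6} = \frac{4}{72}, \quad \frac{1}{3} \cdot \frac{1}{8} = \frac{3}{72}, \quad \frac{1}{3} \cdot \frac{1}{12} = \frac{2}{72}$$ Their sum is $9/72 = 1/8$, the total probability of the data, so dividing through yields the posteriors $4/9$, $3/9$, and $2/9$.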
###Markdown
The posterior probability of the 6-sided die is 4/9, which is a little more than the probabilities for the other dice, 3/9 and 2/9. Intuitively, the 6-sided die is the most likely because it had the highest likelihood of producing the outcome we saw. The Monty Hall ProblemNext we'll use a Bayes table to solve one of the most contentious problems in probability. The Monty Hall problem is based on a game show called *Let's Make a Deal*. If you are a contestant on the show, here's how the game works: * The host, Monty Hall, shows you three closed doors -- numbered 1, 2, and 3 -- and tells you that there is a prize behind each door. * One prize is valuable (traditionally a car), the other two are less valuable (traditionally goats). * The object of the game is to guess which door has the car. If you guess right, you get to keep the car. Suppose you pick Door 1. Before opening the door you chose, Monty opens Door 3 and reveals a goat. Then Monty offers you the option to stick with your original choice or switch to the remaining unopened door. To maximize your chance of winning the car, should you stick with Door 1 or switch to Door 2? To answer this question, we have to make some assumptions about the behavior of the host: 1. Monty always opens a door and offers you the option to switch. 2. He never opens the door you picked or the door with the car. 3. If you choose the door with the car, he chooses one of the other doors at random. Under these assumptions, you are better off switching. If you stick, you win $1/3$ of the time. If you switch, you win $2/3$ of the time. If you have not encountered this problem before, you might find that answer surprising. You would not be alone; many people have the strong intuition that it doesn't matter if you stick or switch. There are two doors left, they reason, so the chance that the car is behind Door 1 is 50%. But that is wrong. To see why, it can help to use a Bayes table. We start with three hypotheses: the car might be behind Door 1, 2, or 3. According to the statement of the problem, the prior probability for each door is 1/3.
###Code
table3 = pd.DataFrame(index=['Door 1', 'Door 2', 'Door 3'])
table3['prior'] = Fraction(1, 3)
table3
###Output
_____no_output_____
###Markdown
The data is that Monty opened Door 3 and revealed a goat. So let's consider the probability of the data under each hypothesis: * If the car is behind Door 1, Monty chooses Door 2 or 3 at random, so the probability he opens Door 3 is $1/2$. * If the car is behind Door 2, Monty has to open Door 3, so the probability of the data under this hypothesis is 1. * If the car is behind Door 3, Monty does not open it, so the probability of the data under this hypothesis is 0. Here are the likelihoods.
###Code
table3['likelihood'] = Fraction(1, 2), 1, 0
table3
###Output
_____no_output_____
###Markdown
Now that we have priors and likelihoods, we can use `update` to compute the posterior probabilities.
###Code
update(table3)
table3
###Output
_____no_output_____
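###Markdown
If you find this result hard to believe, a simulation can back it up. Here's a minimal sketch (the function name `simulate_monty` and the `iters` parameter are mine, chosen for illustration) that plays the game many times under the assumptions above and estimates the chance of winning for each strategy:
###Code
import random

def simulate_monty(switch, iters=100_000):
    """Estimate the probability of winning the car."""
    wins = 0
    for _ in range(iters):
        car = random.choice([1, 2, 3])
        pick = 1  # you always pick Door 1
        # Monty opens a door that is neither your pick nor the car,
        # choosing at random when he has more than one option
        opened = random.choice([d for d in (1, 2, 3)
                                if d != pick and d != car])
        if switch:
            # switch to the remaining unopened door
            pick = next(d for d in (1, 2, 3)
                        if d != pick and d != opened)
        wins += (pick == car)
    return wins / iters

simulate_monty(switch=False), simulate_monty(switch=True)  # close to 1/3 and 2/3
###Output
_____no_output_____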
###Markdown
After Monty opens Door 3, the posterior probability of Door 1 is $1/3$; the posterior probability of Door 2 is $2/3$. So you are better off switching from Door 1 to Door 2. As this example shows, our intuition for probability is not always reliable. Bayes's Theorem can help by providing a divide-and-conquer strategy: 1. First, write down the hypotheses and the data. 2. Next, figure out the prior probabilities. 3. Finally, compute the likelihood of the data under each hypothesis. The Bayes table does the rest. SummaryIn this chapter we solved the Cookie Problem using Bayes's theorem explicitly and using a Bayes table. There's no real difference between these methods, but the Bayes table can make it easier to compute the total probability of the data, especially for problems with more than two hypotheses. Then we solved the Dice Problem, which we will see again in the next chapter, and the Monty Hall problem, which you might hope you never see again. If the Monty Hall problem makes your head hurt, you are not alone. But I think it demonstrates the power of Bayes's Theorem as a divide-and-conquer strategy for solving tricky problems. And I hope it provides some insight into *why* the answer is what it is. When Monty opens a door, he provides information we can use to update our belief about the location of the car. Part of the information is obvious. If he opens Door 3, we know the car is not behind Door 3. But part of the information is more subtle. Opening Door 3 is more likely if the car is behind Door 2, and less likely if it is behind Door 1. So the data is evidence in favor of Door 2. We will come back to this notion of evidence in future chapters. In the next chapter we'll extend the Cookie Problem and the Dice Problem, and take the next step from basic probability to Bayesian statistics. But first, you might want to work on the exercises. Exercises **Exercise:** Suppose you have two coins in a box. One is a normal coin with heads on one side and tails on the other, and one is a trick coin with heads on both sides. You choose a coin at random and see that one of the sides is heads. What is the probability that you chose the trick coin?
###Code
# Solution
table4 = pd.DataFrame(index=['Normal', 'Trick'])
table4['prior'] = 1/2
# a normal coin shows heads half the time; the trick coin always does
table4['likelihood'] = 1/2, 1
update(table4)
table4
###Output
_____no_output_____
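###Markdown
Checking by hand: the unnormalized posteriors are $(1/2)(1/2) = 1/4$ for the normal coin and $(1/2)(1) = 1/2$ for the trick coin, so the posterior probability of the trick coin is $(1/2) ~/~ (3/4) = 2/3$.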
###Markdown
**Exercise:** Suppose you meet someone and learn that they have two children. You ask if either child is a girl and they say yes. What is the probability that both children are girls? Hint: Start with four equally likely hypotheses.
###Code
# Solution
table5 = pd.DataFrame(index=['GG', 'GB', 'BG', 'BB'])
table5['prior'] = 1/4
# the answer "yes" is certain unless both children are boys
table5['likelihood'] = 1, 1, 1, 0
update(table5)
table5
###Output
_____no_output_____
###Markdown
**Exercise:** There are many variations of the [Monty Hall problem](https://en.wikipedia.org/wiki/Monty_Hall_problem). For example, suppose Monty always chooses Door 2 if he can, and only chooses Door 3 if he has to (because the car is behind Door 2). If you choose Door 1 and Monty opens Door 2, what is the probability the car is behind Door 3? If you choose Door 1 and Monty opens Door 3, what is the probability the car is behind Door 2?
###Code
# Solution
# If the car is behind Door 1, Monty would always open Door 2
# If the car is behind Door 2, Monty would have opened Door 3
# If the car is behind Door 3, Monty would always open Door 2
table6 = pd.DataFrame(index=['Door 1', 'Door 2', 'Door 3'])
table6['prior'] = 1/3
table6['likelihood'] = 1, 0, 1
update(table6)
table6
# Solution
# If the car is behind Door 1, Monty would have opened Door 2
# If the car is behind Door 2, Monty would always open Door 3
# If the car is behind Door 3, Monty would have opened Door 2
table7 = pd.DataFrame(index=['Door 1', 'Door 2', 'Door 3'])
table7['prior'] = 1/3
table7['likelihood'] = 0, 1, 0
update(table7)
table7
###Output
_____no_output_____
###Markdown
**Exercise:** M&M's are small candy-coated chocolates that come in a variety of colors. Mars, Inc., which makes M&M's, changes the mixture of colors from time to time. In 1995, they introduced blue M&M's. * In 1994, the color mix in a bag of plain M&M's was 30\% Brown, 20\% Yellow, 20\% Red, 10\% Green, 10\% Orange, 10\% Tan. * In 1996, it was 24\% Blue, 20\% Green, 16\% Orange, 14\% Yellow, 13\% Red, 13\% Brown. Suppose a friend of mine has two bags of M&M's, and he tells me that one is from 1994 and one from 1996. He won't tell me which is which, but he gives me one M&M from each bag. One is yellow and one is green. What is the probability that the yellow one came from the 1994 bag? Hint: The trick to this question is to define the hypotheses and the data carefully.
###Code
# Solution
# Hypotheses:
# A: yellow from 94, green from 96
# B: yellow from 96, green from 94
table8 = pd.DataFrame(index=['A', 'B'])
table8['prior'] = 1/2
# A: P(yellow | 94) * P(green | 96) = 0.20 * 0.20
# B: P(yellow | 96) * P(green | 94) = 0.14 * 0.10
table8['likelihood'] = 0.2*0.2, 0.14*0.1
update(table8)
table8
###Output
_____no_output_____
###Markdown
Bayes's Theorem Think Bayes, Second EditionCopyright 2020 Allen B. DowneyLicense: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/) In the previous chapter, we derived Bayes's Theorem:$$P(A|B) = \frac{P(A) P(B|A)}{P(B)}$$As an example, we used data from the General Social Survey and Bayes's Theorem to compute conditional probabilities.But since we had the complete dataset, we didn't really need Bayes's Theorem.It was easy enough to compute the left side of the equation directly, and no easier to compute the right side.But often we don't have a complete dataset, and in that case Bayes's Theorem is more useful. In this chapter, we'll use it to solve several more challenging problems related to conditional probability. The Cookie ProblemWe'll start with a thinly disguised version of an [urn problem](https://en.wikipedia.org/wiki/Urn_problem):> Suppose there are two bowls of cookies.>> * Bowl 1 contains 30 vanilla cookies and 10 chocolate cookies. >> * Bowl 2 contains 20 vanilla cookies and 20 chocolate cookies.>> Now suppose you choose one of the bowls at random and, without looking, choose a cookie at random. If the cookie is vanilla, what is the probability that it came from Bowl 1?What we want is the conditional probability that we chose from Bowl 1 given that we got a vanilla cookie, $P(B_1 | V)$.But what we get from the statement of the problem is:* The conditional probability of getting a vanilla cookie, given that we chose from Bowl 1, $P(V | B_1)$ and* The conditional probability of getting a vanilla cookie, given that we chose from Bowl 2, $P(V | B_2)$. Bayes's Theorem tells us how they are related:$$P(B_1|V) = \frac{P(B_1)~P(V|B_1)}{P(V)}$$The term on the left is what we want. The terms on the right are:- $P(B_1)$, the probability that we chose Bowl 1, unconditioned by what kind of cookie we got. Since the problem says we chose a bowl at random, we assume $P(B_1) = 1/2$.- $P(V|B_1)$, the probability of getting a vanilla cookie from Bowl 1, which is 3/4.- $P(V)$, the probability of drawing a vanilla cookie from either bowl. To compute $P(V)$, we can use the law of total probability:$$P(V) = P(B_1)~P(V|B_1) ~+~ P(B_2)~P(V|B_2)$$Plugging in the numbers from the statement of the problem, we have$$P(V) = (1/2)~(3/4) ~+~ (1/2)~(1/2) = 5/8$$We can also compute this result directly, like this: * Since we had an equal chance of choosing either bowl and the bowls contain the same number of cookies, we had the same chance of choosing any cookie. * Between the two bowls there are 50 vanilla and 30 chocolate cookies, so $P(V) = 5/8$. Finally, we can apply Bayes's Theorem to compute the posterior probability of Bowl 1:$$P(B_1|V) = (1/2)~(3/4)~/~(5/8) = 3/5$$This example demonstrates one use of Bayes's theorem: it provides away to get from $P(B|A)$ to $P(A|B)$. This strategy is useful in cases like this where it is easier to compute the terms on the right side than the term on the left. 
Diachronic BayesThere is another way to think of Bayes's theorem: it gives us a way toupdate the probability of a hypothesis, $H$, given some body of data, $D$.This interpretation is "diachronic", which means "related to change over time"; in this case, the probability of the hypotheses changes as we see new data.Rewriting Bayes's theorem with $H$ and $D$ yields:$$P(H|D) = \frac{P(H)~P(D|H)}{P(D)}$$In this interpretation, each term has a name:- $P(H)$ is the probability of the hypothesis before we see the data, called the prior probability, or just **prior**.- $P(H|D)$ is the probability of the hypothesis after we see the data, called the **posterior**.- $P(D|H)$ is the probability of the data under the hypothesis, called the **likelihood**.- $P(D)$ is the **total probability of the data**, under any hypothesis.Sometimes we can compute the prior based on background information. For example, the cookie problem specifies that we choose a bowl at random with equal probability.In other cases the prior is subjective; that is, reasonable people might disagree, either because they use different background information or because they interpret the same information differently.The likelihood is usually the easiest part to compute. In the cookieproblem, we are given the number of cookies in each bowl, so we can compute the probability of the data under each hypothesis. Computing the total probability of the data can be tricky. It is supposed to be the probability of seeing the data under any hypothesis at all, but it can be hard to nail down what that means.Most often we simplify things by specifying a set of hypotheses thatare:* Mutually exclusive, which means that only one of them can be true, and* Collectively exhaustive, which means one of them must be true.When these conditions apply, we can compute $P(D)$ using the law of total probability. For example, with two hypotheses, $H_1$ and $H_2$:$$P(D) = P(H_1)~P(D|H_1) + P(H_2)~P(D|H_2)$$And more generally, with any number of hypotheses:$$P(D) = \sum_i P(H_i)~P(D|H_i)$$The process in this section, using data and a prior probability to compute a posterior probability, is called a **Bayesian update**. Bayes TablesA convenient tool for doing a Bayesian update is a Bayes table.You can write a Bayes table on paper or use a spreadsheet, but in this section I'll use a Pandas `DataFrame`.First I'll make empty `DataFrame` with one row for each hypothesis:
###Code
import pandas as pd
table = pd.DataFrame(index=['Bowl 1', 'Bowl 2'])
###Output
_____no_output_____
###Markdown
Now I'll add a column to represent the priors:
###Code
table['prior'] = 1/2, 1/2
table
###Output
_____no_output_____
###Markdown
And a column for the likelihoods:
###Code
table['likelihood'] = 3/4, 1/2
table
###Output
_____no_output_____
###Markdown
Here we see a difference from the previous method: we compute likelihoods for both hypotheses, not just Bowl 1:* The chance of getting a vanilla cookie from Bowl 1 is 3/4.* The chance of getting a vanilla cookie from Bowl 2 is 1/2.You might notice that the likelihoods don't add up to 1. That's OK; each of them is a probability conditioned on a different hypothesis.There's no reason they should add up to 1 and no problem if they don't.The next step is similar to what we did with Bayes's Theorem; we multiply the priors by the likelihoods:
###Code
table['unnorm'] = table['prior'] * table['likelihood']
table
###Output
_____no_output_____
###Markdown
I call the result `unnorm` because these values are the "unnormalized posteriors". Each of them is the product of a prior and a likelihood:$$P(B_i)~P(D|B_i)$$which is the numerator of Bayes's Theorem. If we add them up, we have$$P(B_1)~P(D|B_1) + P(B_2)~P(D|B_2)$$which is the denominator of Bayes's Theorem, $P(D)$.So we can compute the total probability of the data like this:
###Code
prob_data = table['unnorm'].sum()
prob_data
###Output
_____no_output_____
###Markdown
Notice that we get 5/8, which is what we got by computing $P(D)$ directly.And we can compute the posterior probabilities like this:
###Code
table['posterior'] = table['unnorm'] / prob_data
table
###Output
_____no_output_____
###Markdown
The posterior probability for Bowl 1 is 0.6, which is what we got using Bayes's Theorem explicitly.As a bonus, we also get the posterior probability of Bowl 2, which is 0.4.When we add up the unnormalized posteriors and divide through, we force the posteriors to add up to 1. This process is called "normalization", which is why the total probability of the data is also called the "normalizing constant". The Dice ProblemA Bayes table can also solve problems with more than two hypotheses. For example:> Suppose I have a box with a 6-sided die, an 8-sided die, and a 12-sided die. I choose one of the dice at random, roll it, and report that the outcome is a 1. What is the probability that I chose the 6-sided die?In this example, there are three hypotheses with equal priorprobabilities. The data is my report that the outcome is a 1. If I chose the 6-sided die, the probability of the data is1/6. If I chose the 8-sided die, the probability is 1/8, and if I chose the 12-sided die, it's 1/12.Here's a Bayes table that uses integers to represent the hypotheses:
###Code
table2 = pd.DataFrame(index=[6, 8, 12])
###Output
_____no_output_____
###Markdown
I'll use fractions to represent the prior probabilities and the likelihoods. That way they don't get rounded off to floating-point numbers.
###Code
from fractions import Fraction
table2['prior'] = Fraction(1, 3)
table2['likelihood'] = Fraction(1, 6), Fraction(1, 8), Fraction(1, 12)
table2
###Output
_____no_output_____
###Markdown
Once you have priors and likelhoods, the remaining steps are always the same, so I'll put them in a function:
###Code
def update(table):
"""Compute the posterior probabilities."""
table['unnorm'] = table['prior'] * table['likelihood']
prob_data = table['unnorm'].sum()
table['posterior'] = table['unnorm'] / prob_data
return prob_data
###Output
_____no_output_____
###Markdown
And call it like this.
###Code
prob_data = update(table2)
###Output
_____no_output_____
###Markdown
Here is the final Bayes table:
###Code
table2
###Output
_____no_output_____
###Markdown
The posterior probability of the 6-sided die is 4/9, which is a little more than the probabilities for the other dice, 3/9 and 2/9.Intuitively, the 6-sided die is the most likely because it had the highest likelihood of producing the outcome we saw. The Monty Hall ProblemNext we'll use a Bayes table to solve one of the most contentious problems in probability.The Monty Hall problem is based on a game show called *Let's Make a Deal*. If you are a contestant on the show, here's how the game works:* The host, Monty Hall, shows you three closed doors -- numbered 1, 2, and 3 -- and tells you that there is a prize behind each door.* One prize is valuable (traditionally a car), the other two are less valuable (traditionally goats).* The object of the game is to guess which door has the car. If you guess right, you get to keep the car.Suppose you pick Door 1. Before opening the door you chose, Monty opens Door 3 and reveals a goat. Then Monty offers you the option to stick with your original choice or switch to the remaining unopened door. To maximize your chance of winning the car, should you stick with Door 1 or switch to Door 2?To answer this question, we have to make some assumptions about the behavior of the host:1. Monty always opens a door and offers you the option to switch.2. He never opens the door you picked or the door with the car.3. If you choose the door with the car, he chooses one of the other doors at random.Under these assumptions, you are better off switching. If you stick, you win $1/3$ of the time. If you switch, you win $2/3$ of the time.If you have not encountered this problem before, you might find thatanswer surprising. You would not be alone; many people have the strongintuition that it doesn't matter if you stick or switch. There are twodoors left, they reason, so the chance that the car is behind Door A is 50%. But that is wrong.To see why, it can help to use a Bayes table. We start with threehypotheses: the car might be behind Door 1, 2, or 3. According to thestatement of the problem, the prior probability for each door is 1/3.
###Code
table3 = pd.DataFrame(index=['Door 1', 'Door 2', 'Door 3'])
table3['prior'] = Fraction(1, 3)
table3
###Output
_____no_output_____
###Markdown
The data is that Monty opened Door 3 and revealed a goat. So let'sconsider the probability of the data under each hypothesis:* If the car is behind Door 1, Monty chooses Door 2 or 3 at random, so the probability he opens Door 3 is $1/2$.* If the car is behind Door 2, Monty has to open Door 3, so the probability of the data under this hypothesis is 1.* If the car is behind Door 3, Monty does not open it, so the probability of the data under this hypothesis is 0.Here are the likelihoods.
###Code
table3['likelihood'] = Fraction(1, 2), 1, 0
table3
###Output
_____no_output_____
###Markdown
Now that we have priors and likelihoods, we can use `update` to compute the posterior probabilities.
###Code
update(table3)
table3
###Output
_____no_output_____
###Markdown
After Monty opens Door 3, the posterior probability of Door 1 is $1/3$;the posterior probability of Door 2 is $2/3$.So you are better off switching from Door 1 to Door 2. As this example shows, our intuition for probability is not alwaysreliable. Bayes's Theorem can help by providing a divide-and-conquer strategy:1. First, write down the hypotheses and the data.2. Next, figure out the prior probabilities.3. Finally, compute the likelihood of the data under each hypothesis.The Bayes table does the rest. SummaryIn this chapter we solved the Cookie Problem using Bayes's theorem explicitly and using a Bayes table.There's no real difference between these methods, but the Bayes table can make it easier to compute the total probability of the data, especially for problems with more than two hypotheses.Then we solved the Dice Problem, which we will see again in the next chapter, and the Monty Hall problem, which you might hope you never see again.If the Monty Hall problem makes your head hurt, you are not alone. But I think it demonstrates the power of Bayes's Theorem as a divide-and-conquer strategy for solving tricky problems. And I hope it provides some insight into *why* the answer is what it is.When Monty opens a door, he provides information we can use to update our belief about the location of the car. Part of the information is obvious. If he opens Door 3, we know the car is not behind Door 3. But part of the information is more subtle. Opening Door 3 is more likely if the car is behind Door 2, and less likely if it is behind Door 1. So the data is evidence in favor of Door 2. We will come back to this notion of evidence in future chapters.In the next chapter we'll extend the Cookie Problem and the Dice Problem, and take the next step from basic probability to Bayesian statistics.But first, you might want to work on the exercises. Exercises **Exercise:** Suppose you have two coins in a box.One is a normal coin with heads on one side and tails on the other, and one is a trick coin with heads on both sides. You choose a coin at random and see that one of the sides is heads.What is the probability that you chose the trick coin?
###Code
# Solution
table4 = pd.DataFrame(index=['Normal', 'Trick'])
table4['prior'] = 1/2
table4['likelihood'] = 1/2, 1
update(table4)
table4
###Output
_____no_output_____
###Markdown
**Exercise:** Suppose you meet someone and learn that they have two children.You ask if either child is a girl and they say yes.What is the probability that both children are girls?Hint: Start with four equally likely hypotheses.
###Code
# Solution
table5 = pd.DataFrame(index=['GG', 'GB', 'BG', 'BB'])
table5['prior'] = 1/4
table5['likelihood'] = 1, 1, 1, 0
update(table5)
table5
###Output
_____no_output_____
###Markdown
**Exercise:** There are many variations of the [Monty Hall problem](https://en.wikipedia.org/wiki/Monty_Hall_problem). For example, suppose Monty always chooses Door 2 if he can, andonly chooses Door 3 if he has to (because the car is behind Door 2).If you choose Door 1 and Monty opens Door 2, what is the probability the car is behind Door 3?If you choose Door 1 and Monty opens Door 3, what is the probability the car is behind Door 2?
###Code
# Solution
# If the car is behind Door 1, Monty would always open Door 2
# If the car was behind Door 2, Monty would have opened Door 3
# If the car is behind Door 3, Monty would always open Door 2
table6 = pd.DataFrame(index=['Door 1', 'Door 2', 'Door 3'])
table6['prior'] = 1/3
table6['likelihood'] = 1, 0, 1
update(table6)
table6
# Solution
# If the car is behind Door 1, Monty would have opened Door 2
# If the car is behind Door 2, Monty would always open Door 3
# If the car is behind Door 3, Monty would have opened Door 3
table7 = pd.DataFrame(index=['Door 1', 'Door 2', 'Door 3'])
table7['prior'] = 1/3
table7['likelihood'] = 0, 1, 0
update(table7)
table7
###Output
_____no_output_____
###Markdown
**Exercise:** M&M's are small candy-coated chocolates that come in a variety of colors. Mars, Inc., which makes M&M's, changes the mixture of colors from time to time.In 1995, they introduced blue M&M's. * In 1994, the color mix in a bag of plain M&M's was 30\% Brown, 20\% Yellow, 20\% Red, 10\% Green, 10\% Orange, 10\% Tan. * In 1996, it was 24\% Blue , 20\% Green, 16\% Orange, 14\% Yellow, 13\% Red, 13\% Brown.Suppose a friend of mine has two bags of M&M's, and he tells methat one is from 1994 and one from 1996. He won't tell me which iswhich, but he gives me one M&M from each bag. One is yellow andone is green. What is the probability that the yellow one camefrom the 1994 bag?Hint: The trick to this question is to define the hypotheses and the data carefully.
###Code
# Solution
# Hypotheses:
# A: yellow from 94, green from 96
# B: yellow from 96, green from 94
table8 = pd.DataFrame(index=['A', 'B'])
table8['prior'] = 1/2
table8['likelihood'] = 0.2*0.2, 0.14*0.1
update(table8)
table8
###Output
_____no_output_____
###Markdown
Bayes's Theorem Think Bayes, Second EditionCopyright 2020 Allen B. DowneyLicense: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/) In the previous chapter, we derived Bayes's Theorem:$$P(A|B) = \frac{P(A) P(B|A)}{P(B)}$$As an example, we used data from the General Social Survey and Bayes's Theorem to compute conditional probabilities.But since we had the complete dataset, we didn't really need Bayes's Theorem.It was easy enough to compute the left side of the equation directly, and no easier to compute the right side.But often we don't have a complete dataset, and in that case Bayes's Theorem is more useful. In this chapter, we'll use it to solve several more challenging problems related to conditional probability. The Cookie ProblemWe'll start with a thinly disguised version of an [urn problem](https://en.wikipedia.org/wiki/Urn_problem):> Suppose there are two bowls of cookies.>> * Bowl 1 contains 30 vanilla cookies and 10 chocolate cookies. >> * Bowl 2 contains 20 vanilla cookies and 20 chocolate cookies.>> Now suppose you choose one of the bowls at random and, without looking, choose a cookie at random. If the cookie is vanilla, what is the probability that it came from Bowl 1?What we want is the conditional probability that we chose from Bowl 1 given that we got a vanilla cookie, $P(B_1 | V)$.But what we get from the statement of the problem is:* The conditional probability of getting a vanilla cookie, given that we chose from Bowl 1, $P(V | B_1)$ and* The conditional probability of getting a vanilla cookie, given that we chose from Bowl 2, $P(V | B_2)$. Bayes's Theorem tells us how they are related:$$P(B_1|V) = \frac{P(B_1)~P(V|B_1)}{P(V)}$$The term on the left is what we want. The terms on the right are:- $P(B_1)$, the probability that we chose Bowl 1, unconditioned by what kind of cookie we got. Since the problem says we chose a bowl at random, we assume $P(B_1) = 1/2$.- $P(V|B_1)$, the probability of getting a vanilla cookie from Bowl 1, which is 3/4.- $P(V)$, the probability of drawing a vanilla cookie from either bowl. To compute $P(V)$, we can use the law of total probability:$$P(V) = P(B_1)~P(V|B_1) ~+~ P(B_2)~P(V|B_2)$$Plugging in the numbers from the statement of the problem, we have$$P(V) = (1/2)~(3/4) ~+~ (1/2)~(1/2) = 5/8$$We can also compute this result directly, like this: * Since we had an equal chance of choosing either bowl and the bowls contain the same number of cookies, we had the same chance of choosing any cookie. * Between the two bowls there are 50 vanilla and 30 chocolate cookies, so $P(V) = 5/8$. Finally, we can apply Bayes's Theorem to compute the posterior probability of Bowl 1:$$P(B_1|V) = (1/2)~(3/4)~/~(5/8) = 3/5$$This example demonstrates one use of Bayes's theorem: it provides away to get from $P(B|A)$ to $P(A|B)$. This strategy is useful in cases like this where it is easier to compute the terms on the right side than the term on the left. 
Diachronic BayesThere is another way to think of Bayes's theorem: it gives us a way toupdate the probability of a hypothesis, $H$, given some body of data, $D$.This interpretation is "diachronic", which means "related to change over time"; in this case, the probability of the hypotheses changes as we see new data.Rewriting Bayes's theorem with $H$ and $D$ yields:$$P(H|D) = \frac{P(H)~P(D|H)}{P(D)}$$In this interpretation, each term has a name:- $P(H)$ is the probability of the hypothesis before we see the data, called the prior probability, or just **prior**.- $P(H|D)$ is the probability of the hypothesis after we see the data, called the **posterior**.- $P(D|H)$ is the probability of the data under the hypothesis, called the **likelihood**.- $P(D)$ is the **total probability of the data**, under any hypothesis.Sometimes we can compute the prior based on background information. For example, the cookie problem specifies that we choose a bowl at random with equal probability.In other cases the prior is subjective; that is, reasonable people might disagree, either because they use different background information or because they interpret the same information differently.The likelihood is usually the easiest part to compute. In the cookieproblem, we are given the number of cookies in each bowl, so we can compute the probability of the data under each hypothesis. Computing the total probability of the data can be tricky. It is supposed to be the probability of seeing the data under any hypothesis at all, but it can be hard to nail down what that means.Most often we simplify things by specifying a set of hypotheses thatare:* Mutually exclusive: If one hypothesis is true, the others must be false, and* Collectively exhaustive: There are no other possibilities.Together, these conditions imply that exactly one of the hypotheses in the set must be true.When these conditions apply, we can compute $P(D)$ using the law of total probability. For example, with two hypotheses, $H_1$ and $H_2$:$$P(D) = P(H_1)~P(D|H_1) + P(H_2)~P(D|H_2)$$And more generally, with any number of hypotheses:$$P(D) = \sum_i P(H_i)~P(D|H_i)$$The process in this section, using data and a prior probability to compute a posterior probability, is called a **Bayesian update**. Bayes TablesA convenient tool for doing a Bayesian update is a Bayes table.You can write a Bayes table on paper or use a spreadsheet, but in this section I'll use a Pandas `DataFrame`.First I'll make empty `DataFrame` with one row for each hypothesis:
###Code
import pandas as pd
table = pd.DataFrame(index=['Bowl 1', 'Bowl 2'])
###Output
_____no_output_____
###Markdown
Now I'll add a column to represent the priors:
###Code
table['prior'] = 1/2, 1/2
table
###Output
_____no_output_____
###Markdown
And a column for the likelihoods:
###Code
table['likelihood'] = 3/4, 1/2
table
###Output
_____no_output_____
###Markdown
Here we see a difference from the previous method: we compute likelihoods for both hypotheses, not just Bowl 1:* The chance of getting a vanilla cookie from Bowl 1 is 3/4.* The chance of getting a vanilla cookie from Bowl 2 is 1/2.You might notice that the likelihoods don't add up to 1. That's OK; each of them is a probability conditioned on a different hypothesis.There's no reason they should add up to 1 and no problem if they don't.The next step is similar to what we did with Bayes's Theorem; we multiply the priors by the likelihoods:
###Code
table['unnorm'] = table['prior'] * table['likelihood']
table
###Output
_____no_output_____
###Markdown
I call the result `unnorm` because these values are the "unnormalized posteriors". Each of them is the product of a prior and a likelihood:$$P(B_i)~P(D|B_i)$$which is the numerator of Bayes's Theorem. If we add them up, we have$$P(B_1)~P(D|B_1) + P(B_2)~P(D|B_2)$$which is the denominator of Bayes's Theorem, $P(D)$.So we can compute the total probability of the data like this:
###Code
prob_data = table['unnorm'].sum()
prob_data
###Output
_____no_output_____
###Markdown
Notice that we get 5/8, which is what we got by computing $P(D)$ directly.And we can compute the posterior probabilities like this:
###Code
table['posterior'] = table['unnorm'] / prob_data
table
###Output
_____no_output_____
###Markdown
The posterior probability for Bowl 1 is 0.6, which is what we got using Bayes's Theorem explicitly.As a bonus, we also get the posterior probability of Bowl 2, which is 0.4.When we add up the unnormalized posteriors and divide through, we force the posteriors to add up to 1. This process is called "normalization", which is why the total probability of the data is also called the "[normalizing constant](https://en.wikipedia.org/wiki/Normalizing_constantBayes'_theorem)". The Dice Problem A Bayes table can also solve problems with more than two hypotheses. For example:> Suppose I have a box with a 6-sided die, an 8-sided die, and a 12-sided die. I choose one of the dice at random, roll it, and report that the outcome is a 1. What is the probability that I chose the 6-sided die?In this example, there are three hypotheses with equal priorprobabilities. The data is my report that the outcome is a 1. If I chose the 6-sided die, the probability of the data is1/6. If I chose the 8-sided die, the probability is 1/8, and if I chose the 12-sided die, it's 1/12.Here's a Bayes table that uses integers to represent the hypotheses:
###Code
table2 = pd.DataFrame(index=[6, 8, 12])
###Output
_____no_output_____
###Markdown
I'll use fractions to represent the prior probabilities and the likelihoods. That way they don't get rounded off to floating-point numbers.
###Code
from fractions import Fraction
table2['prior'] = Fraction(1, 3)
table2['likelihood'] = Fraction(1, 6), Fraction(1, 8), Fraction(1, 12)
table2
###Output
_____no_output_____
###Markdown
Once you have priors and likelhoods, the remaining steps are always the same, so I'll put them in a function:
###Code
def update(table):
"""Compute the posterior probabilities."""
table['unnorm'] = table['prior'] * table['likelihood']
prob_data = table['unnorm'].sum()
table['posterior'] = table['unnorm'] / prob_data
return prob_data
prob_data = update(table2)
print(prob_data)
###Output
1/8
###Markdown
The total probability of the data is $1/8$. And here is the final Bayes table:
###Code
table2
###Output
_____no_output_____
###Markdown
The posterior probability of the 6-sided die is 4/9. The Monty Hall problemNext we'll use a Bayes table to solve one of the most contentious problems in probability.The Monty Hall problem is based on a game show called *Let's Make a Deal*. If you are a contestant on the show, here's how the game works:* The host, Monty Hall, shows you three closed doors numbered 1, 2, and 3. He tells you that there is a prize behind each door.* One prize is valuable (traditionally a car), the other two are less valuable (traditionally goats).* The object of the game is to guess which door has the car. If you guess right, you get to keep the car.Suppose you pick Door 1. Before opening the door you chose, Monty opens Door 3 and reveals a goat. Then Monty offers you the option to stick with your original choice or switch to the remaining unopened door.To maximize your chance of winning the car, should you stick with Door 1 or switch to Door 2?To answer this question, we have to make some assumptions about the behavior of the host:1. Monty always opens a door and offers you the option to switch.2. He never opens the door you picked or the door with the car.3. If you choose the door with the car, he chooses one of the other doors at random.Under these assumptions, you are better off switching. If you stick, you win $1/3$ of the time. If you switch, you win $2/3$ of the time.If you have not encountered this problem before, you might find thatanswer surprising. You would not be alone; many people have the strongintuition that it doesn't matter if you stick or switch. There are twodoors left, they reason, so the chance that the car is behind Door A is 50%. But that is wrong.To see why, it can help to use a Bayes table. We start with threehypotheses: the car might be behind Door 1, 2, or 3. According to thestatement of the problem, the prior probability for each door is 1/3.
###Code
table3 = pd.DataFrame(index=['Door 1', 'Door 2', 'Door 3'])
table3['prior'] = Fraction(1, 3)
table3
###Output
_____no_output_____
###Markdown
The data is that Monty opened Door 3 and revealed a goat. So let'sconsider the probability of the data under each hypothesis:* If the car is behind Door 3, Monty does not open it, so the probability of the data under this hypothesis is 0.* If the car is behind Door 2, Monty has to open Door 3, so the probability of the data under this hypothesis is 1.* If the car is behind Door 1, Monty choose Door 2 or 3 at random; the probability he would open Door 3 is $1/2$.Here are the likelihoods.
###Code
table3['likelihood'] = Fraction(1, 2), 1, 0
table3
###Output
_____no_output_____
###Markdown
Now that we have priors and likelihoods, we can use `update` to compute the posterior probabilities.
###Code
update(table3)
table3
###Output
_____no_output_____
###Markdown
After Monty opens Door 3, the posterior probability of Door 1 is $1/3$;the posterior probability of Door 2 is $2/3$.So you are better off switching from Door 1 to Door 2. As this example shows, our intuition for probability is not alwaysreliable. Bayes's Theorem can help by providing a divide-and-conquer strategy:1. First, write down the hypotheses and the data.2. Next, figure out the prior probabilities.3. Finally, compute the likelihood of the data under each hypothesis.The Bayes table does the rest. SummaryIn this chapter we solved the Cookie Problem using Bayes's theorem explicitly and using a Bayes table.There's no real difference between these methods, but the Bayes table can make it easier to compute the total probability of the data, especially for problems with more than two hypotheses.Then we solved the Dice Problem, which we will see again in the next chapter, and the Monty Hall problem, which you might hope you never see again.If the Monty Hall problem makes your head hurt, you are not alone. But I think it demonstrates the power of Bayes's Theorem as a divide-and-conquer strategy for solving tricky problems. And I hope it provides some insight into *why* the answer is what it is.When Monty opens a door, he provides information we can use to update our belief about the location of the car. Part of the information is obvious. If he opens Door 3, we know the car is not behind Door 3. But part of the information is more subtle. Opening Door 3 is more likely if the car is behind Door 2, and less likely if it is behind Door 1. So the data is evidence in favor of Door 2. We will come back to this notion of evidence in future chapters.In the next chapter we'll extend the Cookie Problem and the Dice Problem, and take the next step from basic probability to Bayesian statistics.But first, you might want to work on the exercises. Exercises **Exercise:** Suppose you have two coins in a box.One is a normal coin with heads on one side and tails on the other, and one is a trick coin with heads on both sides. You choose a coin at random and see that one of the sides is heads.What is the probability that you chose the trick coin?
###Code
# Solution
table4 = pd.DataFrame(index=['Normal', 'Trick'])
table4['prior'] = 1/2
table4['likelihood'] = 1/2, 1
update(table4)
table4
###Output
_____no_output_____
###Markdown
**Exercise:** Suppose you meet someone and learn that they have two children.You ask if either child is a girl and they say yes.What is the probability that both children are girls?Hint: Start with four equally likely hypotheses.
###Code
# Solution
table5 = pd.DataFrame(index=['GG', 'GB', 'BG', 'BB'])
table5['prior'] = 1/4
table5['likelihood'] = 1, 1, 1, 0
update(table5)
table5
###Output
_____no_output_____
###Markdown
**Exercise:** There are many variations of the [Monty Hall problem](https://en.wikipedia.org/wiki/Monty_Hall_problem}). For example, suppose Monty always chooses Door 2 if he can, andonly chooses Door 3 if he has to (because the car is behind Door 2).If you choose Door 1 and Monty opens Door 2, what is the probability the car is behind Door 3?If you choose Door 1 and Monty opens Door 3, what is the probability the car is behind Door 2?
###Code
# Solution
# If the car is behind Door 1, Monty would always open Door 2
# If the car is behind Door 2, Monty would have opened Door 3
# If the car is behind Door 3, Monty would always open Door 2
table6 = pd.DataFrame(index=['Door 1', 'Door 2', 'Door 3'])
table6['prior'] = 1/3
table6['likelihood'] = 1, 0, 1
update(table6)
table6
# Solution
# If the car is behind Door 1, Monty would have opened Door 2
# If the car is behind Door 2, Monty would always Door 3
# If the car is behind Door 3, Monty would have opened Door 3
table7 = pd.DataFrame(index=['Door 1', 'Door 2', 'Door 3'])
table7['prior'] = 1/3
table7['likelihood'] = 0, 1, 0
update(table7)
table7
###Output
_____no_output_____
###Markdown
**Exercise:** M&M's are small candy-coated chocolates that come in a variety of colors. Mars, Inc., which makes M&M's, changes the mixture of colors from time to time.In 1995, they introduced blue M&M's. * In 1994, the color mix in a bag of plain M&M's was 30\% Brown, 20\% Yellow, 20\% Red, 10\% Green, 10\% Orange, 10\% Tan. * In 1996, it was 24\% Blue , 20\% Green, 16\% Orange, 14\% Yellow, 13\% Red, 13\% Brown.Suppose a friend of mine has two bags of M&M's, and he tells methat one is from 1994 and one from 1996. He won't tell me which iswhich, but he gives me one M&M from each bag. One is yellow andone is green. What is the probability that the yellow one camefrom the 1994 bag?Hint: The trick to this question is to define the hypotheses and the data carefully.
###Code
# Solution
# Hypotheses:
# A: yellow from 94, green from 96
# B: yellow from 96, green from 94
table8 = pd.DataFrame(index=['A', 'B'])
table8['prior'] = 1/2
table8['likelihood'] = 0.2*0.2, 0.14*0.1
update(table8)
table8
###Output
_____no_output_____
###Markdown
**Exercise:** M&M's are small candy-coated chocolates that come in a variety of colors. Mars, Inc., which makes M&M's, changes the mixture of colors from time to time.In 1995, they introduced blue M&M's. * In 1994, the color mix in a bag of plain M&M's was 30\% Brown, 20\% Yellow, 20\% Red, 10\% Green, 10\% Orange, 10\% Tan. * In 1996, it was 24\% Blue , 20\% Green, 16\% Orange, 14\% Yellow, 13\% Red, 13\% Brown.Suppose a friend of mine has two bags of M&M's, and he tells methat one is from 1994 and one from 1996. He won't tell me which iswhich, but he gives me one M&M from each bag. One is yellow andone is green. What is the probability that the yellow one camefrom the 1994 bag?Hint: The trick to this question is to define the hypotheses and the data carefully.
###Code
# Solution
# Hypotheses:
# A: yellow from 94, green from 96
# B: yellow from 96, green from 94
table8 = pd.DataFrame(index=['A', 'B'])
table8['prior'] = 1/2
table8['likelihood'] = 0.2*0.2, 0.14*0.1
update(table8)
table8
###Output
_____no_output_____
###Markdown
Bayes's Theorem Think Bayes, Second EditionCopyright 2020 Allen B. DowneyLicense: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/) In the previous chapter, we derived Bayes's Theorem:$P(A|B) = \frac{P(A) P(B|A)}{P(B)}$As an example, we used data from the General Social Survey and Bayes's Theorem to compute conditional probabilities.But since we had the complete dataset, we didn't really need Bayes's Theorem.It was easy enough to compute the left side of the equation directly, and no easier to compute the right side.But often we don't have a complete dataset, and in that case Bayes's Theorem is more useful. In this chapter, we'll use it to solve several more challenging problems related to conditional probability. The Cookie ProblemWe'll start with a thinly disguised version of an [urn problem](https://en.wikipedia.org/wiki/Urn_problem):> Suppose there are two bowls of cookies.>> Bowl 1 contains 30 vanilla cookies and 10 chocolate cookies. >> Bowl 2 contains 20 vanilla cookies and 20 chocolate cookies.>> Now suppose you choose one of the bowls at random and, without looking, choose a cookie at random.>> If the cookie is vanilla, what is the probability that it came from Bowl 1?What we want is the conditional probability that we chose from Bowl 1 given that we got a vanilla cookie, $P(B_1 | V)$.But what we get from the statement of the problem is:* The conditional probability of getting a vanilla cookie, given that we chose from Bowl 1, $P(V | B_1)$ and* The conditional probability of getting a vanilla cookie, given that we chose from Bowl 2, $P(V | B_2)$. Bayes's Theorem tells us how they are related:$P(B_1|V) = \frac{P(B_1)~P(V|B_1)}{P(V)})$ The term on the left is what we want. The terms on the right are:- $P(B_1)$, the probability that we chose Bowl 1, unconditioned by what kind of cookie we got. Since the problem says we chose a bowl at random, we assume $P(B_1) = 1/2$.- $P(V|B_1)$, the probability of getting a vanilla cookie from Bowl 1, which is 3/4.- $P(V)$, the probability of drawing a vanilla cookie from either bowl. To compute $P(V)$, we can use the law of total probability:$P(V) = P(B_1)~P(V|B_1) ~+~ P(B_2)~P(V|B_2)$Plugging in the numbers from the statement of the problem, we have$P(V) = (1/2)~(3/4) ~+~ (1/2)~(1/2) = 5/8$.We can also compute this result directly, like this: * Since we had an equal chance of choosing either bowl and the bowls contain the same number of cookies, we had the same chance of choosing any cookie. * Between the two bowls there are 50 vanilla and 30 chocolate cookies, so $P(V) = 5/8$. Finally, we can apply Bayes's Theorem to compute the posterior probability of Bowl 1:$P(B_1|V) = (1/2)~(3/4)~/~(5/8) = 3/5$.This example demonstrates one use of Bayes's theorem: it provides away to get from $P(B|A)$ to $P(A|B)$. This strategy is useful in cases like this where it is easier to compute the terms on the right side than the term on the left. 
Diachronic BayesThere is another way to think of Bayes's theorem: it gives us a way toupdate the probability of a hypothesis, $H$, given some body of data, $D$.This interpretation is "diachronic", which means "related to change over time"; in this case, the probability of the hypotheses changes as we see new data.Rewriting Bayes's theorem with $H$ and $D$ yields:$P(H|D) = \frac{P(H)~P(D|H)}{P(D)}$ In this interpretation, each term has a name:- $P(H)$ is the probability of the hypothesis before we see the data, called the prior probability, or just **prior**.- $P(H|D)$ is the probability of the hypothesis after we see the data, called the **posterior**.- $P(D|H)$ is the probability of the data under the hypothesis, called the **likelihood**.- $P(D)$ is the **total probability of the data**, under any hypothesis.Sometimes we can compute the prior based on background information. For example, the cookie problem specifies that we choose a bowl at random with equal probability.In other cases the prior is subjective; that is, reasonable people might disagree, either because they use different background information or because they interpret the same information differently.The likelihood is usually the easiest part to compute. In the cookieproblem, we are given the number of cookies in each bowl, so we can compute the probability of the data under each hypothesis. Computing the total probability of the data can be tricky. It is supposed to be the probability of seeing the data under any hypothesis at all, but it can be hard to nail down what that means.Most often we simplify things by specifying a set of hypotheses thatare:* Mutually exclusive: If one hypothesis is true, the others must be false, and* Collectively exhaustive: There are no other possibilities.Together, these conditions imply that exactly one of the hypotheses in the set must be true.When these conditions apply, we can compute $P(D)$ using the law of total probability. For example, with two hypotheses, $H_1$ and $H_2$:$P(D) = P(H_1)~P(D|H_1) + P(H_2)~P(D|H_2)$And more generally, with any number of hypotheses:$P(D) = \sum_i P(H_i)~P(D|H_i)$The process in this section, using data to and a prior probability to compute a posterior probability, is called a **Bayesian update**. Bayes TablesA convenient tool for doing a Bayesian update is a Bayes table.You can write a Bayes table on paper or use a spreadsheet, but in this section I'll use a Pandas `DataFrame`.First I'll make empty `DataFrame` with one row for each hypothesis:
###Code
import pandas as pd
table = pd.DataFrame(index=['Bowl 1', 'Bowl 2'])
###Output
_____no_output_____
###Markdown
Now I'll add a column to represent the priors:
###Code
table['prior'] = 1/2, 1/2
table
###Output
_____no_output_____
###Markdown
And a column for the likelihoods:
###Code
table['likelihood'] = 3/4, 1/2
table
###Output
_____no_output_____
###Markdown
Here we see a difference from the previous method: we compute likelihoods for both hypotheses, not just Bowl 1:* The chance of getting a vanilla cookie from Bowl 1 is 3/4.* The chance of getting a vanilla cookie from Bowl 2 is 1/2.You might notice that the likelihoods don't add up to 1. That's OK; each of them is a probability conditioned on a different hypothesis.There's no reason they should add up to 1 and no problem if they don't.The next step is similar to what we did with Bayes's Theorem; we multiply the priors by the likelihoods:
###Code
table['unnorm'] = table['prior'] * table['likelihood']
table
###Output
_____no_output_____
###Markdown
I call the result `unnorm` because these values are the "unnormalized posteriors". Each of them is the product of a prior and a likelihood:$P(B_i)~P(D|B_i)$which is the numerator of Bayes's Theorem. If we add them up, we have$P(B_1)~P(D|B_1) + P(B_2)~P(D|B_2)$which is the denominator of Bayes's Theorem, $P(D)$.So we can compute the total probability of the data like this:
###Code
prob_data = table['unnorm'].sum()
prob_data
###Output
_____no_output_____
###Markdown
Notice that we get 5/8, which is what we got by computing $P(D)$ directly.And we can compute the posterior probabilities like this:
###Code
table['posterior'] = table['unnorm'] / prob_data
table
###Output
_____no_output_____
###Markdown
The posterior probability for Bowl 1 is 0.6, which is what we got using Bayes's Theorem explicitly.As a bonus, we also get the posterior probability of Bowl 2, which is 0.4.When we add up the unnormalized posteriors and divide through, we force the posteriors to add up to 1. This process is called "normalization", which is why the total probability of the data is also called the "[normalizing constant](https://en.wikipedia.org/wiki/Normalizing_constantBayes'_theorem)". The Dice Problem A Bayes table can also solve problems with more than two hypotheses. For example:> Suppose I have a box with a 6-sided die, an 8-sided die, and a 12-sided die. I choose one of the dice at random, roll it, and report that the outcome is a 1. What is the probability that I chose the 6-sided die?In this example, there are three hypotheses with equal priorprobabilities. The data is my report that the outcome is a 1. If I chose the 6-sided die, the probability of the data is1/6. If I chose the 8-sided die, the probability is 1/8, and if I chose the 12-sided die, it's 1/12.Here's a Bayes table that uses integers to represent the hypotheses:
###Code
table2 = pd.DataFrame(index=[6, 8, 12])
###Output
_____no_output_____
###Markdown
I'll use fractions to represent the prior probabilities and the likelihoods. That way they don't get rounded off to floating-point numbers.
###Code
from fractions import Fraction
table2['prior'] = Fraction(1, 3)
table2['likelihood'] = Fraction(1, 6), Fraction(1, 8), Fraction(1, 12)
table2
###Output
_____no_output_____
###Markdown
Once you have priors and likelhoods, the remaining steps are always the same, so I'll put them in a function:
###Code
def update(table):
"""Compute the posterior probabilities.
table: DataFrame with priors and likelihoods
returns: total probability of the data
"""
table['unnorm'] = table['prior'] * table['likelihood']
prob_data = table['unnorm'].sum()
table['posterior'] = table['unnorm'] / prob_data
return prob_data
prob_data = update(table2)
print(prob_data)
###Output
1/8
###Markdown
The total probability of the data is $1/8$. And here is the final Bayes table:
###Code
table2
###Output
_____no_output_____
###Markdown
The posterior probability of the 6-sided die is 4/9. The Monty Hall problemNext we'll use a Bayes table to solve one of the most contentious problems in probability.The Monty Hall problem is based on a game show called *Let's Make a Deal*.If you are a contestant on the show, here's how the game works:- The host, Monty Hall, shows you three closed doors numbered 1, 2, and 3. He tells you that there is a prize behind each door.- One prize is valuable (traditionally a car), the other two are less valuable (traditionally goats).- The object of the game is to guess which door has the car. If you guess right, you get to keep the car.Suppose you pick Door 1. Before opening the door you chose, Monty opensDoor 3 and reveals a goat. Then Monty offers you the option to stickwith your original choice or switch to the remaining unopened door.To maximize your chance of winning the car, should you stick with Door 1or switch to Door 2?To answer this question, we have to make some assumptions about the behavior of the host:1. Monty always opens a door and offers you the option to switch.2. He never opens the door you picked or the door with the car.3. If you choose the door with the car, he chooses one of the other doors at random.Under these assumptions, you are better off switching. If you stick, you win $1/3$ of the time. If you switch, you win $2/3$ of the time.If you have not encountered this problem before, you might find thatanswer surprising. You would not be alone; many people have the strongintuition that it doesn't matter if you stick or switch. There are twodoors left, they reason, so the chance that the car is behind Door A is 50%. But that is wrong.To see why, it can help to use a Bayes table. We start with threehypotheses: the car might be behind Door 1, 2, or 3. According to thestatement of the problem, the prior probability for each door is 1/3.
###Code
table3 = pd.DataFrame(index=['Door 1', 'Door 2', 'Door 3'])
table3['prior'] = Fraction(1, 3)
table3
###Output
_____no_output_____
###Markdown
The data is that Monty opened Door 3 and revealed a goat. So let's consider the probability of the data under each hypothesis:

- If the car is behind Door 3, Monty does not open it, so the probability of the data under this hypothesis is 0.
- If the car is behind Door 2, Monty has to open Door 3, so the probability of the data under this hypothesis is 1.
- If the car is behind Door 1, Monty chooses Door 2 or 3 at random; the probability he would open Door 3 is $1/2$.

Here are the likelihoods.
###Code
table3['likelihood'] = Fraction(1, 2), 1, 0
table3
###Output
_____no_output_____
###Markdown
Now that we have priors and likelihoods, we can use `update` to compute the posterior probabilities.
###Code
update(table3)
table3
###Output
_____no_output_____
###Markdown
After Monty opens Door 3, the posterior probability of Door 1 is $1/3$; the posterior probability of Door 2 is $2/3$. So you are better off switching from Door 1 to Door 2.

As this example shows, our intuition for probability is not always reliable. Bayes's Theorem can help by providing a divide-and-conquer strategy:

1. First, write down the hypotheses and the data.
2. Next, figure out the prior probabilities.
3. Finally, compute the likelihood of the data under each hypothesis.

The Bayes table does the rest.

Summary

In this chapter we solved the Cookie Problem using Bayes's theorem explicitly and using a Bayes table. There's no real difference between these methods, but the Bayes table can make it easier to compute the total probability of the data, especially for problems with more than two hypotheses.

Then we solved the Dice Problem, which we will see again in the next chapter, and the Monty Hall problem, which you might hope you never see again. If the Monty Hall problem makes your head hurt, you are not alone. But I think it demonstrates the power of Bayes's Theorem as a divide-and-conquer strategy for solving tricky problems. And I hope it provides some insight into *why* the answer is what it is.

When Monty opens a door, he provides information we can use to update our belief about the location of the car. Part of the information is obvious. If he opens Door 3, we know the car is not behind Door 3. But part of the information is more subtle. Opening Door 3 is more likely if the car is behind Door 2, and less likely if it is behind Door 1. So the data is evidence in favor of Door 2. We will come back to this notion of evidence in future chapters.

In the next chapter we'll extend the Cookie Problem and the Dice Problem, and take the next step from basic probability to Bayesian statistics. But first, you might want to work on the exercises.

Exercises

**Exercise:** Suppose you have two coins in a box. One is a normal coin with heads on one side and tails on the other, and one is a trick coin with heads on both sides. You choose a coin at random and see that one of the sides is heads. What is the probability that you chose the trick coin?
###Code
# Solution
table4 = pd.DataFrame(index=['Normal', 'Trick'])
table4['prior'] = 1/2
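# A randomly chosen side of the normal coin is heads with probability 1/2;
# the trick coin shows heads on both sides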
table4['likelihood'] = 1/2, 1
update(table4)
table4
###Output
_____no_output_____
###Markdown
**Exercise:** Suppose you meet someone and learn that they have two children. You ask if either child is a girl and they say yes. What is the probability that both children are girls?

Hint: Start with four equally likely hypotheses.
###Code
# Solution
table5 = pd.DataFrame(index=['GG', 'GB', 'BG', 'BB'])
table5['prior'] = 1/4
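# The answer "yes" is consistent with every hypothesis except BB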
table5['likelihood'] = 1, 1, 1, 0
update(table5)
table5
###Output
_____no_output_____
###Markdown
**Exercise:** There are many variations of the [Monty Hall problem](https://en.wikipedia.org/wiki/Monty_Hall_problem). For example, suppose Monty always chooses Door 2 if he can, and only chooses Door 3 if he has to (because the car is behind Door 2).

If you choose Door 1 and Monty opens Door 2, what is the probability the car is behind Door 3?

If you choose Door 1 and Monty opens Door 3, what is the probability the car is behind Door 2?
###Code
# Solution
# Data: Monty opened Door 2.
# If the car is behind Door 1, Monty always opens Door 2 (likelihood 1)
# If the car is behind Door 2, Monty would have to open Door 3 (likelihood 0)
# If the car is behind Door 3, Monty always opens Door 2 (likelihood 1)
table6 = pd.DataFrame(index=['Door 1', 'Door 2', 'Door 3'])
table6['prior'] = 1/3
table6['likelihood'] = 1, 0, 1
update(table6)
table6
# Solution
# Data: Monty opened Door 3.
# If the car is behind Door 1, Monty would have opened Door 2 (likelihood 0)
# If the car is behind Door 2, Monty has to open Door 3 (likelihood 1)
# If the car is behind Door 3, Monty would have opened Door 2 (likelihood 0)
table7 = pd.DataFrame(index=['Door 1', 'Door 2', 'Door 3'])
table7['prior'] = 1/3
table7['likelihood'] = 0, 1, 0
update(table7)
table7
###Output
_____no_output_____
###Markdown
**Exercise:** M&M's are small candy-coated chocolates that come in a variety of colors. Mars, Inc., which makes M&M's, changes the mixture of colors from time to time. In 1995, they introduced blue M&M's.

* In 1994, the color mix in a bag of plain M&M's was 30\% Brown, 20\% Yellow, 20\% Red, 10\% Green, 10\% Orange, 10\% Tan.
* In 1996, it was 24\% Blue, 20\% Green, 16\% Orange, 14\% Yellow, 13\% Red, 13\% Brown.

Suppose a friend of mine has two bags of M&M's, and he tells me that one is from 1994 and one from 1996. He won't tell me which is which, but he gives me one M&M from each bag. One is yellow and one is green. What is the probability that the yellow one came from the 1994 bag?

Hint: The trick to this question is to define the hypotheses and the data carefully.
###Code
# Solution
# Hypotheses:
# A: yellow from 94, green from 96
# B: yellow from 96, green from 94
table8 = pd.DataFrame(index=['A', 'B'])
table8['prior'] = 1/2
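# Likelihood of A: P(yellow | 1994) * P(green | 1996) = 0.2 * 0.2
# Likelihood of B: P(yellow | 1996) * P(green | 1994) = 0.14 * 0.1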
table8['likelihood'] = 0.2*0.2, 0.14*0.1
update(table8)
table8
###Output
_____no_output_____
HubSpot/HubSpot_Get_deal.ipynb
###Markdown
HubSpot - Get deal

**Tags:** hubspot crm sales deal naas_drivers

**Author:** [Florent Ravenel](https://www.linkedin.com/in/florent-ravenel/)

Input

Import library
###Code
from naas_drivers import hubspot
###Output
_____no_output_____
###Markdown
Setup your HubSpot

👉 Access your [HubSpot API key](https://knowledge.hubspot.com/integrations/how-do-i-get-my-hubspot-api-key)
###Code
HS_API_KEY = 'YOUR_HUBSPOT_API_KEY'
###Output
_____no_output_____
###Markdown
Enter your deal ID
###Code
deal_id = '70915045'
###Output
_____no_output_____
###Markdown
Model

Get single deal
###Code
deal = hubspot.connect(HS_API_KEY).deals.get(deal_id)
###Output
_____no_output_____
###Markdown
Output

Display result
###Code
deal
###Output
_____no_output_____
tutorials/LinearAlgebra/Workbook_LinearAlgebra.ipynb
###Markdown
Linear Algebra Tutorial Workbook

**What is this workbook?**

A workbook is a collection of problems, accompanied by solutions to them. The explanations focus on the logical steps required to solve a problem; they illustrate the concepts that need to be applied to come up with a solution to the problem, explaining the mathematical steps required.

Note that a workbook should not be the primary source of knowledge on the subject matter; it assumes that you've already read a tutorial or a textbook and that you are now seeking to improve your problem-solving skills. You should attempt solving the tasks of the respective kata first, and turn to the workbook only if stuck. While a textbook emphasizes knowledge acquisition, a workbook emphasizes skill acquisition.

This workbook describes the solutions to the problems offered in the [Linear Algebra tutorial](./LinearAlgebra.ipynb). Since the tasks are offered as programming problems, the explanations also cover some elements of Python that might be non-obvious for a first-time user.

**What you should know for this workbook**

1. Complex arithmetic.
2. Basic Python knowledge is helpful but not necessary.

Click the cell with code below this block of text and press `Ctrl+Enter` (`⌘+Enter` on Mac). **Do not skip this step**.
###Code
# Run this cell using Ctrl+Enter (⌘+Enter on Mac).
from testing import exercise, create_empty_matrix
from typing import List
import math, cmath
Matrix = List[List[complex]]
###Output
_____no_output_____
###Markdown
Exercise 1: Matrix addition.

**Inputs:**

1. An $n \times m$ matrix $A$, represented as a two-dimensional list.
2. An $n \times m$ matrix $B$, represented as a two-dimensional list.

**Output:** Return the sum of the matrices $A + B$ - an $n \times m$ matrix, represented as a two-dimensional list.

Solution

Following the definition given in the tutorial, the sum of two matrices is a matrix of element-wise sums of matrix elements; for example, for $2 \times 2$ matrices

$$ A + B = \begin{bmatrix} a & b \\ c & d \end{bmatrix} + \begin{bmatrix} e & f \\ g & h \end{bmatrix} = \begin{bmatrix} a + e & b + f \\ c + g & d + h \end{bmatrix}$$

> *Python note:* This tutorial uses a lot of lists and loops, so let's walk through some Python syntax details first. If you're familiar with Python syntax, feel free to skip this note!
>
> * [`range(x)`](https://docs.python.org/3/tutorial/controlflow.html#the-range-function) will create a [list](https://docs.python.org/3/tutorial/introduction.html#lists) of numbers from 0 to `x - 1`, inclusive; for example, `range(3)` will create a list `[0, 1, 2]`.
> * [`for`](https://docs.python.org/3/tutorial/controlflow.html#for-statements) statement iterates over the items of a sequence; for example, the following code
>
> ```python
> for i in range(3):
>     print(i)
> ```
>
> will print:
>
> ```
> 0
> 1
> 2
> ```
>
> * Matrices are described as two-dimensional lists, which are represented as lists of lists. For example, the following matrix:
>
> $$\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}$$
>
> is represented as a list of lists `[[1, 2, 3], [4, 5, 6]]`.
>
> * You can access a specific element of the list using the index of that element in the list (note that indices start with 0): the first element of `array` is `array[0]`, the second - `array[1]`, etc.
> * Similarly, you can access an element of a matrix using the row and column indices of that element: `matrix[0][2]` would access the element in the first row and 3rd column.
> * `len(array)` returns the number of elements in a list; for example, `len([0, 1, 2])` will return 3.
> * Here is an example of creating a matrix from the example above and looping through its elements to print them:
>
> ```Python
> matrix = [[1, 2, 3], [4, 5, 6]]
> numberOfRows = len(matrix)        # will return 2
> numberOfColumns = len(matrix[0])  # will return 3
> for row in range(numberOfRows):
>     for column in range(numberOfColumns):
>         print(matrix[row][column])
> ```
>
> * Finally, the first exercise offers you a template of a solution that uses a function `create_empty_matrix(n, m)`; this function creates an $n \times m$ matrix filled with 0's as values. This function is not a built-in Python function, this notebook defines it for you to use.
###Code
@exercise
def matrix_add(a : Matrix, b : Matrix) -> Matrix:
# You can get the size of a matrix like this:
rows = len(a)
columns = len(a[0])
# You can use the following function to initialize a rows×columns matrix filled with 0s to store your answer
c = create_empty_matrix(rows, columns)
for i in range(rows):
for j in range(columns):
# You can access elements of a matrix like this:
x = a[i][j]
y = b[i][j]
# You can modify the elements of a matrix like this:
c[i][j] = a[i][j] + b[i][j]
return c
###Output
_____no_output_____
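###Markdown
As a quick informal check of the solution above (the sample matrices are illustrative and not part of the kata's test suite):
```Python
print(matrix_add([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # expected: [[6, 8], [10, 12]]
```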
###Markdown
[Return to task 1 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-1:-Matrix-addition.)

Exercise 2: Scalar multiplication.

**Inputs:**

1. A scalar $x$.
2. An $n \times m$ matrix $A$.

**Output:** Return the $n \times m$ matrix $x \cdot A$.

Solution

We can again follow the definition given in the tutorial: to calculate the product of a number and a matrix, multiply each matrix element by that number. For example, for a $2 \times 2$ matrix:

$$x \cdot A = x \cdot \begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} x \cdot a & x \cdot b \\ x \cdot c & x \cdot d \end{bmatrix} $$

> *Python note:* We have to multiply each element in the matrix by the given number $x$. To do so, we will again loop through each matrix element with 2 `for` loops, do the multiplication and store its result in the corresponding element of the newly created matrix.
###Code
@exercise
def scalar_mult(x : complex, a : Matrix) -> Matrix:
rows = len(a)
columns = len(a[0])
c = create_empty_matrix(rows, columns)
for i in range(rows):
for j in range(columns):
c[i][j] = a[i][j] * x
return c
###Output
_____no_output_____
###Markdown
[Return to task 2 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-2:-Scalar-multiplication.)

Exercise 3: Matrix multiplication.

**Inputs:**

1. An $n \times m$ matrix $A$.
2. An $m \times k$ matrix $B$.

**Output:** Return the $n \times k$ matrix equal to the matrix product $AB$.

Solution

Again, the tutorial gives us the definition of how multiplication works, and we just need to implement it in code. Here is an example of multiplying a $2 \times 3$ matrix by a $3 \times 2$ matrix:

$$ A \cdot B = \begin{bmatrix} a & b & c \\ d & e & f \end{bmatrix} \cdot \begin{bmatrix} h & i \\ j & k \\ l & m \end{bmatrix} = \begin{bmatrix} a \cdot h + b \cdot j + c \cdot l & a \cdot i + b \cdot k + c \cdot m \\ d \cdot h + e \cdot j + f \cdot l & d \cdot i + e \cdot k + f \cdot m \end{bmatrix} $$

> *Python note*: In this exercise we'll need an extra nested loop. We will iterate through the rows and columns of the resulting matrix, similar to the previous exercises, but for each element of the result we'll need to iterate through the row of the left matrix and the column of the right matrix that contribute to that element. In the example above, to get the element in the first row and the first column of the resulting matrix product we'll need to iterate through the first row of the left matrix $\begin{bmatrix} a & b & c \end{bmatrix}$ and the first column of the right matrix $\begin{bmatrix} h \\ j \\ l \end{bmatrix}$ and add up pairwise products of their elements.
>
> Note that the empty matrix we create for storing the result differs in dimensions from the previous exercises: its number of rows equals the number of rows of the left matrix, and its number of columns equals the number of columns of the right matrix.
>
> Python's `+=` operator is a convenient shorthand for the assignment `variable = variable + increment`.
###Code
@exercise
def matrix_mult(a : Matrix, b : Matrix) -> Matrix:
rows = len(a) # the number of rows of the left matrix
common = len(a[0]) # = len(b) - the common dimension of the matrices
columns = len(b[0]) # the number of columns of the right matrix
ans = create_empty_matrix(rows, columns)
for currentRow in range(rows):
for currentColumn in range(columns):
for k in range(common):
ans[currentRow][currentColumn] += a[currentRow][k] * b[k][currentColumn]
return ans
###Output
_____no_output_____
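###Markdown
Again, a quick informal check with small illustrative matrices:
```Python
print(matrix_mult([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # expected: [[19, 22], [43, 50]]
```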
###Markdown
[Return to task 3 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-3:-Matrix-multiplication.)

Exercise 4: Matrix Inversion.

**Input:** An invertible $2 \times 2$ matrix $A$.

**Output:** Return the inverse of $A$, a $2 \times 2$ matrix $A^{-1}$.

Solution

Since we only need to invert a $2 \times 2$ matrix, we will not consider a solution which can be used for arbitrary-sized matrices. We will follow the algorithm described in the [Wikipedia article](https://en.wikipedia.org/wiki/Invertible_matrix#Inversion_of_2_%C3%97_2_matrices).

$$ A = \begin{bmatrix} a & b \\ c & d \end{bmatrix} $$

The determinant of the matrix is defined as

$$ |A| = a \cdot d - b \cdot c $$

$$A^{-1} = \frac{1}{|A|} \cdot \begin{bmatrix} d & -b \\ -c & a \end{bmatrix} = \begin{bmatrix} \frac{d}{|A|} & \frac{-b}{|A|} \\ \frac{-c}{|A|} & \frac{a}{|A|} \end{bmatrix} $$
###Code
@exercise
def matrix_inverse(m : Matrix) -> Matrix:
# Extract each element of the array into a named variable
a = m[0][0]
b = m[0][1]
c = m[1][0]
d = m[1][1]
# Calculate the determinant
determinant = (a * d) - (b * c)
# Create the inverse of the matrix following the formula above
ans = [[d / determinant, -b / determinant], [-c / determinant, a / determinant]]
return ans
###Output
_____no_output_____
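###Markdown
A quick check on an illustrative matrix; multiplying a matrix by its inverse should produce the identity matrix:
```Python
m = [[1, 2], [3, 4]]                      # determinant = 1*4 - 2*3 = -2
print(matrix_inverse(m))                  # expected: [[-2.0, 1.0], [1.5, -0.5]]
print(matrix_mult(m, matrix_inverse(m)))  # expected: [[1.0, 0.0], [0.0, 1.0]]
```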
###Markdown
[Return to task 4 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-4:-Matrix-Inversion.)

Exercise 5: Transpose.

**Input:** An $n \times m$ matrix $A$.

**Output:** Return an $m \times n$ matrix $A^T$, the transpose of $A$.

Solution

Again, the tutorial gives us the definition of matrix transpose, so we just need to fill the resulting matrix with the elements of the original matrix in the right order. For example, for a $3 \times 2$ matrix

$$\begin{bmatrix} a & b \\ c & d \\ e & f \end{bmatrix}^T = \begin{bmatrix} a & c & e \\ b & d & f \end{bmatrix}$$
###Code
@exercise
def transpose(a : Matrix) -> Matrix:
rows = len(a)
columns = len(a[0])
# Note that the resulting matrix dimensions are swapped compared to the original ones
ans = create_empty_matrix(columns, rows)
for i in range(rows):
for j in range(columns):
ans[j][i] = a[i][j]
return ans
###Output
_____no_output_____
###Markdown
[Return to task 5 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-5:-Transpose.)

Exercise 6: Conjugate.

**Input:** An $n \times m$ matrix $A$.

**Output:** Return an $n \times m$ matrix $\overline{A}$, the conjugate of $A$.

Solution

To get the conjugate of a matrix you take the conjugate of each individual element (check the [Complex Arithmetic tutorial](../ComplexArithmetic/ComplexArithmetic.ipynb#Complex-Conjugate) for the definition).

> *Python note*: In the complex arithmetic tutorial complex numbers were represented as tuples of real and imaginary components. However, this tutorial relies on Python's built-in [`complex`](https://docs.python.org/3.8/library/functions.html#complex) data type. Python's [cmath library](https://docs.python.org/3.8/library/cmath.html) offers a lot of useful functions that deal with the `complex` data type.
>
> Here is an example of using the `complex` data type:
>
> ```Python
> # Import the cmath library
> import cmath
>
> # Create a new complex number 5 + 3i; the two arguments are the real and the imaginary parts of the number
> complexNumber = complex(5, 3)
>
> # Print the real and the imaginary parts of the number
> print(complexNumber.real)
> print(complexNumber.imag)
>
> # Convert the complex number to its polar representation using the cmath library
> polar = cmath.polar(complexNumber)
> print(polar)  # This prints: (5.830951894845301, 0.5404195002705842)
> ```
>
> To get the complex conjugate of a matrix, we loop through each element of the matrix, extract the real and imaginary parts of the number and flip the sign of the imaginary part.
###Code
@exercise
def conjugate(a : Matrix) -> Matrix:
rows = len(a)
columns = len(a[0])
ans = create_empty_matrix(rows, columns)
for i in range(rows):
for j in range(columns):
ans[i][j] = complex(a[i][j].real, -a[i][j].imag)
return ans
###Output
_____no_output_____
###Markdown
[Return to task 6 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-6:-Conjugate.)

Exercise 7: Adjoint.

**Input:** An $n \times m$ matrix $A$.

**Output:** Return an $m \times n$ matrix $A^\dagger$, the adjoint of $A$.

Solution

To get the adjoint we perform both **transpose** and **conjugate** operations on the input matrix. We can write out the whole procedure manually, like we have done above, but we can also leverage the code we have written above.

> In Python the `def` keyword defines a function, which can be reused later in the code.
###Code
@exercise
def adjoint(a : Matrix) -> Matrix:
# Call the transpose function with the input matrix a
transp = transpose(a)
# Call the conjugate function with the transposed matrix as input
ans = conjugate(transp)
return ans
###Output
_____no_output_____
###Markdown
[Return to task 7 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-7:-Adjoint.)

Exercise 8: Unitary Verification.

**Input:** An $n \times n$ matrix $A$.

**Output:** Check if the matrix is unitary and return `True` if it is, or `False` if it isn't.

Solution

A matrix is unitary if this holds true: $UU^\dagger = U^\dagger U = I$. (As a reminder, an identity matrix is a matrix with 1s on the main diagonal and 0s everywhere else.)

Thus, to check if the input matrix is unitary we will need to perform the following steps:

1. Calculate the adjoint of the input matrix.
2. Multiply it by the input matrix.
3. Check if the multiplication result is equal to an identity matrix.

> *Python note:* We will leverage the `adjoint` and the `matrix_mult` functions that we have created above.
>
> When we check each element of $UU^\dagger$ to see whether it equals the respective element of the identity matrix, we'll use the `approx` function from the `pytest` library to perform this comparison approximately.
###Code
from pytest import approx
@exercise
def is_matrix_unitary(a : Matrix) -> bool:
n = len(a)
# Calculate the adjoint matrix
adjointA = adjoint(a)
# Multiply the adjoint matrix by the input matrix
multipliedMatrix = matrix_mult(a, adjointA)
# Check whether the multiplication result is (approximately) identity matrix
for i in range(n):
for j in range(n):
# An identity matrix has 1's in all the places where the row index and column index are equal...
if i == j:
if multipliedMatrix[i][j] != approx(1):
return False
# ... and 0's in all the places where the row index and column index are different
else:
if multipliedMatrix[i][j] != approx(0):
return False
return True
###Output
_____no_output_____
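###Markdown
A quick check on two illustrative matrices: a known unitary (real orthogonal) matrix, and a matrix that is clearly not unitary:
```Python
import math  # already imported at the top of this notebook; repeated so the snippet is self-contained
h = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]
print(is_matrix_unitary(h))                 # expected: True
print(is_matrix_unitary([[1, 0], [1, 1]]))  # expected: False
```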
###Markdown
[Return to task 8 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-8:-Unitary-Verification.)

Exercise 9: Inner product.

**Inputs:**

1. An $n \times 1$ vector $V$.
2. An $n \times 1$ vector $W$.

**Output:** Return a complex number - the inner product $\langle V , W \rangle$.

Solution

Following the definition of the inner product, $\langle V , W \rangle = V^\dagger W$. For example, for vectors of length 2:

$$\langle \begin{bmatrix} a \\ b \end{bmatrix}, \begin{bmatrix} c \\ d \end{bmatrix} \rangle = \begin{bmatrix} a \\ b \end{bmatrix}^\dagger \begin{bmatrix} c \\ d \end{bmatrix} = \begin{bmatrix} \overline{a} & \overline{b} \end{bmatrix} \begin{bmatrix} c \\ d \end{bmatrix} = \overline{a} \cdot c + \overline{b} \cdot d$$

> *Python note:* We will again use previously defined functions to calculate the adjoint of a vector and the product of two vectors. We need to keep in mind that the task asks us to return a complex number and not a $1 \times 1$ matrix which is the result of the multiplication. Therefore at the end we'll extract the top left element of the `resultMatrix` and return it.
###Code
@exercise
def inner_prod(v : Matrix, w : Matrix) -> complex:
# Calculate the adjoint of the v vector
adjointV = adjoint(v)
# Multiply the adjoint v and w. The result will be a matrix with only one element.
resultMatrix = matrix_mult(adjointV, w)
# To get the actual complex number, we have to take one element from the multiplication result.
return resultMatrix[0][0]
###Output
_____no_output_____
###Markdown
[Return to task 9 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-9:-Inner-product.)

Exercise 10: Normalized vectors.

**Input:** A non-zero $n \times 1$ vector $V$.

**Output:** Return an $n \times 1$ vector $\frac{V}{||V||}$ - the normalized version of the vector $V$.

Solution

If the vector $V = \begin{bmatrix} a & b & c \end{bmatrix}$, its norm is $||V|| = \sqrt{|a|^2 + |b|^2 + |c|^2}$, and its normalized version is

$$\begin{bmatrix} \frac{a}{||V||} & \frac{b}{||V||} & \frac{c}{||V||} \end{bmatrix}$$

Thus, we need to calculate the norm of the vector and to divide each element of the vector by it. We will calculate the norm as the square root of the inner product of the vector with itself.
###Code
@exercise
def normalize(v : Matrix) -> Matrix:
norm = math.sqrt(inner_prod(v, v).real)
n = len(v)
ans = create_empty_matrix(n, 1)
# Divide each element of the vector by the norm
for i in range(n):
ans[i][0] = v[i][0] / norm
return ans
###Output
_____no_output_____
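###Markdown
A quick check on an illustrative vector whose norm is easy to compute by hand ($||V|| = \sqrt{3^2 + 4^2} = 5$):
```Python
print(normalize([[3], [4]]))  # expected: [[0.6], [0.8]]
```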
###Markdown
[Return to task 10 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-10:-Normalized-vectors.)

Exercise 11: Outer product.

**Inputs:**

1. An $n \times 1$ vector $V$.
2. An $m \times 1$ vector $W$.

**Output:** Return an $n \times m$ matrix that represents the outer product of $V$ and $W$.

Solution

By definition, the outer product of $V$ and $W$ is $VW^\dagger$. We can use a similar approach to calculating the inner product, except here we will return the whole multiplication result rather than a specific number.
###Code
@exercise
def outer_prod(v : Matrix, w : Matrix) -> Matrix:
# Calculate adjoint of the W
adjointW = adjoint(w)
# Multiply V by W adjoint
return matrix_mult(v, adjointW)
###Output
_____no_output_____
###Markdown
[Return to task 11 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-11:-Outer-product.)

Exercise 12*: Tensor Product.

**Inputs:**

1. An $n \times m$ matrix $A$.
2. A $k \times l$ matrix $B$.

**Output:** Return an $(n \cdot k) \times (m \cdot l)$ matrix $A \otimes B$, the tensor product of $A$ and $B$.

Solution

We will follow the definition of the tensor product. For example, the tensor product of two $2 \times 2$ matrices looks as follows:

$$\begin{bmatrix} a & b \\ c & d \end{bmatrix} \otimes \begin{bmatrix} e & f \\ g & h \end{bmatrix} = \begin{bmatrix} a \cdot \begin{bmatrix} e & f \\ g & h \end{bmatrix} & b \cdot \begin{bmatrix} e & f \\ g & h \end{bmatrix} \\ c \cdot \begin{bmatrix} e & f \\ g & h \end{bmatrix} & d \cdot \begin{bmatrix} e & f \\ g & h \end{bmatrix} \end{bmatrix} = \begin{bmatrix} a \cdot e & a \cdot f & b \cdot e & b \cdot f \\ a \cdot g & a \cdot h & b \cdot g & b \cdot h \\ c \cdot e & c \cdot f & d \cdot e & d \cdot f \\ c \cdot g & c \cdot h & d \cdot g & d \cdot h \end{bmatrix}$$

> *Python note:* We need to calculate pairwise products of all elements of the left matrix and all elements of the right matrix; this means we have to use 4 nested loops.
###Code
@exercise
def tensor_product(a : Matrix, b : Matrix) -> Matrix:
aRows = len(a) # the number of rows for matrix a
aColumns = len(a[0]) # the number of columns for matrix a
bRows = len(b) # the number of rows for matrix b
bColumns = len(b[0]) # the number of columns for matrix b
ans = create_empty_matrix(aRows * bRows, aColumns * bColumns)
# Outer pair of loops, iterating trough the elements of the left matrix
for i in range(aRows):
for j in range(aColumns):
# Inner pair of loops, iterating through the elements of the right matrix
for k in range(bRows):
for l in range(bColumns):
ans[i * bRows + k][j * bColumns + l] = a[i][j] * b[k][l]
return ans
###Output
_____no_output_____
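###Markdown
A quick check on small illustrative matrices, easy to verify against the definition by hand:
```Python
print(tensor_product([[1, 2], [3, 4]], [[0, 1], [1, 0]]))
# expected: [[0, 1, 0, 2], [1, 0, 2, 0], [0, 3, 0, 4], [3, 0, 4, 0]]
```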
###Markdown
[Return to task 12 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-12*:-Tensor-Product.)

Exercise 13: Finding an eigenvalue.

**Inputs:**

1. A real-valued $n \times n$ matrix $A$.
2. An eigenvector $V$ of matrix $A$.

**Output:** Return a real number - the eigenvalue of $A$ that is associated with the given eigenvector.

Solution

Let's consider what happens when we multiply the matrix by its eigenvector for a $3 \times 3$ example:

$$ A \cdot V = \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \end{bmatrix} \cdot \begin{bmatrix} j \\ k \\ l \end{bmatrix} = \begin{bmatrix} m \\ n \\ o \end{bmatrix} = \alpha \begin{bmatrix} j \\ k \\ l \end{bmatrix} = \alpha V$$

This means you can find the eigenvalue $\alpha$ from the equations

$$ \begin{cases} \alpha j = m \\ \alpha k = n \\ \alpha l = o \end{cases}$$

We can use any of them, keeping in mind that we need an equation in which the element of the eigenvector is not zero (otherwise we get an equation $0 \cdot \alpha = 0$ which doesn't help us find $\alpha$). Since eigenvectors are defined as non-zero vectors, we are guaranteed that at least one element of the vector will not be zero.
###Code
from pytest import approx
@exercise
def find_eigenvalue(a : Matrix, v : Matrix) -> float:
n = len(v)
multiplied = matrix_mult(a, v)
for i in range(n):
if (v[i][0] != approx(0)):
return multiplied[i][0] / v[i][0]
###Output
_____no_output_____
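###Markdown
A quick check on an illustrative diagonal matrix, whose eigenvalues are simply its diagonal elements:
```Python
a = [[2, 0], [0, 3]]
print(find_eigenvalue(a, [[1], [0]]))  # expected: 2.0
print(find_eigenvalue(a, [[0], [1]]))  # expected: 3.0
```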
###Markdown
[Return to task 13 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-13:-Finding-an-eigenvalue.)

Exercise 14**: Finding an eigenvector.

**Inputs:**

1. A $2 \times 2$ matrix $A$.
2. An eigenvalue $x$ of matrix $A$.

**Output:** Return any non-zero eigenvector of $A$ that is associated with $x$.

Solution

Searching for an eigenvector $V$ associated with a specific eigenvalue $x$ requires solving the following equation:

$$ AV = xV $$

or, equivalently,

$$(A - xI_n)V = 0$$

In other words, for a $2 \times 2$ matrix the following happens:

1. Multiply the identity matrix $I_2$ by the eigenvalue:
$$ x \cdot \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} x & 0 \\ 0 & x \end{bmatrix} $$
2. Subtract this new matrix from the given matrix $A$:
$$ \begin{bmatrix} a & b \\ c & d \end{bmatrix} - \begin{bmatrix} x & 0 \\ 0 & x \end{bmatrix} = \begin{bmatrix} a - x & b \\ c & d - x \end{bmatrix} $$
3. Find a vector that, when multiplied by the resulting matrix, will produce a 0 vector:
$$ \begin{bmatrix} a - x & b \\ c & d - x \end{bmatrix} \cdot \begin{bmatrix} v_0 \\ v_1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$

This can be rewritten as the following system of equations:

$$\begin{cases} (a - x) \cdot v_0 + b \cdot v_1 = 0 \\ c \cdot v_0 + (d - x) \cdot v_1 = 0 \end{cases}$$

Each eigenvalue has infinitely many eigenvectors associated with it (since multiplying an eigenvector by a number gives another valid eigenvector). We can limit our search and say that $v_0 = 1$, if possible. In this case, the system of equations becomes

$$\begin{cases} (a - x) + b \cdot v_1 = 0 \\ c + (d - x) \cdot v_1 = 0 \end{cases}$$

and finally we get $v_1 = \frac{a - x}{-b}$.

If $b = 0$, we cannot perform this division, so we need to reconsider our choices. The first equation becomes $(a - x) v_0 = 0$, which is possible in two cases:

* If $a - x \neq 0$, we get $v_0 = 0$ and thus $v_1$ has to be non-zero (we can pick $v_1 = 1$).
* If $a - x = 0$, we cannot get any information from the first equation and have to fall back to the second one: $c \cdot v_0 + (d - x) \cdot v_1 = 0$. Following a similar logic:
  * If $c = 0$, we get $(d - x) \cdot v_1 = 0$, so $v_0 = 1, v_1 = 0$.
  * If $c \neq 0$, we get $v_1 = 1, v_0 = \frac{d - x}{-c}$.
###Code
@exercise
def find_eigenvector(a : Matrix, x : float) -> Matrix:
# Check for possible edge cases
if (a[0][1] == 0):
if (a[0][0] - x == 0):
if (a[1][0] == 0):
return [[1], [0]]
else:
return [[(a[1][1] - x) / (-a[1][0])], [1]]
else:
return [[0], [1]]
v0 = 1
v1 = (a[0][0] - x) / (-a[0][1])
return [[v0], [v1]]
###Output
_____no_output_____
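###Markdown
A quick check on the same illustrative diagonal matrix; its eigenvectors are the standard basis vectors:
```Python
a = [[2, 0], [0, 3]]
print(find_eigenvector(a, 3))  # expected: [[0], [1]]
print(find_eigenvector(a, 2))  # expected: [[1], [0]]
```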
###Markdown
Linear Algebra Tutorial Workbook**What is this workbook?**A workbook is a collection of problems, accompanied by solutions to them. The explanations focus on the logical steps required to solve a problem; they illustrate the concepts that need to be applied to come up with a solution to the problem, explaining the mathematical steps required. Note that a workbook should not be the primary source of knowledge on the subject matter; it assumes that you've already read a tutorial or a textbook and that you are now seeking to improve your problem-solving skills. You should attempt solving the tasks of the respective kata first, and turn to the workbook only if stuck. While a textbook emphasizes knowledge acquisition, a workbook emphasizes skill acquisition.This workbook describes the solutions to the problems offered in the [Linear Algebra tutorial](./LinearAlgebra.ipynb). Since the tasks are offered as programming problems, the explanations also cover some elements of Python that might be non-obvious for a first-time user.**What you should know for this workbook**1. Complex arithmetic.2. Basic Python knowledge is helpful but not necessary. Click the cell with code below this block of text and press `Ctrl+Enter` (`⌘+Enter` on Mac). **Do not skip this step**.
###Code
# Run this cell using Ctrl+Enter (⌘+Enter on Mac).
from testing import exercise, create_empty_matrix
from typing import List
import math, cmath
Matrix = List[List[complex]]
###Output
_____no_output_____
###Markdown
Exercise 1: Matrix addition.**Inputs:**1. An $n \times m$ matrix $A$, represented as a two-dimensional list.2. An $n \times m$ matrix $B$, represented as a two-dimensional list.**Output:** Return the sum of the matrices $A + B$ - an $n \times m$ matrix, represented as a two-dimensional list. SolutionFollowing the definition given in the tutorial, the sum of two matrices is a matrix of element-wise sums of matrix elements; for example, for $2 \times 2$ matrices$$ A + B =\begin{bmatrix} a & b \\ c & d \end{bmatrix} + \begin{bmatrix} e & f \\ g & h \end{bmatrix} = \begin{bmatrix} a + e & b + f \\ c + g & d + h \end{bmatrix}$$> *Python note:* This tutorial uses a lot of lists and loops, so let's walk through some Python syntax details first. If you're familiar with Python syntax, feel free to skip this note!>> * [`range(x)`](https://docs.python.org/3/tutorial/controlflow.htmlthe-range-function) will create a [list](https://docs.python.org/3/tutorial/introduction.htmllists) of numbers from 0 to `x - 1`, inclusive; for example, `range(3)` will create a list `[0, 1, 2]`. > * [`for`](https://docs.python.org/3/tutorial/controlflow.htmlfor-statements) statement iterates over the items of a sequence; for example, the following code> ```python> for i in range(3):> print(i)> ```>> will print:> ```> 0> 1> 2> ```>> * Matrices are described as two-dimensional lists, > which are represented as lists of lists. For example, the following matrix:>> $$\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix} $$>> is represented as a list of lists `[[1, 2, 3], [4, 5, 6]]`. >> * You can access a specific element of the list using the index of that element in the list (note that indices start with 0): the first element of `array` is `array[0]`, the second - `array[1]`, etc.> * Similarly, you can access an element of a matrix using the row and column indices of that element: `matrix[0][2]` would access the element in the first row and 3rd column.> * `len(array)` returns the number of elements in a list; for example, `len([0, 1, 2])` will return 3.> * Here is an example of creating a matrix from the example above and looping through its elements to print them:>>```Python>matrix = [[1, 2, 3], [4, 5, 6]]>numberOfRows = len(matrix) will return 2>numberOfColumns = len(matrix[0]) will return 3>for row in range(numberOfRows):> for column in range(numberOfColumns):> print(matrix[row][column])>>```>> * Finally, the first exercise offers you a template of a solution that uses a function `create_empty_matrix(n, m)`; this function creates an $n \times m$ matrix filled with 0's as values. This function is not a built-in Python function, this notebook defines it for you to use.
###Code
@exercise
def matrix_add(a : Matrix, b : Matrix) -> Matrix:
# You can get the size of a matrix like this:
rows = len(a)
columns = len(a[0])
# You can use the following function to initialize a rows×columns matrix filled with 0s to store your answer
c = create_empty_matrix(rows, columns)
for i in range(rows):
for j in range(columns):
# You can access elements of a matrix like this:
x = a[i][j]
y = b[i][j]
# You can modify the elements of a matrix like this:
c[i][j] = a[i][j] + b[i][j]
return c
###Output
_____no_output_____
###Markdown
[Return to task 1 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-1:-Matrix-addition.) Exercise 2: Scalar multiplication.**Inputs:**1. A scalar $x$.2. An $n \times m$ matrix $A$.**Output:** Return the $n \times m$ matrix $x \cdot A$. SolutionWe can again follow the definition given in the tutorial: to calculate the product of a number and a matrix, multiply each matrix element by that number. For example, for a $2 \times 2$ matrix:$$x \cdot A = x \cdot \begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} x \cdot a & x \cdot b \\ x \cdot c & x \cdot d \end{bmatrix} $$ > *Python note:* We have to multiply each element in the matrix by the given number $x$. To do so, we will again loop trough each matrix element with 2 `for` loops, do the multiplication and store its result in the corresponding element of the newly created matrix.
###Code
@exercise
def scalar_mult(x : complex, a : Matrix) -> Matrix:
rows = len(a)
columns = len(a[0])
c = create_empty_matrix(rows, columns)
for i in range(rows):
for j in range(columns):
c[i][j] = a[i][j] * x
return c
###Output
_____no_output_____
###Markdown
[Return to task 2 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-2:-Scalar-multiplication.) Exercise 3: Matrix multiplication.**Inputs:**1. An $n \times m$ matrix $A$.2. An $m \times k$ matrix $B$.**Output:** Return the $n \times k$ matrix equal to the matrix product $AB$. SolutionAgain, the tutorial gives us the definition of how multiplication works, and we just need to implement it in code. Here is an example of multiplying a $2 \times 3$ matrix by a $3 \times 2$ matrix:$$ A \cdot B =\begin{bmatrix} a & b & c \\ d & e & f \end{bmatrix} \cdot \begin{bmatrix} h & i \\ j & k \\ l & m \end{bmatrix} = \begin{bmatrix} a \cdot h + b \cdot j + c \cdot l & a \cdot i + b \cdot k + c \cdot m \\ d \cdot h + e \cdot j + f \cdot l & d \cdot i + e \cdot k + f \cdot m \end{bmatrix} $$> *Python note*: In this exercise we'll need an extra nested loop. We will iterate trough the rows and columns of the resulting matrix, similar to the previous exercises, but for each element of the result we'll need to iterate through the row of the left matrix and the column of the right matrix that contribute to that element. In the example above, to get the element in the first row and the first column of the resulting matrix product we'll need to iterate through the first row of the left matrix $\begin{bmatrix} a & b & c \end{bmatrix}$ and the first column of the right matrix $\begin{bmatrix} h \\ j \\ l \end{bmatrix}$ and add up pairwise products of their elements.>> Note that the empty matrix we create for storing the result differs in dimensions from the previous exercises: its number of rows equals the number of rows of the left matrix, and its number of columns equals to the number of columns of the right matrix. >> Python `+=` operator is a convenient shorthand for assignment `variable = variable + increment`.
###Code
@exercise
def matrix_mult(a : Matrix, b : Matrix) -> Matrix:
rows = len(a) # the number of rows of the left matrix
common = len(a[0]) # = len(b) - the common dimension of the matrices
columns = len(b[0]) # the number of columns of the right matrix
ans = create_empty_matrix(rows, columns)
for currentRow in range(rows):
for currentColumn in range(columns):
for k in range(common):
ans[currentRow][currentColumn] += a[currentRow][k] * b[k][currentColumn]
return ans
###Output
_____no_output_____
###Markdown
[Return to task 3 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-3:-Matrix-multiplication.) Exercise 4: Matrix Inversion.**Input:** An invertible $2 \times 2$ matrix $A$.**Output:** Return the inverse of $A$, a $2 \times 2$ matrix $A^{-1}$. SolutionSince we only need to invert a $2 \times 2$ matrix, we will not consider a solution which can be used for arbitrary-sized matrices. We will follow the algorithm described in the [Wikipedia article](https://en.wikipedia.org/wiki/Invertible_matrixInversion_of_2_%C3%97_2_matrices).$$ A = \begin{bmatrix} a & b \\ c & d \end{bmatrix} $$The determinant of the matrix is defined as $$ |A| = a \cdot d - b \cdot c $$$$A^{-1} = \frac{1}{|A|} \cdot \begin{bmatrix} d & -b \\ -c & a \end{bmatrix} = \begin{bmatrix} \frac{d}{|A|} & \frac{-b}{|A|} \\ \frac{-c}{|A|} & \frac{a}{|A|} \end{bmatrix} $$
###Code
@exercise
def matrix_inverse(m : Matrix) -> Matrix:
# Extract each element of the array into a named variable
a = m[0][0]
b = m[0][1]
c = m[1][0]
d = m[1][1]
# Calculate the determinant
determinant = (a * d) - (b * c)
# Create the inverse of the matrix following the formula above
ans = [[d / determinant, -b / determinant], [-c / determinant, a / determinant]]
return ans
###Output
_____no_output_____
###Markdown
[Return to task 4 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-4:-Matrix-Inversion.) Exercise 5: Transpose.**Input:** An $n \times m$ matrix $A$.**Output:** Return an $m \times n$ matrix $A^T$, the transpose of $A$. SolutionAgain, the tutorial gives us the definition of matrix transpose, so we just need to fill the resulting matrix with the elements of the original matrix in the right order. For example, for a $3 \times 2$ matrix$$\begin{bmatrix} a & b \\ c & d \\ e & f\end{bmatrix}^T=\begin{bmatrix} a & c & e \\ b & d & f\end{bmatrix}$$
###Code
@exercise
def transpose(a : Matrix) -> Matrix:
rows = len(a)
columns = len(a[0])
# Note that the resulting matrix dimensions are swapped compared to the original ones
ans = create_empty_matrix(columns, rows)
for i in range(rows):
for j in range(columns):
ans[j][i] = a[i][j]
return ans
###Output
_____no_output_____
###Markdown
[Return to task 5 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-5:-Transpose.) Exercise 6: Conjugate.**Input:** An $n \times m$ matrix $A$.**Output:** Return an $n \times m$ matrix $\overline{A}$, the conjugate of $A$. SolutionsTo get the conjugate of a matrix you take the conjugate of each individual element (check the [Complex Arithmetic tutorial](../ComplexArithmetic/ComplexArithmetic.ipynbComplex-Conjugate) for the definition.> *Python note*: In the complex arithmetic tutorial complex numbers were represented as tuples of real and imaginary components. However, this tutorial relies on Python's built-in [`complex`](https://docs.python.org/3.8/library/functions.htmlcomplex) data type. Python's [cmath library](https://docs.python.org/3.8/library/cmath.html) offers a lot of useful functions that deal with the `complex` data type.>> Here is an example of using the `complex` data type:>> ```Python> Import the cmath library> import cmath>> Create a new complex number 5 + 3i; the two arguments are the real and the imaginary parts of the number> complexNumber = complex(5, 3)>> Print the real and the imaginary parts of the number> print(complexNumber.real) > print(complexNumber.imag)>> Convert the complex number to its polar representation using the cmath library> polar = cmath.polar(complexNumber)> print(polar) This prints: (5.830951894845301, 0.5404195002705842)> ```>> To get the complex conjugate of a matrix, we loop trough each element of the matrix, extract real and imaginary parts of the number and flip the sign for the imaginary part.
###Code
@exercise
def conjugate(a : Matrix) -> Matrix:
rows = len(a)
columns = len(a[0])
ans = create_empty_matrix(rows, columns)
for i in range(rows):
for j in range(columns):
ans[i][j] = complex(a[i][j].real, -a[i][j].imag)
return ans
###Output
_____no_output_____
###Markdown
[Return to task 6 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-6:-Conjugate.) Exercise 7: Adjoint.**Input:** An $n \times m$ matrix $A$.**Output:** Return an $m \times n$ matrix $A^\dagger$, the adjoint of $A$. SolutionTo get the adjoint we perform both **transpose** and **conjugate** operations on the input matrix. We can write out the whole procedure manually, like we have done above, but we can also leverage the code we have written above.> In Python the `def` word defines a function, which could be reused later in the code.
###Code
@exercise
def adjoint(a : Matrix) -> Matrix:
# Call the transpose function with the input matrix a
transp = transpose(a)
# Call the conjugate function with the transposed matrix as input
ans = conjugate(transp)
return ans
###Output
_____no_output_____
###Markdown
[Return to task 7 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-7:-Adjoint.) Exercise 8: Unitary Verification.**Input:** An $n \times n$ matrix $A$.**Output:** Check if the matrix is unitary and return `True` if it is, or `False` if it isn't. SolutionA matrix is unitary if this holds true: $UU^\dagger = U^\dagger U = I$.(As a reminder, an identity matrix is a matrix with 1s on the main diagonal and 0s everywhere else.)Thus, to check if the input matrix is unitary we will need to perform the following steps:1. Calculate the adjoint of the input matrix.2. Multiply it by the input matrix.3. Check if the multiplication result is equal to an identity matrix. > *Python note:* We will leverage the `adjoint` and the `matrix_mult` functions what we have created above.>> When we check each element of $UU^\dagger$ to see whether it equals the respective element of the identity matrix, we'll use Python function `approx` to perform this comparison approximately.
###Code
from pytest import approx
@exercise
def is_matrix_unitary(a : Matrix) -> bool:
n = len(a)
# Calculate the adjoint matrix
adjointA = adjoint(a)
# Multiply the adjoint matrix by the input matrix
multipliedMatrix = matrix_mult(a, adjointA)
# Check whether the multiplication result is (approximately) identity matrix
for i in range(n):
for j in range(n):
# An identity matrix has 1's in all the places where the row index and column index are equal...
if i == j:
if multipliedMatrix[i][j] != approx(1):
return False
# ... and 0's in all the places where the row index and column index are different
else:
if multipliedMatrix[i][j] != approx(0):
return False
return True
###Output
_____no_output_____
###Markdown
[Return to task 8 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-8:-Unitary-Verification.) Exercise 9: Inner product.**Inputs:**1. An $n \times 1$ vector $V$.2. An $n \times 1$ vector $W$.**Output:** Return a complex number - the inner product $\langle V , W \rangle$. SolutionFollowing the definition of the inner product, $\langle V , W \rangle = V^\dagger W$. For example, for vectors of length 2:$$\langle\begin{bmatrix} a \\ b\end{bmatrix},\begin{bmatrix} c \\ d\end{bmatrix}\rangle =\begin{bmatrix} a \\ b\end{bmatrix}^\dagger\begin{bmatrix} c \\ d\end{bmatrix}=\begin{bmatrix} \overline{a} & \overline{b} \end{bmatrix}\begin{bmatrix} c \\ d\end{bmatrix}= \overline{a} \cdot c + \overline{b} \cdot d$$> *Python note:* We will again use previously defined functions to calculate adjoint of a vector and a product of two vectors. > We need to keep in mind that the task asks us to return a complex number and not a $1 \times 1$ matrix which is the result of the multiplication. > Therefore at the end we'll extract the top left element of the `resultMatrix` and return it.
###Code
@exercise
def inner_prod(v : Matrix, w : Matrix) -> complex:
# Calculate the adjoint of the v vector
adjointV = adjoint(v)
# Multiply the adjoint v and w. The result will be a matrix with only one element.
resultMatrix = matrix_mult(adjointV, w)
# To get the actual complex number, we have to take one element from the multiplication result.
return resultMatrix[0][0]
###Output
_____no_output_____
###Markdown
[Return to task 9 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-9:-Inner-product.) Exercise 10: Normalized vectors.**Input:** A non-zero $n \times 1$ vector $V$.**Output:** Return an $n \times 1$ vector $\frac{V}{||V||}$ - the normalized version of the vector $V$. Solution If the vector $V = \begin{bmatrix}a & b & c \end{bmatrix}$, its norm $ ||V|| = \sqrt{|a|^2 + |b|^2 + |c|^2} $,and its normalized version is$ \begin{bmatrix}\frac{a}{||V||} & \frac{b}{||V||} & \frac{c}{||V||} \end{bmatrix} $.Thus, we need to calculate the norm of the vector and to divide each element of the vector by it. We will calculate the norm as a square root of an inner product of the vector with itself.
###Code
@exercise
def normalize(v : Matrix) -> Matrix:
norm = math.sqrt(inner_prod(v, v).real)
n = len(v)
ans = create_empty_matrix(n, 1)
# Divide each element of the vector by the norm
for i in range(n):
ans[i][0] = v[i][0] / norm
return ans
###Output
_____no_output_____
###Markdown
[Return to task 10 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-10:-Normalized-vectors.) Exercise 11: Outer product.**Inputs:**1. An $n \times 1$ vector $V$.2. An $m \times 1$ vector $W$.**Output:** Return an $n \times m$ matrix that represents the outer product of $V$ and $W$. SolutionBy definition, the outer product of $V$ and $W$ is $VW^\dagger$. We can use a similar approach to calculating the inner product, except here we will return the whole multiplication result rather than a specific number.
###Code
@exercise
def outer_prod(v : Matrix, w : Matrix) -> Matrix:
# Calculate adjoint of the W
adjointW = adjoint(w)
# Multiply V by W adjoint
return matrix_mult(v, adjointW)
###Output
_____no_output_____
###Markdown
[Return to task 11 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-11:-Outer-product.) Exercise 12*: Tensor Product.**Inputs:**1. An $n \times m$ matrix $A$.2. A $k \times l$ matrix $B$.**Output:** Return an $(n \cdot k) \times (m \cdot l)$ matrix $A \otimes B$, the tensor product of $A$ and $B$. SolutionWe will follow the definition of the tensor product. For example, tensor product of $2 \times 2$ matrices look as follows:$$\begin{bmatrix} a & b \\ c & d \end{bmatrix} \otimes \begin{bmatrix} e & f \\ g & h \end{bmatrix} =\begin{bmatrix} a \cdot \begin{bmatrix} e & f \\ g & h \end{bmatrix} & b \cdot \begin{bmatrix} e & f \\ g & h \end{bmatrix} \\ c \cdot \begin{bmatrix} e & f \\ g & h \end{bmatrix} & d \cdot \begin{bmatrix} e & f \\ g & h \end{bmatrix}\end{bmatrix}=\begin{bmatrix} a \cdot e & a \cdot f & b \cdot e & b \cdot f \\ a \cdot g & a \cdot h & b \cdot g & b \cdot h \\ c \cdot e & c \cdot f & d \cdot e & d \cdot f \\ c \cdot g & c \cdot h & d \cdot g & d \cdot h\end{bmatrix}$$> *Python note:* We need to calculate pairwise products of all elements of the left matrix and all elements of the right matrix; this means we have to use 4 nested loops.
###Code
@exercise
def tensor_product(a : Matrix, b : Matrix) -> Matrix:
aRows = len(a) # the number of rows for matrix a
aColumns = len(a[0]) # the number of columns for matrix a
bRows = len(b) # the number of rows for matrix b
bColumns = len(b[0]) # the number of columns for matrix b
ans = create_empty_matrix(aRows * bRows, aColumns * bColumns)
# Outer pair of loops, iterating trough the elements of the left matrix
for i in range(aRows):
for j in range(aColumns):
# Inner pair of loops, iterating through the elements of the right matrix
for k in range(bRows):
for l in range(bColumns):
ans[i * bRows + k][j * bColumns + l] = a[i][j] * b[k][l]
return ans
###Output
_____no_output_____
###Markdown
[Return to task 12 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-12*:-Tensor-Product.) Exercise 13: Finding an eigenvalue.**Inputs:**1. A real-valued $n \times n$ matrix $A$.2. An eigenvector $V$ of matrix $A$.**Output:** Return a real number - the eigenvalue of $A$ that is associated with the given eigenvector. SolutionLet's consider what happens when we multiply the matrix by its eigenvector for a $3 \times 3$ example:$$ A \cdot V = \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \end{bmatrix} \cdot \begin{bmatrix}j \\ k \\ l \end{bmatrix} = \begin{bmatrix} m \\ n \\ o \end{bmatrix} = \alpha \begin{bmatrix}j \\ k \\ l \end{bmatrix} = \alpha V$$This means you can find the eigenvalue $\alpha$ from the equations $$ \begin{cases} \alpha j = m \\ \alpha k = n \\ \alpha l = o \end{cases}$$We can use any of them, keeping in mind that we need an equation in which the element of the eigenvector is not zero (otherwise we get an equation $0 \alpha = 0$ which doesn't help us find $\alpha$).Since eigenvectors are defined as non-zero vectors, we are guaranteed that at least one element of the vector will not be zero.
###Code
from pytest import approx
@exercise
def find_eigenvalue(a : Matrix, v : Matrix) -> float:
n = len(v)
multiplied = matrix_mult(a, v)
for i in range(n):
if (v[i][0] != approx(0)):
return multiplied[i][0] / v[i][0]
###Output
_____no_output_____
###Markdown
[Return to task 13 of the Linear Algebra tutorial.](/LinearAlgebra.ipynbExercise-13:-Finding-an-eigenvalue.) Exercise 14**: Finding an eigenvector.**Inputs:**1. A $2 \times 2$ matrix $A$.2. An eigenvalue $x$ of matrix $A$.**Output:** Return any non-zero eigenvector of $A$ that is associated with $x$. SolutionSearching for an eigenvector $V$ associated with a specific eigenvalue $x$ asks for solving the following equation:$$ AV = xV $$or, equivalently, $$(A - xI_n)V = 0$$In other words, for a $2 \times 2$ matrix the following happens: 1. Multiply the identity matrix $I_2$ by the eigenvalue:$$ x \cdot \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} x & 0 \\ 0 & x \end{bmatrix} $$2. Subtract this new matrix from the given matrix $A$:$$ \begin{bmatrix} a & b \\ c & d \end{bmatrix} - \begin{bmatrix} x & 0 \\ 0 & x \end{bmatrix} = \begin{bmatrix} a -x & b \\ c & d -x \end{bmatrix} $$ 3. Find a vector that, when multiplied by the resulting matrix, will produce a 0 vector:$$ \begin{bmatrix} a - x & b \\ c & d - x \end{bmatrix} \cdot \begin{bmatrix} v_0 \\ v_1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$This can be rewritten as the following system of equations:$$\begin{cases}(a - x) \cdot v_0 + b \cdot v_1 = 0 \\c \cdot v_0 + (d - x) \cdot v_1 = 0 \end{cases}$$Each eigenvalue has infinitely many eigenvectors associated with it (since multiplying an eigenvector by a number gives another valid eigenvector). We can limit our search and say that $v_0 = 1$, if possible. In this case, the system of equations becomes$$\begin{cases}(a - x) + b \cdot v_1 = 0 \\c + (d - x) \cdot v_1 = 0 \end{cases}$$and finally we get $v_1 = \frac{a-x}{-b}$.If $b = 0$, we can not perform this division, so we need to reconsider our choices. The first equation becomes $(a-x)v_0 = 0$, which is possible in two cases:* If $a - x \neq 0$, we get $v_0 = 0$ and thus $v_1$ has to be non-zero (we can pick $v_1 = 1$).* If $a - x = 0$, we can not get any information from the first equation and have to fall back to the second one:$c \cdot v_0 + (d - x) \cdot v_1 = 0$. Following a similar logic: * If $c = 0$, we get $(d - x) \cdot v_1 = 0$, so $v_0 = 1, v_1 = 0$. * If $c \neq 0$, we get $v_1 = 1, v_0 = \frac{d-x}{-c}$.
###Code
@exercise
def find_eigenvector(a : Matrix, x : float) -> Matrix:
# Check for possible edge cases
if (a[0][1] == 0):
if (a[0][0] - x == 0):
if (a[1][0] == 0):
return [[1], [0]]
else:
return [[(a[1][1] - x) / (-a[1][0])], [1]]
else:
return [[0], [1]]
v0 = 1
v1 = (a[0][0] - x) / (-a[0][1])
return [[v0], [v1]]
###Output
_____no_output_____
###Markdown
Linear Algebra Tutorial Workbook**What is this workbook?**A workbook is a collection of problems, accompanied by solutions to them. The explanations focus on the logical steps required to solve a problem; they illustrate the concepts that need to be applied to come up with a solution to the problem, explaining the mathematical steps required. Note that a workbook should not be the primary source of knowledge on the subject matter; it assumes that you've already read a tutorial or a textbook and that you are now seeking to improve your problem-solving skills. You should attempt solving the tasks of the respective kata first, and turn to the workbook only if stuck. While a textbook emphasizes knowledge acquisition, a workbook emphasizes skill acquisition.This workbook describes the solutions to the problems offered in the [Linear Algebra tutorial](./LinearAlgebra.ipynb). Since the tasks are offered as programming problems, the explanations also cover some elements of Python that might be non-obvious for a first-time user.**What you should know for this workbook**1. Complex arithmetic.2. Basic Python knowledge is helpful but not necessary. Click the cell with code below this block of text and press `Ctrl+Enter` (`⌘+Enter` on Mac). **Do not skip this step**.
###Code
# Run this cell using Ctrl+Enter (⌘+Enter on Mac).
from testing import exercise, create_empty_matrix
from typing import List
import math, cmath
Matrix = List[List[complex]]
###Output
_____no_output_____
###Markdown
Exercise 1: Matrix addition.**Inputs:**1. An $n \times m$ matrix $A$, represented as a two-dimensional list.2. An $n \times m$ matrix $B$, represented as a two-dimensional list.**Output:** Return the sum of the matrices $A + B$ - an $n \times m$ matrix, represented as a two-dimensional list. SolutionFollowing the definition given in the tutorial, the sum of two matrices is a matrix of element-wise sums of matrix elements; for example, for $2 \times 2$ matrices$$ A + B =\begin{bmatrix} a & b \\ c & d \end{bmatrix} + \begin{bmatrix} e & f \\ g & h \end{bmatrix} = \begin{bmatrix} a + e & b + f \\ c + g & d + h \end{bmatrix}$$> *Python note:* This tutorial uses a lot of lists and loops, so let's walk through some Python syntax details first. If you're familiar with Python syntax, feel free to skip this note!>> * [`range(x)`](https://docs.python.org/3/tutorial/controlflow.html#the-range-function) will create a sequence of numbers from 0 to `x - 1`, inclusive, which you can iterate over like a [list](https://docs.python.org/3/tutorial/introduction.html#lists); for example, `range(3)` will yield `0, 1, 2`. > * The [`for`](https://docs.python.org/3/tutorial/controlflow.html#for-statements) statement iterates over the items of a sequence; for example, the following code> ```python> for i in range(3):> print(i)> ```>> will print:> ```> 0> 1> 2> ```>> * Matrices are described as two-dimensional lists, > which are represented as lists of lists. For example, the following matrix:>> $$\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix} $$>> is represented as a list of lists `[[1, 2, 3], [4, 5, 6]]`. >> * You can access a specific element of the list using the index of that element in the list (note that indices start with 0): the first element of `array` is `array[0]`, the second - `array[1]`, etc.> * Similarly, you can access an element of a matrix using the row and column indices of that element: `matrix[0][2]` would access the element in the first row and 3rd column.> * `len(array)` returns the number of elements in a list; for example, `len([0, 1, 2])` will return 3.> * Here is an example of creating a matrix from the example above and looping through its elements to print them:>>```Python>matrix = [[1, 2, 3], [4, 5, 6]]>numberOfRows = len(matrix) # will return 2>numberOfColumns = len(matrix[0]) # will return 3>for row in range(numberOfRows):> for column in range(numberOfColumns):> print(matrix[row][column])>>```>> * Finally, the first exercise offers you a template of a solution that uses a function `create_empty_matrix(n, m)`; this function creates an $n \times m$ matrix filled with 0's as values. This function is not a built-in Python function; this notebook defines it for you to use.
###Code
@exercise
def matrix_add(a : Matrix, b : Matrix) -> Matrix:
# You can get the size of a matrix like this:
rows = len(a)
columns = len(a[0])
# You can use the following function to initialize a rows×columns matrix filled with 0s to store your answer
c = create_empty_matrix(rows, columns)
for i in range(rows):
for j in range(columns):
# You can access elements of a matrix like this:
x = a[i][j]
y = b[i][j]
# You can modify the elements of a matrix like this:
c[i][j] = a[i][j] + b[i][j]
return c
###Output
_____no_output_____
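###Markdown
> *Worked example:* the cell below is a small standalone sketch of element-wise addition on concrete numbers (it defines its own matrices and does not go through the `@exercise` harness), computing $\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} + \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} = \begin{bmatrix} 6 & 8 \\ 10 & 12 \end{bmatrix}$.
###Code
# A standalone sketch of element-wise matrix addition on concrete numbers
a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
c = [[a[i][j] + b[i][j] for j in range(len(a[0]))] for i in range(len(a))]
print(c)  # [[6, 8], [10, 12]]
###Output
_____no_output_____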
###Markdown
[Return to task 1 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-1:-Matrix-addition.) Exercise 2: Scalar multiplication.**Inputs:**1. A scalar $x$.2. An $n \times m$ matrix $A$.**Output:** Return the $n \times m$ matrix $x \cdot A$. SolutionWe can again follow the definition given in the tutorial: to calculate the product of a number and a matrix, multiply each matrix element by that number. For example, for a $2 \times 2$ matrix:$$x \cdot A = x \cdot \begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} x \cdot a & x \cdot b \\ x \cdot c & x \cdot d \end{bmatrix} $$ > *Python note:* We have to multiply each element in the matrix by the given number $x$. To do so, we will again loop through each matrix element with 2 `for` loops, do the multiplication and store its result in the corresponding element of the newly created matrix.
###Code
@exercise
def scalar_mult(x : complex, a : Matrix) -> Matrix:
rows = len(a)
columns = len(a[0])
c = create_empty_matrix(rows, columns)
for i in range(rows):
for j in range(columns):
c[i][j] = a[i][j] * x
return c
###Output
_____no_output_____
###Markdown
[Return to task 2 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-2:-Scalar-multiplication.) Exercise 3: Matrix multiplication.**Inputs:**1. An $n \times m$ matrix $A$.2. An $m \times k$ matrix $B$.**Output:** Return the $n \times k$ matrix equal to the matrix product $AB$. SolutionAgain, the tutorial gives us the definition of how multiplication works, and we just need to implement it in code. Here is an example of multiplying a $2 \times 3$ matrix by a $3 \times 2$ matrix:$$ A \cdot B =\begin{bmatrix} a & b & c \\ d & e & f \end{bmatrix} \cdot \begin{bmatrix} h & i \\ j & k \\ l & m \end{bmatrix} = \begin{bmatrix} a \cdot h + b \cdot j + c \cdot l & a \cdot i + b \cdot k + c \cdot m \\ d \cdot h + e \cdot j + f \cdot l & d \cdot i + e \cdot k + f \cdot m \end{bmatrix} $$> *Python note*: In this exercise we'll need an extra nested loop. We will iterate through the rows and columns of the resulting matrix, similar to the previous exercises, but for each element of the result we'll need to iterate through the row of the left matrix and the column of the right matrix that contribute to that element. In the example above, to get the element in the first row and the first column of the resulting matrix product we'll need to iterate through the first row of the left matrix $\begin{bmatrix} a & b & c \end{bmatrix}$ and the first column of the right matrix $\begin{bmatrix} h \\ j \\ l \end{bmatrix}$ and add up pairwise products of their elements.>> Note that the empty matrix we create for storing the result differs in dimensions from the previous exercises: its number of rows equals the number of rows of the left matrix, and its number of columns equals the number of columns of the right matrix. >> The Python `+=` operator is a convenient shorthand for the assignment `variable = variable + increment`.
###Code
@exercise
def matrix_mult(a : Matrix, b : Matrix) -> Matrix:
rows = len(a) # the number of rows of the left matrix
common = len(a[0]) # = len(b) - the common dimension of the matrices
columns = len(b[0]) # the number of columns of the right matrix
ans = create_empty_matrix(rows, columns)
for currentRow in range(rows):
for currentColumn in range(columns):
for k in range(common):
ans[currentRow][currentColumn] += a[currentRow][k] * b[k][currentColumn]
return ans
###Output
_____no_output_____
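###Markdown
> *Worked example:* the cell below is a standalone sketch of the same triple loop on concrete numbers, multiplying a $2 \times 3$ matrix by a $3 \times 2$ matrix to get a $2 \times 2$ result (it defines its own matrices and does not go through the `@exercise` harness).
###Code
# A standalone sketch of the triple-loop matrix product on concrete numbers
a = [[1, 2, 3], [4, 5, 6]]
b = [[7, 8], [9, 10], [11, 12]]
ans = [[0] * len(b[0]) for _ in range(len(a))]
for i in range(len(a)):
    for j in range(len(b[0])):
        for k in range(len(a[0])):
            ans[i][j] += a[i][k] * b[k][j]
print(ans)  # [[58, 64], [139, 154]]
###Output
_____no_output_____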
###Markdown
[Return to task 3 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-3:-Matrix-multiplication.) Exercise 4: Matrix Inversion.**Input:** An invertible $2 \times 2$ matrix $A$.**Output:** Return the inverse of $A$, a $2 \times 2$ matrix $A^{-1}$. SolutionSince we only need to invert a $2 \times 2$ matrix, we will not consider a solution that works for arbitrary-sized matrices. We will follow the algorithm described in the [Wikipedia article](https://en.wikipedia.org/wiki/Invertible_matrix#Inversion_of_2_%C3%97_2_matrices).$$ A = \begin{bmatrix} a & b \\ c & d \end{bmatrix} $$The determinant of the matrix is defined as $$ |A| = a \cdot d - b \cdot c $$$$A^{-1} = \frac{1}{|A|} \cdot \begin{bmatrix} d & -b \\ -c & a \end{bmatrix} = \begin{bmatrix} \frac{d}{|A|} & \frac{-b}{|A|} \\ \frac{-c}{|A|} & \frac{a}{|A|} \end{bmatrix} $$Note that since $A$ is invertible, its determinant is guaranteed to be non-zero, so dividing by it is safe.
###Code
@exercise
def matrix_inverse(m : Matrix) -> Matrix:
# Extract each element of the array into a named variable
a = m[0][0]
b = m[0][1]
c = m[1][0]
d = m[1][1]
# Calculate the determinant
determinant = (a * d) - (b * c)
# Create the inverse of the matrix following the formula above
ans = [[d / determinant, -b / determinant], [-c / determinant, a / determinant]]
return ans
###Output
_____no_output_____
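###Markdown
> *A quick check:* the standalone cell below applies the inversion formula to a concrete matrix and multiplies the result back by the original, which should give the identity matrix.
###Code
# Standalone check that the formula above yields a matrix with A·A⁻¹ = I
a, b, c, d = 1, 2, 3, 4
det = a * d - b * c                      # -2
inv = [[d / det, -b / det], [-c / det, a / det]]
prod = [[a * inv[0][0] + b * inv[1][0], a * inv[0][1] + b * inv[1][1]],
        [c * inv[0][0] + d * inv[1][0], c * inv[0][1] + d * inv[1][1]]]
print(prod)  # [[1.0, 0.0], [0.0, 1.0]]
###Output
_____no_output_____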
###Markdown
[Return to task 4 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-4:-Matrix-Inversion.) Exercise 5: Transpose.**Input:** An $n \times m$ matrix $A$.**Output:** Return an $m \times n$ matrix $A^T$, the transpose of $A$. SolutionAgain, the tutorial gives us the definition of matrix transpose, so we just need to fill the resulting matrix with the elements of the original matrix in the right order. For example, for a $3 \times 2$ matrix$$\begin{bmatrix} a & b \\ c & d \\ e & f\end{bmatrix}^T=\begin{bmatrix} a & c & e \\ b & d & f\end{bmatrix}$$
###Code
@exercise
def transpose(a : Matrix) -> Matrix:
rows = len(a)
columns = len(a[0])
# Note that the resulting matrix dimensions are swapped compared to the original ones
ans = create_empty_matrix(columns, rows)
for i in range(rows):
for j in range(columns):
ans[j][i] = a[i][j]
return ans
###Output
_____no_output_____
###Markdown
[Return to task 5 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-5:-Transpose.) Exercise 6: Conjugate.**Input:** An $n \times m$ matrix $A$.**Output:** Return an $n \times m$ matrix $\overline{A}$, the conjugate of $A$. SolutionTo get the conjugate of a matrix you take the conjugate of each individual element (check the [Complex Arithmetic tutorial](../ComplexArithmetic/ComplexArithmetic.ipynb#Complex-Conjugate) for the definition).> *Python note*: In the complex arithmetic tutorial complex numbers were represented as tuples of real and imaginary components. However, this tutorial relies on Python's built-in [`complex`](https://docs.python.org/3.8/library/functions.html#complex) data type. Python's [cmath library](https://docs.python.org/3.8/library/cmath.html) offers a lot of useful functions that deal with the `complex` data type.>> Here is an example of using the `complex` data type:>> ```Python> # Import the cmath library> import cmath>> # Create a new complex number 5 + 3i; the two arguments are the real and the imaginary parts of the number> complexNumber = complex(5, 3)>> # Print the real and the imaginary parts of the number> print(complexNumber.real) > print(complexNumber.imag)>> # Convert the complex number to its polar representation using the cmath library> polar = cmath.polar(complexNumber)> print(polar) # This prints: (5.830951894845301, 0.5404195002705842)> ```>> To get the complex conjugate of a matrix, we loop through each element of the matrix, extract the real and imaginary parts of the number and flip the sign of the imaginary part.
###Code
@exercise
def conjugate(a : Matrix) -> Matrix:
rows = len(a)
columns = len(a[0])
ans = create_empty_matrix(rows, columns)
for i in range(rows):
for j in range(columns):
            # Flip the sign of the imaginary part of each element
            ans[i][j] = complex(a[i][j].real, -a[i][j].imag)
return ans
###Output
_____no_output_____
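###Markdown
> *Python note (an aside):* Python's `complex` type also has a built-in `conjugate()` method, so the sign flip doesn't have to be done by hand; the solution above spells it out to make the definition explicit. The standalone cell below shows both ways on concrete numbers.
###Code
# Conjugating the elements of a small complex matrix, two equivalent ways
row = [complex(1, 2), complex(3, -4)]
manual = [complex(z.real, -z.imag) for z in row]
builtin = [z.conjugate() for z in row]
print(manual)   # [(1-2j), (3+4j)]
print(builtin)  # [(1-2j), (3+4j)]
###Output
_____no_output_____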
###Markdown
[Return to task 6 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-6:-Conjugate.) Exercise 7: Adjoint.**Input:** An $n \times m$ matrix $A$.**Output:** Return an $m \times n$ matrix $A^\dagger$, the adjoint of $A$. SolutionTo get the adjoint we perform both **transpose** and **conjugate** operations on the input matrix. We can write out the whole procedure manually, like we have done above, but we can also leverage the code we have written above.> In Python the `def` keyword defines a function, which can be reused later in the code.
###Code
@exercise
def adjoint(a : Matrix) -> Matrix:
# Call the transpose function with the input matrix a
transp = transpose(a)
# Call the conjugate function with the transposed matrix as input
ans = conjugate(transp)
return ans
###Output
_____no_output_____
###Markdown
[Return to task 7 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-7:-Adjoint.) Exercise 8: Unitary Verification.**Input:** An $n \times n$ matrix $A$.**Output:** Check if the matrix is unitary and return `True` if it is, or `False` if it isn't. SolutionA matrix is unitary if this holds true: $UU^\dagger = U^\dagger U = I$.(As a reminder, an identity matrix is a matrix with 1s on the main diagonal and 0s everywhere else.)Thus, to check if the input matrix is unitary we will need to perform the following steps:1. Calculate the adjoint of the input matrix.2. Multiply it by the input matrix.3. Check if the multiplication result is equal to an identity matrix. > *Python note:* We will leverage the `adjoint` and the `matrix_mult` functions that we have created above.>> When we check each element of $UU^\dagger$ to see whether it equals the respective element of the identity matrix, we'll use the `approx` function from the `pytest` library to perform this comparison approximately.
###Code
from pytest import approx
@exercise
def is_matrix_unitary(a : Matrix) -> bool:
n = len(a)
# Calculate the adjoint matrix
adjointA = adjoint(a)
    # Multiply the adjoint matrix by the input matrix
    # (for a square matrix one check suffices: if U·U† = I, then U†·U = I as well)
multipliedMatrix = matrix_mult(a, adjointA)
# Check whether the multiplication result is (approximately) identity matrix
for i in range(n):
for j in range(n):
# An identity matrix has 1's in all the places where the row index and column index are equal...
if i == j:
if multipliedMatrix[i][j] != approx(1):
return False
# ... and 0's in all the places where the row index and column index are different
else:
if multipliedMatrix[i][j] != approx(0):
return False
return True
###Output
_____no_output_____
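###Markdown
> *A quick check:* the standalone cell below verifies a known unitary matrix, $\frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}$, by computing $UU^\dagger$ inline and comparing it to the identity within a small tolerance (it does not go through the `@exercise` harness).
###Code
import math

# A real unitary matrix; since it is real, its adjoint is just its transpose
s = 1 / math.sqrt(2)
u = [[s, s], [s, -s]]
ut = [[u[j][i] for j in range(2)] for i in range(2)]
prod = [[sum(u[i][k] * ut[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
# Compare the product to the identity matrix within a small tolerance
print(all(abs(prod[i][j] - (1 if i == j else 0)) < 1e-9 for i in range(2) for j in range(2)))  # True
###Output
_____no_output_____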
###Markdown
[Return to task 8 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-8:-Unitary-Verification.) Exercise 9: Inner product.**Inputs:**1. An $n \times 1$ vector $V$.2. An $n \times 1$ vector $W$.**Output:** Return a complex number - the inner product $\langle V , W \rangle$. SolutionFollowing the definition of the inner product, $\langle V , W \rangle = V^\dagger W$. For example, for vectors of length 2:$$\langle\begin{bmatrix} a \\ b\end{bmatrix},\begin{bmatrix} c \\ d\end{bmatrix}\rangle =\begin{bmatrix} a \\ b\end{bmatrix}^\dagger\begin{bmatrix} c \\ d\end{bmatrix}=\begin{bmatrix} \overline{a} & \overline{b} \end{bmatrix}\begin{bmatrix} c \\ d\end{bmatrix}= \overline{a} \cdot c + \overline{b} \cdot d$$> *Python note:* We will again use the previously defined functions to calculate the adjoint of a vector and the product of the two matrices. > We need to keep in mind that the task asks us to return a complex number and not a $1 \times 1$ matrix, which is the result of the multiplication. > Therefore at the end we'll extract the top left element of the `resultMatrix` and return it.
###Code
@exercise
def inner_prod(v : Matrix, w : Matrix) -> complex:
# Calculate the adjoint of the v vector
adjointV = adjoint(v)
# Multiply the adjoint v and w. The result will be a matrix with only one element.
resultMatrix = matrix_mult(adjointV, w)
# To get the actual complex number, we have to take one element from the multiplication result.
return resultMatrix[0][0]
###Output
_____no_output_____
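###Markdown
> *Worked example:* for $V = \begin{bmatrix} 1+i \\ 2 \end{bmatrix}$ and $W = \begin{bmatrix} 3 \\ 4-i \end{bmatrix}$, the inner product is $\overline{(1+i)} \cdot 3 + \overline{2} \cdot (4-i) = (3-3i) + (8-2i) = 11-5i$; the standalone sketch below reproduces this.
###Code
# A standalone sketch of the inner product on concrete complex numbers
v = [complex(1, 1), complex(2, 0)]
w = [complex(3, 0), complex(4, -1)]
ip = sum(vi.conjugate() * wi for vi, wi in zip(v, w))
print(ip)  # (11-5j)
###Output
_____no_output_____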
###Markdown
[Return to task 9 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-9:-Inner-product.) Exercise 10: Normalized vectors.**Input:** A non-zero $n \times 1$ vector $V$.**Output:** Return an $n \times 1$ vector $\frac{V}{||V||}$ - the normalized version of the vector $V$. Solution If the vector $V = \begin{bmatrix} a \\ b \\ c \end{bmatrix}$, its norm is $ ||V|| = \sqrt{|a|^2 + |b|^2 + |c|^2} $,and its normalized version is$ \begin{bmatrix} \frac{a}{||V||} \\ \frac{b}{||V||} \\ \frac{c}{||V||} \end{bmatrix} $.Thus, we need to calculate the norm of the vector and divide each element of the vector by it. We will calculate the norm as the square root of the inner product of the vector with itself; this inner product is always a non-negative real number, so we can take its real part before applying the square root.
###Code
@exercise
def normalize(v : Matrix) -> Matrix:
norm = math.sqrt(inner_prod(v, v).real)
n = len(v)
ans = create_empty_matrix(n, 1)
# Divide each element of the vector by the norm
for i in range(n):
ans[i][0] = v[i][0] / norm
return ans
###Output
_____no_output_____
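###Markdown
> *Worked example:* the vector $\begin{bmatrix} 3 \\ 4 \end{bmatrix}$ has norm $\sqrt{3^2 + 4^2} = 5$, so its normalized version is $\begin{bmatrix} 0.6 \\ 0.8 \end{bmatrix}$; the standalone sketch below reproduces this.
###Code
import math

# A standalone sketch of vector normalization on concrete numbers
v = [3, 4]
norm = math.sqrt(sum(x * x for x in v))  # 5.0
print([x / norm for x in v])             # [0.6, 0.8]
###Output
_____no_output_____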
###Markdown
[Return to task 10 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-10:-Normalized-vectors.) Exercise 11: Outer product.**Inputs:**1. An $n \times 1$ vector $V$.2. An $m \times 1$ vector $W$.**Output:** Return an $n \times m$ matrix that represents the outer product of $V$ and $W$. SolutionBy definition, the outer product of $V$ and $W$ is $VW^\dagger$. We can use a similar approach to calculating the inner product, except here we will return the whole multiplication result rather than a specific number.
###Code
@exercise
def outer_prod(v : Matrix, w : Matrix) -> Matrix:
    # Calculate the adjoint of W
    adjointW = adjoint(w)
    # Multiply V by the adjoint of W
return matrix_mult(v, adjointW)
###Output
_____no_output_____
###Markdown
[Return to task 11 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-11:-Outer-product.) Exercise 12*: Tensor Product.**Inputs:**1. An $n \times m$ matrix $A$.2. A $k \times l$ matrix $B$.**Output:** Return an $(n \cdot k) \times (m \cdot l)$ matrix $A \otimes B$, the tensor product of $A$ and $B$. SolutionWe will follow the definition of the tensor product. For example, the tensor product of two $2 \times 2$ matrices looks as follows:$$\begin{bmatrix} a & b \\ c & d \end{bmatrix} \otimes \begin{bmatrix} e & f \\ g & h \end{bmatrix} =\begin{bmatrix} a \cdot \begin{bmatrix} e & f \\ g & h \end{bmatrix} & b \cdot \begin{bmatrix} e & f \\ g & h \end{bmatrix} \\ c \cdot \begin{bmatrix} e & f \\ g & h \end{bmatrix} & d \cdot \begin{bmatrix} e & f \\ g & h \end{bmatrix}\end{bmatrix}=\begin{bmatrix} a \cdot e & a \cdot f & b \cdot e & b \cdot f \\ a \cdot g & a \cdot h & b \cdot g & b \cdot h \\ c \cdot e & c \cdot f & d \cdot e & d \cdot f \\ c \cdot g & c \cdot h & d \cdot g & d \cdot h\end{bmatrix}$$> *Python note:* We need to calculate pairwise products of all elements of the left matrix and all elements of the right matrix; this means we have to use 4 nested loops.
###Code
@exercise
def tensor_product(a : Matrix, b : Matrix) -> Matrix:
aRows = len(a) # the number of rows for matrix a
aColumns = len(a[0]) # the number of columns for matrix a
bRows = len(b) # the number of rows for matrix b
bColumns = len(b[0]) # the number of columns for matrix b
ans = create_empty_matrix(aRows * bRows, aColumns * bColumns)
    # Outer pair of loops, iterating through the elements of the left matrix
for i in range(aRows):
for j in range(aColumns):
# Inner pair of loops, iterating through the elements of the right matrix
for k in range(bRows):
for l in range(bColumns):
ans[i * bRows + k][j * bColumns + l] = a[i][j] * b[k][l]
return ans
###Output
_____no_output_____
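###Markdown
> *Worked example:* the standalone sketch below computes $\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \otimes \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$ with the same index arithmetic as the solution above (it defines its own matrices and does not go through the `@exercise` harness).
###Code
# A standalone sketch of the tensor product on concrete numbers
a = [[1, 2], [3, 4]]
b = [[0, 1], [1, 0]]
ans = [[0] * 4 for _ in range(4)]
for i in range(2):
    for j in range(2):
        for k in range(2):
            for l in range(2):
                ans[i * 2 + k][j * 2 + l] = a[i][j] * b[k][l]
print(ans)  # [[0, 1, 0, 2], [1, 0, 2, 0], [0, 3, 0, 4], [3, 0, 4, 0]]
###Output
_____no_output_____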
###Markdown
[Return to task 12 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-12*:-Tensor-Product.) Exercise 13: Finding an eigenvalue.**Inputs:**1. A real-valued $n \times n$ matrix $A$.2. An eigenvector $V$ of matrix $A$.**Output:** Return a real number - the eigenvalue of $A$ that is associated with the given eigenvector. SolutionLet's consider what happens when we multiply the matrix by its eigenvector for a $3 \times 3$ example:$$ A \cdot V = \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \end{bmatrix} \cdot \begin{bmatrix}j \\ k \\ l \end{bmatrix} = \begin{bmatrix} m \\ n \\ o \end{bmatrix} = \alpha \begin{bmatrix}j \\ k \\ l \end{bmatrix} = \alpha V$$This means you can find the eigenvalue $\alpha$ from the equations $$ \begin{cases} \alpha j = m \\ \alpha k = n \\ \alpha l = o \end{cases}$$We can use any of them, keeping in mind that we need an equation in which the element of the eigenvector is not zero (otherwise we get an equation $0 \cdot \alpha = 0$, which doesn't help us find $\alpha$).Since eigenvectors are defined as non-zero vectors, we are guaranteed that at least one element of the vector will not be zero.
###Code
from pytest import approx
@exercise
def find_eigenvalue(a : Matrix, v : Matrix) -> float:
n = len(v)
multiplied = matrix_mult(a, v)
    # Look for an element of the eigenvector that is not (approximately) zero;
    # at least one exists, since eigenvectors are non-zero by definition
    for i in range(n):
        if (v[i][0] != approx(0)):
            return multiplied[i][0] / v[i][0]
###Output
_____no_output_____
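###Markdown
> *Worked example:* for $A = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}$ and the eigenvector $V = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$, multiplying gives $AV = \begin{bmatrix} 3 \\ 3 \end{bmatrix} = 3V$, so the eigenvalue is $3$; the standalone sketch below reproduces this.
###Code
# A standalone sketch of recovering an eigenvalue on concrete numbers
a = [[2, 1], [1, 2]]
v = [1, 1]
av = [a[0][0] * v[0] + a[0][1] * v[1],
      a[1][0] * v[0] + a[1][1] * v[1]]  # [3, 3]
print(av[0] / v[0])  # 3.0 -- the eigenvalue
###Output
_____no_output_____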
###Markdown
[Return to task 13 of the Linear Algebra tutorial.](/LinearAlgebra.ipynbExercise-13:-Finding-an-eigenvalue.) Exercise 14**: Finding an eigenvector.**Inputs:**1. A $2 \times 2$ matrix $A$.2. An eigenvalue $x$ of matrix $A$.**Output:** Return any non-zero eigenvector of $A$ that is associated with $x$. SolutionSearching for an eigenvector $V$ associated with a specific eigenvalue $x$ asks for solving the following equation:$$ AV = xV $$or, equivalently, $$(A - xI_n)V = 0$$In other words, for a $2 \times 2$ matrix the following happens: 1. Multiply the identity matrix $I_2$ by the eigenvalue:$$ x \cdot \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} x & 0 \\ 0 & x \end{bmatrix} $$2. Subtract this new matrix from the given matrix $A$:$$ \begin{bmatrix} a & b \\ c & d \end{bmatrix} - \begin{bmatrix} x & 0 \\ 0 & x \end{bmatrix} = \begin{bmatrix} a -x & b \\ c & d -x \end{bmatrix} $$ 3. Find a vector that, when multiplied by the resulting matrix, will produce a 0 vector:$$ \begin{bmatrix} a - x & b \\ c & d - x \end{bmatrix} \cdot \begin{bmatrix} v_0 \\ v_1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$This can be rewritten as the following system of equations:$$\begin{cases}(a - x) \cdot v_0 + b \cdot v_1 = 0 \\c \cdot v_0 + (d - x) \cdot v_1 = 0 \end{cases}$$Each eigenvalue has infinitely many eigenvectors associated with it (since multiplying an eigenvector by a number gives another valid eigenvector). We can limit our search and say that $v_0 = 1$, if possible. In this case, the system of equations becomes$$\begin{cases}(a - x) + b \cdot v_1 = 0 \\c + (d - x) \cdot v_1 = 0 \end{cases}$$and finally we get $v_1 = \frac{a-x}{-b}$.If $b = 0$, we can not perform this division, so we need to reconsider our choices. The first equation becomes $(a-x)v_0 = 0$, which is possible in two cases:* If $a - x \neq 0$, we get $v_0 = 0$ and thus $v_1$ has to be non-zero (we can pick $v_1 = 1$).* If $a - x = 0$, we can not get any information from the first equation and have to fall back to the second one:$c \cdot v_0 + (d - x) \cdot v_1 = 0$. Following a similar logic: * If $c = 0$, we get $(d - x) \cdot v_1 = 0$, so $v_0 = 1, v_1 = 0$. * If $c \neq 0$, we get $v_1 = 1, v_0 = \frac{d-x}{-c}$.
###Code
@exercise
def find_eigenvector(a : Matrix, x : float) -> Matrix:
# Check for possible edge cases
if (a[0][1] == 0):
if (a[0][0] - x == 0):
if (a[1][0] == 0):
return [[1], [0]]
else:
return [[(a[1][1] - x) / (-a[1][0])], [1]]
else:
return [[0], [1]]
v0 = 1
v1 = (a[0][0] - x) / (-a[0][1])
return [[v0], [v1]]
###Output
_____no_output_____
###Markdown
Linear Algebra Tutorial Workbook**What is this workbook?**A workbook is a collection of problems, accompanied by solutions to them. The explanations focus on the logical steps required to solve a problem; they illustrate the concepts that need to be applied to come up with a solution to the problem, explaining the mathematical steps required. Note that a workbook should not be the primary source of knowledge on the subject matter; it assumes that you've already read a tutorial or a textbook and that you are now seeking to improve your problem-solving skills. You should attempt solving the tasks of the respective kata first, and turn to the workbook only if stuck. While a textbook emphasizes knowledge acquisition, a workbook emphasizes skill acquisition.This workbook describes the solutions to the problems offered in the [Linear Algebra tutorial](./LinearAlgebra.ipynb). Since the tasks are offered as programming problems, the explanations also cover some elements of Python that might be non-obvious for a first-time user.**What you should know for this workbook**1. Complex arithmetic.2. Basic Python knowledge is helpful but not necessary. Click the cell with code below this block of text and press `Ctrl+Enter` (`⌘+Enter` on Mac). **Do not skip this step**.
###Code
# Run this cell using Ctrl+Enter (⌘+Enter on Mac).
from testing import exercise, create_empty_matrix
from typing import List
import math, cmath
Matrix = List[List[complex]]
###Output
_____no_output_____
###Markdown
Exercise 1: Matrix addition.**Inputs:**1. An $n \times m$ matrix $A$, represented as a two-dimensional list.2. An $n \times m$ matrix $B$, represented as a two-dimensional list.**Output:** Return the sum of the matrices $A + B$ - an $n \times m$ matrix, represented as a two-dimensional list. SolutionFollowing the definition given in the tutorial, the sum of two matrices is a matrix of element-wise sums of matrix elements; for example, for $2 \times 2$ matrices$$ A + B =\begin{bmatrix} a & b \\ c & d \end{bmatrix} + \begin{bmatrix} e & f \\ g & h \end{bmatrix} = \begin{bmatrix} a + e & b + f \\ c + g & d + h \end{bmatrix}$$> *Python note:* This tutorial uses a lot of lists and loops, so let's walk through some Python syntax details first. If you're familiar with Python syntax, feel free to skip this note!>> * [`range(x)`](https://docs.python.org/3/tutorial/controlflow.htmlthe-range-function) will create a [list](https://docs.python.org/3/tutorial/introduction.htmllists) of numbers from 0 to `x - 1`, inclusive; for example, `range(3)` will create a list `[0, 1, 2]`. > * [`for`](https://docs.python.org/3/tutorial/controlflow.htmlfor-statements) statement iterates over the items of a sequence; for example, the following code> ```python> for i in range(3):> print(i)> ```>> will print:> ```> 0> 1> 2> ```>> * Matrices are described as two-dimensional lists, > which are represented as lists of lists. For example, the following matrix:>> $$\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix} $$>> is represented as a list of lists `[[1, 2, 3], [4, 5, 6]]`. >> * You can access a specific element of the list using the index of that element in the list (note that indices start with 0): the first element of `array` is `array[0]`, the second - `array[1]`, etc.> * Similarly, you can access an element of a matrix using the row and column indices of that element: `matrix[0][2]` would access the element in the first row and 3rd column.> * `len(array)` returns the number of elements in a list; for example, `len([0, 1, 2])` will return 3.> * Here is an example of creating a matrix from the example above and looping through its elements to print them:>>```Python>matrix = [[1, 2, 3], [4, 5, 6]]>numberOfRows = len(matrix) will return 2>numberOfColumns = len(matrix[0]) will return 3>for row in range(numberOfRows):> for column in range(numberOfColumns):> print(matrix[row][column])>>```>> * Finally, the first exercise offers you a template of a solution that uses a function `create_empty_matrix(n, m)`; this function creates an $n \times m$ matrix filled with 0's as values. This function is not a built-in Python function, this notebook defines it for you to use.
###Code
@exercise
def matrix_add(a : Matrix, b : Matrix) -> Matrix:
# You can get the size of a matrix like this:
rows = len(a)
columns = len(a[0])
# You can use the following function to initialize a rows×columns matrix filled with 0s to store your answer
c = create_empty_matrix(rows, columns)
for i in range(rows):
for j in range(columns):
# You can access elements of a matrix like this:
x = a[i][j]
y = b[i][j]
# You can modify the elements of a matrix like this:
c[i][j] = a[i][j] + b[i][j]
return c
###Output
_____no_output_____
###Markdown
[Return to task 1 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-1:-Matrix-addition.) Exercise 2: Scalar multiplication.**Inputs:**1. A scalar $x$.2. An $n \times m$ matrix $A$.**Output:** Return the $n \times m$ matrix $x \cdot A$. SolutionWe can again follow the definition given in the tutorial: to calculate the product of a number and a matrix, multiply each matrix element by that number. For example, for a $2 \times 2$ matrix:$$x \cdot A = x \cdot \begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} x \cdot a & x \cdot b \\ x \cdot c & x \cdot d \end{bmatrix} $$ > *Python note:* We have to multiply each element in the matrix by the given number $x$. To do so, we will again loop trough each matrix element with 2 `for` loops, do the multiplication and store its result in the corresponding element of the newly created matrix.
###Code
@exercise
def scalar_mult(x : complex, a : Matrix) -> Matrix:
rows = len(a)
columns = len(a[0])
c = create_empty_matrix(rows, columns)
for i in range(rows):
for j in range(columns):
c[i][j] = a[i][j] * x
return c
###Output
_____no_output_____
###Markdown
[Return to task 2 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-2:-Scalar-multiplication.) Exercise 3: Matrix multiplication.**Inputs:**1. An $n \times m$ matrix $A$.2. An $m \times k$ matrix $B$.**Output:** Return the $n \times k$ matrix equal to the matrix product $AB$. SolutionAgain, the tutorial gives us the definition of how multiplication works, and we just need to implement it in code. Here is an example of multiplying a $2 \times 3$ matrix by a $3 \times 2$ matrix:$$ A \cdot B =\begin{bmatrix} a & b & c \\ d & e & f \end{bmatrix} \cdot \begin{bmatrix} h & i \\ j & k \\ l & m \end{bmatrix} = \begin{bmatrix} a \cdot h + b \cdot j + c \cdot l & a \cdot i + b \cdot k + c \cdot m \\ d \cdot h + e \cdot j + f \cdot l & d \cdot i + e \cdot k + f \cdot m \end{bmatrix} $$> *Python note*: In this exercise we'll need an extra nested loop. We will iterate trough the rows and columns of the resulting matrix, similar to the previous exercises, but for each element of the result we'll need to iterate through the row of the left matrix and the column of the right matrix that contribute to that element. In the example above, to get the element in the first row and the first column of the resulting matrix product we'll need to iterate through the first row of the left matrix $\begin{bmatrix} a & b & c \end{bmatrix}$ and the first column of the right matrix $\begin{bmatrix} h \\ j \\ l \end{bmatrix}$ and add up pairwise products of their elements.>> Note that the empty matrix we create for storing the result differs in dimensions from the previous exercises: its number of rows equals the number of rows of the left matrix, and its number of columns equals to the number of columns of the right matrix. >> Python `+=` operator is a convenient shorthand for assignment `variable = variable + increment`.
###Code
@exercise
def matrix_mult(a : Matrix, b : Matrix) -> Matrix:
rows = len(a) # the number of rows of the left matrix
common = len(a[0]) # = len(b) - the common dimension of the matrices
columns = len(b[0]) # the number of columns of the right matrix
ans = create_empty_matrix(rows, columns)
for currentRow in range(rows):
for currentColumn in range(columns):
for k in range(common):
ans[currentRow][currentColumn] += a[currentRow][k] * b[k][currentColumn]
return ans
###Output
_____no_output_____
###Markdown
[Return to task 3 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-3:-Matrix-multiplication.) Exercise 4: Matrix Inversion.**Input:** An invertible $2 \times 2$ matrix $A$.**Output:** Return the inverse of $A$, a $2 \times 2$ matrix $A^{-1}$. SolutionSince we only need to invert a $2 \times 2$ matrix, we will not consider a solution which can be used for arbitrary-sized matrices. We will follow the algorithm described in the [Wikipedia article](https://en.wikipedia.org/wiki/Invertible_matrixInversion_of_2_%C3%97_2_matrices).$$ A = \begin{bmatrix} a & b \\ c & d \end{bmatrix} $$The determinant of the matrix is defined as $$ |A| = a \cdot d - b \cdot c $$$$A^{-1} = \frac{1}{|A|} \cdot \begin{bmatrix} d & -b \\ -c & a \end{bmatrix} = \begin{bmatrix} \frac{d}{|A|} & \frac{-b}{|A|} \\ \frac{-c}{|A|} & \frac{a}{|A|} \end{bmatrix} $$
###Code
@exercise
def matrix_inverse(m : Matrix) -> Matrix:
# Extract each element of the array into a named variable
a = m[0][0]
b = m[0][1]
c = m[1][0]
d = m[1][1]
# Calculate the determinant
determinant = (a * d) - (b * c)
# Create the inverse of the matrix following the formula above
ans = [[d / determinant, -b / determinant], [-c / determinant, a / determinant]]
return ans
###Output
_____no_output_____
###Markdown
[Return to task 4 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-4:-Matrix-Inversion.) Exercise 5: Transpose.**Input:** An $n \times m$ matrix $A$.**Output:** Return an $m \times n$ matrix $A^T$, the transpose of $A$. SolutionAgain, the tutorial gives us the definition of matrix transpose, so we just need to fill the resulting matrix with the elements of the original matrix in the right order. For example, for a $3 \times 2$ matrix$$\begin{bmatrix} a & b \\ c & d \\ e & f\end{bmatrix}^T=\begin{bmatrix} a & c & e \\ b & d & f\end{bmatrix}$$
###Code
@exercise
def transpose(a : Matrix) -> Matrix:
rows = len(a)
columns = len(a[0])
# Note that the resulting matrix dimensions are swapped compared to the original ones
ans = create_empty_matrix(columns, rows)
for i in range(rows):
for j in range(columns):
ans[j][i] = a[i][j]
return ans
###Output
_____no_output_____
###Markdown
[Return to task 5 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-5:-Transpose.) Exercise 6: Conjugate.**Input:** An $n \times m$ matrix $A$.**Output:** Return an $n \times m$ matrix $\overline{A}$, the conjugate of $A$. SolutionsTo get the conjugate of a matrix you take the conjugate of each individual element (check the [Complex Arithmetic tutorial](../ComplexArithmetic/ComplexArithmetic.ipynbComplex-Conjugate) for the definition.> *Python note*: In the complex arithmetic tutorial complex numbers were represented as tuples of real and imaginary components. However, this tutorial relies on Python's built-in [`complex`](https://docs.python.org/3.8/library/functions.htmlcomplex) data type. Python's [cmath library](https://docs.python.org/3.8/library/cmath.html) offers a lot of useful functions that deal with the `complex` data type.>> Here is an example of using the `complex` data type:>> ```Python> Import the cmath library> import cmath>> Create a new complex number 5 + 3i; the two arguments are the real and the imaginary parts of the number> complexNumber = complex(5, 3)>> Print the real and the imaginary parts of the number> print(complexNumber.real) > print(complexNumber.imag)>> Convert the complex number to its polar representation using the cmath library> polar = cmath.polar(complexNumber)> print(polar) This prints: (5.830951894845301, 0.5404195002705842)> ```>> To get the complex conjugate of a matrix, we loop trough each element of the matrix, extract real and imaginary parts of the number and flip the sign for the imaginary part.
###Code
@exercise
def conjugate(a : Matrix) -> Matrix:
rows = len(a)
columns = len(a[0])
ans = create_empty_matrix(rows, columns)
for i in range(rows):
for j in range(columns):
ans[i][j] = complex(a[i][j].real, -a[i][j].imag)
return ans
###Output
_____no_output_____
###Markdown
[Return to task 6 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-6:-Conjugate.) Exercise 7: Adjoint.**Input:** An $n \times m$ matrix $A$.**Output:** Return an $m \times n$ matrix $A^\dagger$, the adjoint of $A$. SolutionTo get the adjoint we perform both **transpose** and **conjugate** operations on the input matrix. We can write out the whole procedure manually, like we have done above, but we can also leverage the code we have written above.> In Python the `def` word defines a function, which could be reused later in the code.
###Code
@exercise
def adjoint(a : Matrix) -> Matrix:
# Call the transpose function with the input matrix a
transp = transpose(a)
# Call the conjugate function with the transposed matrix as input
ans = conjugate(transp)
return ans
###Output
_____no_output_____
###Markdown
[Return to task 7 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-7:-Adjoint.) Exercise 8: Unitary Verification.**Input:** An $n \times n$ matrix $A$.**Output:** Check if the matrix is unitary and return `True` if it is, or `False` if it isn't. SolutionA matrix is unitary if this holds true: $UU^\dagger = U^\dagger U = I$.(As a reminder, an identity matrix is a matrix with 1s on the main diagonal and 0s everywhere else.)Thus, to check if the input matrix is unitary we will need to perform the following steps:1. Calculate the adjoint of the input matrix.2. Multiply it by the input matrix.3. Check if the multiplication result is equal to an identity matrix. > *Python note:* We will leverage the `adjoint` and the `matrix_mult` functions what we have created above.>> When we check each element of $UU^\dagger$ to see whether it equals the respective element of the identity matrix, we'll use Python function `approx` to perform this comparison approximately.
###Code
from pytest import approx
@exercise
def is_matrix_unitary(a : Matrix) -> bool:
n = len(a)
# Calculate the adjoint matrix
adjointA = adjoint(a)
# Multiply the adjoint matrix by the input matrix
multipliedMatrix = matrix_mult(a, adjointA)
# Check whether the multiplication result is (approximately) identity matrix
for i in range(n):
for j in range(n):
# An identity matrix has 1's in all the places where the row index and column index are equal...
if i == j:
if multipliedMatrix[i][j] != approx(1):
return False
# ... and 0's in all the places where the row index and column index are different
else:
if multipliedMatrix[i][j] != approx(0):
return False
return True
###Output
_____no_output_____
###Markdown
[Return to task 8 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-8:-Unitary-Verification.) Exercise 9: Inner product.**Inputs:**1. An $n \times 1$ vector $V$.2. An $n \times 1$ vector $W$.**Output:** Return a complex number - the inner product $\langle V , W \rangle$. SolutionFollowing the definition of the inner product, $\langle V , W \rangle = V^\dagger W$. For example, for vectors of length 2:$$\langle\begin{bmatrix} a \\ b\end{bmatrix},\begin{bmatrix} c \\ d\end{bmatrix}\rangle =\begin{bmatrix} a \\ b\end{bmatrix}^\dagger\begin{bmatrix} c \\ d\end{bmatrix}=\begin{bmatrix} \overline{a} & \overline{b} \end{bmatrix}\begin{bmatrix} c \\ d\end{bmatrix}= \overline{a} \cdot c + \overline{b} \cdot d$$> *Python note:* We will again use previously defined functions to calculate adjoint of a vector and a product of two vectors. > We need to keep in mind that the task asks us to return a complex number and not a $1 \times 1$ matrix which is the result of the multiplication. > Therefore at the end we'll extract the top left element of the `resultMatrix` and return it.
###Code
@exercise
def inner_prod(v : Matrix, w : Matrix) -> complex:
# Calculate the adjoint of the v vector
adjointV = adjoint(v)
# Multiply the adjoint v and w. The result will be a matrix with only one element.
resultMatrix = matrix_mult(adjointV, w)
# To get the actual complex number, we have to take one element from the multiplication result.
return resultMatrix[0][0]
###Output
_____no_output_____
###Markdown
[Return to task 9 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-9:-Inner-product.) Exercise 10: Normalized vectors.**Input:** A non-zero $n \times 1$ vector $V$.**Output:** Return an $n \times 1$ vector $\frac{V}{||V||}$ - the normalized version of the vector $V$. Solution If the vector $V = \begin{bmatrix}a & b & c \end{bmatrix}$, its norm $ ||V|| = \sqrt{|a|^2 + |b|^2 + |c|^2} $,and its normalized version is$ \begin{bmatrix}\frac{a}{||V||} & \frac{b}{||V||} & \frac{c}{||V||} \end{bmatrix} $.Thus, we need to calculate the norm of the vector and to divide each element of the vector by it. We will calculate the norm as a square root of an inner product of the vector with itself.
###Code
@exercise
def normalize(v : Matrix) -> Matrix:
norm = math.sqrt(inner_prod(v, v).real)
n = len(v)
ans = create_empty_matrix(n, 1)
# Divide each element of the vector by the norm
for i in range(n):
ans[i][0] = v[i][0] / norm
return ans
###Output
_____no_output_____
###Markdown
[Return to task 10 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-10:-Normalized-vectors.) Exercise 11: Outer product.**Inputs:**1. An $n \times 1$ vector $V$.2. An $m \times 1$ vector $W$.**Output:** Return an $n \times m$ matrix that represents the outer product of $V$ and $W$. SolutionBy definition, the outer product of $V$ and $W$ is $VW^\dagger$. We can use a similar approach to calculating the inner product, except here we will return the whole multiplication result rather than a specific number.
###Code
@exercise
def outer_prod(v : Matrix, w : Matrix) -> Matrix:
# Calculate adjoint of the W
adjointW = adjoint(w)
# Multiply V by W adjoint
return matrix_mult(v, adjointW)
###Output
_____no_output_____
###Markdown
[Return to task 11 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-11:-Outer-product.) Exercise 12*: Tensor Product.**Inputs:**1. An $n \times m$ matrix $A$.2. A $k \times l$ matrix $B$.**Output:** Return an $(n \cdot k) \times (m \cdot l)$ matrix $A \otimes B$, the tensor product of $A$ and $B$. SolutionWe will follow the definition of the tensor product. For example, tensor product of $2 \times 2$ matrices look as follows:$$\begin{bmatrix} a & b \\ c & d \end{bmatrix} \otimes \begin{bmatrix} e & f \\ g & h \end{bmatrix} =\begin{bmatrix} a \cdot \begin{bmatrix} e & f \\ g & h \end{bmatrix} & b \cdot \begin{bmatrix} e & f \\ g & h \end{bmatrix} \\ c \cdot \begin{bmatrix} e & f \\ g & h \end{bmatrix} & d \cdot \begin{bmatrix} e & f \\ g & h \end{bmatrix}\end{bmatrix}=\begin{bmatrix} a \cdot e & a \cdot f & b \cdot e & b \cdot f \\ a \cdot g & a \cdot h & b \cdot g & b \cdot h \\ c \cdot e & c \cdot f & d \cdot e & d \cdot f \\ c \cdot g & c \cdot h & d \cdot g & d \cdot h\end{bmatrix}$$> *Python note:* We need to calculate pairwise products of all elements of the left matrix and all elements of the right matrix; this means we have to use 4 nested loops.
###Code
@exercise
def tensor_product(a : Matrix, b : Matrix) -> Matrix:
aRows = len(a) # the number of rows for matrix a
aColumns = len(a[0]) # the number of columns for matrix a
bRows = len(b) # the number of rows for matrix b
bColumns = len(b[0]) # the number of columns for matrix b
ans = create_empty_matrix(aRows * bRows, aColumns * bColumns)
# Outer pair of loops, iterating trough the elements of the left matrix
for i in range(aRows):
for j in range(aColumns):
# Inner pair of loops, iterating through the elements of the right matrix
for k in range(bRows):
for l in range(bColumns):
ans[i * bRows + k][j * bColumns + l] = a[i][j] * b[k][l]
return ans
###Output
_____no_output_____
###Markdown
[Return to task 12 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-12*:-Tensor-Product.) Exercise 13: Finding an eigenvalue.**Inputs:**1. A real-valued $n \times n$ matrix $A$.2. An eigenvector $V$ of matrix $A$.**Output:** Return a real number - the eigenvalue of $A$ that is associated with the given eigenvector. SolutionLet's consider what happens when we multiply the matrix by its eigenvector for a $3 \times 3$ example:$$ A \cdot V = \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \end{bmatrix} \cdot \begin{bmatrix}j \\ k \\ l \end{bmatrix} = \begin{bmatrix} m \\ n \\ o \end{bmatrix} = \alpha \begin{bmatrix}j \\ k \\ l \end{bmatrix} = \alpha V$$This means you can find the eigenvalue $\alpha$ from the equations $$ \begin{cases} \alpha j = m \\ \alpha k = n \\ \alpha l = o \end{cases}$$We can use any of them, keeping in mind that we need an equation in which the element of the eigenvector is not zero (otherwise we get an equation $0 \alpha = 0$ which doesn't help us find $\alpha$).Since eigenvectors are defined as non-zero vectors, we are guaranteed that at least one element of the vector will not be zero.
###Code
from pytest import approx
@exercise
def find_eigenvalue(a : Matrix, v : Matrix) -> float:
n = len(v)
multiplied = matrix_mult(a, v)
for i in range(n):
if (v[i][0] != approx(0)):
return multiplied[i][0] / v[i][0]
###Output
_____no_output_____
###Markdown
[Return to task 13 of the Linear Algebra tutorial.](/LinearAlgebra.ipynbExercise-13:-Finding-an-eigenvalue.) Exercise 14**: Finding an eigenvector.**Inputs:**1. A $2 \times 2$ matrix $A$.2. An eigenvalue $x$ of matrix $A$.**Output:** Return any non-zero eigenvector of $A$ that is associated with $x$. SolutionSearching for an eigenvector $V$ associated with a specific eigenvalue $x$ asks for solving the following equation:$$ AV = xV $$or, equivalently, $$(A - xI_n)V = 0$$In other words, for a $2 \times 2$ matrix the following happens: 1. Multiply the identity matrix $I_2$ by the eigenvalue:$$ x \cdot \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} x & 0 \\ 0 & x \end{bmatrix} $$2. Subtract this new matrix from the given matrix $A$:$$ \begin{bmatrix} a & b \\ c & d \end{bmatrix} - \begin{bmatrix} x & 0 \\ 0 & x \end{bmatrix} = \begin{bmatrix} a -x & b \\ c & d -x \end{bmatrix} $$ 3. Find a vector that, when multiplied by the resulting matrix, will produce a 0 vector:$$ \begin{bmatrix} a - x & b \\ c & d - x \end{bmatrix} \cdot \begin{bmatrix} v_0 \\ v_1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$This can be rewritten as the following system of equations:$$\begin{cases}(a - x) \cdot v_0 + b \cdot v_1 = 0 \\c \cdot v_0 + (d - x) \cdot v_1 = 0 \end{cases}$$Each eigenvalue has infinitely many eigenvectors associated with it (since multiplying an eigenvector by a number gives another valid eigenvector). We can limit our search and say that $v_0 = 1$, if possible. In this case, the system of equations becomes$$\begin{cases}(a - x) + b \cdot v_1 = 0 \\c + (d - x) \cdot v_1 = 0 \end{cases}$$and finally we get $v_1 = \frac{a-x}{-b}$.If $b = 0$, we can not perform this division, so we need to reconsider our choices. The first equation becomes $(a-x)v_0 = 0$, which is possible in two cases:* If $a - x \neq 0$, we get $v_0 = 0$ and thus $v_1$ has to be non-zero (we can pick $v_1 = 1$).* If $a - x = 0$, we can not get any information from the first equation and have to fall back to the second one:$c \cdot v_0 + (d - x) \cdot v_1 = 0$. Following a similar logic: * If $c = 0$, we get $(d - x) \cdot v_1 = 0$, so $v_0 = 1, v_1 = 0$. * If $c \neq 0$, we get $v_1 = 1, v_0 = \frac{d-x}{-c}$.
###Code
@exercise
def find_eigenvector(a : Matrix, x : float) -> Matrix:
# Check for possible edge cases
if (a[0][1] == 0):
if (a[0][0] - x == 0):
if (a[1][0] == 0):
return [[1], [0]]
else:
return [[(a[1][1] - x) / (-a[1][0])], [1]]
else:
return [[0], [1]]
v0 = 1
v1 = (a[0][0] - x) / (-a[0][1])
return [[v0], [v1]]
###Output
_____no_output_____
###Markdown
Linear Algebra Tutorial Workbook**What is this workbook?**A workbook is a collection of problems, accompanied by solutions to them. The explanations focus on the logical steps required to solve a problem; they illustrate the concepts that need to be applied to come up with a solution to the problem, explaining the mathematical steps required. Note that a workbook should not be the primary source of knowledge on the subject matter; it assumes that you've already read a tutorial or a textbook and that you are now seeking to improve your problem-solving skills. You should attempt solving the tasks of the respective kata first, and turn to the workbook only if stuck. While a textbook emphasizes knowledge acquisition, a workbook emphasizes skill acquisition.This workbook describes the solutions to the problems offered in the [Linear Algebra tutorial](./LinearAlgebra.ipynb). Since the tasks are offered as programming problems, the explanations also cover some elements of Python that might be non-obvious for a first-time user.**What you should know for this workbook**1. Complex arithmetic.2. Basic Python knowledge is helpful but not necessary. Click the cell with code below this block of text and press `Ctrl+Enter` (`⌘+Enter` on Mac). **Do not skip this step**.
###Code
# Run this cell using Ctrl+Enter (⌘+Enter on Mac).
from testing import exercise, create_empty_matrix
from typing import List
import math, cmath
Matrix = List[List[complex]]
###Output
_____no_output_____
###Markdown
Exercise 1: Matrix addition.**Inputs:**1. An $n \times m$ matrix $A$, represented as a two-dimensional list.2. An $n \times m$ matrix $B$, represented as a two-dimensional list.**Output:** Return the sum of the matrices $A + B$ - an $n \times m$ matrix, represented as a two-dimensional list. SolutionFollowing the definition given in the tutorial, the sum of two matrices is a matrix of element-wise sums of matrix elements; for example, for $2 \times 2$ matrices$$ A + B =\begin{bmatrix} a & b \\ c & d \end{bmatrix} + \begin{bmatrix} e & f \\ g & h \end{bmatrix} = \begin{bmatrix} a + e & b + f \\ c + g & d + h \end{bmatrix}$$> *Python note:* This tutorial uses a lot of lists and loops, so let's walk through some Python syntax details first. If you're familiar with Python syntax, feel free to skip this note!>> * [`range(x)`](https://docs.python.org/3/tutorial/controlflow.htmlthe-range-function) will create a [list](https://docs.python.org/3/tutorial/introduction.htmllists) of numbers from 0 to `x - 1`, inclusive; for example, `range(3)` will create a list `[0, 1, 2]`. > * [`for`](https://docs.python.org/3/tutorial/controlflow.htmlfor-statements) statement iterates over the items of a sequence; for example, the following code> ```python> for i in range(3):> print(i)> ```>> will print:> ```> 0> 1> 2> ```>> * Matrices are described as two-dimensional lists, > which are represented as lists of lists. For example, the following matrix:>> $$\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix} $$>> is represented as a list of lists `[[1, 2, 3], [4, 5, 6]]`. >> * You can access a specific element of the list using the index of that element in the list (note that indices start with 0): the first element of `array` is `array[0]`, the second - `array[1]`, etc.> * Similarly, you can access an element of a matrix using the row and column indices of that element: `matrix[0][2]` would access the element in the first row and 3rd column.> * `len(array)` returns the number of elements in a list; for example, `len([0, 1, 2])` will return 3.> * Here is an example of creating a matrix from the example above and looping through its elements to print them:>>```Python>matrix = [[1, 2, 3], [4, 5, 6]]>numberOfRows = len(matrix) will return 2>numberOfColumns = len(matrix[0]) will return 3>for row in range(numberOfRows):> for column in range(numberOfColumns):> print(matrix[row][column])>>```>> * Finally, the first exercise offers you a template of a solution that uses a function `create_empty_matrix(n, m)`; this function creates an $n \times m$ matrix filled with 0's as values. This function is not a built-in Python function, this notebook defines it for you to use.
###Code
@exercise
def matrix_add(a : Matrix, b : Matrix) -> Matrix:
# You can get the size of a matrix like this:
rows = len(a)
columns = len(a[0])
# You can use the following function to initialize a rows×columns matrix filled with 0s to store your answer
c = create_empty_matrix(rows, columns)
for i in range(rows):
for j in range(columns):
# You can access elements of a matrix like this:
x = a[i][j]
y = b[i][j]
# You can modify the elements of a matrix like this:
c[i][j] = a[i][j] + b[i][j]
return c
###Output
_____no_output_____
###Markdown
[Return to task 1 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-1:-Matrix-addition.) Exercise 2: Scalar multiplication.**Inputs:**1. A scalar $x$.2. An $n \times m$ matrix $A$.**Output:** Return the $n \times m$ matrix $x \cdot A$. SolutionWe can again follow the definition given in the tutorial: to calculate the product of a number and a matrix, multiply each matrix element by that number. For example, for a $2 \times 2$ matrix:$$x \cdot A = x \cdot \begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} x \cdot a & x \cdot b \\ x \cdot c & x \cdot d \end{bmatrix} $$ > *Python note:* We have to multiply each element in the matrix by the given number $x$. To do so, we will again loop trough each matrix element with 2 `for` loops, do the multiplication and store its result in the corresponding element of the newly created matrix.
###Code
@exercise
def scalar_mult(x : complex, a : Matrix) -> Matrix:
rows = len(a)
columns = len(a[0])
c = create_empty_matrix(rows, columns)
for i in range(rows):
for j in range(columns):
c[i][j] = a[i][j] * x
return c
###Output
_____no_output_____
###Markdown
[Return to task 2 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-2:-Scalar-multiplication.) Exercise 3: Matrix multiplication.**Inputs:**1. An $n \times m$ matrix $A$.2. An $m \times k$ matrix $B$.**Output:** Return the $n \times k$ matrix equal to the matrix product $AB$. SolutionAgain, the tutorial gives us the definition of how multiplication works, and we just need to implement it in code. Here is an example of multiplying a $2 \times 3$ matrix by a $3 \times 2$ matrix:$$ A \cdot B =\begin{bmatrix} a & b & c \\ d & e & f \end{bmatrix} \cdot \begin{bmatrix} h & i \\ j & k \\ l & m \end{bmatrix} = \begin{bmatrix} a \cdot h + b \cdot j + c \cdot l & a \cdot i + b \cdot k + c \cdot m \\ d \cdot h + e \cdot j + f \cdot l & d \cdot i + e \cdot k + f \cdot m \end{bmatrix} $$> *Python note*: In this exercise we'll need an extra nested loop. We will iterate trough the rows and columns of the resulting matrix, similar to the previous exercises, but for each element of the result we'll need to iterate through the row of the left matrix and the column of the right matrix that contribute to that element. In the example above, to get the element in the first row and the first column of the resulting matrix product we'll need to iterate through the first row of the left matrix $\begin{bmatrix} a & b & c \end{bmatrix}$ and the first column of the right matrix $\begin{bmatrix} h \\ j \\ l \end{bmatrix}$ and add up pairwise products of their elements.>> Note that the empty matrix we create for storing the result differs in dimensions from the previous exercises: its number of rows equals the number of rows of the left matrix, and its number of columns equals to the number of columns of the right matrix. >> Python `+=` operator is a convenient shorthand for assignment `variable = variable + increment`.
###Code
@exercise
def matrix_mult(a : Matrix, b : Matrix) -> Matrix:
rows = len(a) # the number of rows of the left matrix
common = len(a[0]) # = len(b) - the common dimension of the matrices
columns = len(b[0]) # the number of columns of the right matrix
ans = create_empty_matrix(rows, columns)
for currentRow in range(rows):
for currentColumn in range(columns):
for k in range(common):
ans[currentRow][currentColumn] += a[currentRow][k] * b[k][currentColumn]
return ans
###Output
_____no_output_____
###Markdown
[Return to task 3 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-3:-Matrix-multiplication.) Exercise 4: Matrix Inversion.**Input:** An invertible $2 \times 2$ matrix $A$.**Output:** Return the inverse of $A$, a $2 \times 2$ matrix $A^{-1}$. SolutionSince we only need to invert a $2 \times 2$ matrix, we will not consider a solution which can be used for arbitrary-sized matrices. We will follow the algorithm described in the [Wikipedia article](https://en.wikipedia.org/wiki/Invertible_matrixInversion_of_2_%C3%97_2_matrices).$$ A = \begin{bmatrix} a & b \\ c & d \end{bmatrix} $$The determinant of the matrix is defined as $$ |A| = a \cdot d - b \cdot c $$$$A^{-1} = \frac{1}{|A|} \cdot \begin{bmatrix} d & -b \\ -c & a \end{bmatrix} = \begin{bmatrix} \frac{d}{|A|} & \frac{-b}{|A|} \\ \frac{-c}{|A|} & \frac{a}{|A|} \end{bmatrix} $$
###Code
@exercise
def matrix_inverse(m : Matrix) -> Matrix:
# Extract each element of the array into a named variable
a = m[0][0]
b = m[0][1]
c = m[1][0]
d = m[1][1]
# Calculate the determinant
determinant = (a * d) - (b * c)
# Create the inverse of the matrix following the formula above
ans = [[d / determinant, -b / determinant], [-c / determinant, a / determinant]]
return ans
###Output
_____no_output_____
###Markdown
[Return to task 4 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-4:-Matrix-Inversion.) Exercise 5: Transpose.**Input:** An $n \times m$ matrix $A$.**Output:** Return an $m \times n$ matrix $A^T$, the transpose of $A$. SolutionAgain, the tutorial gives us the definition of matrix transpose, so we just need to fill the resulting matrix with the elements of the original matrix in the right order. For example, for a $3 \times 2$ matrix$$\begin{bmatrix} a & b \\ c & d \\ e & f\end{bmatrix}^T=\begin{bmatrix} a & c & e \\ b & d & f\end{bmatrix}$$
###Code
@exercise
def transpose(a : Matrix) -> Matrix:
rows = len(a)
columns = len(a[0])
# Note that the resulting matrix dimensions are swapped compared to the original ones
ans = create_empty_matrix(columns, rows)
for i in range(rows):
for j in range(columns):
ans[j][i] = a[i][j]
return ans
###Output
_____no_output_____
###Markdown
[Return to task 5 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-5:-Transpose.)

### Exercise 6: Conjugate.

**Input:** An $n \times m$ matrix $A$.

**Output:** Return an $n \times m$ matrix $\overline{A}$, the conjugate of $A$.

### Solution

To get the conjugate of a matrix you take the conjugate of each individual element (check the [Complex Arithmetic tutorial](../ComplexArithmetic/ComplexArithmetic.ipynb#Complex-Conjugate) for the definition).

> *Python note*: In the complex arithmetic tutorial complex numbers were represented as tuples of real and imaginary components. However, this tutorial relies on Python's built-in [`complex`](https://docs.python.org/3.8/library/functions.html#complex) data type. Python's [cmath library](https://docs.python.org/3.8/library/cmath.html) offers a lot of useful functions that deal with the `complex` data type.
>
> Here is an example of using the `complex` data type:
>
> ```Python
> # Import the cmath library
> import cmath
>
> # Create a new complex number 5 + 3i; the two arguments are the real and the imaginary parts of the number
> complexNumber = complex(5, 3)
>
> # Print the real and the imaginary parts of the number
> print(complexNumber.real)
> print(complexNumber.imag)
>
> # Convert the complex number to its polar representation using the cmath library
> polar = cmath.polar(complexNumber)
> print(polar)  # This prints: (5.830951894845301, 0.5404195002705842)
> ```
>
> To get the complex conjugate of a matrix, we loop through each element of the matrix, extract the real and imaginary parts of the number, and flip the sign of the imaginary part.
###Code
@exercise
def conjugate(a : Matrix) -> Matrix:
rows = len(a)
columns = len(a[0])
ans = create_empty_matrix(rows, columns)
for i in range(rows):
for j in range(columns):
ans[i][j] = complex(a[i][j].real, -a[i][j].imag)
return ans
###Output
_____no_output_____
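###Markdown
> *Aside:* Python's `complex` type also has a built-in `conjugate()` method, so the element-wise conjugation can be written without taking the parts apart by hand. A minimal standalone sketch:
###Code
a = [[complex(1, 2), complex(3, -4)]]                         # a 1x2 matrix
conj = [[element.conjugate() for element in row] for row in a]
print(conj)  # [[(1-2j), (3+4j)]]
###Output
_____no_output_____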
###Markdown
[Return to task 6 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-6:-Conjugate.)

### Exercise 7: Adjoint.

**Input:** An $n \times m$ matrix $A$.

**Output:** Return an $m \times n$ matrix $A^\dagger$, the adjoint of $A$.

### Solution

To get the adjoint we perform both **transpose** and **conjugate** operations on the input matrix. We could write out the whole procedure manually, like we have done above, but we can also leverage the code we have already written.

> In Python the `def` keyword defines a function, which can be reused later in the code.
###Code
@exercise
def adjoint(a : Matrix) -> Matrix:
# Call the transpose function with the input matrix a
transp = transpose(a)
# Call the conjugate function with the transposed matrix as input
ans = conjugate(transp)
return ans
###Output
_____no_output_____
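###Markdown
> A standalone sketch of the same composition on a concrete matrix: conjugate each element and swap the row and column indices (the two operations commute, so the order doesn't matter).
###Code
a = [[complex(1, 2), complex(3, 4)]]                   # a 1x2 matrix
adj = [[a[i][j].conjugate() for i in range(len(a))] for j in range(len(a[0]))]
print(adj)  # the 2x1 adjoint: [[(1-2j)], [(3-4j)]]
###Output
_____no_output_____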
###Markdown
[Return to task 7 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-7:-Adjoint.)

### Exercise 8: Unitary Verification.

**Input:** An $n \times n$ matrix $A$.

**Output:** Check if the matrix is unitary and return `True` if it is, or `False` if it isn't.

### Solution

A matrix is unitary if the following holds: $UU^\dagger = U^\dagger U = I$. (As a reminder, an identity matrix is a matrix with 1s on the main diagonal and 0s everywhere else.)

Thus, to check if the input matrix is unitary we will need to perform the following steps:

1. Calculate the adjoint of the input matrix.
2. Multiply it by the input matrix.
3. Check if the multiplication result is equal to the identity matrix.

> *Python note:* We will leverage the `adjoint` and the `matrix_mult` functions that we have created above.
>
> When we check each element of $UU^\dagger$ to see whether it equals the respective element of the identity matrix, we'll use the `approx` function from the `pytest` library to perform this comparison approximately, which accounts for floating-point rounding errors.
###Code
from pytest import approx
@exercise
def is_matrix_unitary(a : Matrix) -> bool:
n = len(a)
# Calculate the adjoint matrix
adjointA = adjoint(a)
# Multiply the adjoint matrix by the input matrix
multipliedMatrix = matrix_mult(a, adjointA)
# Check whether the multiplication result is (approximately) identity matrix
for i in range(n):
for j in range(n):
# An identity matrix has 1's in all the places where the row index and column index are equal...
if i == j:
if multipliedMatrix[i][j] != approx(1):
return False
# ... and 0's in all the places where the row index and column index are different
else:
if multipliedMatrix[i][j] != approx(0):
return False
return True
###Output
_____no_output_____
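###Markdown
> A concrete standalone check: the rotation matrix $\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$ is unitary, and since it is real, its adjoint is just its transpose. Multiplying it by its transpose should give the identity up to floating-point error, which is exactly why the approximate comparison above is needed.
###Code
import math
theta = 0.7
u = [[math.cos(theta), -math.sin(theta)], [math.sin(theta), math.cos(theta)]]
ut = [list(row) for row in zip(*u)]      # transpose = adjoint for a real matrix
prod = [[sum(u[i][k] * ut[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
print(prod)  # approximately [[1.0, 0.0], [0.0, 1.0]]
###Output
_____no_output_____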
###Markdown
[Return to task 8 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-8:-Unitary-Verification.)

### Exercise 9: Inner product.

**Inputs:**

1. An $n \times 1$ vector $V$.
2. An $n \times 1$ vector $W$.

**Output:** Return a complex number - the inner product $\langle V , W \rangle$.

### Solution

Following the definition of the inner product, $\langle V , W \rangle = V^\dagger W$. For example, for vectors of length 2:

$$\langle \begin{bmatrix} a \\ b \end{bmatrix}, \begin{bmatrix} c \\ d \end{bmatrix} \rangle = \begin{bmatrix} a \\ b \end{bmatrix}^\dagger \begin{bmatrix} c \\ d \end{bmatrix} = \begin{bmatrix} \overline{a} & \overline{b} \end{bmatrix} \begin{bmatrix} c \\ d \end{bmatrix} = \overline{a} \cdot c + \overline{b} \cdot d$$

> *Python note:* We will again use the previously defined functions to calculate the adjoint of a vector and the product of two vectors.
> We need to keep in mind that the task asks us to return a complex number, not the $1 \times 1$ matrix which is the result of the multiplication.
> Therefore at the end we'll extract the top left element of the `resultMatrix` and return it.
###Code
@exercise
def inner_prod(v : Matrix, w : Matrix) -> complex:
# Calculate the adjoint of the v vector
adjointV = adjoint(v)
# Multiply the adjoint v and w. The result will be a matrix with only one element.
resultMatrix = matrix_mult(adjointV, w)
# To get the actual complex number, we have to take one element from the multiplication result.
return resultMatrix[0][0]
###Output
_____no_output_____
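###Markdown
> A worked numeric example, standalone: the inner product of $V = \begin{bmatrix} 1 \\ i \end{bmatrix}$ with itself is $\overline{1} \cdot 1 + \overline{i} \cdot i = 1 + 1 = 2$.
###Code
v = [[complex(1, 0)], [complex(0, 1)]]
# Sum of conjugate(v_i) * v_i over the elements of the vector
inner = sum(v[i][0].conjugate() * v[i][0] for i in range(2))
print(inner)  # (2+0j)
###Output
_____no_output_____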
###Markdown
[Return to task 9 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-9:-Inner-product.)

### Exercise 10: Normalized vectors.

**Input:** A non-zero $n \times 1$ vector $V$.

**Output:** Return an $n \times 1$ vector $\frac{V}{||V||}$ - the normalized version of the vector $V$.

### Solution

If the vector $V = \begin{bmatrix} a & b & c \end{bmatrix}$, its norm is $||V|| = \sqrt{|a|^2 + |b|^2 + |c|^2}$, and its normalized version is $\begin{bmatrix} \frac{a}{||V||} & \frac{b}{||V||} & \frac{c}{||V||} \end{bmatrix}$.

Thus, we need to calculate the norm of the vector and to divide each element of the vector by it. We will calculate the norm as the square root of the inner product of the vector with itself.
###Code
@exercise
def normalize(v : Matrix) -> Matrix:
norm = math.sqrt(inner_prod(v, v).real)
n = len(v)
ans = create_empty_matrix(n, 1)
# Divide each element of the vector by the norm
for i in range(n):
ans[i][0] = v[i][0] / norm
return ans
###Output
_____no_output_____
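###Markdown
> A standalone numeric check using the classic 3-4-5 triangle: the vector $\begin{bmatrix} 3 \\ 4 \end{bmatrix}$ has norm 5, so its normalized version is $\begin{bmatrix} 0.6 \\ 0.8 \end{bmatrix}$.
###Code
import math
v = [[3], [4]]
norm = math.sqrt(sum(abs(x[0]) ** 2 for x in v))   # sqrt(9 + 16) = 5.0
print([[x[0] / norm] for x in v])                  # [[0.6], [0.8]]
###Output
_____no_output_____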
###Markdown
[Return to task 10 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-10:-Normalized-vectors.)

### Exercise 11: Outer product.

**Inputs:**

1. An $n \times 1$ vector $V$.
2. An $m \times 1$ vector $W$.

**Output:** Return an $n \times m$ matrix that represents the outer product of $V$ and $W$.

### Solution

By definition, the outer product of $V$ and $W$ is $VW^\dagger$. We can use a similar approach to calculating the inner product, except here we will return the whole multiplication result rather than a specific number.
###Code
@exercise
def outer_prod(v : Matrix, w : Matrix) -> Matrix:
    # Calculate the adjoint of W
    adjointW = adjoint(w)
    # Multiply V by the adjoint of W
return matrix_mult(v, adjointW)
###Output
_____no_output_____
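###Markdown
> A small standalone example: the outer product of two 2-element vectors is a $2 \times 2$ matrix of pairwise products $v_i \cdot \overline{w_j}$, so the element in row $i$ and column $j$ pairs the $i$-th element of $V$ with the conjugated $j$-th element of $W$.
###Code
v = [[1], [2]]
w = [[complex(0, 1)], [complex(3, 0)]]
outer = [[v[i][0] * w[j][0].conjugate() for j in range(2)] for i in range(2)]
print(outer)  # [[-1j, (3-0j)], [-2j, (6-0j)]]
###Output
_____no_output_____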
###Markdown
[Return to task 11 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-11:-Outer-product.)

### Exercise 12*: Tensor Product.

**Inputs:**

1. An $n \times m$ matrix $A$.
2. A $k \times l$ matrix $B$.

**Output:** Return an $(n \cdot k) \times (m \cdot l)$ matrix $A \otimes B$, the tensor product of $A$ and $B$.

### Solution

We will follow the definition of the tensor product. For example, the tensor product of two $2 \times 2$ matrices looks as follows:

$$\begin{bmatrix} a & b \\ c & d \end{bmatrix} \otimes \begin{bmatrix} e & f \\ g & h \end{bmatrix} =
\begin{bmatrix} a \cdot \begin{bmatrix} e & f \\ g & h \end{bmatrix} & b \cdot \begin{bmatrix} e & f \\ g & h \end{bmatrix} \\ c \cdot \begin{bmatrix} e & f \\ g & h \end{bmatrix} & d \cdot \begin{bmatrix} e & f \\ g & h \end{bmatrix} \end{bmatrix} =
\begin{bmatrix} a \cdot e & a \cdot f & b \cdot e & b \cdot f \\ a \cdot g & a \cdot h & b \cdot g & b \cdot h \\ c \cdot e & c \cdot f & d \cdot e & d \cdot f \\ c \cdot g & c \cdot h & d \cdot g & d \cdot h \end{bmatrix}$$

> *Python note:* We need to calculate pairwise products of all elements of the left matrix and all elements of the right matrix; this means we have to use 4 nested loops.
###Code
@exercise
def tensor_product(a : Matrix, b : Matrix) -> Matrix:
aRows = len(a) # the number of rows for matrix a
aColumns = len(a[0]) # the number of columns for matrix a
bRows = len(b) # the number of rows for matrix b
bColumns = len(b[0]) # the number of columns for matrix b
ans = create_empty_matrix(aRows * bRows, aColumns * bColumns)
    # Outer pair of loops, iterating through the elements of the left matrix
for i in range(aRows):
for j in range(aColumns):
# Inner pair of loops, iterating through the elements of the right matrix
for k in range(bRows):
for l in range(bColumns):
ans[i * bRows + k][j * bColumns + l] = a[i][j] * b[k][l]
return ans
###Output
_____no_output_____
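###Markdown
> A standalone check of the index arithmetic `ans[i * bRows + k][j * bColumns + l]` on a tiny case: the tensor product of two $2 \times 1$ vectors is the $4 \times 1$ vector of pairwise products, stacked in the order shown in the formula above.
###Code
a = [[1], [2]]
b = [[3], [4]]
result = [[0] for _ in range(4)]         # a (2*2) x (1*1) result
for i in range(2):
    for k in range(2):
        # Row index i * bRows + k interleaves the two vectors as in the definition
        result[i * 2 + k][0] = a[i][0] * b[k][0]
print(result)  # [[3], [4], [6], [8]]
###Output
_____no_output_____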
###Markdown
[Return to task 12 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-12*:-Tensor-Product.)

### Exercise 13: Finding an eigenvalue.

**Inputs:**

1. A real-valued $n \times n$ matrix $A$.
2. An eigenvector $V$ of matrix $A$.

**Output:** Return a real number - the eigenvalue of $A$ that is associated with the given eigenvector.

### Solution

Let's consider what happens when we multiply the matrix by its eigenvector for a $3 \times 3$ example:

$$ A \cdot V = \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \end{bmatrix} \cdot \begin{bmatrix} j \\ k \\ l \end{bmatrix} = \begin{bmatrix} m \\ n \\ o \end{bmatrix} = \alpha \begin{bmatrix} j \\ k \\ l \end{bmatrix} = \alpha V$$

This means you can find the eigenvalue $\alpha$ from the equations

$$ \begin{cases} \alpha j = m \\ \alpha k = n \\ \alpha l = o \end{cases}$$

We can use any of them, keeping in mind that we need an equation in which the element of the eigenvector is not zero (otherwise we get an equation $0 \alpha = 0$ which doesn't help us find $\alpha$).
Since eigenvectors are defined as non-zero vectors, we are guaranteed that at least one element of the vector will not be zero.
###Code
from pytest import approx
@exercise
def find_eigenvalue(a : Matrix, v : Matrix) -> float:
n = len(v)
multiplied = matrix_mult(a, v)
for i in range(n):
if (v[i][0] != approx(0)):
return multiplied[i][0] / v[i][0]
###Output
_____no_output_____
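###Markdown
> A standalone worked example: for $A = \begin{bmatrix} 2 & 0 \\ 0 & 3 \end{bmatrix}$ and the eigenvector $V = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$, the product is $AV = \begin{bmatrix} 0 \\ 3 \end{bmatrix}$, and dividing by the first non-zero element of $V$ recovers the eigenvalue 3. Note that the first element is skipped, since $0 \alpha = 0$ tells us nothing.
###Code
a = [[2, 0], [0, 3]]
v = [[0], [1]]
av = [[sum(a[i][k] * v[k][0] for k in range(2))] for i in range(2)]   # [[0], [3]]
# Divide by the first element of v that is not zero
eigenvalue = next(av[i][0] / v[i][0] for i in range(2) if v[i][0] != 0)
print(eigenvalue)  # 3.0
###Output
_____no_output_____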
###Markdown
[Return to task 13 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-13:-Finding-an-eigenvalue.)

### Exercise 14**: Finding an eigenvector.

**Inputs:**

1. A $2 \times 2$ matrix $A$.
2. An eigenvalue $x$ of matrix $A$.

**Output:** Return any non-zero eigenvector of $A$ that is associated with $x$.

### Solution

Searching for an eigenvector $V$ associated with a specific eigenvalue $x$ amounts to solving the following equation:

$$ AV = xV $$

or, equivalently,

$$(A - xI_n)V = 0$$

In other words, for a $2 \times 2$ matrix the following happens:

1. Multiply the identity matrix $I_2$ by the eigenvalue:
$$ x \cdot \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} x & 0 \\ 0 & x \end{bmatrix} $$
2. Subtract this new matrix from the given matrix $A$:
$$ \begin{bmatrix} a & b \\ c & d \end{bmatrix} - \begin{bmatrix} x & 0 \\ 0 & x \end{bmatrix} = \begin{bmatrix} a - x & b \\ c & d - x \end{bmatrix} $$
3. Find a vector that, when multiplied by the resulting matrix, will produce the zero vector:
$$ \begin{bmatrix} a - x & b \\ c & d - x \end{bmatrix} \cdot \begin{bmatrix} v_0 \\ v_1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$

This can be rewritten as the following system of equations:

$$\begin{cases} (a - x) \cdot v_0 + b \cdot v_1 = 0 \\ c \cdot v_0 + (d - x) \cdot v_1 = 0 \end{cases}$$

Each eigenvalue has infinitely many eigenvectors associated with it (since multiplying an eigenvector by a number gives another valid eigenvector). We can limit our search and say that $v_0 = 1$, if possible. In this case, the system of equations becomes

$$\begin{cases} (a - x) + b \cdot v_1 = 0 \\ c + (d - x) \cdot v_1 = 0 \end{cases}$$

and finally we get $v_1 = \frac{a - x}{-b}$.

If $b = 0$, we cannot perform this division, so we need to reconsider our choices. The first equation becomes $(a - x) v_0 = 0$, which is possible in two cases:

* If $a - x \neq 0$, we get $v_0 = 0$ and thus $v_1$ has to be non-zero (we can pick $v_1 = 1$).
* If $a - x = 0$, we cannot get any information from the first equation and have to fall back to the second one: $c \cdot v_0 + (d - x) \cdot v_1 = 0$. Following a similar logic:
  * If $c = 0$, we get $(d - x) \cdot v_1 = 0$, so $v_0 = 1, v_1 = 0$.
  * If $c \neq 0$, we get $v_1 = 1, v_0 = \frac{d - x}{-c}$.
###Code
@exercise
def find_eigenvector(a : Matrix, x : float) -> Matrix:
# Check for possible edge cases
if (a[0][1] == 0):
if (a[0][0] - x == 0):
if (a[1][0] == 0):
return [[1], [0]]
else:
return [[(a[1][1] - x) / (-a[1][0])], [1]]
else:
return [[0], [1]]
v0 = 1
v1 = (a[0][0] - x) / (-a[0][1])
return [[v0], [v1]]
###Output
_____no_output_____
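###Markdown
> A standalone check of the main branch of the solution: for $A = \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}$ and eigenvalue $x = 3$, the formula $v_1 = \frac{a - x}{-b}$ gives $V = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$, and multiplying out confirms $AV = 3V$.
###Code
a = [[1, 2], [2, 1]]
x = 3
v = [[1], [(a[0][0] - x) / (-a[0][1])]]                                # [[1], [1.0]]
av = [[sum(a[i][k] * v[k][0] for k in range(2))] for i in range(2)]    # should equal x * v
print(v, av)  # [[1], [1.0]] [[3.0], [3.0]]
###Output
_____no_output_____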
###Markdown
Linear Algebra Tutorial Workbook**What is this workbook?**A workbook is a collection of problems, accompanied by solutions to them. The explanations focus on the logical steps required to solve a problem; they illustrate the concepts that need to be applied to come up with a solution to the problem, explaining the mathematical steps required. Note that a workbook should not be the primary source of knowledge on the subject matter; it assumes that you've already read a tutorial or a textbook and that you are now seeking to improve your problem-solving skills. You should attempt solving the tasks of the respective kata first, and turn to the workbook only if stuck. While a textbook emphasizes knowledge acquisition, a workbook emphasizes skill acquisition.This workbook describes the solutions to the problems offered in the [Linear Algebra tutorial](./LinearAlgebra.ipynb). Since the tasks are offered as programming problems, the explanations also cover some elements of Python that might be non-obvious for a first-time user.**What you should know for this workbook**1. Complex arithmetic.2. Basic Python knowledge is helpful but not necessary. Click the cell with code below this block of text and press `Ctrl+Enter` (`⌘+Enter` on Mac). **Do not skip this step**.
###Code
# Run this cell using Ctrl+Enter (⌘+Enter on Mac).
from testing import exercise, create_empty_matrix
from typing import List
import math, cmath
Matrix = List[List[complex]]
###Output
_____no_output_____
###Markdown
Exercise 1: Matrix addition.**Inputs:**1. An $n \times m$ matrix $A$, represented as a two-dimensional list.2. An $n \times m$ matrix $B$, represented as a two-dimensional list.**Output:** Return the sum of the matrices $A + B$ - an $n \times m$ matrix, represented as a two-dimensional list. SolutionFollowing the definition given in the tutorial, the sum of two matrices is a matrix of element-wise sums of matrix elements; for example, for $2 \times 2$ matrices$$ A + B =\begin{bmatrix} a & b \\ c & d \end{bmatrix} + \begin{bmatrix} e & f \\ g & h \end{bmatrix} = \begin{bmatrix} a + e & b + f \\ c + g & d + h \end{bmatrix}$$> *Python note:* This tutorial uses a lot of lists and loops, so let's walk through some Python syntax details first. If you're familiar with Python syntax, feel free to skip this note!>> * [`range(x)`](https://docs.python.org/3/tutorial/controlflow.htmlthe-range-function) will create a [list](https://docs.python.org/3/tutorial/introduction.htmllists) of numbers from 0 to `x - 1`, inclusive; for example, `range(3)` will create a list `[0, 1, 2]`. > * [`for`](https://docs.python.org/3/tutorial/controlflow.htmlfor-statements) statement iterates over the items of a sequence; for example, the following code> ```python> for i in range(3):> print(i)> ```>> will print:> ```> 0> 1> 2> ```>> * Matrices are described as two-dimensional lists, > which are represented as lists of lists. For example, the following matrix:>> $$\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix} $$>> is represented as a list of lists `[[1, 2, 3], [4, 5, 6]]`. >> * You can access a specific element of the list using the index of that element in the list (note that indices start with 0): the first element of `array` is `array[0]`, the second - `array[1]`, etc.> * Similarly, you can access an element of a matrix using the row and column indices of that element: `matrix[0][2]` would access the element in the first row and 3rd column.> * `len(array)` returns the number of elements in a list; for example, `len([0, 1, 2])` will return 3.> * Here is an example of creating a matrix from the example above and looping through its elements to print them:>>```Python>matrix = [[1, 2, 3], [4, 5, 6]]>numberOfRows = len(matrix) will return 2>numberOfColumns = len(matrix[0]) will return 3>for row in range(numberOfRows):> for column in range(numberOfColumns):> print(matrix[row][column])>>```>> * Finally, the first exercise offers you a template of a solution that uses a function `create_empty_matrix(n, m)`; this function creates an $n \times m$ matrix filled with 0's as values. This function is not a built-in Python function, this notebook defines it for you to use.
###Code
@exercise
def matrix_add(a : Matrix, b : Matrix) -> Matrix:
# You can get the size of a matrix like this:
rows = len(a)
columns = len(a[0])
# You can use the following function to initialize a rows×columns matrix filled with 0s to store your answer
c = create_empty_matrix(rows, columns)
for i in range(rows):
for j in range(columns):
# You can access elements of a matrix like this:
x = a[i][j]
y = b[i][j]
# You can modify the elements of a matrix like this:
c[i][j] = a[i][j] + b[i][j]
return c
###Output
_____no_output_____
###Markdown
[Return to task 1 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-1:-Matrix-addition.) Exercise 2: Scalar multiplication.**Inputs:**1. A scalar $x$.2. An $n \times m$ matrix $A$.**Output:** Return the $n \times m$ matrix $x \cdot A$. SolutionWe can again follow the definition given in the tutorial: to calculate the product of a number and a matrix, multiply each matrix element by that number. For example, for a $2 \times 2$ matrix:$$x \cdot A = x \cdot \begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} x \cdot a & x \cdot b \\ x \cdot c & x \cdot d \end{bmatrix} $$ > *Python note:* We have to multiply each element in the matrix by the given number $x$. To do so, we will again loop trough each matrix element with 2 `for` loops, do the multiplication and store its result in the corresponding element of the newly created matrix.
###Code
@exercise
def scalar_mult(x : complex, a : Matrix) -> Matrix:
rows = len(a)
columns = len(a[0])
c = create_empty_matrix(rows, columns)
for i in range(rows):
for j in range(columns):
c[i][j] = a[i][j] * x
return c
###Output
_____no_output_____
###Markdown
[Return to task 2 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-2:-Scalar-multiplication.) Exercise 3: Matrix multiplication.**Inputs:**1. An $n \times m$ matrix $A$.2. An $m \times k$ matrix $B$.**Output:** Return the $n \times k$ matrix equal to the matrix product $AB$. SolutionAgain, the tutorial gives us the definition of how multiplication works, and we just need to implement it in code. Here is an example of multiplying a $2 \times 3$ matrix by a $3 \times 2$ matrix:$$ A \cdot B =\begin{bmatrix} a & b & c \\ d & e & f \end{bmatrix} \cdot \begin{bmatrix} h & i \\ j & k \\ l & m \end{bmatrix} = \begin{bmatrix} a \cdot h + b \cdot j + c \cdot l & a \cdot i + b \cdot k + c \cdot m \\ d \cdot h + e \cdot j + f \cdot l & d \cdot i + e \cdot k + f \cdot m \end{bmatrix} $$> *Python note*: In this exercise we'll need an extra nested loop. We will iterate trough the rows and columns of the resulting matrix, similar to the previous exercises, but for each element of the result we'll need to iterate through the row of the left matrix and the column of the right matrix that contribute to that element. In the example above, to get the element in the first row and the first column of the resulting matrix product we'll need to iterate through the first row of the left matrix $\begin{bmatrix} a & b & c \end{bmatrix}$ and the first column of the right matrix $\begin{bmatrix} h \\ j \\ l \end{bmatrix}$ and add up pairwise products of their elements.>> Note that the empty matrix we create for storing the result differs in dimensions from the previous exercises: its number of rows equals the number of rows of the left matrix, and its number of columns equals to the number of columns of the right matrix. >> Python `+=` operator is a convenient shorthand for assignment `variable = variable + increment`.
###Code
@exercise
def matrix_mult(a : Matrix, b : Matrix) -> Matrix:
rows = len(a) # the number of rows of the left matrix
common = len(a[0]) # = len(b) - the common dimension of the matrices
columns = len(b[0]) # the number of columns of the right matrix
ans = create_empty_matrix(rows, columns)
for currentRow in range(rows):
for currentColumn in range(columns):
for k in range(common):
ans[currentRow][currentColumn] += a[currentRow][k] * b[k][currentColumn]
return ans
###Output
_____no_output_____
###Markdown
[Return to task 3 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-3:-Matrix-multiplication.) Exercise 4: Matrix Inversion.**Input:** An invertible $2 \times 2$ matrix $A$.**Output:** Return the inverse of $A$, a $2 \times 2$ matrix $A^{-1}$. SolutionSince we only need to invert a $2 \times 2$ matrix, we will not consider a solution which can be used for arbitrary-sized matrices. We will follow the algorithm described in the [Wikipedia article](https://en.wikipedia.org/wiki/Invertible_matrixInversion_of_2_%C3%97_2_matrices).$$ A = \begin{bmatrix} a & b \\ c & d \end{bmatrix} $$The determinant of the matrix is defined as $$ |A| = a \cdot d - b \cdot c $$$$A^{-1} = \frac{1}{|A|} \cdot \begin{bmatrix} d & -b \\ -c & a \end{bmatrix} = \begin{bmatrix} \frac{d}{|A|} & \frac{-b}{|A|} \\ \frac{-c}{|A|} & \frac{a}{|A|} \end{bmatrix} $$
###Code
@exercise
def matrix_inverse(m : Matrix) -> Matrix:
# Extract each element of the array into a named variable
a = m[0][0]
b = m[0][1]
c = m[1][0]
d = m[1][1]
# Calculate the determinant
determinant = (a * d) - (b * c)
# Create the inverse of the matrix following the formula above
ans = [[d / determinant, -b / determinant], [-c / determinant, a / determinant]]
return ans
###Output
_____no_output_____
###Markdown
[Return to task 4 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-4:-Matrix-Inversion.) Exercise 5: Transpose.**Input:** An $n \times m$ matrix $A$.**Output:** Return an $m \times n$ matrix $A^T$, the transpose of $A$. SolutionAgain, the tutorial gives us the definition of matrix transpose, so we just need to fill the resulting matrix with the elements of the original matrix in the right order. For example, for a $3 \times 2$ matrix$$\begin{bmatrix} a & b \\ c & d \\ e & f\end{bmatrix}^T=\begin{bmatrix} a & c & e \\ b & d & f\end{bmatrix}$$
###Code
@exercise
def transpose(a : Matrix) -> Matrix:
rows = len(a)
columns = len(a[0])
# Note that the resulting matrix dimensions are swapped compared to the original ones
ans = create_empty_matrix(columns, rows)
for i in range(rows):
for j in range(columns):
ans[j][i] = a[i][j]
return ans
###Output
_____no_output_____
###Markdown
[Return to task 5 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-5:-Transpose.) Exercise 6: Conjugate.**Input:** An $n \times m$ matrix $A$.**Output:** Return an $n \times m$ matrix $\overline{A}$, the conjugate of $A$. SolutionsTo get the conjugate of a matrix you take the conjugate of each individual element (check the [Complex Arithmetic tutorial](../ComplexArithmetic/ComplexArithmetic.ipynbComplex-Conjugate) for the definition.> *Python note*: In the complex arithmetic tutorial complex numbers were represented as tuples of real and imaginary components. However, this tutorial relies on Python's built-in [`complex`](https://docs.python.org/3.8/library/functions.htmlcomplex) data type. Python's [cmath library](https://docs.python.org/3.8/library/cmath.html) offers a lot of useful functions that deal with the `complex` data type.>> Here is an example of using the `complex` data type:>> ```Python> Import the cmath library> import cmath>> Create a new complex number 5 + 3i; the two arguments are the real and the imaginary parts of the number> complexNumber = complex(5, 3)>> Print the real and the imaginary parts of the number> print(complexNumber.real) > print(complexNumber.imag)>> Convert the complex number to its polar representation using the cmath library> polar = cmath.polar(complexNumber)> print(polar) This prints: (5.830951894845301, 0.5404195002705842)> ```>> To get the complex conjugate of a matrix, we loop trough each element of the matrix, extract real and imaginary parts of the number and flip the sign for the imaginary part.
###Code
@exercise
def conjugate(a : Matrix) -> Matrix:
rows = len(a)
columns = len(a[0])
ans = create_empty_matrix(rows, columns)
for i in range(rows):
for j in range(columns):
ans[i][j] = complex(a[i][j].real, -a[i][j].imag)
return ans
###Output
_____no_output_____
###Markdown
[Return to task 6 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-6:-Conjugate.) Exercise 7: Adjoint.**Input:** An $n \times m$ matrix $A$.**Output:** Return an $m \times n$ matrix $A^\dagger$, the adjoint of $A$. SolutionTo get the adjoint we perform both **transpose** and **conjugate** operations on the input matrix. We can write out the whole procedure manually, like we have done above, but we can also leverage the code we have written above.> In Python the `def` word defines a function, which could be reused later in the code.
###Code
@exercise
def adjoint(a : Matrix) -> Matrix:
# Call the transpose function with the input matrix a
transp = transpose(a)
# Call the conjugate function with the transposed matrix as input
ans = conjugate(transp)
return ans
###Output
_____no_output_____
###Markdown
[Return to task 7 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-7:-Adjoint.) Exercise 8: Unitary Verification.**Input:** An $n \times n$ matrix $A$.**Output:** Check if the matrix is unitary and return `True` if it is, or `False` if it isn't. SolutionA matrix is unitary if this holds true: $UU^\dagger = U^\dagger U = I$.(As a reminder, an identity matrix is a matrix with 1s on the main diagonal and 0s everywhere else.)Thus, to check if the input matrix is unitary we will need to perform the following steps:1. Calculate the adjoint of the input matrix.2. Multiply it by the input matrix.3. Check if the multiplication result is equal to an identity matrix. > *Python note:* We will leverage the `adjoint` and the `matrix_mult` functions what we have created above.>> When we check each element of $UU^\dagger$ to see whether it equals the respective element of the identity matrix, we'll use Python function `approx` to perform this comparison approximately.
###Code
from pytest import approx
@exercise
def is_matrix_unitary(a : Matrix) -> bool:
n = len(a)
# Calculate the adjoint matrix
adjointA = adjoint(a)
# Multiply the adjoint matrix by the input matrix
multipliedMatrix = matrix_mult(a, adjointA)
# Check whether the multiplication result is (approximately) identity matrix
for i in range(n):
for j in range(n):
# An identity matrix has 1's in all the places where the row index and column index are equal...
if i == j:
if multipliedMatrix[i][j] != approx(1):
return False
# ... and 0's in all the places where the row index and column index are different
else:
if multipliedMatrix[i][j] != approx(0):
return False
return True
###Output
_____no_output_____
###Markdown
[Return to task 8 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-8:-Unitary-Verification.) Exercise 9: Inner product.**Inputs:**1. An $n \times 1$ vector $V$.2. An $n \times 1$ vector $W$.**Output:** Return a complex number - the inner product $\langle V , W \rangle$. SolutionFollowing the definition of the inner product, $\langle V , W \rangle = V^\dagger W$. For example, for vectors of length 2:$$\langle\begin{bmatrix} a \\ b\end{bmatrix},\begin{bmatrix} c \\ d\end{bmatrix}\rangle =\begin{bmatrix} a \\ b\end{bmatrix}^\dagger\begin{bmatrix} c \\ d\end{bmatrix}=\begin{bmatrix} \overline{a} & \overline{b} \end{bmatrix}\begin{bmatrix} c \\ d\end{bmatrix}= \overline{a} \cdot c + \overline{b} \cdot d$$> *Python note:* We will again use previously defined functions to calculate adjoint of a vector and a product of two vectors. > We need to keep in mind that the task asks us to return a complex number and not a $1 \times 1$ matrix which is the result of the multiplication. > Therefore at the end we'll extract the top left element of the `resultMatrix` and return it.
###Code
@exercise
def inner_prod(v : Matrix, w : Matrix) -> complex:
# Calculate the adjoint of the v vector
adjointV = adjoint(v)
# Multiply the adjoint v and w. The result will be a matrix with only one element.
resultMatrix = matrix_mult(adjointV, w)
# To get the actual complex number, we have to take one element from the multiplication result.
return resultMatrix[0][0]
###Output
_____no_output_____
###Markdown
[Return to task 9 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-9:-Inner-product.) Exercise 10: Normalized vectors.**Input:** A non-zero $n \times 1$ vector $V$.**Output:** Return an $n \times 1$ vector $\frac{V}{||V||}$ - the normalized version of the vector $V$. Solution If the vector $V = \begin{bmatrix}a & b & c \end{bmatrix}$, its norm $ ||V|| = \sqrt{|a|^2 + |b|^2 + |c|^2} $,and its normalized version is$ \begin{bmatrix}\frac{a}{||V||} & \frac{b}{||V||} & \frac{c}{||V||} \end{bmatrix} $.Thus, we need to calculate the norm of the vector and to divide each element of the vector by it. We will calculate the norm as a square root of an inner product of the vector with itself.
###Code
@exercise
def normalize(v : Matrix) -> Matrix:
norm = math.sqrt(inner_prod(v, v).real)
n = len(v)
ans = create_empty_matrix(n, 1)
# Divide each element of the vector by the norm
for i in range(n):
ans[i][0] = v[i][0] / norm
return ans
###Output
_____no_output_____
###Markdown
[Return to task 10 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-10:-Normalized-vectors.) Exercise 11: Outer product.**Inputs:**1. An $n \times 1$ vector $V$.2. An $m \times 1$ vector $W$.**Output:** Return an $n \times m$ matrix that represents the outer product of $V$ and $W$. SolutionBy definition, the outer product of $V$ and $W$ is $VW^\dagger$. We can use a similar approach to calculating the inner product, except here we will return the whole multiplication result rather than a specific number.
###Code
@exercise
def outer_prod(v : Matrix, w : Matrix) -> Matrix:
# Calculate adjoint of the W
adjointW = adjoint(w)
# Multiply V by W adjoint
return matrix_mult(v, adjointW)
###Output
_____no_output_____
###Markdown
[Return to task 11 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-11:-Outer-product.) Exercise 12*: Tensor Product.**Inputs:**1. An $n \times m$ matrix $A$.2. A $k \times l$ matrix $B$.**Output:** Return an $(n \cdot k) \times (m \cdot l)$ matrix $A \otimes B$, the tensor product of $A$ and $B$. SolutionWe will follow the definition of the tensor product. For example, tensor product of $2 \times 2$ matrices look as follows:$$\begin{bmatrix} a & b \\ c & d \end{bmatrix} \otimes \begin{bmatrix} e & f \\ g & h \end{bmatrix} =\begin{bmatrix} a \cdot \begin{bmatrix} e & f \\ g & h \end{bmatrix} & b \cdot \begin{bmatrix} e & f \\ g & h \end{bmatrix} \\ c \cdot \begin{bmatrix} e & f \\ g & h \end{bmatrix} & d \cdot \begin{bmatrix} e & f \\ g & h \end{bmatrix}\end{bmatrix}=\begin{bmatrix} a \cdot e & a \cdot f & b \cdot e & b \cdot f \\ a \cdot g & a \cdot h & b \cdot g & b \cdot h \\ c \cdot e & c \cdot f & d \cdot e & d \cdot f \\ c \cdot g & c \cdot h & d \cdot g & d \cdot h\end{bmatrix}$$> *Python note:* We need to calculate pairwise products of all elements of the left matrix and all elements of the right matrix; this means we have to use 4 nested loops.
###Code
@exercise
def tensor_product(a : Matrix, b : Matrix) -> Matrix:
aRows = len(a) # the number of rows for matrix a
aColumns = len(a[0]) # the number of columns for matrix a
bRows = len(b) # the number of rows for matrix b
bColumns = len(b[0]) # the number of columns for matrix b
ans = create_empty_matrix(aRows * bRows, aColumns * bColumns)
    # Outer pair of loops, iterating through the elements of the left matrix
for i in range(aRows):
for j in range(aColumns):
# Inner pair of loops, iterating through the elements of the right matrix
for k in range(bRows):
for l in range(bColumns):
ans[i * bRows + k][j * bColumns + l] = a[i][j] * b[k][l]
return ans
###Output
_____no_output_____
###Markdown
[Return to task 12 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-12*:-Tensor-Product.) Exercise 13: Finding an eigenvalue.**Inputs:**1. A real-valued $n \times n$ matrix $A$.2. An eigenvector $V$ of matrix $A$.**Output:** Return a real number - the eigenvalue of $A$ that is associated with the given eigenvector. SolutionLet's consider what happens when we multiply the matrix by its eigenvector for a $3 \times 3$ example:$$ A \cdot V = \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \end{bmatrix} \cdot \begin{bmatrix}j \\ k \\ l \end{bmatrix} = \begin{bmatrix} m \\ n \\ o \end{bmatrix} = \alpha \begin{bmatrix}j \\ k \\ l \end{bmatrix} = \alpha V$$This means you can find the eigenvalue $\alpha$ from the equations $$ \begin{cases} \alpha j = m \\ \alpha k = n \\ \alpha l = o \end{cases}$$We can use any of them, keeping in mind that we need an equation in which the element of the eigenvector is not zero (otherwise we get the equation $0 \cdot \alpha = 0$, which doesn't help us find $\alpha$). Since eigenvectors are defined as non-zero vectors, we are guaranteed that at least one element of the vector will not be zero.
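> *Note:* A quick numeric check (not from the tutorial itself): for $A = \begin{bmatrix} 2 & 0 \\ 0 & 3 \end{bmatrix}$ and $V = \begin{bmatrix} 0 \\ 5 \end{bmatrix}$ we get $AV = \begin{bmatrix} 0 \\ 15 \end{bmatrix}$; the first equation, $0 \cdot \alpha = 0$, is skipped, and the second one gives $\alpha = 15 / 5 = 3$.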
###Code
from pytest import approx
@exercise
def find_eigenvalue(a : Matrix, v : Matrix) -> float:
n = len(v)
multiplied = matrix_mult(a, v)
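    # Use the first element of the eigenvector that is not (approximately) zero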
for i in range(n):
if (v[i][0] != approx(0)):
return multiplied[i][0] / v[i][0]
###Output
_____no_output_____
###Markdown
[Return to task 13 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-13:-Finding-an-eigenvalue.) Exercise 14**: Finding an eigenvector.**Inputs:**1. A $2 \times 2$ matrix $A$.2. An eigenvalue $x$ of matrix $A$.**Output:** Return any non-zero eigenvector of $A$ that is associated with $x$. SolutionSearching for an eigenvector $V$ associated with a specific eigenvalue $x$ requires solving the following equation:$$ AV = xV $$or, equivalently, $$(A - xI_n)V = 0$$In other words, for a $2 \times 2$ matrix the following happens: 1. Multiply the identity matrix $I_2$ by the eigenvalue:$$ x \cdot \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} x & 0 \\ 0 & x \end{bmatrix} $$2. Subtract this new matrix from the given matrix $A$:$$ \begin{bmatrix} a & b \\ c & d \end{bmatrix} - \begin{bmatrix} x & 0 \\ 0 & x \end{bmatrix} = \begin{bmatrix} a - x & b \\ c & d - x \end{bmatrix} $$ 3. Find a vector that, when multiplied by the resulting matrix, will produce a 0 vector:$$ \begin{bmatrix} a - x & b \\ c & d - x \end{bmatrix} \cdot \begin{bmatrix} v_0 \\ v_1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$This can be rewritten as the following system of equations:$$\begin{cases}(a - x) \cdot v_0 + b \cdot v_1 = 0 \\c \cdot v_0 + (d - x) \cdot v_1 = 0 \end{cases}$$Each eigenvalue has infinitely many eigenvectors associated with it (since multiplying an eigenvector by a number gives another valid eigenvector). We can limit our search and say that $v_0 = 1$, if possible. In this case, the system of equations becomes$$\begin{cases}(a - x) + b \cdot v_1 = 0 \\c + (d - x) \cdot v_1 = 0 \end{cases}$$and finally we get $v_1 = \frac{a-x}{-b}$. If $b = 0$, we cannot perform this division, so we need to reconsider our choices. The first equation becomes $(a - x) \cdot v_0 = 0$, which is possible in two cases:* If $a - x \neq 0$, we get $v_0 = 0$, and thus $v_1$ has to be non-zero (we can pick $v_1 = 1$).* If $a - x = 0$, we cannot get any information from the first equation and have to fall back to the second one: $c \cdot v_0 + (d - x) \cdot v_1 = 0$. Following a similar logic: * If $c = 0$, the second equation becomes $(d - x) \cdot v_1 = 0$, which is satisfied by $v_0 = 1, v_1 = 0$ regardless of the value of $d - x$. * If $c \neq 0$, we get $v_1 = 1, v_0 = \frac{d-x}{-c}$.
###Code
@exercise
def find_eigenvector(a : Matrix, x : float) -> Matrix:
# Check for possible edge cases
if (a[0][1] == 0):
if (a[0][0] - x == 0):
if (a[1][0] == 0):
return [[1], [0]]
else:
return [[(a[1][1] - x) / (-a[1][0])], [1]]
else:
return [[0], [1]]
v0 = 1
v1 = (a[0][0] - x) / (-a[0][1])
return [[v0], [v1]]
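    # A hypothetical sanity check (not part of the exercise harness): for
    # a = [[1, 2], [2, 1]] and x = 3 the general case gives v0 = 1 and
    # v1 = (1 - 3) / (-2) = 1; indeed, A times [[1], [1]] is [[3], [3]] = 3 * [[1], [1]].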
###Output
_____no_output_____
###Markdown
Linear Algebra Tutorial Workbook**What is this workbook?**A workbook is a collection of problems, accompanied by solutions to them. The explanations focus on the logical steps required to solve a problem; they illustrate the concepts that need to be applied to come up with a solution to the problem, explaining the mathematical steps required. Note that a workbook should not be the primary source of knowledge on the subject matter; it assumes that you've already read a tutorial or a textbook and that you are now seeking to improve your problem-solving skills. You should attempt solving the tasks of the respective kata first, and turn to the workbook only if stuck. While a textbook emphasizes knowledge acquisition, a workbook emphasizes skill acquisition.This workbook describes the solutions to the problems offered in the [Linear Algebra tutorial](./LinearAlgebra.ipynb). Since the tasks are offered as programming problems, the explanations also cover some elements of Python that might be non-obvious for a first-time user.**What you should know for this workbook**1. Complex arithmetic.2. Basic Python knowledge is helpful but not necessary. Click the cell with code below this block of text and press `Ctrl+Enter` (`⌘+Enter` on Mac). **Do not skip this step**.
###Code
# Run this cell using Ctrl+Enter (⌘+Enter on Mac).
from testing import exercise, create_empty_matrix
from typing import List
import math, cmath
Matrix = List[List[complex]]
###Output
_____no_output_____
###Markdown
Exercise 1: Matrix addition.**Inputs:**1. An $n \times m$ matrix $A$, represented as a two-dimensional list.2. An $n \times m$ matrix $B$, represented as a two-dimensional list.**Output:** Return the sum of the matrices $A + B$ - an $n \times m$ matrix, represented as a two-dimensional list. SolutionFollowing the definition given in the tutorial, the sum of two matrices is a matrix of element-wise sums of matrix elements; for example, for $2 \times 2$ matrices$$ A + B =\begin{bmatrix} a & b \\ c & d \end{bmatrix} + \begin{bmatrix} e & f \\ g & h \end{bmatrix} = \begin{bmatrix} a + e & b + f \\ c + g & d + h \end{bmatrix}$$> *Python note:* This tutorial uses a lot of lists and loops, so let's walk through some Python syntax details first. If you're familiar with Python syntax, feel free to skip this note!>> * [`range(x)`](https://docs.python.org/3/tutorial/controlflow.htmlthe-range-function) will create a [list](https://docs.python.org/3/tutorial/introduction.htmllists) of numbers from 0 to `x - 1`, inclusive; for example, `range(3)` will create a list `[0, 1, 2]`. > * [`for`](https://docs.python.org/3/tutorial/controlflow.htmlfor-statements) statement iterates over the items of a sequence; for example, the following code> ```python> for i in range(3):> print(i)> ```>> will print:> ```> 0> 1> 2> ```>> * Matrices are described as two-dimensional lists, > which are represented as lists of lists. For example, the following matrix:>> $$\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix} $$>> is represented as a list of lists `[[1, 2, 3], [4, 5, 6]]`. >> * You can access a specific element of the list using the index of that element in the list (note that indices start with 0): the first element of `array` is `array[0]`, the second - `array[1]`, etc.> * Similarly, you can access an element of a matrix using the row and column indices of that element: `matrix[0][2]` would access the element in the first row and 3rd column.> * `len(array)` returns the number of elements in a list; for example, `len([0, 1, 2])` will return 3.> * Here is an example of creating a matrix from the example above and looping through its elements to print them:>>```Python>matrix = [[1, 2, 3], [4, 5, 6]]>numberOfRows = len(matrix) will return 2>numberOfColumns = len(matrix[0]) will return 3>for row in range(numberOfRows):> for column in range(numberOfColumns):> print(matrix[row][column])>>```>> * Finally, the first exercise offers you a template of a solution that uses a function `create_empty_matrix(n, m)`; this function creates an $n \times m$ matrix filled with 0's as values. This function is not a built-in Python function, this notebook defines it for you to use.
###Code
@exercise
def matrix_add(a : Matrix, b : Matrix) -> Matrix:
# You can get the size of a matrix like this:
rows = len(a)
columns = len(a[0])
# You can use the following function to initialize a rows×columns matrix filled with 0s to store your answer
c = create_empty_matrix(rows, columns)
for i in range(rows):
for j in range(columns):
# You can access elements of a matrix like this:
x = a[i][j]
y = b[i][j]
# You can modify the elements of a matrix like this:
c[i][j] = a[i][j] + b[i][j]
return c
###Output
_____no_output_____
###Markdown
[Return to task 1 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-1:-Matrix-addition.) Exercise 2: Scalar multiplication.**Inputs:**1. A scalar $x$.2. An $n \times m$ matrix $A$.**Output:** Return the $n \times m$ matrix $x \cdot A$. SolutionWe can again follow the definition given in the tutorial: to calculate the product of a number and a matrix, multiply each matrix element by that number. For example, for a $2 \times 2$ matrix:$$x \cdot A = x \cdot \begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} x \cdot a & x \cdot b \\ x \cdot c & x \cdot d \end{bmatrix} $$ > *Python note:* We have to multiply each element in the matrix by the given number $x$. To do so, we will again loop trough each matrix element with 2 `for` loops, do the multiplication and store its result in the corresponding element of the newly created matrix.
###Code
@exercise
def scalar_mult(x : complex, a : Matrix) -> Matrix:
rows = len(a)
columns = len(a[0])
c = create_empty_matrix(rows, columns)
for i in range(rows):
for j in range(columns):
c[i][j] = a[i][j] * x
return c
###Output
_____no_output_____
###Markdown
[Return to task 2 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-2:-Scalar-multiplication.) Exercise 3: Matrix multiplication.**Inputs:**1. An $n \times m$ matrix $A$.2. An $m \times k$ matrix $B$.**Output:** Return the $n \times k$ matrix equal to the matrix product $AB$. SolutionAgain, the tutorial gives us the definition of how multiplication works, and we just need to implement it in code. Here is an example of multiplying a $2 \times 3$ matrix by a $3 \times 2$ matrix:$$ A \cdot B =\begin{bmatrix} a & b & c \\ d & e & f \end{bmatrix} \cdot \begin{bmatrix} h & i \\ j & k \\ l & m \end{bmatrix} = \begin{bmatrix} a \cdot h + b \cdot j + c \cdot l & a \cdot i + b \cdot k + c \cdot m \\ d \cdot h + e \cdot j + f \cdot l & d \cdot i + e \cdot k + f \cdot m \end{bmatrix} $$> *Python note*: In this exercise we'll need an extra nested loop. We will iterate trough the rows and columns of the resulting matrix, similar to the previous exercises, but for each element of the result we'll need to iterate through the row of the left matrix and the column of the right matrix that contribute to that element. In the example above, to get the element in the first row and the first column of the resulting matrix product we'll need to iterate through the first row of the left matrix $\begin{bmatrix} a & b & c \end{bmatrix}$ and the first column of the right matrix $\begin{bmatrix} h \\ j \\ l \end{bmatrix}$ and add up pairwise products of their elements.>> Note that the empty matrix we create for storing the result differs in dimensions from the previous exercises: its number of rows equals the number of rows of the left matrix, and its number of columns equals to the number of columns of the right matrix. >> Python `+=` operator is a convenient shorthand for assignment `variable = variable + increment`.
###Code
@exercise
def matrix_mult(a : Matrix, b : Matrix) -> Matrix:
rows = len(a) # the number of rows of the left matrix
common = len(a[0]) # = len(b) - the common dimension of the matrices
columns = len(b[0]) # the number of columns of the right matrix
ans = create_empty_matrix(rows, columns)
for currentRow in range(rows):
for currentColumn in range(columns):
for k in range(common):
ans[currentRow][currentColumn] += a[currentRow][k] * b[k][currentColumn]
return ans
###Output
_____no_output_____
###Markdown
[Return to task 3 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-3:-Matrix-multiplication.) Exercise 4: Matrix Inversion.**Input:** An invertible $2 \times 2$ matrix $A$.**Output:** Return the inverse of $A$, a $2 \times 2$ matrix $A^{-1}$. SolutionSince we only need to invert a $2 \times 2$ matrix, we will not consider a solution which can be used for arbitrary-sized matrices. We will follow the algorithm described in the [Wikipedia article](https://en.wikipedia.org/wiki/Invertible_matrixInversion_of_2_%C3%97_2_matrices).$$ A = \begin{bmatrix} a & b \\ c & d \end{bmatrix} $$The determinant of the matrix is defined as $$ |A| = a \cdot d - b \cdot c $$$$A^{-1} = \frac{1}{|A|} \cdot \begin{bmatrix} d & -b \\ -c & a \end{bmatrix} = \begin{bmatrix} \frac{d}{|A|} & \frac{-b}{|A|} \\ \frac{-c}{|A|} & \frac{a}{|A|} \end{bmatrix} $$
###Code
@exercise
def matrix_inverse(m : Matrix) -> Matrix:
# Extract each element of the array into a named variable
a = m[0][0]
b = m[0][1]
c = m[1][0]
d = m[1][1]
# Calculate the determinant
determinant = (a * d) - (b * c)
# Create the inverse of the matrix following the formula above
ans = [[d / determinant, -b / determinant], [-c / determinant, a / determinant]]
return ans
###Output
_____no_output_____
###Markdown
[Return to task 4 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-4:-Matrix-Inversion.) Exercise 5: Transpose.**Input:** An $n \times m$ matrix $A$.**Output:** Return an $m \times n$ matrix $A^T$, the transpose of $A$. SolutionAgain, the tutorial gives us the definition of matrix transpose, so we just need to fill the resulting matrix with the elements of the original matrix in the right order. For example, for a $3 \times 2$ matrix$$\begin{bmatrix} a & b \\ c & d \\ e & f\end{bmatrix}^T=\begin{bmatrix} a & c & e \\ b & d & f\end{bmatrix}$$
###Code
@exercise
def transpose(a : Matrix) -> Matrix:
rows = len(a)
columns = len(a[0])
# Note that the resulting matrix dimensions are swapped compared to the original ones
ans = create_empty_matrix(columns, rows)
for i in range(rows):
for j in range(columns):
ans[j][i] = a[i][j]
return ans
###Output
_____no_output_____
###Markdown
[Return to task 5 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-5:-Transpose.) Exercise 6: Conjugate.**Input:** An $n \times m$ matrix $A$.**Output:** Return an $n \times m$ matrix $\overline{A}$, the conjugate of $A$. SolutionsTo get the conjugate of a matrix you take the conjugate of each individual element (check the [Complex Arithmetic tutorial](../ComplexArithmetic/ComplexArithmetic.ipynbComplex-Conjugate) for the definition.> *Python note*: In the complex arithmetic tutorial complex numbers were represented as tuples of real and imaginary components. However, this tutorial relies on Python's built-in [`complex`](https://docs.python.org/3.8/library/functions.htmlcomplex) data type. Python's [cmath library](https://docs.python.org/3.8/library/cmath.html) offers a lot of useful functions that deal with the `complex` data type.>> Here is an example of using the `complex` data type:>> ```Python> Import the cmath library> import cmath>> Create a new complex number 5 + 3i; the two arguments are the real and the imaginary parts of the number> complexNumber = complex(5, 3)>> Print the real and the imaginary parts of the number> print(complexNumber.real) > print(complexNumber.imag)>> Convert the complex number to its polar representation using the cmath library> polar = cmath.polar(complexNumber)> print(polar) This prints: (5.830951894845301, 0.5404195002705842)> ```>> To get the complex conjugate of a matrix, we loop trough each element of the matrix, extract real and imaginary parts of the number and flip the sign for the imaginary part.
###Code
@exercise
def conjugate(a : Matrix) -> Matrix:
rows = len(a)
columns = len(a[0])
ans = create_empty_matrix(rows, columns)
for i in range(rows):
for j in range(columns):
ans[i][j] = complex(a[i][j].real, -a[i][j].imag)
return ans
###Output
_____no_output_____
###Markdown
[Return to task 6 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-6:-Conjugate.) Exercise 7: Adjoint.**Input:** An $n \times m$ matrix $A$.**Output:** Return an $m \times n$ matrix $A^\dagger$, the adjoint of $A$. SolutionTo get the adjoint we perform both **transpose** and **conjugate** operations on the input matrix. We can write out the whole procedure manually, like we have done above, but we can also leverage the code we have written above.> In Python the `def` word defines a function, which could be reused later in the code.
###Code
@exercise
def adjoint(a : Matrix) -> Matrix:
# Call the transpose function with the input matrix a
transp = transpose(a)
# Call the conjugate function with the transposed matrix as input
ans = conjugate(transp)
return ans
###Output
_____no_output_____
###Markdown
[Return to task 7 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-7:-Adjoint.) Exercise 8: Unitary Verification.**Input:** An $n \times n$ matrix $A$.**Output:** Check if the matrix is unitary and return `True` if it is, or `False` if it isn't. SolutionA matrix is unitary if this holds true: $UU^\dagger = U^\dagger U = I$.(As a reminder, an identity matrix is a matrix with 1s on the main diagonal and 0s everywhere else.)Thus, to check if the input matrix is unitary we will need to perform the following steps:1. Calculate the adjoint of the input matrix.2. Multiply it by the input matrix.3. Check if the multiplication result is equal to an identity matrix. > *Python note:* We will leverage the `adjoint` and the `matrix_mult` functions what we have created above.>> When we check each element of $UU^\dagger$ to see whether it equals the respective element of the identity matrix, we'll use Python function `approx` to perform this comparison approximately.
###Code
from pytest import approx
@exercise
def is_matrix_unitary(a : Matrix) -> bool:
n = len(a)
# Calculate the adjoint matrix
adjointA = adjoint(a)
# Multiply the adjoint matrix by the input matrix
multipliedMatrix = matrix_mult(a, adjointA)
# Check whether the multiplication result is (approximately) identity matrix
for i in range(n):
for j in range(n):
# An identity matrix has 1's in all the places where the row index and column index are equal...
if i == j:
if multipliedMatrix[i][j] != approx(1):
return False
# ... and 0's in all the places where the row index and column index are different
else:
if multipliedMatrix[i][j] != approx(0):
return False
return True
###Output
_____no_output_____
###Markdown
[Return to task 8 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-8:-Unitary-Verification.) Exercise 9: Inner product.**Inputs:**1. An $n \times 1$ vector $V$.2. An $n \times 1$ vector $W$.**Output:** Return a complex number - the inner product $\langle V , W \rangle$. SolutionFollowing the definition of the inner product, $\langle V , W \rangle = V^\dagger W$. For example, for vectors of length 2:$$\langle\begin{bmatrix} a \\ b\end{bmatrix},\begin{bmatrix} c \\ d\end{bmatrix}\rangle =\begin{bmatrix} a \\ b\end{bmatrix}^\dagger\begin{bmatrix} c \\ d\end{bmatrix}=\begin{bmatrix} \overline{a} & \overline{b} \end{bmatrix}\begin{bmatrix} c \\ d\end{bmatrix}= \overline{a} \cdot c + \overline{b} \cdot d$$> *Python note:* We will again use previously defined functions to calculate adjoint of a vector and a product of two vectors. > We need to keep in mind that the task asks us to return a complex number and not a $1 \times 1$ matrix which is the result of the multiplication. > Therefore at the end we'll extract the top left element of the `resultMatrix` and return it.
###Code
@exercise
def inner_prod(v : Matrix, w : Matrix) -> complex:
# Calculate the adjoint of the v vector
adjointV = adjoint(v)
# Multiply the adjoint v and w. The result will be a matrix with only one element.
resultMatrix = matrix_mult(adjointV, w)
# To get the actual complex number, we have to take one element from the multiplication result.
return resultMatrix[0][0]
###Output
_____no_output_____
###Markdown
[Return to task 9 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-9:-Inner-product.) Exercise 10: Normalized vectors.**Input:** A non-zero $n \times 1$ vector $V$.**Output:** Return an $n \times 1$ vector $\frac{V}{||V||}$ - the normalized version of the vector $V$. Solution If the vector $V = \begin{bmatrix}a & b & c \end{bmatrix}$, its norm $ ||V|| = \sqrt{|a|^2 + |b|^2 + |c|^2} $,and its normalized version is$ \begin{bmatrix}\frac{a}{||V||} & \frac{b}{||V||} & \frac{c}{||V||} \end{bmatrix} $.Thus, we need to calculate the norm of the vector and to divide each element of the vector by it. We will calculate the norm as a square root of an inner product of the vector with itself.
###Code
@exercise
def normalize(v : Matrix) -> Matrix:
norm = math.sqrt(inner_prod(v, v).real)
n = len(v)
ans = create_empty_matrix(n, 1)
# Divide each element of the vector by the norm
for i in range(n):
ans[i][0] = v[i][0] / norm
return ans
###Output
_____no_output_____
###Markdown
[Return to task 10 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-10:-Normalized-vectors.) Exercise 11: Outer product.**Inputs:**1. An $n \times 1$ vector $V$.2. An $m \times 1$ vector $W$.**Output:** Return an $n \times m$ matrix that represents the outer product of $V$ and $W$. SolutionBy definition, the outer product of $V$ and $W$ is $VW^\dagger$. We can use a similar approach to calculating the inner product, except here we will return the whole multiplication result rather than a specific number.
###Code
@exercise
def outer_prod(v : Matrix, w : Matrix) -> Matrix:
# Calculate adjoint of the W
adjointW = adjoint(w)
# Multiply V by W adjoint
return matrix_mult(v, adjointW)
###Output
_____no_output_____
###Markdown
[Return to task 11 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-11:-Outer-product.) Exercise 12*: Tensor Product.**Inputs:**1. An $n \times m$ matrix $A$.2. A $k \times l$ matrix $B$.**Output:** Return an $(n \cdot k) \times (m \cdot l)$ matrix $A \otimes B$, the tensor product of $A$ and $B$. SolutionWe will follow the definition of the tensor product. For example, tensor product of $2 \times 2$ matrices look as follows:$$\begin{bmatrix} a & b \\ c & d \end{bmatrix} \otimes \begin{bmatrix} e & f \\ g & h \end{bmatrix} =\begin{bmatrix} a \cdot \begin{bmatrix} e & f \\ g & h \end{bmatrix} & b \cdot \begin{bmatrix} e & f \\ g & h \end{bmatrix} \\ c \cdot \begin{bmatrix} e & f \\ g & h \end{bmatrix} & d \cdot \begin{bmatrix} e & f \\ g & h \end{bmatrix}\end{bmatrix}=\begin{bmatrix} a \cdot e & a \cdot f & b \cdot e & b \cdot f \\ a \cdot g & a \cdot h & b \cdot g & b \cdot h \\ c \cdot e & c \cdot f & d \cdot e & d \cdot f \\ c \cdot g & c \cdot h & d \cdot g & d \cdot h\end{bmatrix}$$> *Python note:* We need to calculate pairwise products of all elements of the left matrix and all elements of the right matrix; this means we have to use 4 nested loops.
###Code
@exercise
def tensor_product(a : Matrix, b : Matrix) -> Matrix:
aRows = len(a) # the number of rows for matrix a
aColumns = len(a[0]) # the number of columns for matrix a
bRows = len(b) # the number of rows for matrix b
bColumns = len(b[0]) # the number of columns for matrix b
ans = create_empty_matrix(aRows * bRows, aColumns * bColumns)
# Outer pair of loops, iterating trough the elements of the left matrix
for i in range(aRows):
for j in range(aColumns):
# Inner pair of loops, iterating through the elements of the right matrix
for k in range(bRows):
for l in range(bColumns):
ans[i * bRows + k][j * bColumns + l] = a[i][j] * b[k][l]
return ans
###Output
_____no_output_____
###Markdown
[Return to task 12 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-12*:-Tensor-Product.) Exercise 13: Finding an eigenvalue.**Inputs:**1. A real-valued $n \times n$ matrix $A$.2. An eigenvector $V$ of matrix $A$.**Output:** Return a real number - the eigenvalue of $A$ that is associated with the given eigenvector. SolutionLet's consider what happens when we multiply the matrix by its eigenvector for a $3 \times 3$ example:$$ A \cdot V = \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \end{bmatrix} \cdot \begin{bmatrix}j \\ k \\ l \end{bmatrix} = \begin{bmatrix} m \\ n \\ o \end{bmatrix} = \alpha \begin{bmatrix}j \\ k \\ l \end{bmatrix} = \alpha V$$This means you can find the eigenvalue $\alpha$ from the equations $$ \begin{cases} \alpha j = m \\ \alpha k = n \\ \alpha l = o \end{cases}$$We can use any of them, keeping in mind that we need an equation in which the element of the eigenvector is not zero (otherwise we get an equation $0 \alpha = 0$ which doesn't help us find $\alpha$).Since eigenvectors are defined as non-zero vectors, we are guaranteed that at least one element of the vector will not be zero.
###Code
from pytest import approx
@exercise
def find_eigenvalue(a : Matrix, v : Matrix) -> float:
n = len(v)
multiplied = matrix_mult(a, v)
for i in range(n):
if (v[i][0] != approx(0)):
return multiplied[i][0] / v[i][0]
###Output
_____no_output_____
###Markdown
[Return to task 13 of the Linear Algebra tutorial.](/LinearAlgebra.ipynbExercise-13:-Finding-an-eigenvalue.) Exercise 14**: Finding an eigenvector.**Inputs:**1. A $2 \times 2$ matrix $A$.2. An eigenvalue $x$ of matrix $A$.**Output:** Return any non-zero eigenvector of $A$ that is associated with $x$. SolutionSearching for an eigenvector $V$ associated with a specific eigenvalue $x$ asks for solving the following equation:$$ AV = xV $$or, equivalently, $$(A - xI_n)V = 0$$In other words, for a $2 \times 2$ matrix the following happens: 1. Multiply the identity matrix $I_2$ by the eigenvalue:$$ x \cdot \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} x & 0 \\ 0 & x \end{bmatrix} $$2. Subtract this new matrix from the given matrix $A$:$$ \begin{bmatrix} a & b \\ c & d \end{bmatrix} - \begin{bmatrix} x & 0 \\ 0 & x \end{bmatrix} = \begin{bmatrix} a -x & b \\ c & d -x \end{bmatrix} $$ 3. Find a vector that, when multiplied by the resulting matrix, will produce a 0 vector:$$ \begin{bmatrix} a - x & b \\ c & d - x \end{bmatrix} \cdot \begin{bmatrix} v_0 \\ v_1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$This can be rewritten as the following system of equations:$$\begin{cases}(a - x) \cdot v_0 + b \cdot v_1 = 0 \\c \cdot v_0 + (d - x) \cdot v_1 = 0 \end{cases}$$Each eigenvalue has infinitely many eigenvectors associated with it (since multiplying an eigenvector by a number gives another valid eigenvector). We can limit our search and say that $v_0 = 1$, if possible. In this case, the system of equations becomes$$\begin{cases}(a - x) + b \cdot v_1 = 0 \\c + (d - x) \cdot v_1 = 0 \end{cases}$$and finally we get $v_1 = \frac{a-x}{-b}$.If $b = 0$, we can not perform this division, so we need to reconsider our choices. The first equation becomes $(a-x)v_0 = 0$, which is possible in two cases:* If $a - x \neq 0$, we get $v_0 = 0$ and thus $v_1$ has to be non-zero (we can pick $v_1 = 1$).* If $a - x = 0$, we can not get any information from the first equation and have to fall back to the second one:$c \cdot v_0 + (d - x) \cdot v_1 = 0$. Following a similar logic: * If $c = 0$, we get $(d - x) \cdot v_1 = 0$, so $v_0 = 1, v_1 = 0$. * If $c \neq 0$, we get $v_1 = 1, v_0 = \frac{d-x}{-c}$.
###Code
@exercise
def find_eigenvector(a : Matrix, x : float) -> Matrix:
# Check for possible edge cases
if (a[0][1] == 0):
if (a[0][0] - x == 0):
if (a[1][0] == 0):
return [[1], [0]]
else:
return [[(a[1][1] - x) / (-a[1][0])], [1]]
else:
return [[0], [1]]
v0 = 1
v1 = (a[0][0] - x) / (-a[0][1])
return [[v0], [v1]]
###Output
_____no_output_____
###Markdown
Linear Algebra Tutorial Workbook**What is this workbook?**A workbook is a collection of problems, accompanied by solutions to them. The explanations focus on the logical steps required to solve a problem; they illustrate the concepts that need to be applied to come up with a solution to the problem, explaining the mathematical steps required. Note that a workbook should not be the primary source of knowledge on the subject matter; it assumes that you've already read a tutorial or a textbook and that you are now seeking to improve your problem-solving skills. You should attempt solving the tasks of the respective kata first, and turn to the workbook only if stuck. While a textbook emphasizes knowledge acquisition, a workbook emphasizes skill acquisition.This workbook describes the solutions to the problems offered in the [Linear Algebra tutorial](./LinearAlgebra.ipynb). Since the tasks are offered as programming problems, the explanations also cover some elements of Python that might be non-obvious for a first-time user.**What you should know for this workbook**1. Complex arithmetic.2. Basic Python knowledge is helpful but not necessary. Click the cell with code below this block of text and press `Ctrl+Enter` (`⌘+Enter` on Mac). **Do not skip this step**.
###Code
# Run this cell using Ctrl+Enter (⌘+Enter on Mac).
from testing import exercise, create_empty_matrix
from typing import List
import math, cmath
Matrix = List[List[complex]]
###Output
Success!
###Markdown
Exercise 1: Matrix addition.**Inputs:**1. An $n \times m$ matrix $A$, represented as a two-dimensional list.2. An $n \times m$ matrix $B$, represented as a two-dimensional list.**Output:** Return the sum of the matrices $A + B$ - an $n \times m$ matrix, represented as a two-dimensional list. SolutionFollowing the definition given in the tutorial, the sum of two matrices is a matrix of element-wise sums of matrix elements; for example, for $2 \times 2$ matrices$$ A + B =\begin{bmatrix} a & b \\ c & d \end{bmatrix} + \begin{bmatrix} e & f \\ g & h \end{bmatrix} = \begin{bmatrix} a + e & b + f \\ c + g & d + h \end{bmatrix}$$> *Python note:* This tutorial uses a lot of lists and loops, so let's walk through some Python syntax details first. If you're familiar with Python syntax, feel free to skip this note!>> * [`range(x)`](https://docs.python.org/3/tutorial/controlflow.htmlthe-range-function) will create a [list](https://docs.python.org/3/tutorial/introduction.htmllists) of numbers from 0 to `x - 1`, inclusive; for example, `range(3)` will create a list `[0, 1, 2]`. > * [`for`](https://docs.python.org/3/tutorial/controlflow.htmlfor-statements) statement iterates over the items of a sequence; for example, the following code> ```python> for i in range(3):> print(i)> ```>> will print:> ```> 0> 1> 2> ```>> * Matrices are described as two-dimensional lists, > which are represented as lists of lists. For example, the following matrix:>> $$\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix} $$>> is represented as a list of lists `[[1, 2, 3], [4, 5, 6]]`. >> * You can access a specific element of the list using the index of that element in the list (note that indices start with 0): the first element of `array` is `array[0]`, the second - `array[1]`, etc.> * Similarly, you can access an element of a matrix using the row and column indices of that element: `matrix[0][2]` would access the element in the first row and 3rd column.> * `len(array)` returns the number of elements in a list; for example, `len([0, 1, 2])` will return 3.> * Here is an example of creating a matrix from the example above and looping through its elements to print them:>>```Python>matrix = [[1, 2, 3], [4, 5, 6]]>numberOfRows = len(matrix) will return 2>numberOfColumns = len(matrix[0]) will return 3>for row in range(numberOfRows):> for column in range(numberOfColumns):> print(matrix[row][column])>>```>> * Finally, the first exercise offers you a template of a solution that uses a function `create_empty_matrix(n, m)`; this function creates an $n \times m$ matrix filled with 0's as values. This function is not a built-in Python function, this notebook defines it for you to use.
###Code
@exercise
def matrix_add(a : Matrix, b : Matrix) -> Matrix:
# You can get the size of a matrix like this:
rows = len(a)
columns = len(a[0])
# You can use the following function to initialize a rows×columns matrix filled with 0s to store your answer
c = create_empty_matrix(rows, columns)
for i in range(rows):
for j in range(columns):
# You can access elements of a matrix like this:
x = a[i][j]
y = b[i][j]
# You can modify the elements of a matrix like this:
c[i][j] = a[i][j] + b[i][j]
return c
###Output
_____no_output_____
###Markdown
[Return to task 1 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-1:-Matrix-addition.) Exercise 2: Scalar multiplication.**Inputs:**1. A scalar $x$.2. An $n \times m$ matrix $A$.**Output:** Return the $n \times m$ matrix $x \cdot A$. SolutionWe can again follow the definition given in the tutorial: to calculate the product of a number and a matrix, multiply each matrix element by that number. For example, for a $2 \times 2$ matrix:$$x \cdot A = x \cdot \begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} x \cdot a & x \cdot b \\ x \cdot c & x \cdot d \end{bmatrix} $$ > *Python note:* We have to multiply each element in the matrix by the given number $x$. To do so, we will again loop trough each matrix element with 2 `for` loops, do the multiplication and store its result in the corresponding element of the newly created matrix.
###Code
@exercise
def scalar_mult(x : complex, a : Matrix) -> Matrix:
rows = len(a)
columns = len(a[0])
c = create_empty_matrix(rows, columns)
for i in range(rows):
for j in range(columns):
c[i][j] = a[i][j] * x
return c
###Output
_____no_output_____
###Markdown
[Return to task 2 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-2:-Scalar-multiplication.) Exercise 3: Matrix multiplication.**Inputs:**1. An $n \times m$ matrix $A$.2. An $m \times k$ matrix $B$.**Output:** Return the $n \times k$ matrix equal to the matrix product $AB$. SolutionAgain, the tutorial gives us the definition of how multiplication works, and we just need to implement it in code. Here is an example of multiplying a $2 \times 3$ matrix by a $3 \times 2$ matrix:$$ A \cdot B =\begin{bmatrix} a & b & c \\ d & e & f \end{bmatrix} \cdot \begin{bmatrix} h & i \\ j & k \\ l & m \end{bmatrix} = \begin{bmatrix} a \cdot h + b \cdot j + c \cdot l & a \cdot i + b \cdot k + c \cdot m \\ d \cdot h + e \cdot j + f \cdot l & d \cdot i + e \cdot k + f \cdot m \end{bmatrix} $$> *Python note*: In this exercise we'll need an extra nested loop. We will iterate trough the rows and columns of the resulting matrix, similar to the previous exercises, but for each element of the result we'll need to iterate through the row of the left matrix and the column of the right matrix that contribute to that element. In the example above, to get the element in the first row and the first column of the resulting matrix product we'll need to iterate through the first row of the left matrix $\begin{bmatrix} a & b & c \end{bmatrix}$ and the first column of the right matrix $\begin{bmatrix} h \\ j \\ l \end{bmatrix}$ and add up pairwise products of their elements.>> Note that the empty matrix we create for storing the result differs in dimensions from the previous exercises: its number of rows equals the number of rows of the left matrix, and its number of columns equals to the number of columns of the right matrix. >> Python `+=` operator is a convenient shorthand for assignment `variable = variable + increment`.
###Code
@exercise
def matrix_mult(a : Matrix, b : Matrix) -> Matrix:
rows = len(a) # the number of rows of the left matrix
common = len(a[0]) # = len(b) - the common dimension of the matrices
columns = len(b[0]) # the number of columns of the right matrix
ans = create_empty_matrix(rows, columns)
for currentRow in range(rows):
for currentColumn in range(columns):
for k in range(common):
ans[currentRow][currentColumn] += a[currentRow][k] * b[k][currentColumn]
return ans
###Output
Success!
###Markdown
[Return to task 3 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-3:-Matrix-multiplication.) Exercise 4: Matrix Inversion.**Input:** An invertible $2 \times 2$ matrix $A$.**Output:** Return the inverse of $A$, a $2 \times 2$ matrix $A^{-1}$. SolutionSince we only need to invert a $2 \times 2$ matrix, we will not consider a solution which can be used for arbitrary-sized matrices. We will follow the algorithm described in the [Wikipedia article](https://en.wikipedia.org/wiki/Invertible_matrixInversion_of_2_%C3%97_2_matrices).$$ A = \begin{bmatrix} a & b \\ c & d \end{bmatrix} $$The determinant of the matrix is defined as $$ |A| = a \cdot d - b \cdot c $$$$A^{-1} = \frac{1}{|A|} \cdot \begin{bmatrix} d & -b \\ -c & a \end{bmatrix} = \begin{bmatrix} \frac{d}{|A|} & \frac{-b}{|A|} \\ \frac{-c}{|A|} & \frac{a}{|A|} \end{bmatrix} $$
###Code
@exercise
def matrix_inverse(m : Matrix) -> Matrix:
# Extract each element of the array into a named variable
a = m[0][0]
b = m[0][1]
c = m[1][0]
d = m[1][1]
# Calculate the determinant
determinant = (a * d) - (b * c)
# Create the inverse of the matrix following the formula above
ans = [[d / determinant, -b / determinant], [-c / determinant, a / determinant]]
return ans
###Output
Success!
###Markdown
[Return to task 4 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-4:-Matrix-Inversion.) Exercise 5: Transpose.**Input:** An $n \times m$ matrix $A$.**Output:** Return an $m \times n$ matrix $A^T$, the transpose of $A$. SolutionAgain, the tutorial gives us the definition of matrix transpose, so we just need to fill the resulting matrix with the elements of the original matrix in the right order. For example, for a $3 \times 2$ matrix$$\begin{bmatrix} a & b \\ c & d \\ e & f\end{bmatrix}^T=\begin{bmatrix} a & c & e \\ b & d & f\end{bmatrix}$$
###Code
@exercise
def transpose(a : Matrix) -> Matrix:
rows = len(a)
columns = len(a[0])
# Note that the resulting matrix dimensions are swapped compared to the original ones
ans = create_empty_matrix(columns, rows)
for i in range(rows):
for j in range(columns):
ans[j][i] = a[i][j]
return ans
###Output
Success!
###Markdown
[Return to task 5 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-5:-Transpose.) Exercise 6: Conjugate.**Input:** An $n \times m$ matrix $A$.**Output:** Return an $n \times m$ matrix $\overline{A}$, the conjugate of $A$. SolutionsTo get the conjugate of a matrix you take the conjugate of each individual element (check the [Complex Arithmetic tutorial](../ComplexArithmetic/ComplexArithmetic.ipynbComplex-Conjugate) for the definition.> *Python note*: In the complex arithmetic tutorial complex numbers were represented as tuples of real and imaginary components. However, this tutorial relies on Python's built-in [`complex`](https://docs.python.org/3.8/library/functions.htmlcomplex) data type. Python's [cmath library](https://docs.python.org/3.8/library/cmath.html) offers a lot of useful functions that deal with the `complex` data type.>> Here is an example of using the `complex` data type:>> ```Python> Import the cmath library> import cmath>> Create a new complex number 5 + 3i; the two arguments are the real and the imaginary parts of the number> complexNumber = complex(5, 3)>> Print the real and the imaginary parts of the number> print(complexNumber.real) > print(complexNumber.imag)>> Convert the complex number to its polar representation using the cmath library> polar = cmath.polar(complexNumber)> print(polar) This prints: (5.830951894845301, 0.5404195002705842)> ```>> To get the complex conjugate of a matrix, we loop trough each element of the matrix, extract real and imaginary parts of the number and flip the sign for the imaginary part.
###Code
@exercise
def conjugate(a : Matrix) -> Matrix:
rows = len(a)
columns = len(a[0])
ans = create_empty_matrix(rows, columns)
for i in range(rows):
for j in range(columns):
ans[i][j] = complex(a[i][j].real, -a[i][j].imag)
return ans
###Output
Success!
###Markdown
[Return to task 6 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-6:-Conjugate.) Exercise 7: Adjoint.**Input:** An $n \times m$ matrix $A$.**Output:** Return an $m \times n$ matrix $A^\dagger$, the adjoint of $A$. SolutionTo get the adjoint we perform both **transpose** and **conjugate** operations on the input matrix. We can write out the whole procedure manually, like we have done above, but we can also leverage the code we have written above.> In Python the `def` word defines a function, which could be reused later in the code.
###Code
@exercise
def adjoint(a : Matrix) -> Matrix:
# Call the transpose function with the input matrix a
transp = transpose(a)
# Call the conjugate function with the transposed matrix as input
ans = conjugate(transp)
return ans
###Output
Success!
###Markdown
[Return to task 7 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-7:-Adjoint.) Exercise 8: Unitary Verification.**Input:** An $n \times n$ matrix $A$.**Output:** Check if the matrix is unitary and return `True` if it is, or `False` if it isn't. SolutionA matrix is unitary if this holds true: $UU^\dagger = U^\dagger U = I$.(As a reminder, an identity matrix is a matrix with 1s on the main diagonal and 0s everywhere else.)Thus, to check if the input matrix is unitary we will need to perform the following steps:1. Calculate the adjoint of the input matrix.2. Multiply it by the input matrix.3. Check if the multiplication result is equal to an identity matrix. > *Python note:* We will leverage the `adjoint` and the `matrix_mult` functions what we have created above.>> When we check each element of $UU^\dagger$ to see whether it equals the respective element of the identity matrix, we'll use Python function `approx` to perform this comparison approximately.
###Code
from pytest import approx
@exercise
def is_matrix_unitary(a : Matrix) -> bool:
n = len(a)
# Calculate the adjoint matrix
adjointA = adjoint(a)
# Multiply the adjoint matrix by the input matrix
multipliedMatrix = matrix_mult(a, adjointA)
# Check whether the multiplication result is (approximately) identity matrix
for i in range(n):
for j in range(n):
# An identity matrix has 1's in all the places where the row index and column index are equal...
if i == j:
if multipliedMatrix[i][j] != approx(1):
return False
# ... and 0's in all the places where the row index and column index are different
else:
if multipliedMatrix[i][j] != approx(0):
return False
return True
###Output
Success!
###Markdown
[Return to task 8 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-8:-Unitary-Verification.) Exercise 9: Inner product.**Inputs:**1. An $n \times 1$ vector $V$.2. An $n \times 1$ vector $W$.**Output:** Return a complex number - the inner product $\langle V , W \rangle$. SolutionFollowing the definition of the inner product, $\langle V , W \rangle = V^\dagger W$. For example, for vectors of length 2:$$\langle\begin{bmatrix} a \\ b\end{bmatrix},\begin{bmatrix} c \\ d\end{bmatrix}\rangle =\begin{bmatrix} a \\ b\end{bmatrix}^\dagger\begin{bmatrix} c \\ d\end{bmatrix}=\begin{bmatrix} \overline{a} & \overline{b} \end{bmatrix}\begin{bmatrix} c \\ d\end{bmatrix}= \overline{a} \cdot c + \overline{b} \cdot d$$> *Python note:* We will again use previously defined functions to calculate adjoint of a vector and a product of two vectors. > We need to keep in mind that the task asks us to return a complex number and not a $1 \times 1$ matrix which is the result of the multiplication. > Therefore at the end we'll extract the top left element of the `resultMatrix` and return it.
###Code
@exercise
def inner_prod(v : Matrix, w : Matrix) -> complex:
# Calculate the adjoint of the v vector
adjointV = adjoint(v)
# Multiply the adjoint v and w. The result will be a matrix with only one element.
resultMatrix = matrix_mult(adjointV, w)
# To get the actual complex number, we have to take one element from the multiplication result.
return resultMatrix[0][0]
###Output
_____no_output_____
###Markdown
[Return to task 9 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-9:-Inner-product.) Exercise 10: Normalized vectors.**Input:** A non-zero $n \times 1$ vector $V$.**Output:** Return an $n \times 1$ vector $\frac{V}{||V||}$ - the normalized version of the vector $V$. Solution If the vector $V = \begin{bmatrix}a & b & c \end{bmatrix}$, its norm $ ||V|| = \sqrt{|a|^2 + |b|^2 + |c|^2} $,and its normalized version is$ \begin{bmatrix}\frac{a}{||V||} & \frac{b}{||V||} & \frac{c}{||V||} \end{bmatrix} $.Thus, we need to calculate the norm of the vector and to divide each element of the vector by it. We will calculate the norm as a square root of an inner product of the vector with itself.
###Code
@exercise
def normalize(v : Matrix) -> Matrix:
norm = math.sqrt(inner_prod(v, v).real)
n = len(v)
ans = create_empty_matrix(n, 1)
# Divide each element of the vector by the norm
for i in range(n):
ans[i][0] = v[i][0] / norm
return ans
###Output
_____no_output_____
###Markdown
[Return to task 10 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-10:-Normalized-vectors.) Exercise 11: Outer product.**Inputs:**1. An $n \times 1$ vector $V$.2. An $m \times 1$ vector $W$.**Output:** Return an $n \times m$ matrix that represents the outer product of $V$ and $W$. SolutionBy definition, the outer product of $V$ and $W$ is $VW^\dagger$. We can use a similar approach to calculating the inner product, except here we will return the whole multiplication result rather than a specific number.
###Code
@exercise
def outer_prod(v : Matrix, w : Matrix) -> Matrix:
# Calculate adjoint of the W
adjointW = adjoint(w)
# Multiply V by W adjoint
return matrix_mult(v, adjointW)
###Output
_____no_output_____
###Markdown
[Return to task 11 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-11:-Outer-product.) Exercise 12*: Tensor Product.**Inputs:**1. An $n \times m$ matrix $A$.2. A $k \times l$ matrix $B$.**Output:** Return an $(n \cdot k) \times (m \cdot l)$ matrix $A \otimes B$, the tensor product of $A$ and $B$. SolutionWe will follow the definition of the tensor product. For example, tensor product of $2 \times 2$ matrices look as follows:$$\begin{bmatrix} a & b \\ c & d \end{bmatrix} \otimes \begin{bmatrix} e & f \\ g & h \end{bmatrix} =\begin{bmatrix} a \cdot \begin{bmatrix} e & f \\ g & h \end{bmatrix} & b \cdot \begin{bmatrix} e & f \\ g & h \end{bmatrix} \\ c \cdot \begin{bmatrix} e & f \\ g & h \end{bmatrix} & d \cdot \begin{bmatrix} e & f \\ g & h \end{bmatrix}\end{bmatrix}=\begin{bmatrix} a \cdot e & a \cdot f & b \cdot e & b \cdot f \\ a \cdot g & a \cdot h & b \cdot g & b \cdot h \\ c \cdot e & c \cdot f & d \cdot e & d \cdot f \\ c \cdot g & c \cdot h & d \cdot g & d \cdot h\end{bmatrix}$$> *Python note:* We need to calculate pairwise products of all elements of the left matrix and all elements of the right matrix; this means we have to use 4 nested loops.
###Code
@exercise
def tensor_product(a : Matrix, b : Matrix) -> Matrix:
aRows = len(a) # the number of rows for matrix a
aColumns = len(a[0]) # the number of columns for matrix a
bRows = len(b) # the number of rows for matrix b
bColumns = len(b[0]) # the number of columns for matrix b
ans = create_empty_matrix(aRows * bRows, aColumns * bColumns)
# Outer pair of loops, iterating trough the elements of the left matrix
for i in range(aRows):
for j in range(aColumns):
# Inner pair of loops, iterating through the elements of the right matrix
for k in range(bRows):
for l in range(bColumns):
ans[i * bRows + k][j * bColumns + l] = a[i][j] * b[k][l]
return ans
###Output
_____no_output_____
###Markdown
[Return to task 12 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-12*:-Tensor-Product.) Exercise 13: Finding an eigenvalue.**Inputs:**1. A real-valued $n \times n$ matrix $A$.2. An eigenvector $V$ of matrix $A$.**Output:** Return a real number - the eigenvalue of $A$ that is associated with the given eigenvector. SolutionLet's consider what happens when we multiply the matrix by its eigenvector for a $3 \times 3$ example:$$ A \cdot V = \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \end{bmatrix} \cdot \begin{bmatrix}j \\ k \\ l \end{bmatrix} = \begin{bmatrix} m \\ n \\ o \end{bmatrix} = \alpha \begin{bmatrix}j \\ k \\ l \end{bmatrix} = \alpha V$$This means you can find the eigenvalue $\alpha$ from the equations $$ \begin{cases} \alpha j = m \\ \alpha k = n \\ \alpha l = o \end{cases}$$We can use any of them, keeping in mind that we need an equation in which the element of the eigenvector is not zero (otherwise we get an equation $0 \alpha = 0$ which doesn't help us find $\alpha$).Since eigenvectors are defined as non-zero vectors, we are guaranteed that at least one element of the vector will not be zero.
###Code
from pytest import approx
@exercise
def find_eigenvalue(a : Matrix, v : Matrix) -> float:
n = len(v)
multiplied = matrix_mult(a, v)
for i in range(n):
if (v[i][0] != approx(0)):
return multiplied[i][0] / v[i][0]
###Output
_____no_output_____
###Markdown
[Return to task 13 of the Linear Algebra tutorial.](/LinearAlgebra.ipynbExercise-13:-Finding-an-eigenvalue.) Exercise 14**: Finding an eigenvector.**Inputs:**1. A $2 \times 2$ matrix $A$.2. An eigenvalue $x$ of matrix $A$.**Output:** Return any non-zero eigenvector of $A$ that is associated with $x$. SolutionSearching for an eigenvector $V$ associated with a specific eigenvalue $x$ asks for solving the following equation:$$ AV = xV $$or, equivalently, $$(A - xI_n)V = 0$$In other words, for a $2 \times 2$ matrix the following happens: 1. Multiply the identity matrix $I_2$ by the eigenvalue:$$ x \cdot \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} x & 0 \\ 0 & x \end{bmatrix} $$2. Subtract this new matrix from the given matrix $A$:$$ \begin{bmatrix} a & b \\ c & d \end{bmatrix} - \begin{bmatrix} x & 0 \\ 0 & x \end{bmatrix} = \begin{bmatrix} a -x & b \\ c & d -x \end{bmatrix} $$ 3. Find a vector that, when multiplied by the resulting matrix, will produce a 0 vector:$$ \begin{bmatrix} a - x & b \\ c & d - x \end{bmatrix} \cdot \begin{bmatrix} v_0 \\ v_1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$This can be rewritten as the following system of equations:$$\begin{cases}(a - x) \cdot v_0 + b \cdot v_1 = 0 \\c \cdot v_0 + (d - x) \cdot v_1 = 0 \end{cases}$$Each eigenvalue has infinitely many eigenvectors associated with it (since multiplying an eigenvector by a number gives another valid eigenvector). We can limit our search and say that $v_0 = 1$, if possible. In this case, the system of equations becomes$$\begin{cases}(a - x) + b \cdot v_1 = 0 \\c + (d - x) \cdot v_1 = 0 \end{cases}$$and finally we get $v_1 = \frac{a-x}{-b}$.If $b = 0$, we can not perform this division, so we need to reconsider our choices. The first equation becomes $(a-x)v_0 = 0$, which is possible in two cases:* If $a - x \neq 0$, we get $v_0 = 0$ and thus $v_1$ has to be non-zero (we can pick $v_1 = 1$).* If $a - x = 0$, we can not get any information from the first equation and have to fall back to the second one:$c \cdot v_0 + (d - x) \cdot v_1 = 0$. Following a similar logic: * If $c = 0$, we get $(d - x) \cdot v_1 = 0$, so $v_0 = 1, v_1 = 0$. * If $c \neq 0$, we get $v_1 = 1, v_0 = \frac{d-x}{-c}$.
###Code
@exercise
def find_eigenvector(a : Matrix, x : float) -> Matrix:
# Check for possible edge cases
if (a[0][1] == 0):
if (a[0][0] - x == 0):
if (a[1][0] == 0):
return [[1], [0]]
else:
return [[(a[1][1] - x) / (-a[1][0])], [1]]
else:
return [[0], [1]]
v0 = 1
v1 = (a[0][0] - x) / (-a[0][1])
return [[v0], [v1]]
###Output
_____no_output_____
###Markdown
Linear Algebra Tutorial Workbook**What is this workbook?**A workbook is a collection of problems, accompanied by solutions to them. The explanations focus on the logical steps required to solve a problem; they illustrate the concepts that need to be applied to come up with a solution to the problem, explaining the mathematical steps required. Note that a workbook should not be the primary source of knowledge on the subject matter; it assumes that you've already read a tutorial or a textbook and that you are now seeking to improve your problem-solving skills. You should attempt solving the tasks of the respective kata first, and turn to the workbook only if stuck. While a textbook emphasizes knowledge acquisition, a workbook emphasizes skill acquisition.This workbook describes the solutions to the problems offered in the [Linear Algebra tutorial](./LinearAlgebra.ipynb). Since the tasks are offered as programming problems, the explanations also cover some elements of Python that might be non-obvious for a first-time user.**What you should know for this workbook**1. Complex arithmetic.2. Basic Python knowledge is helpful but not necessary. Click the cell with code below this block of text and press `Ctrl+Enter` (`⌘+Enter` on Mac). **Do not skip this step**.
###Code
# Run this cell using Ctrl+Enter (⌘+Enter on Mac).
from testing import exercise, create_empty_matrix
from typing import List
import math, cmath
Matrix = List[List[complex]]
###Output
Success!
###Markdown
Exercise 1: Matrix addition.**Inputs:**1. An $n \times m$ matrix $A$, represented as a two-dimensional list.2. An $n \times m$ matrix $B$, represented as a two-dimensional list.**Output:** Return the sum of the matrices $A + B$ - an $n \times m$ matrix, represented as a two-dimensional list. SolutionFollowing the definition given in the tutorial, the sum of two matrices is a matrix of element-wise sums of matrix elements; for example, for $2 \times 2$ matrices$$ A + B =\begin{bmatrix} a & b \\ c & d \end{bmatrix} + \begin{bmatrix} e & f \\ g & h \end{bmatrix} = \begin{bmatrix} a + e & b + f \\ c + g & d + h \end{bmatrix}$$> *Python note:* This tutorial uses a lot of lists and loops, so let's walk through some Python syntax details first. If you're familiar with Python syntax, feel free to skip this note!>> * [`range(x)`](https://docs.python.org/3/tutorial/controlflow.htmlthe-range-function) will create a [list](https://docs.python.org/3/tutorial/introduction.htmllists) of numbers from 0 to `x - 1`, inclusive; for example, `range(3)` will create a list `[0, 1, 2]`. > * [`for`](https://docs.python.org/3/tutorial/controlflow.htmlfor-statements) statement iterates over the items of a sequence; for example, the following code> ```python> for i in range(3):> print(i)> ```>> will print:> ```> 0> 1> 2> ```>> * Matrices are described as two-dimensional lists, > which are represented as lists of lists. For example, the following matrix:>> $$\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix} $$>> is represented as a list of lists `[[1, 2, 3], [4, 5, 6]]`. >> * You can access a specific element of the list using the index of that element in the list (note that indices start with 0): the first element of `array` is `array[0]`, the second - `array[1]`, etc.> * Similarly, you can access an element of a matrix using the row and column indices of that element: `matrix[0][2]` would access the element in the first row and 3rd column.> * `len(array)` returns the number of elements in a list; for example, `len([0, 1, 2])` will return 3.> * Here is an example of creating a matrix from the example above and looping through its elements to print them:>>```Python>matrix = [[1, 2, 3], [4, 5, 6]]>numberOfRows = len(matrix) will return 2>numberOfColumns = len(matrix[0]) will return 3>for row in range(numberOfRows):> for column in range(numberOfColumns):> print(matrix[row][column])>>```>> * Finally, the first exercise offers you a template of a solution that uses a function `create_empty_matrix(n, m)`; this function creates an $n \times m$ matrix filled with 0's as values. This function is not a built-in Python function, this notebook defines it for you to use.
###Code
@exercise
def matrix_add(a : Matrix, b : Matrix) -> Matrix:
# You can get the size of a matrix like this:
rows = len(a)
columns = len(a[0])
# You can use the following function to initialize a rows×columns matrix filled with 0s to store your answer
c = create_empty_matrix(rows, columns)
for i in range(rows):
for j in range(columns):
# You can access elements of a matrix like this:
x = a[i][j]
y = b[i][j]
# You can modify the elements of a matrix like this:
c[i][j] = a[i][j] + b[i][j]
return c
###Output
Success!
###Markdown
[Return to task 1 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-1:-Matrix-addition.) Exercise 2: Scalar multiplication.**Inputs:**1. A scalar $x$.2. An $n \times m$ matrix $A$.**Output:** Return the $n \times m$ matrix $x \cdot A$. SolutionWe can again follow the definition given in the tutorial: to calculate the product of a number and a matrix, multiply each matrix element by that number. For example, for a $2 \times 2$ matrix:$$x \cdot A = x \cdot \begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} x \cdot a & x \cdot b \\ x \cdot c & x \cdot d \end{bmatrix} $$ > *Python note:* We have to multiply each element in the matrix by the given number $x$. To do so, we will again loop trough each matrix element with 2 `for` loops, do the multiplication and store its result in the corresponding element of the newly created matrix.
###Code
@exercise
def scalar_mult(x : complex, a : Matrix) -> Matrix:
rows = len(a)
columns = len(a[0])
c = create_empty_matrix(rows, columns)
for i in range(rows):
for j in range(columns):
c[i][j] = a[i][j] * x
return c
###Output
Success!
###Markdown
[Return to task 2 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-2:-Scalar-multiplication.) Exercise 3: Matrix multiplication.**Inputs:**1. An $n \times m$ matrix $A$.2. An $m \times k$ matrix $B$.**Output:** Return the $n \times k$ matrix equal to the matrix product $AB$. SolutionAgain, the tutorial gives us the definition of how multiplication works, and we just need to implement it in code. Here is an example of multiplying a $2 \times 3$ matrix by a $3 \times 2$ matrix:$$ A \cdot B =\begin{bmatrix} a & b & c \\ d & e & f \end{bmatrix} \cdot \begin{bmatrix} h & i \\ j & k \\ l & m \end{bmatrix} = \begin{bmatrix} a \cdot h + b \cdot j + c \cdot l & a \cdot i + b \cdot k + c \cdot m \\ d \cdot h + e \cdot j + f \cdot l & d \cdot i + e \cdot k + f \cdot m \end{bmatrix} $$> *Python note*: In this exercise we'll need an extra nested loop. We will iterate through the rows and columns of the resulting matrix, similar to the previous exercises, but for each element of the result we'll need to iterate through the row of the left matrix and the column of the right matrix that contribute to that element. In the example above, to get the element in the first row and the first column of the resulting matrix product we'll need to iterate through the first row of the left matrix $\begin{bmatrix} a & b & c \end{bmatrix}$ and the first column of the right matrix $\begin{bmatrix} h \\ j \\ l \end{bmatrix}$ and add up pairwise products of their elements.>> Note that the empty matrix we create for storing the result differs in dimensions from the previous exercises: its number of rows equals the number of rows of the left matrix, and its number of columns equals the number of columns of the right matrix. >> The Python `+=` operator is a convenient shorthand for the assignment `variable = variable + increment`.
###Code
@exercise
def matrix_mult(a : Matrix, b : Matrix) -> Matrix:
rows = len(a) # the number of rows of the left matrix
common = len(a[0]) # = len(b) - the common dimension of the matrices
columns = len(b[0]) # the number of columns of the right matrix
ans = create_empty_matrix(rows, columns)
for currentRow in range(rows):
for currentColumn in range(columns):
for k in range(common):
ans[currentRow][currentColumn] += a[currentRow][k] * b[k][currentColumn]
return ans
###Output
Success!
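###Markdown
> *Note:* A usage sketch with hypothetical inputs (assuming the `matrix_mult` solution above): a $2 \times 3$ matrix times a $3 \times 2$ matrix gives a $2 \times 2$ matrix.
>
>```Python
>a = [[1, 2, 3], [4, 5, 6]]       # 2x3
>b = [[7, 8], [9, 10], [11, 12]]  # 3x2
>print(matrix_mult(a, b))
># Expected output: [[58, 64], [139, 154]]
>```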
###Markdown
[Return to task 3 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-3:-Matrix-multiplication.) Exercise 4: Matrix Inversion.**Input:** An invertible $2 \times 2$ matrix $A$.**Output:** Return the inverse of $A$, a $2 \times 2$ matrix $A^{-1}$. SolutionSince we only need to invert a $2 \times 2$ matrix, we will not consider a solution that can be used for arbitrary-sized matrices. We will follow the algorithm described in the [Wikipedia article](https://en.wikipedia.org/wiki/Invertible_matrix#Inversion_of_2_%C3%97_2_matrices).$$ A = \begin{bmatrix} a & b \\ c & d \end{bmatrix} $$The determinant of the matrix is defined as $$ |A| = a \cdot d - b \cdot c $$$$A^{-1} = \frac{1}{|A|} \cdot \begin{bmatrix} d & -b \\ -c & a \end{bmatrix} = \begin{bmatrix} \frac{d}{|A|} & \frac{-b}{|A|} \\ \frac{-c}{|A|} & \frac{a}{|A|} \end{bmatrix} $$
###Code
@exercise
def matrix_inverse(m : Matrix) -> Matrix:
# Extract each element of the array into a named variable
a = m[0][0]
b = m[0][1]
c = m[1][0]
d = m[1][1]
# Calculate the determinant
determinant = (a * d) - (b * c)
# Create the inverse of the matrix following the formula above
ans = [[d / determinant, -b / determinant], [-c / determinant, a / determinant]]
return ans
###Output
Success!
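###Markdown
> *Note:* A sanity-check sketch with a hypothetical input (assuming the `matrix_inverse` and `matrix_mult` solutions above): a matrix multiplied by its inverse should give the identity matrix.
>
>```Python
>a = [[4.0, 7.0], [2.0, 6.0]]  # determinant = 4*6 - 7*2 = 10
>print(matrix_mult(a, matrix_inverse(a)))
># Expected output (up to floating-point rounding): [[1.0, 0.0], [0.0, 1.0]]
>```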
###Markdown
[Return to task 4 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-4:-Matrix-Inversion.) Exercise 5: Transpose.**Input:** An $n \times m$ matrix $A$.**Output:** Return an $m \times n$ matrix $A^T$, the transpose of $A$. SolutionAgain, the tutorial gives us the definition of matrix transpose, so we just need to fill the resulting matrix with the elements of the original matrix in the right order. For example, for a $3 \times 2$ matrix$$\begin{bmatrix} a & b \\ c & d \\ e & f\end{bmatrix}^T=\begin{bmatrix} a & c & e \\ b & d & f\end{bmatrix}$$
###Code
@exercise
def transpose(a : Matrix) -> Matrix:
rows = len(a)
columns = len(a[0])
# Note that the resulting matrix dimensions are swapped compared to the original ones
ans = create_empty_matrix(columns, rows)
for i in range(rows):
for j in range(columns):
ans[j][i] = a[i][j]
return ans
###Output
_____no_output_____
###Markdown
[Return to task 5 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-5:-Transpose.) Exercise 6: Conjugate.**Input:** An $n \times m$ matrix $A$.**Output:** Return an $n \times m$ matrix $\overline{A}$, the conjugate of $A$. SolutionTo get the conjugate of a matrix you take the conjugate of each individual element (check the [Complex Arithmetic tutorial](../ComplexArithmetic/ComplexArithmetic.ipynb#Complex-Conjugate) for the definition).> *Python note*: In the complex arithmetic tutorial complex numbers were represented as tuples of real and imaginary components. However, this tutorial relies on Python's built-in [`complex`](https://docs.python.org/3.8/library/functions.html#complex) data type. Python's [cmath library](https://docs.python.org/3.8/library/cmath.html) offers a lot of useful functions that deal with the `complex` data type.>> Here is an example of using the `complex` data type:>> ```Python> # Import the cmath library> import cmath>> # Create a new complex number 5 + 3i; the two arguments are the real and the imaginary parts of the number> complexNumber = complex(5, 3)>> # Print the real and the imaginary parts of the number> print(complexNumber.real) > print(complexNumber.imag)>> # Convert the complex number to its polar representation using the cmath library> polar = cmath.polar(complexNumber)> print(polar) # This prints: (5.830951894845301, 0.5404195002705842)> ```>> To get the complex conjugate of a matrix, we loop through each element of the matrix, extract the real and imaginary parts of the number, and flip the sign of the imaginary part.
###Code
@exercise
def conjugate(a : Matrix) -> Matrix:
rows = len(a)
columns = len(a[0])
ans = create_empty_matrix(rows, columns)
for i in range(rows):
for j in range(columns):
ans[i][j] = complex(a[i][j].real, -a[i][j].imag)
return ans
###Output
_____no_output_____
###Markdown
[Return to task 6 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-6:-Conjugate.) Exercise 7: Adjoint.**Input:** An $n \times m$ matrix $A$.**Output:** Return an $m \times n$ matrix $A^\dagger$, the adjoint of $A$. SolutionTo get the adjoint we perform both **transpose** and **conjugate** operations on the input matrix. We can write out the whole procedure manually, like we have done above, but we can also leverage the code we have written above.> In Python, the `def` keyword defines a function, which can be reused later in the code.
###Code
@exercise
def adjoint(a : Matrix) -> Matrix:
# Call the transpose function with the input matrix a
transp = transpose(a)
# Call the conjugate function with the transposed matrix as input
ans = conjugate(transp)
return ans
###Output
_____no_output_____
###Markdown
[Return to task 7 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-7:-Adjoint.) Exercise 8: Unitary Verification.**Input:** An $n \times n$ matrix $A$.**Output:** Check if the matrix is unitary and return `True` if it is, or `False` if it isn't. SolutionA matrix is unitary if this holds true: $UU^\dagger = U^\dagger U = I$.(As a reminder, an identity matrix is a matrix with 1s on the main diagonal and 0s everywhere else.)Thus, to check if the input matrix is unitary we will need to perform the following steps:1. Calculate the adjoint of the input matrix.2. Multiply it by the input matrix.3. Check if the multiplication result is equal to an identity matrix. > *Python note:* We will leverage the `adjoint` and the `matrix_mult` functions that we have created above.>> When we check each element of $UU^\dagger$ to see whether it equals the respective element of the identity matrix, we'll use the `approx` function from the `pytest` library to perform this comparison approximately.
###Code
from pytest import approx
@exercise
def is_matrix_unitary(a : Matrix) -> bool:
n = len(a)
# Calculate the adjoint matrix
adjointA = adjoint(a)
# Multiply the adjoint matrix by the input matrix
multipliedMatrix = matrix_mult(a, adjointA)
# Check whether the multiplication result is (approximately) identity matrix
for i in range(n):
for j in range(n):
# An identity matrix has 1's in all the places where the row index and column index are equal...
if i == j:
if multipliedMatrix[i][j] != approx(1):
return False
# ... and 0's in all the places where the row index and column index are different
else:
if multipliedMatrix[i][j] != approx(0):
return False
return True
###Output
_____no_output_____
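###Markdown
> *Note:* A usage sketch with hypothetical inputs (assuming the `is_matrix_unitary` solution above and the `math` import from the setup cell): the matrix $\frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}$ is unitary, while $\begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}$ is not.
>
>```Python
>h = 1 / math.sqrt(2)
>print(is_matrix_unitary([[h, h], [h, -h]]))  # Expected output: True
>print(is_matrix_unitary([[1, 1], [0, 1]]))   # Expected output: False
>```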
###Markdown
[Return to task 8 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-8:-Unitary-Verification.) Exercise 9: Inner product.**Inputs:**1. An $n \times 1$ vector $V$.2. An $n \times 1$ vector $W$.**Output:** Return a complex number - the inner product $\langle V , W \rangle$. SolutionFollowing the definition of the inner product, $\langle V , W \rangle = V^\dagger W$. For example, for vectors of length 2:$$\langle\begin{bmatrix} a \\ b\end{bmatrix},\begin{bmatrix} c \\ d\end{bmatrix}\rangle =\begin{bmatrix} a \\ b\end{bmatrix}^\dagger\begin{bmatrix} c \\ d\end{bmatrix}=\begin{bmatrix} \overline{a} & \overline{b} \end{bmatrix}\begin{bmatrix} c \\ d\end{bmatrix}= \overline{a} \cdot c + \overline{b} \cdot d$$> *Python note:* We will again use the previously defined functions to calculate the adjoint of a vector and the product of two vectors. > We need to keep in mind that the task asks us to return a complex number rather than the $1 \times 1$ matrix that is the result of the multiplication. > Therefore at the end we'll extract the top left element of the `resultMatrix` and return it.
###Code
@exercise
def inner_prod(v : Matrix, w : Matrix) -> complex:
# Calculate the adjoint of the v vector
adjointV = adjoint(v)
# Multiply the adjoint v and w. The result will be a matrix with only one element.
resultMatrix = matrix_mult(adjointV, w)
# To get the actual complex number, we have to take one element from the multiplication result.
return resultMatrix[0][0]
###Output
_____no_output_____
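###Markdown
> *Note:* A usage sketch with hypothetical inputs (assuming the `inner_prod` solution above); remember that the elements of the left vector are conjugated.
>
>```Python
>v = [[complex(1, 1)], [complex(0, 1)]]   # (1 + i, i)
>w = [[complex(2, 0)], [complex(0, -1)]]  # (2, -i)
>print(inner_prod(v, w))
># (1 - i) * 2 + (-i) * (-i) = 2 - 2i - 1, so expected output: (1-2j)
>```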
###Markdown
[Return to task 9 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-9:-Inner-product.) Exercise 10: Normalized vectors.**Input:** A non-zero $n \times 1$ vector $V$.**Output:** Return an $n \times 1$ vector $\frac{V}{||V||}$ - the normalized version of the vector $V$. Solution If the vector is $V = \begin{bmatrix} a \\ b \\ c \end{bmatrix}$, its norm is $ ||V|| = \sqrt{|a|^2 + |b|^2 + |c|^2} $, and its normalized version is$ \begin{bmatrix} \frac{a}{||V||} \\ \frac{b}{||V||} \\ \frac{c}{||V||} \end{bmatrix} $.Thus, we need to calculate the norm of the vector and divide each element of the vector by it. We will calculate the norm as the square root of the inner product of the vector with itself.
###Code
@exercise
def normalize(v : Matrix) -> Matrix:
norm = math.sqrt(inner_prod(v, v).real)
n = len(v)
ans = create_empty_matrix(n, 1)
# Divide each element of the vector by the norm
for i in range(n):
ans[i][0] = v[i][0] / norm
return ans
###Output
_____no_output_____
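###Markdown
> *Note:* A usage sketch with a hypothetical input (assuming the `normalize` solution above): the vector $\begin{bmatrix} 3 \\ 4 \end{bmatrix}$ has norm $\sqrt{3^2 + 4^2} = 5$.
>
>```Python
>print(normalize([[3], [4]]))
># Expected output: [[0.6], [0.8]]
>```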
###Markdown
[Return to task 10 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-10:-Normalized-vectors.) Exercise 11: Outer product.**Inputs:**1. An $n \times 1$ vector $V$.2. An $m \times 1$ vector $W$.**Output:** Return an $n \times m$ matrix that represents the outer product of $V$ and $W$. SolutionBy definition, the outer product of $V$ and $W$ is $VW^\dagger$. We can use a similar approach to calculating the inner product, except here we will return the whole multiplication result rather than a specific number.
###Code
@exercise
def outer_prod(v : Matrix, w : Matrix) -> Matrix:
    # Calculate the adjoint of W
    adjointW = adjoint(w)
    # Multiply V by the adjoint of W
return matrix_mult(v, adjointW)
###Output
_____no_output_____
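###Markdown
> *Note:* A usage sketch with hypothetical inputs (assuming the `outer_prod` solution above): the outer product of an $n \times 1$ vector and an $m \times 1$ vector is an $n \times m$ matrix.
>
>```Python
>print(outer_prod([[1], [2]], [[3], [4]]))
># Expected output (entries as complex numbers): [[3, 4], [6, 8]]
>```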
###Markdown
[Return to task 11 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-11:-Outer-product.) Exercise 12*: Tensor Product.**Inputs:**1. An $n \times m$ matrix $A$.2. A $k \times l$ matrix $B$.**Output:** Return an $(n \cdot k) \times (m \cdot l)$ matrix $A \otimes B$, the tensor product of $A$ and $B$. SolutionWe will follow the definition of the tensor product. For example, the tensor product of two $2 \times 2$ matrices looks as follows:$$\begin{bmatrix} a & b \\ c & d \end{bmatrix} \otimes \begin{bmatrix} e & f \\ g & h \end{bmatrix} =\begin{bmatrix} a \cdot \begin{bmatrix} e & f \\ g & h \end{bmatrix} & b \cdot \begin{bmatrix} e & f \\ g & h \end{bmatrix} \\ c \cdot \begin{bmatrix} e & f \\ g & h \end{bmatrix} & d \cdot \begin{bmatrix} e & f \\ g & h \end{bmatrix}\end{bmatrix}=\begin{bmatrix} a \cdot e & a \cdot f & b \cdot e & b \cdot f \\ a \cdot g & a \cdot h & b \cdot g & b \cdot h \\ c \cdot e & c \cdot f & d \cdot e & d \cdot f \\ c \cdot g & c \cdot h & d \cdot g & d \cdot h\end{bmatrix}$$> *Python note:* We need to calculate pairwise products of all elements of the left matrix and all elements of the right matrix; this means we have to use 4 nested loops.
###Code
@exercise
def tensor_product(a : Matrix, b : Matrix) -> Matrix:
aRows = len(a) # the number of rows for matrix a
aColumns = len(a[0]) # the number of columns for matrix a
bRows = len(b) # the number of rows for matrix b
bColumns = len(b[0]) # the number of columns for matrix b
ans = create_empty_matrix(aRows * bRows, aColumns * bColumns)
    # Outer pair of loops, iterating through the elements of the left matrix
for i in range(aRows):
for j in range(aColumns):
# Inner pair of loops, iterating through the elements of the right matrix
for k in range(bRows):
for l in range(bColumns):
ans[i * bRows + k][j * bColumns + l] = a[i][j] * b[k][l]
return ans
###Output
_____no_output_____
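###Markdown
> *Note:* A usage sketch with hypothetical inputs (assuming the `tensor_product` solution above): each element of the left matrix scales a full copy of the right matrix.
>
>```Python
>print(tensor_product([[1, 2], [3, 4]], [[0, 1], [1, 0]]))
># Expected output: [[0, 1, 0, 2], [1, 0, 2, 0], [0, 3, 0, 4], [3, 0, 4, 0]]
>```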
###Markdown
[Return to task 12 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-12*:-Tensor-Product.) Exercise 13: Finding an eigenvalue.**Inputs:**1. A real-valued $n \times n$ matrix $A$.2. An eigenvector $V$ of matrix $A$.**Output:** Return a real number - the eigenvalue of $A$ that is associated with the given eigenvector. SolutionLet's consider what happens when we multiply the matrix by its eigenvector for a $3 \times 3$ example:$$ A \cdot V = \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \end{bmatrix} \cdot \begin{bmatrix}j \\ k \\ l \end{bmatrix} = \begin{bmatrix} m \\ n \\ o \end{bmatrix} = \alpha \begin{bmatrix}j \\ k \\ l \end{bmatrix} = \alpha V$$This means you can find the eigenvalue $\alpha$ from the equations $$ \begin{cases} \alpha j = m \\ \alpha k = n \\ \alpha l = o \end{cases}$$We can use any of them, keeping in mind that we need an equation in which the element of the eigenvector is not zero (otherwise we get an equation $0 \alpha = 0$ which doesn't help us find $\alpha$).Since eigenvectors are defined as non-zero vectors, we are guaranteed that at least one element of the vector will not be zero.
###Code
from pytest import approx
@exercise
def find_eigenvalue(a : Matrix, v : Matrix) -> float:
    n = len(v)
    # Multiply the matrix by the given eigenvector
    multiplied = matrix_mult(a, v)
    # Find an element of the eigenvector that is not zero and use it to compute the ratio
    for i in range(n):
        if (v[i][0] != approx(0)):
            return multiplied[i][0] / v[i][0]
###Output
_____no_output_____
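###Markdown
> *Note:* A usage sketch with hypothetical inputs (assuming the `find_eigenvalue` solution above): for $A = \begin{bmatrix} 2 & 0 \\ 0 & 3 \end{bmatrix}$, the vector $\begin{bmatrix} 0 \\ 1 \end{bmatrix}$ is an eigenvector with eigenvalue 3; note how the solution skips the zero first component.
>
>```Python
>print(find_eigenvalue([[2, 0], [0, 3]], [[0], [1]]))
># Expected output: 3.0
>```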
###Markdown
[Return to task 13 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-13:-Finding-an-eigenvalue.) Exercise 14**: Finding an eigenvector.**Inputs:**1. A $2 \times 2$ matrix $A$.2. An eigenvalue $x$ of matrix $A$.**Output:** Return any non-zero eigenvector of $A$ that is associated with $x$. SolutionSearching for an eigenvector $V$ associated with a specific eigenvalue $x$ amounts to solving the following equation:$$ AV = xV $$or, equivalently, $$(A - xI_n)V = 0$$In other words, for a $2 \times 2$ matrix the following happens: 1. Multiply the identity matrix $I_2$ by the eigenvalue:$$ x \cdot \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} x & 0 \\ 0 & x \end{bmatrix} $$2. Subtract this new matrix from the given matrix $A$:$$ \begin{bmatrix} a & b \\ c & d \end{bmatrix} - \begin{bmatrix} x & 0 \\ 0 & x \end{bmatrix} = \begin{bmatrix} a - x & b \\ c & d - x \end{bmatrix} $$ 3. Find a vector that, when multiplied by the resulting matrix, will produce a 0 vector:$$ \begin{bmatrix} a - x & b \\ c & d - x \end{bmatrix} \cdot \begin{bmatrix} v_0 \\ v_1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$This can be rewritten as the following system of equations:$$\begin{cases}(a - x) \cdot v_0 + b \cdot v_1 = 0 \\c \cdot v_0 + (d - x) \cdot v_1 = 0 \end{cases}$$Each eigenvalue has infinitely many eigenvectors associated with it (since multiplying an eigenvector by a number gives another valid eigenvector). We can limit our search and say that $v_0 = 1$, if possible. In this case, the system of equations becomes$$\begin{cases}(a - x) + b \cdot v_1 = 0 \\c + (d - x) \cdot v_1 = 0 \end{cases}$$and finally we get $v_1 = \frac{a-x}{-b}$.If $b = 0$, we cannot perform this division, so we need to reconsider our choices. The first equation becomes $(a-x) \cdot v_0 = 0$, which is possible in two cases:* If $a - x \neq 0$, we get $v_0 = 0$ and thus $v_1$ has to be non-zero (we can pick $v_1 = 1$).* If $a - x = 0$, we cannot get any information from the first equation and have to fall back to the second one: $c \cdot v_0 + (d - x) \cdot v_1 = 0$. Following a similar logic: * If $c = 0$, we get $(d - x) \cdot v_1 = 0$, so $v_0 = 1, v_1 = 0$. * If $c \neq 0$, we get $v_1 = 1, v_0 = \frac{d-x}{-c}$.
###Code
@exercise
def find_eigenvector(a : Matrix, x : float) -> Matrix:
    # Check for possible edge cases
    if (a[0][1] == 0):
        # b = 0: we can't divide by b, so look at the first equation, (a - x) * v0 = 0
        if (a[0][0] - x == 0):
            # a - x = 0: fall back to the second equation, c * v0 + (d - x) * v1 = 0
            if (a[1][0] == 0):
                # c = 0: pick v0 = 1, v1 = 0
                return [[1], [0]]
            else:
                # c != 0: pick v1 = 1, then v0 = (d - x) / (-c)
                return [[(a[1][1] - x) / (-a[1][0])], [1]]
        else:
            # a - x != 0: v0 must be 0, so pick v1 = 1
            return [[0], [1]]
v0 = 1
v1 = (a[0][0] - x) / (-a[0][1])
return [[v0], [v1]]
###Output
_____no_output_____
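###Markdown
> *Note:* A sanity-check sketch with hypothetical inputs (assuming the `find_eigenvector` and `matrix_mult` solutions above): for $A = \begin{bmatrix} 2 & 1 \\ 0 & 3 \end{bmatrix}$ and eigenvalue $x = 3$, the formula gives $v_1 = \frac{2 - 3}{-1} = 1$, and multiplying $A$ by the resulting vector should reproduce $x$ times that vector.
>
>```Python
>a = [[2, 1], [0, 3]]
>v = find_eigenvector(a, 3)
>print(v)                  # Expected output: [[1], [1.0]]
>print(matrix_mult(a, v))  # Expected output: [[3.0], [3.0]], i.e., 3 times v
>```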
###Markdown
Linear Algebra Tutorial Workbook**What is this workbook?**A workbook is a collection of problems, accompanied by solutions to them. The explanations focus on the logical steps required to solve a problem; they illustrate the concepts that need to be applied to come up with a solution to the problem, explaining the mathematical steps required. Note that a workbook should not be the primary source of knowledge on the subject matter; it assumes that you've already read a tutorial or a textbook and that you are now seeking to improve your problem-solving skills. You should attempt solving the tasks of the respective kata first, and turn to the workbook only if stuck. While a textbook emphasizes knowledge acquisition, a workbook emphasizes skill acquisition.This workbook describes the solutions to the problems offered in the [Linear Algebra tutorial](./LinearAlgebra.ipynb). Since the tasks are offered as programming problems, the explanations also cover some elements of Python that might be non-obvious for a first-time user.**What you should know for this workbook**1. Complex arithmetic.2. Basic Python knowledge is helpful but not necessary. Click the cell with code below this block of text and press `Ctrl+Enter` (`⌘+Enter` on Mac). **Do not skip this step**.
###Code
# Run this cell using Ctrl+Enter (⌘+Enter on Mac).
from testing import exercise, create_empty_matrix
from typing import List
import math, cmath
Matrix = List[List[complex]]
###Output
_____no_output_____
###Markdown
Exercise 1: Matrix addition.**Inputs:**1. An $n \times m$ matrix $A$, represented as a two-dimensional list.2. An $n \times m$ matrix $B$, represented as a two-dimensional list.**Output:** Return the sum of the matrices $A + B$ - an $n \times m$ matrix, represented as a two-dimensional list. SolutionFollowing the definition given in the tutorial, the sum of two matrices is a matrix of element-wise sums of matrix elements; for example, for $2 \times 2$ matrices$$ A + B =\begin{bmatrix} a & b \\ c & d \end{bmatrix} + \begin{bmatrix} e & f \\ g & h \end{bmatrix} = \begin{bmatrix} a + e & b + f \\ c + g & d + h \end{bmatrix}$$> *Python note:* This tutorial uses a lot of lists and loops, so let's walk through some Python syntax details first. If you're familiar with Python syntax, feel free to skip this note!>> * [`range(x)`](https://docs.python.org/3/tutorial/controlflow.htmlthe-range-function) will create a [list](https://docs.python.org/3/tutorial/introduction.htmllists) of numbers from 0 to `x - 1`, inclusive; for example, `range(3)` will create a list `[0, 1, 2]`. > * [`for`](https://docs.python.org/3/tutorial/controlflow.htmlfor-statements) statement iterates over the items of a sequence; for example, the following code> ```python> for i in range(3):> print(i)> ```>> will print:> ```> 0> 1> 2> ```>> * Matrices are described as two-dimensional lists, > which are represented as lists of lists. For example, the following matrix:>> $$\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix} $$>> is represented as a list of lists `[[1, 2, 3], [4, 5, 6]]`. >> * You can access a specific element of the list using the index of that element in the list (note that indices start with 0): the first element of `array` is `array[0]`, the second - `array[1]`, etc.> * Similarly, you can access an element of a matrix using the row and column indices of that element: `matrix[0][2]` would access the element in the first row and 3rd column.> * `len(array)` returns the number of elements in a list; for example, `len([0, 1, 2])` will return 3.> * Here is an example of creating a matrix from the example above and looping through its elements to print them:>>```Python>matrix = [[1, 2, 3], [4, 5, 6]]>numberOfRows = len(matrix) will return 2>numberOfColumns = len(matrix[0]) will return 3>for row in range(numberOfRows):> for column in range(numberOfColumns):> print(matrix[row][column])>>```>> * Finally, the first exercise offers you a template of a solution that uses a function `create_empty_matrix(n, m)`; this function creates an $n \times m$ matrix filled with 0's as values. This function is not a built-in Python function, this notebook defines it for you to use.
###Code
@exercise
def matrix_add(a : Matrix, b : Matrix) -> Matrix:
# You can get the size of a matrix like this:
rows = len(a)
columns = len(a[0])
# You can use the following function to initialize a rows×columns matrix filled with 0s to store your answer
c = create_empty_matrix(rows, columns)
for i in range(rows):
for j in range(columns):
# You can access elements of a matrix like this:
x = a[i][j]
y = b[i][j]
# You can modify the elements of a matrix like this:
c[i][j] = a[i][j] + b[i][j]
return c
###Output
_____no_output_____
###Markdown
[Return to task 1 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-1:-Matrix-addition.) Exercise 2: Scalar multiplication.**Inputs:**1. A scalar $x$.2. An $n \times m$ matrix $A$.**Output:** Return the $n \times m$ matrix $x \cdot A$. SolutionWe can again follow the definition given in the tutorial: to calculate the product of a number and a matrix, multiply each matrix element by that number. For example, for a $2 \times 2$ matrix:$$x \cdot A = x \cdot \begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} x \cdot a & x \cdot b \\ x \cdot c & x \cdot d \end{bmatrix} $$ > *Python note:* We have to multiply each element in the matrix by the given number $x$. To do so, we will again loop trough each matrix element with 2 `for` loops, do the multiplication and store its result in the corresponding element of the newly created matrix.
###Code
@exercise
def scalar_mult(x : complex, a : Matrix) -> Matrix:
rows = len(a)
columns = len(a[0])
c = create_empty_matrix(rows, columns)
for i in range(rows):
for j in range(columns):
c[i][j] = a[i][j] * x
return c
###Output
_____no_output_____
###Markdown
[Return to task 2 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-2:-Scalar-multiplication.) Exercise 3: Matrix multiplication.**Inputs:**1. An $n \times m$ matrix $A$.2. An $m \times k$ matrix $B$.**Output:** Return the $n \times k$ matrix equal to the matrix product $AB$. SolutionAgain, the tutorial gives us the definition of how multiplication works, and we just need to implement it in code. Here is an example of multiplying a $2 \times 3$ matrix by a $3 \times 2$ matrix:$$ A \cdot B =\begin{bmatrix} a & b & c \\ d & e & f \end{bmatrix} \cdot \begin{bmatrix} h & i \\ j & k \\ l & m \end{bmatrix} = \begin{bmatrix} a \cdot h + b \cdot j + c \cdot l & a \cdot i + b \cdot k + c \cdot m \\ d \cdot h + e \cdot j + f \cdot l & d \cdot i + e \cdot k + f \cdot m \end{bmatrix} $$> *Python note*: In this exercise we'll need an extra nested loop. We will iterate trough the rows and columns of the resulting matrix, similar to the previous exercises, but for each element of the result we'll need to iterate through the row of the left matrix and the column of the right matrix that contribute to that element. In the example above, to get the element in the first row and the first column of the resulting matrix product we'll need to iterate through the first row of the left matrix $\begin{bmatrix} a & b & c \end{bmatrix}$ and the first column of the right matrix $\begin{bmatrix} h \\ j \\ l \end{bmatrix}$ and add up pairwise products of their elements.>> Note that the empty matrix we create for storing the result differs in dimensions from the previous exercises: its number of rows equals the number of rows of the left matrix, and its number of columns equals to the number of columns of the right matrix. >> Python `+=` operator is a convenient shorthand for assignment `variable = variable + increment`.
###Code
@exercise
def matrix_mult(a : Matrix, b : Matrix) -> Matrix:
rows = len(a) # the number of rows of the left matrix
common = len(a[0]) # = len(b) - the common dimension of the matrices
columns = len(b[0]) # the number of columns of the right matrix
ans = create_empty_matrix(rows, columns)
for currentRow in range(rows):
for currentColumn in range(columns):
for k in range(common):
ans[currentRow][currentColumn] += a[currentRow][k] * b[k][currentColumn]
return ans
###Output
_____no_output_____
###Markdown
[Return to task 3 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-3:-Matrix-multiplication.) Exercise 4: Matrix Inversion.**Input:** An invertible $2 \times 2$ matrix $A$.**Output:** Return the inverse of $A$, a $2 \times 2$ matrix $A^{-1}$. SolutionSince we only need to invert a $2 \times 2$ matrix, we will not consider a solution which can be used for arbitrary-sized matrices. We will follow the algorithm described in the [Wikipedia article](https://en.wikipedia.org/wiki/Invertible_matrixInversion_of_2_%C3%97_2_matrices).$$ A = \begin{bmatrix} a & b \\ c & d \end{bmatrix} $$The determinant of the matrix is defined as $$ |A| = a \cdot d - b \cdot c $$$$A^{-1} = \frac{1}{|A|} \cdot \begin{bmatrix} d & -b \\ -c & a \end{bmatrix} = \begin{bmatrix} \frac{d}{|A|} & \frac{-b}{|A|} \\ \frac{-c}{|A|} & \frac{a}{|A|} \end{bmatrix} $$
###Code
@exercise
def matrix_inverse(m : Matrix) -> Matrix:
# Extract each element of the array into a named variable
a = m[0][0]
b = m[0][1]
c = m[1][0]
d = m[1][1]
# Calculate the determinant
determinant = (a * d) - (b * c)
# Create the inverse of the matrix following the formula above
ans = [[d / determinant, -b / determinant], [-c / determinant, a / determinant]]
return ans
###Output
_____no_output_____
###Markdown
[Return to task 4 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-4:-Matrix-Inversion.) Exercise 5: Transpose.**Input:** An $n \times m$ matrix $A$.**Output:** Return an $m \times n$ matrix $A^T$, the transpose of $A$. SolutionAgain, the tutorial gives us the definition of matrix transpose, so we just need to fill the resulting matrix with the elements of the original matrix in the right order. For example, for a $3 \times 2$ matrix$$\begin{bmatrix} a & b \\ c & d \\ e & f\end{bmatrix}^T=\begin{bmatrix} a & c & e \\ b & d & f\end{bmatrix}$$
###Code
@exercise
def transpose(a : Matrix) -> Matrix:
rows = len(a)
columns = len(a[0])
# Note that the resulting matrix dimensions are swapped compared to the original ones
ans = create_empty_matrix(columns, rows)
for i in range(rows):
for j in range(columns):
ans[j][i] = a[i][j]
return ans
###Output
_____no_output_____
###Markdown
[Return to task 5 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-5:-Transpose.) Exercise 6: Conjugate.**Input:** An $n \times m$ matrix $A$.**Output:** Return an $n \times m$ matrix $\overline{A}$, the conjugate of $A$. SolutionsTo get the conjugate of a matrix you take the conjugate of each individual element (check the [Complex Arithmetic tutorial](../ComplexArithmetic/ComplexArithmetic.ipynbComplex-Conjugate) for the definition.> *Python note*: In the complex arithmetic tutorial complex numbers were represented as tuples of real and imaginary components. However, this tutorial relies on Python's built-in [`complex`](https://docs.python.org/3.8/library/functions.htmlcomplex) data type. Python's [cmath library](https://docs.python.org/3.8/library/cmath.html) offers a lot of useful functions that deal with the `complex` data type.>> Here is an example of using the `complex` data type:>> ```Python> Import the cmath library> import cmath>> Create a new complex number 5 + 3i; the two arguments are the real and the imaginary parts of the number> complexNumber = complex(5, 3)>> Print the real and the imaginary parts of the number> print(complexNumber.real) > print(complexNumber.imag)>> Convert the complex number to its polar representation using the cmath library> polar = cmath.polar(complexNumber)> print(polar) This prints: (5.830951894845301, 0.5404195002705842)> ```>> To get the complex conjugate of a matrix, we loop trough each element of the matrix, extract real and imaginary parts of the number and flip the sign for the imaginary part.
###Code
@exercise
def conjugate(a : Matrix) -> Matrix:
rows = len(a)
columns = len(a[0])
ans = create_empty_matrix(rows, columns)
for i in range(rows):
for j in range(columns):
ans[i][j] = complex(a[i][j].real, -a[i][j].imag)
return ans
###Output
_____no_output_____
###Markdown
[Return to task 6 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-6:-Conjugate.) Exercise 7: Adjoint.**Input:** An $n \times m$ matrix $A$.**Output:** Return an $m \times n$ matrix $A^\dagger$, the adjoint of $A$. SolutionTo get the adjoint we perform both **transpose** and **conjugate** operations on the input matrix. We can write out the whole procedure manually, like we have done above, but we can also leverage the code we have written above.> In Python the `def` word defines a function, which could be reused later in the code.
###Code
@exercise
def adjoint(a : Matrix) -> Matrix:
# Call the transpose function with the input matrix a
transp = transpose(a)
# Call the conjugate function with the transposed matrix as input
ans = conjugate(transp)
return ans
###Output
_____no_output_____
###Markdown
[Return to task 7 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-7:-Adjoint.) Exercise 8: Unitary Verification.**Input:** An $n \times n$ matrix $A$.**Output:** Check if the matrix is unitary and return `True` if it is, or `False` if it isn't. SolutionA matrix is unitary if this holds true: $UU^\dagger = U^\dagger U = I$.(As a reminder, an identity matrix is a matrix with 1s on the main diagonal and 0s everywhere else.)Thus, to check if the input matrix is unitary we will need to perform the following steps:1. Calculate the adjoint of the input matrix.2. Multiply it by the input matrix.3. Check if the multiplication result is equal to an identity matrix. > *Python note:* We will leverage the `adjoint` and the `matrix_mult` functions what we have created above.>> When we check each element of $UU^\dagger$ to see whether it equals the respective element of the identity matrix, we'll use Python function `approx` to perform this comparison approximately.
###Code
from pytest import approx
@exercise
def is_matrix_unitary(a : Matrix) -> bool:
n = len(a)
# Calculate the adjoint matrix
adjointA = adjoint(a)
# Multiply the adjoint matrix by the input matrix
multipliedMatrix = matrix_mult(a, adjointA)
# Check whether the multiplication result is (approximately) identity matrix
for i in range(n):
for j in range(n):
# An identity matrix has 1's in all the places where the row index and column index are equal...
if i == j:
if multipliedMatrix[i][j] != approx(1):
return False
# ... and 0's in all the places where the row index and column index are different
else:
if multipliedMatrix[i][j] != approx(0):
return False
return True
###Output
_____no_output_____
###Markdown
[Return to task 8 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-8:-Unitary-Verification.) Exercise 9: Inner product.**Inputs:**1. An $n \times 1$ vector $V$.2. An $n \times 1$ vector $W$.**Output:** Return a complex number - the inner product $\langle V , W \rangle$. SolutionFollowing the definition of the inner product, $\langle V , W \rangle = V^\dagger W$. For example, for vectors of length 2:$$\langle\begin{bmatrix} a \\ b\end{bmatrix},\begin{bmatrix} c \\ d\end{bmatrix}\rangle =\begin{bmatrix} a \\ b\end{bmatrix}^\dagger\begin{bmatrix} c \\ d\end{bmatrix}=\begin{bmatrix} \overline{a} & \overline{b} \end{bmatrix}\begin{bmatrix} c \\ d\end{bmatrix}= \overline{a} \cdot c + \overline{b} \cdot d$$> *Python note:* We will again use previously defined functions to calculate adjoint of a vector and a product of two vectors. > We need to keep in mind that the task asks us to return a complex number and not a $1 \times 1$ matrix which is the result of the multiplication. > Therefore at the end we'll extract the top left element of the `resultMatrix` and return it.
###Code
@exercise
def inner_prod(v : Matrix, w : Matrix) -> complex:
# Calculate the adjoint of the v vector
adjointV = adjoint(v)
# Multiply the adjoint v and w. The result will be a matrix with only one element.
resultMatrix = matrix_mult(adjointV, w)
# To get the actual complex number, we have to take one element from the multiplication result.
return resultMatrix[0][0]
###Output
_____no_output_____
###Markdown
[Return to task 9 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-9:-Inner-product.) Exercise 10: Normalized vectors.**Input:** A non-zero $n \times 1$ vector $V$.**Output:** Return an $n \times 1$ vector $\frac{V}{||V||}$ - the normalized version of the vector $V$. Solution If the vector $V = \begin{bmatrix}a & b & c \end{bmatrix}$, its norm $ ||V|| = \sqrt{|a|^2 + |b|^2 + |c|^2} $,and its normalized version is$ \begin{bmatrix}\frac{a}{||V||} & \frac{b}{||V||} & \frac{c}{||V||} \end{bmatrix} $.Thus, we need to calculate the norm of the vector and to divide each element of the vector by it. We will calculate the norm as a square root of an inner product of the vector with itself.
###Code
@exercise
def normalize(v : Matrix) -> Matrix:
norm = math.sqrt(inner_prod(v, v).real)
n = len(v)
ans = create_empty_matrix(n, 1)
# Divide each element of the vector by the norm
for i in range(n):
ans[i][0] = v[i][0] / norm
return ans
###Output
_____no_output_____
###Markdown
[Return to task 10 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-10:-Normalized-vectors.) Exercise 11: Outer product.**Inputs:**1. An $n \times 1$ vector $V$.2. An $m \times 1$ vector $W$.**Output:** Return an $n \times m$ matrix that represents the outer product of $V$ and $W$. SolutionBy definition, the outer product of $V$ and $W$ is $VW^\dagger$. We can use a similar approach to calculating the inner product, except here we will return the whole multiplication result rather than a specific number.
###Code
@exercise
def outer_prod(v : Matrix, w : Matrix) -> Matrix:
# Calculate adjoint of the W
adjointW = adjoint(w)
# Multiply V by W adjoint
return matrix_mult(v, adjointW)
###Output
_____no_output_____
###Markdown
[Return to task 11 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-11:-Outer-product.) Exercise 12*: Tensor Product.**Inputs:**1. An $n \times m$ matrix $A$.2. A $k \times l$ matrix $B$.**Output:** Return an $(n \cdot k) \times (m \cdot l)$ matrix $A \otimes B$, the tensor product of $A$ and $B$. SolutionWe will follow the definition of the tensor product. For example, tensor product of $2 \times 2$ matrices look as follows:$$\begin{bmatrix} a & b \\ c & d \end{bmatrix} \otimes \begin{bmatrix} e & f \\ g & h \end{bmatrix} =\begin{bmatrix} a \cdot \begin{bmatrix} e & f \\ g & h \end{bmatrix} & b \cdot \begin{bmatrix} e & f \\ g & h \end{bmatrix} \\ c \cdot \begin{bmatrix} e & f \\ g & h \end{bmatrix} & d \cdot \begin{bmatrix} e & f \\ g & h \end{bmatrix}\end{bmatrix}=\begin{bmatrix} a \cdot e & a \cdot f & b \cdot e & b \cdot f \\ a \cdot g & a \cdot h & b \cdot g & b \cdot h \\ c \cdot e & c \cdot f & d \cdot e & d \cdot f \\ c \cdot g & c \cdot h & d \cdot g & d \cdot h\end{bmatrix}$$> *Python note:* We need to calculate pairwise products of all elements of the left matrix and all elements of the right matrix; this means we have to use 4 nested loops.
###Code
@exercise
def tensor_product(a : Matrix, b : Matrix) -> Matrix:
aRows = len(a) # the number of rows for matrix a
aColumns = len(a[0]) # the number of columns for matrix a
bRows = len(b) # the number of rows for matrix b
bColumns = len(b[0]) # the number of columns for matrix b
ans = create_empty_matrix(aRows * bRows, aColumns * bColumns)
# Outer pair of loops, iterating trough the elements of the left matrix
for i in range(aRows):
for j in range(aColumns):
# Inner pair of loops, iterating through the elements of the right matrix
for k in range(bRows):
for l in range(bColumns):
ans[i * bRows + k][j * bColumns + l] = a[i][j] * b[k][l]
return ans
###Output
_____no_output_____
###Markdown
[Return to task 12 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-12*:-Tensor-Product.) Exercise 13: Finding an eigenvalue.**Inputs:**1. A real-valued $n \times n$ matrix $A$.2. An eigenvector $V$ of matrix $A$.**Output:** Return a real number - the eigenvalue of $A$ that is associated with the given eigenvector. SolutionLet's consider what happens when we multiply the matrix by its eigenvector for a $3 \times 3$ example:$$ A \cdot V = \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \end{bmatrix} \cdot \begin{bmatrix}j \\ k \\ l \end{bmatrix} = \begin{bmatrix} m \\ n \\ o \end{bmatrix} = \alpha \begin{bmatrix}j \\ k \\ l \end{bmatrix} = \alpha V$$This means you can find the eigenvalue $\alpha$ from the equations $$ \begin{cases} \alpha j = m \\ \alpha k = n \\ \alpha l = o \end{cases}$$We can use any of them, keeping in mind that we need an equation in which the element of the eigenvector is not zero (otherwise we get an equation $0 \alpha = 0$ which doesn't help us find $\alpha$).Since eigenvectors are defined as non-zero vectors, we are guaranteed that at least one element of the vector will not be zero.
###Code
from pytest import approx
@exercise
def find_eigenvalue(a : Matrix, v : Matrix) -> float:
n = len(v)
multiplied = matrix_mult(a, v)
for i in range(n):
if (v[i][0] != approx(0)):
return multiplied[i][0] / v[i][0]
###Output
_____no_output_____
###Markdown
[Return to task 13 of the Linear Algebra tutorial.](/LinearAlgebra.ipynbExercise-13:-Finding-an-eigenvalue.) Exercise 14**: Finding an eigenvector.**Inputs:**1. A $2 \times 2$ matrix $A$.2. An eigenvalue $x$ of matrix $A$.**Output:** Return any non-zero eigenvector of $A$ that is associated with $x$. SolutionSearching for an eigenvector $V$ associated with a specific eigenvalue $x$ asks for solving the following equation:$$ AV = xV $$or, equivalently, $$(A - xI_n)V = 0$$In other words, for a $2 \times 2$ matrix the following happens: 1. Multiply the identity matrix $I_2$ by the eigenvalue:$$ x \cdot \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} x & 0 \\ 0 & x \end{bmatrix} $$2. Subtract this new matrix from the given matrix $A$:$$ \begin{bmatrix} a & b \\ c & d \end{bmatrix} - \begin{bmatrix} x & 0 \\ 0 & x \end{bmatrix} = \begin{bmatrix} a -x & b \\ c & d -x \end{bmatrix} $$ 3. Find a vector that, when multiplied by the resulting matrix, will produce a 0 vector:$$ \begin{bmatrix} a - x & b \\ c & d - x \end{bmatrix} \cdot \begin{bmatrix} v_0 \\ v_1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$This can be rewritten as the following system of equations:$$\begin{cases}(a - x) \cdot v_0 + b \cdot v_1 = 0 \\c \cdot v_0 + (d - x) \cdot v_1 = 0 \end{cases}$$Each eigenvalue has infinitely many eigenvectors associated with it (since multiplying an eigenvector by a number gives another valid eigenvector). We can limit our search and say that $v_0 = 1$, if possible. In this case, the system of equations becomes$$\begin{cases}(a - x) + b \cdot v_1 = 0 \\c + (d - x) \cdot v_1 = 0 \end{cases}$$and finally we get $v_1 = \frac{a-x}{-b}$.If $b = 0$, we can not perform this division, so we need to reconsider our choices. The first equation becomes $(a-x)v_0 = 0$, which is possible in two cases:* If $a - x \neq 0$, we get $v_0 = 0$ and thus $v_1$ has to be non-zero (we can pick $v_1 = 1$).* If $a - x = 0$, we can not get any information from the first equation and have to fall back to the second one:$c \cdot v_0 + (d - x) \cdot v_1 = 0$. Following a similar logic: * If $c = 0$, we get $(d - x) \cdot v_1 = 0$, so $v_0 = 1, v_1 = 0$. * If $c \neq 0$, we get $v_1 = 1, v_0 = \frac{d-x}{-c}$.
###Code
@exercise
def find_eigenvector(a : Matrix, x : float) -> Matrix:
# Check for possible edge cases
if (a[0][1] == 0):
if (a[0][0] - x == 0):
if (a[1][0] == 0):
return [[1], [0]]
else:
return [[(a[1][1] - x) / (-a[1][0])], [1]]
else:
return [[0], [1]]
v0 = 1
v1 = (a[0][0] - x) / (-a[0][1])
return [[v0], [v1]]
###Output
_____no_output_____
###Markdown
Linear Algebra Tutorial Workbook**What is this workbook?**A workbook is a collection of problems, accompanied by solutions to them. The explanations focus on the logical steps required to solve a problem; they illustrate the concepts that need to be applied to come up with a solution to the problem, explaining the mathematical steps required. Note that a workbook should not be the primary source of knowledge on the subject matter; it assumes that you've already read a tutorial or a textbook and that you are now seeking to improve your problem-solving skills. You should attempt solving the tasks of the respective kata first, and turn to the workbook only if stuck. While a textbook emphasizes knowledge acquisition, a workbook emphasizes skill acquisition.This workbook describes the solutions to the problems offered in the [Linear Algebra tutorial](./LinearAlgebra.ipynb). Since the tasks are offered as programming problems, the explanations also cover some elements of Python that might be non-obvious for a first-time user.**What you should know for this workbook**1. Complex arithmetic.2. Basic Python knowledge is helpful but not necessary. Click the cell with code below this block of text and press `Ctrl+Enter` (`⌘+Enter` on Mac). **Do not skip this step**.
###Code
# Run this cell using Ctrl+Enter (⌘+Enter on Mac).
from testing import exercise, create_empty_matrix
from typing import List
import math, cmath
Matrix = List[List[complex]]
###Output
_____no_output_____
###Markdown
Exercise 1: Matrix addition.**Inputs:**1. An $n \times m$ matrix $A$, represented as a two-dimensional list.2. An $n \times m$ matrix $B$, represented as a two-dimensional list.**Output:** Return the sum of the matrices $A + B$ - an $n \times m$ matrix, represented as a two-dimensional list. SolutionFollowing the definition given in the tutorial, the sum of two matrices is a matrix of element-wise sums of matrix elements; for example, for $2 \times 2$ matrices$$ A + B =\begin{bmatrix} a & b \\ c & d \end{bmatrix} + \begin{bmatrix} e & f \\ g & h \end{bmatrix} = \begin{bmatrix} a + e & b + f \\ c + g & d + h \end{bmatrix}$$> *Python note:* This tutorial uses a lot of lists and loops, so let's walk through some Python syntax details first. If you're familiar with Python syntax, feel free to skip this note!>> * [`range(x)`](https://docs.python.org/3/tutorial/controlflow.htmlthe-range-function) will create a [list](https://docs.python.org/3/tutorial/introduction.htmllists) of numbers from 0 to `x - 1`, inclusive; for example, `range(3)` will create a list `[0, 1, 2]`. > * [`for`](https://docs.python.org/3/tutorial/controlflow.htmlfor-statements) statement iterates over the items of a sequence; for example, the following code> ```python> for i in range(3):> print(i)> ```>> will print:> ```> 0> 1> 2> ```>> * Matrices are described as two-dimensional lists, > which are represented as lists of lists. For example, the following matrix:>> $$\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix} $$>> is represented as a list of lists `[[1, 2, 3], [4, 5, 6]]`. >> * You can access a specific element of the list using the index of that element in the list (note that indices start with 0): the first element of `array` is `array[0]`, the second - `array[1]`, etc.> * Similarly, you can access an element of a matrix using the row and column indices of that element: `matrix[0][2]` would access the element in the first row and 3rd column.> * `len(array)` returns the number of elements in a list; for example, `len([0, 1, 2])` will return 3.> * Here is an example of creating a matrix from the example above and looping through its elements to print them:>>```Python>matrix = [[1, 2, 3], [4, 5, 6]]>numberOfRows = len(matrix) will return 2>numberOfColumns = len(matrix[0]) will return 3>for row in range(numberOfRows):> for column in range(numberOfColumns):> print(matrix[row][column])>>```>> * Finally, the first exercise offers you a template of a solution that uses a function `create_empty_matrix(n, m)`; this function creates an $n \times m$ matrix filled with 0's as values. This function is not a built-in Python function, this notebook defines it for you to use.
###Code
@exercise
def matrix_add(a : Matrix, b : Matrix) -> Matrix:
# You can get the size of a matrix like this:
rows = len(a)
columns = len(a[0])
# You can use the following function to initialize a rows×columns matrix filled with 0s to store your answer
c = create_empty_matrix(rows, columns)
for i in range(rows):
for j in range(columns):
# You can access elements of a matrix like this:
x = a[i][j]
y = b[i][j]
# You can modify the elements of a matrix like this:
c[i][j] = a[i][j] + b[i][j]
return c
###Output
_____no_output_____
###Markdown
[Return to task 1 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-1:-Matrix-addition.) Exercise 2: Scalar multiplication.**Inputs:**1. A scalar $x$.2. An $n \times m$ matrix $A$.**Output:** Return the $n \times m$ matrix $x \cdot A$. SolutionWe can again follow the definition given in the tutorial: to calculate the product of a number and a matrix, multiply each matrix element by that number. For example, for a $2 \times 2$ matrix:$$x \cdot A = x \cdot \begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} x \cdot a & x \cdot b \\ x \cdot c & x \cdot d \end{bmatrix} $$ > *Python note:* We have to multiply each element in the matrix by the given number $x$. To do so, we will again loop trough each matrix element with 2 `for` loops, do the multiplication and store its result in the corresponding element of the newly created matrix.
###Code
@exercise
def scalar_mult(x : complex, a : Matrix) -> Matrix:
rows = len(a)
columns = len(a[0])
c = create_empty_matrix(rows, columns)
for i in range(rows):
for j in range(columns):
c[i][j] = a[i][j] * x
return c
###Output
_____no_output_____
###Markdown
[Return to task 2 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-2:-Scalar-multiplication.) Exercise 3: Matrix multiplication.**Inputs:**1. An $n \times m$ matrix $A$.2. An $m \times k$ matrix $B$.**Output:** Return the $n \times k$ matrix equal to the matrix product $AB$. SolutionAgain, the tutorial gives us the definition of how multiplication works, and we just need to implement it in code. Here is an example of multiplying a $2 \times 3$ matrix by a $3 \times 2$ matrix:$$ A \cdot B =\begin{bmatrix} a & b & c \\ d & e & f \end{bmatrix} \cdot \begin{bmatrix} h & i \\ j & k \\ l & m \end{bmatrix} = \begin{bmatrix} a \cdot h + b \cdot j + c \cdot l & a \cdot i + b \cdot k + c \cdot m \\ d \cdot h + e \cdot j + f \cdot l & d \cdot i + e \cdot k + f \cdot m \end{bmatrix} $$> *Python note*: In this exercise we'll need an extra nested loop. We will iterate trough the rows and columns of the resulting matrix, similar to the previous exercises, but for each element of the result we'll need to iterate through the row of the left matrix and the column of the right matrix that contribute to that element. In the example above, to get the element in the first row and the first column of the resulting matrix product we'll need to iterate through the first row of the left matrix $\begin{bmatrix} a & b & c \end{bmatrix}$ and the first column of the right matrix $\begin{bmatrix} h \\ j \\ l \end{bmatrix}$ and add up pairwise products of their elements.>> Note that the empty matrix we create for storing the result differs in dimensions from the previous exercises: its number of rows equals the number of rows of the left matrix, and its number of columns equals to the number of columns of the right matrix. >> Python `+=` operator is a convenient shorthand for assignment `variable = variable + increment`.
###Code
@exercise
def matrix_mult(a : Matrix, b : Matrix) -> Matrix:
rows = len(a) # the number of rows of the left matrix
common = len(a[0]) # = len(b) - the common dimension of the matrices
columns = len(b[0]) # the number of columns of the right matrix
ans = create_empty_matrix(rows, columns)
for currentRow in range(rows):
for currentColumn in range(columns):
for k in range(common):
ans[currentRow][currentColumn] += a[currentRow][k] * b[k][currentColumn]
return ans
###Output
_____no_output_____
###Markdown
[Return to task 3 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-3:-Matrix-multiplication.) Exercise 4: Matrix Inversion.**Input:** An invertible $2 \times 2$ matrix $A$.**Output:** Return the inverse of $A$, a $2 \times 2$ matrix $A^{-1}$. SolutionSince we only need to invert a $2 \times 2$ matrix, we will not consider a solution which can be used for arbitrary-sized matrices. We will follow the algorithm described in the [Wikipedia article](https://en.wikipedia.org/wiki/Invertible_matrixInversion_of_2_%C3%97_2_matrices).$$ A = \begin{bmatrix} a & b \\ c & d \end{bmatrix} $$The determinant of the matrix is defined as $$ |A| = a \cdot d - b \cdot c $$$$A^{-1} = \frac{1}{|A|} \cdot \begin{bmatrix} d & -b \\ -c & a \end{bmatrix} = \begin{bmatrix} \frac{d}{|A|} & \frac{-b}{|A|} \\ \frac{-c}{|A|} & \frac{a}{|A|} \end{bmatrix} $$
###Code
@exercise
def matrix_inverse(m : Matrix) -> Matrix:
# Extract each element of the array into a named variable
a = m[0][0]
b = m[0][1]
c = m[1][0]
d = m[1][1]
# Calculate the determinant
determinant = (a * d) - (b * c)
# Create the inverse of the matrix following the formula above
ans = [[d / determinant, -b / determinant], [-c / determinant, a / determinant]]
return ans
###Output
_____no_output_____
###Markdown
[Return to task 4 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-4:-Matrix-Inversion.) Exercise 5: Transpose.**Input:** An $n \times m$ matrix $A$.**Output:** Return an $m \times n$ matrix $A^T$, the transpose of $A$. SolutionAgain, the tutorial gives us the definition of matrix transpose, so we just need to fill the resulting matrix with the elements of the original matrix in the right order. For example, for a $3 \times 2$ matrix$$\begin{bmatrix} a & b \\ c & d \\ e & f\end{bmatrix}^T=\begin{bmatrix} a & c & e \\ b & d & f\end{bmatrix}$$
###Code
@exercise
def transpose(a : Matrix) -> Matrix:
rows = len(a)
columns = len(a[0])
# Note that the resulting matrix dimensions are swapped compared to the original ones
ans = create_empty_matrix(columns, rows)
for i in range(rows):
for j in range(columns):
ans[j][i] = a[i][j]
return ans
###Output
_____no_output_____
###Markdown
[Return to task 5 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-5:-Transpose.) Exercise 6: Conjugate.**Input:** An $n \times m$ matrix $A$.**Output:** Return an $n \times m$ matrix $\overline{A}$, the conjugate of $A$. SolutionsTo get the conjugate of a matrix you take the conjugate of each individual element (check the [Complex Arithmetic tutorial](../ComplexArithmetic/ComplexArithmetic.ipynbComplex-Conjugate) for the definition.> *Python note*: In the complex arithmetic tutorial complex numbers were represented as tuples of real and imaginary components. However, this tutorial relies on Python's built-in [`complex`](https://docs.python.org/3.8/library/functions.htmlcomplex) data type. Python's [cmath library](https://docs.python.org/3.8/library/cmath.html) offers a lot of useful functions that deal with the `complex` data type.>> Here is an example of using the `complex` data type:>> ```Python> Import the cmath library> import cmath>> Create a new complex number 5 + 3i; the two arguments are the real and the imaginary parts of the number> complexNumber = complex(5, 3)>> Print the real and the imaginary parts of the number> print(complexNumber.real) > print(complexNumber.imag)>> Convert the complex number to its polar representation using the cmath library> polar = cmath.polar(complexNumber)> print(polar) This prints: (5.830951894845301, 0.5404195002705842)> ```>> To get the complex conjugate of a matrix, we loop trough each element of the matrix, extract real and imaginary parts of the number and flip the sign for the imaginary part.
###Code
@exercise
def conjugate(a : Matrix) -> Matrix:
rows = len(a)
columns = len(a[0])
ans = create_empty_matrix(rows, columns)
for i in range(rows):
for j in range(columns):
ans[i][j] = complex(a[i][j].real, -a[i][j].imag)
return ans
###Output
_____no_output_____
###Markdown
[Return to task 6 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-6:-Conjugate.) Exercise 7: Adjoint.**Input:** An $n \times m$ matrix $A$.**Output:** Return an $m \times n$ matrix $A^\dagger$, the adjoint of $A$. SolutionTo get the adjoint we perform both **transpose** and **conjugate** operations on the input matrix. We can write out the whole procedure manually, like we have done above, but we can also leverage the code we have written above.> In Python the `def` word defines a function, which could be reused later in the code.
###Code
@exercise
def adjoint(a : Matrix) -> Matrix:
# Call the transpose function with the input matrix a
transp = transpose(a)
# Call the conjugate function with the transposed matrix as input
ans = conjugate(transp)
return ans
###Output
_____no_output_____
###Markdown
[Return to task 7 of the Linear Algebra tutorial.](./LinearAlgebra.ipynbExercise-7:-Adjoint.) Exercise 8: Unitary Verification.**Input:** An $n \times n$ matrix $A$.**Output:** Check if the matrix is unitary and return `True` if it is, or `False` if it isn't. SolutionA matrix is unitary if this holds true: $UU^\dagger = U^\dagger U = I$.(As a reminder, an identity matrix is a matrix with 1s on the main diagonal and 0s everywhere else.)Thus, to check if the input matrix is unitary we will need to perform the following steps:1. Calculate the adjoint of the input matrix.2. Multiply it by the input matrix.3. Check if the multiplication result is equal to an identity matrix. > *Python note:* We will leverage the `adjoint` and the `matrix_mult` functions what we have created above.>> When we check each element of $UU^\dagger$ to see whether it equals the respective element of the identity matrix, we'll use Python function `approx` to perform this comparison approximately.
###Code
from pytest import approx
@exercise
def is_matrix_unitary(a : Matrix) -> bool:
n = len(a)
# Calculate the adjoint matrix
adjointA = adjoint(a)
# Multiply the adjoint matrix by the input matrix
multipliedMatrix = matrix_mult(a, adjointA)
# Check whether the multiplication result is (approximately) identity matrix
for i in range(n):
for j in range(n):
# An identity matrix has 1's in all the places where the row index and column index are equal...
if i == j:
if multipliedMatrix[i][j] != approx(1):
return False
# ... and 0's in all the places where the row index and column index are different
else:
if multipliedMatrix[i][j] != approx(0):
return False
return True
###Output
_____no_output_____
###Markdown
[Return to task 8 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-8:-Unitary-Verification.) Exercise 9: Inner product.**Inputs:**1. An $n \times 1$ vector $V$.2. An $n \times 1$ vector $W$.**Output:** Return a complex number - the inner product $\langle V , W \rangle$. SolutionFollowing the definition of the inner product, $\langle V , W \rangle = V^\dagger W$. For example, for vectors of length 2:$$\langle\begin{bmatrix} a \\ b\end{bmatrix},\begin{bmatrix} c \\ d\end{bmatrix}\rangle =\begin{bmatrix} a \\ b\end{bmatrix}^\dagger\begin{bmatrix} c \\ d\end{bmatrix}=\begin{bmatrix} \overline{a} & \overline{b} \end{bmatrix}\begin{bmatrix} c \\ d\end{bmatrix}= \overline{a} \cdot c + \overline{b} \cdot d$$> *Python note:* We will again use the previously defined functions to calculate the adjoint of a vector and the product of two matrices. > We need to keep in mind that the task asks us to return a complex number and not the $1 \times 1$ matrix which is the result of the multiplication. > Therefore at the end we'll extract the top left element of the `resultMatrix` and return it.
###Code
@exercise
def inner_prod(v : Matrix, w : Matrix) -> complex:
# Calculate the adjoint of the v vector
adjointV = adjoint(v)
# Multiply the adjoint v and w. The result will be a matrix with only one element.
resultMatrix = matrix_mult(adjointV, w)
# To get the actual complex number, we have to take one element from the multiplication result.
return resultMatrix[0][0]
###Output
_____no_output_____
###Markdown
[Return to task 9 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-9:-Inner-product.) Exercise 10: Normalized vectors.**Input:** A non-zero $n \times 1$ vector $V$.**Output:** Return an $n \times 1$ vector $\frac{V}{||V||}$ - the normalized version of the vector $V$. Solution If the vector $V = \begin{bmatrix}a \\ b \\ c \end{bmatrix}$, its norm is $ ||V|| = \sqrt{|a|^2 + |b|^2 + |c|^2} $,and its normalized version is$ \begin{bmatrix}\frac{a}{||V||} \\ \frac{b}{||V||} \\ \frac{c}{||V||} \end{bmatrix} $.Thus, we need to calculate the norm of the vector and divide each element of the vector by it. We will calculate the norm as the square root of the inner product of the vector with itself.
###Code
@exercise
def normalize(v : Matrix) -> Matrix:
norm = math.sqrt(inner_prod(v, v).real)
n = len(v)
ans = create_empty_matrix(n, 1)
# Divide each element of the vector by the norm
for i in range(n):
ans[i][0] = v[i][0] / norm
return ans
###Output
_____no_output_____
###Markdown
[Return to task 10 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-10:-Normalized-vectors.) Exercise 11: Outer product.**Inputs:**1. An $n \times 1$ vector $V$.2. An $m \times 1$ vector $W$.**Output:** Return an $n \times m$ matrix that represents the outer product of $V$ and $W$. SolutionBy definition, the outer product of $V$ and $W$ is $VW^\dagger$. We can use a similar approach to calculating the inner product, except here we will return the whole multiplication result rather than a single number.
###Code
@exercise
def outer_prod(v : Matrix, w : Matrix) -> Matrix:
# Calculate adjoint of the W
adjointW = adjoint(w)
# Multiply V by W adjoint
return matrix_mult(v, adjointW)
###Output
_____no_output_____
###Markdown
[Return to task 11 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-11:-Outer-product.) Exercise 12*: Tensor Product.**Inputs:**1. An $n \times m$ matrix $A$.2. A $k \times l$ matrix $B$.**Output:** Return an $(n \cdot k) \times (m \cdot l)$ matrix $A \otimes B$, the tensor product of $A$ and $B$. SolutionWe will follow the definition of the tensor product. For example, the tensor product of $2 \times 2$ matrices looks as follows:$$\begin{bmatrix} a & b \\ c & d \end{bmatrix} \otimes \begin{bmatrix} e & f \\ g & h \end{bmatrix} =\begin{bmatrix} a \cdot \begin{bmatrix} e & f \\ g & h \end{bmatrix} & b \cdot \begin{bmatrix} e & f \\ g & h \end{bmatrix} \\ c \cdot \begin{bmatrix} e & f \\ g & h \end{bmatrix} & d \cdot \begin{bmatrix} e & f \\ g & h \end{bmatrix}\end{bmatrix}=\begin{bmatrix} a \cdot e & a \cdot f & b \cdot e & b \cdot f \\ a \cdot g & a \cdot h & b \cdot g & b \cdot h \\ c \cdot e & c \cdot f & d \cdot e & d \cdot f \\ c \cdot g & c \cdot h & d \cdot g & d \cdot h\end{bmatrix}$$> *Python note:* We need to calculate pairwise products of all elements of the left matrix and all elements of the right matrix; this means we have to use 4 nested loops.
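> In index form (a sketch of the mapping the solution code implements): if $B$ has $p$ rows and $q$ columns, then
>
> $$(A \otimes B)_{i \cdot p + k,\ j \cdot q + l} = A_{i,j} \cdot B_{k,l}$$
>
> which is exactly the `ans[i * bRows + k][j * bColumns + l]` assignment below.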
###Code
@exercise
def tensor_product(a : Matrix, b : Matrix) -> Matrix:
aRows = len(a) # the number of rows for matrix a
aColumns = len(a[0]) # the number of columns for matrix a
bRows = len(b) # the number of rows for matrix b
bColumns = len(b[0]) # the number of columns for matrix b
ans = create_empty_matrix(aRows * bRows, aColumns * bColumns)
    # Outer pair of loops, iterating through the elements of the left matrix
for i in range(aRows):
for j in range(aColumns):
# Inner pair of loops, iterating through the elements of the right matrix
for k in range(bRows):
for l in range(bColumns):
ans[i * bRows + k][j * bColumns + l] = a[i][j] * b[k][l]
return ans
###Output
_____no_output_____
###Markdown
[Return to task 12 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-12*:-Tensor-Product.) Exercise 13: Finding an eigenvalue.**Inputs:**1. A real-valued $n \times n$ matrix $A$.2. An eigenvector $V$ of matrix $A$.**Output:** Return a real number - the eigenvalue of $A$ that is associated with the given eigenvector. SolutionLet's consider what happens when we multiply the matrix by its eigenvector for a $3 \times 3$ example:$$ A \cdot V = \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \end{bmatrix} \cdot \begin{bmatrix}j \\ k \\ l \end{bmatrix} = \begin{bmatrix} m \\ n \\ o \end{bmatrix} = \alpha \begin{bmatrix}j \\ k \\ l \end{bmatrix} = \alpha V$$This means you can find the eigenvalue $\alpha$ from the equations $$ \begin{cases} \alpha j = m \\ \alpha k = n \\ \alpha l = o \end{cases}$$We can use any of them, keeping in mind that we need an equation in which the element of the eigenvector is not zero (otherwise we get the equation $0 \cdot \alpha = 0$, which doesn't help us find $\alpha$).Since eigenvectors are defined as non-zero vectors, we are guaranteed that at least one element of the vector will not be zero.
###Code
from pytest import approx
@exercise
def find_eigenvalue(a : Matrix, v : Matrix) -> float:
n = len(v)
multiplied = matrix_mult(a, v)
for i in range(n):
if (v[i][0] != approx(0)):
return multiplied[i][0] / v[i][0]
###Output
_____no_output_____
###Markdown
[Return to task 13 of the Linear Algebra tutorial.](./LinearAlgebra.ipynb#Exercise-13:-Finding-an-eigenvalue.) Exercise 14**: Finding an eigenvector.**Inputs:**1. A $2 \times 2$ matrix $A$.2. An eigenvalue $x$ of matrix $A$.**Output:** Return any non-zero eigenvector of $A$ that is associated with $x$. SolutionSearching for an eigenvector $V$ associated with a specific eigenvalue $x$ requires solving the following equation:$$ AV = xV $$or, equivalently, $$(A - xI_n)V = 0$$In other words, for a $2 \times 2$ matrix the following happens: 1. Multiply the identity matrix $I_2$ by the eigenvalue:$$ x \cdot \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} x & 0 \\ 0 & x \end{bmatrix} $$2. Subtract this new matrix from the given matrix $A$:$$ \begin{bmatrix} a & b \\ c & d \end{bmatrix} - \begin{bmatrix} x & 0 \\ 0 & x \end{bmatrix} = \begin{bmatrix} a -x & b \\ c & d -x \end{bmatrix} $$ 3. Find a vector that, when multiplied by the resulting matrix, will produce a 0 vector:$$ \begin{bmatrix} a - x & b \\ c & d - x \end{bmatrix} \cdot \begin{bmatrix} v_0 \\ v_1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$This can be rewritten as the following system of equations:$$\begin{cases}(a - x) \cdot v_0 + b \cdot v_1 = 0 \\c \cdot v_0 + (d - x) \cdot v_1 = 0 \end{cases}$$Each eigenvalue has infinitely many eigenvectors associated with it (since multiplying an eigenvector by a number gives another valid eigenvector). We can limit our search and say that $v_0 = 1$, if possible. In this case, the system of equations becomes$$\begin{cases}(a - x) + b \cdot v_1 = 0 \\c + (d - x) \cdot v_1 = 0 \end{cases}$$and finally we get $v_1 = \frac{a-x}{-b}$.If $b = 0$, we cannot perform this division, so we need to reconsider our choices. The first equation becomes $(a-x)v_0 = 0$, which is possible in two cases:* If $a - x \neq 0$, we get $v_0 = 0$ and thus $v_1$ has to be non-zero (we can pick $v_1 = 1$).* If $a - x = 0$, we cannot get any information from the first equation and have to fall back to the second one:$c \cdot v_0 + (d - x) \cdot v_1 = 0$. Following a similar logic: * If $c = 0$, we get $(d - x) \cdot v_1 = 0$, so $v_0 = 1, v_1 = 0$. * If $c \neq 0$, we get $v_1 = 1, v_0 = \frac{d-x}{-c}$.
###Code
@exercise
def find_eigenvector(a : Matrix, x : float) -> Matrix:
# Check for possible edge cases
if (a[0][1] == 0):
if (a[0][0] - x == 0):
if (a[1][0] == 0):
return [[1], [0]]
else:
return [[(a[1][1] - x) / (-a[1][0])], [1]]
else:
return [[0], [1]]
v0 = 1
v1 = (a[0][0] - x) / (-a[0][1])
return [[v0], [v1]]
###Output
_____no_output_____
BME511/RandomProcesses.ipynb
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/haribharadwaj/notebooks/blob/main/BME511/RandomProcesses.ipynb) Random processesSo far, we have been loose with our definitions of random signals, correlations, etc. Let's try to formalize some of those ideas.
###Code
import numpy as np
import pylab as pl
###Output
_____no_output_____
###Markdown
White Noise Let's begin with a white noise example. White noise is essentially a sequence of independent and identically distributed (IID) normal random numbers. Why is it called white noise?
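The name is by analogy with white light: a white-noise sequence has, on average, equal power at every frequency. A quick empirical check (a sketch; it assumes `scipy` is available, which this notebook imports further down):

```Python
from scipy import signal
import numpy as np
import pylab as pl

x = np.random.randn(50000)   # one long white-noise realization
f, Pxx = signal.welch(x)     # Welch-averaged power spectral density estimate
pl.plot(f, Pxx)              # roughly flat across all frequencies
pl.xlabel('Normalized frequency')
pl.ylabel('PSD estimate')
```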
###Code
N = 500
for k in range(5):
x = np.random.randn(N) # Generate 500 sample white noise
pl.plot(x)
pl.xlabel('Sample Number')
###Output
_____no_output_____
###Markdown
"Moving Average" (MA) processHere, let's start with IID draws, but then pass it out using a moving average filter. What does that do to the signal?
###Code
n_ma = 50 # Do a 50-sample moving average
h = np.ones(n_ma) / (n_ma ** 0.5)  # scaling by sqrt(n_ma) keeps the output variance equal to the input variance
from scipy import signal
for k in range(5):
y = signal.lfilter(h, 1, np.random.randn(N))
pl.plot(y)
pl.xlabel('Sample Number')
###Output
_____no_output_____
###Markdown
It's clear from above that the white-noise process and the MA process are fundamentally different in some way. How can we characterize this difference? Autocorrelation function
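As a reference, a sketch of the quantity that `signal.correlate(x, x, 'full')` estimates at lag $\ell$ (the unnormalized sample autocorrelation):

$$r_{xx}[\ell] = \sum_{n} x[n]\, x[n+\ell]$$

where the sum runs over all samples for which both indices are valid. For white noise this should be a sharp peak at $\ell = 0$ and near zero elsewhere.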
###Code
rxx = signal.correlate(x, x, 'full')
lags_x = signal.correlation_lags(x.shape[0], x.shape[0], 'full')
pl.plot(lags_x, rxx)
pl.xlabel('Lag (samples)')
nrep = 1000
ryy_ave = 0
for k in range(nrep):
y = signal.lfilter(h, 1, np.random.randn(N))
ryy = signal.correlate(y, y, 'full')
ryy_ave += ryy
lags_y = signal.correlation_lags(y.shape[0], y.shape[0], 'full')
pl.plot(lags_y, ryy_ave/nrep)
pl.xlabel('Lag (samples)')
###Output
_____no_output_____
###Markdown
Cross correlation function
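Analogously, `signal.correlate(x, y, 'full')` estimates the sample cross-correlation, sketched (up to the sign convention for the lag) as

$$r_{xy}[\ell] = \sum_{n} x[n+\ell]\, y[n]$$

For two independent sequences it should hover around zero at every lag.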
###Code
N = 500
nreps = 1
rxx_ave = 0
rxy_ave = 0
for k in range(nreps):
x = np.random.randn(N) # Generate 500 sample white noise
y = np.random.randn(N)
rxx = signal.correlate(x, x, 'full')
rxx_ave += rxx
rxy = signal.correlate(x, y, 'full')
rxy_ave += rxy
lags = signal.correlation_lags(x.shape[0], y.shape[0], 'full')
pl.plot(lags, rxx_ave/nreps)
pl.plot(lags, rxy_ave/nreps)
pl.xlabel('Lag (samples)')
n_ma = 50 # Do a 50-sample moving average
h = np.ones(n_ma) / (n_ma ** 0.5)
nreps = 50
rxx_ave = 0
rxy_ave = 0
for k in range(nreps):
x = signal.lfilter(h, 1, np.random.randn(N))
y = signal.lfilter(h, 1, np.random.randn(N))
rxx = signal.correlate(x, x, 'full')
rxx_ave += rxx
rxy = signal.correlate(x, y, 'full')
rxy_ave += rxy
lags = signal.correlation_lags(x.shape[0], y.shape[0], 'full')
pl.plot(lags, rxx_ave/nreps)
pl.plot(lags, rxy_ave/nreps)
pl.xlabel('Lag (samples)')
###Output
_____no_output_____
docs/A simple klausen model.ipynb
A simple `klausen` modelThe `klausen` package provides an easy way to write stochastic models in Python using named parameters. The lifecycle of a `klausen` calculation has three steps:* The modeller defines a set of (quasi-)independent input parameters in a `klausen.NamedParameters` instance. The uncertainty of these parameters can be defined by probability distribution functions (defined using [stats_arrays](https://stats-arrays.readthedocs.io/en/latest/)), or by providing a population data sample.* The input parameters are provided to a Python model, which is executed. The model should be ready to accept each parameter as a one-dimensional Numpy array.* The model outputs are then directly analyzed, or exported to serve as inputs for other calculations (e.g. life cycle assessment using [Brightway](https://brightwaylca.org/)). First step: Input parametersIn this simple example, we will examine the behaviour of a motor scooter. We will define two input parameters - standard fuel consumption (kg / km), and the behaviour of the driver (unitless). Fuel consumption will follow the Gaussian distribution, with a mean of 10 grams of gasoline per kilometer, a standard deviation of 3, and a minimum of 5. We assume that driver behaviour is a multiplier of fuel consumption, and follows a triangular distribution from 0.8 (minimum) to 1 (mode) to 1.2 (maximum).
###Code
import klausen
import presamples
import stats_arrays as sa
import numpy
parameters = {
'fuel_consumption': {
'loc': 0.01,
'scale': 0.003,
'minimum': 0.005,
'uncertainty_type': sa.NormalUncertainty.id
},
'driver': {
'uncertainty_type': sa.TriangularUncertainty.id,
'minimum': 0.8,
'loc': 1,
'maximum': 1.2
}
}
np = klausen.NamedParameters(parameters)
###Output
_____no_output_____
###Markdown
We want to do Monte Carlo analysis, so we tell the `NamedParameters` object to generate samples.
###Code
np.stochastic(iterations=1000)
###Output
_____no_output_____
###Markdown
Second step: ModelAs our example is quite simple, the model can also be quite simple.In the model, we assume that complete combustion of one kilogram of gasoline produces three kilograms of $CO_{2}$. This could also have been an uncertain parameter specified in step one.
###Code
def scooter_model(np):
actual_fuel_consumption = np['fuel_consumption'] * np['driver']
co2 = 3 * actual_fuel_consumption
return numpy.vstack((
numpy.array(actual_fuel_consumption),
numpy.array(co2)
))
results = scooter_model(np)
###Output
_____no_output_____
###Markdown
Third step: Interpretation or reuseIn this case, we will import the results into Brightway and link against ecoinvent.We will use the [presamples package](https://github.com/PascalLesage/brightway2-presamples) to substitute in our numbers during Monte Carlo *and* during static LCA. We start by defining the values to be used during Monte Carlo.We already have the values for $CO_{2}$ and fuel consumption, but we still need to know which *exchanges* in ecoinvent to change. There are better ways to do this with a lot of output parameters, but in our case we can just find the ones we want directly.
###Code
import brightway2 as bw
assert "ecoinvent 3.5 cutoff" in bw.databases
co2 = next(x for x in bw.Database("biosphere3")
if x['name'] == 'Carbon dioxide, fossil'
and x['categories'] == ('air',)).key
scooter = next(x for x in bw.Database("ecoinvent 3.5 cutoff")
if x['name'] == 'transport, passenger, motor scooter'
and x['location'] == 'CH').key
petrol = next(x for x in bw.Database("ecoinvent 3.5 cutoff")
if x['name'] == 'market for petrol, low-sulfur'
and x['location'] == 'RoW').key
_, stochastic_filepath = presamples.create_presamples_package(
matrix_data=[
(
results[0, :].reshape((1, -1)),
[(petrol, scooter, 'technosphere')],
'technosphere',
), (
results[1, :].reshape((1, -1)),
[(co2, scooter, 'biosphere')],
'biosphere'
),
],
name='Simple Klausen example'
)
np.static()
results = scooter_model(np).reshape((-1, 1))
_, static_filepath = presamples.create_presamples_package(
matrix_data=[
(
results[0, :].reshape((1, -1)),
[(petrol, scooter, 'technosphere')],
'technosphere',
), (
results[1, :].reshape((1, -1)),
[(co2, scooter, 'biosphere')],
'biosphere'
),
],
name='Simple Klausen example'
)
IPCC = ('IPCC 2013', 'climate change', 'GWP 100a')
lca = bw.LCA({scooter: 1}, IPCC)
lca.lci()
lca.lcia()
lca.score
lca = bw.LCA({scooter: 1}, IPCC, presamples=[static_filepath])
lca.lci()
lca.lcia()
lca.score
mc = bw.MonteCarloLCA({scooter: 1}, IPCC, presamples=[stochastic_filepath])
mc_results = numpy.array([next(mc) for _ in range(500)])
%matplotlib inline
import seaborn as sb
sb.distplot(mc_results)
###Output
_____no_output_____
Module -1/Class -1 (Python for Analytics)/OOPs Concepts/OOPs Concepts.ipynb
Creating a class* This class will have a limited number of attributes* The `__init__` function acts as the constructor here* `Table_details` is a method inside the `Data` class* Objects such as `Data_set1`, `Data_set2`, etc. can be created from the `Data` class* This class is re-usable
###Code
class DE_class:
class_type = "Data Engineering" # class attribute
def __init__(self, class_name, number_of_classes): # the constructor method
self.name = class_name # instance attribute
self.total_classes = number_of_classes # instance attribute
def description(self): # instance method - This method has no additional parameter.
#This method is using the instance attributes.
return f"The {self.class_type} will have {self.total_classes} classes for {self.name} module"
Session1=DE_class("Python for DE",3)
print(Session1.class_type)
print(Session1.description())
class DE_class: #parent class
class_type = "Data Engineering" # class attribute
def __init__(self, class_name, number_of_classes):
self.name = class_name
self.total_classes = number_of_classes
def description(self): # instance method - This method has no additional parameter.
#This method is using the instance attributes.
return f"The {self.class_type} will have {self.total_classes} classes for {self.name} module"
class Python(DE_class): #child class
pass
class SQL(DE_class): #child class
def sql_desc(self):
return "This is the description method of class SQL."
obj1 = Python("Python for DE",3)
print(obj1.description())
obj2 = SQL("SQL for DE",2)
print(obj2.description())
print(obj2.sql_desc())
class Data: # Class
def __init__(self,table_name,row_count,feature_count,fact_table): # Class Constructor
self.Name_of_Table=table_name # statements inside the constructor method
self.Records=row_count
self.Total_Columns=feature_count
self.Type=fact_table
def Table_details(self): # Class method
return "Table as input to model: {}".format(self.Type)
Data_set_1=Data("Customer",10000,20,0) # Object
Data1=Data("Customer",10000,20,0)
Data1.Table_details()
print(Data1.Name_of_Table)
obj1 = Python("Python for DE", 3)
print(obj1.description())
obj2 = SQL("SQL for DE", 2)
print(obj2.description())
print(obj2.sql_desc())
class Data: # Class
def __init__(self,table_name,row_count,feature_count,fact_table): # Class Constructor
self._Name=table_name # protected variable
self.Records=row_count
self.Total_Columns=feature_count
self.Type=fact_table
def description(self): # Class method
return f"The {self._Name} has str of {self.Records} * {self.Total_Columns} and status on i/p to model is{self.Type}"
Data_set_1=Data("Customer",10000,20,0)
#accessing protected variable via class method
print(Data_set_1.description())
class Data: # Class
def __init__(self,table_name,row_count,feature_count,fact_table): # Class Constructor
self._Name=table_name # protected variable
self.__Records=row_count # private variable
self.__Total_Columns=feature_count # private variable
self.Type=fact_table # public
def description(self): # Class method
return f"The {self._Name} has str of {self.__Records} * {self.__Total_Columns} and status on fact is {self.Type}"
Data_set_1=Data("Customer",10000,20,0)
#accessing protected variable via class method
print(Data_set_1.description())
# Accessing public, protected and private variables directly from outside
print(Data_set_1.Type)
print(Data_set_1._Name)
print(Data_set_1.__Records) # raises AttributeError: name mangling hides private attributes outside the class
class Python:
    def description(self):
        print("This is the description of class Python.")

class SQL:
    def description(self):
        print("This is the description of class SQL.")
class1 = Python()
class2 = SQL()
for Data_Engineering in (class1,class2):
Data_Engineering.description()
###Output
This is the description of class Python.
This is the description of class SQL.
###Markdown
When the `description` function is called using the object `class1`, the method of class `Python` is called, and when it is called using the object `class2`, the method of class `SQL` is called.
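The same behavior can be seen with a plain function: Python only requires that the object passed in has a `description` method (duck typing). A minimal sketch reusing the two classes defined above (the helper name `describe` is just for illustration):

```Python
def describe(obj):
    # Works for any object that exposes a description() method
    obj.description()

describe(Python())
describe(SQL())
```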
###Code
from abc import ABC # abc refers to abstract base class
class type_shape(ABC):
def area(self):
#abstract method
pass
class Rectangle(type_shape):
length = 6
breadth = 4
def area(self):
return self.length * self.breadth
class Circle(type_shape):
radius = 7
def area(self):
return 3.14 * self.radius * self.radius
r = Rectangle() # object created for the class 'Rectangle'
c = Circle() # object created for the class 'Circle'
print("Area of a rectangle:", r.area()) # call to 'area' method defined inside the class.
print("Area of a circle:", c.area()) # call to 'area' method defined inside the class.
class Data(ABC): # Class
def __init__(self,table_name,row_count,feature_count,fact_table): # Class Constructor
self._Name=table_name # protected variable
self.Records=row_count
self.Total_Columns=feature_count
self.Type=fact_table
def description(self): # Class method
return f"The {self._Name} has str of {self.Records} * {self.Total_Columns} and status on i/p to model is{self.Type}"
class structure(Data): # child class of Data; an empty class body needs an explicit pass
    pass
from abc import ABC # abc refers to abstract base class
class Structure_Data(ABC):
    def description(self):
        return "Logic implemented to check whether the dataset is structured data"

class Unstructure_Data(ABC):
    def description(self):
        return "Logic implemented to check whether the dataset is unstructured data"
# object 's' created for the class 'Structure_Data'
s = Structure_Data()
# object 'u' created for the class 'Unstructure_Data'
u = Unstructure_Data()
# call the 'description' method defined inside the class to identify datasets
print("Input Dataset is a structed dataset based on : ", s.description())
print("Input Dataset is an unstructed dataset based on: ", u.description())
###Output
Input dataset is a structured dataset based on:  Logic implemented to check whether the dataset is structured data
Input dataset is an unstructured dataset based on:  Logic implemented to check whether the dataset is unstructured data
09-machine-learning-model-for-a-metalworking-enterprise/machine-learning-model-for-a-metalworking-enterprise.ipynb
Project descriptionPrepare a prototype machine learning model for Zyfra. The company develops solutions for the efficient operation of industrial enterprises.The model should predict the recovery rate of gold from gold-bearing ore. At your disposal are data with extraction and purification parameters.The model will help optimize production so that the plant is not run with loss-making characteristics.You need to:- Prepare the data;- Perform exploratory data analysis;- Build and train a model.- To complete the project, refer to the pandas, matplotlib and sklearn libraries. Their documentation will help you. Data description Technological process- Rougher feed — raw feed- Rougher additions (or reagent additions) — flotation reagents: Xanthate, Sulphate, Depressant- Xanthate — xanthate (promoter, or flotation activator);- Sulphate — sulphate (sodium sulphide in this production process);- Depressant — depressant (sodium silicate).- Rougher process — flotation- Rougher tails — flotation tailings- Float banks — flotation unit- Cleaner process — purification- Rougher Au — rougher gold concentrate- Final Au — final gold concentrate- Stage parameters- air amount — volume of air- fluid levels — fluid level- feed size — feed granule size- feed rate — feed rate Feature namingFeature names follow this pattern:[stage].[parameter_type].[parameter_name]Example: rougher.input.feed_ag Possible values for the [stage] block:- rougher — flotation- primary_cleaner — primary purification- secondary_cleaner — secondary purification- final — final characteristics Possible values for the [parameter_type] block:- input — raw material parameters- output — product parameters- state — parameters characterizing the current state of the stage- calculation — calculated characteristics Project outline: 1. Data preparation Open the files and examine them Find the MAE between our calculations and the feature values Analyze the features unavailable in the test set Preprocess the data Conclusions Data analysis Look at how metal concentrations change at different purification stages Compare the feed granule size distributions Examine the total concentration of all substances at different stages Model Write a function to compute the final sMAPE Train different models and evaluate their quality with cross-validation Overall conclusion 1. Data preparation
###Code
# Import all the necessary libraries
import os
import urllib.request
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import itertools
import math
from pathlib import Path
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn import linear_model
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.utils import shuffle
from sklearn.model_selection import TimeSeriesSplit, cross_val_score, StratifiedKFold, GridSearchCV, cross_validate, KFold
from sklearn.metrics import roc_auc_score, roc_curve, precision_recall_curve, mean_absolute_error, make_scorer, make_scorer
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
class DataScience:
def path_to_files(self, path, link):
Path('datasets').mkdir(parents=True, exist_ok=True)
def get_file(file_name, url):
if not os.path.exists(file_name):
                print(file_name, 'file not found, it will be downloaded from the network')
_ = urllib.request.urlretrieve(url, file_name)
urls = {
'dataset': (path, link)
}
[get_file(*urls[k]) for k in urls]
data = pd.read_csv(urls['dataset'][0])
return data
def clean_dataset(self, df):
assert isinstance(df, pd.DataFrame)
df.dropna(inplace=True)
#indices_to_keep = ~df.isin([np.nan, np.inf, -np.inf]).any(1)
#return df[indices_to_keep]#.astype(np.float64)
def missing_zero_values_table(self,df):
zero_val = (df == 0.00).astype(int).sum(axis=0)
mis_val = df.isnull().sum()
mis_val_percent = 100 * df.isnull().sum() / len(df)
mz_table = pd.concat([zero_val, mis_val, mis_val_percent], axis=1)
mz_table = mz_table.rename(
columns = {0 : 'Zero Values', 1 : 'Missing Values', 2 : '% of Total Values'})
mz_table['Total Zero Missing Values'] = mz_table['Zero Values'] + mz_table['Missing Values']
mz_table['% Total Zero Missing Values'] = 100 * mz_table['Total Zero Missing Values'] / len(df)
mz_table['Data Type'] = df.dtypes
mz_table = mz_table[
mz_table.iloc[:,1] != 0].sort_values(
'% of Total Values', ascending=False).round(1)
print ("Your selected dataframe has " + str(df.shape[1]) + " columns and " + str(df.shape[0]) + " Rows.\n"
"There are " + str(mz_table.shape[0]) +
" columns that have missing values.")
return mz_table
def differences_in_columns(self, df1, df2):
df1_cols = df1.columns
df2_cols = df2.columns
        diff = df1_cols.difference(df2_cols) # features present in df1 but not in df2
counter = 0
        print('======= Differing columns ======')
for i in diff:
counter += 1
print("{}. {}".format(counter,i))
        print('\n')
        print('Number of columns in df1:', len(df1_cols))
        print('Number of columns in df2:', len(df2_cols))
        print('\nTotal number of differing columns: ', counter)
def common_in_columns(self, df1, df2):
df1_cols = df1.columns
df2_cols = df2.columns
        common_cols = df1_cols.intersection(df2_cols) # features present in both dataframes
counter = 0
for i in common_cols:
counter += 1
print("{}. {}".format(counter,i))
        print('\n')
        print('Number of columns in df1:', len(df1_cols))
        print('Number of columns in df2:', len(df2_cols))
        print('Total number of common columns: ', counter)
###Output
_____no_output_____
###Markdown
1.1. Open the files and examine them
###Code
ds = DataScience()
# Load the data
train = ds.path_to_files('gold_recovery_train.csv', 'https://code.s3.yandex.net/datasets/gold_recovery_train.csv')
test = ds.path_to_files('gold_recovery_test.csv', 'https://code.s3.yandex.net/datasets/gold_recovery_test.csv')
full = ds.path_to_files('gold_recovery_full.csv', 'https://code.s3.yandex.net/datasets/gold_recovery_full.csv')
# Look at the dataset sizes
print('Training set size: {} '.format(train.shape))
print('Test set size: {} '.format(test.shape))
print('Full dataset size: {} '.format(full.shape))
###Output
Training set size: (16860, 87) 
Test set size: (5856, 53) 
Full dataset size: (22716, 87) 
###Markdown
- The test set does not contain all the columns... We will need to look into which columns these are and why they are missing
###Code
display(train.head())
display(test.head())
display(full.head())
# Examine our datasets
display(train.info())
display(test.info())
display(full.info())
# Examine our datasets with the describe method
print('===========Training set============')
display(train.describe())
print('===========Test set============')
display(test.describe())
print('============Full=========================')
display(full.describe())
# Unique values in the tables
print('===========Test set============')
display(test.nunique())
print('===========Training set============')
display(train.nunique())
print('============Full=========================')
display(full.nunique())
# Check for duplicates
print('Test set, number of duplicates: ', test.duplicated().sum())
print('Training set, number of duplicates: ', train.duplicated().sum())
print('Full dataset, number of duplicates: ', full.duplicated().sum())
###Output
Test set, number of duplicates:  0
Training set, number of duplicates:  0
Full dataset, number of duplicates:  0
###Markdown
1.2. Check that the enrichment efficiency is calculated correctly. Compute it on the training set for the "rougher.output.recovery" feature. Find the MAE between our calculations and the feature values. Describe the conclusions.
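For reference, the recovery formula implemented in the code below (C is the share of gold in the concentrate, F in the feed, T in the tails):

$$\mathrm{Recovery} = \frac{C \cdot (F - T)}{F \cdot (C - T)} \cdot 100\%$$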
###Code
def recovery(c,f,t):
    # Calculate the efficiency using the recovery formula given in the project instructions
prerecovery = (c*(f-t)) / (f*(c-t))
recovery = prerecovery * 100
    # Replace values that are too small or too large with np.nan
recovery[recovery<0] = np.nan
recovery[recovery>100] = np.nan
return recovery
c = train['rougher.output.concentrate_au']
f = train['rougher.input.feed_au']
t = train['rougher.output.tail_au']
recovery_list = recovery(c,f,t)
# Compute the MAE, but first fill the NaN values with 0
mae = mean_absolute_error(train['rougher.output.recovery'].fillna(0),recovery_list.fillna(0))
print('Mean absolute error:', mae)
# Count the missing values in our calculations and in the feature
print('Missing values in recovery:', recovery_list.isna().sum())
print('Missing values in train[recovery]:', train['rougher.output.recovery'].isna().sum())
###Output
_____no_output_____
###Markdown
- 2573 missing values were found- To compute the mean absolute error, the missing values were replaced with 0- Based on the results, the mean absolute error came out to about 8, so we can consider our calculations reasonably accurate.
###Code
# Examine the missing values in the train table
ds.missing_zero_values_table(train)
# Examine the missing values in the test table
ds.missing_zero_values_table(test)
###Output
Your selected dataframe has 53 columns and 5856 Rows.
There are 51 columns that have missing values.
###Markdown
- It is plain to see that there are far more missing values in the training set than in the test set. We will need to think about how, and with what, to fill the missing values before moving on to building our model. 1.3. Analyze the features that are unavailable in the test set. What are these parameters, and what type do they belong to?
###Code
# Compare the features in the test and training sets. Find the differing features
ds.differences_in_columns(train, test)
###Output
======= Differing columns ======
1. final.output.concentrate_ag
2. final.output.concentrate_au
3. final.output.concentrate_pb
4. final.output.concentrate_sol
5. final.output.recovery
6. final.output.tail_ag
7. final.output.tail_au
8. final.output.tail_pb
9. final.output.tail_sol
10. primary_cleaner.output.concentrate_ag
11. primary_cleaner.output.concentrate_au
12. primary_cleaner.output.concentrate_pb
13. primary_cleaner.output.concentrate_sol
14. primary_cleaner.output.tail_ag
15. primary_cleaner.output.tail_au
16. primary_cleaner.output.tail_pb
17. primary_cleaner.output.tail_sol
18. rougher.calculation.au_pb_ratio
19. rougher.calculation.floatbank10_sulfate_to_au_feed
20. rougher.calculation.floatbank11_sulfate_to_au_feed
21. rougher.calculation.sulfate_to_au_concentrate
22. rougher.output.concentrate_ag
23. rougher.output.concentrate_au
24. rougher.output.concentrate_pb
25. rougher.output.concentrate_sol
26. rougher.output.recovery
27. rougher.output.tail_ag
28. rougher.output.tail_au
29. rougher.output.tail_pb
30. rougher.output.tail_sol
31. secondary_cleaner.output.tail_ag
32. secondary_cleaner.output.tail_au
33. secondary_cleaner.output.tail_pb
34. secondary_cleaner.output.tail_sol
Number of columns in df1: 87
Number of columns in df2: 53
Total number of differing columns:  34
###Markdown
- The test set is missing the final.output*, primary_cleaner.output*, rougher.output*, and secondary_cleaner.output* features. - These are product parameters, parameters characterizing the current state of a stage, and calculated characteristics. 1.4. Preprocess the data.
###Code
# Drop the features from the training set that are absent from the test set, and verify right away.
train = train[test.columns]
ds.differences_in_columns(train, test)
# Drop rows with missing values from the training and test sets
ds.clean_dataset(train)
ds.clean_dataset(test)
print('Training set: {} rows, {} features'.format(train.shape[0],train.shape[1]))
print('Test set: {} rows, {} features'.format(test.shape[0], test.shape[1]))
###Output
Training set: 13522 rows, 53 features
Test set: 5383 rows, 53 features
###Markdown
1.5. Conclusions - Dropped from the training set the features that are absent from the test set- Dropped rows with missing values from the training and test sets- The test and training sets now have the same number of features. The test set has 5383 rows and the training set has 13522 rows.- We can proceed to analysis and machine learning 2. Data Analysis 2.1. Look at how the concentration of metals (Au, Ag, Pb) changes at the different purification stages.
###Code
metals = [('au', 'gold'), ('ag', 'silver'), ('pb', 'lead')]
stages = [('rougher.output.concentrate_', 'Flotation'),
          ('primary_cleaner.output.concentrate_', 'First purification stage'),
          ('final.output.concentrate_', 'Second purification stage')]
for i in metals:
plt.figure(figsize=(8,5))
for item in stages:
ax = sns.distplot(full[item[0] + i[0]], label=item[1])
plt.legend()
    _ = ax.set(xlabel='Distribution of ' + i[1] + ' concentration',
               title='Change in ' + i[1] + ' concentration at each purification stage')
plt.show()
###Output
_____no_output_____
###Markdown
- As the plots show, the gold concentration increases after each purification stage, which cannot be said of the other metals.- The plots show a large number of outliers at zero. 2.2. Compare the feed granule size distributions in the training and test sets. If the distributions differ strongly from each other, the model evaluation will be incorrect.
###Code
plt.figure(figsize=(8,5))
sns.distplot(test['rougher.input.feed_size'], label='Test set')
sns.distplot(train['rougher.input.feed_size'], label='Training set')
plt.legend()
###Output
_____no_output_____
###Markdown
- The plot shows that the distributions are the same, meaning the granule sizes in the test and training sets are practically identical. 2.3. Examine the total concentration of all substances at different stages: in the raw feed, and in the rougher and final concentrates. Are there anomalous values in the total distribution? If there are, should they be removed from both sets? Describe the conclusions and remove the anomalies.
###Code
concentration_stages = [('rougher.input.feed_', 'in the raw feed'),
                        ('rougher.output.concentrate_', 'in the rougher concentrate'),
                        ('final.output.concentrate_', 'in the final concentrate')]
fig, axs = plt.subplots(1, len(concentration_stages), figsize=(20, 6))
fig.suptitle('Total concentration of all substances at different stages', fontsize=15)
for stage, ax in zip(concentration_stages, axs):
ax.set_title(stage[1])
full_sum = full[stage[0]+ 'ag'] + full[stage[0]+ 'au'] + full[stage[0]+ 'pb'] + full[stage[0]+ 'sol']
sns.distplot(full_sum, ax=ax)
plt.show()
###Output
_____no_output_____
###Markdown
- Видим большой столбец в нуле, очевидно, что это аномалии, так как не может быть, чтобы концентрации всех веществ вместе в какой-то стадии были равны нулю.- Удалим аномалии и отрисуем графики заново
###Code
# Remove the anomalies
full_sum = full.replace(0, np.nan)
full_sum = full_sum.dropna(how='all', axis=0)
ds.clean_dataset(full_sum)
# Look at the plot again without the anomalies
fig, axs = plt.subplots(1, len(concentration_stages), figsize=(20, 6))
fig.suptitle('Total concentration of all substances at different stages', fontsize=15)
for stage, ax in zip(concentration_stages, axs):
ax.set_title(stage[1])
final = full_sum[stage[0]+ 'ag'] + full_sum[stage[0]+ 'au'] + full_sum[stage[0]+ 'pb'] + full_sum[stage[0]+ 'sol']
sns.distplot(final.replace(0,np.nan).dropna(), ax=ax)
plt.show()
###Output
_____no_output_____
###Markdown
- The plots show that at the final stage the total concentration of substances decreases severalfold. 3. Model 3.1. Write a function to compute the final sMAPE.
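For reference, the metric implemented below, the symmetric mean absolute percentage error over $N$ observations:

$$\mathrm{sMAPE} = \frac{1}{N} \sum_{i=1}^{N} \frac{2\,\lvert y_i - \hat{y}_i \rvert}{\lvert y_i \rvert + \lvert \hat{y}_i \rvert} \cdot 100\%$$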
###Code
# Based on the formula from the project instructions, implement the sMAPE quality metric
def smape(y_target, y_pred):
return ((1/len(y_target)) * np.sum(2 * np.abs(y_target - y_pred) / (np.abs(y_target) + np.abs(y_pred)))) * 100
###Output
_____no_output_____
###Markdown
3.2. Train different models and evaluate their quality with cross-validation. Choose the best model and check it on the test set. Describe the conclusions. Work plan: - Models to use: Lasso, RandomForest, LinearRegression - Use GridSearch to tune the parameters and find the best ones. - Choose the best model and test it on the test set. - Evaluate the results with the sMAPE metric.
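The final score combines the two stages with fixed weights, matching the `final_smape` computation at the end of this section:

$$\mathrm{sMAPE}_{final} = 0.25 \cdot \mathrm{sMAPE}(\text{rougher}) + 0.75 \cdot \mathrm{sMAPE}(\text{final})$$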
###Code
# Drop the date columns
test.drop(['date'], axis=1, inplace=True)
train.drop(['date'], axis=1, inplace=True)
X_train_rougher = train
X_test_rougher = test
y_train_rougher = full.loc[X_train_rougher.index, 'rougher.output.recovery']
y_test_rougher = full.loc[X_test_rougher.index,'rougher.output.recovery']
X_train_final = train
X_test_final = test
y_train_final = full.loc[X_train_final.index, 'final.output.recovery']
y_test_final = full.loc[X_test_final.index, 'final.output.recovery']
print(X_train_rougher.shape, X_train_final.shape)
print(X_test_rougher.shape, X_test_final.shape)
print(y_train_rougher.shape, y_train_final.shape)
print(y_test_rougher.shape, y_test_final.shape)
# We will use three models; below we specify which parameters GridSearchCV should tune
from numpy.random import RandomState
state = RandomState(12345)
pipe = Pipeline([
('imp', SimpleImputer(missing_values=np.nan)),
('scaler', StandardScaler()),
('model', RandomForestRegressor(n_estimators=100, random_state=state))
])
params = [
{
'imp__strategy': ['mean', 'median'],
'model': [RandomForestRegressor(n_estimators=10, random_state=state)],
'model__max_features': np.linspace(0.1, 1, 10)
}, {
'imp__strategy': ['mean', 'median'],
'model': [LinearRegression()]
}, {
'imp__strategy': ['mean', 'median'],
'model': [linear_model.Lasso(random_state=state)],
'model__alpha': np.logspace(-3, 1, 10)
}
]
y_train_final = y_train_final.fillna(y_train_final.mean())
y_train_rougher = y_train_rougher.fillna(y_train_rougher.mean())
# Wrap the smape metric as a scorer for GridSearchCV
from sklearn.metrics import make_scorer, mean_squared_error
smape_score = make_scorer(smape, greater_is_better=False)
# Cross-validate with KFold, splitting into 5 folds
cv = KFold(n_splits=5, shuffle=False)
grid_rougher = GridSearchCV(pipe, param_grid=params, cv=cv, n_jobs=-1, scoring=smape_score)
grid_rougher.fit(X_train_rougher, y_train_rougher)
# Look at the best parameters found
print('Best Params:', grid_rougher.best_params_)
print('Best smape Score:', -grid_rougher.best_score_)
# Search for the best parameters for the final stage
grid_final = GridSearchCV(pipe, param_grid=params, cv=cv, n_jobs=-1,scoring=smape_score)
grid_final.fit(X_train_final, y_train_final)
# Print the best parameters found for the final stage
print('Best Params:', grid_final.best_params_)
print('Best smape Score:', -grid_final.best_score_)
# Model with the best parameters on the test set (rougher stage)
pipe_rougher = grid_rougher.best_estimator_
pipe_rougher.fit(X_train_rougher, y_train_rougher)
y_pred = pipe_rougher.predict(X_test_rougher)
smape_rougher = smape(full.loc[X_test_rougher.index, 'rougher.output.recovery'], y_pred)
smape_rougher
# Model with the best parameters on the test set (final stage)
pipe_final = grid_final.best_estimator_
pipe_final.fit(X_train_final, y_train_final)
y_pred_final = pipe_final.predict(X_test_final)
smape_final = smape(full.loc[X_test_rougher.index, 'final.output.recovery'],y_pred_final)
smape_final
# Final sMAPE
final_smape = 0.25*smape_rougher + 0.75*smape_final
final_smape
rougher_median = pd.Series(y_train_rougher.median(), index=y_test_rougher.index)
final_median = pd.Series(y_train_final.median(), index=y_test_final.index)
total = (smape(y_test_rougher, rougher_median)*0.25) + (smape(y_test_final, final_median)* 0.75)
print(total)
###Output
14.235686556756935
notebook/analisador.ipynb
some configs for pyplot https://stackoverflow.com/questions/332289/how-do-you-change-the-size-of-figures-drawn-with-matplotlib
###Code
experimento = 'experimento_02'
sp_str = '50'
%rm *.eps
# %ls ../etc
from scipy.optimize import curve_fit
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import matplotlib
import scipy
print('Matplotlib Version:{}'.format(matplotlib.__version__))
print('Numpy Version:{}'.format(np.__version__))
print('Pandas Version:{}'.format(pd.__version__))
print('Scipy Version:{}'.format(scipy.__version__))
motor_d = pd.read_csv('../etc/motor_direito_'+sp_str+ '_'+experimento+ '.csv')
motor_e = pd.read_csv('../etc/motor_esquerdo_'+sp_str+'_'+experimento+'.csv')
motor_d_c = pd.read_csv('../etc/control_motor_direito_'+sp_str+ '_'+experimento+ '.csv')
motor_e_c = pd.read_csv('../etc/control_motor_esquerdo_'+sp_str+'_'+experimento+'.csv')
sp = motor_e_c['SET_POINT'][0]
t = motor_e['TIME']
we_raw = motor_e['OMEGA_RAW']
wd_raw = motor_d['OMEGA_RAW']
we = motor_e['OMEGA_FILTERED']
wd = motor_d['OMEGA_FILTERED']
t_c = motor_e_c['TIME']
we_c= motor_e_c['OMEGA_FILTERED']
wd_c= motor_d_c['OMEGA_FILTERED']
wref = motor_e_c['SET_POINT']*motor_e_c['OMEGA_MAX']
R_e = motor_e['OMEGA_RAW'][motor_e['TIME'] >= motor_e['TAU']*10].var()
R_d = motor_d['OMEGA_RAW'][motor_d['TIME'] >= motor_d['TAU']*10].var()
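# First-order step-response model used for the curve fits below: w(t) = K*(1 - exp(-t/Tm)),
# where K is the steady-state gain and Tm the time constant.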
def func(t,K,Tm):
return K*(1.0 - np.exp(-t/Tm))
we_opt, _ = curve_fit(func, t, we, bounds=([-10000., 0.01], [10000., 1000]))
we_c_opt,_ = curve_fit(func, t, we_c, bounds=([-10000., 0.01], [10000., 1000]))
wd_opt,_ = curve_fit(func, t, wd, bounds=([-10000., 0.01], [10000., 1000]))
wd_c_opt,_ = curve_fit(func, t, wd_c, bounds=([-10000., 0.01], [10000., 1000]))
we_opt_raw = [motor_e['K'][0], motor_e['TAU'][0]]
wd_opt_raw = [motor_d['K'][0], motor_d['TAU'][0]]
we_c_opt_raw,_ = curve_fit(func, t, motor_d_c['OMEGA_RAW'], bounds=([-10000., 0.01], [10000., 1000]))
wd_c_opt_raw,_ = curve_fit(func, t, motor_d_c['OMEGA_RAW'], bounds=([-10000., 0.01], [10000., 1000]))
R_e
R_d
# data = np.array([
# [1905.72803 , 1.00000],
# [1821.21313 , 0.95750],
# [1729.47572 , 0.91500],
# [1653.03481 , 0.87250],
# [1616.04560 , 0.83000],
# [1580.67555 , 0.78750],
# [1480.13788 , 0.74500],
# [1376.98560 , 0.70250],
# [1230.54941 , 0.66000],
# [1111.08493 , 0.61750],
# [989.78975 , 0.57500],
# [877.41730 , 0.53250],
# [785.88934 , 0.49000],
# [678.45646 , 0.44750],
# [604.61752 , 0.40500],
# [507.48609 , 0.36250],
# [371.34665 , 0.32000],
# [210.34399 , 0.27750],
# [90.20955 , 0.23500],
# [0.00000 , 0.19250]])
# x = data[:,0]
# y = data[:,1]
# def _calcTau(t,w, wss):
# Sxx = 0.0
# Sxy = 0.0
# y = 0.0
# for i in range(len(t)):
# if (abs(w[i]) > 0.2*abs(wss)) and (abs(w[i]) < 0.8*abs(wss)):
# y = -np.log(1.0 - w[i]/wss);
# Sxy+= t[i]*y;
# Sxx+= t[i]*t[i];
# return Sxx/Sxy
# def line(x, a,b):
# return x*a + b
# pop, pcov = curve_fit(line, x, y, bounds=([0, 0], [np.inf, 1.0]))
# print(pop,pcov)
# plt.plot(x,y, label='data')
# plt.plot(x,line(x,pop[0], pop[1]), '--k')
# plt.ylim([0,1])
# plt.show()
# def _calcTau(t,w, wss):
# Sxx = 0.0
# Sxy = 0.0
# y = 0.0
# for i in range(len(t)):
# if (abs(w[i]) > 0.2*abs(wss)) and (abs(w[i]) < 0.8*abs(wss)):
# y = -np.log(1.0 - w[i]/wss);
# Sxy+= t[i]*y;
# Sxx+= t[i]*t[i];
# return Sxx/Sxy
# data = np.array([[0.00001, 0.00000],
# [0.00419, 0.00000],
# [0.00919, 1.39470],
# [0.01418, 1.40516],
# [0.01919, 325.16614],
# [0.02419, 325.16614],
# [0.02918, 414.23954],
# [0.03419, 709.00308],
# [0.03918, 841.12253],
# [0.04419, 953.29773]])
# x = data[:,0]
# y = data[:,1]
###Output
_____no_output_____
###Markdown
CALIBRATION TESTING
###Code
motor_e.head()
motor_d.head()
print('Left motor')
print('| Km | Tm | R | Kp |')
print('| {:.2f} | {:.2e} | {:.2f} | {:.2e}|'.format(motor_e['K'][0], motor_e['TAU'][0], R_e, motor_e['FORWARD_KP'][0]))
print('Right motor')
print('| Km | Tm | R | Kp |')
print('| {:.2f} | {:.2e} | {:.2f} | {:.2e}|'.format(motor_d['K'][0], motor_d['TAU'][0], R_d, motor_d['FORWARD_KP'][0]))
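# Linear feedforward map from steady-state wheel speed to PWM duty, u = a*w + b,
# fitted separately for the forward and backward directions during calibration.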
def reta(x, a, b):
return a*x + b
# PWM vs omega curve
# Left motor
wmax = motor_e['OMEGA_MAX'][0]
w = np.linspace(0,wmax, 10)
ae_f = motor_e['FORWARD_ANG_COEF'][0]
be_f = motor_e['FORWARD_LIN_COEF'][0]
ae_b = motor_e['BACK_ANG_COEF'][0]
be_b = motor_e['BACK_LIN_COEF'][0]
ue_f = reta(w,ae_f, be_f)
ue_b = reta(-w,ae_b, be_b)
plt.figure(figsize=(8,4))
# plt.title(r'Curva $u(\omega_{ss})$ do Motor Esquerdo')
plt.plot(w, ue_f, 'b', label= r'$u_{f}(\omega) = %.2e\omega + %.2f$'%(ae_f, be_f))
plt.plot(-w, ue_b, 'r',label= r'$u_{b}(\omega) = %.2e\omega %.2f$'%(ae_b, be_b))
plt.ylabel('PWM',fontsize=14)
plt.xlabel(r'$\omega_{ss}$ [rad/s]',fontsize=14)
plt.grid(True)
plt.ylim([-1.1,1.1])
plt.xlim([-wmax-100,wmax+100])
plt.legend(prop={'size':14})
# plt.grid(linestyle="--", linewidth=0.5, color='.25', zorder=-10)
plt.savefig('curva_feedforward_esquerdo' + sp_str + '.eps', format='eps')
plt.show()
# PWM vs omega curve
# Right motor
wmax = motor_d['OMEGA_MAX'][0]
w = np.linspace(0,wmax, 10)
ad_f = motor_d['FORWARD_ANG_COEF'][0]
bd_f = motor_d['FORWARD_LIN_COEF'][0]
ad_b = motor_d['BACK_ANG_COEF'][0]
bd_b = motor_d['BACK_LIN_COEF'][0]
ud_f = reta(w,ad_f, bd_f)
ud_b = reta(-w,ad_b, bd_b)
plt.figure(figsize=(8,4))
# plt.title(r'Curva $u(\omega_{ss})$ do Motor Direito')
plt.plot(w, ud_f, 'b', label= r'$u_{f}(\omega) = %.2e\omega + %.2f$'%(ad_f, bd_f))
plt.plot(-w, ud_b, 'r',label= r'$u_{b}(\omega) = %.2e\omega %.2f$'%(ad_b, bd_b))
plt.ylabel('PWM',fontsize=14)
plt.xlabel(r'$\omega_{ss}$ [rad/s]',fontsize=14)
plt.grid(True)
plt.ylim([-1.1,1.1])
plt.xlim([-wmax-100,wmax+100])
plt.legend(prop={'size':14})
# plt.grid(linestyle="--", linewidth=0.5, color='.25', zorder=-10)
plt.savefig('curva_feedforward_direito' + sp_str + '.eps', format='eps')
plt.show()
plt.figure(figsize=(10,5))
# plt.title('Comportamento Esperado vs Velocidade Medida')
plt.plot(t, we_raw, '-b', label=r'$\omega_{E_{medido}}$')
plt.plot(t, func(t,sp*we_opt_raw[0],we_opt_raw[1]), '--k', label=r'$\omega_E(t) = %.2f\left( 1 - e^{-t/%.2f}\right)$'%(sp*we_opt_raw[0],we_opt_raw[1]))
plt.xlim([0, motor_e_c['TIME'].max()])
plt.ylabel(r'$rad.s^{-1}$',fontsize=14)
plt.xlabel(r'$t$ [s]',fontsize=14)
plt.grid(True)
plt.legend(prop={'size':14})
plt.savefig('regressao_vs_medido_esquerdo' + sp_str + '.eps', format='eps')
plt.show()
plt.figure(figsize=(10,5))
# plt.title('Comportamento Esperado vs Velocidade Medida')
plt.plot(t, wd_raw, '-g', label=r'$\omega_{D_{medido}}$')
plt.plot(t, func(t,sp*wd_opt_raw[0],wd_opt_raw[1]), '--k', label=r'$\omega_D(t) = %.2f\left( 1 - e^{-t/%.2f}\right)$'%(sp*wd_opt_raw[0],wd_opt_raw[1]))
plt.xlim([0, motor_e_c['TIME'].max()])
plt.ylabel(r'$rad.s^{-1}$',fontsize=14)
plt.xlabel(r'$t$ [s]',fontsize=14)
plt.grid(True)
plt.legend(prop={'size':14})
plt.savefig('regressao_vs_medido_direito' + sp_str + '.eps', format='eps')
plt.show()
###Output
_____no_output_____
###Markdown
FILTER TESTING Offline test: Kalman filter
###Code
# plt.figure(num=1,figsize=(15,5))
# plt.title('Função de aproximação para o motor Esquerdo e Ganho do sistema filtrado')
# plt.plot(t, motor_e['OMEGA_RAW'], '-y', label=r'$\omega_{e_{raw}}$')
# plt.plot(t, we, '-b', label=r'$\omega_e$')
# plt.plot(t, func(t,Ke,Tme), '--k', label=r'$\omega(t) = %.2f\left( 1 - e^{-t/%.2f}\right)$'%(Ke,Tme))
# # plt.plot(t, func(t,x_k,motor_e['TAU'][0]), '--g', label=r'$\omega_{cal}(t) = %.2f\left( 1 - e^{-t/%.2f}\right)$'%(x_k,motor_e['TAU'][0]))
# plt.plot(t, func(t,motor_e['SET_POINT']*motor_e['K'][0],motor_e['TAU'][0]), '--r', label=r'$\omega_{cal_{orig}}(t) = %.2f\left( 1 - e^{-t/%.2f}\right)$'%(motor_e['K'][0]*motor_e['SET_POINT'][0],motor_e['TAU'][0]))
# plt.xlim([0, motor_e_c['TIME'].max()])
# plt.ylabel(r'$rad.s^{-1}$')
# plt.xlabel(r'$t$ [s]')
# plt.grid(True)
# plt.legend()
# plt.show()
p0 = 100   # initial estimate covariance
r = 1200   # measurement noise variance
q = 10     # process noise variance
# # inicialização
# # r = motor_e['OMEGA_RAW'][t >= Tme*5].var() #incerteza da medição
# #############################################################
# w_mean = np.zeros_like(t) #omega predito
# w_check = np.zeros_like(t) #omega predito
# w_hat = np.zeros_like(t) #melhor estimativa de oemga (omega filtrado)
# p_check = np.zeros_like(t) #incerteza de omega
# p_hat = np.zeros_like(t)
# K = np.zeros_like(t) #ganho do filtro
# p_check[0] = p_hat[0] = p0
# ##############################################################
# #input
# Tm= motor_e['TAU'][0]
# Kgain= motor_e['K'][0]
# u = motor_e['SET_POINT'][0]*Kgain
# for i in range(1, len(t)):
# # medição
# wz = motor_e['OMEGA_RAW'][i]
# w_mean[i] = (w_mean[i-1] + wz)/2.0
# predição
# w_check[i] = w_hat[i-1] + (u - w_hat[i-1])*(1.0 - np.exp(-(t[i]- t[i-1])/Tm))
# p_check[i] = p_hat[i-1] + q
# # atualização
# K[i] = p_check[i]/(p_check[i]+r)
# w_hat[i] = w_check[i] + K[i]*(wz - w_check[i])
# p_hat[i] = (1 - K[i])*p_check[i]
# plt.figure(figsize=(10, 5), dpi=100)
# # plt.plot(t, w_check, '-b', label=r'$\check{\omega}(t)$')
# plt.plot(t, w_hat, '-g', label=r'$\hat{\omega}(t)$')
# plt.plot(t, motor_e['OMEGA_RAW'], '-k', label=r'$\omega_{raw}(t)$')
# # plt.plot(t, w_mean, '-y', label=r'$\omega_{mean}(t)$')
# # plt.plot(t, motor_e['OMEGA_FILTERED'], '-r', label=r'$\omega_{filtered}(t)$')
# plt.title('Teste filtro de Kalman para o Motor Esquerdo')
# plt.xlim([0, motor_e_c['TIME'].max()])
# plt.ylabel(r'$rad/s$');
# plt.xlabel(r'$t(s)$');
# plt.legend();
# plt.grid(True)
# plt.show();
# # inicialização
# # p0 = 60.0
# # r = motor_d['OMEGA_RAW'][t >= Tme*5].var() #incerteza da medição
# # q = 10 #bias da incerteza (procurar uma definição mais adequada)
# #############################################################
# w_mean = np.zeros_like(t) #omega predito
# w_check = np.zeros_like(t) #omega predito
# w_hat = np.zeros_like(t) #melhor estimativa de oemga (omega filtrado)
# p_check = np.zeros_like(t) #incerteza de omega
# p_hat = np.zeros_like(t)
# K = np.zeros_like(t) #ganho do filtro
# p_check[0] = p_hat[0] = p0
# ##############################################################
# #input
# Tm= motor_d['TAU'][0]
# Kgain= motor_d['K'][0]
# u = motor_d['SET_POINT'][0]*Kgain
# for i in range(1, len(t)):
# # medição
# wz = motor_d['OMEGA_RAW'][i]
# w_mean[i] = (w_mean[i-1] + wz)/2.0
# # predição
# w_check[i] = w_hat[i-1] + (u - w_hat[i-1])*(1.0 - np.exp(-(t[i]- t[i-1])/Tm))
# p_check[i] = p_hat[i-1] + q
# # atualização
# K[i] = p_check[i]/(p_check[i]+r)
# w_hat[i] = w_check[i] + K[i]*(wz - w_check[i])
# p_hat[i] = (1 - K[i])*p_check[i]
# plt.figure(figsize=(10, 5), dpi=100)
# # plt.plot(t, w_check, '-b', label=r'$\check{\omega}(t)$')
# plt.plot(t, w_hat, '-g', label=r'$\hat{\omega}(t)$')
# plt.plot(t, wd_raw, '-k', label=r'$\omega_{raw}(t)$')
# plt.plot(t, w_mean, '-y', label=r'$\omega_{mean}(t)$')
# plt.plot(t, wd, '-r', label=r'$\omega_{filtered}(t)$')
# plt.title('Teste filtro de Kalman para o Motor Direito')
# plt.xlim([0, motor_e_c['TIME'].max()])
# plt.ylabel(r'$rad/s$');
# plt.xlabel(r'$t(s)$');
# plt.legend();
# plt.grid(True)
# plt.show();
# Test: with filter vs without filter
# Without the controller
plt.figure(figsize=(10,5))
# plt.title('Com Filtro vs Sem Filtro')
plt.plot(t, we_raw, '-b', label=r'$\omega_{E_{medido}}$')
plt.plot(t, we, '--b', label=r'$\hat{\omega}_{E}$')
plt.xlim([0, motor_e['TIME'].max()])
plt.ylabel(r'$rad.s^{-1}$',fontsize=14)
plt.xlabel(r'$t$ [s]',fontsize=14)
plt.grid(True)
plt.legend(prop={'size':14})
plt.savefig('filtro_vs_sem_filtro_esquerdo' + sp_str + '.eps', format='eps')
plt.show()
plt.figure(figsize=(10,5))
# plt.title('Com Filtro vs Sem Filtro')
plt.plot(t, wd_raw, '-g', label=r'$\omega_{D_{medido}}$')
plt.plot(t, wd, '--g', label=r'$\hat{\omega}_{D}$')
plt.xlim([0, motor_e['TIME'].max()])
plt.ylabel(r'$rad.s^{-1}$',fontsize=14)
plt.xlabel(r'$t$ [s]',fontsize=14)
plt.grid(True)
plt.legend(prop={'size':14})
plt.savefig('filtro_vs_sem_filtro_direito' + sp_str + '.eps', format='eps')
plt.show()
###Output
_____no_output_____
###Markdown
CONTROLLER TESTING CONTROL WITH FILTER VS CONTROL WITHOUT FILTER
###Code
# This plot would be informative if it compared control using the filter vs control without the filter
# plt.figure(figsize=(15,5))
# plt.title('Ambos os motores | controlador ligado | Com Filtro x Sem Filtro')
# plt.plot(t_c, we_c, '--b', label=r'$\omega_e$')
# plt.plot(t_c, wd_c, '--g', label=r'$\omega_d$')
# plt.plot(t_c, motor_e_c['OMEGA_RAW'], '-b', label=r'$\omega_{e_{raw}}$')
# plt.plot(t_c, motor_d_c['OMEGA_RAW'], '-g', label=r'$\omega_{d_{raw}}$')
# plt.xlim([0, motor_e_c['TIME'].max()])
# plt.ylabel(r'$rad.s^{-1}$')
# plt.xlabel(r'$t$ [s]')
# plt.grid(True)
# plt.legend()
# plt.show()
###Output
_____no_output_____
###Markdown
CONTROLLER VS NO CONTROLLER
###Code
plt.figure(figsize=(8,4))
# plt.title('Com Controlador vs Sem Controlador')
plt.plot(t_c, we_c, '--b', label=r'$\omega_{E_{control}}(t) = %.2f(1 - e^{-t/%.2f})$'%(sp*we_c_opt[0],we_c_opt[1]))
plt.plot(t_c, wd_c, '--g', label=r'$\omega_{D_{control}}(t) = %.2f(1 - e^{-t/%.2f})$'%(sp*wd_c_opt[0],wd_c_opt[1]))
plt.plot(t_c, we, '-b', label=r'$\omega_{E}(t) = %.2f(1 - e^{-t/%.2f})$'%(sp*we_opt[0],we_opt[1]))
plt.plot(t_c, wd, '-g', label=r'$\omega_{D}(t) = %.2f(1 - e^{-t/%.2f})$'%(sp*wd_opt[0],wd_opt[1]))
plt.plot(t_c, wref, '-k', label=r'$\omega_{ref} = %.2f rad.s^{-1}$'%(wref[0]))
plt.xlim([0, motor_e_c['TIME'].max()])
plt.ylabel(r'$rad.s^{-1}$',fontsize=14)
plt.xlabel(r'$t$ [s]',fontsize=14)
plt.grid(True)
plt.legend(prop={'size':14})
plt.savefig('controlador_vs_sem_controlador' + sp_str + '.eps', format='eps')
plt.show()
plt.figure(figsize=(8,4))
# plt.title('Antes vs Depois')
plt.plot(t_c, we_c, '--b', label=r'$\omega_{E}(t) = %.2f(1 - e^{-t/%.2f})$'%(sp*we_opt[0],we_opt[1]))
plt.plot(t_c, wd_c, '--g', label=r'$\omega_{D}(t) = %.2f(1 - e^{-t/%.2f})$'%(sp*wd_opt[0],wd_opt[1]))
plt.plot(t_c, we_raw, '-b', label=r'$\omega_{{E}}(t) = %.2f(1 - e^{-t/%.2f})$'%(sp*we_opt_raw[0],we_opt_raw[1]))
plt.plot(t_c, wd_raw, '-g', label=r'$\omega_{{D}}(t) = %.2f(1 - e^{-t/%.2f})$'%(sp*wd_opt_raw[0],wd_opt_raw[1]))
plt.plot(t_c, wref, '-k', label=r'$\omega_{ref} = %.2f$ $rad.s^{-1}$'%(wref[0]))
plt.xlim([0, motor_e_c['TIME'].max()])
plt.ylabel(r'$rad.s^{-1}$', fontsize=14)
plt.xlabel(r'$t$ [s]', fontsize=14)
plt.grid(True)
plt.legend(prop={'size':14})
plt.savefig('antes_vs_depois' + sp_str + '.eps', format='eps')
plt.show()
###Output
_____no_output_____
notebooks/GTO_integrals/GTO_1D_S.ipynb | ###Markdown
Parameters and two Gaussians
###Code
a, b, c, a1, a2 = symbols('a b c a1 a2', positive=True, real=True)
g1=exp(-a1*x**2)
g2=exp(-a2*x**2)
g1, g2
###Output
_____no_output_____
###Markdown
Normalization constant
###Code
N=integrate(g1*g1, (x, -oo, oo))
N
1/sqrt(N)
printing.sstrrepr(1/sqrt(N))
###Output
_____no_output_____
###Markdown
Overlap integral S
###Code
S=integrate(g1*g2, (x, -oo, oo))
S
S.simplify()
printing.sstrrepr(S.simplify())
###Output
_____no_output_____
###Markdown
Kinetic energy $T = -\frac{\hbar^2}{2m} \frac{d^2}{dx^2} = \frac{1}{2m}\left(\frac{\hbar}{i}\frac{d}{dx} \right)^2$
###Code
d1=diff(g1,x)
d2=diff(g2,x)
d1, d2
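# Kinetic-energy matrix element in units with hbar = m = 1; integrating
# <g1| -1/2 d^2/dx^2 |g2> by parts gives (1/2) * Integral(d1*d2).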
T = 1/2 * integrate(d1*d2, (x, -oo, oo))
#T=T.simplify()
#T=T.factor()
T.factor()
printing.sstrrepr(T.factor())
###Output
_____no_output_____
###Markdown
Potential $V(x) = (ax^2 - b)e^{-cx^2}$
###Code
v=(a*x**2-b)*exp(-c*x**2)
v
V = integrate(g1*v*g2, (x, -oo, oo))
V
V.factor()
printing.sstrrepr(V.factor())
###Output
_____no_output_____ |
docs/memo/notebooks/lectures/Plotting_Data/notebook.ipynb | ###Markdown
Graphical Representations of DataBy Evgenia "Jenny" Nitishinskaya, Maxwell Margenot, and Delaney Granizo-Mackenzie.Part of the Quantopian Lecture Series:* [www.quantopian.com/lectures](https://www.quantopian.com/lectures)* [github.com/quantopian/research_public](https://github.com/quantopian/research_public)Notebook released under the Creative Commons Attribution 4.0 License.Representing data graphically can be incredibly useful for learning how the data behaves and seeing potential structure or flaws. Care should be taken, as humans are incredibly good at seeing only evidence that confirms our beliefs, and visual data lends itself well to that. Plots are good to use when formulating a hypothesis, but should not be used to test a hypothesis.We will go over some common plots here.
###Code
# Import our libraries
# This is for numerical processing
import numpy as np
# This is the library most commonly used for plotting in Python.
# Notice how we import it 'as' plt, this enables us to type plt
# rather than the full string every time.
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Getting Some DataIf we're going to plot data we need some data to plot. We'll get the pricing data of Midea Group (000333) and Ping An Bank (000001) to use in our examples. Data StructureKnowing the structure of your data is very important. Normally you'll have to do a ton of work molding your data into the form you need for testing. Quantopian has done a lot of cleaning on the data, but you still need to put it into the right shapes and formats for your purposes.In this case the data will be returned as a pandas dataframe object. The rows are timestamps, and the columns are the two assets, 000333 and 000001.
###Code
from zipline.component.data import load_bars
start = '2014-01-01'
end = '2015-01-01'
data = load_bars(['000001', '000333'], start=start, end=end)
data.head()
###Output
_____no_output_____
###Markdown
Indexing into the data with `data['000333']` will yield an error because the columns are equity objects, not simple strings. Let's change that using this little piece of Python code. Don't worry about understanding it right now, unless you do, in which case congratulations.
###Code
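# The renaming snippet referenced above is missing from this copy of the
# notebook; a minimal sketch, assuming each column label is an equity object
# whose `symbol` attribute holds the ticker string:
# data.columns = [str(c.symbol) for c in data.columns]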
data.head()
###Output
_____no_output_____
###Markdown
Much nicer, now we can index. Indexing into the 2D dataframe gives us a 1D series object. The index of the series is timestamps, and indexing into it yields a price. It is similar to an array, except that instead of integer indices it has times.
###Code
data['000001'].head()
###Output
_____no_output_____
###Markdown
HistogramA histogram is a visualization of how frequent different values of data are. By displaying a frequency distribution using bars, it lets us quickly see where most of the observations are clustered. The height of each bar represents the number of observations that lie in each interval. You can think of a histogram as an empirical and discrete Probability Density Function (PDF).
###Code
# Plot a histogram using 20 bins
plt.hist(data['000001'], bins=20)
plt.xlabel('Price')
plt.ylabel('Number of Days Observed')
plt.title('Frequency Distribution of 000001 Prices, 2014')
###Output
_____no_output_____
###Markdown
Returns HistogramIn finance rarely will we look at the distribution of prices. The reason for this is that prices are non-stationary and move around a lot. For more info on non-stationarity please see [this lecture](https://www.quantopian.com/lectures/integration-cointegration-and-stationarity). Instead we will use daily returns. Let's try that now.
###Code
# Remove the first element because percent change from nothing to something is NaN
R = data['000001'].pct_change()[1:]
# Plot a histogram using 20 bins
plt.hist(R, bins=20)
plt.xlabel('Return')
plt.ylabel('Number of Days Observed')
plt.title('Frequency Distribution of 000001 Returns, 2014');
###Output
_____no_output_____
###Markdown
The graph above shows, for example, that the daily returns of 000001 were above 0.03 on fewer than 5 days in 2014. Note that we are completely discarding the dates corresponding to these returns. IMPORTANT: Note also that this does not imply that future returns will have the same distribution. Cumulative Histogram (Discrete Estimated CDF)An alternative way to display the data would be using a cumulative distribution function, in which the height of a bar represents the number of observations that lie in that bin or in one of the previous ones. This graph is always nondecreasing since you cannot have a negative number of observations. The choice of graph depends on the information you are interested in.
###Code
# Remove the first element because percent change from nothing to something is NaN
R = data['000001'].pct_change()[1:]
# Plot a histogram using 20 bins
plt.hist(R, bins=20, cumulative=True)
plt.xlabel('Return')
plt.ylabel('Number of Days Observed')
plt.title('Cumulative Distribution of 000001 Returns, 2014');
###Output
_____no_output_____
###Markdown
Scatter plotA scatter plot is useful for visualizing the relationship between two data sets. We use two data sets which have some sort of correspondence, such as the date on which the measurement was taken. Each point represents two corresponding values from the two data sets. However, we don't plot the date that the measurements were taken on.
###Code
plt.scatter(data['000001'], data['000333'])
plt.xlabel('Ping An Bank (000001)')
plt.ylabel('Midea Group (000333)')
plt.title('Daily Prices in 2014');
R_000001 = data['000001'].pct_change()[1:]
R_000333 = data['000333'].pct_change()[1:]
plt.scatter(R_000001, R_000333)
plt.xlabel('000001')
plt.ylabel('000333')
plt.title('Daily Returns in 2014')
###Output
_____no_output_____
###Markdown
Line graphA line graph can be used when we want to track the development of the y value as the x value changes. For instance, when we are plotting the price of a stock, showing it as a line graph instead of just plotting the data points makes it easier to follow the price over time. This necessarily involves "connecting the dots" between the data points, which can mask out changes that happened between the time we took measurements.
###Code
plt.plot(data['000001'])
plt.plot(data['000333'])
plt.ylabel('Price')
plt.legend(['000001', '000333']);
# Remove the first element because percent change from nothing to something is NaN
R = data['000001'].pct_change()[1:]
plt.plot(R)
plt.ylabel('Return')
plt.title('000001 Returns')
###Output
_____no_output_____ |
LinpackAnalysis/linpack.ipynb | ###Markdown
The collected data Native
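(The setup cell defining `tsv_files` and `get_dataframe` is not included in this copy of the notebook; a minimal sketch of what it might contain, with placeholder file names, follows.)
###Code
import pandas as pd
import matplotlib.pyplot as plt
# hypothetical result files, one per platform, in the order Native, LXC, KVM, Docker
tsv_files = ['native.tsv', 'lxc.tsv', 'kvm.tsv', 'docker.tsv']
def get_dataframe(tsv_file):
    # Linpack results are whitespace-separated, matching the read used later on
    return pd.read_csv(tsv_file, sep='\s+')
###Output
_____no_output_____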
###Code
get_dataframe(tsv_files[0])
###Output
_____no_output_____
###Markdown
LXC
###Code
get_dataframe(tsv_files[1])
###Output
_____no_output_____
###Markdown
KVM
###Code
get_dataframe(tsv_files[2])
###Output
_____no_output_____
###Markdown
Docker
###Code
get_dataframe(tsv_files[3])
###Output
_____no_output_____
###Markdown
Joining the data
###Code
dfs = [pd.read_csv(tsv_files[i], sep='\s+') for i in range(len(tsv_files))]
dfs_gflops = [df['GFlops'] for df in dfs]
platforms = ['Native', 'LXC', 'KVM', 'Docker']
data = {}
for i in range(len(tsv_files)):
platform = platforms[i]
gflops = dfs_gflops[i]
data[platform] = gflops
df = pd.DataFrame(data=data)
df
###Output
_____no_output_____
###Markdown
Data visualization
###Code
plt.figure()
ax = df.plot(title='Linpack Benchmark')
plt.xlabel('Test nº')
plt.ylabel('GFlops')
plt.savefig('LINPACK_6000.png')
plt.show()
###Output
_____no_output_____
###Markdown
Data analysis
###Code
d = {}
for key in platforms:
    # use distinct names to avoid shadowing the built-in min and max functions
    col_min = df[key].min()
    col_avg = df[key].mean()
    col_max = df[key].max()
    col_std = df[key].std()
    d[key] = [col_min, col_avg, col_max, col_std]
d
df_analysis = pd.DataFrame(data=d, index=['Minimum', 'Average', 'Maximum', 'Standard deviation'])
df_analysis
###Output
_____no_output_____
###Markdown
Data comparison
###Code
plt.figure()
ax = df_analysis.plot.bar(title='Linpack Benchmark comparison')
plt.ylabel('GFlops')
plt.tight_layout()
plt.savefig('LINPACK_6000_barplot.png')
plt.show()
ax = df.boxplot()
ax.set_ylim([51, 59])
plt.title('Linpack Benchmark comparison')
plt.ylabel('GFlops')
plt.savefig('LINPACK_6000_boxplot.png')
plt.show()
###Output
_____no_output_____ |
notebooks/06_pipeline_parallelism.ipynb | ###Markdown
Pipeline ParallelismIn this session, we will look at pipeline parallelism. 1. Inter-layer model parallelismPipeline parallelism is an improvement over inter-layer model parallelism. Inter-layer model parallelism is a model-parallel scheme that assigns specific layers to specific GPUs, as shown below. In the figure, layers 1, 2, and 3 are assigned to GPU 1 and layers 4 and 5 to GPU 2; each of the resulting pieces is called a `stage`. The example below is split into two stages.![](../images/inter_layer.png)However, since a neural network feeds each layer's output into the next layer as input, one GPU can only start computing after the previous GPU has finished. In other words, as the figures below show, inter-layer model parallelism has the critical limitation that only one GPU can be working at a time.![](../images/inter_layer_2.png)![](../images/inter_layer_3.gif) 2. GPipeGPipe is a pipeline-parallel technique developed at Google. It was introduced to reduce GPU idle time in inter-layer model parallelism, and it works by splitting each mini-batch into micro-batches and pipelining the training steps.![](../images/gpipe_1.png)![](../images/pipeline_parallelism2.png) Micro-batch- A mini-batch is a set of subsamples obtained by splitting the full dataset into n parts.- A micro-batch is a set of subsamples obtained by splitting a mini-batch once more into m parts.![](../images/gpipe_2.png) PipeliningGPipe splits each mini-batch into micro-batches and pipelines the computation. The red regions (where a GPU sits idle) are called bubble time; you can see that the bubble time shrinks as the number of micro-batches grows.![](../images/gpipe_3.gif) GPipe with PyTorchWith `torchgpipe`, released by kakaobrain, GPipe is easy to use. Note, however, that only models wrapped in `nn.Sequential` are supported, and every module's inputs and outputs are restricted to `torch.Tensor` or `Tuple[torch.Tensor]`, which makes the code fairly tricky to write.
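Before the full example, here is a minimal sketch of the mini-batch → micro-batch split described above (the tensor size and chunk count are made-up values for illustration):
###Code
import torch
# one mini-batch of 16 samples
mini_batch = torch.randn(16, 512)
# split it into 4 micro-batches of 4 samples each; a pipeline engine
# feeds these through the stages one after another
micro_batches = torch.chunk(mini_batch, chunks=4, dim=0)
[mb.shape for mb in micro_batches]
###Output
_____no_output_____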
###Code
"""
src/gpipe.py
"""
import torch
import torch.nn as nn
from datasets import load_dataset
from torch.optim import Adam
from torch.utils.data import DataLoader
from torchgpipe import GPipe
from transformers import GPT2Tokenizer, GPT2LMHeadModel
from transformers.models.gpt2.modeling_gpt2 import GPT2Block as GPT2BlockBase
class GPT2Preprocessing(nn.Module):
def __init__(self, config):
super().__init__()
self.embed_dim = config.hidden_size
self.wte = nn.Embedding(config.vocab_size, self.embed_dim)
self.wpe = nn.Embedding(config.max_position_embeddings, self.embed_dim)
self.drop = nn.Dropout(config.embd_pdrop)
def forward(self, input_ids):
input_shape = input_ids.size()
input_ids = input_ids.view(-1, input_shape[-1])
position_ids = torch.arange(
0, input_shape[-1], dtype=torch.long, device=input_ids.device
)
position_ids = position_ids.unsqueeze(0).view(-1, input_shape[-1])
inputs_embeds = self.wte(input_ids)
position_embeds = self.wpe(position_ids)
hidden_states = inputs_embeds + position_embeds
hidden_states = self.drop(hidden_states)
return hidden_states
class GPT2Block(GPT2BlockBase):
def forward(self, hidden_states):
hidden_states = super(GPT2Block, self).forward(
hidden_states=hidden_states,
)
return hidden_states[0]
class GPT2Postprocessing(nn.Module):
def __init__(self, config):
super().__init__()
self.ln_f = nn.LayerNorm(
config.hidden_size,
eps=config.layer_norm_epsilon,
)
self.lm_head = nn.Linear(
config.hidden_size,
config.vocab_size,
bias=False,
)
def forward(self, hidden_states):
hidden_states = self.ln_f(hidden_states)
lm_logits = self.lm_head(hidden_states)
return lm_logits
def create_model_from_pretrained(model_name):
pretrained = GPT2LMHeadModel.from_pretrained(model_name)
preprocess = GPT2Preprocessing(pretrained.config)
preprocess.wte.weight = pretrained.transformer.wte.weight
preprocess.wpe.weight = pretrained.transformer.wpe.weight
blocks = pretrained.transformer.h
for i, block in enumerate(blocks):
block.__class__ = GPT2Block
postprocess = GPT2Postprocessing(pretrained.config)
postprocess.ln_f.weight = pretrained.transformer.ln_f.weight
postprocess.ln_f.bias = pretrained.transformer.ln_f.bias
postprocess.lm_head.weight.data = pretrained.lm_head.weight.data.clone()
return nn.Sequential(preprocess, *blocks, postprocess)
if __name__ == "__main__":
world_size = 4
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = create_model_from_pretrained(model_name="gpt2")
model = GPipe(
model,
balance=[4, 3, 3, 4],
devices=[0, 1, 2, 3],
chunks=world_size,
)
datasets = load_dataset("squad").data["train"]["context"]
datasets = [str(sample) for sample in datasets]
data_loader = DataLoader(datasets, batch_size=8, num_workers=8)
optimizer = Adam(model.parameters(), lr=3e-5)
loss_fn = nn.CrossEntropyLoss()
for i, data in enumerate(data_loader):
optimizer.zero_grad()
tokens = tokenizer(data, return_tensors="pt", truncation=True, padding=True)
input_ids = tokens.input_ids.to(0)
labels = tokens.input_ids.to(world_size - 1)
lm_logits = model(input_ids)
shift_logits = lm_logits[..., :-1, :].contiguous()
shift_labels = labels[..., 1:].contiguous()
loss = nn.CrossEntropyLoss()(
shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1)
)
loss.backward()
optimizer.step()
if i % 10 == 0:
print(f"step: {i}, loss: {loss}")
if i == 300:
break
# !python -m torch.distributed.launch --nproc_per_node=4 ../src/gpipe.py
!python ../src/gpipe.py
###Output
Reusing dataset squad (/home/ubuntu/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453)
100%|█████████████████████████████████████████████| 2/2 [00:00<00:00, 55.94it/s]
step: 0, loss: 6.084661483764648
step: 10, loss: 3.2574026584625244
step: 20, loss: 2.796205759048462
step: 30, loss: 2.5538008213043213
step: 40, loss: 2.8463237285614014
step: 50, loss: 2.3466761112213135
step: 60, loss: 2.5407633781433105
step: 70, loss: 2.2434418201446533
step: 80, loss: 2.4792842864990234
step: 90, loss: 2.9400510787963867
step: 100, loss: 2.8163280487060547
step: 110, loss: 2.4787795543670654
step: 120, loss: 2.9588236808776855
step: 130, loss: 2.3893203735351562
step: 140, loss: 2.9571073055267334
step: 150, loss: 3.9219329357147217
step: 160, loss: 3.023880958557129
step: 170, loss: 3.018484592437744
step: 180, loss: 1.6825034618377686
step: 190, loss: 3.5461761951446533
step: 200, loss: 3.6606838703155518
step: 210, loss: 3.527740001678467
step: 220, loss: 2.988645315170288
step: 230, loss: 3.1758480072021484
step: 240, loss: 2.5451812744140625
step: 250, loss: 3.1476473808288574
step: 260, loss: 3.4633867740631104
step: 270, loss: 3.199225902557373
step: 280, loss: 2.612720489501953
step: 290, loss: 2.139256238937378
step: 300, loss: 3.437178373336792
###Markdown
3. 1F1B Pipelining (PipeDream)`PipeDream`, released by Microsoft, pipelines somewhat differently from `GPipe`. The approach is commonly called 1F1B: whereas GPipe runs all forward passes before starting the backward passes, `PipeDream` alternates between forward and backward passes.1F1B pipelining faces the following two challenges:1. Weight version managing2. Work partitioning 1) Weight version managingGPipe maintains only one weight version, but it periodically performs a pipeline flush. A pipeline flush is the step in which the computed gradients are applied to update the parameters. During a flush no forward or backward computation happens, which lowers processing efficiency.PipeDream keeps updating the parameters without any such flush, so neither the forward nor the backward passes ever sit idle. To do this, however, it must continuously manage several versions of the parameter state: if only the latest version were stored, the next layer could be updated while the previous layer's output is still being sent to it.To prevent this problem, several weight versions are stored and managed, and storing those weights takes up a lot of memory, so there is a trade-off here:- GPipe: memory-efficient, processing-inefficient- PipeDream: memory-inefficient, processing-efficient 2) Work PartitioningThe second challenge is how to split the neural network. Simply giving every stage the same number of layers is not always the best solution. What matters most to us is minimizing idle time, and for that each partition's running time must be similar. On top of that, parameter size, activation memory, and so on must also be considered.PipeDream finds the optimal partitioning strategy through profiling and optimization. 4. Variations of 1F1B PipeliningHere are two pipeline strategies that improve on PipeDream's 1F1B pipelining. 1) PipeDream 2BW (2-buffered weight update)PipeDream 2BW was introduced to fix PipeDream's memory inefficiency. The key idea is to perform gradient accumulation while pipelining: gradients are collected and applied in a single update, which resolves the memory-inefficiency problem. Unlike before, 2BW only needs to maintain two weight versions.![](../images/pipe_dream_2bw.png) 2) PipeDream FlushPipeDream Flush is a pipelining method that combines 1F1B with pipeline flushes. Because it flushes, its idle time is similar to GPipe's, but it **requires less activation memory** during the forward-backward passes. Since PipeDream Flush performs flushes, it does not need to manage multiple parameter versions; maintaining only a single set of weights makes it even more memory-efficient than PipeDream 2BW (the most memory-efficient of all the techniques introduced so far).![](../images/pipe_dream_flush.png)![](../images/pipe_dream_flush_2.png) Wait... what is activation memory, anyway?Most layers store the values from the forward pass until backward is called. Anyone who has used `torch.autograd.Function` will know this well: the forward pass saves its values in the `ctx` variable.
###Code
"""
Reference: https://pytorch.org/tutorials/beginner/examples_autograd/two_layer_net_custom_function.html
"""
import torch
class ReLU(torch.autograd.Function):
@staticmethod
def forward(ctx, input):
ctx.save_for_backward(input)
        # the input value is saved here for use in backward.
return input.clamp(min=0)
@staticmethod
def backward(ctx, grad_output):
input, = ctx.saved_tensors
grad_input = grad_output.clone()
grad_input[input < 0] = 0
return grad_input
###Output
_____no_output_____
###Markdown
This is because computing the gradients requires the values that were used during the forward pass. Consider the following example.![](../images/max_pooling.png)Above is a max-pooling operation and the computation of its gradient. During the backward pass, a (2, 2) tensor such as [[0.8, 1.2], [0.9, 0.5]] comes in as input. To recover the gradient matrix on the right from it, the (4, 4) tensor received in the forward pass is required, which is why that tensor is kept in memory. The memory needed to store the tensors used in the forward pass so that the backward pass can run is called activation memory. Now that we know what activation memory is, shall we try out PipeDream? **PipeDream Flush is implemented in DeepSpeed, Microsoft's distributed training library.** (See: https://github.com/microsoft/DeepSpeed/issues/1110) So let's use DeepSpeed. How to use the DeepSpeed commandOh, and before that, let's look at a very convenient feature `deepspeed` provides. Previously we used `python -m torch.distributed.launch --nproc_per_node=n OOO.py` for distributed launching, which was uncomfortably long. DeepSpeed provides the `deepspeed` and `ds` commands: - `ds --num_gpus=n OOO.py`- `deepspeed --num_gpus=n OOO.py`Entering one of the commands above works exactly like `torch.distributed.launch`. From now on, we will use the `deepspeed` command for every distributed program. (Honestly, `torch.distributed.launch` is just too long 😭)
###Code
"""
src/pipe_dream.py
"""
import deepspeed
import torch
import torch.nn as nn
from datasets import load_dataset
from deepspeed import PipelineModule
from torch.optim import Adam
from torch.utils.data import DataLoader
from tqdm import tqdm
from transformers import GPT2Tokenizer, GPT2LMHeadModel
from transformers.models.gpt2.modeling_gpt2 import GPT2Block as GPT2BlockBase
import torch.distributed as dist
class GPT2Preprocessing(nn.Module):
def __init__(self, config):
super().__init__()
self.embed_dim = config.hidden_size
self.wte = nn.Embedding(config.vocab_size, self.embed_dim)
self.wpe = nn.Embedding(config.max_position_embeddings, self.embed_dim)
self.drop = nn.Dropout(config.embd_pdrop)
def forward(self, input_ids):
input_shape = input_ids.size()
input_ids = input_ids.view(-1, input_shape[-1])
position_ids = torch.arange(
0, input_shape[-1], dtype=torch.long, device=input_ids.device
)
position_ids = position_ids.unsqueeze(0).view(-1, input_shape[-1])
inputs_embeds = self.wte(input_ids)
position_embeds = self.wpe(position_ids)
hidden_states = inputs_embeds + position_embeds
hidden_states = self.drop(hidden_states)
return hidden_states
class GPT2Block(GPT2BlockBase):
def forward(self, hidden_states):
hidden_states = super(GPT2Block, self).forward(
hidden_states=hidden_states,
)
return hidden_states[0]
class GPT2Postprocessing(nn.Module):
def __init__(self, config):
super().__init__()
self.ln_f = nn.LayerNorm(
config.hidden_size,
eps=config.layer_norm_epsilon,
)
self.lm_head = nn.Linear(
config.hidden_size,
config.vocab_size,
bias=False,
)
def forward(self, hidden_states):
hidden_states = self.ln_f(hidden_states)
lm_logits = self.lm_head(hidden_states)
return lm_logits
def create_model_from_pretrained(model_name):
pretrained = GPT2LMHeadModel.from_pretrained(model_name)
preprocess = GPT2Preprocessing(pretrained.config)
preprocess.wte.weight = pretrained.transformer.wte.weight
preprocess.wpe.weight = pretrained.transformer.wpe.weight
blocks = pretrained.transformer.h
for i, block in enumerate(blocks):
block.__class__ = GPT2Block
postprocess = GPT2Postprocessing(pretrained.config)
postprocess.ln_f.weight = pretrained.transformer.ln_f.weight
postprocess.ln_f.bias = pretrained.transformer.ln_f.bias
postprocess.lm_head.weight.data = pretrained.lm_head.weight.data.clone()
return nn.Sequential(preprocess, *blocks, postprocess)
def collate_fn(batch):
batch_encoding = tokenizer.pad(
{"input_ids": batch}, padding="max_length", max_length=1024
)
return batch_encoding.input_ids
def batch_fn(data):
input_ids = data
labels = data
return input_ids, labels
def loss_fn(logits, labels):
logits = logits[..., :-1, :].contiguous()
labels = labels[..., 1:].contiguous()
return nn.CrossEntropyLoss()(
logits.view(-1, logits.size(-1)),
labels.view(-1),
)
if __name__ == "__main__":
dist.init_process_group("nccl")
world_size, rank = dist.get_world_size(), dist.get_rank()
batch_size, train_steps = 16, 300
train_samples = batch_size * train_steps
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = PipelineModule(
create_model_from_pretrained(model_name="gpt2"),
loss_fn=loss_fn,
num_stages=world_size,
partition_method="type:GPT2Block"
        # partition_method lets you choose which layers to parallelize.
)
engine, optimizer, _, _ = deepspeed.initialize(
model=model,
optimizer=Adam(model.parameters(), lr=3e-5),
config={
"train_batch_size": batch_size,
"steps_per_print": 9999999,
# turn off: https://github.com/microsoft/DeepSpeed/issues/1119
},
)
engine.set_batch_fn(batch_fn)
datasets = load_dataset("squad").data["train"]["context"]
datasets = [str(sample) for i, sample in enumerate(datasets) if i < train_samples]
datasets = [
tokenizer(data, return_tensors="pt", max_length=1024).input_ids[0]
for data in tqdm(datasets)
]
data_loader = iter(
DataLoader(
sorted(datasets, key=len, reverse=True),
# uniform length batching
# https://mccormickml.com/2020/07/29/smart-batching-tutorial/
batch_size=batch_size,
num_workers=8,
collate_fn=collate_fn,
shuffle=False,
)
)
for i in range(train_steps):
loss = engine.train_batch(data_loader)
if i % 10 == 0 and rank == 0:
print(f"step: {i}, loss: {loss}")
!ds --num_gpus=4 ../src/pipe_dream.py
###Output
[2021-10-21 23:11:01,063] [WARNING] [runner.py:122:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.
[2021-10-21 23:11:01,184] [INFO] [runner.py:360:main] cmd = /home/ubuntu/kevin/kevin_env/bin/python3 -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMCwgMSwgMiwgM119 --master_addr=127.0.0.1 --master_port=29500 ../src/pipe_dream.py
[2021-10-21 23:11:02,065] [INFO] [launch.py:80:main] WORLD INFO DICT: {'localhost': [0, 1, 2, 3]}
[2021-10-21 23:11:02,065] [INFO] [launch.py:86:main] nnodes=1, num_local_procs=4, node_rank=0
[2021-10-21 23:11:02,065] [INFO] [launch.py:101:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0, 1, 2, 3]})
[2021-10-21 23:11:02,065] [INFO] [launch.py:102:main] dist_world_size=4
[2021-10-21 23:11:02,065] [INFO] [launch.py:104:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3
SEED_LAYERS=False BASE_SEED=1234 SEED_FN=None
Using topology: {ProcessCoord(pipe=0, data=0): 0, ProcessCoord(pipe=1, data=0): 1, ProcessCoord(pipe=2, data=0): 2, ProcessCoord(pipe=3, data=0): 3}
[2021-10-21 23:11:24,460] [INFO] [module.py:365:_partition_layers] Partitioning pipeline stages with method type:GPT2Block
stage=0 layers=4
0: GPT2Preprocessing
1: GPT2Block
2: GPT2Block
3: GPT2Block
stage=1 layers=3
4: GPT2Block
5: GPT2Block
6: GPT2Block
stage=2 layers=3
7: GPT2Block
8: GPT2Block
9: GPT2Block
stage=3 layers=4
10: GPT2Block
11: GPT2Block
12: GPT2Block
13: GPT2Postprocessing
loss: loss_fn
[2021-10-21 23:14:05,483] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed info: version=0.5.4+c6d1418, git-hash=c6d1418, git-branch=master
[2021-10-21 23:14:05,869] [INFO] [engine.py:204:__init__] DeepSpeed Flops Profiler Enabled: False
[2021-10-21 23:14:05,869] [INFO] [engine.py:848:_configure_optimizer] Removing param_group that has no 'params' in the client Optimizer
[2021-10-21 23:14:05,869] [INFO] [engine.py:854:_configure_optimizer] Using client Optimizer as basic optimizer
[2021-10-21 23:14:05,892] [INFO] [engine.py:870:_configure_optimizer] DeepSpeed Basic Optimizer = Adam
[2021-10-21 23:14:05,892] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed Final Optimizer = Adam
[2021-10-21 23:14:05,892] [INFO] [engine.py:596:_configure_lr_scheduler] DeepSpeed using client LR scheduler
[2021-10-21 23:14:05,892] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed LR Scheduler = None
[2021-10-21 23:14:05,892] [INFO] [logging.py:68:log_dist] [Rank 0] step=0, skipped=0, lr=[3e-05], mom=[(0.9, 0.999)]
[2021-10-21 23:14:05,892] [INFO] [config.py:940:print] DeepSpeedEngine configuration:
[2021-10-21 23:14:05,893] [INFO] [config.py:944:print] activation_checkpointing_config {
"partition_activations": false,
"contiguous_memory_optimization": false,
"cpu_checkpointing": false,
"number_checkpoints": null,
"synchronize_checkpoint_boundary": false,
"profile": false
}
[2021-10-21 23:14:05,893] [INFO] [config.py:944:print] aio_config ................... {'block_size': 1048576, 'queue_depth': 8, 'thread_count': 1, 'single_submit': False, 'overlap_events': True}
[2021-10-21 23:14:05,893] [INFO] [config.py:944:print] allreduce_always_fp32 ........ False
[2021-10-21 23:14:05,893] [INFO] [config.py:944:print] amp_enabled .................. False
[2021-10-21 23:14:05,893] [INFO] [config.py:944:print] amp_params ................... False
[2021-10-21 23:14:05,893] [INFO] [config.py:944:print] checkpoint_tag_validation_enabled True
[2021-10-21 23:14:05,893] [INFO] [config.py:944:print] checkpoint_tag_validation_fail False
[2021-10-21 23:14:05,893] [INFO] [config.py:944:print] curriculum_enabled ........... False
[2021-10-21 23:14:05,893] [INFO] [config.py:944:print] curriculum_params ............ False
[2021-10-21 23:14:05,893] [INFO] [config.py:944:print] dataloader_drop_last ......... False
[2021-10-21 23:14:05,893] [INFO] [config.py:944:print] disable_allgather ............ False
[2021-10-21 23:14:05,893] [INFO] [config.py:944:print] dump_state ................... False
[2021-10-21 23:14:05,894] [INFO] [config.py:944:print] dynamic_loss_scale_args ...... None
[2021-10-21 23:14:05,894] [INFO] [config.py:944:print] eigenvalue_enabled ........... False
[2021-10-21 23:14:05,894] [INFO] [config.py:944:print] eigenvalue_gas_boundary_resolution 1
[2021-10-21 23:14:05,894] [INFO] [config.py:944:print] eigenvalue_layer_name ........ bert.encoder.layer
[2021-10-21 23:14:05,894] [INFO] [config.py:944:print] eigenvalue_layer_num ......... 0
[2021-10-21 23:14:05,894] [INFO] [config.py:944:print] eigenvalue_max_iter .......... 100
[2021-10-21 23:14:05,894] [INFO] [config.py:944:print] eigenvalue_stability ......... 1e-06
[2021-10-21 23:14:05,894] [INFO] [config.py:944:print] eigenvalue_tol ............... 0.01
[2021-10-21 23:14:05,894] [INFO] [config.py:944:print] eigenvalue_verbose ........... False
[2021-10-21 23:14:05,894] [INFO] [config.py:944:print] elasticity_enabled ........... False
[2021-10-21 23:14:05,894] [INFO] [config.py:944:print] flops_profiler_config ........ {
"enabled": false,
"profile_step": 1,
"module_depth": -1,
"top_modules": 1,
"detailed": true,
"output_file": null
}
[2021-10-21 23:14:05,894] [INFO] [config.py:944:print] fp16_enabled ................. False
[2021-10-21 23:14:05,894] [INFO] [config.py:944:print] fp16_master_weights_and_gradients False
[2021-10-21 23:14:05,894] [INFO] [config.py:944:print] fp16_mixed_quantize .......... False
[2021-10-21 23:14:05,894] [INFO] [config.py:944:print] global_rank .................. 0
[2021-10-21 23:14:05,894] [INFO] [config.py:944:print] gradient_accumulation_steps .. 1
[2021-10-21 23:14:05,894] [INFO] [config.py:944:print] gradient_clipping ............ 0.0
[2021-10-21 23:14:05,894] [INFO] [config.py:944:print] gradient_predivide_factor .... 1.0
[2021-10-21 23:14:05,894] [INFO] [config.py:944:print] initial_dynamic_scale ........ 4294967296
[2021-10-21 23:14:05,894] [INFO] [config.py:944:print] loss_scale ................... 0
[2021-10-21 23:14:05,894] [INFO] [config.py:944:print] memory_breakdown ............. False
[2021-10-21 23:14:05,894] [INFO] [config.py:944:print] optimizer_legacy_fusion ...... False
[2021-10-21 23:14:05,894] [INFO] [config.py:944:print] optimizer_name ............... None
[2021-10-21 23:14:05,894] [INFO] [config.py:944:print] optimizer_params ............. None
[2021-10-21 23:14:05,894] [INFO] [config.py:944:print] pipeline ..................... {'stages': 'auto', 'partition': 'best', 'seed_layers': False, 'activation_checkpoint_interval': 0}
[2021-10-21 23:14:05,894] [INFO] [config.py:944:print] pld_enabled .................. False
[2021-10-21 23:14:05,894] [INFO] [config.py:944:print] pld_params ................... False
[2021-10-21 23:14:05,894] [INFO] [config.py:944:print] prescale_gradients ........... False
[2021-10-21 23:14:05,895] [INFO] [config.py:944:print] quantize_change_rate ......... 0.001
[2021-10-21 23:14:05,895] [INFO] [config.py:944:print] quantize_groups .............. 1
[2021-10-21 23:14:05,895] [INFO] [config.py:944:print] quantize_offset .............. 1000
[2021-10-21 23:14:05,895] [INFO] [config.py:944:print] quantize_period .............. 1000
[2021-10-21 23:14:05,895] [INFO] [config.py:944:print] quantize_rounding ............ 0
[2021-10-21 23:14:05,895] [INFO] [config.py:944:print] quantize_start_bits .......... 16
[2021-10-21 23:14:05,895] [INFO] [config.py:944:print] quantize_target_bits ......... 8
[2021-10-21 23:14:05,895] [INFO] [config.py:944:print] quantize_training_enabled .... False
[2021-10-21 23:14:05,895] [INFO] [config.py:944:print] quantize_type ................ 0
[2021-10-21 23:14:05,895] [INFO] [config.py:944:print] quantize_verbose ............. False
[2021-10-21 23:14:05,895] [INFO] [config.py:944:print] scheduler_name ............... None
[2021-10-21 23:14:05,895] [INFO] [config.py:944:print] scheduler_params ............. None
[2021-10-21 23:14:05,895] [INFO] [config.py:944:print] sparse_attention ............. None
[2021-10-21 23:14:05,895] [INFO] [config.py:944:print] sparse_gradients_enabled ..... False
[2021-10-21 23:14:05,895] [INFO] [config.py:944:print] steps_per_print .............. 9999999
[2021-10-21 23:14:05,895] [INFO] [config.py:944:print] tensorboard_enabled .......... False
[2021-10-21 23:14:05,895] [INFO] [config.py:944:print] tensorboard_job_name ......... DeepSpeedJobName
[2021-10-21 23:14:05,895] [INFO] [config.py:944:print] tensorboard_output_path ......
[2021-10-21 23:14:05,895] [INFO] [config.py:944:print] train_batch_size ............. 16
[2021-10-21 23:14:05,895] [INFO] [config.py:944:print] train_micro_batch_size_per_gpu 16
[2021-10-21 23:14:05,895] [INFO] [config.py:944:print] use_quantizer_kernel ......... False
[2021-10-21 23:14:05,895] [INFO] [config.py:944:print] wall_clock_breakdown ......... False
[2021-10-21 23:14:05,895] [INFO] [config.py:944:print] world_size ................... 1
[2021-10-21 23:14:05,895] [INFO] [config.py:944:print] zero_allow_untested_optimizer False
[2021-10-21 23:14:05,895] [INFO] [config.py:944:print] zero_config .................. {
"stage": 0,
"contiguous_gradients": true,
"reduce_scatter": true,
"reduce_bucket_size": 5.000000e+08,
"allgather_partitions": true,
"allgather_bucket_size": 5.000000e+08,
"overlap_comm": false,
"load_from_fp32_weights": true,
"elastic_checkpoint": true,
"offload_param": null,
"offload_optimizer": null,
"sub_group_size": 1.000000e+09,
"prefetch_bucket_size": 5.000000e+07,
"param_persistence_threshold": 1.000000e+05,
"max_live_parameters": 1.000000e+09,
"max_reuse_distance": 1.000000e+09,
"gather_fp16_weights_on_model_save": false,
"ignore_unused_parameters": true,
"round_robin_gradients": false,
"legacy_stage1": false
}
[2021-10-21 23:14:05,895] [INFO] [config.py:944:print] zero_enabled ................. False
[2021-10-21 23:14:05,896] [INFO] [config.py:944:print] zero_optimization_stage ...... 0
[2021-10-21 23:14:05,896] [INFO] [config.py:946:print] json = {
"train_batch_size": 16,
"steps_per_print": 9.999999e+06
}
###Markdown
Pipeline ParallelismIn this session, we will look at pipeline parallelism. 1. Inter-layer model parallelismPipeline parallelism is an improvement over inter-layer model parallelism. Inter-layer model parallelism is a model-parallel scheme that assigns specific layers to specific GPUs, as shown below. In the figure, layers 1, 2, and 3 are assigned to GPU 1 and layers 4 and 5 to GPU 2; each of the resulting pieces is called a `stage`. The example below is split into two stages.![](../images/inter_layer.png)However, since a neural network feeds each layer's output into the next layer as input, one GPU can only start computing after the previous GPU has finished. In other words, as the figures below show, inter-layer model parallelism has the critical limitation that only one GPU can be working at a time.![](../images/inter_layer_2.png)![](../images/inter_layer_3.gif) 2. GPipeGPipe is a pipeline-parallel technique developed at Google. It was introduced to reduce GPU idle time in inter-layer model parallelism, and it works by splitting each mini-batch into micro-batches and pipelining the training steps.![](../images/gpipe_1.png)![](../images/pipeline_parallelism2.png) Micro-batch- A mini-batch is a set of subsamples obtained by splitting the full dataset into n parts.- A micro-batch is a set of subsamples obtained by splitting a mini-batch once more into m parts.![](../images/gpipe_2.png) PipeliningGPipe splits each mini-batch into micro-batches and pipelines the computation. The red regions (where a GPU sits idle) are called bubble time; you can see that the bubble time shrinks as the number of micro-batches grows.![](../images/gpipe_3.gif) GPipe with PyTorchWith `torchgpipe`, released by kakaobrain, GPipe is easy to use. Note, however, that only models wrapped in `nn.Sequential` are supported, and every module's inputs and outputs are restricted to `torch.Tensor` or `Tuple[torch.Tensor]`, which makes the code fairly tricky to write.
###Code
"""
src/gpipe.py
"""
import torch
import torch.nn as nn
from datasets import load_dataset
from torch.optim import Adam
from torch.utils.data import DataLoader
from torchgpipe import GPipe
from transformers import GPT2Tokenizer, GPT2LMHeadModel
from transformers.models.gpt2.modeling_gpt2 import GPT2Block as GPT2BlockBase
class GPT2Preprocessing(nn.Module):
def __init__(self, config):
super().__init__()
self.embed_dim = config.hidden_size
self.wte = nn.Embedding(config.vocab_size, self.embed_dim)
self.wpe = nn.Embedding(config.max_position_embeddings, self.embed_dim)
self.drop = nn.Dropout(config.embd_pdrop)
def forward(self, input_ids):
input_shape = input_ids.size()
input_ids = input_ids.view(-1, input_shape[-1])
position_ids = torch.arange(
0, input_shape[-1], dtype=torch.long, device=input_ids.device
)
position_ids = position_ids.unsqueeze(0).view(-1, input_shape[-1])
inputs_embeds = self.wte(input_ids)
position_embeds = self.wpe(position_ids)
hidden_states = inputs_embeds + position_embeds
hidden_states = self.drop(hidden_states)
return hidden_states
class GPT2Block(GPT2BlockBase):
def forward(self, hidden_states):
hidden_states = super(GPT2Block, self).forward(
hidden_states=hidden_states,
)
return hidden_states[0]
class GPT2Postprocessing(nn.Module):
def __init__(self, config):
super().__init__()
self.ln_f = nn.LayerNorm(
config.hidden_size,
eps=config.layer_norm_epsilon,
)
self.lm_head = nn.Linear(
config.hidden_size,
config.vocab_size,
bias=False,
)
def forward(self, hidden_states):
hidden_states = self.ln_f(hidden_states)
lm_logits = self.lm_head(hidden_states)
return lm_logits
def create_model_from_pretrained(model_name):
pretrained = GPT2LMHeadModel.from_pretrained(model_name)
preprocess = GPT2Preprocessing(pretrained.config)
preprocess.wte.weight = pretrained.transformer.wte.weight
preprocess.wpe.weight = pretrained.transformer.wpe.weight
blocks = pretrained.transformer.h
for i, block in enumerate(blocks):
block.__class__ = GPT2Block
postprocess = GPT2Postprocessing(pretrained.config)
postprocess.ln_f.weight = pretrained.transformer.ln_f.weight
postprocess.ln_f.bias = pretrained.transformer.ln_f.bias
postprocess.lm_head.weight.data = pretrained.lm_head.weight.data.clone()
return nn.Sequential(preprocess, *blocks, postprocess)
if __name__ == "__main__":
world_size = 4
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = create_model_from_pretrained(model_name="gpt2")
model = GPipe(
model,
balance=[7, 7],
devices=[0, 1],
chunks=world_size,
)
datasets = load_dataset("squad").data["train"]["context"]
datasets = [str(sample) for sample in datasets]
data_loader = DataLoader(datasets, batch_size=8, num_workers=2)
optimizer = Adam(model.parameters(), lr=3e-5)
loss_fn = nn.CrossEntropyLoss()
for i, data in enumerate(data_loader):
optimizer.zero_grad()
tokens = tokenizer(data, return_tensors="pt", truncation=True, padding=True)
input_ids = tokens.input_ids.to(0)
labels = tokens.input_ids.to(world_size - 1)
lm_logits = model(input_ids)
shift_logits = lm_logits[..., :-1, :].contiguous()
shift_labels = labels[..., 1:].contiguous()
loss = nn.CrossEntropyLoss()(
shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1)
)
loss.backward()
optimizer.step()
if i % 10 == 0:
print(f"step: {i}, loss: {loss}")
if i == 100:
break
# !torchrun --nproc_per_node=2 ../src/gpipe.py
!python ../src/gpipe.py
###Output
Reusing dataset squad (/root/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453)
100%|████████████████████████████████████████████| 2/2 [00:00<00:00, 630.25it/s]
step: 0, loss: 6.080104351043701
step: 10, loss: 3.2553441524505615
step: 20, loss: 2.8128654956817627
step: 30, loss: 2.5403740406036377
step: 40, loss: 2.8414905071258545
step: 50, loss: 2.35159969329834
step: 60, loss: 2.546950578689575
step: 70, loss: 2.2460806369781494
step: 80, loss: 2.477443218231201
step: 90, loss: 2.9362146854400635
step: 100, loss: 2.823525905609131
###Markdown
3. 1F1B Pipelining (PipeDream)`PipeDream`, released by Microsoft, pipelines somewhat differently from `GPipe`. The approach is commonly called 1F1B: whereas GPipe runs all forward passes before starting the backward passes, `PipeDream` alternates between forward and backward passes.1F1B pipelining faces the following two challenges:1. Weight version managing2. Work partitioning 1) Weight version managingGPipe maintains only one weight version, but it periodically performs a pipeline flush. A pipeline flush is the step in which the computed gradients are applied to update the parameters. During a flush no forward or backward computation happens, which lowers processing efficiency.PipeDream keeps updating the parameters without any such flush, so neither the forward nor the backward passes ever sit idle. To do this, however, it must continuously manage several versions of the parameter state: if only the latest version were stored, the next layer could be updated while the previous layer's output is still being sent to it.To prevent this problem, several weight versions are stored and managed, and storing those weights takes up a lot of memory, so there is a trade-off here:- GPipe: memory-efficient, processing-inefficient- PipeDream: memory-inefficient, processing-efficient 2) Work PartitioningThe second challenge is how to split the neural network. Simply giving every stage the same number of layers is not always the best solution. What matters most to us is minimizing idle time, and for that each partition's running time must be similar. On top of that, parameter size, activation memory, and so on must also be considered.PipeDream finds the optimal partitioning strategy through profiling and optimization. 4. Variations of 1F1B PipeliningHere are two pipeline strategies that improve on PipeDream's 1F1B pipelining. 1) PipeDream 2BW (2-buffered weight update)PipeDream 2BW was introduced to fix PipeDream's memory inefficiency. The key idea is to perform gradient accumulation while pipelining: gradients are collected and applied in a single update, which resolves the memory-inefficiency problem. Unlike before, 2BW only needs to maintain two weight versions.![](../images/pipe_dream_2bw.png) 2) PipeDream FlushPipeDream Flush is a pipelining method that combines 1F1B with pipeline flushes. Because it flushes, its idle time is similar to GPipe's, but it **requires less activation memory** during the forward-backward passes. Since PipeDream Flush performs flushes, it does not need to manage multiple parameter versions; maintaining only a single set of weights makes it even more memory-efficient than PipeDream 2BW (the most memory-efficient of all the techniques introduced so far).![](../images/pipe_dream_flush.png)![](../images/pipe_dream_flush_2.png) Wait... what is activation memory, anyway?Most layers store the values from the forward pass until backward is called. Anyone who has used `torch.autograd.Function` will know this well: the forward pass saves its values in the `ctx` variable.
###Code
"""
Reference: https://pytorch.org/tutorials/beginner/examples_autograd/two_layer_net_custom_function.html
"""
import torch
class ReLU(torch.autograd.Function):
@staticmethod
def forward(ctx, input):
ctx.save_for_backward(input)
        # the input value is saved here for use in backward.
return input.clamp(min=0)
@staticmethod
def backward(ctx, grad_output):
input, = ctx.saved_tensors
grad_input = grad_output.clone()
grad_input[input < 0] = 0
return grad_input
###Output
_____no_output_____
###Markdown
This is because computing the gradients requires the values that were used during the forward pass. Consider the following example.![](../images/max_pooling.png)Above is a max-pooling operation and the computation of its gradient. During the backward pass, a (2, 2) tensor such as [[0.8, 1.2], [0.9, 0.5]] comes in as input. To recover the gradient matrix on the right from it, the (4, 4) tensor received in the forward pass is required, which is why that tensor is kept in memory. The memory needed to store the tensors used in the forward pass so that the backward pass can run is called activation memory. Now that we know what activation memory is, shall we try out PipeDream? **PipeDream Flush is implemented in DeepSpeed, Microsoft's distributed training library.** (See: https://github.com/microsoft/DeepSpeed/issues/1110) So let's use DeepSpeed. How to use the DeepSpeed commandOh, and before that, let's look at a very convenient feature `deepspeed` provides. Previously we used `torchrun --nproc_per_node=n OOO.py` for distributed launching, which was uncomfortably long. DeepSpeed provides the `deepspeed` and `ds` commands: - `ds --num_gpus=n OOO.py`- `deepspeed --num_gpus=n OOO.py`Entering one of the commands above works exactly like `torchrun`. From now on, we will use the `deepspeed` command for every distributed program.
###Code
"""
src/pipe_dream.py
"""
import deepspeed
import torch
import torch.nn as nn
from datasets import load_dataset
from deepspeed import PipelineModule
from torch.optim import Adam
from torch.utils.data import DataLoader
from tqdm import tqdm
from transformers import GPT2Tokenizer, GPT2LMHeadModel
from transformers.models.gpt2.modeling_gpt2 import GPT2Block as GPT2BlockBase
import torch.distributed as dist
class GPT2Preprocessing(nn.Module):
def __init__(self, config):
super().__init__()
self.embed_dim = config.hidden_size
self.wte = nn.Embedding(config.vocab_size, self.embed_dim)
self.wpe = nn.Embedding(config.max_position_embeddings, self.embed_dim)
self.drop = nn.Dropout(config.embd_pdrop)
def forward(self, input_ids):
input_shape = input_ids.size()
input_ids = input_ids.view(-1, input_shape[-1])
position_ids = torch.arange(
0, input_shape[-1], dtype=torch.long, device=input_ids.device
)
position_ids = position_ids.unsqueeze(0).view(-1, input_shape[-1])
inputs_embeds = self.wte(input_ids)
position_embeds = self.wpe(position_ids)
hidden_states = inputs_embeds + position_embeds
hidden_states = self.drop(hidden_states)
return hidden_states
class GPT2Block(GPT2BlockBase):
def forward(self, hidden_states):
hidden_states = super(GPT2Block, self).forward(
hidden_states=hidden_states,
)
return hidden_states[0]
class GPT2Postprocessing(nn.Module):
def __init__(self, config):
super().__init__()
self.ln_f = nn.LayerNorm(
config.hidden_size,
eps=config.layer_norm_epsilon,
)
self.lm_head = nn.Linear(
config.hidden_size,
config.vocab_size,
bias=False,
)
def forward(self, hidden_states):
hidden_states = self.ln_f(hidden_states)
lm_logits = self.lm_head(hidden_states)
return lm_logits
def create_model_from_pretrained(model_name):
pretrained = GPT2LMHeadModel.from_pretrained(model_name)
preprocess = GPT2Preprocessing(pretrained.config)
preprocess.wte.weight = pretrained.transformer.wte.weight
preprocess.wpe.weight = pretrained.transformer.wpe.weight
blocks = pretrained.transformer.h
for i, block in enumerate(blocks):
block.__class__ = GPT2Block
postprocess = GPT2Postprocessing(pretrained.config)
postprocess.ln_f.weight = pretrained.transformer.ln_f.weight
postprocess.ln_f.bias = pretrained.transformer.ln_f.bias
postprocess.lm_head.weight.data = pretrained.lm_head.weight.data.clone()
return nn.Sequential(preprocess, *blocks, postprocess)
def collate_fn(batch):
batch_encoding = tokenizer.pad(
{"input_ids": batch}, padding="max_length", max_length=1024
)
return batch_encoding.input_ids
def batch_fn(data):
input_ids = data
labels = data
return input_ids, labels
def loss_fn(logits, labels):
logits = logits[..., :-1, :].contiguous()
labels = labels[..., 1:].contiguous()
return nn.CrossEntropyLoss()(
logits.view(-1, logits.size(-1)),
labels.view(-1),
)
if __name__ == "__main__":
dist.init_process_group("nccl")
world_size, rank = dist.get_world_size(), dist.get_rank()
batch_size, train_steps = 16, 300
train_samples = batch_size * train_steps
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = PipelineModule(
create_model_from_pretrained(model_name="gpt2"),
loss_fn=loss_fn,
num_stages=world_size,
partition_method="type:GPT2Block"
        # partition_method lets you choose which layers to parallelize.
)
engine, optimizer, _, _ = deepspeed.initialize(
model=model,
optimizer=Adam(model.parameters(), lr=3e-5),
config={
"train_batch_size": batch_size,
"steps_per_print": 9999999,
# turn off: https://github.com/microsoft/DeepSpeed/issues/1119
},
)
engine.set_batch_fn(batch_fn)
datasets = load_dataset("squad").data["train"]["context"]
datasets = [str(sample) for i, sample in enumerate(datasets) if i < train_samples]
datasets = [
tokenizer(data, return_tensors="pt", max_length=1024).input_ids[0]
for data in tqdm(datasets)
]
data_loader = iter(
DataLoader(
sorted(datasets, key=len, reverse=True),
# uniform length batching
# https://mccormickml.com/2020/07/29/smart-batching-tutorial/
batch_size=batch_size,
num_workers=8,
collate_fn=collate_fn,
shuffle=False,
)
)
for i in range(train_steps):
loss = engine.train_batch(data_loader)
if i % 10 == 0 and rank == 0:
print(f"step: {i}, loss: {loss}")
!export NCCL_SHM_DISABLE=1
!ds --num_gpus=2 ../src/pipe_dream.py
###Output
[2021-11-02 03:51:58,603] [WARNING] [runner.py:122:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.
[2021-11-02 03:51:58,675] [INFO] [runner.py:360:main] cmd = /opt/conda/bin/python3.8 -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMCwgMV19 --master_addr=127.0.0.1 --master_port=29500 ../src/pipe_dream.py
[2021-11-02 03:51:59,749] [INFO] [launch.py:73:main] 0 NV_LIBNCCL_DEV_PACKAGE libnccl-dev=2.8.4-1+cuda11.1
[2021-11-02 03:51:59,749] [INFO] [launch.py:73:main] 0 NV_LIBNCCL_DEV_PACKAGE_VERSION 2.8.4-1
[2021-11-02 03:51:59,749] [INFO] [launch.py:73:main] 0 NCCL_VERSION 2.8.4-1
[2021-11-02 03:51:59,749] [INFO] [launch.py:73:main] 0 NV_LIBNCCL_DEV_PACKAGE_NAME libnccl-dev
[2021-11-02 03:51:59,749] [INFO] [launch.py:73:main] 0 NV_LIBNCCL_PACKAGE libnccl2=2.8.4-1+cuda11.1
[2021-11-02 03:51:59,750] [INFO] [launch.py:73:main] 0 NV_LIBNCCL_PACKAGE_NAME libnccl2
[2021-11-02 03:51:59,750] [INFO] [launch.py:73:main] 0 NV_LIBNCCL_PACKAGE_VERSION 2.8.4-1
[2021-11-02 03:51:59,750] [INFO] [launch.py:80:main] WORLD INFO DICT: {'localhost': [0, 1]}
[2021-11-02 03:51:59,750] [INFO] [launch.py:86:main] nnodes=1, num_local_procs=2, node_rank=0
[2021-11-02 03:51:59,750] [INFO] [launch.py:101:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0, 1]})
[2021-11-02 03:51:59,750] [INFO] [launch.py:102:main] dist_world_size=2
[2021-11-02 03:51:59,750] [INFO] [launch.py:104:main] Setting CUDA_VISIBLE_DEVICES=0,1
SEED_LAYERS=False BASE_SEED=1234 SEED_FN=None
Using topology: {ProcessCoord(pipe=0, data=0): 0, ProcessCoord(pipe=1, data=0): 1}
[2021-11-02 03:52:15,634] [INFO] [module.py:365:_partition_layers] Partitioning pipeline stages with method type:GPT2Block
stage=0 layers=7
0: GPT2Preprocessing
1: GPT2Block
2: GPT2Block
3: GPT2Block
4: GPT2Block
5: GPT2Block
6: GPT2Block
stage=1 layers=7
7: GPT2Block
8: GPT2Block
9: GPT2Block
10: GPT2Block
11: GPT2Block
12: GPT2Block
13: GPT2Postprocessing
loss: loss_fn
[2021-11-02 03:52:22,614] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed info: version=0.5.4, git-hash=unknown, git-branch=unknown
[2021-11-02 03:52:22,716] [INFO] [engine.py:204:__init__] DeepSpeed Flops Profiler Enabled: False
[2021-11-02 03:52:22,716] [INFO] [engine.py:848:_configure_optimizer] Removing param_group that has no 'params' in the client Optimizer
[2021-11-02 03:52:22,716] [INFO] [engine.py:854:_configure_optimizer] Using client Optimizer as basic optimizer
[2021-11-02 03:52:22,718] [INFO] [engine.py:870:_configure_optimizer] DeepSpeed Basic Optimizer = Adam
[2021-11-02 03:52:22,719] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed Final Optimizer = Adam
[2021-11-02 03:52:22,719] [INFO] [engine.py:596:_configure_lr_scheduler] DeepSpeed using client LR scheduler
[2021-11-02 03:52:22,719] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed LR Scheduler = None
[2021-11-02 03:52:22,719] [INFO] [logging.py:68:log_dist] [Rank 0] step=0, skipped=0, lr=[3e-05], mom=[(0.9, 0.999)]
[2021-11-02 03:52:22,719] [INFO] [config.py:940:print] DeepSpeedEngine configuration:
[2021-11-02 03:52:22,719] [INFO] [config.py:944:print] activation_checkpointing_config {
"partition_activations": false,
"contiguous_memory_optimization": false,
"cpu_checkpointing": false,
"number_checkpoints": null,
"synchronize_checkpoint_boundary": false,
"profile": false
}
[2021-11-02 03:52:22,720] [INFO] [config.py:944:print] aio_config ................... {'block_size': 1048576, 'queue_depth': 8, 'thread_count': 1, 'single_submit': False, 'overlap_events': True}
[2021-11-02 03:52:22,720] [INFO] [config.py:944:print] allreduce_always_fp32 ........ False
[2021-11-02 03:52:22,720] [INFO] [config.py:944:print] amp_enabled .................. False
[2021-11-02 03:52:22,720] [INFO] [config.py:944:print] amp_params ................... False
[2021-11-02 03:52:22,720] [INFO] [config.py:944:print] checkpoint_tag_validation_enabled True
[2021-11-02 03:52:22,720] [INFO] [config.py:944:print] checkpoint_tag_validation_fail False
[2021-11-02 03:52:22,720] [INFO] [config.py:944:print] curriculum_enabled ........... False
[2021-11-02 03:52:22,720] [INFO] [config.py:944:print] curriculum_params ............ False
[2021-11-02 03:52:22,720] [INFO] [config.py:944:print] dataloader_drop_last ......... False
[2021-11-02 03:52:22,720] [INFO] [config.py:944:print] disable_allgather ............ False
[2021-11-02 03:52:22,720] [INFO] [config.py:944:print] dump_state ................... False
[2021-11-02 03:52:22,720] [INFO] [config.py:944:print] dynamic_loss_scale_args ...... None
[2021-11-02 03:52:22,720] [INFO] [config.py:944:print] eigenvalue_enabled ........... False
[2021-11-02 03:52:22,720] [INFO] [config.py:944:print] eigenvalue_gas_boundary_resolution 1
[2021-11-02 03:52:22,720] [INFO] [config.py:944:print] eigenvalue_layer_name ........ bert.encoder.layer
[2021-11-02 03:52:22,720] [INFO] [config.py:944:print] eigenvalue_layer_num ......... 0
[2021-11-02 03:52:22,720] [INFO] [config.py:944:print] eigenvalue_max_iter .......... 100
[2021-11-02 03:52:22,720] [INFO] [config.py:944:print] eigenvalue_stability ......... 1e-06
[2021-11-02 03:52:22,720] [INFO] [config.py:944:print] eigenvalue_tol ............... 0.01
[2021-11-02 03:52:22,720] [INFO] [config.py:944:print] eigenvalue_verbose ........... False
[2021-11-02 03:52:22,720] [INFO] [config.py:944:print] elasticity_enabled ........... False
[2021-11-02 03:52:22,720] [INFO] [config.py:944:print] flops_profiler_config ........ {
"enabled": false,
"profile_step": 1,
"module_depth": -1,
"top_modules": 1,
"detailed": true,
"output_file": null
}
[2021-11-02 03:52:22,720] [INFO] [config.py:944:print] fp16_enabled ................. False
[2021-11-02 03:52:22,720] [INFO] [config.py:944:print] fp16_master_weights_and_gradients False
[2021-11-02 03:52:22,720] [INFO] [config.py:944:print] fp16_mixed_quantize .......... False
[2021-11-02 03:52:22,721] [INFO] [config.py:944:print] global_rank .................. 0
[2021-11-02 03:52:22,721] [INFO] [config.py:944:print] gradient_accumulation_steps .. 1
[2021-11-02 03:52:22,721] [INFO] [config.py:944:print] gradient_clipping ............ 0.0
[2021-11-02 03:52:22,721] [INFO] [config.py:944:print] gradient_predivide_factor .... 1.0
[2021-11-02 03:52:22,721] [INFO] [config.py:944:print] initial_dynamic_scale ........ 4294967296
[2021-11-02 03:52:22,721] [INFO] [config.py:944:print] loss_scale ................... 0
[2021-11-02 03:52:22,721] [INFO] [config.py:944:print] memory_breakdown ............. False
[2021-11-02 03:52:22,721] [INFO] [config.py:944:print] optimizer_legacy_fusion ...... False
[2021-11-02 03:52:22,721] [INFO] [config.py:944:print] optimizer_name ............... None
[2021-11-02 03:52:22,721] [INFO] [config.py:944:print] optimizer_params ............. None
[2021-11-02 03:52:22,721] [INFO] [config.py:944:print] pipeline ..................... {'stages': 'auto', 'partition': 'best', 'seed_layers': False, 'activation_checkpoint_interval': 0}
[2021-11-02 03:52:22,721] [INFO] [config.py:944:print] pld_enabled .................. False
[2021-11-02 03:52:22,721] [INFO] [config.py:944:print] pld_params ................... False
[2021-11-02 03:52:22,721] [INFO] [config.py:944:print] prescale_gradients ........... False
[2021-11-02 03:52:22,721] [INFO] [config.py:944:print] quantize_change_rate ......... 0.001
[2021-11-02 03:52:22,721] [INFO] [config.py:944:print] quantize_groups .............. 1
[2021-11-02 03:52:22,721] [INFO] [config.py:944:print] quantize_offset .............. 1000
[2021-11-02 03:52:22,721] [INFO] [config.py:944:print] quantize_period .............. 1000
[2021-11-02 03:52:22,721] [INFO] [config.py:944:print] quantize_rounding ............ 0
[2021-11-02 03:52:22,721] [INFO] [config.py:944:print] quantize_start_bits .......... 16
[2021-11-02 03:52:22,721] [INFO] [config.py:944:print] quantize_target_bits ......... 8
[2021-11-02 03:52:22,721] [INFO] [config.py:944:print] quantize_training_enabled .... False
[2021-11-02 03:52:22,721] [INFO] [config.py:944:print] quantize_type ................ 0
[2021-11-02 03:52:22,721] [INFO] [config.py:944:print] quantize_verbose ............. False
[2021-11-02 03:52:22,721] [INFO] [config.py:944:print] scheduler_name ............... None
[2021-11-02 03:52:22,721] [INFO] [config.py:944:print] scheduler_params ............. None
[2021-11-02 03:52:22,721] [INFO] [config.py:944:print] sparse_attention ............. None
[2021-11-02 03:52:22,721] [INFO] [config.py:944:print] sparse_gradients_enabled ..... False
[2021-11-02 03:52:22,722] [INFO] [config.py:944:print] steps_per_print .............. 9999999
[2021-11-02 03:52:22,722] [INFO] [config.py:944:print] tensorboard_enabled .......... False
[2021-11-02 03:52:22,722] [INFO] [config.py:944:print] tensorboard_job_name ......... DeepSpeedJobName
[2021-11-02 03:52:22,722] [INFO] [config.py:944:print] tensorboard_output_path ......
[2021-11-02 03:52:22,722] [INFO] [config.py:944:print] train_batch_size ............. 4
[2021-11-02 03:52:22,722] [INFO] [config.py:944:print] train_micro_batch_size_per_gpu 4
[2021-11-02 03:52:22,722] [INFO] [config.py:944:print] use_quantizer_kernel ......... False
[2021-11-02 03:52:22,722] [INFO] [config.py:944:print] wall_clock_breakdown ......... False
[2021-11-02 03:52:22,722] [INFO] [config.py:944:print] world_size ................... 1
[2021-11-02 03:52:22,722] [INFO] [config.py:944:print] zero_allow_untested_optimizer False
[2021-11-02 03:52:22,722] [INFO] [config.py:944:print] zero_config .................. {
"stage": 0,
"contiguous_gradients": true,
"reduce_scatter": true,
"reduce_bucket_size": 5.000000e+08,
"allgather_partitions": true,
"allgather_bucket_size": 5.000000e+08,
"overlap_comm": false,
"load_from_fp32_weights": true,
"elastic_checkpoint": true,
"offload_param": null,
"offload_optimizer": null,
"sub_group_size": 1.000000e+09,
"prefetch_bucket_size": 5.000000e+07,
"param_persistence_threshold": 1.000000e+05,
"max_live_parameters": 1.000000e+09,
"max_reuse_distance": 1.000000e+09,
"gather_fp16_weights_on_model_save": false,
"ignore_unused_parameters": true,
"round_robin_gradients": false,
"legacy_stage1": false
}
[2021-11-02 03:52:22,722] [INFO] [config.py:944:print] zero_enabled ................. False
[2021-11-02 03:52:22,722] [INFO] [config.py:944:print] zero_optimization_stage ...... 0
[2021-11-02 03:52:22,722] [INFO] [config.py:946:print] json = {
"train_batch_size": 4,
"steps_per_print": 9.999999e+06
}
|
lectures/math/linear-algebra-I-live.ipynb | ###Markdown
Linear Algebra ILinear algebra is a core topic in modern applied mathematics. Essentially every important method in statistics, data science, and machine learning is built on linear algebra. Indeed, deep neural networks, which we will discuss shortly, are built on a foundation of matrix multiplication coupled with simple nonlinear functions. In this lecture, we'll see how to perform several important operations in linear algebra using our good friend, Numpy. These operations include: - Matrix multiplication. - Exact and approximate solutions to linear systems. - Singular value and eigenvalue-eigenvector decompositions. Along the way, we'll show several examples from statistics and applied mathematics, including simulation of partial differential equations; least-squares regression; and image compression. This is probably the lecture in which things will get "the most mathematical." Comfort with concepts from Math 33A or equivalent will be helpful. If you're not familiar with these concepts, that's ok -- feel free to ask questions. We'll all get through this just fine.
###Code
# no fancy packages this time! just our good friends numpy and matplotlib
import numpy as np
from matplotlib import pyplot as plt
np.random.seed(1234)
###Output
_____no_output_____
###Markdown
Basic Matrix OperationsA *matrix* is a two-dimensional array of numbers.
###Code
# random matrix data to play with
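# a possible fill-in (this is a live lecture notebook; a hedged sketch of what gets typed):
A = np.random.rand(5, 5)
A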
###Output
_____no_output_____
###Markdown
Matrices admit several standard operations, including:
###Code
# scalar multiplication
# transposition
# application of transposition: a "symmetrized" version of A
# symmetric matrices satisfy A = A.T
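# a hedged sketch of the operations named above:
2 * A                        # scalar multiplication
A.T                          # transposition
A_sym = (A + A.T) / 2        # a "symmetrized" version of A
np.allclose(A_sym, A_sym.T)  # symmetric matrices satisfy A = A.T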
###Output
_____no_output_____
###Markdown
Inversion is an especially important matrix operation. The inverse $\mathbf{A}^{-1}$ of a square matrix $\mathbf{A}$ satisfies $\mathbf{A}\mathbf{A}^{-1} = \mathbf{I}$, where $\mathbf{I}$ is the identity matrix. We'll see how to multiply matrices and check this in a sec.
###Code
# inverse of A
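# a hedged sketch: numpy computes the inverse for us
A_inv = np.linalg.inv(A)
A_inv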
###Output
_____no_output_____
###Markdown
Matrix multiplication
###Code
# random vector
# matrix-vector product
# modern, convenient syntax -- same effect
# random matrix
# matrix-matrix product (same as A.dot(B))
# checking our inverse from earlier
# observe--finite precision arithmetic!
# looks like the identity matrix
# identity matrix
# check the result to within machine precision, entrywise
# aggregates the above
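# a hedged sketch covering the steps above:
v = np.random.rand(5)        # random vector
A.dot(v)                     # matrix-vector product
A @ v                        # modern, convenient syntax -- same effect
B = np.random.rand(5, 5)     # random matrix
A @ B                        # matrix-matrix product (same as A.dot(B))
A @ A_inv                    # close to the identity, up to finite precision arithmetic
I = np.eye(5)                # identity matrix
np.isclose(A @ A_inv, I)     # entrywise check to within machine precision
np.allclose(A @ A_inv, I)    # aggregates the entrywise check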
###Output
_____no_output_____
###Markdown
Application: Simulating Heat DiffusionMatrix multiplication provides a simple way to simulate 1-dimensional partial differential equations in discrete time. For example, the 1-d heat equation reads$$\frac{\partial f(x, t)}{\partial t} = \frac{\partial^2 f}{\partial x^2 }\;.$$In a discrete approximation, we can write this as $$f(x_i, t + 1) - f(x_i, t) \approx \epsilon\left[f(x_{i+1}, t) - 2f(x_i, t) + f(x_{i-1}, t)\right]\;,$$where $\epsilon$ is a small positive number that controls the timescale of the approximation. We can write the righthand side of this equation as a matrix-vector product:- Let $\mathbf{v}(t)$ be the values of $f(\mathbf{x}, t)$ -- that is, $v_i = f(x_i)$. - Let $\mathbf{A}$ be the *transition operator*: $$\mathbf{A} = \left[\begin{matrix} -2 & 1 & 0 & \cdots& 0& 0 & 0\\ 1 & -2 & 1 & \cdots & 0& 0 & 0\\ 0 & 1 & -2 & \cdots & 0& 0 & 0\\ \vdots & \vdots &\vdots & \ddots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & -2 & 1 & 0\\ 0 & 0 & 0 & \cdots & 1 & -2 & 1\\ 0 & 0 & 0 & \cdots & 0 & 1 & -2 \\\end{matrix}\right]$$This transition operator has the property that $[\mathbf{A}\mathbf{v}]_i$ is equal to the righthand side of the discrete approximation, for $i = 2,\ldots,n-1$. That is, we can write $$\mathbf{v}(t+1) = \mathbf{v}(t) + \epsilon \mathbf{A}\mathbf{v}(t) = (\mathbf{I} + \epsilon\mathbf{A})\mathbf{v}(t)$$Note that there are issues at the boundary (i.e. where $i = 1$ or $i = n$), and further modeling decisions must be made in order to handle these. In the transition operator above, we are effectively allowing heat to escape at the boundaries. To simulate heat diffusion in Python, we can just build this transition operator as a matrix (`numpy` array) and then iterate this update.
###Code
# size of simulation
n = 201
# Construct A using the handy np.diag() function
# construct initial condition: 1 unit of heat at midpoint.
# simulate diffusion and plot, time intervals of 50
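# a hedged sketch of the simulation (epsilon and the plotting cadence are assumptions):
A = np.diag(-2 * np.ones(n)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
epsilon = 0.2
v = np.zeros(n)
v[n // 2] = 1.0                      # 1 unit of heat at the midpoint
for t in range(501):
    if t % 50 == 0:
        plt.plot(v)                  # snapshot every 50 time steps
    v = v + epsilon * (A @ v)        # v(t+1) = (I + epsilon*A) v(t)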
###Output
_____no_output_____
###Markdown
We observe the bell-shaped curve (Gaussian distribution) characteristic of 1d diffusion, just as we'd expect. Note that once we constructed the discretized approximation, we were able to perform the simulation in Python purely via linear algebra! Solving Linear EquationsOne of the most fundamental tasks in applied mathematics is solving linear systems of the form $$\mathbf{A}\mathbf{x} = \mathbf{b}\;,$$where $\mathbf{A} \in \mathbb{R}^{n \times m}$, $\mathbf{x} \in \mathbb{R}^{m}$, and $\mathbf{b} \in \mathbb{R}^{n}$. This equation represents a set of linear relationships between variables, a single one of which looks like this: $$a_{i1}x_1 + a_{i2}x_2 + \cdots + a_{im}x_m = b_i\;.$$Collectively, the equations in a linear system define a "flat space" called an *affine subspace* of $\mathbb{R}^m$. > 1. When $\mathbf{A}$ is square and of full rank (determinant nonzero), this equation has a unique solution. > 2. When $\mathbf{A}$ is not square or not of full rank, then this equation may have either 0 or infinitely many solutions. In Case 1 ("the good case"), we can use a simple built-in `numpy` method: `np.linalg.solve`.
###Code
# solve A@x = b for x
# manual approach (not as efficient)
# compute the inverse explicitly and
# premultiply by it
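# a hedged sketch with a fresh square system (random matrices are almost surely full rank):
A = np.random.rand(5, 5)
b = np.random.rand(5)
x = np.linalg.solve(A, b)            # the preferred approach
x_manual = np.linalg.inv(A) @ b      # explicit inverse: correct, but less efficient
np.allclose(x, x_manual)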
###Output
_____no_output_____
###Markdown
In Case 2 ("the bad case"), in which the matrix is either not of full rank or not square, we need to resort to subtler means. Suppose that the matrix $\mathbf{A}$ has more rows than columns: In this case, there usually are no exact solutions to the equation $\mathbf{A}\mathbf{x} = \mathbf{b}$. If we try the method from before, `numpy` will complain at us: One of the most common ways to approach this kind of problem is to compute the *least-squares approximation*, which is the minimizer $\mathbf{x}$ of the function $$f(\mathbf{x}) = \lVert \mathbf{A}\mathbf{x} - \mathbf{b} \rVert^2\; = \sum_i \left(b_i - \sum_j a_{ij} x_{j}\right)^2.$$Note that, if $\mathbf{b} \in \text{range}(\mathbf{A})$; that is, if $\mathbf{b}$ is one of those lucky values such that $\mathbf{A}\mathbf{x} = \mathbf{b}$ does indeed have an exact solution, then we can choose $\mathbf{x}$ such that $f(\mathbf{x}) = 0$ by finding the exact solution. Otherwise, we need to satisfy ourselves with an approximation, i.e. a value $\mathbf{x}$ such that $f(\mathbf{x}) > 0$.
###Code
# compute the solution x, error f(x), rank of A, and singular values of A
# approximate solution
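# a hedged sketch: a tall system with (usually) no exact solution
A = np.random.rand(10, 3)            # more rows than columns
b = np.random.rand(10)
x, err, rank, sv = np.linalg.lstsq(A, b, rcond=None)
x                                    # the least-squares (approximate) solution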
###Output
_____no_output_____
###Markdown
Application: Linear Regression, Several WaysActually, the problem of minimizing $f(\mathbf{x}) = \lVert \mathbf{A}\mathbf{x} - \mathbf{b} \rVert^2$ has a special name in statistics -- it's linear regression! Specifically, it's *ordinary least-squares multivariate linear regression*. It's usually written like this: $$f(\beta) = \lVert \mathbf{X}\beta - \mathbf{y} \rVert^2\;,$$where $\mathbf{X}$ is the matrix of observations of the independent variables, and $\mathbf{y}$ is the vector of observations of the dependent variable. $\beta$ is the vector of coefficients, and it's the thing that we want to estimate. We do this by finding an estimate $\hat{\beta}$ that makes $f(\hat{\beta})$ small. By the way, if you are familiar with the topic of *loss functions* in machine learning, this function $f$ is called the *square-error loss* for estimating $\mathbf{y}$, and is probably the most important of all loss functions for regression tasks. Let's use least-squares approximation to perform 1d linear regression "by hand":
###Code
# formally, x needs to be 2d for this to work
# so we give it an extra dimension using reshape
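# a hedged sketch with made-up 1d data:
x = np.random.rand(50)
y = 2 * x + 0.1 * np.random.randn(50)
X = x.reshape(-1, 1)                 # lstsq expects a 2d design matrix
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
plt.scatter(x, y)
plt.plot(x, X @ beta, color='red')   # fitted line (no intercept term here)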
###Output
_____no_output_____
eurovision_demo.ipynb | ###Markdown
Catchy feature extraction OutlineThis notebook shows how to compute features for a set of presegmented audiofiles.Extracting catchy features from a folder of such files involves three steps: 1. Base feature extractionHere, basic, familiar feature time series are extracted. The toolbox currently implements (wrappers for) MFCC, chroma, melody and perceptual feature extraction.This part of the toolbox relies on a lot of external code, but it's also easy to work around: if you want to use other features, just save them to a set of csv files (1 per song section--see below) in some folder (1 per feature). 2. Pitch descriptor extractionThis part computes mid-level pitch descriptors from chroma and/or melody information computed in step one.Essentially an implementation of several kinds of audio bigram descriptors.See also [1] and [2]. 3. Feature transformsCompute 'first' and 'second order' aggregates of any of the features computed in step 1 and step 2.See [2].[1] Van Balen, J., Wiering, F., & Veltkamp, R. (2015). Audio Bigrams as a Unifying Model of Pitch-based Song Description. In Proc. 11th International Symposium on Computer Music Multidisciplinary Research (CMMR). Plymouth, United Kingdom.[2] Van Balen, J., Burgoyne, J. A., Bountouridis, D., Müllensiefen, D., & Veltkamp, R. (2015). Corpus Analysis Tools for Computational Hook Discovery. In Proc. 16th International Society for Music Information Retrieval Conference (pp. 227–233). Malaga, Spain. DatasetLet's import some audio data and see how all of this works.The CATCHY toolbox was designed for the analysis of a corpus of song *sections*.CATCHY therefore requires data to be represented as a python dictionary of song section paths, grouped by song id.`utils.dataset_from_dir()` makes such a dictionary given a folder of audio files, labeled `songid-sectionid.ext` where `ext` can be `wav` or `mp3`
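###Markdown
Before anything else, the CATCHY modules used below (a hedged addition: these imports are assumed to work from within the toolbox directory, and the module names match the calls made throughout this notebook).
###Code
import utils
import base_features
import pitch_features
import feature_transforms
###Output
_____no_output_____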
###Code
audio_dir = '../Cogitch/Audio/Eurovision/'
euro_dict = utils.dataset_from_dir(audio_dir)
###Output
_____no_output_____
###Markdown
Base featuresBasic feature time series can be extracted using the `base_features` module.The function `compute_and_write()` provides a convenient wrapper around most of the functionality in this module, reading audio and computing a set of basic, useful features.The results will be written to a set of csv files in `data_dir`.Currently, a directory must be made for each of the features beforehand, as sketched in the cell below.
###Code
data_dir = '../Cogitch/Data/Eurovision/'
# base_features.compute_and_write(audio_dir, data_dir)
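# the wrapper writes one csv per song section into one subdirectory per feature;
# a hedged sketch of preparing those directories (the feature names listed here
# are assumptions based on the features referenced later in this notebook)
import os
for name in ['mfcc', 'melody', 'hpcp', 'loudness', 'roughness', 'sharpness']:
    os.makedirs(os.path.join(data_dir, name), exist_ok=True)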
###Output
_____no_output_____
###Markdown
Pitch FeaturesThe `pitch_features` module provides code to compute, from the variable-length base features computed above, fixed-size melody and harmony descriptors for each of the song sections.`pitch_features.compute_and_write()` again provides a high-level wrapper function.The features that it should compute must be provided in a dictionary of `(feature_function, parameters)` tuples, keyed by a feature name of your choice.The result is again stored in a set of csv files, in directories named after the features provided.
###Code
pitch_features.melody_dir = data_dir + 'melody/'
pitch_features.chroma_dir = data_dir + 'hpcp/'
features = {'pitchhist3': (pitch_features.get_pitchhist3, {}),
'pitchhist3_int': (pitch_features.get_pitchhist3, {'intervals': True}),
'chromahist3': (pitch_features.get_chromahist3, {}),
'chromahist3_int': (pitch_features.get_chromahist3, {'intervals': True}),
'harmonisation': (pitch_features.get_harmonisation, {}),
'harmonisation_int': (pitch_features.get_harmonisation, {'intervals': True}) }
# pitch_features.compute_and_write(data_dir, features=features)
###Output
_____no_output_____
###Markdown
Feature TransformsThe `feature_transforms` module allows you to compute first- and second-order features based on any of the features above. The transforms to be applied must be passed to the `compute()` function using a special syntax. The syntax states a feature, a reference corpus, and an aggregation function.From the doc string: - feature name and aggregates are separated by dots, e.g. 'mfcc.entropy' - feature name is first and contains no dots - first order and second order aggregates are separated by one of 2 keywords: 'corpus' or 'song' Ex.: >>> parse_features('loudness.mean.song.pdf.log') ('loudness', ['mean'], ['song', 'pdf', 'log']) The above shows how the transform names are read. In the example: `loudness.mean.song.pdf.log` computes the log of the probability density function of the distribution of the loudness features' mean within the song (i.e., across the sections of the song).The result is returned in a Pandas dataframe.
###Code
feature_transforms.data_dir = data_dir
###Output
_____no_output_____
###Markdown
The above tells the module where to look for base features.Below, a set of tested first and second-order features is computed for the full dataset.
###Code
features = [
'harmonisation_int.corpus.information',
'harmonisation_int.corpus.tau',
'harmonisation_int.song.information',
'harmonisation_int.song.tau',
'harmonisation.normentropy.minlog',
'harmonisation.normentropy.minlog.corpus.pdf.rank.logit',
'harmonisation.normentropy.minlog.song.pdf.rank.logit',
'chromahist3_int.corpus.information',
'chromahist3_int.corpus.tau',
'chromahist3_int.song.information',
'chromahist3_int.song.tau',
'chromahist3.normentropy.minlog',
'chromahist3.normentropy.minlog.corpus.pdf.rank.logit',
'chromahist3.normentropy.minlog.song.pdf.rank.logit',
'loudness.mean',
'loudness.mean.corpus.pdf.rank.logit',
'loudness.mean.song.pdf.rank.logit',
'loudness.std',
'loudness.std.corpus.pdf.rank.logit',
'loudness.std.song.pdf.rank.logit',
'pitchhist3_int.corpus.information',
'pitchhist3_int.corpus.tau',
'pitchhist3_int.song.information',
'pitchhist3_int.song.tau',
'pitchhist3.normentropy.minlog',
'pitchhist3.normentropy.minlog.corpus.pdf.rank.logit',
'pitchhist3.normentropy.minlog.song.pdf.rank.logit',
'mfcc.mean.corpus.indeppdf.rank.logit',
'mfcc.mean.song.indeppdf.rank.logit',
'mfcc.totvar.log',
'mfcc.totvar.log.corpus.pdf.rank.logit',
'mfcc.totvar.log.song.pdf.rank.logit',
'melody.mean',
'melody.mean.corpus.pdf.rank.logit',
'melody.mean.song.pdf.rank.logit',
'melody.std.log',
'melody.std.log.corpus.pdf.rank.logit',
'melody.std.log.song.pdf.rank.logit',
'roughness.mean.log',
'roughness.mean.log.corpus.pdf.rank.logit',
'roughness.mean.log.song.pdf.rank.logit',
'sharpness.mean',
'sharpness.mean.corpus.pdf.rank.logit',
'sharpness.mean.song.pdf.rank.logit']
data = feature_transforms.compute(euro_dict, features)
###Output
_____no_output_____
###Markdown
OutputFinally, output data to a single CSV file for use in another notebook or R.
###Code
# data.hist(figsize=(28,21));
data.to_csv('euro_features.csv', index=None)
###Output
_____no_output_____ |
gc3_query/var/scratchpad/IaaSRequestsClient.ipynb | ###Markdown
Caching Authentication Cookie* [https://medium.com/the-python-corner/how-to-make-your-code-faster-by-using-a-cache-in-python-fb169fbcbb0b]* [http://cachetools.readthedocs.io/en/latest/]
###Code
import cachetools
from cachetools import cached, TTLCache
cachetools.TTLCache?
###Output
_____no_output_____ |
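###Markdown
A minimal sketch (an addition, not from the original notebook) of caching an authentication cookie with `TTLCache`: the cookie is fetched once and reused until the TTL expires. The body of `get_auth_cookie` is a placeholder for the real authentication call.
###Code
import time
# keep at most one cached cookie; refetch after ttl seconds
@cached(cache=TTLCache(maxsize=1, ttl=300))
def get_auth_cookie():
    print('fetching a fresh cookie...')  # stands in for the real auth request
    return {'issued_at': time.time()}
get_auth_cookie()  # performs the "fetch"
get_auth_cookie()  # served from the cache until the TTL expires
###Output
_____no_output_____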
Analyze Datasets and Train ML Models using AutoML/Week 3 Use Automated Machine Learning to train a Text Classifier/C1_W3_Assignment.ipynb | ###Markdown
Train a model with Amazon SageMaker Autopilot IntroductionIn this lab, you will use Amazon SageMaker Autopilot to train a BERT-based natural language processing (NLP) model. The model will analyze customer feedback and classify the messages into positive (1), neutral (0) and negative (-1) sentiment. Table of Contents- [1. Review transformed dataset](c1w3-1.)- [2. Configure the Autopilot job](c1w3-2.) - [2.1. Upload data to S3 bucket](c1w3-2.1.) - [2.2. S3 output for generated assets](c1w3-2.2.) - [2.3. Configure the Autopilot job](c1w3-2.3.) - [Exercise 1](c1w3-ex-1)- [3. Launch the Autopilot job](c1w3-3.) - [Exercise 2](c1w3-ex-2)- [4. Track Autopilot job progress](c1w3-4.) - [4.1. Autopilot job description](c1w3-4.1.) - [4.2. Autopilot job status](c1w3-4.2.) - [4.3. Review the SageMaker processing jobs](c1w3-4.3.) - [4.4. Wait for the data analysis step to finish](c1w3-4.4.) - [4.5. View generated notebooks](c1w3-4.5.) - [Exercise 3](c1w3-ex-3) - [Exercise 4](c1w3-ex-4)- [5. Feature engineering](c1w3-5.) - [Exercise 5](c1w3-ex-5)- [6. Model training and tuning](c1w3-6.) - [6.1. Wait for training and tuning](c1w3-6.1.) - [Exercise 6](c1w3-ex-6) - [6.2. Compare model candidates](c1w3-6.2.) - [Exercise 7](c1w3-ex-7) - [6.3. Review best candidate](c1w3-6.3.) - [Exercise 8](c1w3-ex-8)- [7. Review all output in S3 bucket](c1w3-7.)- [8. Deploy and test best candidate model](c1w3-8.) - [8.1. Deploy best candidate model](c1w3-8.1.) - [8.2. Test the model](c1w3-8.2.) Amazon SageMaker Autopilot automatically trains and tunes the best machine learning models for classification or regression based on your data, while allowing you to maintain full control and visibility.SageMaker Autopilot will inspect the raw dataset, apply feature processors, pick the best set of algorithms, train and tune multiple models, and then rank the models based on performance - all with just a few clicks. Autopilot transparently generates a set of Python scripts and notebooks for a complete end-to-end pipeline including data analysis, candidate generation, feature engineering, and model training/tuning.A SageMaker Autopilot job consists of the following high-level steps: * _Data analysis_ where the data is summarized and analyzed to determine which feature engineering techniques, hyper-parameters, and models to explore.* _Feature engineering_ where the data is scrubbed, balanced, combined, and split into train and validation.* _Model training and tuning_ where the top performing features, hyper-parameters, and models are selected and trained.These re-usable scripts and notebooks give us full visibility into how the model candidates were created. Since Autopilot integrates natively with SageMaker Studio, we can visually explore the different models generated by SageMaker Autopilot.SageMaker Autopilot can be used by people without machine learning experience to automatically train a model from a dataset. Additionally, experienced developers can use Autopilot to train a baseline model from which they can iterate and manually improve.Autopilot is available through the SageMaker Studio UI and AWS Python SDK. In this notebook, you will use the AWS Python SDK to train a series of text-classification models and deploy the model with the highest accuracy.For more details on Autopilot, have a look at this [**Amazon Science Publication**](https://www.amazon.science/publications/amazon-sagemaker-autopilot-a-white-box-automl-solution-at-scale).
Use case: analyze customer sentimentCustomer feedback appears across many channels including social media and partner websites. As a company, you want to capture this valuable product feedback to spot negative trends and improve the situation, if needed. Here you will train a model to classify the feedback messages into positive (1), neutral (0) and negative (-1) sentiment.First, let's install and import required modules.
###Code
# please ignore warning messages during the installation
!pip install --disable-pip-version-check -q sagemaker==2.35.0
import boto3
import sagemaker
import pandas as pd
import numpy as np
import botocore
import time
import json
config = botocore.config.Config(user_agent_extra='dlai-pds/c1/w3')
# low-level service client of the boto3 session
sm = boto3.client(service_name='sagemaker',
config=config)
sm_runtime = boto3.client('sagemaker-runtime',
config=config)
sess = sagemaker.Session(sagemaker_client=sm,
sagemaker_runtime_client=sm_runtime)
bucket = sess.default_bucket()
role = sagemaker.get_execution_role()
region = sess.boto_region_name
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format='retina'
###Output
_____no_output_____
###Markdown
1. Review transformed datasetLet's transform the dataset into a format that Autopilot recognizes. Specifically, a comma-separated file of `label,features` as shown here:
```
sentiment,review_body
-1,"this is bad"
0,"this is ok"
1,"this is great"
...
```
Sentiment is one of three classes: negative (-1), neutral (0), or positive (1). Autopilot requires that the target variable, `sentiment`, come first and that the set of features, just `review_body` in this case, come next.
###Code
!aws s3 cp 's3://dlai-practical-data-science/data/balanced/womens_clothing_ecommerce_reviews_balanced.csv' ./
path = './womens_clothing_ecommerce_reviews_balanced.csv'
df = pd.read_csv(path, delimiter=',')
df.head()
path_autopilot = './womens_clothing_ecommerce_reviews_balanced_for_autopilot.csv'
df[['sentiment', 'review_body']].to_csv(path_autopilot,
sep=',',
index=False)
###Output
_____no_output_____
###Markdown
2. Configure the Autopilot job 2.1. Upload data to S3 bucket
###Code
autopilot_train_s3_uri = sess.upload_data(bucket=bucket, key_prefix='autopilot/data', path=path_autopilot)
autopilot_train_s3_uri
###Output
_____no_output_____
###Markdown
Check the existence of the dataset in this S3 bucket folder:
###Code
!aws s3 ls $autopilot_train_s3_uri
###Output
2021-08-15 22:03:05 2253749 womens_clothing_ecommerce_reviews_balanced_for_autopilot.csv
###Markdown
2.2. S3 output for generated assetsSet the S3 output path for the Autopilot outputs. This includes Jupyter notebooks (analysis), Python scripts (feature engineering), and trained models.
###Code
model_output_s3_uri = 's3://{}/autopilot'.format(bucket)
print(model_output_s3_uri)
###Output
s3://sagemaker-us-east-1-575959626008/autopilot
###Markdown
2.3. Configure the Autopilot job Create the Autopilot job name.
###Code
import time
timestamp = int(time.time())
auto_ml_job_name = 'automl-dm-{}'.format(timestamp)
###Output
_____no_output_____
###Markdown
When configuring our Autopilot job, you need to specify the maximum number of candidates, `max_candidates`, to explore as well as the input/output S3 locations and target column to predict. In this case, you want to predict `sentiment` from the review text. Exercise 1Configure the Autopilot job.**Instructions**: Create an instance of the `sagemaker.automl.automl.AutoML` estimator class passing the required configuration parameters. The target attribute for predictions here is `sentiment`.
```python
automl = sagemaker.automl.automl.AutoML(
    target_attribute_name='...', # the name of the target attribute for predictions
    base_job_name=..., # Autopilot job name
    output_path=..., # output data path
    max_candidates=..., # maximum number of candidates
    sagemaker_session=sess,
    role=role,
    max_runtime_per_training_job_in_seconds=1200,
    total_job_runtime_in_seconds=7200
)
```
###Code
max_candidates = 3
automl = sagemaker.automl.automl.AutoML(
### BEGIN SOLUTION - DO NOT delete this comment for grading purposes
target_attribute_name='sentiment', # Replace None
base_job_name=auto_ml_job_name, # Replace None
output_path=model_output_s3_uri, # Replace None
### END SOLUTION - DO NOT delete this comment for grading purposes
max_candidates=max_candidates,
sagemaker_session=sess,
role=role,
max_runtime_per_training_job_in_seconds=1200,
total_job_runtime_in_seconds=7200
)
###Output
_____no_output_____
###Markdown
3. Launch the Autopilot job Exercise 2Launch the Autopilot job.**Instructions**: Call the `fit` function of the configured estimator, passing the S3 bucket input data path and the Autopilot job name.
```python
automl.fit(
    ..., # input data path
    job_name=auto_ml_job_name, # Autopilot job name
    wait=False,
    logs=False
)
```
###Code
automl.fit(
### BEGIN SOLUTION - DO NOT delete this comment for grading purposes
autopilot_train_s3_uri, # Replace None
### END SOLUTION - DO NOT delete this comment for grading purposes
job_name=auto_ml_job_name,
wait=False,
logs=False
)
###Output
_____no_output_____
###Markdown
4. Track Autopilot job progressOnce the Autopilot job has been launched, you can track the job progress directly from the notebook using the SDK capabilities. 4.1. Autopilot job descriptionThe `describe_auto_ml_job` function of the Amazon SageMaker service returns information about the AutoML job in dictionary format. You can review the response syntax and response elements in the [**documentation**](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_DescribeAutoMLJob.html).
###Code
job_description_response = automl.describe_auto_ml_job(job_name=auto_ml_job_name)
###Output
_____no_output_____
###Markdown
4.2. Autopilot job statusTo track the job progress you can use two response elements: `AutoMLJobStatus` and `AutoMLJobSecondaryStatus`, which correspond to the primary (Completed | InProgress | Failed | Stopped | Stopping) and secondary (AnalyzingData | FeatureEngineering | ModelTuning etc.) job states respectively. To see if the AutoML job has started, you can check the existence of the `AutoMLJobStatus` and `AutoMLJobSecondaryStatus` elements in the job description response.In this notebook, you will use the following scheme to track the job progress:
```python
# check if the job is still at a certain stage
while [check 'AutoMLJobStatus' and 'AutoMLJobSecondaryStatus'] in job_description_response:
    # update the job description response
    job_description_response = automl.describe_auto_ml_job(job_name=auto_ml_job_name)
    # print the message that the Autopilot job is in the stage ...
    print([message])
    # give it a time step before checking the status again
    time.sleep(15)

print("Autopilot job complete...")
```
###Code
while 'AutoMLJobStatus' not in job_description_response.keys() and 'AutoMLJobSecondaryStatus' not in job_description_response.keys():
job_description_response = automl.describe_auto_ml_job(job_name=auto_ml_job_name)
print('[INFO] Autopilot job has not yet started. Please wait. ')
# function `json.dumps` encodes JSON string for printing.
print(json.dumps(job_description_response, indent=4, sort_keys=True, default=str))
print('[INFO] Waiting for Autopilot job to start...')
    time.sleep(15)
print('[OK] AutoML job started.')
###Output
[OK] AutoML job started.
###Markdown
4.3. Review the SageMaker processing jobsAutopilot creates the required SageMaker processing jobs during the run:* The first processing job (data splitter) checks the data sanity, performs stratified shuffling, and splits the data into training and validation sets. * The second processing job (candidate generator) first streams through the data to compute statistics for the dataset, then uses these statistics to identify the problem type and the possible types of every column-predictor: numeric, categorical, natural language, etc.
###Code
from IPython.core.display import display, HTML
display(HTML('<b>Review <a target="blank" href="https://console.aws.amazon.com/sagemaker/home?region={}#/processing-jobs/">processing jobs</a></b>'.format(region)))
###Output
_____no_output_____
###Markdown
You can review the updates on that page during the run of the Autopilot job. 4.4. Wait for the data analysis step to finishHere you will use the same scheme as above to check the completion of the data analysis step. This step can be identified with the (primary) job status value `InProgress` and secondary job status values `Starting` and then `AnalyzingData`. _This cell will take approximately 10 minutes to run._
###Code
%%time
job_status = job_description_response['AutoMLJobStatus']
job_sec_status = job_description_response['AutoMLJobSecondaryStatus']
if job_status not in ('Stopped', 'Failed'):
while job_status in ('InProgress') and job_sec_status in ('Starting', 'AnalyzingData'):
job_description_response = automl.describe_auto_ml_job(job_name=auto_ml_job_name)
job_status = job_description_response['AutoMLJobStatus']
job_sec_status = job_description_response['AutoMLJobSecondaryStatus']
print(job_status, job_sec_status)
time.sleep(15)
print('[OK] Data analysis phase completed.\n')
print(json.dumps(job_description_response, indent=4, sort_keys=True, default=str))
###Output
InProgress Starting
InProgress AnalyzingData
InProgress AnalyzingData
InProgress AnalyzingData
InProgress AnalyzingData
InProgress AnalyzingData
InProgress AnalyzingData
InProgress AnalyzingData
InProgress AnalyzingData
InProgress AnalyzingData
InProgress AnalyzingData
InProgress AnalyzingData
InProgress AnalyzingData
InProgress AnalyzingData
InProgress AnalyzingData
InProgress AnalyzingData
InProgress AnalyzingData
InProgress AnalyzingData
InProgress AnalyzingData
InProgress AnalyzingData
InProgress AnalyzingData
InProgress AnalyzingData
InProgress AnalyzingData
InProgress AnalyzingData
InProgress AnalyzingData
InProgress AnalyzingData
InProgress AnalyzingData
InProgress AnalyzingData
InProgress AnalyzingData
InProgress AnalyzingData
InProgress AnalyzingData
InProgress AnalyzingData
InProgress AnalyzingData
InProgress AnalyzingData
InProgress AnalyzingData
InProgress AnalyzingData
InProgress AnalyzingData
InProgress FeatureEngineering
[OK] Data analysis phase completed.
{
"AutoMLJobArn": "arn:aws:sagemaker:us-east-1:575959626008:automl-job/automl-dm-1629064985",
"AutoMLJobArtifacts": {
"CandidateDefinitionNotebookLocation": "s3://sagemaker-us-east-1-575959626008/autopilot/automl-dm-1629064985/sagemaker-automl-candidates/automl-dm-1629064985-pr-1-464b0a135bcf447ca16531bda3ecc4992abcf/notebooks/SageMakerAutopilotCandidateDefinitionNotebook.ipynb",
"DataExplorationNotebookLocation": "s3://sagemaker-us-east-1-575959626008/autopilot/automl-dm-1629064985/sagemaker-automl-candidates/automl-dm-1629064985-pr-1-464b0a135bcf447ca16531bda3ecc4992abcf/notebooks/SageMakerAutopilotDataExplorationNotebook.ipynb"
},
"AutoMLJobConfig": {
"CompletionCriteria": {
"MaxAutoMLJobRuntimeInSeconds": 7200,
"MaxCandidates": 3,
"MaxRuntimePerTrainingJobInSeconds": 1200
},
"SecurityConfig": {
"EnableInterContainerTrafficEncryption": false
}
},
"AutoMLJobName": "automl-dm-1629064985",
"AutoMLJobSecondaryStatus": "FeatureEngineering",
"AutoMLJobStatus": "InProgress",
"CreationTime": "2021-08-15 22:03:05.661000+00:00",
"GenerateCandidateDefinitionsOnly": false,
"InputDataConfig": [
{
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": "s3://sagemaker-us-east-1-575959626008/autopilot/data/womens_clothing_ecommerce_reviews_balanced_for_autopilot.csv"
}
},
"TargetAttributeName": "sentiment"
}
],
"LastModifiedTime": "2021-08-15 22:12:25.540000+00:00",
"OutputDataConfig": {
"S3OutputPath": "s3://sagemaker-us-east-1-575959626008/autopilot"
},
"ResolvedAttributes": {
"AutoMLJobObjective": {
"MetricName": "Accuracy"
},
"CompletionCriteria": {
"MaxAutoMLJobRuntimeInSeconds": 7200,
"MaxCandidates": 3,
"MaxRuntimePerTrainingJobInSeconds": 1200
},
"ProblemType": "MulticlassClassification"
},
"ResponseMetadata": {
"HTTPHeaders": {
"content-length": "1720",
"content-type": "application/x-amz-json-1.1",
"date": "Sun, 15 Aug 2021 22:12:24 GMT",
"x-amzn-requestid": "e7c8129f-1292-4ab5-9e1f-30097a16f6c1"
},
"HTTPStatusCode": 200,
"RequestId": "e7c8129f-1292-4ab5-9e1f-30097a16f6c1",
"RetryAttempts": 0
},
"RoleArn": "arn:aws:iam::575959626008:role/c21581a406814l930566t1w5759-SageMakerExecutionRole-1OKLPNAWEHWZB"
}
CPU times: user 475 ms, sys: 34.2 ms, total: 510 ms
Wall time: 9min 33s
###Markdown
Wait for Autopilot to finish generating the notebooks. 4.5. View generated notebooksOnce data analysis is complete, SageMaker Autopilot generates two notebooks: * Data exploration* Candidate definitionNotebooks are included in the AutoML job artifacts generated during the run. Before checking the existence of the notebooks, you can check if the artifacts have been generated. Exercise 3Check if the Autopilot job artifacts have been generated.**Instructions**: Use the status check scheme described above. The generation of artifacts can be identified by the existence of the `AutoMLJobArtifacts` element in the keys of the job description response.
###Code
### BEGIN SOLUTION - DO NOT delete this comment for grading purposes
# get the information about the running Autopilot job
job_description_response = automl.describe_auto_ml_job(job_name=auto_ml_job_name) # Replace None
# keep in the while loop until the Autopilot job artifacts will be generated
while 'AutoMLJobArtifacts' not in job_description_response.keys(): # Replace all None
# update the information about the running Autopilot job
job_description_response = automl.describe_auto_ml_job(job_name=auto_ml_job_name) # Replace None
### END SOLUTION - DO NOT delete this comment for grading purposes
print('[INFO] Autopilot job has not yet generated the artifacts. Please wait. ')
print(json.dumps(job_description_response, indent=4, sort_keys=True, default=str))
print('[INFO] Waiting for AutoMLJobArtifacts...')
time.sleep(15)
print('[OK] AutoMLJobArtifacts generated.')
###Output
[OK] AutoMLJobArtifacts generated.
###Markdown
Wait for Autopilot to make the notebooks available. Exercise 4Check if the notebooks have been created.**Instructions**: Use the status check scheme described above. The creation of the notebooks can be identified by the existence of the `DataExplorationNotebookLocation` element in the keys of the `job_description_response['AutoMLJobArtifacts']` dictionary.
###Code
### BEGIN SOLUTION - DO NOT delete this comment for grading purposes
# get the information about the running Autopilot job
job_description_response = automl.describe_auto_ml_job(job_name=auto_ml_job_name) # Replace None
# keep in the while loop until the notebooks will be created
while 'DataExplorationNotebookLocation' not in job_description_response['AutoMLJobArtifacts'].keys(): # Replace all None
# update the information about the running Autopilot job
job_description_response = automl.describe_auto_ml_job(job_name=auto_ml_job_name) # Replace None
### END SOLUTION - DO NOT delete this comment for grading purposes
print('[INFO] Autopilot job has not yet generated the notebooks. Please wait. ')
print(json.dumps(job_description_response, indent=4, sort_keys=True, default=str))
print('[INFO] Waiting for DataExplorationNotebookLocation...')
time.sleep(15)
print('[OK] DataExplorationNotebookLocation found.')
###Output
[OK] DataExplorationNotebookLocation found.
###Markdown
Review the generated resources in S3 directly. Following the link, you can find the notebooks in the folder `notebooks` and download them by clicking on object `Actions`/`Object actions` -> `Download as`/`Download`.
###Code
from IPython.core.display import display, HTML
generated_resources = job_description_response['AutoMLJobArtifacts']['DataExplorationNotebookLocation']
download_path = generated_resources.rsplit('/notebooks/SageMakerAutopilotDataExplorationNotebook.ipynb')[0]
job_id = download_path.rsplit('/', 1)[-1]
if not job_id:
print('No AutoMLJobArtifacts found.')
else:
display(HTML('<b>Review <a target="blank" href="https://s3.console.aws.amazon.com/s3/buckets/{}/autopilot/{}/sagemaker-automl-candidates/{}/">generated notebooks</a> in S3 bucket</b>'.format(bucket, auto_ml_job_name, job_id)))
###Output
_____no_output_____
###Markdown
5. Feature engineering Exercise 5Check the completion of the feature engineering step.**Instructions**: Use the status check scheme described above. The feature engineering step can be identified by the (primary) job status value `InProgress` and the secondary job status value `FeatureEngineering`. _This cell will take approximately 10 minutes to run._
###Code
%%time
job_description_response = automl.describe_auto_ml_job(job_name=auto_ml_job_name)
job_status = job_description_response['AutoMLJobStatus']
job_sec_status = job_description_response['AutoMLJobSecondaryStatus']
print(job_status)
print(job_sec_status)
if job_status not in ('Stopped', 'Failed'):
### BEGIN SOLUTION - DO NOT delete this comment for grading purposes
while job_status in ('InProgress') and job_sec_status in ('FeatureEngineering'): # Replace all None
### END SOLUTION - DO NOT delete this comment for grading purposes
job_description_response = automl.describe_auto_ml_job(job_name=auto_ml_job_name)
job_status = job_description_response['AutoMLJobStatus']
job_sec_status = job_description_response['AutoMLJobSecondaryStatus']
print(job_status, job_sec_status)
time.sleep(5)
print('[OK] Feature engineering phase completed.\n')
print(json.dumps(job_description_response, indent=4, sort_keys=True, default=str))
###Output
InProgress
FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress FeatureEngineering
InProgress ModelTuning
[OK] Feature engineering phase completed.
{
"AutoMLJobArn": "arn:aws:sagemaker:us-east-1:575959626008:automl-job/automl-dm-1629064985",
"AutoMLJobArtifacts": {
"CandidateDefinitionNotebookLocation": "s3://sagemaker-us-east-1-575959626008/autopilot/automl-dm-1629064985/sagemaker-automl-candidates/automl-dm-1629064985-pr-1-464b0a135bcf447ca16531bda3ecc4992abcf/notebooks/SageMakerAutopilotCandidateDefinitionNotebook.ipynb",
"DataExplorationNotebookLocation": "s3://sagemaker-us-east-1-575959626008/autopilot/automl-dm-1629064985/sagemaker-automl-candidates/automl-dm-1629064985-pr-1-464b0a135bcf447ca16531bda3ecc4992abcf/notebooks/SageMakerAutopilotDataExplorationNotebook.ipynb"
},
"AutoMLJobConfig": {
"CompletionCriteria": {
"MaxAutoMLJobRuntimeInSeconds": 7200,
"MaxCandidates": 3,
"MaxRuntimePerTrainingJobInSeconds": 1200
},
"SecurityConfig": {
"EnableInterContainerTrafficEncryption": false
}
},
"AutoMLJobName": "automl-dm-1629064985",
"AutoMLJobSecondaryStatus": "ModelTuning",
"AutoMLJobStatus": "InProgress",
"CreationTime": "2021-08-15 22:03:05.661000+00:00",
"GenerateCandidateDefinitionsOnly": false,
"InputDataConfig": [
{
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": "s3://sagemaker-us-east-1-575959626008/autopilot/data/womens_clothing_ecommerce_reviews_balanced_for_autopilot.csv"
}
},
"TargetAttributeName": "sentiment"
}
],
"LastModifiedTime": "2021-08-15 22:21:20.946000+00:00",
"OutputDataConfig": {
"S3OutputPath": "s3://sagemaker-us-east-1-575959626008/autopilot"
},
"ResolvedAttributes": {
"AutoMLJobObjective": {
"MetricName": "Accuracy"
},
"CompletionCriteria": {
"MaxAutoMLJobRuntimeInSeconds": 7200,
"MaxCandidates": 3,
"MaxRuntimePerTrainingJobInSeconds": 1200
},
"ProblemType": "MulticlassClassification"
},
"ResponseMetadata": {
"HTTPHeaders": {
"content-length": "1714",
"content-type": "application/x-amz-json-1.1",
"date": "Sun, 15 Aug 2021 22:21:20 GMT",
"x-amzn-requestid": "c1fc5032-ab79-4297-8cd8-81a053a72504"
},
"HTTPStatusCode": 200,
"RequestId": "c1fc5032-ab79-4297-8cd8-81a053a72504",
"RetryAttempts": 0
},
"RoleArn": "arn:aws:iam::575959626008:role/c21581a406814l930566t1w5759-SageMakerExecutionRole-1OKLPNAWEHWZB"
}
CPU times: user 402 ms, sys: 98.4 ms, total: 500 ms
Wall time: 8min 45s
###Markdown
6. Model training and tuningWhen you launched the Autopilot job, you requested that 3 model candidates be generated and compared. Therefore, you should see three (3) SageMaker training jobs below.
###Code
from IPython.core.display import display, HTML
display(HTML('<b>Review <a target="blank" href="https://console.aws.amazon.com/sagemaker/home?region={}#/hyper-tuning-jobs/">hyper-parameter tuning jobs</a></b>'.format(region)))
###Output
_____no_output_____
###Markdown
6.1. Wait for training and tuning Exercise 6Check the completion of the model tuning step.**Instructions**: Use the status check scheme described above. The model tuning step can be identified by the (primary) job status value `InProgress` and the secondary job status value `ModelTuning`. _This cell will take approximately 5-10 minutes to run._
###Code
%%time
job_description_response = automl.describe_auto_ml_job(job_name=auto_ml_job_name)
job_status = job_description_response['AutoMLJobStatus']
job_sec_status = job_description_response['AutoMLJobSecondaryStatus']
print(job_status)
print(job_sec_status)
if job_status not in ('Stopped', 'Failed'):
### BEGIN SOLUTION - DO NOT delete this comment for grading purposes
while job_status in ('InProgress') and job_sec_status in ('ModelTuning'): # Replace all None
### END SOLUTION - DO NOT delete this comment for grading purposes
job_description_response = automl.describe_auto_ml_job(job_name=auto_ml_job_name)
job_status = job_description_response['AutoMLJobStatus']
job_sec_status = job_description_response['AutoMLJobSecondaryStatus']
print(job_status, job_sec_status)
time.sleep(5)
print('[OK] Model tuning phase completed.\n')
print(json.dumps(job_description_response, indent=4, sort_keys=True, default=str))
###Output
InProgress
ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress ModelTuning
InProgress GeneratingExplainabilityReport
[OK] Model tuning phase completed.
{
"AutoMLJobArn": "arn:aws:sagemaker:us-east-1:575959626008:automl-job/automl-dm-1629064985",
"AutoMLJobArtifacts": {
"CandidateDefinitionNotebookLocation": "s3://sagemaker-us-east-1-575959626008/autopilot/automl-dm-1629064985/sagemaker-automl-candidates/automl-dm-1629064985-pr-1-464b0a135bcf447ca16531bda3ecc4992abcf/notebooks/SageMakerAutopilotCandidateDefinitionNotebook.ipynb",
"DataExplorationNotebookLocation": "s3://sagemaker-us-east-1-575959626008/autopilot/automl-dm-1629064985/sagemaker-automl-candidates/automl-dm-1629064985-pr-1-464b0a135bcf447ca16531bda3ecc4992abcf/notebooks/SageMakerAutopilotDataExplorationNotebook.ipynb"
},
"AutoMLJobConfig": {
"CompletionCriteria": {
"MaxAutoMLJobRuntimeInSeconds": 7200,
"MaxCandidates": 3,
"MaxRuntimePerTrainingJobInSeconds": 1200
},
"SecurityConfig": {
"EnableInterContainerTrafficEncryption": false
}
},
"AutoMLJobName": "automl-dm-1629064985",
"AutoMLJobSecondaryStatus": "GeneratingExplainabilityReport",
"AutoMLJobStatus": "InProgress",
"BestCandidate": {
"CandidateName": "automl-dm-1629064985tWyLd3iw3fN1-003-9951da66",
"CandidateProperties": {},
"CandidateStatus": "Completed",
"CandidateSteps": [
{
"CandidateStepArn": "arn:aws:sagemaker:us-east-1:575959626008:processing-job/automl-dm-1629064985-db-1-cf29a22c890846eda9b1ebc2b61a5244a819e",
"CandidateStepName": "automl-dm-1629064985-db-1-cf29a22c890846eda9b1ebc2b61a5244a819e",
"CandidateStepType": "AWS::SageMaker::ProcessingJob"
},
{
"CandidateStepArn": "arn:aws:sagemaker:us-east-1:575959626008:training-job/automl-dm-1629064985-dpp0-1-9ca7d6032a6d4a8f93c551a41363fe4444f",
"CandidateStepName": "automl-dm-1629064985-dpp0-1-9ca7d6032a6d4a8f93c551a41363fe4444f",
"CandidateStepType": "AWS::SageMaker::TrainingJob"
},
{
"CandidateStepArn": "arn:aws:sagemaker:us-east-1:575959626008:transform-job/automl-dm-1629064985-dpp0-rpb-1-856d079938174384abec0cdc363f3b7",
"CandidateStepName": "automl-dm-1629064985-dpp0-rpb-1-856d079938174384abec0cdc363f3b7",
"CandidateStepType": "AWS::SageMaker::TransformJob"
},
{
"CandidateStepArn": "arn:aws:sagemaker:us-east-1:575959626008:training-job/automl-dm-1629064985twyld3iw3fn1-003-9951da66",
"CandidateStepName": "automl-dm-1629064985tWyLd3iw3fN1-003-9951da66",
"CandidateStepType": "AWS::SageMaker::TrainingJob"
}
],
"CreationTime": "2021-08-15 22:23:43+00:00",
"EndTime": "2021-08-15 22:25:15+00:00",
"FinalAutoMLJobObjectiveMetric": {
"MetricName": "validation:accuracy",
"Value": 0.6029499769210815
},
"InferenceContainers": [
{
"Environment": {
"AUTOML_SPARSE_ENCODE_RECORDIO_PROTOBUF": "1",
"AUTOML_TRANSFORM_MODE": "feature-transform",
"SAGEMAKER_DEFAULT_INVOCATIONS_ACCEPT": "application/x-recordio-protobuf",
"SAGEMAKER_PROGRAM": "sagemaker_serve",
"SAGEMAKER_SUBMIT_DIRECTORY": "/opt/ml/model/code"
},
"Image": "683313688378.dkr.ecr.us-east-1.amazonaws.com/sagemaker-sklearn-automl:2.2.1-1-cpu-py3",
"ModelDataUrl": "s3://sagemaker-us-east-1-575959626008/autopilot/automl-dm-1629064985/data-processor-models/automl-dm-1629064985-dpp0-1-9ca7d6032a6d4a8f93c551a41363fe4444f/output/model.tar.gz"
},
{
"Environment": {
"MAX_CONTENT_LENGTH": "20971520",
"SAGEMAKER_DEFAULT_INVOCATIONS_ACCEPT": "text/csv",
"SAGEMAKER_INFERENCE_OUTPUT": "predicted_label",
"SAGEMAKER_INFERENCE_SUPPORTED": "predicted_label,probability,probabilities"
},
"Image": "683313688378.dkr.ecr.us-east-1.amazonaws.com/sagemaker-xgboost:1.2-2-cpu-py3",
"ModelDataUrl": "s3://sagemaker-us-east-1-575959626008/autopilot/automl-dm-1629064985/tuning/automl-dm--dpp0-xgb/automl-dm-1629064985tWyLd3iw3fN1-003-9951da66/output/model.tar.gz"
},
{
"Environment": {
"AUTOML_TRANSFORM_MODE": "inverse-label-transform",
"SAGEMAKER_DEFAULT_INVOCATIONS_ACCEPT": "text/csv",
"SAGEMAKER_INFERENCE_INPUT": "predicted_label",
"SAGEMAKER_INFERENCE_OUTPUT": "predicted_label",
"SAGEMAKER_INFERENCE_SUPPORTED": "predicted_label,probability,labels,probabilities",
"SAGEMAKER_PROGRAM": "sagemaker_serve",
"SAGEMAKER_SUBMIT_DIRECTORY": "/opt/ml/model/code"
},
"Image": "683313688378.dkr.ecr.us-east-1.amazonaws.com/sagemaker-sklearn-automl:2.2.1-1-cpu-py3",
"ModelDataUrl": "s3://sagemaker-us-east-1-575959626008/autopilot/automl-dm-1629064985/data-processor-models/automl-dm-1629064985-dpp0-1-9ca7d6032a6d4a8f93c551a41363fe4444f/output/model.tar.gz"
}
],
"LastModifiedTime": "2021-08-15 22:27:02.041000+00:00",
"ObjectiveStatus": "Succeeded"
},
"CreationTime": "2021-08-15 22:03:05.661000+00:00",
"GenerateCandidateDefinitionsOnly": false,
"InputDataConfig": [
{
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": "s3://sagemaker-us-east-1-575959626008/autopilot/data/womens_clothing_ecommerce_reviews_balanced_for_autopilot.csv"
}
},
"TargetAttributeName": "sentiment"
}
],
"LastModifiedTime": "2021-08-15 22:27:03.501000+00:00",
"OutputDataConfig": {
"S3OutputPath": "s3://sagemaker-us-east-1-575959626008/autopilot"
},
"ResolvedAttributes": {
"AutoMLJobObjective": {
"MetricName": "Accuracy"
},
"CompletionCriteria": {
"MaxAutoMLJobRuntimeInSeconds": 7200,
"MaxCandidates": 3,
"MaxRuntimePerTrainingJobInSeconds": 1200
},
"ProblemType": "MulticlassClassification"
},
"ResponseMetadata": {
"HTTPHeaders": {
"content-length": "5084",
"content-type": "application/x-amz-json-1.1",
"date": "Sun, 15 Aug 2021 22:27:04 GMT",
"x-amzn-requestid": "56d3ecb4-b652-487e-a41e-ce1c0e6a9414"
},
"HTTPStatusCode": 200,
"RequestId": "56d3ecb4-b652-487e-a41e-ce1c0e6a9414",
"RetryAttempts": 0
},
"RoleArn": "arn:aws:iam::575959626008:role/c21581a406814l930566t1w5759-SageMakerExecutionRole-1OKLPNAWEHWZB"
}
CPU times: user 290 ms, sys: 34.3 ms, total: 324 ms
Wall time: 5min 43s
###Markdown
_Please wait until ^^ Autopilot ^^ completes above_

Finally, you can check the completion of the Autopilot job by looking for the `Completed` job status.
###Code
%%time

import time
from pprint import pprint

job_description_response = automl.describe_auto_ml_job(job_name=auto_ml_job_name)
pprint(job_description_response)

job_status = job_description_response['AutoMLJobStatus']
job_sec_status = job_description_response['AutoMLJobSecondaryStatus']
print('Job status: {}'.format(job_status))
print('Secondary job status: {}'.format(job_sec_status))

if job_status not in ('Stopped', 'Failed'):
    # 'Completed' is the terminal status we are waiting for; poll until the job reaches it
    while job_status != 'Completed':
        job_description_response = automl.describe_auto_ml_job(job_name=auto_ml_job_name)
        job_status = job_description_response['AutoMLJobStatus']
        job_sec_status = job_description_response['AutoMLJobSecondaryStatus']
        print('Job status: {}'.format(job_status))
        print('Secondary job status: {}'.format(job_sec_status))
        time.sleep(10)
    print('[OK] Autopilot job completed.\n')
else:
    print('Job status: {}'.format(job_status))
    print('Secondary job status: {}'.format(job_sec_status))
###Output
{'AutoMLJobArn': 'arn:aws:sagemaker:us-east-1:575959626008:automl-job/automl-dm-1629064985',
'AutoMLJobArtifacts': {'CandidateDefinitionNotebookLocation': 's3://sagemaker-us-east-1-575959626008/autopilot/automl-dm-1629064985/sagemaker-automl-candidates/automl-dm-1629064985-pr-1-464b0a135bcf447ca16531bda3ecc4992abcf/notebooks/SageMakerAutopilotCandidateDefinitionNotebook.ipynb',
'DataExplorationNotebookLocation': 's3://sagemaker-us-east-1-575959626008/autopilot/automl-dm-1629064985/sagemaker-automl-candidates/automl-dm-1629064985-pr-1-464b0a135bcf447ca16531bda3ecc4992abcf/notebooks/SageMakerAutopilotDataExplorationNotebook.ipynb'},
'AutoMLJobConfig': {'CompletionCriteria': {'MaxAutoMLJobRuntimeInSeconds': 7200,
'MaxCandidates': 3,
'MaxRuntimePerTrainingJobInSeconds': 1200},
'SecurityConfig': {'EnableInterContainerTrafficEncryption': False}},
'AutoMLJobName': 'automl-dm-1629064985',
'AutoMLJobSecondaryStatus': 'GeneratingExplainabilityReport',
'AutoMLJobStatus': 'InProgress',
'BestCandidate': {'CandidateName': 'automl-dm-1629064985tWyLd3iw3fN1-003-9951da66',
'CandidateProperties': {},
'CandidateStatus': 'Completed',
'CandidateSteps': [{'CandidateStepArn': 'arn:aws:sagemaker:us-east-1:575959626008:processing-job/automl-dm-1629064985-db-1-cf29a22c890846eda9b1ebc2b61a5244a819e',
'CandidateStepName': 'automl-dm-1629064985-db-1-cf29a22c890846eda9b1ebc2b61a5244a819e',
'CandidateStepType': 'AWS::SageMaker::ProcessingJob'},
{'CandidateStepArn': 'arn:aws:sagemaker:us-east-1:575959626008:training-job/automl-dm-1629064985-dpp0-1-9ca7d6032a6d4a8f93c551a41363fe4444f',
'CandidateStepName': 'automl-dm-1629064985-dpp0-1-9ca7d6032a6d4a8f93c551a41363fe4444f',
'CandidateStepType': 'AWS::SageMaker::TrainingJob'},
{'CandidateStepArn': 'arn:aws:sagemaker:us-east-1:575959626008:transform-job/automl-dm-1629064985-dpp0-rpb-1-856d079938174384abec0cdc363f3b7',
'CandidateStepName': 'automl-dm-1629064985-dpp0-rpb-1-856d079938174384abec0cdc363f3b7',
'CandidateStepType': 'AWS::SageMaker::TransformJob'},
{'CandidateStepArn': 'arn:aws:sagemaker:us-east-1:575959626008:training-job/automl-dm-1629064985twyld3iw3fn1-003-9951da66',
'CandidateStepName': 'automl-dm-1629064985tWyLd3iw3fN1-003-9951da66',
'CandidateStepType': 'AWS::SageMaker::TrainingJob'}],
'CreationTime': datetime.datetime(2021, 8, 15, 22, 23, 43, tzinfo=tzlocal()),
'EndTime': datetime.datetime(2021, 8, 15, 22, 25, 15, tzinfo=tzlocal()),
'FinalAutoMLJobObjectiveMetric': {'MetricName': 'validation:accuracy',
'Value': 0.6029499769210815},
'InferenceContainers': [{'Environment': {'AUTOML_SPARSE_ENCODE_RECORDIO_PROTOBUF': '1',
'AUTOML_TRANSFORM_MODE': 'feature-transform',
'SAGEMAKER_DEFAULT_INVOCATIONS_ACCEPT': 'application/x-recordio-protobuf',
'SAGEMAKER_PROGRAM': 'sagemaker_serve',
'SAGEMAKER_SUBMIT_DIRECTORY': '/opt/ml/model/code'},
'Image': '683313688378.dkr.ecr.us-east-1.amazonaws.com/sagemaker-sklearn-automl:2.2.1-1-cpu-py3',
'ModelDataUrl': 's3://sagemaker-us-east-1-575959626008/autopilot/automl-dm-1629064985/data-processor-models/automl-dm-1629064985-dpp0-1-9ca7d6032a6d4a8f93c551a41363fe4444f/output/model.tar.gz'},
{'Environment': {'MAX_CONTENT_LENGTH': '20971520',
'SAGEMAKER_DEFAULT_INVOCATIONS_ACCEPT': 'text/csv',
'SAGEMAKER_INFERENCE_OUTPUT': 'predicted_label',
'SAGEMAKER_INFERENCE_SUPPORTED': 'predicted_label,probability,probabilities'},
'Image': '683313688378.dkr.ecr.us-east-1.amazonaws.com/sagemaker-xgboost:1.2-2-cpu-py3',
'ModelDataUrl': 's3://sagemaker-us-east-1-575959626008/autopilot/automl-dm-1629064985/tuning/automl-dm--dpp0-xgb/automl-dm-1629064985tWyLd3iw3fN1-003-9951da66/output/model.tar.gz'},
{'Environment': {'AUTOML_TRANSFORM_MODE': 'inverse-label-transform',
'SAGEMAKER_DEFAULT_INVOCATIONS_ACCEPT': 'text/csv',
'SAGEMAKER_INFERENCE_INPUT': 'predicted_label',
'SAGEMAKER_INFERENCE_OUTPUT': 'predicted_label',
'SAGEMAKER_INFERENCE_SUPPORTED': 'predicted_label,probability,labels,probabilities',
'SAGEMAKER_PROGRAM': 'sagemaker_serve',
'SAGEMAKER_SUBMIT_DIRECTORY': '/opt/ml/model/code'},
'Image': '683313688378.dkr.ecr.us-east-1.amazonaws.com/sagemaker-sklearn-automl:2.2.1-1-cpu-py3',
'ModelDataUrl': 's3://sagemaker-us-east-1-575959626008/autopilot/automl-dm-1629064985/data-processor-models/automl-dm-1629064985-dpp0-1-9ca7d6032a6d4a8f93c551a41363fe4444f/output/model.tar.gz'}],
'LastModifiedTime': datetime.datetime(2021, 8, 15, 22, 27, 2, 41000, tzinfo=tzlocal()),
'ObjectiveStatus': 'Succeeded'},
'CreationTime': datetime.datetime(2021, 8, 15, 22, 3, 5, 661000, tzinfo=tzlocal()),
'GenerateCandidateDefinitionsOnly': False,
'InputDataConfig': [{'DataSource': {'S3DataSource': {'S3DataType': 'S3Prefix',
'S3Uri': 's3://sagemaker-us-east-1-575959626008/autopilot/data/womens_clothing_ecommerce_reviews_balanced_for_autopilot.csv'}},
'TargetAttributeName': 'sentiment'}],
'LastModifiedTime': datetime.datetime(2021, 8, 15, 22, 27, 6, 28000, tzinfo=tzlocal()),
'OutputDataConfig': {'S3OutputPath': 's3://sagemaker-us-east-1-575959626008/autopilot'},
'ResolvedAttributes': {'AutoMLJobObjective': {'MetricName': 'Accuracy'},
'CompletionCriteria': {'MaxAutoMLJobRuntimeInSeconds': 7200,
'MaxCandidates': 3,
'MaxRuntimePerTrainingJobInSeconds': 1200},
'ProblemType': 'MulticlassClassification'},
'ResponseMetadata': {'HTTPHeaders': {'content-length': '5084',
'content-type': 'application/x-amz-json-1.1',
'date': 'Sun, 15 Aug 2021 22:27:09 GMT',
'x-amzn-requestid': '173743e0-7231-436e-936d-67a3abab2897'},
'HTTPStatusCode': 200,
'RequestId': '173743e0-7231-436e-936d-67a3abab2897',
'RetryAttempts': 0},
'RoleArn': 'arn:aws:iam::575959626008:role/c21581a406814l930566t1w5759-SageMakerExecutionRole-1OKLPNAWEHWZB'}
Job status: InProgress
Secondary job status: GeneratingExplainabilityReport
Job status: InProgress
Secondary job status: GeneratingExplainabilityReport
[... the same "Job status: InProgress" / "Secondary job status: GeneratingExplainabilityReport" pair repeated while the job ran ...]
Job status: Completed
Secondary job status: Completed
[OK] Autopilot job completed.
CPU times: user 688 ms, sys: 66.7 ms, total: 755 ms
Wall time: 8min 45s
###Markdown
Before moving to the next section, make sure the status above indicates `Autopilot job completed`.

6.2. Compare model candidates

Once model tuning is complete, you can view all the candidates (pipeline evaluations with different hyperparameter combinations) that were explored by AutoML and sort them by their final performance metric.

Exercise 7

List the candidates generated by Autopilot, sorted by accuracy from highest to lowest.

**Instructions**: Use the `list_candidates` function, passing the Autopilot job name `auto_ml_job_name` and the accuracy field `FinalObjectiveMetricValue`. It returns the list of candidates with information about them.

```python
candidates = automl.list_candidates(
    job_name=...,   # Autopilot job name
    sort_by='...'   # accuracy field name
)
```
###Code
candidates = automl.list_candidates(
    ### BEGIN SOLUTION - DO NOT delete this comment for grading purposes
    job_name=auto_ml_job_name,  # Replace None
    sort_by='FinalObjectiveMetricValue'  # Replace None
    ### END SOLUTION - DO NOT delete this comment for grading purposes
)
###Output
_____no_output_____
###Markdown
You can review the response syntax and response elements of the `list_candidates` function in the [**documentation**](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_AutoMLCandidate.html). Now let's wrap the check for candidate existence in a polling loop:
###Code
while candidates == []:
    candidates = automl.list_candidates(job_name=auto_ml_job_name)
    print('[INFO] Autopilot job is generating the candidates. Please wait.')
    time.sleep(10)

print('[OK] Candidates generated.')
###Output
[OK] Candidates generated.
###Markdown
The information about each of the candidates is in the dictionary with the following keys:
###Code
print(candidates[0].keys())
###Output
dict_keys(['CandidateName', 'FinalAutoMLJobObjectiveMetric', 'ObjectiveStatus', 'CandidateSteps', 'CandidateStatus', 'InferenceContainers', 'CreationTime', 'EndTime', 'LastModifiedTime', 'CandidateProperties'])
###Markdown
`CandidateName` contains the candidate name, and the `FinalAutoMLJobObjectiveMetric` element contains the metric information, which can be used to identify the best candidate later. Let's check that both were generated.
###Code
while 'CandidateName' not in candidates[0]:
    candidates = automl.list_candidates(job_name=auto_ml_job_name)
    print('[INFO] Autopilot job is generating CandidateName. Please wait. ')
    time.sleep(10)

print('[OK] CandidateName generated.')

while 'FinalAutoMLJobObjectiveMetric' not in candidates[0]:
    candidates = automl.list_candidates(job_name=auto_ml_job_name)
    print('[INFO] Autopilot job is generating FinalAutoMLJobObjectiveMetric. Please wait. ')
    time.sleep(10)

print('[OK] FinalAutoMLJobObjectiveMetric generated.')
print(json.dumps(candidates, indent=4, sort_keys=True, default=str))
###Output
[
{
"CandidateName": "automl-dm-1629064985tWyLd3iw3fN1-002-5ec81b99",
"CandidateProperties": {},
"CandidateStatus": "Completed",
"CandidateSteps": [
{
"CandidateStepArn": "arn:aws:sagemaker:us-east-1:575959626008:processing-job/automl-dm-1629064985-db-1-cf29a22c890846eda9b1ebc2b61a5244a819e",
"CandidateStepName": "automl-dm-1629064985-db-1-cf29a22c890846eda9b1ebc2b61a5244a819e",
"CandidateStepType": "AWS::SageMaker::ProcessingJob"
},
{
"CandidateStepArn": "arn:aws:sagemaker:us-east-1:575959626008:training-job/automl-dm-1629064985-dpp1-1-d0065008928d4d34b67f90b34b6e381040c",
"CandidateStepName": "automl-dm-1629064985-dpp1-1-d0065008928d4d34b67f90b34b6e381040c",
"CandidateStepType": "AWS::SageMaker::TrainingJob"
},
{
"CandidateStepArn": "arn:aws:sagemaker:us-east-1:575959626008:transform-job/automl-dm-1629064985-dpp1-csv-1-51ac1dc4826c41fab91e93eaa1967a4",
"CandidateStepName": "automl-dm-1629064985-dpp1-csv-1-51ac1dc4826c41fab91e93eaa1967a4",
"CandidateStepType": "AWS::SageMaker::TransformJob"
},
{
"CandidateStepArn": "arn:aws:sagemaker:us-east-1:575959626008:training-job/automl-dm-1629064985twyld3iw3fn1-002-5ec81b99",
"CandidateStepName": "automl-dm-1629064985tWyLd3iw3fN1-002-5ec81b99",
"CandidateStepType": "AWS::SageMaker::TrainingJob"
}
],
"CreationTime": "2021-08-15 22:23:58+00:00",
"EndTime": "2021-08-15 22:25:33+00:00",
"FinalAutoMLJobObjectiveMetric": {
"MetricName": "validation:accuracy",
"Value": 0.42063000798225403
},
"InferenceContainers": [
{
"Environment": {
"AUTOML_TRANSFORM_MODE": "feature-transform",
"SAGEMAKER_DEFAULT_INVOCATIONS_ACCEPT": "application/x-recordio-protobuf",
"SAGEMAKER_PROGRAM": "sagemaker_serve",
"SAGEMAKER_SUBMIT_DIRECTORY": "/opt/ml/model/code"
},
"Image": "683313688378.dkr.ecr.us-east-1.amazonaws.com/sagemaker-sklearn-automl:2.2.1-1-cpu-py3",
"ModelDataUrl": "s3://sagemaker-us-east-1-575959626008/autopilot/automl-dm-1629064985/data-processor-models/automl-dm-1629064985-dpp1-1-d0065008928d4d34b67f90b34b6e381040c/output/model.tar.gz"
},
{
"Environment": {
"MAX_CONTENT_LENGTH": "20971520",
"SAGEMAKER_DEFAULT_INVOCATIONS_ACCEPT": "text/csv",
"SAGEMAKER_INFERENCE_OUTPUT": "predicted_label",
"SAGEMAKER_INFERENCE_SUPPORTED": "predicted_label,probability,probabilities"
},
"Image": "683313688378.dkr.ecr.us-east-1.amazonaws.com/sagemaker-xgboost:1.2-2-cpu-py3",
"ModelDataUrl": "s3://sagemaker-us-east-1-575959626008/autopilot/automl-dm-1629064985/tuning/automl-dm--dpp1-xgb/automl-dm-1629064985tWyLd3iw3fN1-002-5ec81b99/output/model.tar.gz"
},
{
"Environment": {
"AUTOML_TRANSFORM_MODE": "inverse-label-transform",
"SAGEMAKER_DEFAULT_INVOCATIONS_ACCEPT": "text/csv",
"SAGEMAKER_INFERENCE_INPUT": "predicted_label",
"SAGEMAKER_INFERENCE_OUTPUT": "predicted_label",
"SAGEMAKER_INFERENCE_SUPPORTED": "predicted_label,probability,labels,probabilities",
"SAGEMAKER_PROGRAM": "sagemaker_serve",
"SAGEMAKER_SUBMIT_DIRECTORY": "/opt/ml/model/code"
},
"Image": "683313688378.dkr.ecr.us-east-1.amazonaws.com/sagemaker-sklearn-automl:2.2.1-1-cpu-py3",
"ModelDataUrl": "s3://sagemaker-us-east-1-575959626008/autopilot/automl-dm-1629064985/data-processor-models/automl-dm-1629064985-dpp1-1-d0065008928d4d34b67f90b34b6e381040c/output/model.tar.gz"
}
],
"LastModifiedTime": "2021-08-15 22:27:01.970000+00:00",
"ObjectiveStatus": "Succeeded"
},
{
"CandidateName": "automl-dm-1629064985tWyLd3iw3fN1-003-9951da66",
"CandidateProperties": {
"CandidateArtifactLocations": {
"Explainability": "s3://sagemaker-us-east-1-575959626008/autopilot/automl-dm-1629064985/documentation/explainability/output"
}
},
"CandidateStatus": "Completed",
"CandidateSteps": [
{
"CandidateStepArn": "arn:aws:sagemaker:us-east-1:575959626008:processing-job/automl-dm-1629064985-db-1-cf29a22c890846eda9b1ebc2b61a5244a819e",
"CandidateStepName": "automl-dm-1629064985-db-1-cf29a22c890846eda9b1ebc2b61a5244a819e",
"CandidateStepType": "AWS::SageMaker::ProcessingJob"
},
{
"CandidateStepArn": "arn:aws:sagemaker:us-east-1:575959626008:training-job/automl-dm-1629064985-dpp0-1-9ca7d6032a6d4a8f93c551a41363fe4444f",
"CandidateStepName": "automl-dm-1629064985-dpp0-1-9ca7d6032a6d4a8f93c551a41363fe4444f",
"CandidateStepType": "AWS::SageMaker::TrainingJob"
},
{
"CandidateStepArn": "arn:aws:sagemaker:us-east-1:575959626008:transform-job/automl-dm-1629064985-dpp0-rpb-1-856d079938174384abec0cdc363f3b7",
"CandidateStepName": "automl-dm-1629064985-dpp0-rpb-1-856d079938174384abec0cdc363f3b7",
"CandidateStepType": "AWS::SageMaker::TransformJob"
},
{
"CandidateStepArn": "arn:aws:sagemaker:us-east-1:575959626008:training-job/automl-dm-1629064985twyld3iw3fn1-003-9951da66",
"CandidateStepName": "automl-dm-1629064985tWyLd3iw3fN1-003-9951da66",
"CandidateStepType": "AWS::SageMaker::TrainingJob"
}
],
"CreationTime": "2021-08-15 22:23:43+00:00",
"EndTime": "2021-08-15 22:25:15+00:00",
"FinalAutoMLJobObjectiveMetric": {
"MetricName": "validation:accuracy",
"Value": 0.6029499769210815
},
"InferenceContainers": [
{
"Environment": {
"AUTOML_SPARSE_ENCODE_RECORDIO_PROTOBUF": "1",
"AUTOML_TRANSFORM_MODE": "feature-transform",
"SAGEMAKER_DEFAULT_INVOCATIONS_ACCEPT": "application/x-recordio-protobuf",
"SAGEMAKER_PROGRAM": "sagemaker_serve",
"SAGEMAKER_SUBMIT_DIRECTORY": "/opt/ml/model/code"
},
"Image": "683313688378.dkr.ecr.us-east-1.amazonaws.com/sagemaker-sklearn-automl:2.2.1-1-cpu-py3",
"ModelDataUrl": "s3://sagemaker-us-east-1-575959626008/autopilot/automl-dm-1629064985/data-processor-models/automl-dm-1629064985-dpp0-1-9ca7d6032a6d4a8f93c551a41363fe4444f/output/model.tar.gz"
},
{
"Environment": {
"MAX_CONTENT_LENGTH": "20971520",
"SAGEMAKER_DEFAULT_INVOCATIONS_ACCEPT": "text/csv",
"SAGEMAKER_INFERENCE_OUTPUT": "predicted_label",
"SAGEMAKER_INFERENCE_SUPPORTED": "predicted_label,probability,probabilities"
},
"Image": "683313688378.dkr.ecr.us-east-1.amazonaws.com/sagemaker-xgboost:1.2-2-cpu-py3",
"ModelDataUrl": "s3://sagemaker-us-east-1-575959626008/autopilot/automl-dm-1629064985/tuning/automl-dm--dpp0-xgb/automl-dm-1629064985tWyLd3iw3fN1-003-9951da66/output/model.tar.gz"
},
{
"Environment": {
"AUTOML_TRANSFORM_MODE": "inverse-label-transform",
"SAGEMAKER_DEFAULT_INVOCATIONS_ACCEPT": "text/csv",
"SAGEMAKER_INFERENCE_INPUT": "predicted_label",
"SAGEMAKER_INFERENCE_OUTPUT": "predicted_label",
"SAGEMAKER_INFERENCE_SUPPORTED": "predicted_label,probability,labels,probabilities",
"SAGEMAKER_PROGRAM": "sagemaker_serve",
"SAGEMAKER_SUBMIT_DIRECTORY": "/opt/ml/model/code"
},
"Image": "683313688378.dkr.ecr.us-east-1.amazonaws.com/sagemaker-sklearn-automl:2.2.1-1-cpu-py3",
"ModelDataUrl": "s3://sagemaker-us-east-1-575959626008/autopilot/automl-dm-1629064985/data-processor-models/automl-dm-1629064985-dpp0-1-9ca7d6032a6d4a8f93c551a41363fe4444f/output/model.tar.gz"
}
],
"LastModifiedTime": "2021-08-15 22:27:02.041000+00:00",
"ObjectiveStatus": "Succeeded"
},
{
"CandidateName": "automl-dm-1629064985tWyLd3iw3fN1-001-277a993b",
"CandidateProperties": {},
"CandidateStatus": "Completed",
"CandidateSteps": [
{
"CandidateStepArn": "arn:aws:sagemaker:us-east-1:575959626008:processing-job/automl-dm-1629064985-db-1-cf29a22c890846eda9b1ebc2b61a5244a819e",
"CandidateStepName": "automl-dm-1629064985-db-1-cf29a22c890846eda9b1ebc2b61a5244a819e",
"CandidateStepType": "AWS::SageMaker::ProcessingJob"
},
{
"CandidateStepArn": "arn:aws:sagemaker:us-east-1:575959626008:training-job/automl-dm-1629064985-dpp1-1-d0065008928d4d34b67f90b34b6e381040c",
"CandidateStepName": "automl-dm-1629064985-dpp1-1-d0065008928d4d34b67f90b34b6e381040c",
"CandidateStepType": "AWS::SageMaker::TrainingJob"
},
{
"CandidateStepArn": "arn:aws:sagemaker:us-east-1:575959626008:transform-job/automl-dm-1629064985-dpp1-csv-1-51ac1dc4826c41fab91e93eaa1967a4",
"CandidateStepName": "automl-dm-1629064985-dpp1-csv-1-51ac1dc4826c41fab91e93eaa1967a4",
"CandidateStepType": "AWS::SageMaker::TransformJob"
},
{
"CandidateStepArn": "arn:aws:sagemaker:us-east-1:575959626008:training-job/automl-dm-1629064985twyld3iw3fn1-001-277a993b",
"CandidateStepName": "automl-dm-1629064985tWyLd3iw3fN1-001-277a993b",
"CandidateStepType": "AWS::SageMaker::TrainingJob"
}
],
"CreationTime": "2021-08-15 22:23:36+00:00",
"EndTime": "2021-08-15 22:26:49+00:00",
"FinalAutoMLJobObjectiveMetric": {
"MetricName": "validation:accuracy",
"Value": 0.4231100082397461
},
"InferenceContainers": [
{
"Environment": {
"AUTOML_TRANSFORM_MODE": "feature-transform",
"SAGEMAKER_DEFAULT_INVOCATIONS_ACCEPT": "application/x-recordio-protobuf",
"SAGEMAKER_PROGRAM": "sagemaker_serve",
"SAGEMAKER_SUBMIT_DIRECTORY": "/opt/ml/model/code"
},
"Image": "683313688378.dkr.ecr.us-east-1.amazonaws.com/sagemaker-sklearn-automl:2.2.1-1-cpu-py3",
"ModelDataUrl": "s3://sagemaker-us-east-1-575959626008/autopilot/automl-dm-1629064985/data-processor-models/automl-dm-1629064985-dpp1-1-d0065008928d4d34b67f90b34b6e381040c/output/model.tar.gz"
},
{
"Environment": {
"MAX_CONTENT_LENGTH": "20971520",
"SAGEMAKER_DEFAULT_INVOCATIONS_ACCEPT": "text/csv",
"SAGEMAKER_INFERENCE_OUTPUT": "predicted_label",
"SAGEMAKER_INFERENCE_SUPPORTED": "predicted_label,probability,probabilities"
},
"Image": "683313688378.dkr.ecr.us-east-1.amazonaws.com/sagemaker-xgboost:1.2-2-cpu-py3",
"ModelDataUrl": "s3://sagemaker-us-east-1-575959626008/autopilot/automl-dm-1629064985/tuning/automl-dm--dpp1-xgb/automl-dm-1629064985tWyLd3iw3fN1-001-277a993b/output/model.tar.gz"
},
{
"Environment": {
"AUTOML_TRANSFORM_MODE": "inverse-label-transform",
"SAGEMAKER_DEFAULT_INVOCATIONS_ACCEPT": "text/csv",
"SAGEMAKER_INFERENCE_INPUT": "predicted_label",
"SAGEMAKER_INFERENCE_OUTPUT": "predicted_label",
"SAGEMAKER_INFERENCE_SUPPORTED": "predicted_label,probability,labels,probabilities",
"SAGEMAKER_PROGRAM": "sagemaker_serve",
"SAGEMAKER_SUBMIT_DIRECTORY": "/opt/ml/model/code"
},
"Image": "683313688378.dkr.ecr.us-east-1.amazonaws.com/sagemaker-sklearn-automl:2.2.1-1-cpu-py3",
"ModelDataUrl": "s3://sagemaker-us-east-1-575959626008/autopilot/automl-dm-1629064985/data-processor-models/automl-dm-1629064985-dpp1-1-d0065008928d4d34b67f90b34b6e381040c/output/model.tar.gz"
}
],
"LastModifiedTime": "2021-08-15 22:27:01.970000+00:00",
"ObjectiveStatus": "Succeeded"
}
]
###Markdown
You can print the names of the candidates with their metric values:
###Code
print("metric " + str(candidates[0]['FinalAutoMLJobObjectiveMetric']['MetricName']))
for index, candidate in enumerate(candidates):
print(str(index) + " "
+ candidate['CandidateName'] + " "
+ str(candidate['FinalAutoMLJobObjectiveMetric']['Value']))
###Output
metric validation:accuracy
0 automl-dm-1629064985tWyLd3iw3fN1-002-5ec81b99 0.42063000798225403
1 automl-dm-1629064985tWyLd3iw3fN1-003-9951da66 0.6029499769210815
2 automl-dm-1629064985tWyLd3iw3fN1-001-277a993b 0.4231100082397461
###Markdown
6.3. Review best candidate

Now that you have successfully completed the Autopilot job on the dataset and visualized the trials, you can get the information about the best candidate model and review it.

Exercise 8

Get the information about the generated best candidate job.

**Instructions**: Use the `best_candidate` function, passing the Autopilot job name. This function will give an error if candidates have not been generated.
###Code
candidates = automl.list_candidates(job_name=auto_ml_job_name)

if candidates != []:
    best_candidate = automl.best_candidate(
        ### BEGIN SOLUTION - DO NOT delete this comment for grading purposes
        job_name=auto_ml_job_name # Replace None
        ### END SOLUTION - DO NOT delete this comment for grading purposes
    )
    print(json.dumps(best_candidate, indent=4, sort_keys=True, default=str))
###Output
{
"CandidateName": "automl-dm-1629064985tWyLd3iw3fN1-003-9951da66",
"CandidateProperties": {
"CandidateArtifactLocations": {
"Explainability": "s3://sagemaker-us-east-1-575959626008/autopilot/automl-dm-1629064985/documentation/explainability/output"
}
},
"CandidateStatus": "Completed",
"CandidateSteps": [
{
"CandidateStepArn": "arn:aws:sagemaker:us-east-1:575959626008:processing-job/automl-dm-1629064985-db-1-cf29a22c890846eda9b1ebc2b61a5244a819e",
"CandidateStepName": "automl-dm-1629064985-db-1-cf29a22c890846eda9b1ebc2b61a5244a819e",
"CandidateStepType": "AWS::SageMaker::ProcessingJob"
},
{
"CandidateStepArn": "arn:aws:sagemaker:us-east-1:575959626008:training-job/automl-dm-1629064985-dpp0-1-9ca7d6032a6d4a8f93c551a41363fe4444f",
"CandidateStepName": "automl-dm-1629064985-dpp0-1-9ca7d6032a6d4a8f93c551a41363fe4444f",
"CandidateStepType": "AWS::SageMaker::TrainingJob"
},
{
"CandidateStepArn": "arn:aws:sagemaker:us-east-1:575959626008:transform-job/automl-dm-1629064985-dpp0-rpb-1-856d079938174384abec0cdc363f3b7",
"CandidateStepName": "automl-dm-1629064985-dpp0-rpb-1-856d079938174384abec0cdc363f3b7",
"CandidateStepType": "AWS::SageMaker::TransformJob"
},
{
"CandidateStepArn": "arn:aws:sagemaker:us-east-1:575959626008:training-job/automl-dm-1629064985twyld3iw3fn1-003-9951da66",
"CandidateStepName": "automl-dm-1629064985tWyLd3iw3fN1-003-9951da66",
"CandidateStepType": "AWS::SageMaker::TrainingJob"
}
],
"CreationTime": "2021-08-15 22:23:43+00:00",
"EndTime": "2021-08-15 22:25:15+00:00",
"FinalAutoMLJobObjectiveMetric": {
"MetricName": "validation:accuracy",
"Value": 0.6029499769210815
},
"InferenceContainers": [
{
"Environment": {
"AUTOML_SPARSE_ENCODE_RECORDIO_PROTOBUF": "1",
"AUTOML_TRANSFORM_MODE": "feature-transform",
"SAGEMAKER_DEFAULT_INVOCATIONS_ACCEPT": "application/x-recordio-protobuf",
"SAGEMAKER_PROGRAM": "sagemaker_serve",
"SAGEMAKER_SUBMIT_DIRECTORY": "/opt/ml/model/code"
},
"Image": "683313688378.dkr.ecr.us-east-1.amazonaws.com/sagemaker-sklearn-automl:2.2.1-1-cpu-py3",
"ModelDataUrl": "s3://sagemaker-us-east-1-575959626008/autopilot/automl-dm-1629064985/data-processor-models/automl-dm-1629064985-dpp0-1-9ca7d6032a6d4a8f93c551a41363fe4444f/output/model.tar.gz"
},
{
"Environment": {
"MAX_CONTENT_LENGTH": "20971520",
"SAGEMAKER_DEFAULT_INVOCATIONS_ACCEPT": "text/csv",
"SAGEMAKER_INFERENCE_OUTPUT": "predicted_label",
"SAGEMAKER_INFERENCE_SUPPORTED": "predicted_label,probability,probabilities"
},
"Image": "683313688378.dkr.ecr.us-east-1.amazonaws.com/sagemaker-xgboost:1.2-2-cpu-py3",
"ModelDataUrl": "s3://sagemaker-us-east-1-575959626008/autopilot/automl-dm-1629064985/tuning/automl-dm--dpp0-xgb/automl-dm-1629064985tWyLd3iw3fN1-003-9951da66/output/model.tar.gz"
},
{
"Environment": {
"AUTOML_TRANSFORM_MODE": "inverse-label-transform",
"SAGEMAKER_DEFAULT_INVOCATIONS_ACCEPT": "text/csv",
"SAGEMAKER_INFERENCE_INPUT": "predicted_label",
"SAGEMAKER_INFERENCE_OUTPUT": "predicted_label",
"SAGEMAKER_INFERENCE_SUPPORTED": "predicted_label,probability,labels,probabilities",
"SAGEMAKER_PROGRAM": "sagemaker_serve",
"SAGEMAKER_SUBMIT_DIRECTORY": "/opt/ml/model/code"
},
"Image": "683313688378.dkr.ecr.us-east-1.amazonaws.com/sagemaker-sklearn-automl:2.2.1-1-cpu-py3",
"ModelDataUrl": "s3://sagemaker-us-east-1-575959626008/autopilot/automl-dm-1629064985/data-processor-models/automl-dm-1629064985-dpp0-1-9ca7d6032a6d4a8f93c551a41363fe4444f/output/model.tar.gz"
}
],
"LastModifiedTime": "2021-08-15 22:27:02.041000+00:00",
"ObjectiveStatus": "Succeeded"
}
###Markdown
Check the existence of the candidate name for the best candidate.
###Code
while 'CandidateName' not in best_candidate:
    best_candidate = automl.best_candidate(job_name=auto_ml_job_name)
    print('[INFO] Autopilot Job is generating BestCandidate CandidateName. Please wait. ')
    print(json.dumps(best_candidate, indent=4, sort_keys=True, default=str))
    time.sleep(10)

print('[OK] BestCandidate CandidateName generated.')
###Output
[OK] BestCandidate CandidateName generated.
###Markdown
Check the existence of the metric value for the best candidate.
###Code
while 'FinalAutoMLJobObjectiveMetric' not in best_candidate:
    best_candidate = automl.best_candidate(job_name=auto_ml_job_name)
    print('[INFO] Autopilot Job is generating BestCandidate FinalAutoMLJobObjectiveMetric. Please wait. ')
    print(json.dumps(best_candidate, indent=4, sort_keys=True, default=str))
    time.sleep(10)

print('[OK] BestCandidate FinalAutoMLJobObjectiveMetric generated.')
###Output
[OK] BestCandidate FinalAutoMLJobObjectiveMetric generated.
###Markdown
Print the information about the best candidate:
###Code
best_candidate_identifier = best_candidate['CandidateName']
print("Candidate name: " + best_candidate_identifier)
print("Metric name: " + best_candidate['FinalAutoMLJobObjectiveMetric']['MetricName'])
print("Metric value: " + str(best_candidate['FinalAutoMLJobObjectiveMetric']['Value']))
###Output
Candidate name: automl-dm-1629064985tWyLd3iw3fN1-003-9951da66
Metric name: validation:accuracy
Metric value: 0.6029499769210815
###Markdown
7. Review all output in S3 bucket

You will see the artifacts generated by Autopilot, including the following:

```
data-processor-models/        # "models" learned to transform raw data into features
documentation/                # explainability and other documentation about your model
preprocessed-data/            # data for train and validation
sagemaker-automl-candidates/  # candidate models which Autopilot compares
transformed-data/             # candidate-specific data for train and validation
tuning/                       # candidate-specific tuning results
validations/                  # validation results
```
###Code
from IPython.core.display import display, HTML
display(
HTML(
'<b>Review all <a target="blank" href="https://s3.console.aws.amazon.com/s3/buckets/{}?region={}&prefix=autopilot/{}/">output in S3</a></b>'.format(
bucket, region, auto_ml_job_name
)
)
)
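
# A possible alternative (a sketch, assuming the AWS CLI is available in this
# environment, as in the upload cell at the end of this notebook): list the
# generated artifacts directly from S3.
# !aws s3 ls s3://$bucket/autopilot/{auto_ml_job_name}/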
###Output
_____no_output_____
###Markdown
8. Deploy and test best candidate model

8.1. Deploy best candidate model

While batch transformations are supported, you will deploy the model as a REST endpoint in this example.

First, you need to customize the inference response. The inference containers generated by SageMaker Autopilot allow you to select the response content for predictions. By default the inference containers are configured to generate the `predicted_label`, but you can add `probability` to the list of inference response keys.
###Code
inference_response_keys = ['predicted_label', 'probability']
###Output
_____no_output_____
###Markdown
Now you will create a SageMaker endpoint from the best candidate generated by Autopilot. Wait for SageMaker to deploy the endpoint. _This cell will take approximately 5-10 minutes to run._
###Code
autopilot_model = automl.deploy(
initial_instance_count=1,
instance_type='ml.m5.large',
candidate=best_candidate,
inference_response_keys=inference_response_keys,
predictor_cls=sagemaker.predictor.Predictor,
serializer=sagemaker.serializers.JSONSerializer(),
deserializer=sagemaker.deserializers.JSONDeserializer()
)
print('\nEndpoint name: {}'.format(autopilot_model.endpoint_name))
###Output
---------------!
Endpoint name: sagemaker-sklearn-automl-2021-08-15-22-35-57-214
###Markdown
_Please wait until the ^^ endpoint ^^ is deployed._ Review the SageMaker endpoint in the AWS console.
###Code
from IPython.core.display import display, HTML
display(HTML('<b>Review <a target="blank" href="https://console.aws.amazon.com/sagemaker/home?region={}#/endpoints/{}">SageMaker REST endpoint</a></b>'.format(region, autopilot_model.endpoint_name)))
###Output
_____no_output_____
###Markdown
8.2. Test the model

Invoke a few predictions for actual reviews using the deployed endpoint.
###Code
# (Re)create the SageMaker runtime client in case it is not already defined earlier in the notebook
sm_runtime = boto3.client('sagemaker-runtime')

review_list = ['This product is great!',
               'OK, but not great.',
               'This is not the right product.']

for review in review_list:
    # remove commas from the review since we're passing the inputs as a CSV
    review = review.replace(",", "")

    response = sm_runtime.invoke_endpoint(
        EndpointName=autopilot_model.endpoint_name,  # endpoint name
        ContentType='text/csv',  # type of input data
        Accept='text/csv',  # type of the inference in the response
        Body=review  # review text
    )

    response_body = response['Body'].read().decode('utf-8').strip().split(',')

    print('Review: ', review, ' Predicted class: {}'.format(response_body[0]))

print("(-1 = Negative, 0=Neutral, 1=Positive)")
###Output
Review: This product is great! Predicted class: 1
Review: OK but not great. Predicted class: 1
Review: This is not the right product. Predicted class: -1
(-1 = Negative, 0=Neutral, 1=Positive)
###Markdown
You used Amazon SageMaker Autopilot to automatically find the best model, hyperparameters, and feature-engineering scripts for the dataset. Autopilot uses a uniquely transparent approach to AutoML by generating reusable Python scripts and notebooks.

Upload the notebook to the S3 bucket for grading purposes.

**Note:** you may need to click the "Save" button before the upload.
###Code
!aws s3 cp ./C1_W3_Assignment.ipynb s3://$bucket/C1_W3_Assignment_Learner.ipynb
###Output
upload: ./C1_W3_Assignment.ipynb to s3://sagemaker-us-east-1-575959626008/C1_W3_Assignment_Learner.ipynb
lessons/Recommendations/1_Intro_to_Recommendations/4_Collaborative Filtering.ipynb | ###Markdown
Recommendations with MovieTweetings: Collaborative Filtering

One of the most popular methods for making recommendations is **collaborative filtering**. In collaborative filtering, you are using the collaboration of user-item recommendations to assist in making new recommendations.

There are two main methods of performing collaborative filtering:

1. **Neighborhood-Based Collaborative Filtering**, which is based on the idea that we can either correlate items that are similar to provide recommendations or we can correlate users to one another to provide recommendations.

2. **Model-Based Collaborative Filtering**, which is based on the idea that we can use machine learning and other mathematical models to understand the relationships that exist amongst items and users to predict ratings and provide recommendations.

In this notebook, you will be working on performing **neighborhood-based collaborative filtering**. There are two main approaches to neighborhood-based collaborative filtering:

1. **User-based collaborative filtering:** In this type of recommendation, users related to the user you would like to make recommendations for are used to create a recommendation.

2. **Item-based collaborative filtering:** In this type of recommendation, first you need to find the items that are most related to each other item (based on similar ratings). Then you can use the ratings of an individual on those similar items to understand if a user will like the new item.

In this notebook you will be implementing **user-based collaborative filtering**. However, it is easy to extend this approach to make recommendations using **item-based collaborative filtering**. First, let's read in our data and necessary libraries.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tests as t
import progressbar
from scipy.sparse import csr_matrix
from IPython.display import HTML
%matplotlib inline
# Read in the datasets
movies = pd.read_csv('movies_clean.csv')
reviews = pd.read_csv('reviews_clean.csv')
del movies['Unnamed: 0']
del reviews['Unnamed: 0']
print(reviews.head())
###Output
user_id movie_id rating timestamp date
0 1 114508 8 1381006850 2013-10-05 17:00:50
1 2 102926 9 1590148016 2020-05-22 07:46:56
2 2 208092 5 1586466072 2020-04-09 17:01:12
3 2 358273 9 1579057827 2020-01-14 22:10:27
4 2 10039344 5 1578603053 2020-01-09 15:50:53
###Markdown
Measures of Similarity

When using **neighborhood** based collaborative filtering, it is important to understand how to measure the similarity of users or items to one another. There are a number of ways in which we might measure the similarity between two vectors (which might be two users or two items). In this notebook, we will look specifically at two measures used to compare vectors:

* **Pearson's correlation coefficient**

Pearson's correlation coefficient is a measure of the strength and direction of a linear relationship. The value for this coefficient is a value between -1 and 1 where -1 indicates a strong, negative linear relationship and 1 indicates a strong, positive linear relationship.

If we have two vectors **x** and **y**, we can define the correlation between the vectors as:

$$CORR(x, y) = \frac{\text{COV}(x, y)}{\text{STDEV}(x)\text{ }\text{STDEV}(y)}$$

where

$$\text{STDEV}(x) = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2}$$

and

$$\text{COV}(x, y) = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})$$

where n is the length of the vector, which must be the same for both x and y, and $\bar{x}$ is the mean of the observations in the vector.

We can use the correlation coefficient to indicate how alike two vectors are to one another, where the closer to 1 the coefficient, the more alike the vectors are to one another. There are some potential downsides to using this metric as a measure of similarity. You will see some of these throughout this workbook.

* **Euclidean distance**

Euclidean distance is a measure of the straight-line distance from one vector to another. Because this is a measure of distance, larger values are an indication that two vectors are different from one another (which is different from Pearson's correlation coefficient).

Specifically, the Euclidean distance between two vectors **x** and **y** is measured as:

$$ \text{EUCL}(x, y) = \sqrt{\sum_{i=1}^{n}(x_i - y_i)^2}$$

Different from the correlation coefficient, no scaling is performed in the denominator. Therefore, you need to make sure all of your data are on the same scale when using this metric.

**Note:** Because measuring similarity is often based on looking at the distance between vectors, it is important in these cases to scale your data or to have all data be on the same scale. If some measures are on a 5-point scale, while others are on a 100-point scale, you are likely to have non-optimal results due to the difference in variability of your features. Measures like Pearson's and Spearman's correlation coefficients are unit agnostic, which means it is not necessary to scale for these measures. However, many measures used to measure similarity (like Euclidean or Manhattan distances) are not unit agnostic.

In this case, we will not need to scale data because they are all on a 10-point scale, but it is always something to keep in mind!

------------

User-Item Matrix

In order to calculate the similarities, it is common to put values in a matrix. In this matrix, users are identified by each row, and items are represented by columns.

![alt text](images/userxitem.png "User Item Matrix")

In the above matrix, you can see that **User 1** and **User 2** both used **Item 1**, and **User 2**, **User 3**, and **User 4** all used **Item 2**. However, there are also a large number of missing values in the matrix for users who haven't used a particular item.
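To make the two measures above concrete, here is a small sketch on a made-up user-item matrix like the one pictured (all of the ratings below are invented purely for illustration):

```python
import numpy as np
import pandas as pd

# Toy user-item matrix (invented ratings; NaN means the user did not rate the item)
toy = pd.DataFrame({'Item 1': [10, 8, np.nan, np.nan],
                    'Item 2': [np.nan, 7, 9, 8],
                    'Item 3': [6, 5, np.nan, 7]},
                   index=['User 1', 'User 2', 'User 3', 'User 4'])

# Pearson's correlation between User 1 and User 2 (pandas ignores NaN pairs)
print(toy.loc['User 1'].corr(toy.loc['User 2']))

# Euclidean distance between the same two users over the items both rated
overlap = toy.loc[['User 1', 'User 2']].dropna(axis=1)
print(np.linalg.norm(overlap.loc['User 1'] - overlap.loc['User 2']))
```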
A matrix with many missing values (like the one above) is considered **sparse**.

Our first goal for this notebook is to create the above matrix with the **reviews** dataset. However, instead of 1 values in each cell, you should have the actual rating. The users will indicate the rows, and the movies will exist across the columns. To create the user-item matrix, we only need the first three columns of the **reviews** dataframe, which you can see by running the cell below.
###Code
user_items = reviews[['user_id', 'movie_id', 'rating']]
user_items.head()
user_items.shape
###Output
_____no_output_____
###Markdown
Creating the User-Item Matrix

In order to create the user-item matrix (like the one above), I personally started by using a [pivot table](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.pivot_table.html). However, I quickly ran into a memory error (a common theme throughout this notebook). I will help you navigate around many of the errors I had, and achieve useful collaborative filtering results!

_____

`1.` Create a matrix where the users are the rows, the movies are the columns, and the ratings exist in each cell, or a NaN exists in cells where a user hasn't rated a particular movie. If you get a memory error (like I did), [this link here](https://stackoverflow.com/questions/39648991/pandas-dataframe-pivot-memory-error) might help you!
###Code
# Create user-by-item matrix (on a subset of the reviews to avoid memory errors)
user_items2 = user_items.head(20000)
user_by_movie = user_items2.groupby(['user_id', 'movie_id'])['rating'].max().unstack()

# Flag rows that are entirely NaN, to check whether any users ended up with no ratings
user_by_movie['is_na'] = user_by_movie.isnull().apply(lambda x: all(x), axis=1)
user_by_movie[user_by_movie['is_na'] == False]

# The following did not work (memory error):
# user_by_movie = pd.Series(user_items2['rating'], index=[user_items2['user_id'], user_items2['movie_id']]).unstack()
###Output
_____no_output_____
###Markdown
Check your results below to make sure your matrix is ready for the upcoming sections.
###Code
assert movies.shape[0] == user_by_movie.shape[1], "Oh no! Your matrix should have {} columns, and yours has {}!".format(movies.shape[0], user_by_movie.shape[1])
assert reviews.user_id.nunique() == user_by_movie.shape[0], "Oh no! Your matrix should have {} rows, and yours has {}!".format(reviews.user_id.nunique(), user_by_movie.shape[0])
print("Looks like you are all set! Proceed!")
# HTML('<img src="images/greatjob.webp">')
# Drop the helper column before moving on
user_by_movie.drop(columns=['is_na'], inplace=True)

# Quick check: the movie ids that user 1 has rated
row_notnull = user_by_movie.loc[1].notnull()
row_notnull[row_notnull == True].index.values
###Output
_____no_output_____
###Markdown
`2.` Now that you have a matrix of users by movies, use this matrix to create a dictionary where the key is each user and the value is an array of the movies each user has rated.
###Code
# Create a dictionary with users and corresponding movies seen
def movies_watched(user_id):
    '''
    INPUT:
    user_id - the user_id of an individual as int
    OUTPUT:
    movies - an array of movies the user has watched
    '''
    # Keep the movie ids in the user's row that have a (non-null) rating
    row_notnull = user_by_movie.loc[user_id].notnull()
    movies = row_notnull[row_notnull == True].index.values

    # Equivalent one-liner:
    # movies = user_by_movie.loc[user_id][user_by_movie.loc[user_id].isnull() == False].index.values

    return np.array(movies)
def create_user_movie_dict():
    '''
    INPUT: None
    OUTPUT: movies_seen - a dictionary where each key is a user_id and the value is an array of movie_ids

    Creates the movies_seen dictionary
    '''
    # Hint: this may take some time, so you might want to set up a progress bar to watch things progress
    movies_seen = {}
    for user_id in user_by_movie.index:
        movies_seen[user_id] = movies_watched(user_id)

    return movies_seen


# Use your function to return dictionary
movies_seen = create_user_movie_dict()
###Output
_____no_output_____
###Markdown
`3.` If a user hasn't rated more than 2 movies, we consider them "too new". Create a new dictionary that only contains users who have rated more than 2 movies. This dictionary will be used for all the final steps of this workbook.
###Code
# Remove individuals who have watched 2 or fewer movies - don't have enough data to make recs
def create_movies_to_analyze(movies_seen, lower_bound=2):
    '''
    INPUT:
    movies_seen - a dictionary where each key is a user_id and the value is an array of movie_ids
    lower_bound - (an int) a user must have more movies seen than the lower bound to be added to the movies_to_analyze dictionary

    OUTPUT:
    movies_to_analyze - a dictionary where each key is a user_id and the value is an array of movie_ids

    The movies_seen and movies_to_analyze dictionaries should be the same except that the output dictionary
    has removed the users who rated too few movies
    '''
    # Keep only the users with more than lower_bound movies rated
    movies_to_analyze = {user_id: movies for user_id, movies in movies_seen.items()
                         if len(movies) > lower_bound}

    return movies_to_analyze


# Use your function to return your updated dictionary
movies_to_analyze = create_movies_to_analyze(movies_seen)
# Run the tests below to check that your movies_to_analyze matches the solution
assert len(movies_to_analyze) == 23512, "Oops! It doesn't look like your dictionary has the right number of individuals."
assert len(movies_to_analyze[2]) == 23, "Oops! User 2 didn't match the number of movies we thought they would have."
assert len(movies_to_analyze[7]) == 3, "Oops! User 7 didn't match the number of movies we thought they would have."
print("If this is all you see, you are good to go!")
###Output
_____no_output_____
###Markdown
Calculating User Similarities

Now that you have set up the **movies_to_analyze** dictionary, it is time to take a closer look at the similarities between users. Below is the pseudocode for how I thought about determining the similarity between users:

```
for user1 in movies_to_analyze
    for user2 in movies_to_analyze
        see how many movies match between the two users
        if more than two movies in common
            pull the overlapping movies
            compute the distance/similarity metric between ratings on the same movies for the two users
            store the users and the distance metric
```

However, this took a very long time to run, and other methods of performing these operations did not fit in the workspace memory!

Therefore, your task for this question is to look at a few specific examples of the correlation between ratings given by two users. For this question, consider that you want to compute the [correlation](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.corr.html) between users.

`4.` Using the **movies_to_analyze** dictionary and **user_by_movie** dataframe, create a function that computes the correlation between the ratings of similar movies for two users. Then use your function to compare your results to ours using the tests below.
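For reference, a direct (and slow) translation of the pseudocode above might look like the sketch below; it is shown only to illustrate the idea and is not meant to be run on the full dataset:

```python
# All-pairs similarity sketch - illustration only, far too slow for the full data
corrs = []
for user1 in movies_to_analyze:
    for user2 in movies_to_analyze:
        overlap = np.intersect1d(movies_to_analyze[user1], movies_to_analyze[user2])
        if len(overlap) > 2:
            ratings = user_by_movie.loc[[user1, user2], overlap]
            corrs.append((user1, user2, ratings.T.corr().iloc[0, 1]))
```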
###Code
def compute_correlation(user1, user2):
    '''
    INPUT
    user1 - int user_id
    user2 - int user_id
    OUTPUT
    the correlation between the matching ratings between the two users
    '''
    # Movies rated by both users
    sim_movs = np.intersect1d(movies_to_analyze[user1], movies_to_analyze[user2], assume_unique=True)

    # Correlate the two users' ratings on the co-rated movies
    df = user_by_movie.loc[[user1, user2], sim_movs]
    corr = df.transpose().corr().iloc[0, 1]

    return corr  # return the correlation
# Read in solution correlations - this will take some time to read in
import pickle
corrs_import = pickle.load(open("corrs.p", "rb"))
df_corrs = pd.DataFrame(corrs_import)
df_corrs.columns = ['user1', 'user2', 'movie_corr']
# Test your function against the solution
assert compute_correlation(2,2) == df_corrs.query("user1 == 2 and user2 == 2")['movie_corr'][0], "Oops! The correlation between a user and itself should be 1.0."
assert round(compute_correlation(2,66), 2) == round(df_corrs.query("user1 == 2 and user2 == 66")['movie_corr'][1], 2), "Oops! The correlation between user 2 and 66 should be about 0.76."
assert np.isnan(compute_correlation(2,104)) == np.isnan(df_corrs.query("user1 == 2 and user2 == 104")['movie_corr'][4]), "Oops! The correlation between user 2 and 104 should be a NaN."
print("If this is all you see, then it looks like your function passed all of our tests!")
###Output
_____no_output_____
###Markdown
Why the NaNs?

If the function you wrote passed all of the tests, then you have correctly set up your function to calculate the correlation between any two users. The **df_corrs** dataframe created in the cell leading up to the tests holds combinations of users along with their corresponding correlation.

`5.` But one question is why we are still obtaining **NaN** values. Look at the head of the dataframe below: users 2 and 104 have a correlation of **NaN**. Why?
###Code
df_corrs.head()
###Output
_____no_output_____
###Markdown
Leave your thoughts here about why the NaN exists, and use the cells below to validate your thoughts. These NaNs ultimately make the correlation coefficient a less-than-optimal measure of similarity between two users. (Hint: look back at the formula for CORR. If one user's ratings on the co-rated movies all have the same value, the standard deviation in the denominator is zero, so the correlation is undefined.)
###Code
# Which movies did both user 2 and user 104 see?
# What were the ratings for each user on those movies?
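# One possible sketch for the prompts above (assumes movies_to_analyze and
# user_by_movie from the earlier cells):
both = np.intersect1d(movies_to_analyze[2], movies_to_analyze[104])
ratings_2_104 = user_by_movie.loc[[2, 104], both]
print(ratings_2_104)

# If one user's ratings on the shared movies have zero spread, the standard
# deviation is 0 and the correlation is undefined (NaN)
print(ratings_2_104.std(axis=1))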
###Output
_____no_output_____
###Markdown
`6.` Because the correlation coefficient proved to be less than optimal for relating user ratings to one another, we could instead calculate the euclidean distance between the ratings. I found [this post](https://stackoverflow.com/questions/1401712/how-can-the-euclidean-distance-be-calculated-with-numpy) particularly helpful when I was setting up my function. This function should be very similar to your previous function. When you feel confident with your function, test it against our results.
###Code
def compute_euclidean_dist(user1, user2):
    '''
    INPUT
    user1 - int user_id
    user2 - int user_id
    OUTPUT
    the euclidean distance between user1 and user2
    '''
    # Movies rated by both users
    sim_movs = np.intersect1d(movies_to_analyze[user1], movies_to_analyze[user2], assume_unique=True)

    # Straight-line distance between the two users' ratings on those movies
    df = user_by_movie.loc[[user1, user2], sim_movs]
    dist = np.linalg.norm(df.loc[user1] - df.loc[user2])

    return dist  # return the euclidean distance
# Read in solution euclidean distances - this will take some time to read in
df_dists = pickle.load(open("dists.p", "rb"))
# Test your function against the solution
assert compute_euclidean_dist(2,2) == df_dists.query("user1 == 2 and user2 == 2")['eucl_dist'][0], "Oops! The distance between a user and itself should be 0.0."
assert round(compute_euclidean_dist(2,66), 2) == round(df_dists.query("user1 == 2 and user2 == 66")['eucl_dist'][1], 2), "Oops! The distance between user 2 and 66 should be about 2.24."
assert np.isnan(compute_euclidean_dist(2,104)) == np.isnan(df_dists.query("user1 == 2 and user2 == 104")['eucl_dist'][4]), "Oops! The distance between user 2 and 104 should be 2."
print("If this is all you see, then it looks like your function passed all of our tests!")
###Output
_____no_output_____
###Markdown
Using the Nearest Neighbors to Make Recommendations

In the previous questions, you read in **df_corrs** and **df_dists**. Therefore, you have a measure of distance and similarity for each user to every other user. These dataframes hold every possible combination of users, as well as the corresponding correlation or euclidean distance, respectively.

Because of the **NaN** values that exist within **df_corrs**, we will proceed using **df_dists**. You will want to find the users that are 'nearest' each user. Then you will want to find the movies the closest neighbors have liked to recommend to each user.

I made use of the following objects:

* df_dists (to obtain the neighbors)
* user_items (to obtain the movies the neighbors and users have rated)
* movies (to obtain the names of the movies)

`7.` Complete the functions below, which allow you to find the recommendations for any user. There are five functions which you will need:

* **find_closest_neighbors** - this returns a list of user_ids from closest neighbor to farthest neighbor using euclidean distance
* **movies_liked** - returns an array of movie_ids
* **movie_names** - takes the output of movies_liked and returns a list of movie names associated with the movie_ids
* **make_recommendations** - takes a user id and goes through closest neighbors to return a list of movie names as recommendations
* **all_recommendations** - loops through every user and returns a dictionary with the key as a user_id and the value as a list of movie recommendations
###Code
def find_closest_neighbors(user):
    '''
    INPUT:
        user - (int) the user_id of the individual you want to find the closest users
    OUTPUT:
        closest_neighbors - an array of the id's of the users sorted from closest to farthest away
    '''
    # I treated ties as arbitrary and just kept whichever was easiest to keep using the head method
    # You might choose to do something less hand wavy - order the neighbors

    # One possible implementation: sort the other users by euclidean distance,
    # dropping the user itself (which always has distance 0)
    closest_users = df_dists[df_dists['user1'] == user].sort_values(by='eucl_dist').iloc[1:]['user2']
    closest_neighbors = np.array(closest_users)

    return closest_neighbors


def movies_liked(user_id, min_rating=7):
    '''
    INPUT:
        user_id - the user_id of an individual as int
        min_rating - the minimum rating for a movie to still be considered a "like" and not a "dislike"
    OUTPUT:
        movies_liked - an array of movies the user has watched and liked
    '''
    movies_liked = np.array(user_items.query('user_id == @user_id and rating > @min_rating')['movie_id'])

    return movies_liked


def movie_names(movie_ids):
    '''
    INPUT
        movie_ids - a list of movie_ids
    OUTPUT
        movies - a list of movie names associated with the movie_ids
    '''
    # Assumes the title column of the movies dataframe is named 'movie'
    movie_lst = list(movies[movies['movie_id'].isin(movie_ids)]['movie'])

    return movie_lst


def make_recommendations(user, num_recs=10):
    '''
    INPUT:
        user - (int) a user_id of the individual you want to make recommendations for
        num_recs - (int) number of movies to return
    OUTPUT:
        recommendations - a list of movies - if there are "num_recs" recommendations return this many
        otherwise return the total number of recommendations available for the "user"
        which may just be an empty list
    '''
    # Collect movies liked by the nearest neighbors that the user hasn't already seen
    movies_seen = movies_watched(user)
    recs = np.array([])
    for neighbor in find_closest_neighbors(user):
        new_recs = np.setdiff1d(movies_liked(neighbor), movies_seen, assume_unique=True)
        recs = np.unique(np.concatenate([new_recs, recs], axis=0))
        if len(recs) > num_recs - 1:
            break

    recommendations = movie_names(recs[:num_recs])

    return recommendations


def all_recommendations(num_recs=10):
    '''
    INPUT
        num_recs (int) the (max) number of recommendations for each user
    OUTPUT
        all_recs - a dictionary where each key is a user_id and the value is an array of recommended movie titles
    '''
    # Apply make_recommendations for each user -
    # hint this may take some time, so you might want to set up a progress bar to watch things progress
    all_recs = dict()
    for user in np.unique(df_dists['user1']):
        all_recs[user] = make_recommendations(user, num_recs)

    return all_recs


all_recs = all_recommendations(10)
# This may take some time - it loads our solution dictionary so you can compare results
all_recs_sol = pickle.load(open("all_recs.p", "rb"))
assert all_recs[2] == make_recommendations(2), "Oops! Your recommendations for user 2 didn't match ours."
assert all_recs[26] == make_recommendations(26), "Oops! It actually wasn't possible to make any recommendations for user 26."
assert all_recs[1503] == make_recommendations(1503), "Oops! Looks like your solution for user 1503 didn't match ours."
print("If you made it here, you now have recommendations for many users using collaborative filtering!")
HTML('<img src="images/greatjob.webp">')
###Output
_____no_output_____
###Markdown
Now What?

If you made it this far, you have successfully implemented a solution to making recommendations using collaborative filtering.

`8.` Let's do a quick recap of the steps taken to obtain recommendations using collaborative filtering.
###Code
# Check your understanding of the results by correctly filling in the dictionary below
a = "pearson's correlation and spearman's correlation"
b = 'item based collaborative filtering'
c = "there were too many ratings to get a stable metric"
d = 'user based collaborative filtering'
e = "euclidean distance and pearson's correlation coefficient"
f = "manhatten distance and euclidean distance"
g = "spearman's correlation and euclidean distance"
h = "the spread in some ratings was zero"
i = 'content based recommendation'

sol_dict = {
    'The type of recommendation system implemented here was a ...': d,
    'The two methods used to estimate user similarity were: ': e,
    'There was an issue with using the correlation coefficient. What was it?': h,
}

t.test_recs(sol_dict)
###Output
_____no_output_____
###Markdown
Additionally, let's take a closer look at some of the results. There are three objects that you read in to check your results against the solution:

* **df_corrs** - a dataframe of user1, user2, pearson correlation between the two users
* **df_dists** - a dataframe of user1, user2, euclidean distance between the two users
* **all_recs_sol** - a dictionary of all recommendations (key = user, value = list of recommendations)

Looping your results from the correlation and euclidean distance functions through every pair of users could have been used to create the first two objects (I don't recommend doing this given how long it will take).

`9.` Use these three objects along with the cells below to correctly fill in the dictionary below and complete this notebook!
###Code
a = 567
b = 1503
c = 1319
d = 1325
e = 2526710
f = 0
g = 'Use another method to make recommendations - content based, knowledge based, or model based collaborative filtering'
sol_dict2 = {
'For how many pairs of users were we not able to obtain a measure of similarity using correlation?': # letter here,
'For how many pairs of users were we not able to obtain a measure of similarity using euclidean distance?': # letter here,
'For how many users were we unable to make any recommendations for using collaborative filtering?': # letter here,
'For how many users were we unable to make 10 recommendations for using collaborative filtering?': # letter here,
'What might be a way for us to get 10 recommendations for every user?': # letter here
}
t.test_recs2(sol_dict2)
#Use the below cells for any work you need to do!
# Users without recs
# NaN correlation values
# NaN euclidean distance values
# Users with less than 10 recs
###Output
_____no_output_____
###Markdown
Recommendations with MovieTweetings: Collaborative Filtering One of the most popular methods for making recommendations is **collaborative filtering**. In collaborative filtering, you are using the collaboration of user-item ratings to assist in making new recommendations. There are two main methods of performing collaborative filtering:1. **Neighborhood-Based Collaborative Filtering**, which is based on the idea that we can either correlate items that are similar to provide recommendations or we can correlate users to one another to provide recommendations.2. **Model Based Collaborative Filtering**, which is based on the idea that we can use machine learning and other mathematical models to understand the relationships that exist amongst items and users to predict ratings and provide recommendations.In this notebook, you will be working on performing **neighborhood-based collaborative filtering**. There are two main methods for performing neighborhood-based collaborative filtering:1. **User-based collaborative filtering:** In this type of recommendation, users related to the user you would like to make recommendations for are used to create a recommendation.2. **Item-based collaborative filtering:** In this type of recommendation, first you need to find the items that are most related to each other item (based on similar ratings). Then you can use the ratings of an individual on those similar items to understand if a user will like the new item.In this notebook you will be implementing **user-based collaborative filtering**. However, it is easy to extend this approach to make recommendations using **item-based collaborative filtering**. First, let's read in our data and necessary libraries.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tests as t
import progressbar
from scipy.sparse import csr_matrix
from IPython.display import HTML
%matplotlib inline
# Read in the datasets
movies = pd.read_csv('movies_clean.csv')
reviews = pd.read_csv('reviews_clean.csv')
del movies['Unnamed: 0']
del reviews['Unnamed: 0']
reviews = reviews.head(100000)
# create user by matrix
# user_items = reviews[['user_id', 'movie_id', 'rating']]
# user_by_movie = user_items.groupby(['user_id', 'movie_id'])['rating'].max().unstack()
# user_movie_subset = user_by_movie[[73486, 75314, 68646, 99685]]
print(reviews.head())
pd.__version__
###Output
_____no_output_____
###Markdown
There was a problem with the big dataset, so I had to limit the number of rows from 800k to 100k. In the end I used the Udacity workspace, as I encountered problems with the pickle file.
###Code
movies.head()
reviews.shape
###Output
_____no_output_____
###Markdown
Measures of Similarity When using **neighborhood** based collaborative filtering, it is important to understand how to measure the similarity of users or items to one another. There are a number of ways in which we might measure the similarity between two vectors (which might be two users or two items). In this notebook, we will look specifically at two measures used to compare vectors:* **Pearson's correlation coefficient** Pearson's correlation coefficient is a measure of the strength and direction of a linear relationship. The value for this coefficient is a value between -1 and 1 where -1 indicates a strong, negative linear relationship and 1 indicates a strong, positive linear relationship. If we have two vectors **x** and **y**, we can define the correlation between the vectors as:$$CORR(x, y) = \frac{\text{COV}(x, y)}{\text{STDEV}(x)\text{ }\text{STDEV}(y)}$$where $$\text{STDEV}(x) = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2}$$and $$\text{COV}(x, y) = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})$$where n is the length of the vectors, which must be the same for both x and y, and $\bar{x}$ is the mean of the observations in the vector. We can use the correlation coefficient to indicate how alike two vectors are to one another, where the closer the coefficient is to 1, the more alike the vectors are. There are some potential downsides to using this metric as a measure of similarity. You will see some of these throughout this workbook.* **Euclidean distance** Euclidean distance is a measure of the straight-line distance from one vector to another. Because this is a measure of distance, larger values indicate that two vectors are different from one another (unlike Pearson's correlation coefficient). Specifically, the euclidean distance between two vectors **x** and **y** is measured as:$$ \text{EUCL}(x, y) = \sqrt{\sum_{i=1}^{n}(x_i - y_i)^2}$$Different from the correlation coefficient, no scaling is performed in the denominator. Therefore, you need to make sure all of your data are on the same scale when using this metric.**Note:** Because measuring similarity is often based on looking at the distance between vectors, it is important in these cases to scale your data or to have all data be in the same scale. If some measures are on a 5 point scale, while others are on a 100 point scale, you are likely to have non-optimal results due to the difference in variability of your features. Measures like Pearson's and Spearman's correlation coefficients are unit agnostic, which means it is not necessary to scale for these measures. However, many measures used to measure similarity (like euclidean or manhattan distances) are not unit agnostic.In this case, we will not need to scale the data because all ratings are on a 10 point scale, but it is always something to keep in mind!------------ User-Item Matrix In order to calculate the similarities, it is common to put values in a matrix. In this matrix, users are identified by each row, and items are represented by columns. ![alt text](images/userxitem.png "User Item Matrix") In the above matrix, you can see that **User 1** and **User 2** both used **Item 1**, and **User 2**, **User 3**, and **User 4** all used **Item 2**. However, there are also a large number of missing values in the matrix for users who haven't used a particular item. 
A matrix with many missing values (like the one above) is considered **sparse**. Our first goal for this notebook is to create the above matrix with the **reviews** dataset. However, instead of 1 values in each cell, you should have the actual rating. The users will indicate the rows, and the movies will exist across the columns. To create the user-item matrix, we only need the first three columns of the **reviews** dataframe, which you can see by running the cell below.
###Code
user_items = reviews[['user_id', 'movie_id', 'rating']]
user_items.head()
###Output
_____no_output_____
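Before moving on, here is a minimal sanity-check sketch of the two similarity measures defined above. The toy vectors `x` and `y` are made-up illustrations, not data from this notebook:
```
import numpy as np

# Two toy rating vectors over the same five movies
x = np.array([8, 9, 10, 5, 7])
y = np.array([7, 8, 10, 6, 8])

# Pearson's correlation coefficient - covariance scaled by both standard deviations
corr = np.corrcoef(x, y)[0, 1]

# Euclidean distance - the straight-line distance between the two vectors
dist = np.linalg.norm(x - y)

print(corr, dist)
```
Note that two users can be perfectly correlated yet far apart in euclidean distance (for example, one user rating every movie exactly 2 points higher than the other), which is one reason the two measures can disagree.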
###Markdown
Creating the User-Item Matrix In order to create the user-item matrix (like the one above), I personally started by using a [pivot table](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.pivot_table.html). However, I quickly ran into a memory error (a common theme throughout this notebook). I will help you navigate around many of the errors I had, and achieve useful collaborative filtering results! _____`1.` Create a matrix where the users are the rows, the movies are the columns, and the ratings exist in each cell, or a NaN exists in cells where a user hasn't rated a particular movie. If you get a memory error (like I did), [this link here](https://stackoverflow.com/questions/39648991/pandas-dataframe-pivot-memory-error) might help you!
###Code
# Create user-by-item matrix
user_by_movie = user_items.groupby(['user_id', 'movie_id'])['rating'].max().unstack()
from tqdm import tqdm
chunk_size = 50000
# Include the end of the frame as the final boundary so the last partial chunk is kept
chunks = list(range(0, reviews.shape[0], chunk_size)) + [reviews.shape[0]]
for i in range(0, len(chunks) - 1):
print(chunks[i], chunks[i + 1])
pivot_df = pd.DataFrame()
for i in tqdm(range(0, len(chunks) - 1)):
# iloc slicing is end-exclusive, so no "- 1" is needed here
chunk_df = reviews.iloc[chunks[i]:chunks[i + 1]]
interactions = (chunk_df.groupby(['user_id', 'movie_id'])['rating']
.sum()
.unstack()
.reset_index()
.fillna(0)
.set_index('user_id')
)
print(interactions.shape)
pivot_df = pivot_df.append(interactions, sort=False)
###Output
0%| | 0/17 [00:00<?, ?it/s]
###Markdown
Check your results below to make sure your matrix is ready for the upcoming sections.
###Code
assert movies.shape[0] == user_by_movie.shape[1], "Oh no! Your matrix should have {} columns, and yours has {}!".format(movies.shape[0], user_by_movie.shape[1])
assert reviews.user_id.nunique() == user_by_movie.shape[0], "Oh no! Your matrix should have {} rows, and yours has {}!".format(reviews.user_id.nunique(), user_by_movie.shape[0])
print("Looks like you are all set! Proceed!")
HTML('<img src="images/greatjob.webp">')
###Output
_____no_output_____
###Markdown
`2.` Now that you have a matrix of users by movies, use this matrix to create a dictionary where the key is each user and the value is an array of the movies each user has rated.
###Code
# Create a dictionary with users and corresponding movies seen
def movies_watched(user_id):
'''
INPUT:
user_id - the user_id of an individual as int
OUTPUT:
movies - an array of movies the user has watched
'''
# The movies a user has rated are the non-NaN columns of their row
movies = user_by_movie.loc[user_id].dropna().index.values
return movies
def create_user_movie_dict():
'''
INPUT: None
OUTPUT: movies_seen - a dictionary where each key is a user_id and the value is an array of movie_ids
Creates the movies_seen dictionary
'''
# Do things - hint this may take some time, so you might want to set up a progress bar to watch things progress
movies_seen = dict()
n_users = user_by_movie.shape[0]
# Set up a progress bar
cnter = 0
bar = progressbar.ProgressBar(maxval=n_users+1, widgets=[progressbar.Bar('=', '[', ']'), ' ', progressbar.Percentage()])
bar.start()
for user1 in range(1, n_users+1):
# update progress bar
cnter+=1
bar.update(cnter)
# assign list of movies to each user key
movies_seen[user1] = movies_watched(user1)
bar.finish()
return movies_seen
# Use your function to return dictionary
movies_seen = create_user_movie_dict()
###Output
[========================================================================] 100%
###Markdown
`3.` If a user hasn't rated more than 2 movies, we consider these users "too new". Create a new dictionary that only contains users who have rated more than 2 movies. This dictionary will be used for all the final steps of this workbook.
###Code
# Remove individuals who have watched 2 or fewer movies - don't have enough data to make recs
def create_movies_to_analyze(movies_seen, lower_bound=2):
'''
INPUT:
movies_seen - a dictionary where each key is a user_id and the value is an array of movie_ids
lower_bound - (an int) a user must have more movies seen than the lower bound to be added to the movies_to_analyze dictionary
OUTPUT:
movies_to_analyze - a dictionary where each key is a user_id and the value is an array of movie_ids
The movies_seen and movies_to_analyze dictionaries should be the same except that the output dictionary has removed any users with lower_bound or fewer movies rated
'''
# Do things to create updated dictionary
movies_to_analyze = dict()
for user, movies in movies_seen.items():
if len(movies) > lower_bound:
movies_to_analyze[user] = movies
return movies_to_analyze
# Use your function to return your updated dictionary
movies_to_analyze = create_movies_to_analyze(movies_seen)
# Run the tests below to check that your movies_to_analyze matches the solution
assert len(movies_to_analyze) == 23512, "Oops! It doesn't look like your dictionary has the right number of individuals."
assert len(movies_to_analyze[2]) == 23, "Oops! User 2 didn't match the number of movies we thought they would have."
assert len(movies_to_analyze[7]) == 3, "Oops! User 7 didn't match the number of movies we thought they would have."
print("If this is all you see, you are good to go!")
###Output
_____no_output_____
###Markdown
Calculating User Similarities Now that you have set up the **movies_to_analyze** dictionary, it is time to take a closer look at the similarities between users. Below is the pseudocode for how I thought about determining the similarity between users:```for user1 in movies_to_analyze for user2 in movies_to_analyze see how many movies match between the two users if more than two movies in common pull the overlapping movies compute the distance/similarity metric between ratings on the same movies for the two users store the users and the distance metric```However, this took a very long time to run, and other methods of performing these operations did not fit in the workspace memory!Therefore, your task for this question is to look at a few specific examples of the correlation between ratings given by two users. For this question, suppose you want to compute the [correlation](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.corr.html) between users.`4.` Using the **movies_to_analyze** dictionary and **user_by_movie** dataframe, create a function that computes the correlation between the ratings of similar movies for two users. Then use your function to compare your results to ours using the tests below.
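For reference, here is a hedged Python sketch of the pseudocode above. It assumes the `movies_to_analyze` dictionary from earlier and the `compute_correlation` function defined in the next cell, and it is exactly the all-pairs loop that proved too slow to run in full:
```
from itertools import combinations

# Sketch only: the slow, all-pairs approach described in the pseudocode
pairwise_corrs = []
for user1, user2 in combinations(movies_to_analyze.keys(), 2):
    # See how many movies match between the two users
    shared = np.intersect1d(movies_to_analyze[user1],
                            movies_to_analyze[user2],
                            assume_unique=True)
    # If more than two movies in common, compute and store the metric
    if len(shared) > 2:
        pairwise_corrs.append((user1, user2, compute_correlation(user1, user2)))
```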
###Code
def compute_correlation(user1, user2):
'''
INPUT
user1 - int user_id
user2 - int user_id
OUTPUT
the correlation between the matching ratings between the two users
'''
# Pull movies for each user
movies1 = movies_to_analyze[user1]
movies2 = movies_to_analyze[user2]
# Find Similar Movies
sim_movs = np.intersect1d(movies1, movies2, assume_unique=True)
# Calculate correlation between the users
df = user_by_movie.loc[(user1, user2), sim_movs]
corr = df.transpose().corr().iloc[0,1]
return corr #return the correlation
# Read in solution correlations - this will take some time to read in
import pickle
corrs_import = pickle.load(open("corrs.p", "rb"))
df_corrs = pd.DataFrame(corrs_import)
df_corrs.columns = ['user1', 'user2', 'movie_corr']
# check if pickle is empty
import os
scores = {} # scores is an empty dict already
if os.path.getsize('corrs.p') > 0:
with open('corrs.p', "rb") as f:
unpickler = pickle.Unpickler(f)
# if file is not empty scores will be equal
# to the value unpickled
scores = unpickler.load()
# check what files in the working directory
import os
cwd = os.getcwd() # Get the current working directory (cwd)
files = os.listdir(cwd) # Get all the files in that directory
print("Files in %r: %s" % (cwd, files))
# Test your function against the solution
assert compute_correlation(2,2) == df_corrs.query("user1 == 2 and user2 == 2")['movie_corr'][0], "Oops! The correlation between a user and itself should be 1.0."
assert round(compute_correlation(2,66), 2) == round(df_corrs.query("user1 == 2 and user2 == 66")['movie_corr'][1], 2), "Oops! The correlation between user 2 and 66 should be about 0.76."
assert np.isnan(compute_correlation(2,104)) == np.isnan(df_corrs.query("user1 == 2 and user2 == 104")['movie_corr'][4]), "Oops! The correlation between user 2 and 104 should be a NaN."
print("If this is all you see, then it looks like your function passed all of our tests!")
###Output
_____no_output_____
###Markdown
Why the NaNs? If the function you wrote passed all of the tests, then you have correctly set up your function to calculate the correlation between any two users. The **df_corrs** dataframe created in the cell leading up to the tests holds combinations of users along with their corresponding correlation. `5.` One question remains: why are we still obtaining **NaN** values? Look at the head of the dataframe below: users 2 and 104 have a correlation of **NaN**. Why?
###Code
df_corrs.head()
###Output
_____no_output_____
###Markdown
Leave your thoughts here about why the NaN exists, and use the cells below to validate your thoughts. These NaNs ultimately make the correlation coefficient a less-than-optimal measure of similarity between two users.
###Code
# Which movies did both user 2 and user 104 see?
# What were the ratings for each user on those movies?
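# Hedged illustration with made-up toy ratings (not from the dataset): if one
# user's ratings on the shared movies have zero spread, the standard deviation
# is 0, the denominator of the correlation formula is 0, and the result is NaN.
toy = pd.DataFrame({'user_a': [9, 9, 9], 'user_b': [7, 8, 10]})
print(toy.corr())  # correlations involving the constant user_a come out NaN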
###Output
_____no_output_____
###Markdown
`6.` Because the correlation coefficient proved to be less than optimal for relating user ratings to one another, we could instead calculate the euclidean distance between the ratings. I found [this post](https://stackoverflow.com/questions/1401712/how-can-the-euclidean-distance-be-calculated-with-numpy) particularly helpful when I was setting up my function. This function should be very similar to your previous function. When you feel confident with your function, test it against our results.
###Code
def compute_euclidean_dist(user1, user2):
'''
INPUT
user1 - int user_id
user2 - int user_id
OUTPUT
the euclidean distance between user1 and user2
'''
# Pull movies for each user
movies1 = movies_to_analyze[user1]
movies2 = movies_to_analyze[user2]
# Find the movies both users have rated
sim_movs = np.intersect1d(movies1, movies2, assume_unique=True)
# Compute the straight-line distance between the two users' ratings on those movies
df = user_by_movie.loc[(user1, user2), sim_movs]
df = df.transpose()
dist = np.linalg.norm(df[user1] - df[user2])
return dist #return the euclidean distance
# Read in solution euclidean distances - this will take some time to read in
df_dists = pickle.load(open("dists.p", "rb"))
# Test your function against the solution
assert compute_euclidean_dist(2,2) == df_dists.query("user1 == 2 and user2 == 2")['eucl_dist'][0], "Oops! The distance between a user and itself should be 0.0."
assert round(compute_euclidean_dist(2,66), 2) == round(df_dists.query("user1 == 2 and user2 == 66")['eucl_dist'][1], 2), "Oops! The distance between user 2 and 66 should be about 2.24."
assert np.isnan(compute_euclidean_dist(2,104)) == np.isnan(df_dists.query("user1 == 2 and user2 == 104")['eucl_dist'][4]), "Oops! The distance between user 2 and 104 should be 2."
print("If this is all you see, then it looks like your function passed all of our tests!")
###Output
_____no_output_____
###Markdown
Using the Nearest Neighbors to Make Recommendations In the previous questions, you read in **df_corrs** and **df_dists**. Therefore, you have a measure of distance and similarity for each user to every other user. These dataframes hold every possible combination of users, as well as the corresponding correlation or euclidean distance, respectively.Because of the **NaN** values that exist within **df_corrs**, we will proceed using **df_dists**. You will want to find the users that are 'nearest' each user. Then you will want to find the movies the closest neighbors have liked to recommend to each user.I made use of the following objects:* df_dists (to obtain the neighbors)* user_items (to obtain the movies the neighbors and users have rated)* movies (to obtain the names of the movies)`7.` Complete the functions below, which allow you to find the recommendations for any user. There are five functions which you will need:* **find_closest_neighbors** - returns a list of user_ids from closest neighbor to farthest neighbor using euclidean distance* **movies_liked** - returns an array of movie_ids the user has rated above the "like" threshold* **movie_names** - takes the output of movies_liked and returns a list of movie names associated with the movie_ids* **make_recommendations** - takes a user id and goes through the closest neighbors to return a list of movie names as recommendations* **all_recommendations** - loops through every user and returns a dictionary with the key as a user_id and the value as a list of movie recommendations
###Code
def find_closest_neighbors(user):
'''
INPUT:
user - (int) the user_id of the individual you want to find the closest users
OUTPUT:
closest_neighbors - an array of the id's of the users sorted from closest to farthest away
'''
# I treated ties as arbitrary and just kept whichever was easiest to keep using the head method
# You might choose to do something less hand wavy - order the neighbors
return closest_neighbors
def movies_liked(user_id, min_rating=7):
'''
INPUT:
user_id - the user_id of an individual as int
min_rating - ratings above this value count as a "like"; ratings at or below count as a "dislike"
OUTPUT:
movies_liked - an array of movies the user has watched and liked
'''
return movies_liked
def movie_names(movie_ids):
'''
INPUT
movie_ids - a list of movie_ids
OUTPUT
movies - a list of movie names associated with the movie_ids
'''
return movie_lst
def make_recommendations(user, num_recs=10):
'''
INPUT:
user - (int) a user_id of the individual you want to make recommendations for
num_recs - (int) number of movies to return
OUTPUT:
recommendations - a list of movies - if there are "num_recs" recommendations return this many
otherwise return the total number of recommendations available for the "user"
which may just be an empty list
'''
return recommendations
def all_recommendations(num_recs=10):
'''
INPUT
num_recs (int) the (max) number of recommendations for each user
OUTPUT
all_recs - a dictionary where each key is a user_id and the value is an array of recommended movie titles
'''
# Apply make recs for each user -
# hint this may take some time, so you might want to set up a progress bar to watch things progress
return all_recs
all_recs = all_recommendations(10)
# This may take some time - it loads our solution dictionary so you can compare results
all_recs_sol = pickle.load(open("all_recs.p", "rb"))
assert all_recs[2] == make_recommendations(2), "Oops! Your recommendations for user 2 didn't match ours."
assert all_recs[26] == make_recommendations(26), "Oops! It actually wasn't possible to make any recommendations for user 26."
assert all_recs[1503] == make_recommendations(1503), "Oops! Looks like your solution for user 1503 didn't match ours."
print("If you made it here, you now have recommendations for many users using collaborative filtering!")
HTML('<img src="images/greatjob.webp">')
###Output
_____no_output_____
###Markdown
Now What? If you made it this far, you have successfully implemented a solution to making recommendations using collaborative filtering. `8.` Let's do a quick recap of the steps taken to obtain recommendations using collaborative filtering.
###Code
# Check your understanding of the results by correctly filling in the dictionary below
a = "pearson's correlation and spearman's correlation"
b = 'item based collaborative filtering'
c = "there were too many ratings to get a stable metric"
d = 'user based collaborative filtering'
e = "euclidean distance and pearson's correlation coefficient"
f = "manhatten distance and euclidean distance"
g = "spearman's correlation and euclidean distance"
h = "the spread in some ratings was zero"
i = 'content based recommendation'
sol_dict = {
'The type of recommendation system implemented here was a ...': d,
'The two methods used to estimate user similarity were: ': e,
'There was an issue with using the correlation coefficient. What was it?': h
}
t.test_recs(sol_dict)
###Output
_____no_output_____
###Markdown
Additionally, let's take a closer look at some of the results. There are three objects that you read in to check your results against the solution:* **df_corrs** - a dataframe of user1, user2, pearson correlation between the two users* **df_dists** - a dataframe of user1, user2, euclidean distance between the two users* **all_recs_sol** - a dictionary of all recommendations (key = user, value = list of recommendations)Looping your results from the correlation and euclidean distance functions through every pair of users could have been used to create the first two objects (I don't recommend doing this given how long it will take). `9.` Use these three objects along with the cells below to correctly fill in the dictionary below and complete this notebook!
###Code
a = 567
b = 1503
c = 1319
d = 1325
e = 2526710
f = 0
g = 'Use another method to make recommendations - content based, knowledge based, or model based collaborative filtering'
sol_dict2 = {
'For how many pairs of users were we not able to obtain a measure of similarity using correlation?': # letter here,
'For how many pairs of users were we not able to obtain a measure of similarity using euclidean distance?': # letter here,
'For how many users were we unable to make any recommendations for using collaborative filtering?': # letter here,
'For how many users were we unable to make 10 recommendations for using collaborative filtering?': # letter here,
'What might be a way for us to get 10 recommendations for every user?': # letter here
}
t.test_recs2(sol_dict2)
#Use the below cells for any work you need to do!
# Users without recs
# NaN correlation values
# NaN euclidean distance values
# Users with less than 10 recs
###Output
_____no_output_____
###Markdown
Recommendations with MovieTweetings: Collaborative Filtering One of the most popular methods for making recommendations is **collaborative filtering**. In collaborative filtering, you are using the collaboration of user-item ratings to assist in making new recommendations. There are two main methods of performing collaborative filtering:1. **Neighborhood-Based Collaborative Filtering**, which is based on the idea that we can either correlate items that are similar to provide recommendations or we can correlate users to one another to provide recommendations.2. **Model Based Collaborative Filtering**, which is based on the idea that we can use machine learning and other mathematical models to understand the relationships that exist amongst items and users to predict ratings and provide recommendations.In this notebook, you will be working on performing **neighborhood-based collaborative filtering**. There are two main methods for performing neighborhood-based collaborative filtering:1. **User-based collaborative filtering:** In this type of recommendation, users related to the user you would like to make recommendations for are used to create a recommendation.2. **Item-based collaborative filtering:** In this type of recommendation, first you need to find the items that are most related to each other item (based on similar ratings). Then you can use the ratings of an individual on those similar items to understand if a user will like the new item.In this notebook you will be implementing **user-based collaborative filtering**. However, it is easy to extend this approach to make recommendations using **item-based collaborative filtering**. First, let's read in our data and necessary libraries.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tests as t
import progressbar
from scipy.sparse import csr_matrix
from IPython.display import HTML
%matplotlib inline
# Read in the datasets
movies = pd.read_csv('movies_clean.csv')
reviews = pd.read_csv('reviews_clean.csv')
del movies['Unnamed: 0']
del reviews['Unnamed: 0']
print(reviews.head())
###Output
user_id movie_id rating timestamp date month_1 \
0 1 68646 10 1381620027 2013-10-12 23:20:27 0
1 1 113277 10 1379466669 2013-09-18 01:11:09 0
2 2 422720 8 1412178746 2014-10-01 15:52:26 0
3 2 454876 8 1394818630 2014-03-14 17:37:10 0
4 2 790636 7 1389963947 2014-01-17 13:05:47 0
month_2 month_3 month_4 month_5 ... month_9 month_10 month_11 \
0 0 0 0 0 ... 0 1 0
1 0 0 0 0 ... 0 0 0
2 0 0 0 0 ... 0 1 0
3 0 0 0 0 ... 0 0 0
4 0 0 0 0 ... 0 0 0
month_12 year_2013 year_2014 year_2015 year_2016 year_2017 year_2018
0 0 1 0 0 0 0 0
1 0 1 0 0 0 0 0
2 0 0 1 0 0 0 0
3 0 0 1 0 0 0 0
4 0 0 1 0 0 0 0
[5 rows x 23 columns]
###Markdown
Measures of Similarity When using **neighborhood** based collaborative filtering, it is important to understand how to measure the similarity of users or items to one another. There are a number of ways in which we might measure the similarity between two vectors (which might be two users or two items). In this notebook, we will look specifically at two measures used to compare vectors:* **Pearson's correlation coefficient** Pearson's correlation coefficient is a measure of the strength and direction of a linear relationship. The value for this coefficient is a value between -1 and 1 where -1 indicates a strong, negative linear relationship and 1 indicates a strong, positive linear relationship. If we have two vectors **x** and **y**, we can define the correlation between the vectors as:$$CORR(x, y) = \frac{\text{COV}(x, y)}{\text{STDEV}(x)\text{ }\text{STDEV}(y)}$$where $$\text{STDEV}(x) = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2}$$and $$\text{COV}(x, y) = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})$$where n is the length of the vectors, which must be the same for both x and y, and $\bar{x}$ is the mean of the observations in the vector. We can use the correlation coefficient to indicate how alike two vectors are to one another, where the closer the coefficient is to 1, the more alike the vectors are. There are some potential downsides to using this metric as a measure of similarity. You will see some of these throughout this workbook.* **Euclidean distance** Euclidean distance is a measure of the straight-line distance from one vector to another. Because this is a measure of distance, larger values indicate that two vectors are different from one another (unlike Pearson's correlation coefficient). Specifically, the euclidean distance between two vectors **x** and **y** is measured as:$$ \text{EUCL}(x, y) = \sqrt{\sum_{i=1}^{n}(x_i - y_i)^2}$$Different from the correlation coefficient, no scaling is performed in the denominator. Therefore, you need to make sure all of your data are on the same scale when using this metric.**Note:** Because measuring similarity is often based on looking at the distance between vectors, it is important in these cases to scale your data or to have all data be in the same scale. If some measures are on a 5 point scale, while others are on a 100 point scale, you are likely to have non-optimal results due to the difference in variability of your features. Measures like Pearson's and Spearman's correlation coefficients are unit agnostic, which means it is not necessary to scale for these measures. However, many measures used to measure similarity (like euclidean or manhattan distances) are not unit agnostic.In this case, we will not need to scale the data because all ratings are on a 10 point scale, but it is always something to keep in mind!------------ User-Item Matrix In order to calculate the similarities, it is common to put values in a matrix. In this matrix, users are identified by each row, and items are represented by columns. ![alt text](images/userxitem.png "User Item Matrix") In the above matrix, you can see that **User 1** and **User 2** both used **Item 1**, and **User 2**, **User 3**, and **User 4** all used **Item 2**. However, there are also a large number of missing values in the matrix for users who haven't used a particular item. 
A matrix with many missing values (like the one above) is considered **sparse**. Our first goal for this notebook is to create the above matrix with the **reviews** dataset. However, instead of 1 values in each cell, you should have the actual rating. The users will indicate the rows, and the movies will exist across the columns. To create the user-item matrix, we only need the first three columns of the **reviews** dataframe, which you can see by running the cell below.
###Code
user_items = reviews[['user_id', 'movie_id', 'rating']]
user_items.head()
###Output
_____no_output_____
###Markdown
Creating the User-Item Matrix In order to create the user-item matrix (like the one above), I personally started by using a [pivot table](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.pivot_table.html). However, I quickly ran into a memory error (a common theme throughout this notebook). I will help you navigate around many of the errors I had, and achieve useful collaborative filtering results! _____`1.` Create a matrix where the users are the rows, the movies are the columns, and the ratings exist in each cell, or a NaN exists in cells where a user hasn't rated a particular movie. If you get a memory error (like I did), [this link here](https://stackoverflow.com/questions/39648991/pandas-dataframe-pivot-memory-error) might help you!
###Code
# Create user-by-item matrix
user_by_movie = pd.pivot_table(user_items, values = 'rating', index=['user_id'], columns=['movie_id'], fill_value=np.nan)
###Output
_____no_output_____
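If the dense pivot ever exhausts memory again, the `csr_matrix` imported at the top of the notebook points to a sparse alternative. A minimal sketch, assuming the category-code bookkeeping shown below (an aside, not part of the required solution):
```
# Build a sparse user-by-movie ratings matrix instead of a dense pivot
users = user_items['user_id'].astype('category')
movs = user_items['movie_id'].astype('category')
sparse_ratings = csr_matrix(
    (user_items['rating'], (users.cat.codes, movs.cat.codes)),
    shape=(users.cat.categories.size, movs.cat.categories.size),
)
# users.cat.categories / movs.cat.categories map row/column positions back to ids;
# note that csr_matrix sums duplicate (user, movie) entries rather than taking the max
```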
###Markdown
Check your results below to make sure your matrix is ready for the upcoming sections.
###Code
assert movies.shape[0] == user_by_movie.shape[1], "Oh no! Your matrix should have {} columns, and yours has {}!".format(movies.shape[0], user_by_movie.shape[1])
assert reviews.user_id.nunique() == user_by_movie.shape[0], "Oh no! Your matrix should have {} rows, and yours has {}!".format(reviews.user_id.nunique(), user_by_movie.shape[0])
print("Looks like you are all set! Proceed!")
HTML('<img src="images/greatjob.webp">')
user_by_movie.head()
user_by_movie.loc[1].name
###Output
_____no_output_____
###Markdown
`2.` Now that you have a matrix of users by movies, use this matrix to create a dictionary where the key is each user and the value is an array of the movies each user has rated.
###Code
# Create a dictionary with users and corresponding movies seen
def movies_watched(user_id):
'''
INPUT:
user_id - the user_id of an individual as int
OUTPUT:
movies - an array of movies the user has watched
'''
movies = user_items[user_items['user_id']==user_id]['movie_id'].tolist()
return movies
def create_user_movie_dict():
'''
INPUT: None
OUTPUT: movies_seen - a dictionary where each key is a user_id and the value is an array of movie_ids
Creates the movies_seen dictionary
'''
# Do things - hint this may take some time, so you might want to set up a progress bar to watch things progress
movies_seen = {}
# Iterate over the index only - iterrows would materialize every row needlessly
for user in user_by_movie.index:
movies_seen[user] = movies_watched(user)
return movies_seen
# Use your function to return dictionary
movies_seen = create_user_movie_dict()
###Output
_____no_output_____
###Markdown
`3.` If a user hasn't rated more than 2 movies, we consider these users "too new". Create a new dictionary that only contains users who have rated more than 2 movies. This dictionary will be used for all the final steps of this workbook.
###Code
# Remove individuals who have watched 2 or fewer movies - don't have enough data to make recs
def create_movies_to_analyze(movies_seen, lower_bound=2):
'''
INPUT:
movies_seen - a dictionary where each key is a user_id and the value is an array of movie_ids
lower_bound - (an int) a user must have more movies seen than the lower bound to be added to the movies_to_analyze dictionary
OUTPUT:
movies_to_analyze - a dictionary where each key is a user_id and the value is an array of movie_ids
The movies_seen and movies_to_analyze dictionaries should be the same except that the output dictionary has removed any users with lower_bound or fewer movies rated
'''
# Do things to create updated dictionary
movies_to_analyze = {}
for user, movies in movies_seen.items():
if len(movies)>lower_bound:
movies_to_analyze[user] = movies
return movies_to_analyze
# Use your function to return your updated dictionary
movies_to_analyze = create_movies_to_analyze(movies_seen)
# Run the tests below to check that your movies_to_analyze matches the solution
assert len(movies_to_analyze) == 23512, "Oops! It doesn't look like your dictionary has the right number of individuals."
assert len(movies_to_analyze[2]) == 23, "Oops! User 2 didn't match the number of movies we thought they would have."
assert len(movies_to_analyze[7]) == 3, "Oops! User 7 didn't match the number of movies we thought they would have."
print("If this is all you see, you are good to go!")
###Output
If this is all you see, you are good to go!
###Markdown
Calculating User Similarities Now that you have set up the **movies_to_analyze** dictionary, it is time to take a closer look at the similarities between users. Below is the pseudocode for how I thought about determining the similarity between users:```for user1 in movies_to_analyze for user2 in movies_to_analyze see how many movies match between the two users if more than two movies in common pull the overlapping movies compute the distance/similarity metric between ratings on the same movies for the two users store the users and the distance metric```However, this took a very long time to run, and other methods of performing these operations did not fit in the workspace memory!Therefore, your task for this question is to look at a few specific examples of the correlation between ratings given by two users. For this question, suppose you want to compute the [correlation](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.corr.html) between users.`4.` Using the **movies_to_analyze** dictionary and **user_by_movie** dataframe, create a function that computes the correlation between the ratings of similar movies for two users. Then use your function to compare your results to ours using the tests below.
###Code
def compute_correlation(user1, user2):
'''
INPUT
user1 - int user_id
user2 - int user_id
OUTPUT
the correlation between the matching ratings between the two users
'''
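# Pull the movies each user has rated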
movies1 = movies_to_analyze[user1]
movies2 = movies_to_analyze[user2]
# Find Similar Movies
sim_movs = np.intersect1d(movies1, movies2, assume_unique=True)
# Calculate correlation between the users
df = user_by_movie.loc[(user1, user2), sim_movs]
corr = df.transpose().corr().iloc[0,1]
return corr #return the correlation
# Read in solution correlations - this will take some time to read in
import pickle
corrs_import = pickle.load(open("corrs.p", "rb"))
df_corrs = pd.DataFrame(corrs_import)
df_corrs.columns = ['user1', 'user2', 'movie_corr']
# Test your function against the solution
assert compute_correlation(2,2) == df_corrs.query("user1 == 2 and user2 == 2")['movie_corr'][0], "Oops! The correlation between a user and itself should be 1.0."
assert round(compute_correlation(2,66), 2) == round(df_corrs.query("user1 == 2 and user2 == 66")['movie_corr'][1], 2), "Oops! The correlation between user 2 and 66 should be about 0.76."
assert np.isnan(compute_correlation(2,104)) == np.isnan(df_corrs.query("user1 == 2 and user2 == 104")['movie_corr'][4]), "Oops! The correlation between user 2 and 104 should be a NaN."
print("If this is all you see, then it looks like your function passed all of our tests!")
###Output
If this is all you see, then it looks like your function passed all of our tests!
###Markdown
Why the NaNs? If the function you wrote passed all of the tests, then you have correctly set up your function to calculate the correlation between any two users. The **df_corrs** dataframe created in the cell leading up to the tests holds combinations of users along with their corresponding correlation. `5.` One question remains: why are we still obtaining **NaN** values? Look at the head of the dataframe below: users 2 and 104 have a correlation of **NaN**. Why?
###Code
df_corrs.head()
###Output
_____no_output_____
###Markdown
Leave your thoughts here about why the NaN exists, and use the cells below to validate your thoughts. A correlation comes out NaN whenever one user's ratings on the shared movies have zero spread, since a standard deviation of 0 makes the denominator of the correlation formula 0. These NaNs ultimately make the correlation coefficient a less-than-optimal measure of similarity between two users.
###Code
# Which movies did both user 2 and user 104 see?
print(movies_to_analyze[2])
print(movies_to_analyze[104])
# What were the ratings for each user on those movies?
###Output
_____no_output_____
###Markdown
`6.` Because the correlation coefficient proved to be less than optimal for relating user ratings to one another, we could instead calculate the euclidean distance between the ratings. I found [this post](https://stackoverflow.com/questions/1401712/how-can-the-euclidean-distance-be-calculated-with-numpy) particularly helpful when I was setting up my function. This function should be very similar to your previous function. When you feel confident with your function, test it against our results.
###Code
m1 = movies_to_analyze[2]
m2 = movies_to_analyze[2]
sim_movs = np.intersect1d(m1, m2, assume_unique=True)
df = user_by_movie.loc[(2,2), sim_movs]
df = df.transpose()
dist = np.linalg.norm(df[2]-df[2])
print(dist)
def compute_euclidean_dist(user1, user2):
'''
INPUT
user1 - int user_id
user2 - int user_id
OUTPUT
the euclidean distance between user1 and user2
'''
movies1 = movies_to_analyze[user1]
movies2 = movies_to_analyze[user2]
# Find Similar Movies
sim_movs = np.intersect1d(movies1, movies2, assume_unique=True)
# Calculate correlation between the users
df = user_by_movie.loc[(user1, user2), sim_movs]
df = df.transpose()
dist = np.linalg.norm(df[user1]-df[user2])
return dist #return the euclidean distance
# Read in solution euclidean distances - this will take some time to read in
# df_dists = pickle.load(open("dists.p", "rb"))
df_dists = pd.read_pickle(open('dists.p', 'rb'), compression=None)
# Test your function against the solution
assert compute_euclidean_dist(2,2) == df_dists.query("user1 == 2 and user2 == 2")['eucl_dist'][0], "Oops! The distance between a user and itself should be 0.0."
assert round(compute_euclidean_dist(2,66), 2) == round(df_dists.query("user1 == 2 and user2 == 66")['eucl_dist'][1], 2), "Oops! The distance between user 2 and 66 should be about 2.24."
assert np.isnan(compute_euclidean_dist(2,104)) == np.isnan(df_dists.query("user1 == 2 and user2 == 104")['eucl_dist'][4]), "Oops! The distance between user 2 and 104 should be 2."
print("If this is all you see, then it looks like your function passed all of our tests!")
###Output
If this is all you see, then it looks like your function passed all of our tests!
###Markdown
Using the Nearest Neighbors to Make Recommendations In the previous questions, you read in **df_corrs** and **df_dists**. Therefore, you have a measure of distance and similarity for each user to every other user. These dataframes hold every possible combination of users, as well as the corresponding correlation or euclidean distance, respectively.Because of the **NaN** values that exist within **df_corrs**, we will proceed using **df_dists**. You will want to find the users that are 'nearest' each user. Then you will want to find the movies the closest neighbors have liked to recommend to each user.I made use of the following objects:* df_dists (to obtain the neighbors)* user_items (to obtain the movies the neighbors and users have rated)* movies (to obtain the names of the movies)`7.` Complete the functions below, which allow you to find the recommendations for any user. There are five functions which you will need:* **find_closest_neighbors** - returns a list of user_ids from closest neighbor to farthest neighbor using euclidean distance* **movies_liked** - returns an array of movie_ids the user has rated above the "like" threshold* **movie_names** - takes the output of movies_liked and returns a list of movie names associated with the movie_ids* **make_recommendations** - takes a user id and goes through the closest neighbors to return a list of movie names as recommendations* **all_recommendations** - loops through every user and returns a dictionary with the key as a user_id and the value as a list of movie recommendations
###Code
df_dists.head()
user_by_movie.shape[0]
user_by_movie[53967]
def find_closest_neighbors(user):
'''
INPUT:
user - (int) the user_id of the individual you want to find the closest users
OUTPUT:
closest_neighbors - an array of the id's of the users sorted from closest to farthest away
'''
# I treated ties as arbitrary and just kept whichever was easiest to keep using the head method
# You might choose to do something less hand wavy - order the neighbors
u_dist = df_dists[df_dists['user1'] == user]
# Sort ascending - a smaller euclidean distance means a closer neighbor
u_dist = u_dist.sort_values(by='eucl_dist', ascending=True)
# Drop the user from their own neighbor list
closest_neighbors = u_dist[u_dist['user2'] != user]['user2'].values
return closest_neighbors
def movies_liked(user_id, min_rating=7):
'''
INPUT:
user_id - the user_id of an individual as int
min_rating - ratings above this value count as a "like"; ratings at or below count as a "dislike"
OUTPUT:
movies_liked - an array of movies the user has watched and liked
'''
movies = movies_to_analyze[user_id]
ratings = user_by_movie.loc[user_id, movies]
# Keep only the ratings above min_rating
ratings = ratings[ratings > min_rating]
movies_liked = ratings.index.values
return movies_liked
def movie_names(movie_ids):
'''
INPUT
movie_ids - a list of movie_ids
OUTPUT
movies - a list of movie names associated with the movie_ids
'''
movie_lst = movies[movies['movie_id'].isin(movie_ids)]['movie'].values
return movie_lst
def make_recommendations(user, num_recs=10):
'''
INPUT:
user - (int) a user_id of the individual you want to make recommendations for
num_recs - (int) number of movies to return
OUTPUT:
recommendations - a list of movies - if there are "num_recs" recommendations return this many
otherwise return the total number of recommendations available for the "user"
which may just be an empty list
'''
# Movies the user has already seen - these should not be recommended again
movies_seen = movies_watched(user)
recs = np.array([], dtype=int)
# Walk the neighbors from closest to farthest, collecting movies they liked
# (assumes every neighbor appears in movies_to_analyze, since df_dists was built from it)
for neighbor in find_closest_neighbors(user):
# Keep only liked movies the user has not seen yet
new_recs = np.setdiff1d(movies_liked(neighbor), movies_seen)
recs = np.unique(np.concatenate([recs, new_recs]))
if len(recs) >= num_recs:
break
# Translate movie ids into movie names, capped at num_recs
recommendations = movie_names(recs[:num_recs])
return recommendations
def all_recommendations(num_recs=10):
'''
INPUT
num_recs (int) the (max) number of recommendations for each user
OUTPUT
all_recs - a dictionary where each key is a user_id and the value is an array of recommended movie titles
'''
# Apply make recs for each user -
# hint this may take some time, so you might want to set up a progress bar to watch things progress
all_recs = {}
for user in user_by_movie.index.to_list():
all_recs[user] = make_recommendations(user, num_recs=num_recs)
return all_recs
all_recs = all_recommendations(10)
(all_recs[2] == make_recommendations(2)).all()
# This may take some time - it loads our solution dictionary so you can compare results
all_recs_sol = pickle.load(open("all_recs.p", "rb"))
assert (all_recs[2] == make_recommendations(2)).all(), "Oops! Your recommendations for user 2 didn't match ours."
assert (all_recs[26] == make_recommendations(26)).all(), "Oops! It actually wasn't possible to make any recommendations for user 26."
assert (all_recs[1503] == make_recommendations(1503)).all(), "Oops! Looks like your solution for user 1503 didn't match ours."
print("If you made it here, you now have recommendations for many users using collaborative filtering!")
HTML('<img src="images/greatjob.webp">')
###Output
If you made it here, you now have recommendations for many users using collaborative filtering!
###Markdown
Now What? If you made it this far, you have successfully implemented a solution to making recommendations using collaborative filtering. `8.` Let's do a quick recap of the steps taken to obtain recommendations using collaborative filtering.
###Code
# Check your understanding of the results by correctly filling in the dictionary below
a = "pearson's correlation and spearman's correlation"
b = 'item based collaborative filtering'
c = "there were too many ratings to get a stable metric"
d = 'user based collaborative filtering'
e = "euclidean distance and pearson's correlation coefficient"
f = "manhatten distance and euclidean distance"
g = "spearman's correlation and euclidean distance"
h = "the spread in some ratings was zero"
i = 'content based recommendation'
sol_dict = {
'The type of recommendation system implemented here was a ...': d,
'The two methods used to estimate user similarity were: ': e,
'There was an issue with using the correlation coefficient. What was it?': h
}
t.test_recs(sol_dict)
###Output
_____no_output_____
###Markdown
Additionally, let's take a closer look at some of the results. There are three objects that you read in to check your results against the solution:* **df_corrs** - a dataframe of user1, user2, pearson correlation between the two users* **df_dists** - a dataframe of user1, user2, euclidean distance between the two users* **all_recs_sol** - a dictionary of all recommendations (key = user, value = list of recommendations)Looping your results from the correlation and euclidean distance functions through every pair of users could have been used to create the first two objects (I don't recommend doing this given how long it will take). `9.` Use these three objects along with the cells below to correctly fill in the dictionary below and complete this notebook!
###Code
a = 567
b = 1503
c = 1319
d = 1325
e = 2526710
f = 0
g = 'Use another method to make recommendations - content based, knowledge based, or model based collaborative filtering'
sol_dict2 = {
'For how many pairs of users were we not able to obtain a measure of similarity using correlation?': # letter here,
'For how many pairs of users were we not able to obtain a measure of similarity using euclidean distance?': # letter here,
'For how many users were we unable to make any recommendations for using collaborative filtering?': # letter here,
'For how many users were we unable to make 10 recommendations for using collaborative filtering?': # letter here,
'What might be a way for us to get 10 recommendations for every user?': # letter here
}
t.test_recs2(sol_dict2)
#Use the below cells for any work you need to do!
# Users without recs
# NaN correlation values
# NaN euclidean distance values
# Users with less than 10 recs
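# Hedged sketch for the checks above - it assumes df_corrs, df_dists, and
# all_recs_sol are loaded as described in the markdown, with the column names
# used in the earlier test cells
print(df_corrs['movie_corr'].isnull().sum())  # pairs with no correlation measure
print(df_dists['eucl_dist'].isnull().sum())  # pairs with no euclidean distance
print(sum(len(v) == 0 for v in all_recs_sol.values()))  # users with no recs at all
print(sum(len(v) < 10 for v in all_recs_sol.values()))  # users with fewer than 10 recs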
###Output
_____no_output_____
###Markdown
Recommendations with MovieTweetings: Collaborative Filtering One of the most popular methods for making recommendations is **collaborative filtering**. In collaborative filtering, you are using the collaboration of user-item ratings to assist in making new recommendations. There are two main methods of performing collaborative filtering:1. **Neighborhood-Based Collaborative Filtering**, which is based on the idea that we can either correlate items that are similar to provide recommendations or we can correlate users to one another to provide recommendations.2. **Model Based Collaborative Filtering**, which is based on the idea that we can use machine learning and other mathematical models to understand the relationships that exist amongst items and users to predict ratings and provide recommendations.In this notebook, you will be working on performing **neighborhood-based collaborative filtering**. There are two main methods for performing neighborhood-based collaborative filtering:1. **User-based collaborative filtering:** In this type of recommendation, users related to the user you would like to make recommendations for are used to create a recommendation.2. **Item-based collaborative filtering:** In this type of recommendation, first you need to find the items that are most related to each other item (based on similar ratings). Then you can use the ratings of an individual on those similar items to understand if a user will like the new item.In this notebook you will be implementing **user-based collaborative filtering**. However, it is easy to extend this approach to make recommendations using **item-based collaborative filtering**. First, let's read in our data and necessary libraries.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tests as t
import progressbar
from scipy.sparse import csr_matrix
from IPython.display import HTML
%matplotlib inline
# Read in the datasets
movies = pd.read_csv('movies_clean.csv')
reviews = pd.read_csv('reviews_clean.csv')
del movies['Unnamed: 0']
del reviews['Unnamed: 0']
print(reviews.head())
###Output
_____no_output_____
###Markdown
Measures of Similarity When using **neighborhood** based collaborative filtering, it is important to understand how to measure the similarity of users or items to one another. There are a number of ways in which we might measure the similarity between two vectors (which might be two users or two items). In this notebook, we will look specifically at two measures used to compare vectors:* **Pearson's correlation coefficient** Pearson's correlation coefficient is a measure of the strength and direction of a linear relationship. The value for this coefficient is a value between -1 and 1 where -1 indicates a strong, negative linear relationship and 1 indicates a strong, positive linear relationship. If we have two vectors **x** and **y**, we can define the correlation between the vectors as:$$CORR(x, y) = \frac{\text{COV}(x, y)}{\text{STDEV}(x)\text{ }\text{STDEV}(y)}$$where $$\text{STDEV}(x) = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2}$$and $$\text{COV}(x, y) = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})$$where n is the length of the vectors, which must be the same for both x and y, and $\bar{x}$ is the mean of the observations in the vector. We can use the correlation coefficient to indicate how alike two vectors are to one another, where the closer the coefficient is to 1, the more alike the vectors are. There are some potential downsides to using this metric as a measure of similarity. You will see some of these throughout this workbook.* **Euclidean distance** Euclidean distance is a measure of the straight-line distance from one vector to another. Because this is a measure of distance, larger values indicate that two vectors are different from one another (unlike Pearson's correlation coefficient). Specifically, the euclidean distance between two vectors **x** and **y** is measured as:$$ \text{EUCL}(x, y) = \sqrt{\sum_{i=1}^{n}(x_i - y_i)^2}$$Different from the correlation coefficient, no scaling is performed in the denominator. Therefore, you need to make sure all of your data are on the same scale when using this metric.**Note:** Because measuring similarity is often based on looking at the distance between vectors, it is important in these cases to scale your data or to have all data be in the same scale. If some measures are on a 5 point scale, while others are on a 100 point scale, you are likely to have non-optimal results due to the difference in variability of your features. Measures like Pearson's and Spearman's correlation coefficients are unit agnostic, which means it is not necessary to scale for these measures. However, many measures used to measure similarity (like euclidean or manhattan distances) are not unit agnostic.In this case, we will not need to scale the data because all ratings are on a 10 point scale, but it is always something to keep in mind!------------ User-Item Matrix In order to calculate the similarities, it is common to put values in a matrix. In this matrix, users are identified by each row, and items are represented by columns. ![alt text](images/userxitem.png "User Item Matrix") In the above matrix, you can see that **User 1** and **User 2** both used **Item 1**, and **User 2**, **User 3**, and **User 4** all used **Item 2**. However, there are also a large number of missing values in the matrix for users who haven't used a particular item. 
A matrix with many missing values (like the one above) is considered **sparse**. Our first goal for this notebook is to create the above matrix with the **reviews** dataset. However, instead of 1 values in each cell, you should have the actual rating. The users will indicate the rows, and the movies will exist across the columns. To create the user-item matrix, we only need the first three columns of the **reviews** dataframe, which you can see by running the cell below.
###Code
user_items = reviews[['user_id', 'movie_id', 'rating']]
user_items.head()
###Output
_____no_output_____
###Markdown
Creating the User-Item Matrix In order to create the user-item matrix (like the one above), I personally started by using a [pivot table](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.pivot_table.html). However, I quickly ran into a memory error (a common theme throughout this notebook). I will help you navigate around many of the errors I had, and achieve useful collaborative filtering results! _____`1.` Create a matrix where the users are the rows, the movies are the columns, and the ratings exist in each cell, or a NaN exists in cells where a user hasn't rated a particular movie. If you get a memory error (like I did), [this link here](https://stackoverflow.com/questions/39648991/pandas-dataframe-pivot-memory-error) might help you!
###Code
# Create user-by-item matrix
###Output
_____no_output_____
###Markdown
Check your results below to make sure your matrix is ready for the upcoming sections.
###Code
assert movies.shape[0] == user_by_movie.shape[1], "Oh no! Your matrix should have {} columns, and yours has {}!".format(movies.shape[0], user_by_movie.shape[1])
assert reviews.user_id.nunique() == user_by_movie.shape[0], "Oh no! Your matrix should have {} rows, and yours has {}!".format(reviews.user_id.nunique(), user_by_movie.shape[0])
print("Looks like you are all set! Proceed!")
HTML('<img src="images/greatjob.webp">')
###Output
_____no_output_____
###Markdown
`2.` Now that you have a matrix of users by movies, use this matrix to create a dictionary where the key is each user and the value is an array of the movies each user has rated.
###Code
# Create a dictionary with users and corresponding movies seen
def movies_watched(user_id):
'''
INPUT:
user_id - the user_id of an individual as int
OUTPUT:
movies - an array of movies the user has watched
'''
return movies
def create_user_movie_dict():
'''
INPUT: None
OUTPUT: movies_seen - a dictionary where each key is a user_id and the value is an array of movie_ids
Creates the movies_seen dictionary
'''
# Do things - hint this may take some time, so you might want to set up a progress bar to watch things progress
return movies_seen
# Use your function to return dictionary
movies_seen = create_user_movie_dict()
###Output
_____no_output_____
###Markdown
`3.` If a user hasn't rated more than 2 movies, we consider these users "too new". Create a new dictionary that only contains users who have rated more than 2 movies. This dictionary will be used for all the final steps of this workbook.
###Code
# Remove individuals who have watched 2 or fewer movies - don't have enough data to make recs
def create_movies_to_analyze(movies_seen, lower_bound=2):
'''
INPUT:
movies_seen - a dictionary where each key is a user_id and the value is an array of movie_ids
lower_bound - (an int) a user must have more movies seen than the lower bound to be added to the movies_to_analyze dictionary
OUTPUT:
movies_to_analyze - a dictionary where each key is a user_id and the value is an array of movie_ids
The movies_seen and movies_to_analyze dictionaries should be the same except that the output dictionary has removed
'''
# Do things to create updated dictionary
return movies_to_analyze
# Use your function to return your updated dictionary
movies_to_analyze = create_movies_to_analyze(movies_seen)
# Run the tests below to check that your movies_to_analyze matches the solution
assert len(movies_to_analyze) == 23512, "Oops! It doesn't look like your dictionary has the right number of individuals."
assert len(movies_to_analyze[2]) == 23, "Oops! User 2 didn't match the number of movies we thought they would have."
assert len(movies_to_analyze[7]) == 3, "Oops! User 7 didn't match the number of movies we thought they would have."
print("If this is all you see, you are good to go!")
###Output
_____no_output_____
###Markdown
Calculating User SimilaritiesNow that you have set up the **movies_to_analyze** dictionary, it is time to take a closer look at the similarities between users. Below the sudo code for how I thought about determining the similarity between users:```for user1 in movies_to_analyze for user2 in movies_to_analyze see how many movies match between the two users if more than two movies in common pull the overlapping movies compute the distance/similarity metric between ratings on the same movies for the two users store the users and the distance metric```However, this took a very long time to run, and other methods of performing these operations did not fit on the workspace memory!Therefore, your task for this question is to look at a few specific examples of the correlation between ratings given by two users. For this question consider you want to compute the [correlation](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.corr.html) between users.`4.` Using the **movies_to_analyze** dictionary and **user_by_movie** dataframe, create a function that computes the correlation between the ratings of similar movies for two users. Then use your function to compare your results to ours using the tests below.
###Code
def compute_correlation(user1, user2):
'''
INPUT
user1 - int user_id
user2 - int user_id
OUTPUT
the correlation between the matching ratings between the two users
'''
return corr #return the correlation
# Read in solution correlations - this will take some time to read in
import pickle
corrs_import = pickle.load(open("corrs.p", "rb"))
df_corrs = pd.DataFrame(corrs_import)
df_corrs.columns = ['user1', 'user2', 'movie_corr']
# Test your function against the solution
assert compute_correlation(2,2) == df_corrs.query("user1 == 2 and user2 == 2")['movie_corr'][0], "Oops! The correlation between a user and itself should be 1.0."
assert round(compute_correlation(2,66), 2) == round(df_corrs.query("user1 == 2 and user2 == 66")['movie_corr'][1], 2), "Oops! The correlation between user 2 and 66 should be about 0.76."
assert np.isnan(compute_correlation(2,104)) == np.isnan(df_corrs.query("user1 == 2 and user2 == 104")['movie_corr'][4]), "Oops! The correlation between user 2 and 104 should be a NaN."
print("If this is all you see, then it looks like your function passed all of our tests!")
###Output
_____no_output_____
###Markdown
Why the NaN's?If the function you wrote passed all of the tests, then you have correctly set up your function to calculate the correlation between any two users. The **df_corrs** dataframe created in the cell leading up to the tests holds combinations of users along with their corresponding correlation. `5.` But one question is why are we still obtaining **NaN** values. Look at the header below for users 2 and 104, they have a correlation of **NaN**, why?
###Code
df_corrs.head()
###Output
_____no_output_____
###Markdown
Leave your thoughts here about why the NaN exists, and use the cells below to validate your thoughts. These Nan's ultimately make the correlation coefficient a less than optimal measure of similarity between two users.
###Code
# Which movies did both user 2 and user 4 see?
# What were the ratings for each user on those movies?
###Output
_____no_output_____
###Markdown
`6.` Because the correlation coefficient proved to be less than optimal for relating user ratings to one another, we could instead calculate the euclidean distance between the ratings. I found [this post](https://stackoverflow.com/questions/1401712/how-can-the-euclidean-distance-be-calculated-with-numpy) particularly helpful when I was setting up my function. This function should be very similar to your previous function. When you feel confident with your function, test it against our results.
###Code
def compute_euclidean_dist(user1, user2):
'''
INPUT
user1 - int user_id
user2 - int user_id
OUTPUT
the euclidean distance between user1 and user2
'''
return dist #return the euclidean distance
# Read in solution euclidean distances - this will take some time to read in
df_dists = pickle.load(open("dists.p", "rb"))
# Test your function against the solution
assert compute_euclidean_dist(2,2) == df_dists.query("user1 == 2 and user2 == 2")['eucl_dist'][0], "Oops! The distance between a user and itself should be 0.0."
assert round(compute_euclidean_dist(2,66), 2) == round(df_dists.query("user1 == 2 and user2 == 66")['eucl_dist'][1], 2), "Oops! The distance between user 2 and 66 should be about 2.24."
assert np.isnan(compute_euclidean_dist(2,104)) == np.isnan(df_dists.query("user1 == 2 and user2 == 104")['eucl_dist'][4]), "Oops! The distance between user 2 and 104 should be 2."
print("If this is all you see, then it looks like your function passed all of our tests!")
###Output
_____no_output_____
###Markdown
Using the Nearest Neighbors to Make RecommendationsIn the previous questions, you read in **df_corrs** and **df_dists**. Therefore, you have a measure of distance and similarity for each user to every other user. These dataframes hold every possible combination of users, as well as the corresponding correlation or euclidean distance, respectively.Because of the **NaN** values that exist within **df_corrs**, we will proceed using **df_dists**. You will want to find the users are 'nearest' each user. Then you will want to find the movies the closest neighbors have liked to recommend to each user.I made use of the following objects:* df_dists (to obtain the neighbors)* user_items (to obtain the movies the neighbors and users have rated)* movies (to obtain the names of the movies)`7.` Complete the functions below, which allow you to find the recommendations for any user. There are three functions which you will need:* **find_closest_neighbors** - this returns a list of user_ids from closest neighbor to farthest neighbor using euclidean distance* **movies_liked** - returns an array of movie_ids* **movie_names** - takes the output of movies_liked and returns a list of movie names associated with the movie_ids* **make_recommendations** - takes a user id and goes through closest neighbors to return a list of movie names as recommendations* **all_recommendations** = loops through every user and returns a dictionary of with the key as a user_id and the value as a list of movie recommendations
###Code
def find_closest_neighbors(user):
'''
INPUT:
user - (int) the user_id of the individual you want to find the closest users
OUTPUT:
closest_neighbors - an array of the id's of the users sorted from closest to farthest away
'''
# I treated ties as arbitrary and just kept whichever was easiest to keep using the head method
# You might choose to do something less hand wavy - order the neighbors
return closest_neighbors
def movies_liked(user_id, min_rating=7):
'''
INPUT:
user_id - the user_id of an individual as int
min_rating - the minimum rating considered while still a movie is still a "like" and not a "dislike"
OUTPUT:
movies_liked - an array of movies the user has watched and liked
'''
return movies_liked
def movie_names(movie_ids):
'''
INPUT
movie_ids - a list of movie_ids
OUTPUT
movies - a list of movie names associated with the movie_ids
'''
return movie_lst
def make_recommendations(user, num_recs=10):
'''
INPUT:
user - (int) a user_id of the individual you want to make recommendations for
num_recs - (int) number of movies to return
OUTPUT:
recommendations - a list of movies - if there are "num_recs" recommendations return this many
otherwise return the total number of recommendations available for the "user"
which may just be an empty list
'''
return recommendations
def all_recommendations(num_recs=10):
'''
INPUT
num_recs (int) the (max) number of recommendations for each user
OUTPUT
all_recs - a dictionary where each key is a user_id and the value is an array of recommended movie titles
'''
# Apply make recs for each user -
# hint this may take some time, so you might want to set up a progress bar to watch things progress
return all_recs
all_recs = all_recommendations(10)
# This make some time - it loads our solution dictionary so you can compare results
all_recs_sol = pickle.load(open("all_recs.p", "rb"))
assert all_recs[2] == make_recommendations(2), "Oops! Your recommendations for user 2 didn't match ours."
assert all_recs[26] == make_recommendations(26), "Oops! It actually wasn't possible to make any recommendations for user 26."
assert all_recs[1503] == make_recommendations(1503), "Oops! Looks like your solution for user 1503 didn't match ours."
print("If you made it here, you now have recommendations for many users using collaborative filtering!")
HTML('<img src="images/greatjob.webp">')
###Output
_____no_output_____
###Markdown
Now What?If you made it this far, you have successfully implemented a solution to making recommendations using collaborative filtering. `8.` Let's do a quick recap of the steps taken to obtain recommendations using collaborative filtering.
###Code
# Check your understanding of the results by correctly filling in the dictionary below
a = "pearson's correlation and spearman's correlation"
b = 'item based collaborative filtering'
c = "there were too many ratings to get a stable metric"
d = 'user based collaborative filtering'
e = "euclidean distance and pearson's correlation coefficient"
f = "manhatten distance and euclidean distance"
g = "spearman's correlation and euclidean distance"
h = "the spread in some ratings was zero"
i = 'content based recommendation'
sol_dict = {
'The type of recommendation system implemented here was a ...': # letter here,
'The two methods used to estimate user similarity were: ': # letter here,
'There was an issue with using the correlation coefficient. What was it?': # letter here
}
t.test_recs(sol_dict)
###Output
_____no_output_____
###Markdown
Additionally, let's take a closer look at some of the results. There are three objects that you read in to check your results against the solution:* **df_corrs** - a dataframe of user1, user2, pearson correlation between the two users* **df_dists** - a dataframe of user1, user2, euclidean distance between the two users* **all_recs_sol** - a dictionary of all recommendations (key = user, value = list of recommendations)Looping your results from the correlation and euclidean distance functions through every pair of users could have been used to create the first two objects (I don't recommend doing this given how long it will take). `9.`Use these three objects along with the cells below to correctly fill in the dictionary below and complete this notebook!
###Code
a = 567
b = 1503
c = 1319
d = 1325
e = 2526710
f = 0
g = 'Use another method to make recommendations - content based, knowledge based, or model based collaborative filtering'
sol_dict2 = {
'For how many pairs of users were we not able to obtain a measure of similarity using correlation?': # letter here,
'For how many pairs of users were we not able to obtain a measure of similarity using euclidean distance?': # letter here,
'For how many users were we unable to make any recommendations for using collaborative filtering?': # letter here,
'For how many users were we unable to make 10 recommendations for using collaborative filtering?': # letter here,
'What might be a way for us to get 10 recommendations for every user?': # letter here
}
t.test_recs2(sol_dict2)
#Use the below cells for any work you need to do!
# Users without recs
# NaN correlation values
# NaN euclidean distance values
# Users with less than 10 recs
###Output
_____no_output_____
###Markdown
Recommendations with MovieTweetings: Collaborative FilteringOne of the most popular methods for making recommendations is **collaborative filtering**. In collaborative filtering, you are using the collaboration of user-item recommendations to assist in making new recommendations. There are two main methods of performing collaborative filtering:1. **Neighborhood-Based Collaborative Filtering**, which is based on the idea that we can either correlate items that are similar to provide recommendations or we can correlate users to one another to provide recommendations.2. **Model Based Collaborative Filtering**, which is based on the idea that we can use machine learning and other mathematical models to understand the relationships that exist amongst items and users to predict ratings and provide ratings.In this notebook, you will be working on performing **neighborhood-based collaborative filtering**. There are two main methods for performing collaborative filtering:1. **User-based collaborative filtering:** In this type of recommendation, users related to the user you would like to make recommendations for are used to create a recommendation.2. **Item-based collaborative filtering:** In this type of recommendation, first you need to find the items that are most related to each other item (based on similar ratings). Then you can use the ratings of an individual on those similar items to understand if a user will like the new item.In this notebook you will be implementing **user-based collaborative filtering**. However, it is easy to extend this approach to make recommendations using **item-based collaborative filtering**. First, let's read in our data and necessary libraries.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tests as t
import progressbar
from scipy.sparse import csr_matrix
from IPython.display import HTML
%matplotlib inline
# Read in the datasets
movies = pd.read_csv('movies_clean.csv')
reviews = pd.read_csv('reviews_clean.csv')
del movies['Unnamed: 0']
del reviews['Unnamed: 0']
print(reviews.head())
###Output
_____no_output_____
###Markdown
Measures of SimilarityWhen using **neighborhood** based collaborative filtering, it is important to understand how to measure the similarity of users or items to one another. There are a number of ways in which we might measure the similarity between two vectors (which might be two users or two items). In this notebook, we will look specifically at two measures used to compare vectors:* **Pearson's correlation coefficient**Pearson's correlation coefficient is a measure of the strength and direction of a linear relationship. The value for this coefficient is a value between -1 and 1 where -1 indicates a strong, negative linear relationship and 1 indicates a strong, positive linear relationship. If we have two vectors **x** and **y**, we can define the correlation between the vectors as:$$CORR(x, y) = \frac{\text{COV}(x, y)}{\text{STDEV}(x)\text{ }\text{STDEV}(y)}$$where $$\text{STDEV}(x) = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2}$$and $$\text{COV}(x, y) = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})$$where n is the length of the vector, which must be the same for both x and y and $\bar{x}$ is the mean of the observations in the vector. We can use the correlation coefficient to indicate how alike two vectors are to one another, where the closer to 1 the coefficient, the more alike the vectors are to one another. There are some potential downsides to using this metric as a measure of similarity. You will see some of these throughout this workbook.* **Euclidean distance**Euclidean distance is a measure of the straightline distance from one vector to another. Because this is a measure of distance, larger values are an indication that two vectors are different from one another (which is different than Pearson's correlation coefficient).Specifically, the euclidean distance between two vectors **x** and **y** is measured as:$$ \text{EUCL}(x, y) = \sqrt{\sum_{i=1}^{n}(x_i - y_i)^2}$$Different from the correlation coefficient, no scaling is performed in the denominator. Therefore, you need to make sure all of your data are on the same scale when using this metric.**Note:** Because measuring similarity is often based on looking at the distance between vectors, it is important in these cases to scale your data or to have all data be in the same scale. If some measures are on a 5 point scale, while others are on a 100 point scale, you are likely to have non-optimal results due to the difference in variability of your features. Measures like Pearson and Spearman's correlation coefficients are unit agnostic, which means it is not necessary to scale for these measures. However, many measures used to measure similarity (like euclidean or manhatten distances) are not unit agnostic.In this case, we will not need to scale data because they are all on a 10 point scale, but it is always something to keep in mind!------------ User-Item MatrixIn order to calculate the similarities, it is common to put values in a matrix. In this matrix, users are identified by each row, and items are represented by columns. ![alt text](images/userxitem.png "User Item Matrix") In the above matrix, you can see that **User 1** and **User 2** both used **Item 1**, and **User 2**, **User 3**, and **User 4** all used **Item 2**. However, there are also a large number of missing values in the matrix for users who haven't used a particular item. 
A matrix with many missing values (like the one above) is considered **sparse**.Our first goal for this notebook is to create the above matrix with the **reviews** dataset. However, instead of 1 values in each cell, you should have the actual rating. The users will indicate the rows, and the movies will exist across the columns. To create the user-item matrix, we only need the first three columns of the **reviews** dataframe, which you can see by running the cell below.
###Code
user_items = reviews[['user_id', 'movie_id', 'rating']]
user_items.head()
###Output
_____no_output_____
###Markdown
Ceating the User-Item MatrixIn order to create the user-items matrix (like the one above), I personally started by using a [pivot table](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.pivot_table.html). However, I quickly ran into a memory error (a common theme throughout this notebook). I will help you navigate around many of the errors I had, and acheive useful collaborative filtering results! _____`1.` Create a matrix where the users are the rows, the movies are the columns, and the ratings exist in each cell, or a NaN exists in cells where a user hasn't rated a particular movie. If you get a memory error (like I did), [this link here](https://stackoverflow.com/questions/39648991/pandas-dataframe-pivot-memory-error) might help you!
###Code
# Create user-by-item matrix
###Output
_____no_output_____
###Markdown
Check your results below to make sure your matrix is ready for the upcoming sections.
###Code
assert movies.shape[0] == user_by_movie.shape[1], "Oh no! Your matrix should have {} columns, and yours has {}!".format(movies.shape[0], user_by_movie.shape[1])
assert reviews.user_id.nunique() == user_by_movie.shape[0], "Oh no! Your matrix should have {} rows, and yours has {}!".format(reviews.user_id.nunique(), user_by_movie.shape[0])
print("Looks like you are all set! Proceed!")
HTML('<img src="images/greatjob.webp">')
###Output
_____no_output_____
###Markdown
`2.` Now that you have a matrix of users by movies, use this matrix to create a dictionary where the key is each user and the value is an array of the movies each user has rated.
###Code
# Create a dictionary with users and corresponding movies seen
def movies_watched(user_id):
'''
INPUT:
user_id - the user_id of an individual as int
OUTPUT:
movies - an array of movies the user has watched
'''
return movies
def create_user_movie_dict():
'''
INPUT: None
OUTPUT: movies_seen - a dictionary where each key is a user_id and the value is an array of movie_ids
Creates the movies_seen dictionary
'''
# Do things - hint this may take some time, so you might want to set up a progress bar to watch things progress
return movies_seen
# Use your function to return dictionary
movies_seen = create_user_movie_dict()
###Output
_____no_output_____
###Markdown
`3.` If a user hasn't rated more than 2 movies, we consider these users "too new". Create a new dictionary that only contains users who have rated more than 2 movies. This dictionary will be used for all the final steps of this workbook.
###Code
# Remove individuals who have watched 2 or fewer movies - don't have enough data to make recs
def create_movies_to_analyze(movies_seen, lower_bound=2):
'''
INPUT:
movies_seen - a dictionary where each key is a user_id and the value is an array of movie_ids
lower_bound - (an int) a user must have more movies seen than the lower bound to be added to the movies_to_analyze dictionary
OUTPUT:
movies_to_analyze - a dictionary where each key is a user_id and the value is an array of movie_ids
The movies_seen and movies_to_analyze dictionaries should be the same except that the output dictionary has removed
'''
# Do things to create updated dictionary
return movies_to_analyze
# Use your function to return your updated dictionary
movies_to_analyze = create_movies_to_analyze(movies_seen)
# Run the tests below to check that your movies_to_analyze matches the solution
assert len(movies_to_analyze) == 23512, "Oops! It doesn't look like your dictionary has the right number of individuals."
assert len(movies_to_analyze[2]) == 23, "Oops! User 2 didn't match the number of movies we thought they would have."
assert len(movies_to_analyze[7]) == 3, "Oops! User 7 didn't match the number of movies we thought they would have."
print("If this is all you see, you are good to go!")
###Output
_____no_output_____
###Markdown
Calculating User SimilaritiesNow that you have set up the **movies_to_analyze** dictionary, it is time to take a closer look at the similarities between users. Below the sudo code for how I thought about determining the similarity between users:```for user1 in movies_to_analyze for user2 in movies_to_analyze see how many movies match between the two users if more than two movies in common pull the overlapping movies compute the distance/similarity metric between ratings on the same movies for the two users store the users and the distance metric```However, this took a very long time to run, and other methods of performing these operations did not fit on the workspace memory!Therefore, your task for this question is to look at a few specific examples of the correlation between ratings given by two users. For this question consider you want to compute the [correlation](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.corr.html) between users.`4.` Using the **movies_to_analyze** dictionary and **user_by_movie** dataframe, create a function that computes the correlation between the ratings of similar movies for two users. Then use your function to compare your results to ours using the tests below.
###Code
def compute_correlation(user1, user2):
'''
INPUT
user1 - int user_id
user2 - int user_id
OUTPUT
the correlation between the matching ratings between the two users
'''
return corr #return the correlation
# Read in solution correlations - this will take some time to read in
import pickle
corrs_import = pickle.load(open("corrs.p", "rb"))
df_corrs = pd.DataFrame(corrs_import)
df_corrs.columns = ['user1', 'user2', 'movie_corr']
# Test your function against the solution
assert compute_correlation(2,2) == df_corrs.query("user1 == 2 and user2 == 2")['movie_corr'][0], "Oops! The correlation between a user and itself should be 1.0."
assert round(compute_correlation(2,66), 2) == round(df_corrs.query("user1 == 2 and user2 == 66")['movie_corr'][1], 2), "Oops! The correlation between user 2 and 66 should be about 0.76."
assert np.isnan(compute_correlation(2,104)) == np.isnan(df_corrs.query("user1 == 2 and user2 == 104")['movie_corr'][4]), "Oops! The correlation between user 2 and 104 should be a NaN."
print("If this is all you see, then it looks like your function passed all of our tests!")
###Output
_____no_output_____
###Markdown
Why the NaN's?If the function you wrote passed all of the tests, then you have correctly set up your function to calculate the correlation between any two users. The **df_corrs** dataframe created in the cell leading up to the tests holds combinations of users along with their corresponding correlation. `5.` But one question is why are we still obtaining **NaN** values. Look at the header below for users 2 and 104, they have a correlation of **NaN**, why?
###Code
df_corrs.head()
###Output
_____no_output_____
###Markdown
Leave your thoughts here about why the NaN exists, and use the cells below to validate your thoughts. These Nan's ultimately make the correlation coefficient a less than optimal measure of similarity between two users.
###Code
# Which movies did both user 2 and user 4 see?
# What were the ratings for each user on those movies?
###Output
_____no_output_____
###Markdown
`6.` Because the correlation coefficient proved to be less than optimal for relating user ratings to one another, we could instead calculate the euclidean distance between the ratings. I found [this post](https://stackoverflow.com/questions/1401712/how-can-the-euclidean-distance-be-calculated-with-numpy) particularly helpful when I was setting up my function. This function should be very similar to your previous function. When you feel confident with your function, test it against our results.
###Code
def compute_euclidean_dist(user1, user2):
'''
INPUT
user1 - int user_id
user2 - int user_id
OUTPUT
the euclidean distance between user1 and user2
'''
return dist #return the euclidean distance
# Read in solution euclidean distances - this will take some time to read in
df_dists = pickle.load(open("dists.p", "rb"))
# Test your function against the solution
assert compute_euclidean_dist(2,2) == df_dists.query("user1 == 2 and user2 == 2")['eucl_dist'][0], "Oops! The distance between a user and itself should be 0.0."
assert round(compute_euclidean_dist(2,66), 2) == round(df_dists.query("user1 == 2 and user2 == 66")['eucl_dist'][1], 2), "Oops! The distance between user 2 and 66 should be about 2.24."
assert np.isnan(compute_euclidean_dist(2,104)) == np.isnan(df_dists.query("user1 == 2 and user2 == 104")['eucl_dist'][4]), "Oops! The distance between user 2 and 104 should be 2."
print("If this is all you see, then it looks like your function passed all of our tests!")
###Output
_____no_output_____
###Markdown
Using the Nearest Neighbors to Make RecommendationsIn the previous questions, you read in **df_corrs** and **df_dists**. Therefore, you have a measure of distance and similarity for each user to every other user. These dataframes hold every possible combination of users, as well as the corresponding correlation or euclidean distance, respectively.Because of the **NaN** values that exist within **df_corrs**, we will proceed using **df_dists**. You will want to find the users that are 'nearest' each user. Then you will want to find the movies the closest neighbors have liked to recommend to each user.I made use of the following objects:* df_dists (to obtain the neighbors)* user_items (to obtain the movies the neighbors and users have rated)* movies (to obtain the names of the movies)`7.` Complete the functions below, which allow you to find the recommendations for any user. There are five functions which you will need:* **find_closest_neighbors** - this returns a list of user_ids from closest neighbor to farthest neighbor using euclidean distance* **movies_liked** - returns an array of movie_ids* **movie_names** - takes the output of movies_liked and returns a list of movie names associated with the movie_ids* **make_recommendations** - takes a user id and goes through closest neighbors to return a list of movie names as recommendations* **all_recommendations** = loops through every user and returns a dictionary of with the key as a user_id and the value as a list of movie recommendations
###Code
def find_closest_neighbors(user):
'''
INPUT:
user - (int) the user_id of the individual you want to find the closest users
OUTPUT:
closest_neighbors - an array of the id's of the users sorted from closest to farthest away
'''
# I treated ties as arbitrary and just kept whichever was easiest to keep using the head method
# You might choose to do something less hand wavy - order the neighbors
return closest_neighbors
def movies_liked(user_id, min_rating=7):
'''
INPUT:
user_id - the user_id of an individual as int
min_rating - the minimum rating considered while still a movie is still a "like" and not a "dislike"
OUTPUT:
movies_liked - an array of movies the user has watched and liked
'''
return movies_liked
def movie_names(movie_ids):
'''
INPUT
movie_ids - a list of movie_ids
OUTPUT
movies - a list of movie names associated with the movie_ids
'''
return movie_lst
def make_recommendations(user, num_recs=10):
'''
INPUT:
user - (int) a user_id of the individual you want to make recommendations for
num_recs - (int) number of movies to return
OUTPUT:
recommendations - a list of movies - if there are "num_recs" recommendations return this many
otherwise return the total number of recommendations available for the "user"
which may just be an empty list
'''
return recommendations
def all_recommendations(num_recs=10):
'''
INPUT
num_recs (int) the (max) number of recommendations for each user
OUTPUT
all_recs - a dictionary where each key is a user_id and the value is an array of recommended movie titles
'''
# Apply make recs for each user -
# hint this may take some time, so you might want to set up a progress bar to watch things progress
return all_recs
all_recs = all_recommendations(10)
# This make some time - it loads our solution dictionary so you can compare results
all_recs_sol = pickle.load(open("all_recs.p", "rb"))
assert all_recs[2] == make_recommendations(2), "Oops! Your recommendations for user 2 didn't match ours."
assert all_recs[26] == make_recommendations(26), "Oops! It actually wasn't possible to make any recommendations for user 26."
assert all_recs[1503] == make_recommendations(1503), "Oops! Looks like your solution for user 1503 didn't match ours."
print("If you made it here, you now have recommendations for many users using collaborative filtering!")
HTML('<img src="images/greatjob.webp">')
###Output
_____no_output_____
###Markdown
Now What?If you made it this far, you have successfully implemented a solution to making recommendations using collaborative filtering. `8.` Let's do a quick recap of the steps taken to obtain recommendations using collaborative filtering.
###Code
# Check your understanding of the results by correctly filling in the dictionary below
a = "pearson's correlation and spearman's correlation"
b = 'item based collaborative filtering'
c = "there were too many ratings to get a stable metric"
d = 'user based collaborative filtering'
e = "euclidean distance and pearson's correlation coefficient"
f = "manhatten distance and euclidean distance"
g = "spearman's correlation and euclidean distance"
h = "the spread in some ratings was zero"
i = 'content based recommendation'
sol_dict = {
'The type of recommendation system implemented here was a ...': # letter here,
'The two methods used to estimate user similarity were: ': # letter here,
'There was an issue with using the correlation coefficient. What was it?': # letter here
}
t.test_recs(sol_dict)
###Output
_____no_output_____
###Markdown
Additionally, let's take a closer look at some of the results. There are three objects that you read in to check your results against the solution:* **df_corrs** - a dataframe of user1, user2, pearson correlation between the two users* **df_dists** - a dataframe of user1, user2, euclidean distance between the two users* **all_recs_sol** - a dictionary of all recommendations (key = user, value = list of recommendations)Looping your results from the correlation and euclidean distance functions through every pair of users could have been used to create the first two objects (I don't recommend doing this given how long it will take). `9.`Use these three objects along with the cells below to correctly fill in the dictionary below and complete this notebook!
###Code
a = 567
b = 1503
c = 1319
d = 1325
e = 2526710
f = 0
g = 'Use another method to make recommendations - content based, knowledge based, or model based collaborative filtering'
sol_dict2 = {
'For how many pairs of users were we not able to obtain a measure of similarity using correlation?': # letter here,
'For how many pairs of users were we not able to obtain a measure of similarity using euclidean distance?': # letter here,
'For how many users were we unable to make any recommendations for using collaborative filtering?': # letter here,
'For how many users were we unable to make 10 recommendations for using collaborative filtering?': # letter here,
'What might be a way for us to get 10 recommendations for every user?': # letter here
}
t.test_recs2(sol_dict2)
#Use the below cells for any work you need to do!
# Users without recs
# NaN correlation values
# NaN euclidean distance values
# Users with less than 10 recs
###Output
_____no_output_____
###Markdown
Recommendations with MovieTweetings: Collaborative FilteringOne of the most popular methods for making recommendations is **collaborative filtering**. In collaborative filtering, you are using the collaboration of user-item recommendations to assist in making new recommendations. There are two main methods of performing collaborative filtering:1. **Neighborhood-Based Collaborative Filtering**, which is based on the idea that we can either correlate items that are similar to provide recommendations or we can correlate users to one another to provide recommendations.2. **Model Based Collaborative Filtering**, which is based on the idea that we can use machine learning and other mathematical models to understand the relationships that exist amongst items and users to predict ratings and provide ratings.In this notebook, you will be working on performing **neighborhood-based collaborative filtering**. There are two main methods for performing collaborative filtering:1. **User-based collaborative filtering:** In this type of recommendation, users related to the user you would like to make recommendations for are used to create a recommendation.2. **Item-based collaborative filtering:** In this type of recommendation, first you need to find the items that are most related to each other item (based on similar ratings). Then you can use the ratings of an individual on those similar items to understand if a user will like the new item.In this notebook you will be implementing **user-based collaborative filtering**. However, it is easy to extend this approach to make recommendations using **item-based collaborative filtering**. First, let's read in our data and necessary libraries.
###Code
%load_ext lab_black
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tests as t
import progressbar
from scipy.sparse import csr_matrix
from scipy import sparse
from IPython.display import HTML
%matplotlib inline
# Read in the datasets
movies = pd.read_csv("movies_clean.csv")
reviews = pd.read_csv("reviews_clean.csv")
del movies["Unnamed: 0"]
del reviews["Unnamed: 0"]
print(reviews.head())
###Output
user_id movie_id rating timestamp date
0 1 114508 8 1381006850 2013-10-05 23:00:50
1 2 499549 9 1376753198 2013-08-17 17:26:38
2 2 1305591 8 1376742507 2013-08-17 14:28:27
3 2 1428538 1 1371307089 2013-06-15 16:38:09
4 3 75314 1 1595468524 2020-07-23 03:42:04
###Markdown
Measures of SimilarityWhen using **neighborhood** based collaborative filtering, it is important to understand how to measure the similarity of users or items to one another. There are a number of ways in which we might measure the similarity between two vectors (which might be two users or two items). In this notebook, we will look specifically at two measures used to compare vectors:* **Pearson's correlation coefficient**Pearson's correlation coefficient is a measure of the strength and direction of a linear relationship. The value for this coefficient is a value between -1 and 1 where -1 indicates a strong, negative linear relationship and 1 indicates a strong, positive linear relationship. If we have two vectors **x** and **y**, we can define the correlation between the vectors as:$$CORR(x, y) = \frac{\text{COV}(x, y)}{\text{STDEV}(x)\text{ }\text{STDEV}(y)}$$where $$\text{STDEV}(x) = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2}$$and $$\text{COV}(x, y) = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})$$where n is the length of the vector, which must be the same for both x and y and $\bar{x}$ is the mean of the observations in the vector. We can use the correlation coefficient to indicate how alike two vectors are to one another, where the closer to 1 the coefficient, the more alike the vectors are to one another. There are some potential downsides to using this metric as a measure of similarity. You will see some of these throughout this workbook.* **Euclidean distance**Euclidean distance is a measure of the straightline distance from one vector to another. Because this is a measure of distance, larger values are an indication that two vectors are different from one another (which is different than Pearson's correlation coefficient).Specifically, the euclidean distance between two vectors **x** and **y** is measured as:$$ \text{EUCL}(x, y) = \sqrt{\sum_{i=1}^{n}(x_i - y_i)^2}$$Different from the correlation coefficient, no scaling is performed in the denominator. Therefore, you need to make sure all of your data are on the same scale when using this metric.**Note:** Because measuring similarity is often based on looking at the distance between vectors, it is important in these cases to scale your data or to have all data be in the same scale. If some measures are on a 5 point scale, while others are on a 100 point scale, you are likely to have non-optimal results due to the difference in variability of your features. Measures like Pearson and Spearman's correlation coefficients are unit agnostic, which means it is not necessary to scale for these measures. However, many measures used to measure similarity (like euclidean or manhatten distances) are not unit agnostic.In this case, we will not need to scale data because they are all on a 10 point scale, but it is always something to keep in mind!------------ User-Item MatrixIn order to calculate the similarities, it is common to put values in a matrix. In this matrix, users are identified by each row, and items are represented by columns. ![alt text](images/userxitem.png "User Item Matrix") In the above matrix, you can see that **User 1** and **User 2** both used **Item 1**, and **User 2**, **User 3**, and **User 4** all used **Item 2**. However, there are also a large number of missing values in the matrix for users who haven't used a particular item. 
A matrix with many missing values (like the one above) is considered **sparse**.Our first goal for this notebook is to create the above matrix with the **reviews** dataset. However, instead of 1 values in each cell, you should have the actual rating. The users will indicate the rows, and the movies will exist across the columns. To create the user-item matrix, we only need the first three columns of the **reviews** dataframe, which you can see by running the cell below.
###Code
user_items = reviews[["user_id", "movie_id", "rating"]]
user_items.head()
user_items.query("user_id==2")
###Output
_____no_output_____
###Markdown
Ceating the User-Item MatrixIn order to create the user-items matrix (like the one above), I personally started by using a [pivot table](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.pivot_table.html). However, I quickly ran into a memory error (a common theme throughout this notebook). I will help you navigate around many of the errors I had, and acheive useful collaborative filtering results! _____`1.` Create a matrix where the users are the rows, the movies are the columns, and the ratings exist in each cell, or a NaN exists in cells where a user hasn't rated a particular movie. If you get a memory error (like I did), [this link here](https://stackoverflow.com/questions/39648991/pandas-dataframe-pivot-memory-error) might help you! Check your results below to make sure your matrix is ready for the upcoming sections.
###Code
user_by_movie = np.zeros(
(reviews.user_id.nunique(), reviews.movie_id.nunique()), dtype="int8"
)
user_by_movie = pd.DataFrame(
user_by_movie, index=reviews.user_id.unique(), columns=reviews.movie_id.unique()
)
user_by_movie.index.name = "user_id"
user_by_movie.columns.name = "movie_id"
for _, (user_id, movie_id, rating) in user_items.iterrows():
user_by_movie.loc[user_id, movie_id] = rating
movies = movies.drop_duplicates("movie_id")
assert (
movies.shape[0] == user_by_movie.shape[1]
), "Oh no! Your matrix should have {} columns, and yours has {}!".format(
movies.shape[0], user_by_movie.shape[1]
)
assert (
reviews.user_id.nunique() == user_by_movie.shape[0]
), "Oh no! Your matrix should have {} rows, and yours has {}!".format(
reviews.user_id.nunique(), user_by_movie.shape[0]
)
print("Looks like you are all set! Proceed!")
# HTML('<img src="images/greatjob.webp">')
###Output
Looks like you are all set! Proceed!
###Markdown
`2.` Now that you have a matrix of users by movies, use this matrix to create a dictionary where the key is each user and the value is an array of the movies each user has rated.
###Code
# Create a dictionary with users and corresponding movies seen
def movies_watched(user_id):
"""
INPUT:
user_id - the user_id of an individual as int
OUTPUT:
movies - an array of movies the user has watched
"""
return user_by_movie.loc[user_id, (user_by_movie.loc[user_id] > 0)].index.values
def create_user_movie_dict():
"""
INPUT: None
OUTPUT: movies_seen - a dictionary where each key is a user_id and the value is an array of movie_ids
Creates the movies_seen dictionary
"""
# Do things - hint this may take some time, so you might want to set up a progress bar to watch things progress
movies_seen = {user_id: movies_watched(user_id) for user_id in user_by_movie.index}
return movies_seen
# Use your function to return dictionary
movies_seen = create_user_movie_dict()
###Output
_____no_output_____
###Markdown
`3.` If a user hasn't rated more than 2 movies, we consider these users "too new". Create a new dictionary that only contains users who have rated more than 2 movies. This dictionary will be used for all the final steps of this workbook.
###Code
movies_seen[2]
# Remove individuals who have watched 2 or fewer movies - don't have enough data to make recs
def create_movies_to_analyze(movies_seen, lower_bound=2):
"""
INPUT:
movies_seen - a dictionary where each key is a user_id and the value is an array of movie_ids
lower_bound - (an int) a user must have more movies seen than the lower bound to be added to the movies_to_analyze dictionary
OUTPUT:
movies_to_analyze - a dictionary where each key is a user_id and the value is an array of movie_ids
The movies_seen and movies_to_analyze dictionaries should be the same except that the output dictionary has removed
"""
# Do things to create updated dictionary
movies_to_analyze = {
user_id: movies
for user_id, movies in movies_seen.items()
if len(movies) > lower_bound
}
return movies_to_analyze
# Use your function to return your updated dictionary
movies_to_analyze = create_movies_to_analyze(movies_seen)
# Run the tests below to check that your movies_to_analyze matches the solution
# # assert len(movies_to_analyze) == 23512, "Oops! It doesn't look like your dictionary has the right number of individuals."
# assert len(movies_to_analyze[2]) == 23, "Oops! User 2 didn't match the number of movies we thought they would have."
# assert len(movies_to_analyze[7]) == 3, "Oops! User 7 didn't match the number of movies we thought they would have."
# print("If this is all you see, you are good to go!")
###Output
_____no_output_____
###Markdown
Calculating User SimilaritiesNow that you have set up the **movies_to_analyze** dictionary, it is time to take a closer look at the similarities between users. Below the sudo code for how I thought about determining the similarity between users:```for user1 in movies_to_analyze for user2 in movies_to_analyze see how many movies match between the two users if more than two movies in common pull the overlapping movies compute the distance/similarity metric between ratings on the same movies for the two users store the users and the distance metric```However, this took a very long time to run, and other methods of performing these operations did not fit on the workspace memory!Therefore, your task for this question is to look at a few specific examples of the correlation between ratings given by two users. For this question consider you want to compute the [correlation](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.corr.html) between users.`4.` Using the **movies_to_analyze** dictionary and **user_by_movie** dataframe, create a function that computes the correlation between the ratings of similar movies for two users. Then use your function to compare your results to ours using the tests below.
###Code
movies_1 = movies_to_analyze[2]
movies_2 = movies_to_analyze[2]
set(movies_1) & set(movies_2)
user_by_movie.loc[2, set(movies_1) & set(movies_2)].corr(
user_by_movie.loc[2, set(movies_1) & set(movies_2)]
)
def compute_correlation(user1, user2):
"""
INPUT
user1 - int user_id
user2 - int user_id
OUTPUT
the correlation between the matching ratings between the two users
"""
movies_1 = movies_to_analyze[user1]
movies_2 = movies_to_analyze[user2]
overlap = set(movies_1) & set(movies_2)
corr = user_by_movie.loc[user1, overlap].corr(user_by_movie.loc[user2, overlap])
return corr # return the correlation
# Read in solution correlations - this will take some time to read in
import pickle
corrs_import = pickle.load(open("corrs.p", "rb"))
df_corrs = pd.DataFrame(corrs_import)
df_corrs.columns = ['user1', 'user2', 'movie_corr']
# Test your function against the solution
assert compute_correlation(2,2) == df_corrs.query("user1 == 2 and user2 == 2")['movie_corr'][0], "Oops! The correlation between a user and itself should be 1.0."
assert round(compute_correlation(2,66), 2) == round(df_corrs.query("user1 == 2 and user2 == 66")['movie_corr'][1], 2), "Oops! The correlation between user 2 and 66 should be about 0.76."
assert np.isnan(compute_correlation(2,104)) == np.isnan(df_corrs.query("user1 == 2 and user2 == 104")['movie_corr'][4]), "Oops! The correlation between user 2 and 104 should be a NaN."
print("If this is all you see, then it looks like your function passed all of our tests!")
###Output
_____no_output_____
###Markdown
Why the NaN's?If the function you wrote passed all of the tests, then you have correctly set up your function to calculate the correlation between any two users. The **df_corrs** dataframe created in the cell leading up to the tests holds combinations of users along with their corresponding correlation. `5.` But one question is why are we still obtaining **NaN** values. Look at the header below for users 2 and 104, they have a correlation of **NaN**, why?
###Code
df_corrs.head()
###Output
_____no_output_____
###Markdown
Leave your thoughts here about why the NaN exists, and use the cells below to validate your thoughts. These Nan's ultimately make the correlation coefficient a less than optimal measure of similarity between two users.
###Code
# Which movies did both user 2 and user 4 see?
# What were the ratings for each user on those movies?
###Output
_____no_output_____
###Markdown
`6.` Because the correlation coefficient proved to be less than optimal for relating user ratings to one another, we could instead calculate the euclidean distance between the ratings. I found [this post](https://stackoverflow.com/questions/1401712/how-can-the-euclidean-distance-be-calculated-with-numpy) particularly helpful when I was setting up my function. This function should be very similar to your previous function. When you feel confident with your function, test it against our results.
###Code
def compute_euclidean_dist(user1, user2):
"""
INPUT
user1 - int user_id
user2 - int user_id
OUTPUT
the euclidean distance between user1 and user2
"""
movies_1 = movies_to_analyze[user1]
movies_2 = movies_to_analyze[user2]
overlap = set(movies_1) & set(movies_2)
dist = np.linalg.norm(
user_by_movie.loc[user1, overlap].values
- user_by_movie.loc[user2, overlap].values
)
return dist # return the euclidean distance
# Read in solution euclidean distances - this will take some time to read in
df_dists = pickle.load(open("dists.p", "rb"))
# Test your function against the solution
assert compute_euclidean_dist(2,2) == df_dists.query("user1 == 2 and user2 == 2")['eucl_dist'][0], "Oops! The distance between a user and itself should be 0.0."
assert round(compute_euclidean_dist(2,66), 2) == round(df_dists.query("user1 == 2 and user2 == 66")['eucl_dist'][1], 2), "Oops! The distance between user 2 and 66 should be about 2.24."
assert np.isnan(compute_euclidean_dist(2,104)) == np.isnan(df_dists.query("user1 == 2 and user2 == 104")['eucl_dist'][4]), "Oops! The distance between user 2 and 104 should be 2."
print("If this is all you see, then it looks like your function passed all of our tests!")
###Output
_____no_output_____
###Markdown
Using the Nearest Neighbors to Make RecommendationsIn the previous questions, you read in **df_corrs** and **df_dists**. Therefore, you have a measure of distance and similarity for each user to every other user. These dataframes hold every possible combination of users, as well as the corresponding correlation or euclidean distance, respectively.Because of the **NaN** values that exist within **df_corrs**, we will proceed using **df_dists**. You will want to find the users that are 'nearest' each user. Then you will want to find the movies the closest neighbors have liked to recommend to each user.I made use of the following objects:* df_dists (to obtain the neighbors)* user_items (to obtain the movies the neighbors and users have rated)* movies (to obtain the names of the movies)`7.` Complete the functions below, which allow you to find the recommendations for any user. There are five functions which you will need:* **find_closest_neighbors** - this returns a list of user_ids from closest neighbor to farthest neighbor using euclidean distance* **movies_liked** - returns an array of movie_ids* **movie_names** - takes the output of movies_liked and returns a list of movie names associated with the movie_ids* **make_recommendations** - takes a user id and goes through closest neighbors to return a list of movie names as recommendations* **all_recommendations** = loops through every user and returns a dictionary of with the key as a user_id and the value as a list of movie recommendations
###Code
def find_closest_neighbors(user):
'''
INPUT:
user - (int) the user_id of the individual you want to find the closest users
OUTPUT:
closest_neighbors - an array of the id's of the users sorted from closest to farthest away
'''
# I treated ties as arbitrary and just kept whichever was easiest to keep using the head method
# You might choose to do something less hand wavy - order the neighbors
return closest_neighbors
def movies_liked(user_id, min_rating=7):
'''
INPUT:
user_id - the user_id of an individual as int
min_rating - the minimum rating considered while still a movie is still a "like" and not a "dislike"
OUTPUT:
movies_liked - an array of movies the user has watched and liked
'''
return movies_liked
def movie_names(movie_ids):
'''
INPUT
movie_ids - a list of movie_ids
OUTPUT
movies - a list of movie names associated with the movie_ids
'''
return movie_lst
def make_recommendations(user, num_recs=10):
'''
INPUT:
user - (int) a user_id of the individual you want to make recommendations for
num_recs - (int) number of movies to return
OUTPUT:
recommendations - a list of movies - if there are "num_recs" recommendations return this many
otherwise return the total number of recommendations available for the "user"
which may just be an empty list
'''
return recommendations
def all_recommendations(num_recs=10):
'''
INPUT
num_recs (int) the (max) number of recommendations for each user
OUTPUT
all_recs - a dictionary where each key is a user_id and the value is an array of recommended movie titles
'''
# Apply make recs for each user -
# hint this may take some time, so you might want to set up a progress bar to watch things progress
return all_recs
all_recs = all_recommendations(10)
# This make some time - it loads our solution dictionary so you can compare results
all_recs_sol = pickle.load(open("all_recs.p", "rb"))
assert all_recs[2] == make_recommendations(2), "Oops! Your recommendations for user 2 didn't match ours."
assert all_recs[26] == make_recommendations(26), "Oops! It actually wasn't possible to make any recommendations for user 26."
assert all_recs[1503] == make_recommendations(1503), "Oops! Looks like your solution for user 1503 didn't match ours."
print("If you made it here, you now have recommendations for many users using collaborative filtering!")
HTML('<img src="images/greatjob.webp">')
###Output
_____no_output_____
###Markdown
Now What?If you made it this far, you have successfully implemented a solution to making recommendations using collaborative filtering. `8.` Let's do a quick recap of the steps taken to obtain recommendations using collaborative filtering.
###Code
# Check your understanding of the results by correctly filling in the dictionary below
a = "pearson's correlation and spearman's correlation"
b = 'item based collaborative filtering'
c = "there were too many ratings to get a stable metric"
d = 'user based collaborative filtering'
e = "euclidean distance and pearson's correlation coefficient"
f = "manhatten distance and euclidean distance"
g = "spearman's correlation and euclidean distance"
h = "the spread in some ratings was zero"
i = 'content based recommendation'
sol_dict = {
'The type of recommendation system implemented here was a ...': # letter here,
'The two methods used to estimate user similarity were: ': # letter here,
'There was an issue with using the correlation coefficient. What was it?': # letter here
}
t.test_recs(sol_dict)
###Output
_____no_output_____
###Markdown
Additionally, let's take a closer look at some of the results. There are three objects that you read in to check your results against the solution:* **df_corrs** - a dataframe of user1, user2, pearson correlation between the two users* **df_dists** - a dataframe of user1, user2, euclidean distance between the two users* **all_recs_sol** - a dictionary of all recommendations (key = user, value = list of recommendations)Looping your results from the correlation and euclidean distance functions through every pair of users could have been used to create the first two objects (I don't recommend doing this given how long it will take). `9.`Use these three objects along with the cells below to correctly fill in the dictionary below and complete this notebook!
###Code
a = 567
b = 1503
c = 1319
d = 1325
e = 2526710
f = 0
g = 'Use another method to make recommendations - content based, knowledge based, or model based collaborative filtering'
sol_dict2 = {
'For how many pairs of users were we not able to obtain a measure of similarity using correlation?': # letter here,
'For how many pairs of users were we not able to obtain a measure of similarity using euclidean distance?': # letter here,
'For how many users were we unable to make any recommendations for using collaborative filtering?': # letter here,
'For how many users were we unable to make 10 recommendations for using collaborative filtering?': # letter here,
'What might be a way for us to get 10 recommendations for every user?': # letter here
}
t.test_recs2(sol_dict2)
#Use the below cells for any work you need to do!
# Users without recs
# NaN correlation values
# NaN euclidean distance values
# Users with less than 10 recs
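# A rough sketch of checks that could fill the work cells above; the column
# names 'correlation' and 'eucl_dist' are assumptions -- this excerpt never
# shows the schemas of df_corrs and df_dists.
nan_corrs = df_corrs['correlation'].isnull().sum()
nan_dists = df_dists['eucl_dist'].isnull().sum()
no_recs = [u for u, recs in all_recs_sol.items() if len(recs) == 0]
under_10 = [u for u, recs in all_recs_sol.items() if len(recs) < 10]
print(nan_corrs, nan_dists, len(no_recs), len(under_10))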
###Output
_____no_output_____ |
notebooks/1 - Getting started with Generative Adversarial Networks using Keras and MNIST.ipynb | ###Markdown
Using GANs and Keras
###Code
from keras.models import Sequential
from keras.layers import Dense, Activation, Flatten, Reshape
from keras.layers import Conv2D, UpSampling2D
from keras.layers import LeakyReLU, Dropout
from keras.layers import BatchNormalization
from keras.optimizers import Adam, SGD, RMSprop
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import clear_output, Image
from tensorflow.examples.tutorials.mnist import input_data
import keras.backend.tensorflow_backend as ktf
import tensorflow as tf
import os
def get_session(gpu_fraction=0.45):
'''Assume that you have 6GB of GPU memory and want to allocate a fraction of it (~2.7GB at the default gpu_fraction of 0.45)'''
num_threads = os.environ.get('OMP_NUM_THREADS')
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=gpu_fraction)
if num_threads:
return tf.Session(config=tf.ConfigProto(
gpu_options=gpu_options, intra_op_parallelism_threads=num_threads))
else:
return tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
ktf.set_session(get_session())
###Output
_____no_output_____
###Markdown
Defining the discriminatorIn our two-player game the discriminator takes the role of the police: given an image it has to find out whether the image is fake or not. Given this requirement, the input of our discriminator network is a (28x28x1) input patch, equal to the dimensions of an MNIST image. The output is a single node. The setup of the networks is roughly based on the [DCGAN paper](https://arxiv.org/abs/1511.06434) and one of its [implementations](https://github.com/carpedm20/DCGAN-tensorflow).We use `LeakyReLU` in between the convolution layers to improve the gradients.
###Code
def discriminator():
net = Sequential()
input_shape = (28, 28, 1)
dropout_prob = 0.4
net.add(Conv2D(64, 5, strides=2, input_shape=input_shape, padding='same'))
net.add(LeakyReLU())
net.add(Conv2D(128, 5, strides=2, padding='same'))
net.add(LeakyReLU())
net.add(Dropout(dropout_prob))
net.add(Conv2D(256, 5, strides=2, padding='same'))
net.add(LeakyReLU())
net.add(Dropout(dropout_prob))
net.add(Conv2D(512, 5, strides=1, padding='same'))
net.add(LeakyReLU())
net.add(Dropout(dropout_prob))
net.add(Flatten())
net.add(Dense(1))
net.add(Activation('sigmoid'))
return net
###Output
_____no_output_____
###Markdown
The full network structure is as follows:
###Code
net_discriminator = discriminator()
net_discriminator.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_1 (Conv2D) (None, 14, 14, 64) 1664
_________________________________________________________________
leaky_re_lu_1 (LeakyReLU) (None, 14, 14, 64) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 7, 7, 128) 204928
_________________________________________________________________
leaky_re_lu_2 (LeakyReLU) (None, 7, 7, 128) 0
_________________________________________________________________
dropout_1 (Dropout) (None, 7, 7, 128) 0
_________________________________________________________________
conv2d_3 (Conv2D) (None, 4, 4, 256) 819456
_________________________________________________________________
leaky_re_lu_3 (LeakyReLU) (None, 4, 4, 256) 0
_________________________________________________________________
dropout_2 (Dropout) (None, 4, 4, 256) 0
_________________________________________________________________
conv2d_4 (Conv2D) (None, 4, 4, 512) 3277312
_________________________________________________________________
leaky_re_lu_4 (LeakyReLU) (None, 4, 4, 512) 0
_________________________________________________________________
dropout_3 (Dropout) (None, 4, 4, 512) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 8192) 0
_________________________________________________________________
dense_1 (Dense) (None, 1) 8193
_________________________________________________________________
activation_1 (Activation) (None, 1) 0
=================================================================
Total params: 4,311,553
Trainable params: 4,311,553
Non-trainable params: 0
_________________________________________________________________
###Markdown
Defining the generatorThe task of the generator, also known as "the counterfeiter", is to fool the discriminator by producing real-looking fake images. These images should eventually resemble the data distribution of the MNIST dataset.The structure of the generator is comparable to the discriminator but in reverse. We start with a random vector of noise (length=100) and gradually upsample. To improve the output of the generator we use `UpSampling2D` and normal convolutions instead of transposed convolutions (see also [this article](https://distill.pub/2016/deconv-checkerboard/)). The sizes of the layers are adjusted to match the size of our data (28x28 as opposed to the 64x64 of the DCGAN paper).
###Code
def generator():
net = Sequential()
dropout_prob = 0.4
net.add(Dense(7*7*256, input_dim=100))
net.add(BatchNormalization(momentum=0.9))
net.add(LeakyReLU())
net.add(Reshape((7,7,256)))
net.add(Dropout(dropout_prob))
net.add(UpSampling2D())
net.add(Conv2D(128, 5, padding='same'))
net.add(BatchNormalization(momentum=0.9))
net.add(LeakyReLU())
net.add(UpSampling2D())
net.add(Conv2D(64, 5, padding='same'))
net.add(BatchNormalization(momentum=0.9))
net.add(LeakyReLU())
net.add(Conv2D(32, 5, padding='same'))
net.add(BatchNormalization(momentum=0.9))
net.add(LeakyReLU())
net.add(Conv2D(1, 5, padding='same'))
net.add(Activation('sigmoid'))
return net
###Output
_____no_output_____
###Markdown
The full network of the generator looks as follows:
###Code
net_generator = generator()
net_generator.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_2 (Dense) (None, 12544) 1266944
_________________________________________________________________
batch_normalization_1 (Batch (None, 12544) 50176
_________________________________________________________________
leaky_re_lu_5 (LeakyReLU) (None, 12544) 0
_________________________________________________________________
reshape_1 (Reshape) (None, 7, 7, 256) 0
_________________________________________________________________
dropout_4 (Dropout) (None, 7, 7, 256) 0
_________________________________________________________________
up_sampling2d_1 (UpSampling2 (None, 14, 14, 256) 0
_________________________________________________________________
conv2d_5 (Conv2D) (None, 14, 14, 128) 819328
_________________________________________________________________
batch_normalization_2 (Batch (None, 14, 14, 128) 512
_________________________________________________________________
leaky_re_lu_6 (LeakyReLU) (None, 14, 14, 128) 0
_________________________________________________________________
up_sampling2d_2 (UpSampling2 (None, 28, 28, 128) 0
_________________________________________________________________
conv2d_6 (Conv2D) (None, 28, 28, 64) 204864
_________________________________________________________________
batch_normalization_3 (Batch (None, 28, 28, 64) 256
_________________________________________________________________
leaky_re_lu_7 (LeakyReLU) (None, 28, 28, 64) 0
_________________________________________________________________
conv2d_7 (Conv2D) (None, 28, 28, 32) 51232
_________________________________________________________________
batch_normalization_4 (Batch (None, 28, 28, 32) 128
_________________________________________________________________
leaky_re_lu_8 (LeakyReLU) (None, 28, 28, 32) 0
_________________________________________________________________
conv2d_8 (Conv2D) (None, 28, 28, 1) 801
_________________________________________________________________
activation_2 (Activation) (None, 28, 28, 1) 0
=================================================================
Total params: 2,394,241
Trainable params: 2,368,705
Non-trainable params: 25,536
_________________________________________________________________
###Markdown
Creating the modelsWe have now defined the two separate networks, but these still need to be combined into two trainable models: one to train the discriminator and one to train the generator. We start with the simplest one, the discriminator model.For the discriminator model we only have to define the optimizer; all the other parts of the model are already defined. We use `RMSprop` as the optimizer with a low learning rate and clip the gradient values between -1 and 1. A small decay in the learning rate can help with stabilizing training. Besides the loss we also tell Keras to give us the accuracy as a metric.
###Code
optim_discriminator = RMSprop(lr=0.0008, clipvalue=1.0, decay=1e-10)
model_discriminator = Sequential()
model_discriminator.add(net_discriminator)
model_discriminator.compile(loss='binary_crossentropy', optimizer=optim_discriminator, metrics=['accuracy'])
model_discriminator.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
sequential_1 (Sequential) (None, 1) 4311553
=================================================================
Total params: 4,311,553
Trainable params: 4,311,553
Non-trainable params: 0
_________________________________________________________________
###Markdown
The model for the generator is a bit more complex. The generator needs to fool the discriminator by generating images. To train the generator we need to assess its performance on the output of the discriminator. For this we add both networks to a combined model: *the adversarial model*. Our adversarial model uses random noise as its input and outputs the eventual prediction of the discriminator on the generated images. The generator performs well if the adversarial model outputs 'real' on all inputs. In other words, for any input of the adversarial network we aim to get an output classifying the generated image as real. This means, however, that the discriminator failed (which is a good thing for the generator). If we used normal backpropagation on the full adversarial model here, we would slowly push the discriminator to update itself and start classifying fake images as real. To prevent this we must freeze the part of the model that belongs to the discriminator.In Keras freezing a model is easily done by freezing all of its layers. By setting the `trainable` parameter to `False` we prevent the layers from updating within this particular model (they remain trainable in the discriminator model).The adversarial model uses `Adam` as the optimizer with the default values for the momentum.
###Code
optim_adversarial = Adam(lr=0.0004, clipvalue=1.0, decay=1e-10)
model_adversarial = Sequential()
model_adversarial.add(net_generator)
# Disable layers in discriminator
for layer in net_discriminator.layers:
layer.trainable = False
model_adversarial.add(net_discriminator)
model_adversarial.compile(loss='binary_crossentropy', optimizer=optim_adversarial, metrics=['accuracy'])
model_adversarial.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
sequential_2 (Sequential) (None, 28, 28, 1) 2394241
_________________________________________________________________
sequential_1 (Sequential) (None, 1) 4311553
=================================================================
Total params: 6,705,794
Trainable params: 2,368,705
Non-trainable params: 4,337,089
_________________________________________________________________
###Markdown
Note that the number of non-trainable parameters is very high. This is exactly what we want! Reading MNIST dataWe can now read our training data. For this I use a small utility function from Tensorflow.
###Code
# Read MNIST data
x_train = input_data.read_data_sets("mnist", one_hot=True).train.images
x_train = x_train.reshape(-1, 28, 28, 1).astype(np.float32)
# Map the images to a new range [-1, 1]
#x_train = x_train / 0.5 - 1
###Output
Extracting mnist/train-images-idx3-ubyte.gz
Extracting mnist/train-labels-idx1-ubyte.gz
Extracting mnist/t10k-images-idx3-ubyte.gz
Extracting mnist/t10k-labels-idx1-ubyte.gz
###Markdown
Training the GANWith our models defined and the data loaded we can start training our GAN. The models are trained one after another, starting with the discriminator. The discriminator is trained on a data set of both fake and real images and tries to classify them correctly. The adversarial model is trained on noise vectors as explained above.
###Code
batch_size = 256
vis_noise = np.random.uniform(-1.0, 1.0, size=[16, 100])
loss_adv = []
loss_dis = []
acc_adv = []
acc_dis = []
plot_iteration = []
for i in range(10001):
# Select a random set of training images from the mnist dataset
images_train = x_train[np.random.randint(0, x_train.shape[0], size=batch_size), :, :, :]
# Generate a random noise vector
noise = np.random.uniform(-1.0, 1.0, size=[batch_size, 100])
# Use the generator to create fake images from the noise vector
images_fake = net_generator.predict(noise)
# Create a dataset with fake and real images
x = np.concatenate((images_train, images_fake))
y = np.ones([2*batch_size, 1])
y[batch_size:, :] = 0
# Train discriminator for one batch
d_stats = model_discriminator.train_on_batch(x, y)
# Train the generator
# The input of the adversarial model is a list of noise vectors. The generator is 'good' if the discriminator classifies
# all the generated images as real. Therefore, the desired output is a list of all ones.
y = np.ones([batch_size, 1])
noise = np.random.uniform(-1.0, 1.0, size=[batch_size, 100])
a_stats = model_adversarial.train_on_batch(noise, y)
if i % 50 == 0:
plot_iteration.append(i)
loss_adv.append(a_stats[0])
loss_dis.append(d_stats[0])
acc_adv.append(a_stats[1])
acc_dis.append(d_stats[1])
clear_output(wait=True)
fig, (ax1, ax2) = plt.subplots(1,2)
fig.set_size_inches(16, 8)
ax1.plot(plot_iteration, loss_adv, label="loss adversarial")
ax1.plot(plot_iteration, loss_dis, label="loss discriminator")
ax1.set_ylim([0,5])
ax1.legend()
ax2.plot(plot_iteration, acc_adv, label="acc adversarial")
ax2.plot(plot_iteration, acc_dis, label="acc discriminator")
ax2.legend()
plt.show()
# Optional, print losses instead of plotting with:
# print("{}: [Dis. loss: {:.4f}, acc: {:.4f}] [Gen. loss: {:.4f}, acc: {:.4f}]".format(i, d_stats[0], d_stats[1], a_stats[0], a_stats[1]))
if i % 500 == 0:
# Visualize the performance of the generator by producing images from the test vector
images = net_generator.predict(vis_noise)
# Map back to original range
#images = (images + 1 ) * 0.5
plt.figure(figsize=(10,10))
for im in range(images.shape[0]):
plt.subplot(4, 4, im+1)
image = images[im, :, :, :]
image = np.reshape(image, [28, 28])
plt.imshow(image, cmap='gray')
plt.axis('off')
plt.tight_layout()
plt.savefig(r'output/mnist-normal/{}.png'.format(i))
plt.close('all')
import imageio
filenames = [r'output/mnist-normal/{}.png'.format(i * 500) for i in range(20)]
images = []
for filename in filenames:
images.append(imageio.imread(filename))
imageio.mimsave(r'output/mnist-normal/learning.gif', images, duration=0.5)
Image(url='output/mnist-normal/learning.gif')
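# Sweep the latent input: every component of the noise vector is set to the
# same value, stepped from -1.0 to +0.8, to see how the generated digit
# morphs along this direction of the latent space.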
plt.figure(figsize=(15,4))
for i in range(10):
noise = np.zeros([1,100]) - 1 + (i * 0.2)
images = net_generator.predict(noise)
image = images[0, :, :, :]
image = np.reshape(image, [28, 28])
plt.subplot(1, 10, i+1)
plt.imshow(image, cmap='gray')
plt.axis('off')
plt.tight_layout()
plt.show()
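# Latent-space arithmetic: generate digits from two random latent vectors
# a and b, and from their difference (b - a).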
a = np.random.uniform(-1.0, 1.0, size=[1, 100])
b = np.random.uniform(-1.0, 1.0, size=[1, 100])
image_a = np.reshape(net_generator.predict(a)[0], [28, 28])
image_b = np.reshape(net_generator.predict(b)[0], [28, 28])
image_sum = np.reshape(net_generator.predict(b - a)[0], [28, 28])
plt.figure(figsize=(5,4))
plt.subplot(1,3,1)
plt.imshow(image_a, cmap='gray')
plt.axis('off')
plt.subplot(1,3,2)
plt.imshow(image_b, cmap='gray')
plt.axis('off')
plt.subplot(1,3,3)
plt.imshow(image_sum, cmap='gray')
plt.axis('off')
plt.tight_layout()
plt.show()
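# For comparison: a grid of 40 real MNIST digits sampled from the training set.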
plt.figure(figsize=(15,6))
images = x_train[np.random.randint(0, x_train.shape[0], size=40), :, :, :]
for i in range(40):
image = images[i, :, :, :]
image = np.reshape(image, [28, 28])
plt.subplot(4, 10, i+1)
plt.imshow(image, cmap='gray')
plt.axis('off')
plt.tight_layout()
plt.show()
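# A grid of 40 digits produced by the trained generator from random noise vectors.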
plt.figure(figsize=(15,6))
noise = np.random.uniform(-1.0, 1.0, size=[40, 100])
images = net_generator.predict(noise)
for i in range(40):
image = images[i, :, :, :]
image = np.reshape(image, [28, 28])
plt.subplot(4, 10, i+1)
plt.imshow(image, cmap='gray')
plt.axis('off')
plt.tight_layout()
plt.show()
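# Mix real and generated digits at random (choice_vector decides which); the
# follow-up plot repeats the grid and marks the generated ones with a red border.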
import matplotlib.patches as plot_patch
plt.figure(figsize=(15,6))
noise = np.random.uniform(-1.0, 1.0, size=[40, 100])
images_fake = net_generator.predict(noise)
images_real = x_train[np.random.randint(0, x_train.shape[0], size=40), :, :, :]
choice_vector = np.random.uniform(0, 1, size=40)
for i in range(40):
if choice_vector[i] > 0.5:
image = images_fake[i, :, :, :]
else:
image = images_real[i]
image = np.reshape(image, [28, 28])
plt.subplot(4, 10, i+1)
plt.imshow(image, cmap='gray')
plt.axis('off')
plt.tight_layout()
plt.show()
plt.figure(figsize=(15,6))
border = np.zeros((28,28,3))
border[0,:] = [255,0,0]
border[:,0] = [255,0,0]
for i in range(40):
if choice_vector[i] > 0.5:
image = images_fake[i, :, :, :]
else:
image = images_real[i]
image = np.reshape(image, [28, 28])
ax = plt.subplot(4, 10, i+1)
plt.imshow(image, cmap='gray')
if choice_vector[i] > 0.5:
ax.add_patch(plot_patch.Rectangle((0,0), 27, 27, edgecolor="red", linewidth=2, fill=False))
plt.axis('off')
plt.tight_layout()
plt.show()
###Output
_____no_output_____ |
dev/archive/model_rf_gboost.ipynb | ###Markdown
Predicting Enron Spam Emails using Supervised Learning DS-GA 1001: Introduction to Data Science Final Project Scripts Models Random Forest Gradient Boosting Machine Created On: 11/30/2020Modified On: 12/04/2020 DescriptionThis script establishes various supervised learning models for the `emails_cleaned.csv` dataset. DataWe applied feature engineering to make the data ready for models.
###Code
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import confusion_matrix, classification_report, roc_auc_score, roc_curve, mean_squared_error
from sklearn.model_selection import GridSearchCV
print("SUCCESS! All modules have been imported.")
df = pd.read_csv('../data/emails_cleaned.csv')
# Remove rows containing missing values
df.dropna(subset=['X'], inplace=True)
# Confirm that there is no missing values
df.isnull().sum()
df.shape
print('The model-ready dataset contains {} rows.'.format(df.shape[0]))
###Output
The model-ready dataset contains 785672 rows.
###Markdown
Feature EngineeringWe applied the [Term frequency–inverse document frequency](https://en.wikipedia.org/wiki/Tf%E2%80%93idf) (TF-IDF) method to transform the email text into meaningful numeric features that can be used for model training. TF-IDF
###Code
# Create a vectorization matrix using tf-idf vectorizer
vectorizer = TfidfVectorizer()
vectorized_emails = vectorizer.fit_transform(df.X)
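# Optional sanity check: the number of distinct terms the vectorizer
# learned (each becomes one tf-idf feature column).
print('Vocabulary size: {}'.format(len(vectorizer.vocabulary_)))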
vectorized_emails
###Output
_____no_output_____
###Markdown
Methods Train Test SplitBefore fitting the model, we split the dataset into two parts: a train set and a test set. We used the train set to train models and the test set to examine the performance of each model.
###Code
test_size = 0.2
X_train, X_test, y_train, y_test = train_test_split(vectorized_emails, df.y, test_size=test_size, random_state=88)
###Output
_____no_output_____
###Markdown
Cross ValidationWe applied cross validation to improve model performance. Grid SearchWe also applied grid search to tune the hyperparameters. The goal is to find the optimal hyperparameter `C` so that our model reaches the optimal complexity. Logistic Regression with Elastic Net (Baseline)We used logistic regression as our baseline model. We also applied an elastic net penalty, which weights the coefficients and adds a regularization term to our model. Elastic NetWe first applied grid search on the elastic net mixing parameter, `l1 ratio`. We created a hyperparameter space of candidate l1 ratio values and fitted the training data with 5-fold cross validation to pick the best one.
###Code
# Setup a hyperparameter grid for l1_ratio that is from 0 to 1
l1_space = [0.2, 0.5, 0.8]
param_grid = {'l1_ratio': l1_space}
elastic_net = ElasticNet()
# Setup the grid search and fit the training data
gm_cv = GridSearchCV(elastic_net, param_grid, cv=5, n_jobs=-1)
gm_cv.fit(X_train, y_train)
# Predict on the test dataset and compute metrics
y_pred = gm_cv.predict(X_test)
r2 = gm_cv.score(X_test, y_test)
mse = mean_squared_error(y_test, y_pred)
print("Tuned ElasticNet l1 ratio: {}".format(gm_cv.best_params_))
print("Tuned ElasticNet R squared: {}".format(r2))
print("Tuned ElasticNet MSE: {}".format(mse))
###Output
Tuned ElasticNet l1 ratio: {'l1_ratio': 0.2}
Tuned ElasticNet R squared: -9.747972842921726e-06
Tuned ElasticNet MSE: 0.2497127785371497
###Markdown
Regularization StrengthWe also considered tuning the regularization parameter, `C`. A large `C` can lead to overfitting while a small `C` can lead to underfitting. In this case, we set the parameter to its default value of 1.
###Code
# Setup a hyperparameter grid for C that is from 0 to 1
# param_grid = {'C': [0.01, 0.1, 1, 10, 100]}
# Fit a logistic regression model with elastic net and built-in cross validation
logreg = LogisticRegression(solver='saga', penalty='elasticnet', l1_ratio=0.2, max_iter=5000, verbose=0.2)
# logreg_cv = GridSearchCV(logreg, param_grid, cv=5, scoring='roc_auc')
logreg.fit(X_train, y_train)
# best_params_logreg = logreg_cv.best_params_
# validation_auc_logreg = logreg_cv.best_score_
y_pred = logreg.predict(X_test)
y_pred_prob = logreg.predict_proba(X_test)[:, 1]
test_auc_logreg = roc_auc_score(y_test, y_pred_prob)
fpr_logreg, tpr_logreg, thresholds_logreg = roc_curve(y_test, y_pred_prob)
# logreg_cv.best_estimator_
print('Tuned Logistic Regression Test AUC: {}'.format(test_auc_logreg))
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
plt.style.use('seaborn')
fig = plt.figure(num=None, figsize=(5, 5), dpi=300, tight_layout=True)
plt.plot([0, 1], [0, 1], 'k--')
plt.plot(fpr_logreg, tpr_logreg)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve, Logistic Regression')
plt.show()
fig.savefig('../results/roc_curve_logistic_regression.png', dpi=fig.dpi)
###Output
_____no_output_____
###Markdown
Random Forest
###Code
# Setup a hyperparameter grid for random forest
n_estimators = [100]
max_depth = [200]
min_samples_leaf = [10]
param_grid = {'n_estimators': n_estimators, 'max_depth': max_depth, 'min_samples_leaf': min_samples_leaf}
rf = RandomForestClassifier(criterion='entropy', random_state=88, verbose=0.1)
rf_cv = GridSearchCV(rf, param_grid, cv=5, scoring='roc_auc')
rf_cv.fit(X_train, y_train)
best_param_rf = rf_cv.best_params_
validation_auc_rf = rf_cv.best_score_
y_pred = rf_cv.predict(X_test)
y_pred_prob = rf_cv.predict_proba(X_test)[:, 1]
test_auc_rf = roc_auc_score(y_test, y_pred_prob)
fpr_rf, tpr_rf, thresholds_rf = roc_curve(y_test, y_pred_prob)
rf_cv.best_estimator_
print('Tuned Random Forest Parameters: {}'.format(best_param_rf))
print('Tuned Random Forest Validation AUC: {}'.format(validation_auc_rf))
print('Tuned Random Forest Test AUC: {}'.format(test_auc_rf))
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
plt.style.use('seaborn')
fig = plt.figure(num=None, figsize=(5, 5), dpi=300, tight_layout=True)
plt.plot([0, 1], [0, 1], 'k--')
plt.plot(fpr_rf, tpr_rf)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve, Random Forest')
plt.show()
fig.savefig('../results/roc_curve_random_forest.png', dpi=fig.dpi)
###Output
_____no_output_____
###Markdown
Gradient Boosting
###Code
# Setup a hyperparameter grid for gradient boosting
n_estimators = [100, 200, 500]
max_depth = [10, 50, 100]
param_grid = {'n_estimators': n_estimators, 'max_depth': max_depth}
gboost = GradientBoostingClassifier(verbose=0.1)
gboost_cv = GridSearchCV(gboost, param_grid, cv=5, scoring='roc_auc')
gboost_cv.fit(X_train, y_train)
best_param_gboost = gboost_cv.best_params_
validation_auc_gboost = gboost_cv.best_score_
y_pred = gboost_cv.predict(X_test)
y_pred_prob = gboost_cv.predict_proba(X_test)[:, 1]
test_auc_gboost = roc_auc_score(y_test, y_pred_prob)
fpr_gboost, tpr_gboost, thresholds_gboost = roc_curve(y_test, y_pred_prob)
gboost_cv.best_estimator_
print('Tuned Gradient Boosting Parameters: {}'.format(best_param_gboost))
print('Tuned Gradient Boosting Validation AUC: {}'.format(validation_auc_gboost))
print('Tuned Gradient Boosting Test AUC: {}'.format(test_auc_gboost))
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
plt.style.use('seaborn')
fig = plt.figure(num=None, figsize=(5, 5), dpi=300, tight_layout=True)
plt.plot([0, 1], [0, 1], 'k--')
plt.plot(fpr_gboost, tpr_gboost)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve, Gradient Boosting Machine')
plt.show()
fig.savefig('../results/roc_curve_gradient_boosting.png', dpi=fig.dpi)
###Output
_____no_output_____ |
datasets/Part_1_Artificial_Neural_Networks_ANN/ann_homework_solution.ipynb | ###Markdown
Artificial Neural Networks **Install Theano:** * pip install --upgrade --no-deps git+git://github.com/Theano/Theano.git **Install Tensorflow and Keras:** * conda install -c conda-forge keras Part 1 - Data Preprocessing
###Code
# Import the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# Import the dataset
dataset = pd.read_csv('Churn_Modelling.csv')
X = dataset.iloc[:, 3:13].values
y = dataset.iloc[:, 13].values
# Encode categorical data
from sklearn.preprocessing import LabelEncoder
labelencoder_X_1 = LabelEncoder()
X[:, 1] = labelencoder_X_1.fit_transform(X[:, 1])
labelencoder_X_2 = LabelEncoder()
X[:, 2] = labelencoder_X_2.fit_transform(X[:, 2])
# OneHotEncoder used this way is DEPRECATED in newer versions
#onehotencoder = OneHotEncoder(categorical_features=[1])
#X = onehotencoder.fit_transform(X).toarray()
from sklearn.preprocessing import OneHotEncoder
from sklearn.compose import ColumnTransformer
transformer = ColumnTransformer(
transformers=[
("Churn_Modelling", # Un nombre de la transformación
OneHotEncoder(categories='auto'), # La clase a la que transformar
[1] # Las columnas a transformar.
)
], remainder='passthrough'
)
X = transformer.fit_transform(X)
X = X[:, 1:]
# Split the dataset into a training set and a test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
# Feature scaling
from sklearn.preprocessing import StandardScaler
sc_X = StandardScaler()
X_train = sc_X.fit_transform(X_train)
X_test = sc_X.transform(X_test)
###Output
_____no_output_____
###Markdown
Part 2 - Building the ANN
###Code
# Import Keras and additional libraries
import keras
from keras.models import Sequential
from keras.layers import Dense
# Initialize the ANN
classifier = Sequential()
# Add the input layer and the first hidden layer
classifier.add(Dense(units = 6, kernel_initializer = "uniform",
activation = "relu", input_dim = 11))
# Add the second hidden layer
classifier.add(Dense(units = 6, kernel_initializer = "uniform", activation = "relu"))
# Add the output layer
classifier.add(Dense(units = 1, kernel_initializer = "uniform", activation = "sigmoid"))
# Compile the ANN
classifier.compile(optimizer = "adam", loss = "binary_crossentropy", metrics = ["accuracy"])
# Fit the ANN to the training set
classifier.fit(X_train, y_train, batch_size = 10, epochs = 100)
###Output
Epoch 1/100
8000/8000 [==============================] - 1s 120us/step - loss: 0.4861 - accuracy: 0.7959
Epoch 2/100
8000/8000 [==============================] - 1s 153us/step - loss: 0.4283 - accuracy: 0.7960
Epoch 3/100
8000/8000 [==============================] - 1s 138us/step - loss: 0.4227 - accuracy: 0.7960
Epoch 4/100
8000/8000 [==============================] - 1s 114us/step - loss: 0.4190 - accuracy: 0.8189
Epoch 5/100
8000/8000 [==============================] - 1s 100us/step - loss: 0.4166 - accuracy: 0.8264
Epoch 6/100
8000/8000 [==============================] - 1s 110us/step - loss: 0.4145 - accuracy: 0.8292
Epoch 7/100
8000/8000 [==============================] - 1s 100us/step - loss: 0.4131 - accuracy: 0.8309
Epoch 8/100
8000/8000 [==============================] - 1s 114us/step - loss: 0.4118 - accuracy: 0.8326
Epoch 9/100
8000/8000 [==============================] - 1s 110us/step - loss: 0.4108 - accuracy: 0.8341
Epoch 10/100
8000/8000 [==============================] - 1s 112us/step - loss: 0.4102 - accuracy: 0.8339
Epoch 11/100
8000/8000 [==============================] - 1s 121us/step - loss: 0.4090 - accuracy: 0.8351
Epoch 12/100
8000/8000 [==============================] - 1s 108us/step - loss: 0.4085 - accuracy: 0.8344
Epoch 13/100
8000/8000 [==============================] - 1s 113us/step - loss: 0.4077 - accuracy: 0.8344
Epoch 14/100
8000/8000 [==============================] - 1s 99us/step - loss: 0.4067 - accuracy: 0.8341
Epoch 15/100
8000/8000 [==============================] - 1s 109us/step - loss: 0.4061 - accuracy: 0.8356
Epoch 16/100
8000/8000 [==============================] - 1s 103us/step - loss: 0.4061 - accuracy: 0.8341
Epoch 17/100
8000/8000 [==============================] - 1s 111us/step - loss: 0.4057 - accuracy: 0.8347
Epoch 18/100
8000/8000 [==============================] - 1s 100us/step - loss: 0.4048 - accuracy: 0.8349
Epoch 19/100
8000/8000 [==============================] - 1s 98us/step - loss: 0.4046 - accuracy: 0.8364
Epoch 20/100
8000/8000 [==============================] - 1s 111us/step - loss: 0.4046 - accuracy: 0.8351
Epoch 21/100
8000/8000 [==============================] - 1s 99us/step - loss: 0.4044 - accuracy: 0.8359
Epoch 22/100
8000/8000 [==============================] - 1s 112us/step - loss: 0.4036 - accuracy: 0.8360
Epoch 23/100
8000/8000 [==============================] - 1s 104us/step - loss: 0.4034 - accuracy: 0.8347
Epoch 24/100
8000/8000 [==============================] - 1s 106us/step - loss: 0.4027 - accuracy: 0.8351
Epoch 25/100
8000/8000 [==============================] - 1s 101us/step - loss: 0.4028 - accuracy: 0.8364
Epoch 26/100
8000/8000 [==============================] - 1s 101us/step - loss: 0.4027 - accuracy: 0.8361
Epoch 27/100
8000/8000 [==============================] - 1s 110us/step - loss: 0.4020 - accuracy: 0.8351
Epoch 28/100
8000/8000 [==============================] - 1s 100us/step - loss: 0.4018 - accuracy: 0.8351
Epoch 29/100
8000/8000 [==============================] - 1s 113us/step - loss: 0.4016 - accuracy: 0.8357
Epoch 30/100
8000/8000 [==============================] - 1s 106us/step - loss: 0.4014 - accuracy: 0.8366
Epoch 31/100
8000/8000 [==============================] - 1s 105us/step - loss: 0.4012 - accuracy: 0.8357
Epoch 32/100
8000/8000 [==============================] - 1s 101us/step - loss: 0.4005 - accuracy: 0.8378
Epoch 33/100
8000/8000 [==============================] - 1s 100us/step - loss: 0.4008 - accuracy: 0.8375
Epoch 34/100
8000/8000 [==============================] - 1s 107us/step - loss: 0.4008 - accuracy: 0.8367
Epoch 35/100
8000/8000 [==============================] - 1s 101us/step - loss: 0.4001 - accuracy: 0.8354
Epoch 36/100
8000/8000 [==============================] - 1s 120us/step - loss: 0.3997 - accuracy: 0.8382
Epoch 37/100
8000/8000 [==============================] - 1s 114us/step - loss: 0.3998 - accuracy: 0.8376
Epoch 38/100
8000/8000 [==============================] - 1s 107us/step - loss: 0.3997 - accuracy: 0.8370
Epoch 39/100
8000/8000 [==============================] - 1s 102us/step - loss: 0.3994 - accuracy: 0.8367
Epoch 40/100
8000/8000 [==============================] - 1s 101us/step - loss: 0.3996 - accuracy: 0.8361
Epoch 41/100
8000/8000 [==============================] - 1s 109us/step - loss: 0.3994 - accuracy: 0.8372
Epoch 42/100
8000/8000 [==============================] - 1s 106us/step - loss: 0.3996 - accuracy: 0.8376
Epoch 43/100
8000/8000 [==============================] - 1s 115us/step - loss: 0.3993 - accuracy: 0.8382
Epoch 44/100
8000/8000 [==============================] - 1s 105us/step - loss: 0.3990 - accuracy: 0.8369
Epoch 45/100
8000/8000 [==============================] - 1s 104us/step - loss: 0.3991 - accuracy: 0.8366
Epoch 46/100
8000/8000 [==============================] - 1s 104us/step - loss: 0.3990 - accuracy: 0.8369
Epoch 47/100
8000/8000 [==============================] - 1s 103us/step - loss: 0.3985 - accuracy: 0.8388
Epoch 48/100
8000/8000 [==============================] - 1s 106us/step - loss: 0.3989 - accuracy: 0.8363
Epoch 49/100
8000/8000 [==============================] - 1s 108us/step - loss: 0.3985 - accuracy: 0.8361
Epoch 50/100
8000/8000 [==============================] - 1s 110us/step - loss: 0.3990 - accuracy: 0.8376
Epoch 51/100
8000/8000 [==============================] - 1s 108us/step - loss: 0.3986 - accuracy: 0.8357
Epoch 52/100
8000/8000 [==============================] - 1s 117us/step - loss: 0.3982 - accuracy: 0.8365
Epoch 53/100
8000/8000 [==============================] - 1s 118us/step - loss: 0.3987 - accuracy: 0.8381
Epoch 54/100
8000/8000 [==============================] - 1s 99us/step - loss: 0.3983 - accuracy: 0.8378
Epoch 55/100
8000/8000 [==============================] - 1s 107us/step - loss: 0.3986 - accuracy: 0.8360
Epoch 56/100
8000/8000 [==============================] - 1s 104us/step - loss: 0.3985 - accuracy: 0.8365
Epoch 57/100
8000/8000 [==============================] - 1s 121us/step - loss: 0.3983 - accuracy: 0.8355
Epoch 58/100
8000/8000 [==============================] - 1s 111us/step - loss: 0.3979 - accuracy: 0.8375
Epoch 59/100
8000/8000 [==============================] - 1s 119us/step - loss: 0.3981 - accuracy: 0.8363
Epoch 60/100
8000/8000 [==============================] - 1s 114us/step - loss: 0.3981 - accuracy: 0.8365
Epoch 61/100
8000/8000 [==============================] - 1s 118us/step - loss: 0.3979 - accuracy: 0.8379
Epoch 62/100
8000/8000 [==============================] - 1s 109us/step - loss: 0.3981 - accuracy: 0.8376
Epoch 63/100
8000/8000 [==============================] - 1s 116us/step - loss: 0.3981 - accuracy: 0.8359
Epoch 64/100
8000/8000 [==============================] - 1s 114us/step - loss: 0.3977 - accuracy: 0.8365
Epoch 65/100
8000/8000 [==============================] - 1s 110us/step - loss: 0.3976 - accuracy: 0.8372
Epoch 66/100
8000/8000 [==============================] - 1s 119us/step - loss: 0.3979 - accuracy: 0.8356
Epoch 67/100
8000/8000 [==============================] - 1s 110us/step - loss: 0.3976 - accuracy: 0.8367
Epoch 68/100
8000/8000 [==============================] - 1s 132us/step - loss: 0.3976 - accuracy: 0.8363
Epoch 69/100
8000/8000 [==============================] - 1s 101us/step - loss: 0.3979 - accuracy: 0.8370
Epoch 70/100
8000/8000 [==============================] - 1s 119us/step - loss: 0.3977 - accuracy: 0.8350
Epoch 71/100
8000/8000 [==============================] - 1s 104us/step - loss: 0.3975 - accuracy: 0.8371
Epoch 72/100
8000/8000 [==============================] - 1s 110us/step - loss: 0.3977 - accuracy: 0.8389
Epoch 73/100
8000/8000 [==============================] - 1s 102us/step - loss: 0.3975 - accuracy: 0.8379
Epoch 74/100
8000/8000 [==============================] - 1s 101us/step - loss: 0.3977 - accuracy: 0.8364
Epoch 75/100
8000/8000 [==============================] - 1s 111us/step - loss: 0.3974 - accuracy: 0.8367
Epoch 76/100
8000/8000 [==============================] - 1s 102us/step - loss: 0.3974 - accuracy: 0.8374
Epoch 77/100
8000/8000 [==============================] - 1s 125us/step - loss: 0.3976 - accuracy: 0.8363
Epoch 78/100
###Markdown
Part 3 - Evaluating the model and computing final predictions
###Code
# Predict the results on the test set
y_pred = classifier.predict(X_test)
y_pred = (y_pred>0.5)
###Output
_____no_output_____
###Markdown
Predicting a new observationUse our ANN model to predict whether the customer with the following information will leave the bank:* Geography: France* Credit score: 600* Gender: male* Age: 40 years old* Tenure: 3 years.* Balance: $ 60000* Number of products: 2* Does this customer have a credit card? Yes* Is this customer an active member? Yes* Estimated salary: $ 50000So, should we say goodbye to this customer?
###Code
new_prediction = classifier.predict(sc_X.transform(np.array([[0,0,600, 1, 40, 3, 60000, 2, 1, 1, 50000]])))
new_prediction = (new_prediction > 0.5)
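# The thresholded prediction is a boolean array of shape (1, 1);
# True would mean the model expects this customer to leave the bank.
print("Will this customer leave the bank? {}".format(new_prediction[0][0]))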
# Build a confusion matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
print(cm)
print((cm[0][0]+cm[1][1])/cm.sum())
###Output
_____no_output_____ |
notebooks/prod/.ipynb_checkpoints/n08_simple_q_learner_1000_states_full_training-checkpoint.ipynb | ###Markdown
In this notebook a simple Q learner will be trained and evaluated. The Q learner recommends when to buy or sell shares of one particular stock, and in which quantity (in fact it determines the desired fraction of shares in the total portfolio value). One initial attempt was made to train the Q-learner with multiple processes, but it was unsuccessful.
###Code
# Basic imports
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
import scipy.optimize as spo
import sys
from time import time
from sklearn.metrics import r2_score, median_absolute_error
from multiprocessing import Pool
%matplotlib inline
%pylab inline
pylab.rcParams['figure.figsize'] = (20.0, 10.0)
%load_ext autoreload
%autoreload 2
sys.path.append('../../')
import recommender.simulator as sim
from utils.analysis import value_eval
from recommender.agent import Agent
from functools import partial
NUM_THREADS = 1
LOOKBACK = -1 # 252*4 + 28
STARTING_DAYS_AHEAD = 252
POSSIBLE_FRACTIONS = [0.0, 1.0]
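# POSSIBLE_FRACTIONS: the target fraction of total portfolio value held in the
# stock; [0.0, 1.0] means the agent is either fully out of or fully in the position.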
# Get the data
SYMBOL = 'SPY'
total_data_train_df = pd.read_pickle('../../data/data_train_val_df.pkl').stack(level='feature')
data_train_df = total_data_train_df[SYMBOL].unstack()
total_data_test_df = pd.read_pickle('../../data/data_test_df.pkl').stack(level='feature')
data_test_df = total_data_test_df[SYMBOL].unstack()
if LOOKBACK == -1:
total_data_in_df = total_data_train_df
data_in_df = data_train_df
else:
data_in_df = data_train_df.iloc[-LOOKBACK:]
total_data_in_df = total_data_train_df.loc[data_in_df.index[0]:]
# Create many agents
index = np.arange(NUM_THREADS).tolist()
env, num_states, num_actions = sim.initialize_env(total_data_in_df,
SYMBOL,
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
n_levels=10)
agents = [Agent(num_states=num_states,
num_actions=num_actions,
random_actions_rate=0.98,
random_actions_decrease=0.9999,
dyna_iterations=0,
name='Agent_{}'.format(i)) for i in index]
def show_results(results_list, data_in_df, graph=False):
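    """Print Sharpe/return statistics for each simulation result and, if graph=True, plot the portfolio value against the buy-and-hold benchmark."""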
for values in results_list:
total_value = values.sum(axis=1)
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(total_value))))
print('-'*100)
initial_date = total_value.index[0]
compare_results = data_in_df.loc[initial_date:, 'Close'].copy()
compare_results.name = SYMBOL
compare_results_df = pd.DataFrame(compare_results)
compare_results_df['portfolio'] = total_value
std_comp_df = compare_results_df / compare_results_df.iloc[0]
if graph:
plt.figure()
std_comp_df.plot()
###Output
_____no_output_____
###Markdown
Let's show the symbol's data, to see how good the recommender has to be.
###Code
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(data_in_df['Close'].iloc[STARTING_DAYS_AHEAD:]))))
# Simulate (with new envs, each time)
n_epochs = 7
for i in range(n_epochs):
tic = time()
env.reset(STARTING_DAYS_AHEAD)
results_list = sim.simulate_period(total_data_in_df,
SYMBOL,
agents[0],
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_in_df)
env.reset(STARTING_DAYS_AHEAD)
results_list = sim.simulate_period(total_data_in_df,
SYMBOL, agents[0],
learn=False,
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
other_env=env)
show_results([results_list], data_in_df, graph=True)
###Output
Starting simulation for agent: Agent_0. 5268 days of simulation to go.
Date 2014-12-22 00:00:00 (simulating until 2014-12-31 00:00:00). Time: 0.2823922634124756s. Value: 877288.3300000005...Sharpe ratio: 2.2911390004146073
Cum. Ret.: 87.25733300000006
AVG_DRET: 0.000868830678214477
STD_DRET: 0.0060198265724509415
Final value: 882573.3300000005
----------------------------------------------------------------------------------------------------
###Markdown
Ok, let's save that
###Code
import pickle
with open('../../data/simple_q_learner_1000_states_full_training.pkl', 'wb') as best_agent:
pickle.dump(agents[0], best_agent)
###Output
_____no_output_____
###Markdown
Let's run the trained agent, with the test set First, a non-learning test: this scenario is worse than what is achievable (in fact, the Q-learner can keep learning from past samples in the test set without compromising causality).
###Code
TEST_DAYS_AHEAD = 20
env.set_test_data(total_data_test_df, TEST_DAYS_AHEAD)
tic = time()
results_list = sim.simulate_period(total_data_test_df,
SYMBOL,
agents[0],
learn=False,
starting_days_ahead=TEST_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_test_df, graph=True)
###Output
Starting simulation for agent: Agent_0. 484 days of simulation to go.
Date 2016-12-28 00:00:00 (simulating until 2016-12-30 00:00:00). Time: 0.17544817924499512s. Value: 12099.470000000001.Epoch: 6
Elapsed time: 8.874800443649292 seconds.
Random Actions Rate: 0.024544019369877303
Sharpe ratio: 1.3139195350252832
Cum. Ret.: 0.2099470000000001
AVG_DRET: 0.00040755281995981493
STD_DRET: 0.004923970055971514
Final value: 12099.470000000001
----------------------------------------------------------------------------------------------------
###Markdown
And now a "realistic" test, in which the learner continues to learn from past samples in the test set (it even makes some random moves, though very few).
###Code
env.set_test_data(total_data_test_df, TEST_DAYS_AHEAD)
tic = time()
results_list = sim.simulate_period(total_data_test_df,
SYMBOL,
agents[0],
learn=True,
starting_days_ahead=TEST_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_test_df, graph=True)
###Output
Starting simulation for agent: Agent_0. 484 days of simulation to go.
Date 2016-12-28 00:00:00 (simulating until 2016-12-30 00:00:00). Time: 0.17123079299926758s. Value: 10595.430000000008.Epoch: 6
Elapsed time: 9.082789421081543 seconds.
Random Actions Rate: 0.02338666058186899
Sharpe ratio: 0.45880561807905557
Cum. Ret.: 0.05954300000000079
AVG_DRET: 0.000130165710975031
STD_DRET: 0.004503686357325763
Final value: 10595.430000000008
----------------------------------------------------------------------------------------------------
###Markdown
What are the metrics for "holding the position"?
###Code
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(data_test_df['Close'].iloc[TEST_DAYS_AHEAD:]))))
###Output
Sharpe ratio: 0.44271542660031676
Cum. Ret.: 0.1070225832012679
AVG_DRET: 0.00025103195406808796
STD_DRET: 0.009001287260690292
Final value: 223.53
|
colab-notebooks/Multitrack_MusicVAE.ipynb | ###Markdown
Multitrack MusicVAE: Learning a Latent Space of Multitrack Measures ___Ian Simon, Adam Roberts, Colin Raffel, Jesse Engel, Curtis Hawthorne, Douglas Eck___[MusicVAE](https://g.co/magenta/music-vae) learns a latent space of musical sequences. Here we apply the MusicVAE framework to single measures of multi-instrument General MIDI, a symbolic music representation that uses a standard set of 128 instrument sounds.The models in this notebook are capable of encoding and decoding single measures of up to 8 tracks, optionally conditioned on an underlying chord. Encoding transforms a single measure into a vector in a latent space, and decoding transforms a latent vector back into a measure. Both encoding and decoding are performed hierarchically, with one level operating on tracks and another operating on the notes (and choice of instrument) in each track.See our [arXiv paper](https://arxiv.org/abs/1806.00195) for more details, along with our [blog post](http://g.co/magenta/multitrack) with links to JavaScript CodePens. Environment Setup
###Code
#@title Setup Environment
print('Copying checkpoints and modified SGM SoundFont (https://sites.google.com/site/soundfonts4u) from GCS.')
print('This will take a few minutes...')
!gsutil -q -m cp gs://download.magenta.tensorflow.org/models/music_vae/multitrack/* /content/
!gsutil -q -m cp gs://download.magenta.tensorflow.org/soundfonts/SGM-v2.01-Sal-Guit-Bass-V1.3.sf2 /content/
print('Installing dependencies...')
!apt-get update -qq && apt-get install -qq libfluidsynth1 build-essential libasound2-dev libjack-dev
!pip install -qU magenta pyfluidsynth pretty_midi
import ctypes.util
def proxy_find_library(lib):
if lib == 'fluidsynth':
return 'libfluidsynth.so.1'
else:
return ctypes.util.find_library(lib)
ctypes.util.find_library = proxy_find_library
print('Importing libraries...')
import numpy as np
import os
import tensorflow.compat.v1 as tf
from google.colab import files
import magenta.music as mm
from magenta.music.sequences_lib import concatenate_sequences
from magenta.models.music_vae import configs
from magenta.models.music_vae.trained_model import TrainedModel
tf.disable_v2_behavior()
print('Done!')
#@title Definitions
BATCH_SIZE = 4
Z_SIZE = 512
TOTAL_STEPS = 512
BAR_SECONDS = 2.0
CHORD_DEPTH = 49
SAMPLE_RATE = 44100
SF2_PATH = '/content/SGM-v2.01-Sal-Guit-Bass-V1.3.sf2'
# Play sequence using SoundFont.
def play(note_sequences):
if not isinstance(note_sequences, list):
note_sequences = [note_sequences]
for ns in note_sequences:
mm.play_sequence(ns, synth=mm.fluidsynth, sf2_path=SF2_PATH)
# Spherical linear interpolation.
def slerp(p0, p1, t):
"""Spherical linear interpolation."""
omega = np.arccos(np.dot(np.squeeze(p0/np.linalg.norm(p0)), np.squeeze(p1/np.linalg.norm(p1))))
so = np.sin(omega)
return np.sin((1.0-t)*omega) / so * p0 + np.sin(t*omega)/so * p1
# Download sequence.
def download(note_sequence, filename):
mm.sequence_proto_to_midi_file(note_sequence, filename)
files.download(filename)
# Chord encoding tensor.
def chord_encoding(chord):
index = mm.TriadChordOneHotEncoding().encode_event(chord)
c = np.zeros([TOTAL_STEPS, CHORD_DEPTH])
c[0,0] = 1.0
c[1:,index] = 1.0
return c
# Trim sequences to exactly one bar.
def trim_sequences(seqs, num_seconds=BAR_SECONDS):
for i in range(len(seqs)):
seqs[i] = mm.extract_subsequence(seqs[i], 0.0, num_seconds)
seqs[i].total_time = num_seconds
# Consolidate instrument numbers by MIDI program.
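# Non-drum tracks get instrument numbers 1-8; once more than 8 programs appear
# the numbering skips 9, which is reserved for drums (all drum notes are routed
# to instrument 9).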
def fix_instruments_for_concatenation(note_sequences):
instruments = {}
for i in range(len(note_sequences)):
for note in note_sequences[i].notes:
if not note.is_drum:
if note.program not in instruments:
if len(instruments) >= 8:
instruments[note.program] = len(instruments) + 2
else:
instruments[note.program] = len(instruments) + 1
note.instrument = instruments[note.program]
else:
note.instrument = 9
###Output
_____no_output_____
###Markdown
Chord-Conditioned Model
###Code
#@title Load Checkpoint
config = configs.CONFIG_MAP['hier-multiperf_vel_1bar_med_chords']
model = TrainedModel(
config, batch_size=BATCH_SIZE,
checkpoint_dir_or_path='/content/model_chords_fb64.ckpt')
#@title Same Chord, Random Styles
chord = 'C' #@param {type:"string"}
temperature = 0.2 #@param {type:"slider", min:0.01, max:1.5, step:0.01}
seqs = model.sample(n=BATCH_SIZE, length=TOTAL_STEPS, temperature=temperature,
c_input=chord_encoding(chord))
trim_sequences(seqs)
play(seqs)
#@title Same Style, Chord Progression
chord_1 = 'C' #@param {type:"string"}
chord_2 = 'Caug' #@param {type:"string"}
chord_3 = 'Am' #@param {type:"string"}
chord_4 = 'E' #@param {type:"string"}
chords = [chord_1, chord_2, chord_3, chord_4]
temperature = 0.2 #@param {type:"slider", min:0.01, max:1.5, step:0.01}
z = np.random.normal(size=[1, Z_SIZE])
seqs = [
model.decode(length=TOTAL_STEPS, z=z, temperature=temperature,
c_input=chord_encoding(c))[0]
for c in chords
]
trim_sequences(seqs)
fix_instruments_for_concatenation(seqs)
prog_ns = concatenate_sequences(seqs)
play(prog_ns)
mm.plot_sequence(prog_ns)
#@title (Optional) Save Arrangement to MIDI
download(prog_ns, '_'.join(chords) + '.mid')
#@title Style Interpolation, Repeating Chord Progression
chord_1 = 'Dm' #@param {type:"string"}
chord_2 = 'F' #@param {type:"string"}
chord_3 = 'Am' #@param {type:"string"}
chord_4 = 'G' #@param {type:"string"}
chords = [chord_1, chord_2, chord_3, chord_4]
num_bars = 32 #@param {type:"slider", min:4, max:64, step:4}
temperature = 0.2 #@param {type:"slider", min:0.01, max:1.5, step:0.01}
z1 = np.random.normal(size=[Z_SIZE])
z2 = np.random.normal(size=[Z_SIZE])
z = np.array([slerp(z1, z2, t)
for t in np.linspace(0, 1, num_bars)])
seqs = [
model.decode(length=TOTAL_STEPS, z=z[i:i+1, :], temperature=temperature,
c_input=chord_encoding(chords[i % 4]))[0]
for i in range(num_bars)
]
trim_sequences(seqs)
fix_instruments_for_concatenation(seqs)
prog_interp_ns = concatenate_sequences(seqs)
play(prog_interp_ns)
mm.plot_sequence(prog_interp_ns)
#@title (Optional) Save to MIDI
download(prog_interp_ns, 'interp_' + '_'.join(chords) + '.mid')
###Output
_____no_output_____
###Markdown
Unconditioned Model
###Code
#@title Load Checkpoint
config = configs.CONFIG_MAP['hier-multiperf_vel_1bar_med']
model = TrainedModel(
config, batch_size=BATCH_SIZE,
checkpoint_dir_or_path='/content/model_fb256.ckpt')
model._config.data_converter._max_tensors_per_input = None
#@title Random Samples
temperature = 0.2 #@param {type:"slider", min:0.01, max:1.5, step:0.01}
seqs = model.sample(n=BATCH_SIZE, length=TOTAL_STEPS, temperature=temperature)
trim_sequences(seqs)
play(seqs)
#@title Interpolation Between Random Samples
num_bars = 32 #@param {type:"slider", min:4, max:64, step:1}
temperature = 0.2 #@param {type:"slider", min:0.01, max:1.5, step:0.01}
z1 = np.random.normal(size=[Z_SIZE])
z2 = np.random.normal(size=[Z_SIZE])
z = np.array([slerp(z1, z2, t)
for t in np.linspace(0, 1, num_bars)])
seqs = model.decode(length=TOTAL_STEPS, z=z, temperature=temperature)
trim_sequences(seqs)
fix_instruments_for_concatenation(seqs)
interp_ns = concatenate_sequences(seqs)
play(interp_ns)
mm.plot_sequence(interp_ns)
#@title (Optional) Save to MIDI
download(interp_ns, 'interp.mid')
#@title Upload MIDI Files to Reconstruct
midi_files = files.upload().values()
seqs = [mm.midi_to_sequence_proto(midi) for midi in midi_files]
uploaded_seqs = []
for seq in seqs:
_, tensors, _, _ = model._config.data_converter.to_tensors(seq)
uploaded_seqs.extend(model._config.data_converter.from_tensors(tensors))
trim_sequences(uploaded_seqs)
print('Parsed %d measures' % len(uploaded_seqs))
#@title Encode and Decode
index = 0 #@param {type:"integer"}
temperature = 0.2 #@param {type:"slider", min:0.01, max:1.5, step:0.01}
z, _, _ = model.encode([uploaded_seqs[index]])
reconstructed_seq = model.decode(z, length=TOTAL_STEPS,
temperature=temperature)[0]
trim_sequences([reconstructed_seq])
print('Original')
play(uploaded_seqs[index])
mm.plot_sequence(uploaded_seqs[index])
print('Reconstructed')
play(reconstructed_seq)
mm.plot_sequence(reconstructed_seq)
#@title Interpolation Between Encodings
index_1 = 0 #@param {type:"integer"}
index_2 = 1 #@param {type:"integer"}
num_bars = 32 #@param {type:"slider", min:4, max:64, step:4}
temperature = 0.2 #@param {type:"slider", min:0.01, max:1.5, step:0.01}
z1, _, _ = model.encode([uploaded_seqs[index_1]])
z2, _, _ = model.encode([uploaded_seqs[index_2]])
z = np.array([slerp(np.squeeze(z1), np.squeeze(z2), t)
for t in np.linspace(0, 1, num_bars)])
seqs = model.decode(length=TOTAL_STEPS, z=z, temperature=temperature)
trim_sequences(seqs)
fix_instruments_for_concatenation(seqs)
recon_interp_ns = concatenate_sequences(seqs)
play(recon_interp_ns)
mm.plot_sequence(recon_interp_ns)
#@title (Optional) Save to MIDI
download(recon_interp_ns, 'recon_interp.mid')
###Output
_____no_output_____
###Markdown
Multitrack MusicVAE: Learning a Latent Space of Multitrack Measures ___Ian Simon, Adam Roberts, Colin Raffel, Jesse Engel, Curtis Hawthorne, Douglas Eck___[MusicVAE](https://g.co/magenta/music-vae) learns a latent space of musical sequences. Here we apply the MusicVAE framework to single measures of multi-instrument General MIDI, a symbolic music representation that uses a standard set of 128 instrument sounds.The models in this notebook are capable of encoding and decoding single measures of up to 8 tracks, optionally conditioned on an underlying chord. Encoding transforms a single measure into a vector in a latent space, and decoding transforms a latent vector back into a measure. Both encoding and decoding are performed hierarchically, with one level operating on tracks and another operating on the notes (and choice of instrument) in each track.See our [arXiv paper](https://arxiv.org/abs/1806.00195) for more details, along with our [blog post](http://g.co/magenta/multitrack) with links to JavaScript CodePens. Environment Setup
###Code
#@title Setup Environment
print('Copying checkpoints and modified SGM SoundFont (https://sites.google.com/site/soundfonts4u) from GCS.')
print('This will take a few minutes...')
!gsutil -q -m cp gs://download.magenta.tensorflow.org/models/music_vae/multitrack/* /content/
!gsutil -q -m cp gs://download.magenta.tensorflow.org/soundfonts/SGM-v2.01-Sal-Guit-Bass-V1.3.sf2 /content/
print('Installing dependencies...')
!apt-get update -qq && apt-get install -qq libfluidsynth1 build-essential libasound2-dev libjack-dev
!pip install -qU magenta pyfluidsynth pretty_midi
print('Importing libraries...')
import numpy as np
import os
import tensorflow.compat.v1 as tf
from google.colab import files
import magenta.music as mm
from magenta.music.sequences_lib import concatenate_sequences
from magenta.models.music_vae import configs
from magenta.models.music_vae.trained_model import TrainedModel
tf.disable_v2_behavior()
print('Done!')
#@title Definitions
BATCH_SIZE = 4
Z_SIZE = 512
TOTAL_STEPS = 512
BAR_SECONDS = 2.0
CHORD_DEPTH = 49
SAMPLE_RATE = 44100
SF2_PATH = '/content/SGM-v2.01-Sal-Guit-Bass-V1.3.sf2'
# Play sequence using SoundFont.
def play(note_sequences):
if not isinstance(note_sequences, list):
note_sequences = [note_sequences]
for ns in note_sequences:
mm.play_sequence(ns, synth=mm.fluidsynth, sf2_path=SF2_PATH)
# Spherical linear interpolation.
def slerp(p0, p1, t):
"""Spherical linear interpolation."""
omega = np.arccos(np.dot(np.squeeze(p0/np.linalg.norm(p0)), np.squeeze(p1/np.linalg.norm(p1))))
so = np.sin(omega)
return np.sin((1.0-t)*omega) / so * p0 + np.sin(t*omega)/so * p1
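# Illustrative sanity check (not part of the original notebook): the endpoints are
# recovered exactly at t=0 and t=1, and interpolating between unit vectors stays on
# the unit sphere, unlike plain linear interpolation.
_p0, _p1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
assert np.allclose(slerp(_p0, _p1, 0.0), _p0)
assert np.allclose(slerp(_p0, _p1, 1.0), _p1)
assert np.isclose(np.linalg.norm(slerp(_p0, _p1, 0.5)), 1.0)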
# Download sequence.
def download(note_sequence, filename):
mm.sequence_proto_to_midi_file(note_sequence, filename)
files.download(filename)
# Chord encoding tensor.
def chord_encoding(chord):
index = mm.TriadChordOneHotEncoding().encode_event(chord)
c = np.zeros([TOTAL_STEPS, CHORD_DEPTH])
c[0,0] = 1.0  # first step is flagged as "no chord" (one-hot index 0)
c[1:,index] = 1.0  # remaining steps carry the chord's one-hot index
return c
# Trim sequences to exactly one bar.
def trim_sequences(seqs, num_seconds=BAR_SECONDS):
for i in range(len(seqs)):
seqs[i] = mm.extract_subsequence(seqs[i], 0.0, num_seconds)
seqs[i].total_time = num_seconds
# Consolidate instrument numbers by MIDI program.
def fix_instruments_for_concatenation(note_sequences):
instruments = {}
for i in range(len(note_sequences)):
for note in note_sequences[i].notes:
if not note.is_drum:
if note.program not in instruments:
if len(instruments) >= 8:
instruments[note.program] = len(instruments) + 2  # skip slot 9, reserved for drums
else:
instruments[note.program] = len(instruments) + 1
note.instrument = instruments[note.program]
else:
note.instrument = 9  # drums always map to instrument 9 (General MIDI percussion channel 10)
###Output
_____no_output_____
###Markdown
Chord-Conditioned Model
###Code
#@title Load Checkpoint
config = configs.CONFIG_MAP['hier-multiperf_vel_1bar_med_chords']
model = TrainedModel(
config, batch_size=BATCH_SIZE,
checkpoint_dir_or_path='/content/model_chords_fb64.ckpt')
#@title Same Chord, Random Styles
chord = 'C' #@param {type:"string"}
temperature = 0.2 #@param {type:"slider", min:0.01, max:1.5, step:0.01}
seqs = model.sample(n=BATCH_SIZE, length=TOTAL_STEPS, temperature=temperature,
c_input=chord_encoding(chord))
trim_sequences(seqs)
play(seqs)
#@title Same Style, Chord Progression
chord_1 = 'C' #@param {type:"string"}
chord_2 = 'Caug' #@param {type:"string"}
chord_3 = 'Am' #@param {type:"string"}
chord_4 = 'E' #@param {type:"string"}
chords = [chord_1, chord_2, chord_3, chord_4]
temperature = 0.2 #@param {type:"slider", min:0.01, max:1.5, step:0.01}
z = np.random.normal(size=[1, Z_SIZE])
seqs = [
model.decode(length=TOTAL_STEPS, z=z, temperature=temperature,
c_input=chord_encoding(c))[0]
for c in chords
]
trim_sequences(seqs)
fix_instruments_for_concatenation(seqs)
prog_ns = concatenate_sequences(seqs)
play(prog_ns)
mm.plot_sequence(prog_ns)
#@title (Optional) Save Arrangement to MIDI
download(prog_ns, '_'.join(chords) + '.mid')
#@title Style Interpolation, Repeating Chord Progression
chord_1 = 'Dm' #@param {type:"string"}
chord_2 = 'F' #@param {type:"string"}
chord_3 = 'Am' #@param {type:"string"}
chord_4 = 'G' #@param {type:"string"}
chords = [chord_1, chord_2, chord_3, chord_4]
num_bars = 32 #@param {type:"slider", min:4, max:64, step:4}
temperature = 0.2 #@param {type:"slider", min:0.01, max:1.5, step:0.01}
z1 = np.random.normal(size=[Z_SIZE])
z2 = np.random.normal(size=[Z_SIZE])
z = np.array([slerp(z1, z2, t)
for t in np.linspace(0, 1, num_bars)])
seqs = [
model.decode(length=TOTAL_STEPS, z=z[i:i+1, :], temperature=temperature,
c_input=chord_encoding(chords[i % 4]))[0]
for i in range(num_bars)
]
trim_sequences(seqs)
fix_instruments_for_concatenation(seqs)
prog_interp_ns = concatenate_sequences(seqs)
play(prog_interp_ns)
mm.plot_sequence(prog_interp_ns)
#@title (Optional) Save to MIDI
download(prog_interp_ns, 'interp_' + '_'.join(chords) + '.mid')
###Output
_____no_output_____
###Markdown
Unconditioned Model
###Code
#@title Load Checkpoint
config = configs.CONFIG_MAP['hier-multiperf_vel_1bar_med']
model = TrainedModel(
config, batch_size=BATCH_SIZE,
checkpoint_dir_or_path='/content/model_fb256.ckpt')
model._config.data_converter._max_tensors_per_input = None
#@title Random Samples
temperature = 0.2 #@param {type:"slider", min:0.01, max:1.5, step:0.01}
seqs = model.sample(n=BATCH_SIZE, length=TOTAL_STEPS, temperature=temperature)
trim_sequences(seqs)
play(seqs)
#@title Interpolation Between Random Samples
num_bars = 32 #@param {type:"slider", min:4, max:64, step:1}
temperature = 0.2 #@param {type:"slider", min:0.01, max:1.5, step:0.01}
z1 = np.random.normal(size=[Z_SIZE])
z2 = np.random.normal(size=[Z_SIZE])
z = np.array([slerp(z1, z2, t)
for t in np.linspace(0, 1, num_bars)])
seqs = model.decode(length=TOTAL_STEPS, z=z, temperature=temperature)
trim_sequences(seqs)
fix_instruments_for_concatenation(seqs)
interp_ns = concatenate_sequences(seqs)
play(interp_ns)
mm.plot_sequence(interp_ns)
#@title (Optional) Save to MIDI
download(interp_ns, 'interp.mid')
#@title Upload MIDI Files to Reconstruct
midi_files = files.upload().values()
seqs = [mm.midi_to_sequence_proto(midi) for midi in midi_files]
uploaded_seqs = []
for seq in seqs:
_, tensors, _, _ = model._config.data_converter.to_tensors(seq)
uploaded_seqs.extend(model._config.data_converter.from_tensors(tensors))
trim_sequences(uploaded_seqs)
print('Parsed %d measures' % len(uploaded_seqs))
#@title Encode and Decode
index = 0 #@param {type:"integer"}
temperature = 0.2 #@param {type:"slider", min:0.01, max:1.5, step:0.01}
z, _, _ = model.encode([uploaded_seqs[index]])
reconstructed_seq = model.decode(z, length=TOTAL_STEPS,
temperature=temperature)[0]
trim_sequences([reconstructed_seq])
print('Original')
play(uploaded_seqs[index])
mm.plot_sequence(uploaded_seqs[index])
print('Reconstructed')
play(reconstructed_seq)
mm.plot_sequence(reconstructed_seq)
#@title Interpolation Between Encodings
index_1 = 0 #@param {type:"integer"}
index_2 = 1 #@param {type:"integer"}
num_bars = 32 #@param {type:"slider", min:4, max:64, step:4}
temperature = 0.2 #@param {type:"slider", min:0.01, max:1.5, step:0.01}
z1, _, _ = model.encode([uploaded_seqs[index_1]])
z2, _, _ = model.encode([uploaded_seqs[index_2]])
z = np.array([slerp(np.squeeze(z1), np.squeeze(z2), t)
for t in np.linspace(0, 1, num_bars)])
seqs = model.decode(length=TOTAL_STEPS, z=z, temperature=temperature)
trim_sequences(seqs)
fix_instruments_for_concatenation(seqs)
recon_interp_ns = concatenate_sequences(seqs)
play(recon_interp_ns)
mm.plot_sequence(recon_interp_ns)
#@title (Optional) Save to MIDI
download(recon_interp_ns, 'recon_interp.mid')
###Output
_____no_output_____ |
docs/source/include/notebooks/sbw25_feature_template.ipynb | ###Markdown
The *Pseudomonas fluorescens* SBW25 knowledge base
###Code
%load_ext autoreload
%autoreload 2
from IPython.display import IFrame, clear_output, Image, display
#from GenDBScraper.Utilities import nb_utilities as nbu
# Configure logging.
from GenDBScraper.StringDBScraper import StringDBScraper, stringdb_query
import ipywidgets as widgets
import pandas
import logging
import ipyaggrid
from Bio import SeqIO
from io import StringIO
import re
from GenDBScraper.PseudomonasDotComScraper import PseudomonasDotComScraper, pdc_query
# Imports required by run_stdb below (missing from the original notebook):
from bokeh.io import show
from bokeh.layouts import column
from bokeh.models import ColumnDataSource, DataTable, TableColumn, Panel, Tabs
# %load /home/grotec/repos/GenDBScraper/GenDBScraper/Utilities/nb_utilities.py
logging.basicConfig(format='%(asctime)s %(levelname)s: %(message)s', level=logging.DEBUG)
logging.debug("ha")
def make_table(strain, locus_tag):
""" Get the data for strain and locus_tag from pseudomonas.com and render as a table. """
# run_pdc and get_grids are defined later in this cell (loaded via %load above).
display(get_grids(data_tables=run_pdc(strain, locus_tag)))
def make_table_button(strain, locus_tag):
""" Return a button. If clicked, display a table for the corresponding data from pdc. """
def table_button_clicked(b):
""" Callback for click on the button """
make_table(strain, locus_tag)
button = widgets.Button(description=locus_tag)
button.on_click(table_button_clicked)
return button
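# Example usage (illustrative, not part of the original notebook), mirroring the query
# issued at the bottom of this notebook:
# display(make_table_button('UCBPP-PA14', 'pa14_67150'))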
def run_pdc(strain, locus_tag):
""" Get data for strain and locus tag from pseudomonas.com """
pdc = PseudomonasDotComScraper(query=pdc_query(strain=strain, feature=locus_tag))
query_string = "__".join([pdc.query[0].strain, pdc.query[0].feature])
pdc.connect()
pdc.run_query()
results = pdc.results[query_string]
return results
def get_grids(data_tables):
""" Create grid view of all data tables"""
if not isinstance(data_tables, dict):
raise TypeError("Input parameter 'data_tables' must be of type dict. Received type is {}".format(type(data_tables)))
tabs = widgets.Tab()
children = []
titles = []
skipped = ["Ortholog xml"]
for i, title in enumerate(data_tables.keys()):
if title in skipped:
logging.debug("Skipping %s", title)
continue
df = data_tables[title]
if df is None:
logging.debug("Skipping %s", title)
continue
if isinstance(df, pandas.DataFrame):
if df.empty:
logging.debug("Skipping %s", title)
continue
df = df.rename(str, axis='columns')
grid_options={'columnDefs' : [{'field': c} for c in df.columns],
'enableSorting': True,
'enableFilter': True,
'enableColResize': True,
'enableRangeSelection': True,
}
if title.lower() == "ortholog group":
for column_def in grid_options['columnDefs']:
pattern = re.compile(r"^GI$")
if pattern.match(column_def['field']):
column_def['cellRenderer'] = """function(params) { return '<a href=http://www.ncbi.nlm.nih.gov/protein/'+params.value+' target=_blank>'+params.value+'</a>'; }"""
if title.lower() == "ortholog cluster":
for column_def in grid_options['columnDefs']:
pattern = re.compile(r"^GI \(Strain [1,2]\)$")
if pattern.match(column_def['field']):
column_def['cellRenderer'] = """function(params) { return '<a href=http://www.ncbi.nlm.nih.gov/protein/'+params.value+' target=_blank>'+params.value+'</a>'; }"""
if title.lower() == "cross-references":
for column_def in grid_options['columnDefs']:
pattern = re.compile(r"^[U,u]rl$")
if pattern.match(column_def['field']):
column_def['cellRenderer'] = """function(params) { return '<a href='+params.value+' target=_blank>'+params.value+'</a>'; }"""
if title.lower() == "individual mappings":
for column_def in grid_options['columnDefs']:
pattern = re.compile(r"^PMID$")
if pattern.match(column_def['field']):
column_def['cellRenderer'] = """function(params) { return '<a href=http://ncbi.nlm.nih.gov/pubmed/'+params.value+' target=_blank>'+params.value+'</a>'; }"""
if title.lower() == "gene ontology":
for column_def in grid_options['columnDefs']:
# GO Accession
pattern = re.compile(r"^Accession$")
if pattern.match(column_def['field']):
column_def['cellRenderer'] = """function(params) { return '<a href=http://www.ebi.ac.uk/QuickGO/GTerm?id='+params.value+' target=_blank>'+params.value+'</a>'; }"""
# ECO accession
pattern = re.compile(r"^Evidence Ontology \(ECO\) Code$")
if pattern.match(column_def['field']):
column_def['cellRenderer'] = """function(params) { return '<a href=http://www.ebi.ac.uk/ontology-lookup/?termId='+params.value+' target=_blank>'+params.value+'</a>'; }"""
pattern = re.compile(r"^Reference$")
if pattern.match(column_def['field']):
column_def['cellRenderer'] = """function(params) { return '<a href=http://ncbi.nlm.nih.gov/pubmed/'+params.value+' target=_blank>'+params.value+'</a>'; }"""
if title.lower() == "functional predictions from interpro":
for column_def in grid_options['columnDefs']:
pattern = re.compile(r"^Interpro Accession$")
if pattern.match(column_def['field']):
column_def['cellRenderer'] = """function(params) { return '<a href=http://www.ebi.ac.uk/interpro/entry/'+params.value+' target=_blank>'+params.value+'</a>'; }"""
if re.match(r'^transposon.*$', title.lower() ):
for column_def in grid_options['columnDefs']:
pattern = re.compile(r"^Reference$")
if pattern.match(column_def['field']):
column_def['cellRenderer'] = """function(params) { return '<a href=http://ncbi.nlm.nih.gov/pubmed/'+params.value+' target=_blank>'+params.value+'</a>'; }"""
# if title.lower() == 'genes':
# for column_def in grid_options['columnDefs']:
# pattern = re.compile(r"^Unnamed: 7$",flags=re.IGNORECASE)
# if pattern.match(column_def['field']):
# column_def['cellRenderer'] = """function(params) {
# let v = params.value;
# function clicked(){
# let new_cell = Jupyter.notebook.insert_cell_below().set_text("This feature is not implemented yet.");
# }
#
# let b = document.createElement('button');
# b.innerHTML = v;
# b.style = "background-color:bisque; margin:1px 10px 1px 2px;";
# b.title = "Open gene table";
# b.addEventListener("click", function (){clicked()}, false);
# // b.addEventListener("click", function (){clicked()}, false);
#
# return b;
#} """
if title.lower() == 'references':
for column_def in grid_options['columnDefs']:
pattern = re.compile(r"^Pubmed_id$",flags=re.IGNORECASE)
if pattern.match(column_def['field']):
column_def['cellRenderer'] = """function(params) { return '<a href=http://ncbi.nlm.nih.gov/pubmed/'+params.value+' target=_blank>'+params.value+'</a>'; }"""
g = ipyaggrid.Grid(grid_data = df,
grid_options=grid_options,
center=False,
theme='ag-theme-fresh',
grid_options_multi=[],
columns_fit='',
index=True,
keep_multiindex=False,
compress_data=True,
quick_filter=True,
export_csv=True,
export_excel=True,
show_toggle_delete=False,
show_toggle_edit=False,
paste_from_excel=True,
export_mode='disabled',
export_to_df=True,
hide_grid=False,
menu=None,
)
children.append(g)
elif isinstance(df, dict):
if df == {}:
logging.debug("Skipping %s", title)
continue
g = get_grids(df)
children.append(g)
elif isinstance(df, list):
# Lists have no grid renderer; skip them so children and titles stay in sync.
logging.debug("Skipping %s", title)
continue
titles.append(title)
tabs.children = children
assert len(children) == len(titles)
for i, title in enumerate(titles):
tabs.set_title(i, title)
return tabs
# Need to treat each tab and subtabs individually
def get_single_grid(df, title, column_formatting):
df = df.rename(str, axis='columns')
grid_options={'columnDefs' : [{'field': c} for c in df.columns],
'enableSorting': True,
'enableFilter': True,
'enableColResize': True,
'enableRangeSelection': True,
}
for cd in grid_options['columnDefs']:
field = cd['field']
if cd['field'] in column_formatting.keys():
cd['cellRenderer'] = column_formatting[field]
grid = ipyaggrid.Grid(grid_data = df,
grid_options=grid_options,
center=False,
theme='ag-theme-fresh',
grid_options_multi=[],
columns_fit='',
index=False,
keep_multiindex=False,
compress_data=True,
quick_filter=True,
export_csv=True,
export_excel=True,
show_toggle_delete=False,
show_toggle_edit=False,
paste_from_excel=True,
export_mode='disabled',
export_to_df=True,
hide_grid=False,
menu=None,
)
return grid
def apply_column_formatting(tabs, titles=[], formatting_string=""" """):
""" Apply the given formatting string to the tab specified by titles
:param tabs: The tab widget to apply the formatting to.
:type tabs: ipywidgets.Tab
:param titles: Sequence of tab titles and column titles needed to navigate to the tab in question
:type titles: list
:param formatting_string: The formatting string to apply to the specified column.
:type formatting_string: str
"""
t = tabs
# Navigate to the correct tab by searching for tab titles [vomit].
for title in titles[:-1]:
kids = t.children
# Find index.
logging.debug("Getting index for title %s", title)
current_titles = [t.get_title(i) for i in range(len(kids))]
logging.debug("Current available titles are %s", str(current_titles))
idx = [ct == title for ct in current_titles].index(True)
logging.debug("Found idx = %s", str(idx))
t = kids[idx]
column_defs = t.grid_options["columnDefs"]
locate_key = None
for cd in column_defs:
if cd['field'] == titles[-1]:
cd["cellRenderer"] = formatting_string
def run_stdb(locus_tag):
clear_output(wait=True)
gene_sub_pattern = re.compile(r'([a-z](?=[0-9]))')
gene=gene_sub_pattern.sub(r'\1_', locus_tag)
stdb = StringDBScraper(query=stringdb_query(taxonId=216595, features=[gene]))
stdb.connect()
stdb.update_features()
stdb_results = dict()
stdb_results['Network Image'] = stdb.network_image()
stdb_results['Network Interactions'] = stdb.network_interactions()
stdb_results['Interaction Partners'] = stdb.interaction_partners(required_score=300)
stdb_results['Functional Enrichments'] = stdb.functional_enrichments()
stdb_results['Interaction Enrichments'] = stdb.interaction_enrichments()
with open(stdb_results['Network Image'], 'rb') as fp:
image_widget = widgets.Image(value=fp.read(), format='svg')
tabs = []
for key in stdb_results.keys():
if key == 'Network Image':
continue
result = stdb_results[key]
cds = ColumnDataSource(result)
data_table = DataTable(source=cds,
columns=[TableColumn(field=c, title=c, width=80) for c in list(result.columns)],
fit_columns=False
)
tabs.append(Panel(child=data_table, title=key))
stdb_tabs = Tabs(tabs=tabs)
display(image_widget)
show(column(stdb_tabs, width=500))
results = run_pdc(strain="UCBPP-PA14", locus_tag=r'pa14_67150')
###Output
2019-07-15 11:12:03,792 INFO: Connected to https://www.pseudomonas.com .
2019-07-15 11:12:04,513 INFO: Connected to https://www.pseudomonas.com/primarySequenceFeature/list?c1=name&v1=pa14_67150&e1=1&term1=UCBPP-PA14&assembly=complete .
2019-07-15 11:12:05,249 INFO: Connected to https://www.pseudomonas.com/feature/show?id=1661770&view=overview .
2019-07-15 11:12:05,988 INFO: Connected to https://www.pseudomonas.com/feature/show?id=1661770&view=overview .
2019-07-15 11:12:07,608 INFO: Connected to https://www.pseudomonas.com/feature/show?id=1661770&view=sequence .
2019-07-15 11:12:08,347 INFO: Connected to https://www.pseudomonas.com/feature/show?id=1661770&view=functions .
2019-07-15 11:12:08,575 INFO: Querying Motifs is not implemented yet.
2019-07-15 11:12:09,270 INFO: Connected to https://www.pseudomonas.com/feature/show?id=1661770&view=operons .
2019-07-15 11:12:09,995 INFO: Connected to https://www.pseudomonas.com/feature/show?id=1661770&view=transposons .
2019-07-15 11:12:10,735 INFO: Connected to https://www.pseudomonas.com/feature/show?id=1661770&view=updates .
2019-07-15 11:12:11,749 INFO: Connected to https://www.pseudomonas.com/orthologs/list?format=tab&extension=tab&id=1661770 .
2019-07-15 11:12:12,659 INFO: Connected to http://pseudoluge.pseudomonas.com/named/download/xml?gene_id=1661770 .
###Markdown
Data from pseudomonas.com
###Code
get_grids(data_tables=results)
logger=logging.getLogger()
logger.getEffectiveLevel()
logging.getLevelName(20)
grids=get_grids(data_tables=results)
type(grids.children[0])
###Output
_____no_output_____ |
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-08-28.ipynb | ###Markdown
RadarCOVID-Report Data Extraction
###Code
import datetime
import json
import logging
import os
import shutil
import tempfile
import textwrap
import uuid
import matplotlib.pyplot as plt
import matplotlib.ticker
import numpy as np
import pandas as pd
import pycountry
import retry
import seaborn as sns
%matplotlib inline
current_working_directory = os.environ.get("PWD")
if current_working_directory:
os.chdir(current_working_directory)
sns.set()
matplotlib.rcParams["figure.figsize"] = (15, 6)
extraction_datetime = datetime.datetime.utcnow()
extraction_date = extraction_datetime.strftime("%Y-%m-%d")
extraction_previous_datetime = extraction_datetime - datetime.timedelta(days=1)
extraction_previous_date = extraction_previous_datetime.strftime("%Y-%m-%d")
extraction_date_with_hour = datetime.datetime.utcnow().strftime("%Y-%m-%d@%H")
current_hour = datetime.datetime.utcnow().hour
are_today_results_partial = current_hour != 23
###Output
_____no_output_____
###Markdown
Constants
###Code
from Modules.ExposureNotification import exposure_notification_io
spain_region_country_code = "ES"
germany_region_country_code = "DE"
default_backend_identifier = spain_region_country_code
backend_generation_days = 7 * 2
daily_summary_days = 7 * 4 * 3
daily_plot_days = 7 * 4
tek_dumps_load_limit = daily_summary_days + 1
###Output
_____no_output_____
###Markdown
Parameters
###Code
environment_backend_identifier = os.environ.get("RADARCOVID_REPORT__BACKEND_IDENTIFIER")
if environment_backend_identifier:
report_backend_identifier = environment_backend_identifier
else:
report_backend_identifier = default_backend_identifier
report_backend_identifier
environment_enable_multi_backend_download = \
os.environ.get("RADARCOVID_REPORT__ENABLE_MULTI_BACKEND_DOWNLOAD")
if environment_enable_multi_backend_download:
report_backend_identifiers = None
else:
report_backend_identifiers = [report_backend_identifier]
report_backend_identifiers
environment_invalid_shared_diagnoses_dates = \
os.environ.get("RADARCOVID_REPORT__INVALID_SHARED_DIAGNOSES_DATES")
if environment_invalid_shared_diagnoses_dates:
invalid_shared_diagnoses_dates = environment_invalid_shared_diagnoses_dates.split(",")
else:
invalid_shared_diagnoses_dates = []
invalid_shared_diagnoses_dates
###Output
_____no_output_____
###Markdown
COVID-19 Cases
###Code
report_backend_client = \
exposure_notification_io.get_backend_client_with_identifier(
backend_identifier=report_backend_identifier)
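# The OWID endpoint occasionally fails transiently, so retry the download up to 10 times
# with ~10 s pauses (slight exponential backoff plus random jitter).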
@retry.retry(tries=10, delay=10, backoff=1.1, jitter=(0, 10))
def download_cases_dataframe():
return pd.read_csv("https://raw.githubusercontent.com/owid/covid-19-data/master/public/data/owid-covid-data.csv")
confirmed_df_ = download_cases_dataframe()
confirmed_df_.iloc[0]
confirmed_df = confirmed_df_.copy()
confirmed_df = confirmed_df[["date", "new_cases", "iso_code"]]
confirmed_df.rename(
columns={
"date": "sample_date",
"iso_code": "country_code",
},
inplace=True)
def convert_iso_alpha_3_to_alpha_2(x):
try:
return pycountry.countries.get(alpha_3=x).alpha_2
except Exception as e:
logging.info(f"Error converting country ISO Alpha 3 code '{x}': {repr(e)}")
return None
confirmed_df["country_code"] = confirmed_df.country_code.apply(convert_iso_alpha_3_to_alpha_2)
confirmed_df.dropna(inplace=True)
confirmed_df["sample_date"] = pd.to_datetime(confirmed_df.sample_date, dayfirst=True)
confirmed_df["sample_date"] = confirmed_df.sample_date.dt.strftime("%Y-%m-%d")
confirmed_df.sort_values("sample_date", inplace=True)
confirmed_df.tail()
confirmed_days = pd.date_range(
start=confirmed_df.iloc[0].sample_date,
end=extraction_datetime)
confirmed_days_df = pd.DataFrame(data=confirmed_days, columns=["sample_date"])
confirmed_days_df["sample_date_string"] = \
confirmed_days_df.sample_date.dt.strftime("%Y-%m-%d")
confirmed_days_df.tail()
def sort_source_regions_for_display(source_regions: list) -> list:
if report_backend_identifier in source_regions:
source_regions = [report_backend_identifier] + \
list(sorted(set(source_regions).difference([report_backend_identifier])))
else:
source_regions = list(sorted(source_regions))
return source_regions
report_source_regions = report_backend_client.source_regions_for_date(
date=extraction_datetime.date())
report_source_regions = sort_source_regions_for_display(
source_regions=report_source_regions)
report_source_regions
def get_cases_dataframe(source_regions_for_date_function, columns_suffix=None):
source_regions_at_date_df = confirmed_days_df.copy()
source_regions_at_date_df["source_regions_at_date"] = \
source_regions_at_date_df.sample_date.apply(
lambda x: source_regions_for_date_function(date=x))
source_regions_at_date_df.sort_values("sample_date", inplace=True)
source_regions_at_date_df["_source_regions_group"] = source_regions_at_date_df. \
source_regions_at_date.apply(lambda x: ",".join(sort_source_regions_for_display(x)))
source_regions_at_date_df.tail()
#%%
source_regions_for_summary_df_ = \
source_regions_at_date_df[["sample_date", "_source_regions_group"]].copy()
source_regions_for_summary_df_.rename(columns={"_source_regions_group": "source_regions"}, inplace=True)
source_regions_for_summary_df_.tail()
#%%
confirmed_output_columns = ["sample_date", "new_cases", "covid_cases"]
confirmed_output_df = pd.DataFrame(columns=confirmed_output_columns)
for source_regions_group, source_regions_group_series in \
source_regions_at_date_df.groupby("_source_regions_group"):
source_regions_set = set(source_regions_group.split(","))
confirmed_source_regions_set_df = \
confirmed_df[confirmed_df.country_code.isin(source_regions_set)].copy()
confirmed_source_regions_group_df = \
confirmed_source_regions_set_df.groupby("sample_date").new_cases.sum() \
.reset_index().sort_values("sample_date")
confirmed_source_regions_group_df = \
confirmed_source_regions_group_df.merge(
confirmed_days_df[["sample_date_string"]].rename(
columns={"sample_date_string": "sample_date"}),
how="right")
confirmed_source_regions_group_df["new_cases"] = \
confirmed_source_regions_group_df["new_cases"].clip(lower=0)
confirmed_source_regions_group_df["covid_cases"] = \
confirmed_source_regions_group_df.new_cases.rolling(7, min_periods=0).mean().round()
confirmed_source_regions_group_df = \
confirmed_source_regions_group_df[confirmed_output_columns]
confirmed_source_regions_group_df = confirmed_source_regions_group_df.replace(0, np.nan)
confirmed_source_regions_group_df.fillna(method="ffill", inplace=True)
confirmed_source_regions_group_df = \
confirmed_source_regions_group_df[
confirmed_source_regions_group_df.sample_date.isin(
source_regions_group_series.sample_date_string)]
confirmed_output_df = confirmed_output_df.append(confirmed_source_regions_group_df)
result_df = confirmed_output_df.copy()
result_df.tail()
#%%
result_df.rename(columns={"sample_date": "sample_date_string"}, inplace=True)
result_df = confirmed_days_df[["sample_date_string"]].merge(result_df, how="left")
result_df.sort_values("sample_date_string", inplace=True)
result_df.fillna(method="ffill", inplace=True)
result_df.tail()
#%%
result_df[["new_cases", "covid_cases"]].plot()
if columns_suffix:
result_df.rename(
columns={
"new_cases": "new_cases_" + columns_suffix,
"covid_cases": "covid_cases_" + columns_suffix},
inplace=True)
return result_df, source_regions_for_summary_df_
confirmed_eu_df, source_regions_for_summary_df = get_cases_dataframe(
report_backend_client.source_regions_for_date)
confirmed_es_df, _ = get_cases_dataframe(
lambda date: [spain_region_country_code],
columns_suffix=spain_region_country_code.lower())
###Output
_____no_output_____
###Markdown
Extract API TEKs
###Code
raw_zip_path_prefix = "Data/TEKs/Raw/"
base_backend_identifiers = [report_backend_identifier]
multi_backend_exposure_keys_df = \
exposure_notification_io.download_exposure_keys_from_backends(
backend_identifiers=report_backend_identifiers,
generation_days=backend_generation_days,
fail_on_error_backend_identifiers=base_backend_identifiers,
save_raw_zip_path_prefix=raw_zip_path_prefix)
multi_backend_exposure_keys_df["region"] = multi_backend_exposure_keys_df["backend_identifier"]
multi_backend_exposure_keys_df.rename(
columns={
"generation_datetime": "sample_datetime",
"generation_date_string": "sample_date_string",
},
inplace=True)
multi_backend_exposure_keys_df.head()
early_teks_df = multi_backend_exposure_keys_df[
multi_backend_exposure_keys_df.rolling_period < 144].copy()
early_teks_df["rolling_period_in_hours"] = early_teks_df.rolling_period / 6
early_teks_df[early_teks_df.sample_date_string != extraction_date] \
.rolling_period_in_hours.hist(bins=list(range(24)))
early_teks_df[early_teks_df.sample_date_string == extraction_date] \
.rolling_period_in_hours.hist(bins=list(range(24)))
multi_backend_exposure_keys_df = multi_backend_exposure_keys_df[[
"sample_date_string", "region", "key_data"]]
multi_backend_exposure_keys_df.head()
active_regions = \
multi_backend_exposure_keys_df.groupby("region").key_data.nunique().sort_values().index.unique().tolist()
active_regions
multi_backend_summary_df = multi_backend_exposure_keys_df.groupby(
["sample_date_string", "region"]).key_data.nunique().reset_index() \
.pivot(index="sample_date_string", columns="region") \
.sort_index(ascending=False)
multi_backend_summary_df.rename(
columns={"key_data": "shared_teks_by_generation_date"},
inplace=True)
multi_backend_summary_df.rename_axis("sample_date", inplace=True)
multi_backend_summary_df = multi_backend_summary_df.fillna(0).astype(int)
multi_backend_summary_df = multi_backend_summary_df.head(backend_generation_days)
multi_backend_summary_df.head()
def compute_keys_cross_sharing(x):
teks_x = x.key_data_x.item()
common_teks = set(teks_x).intersection(x.key_data_y.item())
common_teks_fraction = len(common_teks) / len(teks_x)
return pd.Series(dict(
common_teks=common_teks,
common_teks_fraction=common_teks_fraction,
))
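# Illustrative example (not part of the original notebook): if backend A published TEKs
# {k1, k2, k3, k4} and backend B published {k3, k4}, then for the (A, B) pair
# common_teks = {k3, k4} and common_teks_fraction = 2 / 4 = 0.5, i.e. half of the TEKs
# in backend A are also available from backend B.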
multi_backend_exposure_keys_by_region_df = \
multi_backend_exposure_keys_df.groupby("region").key_data.unique().reset_index()
multi_backend_exposure_keys_by_region_df["_merge"] = True
multi_backend_exposure_keys_by_region_combination_df = \
multi_backend_exposure_keys_by_region_df.merge(
multi_backend_exposure_keys_by_region_df, on="_merge")
multi_backend_exposure_keys_by_region_combination_df.drop(
columns=["_merge"], inplace=True)
if multi_backend_exposure_keys_by_region_combination_df.region_x.nunique() > 1:
multi_backend_exposure_keys_by_region_combination_df = \
multi_backend_exposure_keys_by_region_combination_df[
multi_backend_exposure_keys_by_region_combination_df.region_x !=
multi_backend_exposure_keys_by_region_combination_df.region_y]
multi_backend_exposure_keys_cross_sharing_df = \
multi_backend_exposure_keys_by_region_combination_df \
.groupby(["region_x", "region_y"]) \
.apply(compute_keys_cross_sharing) \
.reset_index()
multi_backend_cross_sharing_summary_df = \
multi_backend_exposure_keys_cross_sharing_df.pivot_table(
values=["common_teks_fraction"],
columns="region_x",
index="region_y",
aggfunc=lambda x: x.item())
multi_backend_cross_sharing_summary_df
multi_backend_without_active_region_exposure_keys_df = \
multi_backend_exposure_keys_df[multi_backend_exposure_keys_df.region != report_backend_identifier]
multi_backend_without_active_region = \
multi_backend_without_active_region_exposure_keys_df.groupby("region").key_data.nunique().sort_values().index.unique().tolist()
multi_backend_without_active_region
exposure_keys_summary_df = multi_backend_exposure_keys_df[
multi_backend_exposure_keys_df.region == report_backend_identifier]
exposure_keys_summary_df.drop(columns=["region"], inplace=True)
exposure_keys_summary_df = \
exposure_keys_summary_df.groupby(["sample_date_string"]).key_data.nunique().to_frame()
exposure_keys_summary_df = \
exposure_keys_summary_df.reset_index().set_index("sample_date_string")
exposure_keys_summary_df.sort_index(ascending=False, inplace=True)
exposure_keys_summary_df.rename(columns={"key_data": "shared_teks_by_generation_date"}, inplace=True)
exposure_keys_summary_df.head()
###Output
/opt/hostedtoolcache/Python/3.8.11/x64/lib/python3.8/site-packages/pandas/core/frame.py:4110: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
return super().drop(
###Markdown
Dump API TEKs
###Code
tek_list_df = multi_backend_exposure_keys_df[
["sample_date_string", "region", "key_data"]].copy()
tek_list_df["key_data"] = tek_list_df["key_data"].apply(str)
tek_list_df.rename(columns={
"sample_date_string": "sample_date",
"key_data": "tek_list"}, inplace=True)
tek_list_df = tek_list_df.groupby(
["sample_date", "region"]).tek_list.unique().reset_index()
tek_list_df["extraction_date"] = extraction_date
tek_list_df["extraction_date_with_hour"] = extraction_date_with_hour
tek_list_path_prefix = "Data/TEKs/"
tek_list_current_path = tek_list_path_prefix + f"/Current/RadarCOVID-TEKs.json"
tek_list_daily_path = tek_list_path_prefix + f"Daily/RadarCOVID-TEKs-{extraction_date}.json"
tek_list_hourly_path = tek_list_path_prefix + f"Hourly/RadarCOVID-TEKs-{extraction_date_with_hour}.json"
for path in [tek_list_current_path, tek_list_daily_path, tek_list_hourly_path]:
os.makedirs(os.path.dirname(path), exist_ok=True)
tek_list_base_df = tek_list_df[tek_list_df.region == report_backend_identifier]
tek_list_base_df.drop(columns=["extraction_date", "extraction_date_with_hour"]).to_json(
tek_list_current_path,
lines=True, orient="records")
tek_list_base_df.drop(columns=["extraction_date_with_hour"]).to_json(
tek_list_daily_path,
lines=True, orient="records")
tek_list_base_df.to_json(
tek_list_hourly_path,
lines=True, orient="records")
tek_list_base_df.head()
###Output
_____no_output_____
###Markdown
Load TEK Dumps
###Code
import glob
def load_extracted_teks(mode, region=None, limit=None) -> pd.DataFrame:
extracted_teks_df = pd.DataFrame(columns=["region"])
file_paths = list(reversed(sorted(glob.glob(tek_list_path_prefix + mode + "/RadarCOVID-TEKs-*.json"))))
if limit:
file_paths = file_paths[:limit]
for file_path in file_paths:
logging.info(f"Loading TEKs from '{file_path}'...")
iteration_extracted_teks_df = pd.read_json(file_path, lines=True)
extracted_teks_df = extracted_teks_df.append(
iteration_extracted_teks_df, sort=False)
extracted_teks_df["region"] = \
extracted_teks_df.region.fillna(spain_region_country_code).copy()
if region:
extracted_teks_df = \
extracted_teks_df[extracted_teks_df.region == region]
return extracted_teks_df
daily_extracted_teks_df = load_extracted_teks(
mode="Daily",
region=report_backend_identifier,
limit=tek_dumps_load_limit)
daily_extracted_teks_df.head()
exposure_keys_summary_df_ = daily_extracted_teks_df \
.sort_values("extraction_date", ascending=False) \
.groupby("sample_date").tek_list.first() \
.to_frame()
exposure_keys_summary_df_.index.name = "sample_date_string"
exposure_keys_summary_df_["tek_list"] = \
exposure_keys_summary_df_.tek_list.apply(len)
exposure_keys_summary_df_ = exposure_keys_summary_df_ \
.rename(columns={"tek_list": "shared_teks_by_generation_date"}) \
.sort_index(ascending=False)
exposure_keys_summary_df = exposure_keys_summary_df_
exposure_keys_summary_df.head()
###Output
_____no_output_____
###Markdown
Daily New TEKs
###Code
tek_list_df = daily_extracted_teks_df.groupby("extraction_date").tek_list.apply(
lambda x: set(sum(x, []))).reset_index()
tek_list_df = tek_list_df.set_index("extraction_date").sort_index(ascending=True)
tek_list_df.head()
def compute_teks_by_generation_and_upload_date(date):
day_new_teks_set_df = tek_list_df.copy().diff()
try:
day_new_teks_set = day_new_teks_set_df[
day_new_teks_set_df.index == date].tek_list.item()
except ValueError:
day_new_teks_set = None
if pd.isna(day_new_teks_set):
day_new_teks_set = set()
day_new_teks_df = daily_extracted_teks_df[
daily_extracted_teks_df.extraction_date == date].copy()
day_new_teks_df["shared_teks"] = \
day_new_teks_df.tek_list.apply(lambda x: set(x).intersection(day_new_teks_set))
day_new_teks_df["shared_teks"] = \
day_new_teks_df.shared_teks.apply(len)
day_new_teks_df["upload_date"] = date
day_new_teks_df.rename(columns={"sample_date": "generation_date"}, inplace=True)
day_new_teks_df = day_new_teks_df[
["upload_date", "generation_date", "shared_teks"]]
day_new_teks_df["generation_to_upload_days"] = \
(pd.to_datetime(day_new_teks_df.upload_date) -
pd.to_datetime(day_new_teks_df.generation_date)).dt.days
day_new_teks_df = day_new_teks_df[day_new_teks_df.shared_teks > 0]
return day_new_teks_df
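# Note on tek_list_df.diff() above (illustrative, not part of the original notebook):
# tek_list_df holds one *set* of TEKs per extraction date, and pandas applies Python's
# set subtraction elementwise. If yesterday's dump contained {a, b} and today's contains
# {a, b, c, d}, the diff for today is {c, d}: exactly the TEKs first seen on this upload date.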
shared_teks_generation_to_upload_df = pd.DataFrame()
for upload_date in daily_extracted_teks_df.extraction_date.unique():
shared_teks_generation_to_upload_df = \
shared_teks_generation_to_upload_df.append(
compute_teks_by_generation_and_upload_date(date=upload_date))
shared_teks_generation_to_upload_df \
.sort_values(["upload_date", "generation_date"], ascending=False, inplace=True)
shared_teks_generation_to_upload_df.tail()
today_new_teks_df = \
shared_teks_generation_to_upload_df[
shared_teks_generation_to_upload_df.upload_date == extraction_date].copy()
today_new_teks_df.tail()
if not today_new_teks_df.empty:
today_new_teks_df.set_index("generation_to_upload_days") \
.sort_index().shared_teks.plot.bar()
generation_to_upload_period_pivot_df = \
shared_teks_generation_to_upload_df[
["upload_date", "generation_to_upload_days", "shared_teks"]] \
.pivot(index="upload_date", columns="generation_to_upload_days") \
.sort_index(ascending=False).fillna(0).astype(int) \
.droplevel(level=0, axis=1)
generation_to_upload_period_pivot_df.head()
new_tek_df = tek_list_df.diff().tek_list.apply(
lambda x: len(x) if not pd.isna(x) else None).to_frame().reset_index()
new_tek_df.rename(columns={
"tek_list": "shared_teks_by_upload_date",
"extraction_date": "sample_date_string",}, inplace=True)
new_tek_df.tail()
shared_teks_uploaded_on_generation_date_df = shared_teks_generation_to_upload_df[
shared_teks_generation_to_upload_df.generation_to_upload_days == 0] \
[["upload_date", "shared_teks"]].rename(
columns={
"upload_date": "sample_date_string",
"shared_teks": "shared_teks_uploaded_on_generation_date",
})
shared_teks_uploaded_on_generation_date_df.head()
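# Estimation heuristic: a device shares at most one TEK per generation date, so the
# maximum TEK count over generation dates within a single upload date approximates
# the number of devices (shared diagnoses) uploading that day.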
estimated_shared_diagnoses_df = shared_teks_generation_to_upload_df \
.groupby(["upload_date"]).shared_teks.max().reset_index() \
.sort_values(["upload_date"], ascending=False) \
.rename(columns={
"upload_date": "sample_date_string",
"shared_teks": "shared_diagnoses",
})
invalid_shared_diagnoses_dates_mask = \
estimated_shared_diagnoses_df.sample_date_string.isin(invalid_shared_diagnoses_dates)
estimated_shared_diagnoses_df[invalid_shared_diagnoses_dates_mask] = 0
estimated_shared_diagnoses_df.head()
###Output
_____no_output_____
###Markdown
Hourly New TEKs
###Code
hourly_extracted_teks_df = load_extracted_teks(
mode="Hourly", region=report_backend_identifier, limit=25)
hourly_extracted_teks_df.head()
hourly_new_tek_count_df = hourly_extracted_teks_df \
.groupby("extraction_date_with_hour").tek_list. \
apply(lambda x: set(sum(x, []))).reset_index().copy()
hourly_new_tek_count_df = hourly_new_tek_count_df.set_index("extraction_date_with_hour") \
.sort_index(ascending=True)
hourly_new_tek_count_df["new_tek_list"] = hourly_new_tek_count_df.tek_list.diff()
hourly_new_tek_count_df["new_tek_count"] = hourly_new_tek_count_df.new_tek_list.apply(
lambda x: len(x) if not pd.isna(x) else 0)
hourly_new_tek_count_df.rename(columns={
"new_tek_count": "shared_teks_by_upload_date"}, inplace=True)
hourly_new_tek_count_df = hourly_new_tek_count_df.reset_index()[[
"extraction_date_with_hour", "shared_teks_by_upload_date"]]
hourly_new_tek_count_df.head()
hourly_summary_df = hourly_new_tek_count_df.copy()
hourly_summary_df.set_index("extraction_date_with_hour", inplace=True)
hourly_summary_df = hourly_summary_df.fillna(0).astype(int).reset_index()
hourly_summary_df["datetime_utc"] = pd.to_datetime(
hourly_summary_df.extraction_date_with_hour, format="%Y-%m-%d@%H")
hourly_summary_df.set_index("datetime_utc", inplace=True)
hourly_summary_df = hourly_summary_df.tail(-1)
hourly_summary_df.head()
###Output
_____no_output_____
###Markdown
Official Statistics
###Code
import requests
import pandas.io.json
official_stats_response = requests.get("https://radarcovid.covid19.gob.es/kpi/statistics/basics")
official_stats_response.raise_for_status()
official_stats_df_ = pandas.io.json.json_normalize(official_stats_response.json())
official_stats_df = official_stats_df_.copy()
official_stats_df["date"] = pd.to_datetime(official_stats_df["date"], dayfirst=True)
official_stats_df.head()
official_stats_column_map = {
"date": "sample_date",
"applicationsDownloads.totalAcummulated": "app_downloads_es_accumulated",
"communicatedContagions.totalAcummulated": "shared_diagnoses_es_accumulated",
}
accumulated_suffix = "_accumulated"
accumulated_values_columns = \
list(filter(lambda x: x.endswith(accumulated_suffix), official_stats_column_map.values()))
interpolated_values_columns = \
list(map(lambda x: x[:-len(accumulated_suffix)], accumulated_values_columns))
official_stats_df = \
official_stats_df[official_stats_column_map.keys()] \
.rename(columns=official_stats_column_map)
official_stats_df["extraction_date"] = extraction_date
official_stats_df.head()
official_stats_path = "Data/Statistics/Current/RadarCOVID-Statistics.json"
previous_official_stats_df = pd.read_json(official_stats_path, orient="records", lines=True)
previous_official_stats_df["sample_date"] = pd.to_datetime(previous_official_stats_df["sample_date"], dayfirst=True)
official_stats_df = official_stats_df.append(previous_official_stats_df)
official_stats_df.head()
official_stats_df = official_stats_df[~(official_stats_df.shared_diagnoses_es_accumulated == 0)]
official_stats_df.sort_values("extraction_date", ascending=False, inplace=True)
official_stats_df.drop_duplicates(subset=["sample_date"], keep="first", inplace=True)
official_stats_df.head()
official_stats_stored_df = official_stats_df.copy()
official_stats_stored_df["sample_date"] = official_stats_stored_df.sample_date.dt.strftime("%Y-%m-%d")
official_stats_stored_df.to_json(official_stats_path, orient="records", lines=True)
official_stats_df.drop(columns=["extraction_date"], inplace=True)
official_stats_df = confirmed_days_df.merge(official_stats_df, how="left")
official_stats_df.sort_values("sample_date", ascending=False, inplace=True)
official_stats_df.head()
official_stats_df[accumulated_values_columns] = \
official_stats_df[accumulated_values_columns] \
.astype(float).interpolate(limit_area="inside")
official_stats_df[interpolated_values_columns] = \
official_stats_df[accumulated_values_columns].diff(periods=-1)
official_stats_df.drop(columns="sample_date", inplace=True)
official_stats_df.head()
###Output
_____no_output_____
###Markdown
Data Merge
###Code
result_summary_df = exposure_keys_summary_df.merge(
new_tek_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = result_summary_df.merge(
shared_teks_uploaded_on_generation_date_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = result_summary_df.merge(
estimated_shared_diagnoses_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = result_summary_df.merge(
official_stats_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = confirmed_eu_df.tail(daily_summary_days).merge(
result_summary_df, on=["sample_date_string"], how="left")
result_summary_df.head()
result_summary_df = confirmed_es_df.tail(daily_summary_days).merge(
result_summary_df, on=["sample_date_string"], how="left")
result_summary_df.head()
result_summary_df["sample_date"] = pd.to_datetime(result_summary_df.sample_date_string)
result_summary_df = result_summary_df.merge(source_regions_for_summary_df, how="left")
result_summary_df.set_index(["sample_date", "source_regions"], inplace=True)
result_summary_df.drop(columns=["sample_date_string"], inplace=True)
result_summary_df.sort_index(ascending=False, inplace=True)
result_summary_df.head()
with pd.option_context("mode.use_inf_as_na", True):
result_summary_df = result_summary_df.fillna(0).astype(int)
result_summary_df["teks_per_shared_diagnosis"] = \
(result_summary_df.shared_teks_by_upload_date / result_summary_df.shared_diagnoses).fillna(0)
result_summary_df["shared_diagnoses_per_covid_case"] = \
(result_summary_df.shared_diagnoses / result_summary_df.covid_cases).fillna(0)
result_summary_df["shared_diagnoses_per_covid_case_es"] = \
(result_summary_df.shared_diagnoses_es / result_summary_df.covid_cases_es).fillna(0)
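# Worked example (illustrative): on a day with 1000 COVID-19 cases and 60 shared
# diagnoses, shared_diagnoses_per_covid_case = 60 / 1000 = 0.06, i.e. a 6% usage ratio.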
result_summary_df.head(daily_plot_days)
def compute_aggregated_results_summary(days) -> pd.DataFrame:
aggregated_result_summary_df = result_summary_df.copy()
aggregated_result_summary_df["covid_cases_for_ratio"] = \
aggregated_result_summary_df.covid_cases.mask(
aggregated_result_summary_df.shared_diagnoses == 0, 0)
aggregated_result_summary_df["covid_cases_for_ratio_es"] = \
aggregated_result_summary_df.covid_cases_es.mask(
aggregated_result_summary_df.shared_diagnoses_es == 0, 0)
aggregated_result_summary_df = aggregated_result_summary_df \
.sort_index(ascending=True).fillna(0).rolling(days).agg({
"covid_cases": "sum",
"covid_cases_es": "sum",
"covid_cases_for_ratio": "sum",
"covid_cases_for_ratio_es": "sum",
"shared_teks_by_generation_date": "sum",
"shared_teks_by_upload_date": "sum",
"shared_diagnoses": "sum",
"shared_diagnoses_es": "sum",
}).sort_index(ascending=False)
with pd.option_context("mode.use_inf_as_na", True):
aggregated_result_summary_df = aggregated_result_summary_df.fillna(0).astype(int)
aggregated_result_summary_df["teks_per_shared_diagnosis"] = \
(aggregated_result_summary_df.shared_teks_by_upload_date /
aggregated_result_summary_df.covid_cases_for_ratio).fillna(0)
aggregated_result_summary_df["shared_diagnoses_per_covid_case"] = \
(aggregated_result_summary_df.shared_diagnoses /
aggregated_result_summary_df.covid_cases_for_ratio).fillna(0)
aggregated_result_summary_df["shared_diagnoses_per_covid_case_es"] = \
(aggregated_result_summary_df.shared_diagnoses_es /
aggregated_result_summary_df.covid_cases_for_ratio_es).fillna(0)
return aggregated_result_summary_df
aggregated_result_with_7_days_window_summary_df = compute_aggregated_results_summary(days=7)
aggregated_result_with_7_days_window_summary_df.head()
last_7_days_summary = aggregated_result_with_7_days_window_summary_df.to_dict(orient="records")[1]
last_7_days_summary
aggregated_result_with_14_days_window_summary_df = compute_aggregated_results_summary(days=13)
last_14_days_summary = aggregated_result_with_14_days_window_summary_df.to_dict(orient="records")[1]
last_14_days_summary
###Output
_____no_output_____
###Markdown
Report Results
###Code
display_column_name_mapping = {
"sample_date": "Sample\u00A0Date\u00A0(UTC)",
"source_regions": "Source Countries",
"datetime_utc": "Timestamp (UTC)",
"upload_date": "Upload Date (UTC)",
"generation_to_upload_days": "Generation to Upload Period in Days",
"region": "Backend",
"region_x": "Backend\u00A0(A)",
"region_y": "Backend\u00A0(B)",
"common_teks": "Common TEKs Shared Between Backends",
"common_teks_fraction": "Fraction of TEKs in Backend (A) Available in Backend (B)",
"covid_cases": "COVID-19 Cases (Source Countries)",
"shared_teks_by_generation_date": "Shared TEKs by Generation Date (Source Countries)",
"shared_teks_by_upload_date": "Shared TEKs by Upload Date (Source Countries)",
"shared_teks_uploaded_on_generation_date": "Shared TEKs Uploaded on Generation Date (Source Countries)",
"shared_diagnoses": "Shared Diagnoses (Source Countries – Estimation)",
"teks_per_shared_diagnosis": "TEKs Uploaded per Shared Diagnosis (Source Countries)",
"shared_diagnoses_per_covid_case": "Usage Ratio (Source Countries)",
"covid_cases_es": "COVID-19 Cases (Spain)",
"app_downloads_es": "App Downloads (Spain – Official)",
"shared_diagnoses_es": "Shared Diagnoses (Spain – Official)",
"shared_diagnoses_per_covid_case_es": "Usage Ratio (Spain)",
}
summary_columns = [
"covid_cases",
"shared_teks_by_generation_date",
"shared_teks_by_upload_date",
"shared_teks_uploaded_on_generation_date",
"shared_diagnoses",
"teks_per_shared_diagnosis",
"shared_diagnoses_per_covid_case",
"covid_cases_es",
"app_downloads_es",
"shared_diagnoses_es",
"shared_diagnoses_per_covid_case_es",
]
summary_percentage_columns= [
"shared_diagnoses_per_covid_case_es",
"shared_diagnoses_per_covid_case",
]
###Output
_____no_output_____
###Markdown
Daily Summary Table
###Code
result_summary_df_ = result_summary_df.copy()
result_summary_df = result_summary_df[summary_columns]
result_summary_with_display_names_df = result_summary_df \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping)
result_summary_with_display_names_df
###Output
_____no_output_____
###Markdown
Daily Summary Plots
###Code
result_plot_summary_df = result_summary_df.head(daily_plot_days)[summary_columns] \
.droplevel(level=["source_regions"]) \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping)
summary_ax_list = result_plot_summary_df.sort_index(ascending=True).plot.bar(
title=f"Daily Summary",
rot=45, subplots=True, figsize=(15, 30), legend=False)
ax_ = summary_ax_list[0]
ax_.get_figure().tight_layout()
ax_.get_figure().subplots_adjust(top=0.95)
_ = ax_.set_xticklabels(sorted(result_plot_summary_df.index.strftime("%Y-%m-%d").tolist()))
for percentage_column in summary_percentage_columns:
percentage_column_index = summary_columns.index(percentage_column)
summary_ax_list[percentage_column_index].yaxis \
.set_major_formatter(matplotlib.ticker.PercentFormatter(1.0))
###Output
/opt/hostedtoolcache/Python/3.8.11/x64/lib/python3.8/site-packages/pandas/plotting/_matplotlib/tools.py:307: MatplotlibDeprecationWarning:
The rowNum attribute was deprecated in Matplotlib 3.2 and will be removed two minor releases later. Use ax.get_subplotspec().rowspan.start instead.
layout[ax.rowNum, ax.colNum] = ax.get_visible()
/opt/hostedtoolcache/Python/3.8.11/x64/lib/python3.8/site-packages/pandas/plotting/_matplotlib/tools.py:307: MatplotlibDeprecationWarning:
The colNum attribute was deprecated in Matplotlib 3.2 and will be removed two minor releases later. Use ax.get_subplotspec().colspan.start instead.
layout[ax.rowNum, ax.colNum] = ax.get_visible()
/opt/hostedtoolcache/Python/3.8.11/x64/lib/python3.8/site-packages/pandas/plotting/_matplotlib/tools.py:313: MatplotlibDeprecationWarning:
The rowNum attribute was deprecated in Matplotlib 3.2 and will be removed two minor releases later. Use ax.get_subplotspec().rowspan.start instead.
if not layout[ax.rowNum + 1, ax.colNum]:
/opt/hostedtoolcache/Python/3.8.11/x64/lib/python3.8/site-packages/pandas/plotting/_matplotlib/tools.py:313: MatplotlibDeprecationWarning:
The colNum attribute was deprecated in Matplotlib 3.2 and will be removed two minor releases later. Use ax.get_subplotspec().colspan.start instead.
if not layout[ax.rowNum + 1, ax.colNum]:
###Markdown
Daily Generation to Upload Period Table
###Code
display_generation_to_upload_period_pivot_df = \
generation_to_upload_period_pivot_df \
.head(backend_generation_days)
display_generation_to_upload_period_pivot_df \
.head(backend_generation_days) \
.rename_axis(columns=display_column_name_mapping) \
.rename_axis(index=display_column_name_mapping)
fig, generation_to_upload_period_pivot_table_ax = plt.subplots(
figsize=(12, 1 + 0.6 * len(display_generation_to_upload_period_pivot_df)))
generation_to_upload_period_pivot_table_ax.set_title(
"Shared TEKs Generation to Upload Period Table")
sns.heatmap(
data=display_generation_to_upload_period_pivot_df
.rename_axis(columns=display_column_name_mapping)
.rename_axis(index=display_column_name_mapping),
fmt=".0f",
annot=True,
ax=generation_to_upload_period_pivot_table_ax)
generation_to_upload_period_pivot_table_ax.get_figure().tight_layout()
###Output
_____no_output_____
###Markdown
Hourly Summary Plots
###Code
hourly_summary_ax_list = hourly_summary_df \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.plot.bar(
title=f"Last 24h Summary",
rot=45, subplots=True, legend=False)
ax_ = hourly_summary_ax_list[-1]
ax_.get_figure().tight_layout()
ax_.get_figure().subplots_adjust(top=0.9)
_ = ax_.set_xticklabels(sorted(hourly_summary_df.index.strftime("%Y-%m-%d@%H").tolist()))
###Output
_____no_output_____
###Markdown
Publish Results
###Code
github_repository = os.environ.get("GITHUB_REPOSITORY")
if github_repository is None:
github_repository = "pvieito/Radar-STATS"
github_project_base_url = "https://github.com/" + github_repository
display_formatters = {
display_column_name_mapping["teks_per_shared_diagnosis"]: lambda x: f"{x:.2f}" if x != 0 else "",
display_column_name_mapping["shared_diagnoses_per_covid_case"]: lambda x: f"{x:.2%}" if x != 0 else "",
display_column_name_mapping["shared_diagnoses_per_covid_case_es"]: lambda x: f"{x:.2%}" if x != 0 else "",
}
general_columns = \
list(filter(lambda x: x not in display_formatters, display_column_name_mapping.values()))
general_formatter = lambda x: f"{x}" if x != 0 else ""
display_formatters.update(dict(map(lambda x: (x, general_formatter), general_columns)))
daily_summary_table_html = result_summary_with_display_names_df \
.head(daily_plot_days) \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.to_html(formatters=display_formatters)
multi_backend_summary_table_html = multi_backend_summary_df \
.head(daily_plot_days) \
.rename_axis(columns=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.rename_axis(index=display_column_name_mapping) \
.to_html(formatters=display_formatters)
def format_multi_backend_cross_sharing_fraction(x):
if pd.isna(x):
return "-"
elif round(x * 100, 1) == 0:
return ""
else:
return f"{x:.1%}"
multi_backend_cross_sharing_summary_table_html = multi_backend_cross_sharing_summary_df \
.rename_axis(columns=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.rename_axis(index=display_column_name_mapping) \
.to_html(
classes="table-center",
formatters=display_formatters,
float_format=format_multi_backend_cross_sharing_fraction)
multi_backend_cross_sharing_summary_table_html = \
multi_backend_cross_sharing_summary_table_html \
.replace("<tr>","<tr style=\"text-align: center;\">")
extraction_date_result_summary_df = \
result_summary_df[result_summary_df.index.get_level_values("sample_date") == extraction_date]
extraction_date_result_hourly_summary_df = \
hourly_summary_df[hourly_summary_df.extraction_date_with_hour == extraction_date_with_hour]
covid_cases = \
extraction_date_result_summary_df.covid_cases.item()
shared_teks_by_generation_date = \
extraction_date_result_summary_df.shared_teks_by_generation_date.item()
shared_teks_by_upload_date = \
extraction_date_result_summary_df.shared_teks_by_upload_date.item()
shared_diagnoses = \
extraction_date_result_summary_df.shared_diagnoses.item()
teks_per_shared_diagnosis = \
extraction_date_result_summary_df.teks_per_shared_diagnosis.item()
shared_diagnoses_per_covid_case = \
extraction_date_result_summary_df.shared_diagnoses_per_covid_case.item()
shared_teks_by_upload_date_last_hour = \
extraction_date_result_hourly_summary_df.shared_teks_by_upload_date.sum().astype(int)
display_source_regions = ", ".join(report_source_regions)
if len(report_source_regions) == 1:
display_brief_source_regions = report_source_regions[0]
else:
display_brief_source_regions = f"{len(report_source_regions)} 🇪🇺"
def get_temporary_image_path() -> str:
return os.path.join(tempfile.gettempdir(), str(uuid.uuid4()) + ".png")
def save_temporary_plot_image(ax):
if isinstance(ax, np.ndarray):
ax = ax[0]
media_path = get_temporary_image_path()
ax.get_figure().savefig(media_path)
return media_path
def save_temporary_dataframe_image(df):
import dataframe_image as dfi
df = df.copy()
df_styler = df.style.format(display_formatters)
media_path = get_temporary_image_path()
dfi.export(df_styler, media_path)
return media_path
summary_plots_image_path = save_temporary_plot_image(
ax=summary_ax_list)
summary_table_image_path = save_temporary_dataframe_image(
df=result_summary_with_display_names_df)
hourly_summary_plots_image_path = save_temporary_plot_image(
ax=hourly_summary_ax_list)
multi_backend_summary_table_image_path = save_temporary_dataframe_image(
df=multi_backend_summary_df)
generation_to_upload_period_pivot_table_image_path = save_temporary_plot_image(
ax=generation_to_upload_period_pivot_table_ax)
###Output
[0828/230946.698742:ERROR:gpu_init.cc(441)] Passthrough is not supported, GL is swiftshader
###Markdown
Save Results
###Code
report_resources_path_prefix = "Data/Resources/Current/RadarCOVID-Report-"
result_summary_df.to_csv(
report_resources_path_prefix + "Summary-Table.csv")
result_summary_df.to_html(
report_resources_path_prefix + "Summary-Table.html")
hourly_summary_df.to_csv(
report_resources_path_prefix + "Hourly-Summary-Table.csv")
multi_backend_summary_df.to_csv(
report_resources_path_prefix + "Multi-Backend-Summary-Table.csv")
multi_backend_cross_sharing_summary_df.to_csv(
report_resources_path_prefix + "Multi-Backend-Cross-Sharing-Summary-Table.csv")
generation_to_upload_period_pivot_df.to_csv(
report_resources_path_prefix + "Generation-Upload-Period-Table.csv")
_ = shutil.copyfile(
summary_plots_image_path,
report_resources_path_prefix + "Summary-Plots.png")
_ = shutil.copyfile(
summary_table_image_path,
report_resources_path_prefix + "Summary-Table.png")
_ = shutil.copyfile(
hourly_summary_plots_image_path,
report_resources_path_prefix + "Hourly-Summary-Plots.png")
_ = shutil.copyfile(
multi_backend_summary_table_image_path,
report_resources_path_prefix + "Multi-Backend-Summary-Table.png")
_ = shutil.copyfile(
generation_to_upload_period_pivot_table_image_path,
report_resources_path_prefix + "Generation-Upload-Period-Table.png")
###Output
_____no_output_____
###Markdown
Publish Results as JSON
###Code
def generate_summary_api_results(df: pd.DataFrame) -> list:
api_df = df.reset_index().copy()
api_df["sample_date_string"] = \
api_df["sample_date"].dt.strftime("%Y-%m-%d")
api_df["source_regions"] = \
api_df["source_regions"].apply(lambda x: x.split(","))
return api_df.to_dict(orient="records")
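# Each record is a plain dict keyed by the summary columns, e.g. (illustrative shape):
# {"sample_date": Timestamp("2021-08-28"), "sample_date_string": "2021-08-28",
#  "source_regions": ["ES", ...], "covid_cases": ..., "shared_diagnoses": ..., ...}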
summary_api_results = \
generate_summary_api_results(df=result_summary_df)
today_summary_api_results = \
generate_summary_api_results(df=extraction_date_result_summary_df)[0]
summary_results = dict(
backend_identifier=report_backend_identifier,
source_regions=report_source_regions,
extraction_datetime=extraction_datetime,
extraction_date=extraction_date,
extraction_date_with_hour=extraction_date_with_hour,
last_hour=dict(
shared_teks_by_upload_date=shared_teks_by_upload_date_last_hour,
shared_diagnoses=0,
),
today=today_summary_api_results,
last_7_days=last_7_days_summary,
last_14_days=last_14_days_summary,
daily_results=summary_api_results)
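# Round-trip through pandas JSON serialization so datetime-like values
# (e.g. pandas timestamps) become plain JSON-serializable types.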
summary_results = \
json.loads(pd.Series([summary_results]).to_json(orient="records"))[0]
with open(report_resources_path_prefix + "Summary-Results.json", "w") as f:
json.dump(summary_results, f, indent=4)
###Output
_____no_output_____
###Markdown
Publish on README
###Code
with open("Data/Templates/README.md", "r") as f:
readme_contents = f.read()
readme_contents = readme_contents.format(
extraction_date_with_hour=extraction_date_with_hour,
github_project_base_url=github_project_base_url,
daily_summary_table_html=daily_summary_table_html,
multi_backend_summary_table_html=multi_backend_summary_table_html,
multi_backend_cross_sharing_summary_table_html=multi_backend_cross_sharing_summary_table_html,
display_source_regions=display_source_regions)
with open("README.md", "w") as f:
f.write(readme_contents)
###Output
_____no_output_____
###Markdown
Publish on Twitter
###Code
enable_share_to_twitter = os.environ.get("RADARCOVID_REPORT__ENABLE_PUBLISH_ON_TWITTER")
github_event_name = os.environ.get("GITHUB_EVENT_NAME")
if enable_share_to_twitter and github_event_name == "schedule" and \
(shared_teks_by_upload_date_last_hour or not are_today_results_partial):
import tweepy
twitter_api_auth_keys = os.environ["RADARCOVID_REPORT__TWITTER_API_AUTH_KEYS"]
twitter_api_auth_keys = twitter_api_auth_keys.split(":")
auth = tweepy.OAuthHandler(twitter_api_auth_keys[0], twitter_api_auth_keys[1])
auth.set_access_token(twitter_api_auth_keys[2], twitter_api_auth_keys[3])
api = tweepy.API(auth)
summary_plots_media = api.media_upload(summary_plots_image_path)
summary_table_media = api.media_upload(summary_table_image_path)
generation_to_upload_period_pivot_table_image_media = api.media_upload(generation_to_upload_period_pivot_table_image_path)
media_ids = [
summary_plots_media.media_id,
summary_table_media.media_id,
generation_to_upload_period_pivot_table_image_media.media_id,
]
if are_today_results_partial:
today_addendum = " (Partial)"
else:
today_addendum = ""
def format_shared_diagnoses_per_covid_case(value) -> str:
if value == 0:
return "–"
return f"≤{value:.2%}"
display_shared_diagnoses_per_covid_case = \
format_shared_diagnoses_per_covid_case(value=shared_diagnoses_per_covid_case)
display_last_14_days_shared_diagnoses_per_covid_case = \
format_shared_diagnoses_per_covid_case(value=last_14_days_summary["shared_diagnoses_per_covid_case"])
display_last_14_days_shared_diagnoses_per_covid_case_es = \
format_shared_diagnoses_per_covid_case(value=last_14_days_summary["shared_diagnoses_per_covid_case_es"])
status = textwrap.dedent(f"""
#RadarCOVID – {extraction_date_with_hour}
Today{today_addendum}:
- Uploaded TEKs: {shared_teks_by_upload_date:.0f} ({shared_teks_by_upload_date_last_hour:+d} last hour)
- Shared Diagnoses: ≤{shared_diagnoses:.0f}
- Usage Ratio: {display_shared_diagnoses_per_covid_case}
Last 14 Days:
- Usage Ratio (Estimation): {display_last_14_days_shared_diagnoses_per_covid_case}
- Usage Ratio (Official): {display_last_14_days_shared_diagnoses_per_covid_case_es}
Info: {github_project_base_url}#documentation
""")
status = status.encode(encoding="utf-8")
api.update_status(status=status, media_ids=media_ids)
###Output
_____no_output_____ |
notebooks/createClassificationLabels.ipynb | ###Markdown
Binary `{calc, mass}` classification labels. Create a .csv file of labels for each unique mammogram identifier.
###Code
import pandas as pd
import numpy as np
import os
import random
import matplotlib.pyplot as plt
from tensorflow import keras
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from classification_models.keras import Classifiers
all_df = pd.read_csv("../data/csv/mass_calc_all.csv")
all_df.head()
# Get identifiers that have both calcification and mass abnormalities.
# --------------------------------------------------------------------
# Create dictionary of identifiers.
iden_list = list(all_df["identifier"].unique())
iden_dict = dict((iden, []) for iden in iden_list)
# Loop through all_df and get calcification or mass type.
for row in all_df.itertuples():
iden = row.identifier
ab_type = row.abnormality_type
if ab_type not in iden_dict[iden]:
iden_dict[iden].append(ab_type)
# Check for identifiers with >1 type.
both_iden = []
for k, v in iden_dict.items():
if len(v) > 1:
both_iden.append(k)
print(both_iden)
# Remove identifiers with both calc and mass
# ------------------------------------------
for iden in both_iden:
del iden_dict[iden]
# Create dataframe from iden_dict
# ------------------------------
calc_labels_df = pd.DataFrame.from_dict(data=iden_dict, orient="index", columns=["labels"])
calc_labels_df.reset_index(level=0, inplace=True)
calc_labels_df.rename(columns={"index":"identifier"}, inplace=True)
# Save
calc_labels_df.to_csv("../data/csv/calc_labels_all.csv", index=False)
# One hot encode - 1 = calcification, 0 = mass
calc_ohe_df = pd.get_dummies(calc_labels_df["labels"])
calc_ohe_df.drop(columns=["mass"], inplace=True)
# Save
calc_ohe_df.to_csv("../data/csv/calc_ohe_all.csv", index=False)
# Create labels as tensorflow dataset
# -----------------------------------
y = np.asarray(calc_ohe_df["calcification"])
y
base_model = keras.applications.ResNet50(
include_top=False,
weights="imagenet",
input_shape=(224, 224, 3),
)
base_model.summary()
base_model.output
###Output
_____no_output_____
###Markdown
Create classification train and test dataset
###Code
top = "../data/preprocessed/Classification/all_classification"
extension = ".png"
# 1. Get lists of calc and mass filenames
# =======================================
mass = []
calc = []
for (curdir, dirs, files) in os.walk(top=top, topdown=False):
dirs.sort()
files.sort()
for f in files:
if f.endswith(extension):
if "mass" in f.lower():
mass.append(f)
elif "calc" in f.lower():
calc.append(f)
# 2. Random split paths into train and valid
# ==========================================
val_split = 0.2
mass_val_count = round(val_split * len(mass))
calc_val_count = round(val_split * len(calc))
mass_val = random.sample(mass, mass_val_count)
mass_train = [m for m in mass if m not in mass_val]
calc_val = random.sample(calc, calc_val_count)
calc_train = [c for c in calc if c not in calc_val]
val = mass_val + calc_val
train = mass_train + calc_train
random.shuffle(val)
random.shuffle(train)
# 3. Create train and test dataframe with labels
# ==============================================
val_df = pd.DataFrame(data=val, columns=["filename"])
val_df["label"] = val_df["filename"].apply(lambda x: "calc" if "Calc" in x else "mass")
val_df["calc"] = val_df["filename"].apply(lambda x: 1 if "Calc" in x else 0)
val_df["mass"] = val_df["filename"].apply(lambda x: 1 if "Mass" in x else 0)
train_df = pd.DataFrame(data=train, columns=["filename"])
train_df["label"] = train_df["filename"].apply(lambda x: "calc" if "Calc" in x else "mass")
train_df["calc"] = train_df["filename"].apply(lambda x: 1 if "Calc" in x else 0)
train_df["mass"] = train_df["filename"].apply(lambda x: 1 if "Mass" in x else 0)
# 4. Use ImageDataGenerator to create train and val datasets
# ==========================================================
batch_size = 10
target_size = (224, 224)
# Define data generator
train_gen = ImageDataGenerator(rescale=1./255,
horizontal_flip=True,
vertical_flip=True,
brightness_range=(0.6, 1.3)
)
val_gen = ImageDataGenerator(rescale = 1./255)
# Get the data
train_data = train_gen.flow_from_dataframe(dataframe = train_df,
directory = top,
x_col = "filename",
y_col = "label",
batch_size = batch_size,
color_mode = "rgb",
class_mode = "binary",
target_size = target_size,
shuffle = True,
seed = 42)
val_data = val_gen.flow_from_dataframe(dataframe = val_df,
directory = top,
x_col = "filename",
y_col = "label",
batch_size = batch_size,
color_mode = "rgb",
class_mode="binary",
target_size = target_size,
shuffle = True,
seed = 42)
train_data.class_indices
# To check images and labels from the data generator
imgs, lbl = next(iter(train_data))
print(lbl)
fig, ax = plt.subplots(nrows=1, ncols=10, figsize=(50, 5))
for i in range(10):
ax[i].imshow(imgs[i], cmap="gray")
base_model = keras.applications.VGG16(
include_top=False,
weights="imagenet",
input_shape=(224, 224, 3)
)
x = base_model.output
# Add Global Average Pooling layer.
x = keras.layers.GlobalAveragePooling2D()(x)
# Add FC layer having 1024 neurons.
x = keras.layers.Dense(
units=1024, activation="relu"
)(x)
# Add FC output layer for final classification.
final_x = keras.layers.Dense(
units=1,
activation="sigmoid",
)(x)
# Create VGG16 model.
VGG16_model = keras.Model(inputs=base_model.input, outputs=final_x)
# Freeze layers of base model.
for layer in base_model.layers:
layer.trainable = False
VGG16_model.summary()
base_model.output
arr = np.zeros(shape=(21, 7))
arr
def centerCrop(img):
h, w = img.shape
# If cropping is required...
if h != w:
# Take the shorter side as the square length.
if w < h: # Vertical rectangle, use w as square length.
start_w = 0
end_w = w
start_h = h//2 - w//2
end_h = start_h + w
elif h < w: # Horizontal rectangle, use h as square length.
start_h = 0
end_h = h
start_w = w//2 - h//2
end_w = start_w + h
# Crop.
sq_img = img[start_h:end_h, start_w:end_w]
return sq_img
    # If no cropping is required...
    elif h == w:
        # Return the original image.
        return img
crop = centerCrop(img=arr)
crop.shape
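# Additional check (illustrative): a horizontal rectangle is cropped to a
# square whose side equals the image height.
centerCrop(img=np.zeros(shape=(7, 21))).shape  # -> (7, 7)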
###Output
_____no_output_____ |
improving_neural_net_performance.ipynb | ###Markdown
[View in Colaboratory](https://colab.research.google.com/github/ArunkumarRamanan/Exercises-Machine-Learning-Crash-Course-Google-Developers/blob/master/improving_neural_net_performance.ipynb) Copyright 2017 Google LLC.
###Code
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Improving Neural Net Performance. **Learning Objective:** Improve the performance of a neural network by normalizing features and applying various optimization algorithms. **NOTE:** The optimization methods described in this exercise are not specific to neural networks; they are effective means to improve most types of models. Setup: First, we'll load the data.
###Code
from __future__ import print_function
import math
from IPython import display
from matplotlib import cm
from matplotlib import gridspec
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import metrics
import tensorflow as tf
from tensorflow.python.data import Dataset
tf.logging.set_verbosity(tf.logging.ERROR)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
california_housing_dataframe = pd.read_csv("https://dl.google.com/mlcc/mledu-datasets/california_housing_train.csv", sep=",")
california_housing_dataframe = california_housing_dataframe.reindex(
np.random.permutation(california_housing_dataframe.index))
def preprocess_features(california_housing_dataframe):
"""Prepares input features from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the features to be used for the model, including
synthetic features.
"""
selected_features = california_housing_dataframe[
["latitude",
"longitude",
"housing_median_age",
"total_rooms",
"total_bedrooms",
"population",
"households",
"median_income"]]
processed_features = selected_features.copy()
# Create a synthetic feature.
processed_features["rooms_per_person"] = (
california_housing_dataframe["total_rooms"] /
california_housing_dataframe["population"])
return processed_features
def preprocess_targets(california_housing_dataframe):
"""Prepares target features (i.e., labels) from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the target feature.
"""
output_targets = pd.DataFrame()
# Scale the target to be in units of thousands of dollars.
output_targets["median_house_value"] = (
california_housing_dataframe["median_house_value"] / 1000.0)
return output_targets
# Choose the first 12000 (out of 17000) examples for training.
training_examples = preprocess_features(california_housing_dataframe.head(12000))
training_targets = preprocess_targets(california_housing_dataframe.head(12000))
# Choose the last 5000 (out of 17000) examples for validation.
validation_examples = preprocess_features(california_housing_dataframe.tail(5000))
validation_targets = preprocess_targets(california_housing_dataframe.tail(5000))
# Double-check that we've done the right thing.
print("Training examples summary:")
display.display(training_examples.describe())
print("Validation examples summary:")
display.display(validation_examples.describe())
print("Training targets summary:")
display.display(training_targets.describe())
print("Validation targets summary:")
display.display(validation_targets.describe())
###Output
_____no_output_____
###Markdown
Train the Neural Network. Next, we'll train the neural network.
###Code
def construct_feature_columns(input_features):
"""Construct the TensorFlow Feature Columns.
Args:
input_features: The names of the numerical input features to use.
Returns:
A set of feature columns
"""
return set([tf.feature_column.numeric_column(my_feature)
for my_feature in input_features])
def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None):
"""Trains a neural network model.
Args:
features: pandas DataFrame of features
targets: pandas DataFrame of targets
batch_size: Size of batches to be passed to the model
shuffle: True or False. Whether to shuffle the data.
num_epochs: Number of epochs for which data should be repeated. None = repeat indefinitely
Returns:
Tuple of (features, labels) for next data batch
"""
# Convert pandas data into a dict of np arrays.
features = {key:np.array(value) for key,value in dict(features).items()}
# Construct a dataset, and configure batching/repeating.
ds = Dataset.from_tensor_slices((features,targets)) # warning: 2GB limit
ds = ds.batch(batch_size).repeat(num_epochs)
# Shuffle the data, if specified.
if shuffle:
ds = ds.shuffle(10000)
# Return the next batch of data.
features, labels = ds.make_one_shot_iterator().get_next()
return features, labels
def train_nn_regression_model(
my_optimizer,
steps,
batch_size,
hidden_units,
training_examples,
training_targets,
validation_examples,
validation_targets):
"""Trains a neural network regression model.
In addition to training, this function also prints training progress information,
as well as a plot of the training and validation loss over time.
Args:
my_optimizer: An instance of `tf.train.Optimizer`, the optimizer to use.
steps: A non-zero `int`, the total number of training steps. A training step
consists of a forward and backward pass using a single batch.
batch_size: A non-zero `int`, the batch size.
hidden_units: A `list` of int values, specifying the number of neurons in each layer.
training_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for training.
training_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for training.
validation_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for validation.
validation_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for validation.
Returns:
A tuple `(estimator, training_losses, validation_losses)`:
estimator: the trained `DNNRegressor` object.
training_losses: a `list` containing the training loss values taken during training.
validation_losses: a `list` containing the validation loss values taken during training.
"""
periods = 10
steps_per_period = steps / periods
# Create a DNNRegressor object.
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
dnn_regressor = tf.estimator.DNNRegressor(
feature_columns=construct_feature_columns(training_examples),
hidden_units=hidden_units,
optimizer=my_optimizer
)
# Create input functions.
training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value"],
batch_size=batch_size)
predict_training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value"],
num_epochs=1,
shuffle=False)
predict_validation_input_fn = lambda: my_input_fn(validation_examples,
validation_targets["median_house_value"],
num_epochs=1,
shuffle=False)
# Train the model, but do so inside a loop so that we can periodically assess
# loss metrics.
print("Training model...")
print("RMSE (on training data):")
training_rmse = []
validation_rmse = []
for period in range (0, periods):
# Train the model, starting from the prior state.
dnn_regressor.train(
input_fn=training_input_fn,
steps=steps_per_period
)
# Take a break and compute predictions.
training_predictions = dnn_regressor.predict(input_fn=predict_training_input_fn)
training_predictions = np.array([item['predictions'][0] for item in training_predictions])
validation_predictions = dnn_regressor.predict(input_fn=predict_validation_input_fn)
validation_predictions = np.array([item['predictions'][0] for item in validation_predictions])
# Compute training and validation loss.
training_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(training_predictions, training_targets))
validation_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(validation_predictions, validation_targets))
# Occasionally print the current loss.
print(" period %02d : %0.2f" % (period, training_root_mean_squared_error))
# Add the loss metrics from this period to our list.
training_rmse.append(training_root_mean_squared_error)
validation_rmse.append(validation_root_mean_squared_error)
print("Model training finished.")
# Output a graph of loss metrics over periods.
plt.ylabel("RMSE")
plt.xlabel("Periods")
plt.title("Root Mean Squared Error vs. Periods")
plt.tight_layout()
plt.plot(training_rmse, label="training")
plt.plot(validation_rmse, label="validation")
plt.legend()
print("Final RMSE (on training data): %0.2f" % training_root_mean_squared_error)
print("Final RMSE (on validation data): %0.2f" % validation_root_mean_squared_error)
return dnn_regressor, training_rmse, validation_rmse
_ = train_nn_regression_model(
my_optimizer=tf.train.GradientDescentOptimizer(learning_rate=0.0007),
steps=5000,
batch_size=70,
hidden_units=[10, 10],
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
###Markdown
Linear Scaling. It can be a good standard practice to normalize the inputs to fall within the range [-1, 1]. This helps SGD avoid getting stuck taking steps that are too large in one dimension, or too small in another. Fans of numerical optimization may note that there's a connection to the idea of using a preconditioner here.
###Code
def linear_scale(series):
min_val = series.min()
max_val = series.max()
scale = (max_val - min_val) / 2.0
return series.apply(lambda x:((x - min_val) / scale) - 1.0)
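# Quick sanity check (illustrative values): the minimum maps to -1.0,
# the midpoint to 0.0, and the maximum to 1.0.
linear_scale(pd.Series([0.0, 5.0, 10.0]))  # -> [-1.0, 0.0, 1.0]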
###Output
_____no_output_____
###Markdown
Task 1: Normalize the Features Using Linear Scaling. **Normalize the inputs to the scale [-1, 1].** **Spend about 5 minutes training and evaluating on the newly normalized data. How well can you do?** As a rule of thumb, NNs train best when the input features are roughly on the same scale. Sanity check your normalized data. (What would happen if you forgot to normalize one feature?)
###Code
def normalize_linear_scale(examples_dataframe):
"""Returns a version of the input `DataFrame` that has all its features normalized linearly."""
#
# Your code here: normalize the inputs.
#
pass
normalized_dataframe = normalize_linear_scale(preprocess_features(california_housing_dataframe))
normalized_training_examples = normalized_dataframe.head(12000)
normalized_validation_examples = normalized_dataframe.tail(5000)
_ = train_nn_regression_model(
my_optimizer=tf.train.GradientDescentOptimizer(learning_rate=0.0007),
steps=5000,
batch_size=70,
hidden_units=[10, 10],
training_examples=normalized_training_examples,
training_targets=training_targets,
validation_examples=normalized_validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
###Markdown
Solution. Click below for one possible solution. Since normalization uses min and max, we have to ensure it's done on the entire dataset at once. We can do that here because all our data is in a single DataFrame. If we had multiple data sets, a good practice would be to derive the normalization parameters from the training set and apply those identically to the test set (see the sketch after the solution code below).
###Code
def normalize_linear_scale(examples_dataframe):
"""Returns a version of the input `DataFrame` that has all its features normalized linearly."""
processed_features = pd.DataFrame()
processed_features["latitude"] = linear_scale(examples_dataframe["latitude"])
processed_features["longitude"] = linear_scale(examples_dataframe["longitude"])
processed_features["housing_median_age"] = linear_scale(examples_dataframe["housing_median_age"])
processed_features["total_rooms"] = linear_scale(examples_dataframe["total_rooms"])
processed_features["total_bedrooms"] = linear_scale(examples_dataframe["total_bedrooms"])
processed_features["population"] = linear_scale(examples_dataframe["population"])
processed_features["households"] = linear_scale(examples_dataframe["households"])
processed_features["median_income"] = linear_scale(examples_dataframe["median_income"])
processed_features["rooms_per_person"] = linear_scale(examples_dataframe["rooms_per_person"])
return processed_features
normalized_dataframe = normalize_linear_scale(preprocess_features(california_housing_dataframe))
normalized_training_examples = normalized_dataframe.head(12000)
normalized_validation_examples = normalized_dataframe.tail(5000)
_ = train_nn_regression_model(
my_optimizer=tf.train.GradientDescentOptimizer(learning_rate=0.005),
steps=2000,
batch_size=50,
hidden_units=[10, 10],
training_examples=normalized_training_examples,
training_targets=training_targets,
validation_examples=normalized_validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
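###Markdown
A minimal sketch of the train/test practice mentioned above (illustrative only; `linear_scale_with_params`, `example_train` and `example_test` are hypothetical names, not part of the exercise): derive min and max from the training set alone, then reuse those parameters on the test set.
###Code
def linear_scale_with_params(series, min_val, max_val):
  """Linearly scales `series` using externally supplied min/max parameters."""
  scale = (max_val - min_val) / 2.0
  return series.apply(lambda x: ((x - min_val) / scale) - 1.0)

example_train = pd.Series([0.0, 5.0, 10.0])
example_test = pd.Series([2.0, 12.0])
# Parameters come from the training set only, so 12.0 scales to 1.4 (> 1).
linear_scale_with_params(example_test, example_train.min(), example_train.max())
###Output
_____no_output_____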
###Markdown
Task 2: Try a Different Optimizer. **Use the Adagrad and Adam optimizers and compare performance.** The Adagrad optimizer is one alternative. The key insight of Adagrad is that it modifies the learning rate adaptively for each coefficient in a model, monotonically lowering the effective learning rate. This works great for convex problems, but isn't always ideal for the non-convex problem of neural net training. You can use Adagrad by specifying `AdagradOptimizer` instead of `GradientDescentOptimizer`. Note that you may need to use a larger learning rate with Adagrad. For non-convex optimization problems, Adam is sometimes more efficient than Adagrad. To use Adam, invoke the `tf.train.AdamOptimizer` method. This method takes several optional hyperparameters as arguments, but our solution only specifies one of these (`learning_rate`). In a production setting, you should specify and tune the optional hyperparameters carefully.
###Code
#
# YOUR CODE HERE: Retrain the network using Adagrad and then Adam.
#
###Output
_____no_output_____
###Markdown
Solution. Click below for the solution. First, let's try Adagrad.
###Code
_, adagrad_training_losses, adagrad_validation_losses = train_nn_regression_model(
my_optimizer=tf.train.AdagradOptimizer(learning_rate=0.5),
steps=500,
batch_size=100,
hidden_units=[10, 10],
training_examples=normalized_training_examples,
training_targets=training_targets,
validation_examples=normalized_validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
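###Markdown
A minimal sketch of the per-coefficient idea behind Adagrad (illustrative NumPy code, not TensorFlow's implementation; `adagrad_step` is a hypothetical helper): each parameter accumulates its own squared gradients, and its effective learning rate shrinks monotonically as that history grows.
###Code
def adagrad_step(params, grads, accum, learning_rate=0.5, epsilon=1e-7):
  """One Adagrad update; `params`, `grads` and `accum` are NumPy arrays."""
  accum = accum + grads ** 2  # per-coefficient history of squared gradients
  params = params - learning_rate * grads / (np.sqrt(accum) + epsilon)
  return params, accum

adagrad_params = np.zeros(3)
adagrad_accum = np.zeros(3)
for step_grads in [np.array([1.0, 0.1, 0.0])] * 3:
  # Each coefficient's step shrinks as its own squared-gradient history grows.
  adagrad_params, adagrad_accum = adagrad_step(adagrad_params, step_grads, adagrad_accum)
adagrad_params
###Output
_____no_output_____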
###Markdown
Now let's try Adam.
###Code
_, adam_training_losses, adam_validation_losses = train_nn_regression_model(
my_optimizer=tf.train.AdamOptimizer(learning_rate=0.009),
steps=500,
batch_size=100,
hidden_units=[10, 10],
training_examples=normalized_training_examples,
training_targets=training_targets,
validation_examples=normalized_validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
###Markdown
Let's print a graph of loss metrics side by side.
###Code
plt.ylabel("RMSE")
plt.xlabel("Periods")
plt.title("Root Mean Squared Error vs. Periods")
plt.plot(adagrad_training_losses, label='Adagrad training')
plt.plot(adagrad_validation_losses, label='Adagrad validation')
plt.plot(adam_training_losses, label='Adam training')
plt.plot(adam_validation_losses, label='Adam validation')
_ = plt.legend()
###Output
_____no_output_____
###Markdown
Task 3: Explore Alternate Normalization Methods. **Try alternate normalizations for various features to further improve performance.** If you look closely at summary stats for your transformed data, you may notice that linearly scaling some features leaves them clumped close to `-1`. For example, many features have a median of `-0.8` or so, rather than `0.0`.
###Code
_ = normalized_training_examples.hist(bins=20, figsize=(18, 12), xlabelsize=10)
###Output
_____no_output_____
###Markdown
We might be able to do better by choosing additional ways to transform these features. For example, a log scaling might help some features, or clipping extreme values may make the remainder of the scale more informative.
###Code
def log_normalize(series):
return series.apply(lambda x:math.log(x+1.0))
def clip(series, clip_to_min, clip_to_max):
return series.apply(lambda x:(
min(max(x, clip_to_min), clip_to_max)))
def z_score_normalize(series):
mean = series.mean()
std_dv = series.std()
return series.apply(lambda x:(x - mean) / std_dv)
def binary_threshold(series, threshold):
return series.apply(lambda x:(1 if x > threshold else 0))
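# Illustrative spot checks of the helpers above (assumed toy values):
log_normalize(pd.Series([0.0, math.e - 1.0]))  # -> [0.0, 1.0]
clip(pd.Series([-5.0, 3.0, 42.0]), 0.0, 10.0)  # -> [0.0, 3.0, 10.0]
z_score_normalize(pd.Series([1.0, 2.0, 3.0]))  # -> [-1.0, 0.0, 1.0]
binary_threshold(pd.Series([0.2, 0.9]), 0.5)   # -> [0, 1]
# If the target itself is normalized, invert the transform on predictions
# (e.g. prediction * std_dv + mean for a z-score) before computing loss.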
###Output
_____no_output_____
###Markdown
The block above contains a few additional possible normalization functions; try some of these, or add your own. Note that if you normalize the target, you'll need to un-normalize the predictions for loss metrics to be comparable.
###Code
def normalize(examples_dataframe):
"""Returns a version of the input `DataFrame` that has all its features normalized."""
#
# YOUR CODE HERE: Normalize the inputs.
#
pass
normalized_dataframe = normalize(preprocess_features(california_housing_dataframe))
normalized_training_examples = normalized_dataframe.head(12000)
normalized_validation_examples = normalized_dataframe.tail(5000)
_ = train_nn_regression_model(
my_optimizer=tf.train.GradientDescentOptimizer(learning_rate=0.0007),
steps=5000,
batch_size=70,
hidden_units=[10, 10],
training_examples=normalized_training_examples,
training_targets=training_targets,
validation_examples=normalized_validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
###Markdown
Solution. Click below for one possible solution. These are only a few ways in which we could think about the data; other transformations may work even better! `households`, `median_income` and `total_bedrooms` all appear normally distributed in a log space. `latitude`, `longitude` and `housing_median_age` would probably be better off just scaled linearly, as before. `population`, `total_rooms` and `rooms_per_person` have a few extreme outliers, which seem too extreme for log normalization to help, so let's clip them instead.
###Code
def normalize(examples_dataframe):
"""Returns a version of the input `DataFrame` that has all its features normalized."""
processed_features = pd.DataFrame()
processed_features["households"] = log_normalize(examples_dataframe["households"])
processed_features["median_income"] = log_normalize(examples_dataframe["median_income"])
processed_features["total_bedrooms"] = log_normalize(examples_dataframe["total_bedrooms"])
processed_features["latitude"] = linear_scale(examples_dataframe["latitude"])
processed_features["longitude"] = linear_scale(examples_dataframe["longitude"])
processed_features["housing_median_age"] = linear_scale(examples_dataframe["housing_median_age"])
processed_features["population"] = linear_scale(clip(examples_dataframe["population"], 0, 5000))
processed_features["rooms_per_person"] = linear_scale(clip(examples_dataframe["rooms_per_person"], 0, 5))
processed_features["total_rooms"] = linear_scale(clip(examples_dataframe["total_rooms"], 0, 10000))
return processed_features
normalized_dataframe = normalize(preprocess_features(california_housing_dataframe))
normalized_training_examples = normalized_dataframe.head(12000)
normalized_validation_examples = normalized_dataframe.tail(5000)
_ = train_nn_regression_model(
my_optimizer=tf.train.AdagradOptimizer(learning_rate=0.15),
steps=1000,
batch_size=50,
hidden_units=[10, 10],
training_examples=normalized_training_examples,
training_targets=training_targets,
validation_examples=normalized_validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
###Markdown
Optional Challenge: Use Only Latitude and Longitude Features. **Train a NN model that uses only latitude and longitude as features.** Real estate people are fond of saying that location is the only important feature in housing price. Let's see if we can confirm this by training a model that uses only latitude and longitude as features. This will only work well if our NN can learn complex nonlinearities from latitude and longitude. **NOTE:** We may need a network structure that has more layers than were useful earlier in the exercise.
###Code
#
# YOUR CODE HERE: Train the network using only latitude and longitude
#
###Output
_____no_output_____
###Markdown
Solution. Click below for a possible solution. It's a good idea to keep latitude and longitude normalized:
###Code
def location_location_location(examples_dataframe):
"""Returns a version of the input `DataFrame` that keeps only the latitude and longitude."""
processed_features = pd.DataFrame()
processed_features["latitude"] = linear_scale(examples_dataframe["latitude"])
processed_features["longitude"] = linear_scale(examples_dataframe["longitude"])
return processed_features
lll_dataframe = location_location_location(preprocess_features(california_housing_dataframe))
lll_training_examples = lll_dataframe.head(12000)
lll_validation_examples = lll_dataframe.tail(5000)
_ = train_nn_regression_model(
my_optimizer=tf.train.AdagradOptimizer(learning_rate=0.05),
steps=500,
batch_size=50,
hidden_units=[10, 10, 5, 5, 5],
training_examples=lll_training_examples,
training_targets=training_targets,
validation_examples=lll_validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
###Markdown
[View in Colaboratory](https://colab.research.google.com/github/rogueai/tensorflow-crash-course/blob/master/improving_neural_net_performance.ipynb) Copyright 2017 Google LLC.
###Code
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Improving Neural Net Performance. **Learning Objective:** Improve the performance of a neural network by normalizing features and applying various optimization algorithms. **NOTE:** The optimization methods described in this exercise are not specific to neural networks; they are effective means to improve most types of models. Setup: First, we'll load the data.
###Code
from __future__ import print_function
import math
from IPython import display
from matplotlib import cm
from matplotlib import gridspec
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import metrics
import tensorflow as tf
from tensorflow.python.data import Dataset
tf.logging.set_verbosity(tf.logging.ERROR)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
california_housing_dataframe = pd.read_csv("https://storage.googleapis.com/mledu-datasets/california_housing_train.csv", sep=",")
california_housing_dataframe = california_housing_dataframe.reindex(
np.random.permutation(california_housing_dataframe.index))
def preprocess_features(california_housing_dataframe):
"""Prepares input features from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the features to be used for the model, including
synthetic features.
"""
selected_features = california_housing_dataframe[
["latitude",
"longitude",
"housing_median_age",
"total_rooms",
"total_bedrooms",
"population",
"households",
"median_income"]]
processed_features = selected_features.copy()
# Create a synthetic feature.
processed_features["rooms_per_person"] = (
california_housing_dataframe["total_rooms"] /
california_housing_dataframe["population"])
return processed_features
def preprocess_targets(california_housing_dataframe):
"""Prepares target features (i.e., labels) from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the target feature.
"""
output_targets = pd.DataFrame()
# Scale the target to be in units of thousands of dollars.
output_targets["median_house_value"] = (
california_housing_dataframe["median_house_value"] / 1000.0)
return output_targets
# Choose the first 12000 (out of 17000) examples for training.
training_examples = preprocess_features(california_housing_dataframe.head(12000))
training_targets = preprocess_targets(california_housing_dataframe.head(12000))
# Choose the last 5000 (out of 17000) examples for validation.
validation_examples = preprocess_features(california_housing_dataframe.tail(5000))
validation_targets = preprocess_targets(california_housing_dataframe.tail(5000))
# Double-check that we've done the right thing.
print("Training examples summary:")
display.display(training_examples.describe())
print("Validation examples summary:")
display.display(validation_examples.describe())
print("Training targets summary:")
display.display(training_targets.describe())
print("Validation targets summary:")
display.display(validation_targets.describe())
###Output
_____no_output_____
###Markdown
Train the Neural Network
Next, we'll train the neural network.
###Code
def construct_feature_columns(input_features):
"""Construct the TensorFlow Feature Columns.
Args:
input_features: The names of the numerical input features to use.
Returns:
A set of feature columns
"""
return set([tf.feature_column.numeric_column(my_feature)
for my_feature in input_features])
def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None):
"""Trains a neural network model.
Args:
features: pandas DataFrame of features
targets: pandas DataFrame of targets
batch_size: Size of batches to be passed to the model
shuffle: True or False. Whether to shuffle the data.
num_epochs: Number of epochs for which data should be repeated. None = repeat indefinitely
Returns:
Tuple of (features, labels) for next data batch
"""
# Convert pandas data into a dict of np arrays.
features = {key:np.array(value) for key,value in dict(features).items()}
# Construct a dataset, and configure batching/repeating.
ds = Dataset.from_tensor_slices((features,targets)) # warning: 2GB limit
ds = ds.batch(batch_size).repeat(num_epochs)
# Shuffle the data, if specified.
if shuffle:
ds = ds.shuffle(10000)
# Return the next batch of data.
features, labels = ds.make_one_shot_iterator().get_next()
return features, labels
def train_nn_regression_model(
my_optimizer,
steps,
batch_size,
hidden_units,
training_examples,
training_targets,
validation_examples,
validation_targets):
"""Trains a neural network regression model.
In addition to training, this function also prints training progress information,
as well as a plot of the training and validation loss over time.
Args:
my_optimizer: An instance of `tf.train.Optimizer`, the optimizer to use.
steps: A non-zero `int`, the total number of training steps. A training step
consists of a forward and backward pass using a single batch.
batch_size: A non-zero `int`, the batch size.
hidden_units: A `list` of int values, specifying the number of neurons in each layer.
training_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for training.
training_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for training.
validation_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for validation.
validation_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for validation.
Returns:
A tuple `(estimator, training_losses, validation_losses)`:
estimator: the trained `DNNRegressor` object.
training_losses: a `list` containing the training loss values taken during training.
validation_losses: a `list` containing the validation loss values taken during training.
"""
periods = 10
steps_per_period = steps // periods  # integer division: the estimator expects an int step count
# Create a DNNRegressor object.
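# Wrap the optimizer so gradients are clipped to a max norm of 5.0 (stabilizes training).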
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
dnn_regressor = tf.estimator.DNNRegressor(
feature_columns=construct_feature_columns(training_examples),
hidden_units=hidden_units,
optimizer=my_optimizer
)
# Create input functions.
training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value"],
batch_size=batch_size)
predict_training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value"],
num_epochs=1,
shuffle=False)
predict_validation_input_fn = lambda: my_input_fn(validation_examples,
validation_targets["median_house_value"],
num_epochs=1,
shuffle=False)
# Train the model, but do so inside a loop so that we can periodically assess
# loss metrics.
print("Training model...")
print("RMSE (on training data):")
training_rmse = []
validation_rmse = []
for period in range (0, periods):
# Train the model, starting from the prior state.
dnn_regressor.train(
input_fn=training_input_fn,
steps=steps_per_period
)
# Take a break and compute predictions.
training_predictions = dnn_regressor.predict(input_fn=predict_training_input_fn)
training_predictions = np.array([item['predictions'][0] for item in training_predictions])
validation_predictions = dnn_regressor.predict(input_fn=predict_validation_input_fn)
validation_predictions = np.array([item['predictions'][0] for item in validation_predictions])
# Compute training and validation loss.
training_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(training_predictions, training_targets))
validation_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(validation_predictions, validation_targets))
# Occasionally print the current loss.
print(" period %02d : %0.2f" % (period, training_root_mean_squared_error))
# Add the loss metrics from this period to our list.
training_rmse.append(training_root_mean_squared_error)
validation_rmse.append(validation_root_mean_squared_error)
print("Model training finished.")
# Output a graph of loss metrics over periods.
plt.ylabel("RMSE")
plt.xlabel("Periods")
plt.title("Root Mean Squared Error vs. Periods")
plt.tight_layout()
plt.plot(training_rmse, label="training")
plt.plot(validation_rmse, label="validation")
plt.legend()
print("Final RMSE (on training data): %0.2f" % training_root_mean_squared_error)
print("Final RMSE (on validation data): %0.2f" % validation_root_mean_squared_error)
return dnn_regressor, training_rmse, validation_rmse
_ = train_nn_regression_model(
my_optimizer=tf.train.GradientDescentOptimizer(learning_rate=0.0007),
steps=5000,
batch_size=70,
hidden_units=[10, 10],
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
###Markdown
Linear Scaling
It is good standard practice to normalize the inputs so they fall within the range [-1, 1]. This helps SGD avoid taking steps that are too large in one dimension and too small in another. Fans of numerical optimization may note that there's a connection to the idea of using a preconditioner here.
###Code
def linear_scale(series):
min_val = series.min()
max_val = series.max()
scale = (max_val - min_val) / 2.0
return series.apply(lambda x:((x - min_val) / scale) - 1.0)
###Output
_____no_output_____
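###Markdown
As a quick sanity check (an illustrative sketch, not part of the original exercise), applying `linear_scale` to a toy Series should map its minimum to -1.0 and its maximum to 1.0. The values below are arbitrary.
###Code
toy_series = pd.Series([10.0, 25.0, 40.0, 55.0, 70.0])
scaled_series = linear_scale(toy_series)
# The smallest value maps to -1.0 and the largest to 1.0.
print(scaled_series.min(), scaled_series.max())
###Output
_____no_output_____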
###Markdown
Task 1: Normalize the Features Using Linear Scaling
**Normalize the inputs to the scale [-1, 1].**
**Spend about 5 minutes training and evaluating on the newly normalized data. How well can you do?**
As a rule of thumb, NNs train best when the input features are roughly on the same scale.
Sanity check your normalized data. (What would happen if you forgot to normalize one feature?)
###Code
def normalize_linear_scale(examples_dataframe):
"""Returns a version of the input `DataFrame` that has all its features normalized linearly."""
#
# Your code here: normalize the inputs.
#
pass
normalized_dataframe = normalize_linear_scale(preprocess_features(california_housing_dataframe))
normalized_training_examples = normalized_dataframe.head(12000)
normalized_validation_examples = normalized_dataframe.tail(5000)
_ = train_nn_regression_model(
my_optimizer=tf.train.GradientDescentOptimizer(learning_rate=0.0007),
steps=5000,
batch_size=70,
hidden_units=[10, 10],
training_examples=normalized_training_examples,
training_targets=training_targets,
validation_examples=normalized_validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
###Markdown
Solution
Click below for one possible solution.
Since normalization uses min and max, we have to ensure it's done on the entire dataset at once. We can do that here because all our data is in a single DataFrame. If we had multiple data sets, a good practice would be to derive the normalization parameters from the training set and apply those identically to the test set.
###Code
def normalize_linear_scale(examples_dataframe):
"""Returns a version of the input `DataFrame` that has all its features normalized linearly."""
processed_features = pd.DataFrame()
processed_features["latitude"] = linear_scale(examples_dataframe["latitude"])
processed_features["longitude"] = linear_scale(examples_dataframe["longitude"])
processed_features["housing_median_age"] = linear_scale(examples_dataframe["housing_median_age"])
processed_features["total_rooms"] = linear_scale(examples_dataframe["total_rooms"])
processed_features["total_bedrooms"] = linear_scale(examples_dataframe["total_bedrooms"])
processed_features["population"] = linear_scale(examples_dataframe["population"])
processed_features["households"] = linear_scale(examples_dataframe["households"])
processed_features["median_income"] = linear_scale(examples_dataframe["median_income"])
processed_features["rooms_per_person"] = linear_scale(examples_dataframe["rooms_per_person"])
return processed_features
normalized_dataframe = normalize_linear_scale(preprocess_features(california_housing_dataframe))
normalized_training_examples = normalized_dataframe.head(12000)
normalized_validation_examples = normalized_dataframe.tail(5000)
_ = train_nn_regression_model(
my_optimizer=tf.train.GradientDescentOptimizer(learning_rate=0.005),
steps=2000,
batch_size=50,
hidden_units=[10, 10],
training_examples=normalized_training_examples,
training_targets=training_targets,
validation_examples=normalized_validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
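###Markdown
As an aside, here is a minimal sketch (not part of the original exercise) of the multiple-data-set case just described: freeze the min and max on the training set, then apply the same transform to the validation set. The helper `fit_linear_scale` is hypothetical.
###Code
def fit_linear_scale(series):
  """Returns a scaling function whose min/max are frozen from `series`."""
  min_val = series.min()
  max_val = series.max()
  scale = (max_val - min_val) / 2.0
  return lambda s: s.apply(lambda x: ((x - min_val) / scale) - 1.0)

# Fit on training data only, then reuse the same parameters for validation.
# Note: validation values outside the training range will fall outside [-1, 1].
income_scaler = fit_linear_scale(training_examples["median_income"])
train_income_scaled = income_scaler(training_examples["median_income"])
validation_income_scaled = income_scaler(validation_examples["median_income"])
###Output
_____no_output_____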
###Markdown
Task 2: Try a Different Optimizer
**Use the Adagrad and Adam optimizers and compare performance.**
The Adagrad optimizer is one alternative. The key insight of Adagrad is that it modifies the learning rate adaptively for each coefficient in a model, monotonically lowering the effective learning rate. This works great for convex problems, but isn't always ideal for the non-convex problem of neural net training. You can use Adagrad by specifying `AdagradOptimizer` instead of `GradientDescentOptimizer`. Note that you may need to use a larger learning rate with Adagrad.
For non-convex optimization problems, Adam is sometimes more efficient than Adagrad. To use Adam, construct a `tf.train.AdamOptimizer`. The constructor takes several optional hyperparameters, but our solution only specifies one of these (`learning_rate`). In a production setting, you should specify and tune the optional hyperparameters carefully.
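To build intuition, the next cell sketches the per-coefficient Adagrad update in plain NumPy. This is an illustration only, with made-up gradients; it is not the exercise code.
###Code
# Illustrative Adagrad sketch: per-coefficient accumulation of squared
# gradients shrinks the effective step size over time.
accumulator = np.zeros(3)
weights = np.zeros(3)
learning_rate = 0.5
fake_gradients = [np.array([1.0, 0.1, 0.0]), np.array([0.8, 0.1, 0.0])]
for grad in fake_gradients:
  accumulator += grad ** 2
  # Coefficients with consistently large gradients take progressively smaller steps.
  weights -= learning_rate * grad / (np.sqrt(accumulator) + 1e-7)
print(weights)
###Output
_____no_output_____
###Markdown
Because the accumulator only grows, the effective learning rate only decreases, which is why Adagrad often tolerates (and benefits from) a larger initial learning rate.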
###Code
#
# YOUR CODE HERE: Retrain the network using Adagrad and then Adam.
#
###Output
_____no_output_____
###Markdown
Solution
Click below for the solution.
First, let's try Adagrad.
###Code
_, adagrad_training_losses, adagrad_validation_losses = train_nn_regression_model(
my_optimizer=tf.train.AdagradOptimizer(learning_rate=0.5),
steps=500,
batch_size=100,
hidden_units=[10, 10],
training_examples=normalized_training_examples,
training_targets=training_targets,
validation_examples=normalized_validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
###Markdown
Now let's try Adam.
###Code
_, adam_training_losses, adam_validation_losses = train_nn_regression_model(
my_optimizer=tf.train.AdamOptimizer(learning_rate=0.009),
steps=500,
batch_size=100,
hidden_units=[10, 10],
training_examples=normalized_training_examples,
training_targets=training_targets,
validation_examples=normalized_validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
###Markdown
Let's plot the loss metrics side by side.
###Code
plt.ylabel("RMSE")
plt.xlabel("Periods")
plt.title("Root Mean Squared Error vs. Periods")
plt.plot(adagrad_training_losses, label='Adagrad training')
plt.plot(adagrad_validation_losses, label='Adagrad validation')
plt.plot(adam_training_losses, label='Adam training')
plt.plot(adam_validation_losses, label='Adam validation')
_ = plt.legend()
###Output
_____no_output_____
###Markdown
Task 3: Explore Alternate Normalization Methods
**Try alternate normalizations for various features to further improve performance.**
If you look closely at summary stats for your transformed data, you may notice that linear scaling some features leaves them clumped close to `-1`.
For example, many features have a median of `-0.8` or so, rather than `0.0`.
###Code
_ = normalized_training_examples.hist(bins=20, figsize=(18, 12), xlabelsize=10)
###Output
_____no_output_____
###Markdown
We might be able to do better by choosing additional ways to transform these features.For example, a log scaling might help some features. Or clipping extreme values may make the remainder of the scale more informative.
###Code
def log_normalize(series):
return series.apply(lambda x:math.log(x+1.0))
def clip(series, clip_to_min, clip_to_max):
return series.apply(lambda x:(
min(max(x, clip_to_min), clip_to_max)))
def z_score_normalize(series):
mean = series.mean()
std_dv = series.std()
return series.apply(lambda x:(x - mean) / std_dv)
def binary_threshold(series, threshold):
return series.apply(lambda x:(1 if x > threshold else 0))
###Output
_____no_output_____
###Markdown
The block above contains a few additional possible normalization functions. Try some of these, or add your own.
Note that if you normalize the target, you'll need to un-normalize the predictions for loss metrics to be comparable.
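As a minimal illustration (not part of the original exercise), the next cell shows the round trip on a toy Series: z-score-normalize, then invert with the stored mean and standard deviation, which is exactly what you would do to predictions if the target had been normalized.
###Code
# Toy values are made up for this sketch.
toy_target = pd.Series([100.0, 150.0, 200.0, 250.0, 800.0])
target_mean = toy_target.mean()
target_std = toy_target.std()
normalized_target = z_score_normalize(toy_target)
# Un-normalize (e.g., model predictions) before computing RMSE:
recovered_target = normalized_target * target_std + target_mean
print(np.allclose(recovered_target, toy_target))  # expect True
###Output
_____no_output_____
###Markdown
With that in mind, try your own normalization below.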
###Code
def normalize(examples_dataframe):
"""Returns a version of the input `DataFrame` that has all its features normalized."""
#
# YOUR CODE HERE: Normalize the inputs.
#
pass
normalized_dataframe = normalize(preprocess_features(california_housing_dataframe))
normalized_training_examples = normalized_dataframe.head(12000)
normalized_validation_examples = normalized_dataframe.tail(5000)
_ = train_nn_regression_model(
my_optimizer=tf.train.GradientDescentOptimizer(learning_rate=0.0007),
steps=5000,
batch_size=70,
hidden_units=[10, 10],
training_examples=normalized_training_examples,
training_targets=training_targets,
validation_examples=normalized_validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
###Markdown
Solution
Click below for one possible solution.
These are only a few ways in which we could think about the data. Other transformations may work even better!
`households`, `median_income` and `total_bedrooms` all appear normally distributed in a log space.
`latitude`, `longitude` and `housing_median_age` would probably be better off just scaled linearly, as before.
`population`, `total_rooms` and `rooms_per_person` have a few extreme outliers. They seem too extreme for log normalization to help, so let's clip them instead.
###Code
def normalize(examples_dataframe):
"""Returns a version of the input `DataFrame` that has all its features normalized."""
processed_features = pd.DataFrame()
processed_features["households"] = log_normalize(examples_dataframe["households"])
processed_features["median_income"] = log_normalize(examples_dataframe["median_income"])
processed_features["total_bedrooms"] = log_normalize(examples_dataframe["total_bedrooms"])
processed_features["latitude"] = linear_scale(examples_dataframe["latitude"])
processed_features["longitude"] = linear_scale(examples_dataframe["longitude"])
processed_features["housing_median_age"] = linear_scale(examples_dataframe["housing_median_age"])
processed_features["population"] = linear_scale(clip(examples_dataframe["population"], 0, 5000))
processed_features["rooms_per_person"] = linear_scale(clip(examples_dataframe["rooms_per_person"], 0, 5))
processed_features["total_rooms"] = linear_scale(clip(examples_dataframe["total_rooms"], 0, 10000))
return processed_features
normalized_dataframe = normalize(preprocess_features(california_housing_dataframe))
normalized_training_examples = normalized_dataframe.head(12000)
normalized_validation_examples = normalized_dataframe.tail(5000)
_ = train_nn_regression_model(
my_optimizer=tf.train.AdagradOptimizer(learning_rate=0.15),
steps=1000,
batch_size=50,
hidden_units=[10, 10],
training_examples=normalized_training_examples,
training_targets=training_targets,
validation_examples=normalized_validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
###Markdown
Optional Challenge: Use only Latitude and Longitude Features
**Train a NN model that uses only latitude and longitude as features.**
Real estate people are fond of saying that location is the only important feature in housing prices.
Let's see if we can confirm this by training a model that uses only latitude and longitude as features.
This will only work well if our NN can learn complex nonlinearities from latitude and longitude.
**NOTE:** We may need a network structure that has more layers than were useful earlier in the exercise.
###Code
#
# YOUR CODE HERE: Train the network using only latitude and longitude
#
###Output
_____no_output_____
###Markdown
Solution
Click below for a possible solution.
It's a good idea to keep latitude and longitude normalized:
###Code
def location_location_location(examples_dataframe):
"""Returns a version of the input `DataFrame` that keeps only the latitude and longitude."""
processed_features = pd.DataFrame()
processed_features["latitude"] = linear_scale(examples_dataframe["latitude"])
processed_features["longitude"] = linear_scale(examples_dataframe["longitude"])
return processed_features
lll_dataframe = location_location_location(preprocess_features(california_housing_dataframe))
lll_training_examples = lll_dataframe.head(12000)
lll_validation_examples = lll_dataframe.tail(5000)
_ = train_nn_regression_model(
my_optimizer=tf.train.AdagradOptimizer(learning_rate=0.05),
steps=500,
batch_size=50,
hidden_units=[10, 10, 5, 5, 5],
training_examples=lll_training_examples,
training_targets=training_targets,
validation_examples=lll_validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
###Markdown
[View in Colaboratory](https://colab.research.google.com/github/nikhilbhatewara/GoogleMachineLearningCrashCourse/blob/master/improving_neural_net_performance.ipynb)
Copyright 2017 Google LLC.
###Code
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Improving Neural Net Performance
**Learning Objective:** Improve the performance of a neural network by normalizing features and applying various optimization algorithms
**NOTE:** The optimization methods described in this exercise are not specific to neural networks; they are effective means to improve most types of models.
Setup
First, we'll load the data.
###Code
from __future__ import print_function
import math
from IPython import display
from matplotlib import cm
from matplotlib import gridspec
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import metrics
import tensorflow as tf
from tensorflow.python.data import Dataset
tf.logging.set_verbosity(tf.logging.ERROR)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
california_housing_dataframe = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_train.csv", sep=",")
california_housing_dataframe = california_housing_dataframe.reindex(
np.random.permutation(california_housing_dataframe.index))
def preprocess_features(california_housing_dataframe):
"""Prepares input features from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the features to be used for the model, including
synthetic features.
"""
selected_features = california_housing_dataframe[
["latitude",
"longitude",
"housing_median_age",
"total_rooms",
"total_bedrooms",
"population",
"households",
"median_income"]]
processed_features = selected_features.copy()
# Create a synthetic feature.
processed_features["rooms_per_person"] = (
california_housing_dataframe["total_rooms"] /
california_housing_dataframe["population"])
return processed_features
def preprocess_targets(california_housing_dataframe):
"""Prepares target features (i.e., labels) from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the target feature.
"""
output_targets = pd.DataFrame()
# Scale the target to be in units of thousands of dollars.
output_targets["median_house_value"] = (
california_housing_dataframe["median_house_value"] / 1000.0)
return output_targets
# Choose the first 12000 (out of 17000) examples for training.
training_examples = preprocess_features(california_housing_dataframe.head(12000))
training_targets = preprocess_targets(california_housing_dataframe.head(12000))
# Choose the last 5000 (out of 17000) examples for validation.
validation_examples = preprocess_features(california_housing_dataframe.tail(5000))
validation_targets = preprocess_targets(california_housing_dataframe.tail(5000))
# Double-check that we've done the right thing.
print("Training examples summary:")
display.display(training_examples.describe())
print("Validation examples summary:")
display.display(validation_examples.describe())
print("Training targets summary:")
display.display(training_targets.describe())
print("Validation targets summary:")
display.display(validation_targets.describe())
###Output
Training examples summary:
###Markdown
Train the Neural Network
Next, we'll train the neural network.
###Code
def construct_feature_columns(input_features):
"""Construct the TensorFlow Feature Columns.
Args:
input_features: The names of the numerical input features to use.
Returns:
A set of feature columns
"""
return set([tf.feature_column.numeric_column(my_feature)
for my_feature in input_features])
def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None):
"""Trains a neural network model.
Args:
features: pandas DataFrame of features
targets: pandas DataFrame of targets
batch_size: Size of batches to be passed to the model
shuffle: True or False. Whether to shuffle the data.
num_epochs: Number of epochs for which data should be repeated. None = repeat indefinitely
Returns:
Tuple of (features, labels) for next data batch
"""
# Convert pandas data into a dict of np arrays.
features = {key:np.array(value) for key,value in dict(features).items()}
# Construct a dataset, and configure batching/repeating.
ds = Dataset.from_tensor_slices((features,targets)) # warning: 2GB limit
ds = ds.batch(batch_size).repeat(num_epochs)
# Shuffle the data, if specified.
if shuffle:
ds = ds.shuffle(10000)
# Return the next batch of data.
features, labels = ds.make_one_shot_iterator().get_next()
return features, labels
def train_nn_regression_model(
my_optimizer,
steps,
batch_size,
hidden_units,
training_examples,
training_targets,
validation_examples,
validation_targets):
"""Trains a neural network regression model.
In addition to training, this function also prints training progress information,
as well as a plot of the training and validation loss over time.
Args:
my_optimizer: An instance of `tf.train.Optimizer`, the optimizer to use.
steps: A non-zero `int`, the total number of training steps. A training step
consists of a forward and backward pass using a single batch.
batch_size: A non-zero `int`, the batch size.
hidden_units: A `list` of int values, specifying the number of neurons in each layer.
training_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for training.
training_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for training.
validation_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for validation.
validation_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for validation.
Returns:
A tuple `(estimator, training_losses, validation_losses)`:
estimator: the trained `DNNRegressor` object.
training_losses: a `list` containing the training loss values taken during training.
validation_losses: a `list` containing the validation loss values taken during training.
"""
periods = 10
steps_per_period = steps // periods  # integer division: the estimator expects an int step count
# Create a DNNRegressor object.
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
dnn_regressor = tf.estimator.DNNRegressor(
feature_columns=construct_feature_columns(training_examples),
hidden_units=hidden_units,
optimizer=my_optimizer
)
# Create input functions.
training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value"],
batch_size=batch_size)
predict_training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value"],
num_epochs=1,
shuffle=False)
predict_validation_input_fn = lambda: my_input_fn(validation_examples,
validation_targets["median_house_value"],
num_epochs=1,
shuffle=False)
# Train the model, but do so inside a loop so that we can periodically assess
# loss metrics.
print("Training model...")
print("RMSE (on training data):")
training_rmse = []
validation_rmse = []
for period in range (0, periods):
# Train the model, starting from the prior state.
dnn_regressor.train(
input_fn=training_input_fn,
steps=steps_per_period
)
# Take a break and compute predictions.
training_predictions = dnn_regressor.predict(input_fn=predict_training_input_fn)
training_predictions = np.array([item['predictions'][0] for item in training_predictions])
validation_predictions = dnn_regressor.predict(input_fn=predict_validation_input_fn)
validation_predictions = np.array([item['predictions'][0] for item in validation_predictions])
# Compute training and validation loss.
training_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(training_predictions, training_targets))
validation_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(validation_predictions, validation_targets))
# Occasionally print the current loss.
print(" period %02d : %0.2f" % (period, training_root_mean_squared_error))
# Add the loss metrics from this period to our list.
training_rmse.append(training_root_mean_squared_error)
validation_rmse.append(validation_root_mean_squared_error)
print("Model training finished.")
# Output a graph of loss metrics over periods.
plt.ylabel("RMSE")
plt.xlabel("Periods")
plt.title("Root Mean Squared Error vs. Periods")
plt.tight_layout()
plt.plot(training_rmse, label="training")
plt.plot(validation_rmse, label="validation")
plt.legend()
print("Final RMSE (on training data): %0.2f" % training_root_mean_squared_error)
print("Final RMSE (on validation data): %0.2f" % validation_root_mean_squared_error)
return dnn_regressor, training_rmse, validation_rmse
_ = train_nn_regression_model(
my_optimizer=tf.train.GradientDescentOptimizer(learning_rate=0.0007),
steps=5000,
batch_size=70,
hidden_units=[10, 10],
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
###Output
Training model...
RMSE (on training data):
period 00 : 148.34
period 01 : 127.01
period 02 : 117.80
period 03 : 110.63
period 04 : 107.46
period 05 : 105.47
period 06 : 104.04
period 07 : 105.65
period 08 : 103.24
period 09 : 102.00
Model training finished.
Final RMSE (on training data): 102.00
Final RMSE (on validation data): 103.16
###Markdown
Linear Scaling
It is good standard practice to normalize the inputs so they fall within the range [-1, 1]. This helps SGD avoid taking steps that are too large in one dimension and too small in another. Fans of numerical optimization may note that there's a connection to the idea of using a preconditioner here.
###Code
def linear_scale(series):
min_val = series.min()
max_val = series.max()
scale = (max_val - min_val) / 2.0
return series.apply(lambda x:((x - min_val) / scale) - 1.0)
###Output
_____no_output_____
###Markdown
Task 1: Normalize the Features Using Linear Scaling
**Normalize the inputs to the scale [-1, 1].**
**Spend about 5 minutes training and evaluating on the newly normalized data. How well can you do?**
As a rule of thumb, NNs train best when the input features are roughly on the same scale.
Sanity check your normalized data. (What would happen if you forgot to normalize one feature?)
###Code
def normalize_linear_scale(examples_dataframe):
"""Returns a version of the input `DataFrame` that has all its features normalized linearly."""
#
# Your code here: normalize the inputs.
#
pass
normalized_dataframe = normalize_linear_scale(preprocess_features(california_housing_dataframe))
normalized_training_examples = normalized_dataframe.head(12000)
normalized_validation_examples = normalized_dataframe.tail(5000)
_ = train_nn_regression_model(
my_optimizer=tf.train.GradientDescentOptimizer(learning_rate=0.0007),
steps=5000,
batch_size=70,
hidden_units=[10, 10],
training_examples=normalized_training_examples,
training_targets=training_targets,
validation_examples=normalized_validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
###Markdown
Solution
Click below for one possible solution.
Since normalization uses min and max, we have to ensure it's done on the entire dataset at once. We can do that here because all our data is in a single DataFrame. If we had multiple data sets, a good practice would be to derive the normalization parameters from the training set and apply those identically to the test set.
###Code
def normalize_linear_scale(examples_dataframe):
"""Returns a version of the input `DataFrame` that has all its features normalized linearly."""
processed_features = pd.DataFrame()
processed_features["latitude"] = linear_scale(examples_dataframe["latitude"])
processed_features["longitude"] = linear_scale(examples_dataframe["longitude"])
processed_features["housing_median_age"] = linear_scale(examples_dataframe["housing_median_age"])
processed_features["total_rooms"] = linear_scale(examples_dataframe["total_rooms"])
processed_features["total_bedrooms"] = linear_scale(examples_dataframe["total_bedrooms"])
processed_features["population"] = linear_scale(examples_dataframe["population"])
processed_features["households"] = linear_scale(examples_dataframe["households"])
processed_features["median_income"] = linear_scale(examples_dataframe["median_income"])
processed_features["rooms_per_person"] = linear_scale(examples_dataframe["rooms_per_person"])
return processed_features
normalized_dataframe = normalize_linear_scale(preprocess_features(california_housing_dataframe))
normalized_training_examples = normalized_dataframe.head(12000)
normalized_validation_examples = normalized_dataframe.tail(5000)
_ = train_nn_regression_model(
my_optimizer=tf.train.GradientDescentOptimizer(learning_rate=0.005),
steps=2000,
batch_size=50,
hidden_units=[10, 10],
training_examples=normalized_training_examples,
training_targets=training_targets,
validation_examples=normalized_validation_examples,
validation_targets=validation_targets)
###Output
Training model...
RMSE (on training data):
period 00 : 180.68
period 01 : 113.39
period 02 : 101.47
period 03 : 84.74
period 04 : 76.86
period 05 : 73.84
period 06 : 72.41
period 07 : 71.64
period 08 : 71.07
period 09 : 70.78
Model training finished.
Final RMSE (on training data): 70.78
Final RMSE (on validation data): 71.40
###Markdown
Task 2: Try a Different Optimizer
**Use the Adagrad and Adam optimizers and compare performance.**
The Adagrad optimizer is one alternative. The key insight of Adagrad is that it modifies the learning rate adaptively for each coefficient in a model, monotonically lowering the effective learning rate. This works great for convex problems, but isn't always ideal for the non-convex problem of neural net training. You can use Adagrad by specifying `AdagradOptimizer` instead of `GradientDescentOptimizer`. Note that you may need to use a larger learning rate with Adagrad.
For non-convex optimization problems, Adam is sometimes more efficient than Adagrad. To use Adam, construct a `tf.train.AdamOptimizer`. The constructor takes several optional hyperparameters, but our solution only specifies one of these (`learning_rate`). In a production setting, you should specify and tune the optional hyperparameters carefully.
###Code
#
# YOUR CODE HERE: Retrain the network using Adagrad and then Adam.
#
_ = train_nn_regression_model(
my_optimizer=tf.train.AdamOptimizer(learning_rate=0.05),
steps=2000,
batch_size=50,
hidden_units=[10, 10],
training_examples=normalized_training_examples,
training_targets=training_targets,
validation_examples=normalized_validation_examples,
validation_targets=validation_targets)
###Output
Training model...
RMSE (on training data):
period 00 : 71.91
period 01 : 70.25
period 02 : 68.01
period 03 : 67.35
period 04 : 67.51
period 05 : 68.32
period 06 : 66.48
period 07 : 66.66
period 08 : 68.05
period 09 : 67.49
Model training finished.
Final RMSE (on training data): 67.49
Final RMSE (on validation data): 67.19
###Markdown
Solution
Click below for the solution.
First, let's try Adagrad.
###Code
_, adagrad_training_losses, adagrad_validation_losses = train_nn_regression_model(
my_optimizer=tf.train.AdagradOptimizer(learning_rate=0.5),
steps=500,
batch_size=100,
hidden_units=[10, 10],
training_examples=normalized_training_examples,
training_targets=training_targets,
validation_examples=normalized_validation_examples,
validation_targets=validation_targets)
###Output
Training model...
RMSE (on training data):
period 00 : 82.81
period 01 : 74.76
period 02 : 72.78
period 03 : 70.84
period 04 : 72.26
period 05 : 69.50
period 06 : 70.47
period 07 : 69.47
period 08 : 69.39
period 09 : 68.56
Model training finished.
Final RMSE (on training data): 68.56
Final RMSE (on validation data): 68.67
###Markdown
Now let's try Adam.
###Code
_, adam_training_losses, adam_validation_losses = train_nn_regression_model(
my_optimizer=tf.train.AdamOptimizer(learning_rate=0.009),
steps=500,
batch_size=100,
hidden_units=[10, 10],
training_examples=normalized_training_examples,
training_targets=training_targets,
validation_examples=normalized_validation_examples,
validation_targets=validation_targets)
###Output
Training model...
RMSE (on training data):
period 00 : 206.19
period 01 : 118.89
period 02 : 108.46
period 03 : 94.30
period 04 : 74.89
period 05 : 71.14
period 06 : 70.26
period 07 : 69.93
period 08 : 69.77
period 09 : 69.95
Model training finished.
Final RMSE (on training data): 69.95
Final RMSE (on validation data): 70.17
###Markdown
Let's plot the loss metrics side by side.
###Code
plt.ylabel("RMSE")
plt.xlabel("Periods")
plt.title("Root Mean Squared Error vs. Periods")
plt.plot(adagrad_training_losses, label='Adagrad training')
plt.plot(adagrad_validation_losses, label='Adagrad validation')
plt.plot(adam_training_losses, label='Adam training')
plt.plot(adam_validation_losses, label='Adam validation')
_ = plt.legend()
###Output
_____no_output_____
###Markdown
Task 3: Explore Alternate Normalization Methods **Try alternate normalizations for various features to further improve performance.** If you look closely at summary stats for your transformed data, you may notice that linearly scaling some features leaves them clumped close to `-1`. For example, many features have a median of `-0.8` or so, rather than `0.0`.
###Code
_ = normalized_training_examples.hist(bins=20, figsize=(18, 12), xlabelsize=10)
###Output
_____no_output_____
###Markdown
We might be able to do better by choosing additional ways to transform these features. For example, a log scaling might help some features. Or clipping extreme values may make the remainder of the scale more informative.
###Code
def log_normalize(series):
return series.apply(lambda x:math.log(x+1.0))
def clip(series, clip_to_min, clip_to_max):
return series.apply(lambda x:(
min(max(x, clip_to_min), clip_to_max)))
def z_score_normalize(series):
mean = series.mean()
std_dv = series.std()
return series.apply(lambda x:(x - mean) / std_dv)
def binary_threshold(series, threshold):
return series.apply(lambda x:(1 if x > threshold else 0))
###Output
_____no_output_____
###Markdown
The block above contains a few additional possible normalization functions. Try some of these, or add your own. Note that if you normalize the target, you'll need to un-normalize the predictions for loss metrics to be comparable.
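As a hedged sketch of that last point (this exercise does not actually normalize the target; assume for illustration that it had been z-score normalized), un-normalizing predictions could look like this:
###Code
# Hypothetical: invert a z-score normalization of the target so that RMSE is
# comparable in the original units (thousands of dollars).
target_mean = training_targets["median_house_value"].mean()
target_std = training_targets["median_house_value"].std()

def unnormalize_predictions(predictions):
  """Maps z-scored predictions back to thousands of dollars."""
  return predictions * target_std + target_mean
###Output
_____no_output_____
###Markdown
Here is one attempt at alternate normalizations: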
###Code
def normalize(examples_dataframe):
"""Returns a version of the input `DataFrame` that has all its features normalized."""
processed_features = pd.DataFrame()
processed_features["latitude"] = log_normalize(examples_dataframe["latitude"])
processed_features["longitude"] = linear_scale(examples_dataframe["longitude"])
processed_features["housing_median_age"] = z_score_normalize(examples_dataframe["housing_median_age"])
processed_features["total_rooms"] = log_normalize(examples_dataframe["total_rooms"])
processed_features["total_bedrooms"] = log_normalize(examples_dataframe["total_bedrooms"])
processed_features["population"] = z_score_normalize(examples_dataframe["population"])
processed_features["households"] = z_score_normalize(examples_dataframe["households"])
processed_features["median_income"] = z_score_normalize(examples_dataframe["median_income"])
processed_features["rooms_per_person"] = log_normalize(examples_dataframe["rooms_per_person"])
return processed_features
normalized_dataframe = normalize(preprocess_features(california_housing_dataframe))
normalized_training_examples = normalized_dataframe.head(12000)
normalized_validation_examples = normalized_dataframe.tail(5000)
_ = train_nn_regression_model(
my_optimizer=tf.train.GradientDescentOptimizer(learning_rate=0.0007),
steps=5000,
batch_size=70,
hidden_units=[10, 10],
training_examples=normalized_training_examples,
training_targets=training_targets,
validation_examples=normalized_validation_examples,
validation_targets=validation_targets)
###Output
Training model...
RMSE (on training data):
period 00 : 204.88
period 01 : 119.14
period 02 : 109.26
period 03 : 104.03
period 04 : 97.31
period 05 : 88.99
period 06 : 82.46
period 07 : 79.56
period 08 : 78.73
period 09 : 78.36
Model training finished.
Final RMSE (on training data): 78.36
Final RMSE (on validation data): 78.77
###Markdown
Solution: Click below for one possible solution. These are only a few ways in which we could think about the data; other transformations may work even better! `households`, `median_income` and `total_bedrooms` all appear normally distributed in a log space. `latitude`, `longitude` and `housing_median_age` would probably be better off just scaled linearly, as before. `population`, `total_rooms` and `rooms_per_person` have a few extreme outliers. They seem too extreme for log normalization to help, so let's clip them instead.
###Code
def normalize(examples_dataframe):
"""Returns a version of the input `DataFrame` that has all its features normalized."""
processed_features = pd.DataFrame()
processed_features["households"] = log_normalize(examples_dataframe["households"])
processed_features["median_income"] = log_normalize(examples_dataframe["median_income"])
processed_features["total_bedrooms"] = log_normalize(examples_dataframe["total_bedrooms"])
processed_features["latitude"] = linear_scale(examples_dataframe["latitude"])
processed_features["longitude"] = linear_scale(examples_dataframe["longitude"])
processed_features["housing_median_age"] = linear_scale(examples_dataframe["housing_median_age"])
processed_features["population"] = linear_scale(clip(examples_dataframe["population"], 0, 5000))
processed_features["rooms_per_person"] = linear_scale(clip(examples_dataframe["rooms_per_person"], 0, 5))
processed_features["total_rooms"] = linear_scale(clip(examples_dataframe["total_rooms"], 0, 10000))
return processed_features
normalized_dataframe = normalize(preprocess_features(california_housing_dataframe))
normalized_training_examples = normalized_dataframe.head(12000)
normalized_validation_examples = normalized_dataframe.tail(5000)
_ = train_nn_regression_model(
my_optimizer=tf.train.AdagradOptimizer(learning_rate=0.15),
steps=1000,
batch_size=50,
hidden_units=[10, 10],
training_examples=normalized_training_examples,
training_targets=training_targets,
validation_examples=normalized_validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
###Markdown
Optional Challenge: Use only Latitude and Longitude Features **Train a NN model that uses only latitude and longitude as features.** Real estate people are fond of saying that location is the only important feature in housing price. Let's see if we can confirm this by training a model that uses only latitude and longitude as features. This will only work well if our NN can learn complex nonlinearities from latitude and longitude. **NOTE:** We may need a network structure with more layers than were useful earlier in the exercise.
###Code
#
# YOUR CODE HERE: Train the network using only latitude and longitude
#
def normalize(examples_dataframe):
"""Returns a version of the input `DataFrame` that has all its features normalized."""
processed_features = pd.DataFrame()
processed_features["latitude"] = linear_scale(examples_dataframe["latitude"])
processed_features["longitude"] = linear_scale(examples_dataframe["longitude"])
return processed_features
normalized_dataframe = normalize(preprocess_features(california_housing_dataframe))
normalized_training_examples = normalized_dataframe.head(12000)
normalized_validation_examples = normalized_dataframe.tail(5000)
_ = train_nn_regression_model(
my_optimizer=tf.train.AdagradOptimizer(learning_rate=0.15),
steps=1000,
batch_size=50,
hidden_units=[10, 10],
training_examples=normalized_training_examples,
training_targets=training_targets,
validation_examples=normalized_validation_examples,
validation_targets=validation_targets)
###Output
Training model...
RMSE (on training data):
period 00 : 103.23
period 01 : 101.26
period 02 : 99.94
period 03 : 99.23
period 04 : 99.18
period 05 : 98.70
period 06 : 98.62
period 07 : 98.40
period 08 : 98.16
period 09 : 98.03
Model training finished.
Final RMSE (on training data): 98.03
Final RMSE (on validation data): 99.89
###Markdown
Solution: Click below for a possible solution. It's a good idea to keep latitude and longitude normalized:
###Code
def location_location_location(examples_dataframe):
"""Returns a version of the input `DataFrame` that keeps only the latitude and longitude."""
processed_features = pd.DataFrame()
processed_features["latitude"] = linear_scale(examples_dataframe["latitude"])
processed_features["longitude"] = linear_scale(examples_dataframe["longitude"])
return processed_features
lll_dataframe = location_location_location(preprocess_features(california_housing_dataframe))
lll_training_examples = lll_dataframe.head(12000)
lll_validation_examples = lll_dataframe.tail(5000)
_ = train_nn_regression_model(
my_optimizer=tf.train.AdagradOptimizer(learning_rate=0.05),
steps=500,
batch_size=50,
hidden_units=[10, 10, 5, 5, 5],
training_examples=lll_training_examples,
training_targets=training_targets,
validation_examples=lll_validation_examples,
validation_targets=validation_targets)
###Output
Training model...
RMSE (on training data):
period 00 : 113.52
period 01 : 107.46
period 02 : 102.99
period 03 : 100.81
period 04 : 100.81
period 05 : 100.21
period 06 : 99.28
period 07 : 100.41
period 08 : 99.21
period 09 : 98.97
Model training finished.
Final RMSE (on training data): 98.97
Final RMSE (on validation data): 100.96
###Markdown
[View in Colaboratory](https://colab.research.google.com/github/DillipKS/MLCC_assignments/blob/master/improving_neural_net_performance.ipynb) Copyright 2017 Google LLC.
###Code
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Improving Neural Net Performance **Learning Objective:** Improve the performance of a neural network by normalizing features and applying various optimization algorithms. **NOTE:** The optimization methods described in this exercise are not specific to neural networks; they are effective means to improve most types of models. Setup: First, we'll load the data.
###Code
from __future__ import print_function
import math
from IPython import display
from matplotlib import cm
from matplotlib import gridspec
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import metrics
import tensorflow as tf
from tensorflow.python.data import Dataset
tf.logging.set_verbosity(tf.logging.ERROR)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
california_housing_dataframe = pd.read_csv("https://dl.google.com/mlcc/mledu-datasets/california_housing_train.csv", sep=",")
california_housing_dataframe = california_housing_dataframe.reindex(
np.random.permutation(california_housing_dataframe.index))
def preprocess_features(california_housing_dataframe):
"""Prepares input features from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the features to be used for the model, including
synthetic features.
"""
selected_features = california_housing_dataframe[
["latitude",
"longitude",
"housing_median_age",
"total_rooms",
"total_bedrooms",
"population",
"households",
"median_income"]]
processed_features = selected_features.copy()
# Create a synthetic feature.
processed_features["rooms_per_person"] = (
california_housing_dataframe["total_rooms"] /
california_housing_dataframe["population"])
return processed_features
def preprocess_targets(california_housing_dataframe):
"""Prepares target features (i.e., labels) from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the target feature.
"""
output_targets = pd.DataFrame()
# Scale the target to be in units of thousands of dollars.
output_targets["median_house_value"] = (
california_housing_dataframe["median_house_value"] / 1000.0)
return output_targets
# Choose the first 12000 (out of 17000) examples for training.
training_examples = preprocess_features(california_housing_dataframe.head(12000))
training_targets = preprocess_targets(california_housing_dataframe.head(12000))
# Choose the last 5000 (out of 17000) examples for validation.
validation_examples = preprocess_features(california_housing_dataframe.tail(5000))
validation_targets = preprocess_targets(california_housing_dataframe.tail(5000))
# Double-check that we've done the right thing.
print("Training examples summary:")
display.display(training_examples.describe())
print("Validation examples summary:")
display.display(validation_examples.describe())
print("Training targets summary:")
display.display(training_targets.describe())
print("Validation targets summary:")
display.display(validation_targets.describe())
###Output
Training examples summary:
###Markdown
Train the Neural Network Next, we'll train the neural network.
###Code
def construct_feature_columns(input_features):
"""Construct the TensorFlow Feature Columns.
Args:
input_features: The names of the numerical input features to use.
Returns:
A set of feature columns
"""
return set([tf.feature_column.numeric_column(my_feature)
for my_feature in input_features])
def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None):
"""Trains a neural network model.
Args:
features: pandas DataFrame of features
targets: pandas DataFrame of targets
batch_size: Size of batches to be passed to the model
shuffle: True or False. Whether to shuffle the data.
num_epochs: Number of epochs for which data should be repeated. None = repeat indefinitely
Returns:
Tuple of (features, labels) for next data batch
"""
# Convert pandas data into a dict of np arrays.
features = {key:np.array(value) for key,value in dict(features).items()}
# Construct a dataset, and configure batching/repeating.
ds = Dataset.from_tensor_slices((features,targets)) # warning: 2GB limit
ds = ds.batch(batch_size).repeat(num_epochs)
# Shuffle the data, if specified.
if shuffle:
ds = ds.shuffle(10000)
# Return the next batch of data.
features, labels = ds.make_one_shot_iterator().get_next()
return features, labels
def train_nn_regression_model(
my_optimizer,
steps,
batch_size,
hidden_units,
training_examples,
training_targets,
validation_examples,
validation_targets):
"""Trains a neural network regression model.
In addition to training, this function also prints training progress information,
as well as a plot of the training and validation loss over time.
Args:
my_optimizer: An instance of `tf.train.Optimizer`, the optimizer to use.
steps: A non-zero `int`, the total number of training steps. A training step
consists of a forward and backward pass using a single batch.
batch_size: A non-zero `int`, the batch size.
hidden_units: A `list` of int values, specifying the number of neurons in each layer.
training_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for training.
training_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for training.
validation_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for validation.
validation_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for validation.
Returns:
A tuple `(estimator, training_losses, validation_losses)`:
estimator: the trained `DNNRegressor` object.
training_losses: a `list` containing the training loss values taken during training.
validation_losses: a `list` containing the validation loss values taken during training.
"""
periods = 10
steps_per_period = steps / periods
# Create a DNNRegressor object.
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
dnn_regressor = tf.estimator.DNNRegressor(
feature_columns=construct_feature_columns(training_examples),
hidden_units=hidden_units,
optimizer=my_optimizer
)
# Create input functions.
training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value"],
batch_size=batch_size)
predict_training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value"],
num_epochs=1,
shuffle=False)
predict_validation_input_fn = lambda: my_input_fn(validation_examples,
validation_targets["median_house_value"],
num_epochs=1,
shuffle=False)
# Train the model, but do so inside a loop so that we can periodically assess
# loss metrics.
print("Training model...")
print("RMSE (on training data):")
training_rmse = []
validation_rmse = []
for period in range (0, periods):
# Train the model, starting from the prior state.
dnn_regressor.train(
input_fn=training_input_fn,
steps=steps_per_period
)
# Take a break and compute predictions.
training_predictions = dnn_regressor.predict(input_fn=predict_training_input_fn)
training_predictions = np.array([item['predictions'][0] for item in training_predictions])
validation_predictions = dnn_regressor.predict(input_fn=predict_validation_input_fn)
validation_predictions = np.array([item['predictions'][0] for item in validation_predictions])
# Compute training and validation loss.
training_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(training_predictions, training_targets))
validation_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(validation_predictions, validation_targets))
# Occasionally print the current loss.
print(" period %02d : %0.2f" % (period, training_root_mean_squared_error))
# Add the loss metrics from this period to our list.
training_rmse.append(training_root_mean_squared_error)
validation_rmse.append(validation_root_mean_squared_error)
print("Model training finished.")
# Output a graph of loss metrics over periods.
plt.ylabel("RMSE")
plt.xlabel("Periods")
plt.title("Root Mean Squared Error vs. Periods")
plt.tight_layout()
plt.plot(training_rmse, label="training")
plt.plot(validation_rmse, label="validation")
plt.legend()
print("Final RMSE (on training data): %0.2f" % training_root_mean_squared_error)
print("Final RMSE (on validation data): %0.2f" % validation_root_mean_squared_error)
return dnn_regressor, training_rmse, validation_rmse
_ = train_nn_regression_model(
my_optimizer=tf.train.GradientDescentOptimizer(learning_rate=0.0007),
steps=5000,
batch_size=70,
hidden_units=[10, 10],
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
###Output
Training model...
RMSE (on training data):
period 00 : 166.45
period 01 : 156.28
period 02 : 150.90
period 03 : 151.88
period 04 : 130.34
period 05 : 124.31
period 06 : 115.07
period 07 : 106.98
period 08 : 104.75
period 09 : 107.53
Model training finished.
Final RMSE (on training data): 107.53
Final RMSE (on validation data): 110.03
###Markdown
Linear Scaling: It can be a good standard practice to normalize the inputs to fall within the range -1, 1. This helps SGD avoid getting stuck taking steps that are too large in one dimension or too small in another. Fans of numerical optimization may note that there's a connection to the idea of using a preconditioner here.
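Concretely, the helper below maps each value with $x' = \frac{2\,(x - x_{min})}{x_{max} - x_{min}} - 1$, so the feature minimum lands at $-1$ and the maximum at $1$.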
###Code
def linear_scale(series):
min_val = series.min()
max_val = series.max()
scale = (max_val - min_val) / 2.0
return series.apply(lambda x:((x - min_val) / scale) - 1.0)
###Output
_____no_output_____
###Markdown
Task 1: Normalize the Features Using Linear Scaling **Normalize the inputs to the scale -1, 1.** **Spend about 5 minutes training and evaluating on the newly normalized data. How well can you do?** As a rule of thumb, NNs train best when the input features are roughly on the same scale. Sanity check your normalized data. (What would happen if you forgot to normalize one feature?)
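One possible sanity check (a sketch; `pd` comes from the setup cell) is to print per-feature ranges after scaling, since a feature you forgot to normalize would stand out with a much larger spread:
###Code
def check_scaling(examples_dataframe):
  """Prints per-feature min/max; normalized features should span roughly [-1, 1]."""
  print(pd.DataFrame({"min": examples_dataframe.min(),
                      "max": examples_dataframe.max()}))
###Output
_____no_output_____
###Markdown
Here is one attempt: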
###Code
def normalize_linear_scale(dataframe):
"""Returns a version of the input `DataFrame` that has all its features normalized linearly."""
# Your code here: normalize the inputs.
processed_features = pd.DataFrame()
processed_features["latitude"] = linear_scale(dataframe["latitude"])
processed_features["longitude"] = linear_scale(dataframe["longitude"])
processed_features["housing_median_age"] = linear_scale(dataframe["housing_median_age"])
processed_features["total_rooms"] = linear_scale(dataframe["total_rooms"])
processed_features["total_bedrooms"] = linear_scale(dataframe["total_bedrooms"])
processed_features["population"] = linear_scale(dataframe["population"])
processed_features["households"] = linear_scale(dataframe["households"])
processed_features["median_income"] = linear_scale(dataframe["median_income"])
processed_features["rooms_per_person"] = linear_scale(dataframe["rooms_per_person"])
return processed_features
normalized_dataframe = normalize_linear_scale(preprocess_features(california_housing_dataframe))
normalized_training_examples = normalized_dataframe.head(12000)
normalized_validation_examples = normalized_dataframe.tail(5000)
_ = train_nn_regression_model(
my_optimizer=tf.train.GradientDescentOptimizer(learning_rate=0.0007),
steps=5000,
batch_size=70,
hidden_units=[10, 10],
training_examples=normalized_training_examples,
training_targets=training_targets,
validation_examples=normalized_validation_examples,
validation_targets=validation_targets)
###Output
Training model...
RMSE (on training data):
period 00 : 231.74
period 01 : 209.93
period 02 : 165.00
period 03 : 121.18
period 04 : 117.15
period 05 : 113.42
period 06 : 109.29
period 07 : 104.42
period 08 : 98.54
period 09 : 91.84
Model training finished.
Final RMSE (on training data): 91.84
Final RMSE (on validation data): 92.46
###Markdown
Solution: Click below for one possible solution. Since normalization uses min and max, we have to ensure it's done on the entire dataset at once. We can do that here because all our data is in a single DataFrame. If we had multiple data sets, a good practice would be to derive the normalization parameters from the training set and apply those identically to the test set.
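A minimal sketch of that practice (the `fit_linear_scale` helper is hypothetical, not part of this exercise): fit the scaling parameters on the training split only, then reuse them unchanged on other splits.
###Code
# Hypothetical helper: derive min/max from the training split only, then apply
# the same parameters to any other split.
def fit_linear_scale(series):
  min_val = series.min()
  max_val = series.max()
  scale = (max_val - min_val) / 2.0
  return lambda s: s.apply(lambda x: ((x - min_val) / scale) - 1.0)

# Usage sketch:
# scaler = fit_linear_scale(training_examples["median_income"])
# train_scaled = scaler(training_examples["median_income"])
# valid_scaled = scaler(validation_examples["median_income"])
###Output
_____no_output_____
###Markdown
Here is the full solution: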
###Code
def normalize_linear_scale(examples_dataframe):
"""Returns a version of the input `DataFrame` that has all its features normalized linearly."""
processed_features = pd.DataFrame()
processed_features["latitude"] = linear_scale(examples_dataframe["latitude"])
processed_features["longitude"] = linear_scale(examples_dataframe["longitude"])
processed_features["housing_median_age"] = linear_scale(examples_dataframe["housing_median_age"])
processed_features["total_rooms"] = linear_scale(examples_dataframe["total_rooms"])
processed_features["total_bedrooms"] = linear_scale(examples_dataframe["total_bedrooms"])
processed_features["population"] = linear_scale(examples_dataframe["population"])
processed_features["households"] = linear_scale(examples_dataframe["households"])
processed_features["median_income"] = linear_scale(examples_dataframe["median_income"])
processed_features["rooms_per_person"] = linear_scale(examples_dataframe["rooms_per_person"])
return processed_features
normalized_dataframe = normalize_linear_scale(preprocess_features(california_housing_dataframe))
normalized_training_examples = normalized_dataframe.head(12000)
normalized_validation_examples = normalized_dataframe.tail(5000)
_ = train_nn_regression_model(
my_optimizer=tf.train.GradientDescentOptimizer(learning_rate=0.005),
steps=2000,
batch_size=50,
hidden_units=[10, 10],
training_examples=normalized_training_examples,
training_targets=training_targets,
validation_examples=normalized_validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
###Markdown
Task 2: Try a Different Optimizer **Use the Adagrad and Adam optimizers and compare performance.** The Adagrad optimizer is one alternative. The key insight of Adagrad is that it adapts the learning rate separately for each coefficient in a model, monotonically lowering the effective learning rate. This works well for convex problems, but isn't always ideal for the non-convex problem of neural net training. You can use Adagrad by specifying `AdagradOptimizer` instead of `GradientDescentOptimizer`. Note that you may need to use a larger learning rate with Adagrad. For non-convex optimization problems, Adam is sometimes more efficient than Adagrad. To use Adam, invoke the `tf.train.AdamOptimizer` method. This method takes several optional hyperparameters as arguments, but our solution only specifies one of these (`learning_rate`). In a production setting, you should specify and tune the optional hyperparameters carefully.
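For contrast with Adagrad's monotonically shrinking rate, here is a minimal NumPy sketch of the Adam update; it is illustrative only, not TensorFlow's actual `AdamOptimizer` implementation. Adam's moving averages let the effective step size recover, which can help on non-convex problems.
###Code
# Minimal NumPy sketch of the Adam update (illustrative only).
import numpy as np

def adam_step(w, grad, m, v, t, learning_rate=0.001,
              beta1=0.9, beta2=0.999, eps=1e-8):
  """One Adam step; t is the 1-based step count used for bias correction."""
  m = beta1 * m + (1 - beta1) * grad         # first-moment (mean) estimate
  v = beta2 * v + (1 - beta2) * grad ** 2    # second-moment estimate
  m_hat = m / (1 - beta1 ** t)               # bias-corrected moments
  v_hat = v / (1 - beta2 ** t)
  w = w - learning_rate * m_hat / (np.sqrt(v_hat) + eps)
  return w, m, v
###Output
_____no_output_____
###Markdown
Now retrain the network with the TensorFlow optimizers.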
###Code
# YOUR CODE HERE: Retrain the network using Adagrad and then Adam.
_, adagrad_training_losses, adagrad_validation_losses = train_nn_regression_model(
my_optimizer=tf.train.AdagradOptimizer(learning_rate=0.07),
steps=5000,
batch_size=70,
hidden_units=[10, 10],
training_examples=normalized_training_examples,
training_targets=training_targets,
validation_examples=normalized_validation_examples,
validation_targets=validation_targets)
_, adam_training_losses, adam_validation_losses = train_nn_regression_model(
my_optimizer=tf.train.AdamOptimizer(learning_rate=0.001),
steps=5000,
batch_size=70,
hidden_units=[10, 10],
training_examples=normalized_training_examples,
training_targets=training_targets,
validation_examples=normalized_validation_examples,
validation_targets=validation_targets)
plt.ylabel("RMSE")
plt.xlabel("Periods")
plt.title("Root Mean Squared Error vs. Periods")
plt.plot(adagrad_training_losses, label='Adagrad training')
plt.plot(adagrad_validation_losses, label='Adagrad validation')
plt.plot(adam_training_losses, label='Adam training')
plt.plot(adam_validation_losses, label='Adam validation')
_ = plt.legend()
###Output
_____no_output_____
###Markdown
Solution: Click below for the solution. First, let's try Adagrad.
###Code
_, adagrad_training_losses, adagrad_validation_losses = train_nn_regression_model(
my_optimizer=tf.train.AdagradOptimizer(learning_rate=0.5),
steps=500,
batch_size=100,
hidden_units=[10, 10],
training_examples=normalized_training_examples,
training_targets=training_targets,
validation_examples=normalized_validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
###Markdown
Now let's try Adam.
###Code
_, adam_training_losses, adam_validation_losses = train_nn_regression_model(
my_optimizer=tf.train.AdamOptimizer(learning_rate=0.009),
steps=500,
batch_size=100,
hidden_units=[10, 10],
training_examples=normalized_training_examples,
training_targets=training_targets,
validation_examples=normalized_validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
###Markdown
Let's print a graph of loss metrics side by side.
###Code
plt.ylabel("RMSE")
plt.xlabel("Periods")
plt.title("Root Mean Squared Error vs. Periods")
plt.plot(adagrad_training_losses, label='Adagrad training')
plt.plot(adagrad_validation_losses, label='Adagrad validation')
plt.plot(adam_training_losses, label='Adam training')
plt.plot(adam_validation_losses, label='Adam validation')
_ = plt.legend()
###Output
_____no_output_____
###Markdown
Task 3: Explore Alternate Normalization Methods **Try alternate normalizations for various features to further improve performance.** If you look closely at summary stats for your transformed data, you may notice that linearly scaling some features leaves them clumped close to `-1`. For example, many features have a median of `-0.8` or so, rather than `0.0`.
###Code
_ = normalized_training_examples.hist(bins=20, figsize=(18, 12), xlabelsize=10)
###Output
_____no_output_____
###Markdown
We might be able to do better by choosing additional ways to transform these features. For example, a log scaling might help some features. Or clipping extreme values may make the remainder of the scale more informative.
###Code
def log_normalize(series):
return series.apply(lambda x:math.log(x+1.0))
def clip(series, clip_to_min, clip_to_max):
return series.apply(lambda x:(
min(max(x, clip_to_min), clip_to_max)))
def z_score_normalize(series):
mean = series.mean()
std_dv = series.std()
return series.apply(lambda x:(x - mean) / std_dv)
def binary_threshold(series, threshold):
return series.apply(lambda x:(1 if x > threshold else 0))
###Output
_____no_output_____
###Markdown
The block above contains a few additional possible normalization functions. Try some of these, or add your own. Note that if you normalize the target, you'll need to un-normalize the predictions for loss metrics to be comparable.
###Code
def normalize(dataframe):
"""Returns a version of the input `DataFrame` that has all its features normalized."""
# YOUR CODE HERE: Normalize the inputs.
processed_features = pd.DataFrame()
processed_features["latitude"] = linear_scale(dataframe["latitude"])
processed_features["longitude"] = linear_scale(dataframe["longitude"])
processed_features["housing_median_age"] = linear_scale(dataframe["housing_median_age"])
processed_features["median_income"] = log_normalize(dataframe["median_income"])
processed_features["households"] = log_normalize(dataframe["households"])
processed_features["total_rooms"] = clip(linear_scale(dataframe["total_rooms"]),-1.0,-0.25)
processed_features["total_bedrooms"] = clip(linear_scale(dataframe["total_bedrooms"]),-1.0,-0.25)
processed_features["population"] = clip(linear_scale(dataframe["population"]),-1.0,-0.5)
processed_features["rooms_per_person"] = clip(linear_scale(dataframe["rooms_per_person"]),-1.0,-0.75)
return processed_features
normalized_dataframe = normalize(preprocess_features(california_housing_dataframe))
normalized_training_examples = normalized_dataframe.head(12000)
normalized_validation_examples = normalized_dataframe.tail(5000)
_ = train_nn_regression_model(
my_optimizer=tf.train.AdagradOptimizer(learning_rate=0.07),
steps=5000,
batch_size=70,
hidden_units=[10, 10],
training_examples=normalized_training_examples,
training_targets=training_targets,
validation_examples=normalized_validation_examples,
validation_targets=validation_targets)
###Output
Training model...
RMSE (on training data):
period 00 : 78.37
period 01 : 72.44
period 02 : 71.12
period 03 : 70.36
period 04 : 69.90
period 05 : 69.39
period 06 : 69.11
period 07 : 68.73
period 08 : 68.43
period 09 : 68.24
Model training finished.
Final RMSE (on training data): 68.24
Final RMSE (on validation data): 68.71
###Markdown
Solution: Click below for one possible solution. These are only a few ways in which we could think about the data; other transformations may work even better! `households`, `median_income` and `total_bedrooms` all appear normally distributed in a log space. `latitude`, `longitude` and `housing_median_age` would probably be better off just scaled linearly, as before. `population`, `total_rooms` and `rooms_per_person` have a few extreme outliers. They seem too extreme for log normalization to help, so let's clip them instead.
###Code
def normalize(examples_dataframe):
"""Returns a version of the input `DataFrame` that has all its features normalized."""
processed_features = pd.DataFrame()
processed_features["households"] = log_normalize(examples_dataframe["households"])
processed_features["median_income"] = log_normalize(examples_dataframe["median_income"])
processed_features["total_bedrooms"] = log_normalize(examples_dataframe["total_bedrooms"])
processed_features["latitude"] = linear_scale(examples_dataframe["latitude"])
processed_features["longitude"] = linear_scale(examples_dataframe["longitude"])
processed_features["housing_median_age"] = linear_scale(examples_dataframe["housing_median_age"])
processed_features["population"] = linear_scale(clip(examples_dataframe["population"], 0, 5000))
processed_features["rooms_per_person"] = linear_scale(clip(examples_dataframe["rooms_per_person"], 0, 5))
processed_features["total_rooms"] = linear_scale(clip(examples_dataframe["total_rooms"], 0, 10000))
return processed_features
normalized_dataframe = normalize(preprocess_features(california_housing_dataframe))
normalized_training_examples = normalized_dataframe.head(12000)
normalized_validation_examples = normalized_dataframe.tail(5000)
_ = train_nn_regression_model(
my_optimizer=tf.train.AdagradOptimizer(learning_rate=0.15),
steps=1000,
batch_size=50,
hidden_units=[10, 10],
training_examples=normalized_training_examples,
training_targets=training_targets,
validation_examples=normalized_validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
###Markdown
Optional Challenge: Use only Latitude and Longitude Features **Train a NN model that uses only latitude and longitude as features.** Real estate people are fond of saying that location is the only important feature in housing price. Let's see if we can confirm this by training a model that uses only latitude and longitude as features. This will only work well if our NN can learn complex nonlinearities from latitude and longitude. **NOTE:** We may need a network structure with more layers than were useful earlier in the exercise.
###Code
# YOUR CODE HERE: Train the network using only latitude and longitude
def normalize(dataframe):
"""Returns a version of the input `DataFrame` that has all its features normalized."""
processed_features = pd.DataFrame()
processed_features["latitude"] = linear_scale(dataframe["latitude"])
processed_features["longitude"] = linear_scale(dataframe["longitude"])
return processed_features
normalized_dataframe = normalize(preprocess_features(california_housing_dataframe))
normalized_training_examples = normalized_dataframe.head(12000)
normalized_validation_examples = normalized_dataframe.tail(5000)
_ = train_nn_regression_model(
my_optimizer=tf.train.GradientDescentOptimizer(learning_rate=0.007),
steps=5000,
batch_size=70,
hidden_units=[10, 10],
training_examples=normalized_training_examples,
training_targets=training_targets,
validation_examples=normalized_validation_examples,
validation_targets=validation_targets)
###Output
Training model...
RMSE (on training data):
period 00 : 108.30
period 01 : 101.08
period 02 : 99.63
period 03 : 99.01
period 04 : 98.64
period 05 : 98.29
period 06 : 98.23
period 07 : 98.20
period 08 : 97.81
period 09 : 97.79
Model training finished.
Final RMSE (on training data): 97.79
Final RMSE (on validation data): 98.24
###Markdown
Solution: Click below for a possible solution. It's a good idea to keep latitude and longitude normalized:
###Code
def location_location_location(examples_dataframe):
"""Returns a version of the input `DataFrame` that keeps only the latitude and longitude."""
processed_features = pd.DataFrame()
processed_features["latitude"] = linear_scale(examples_dataframe["latitude"])
processed_features["longitude"] = linear_scale(examples_dataframe["longitude"])
return processed_features
lll_dataframe = location_location_location(preprocess_features(california_housing_dataframe))
lll_training_examples = lll_dataframe.head(12000)
lll_validation_examples = lll_dataframe.tail(5000)
_ = train_nn_regression_model(
my_optimizer=tf.train.AdagradOptimizer(learning_rate=0.05),
steps=500,
batch_size=50,
hidden_units=[10, 10, 5, 5, 5],
training_examples=lll_training_examples,
training_targets=training_targets,
validation_examples=lll_validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
###Markdown
Copyright 2017 Google LLC.
###Code
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Improving Neural Net Performance **Learning Objective:** Improve the performance of a neural network by normalizing features and applying various optimization algorithms. **NOTE:** The optimization methods described in this exercise are not specific to neural networks; they are effective means to improve most types of models. Setup: First, we'll load the data.
###Code
from __future__ import print_function
import math
from IPython import display
from matplotlib import cm
from matplotlib import gridspec
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import metrics
import tensorflow as tf
from tensorflow.python.data import Dataset
tf.logging.set_verbosity(tf.logging.ERROR)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
california_housing_dataframe = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_train.csv", sep=",")
california_housing_dataframe = california_housing_dataframe.reindex(
np.random.permutation(california_housing_dataframe.index))
def preprocess_features(california_housing_dataframe):
"""Prepares input features from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the features to be used for the model, including
synthetic features.
"""
selected_features = california_housing_dataframe[
["latitude",
"longitude",
"housing_median_age",
"total_rooms",
"total_bedrooms",
"population",
"households",
"median_income"]]
processed_features = selected_features.copy()
# Create a synthetic feature.
processed_features["rooms_per_person"] = (
california_housing_dataframe["total_rooms"] /
california_housing_dataframe["population"])
return processed_features
def preprocess_targets(california_housing_dataframe):
"""Prepares target features (i.e., labels) from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the target feature.
"""
output_targets = pd.DataFrame()
# Scale the target to be in units of thousands of dollars.
output_targets["median_house_value"] = (
california_housing_dataframe["median_house_value"] / 1000.0)
return output_targets
# Choose the first 12000 (out of 17000) examples for training.
training_examples = preprocess_features(california_housing_dataframe.head(12000))
training_targets = preprocess_targets(california_housing_dataframe.head(12000))
# Choose the last 5000 (out of 17000) examples for validation.
validation_examples = preprocess_features(california_housing_dataframe.tail(5000))
validation_targets = preprocess_targets(california_housing_dataframe.tail(5000))
# Double-check that we've done the right thing.
print("Training examples summary:")
display.display(training_examples.describe())
print("Validation examples summary:")
display.display(validation_examples.describe())
print("Training targets summary:")
display.display(training_targets.describe())
print("Validation targets summary:")
display.display(validation_targets.describe())
###Output
_____no_output_____
###Markdown
Train the Neural Network Next, we'll train the neural network.
###Code
def construct_feature_columns(input_features):
"""Construct the TensorFlow Feature Columns.
Args:
input_features: The names of the numerical input features to use.
Returns:
A set of feature columns
"""
return set([tf.feature_column.numeric_column(my_feature)
for my_feature in input_features])
def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None):
"""Trains a neural network model.
Args:
features: pandas DataFrame of features
targets: pandas DataFrame of targets
batch_size: Size of batches to be passed to the model
shuffle: True or False. Whether to shuffle the data.
num_epochs: Number of epochs for which data should be repeated. None = repeat indefinitely
Returns:
Tuple of (features, labels) for next data batch
"""
# Convert pandas data into a dict of np arrays.
features = {key:np.array(value) for key,value in dict(features).items()}
# Construct a dataset, and configure batching/repeating.
ds = Dataset.from_tensor_slices((features,targets)) # warning: 2GB limit
ds = ds.batch(batch_size).repeat(num_epochs)
# Shuffle the data, if specified.
if shuffle:
ds = ds.shuffle(10000)
# Return the next batch of data.
features, labels = ds.make_one_shot_iterator().get_next()
return features, labels
def train_nn_regression_model(
my_optimizer,
steps,
batch_size,
hidden_units,
training_examples,
training_targets,
validation_examples,
validation_targets):
"""Trains a neural network regression model.
In addition to training, this function also prints training progress information,
as well as a plot of the training and validation loss over time.
Args:
my_optimizer: An instance of `tf.train.Optimizer`, the optimizer to use.
steps: A non-zero `int`, the total number of training steps. A training step
consists of a forward and backward pass using a single batch.
batch_size: A non-zero `int`, the batch size.
hidden_units: A `list` of int values, specifying the number of neurons in each layer.
training_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for training.
training_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for training.
validation_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for validation.
validation_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for validation.
Returns:
A tuple `(estimator, training_losses, validation_losses)`:
estimator: the trained `DNNRegressor` object.
training_losses: a `list` containing the training loss values taken during training.
validation_losses: a `list` containing the validation loss values taken during training.
"""
periods = 10
steps_per_period = steps / periods
# Create a DNNRegressor object.
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
dnn_regressor = tf.estimator.DNNRegressor(
feature_columns=construct_feature_columns(training_examples),
hidden_units=hidden_units,
optimizer=my_optimizer
)
# Create input functions.
training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value"],
batch_size=batch_size)
predict_training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value"],
num_epochs=1,
shuffle=False)
predict_validation_input_fn = lambda: my_input_fn(validation_examples,
validation_targets["median_house_value"],
num_epochs=1,
shuffle=False)
# Train the model, but do so inside a loop so that we can periodically assess
# loss metrics.
print("Training model...")
print("RMSE (on training data):")
training_rmse = []
validation_rmse = []
for period in range (0, periods):
# Train the model, starting from the prior state.
dnn_regressor.train(
input_fn=training_input_fn,
steps=steps_per_period
)
# Take a break and compute predictions.
training_predictions = dnn_regressor.predict(input_fn=predict_training_input_fn)
training_predictions = np.array([item['predictions'][0] for item in training_predictions])
validation_predictions = dnn_regressor.predict(input_fn=predict_validation_input_fn)
validation_predictions = np.array([item['predictions'][0] for item in validation_predictions])
# Compute training and validation loss.
training_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(training_predictions, training_targets))
validation_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(validation_predictions, validation_targets))
# Occasionally print the current loss.
print(" period %02d : %0.2f" % (period, training_root_mean_squared_error))
# Add the loss metrics from this period to our list.
training_rmse.append(training_root_mean_squared_error)
validation_rmse.append(validation_root_mean_squared_error)
print("Model training finished.")
# Output a graph of loss metrics over periods.
plt.ylabel("RMSE")
plt.xlabel("Periods")
plt.title("Root Mean Squared Error vs. Periods")
plt.tight_layout()
plt.plot(training_rmse, label="training")
plt.plot(validation_rmse, label="validation")
plt.legend()
print("Final RMSE (on training data): %0.2f" % training_root_mean_squared_error)
print("Final RMSE (on validation data): %0.2f" % validation_root_mean_squared_error)
return dnn_regressor, training_rmse, validation_rmse
_ = train_nn_regression_model(
my_optimizer=tf.train.GradientDescentOptimizer(learning_rate=0.0007),
steps=5000,
batch_size=70,
hidden_units=[10, 10],
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
###Markdown
Linear Scaling: It can be a good standard practice to normalize the inputs to fall within the range -1, 1. This helps SGD avoid getting stuck taking steps that are too large in one dimension or too small in another. Fans of numerical optimization may note that there's a connection to the idea of using a preconditioner here.
###Code
def linear_scale(series):
min_val = series.min()
max_val = series.max()
scale = (max_val - min_val) / 2.0
return series.apply(lambda x:((x - min_val) / scale) - 1.0)
###Output
_____no_output_____
###Markdown
Task 1: Normalize the Features Using Linear Scaling **Normalize the inputs to the scale -1, 1.** **Spend about 5 minutes training and evaluating on the newly normalized data. How well can you do?** As a rule of thumb, NNs train best when the input features are roughly on the same scale. Sanity check your normalized data. (What would happen if you forgot to normalize one feature?)
###Code
def normalize_linear_scale(examples_dataframe):
"""Returns a version of the input `DataFrame` that has all its features normalized linearly."""
#
# Your code here: normalize the inputs.
#
pass
normalized_dataframe = normalize_linear_scale(preprocess_features(california_housing_dataframe))
normalized_training_examples = normalized_dataframe.head(12000)
normalized_validation_examples = normalized_dataframe.tail(5000)
_ = train_nn_regression_model(
my_optimizer=tf.train.GradientDescentOptimizer(learning_rate=0.0007),
steps=5000,
batch_size=70,
hidden_units=[10, 10],
training_examples=normalized_training_examples,
training_targets=training_targets,
validation_examples=normalized_validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
###Markdown
Solution: Click below for one possible solution. Since normalization uses min and max, we have to ensure it's done on the entire dataset at once. We can do that here because all our data is in a single DataFrame. If we had multiple data sets, a good practice would be to derive the normalization parameters from the training set and apply those identically to the test set.
###Code
def normalize_linear_scale(examples_dataframe):
"""Returns a version of the input `DataFrame` that has all its features normalized linearly."""
processed_features = pd.DataFrame()
processed_features["latitude"] = linear_scale(examples_dataframe["latitude"])
processed_features["longitude"] = linear_scale(examples_dataframe["longitude"])
processed_features["housing_median_age"] = linear_scale(examples_dataframe["housing_median_age"])
processed_features["total_rooms"] = linear_scale(examples_dataframe["total_rooms"])
processed_features["total_bedrooms"] = linear_scale(examples_dataframe["total_bedrooms"])
processed_features["population"] = linear_scale(examples_dataframe["population"])
processed_features["households"] = linear_scale(examples_dataframe["households"])
processed_features["median_income"] = linear_scale(examples_dataframe["median_income"])
processed_features["rooms_per_person"] = linear_scale(examples_dataframe["rooms_per_person"])
return processed_features
normalized_dataframe = normalize_linear_scale(preprocess_features(california_housing_dataframe))
normalized_training_examples = normalized_dataframe.head(12000)
normalized_validation_examples = normalized_dataframe.tail(5000)
_ = train_nn_regression_model(
my_optimizer=tf.train.GradientDescentOptimizer(learning_rate=0.005),
steps=2000,
batch_size=50,
hidden_units=[10, 10],
training_examples=normalized_training_examples,
training_targets=training_targets,
validation_examples=normalized_validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
###Markdown
Task 2: Try a Different Optimizer **Use the Adagrad and Adam optimizers and compare performance.** The Adagrad optimizer is one alternative. The key insight of Adagrad is that it adapts the learning rate separately for each coefficient in a model, monotonically lowering the effective learning rate. This works well for convex problems, but isn't always ideal for the non-convex problem of neural net training. You can use Adagrad by specifying `AdagradOptimizer` instead of `GradientDescentOptimizer`. Note that you may need to use a larger learning rate with Adagrad. For non-convex optimization problems, Adam is sometimes more efficient than Adagrad. To use Adam, invoke the `tf.train.AdamOptimizer` method. This method takes several optional hyperparameters as arguments, but our solution only specifies one of these (`learning_rate`). In a production setting, you should specify and tune the optional hyperparameters carefully.
###Code
#
# YOUR CODE HERE: Retrain the network using Adagrad and then Adam.
#
###Output
_____no_output_____
###Markdown
Solution: Click below for the solution. First, let's try Adagrad.
###Code
_, adagrad_training_losses, adagrad_validation_losses = train_nn_regression_model(
my_optimizer=tf.train.AdagradOptimizer(learning_rate=0.5),
steps=500,
batch_size=100,
hidden_units=[10, 10],
training_examples=normalized_training_examples,
training_targets=training_targets,
validation_examples=normalized_validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
###Markdown
Now let's try Adam.
###Code
_, adam_training_losses, adam_validation_losses = train_nn_regression_model(
my_optimizer=tf.train.AdamOptimizer(learning_rate=0.009),
steps=500,
batch_size=100,
hidden_units=[10, 10],
training_examples=normalized_training_examples,
training_targets=training_targets,
validation_examples=normalized_validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
###Markdown
Let's print a graph of loss metrics side by side.
###Code
plt.ylabel("RMSE")
plt.xlabel("Periods")
plt.title("Root Mean Squared Error vs. Periods")
plt.plot(adagrad_training_losses, label='Adagrad training')
plt.plot(adagrad_validation_losses, label='Adagrad validation')
plt.plot(adam_training_losses, label='Adam training')
plt.plot(adam_validation_losses, label='Adam validation')
_ = plt.legend()
###Output
_____no_output_____
###Markdown
Task 3: Explore Alternate Normalization Methods **Try alternate normalizations for various features to further improve performance.** If you look closely at summary stats for your transformed data, you may notice that linearly scaling some features leaves them clumped close to `-1`. For example, many features have a median of `-0.8` or so, rather than `0.0`.
###Code
_ = normalized_training_examples.hist(bins=20, figsize=(18, 12), xlabelsize=10)
###Output
_____no_output_____
###Markdown
We might be able to do better by choosing additional ways to transform these features. For example, a log scaling might help some features. Or clipping extreme values may make the remainder of the scale more informative.
###Code
def log_normalize(series):
return series.apply(lambda x:math.log(x+1.0))
def clip(series, clip_to_min, clip_to_max):
return series.apply(lambda x:(
min(max(x, clip_to_min), clip_to_max)))
def z_score_normalize(series):
mean = series.mean()
std_dv = series.std()
return series.apply(lambda x:(x - mean) / std_dv)
def binary_threshold(series, threshold):
return series.apply(lambda x:(1 if x > threshold else 0))
###Output
_____no_output_____
###Markdown
The block above contains a few additional possible normalization functions. Try some of these, or add your own. Note that if you normalize the target, you'll need to un-normalize the predictions for loss metrics to be comparable.
###Code
def normalize(examples_dataframe):
"""Returns a version of the input `DataFrame` that has all its features normalized."""
#
# YOUR CODE HERE: Normalize the inputs.
#
pass
normalized_dataframe = normalize(preprocess_features(california_housing_dataframe))
normalized_training_examples = normalized_dataframe.head(12000)
normalized_validation_examples = normalized_dataframe.tail(5000)
_ = train_nn_regression_model(
my_optimizer=tf.train.GradientDescentOptimizer(learning_rate=0.0007),
steps=5000,
batch_size=70,
hidden_units=[10, 10],
training_examples=normalized_training_examples,
training_targets=training_targets,
validation_examples=normalized_validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
###Markdown
Solution: Click below for one possible solution. These are only a few ways in which we could think about the data; other transformations may work even better! `households`, `median_income` and `total_bedrooms` all appear normally distributed in a log space. `latitude`, `longitude` and `housing_median_age` would probably be better off just scaled linearly, as before. `population`, `total_rooms` and `rooms_per_person` have a few extreme outliers. They seem too extreme for log normalization to help, so let's clip them instead.
###Code
def normalize(examples_dataframe):
"""Returns a version of the input `DataFrame` that has all its features normalized."""
processed_features = pd.DataFrame()
processed_features["households"] = log_normalize(examples_dataframe["households"])
processed_features["median_income"] = log_normalize(examples_dataframe["median_income"])
processed_features["total_bedrooms"] = log_normalize(examples_dataframe["total_bedrooms"])
processed_features["latitude"] = linear_scale(examples_dataframe["latitude"])
processed_features["longitude"] = linear_scale(examples_dataframe["longitude"])
processed_features["housing_median_age"] = linear_scale(examples_dataframe["housing_median_age"])
processed_features["population"] = linear_scale(clip(examples_dataframe["population"], 0, 5000))
processed_features["rooms_per_person"] = linear_scale(clip(examples_dataframe["rooms_per_person"], 0, 5))
processed_features["total_rooms"] = linear_scale(clip(examples_dataframe["total_rooms"], 0, 10000))
return processed_features
normalized_dataframe = normalize(preprocess_features(california_housing_dataframe))
normalized_training_examples = normalized_dataframe.head(12000)
normalized_validation_examples = normalized_dataframe.tail(5000)
_ = train_nn_regression_model(
my_optimizer=tf.train.AdagradOptimizer(learning_rate=0.15),
steps=1000,
batch_size=50,
hidden_units=[10, 10],
training_examples=normalized_training_examples,
training_targets=training_targets,
validation_examples=normalized_validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
###Markdown
Optional Challenge: Use only Latitude and Longitude Features **Train a NN model that uses only latitude and longitude as features.** Real estate people are fond of saying that location is the only important feature in housing price. Let's see if we can confirm this by training a model that uses only latitude and longitude as features. This will only work well if our NN can learn complex nonlinearities from latitude and longitude. **NOTE:** We may need a network structure with more layers than were useful earlier in the exercise.
###Code
#
# YOUR CODE HERE: Train the network using only latitude and longitude
#
###Output
_____no_output_____
###Markdown
Solution: Click below for a possible solution. It's a good idea to keep latitude and longitude normalized:
###Code
def location_location_location(examples_dataframe):
"""Returns a version of the input `DataFrame` that keeps only the latitude and longitude."""
processed_features = pd.DataFrame()
processed_features["latitude"] = linear_scale(examples_dataframe["latitude"])
processed_features["longitude"] = linear_scale(examples_dataframe["longitude"])
return processed_features
lll_dataframe = location_location_location(preprocess_features(california_housing_dataframe))
lll_training_examples = lll_dataframe.head(12000)
lll_validation_examples = lll_dataframe.tail(5000)
_ = train_nn_regression_model(
my_optimizer=tf.train.AdagradOptimizer(learning_rate=0.05),
steps=500,
batch_size=50,
hidden_units=[10, 10, 5, 5, 5],
training_examples=lll_training_examples,
training_targets=training_targets,
validation_examples=lll_validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____ |
Section 1/Reinforcement Learning with TensorFlow & TRFL -- SARSA & SARSE.ipynb | ###Markdown
**Reinforcement Learning with TensorFlow & TRFL: SARSA & SARSE** * This notebook shows how to apply the classic Reinforcement Learning (RL) concepts of SARSA and SARSE with TRFL. * In SARSA, we estimate action values Q(s,a) like we did in Q learning; however, in SARSA we do on-policy updates while in Q learning we do off-policy updates. * We can create a policy from the action values. Two categories of policy methods are on-policy and off-policy. * In off-policy methods we use one policy for exploration (the behavior policy) while we learn a separate policy (the target policy). In on-policy methods, the exploration policy and the learned policy are the same. In SARSA we explore with the policy we are learning. * SARSE is a slight variation of SARSA. In SARSA the bootstrap value is the action value Q(s', a') of a next action sampled from the policy; in SARSE it is the expected action value over all actions, weighted by the policy. In SARS**A** we take an **A**ction while in SARS**E** we use an **E**xpected value. Outline: 1. Install TRFL 2. Define the GridWorld environment 3. Discuss on-policy and off-policy methods 4. Find the value of each state-action pair in the environment using SARSA 5. Find the value of each state-action pair in the environment using SARSE
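The difference between the two bootstrap targets fits in a few lines; this is an illustrative NumPy sketch, not the TRFL ops used later in the notebook.
###Code
# Illustrative NumPy sketch of the two TD targets (not the TRFL API).
import numpy as np

def sarsa_target(reward, q_next, next_action, gamma=0.99):
  # SARSA: bootstrap from the action actually sampled from the policy.
  return reward + gamma * q_next[next_action]

def sarse_target(reward, q_next, policy_probs, gamma=0.99):
  # SARSE (Expected SARSA): bootstrap from the policy-weighted expectation.
  return reward + gamma * np.dot(policy_probs, q_next)
###Output
_____no_output_____
###Markdown
First, install the dependencies.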
###Code
#TRFL has issues on Colab with TensorFlow version tensorflow-1.13.0rc1
#install TensorFlow 1.12 and restart run time
!pip install tensorflow==1.12
import os
os.kill(os.getpid(), 9)
#install TRFL
!pip install trfl==1.0
#install Tensorflow Probability
!pip install tensorflow-probability==0.5.0
###Output
Requirement already satisfied: trfl==1.0 in /usr/local/lib/python3.6/dist-packages (1.0)
Requirement already satisfied: dm-sonnet in /usr/local/lib/python3.6/dist-packages (from trfl==1.0) (1.23)
Requirement already satisfied: absl-py in /usr/local/lib/python3.6/dist-packages (from trfl==1.0) (0.7.1)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from trfl==1.0) (1.16.2)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from trfl==1.0) (1.11.0)
Requirement already satisfied: tensorflow-probability==0.5.0 in /usr/local/lib/python3.6/dist-packages (0.5.0)
Requirement already satisfied: numpy>=1.13.3 in /usr/local/lib/python3.6/dist-packages (from tensorflow-probability==0.5.0) (1.16.2)
Requirement already satisfied: six>=1.10.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-probability==0.5.0) (1.11.0)
###Markdown
**GridWorld**The GridWorld environment is a four by four grid. The agent randomly starts on the grid and can move either up, left, right, or down. If the agent reaches the upper left or lower right corner, the episode is over. Every action the agent takes gets a reward of -1 until it reaches the upper left or lower right.
###Code
#Environment from: https://github.com/dennybritz/reinforcement-learning/blob/cee9e78652f8ce98d6079282daf20680e5e17c6a/lib/envs/gridworld.py
#https://github.com/dennybritz/reinforcement-learning/blob/cee9e78652f8ce98d6079282daf20680e5e17c6a/DP/Value%20Iteration%20Solution.ipynb
#define the environment
import io
import numpy as np
import sys
from gym.envs.toy_text import discrete
import pprint
UP = 0
RIGHT = 1
DOWN = 2
LEFT = 3
class GridworldEnv(discrete.DiscreteEnv):
"""
Grid World environment from Sutton's Reinforcement Learning book chapter 4.
You are an agent on an MxN grid and your goal is to reach the terminal
state at the top left or the bottom right corner.
For example, a 4x4 grid looks as follows:
T o o o
o x o o
o o o o
o o o T
x is your position and T are the two terminal states.
You can take actions in each direction (UP=0, RIGHT=1, DOWN=2, LEFT=3).
Actions going off the edge leave you in your current state.
You receive a reward of -1 at each step until you reach a terminal state.
"""
metadata = {'render.modes': ['human', 'ansi']}
def __init__(self, shape=[4,4]):
if not isinstance(shape, (list, tuple)) or not len(shape) == 2:
raise ValueError('shape argument must be a list/tuple of length 2')
self.shape = shape
nS = np.prod(shape)
nA = 4
MAX_Y = shape[0]
MAX_X = shape[1]
P = {}
grid = np.arange(nS).reshape(shape)
it = np.nditer(grid, flags=['multi_index'])
while not it.finished:
s = it.iterindex
y, x = it.multi_index
# P[s][a] = (prob, next_state, reward, is_done)
P[s] = {a : [] for a in range(nA)}
is_done = lambda s: s == 0 or s == (nS - 1)
reward = 0.0 if is_done(s) else -1.0
#reward = 1.0 if is_done(s) else 0.0
# We're stuck in a terminal state
if is_done(s):
P[s][UP] = [(1.0, s, reward, True)]
P[s][RIGHT] = [(1.0, s, reward, True)]
P[s][DOWN] = [(1.0, s, reward, True)]
P[s][LEFT] = [(1.0, s, reward, True)]
# Not a terminal state
else:
ns_up = s if y == 0 else s - MAX_X
ns_right = s if x == (MAX_X - 1) else s + 1
ns_down = s if y == (MAX_Y - 1) else s + MAX_X
ns_left = s if x == 0 else s - 1
P[s][UP] = [(1.0, ns_up, reward, is_done(ns_up))]
P[s][RIGHT] = [(1.0, ns_right, reward, is_done(ns_right))]
P[s][DOWN] = [(1.0, ns_down, reward, is_done(ns_down))]
P[s][LEFT] = [(1.0, ns_left, reward, is_done(ns_left))]
it.iternext()
# Initial state distribution is uniform
isd = np.ones(nS) / nS
# We expose the model of the environment for educational purposes
# This should not be used in any model-free learning algorithm
self.P = P
super(GridworldEnv, self).__init__(nS, nA, P, isd)
def _render(self, mode='human', close=False):
""" Renders the current gridworld layout
For example, a 4x4 grid with the mode="human" looks like:
T o o o
o x o o
o o o o
o o o T
where x is your position and T are the two terminal states.
"""
if close:
return
outfile = io.StringIO() if mode == 'ansi' else sys.stdout
grid = np.arange(self.nS).reshape(self.shape)
it = np.nditer(grid, flags=['multi_index'])
while not it.finished:
s = it.iterindex
y, x = it.multi_index
if self.s == s:
output = " x "
elif s == 0 or s == self.nS - 1:
output = " T "
else:
output = " o "
if x == 0:
output = output.lstrip()
if x == self.shape[1] - 1:
output = output.rstrip()
outfile.write(output)
if x == self.shape[1] - 1:
outfile.write("\n")
it.iternext()
pp = pprint.PrettyPrinter(indent=2)
###Output
_____no_output_____
###Markdown
**Policies: On-Policy vs. Off-Policy**A policy is the agent's action selection method for each state (a probability distribution over actions). This can be a deterministic choice like a greedy policy where the highest valued action is always chosen, or a stochastic choice like in the TD learning notebook where we used a random policy at each state. Two categorizations of policies are on-policy and off-policy methods. SARSA and Q learning are very similar. The difference is in how the action value estimate is updated. In Q learning the update is off-policy, in SARSA the update is on-policy.In off-policy methods we use one policy for exploration (behavior policy) while we learn a separate policy (target policy). In on-policy methods, the exploration and learned policy are the same. In SARSA we explore and learn with one policy. The difference is in how we use the TD error. In Q learning the TD error is: reward + gamma*max(Q(s',a)) - current_state_estimate. The max value isn't based on the current policy that the agent is actually following, it's based on a greedy policy that is always selecting the highest action value estimate. Contrast this to SARSA, where the TD error is: reward + gamma*Q(s',sampled_action) - current_state_estimate. In SARSA we sample the next action from the policy and use that for our next action value estimate. The code cell below has the updates side by side. SARSA makes updates using the same policy it uses to explore the env.
###Code
#declare the environment
env = GridworldEnv()
#reset the environment and get the agent's current position (observation)
current_state = env.reset()
env._render()
print("")
action_dict = {0:"UP",1:"RIGHT", 2:"DOWN",3:"LEFT"}
q_table = np.array([[ 0., 0., 0., 0. ],
[-1.7, -2.4, -2.2, -1. ],
[-2.3, -2.8, -2.6, -2. ],
[-3.2, -3.3, -3., -3. ],
[-1., -2.4, -2.6, -1.8],
[-2., -2.8, -2.5, -2. ],
[-3., -3., -3., -3. ],
[-2.7, -2.5, -2., -2.5],
[-2., -2.4, -2.6, -2.4],
[-3., -3., -3., -3. ],
[-2.5, -2., -2., -2.9],
[-1.9, -1.5, -1., -2.3],
[-3., -3., -3.5, -3.1],
[-2.9, -2., -2.6, -2.9],
[-2.5, -1., -1.6, -2.3],
[ 0., 0., 0., 0. ]])
alpha = 0.1
gamma = 1.
epsilon = 0.1
def get_action(s):
#choose random action epsilon amount of the time
if np.random.rand() < epsilon:
action = env.action_space.sample()
action_type = "random"
else:
#Choose a greedy action.
action = np.argmax(q_table[s])
action_type = "greedy"
return action, action_type
action,action_type = get_action(current_state)
for i in range(10):
next_state,reward,done,info = env.step(action)
print("Agent took {} action {} and is now in state {} ".format(action_type, action_dict[action], current_state))
#in SARSA we find our next action based on the current policy (on-policy). In Q learning we don't need the next action, we take the max of the next state
next_action, action_type = get_action(next_state)
#update q table on-policy (SARSA)
    q_table[current_state,action] = q_table[current_state,action] + alpha*(reward + gamma*q_table[next_state,next_action] - q_table[current_state,action])
    #For reference, the off-policy (Q learning) update would be
    #q_table[current_state,action] = q_table[current_state,action] + alpha*(reward + gamma*np.max(q_table[next_state]) - q_table[current_state,action])
env._render()
print("")
if done:
print("Agent reached end of episode, resetting the env")
current_state = env.reset()
print("")
env._render()
print("")
else:
current_state = next_state
action = next_action
###Output
T o o o
o o o x
o o o o
o o o T
Agent took random action UP from state 7
T o o x
o o o o
o o o o
o o o T
Agent took greedy action DOWN from state 3
T o o o
o o o x
o o o o
o o o T
Agent took greedy action DOWN from state 7
T o o o
o o o o
o o o x
o o o T
Agent took greedy action DOWN from state 11
T o o o
o o o o
o o o o
o o o x
Agent reached end of episode, resetting the env
T o o o
o o o o
o o o o
o o x T
Agent took greedy action DOWN from state 14
T o o o
o o o o
o o o o
o o x T
Agent took greedy action RIGHT from state 14
T o o o
o o o o
o o o o
o o o x
Agent reached end of episode, resetting the env
T o o o
o o o o
o o o o
o x o T
Agent took greedy action RIGHT from state 13
T o o o
o o o o
o o o o
o o x T
Agent took greedy action RIGHT from state 14
T o o o
o o o o
o o o o
o o o x
Agent reached end of episode, resetting the env
T o o o
o o o o
o o x o
o o o T
Agent took greedy action RIGHT from state 10
T o o o
o o o o
o o o x
o o o T
Agent took greedy action DOWN from state 11
T o o o
o o o o
o o o o
o o o x
Agent reached end of episode, resetting the env
T o o o
o o o x
o o o o
o o o T
###Markdown
**TRFL Usage**Once again, the three main TRFL steps are:1. In the TensorFlow graph, define the necessary TensorFlow tensors2. In the graph, feed the tensors into the trfl method3. In the TensorFlow session, run the graph operationThe difference between trfl.sarsa and trfl.qlearning is that trfl.sarsa needs an additional argument: next_action_t. SARSA updates estimated values using this next_action_t, while in Q learning the update is done with the max value of q_next_t.
###Code
#set up TRFL graph
import tensorflow as tf
import trfl
num_actions = env.action_space.n
batch_size = 1
#https://github.com/deepmind/trfl/blob/master/docs/trfl.md#sarsaq_tm1-a_tm1-r_t-pcont_t-q_t-a_t-namesarsa
# Args:
# q_tm1: Tensor holding Q-values for first timestep in a batch of transitions, shape [B x num_actions].
# a_tm1: Tensor holding action indices, shape [B].
# r_t: Tensor holding rewards, shape [B].
# pcont_t: Tensor holding pcontinue values, shape [B].
# q_t: Tensor holding Q-values for second timestep in a batch of transitions, shape [B x num_actions].
# a_t: Tensor holding action indices for second timestep, shape [B].
# name: name to prefix ops created within this op.
q_t = tf.placeholder(dtype=tf.float32,shape=[batch_size,num_actions],name="action_value")
action_t = tf.placeholder(dtype=tf.int32,shape=[batch_size],name="action")
reward_t = tf.placeholder(dtype=tf.float32,shape=[batch_size],name='reward')
gamma_t = tf.placeholder(dtype=tf.float32,shape=[batch_size],name='discount_factor')
q_next_t = tf.placeholder(dtype=tf.float32,shape=[batch_size,num_actions],name="next_action_value")
next_action_t = tf.placeholder(dtype=tf.int32,shape=[batch_size],name="next_action_action")
_, sarsa_t = trfl.sarsa(q_t, action_t, reward_t, gamma_t, q_next_t, next_action_t, name='Sarsa')
###Output
_____no_output_____
###Markdown
**The RL Training Loop**In the next cell we are going to define the training loop and then run it in the following cell. The goal is to estimate the action value of each state (the value of each state-action combination) using SARSA. action_value_array holds the estimated values. After each step the agent takes in the env, we update the action_value_array with the SARSA formula. The SARSA loop differs from Q learning in that, prior to updating the estimate, we select the next action. We use the next action in the update, and then in the agent's next step we use that next action as the action to take.**TRFL Usage**The TRFL usage here is to run the trfl operation sarsa_t in sess.run(). We then take the output (sarsa_output) and extract the td_error part of that tensor. Using the td_error we update the action_value_array. For reference, the code below shows the full output of trfl.sarsa and the classic RL method of performing tabular SARSA learning updates.
###Code
def choose_action(q_table, state, epsilon=0.1):
#choose action based on epsilon-greedy policy
if np.random.rand() < epsilon:
eg_action = env.action_space.sample()
else:
        #Choose a greedy action based on the current action-value estimates.
eg_action = np.argmax(q_table[state])
return eg_action
def sarsa_action_value_estimate(env,episodes=1000,alpha=0.05,discount_factor=1.0,epsilon=0.1):
"""
Args:
env: OpenAI env. env.P represents the transition probabilities of the environment.
env.P[s][a] is a list of transition tuples (prob, next_state, reward, done).
env.nS is a number of states in the environment.
env.nA is a number of actions in the environment.
episodes: number of episodes to run
alpha: learning rate for state value updates
discount_factor: Gamma discount factor. pcont_t TRFL argument
Returns:
      Action-value estimates for each state, learned with an epsilon-greedy policy
"""
with tf.Session() as sess:
#initialize the estimated state values to zero
action_value_array = np.zeros((env.nS,env.nA))
#reset the env
current_state = env.reset()
eg_action = choose_action(action_value_array, current_state, epsilon)
    #run through each episode taking an epsilon-greedy action each time
    #update the estimated state-action values after each action
current_episode = 0
while current_episode < episodes:
#take a step using epsilon-greedy action
next_state, rew, done, info = env.step(eg_action)
next_action = choose_action(action_value_array, next_state, epsilon)
#run TRFL operation in the session
sarsa_output = sess.run([sarsa_t],feed_dict={q_t:np.expand_dims(action_value_array[current_state],axis=0),
action_t:np.expand_dims(eg_action,axis=0),
reward_t:np.expand_dims(rew,axis=0),
gamma_t:np.expand_dims(discount_factor,axis=0),
q_next_t:np.expand_dims(action_value_array[next_state],axis=0),
next_action_t:np.expand_dims(next_action,axis=0)})
# trfl.sarsa() returns:
# A namedtuple with fields:
# * `loss`: a tensor containing the batch of losses, shape `[B]`.
# * `extra`: a namedtuple with fields:
# * `target`: batch of target values for `q_tm1[a_tm1]`, shape `[B]`.
# * `td_error`: batch of temporal difference errors, shape `[B]`.
#Use the SARSA TD error to update estimated state-action values
action_value_array[current_state,eg_action] = action_value_array[current_state,eg_action] + alpha * sarsa_output[0].td_error
#For reference, here is the tabular SARSA update method
# action_value_array[current_state,eg_action] = action_value_array[current_state,eg_action] + \
# alpha * (rew + discount_factor*action_value_array[next_state,next_action] - action_value_array[current_state,eg_action])
      #if the episode is done, reset the env; if not, the next state becomes the current state and the loop repeats
if done:
current_state = env.reset()
eg_action = choose_action(action_value_array, current_state, epsilon)
current_episode += 1
else:
current_state = next_state
eg_action = next_action
return action_value_array
#run episodes with SARSA and get the state value estimates
action_values = sarsa_action_value_estimate(env,episodes=1000,alpha=0.1)
print("All Action Value Estimates:")
print(np.round(action_values.reshape((16,4)),2))
print("each row is a state, each column is an action")
print("")
optimal_action_estimates = np.max(action_values,axis=1)
print("Current Policy State Value Estimates:")
print(np.round(optimal_action_estimates.reshape(env.shape),2))
print("estimate of the current state value at each state")
print("")
###Output
All Action Value Estimates:
[[ 0. 0. 0. 0. ]
[-1.54 -2.14 -1.76 -1. ]
[-2.43 -2.52 -2.21 -2.07]
[-3.16 -3.2 -3.01 -3.01]
[-1. -1.74 -1.85 -1.57]
[-2.01 -2.52 -2.11 -2.08]
[-2.89 -2.89 -2.88 -2.88]
[-2.58 -2.35 -2.08 -2.37]
[-2.04 -2.19 -2.56 -2.52]
[-2.76 -2.77 -2.74 -2.77]
[-2.69 -2.05 -2.02 -2.17]
[-1.75 -1.3 -1. -2.12]
[-3. -3. -3.06 -3.01]
[-2.3 -2.04 -2.23 -2.28]
[-1.91 -1. -1.55 -1.64]
[ 0. 0. 0. 0. ]]
each row is a state, each column is an action
Current Policy State Value Estimates:
[[ 0. -1. -2.07 -3.01]
[-1. -2.01 -2.88 -2.08]
[-2.04 -2.74 -2.02 -1. ]
[-3. -2.04 -1. 0. ]]
estimate of the current state value at each state
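###Markdown
As a quick illustration (a sketch, not part of the original notebook flow), we can read off the greedy policy implied by these SARSA estimates using np.argmax and the action_dict defined earlier:
###Code
#derive the greedy action in each state from the learned action values
greedy_actions = np.argmax(action_values, axis=1)
#terminal states 0 and 15 have all-zero values, so their "greedy" action is arbitrary
print(np.array([action_dict[a] for a in greedy_actions]).reshape(env.shape))
###Output
_____no_output_____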
###Markdown
**SARSE vs. SARSA**SARSE slightly modifies SARSA. While in SARSA we sample to get the next action, in SARSE we use the policy probabilities to create an expected value of the next state estimate. For example, with SARSA we used epsilon-greedy exploration to get the next action. 92.5% of the time SARSA chose the greedy action (90% greedy + 2.5% random) and 2.5% of the time each of the other non-greedy actions was chosen. SARSE uses these probabilities (0.925, 0.025, 0.025, 0.025) and the state-action value estimates to create an expectation. The TD error update becomes: reward + gamma*next_state_estimate - current_state_estimate, where next_state_estimate is: next_state_estimate = 0.925 x q_table[next_state, greedy_action] + 0.025 x q_table[next_state, action_1] + 0.025 x q_table[next_state, action_2] + 0.025 x q_table[next_state, action_3]SARSE is on-policy.**TRFL Usage**In SARSE we use sarse_action_probs_t instead of next_action_t. I.e., we use the expected distribution over actions rather than the single action actually sampled from the policy.
###Code
#set up TRFL graph
import tensorflow as tf
import trfl
num_actions = env.action_space.n
batch_size = 1
#SARSE replaces the next_action tensor with a tensor holding a probability of next_actions
#https://github.com/deepmind/trfl/blob/master/docs/trfl.md#sarseq_tm1-a_tm1-r_t-pcont_t-q_t-probs_a_t-debugfalse-namesarse
# Args:
# q_tm1: Tensor holding Q-values for first timestep in a batch of transitions, shape [B x num_actions].
# a_tm1: Tensor holding action indices, shape [B].
# r_t: Tensor holding rewards, shape [B].
# pcont_t: Tensor holding pcontinue values, shape [B].
# q_t: Tensor holding Q-values for second timestep in a batch of transitions, shape [B x num_actions].
# probs_a_t: Tensor holding action probabilities for second timestep, shape [B x num_actions].
# debug: Boolean flag, when set to True adds ops to check whether probs_a_t is a batch of (approximately) valid probability distributions.
# name: name to prefix ops created by this function.
sarse_q_t = tf.placeholder(dtype=tf.float32,shape=[batch_size,num_actions],name="action_value")
sarse_action_t = tf.placeholder(dtype=tf.int32,shape=[batch_size],name="action")
sarse_reward_t = tf.placeholder(dtype=tf.float32,shape=[batch_size],name='reward')
sarse_gamma_t = tf.placeholder(dtype=tf.float32,shape=[batch_size],name='discount_factor')
sarse_q_next_t = tf.placeholder(dtype=tf.float32,shape=[batch_size,num_actions],name="next_action_value")
sarse_action_probs_t = tf.placeholder(dtype=tf.float32,shape=[batch_size,num_actions],name='action_probs')
_, sarse_t = trfl.sarse(sarse_q_t, sarse_action_t, sarse_reward_t, sarse_gamma_t, sarse_q_next_t, sarse_action_probs_t, name='Sarse')
def sarse_action_value_estimate(env,episodes=1000,alpha=0.05,discount_factor=1.0,epsilon=0.1):
"""
Args:
env: OpenAI env. env.P represents the transition probabilities of the environment.
env.P[s][a] is a list of transition tuples (prob, next_state, reward, done).
env.nS is a number of states in the environment.
env.nA is a number of actions in the environment.
episodes: number of episodes to run
alpha: learning rate for state value updates
discount_factor: Gamma discount factor. pcont_t TRFL argument
Returns:
      Action-value estimates for each state, learned with an epsilon-greedy policy
"""
with tf.Session() as sess:
#initialize the estimated state values to zero
action_value_array = np.zeros((env.nS,env.nA))
#reset the env
current_state = env.reset()
#chance of choosing random action based on epsilon. use this with SARSE's action probabilities
random_prob = epsilon/env.nA
greedy_prob = 1.-epsilon
    #run through each episode taking an epsilon-greedy action each time
    #update the estimated state-action values after each action
current_episode = 0
while current_episode < episodes:
#choose action based on epsilon-greedy policy
if np.random.rand() < epsilon:
eg_action = env.action_space.sample()
else:
        #Choose a greedy action based on the current action-value estimates.
eg_action = np.argmax(action_value_array[current_state])
#take a step using epsilon-greedy action
next_state, rew, done, info = env.step(eg_action)
#generate action probabilities
      #each action gets probability epsilon/nA from the random component
action_probs = np.array([random_prob]*env.nA)
#choose greedy action with probability 1-epsilon
action_probs[np.argmax(action_value_array[next_state])] += greedy_prob
#run TRFL operation in the session
sarse_output = sess.run([sarse_t],feed_dict={sarse_q_t:np.expand_dims(action_value_array[current_state],axis=0),
sarse_action_t:np.expand_dims(eg_action,axis=0),
sarse_reward_t:np.expand_dims(rew,axis=0),
sarse_gamma_t:np.expand_dims(discount_factor,axis=0),
sarse_q_next_t:np.expand_dims(action_value_array[next_state],axis=0),
sarse_action_probs_t:np.expand_dims(action_probs,axis=0)})
# trfl.sarse() returns:
# A namedtuple with fields:
# * `loss`: a tensor containing the batch of losses, shape `[B]`.
# * `extra`: a namedtuple with fields:
# * `target`: batch of target values for `q_tm1[a_tm1]`, shape `[B]`.
# * `td_error`: batch of temporal difference errors, shape `[B]`.
#Use the SARSE TD error to update estimated state-action values
action_value_array[current_state,eg_action] = action_value_array[current_state,eg_action] + alpha * sarse_output[0].td_error
#For reference, here is the tabular SARSE update method
# next_action_value_estimate = 0.
# for i in range(env.nA):
# next_action_value_estimate += action_probs[i] * action_value_array[next_state,i]
# action_value_array[current_state,eg_action] = action_value_array[current_state,eg_action] + \
# alpha * (rew + discount_factor*next_action_value_estimate - action_value_array[current_state,eg_action])
      #if the episode is done, reset the env; if not, the next state becomes the current state and the loop repeats
if done:
current_state = env.reset()
current_episode += 1
else:
current_state = next_state
return action_value_array
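# For intuition, a small standalone check of the expected-SARSA target
# (illustrative numbers only, not taken from the run below): with epsilon=0.1
# and 4 actions, the greedy action has probability 0.925 and the others 0.025 each.
example_probs = np.array([0.925, 0.025, 0.025, 0.025])
example_qs = np.array([-1.0, -2.0, -2.0, -2.0])
print("example expected next-state value:", example_probs @ example_qs)  # -> -1.075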
#run episodes with SARSE and get the state value estimates
action_values = sarse_action_value_estimate(env,episodes=1000,alpha=0.1)
print("All Action Value Estimates:")
print(np.round(action_values.reshape((16,4)),2))
print("each row is a state, each column is an action")
print("")
optimal_action_estimates = np.max(action_values,axis=1)
print("Current Policy State Value Estimates:")
print(np.round(optimal_action_estimates.reshape(env.shape),2))
print("estimate of the current state value at each state")
print("")
###Output
_____no_output_____ |
004_lane_lines/P1.ipynb | ###Markdown
Self-Driving Car Engineer Nanodegree Project: **Finding Lane Lines on the Road** ***In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below. Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) that can be used to guide the writing process. Completing both the code in the IPython notebook and the writeup template will cover all of the [rubric points](https://review.udacity.com/!/rubrics/322/view) for this project.---Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.**Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".**--- **The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Transform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.**--- Your output should look something like this (above) after detecting line segments using the helper functions below Your goal is to connect/average/extrapolate line segments to get output like this **Run the cell below to import some packages. If you get an `import error` for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.** Import Packages
###Code
#importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
from pathlib import Path
import os
%matplotlib inline
###Output
_____no_output_____
###Markdown
Read in an Image
###Code
#reading in an image
image = mpimg.imread('test_images/solidWhiteRight.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')
###Output
This image is: <class 'numpy.ndarray'> with dimensions: (540, 960, 3)
###Markdown
Ideas for Lane Detection Pipeline **Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:**`cv2.inRange()` for color selection `cv2.fillPoly()` for regions selection `cv2.line()` to draw lines on an image given endpoints `cv2.addWeighted()` to coadd / overlay two images`cv2.cvtColor()` to grayscale or change color`cv2.imwrite()` to output images to file `cv2.bitwise_and()` to apply a mask to an image**Check out the OpenCV documentation to learn about these and discover even more awesome functionality!** Helper Functions Below are some helper functions to help get you started. They should look familiar from the lesson! These are the values used for Masking the Image to focus on the Lanes section from a captured Image/Video
###Code
EPSILON = 0.005
LOWER_LEFT_X = 0
LOWER_LEFT_Y = 540
UPPER_LEFT_X = 450
UPPER_LEFT_Y = 320
UPPER_RIGHT_X = 490
UPPER_RIGHT_Y = 320
LOWER_RIGHT_X = 960
LOWER_RIGHT_Y = 540
LEFT_SLOPE = -1 *(UPPER_LEFT_Y - LOWER_LEFT_Y)/ ((UPPER_LEFT_X - LOWER_LEFT_X) + EPSILON)
RIGHT_SLOPE = -1 *(UPPER_RIGHT_Y - LOWER_RIGHT_Y)/ ((UPPER_RIGHT_X - LOWER_RIGHT_X) + EPSILON )
REGION_CENTER_X = ((UPPER_LEFT_X + UPPER_RIGHT_X) / 2 + (LOWER_RIGHT_X + LOWER_LEFT_X) / 2 ) / 2
REGION_CENTER_Y = ((UPPER_LEFT_Y + UPPER_RIGHT_Y) / 2 + (LOWER_RIGHT_Y + LOWER_LEFT_Y) / 2 ) / 2
print("Left Slope : {} || Right Slope : {}".format(LEFT_SLOPE,RIGHT_SLOPE))
print("Region Center : ({},{})".format(REGION_CENTER_X,REGION_CENTER_Y))
import math
def grayscale(img):
"""Applies the Grayscale transform
This will return an image with only one color channel
but NOTE: to see the returned image as grayscale
(assuming your grayscaled image is called 'gray')
you should call plt.imshow(gray, cmap='gray')"""
return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Or use BGR2GRAY if you read an image with cv2.imread()
# return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
def canny(img, low_threshold, high_threshold):
"""Applies the Canny transform"""
return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
"""Applies a Gaussian Noise kernel"""
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img, vertices):
"""
Applies an image mask.
Only keeps the region of the image defined by the polygon
formed from `vertices`. The rest of the image is set to black.
`vertices` should be a numpy array of integer points.
"""
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
#returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
def draw_lines(img, lines, color=[255, 0, 0], thickness=5):
"""
NOTE: this is the function you might want to use as a starting point once you want to
average/extrapolate the line segments you detect to map out the full
extent of the lane (going from the result shown in raw-lines-example.mp4
to that shown in P1_example.mp4).
Think about things like separating line segments by their
slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
line vs. the right line. Then, you can average the position of each of
the lines and extrapolate to the top and bottom of the lane.
This function draws `lines` with `color` and `thickness`.
Lines are drawn on the image inplace (mutates the image).
If you want to make the lines semi-transparent, think about combining
this function with the weighted_img() function below
"""
# Creating Array to store Slope & Intercept of every line detected by Hough
detected_edges = np.zeros(shape=(len(lines),2))
    # Storing lines in different lists based on which lane they belong to
left_lane =list()
right_lane = list()
    # Storing coordinates in different lists based on which lane they belong to
x_left = list()
y_left = list()
x_right = list()
y_right = list()
for index,line in enumerate(lines):
for x1,y1,x2,y2 in line:
# Computing Slope & intercept of all the line detected
detected_edges[index] = calculate_slope(line,detected_edges)
# Condition check that if the points on the line are less than /on the left side of the
# masked region, then it's part of the left lane other wise right.
if (x2 < REGION_CENTER_X ) and (x1 < REGION_CENTER_X) :
left_lane.append(line)
x_left.append(x1)
x_left.append(x2)
y_left.append(y1)
y_left.append(y2)
elif (x2 > REGION_CENTER_X ) and (x1 > REGION_CENTER_X) :
right_lane.append(line)
x_right.append(x1)
x_right.append(x2)
y_right.append(y1)
y_right.append(y2)
slope_max = detected_edges[detected_edges.argmax(axis=0)[0]]
slope_min = detected_edges[detected_edges.argmin(axis=0)[0]]
print("Max Slope detected by Hough Transform: {}".format(slope_max))
print("Min Slope by Hough Transform: {}".format(slope_min))
# extrapolating straight lines from the Min & Max points
(x1, y1), (x2, y2) = extrapolate_line(x_left,y_left)
(x3, y3), (x4, y4) = extrapolate_line(x_right,y_right)
print("Left Lane Cordinates : ",(x1, y1), (x2, y2))
print("Right Lane Cordinates : ",(x3, y3), (x4, y4))
cv2.line(img, (x1, y1), (x2, y2), color, thickness)
cv2.line(img, (x3, y3), (x4, y4), color, thickness)
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
"""
`img` should be the output of a Canny transform.
Returns an image with hough lines drawn.
"""
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
draw_lines(line_img, lines)
return line_img
# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., γ=0.):
"""
`img` is the output of the hough_lines(), An image with lines drawn on it.
Should be a blank image (all black) with lines drawn on it.
`initial_img` should be the image before any processing.
The result image is computed as follows:
initial_img * α + img * β + γ
NOTE: initial_img and img must be the same shape!
"""
return cv2.addWeighted(initial_img, α, img, β, γ)
def calculate_slope(line):
    """
    Computes the slope & intercept of a Hough line.
    Returns a list with the slope & intercept of the line.
    """
[[x1,y1,x2,y2]] = line
    dx = (x2 - x1) or EPSILON # guard against vertical segments (zero division)
dy = y2 - y1
slope = dy/dx
intercept = y1 - (x1 * slope)
return [slope,intercept]
def extrapolate_line(x_lane,y_lane):
"""
    Extrapolates a line based on the min & max coordinates of the specified region of interest.
"""
y_min = UPPER_LEFT_Y
y_max = LOWER_RIGHT_Y
coeff = np.polyfit(x_lane, y_lane, 1)
m = coeff[0]
b = coeff[1]
x_min = int(abs((b - y_min)/m))
x_max = int(abs((b - y_max)/m))
return (x_min, y_min),(x_max, y_max)
###Output
_____no_output_____
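###Markdown
The lesson also mentions `cv2.inRange()` for color selection, which this pipeline does not end up using. For reference, here is a minimal sketch of the idea (the threshold values are illustrative assumptions, not tuned for this project):
###Code
def select_white(img_rgb, lower=(200, 200, 200), upper=(255, 255, 255)):
    """Keep only pixels whose RGB values fall inside [lower, upper] (e.g. near-white lane paint)."""
    mask = cv2.inRange(img_rgb, np.array(lower, dtype=np.uint8), np.array(upper, dtype=np.uint8))
    return cv2.bitwise_and(img_rgb, img_rgb, mask=mask)
###Output
_____no_output_____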
###Markdown
Test ImagesBuild your pipeline to work on the images in the directory "test_images" **You should make sure your pipeline works well on these images before you try the videos.** Build a Lane Finding Pipeline Build the pipeline and run your solution on all test_images. Make copies into the `test_images_output` directory, and you can use the images in your writeup report.Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.
###Code
# TODO: Build your pipeline that will draw lane lines on the test_images
images = Path('test_images/').glob('**/*')
#images = os.listdir("test_images/")
for image_path in images :
image = mpimg.imread(image_path)
print("Current Image : {} with dimensions: {}".format(image_path.name,image.shape))
#plt.imshow(image)
#plt.show()
# Grayscale Conversion
img = grayscale(image)
#plt.imshow(img)
#plt.show()
# Canny Edge Detection
img_gaussian = gaussian_blur(img,kernel_size=3)
img_canny = canny(img_gaussian, low_threshold =50, high_threshold=150)
#plt.imshow(img_canny)
#plt.show()
# Region of Interest
vertices= np.array([[(LOWER_LEFT_X,LOWER_LEFT_Y),
(UPPER_LEFT_X, UPPER_LEFT_Y),
(UPPER_RIGHT_X, UPPER_RIGHT_Y),
(LOWER_RIGHT_X,LOWER_RIGHT_Y)]],dtype=np.int32)
img_region = region_of_interest(img_canny,vertices )
#plt.imshow(img_region)
#plt.show()
# Hough Transform
img_hough = hough_lines(img_region, rho = 2 , theta = np.pi /180, threshold = 10, min_line_len = 10, max_line_gap = 20)
plt.imshow(img_hough)
plt.show()
result = weighted_img(img_hough,image)
#print("Final Image Shape : {} ".format(result.shape))
#plt.imshow(result)
#plt.show()
#break
# then save them to the test_images_output directory.
mpimg.imsave(os.path.join('test_images_output/',image_path.name), result)
###Output
Current Image : solidYellowCurve.jpg with dimensions: (540, 960, 3)
Max Slope detected by Hough Transform: [ 0.66666667 -22.66666667]
Min Slope by Hough Transform: [ -0.95652174 755.26086957]
Left Lane Coordinates : (466, 320) (165, 540)
Right Lane Coordinates : (490, 320) (858, 540)
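###Markdown
Note that draw_lines() above assigns segments to a lane by their x-position relative to the region center. The alternative hinted at in its docstring is to split by slope sign (in image coordinates the left lane has negative slope and the right lane positive slope). A minimal sketch of that idea, as a hypothetical helper that is not wired into the pipeline above:
###Code
def split_by_slope(lines, slope_threshold=0.3):
    """Split Hough segments into left/right lane lists by slope sign, ignoring near-horizontal ones."""
    left, right = [], []
    for line in lines:
        for x1, y1, x2, y2 in line:
            if x2 == x1:
                continue  # skip vertical segments to avoid division by zero
            slope = (y2 - y1) / (x2 - x1)
            if slope < -slope_threshold:
                left.append(line)
            elif slope > slope_threshold:
                right.append(line)
    return left, right
###Output
_____no_output_____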
###Markdown
Test on VideosYou know what's cooler than drawing lanes over images? Drawing lanes over video!We can test our solution on two provided videos:`solidWhiteRight.mp4``solidYellowLeft.mp4`**Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.****If you get an error that looks like this:**```NeedDownloadError: Need ffmpeg exe. You can download it by calling: imageio.plugins.ffmpeg.download()```**Follow the instructions in the error message and check out [this forum post](https://discussions.udacity.com/t/project-error-of-test-on-videos/274082) for more troubleshooting tips across operating systems.**
###Code
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
def process_image(image):
# NOTE: The output you return should be a color image (3 channel) for processing video below
# TODO: put your pipeline here,
# you should return the final output (image where lines are drawn on lanes)
# TODO: Build your pipeline that will draw lane lines on the test_images
frame = cv2.resize(image, (960,540))
print(type(frame))
# Grayscale Conversion
img = grayscale(frame)
#plt.imshow(img)
#plt.show()
# Canny Edge Detection
img_gaussian = gaussian_blur(img,kernel_size=3)
img_canny = canny(img_gaussian, low_threshold =50, high_threshold=150)
#plt.imshow(img_canny)
#plt.show()
# Region of Interest
vertices= np.array([[(LOWER_LEFT_X,LOWER_LEFT_Y),
(UPPER_LEFT_X, UPPER_LEFT_Y),
(UPPER_RIGHT_X, UPPER_RIGHT_Y),
(LOWER_RIGHT_X,LOWER_RIGHT_Y)]],dtype=np.int32)
img_region = region_of_interest(img_canny,vertices)
#plt.imshow(img_region)
#plt.show()
# Hough Transform
img_hough = hough_lines(img_region, rho = 2 , theta = np.pi /180, threshold = 10, min_line_len = 10, max_line_gap = 20)
#plt.imshow(img_hough)
#plt.show()
result = weighted_img(img_hough,frame)
#print("Final Image Shape : {} ".format(result.shape))
#plt.imshow(result)
#plt.show()
return result
###Output
_____no_output_____
###Markdown
Let's try the one with the solid white lane on the right first ...
###Code
white_output = 'test_videos_output/solidWhiteRight.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5)
clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
###Output
<class 'numpy.ndarray'>
Max Slope detected by Hough Transform: [ 0.68421053 -26.10526316]
Min Slope by Hough Transform: [ -0.78571429 672.92857143]
Left Lane Coordinates : (456, 320) (157, 540)
Right Lane Coordinates : (504, 320) (858, 540)
[MoviePy] >>>> Building video test_videos_output/solidWhiteRight.mp4
[MoviePy] Writing video test_videos_output/solidWhiteRight.mp4
###Markdown
Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
###Code
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(white_output))
###Output
_____no_output_____
###Markdown
Improve the draw_lines() function**At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".****Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.** Now for the one with the solid yellow lane on the left. This one's more tricky!
###Code
yellow_output = 'test_videos_output/solidYellowLeft.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5)
clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(yellow_output))
###Output
_____no_output_____
###Markdown
Writeup and SubmissionIf you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this IPython notebook ready along with the writeup, it's time to submit for review! Here is a [link](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) to the writeup template file. Optional ChallengeTry your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
###Code
challenge_output = 'test_videos_output/challenge.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5)
clip3 = VideoFileClip('test_videos/challenge.mp4')
challenge_clip = clip3.fl_image(process_image)
%time challenge_clip.write_videofile(challenge_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(challenge_output))
###Output
_____no_output_____ |
basic_workflow.ipynb | ###Markdown
A scanpy workflow for the initial basic processing of scRNA-seq data Note, this workflow does not have many steps; it is meant only to harmonize the basic upstream matrix processing analyses.This workflow is for processing 10X single cell transcriptomics data with scanpy. It is based on my own workflows, the [scanpy tutorial](https://scanpy-tutorials.readthedocs.io/en/latest/pbmc3k.html), and [best practice guidelines](https://github.com/theislab/single-cell-tutorial).Inputs:- CellRanger v3 matrix files (filtered)-- barcodes.tsv, genes.tsv, matrix.mtx- You may edit the input parameter `value`s in `params_lst`Outputs:- `.h5ad` output file with the `anndata` object containing the processed gene expression matrixNote, please install the dependencies with the correct versions via the `environment.yml` file using `conda` User input parameters
###Code
import sys
print(sys.version, sys.executable)
assert sys.version_info[:2] == (3, 7) # should have been installed via conda
# TODO: find a better way to specify parameters, this is overly complicated
# Argparse style for notebooks or Mypy way for type checking?
from dataclasses import dataclass
from typing import Callable, Any
from inspect import getsource
@dataclass
class Param:
"""Holds information on a particular user input parameter."""
name: str
value: Any
description: str
func: Callable
def __bool__(self):
"""Forbid boolean comparisons."""
raise NotImplementedError
def __eq__(self, other):
"""Forbid equality comparisons"""
raise NotImplementedError
def validate(self):
"""Validate the .value satisfies the .func"""
if not self.func(self.value):
_err = (
f"parameter {self.name}={self.value}, "
f"should satisfy the function: {getsource(self.func).strip()}"
)
raise Exception(_err)
###Output
_____no_output_____
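###Markdown
A quick check (illustrative only) that `Param.validate` raises on a value that fails its validation function:
###Code
_bad_param = Param("min_genes", "fifty", "should be an int", lambda x: isinstance(x, int))
try:
    _bad_param.validate()
except Exception as err:
    print(err)
###Output
_____no_output_____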
###Markdown
Specifying the input directory and output file
###Code
params_lst = [
# File/directory names
Param(
"input_dir",
"test_data", # Value-- can be edited
"the directory containing subdirectories with the CellRanger filtered matrix.mtx file",
lambda x: isinstance(x, str),
),
Param(
"output_file",
"results.h5ad", # Value-- can be edited
".h5ad output file name",
lambda x: isinstance(x, str),
),
Param(
"log_file",
sys.stdout, # Value-- can be edited
"log file name, sys.stdout for printing",
        lambda x: x == sys.stdout or isinstance(x, str),
),
# Other params
Param(
"min_genes",
50, # Value-- can be edited
"filter cells with fewer than n genes expressed",
lambda x: isinstance(x, int),
),
Param(
"min_cells",
3, # Value-- can be edited
"filter genes expressed in fewer than n cells",
lambda x: isinstance(x, int),
),
Param(
"filter_n",
False, # Value-- can be edited
"filter <4500 counts (arbituary value)",
lambda x: isinstance(x, bool),
),
Param("filter_mt",
False, # Value-- can be edited
"filter <5% mitocondrial",
lambda x: isinstance(x, bool)
),
Param(
"doublet_rem",
False, # Value-- can be edited
"remove doublets with scrublet, recommend to check the plots first",
lambda x: isinstance(x, bool),
),
Param(
"regress",
False, # Value-- can be edited
"regress out (mostly) unwanted sources of variation; not recommended",
lambda x: isinstance(x, bool),
),
Param(
"scale",
False, # Value-- can be edited
"scale data to unit variance and zero mean; not recommended",
lambda x: isinstance(x, bool),
),
Param(
"exclude_high",
False, # Value-- can be edited
"exclude very highly expressed genes during normalization",
lambda x: isinstance(x, bool),
),
]
for param in params_lst:
print(
f"Name: {param.name}, Value: {param.value},\nDescription: {param.description}\n"
)
for param in params_lst:
param.validate()
params = {param.name: param for param in params_lst}
###Output
_____no_output_____
###Markdown
Initial configuration and logging Setup logging and imports
###Code
import os
import glob
import logging
logging.basicConfig(stream=params["log_file"].value, filemode="a", level=logging.INFO)
logging.info(sys.version_info)
import collections
import itertools
import numpy as np
import pandas as pd
import scanpy as sc
from anndata import AnnData
sc.logging.logfile = params["log_file"].value # for scanpy's own logging
import scrublet as scr
logging.info(scr.__file__)
###Output
_____no_output_____
###Markdown
Log config and settings
###Code
# Reproducibility settings
import random
seed = 42
random.seed(seed)
logging.info("random seed {}".format(seed))
hash_seed = os.environ.get("PYTHONHASHSEED")
logging.info(f"PYTHONHASHSEED= {hash_seed}")
if hash_seed != "0":  # os.environ values are strings
    logging.warning(
        "Set the PYTHONHASHSEED environment variable to 0 for reproducibility"
    )
# scanpy settings
sc.settings.verbosity = 3 # verbosity: errors (0), warnings (1), info (2), hints (3)
sc.logging.print_versions()
# Increase plot resolution
sc.settings.set_figure_params(dpi=80)
###Output
_____no_output_____
###Markdown
Read the input data and check/filter doublets
###Code
input_dir = sorted(
os.listdir(params["input_dir"].value)
) # sort to ensure order is OS independent
adata_list = []
for each_dir in input_dir:
adata_tmp = sc.read_10x_mtx(
os.path.join(
params["input_dir"].value, each_dir
), # the directory with the `.mtx` file
var_names="gene_symbols", # use gene symbols for the variable names (variables-axis index)
cache=True,
) # write a cache file for faster subsequent reading # will this work with list?
    # Check doublets in each 'batch' separately
scrub = scr.Scrublet(adata_tmp.X)
(
adata_tmp.obs["doublet_scores"],
adata_tmp.obs["predicted_doublets"],
) = scrub.scrub_doublets()
scrub.plot_histogram()
scrub.set_embedding('UMAP', scr.get_umap(scrub.manifold_obs_, 10, min_dist=0.3))
scrub.plot_embedding('UMAP', order_points=True)
if params["doublet_rem"].value:
# Actually do the filtering
adata_tmp = adata_tmp[adata_tmp.obs["predicted_doublets"] == False, :]
adata_list.append(adata_tmp)
###Output
_____no_output_____
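###Markdown
Before concatenating, it can be useful to log the predicted doublet fraction per batch (a small optional sketch using the per-batch objects collected above):
###Code
for i, ad in enumerate(adata_list):
    frac = ad.obs["predicted_doublets"].mean()
    logging.info(f"batch {i}: {ad.n_obs} cells, predicted doublet fraction {frac:.3f}")
###Output
_____no_output_____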
###Markdown
Concatenate the different batches (note you may want to name the batch categories)
###Code
# Use an older version of AnnData since the newer versions
# have this bug that makes concatenation slow
# https://github.com/theislab/anndata/issues/303
adata_merged = AnnData.concatenate(
*adata_list, join="outer"
) # outer means union rather than intersection,
# Note will fill with 0s
adata_merged
# Check .obs look normal
display(adata_merged.obs.head(2))
adata_merged.obs.tail(2)
# Check the .var looks normal
display(adata_merged.var.head(2))
adata_merged.var.tail(2)
adata_merged.var_names_make_unique() # this is unnecessary if using 'gene_ids'
###Output
_____no_output_____
###Markdown
Filtering and normalisation Show those genes that yield the highest fraction of counts in each single cells, across all cells.
###Code
sc.pl.highest_expr_genes(adata_merged, n_top=20)
###Output
_____no_output_____
###Markdown
Minimal filtering
###Code
sc.pp.filter_cells(adata_merged, min_genes=params["min_genes"].value)
sc.pp.filter_genes(adata_merged, min_cells=params["min_cells"].value)
mito_genes = adata_merged.var_names.str.startswith("MT-")
# for each cell compute fraction of counts in mito genes vs. all genes
# the `.A1` is only necessary as X is sparse (to transform to a dense array after summing)
adata_merged.obs["percent_mito"] = (
np.sum(adata_merged[:, mito_genes].X, axis=1).A1 / np.sum(adata_merged.X, axis=1).A1
)
# add the total counts per cell as observations-annotation to adata
adata_merged.obs["n_counts"] = adata_merged.X.sum(axis=1).A1
sc.pl.violin(adata_merged, ['n_genes', 'n_counts', 'percent_mito'],
jitter=0.4, multi_panel=True)
# More optional filtering
if params["filter_n"].value:
adata_merged = adata_merged[adata_merged.obs['n_genes'] < 4500, :]
if params["filter_mt"].value:
adata_merged = adata_merged[adata_merged.obs['percent_mito'] < 0.05, :]
###Output
_____no_output_____
###Markdown
Normalize data
###Code
#after normalization, each observation (cell) has a total count equal to
# the median of total counts for observations (cells) before normalization"
sc.pp.normalize_total(adata_merged,
exclude_highly_expressed=params["exclude_high"].value)
###Output
_____no_output_____
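###Markdown
As an optional sanity check: after `normalize_total` (and without excluding highly expressed genes), every cell's total count should equal the same target value, the median of the pre-normalization totals:
###Code
totals = np.asarray(adata_merged.X.sum(axis=1)).ravel()
logging.info(f"per-cell totals after normalization: min={totals.min():.2f}, max={totals.max():.2f}")
###Output
_____no_output_____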
###Markdown
Log-transform the data
###Code
sc.pp.log1p(adata_merged)
###Output
_____no_output_____
###Markdown
Regress out effects of total counts per cell and the percentage of mitochondrial genes expressed. Scale the data to unit variance.
###Code
# Not always best practice:
# https://github.com/theislab/scanpy/issues/526
if params["regress"].value:
sc.pp.regress_out(adata_merged, ['n_counts', 'percent_mito']) # memory intensive step
###Output
_____no_output_____
###Markdown
Scale each gene to unit variance. Clip values exceeding standard deviation 10.
###Code
if params["scale"].value:
sc.pp.scale(adata_merged, max_value=10)
# Not recommended here:
# https://www.embopress.org/doi/10.15252/msb.20188746
logging.info(f'Total number of cells: {adata_merged.n_obs:d}')
logging.info(f'Total number of genes: {adata_merged.n_vars:d}')
###Output
_____no_output_____
###Markdown
Save the results
###Code
adata_merged.write(params["output_file"].value, compression='gzip')
###Output
_____no_output_____ |
module_3_classification/3_1_logistic_regression.ipynb | ###Markdown
Lab 4.1: Logistic regression---1. Let's remember that the objective of a _classification_ method is to assign an observation to a category or class.2. Logistic regression is one of the methods we can use to do this and is arguably the most famous and well-used classifier. 3. It *is* a regression, but don't let that confuse you. It estimates probabilities of class membership.4. This notebook complements the Logistic Regression module by illustrating the coding/programming application of what was explained in class.--- Notebook Structure- [Importing Packages](imporp)- [Reading the dataset](rds) - [Missing Values](msvl) - [Implementation of Logistic Regression](implementation) - [Admissions](addmission) - [Republican or Democrat](repdemoc) Importing Packages---
###Code
## Basic packages
import numpy as np
import pandas as pd
## Graphing packages
import seaborn as sns
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
## Scikit learn and Statsmodel packages
from sklearn.linear_model import LogisticRegression, LinearRegression
import statsmodels.api as sm
## Operating system dependent functionality
import os
## Lines of code needed to make sure graph(s) appear in notebook, and check versions of packages
%matplotlib inline
%load_ext watermark
%config InlineBackend.figure_format = 'retina'
%watermark -v -d -a 'Delta Analytics' -p scikit-learn,matplotlib,numpy,pandas
###Output
_____no_output_____
###Markdown
Reading the dataset---1. In this exercise we are using the admissions dataset.
###Code
data_directory = os.path.join('../datasets', 'admissions')
admission_filepath = os.path.join(data_directory, 'admissions.csv')
admissions = pd.read_csv(admission_filepath)
admissions.head(3)
admissions.tail(3)
###Output
_____no_output_____
###Markdown
Missing Values---1. Check for missing values; if present, drop them (not the best practice, but OK for now; an alternative is sketched below)
###Code
admissions.isnull().sum()
admissions.dropna(inplace=True)
admissions.isnull().sum()
admissions.prestige.value_counts()
###Output
_____no_output_____
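###Markdown
For reference, the alternative mentioned above: impute instead of dropping (a sketch on a fresh copy of the data; the rest of this notebook keeps the dropped-rows version):
###Code
admissions_raw = pd.read_csv(admission_filepath)
admissions_imputed = admissions_raw.fillna(admissions_raw.median())
print(admissions_imputed.isnull().sum())
###Output
_____no_output_____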
###Markdown
Implementation of Logistic Regression--- Admissions---
###Code
## Get some basic stats from your dataset
admissions.describe()
## let's cast the prestige column to integer
admissions['prestige'] = admissions['prestige'].astype(int)
admissions.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 397 entries, 0 to 399
Data columns (total 4 columns):
admit 397 non-null int64
gre 397 non-null float64
gpa 397 non-null float64
prestige 397 non-null int64
dtypes: float64(2), int64(2)
memory usage: 15.5 KB
###Markdown
If you explore prestige you will see that it is a categorical column, and you can turn it into dummy variables => pandas has a nice predefined function to get dummies https://pandas.pydata.org/pandas-docs/stable/generated/pandas.get_dummies.html A few things to be careful about here:1. Once you create the dummies, you need to drop the original prestige column2. You must also drop one of the newly created columns3. Steps 1 and 2 are needed to avoid multicollinearity
###Code
get_dummies = pd.get_dummies(admissions.prestige, prefix="prestige", drop_first=True)
get_dummies.head(4)
## now let's bring these new columns into our dataset using concat and add the intercept
df = pd.concat([admissions, get_dummies], axis=1)
df.drop(['prestige'], inplace=True, axis=1)
df['intercept'] = 1.0
## we have a dataset that is ready for analysis
df.head(4)
'''Define y and X'''
y = df['admit']
columns_ = df.columns.tolist()
exclude_col = ['admit']
X = df[[i for i in columns_ if i not in exclude_col]]
print (X.shape, y.shape)
'''Split the data'''
from sklearn.model_selection import train_test_split ## Note: sklearn.cross_validation was removed in scikit-learn 0.20
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=10)
print (X_train.shape, y_train.shape)
print (X_test.shape, y_test.shape)
## Set up the regression
logit = sm.Logit(y_train, X_train)
logit_result = logit.fit()
## let's get the results
print (logit_result.summary())
print("Coeffieients")
print(logit_result.params)
print ("\n")
print("p-Values")
print(logit_result.pvalues)
print ("\n")
print("Dependent variables")
print(logit.endog_names)
###Output
Coefficients
gre 0.001470
gpa 0.917260
prestige_2 -0.852635
prestige_3 -1.702499
prestige_4 -1.612396
intercept -3.544092
dtype: float64
p-Values
gre 0.276981
gpa 0.019722
prestige_2 0.028627
prestige_3 0.000050
prestige_4 0.001544
intercept 0.007568
dtype: float64
Dependent variables
admit
###Markdown
Interpreting logistic regression coefficients. Remember the odds ratio? In this case, using the odds ratio will help us understand how a one-unit increase or decrease in each variable affects the odds of being admitted.
###Code
print (np.exp(logit_result.params))
###Output
gre 1.001471
gpa 2.502425
prestige_2 0.426290
prestige_3 0.182227
prestige_4 0.199409
intercept 0.028895
dtype: float64
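###Markdown
To read these odds ratios as percent changes in the odds (a quick illustration), subtract 1 and multiply by 100:
###Code
print(((np.exp(logit_result.params) - 1) * 100).round(1))
###Output
_____no_output_____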
###Markdown
We can see that, relative to a prestige-1 school, the odds of being admitted are multiplied by about 0.43 (roughly a 57% decrease) when the school's prestige is 2, and by about 0.18 (roughly an 82% decrease) when the prestige is 3. These values are from our train set; now let's predict on our test set. Predicting and EvaluatingIf we call the predict method, we will get the predicted probabilities. But to make a prediction as to whether a student will be admitted or not, we must convert these predicted probabilities into class labels: 1 = admitted, 0 = not admitted.
###Code
## Here we have the predicted probabilities
predictions = logit_result.predict(X_test)
print (predictions[:10])
plt.hist(predictions);
predictions_nominal = [ 0 if x < 0.5 else 1 for x in predictions]
print (predictions_nominal.count(0))
print (predictions_nominal.count(1))
###Output
99
21
###Markdown
Confusion matrix and Classification report---
###Code
from sklearn.metrics import confusion_matrix, classification_report
confmat = confusion_matrix(y_true=y_test, y_pred=predictions_nominal)
confusion = pd.DataFrame(confmat, index=['True_Label_0 Rejected', 'True_Label_1 Admitted'],
columns=['Predict_Label_0 Rejected', 'Predict_Label_1 Admitted'])
confusion
print (classification_report(y_test, predictions_nominal, digits=3))
###Output
precision recall f1-score support
0 0.818 0.871 0.844 93
1 0.429 0.333 0.375 27
avg / total 0.731 0.750 0.738 120
###Markdown
Lets implement the same logistic regression using scikit learn---
###Code
'''Baseline'''
'''Remember that 0 is not admitted, 1 is admitted'''
print (df['admit'].value_counts(), "\n" )
print ("if I randomly choose, %.0f percent of the time I/we will be choosing admitted "
% ((np.mean(df['admit']))*100))
logistic = LogisticRegression()
logistic.fit(X_train, y_train)
y_pred=logistic.predict(X_test)
confmat = confusion_matrix(y_true=y_test, y_pred=y_pred)
confusion = pd.DataFrame(confmat, index=['True_Label_0 Rejected', 'True_Label_1 Admitted'],
columns=['Predict_Label_0 Rejected', 'Predict_Label_1 Admitted'])
confusion
print (classification_report(y_test, y_pred, digits=3))
###Output
precision recall f1-score support
0 0.812 0.882 0.845 93
1 0.421 0.296 0.348 27
avg / total 0.724 0.750 0.733 120
###Markdown
Republican or Democrat---For this exercise we are going to use data from the [1984 United States Congressional Voting Records Database] [1](take a look at the data dictionary) to predict whether a congressman/woman is a Republican or Democrat [1]: http://archive.ics.uci.edu/ml/machine-learning-databases/voting-records/house-votes-84.names "1984 United States Congressional Voting Records Database"
###Code
## Define the column/variable/feature names
columns = [
"class",
"handicapped_infants",
"water_project_cost",
"adoption_of_the_budget_resolution",
"physician_fee_freeze",
"el_salvador_aid",
"religious_groups_in_schools",
"anti_satellite_test_ban",
"aid_to_nicaraguan_contras",
"mx_missile",
"immigration",
"synfuels_corporation_cutback",
"education_spending",
"superfund_right_to_sue",
"crime",
"duty_free_exports",
"export_administration_act_south_africa"
]
'''We are going to read the data directly from the web'''
csv_url = "http://archive.ics.uci.edu/ml/machine-learning-databases/voting-records/house-votes-84.data"
''' Here we read the data and create a binary variable: 0 for republican, 1 for democrat'''
house_df = pd.read_csv(csv_url, names = columns)
house_df['class'] = house_df['class'].map(lambda value: 0 if value == "republican" else 1 )
house_df.head(3)
## Let's clean the dataset
house_df.replace('?', np.nan, inplace=True)
house_df.ffill(inplace=True)
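## note: forward-fill copies the previous record's vote into each missing value -
## a crude imputation that is sensitive to row order, used here only for simplicity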
## Create dummy var
df_dummies = pd.get_dummies(house_df)
df_dummies.head(3)
'''Define y and X'''
y = df_dummies['class']
columns_ = df_dummies.columns.tolist()
exclude_col = ['class']
X = df_dummies[[i for i in columns_ if i not in exclude_col]]
print (X.shape, y.shape)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=10)
print (X_train.shape, y_train.shape)
print (X_test.shape, y_test.shape)
'''Use scikit learn'''
r_d_logistic = LogisticRegression()
r_d_logistic.fit(X_train, y_train)
'''Baseline'''
'''Remember that 0 is republican, 1 is democrat'''
print (df_dummies['class'].value_counts(), "\n" )
print ("if I randomly choose, %.0f percent of the time I/we will be choosing democrat"
% ((np.mean(df_dummies['class']))*100))
## predicting
y_pred=r_d_logistic.predict(X_test)
confmat = confusion_matrix(y_true=y_test, y_pred=y_pred)
confusion = pd.DataFrame(confmat, index=['True_Label_0 Republican', 'True_Label_1 Democrat'],
columns=['Predict_Label_0 Republican', 'Predict_Label_1 Democrat'])
confusion
###Output
_____no_output_____
###Markdown
Let's get the TP, FP, TN, FN from the confusion matrix---
###Code
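## Note: the labels below treat class 0 (Republican) as the positive class;
## sklearn's precision_score/recall_score used later default to pos_label=1 (Democrat),
## so those metrics answer the Democrat-as-positive question instead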
TP = confusion.loc['True_Label_0 Republican', 'Predict_Label_0 Republican']
FP = confusion.loc['True_Label_1 Democrat', 'Predict_Label_0 Republican']
TN = confusion.loc['True_Label_1 Democrat', 'Predict_Label_1 Democrat']
FN = confusion.loc['True_Label_0 Republican', 'Predict_Label_1 Democrat']
values = sorted(zip(['True Positives','False Positives','True Negatives','False Negatives'], [TP, FP, TN, FN]))
values
###Output
_____no_output_____
###Markdown
Calculate accuracy, Misclassification Rate (Error Rate), Precision, Recall---
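###Markdown
For reference, all four metrics are simple ratios of the confusion-matrix counts: accuracy $=(TP+TN)/n$, error rate $=(FP+FN)/n$, precision $=TP/(TP+FP)$, and recall $=TP/(TP+FN)$. A hand computation from the counts above (a sketch; with these counts "positive" means Republican, so the precision/recall lines will differ from sklearn's Democrat-as-positive values below):
###Code
total = TP + FP + TN + FN
print ("Accuracy:   %.3f" % ((TP + TN) / float(total) * 100))
print ("Error rate: %.3f" % ((FP + FN) / float(total) * 100))
print ("Precision:  %.3f" % (TP / float(TP + FP) * 100))
print ("Recall:     %.3f" % (TP / float(TP + FN) * 100))
###Output
_____no_output_____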
###Code
## Accuracy
## How often is the classifier correct?
from sklearn.metrics import accuracy_score
acc = accuracy_score(y_test, y_pred)
print ("Accuracy score: %.3f" %(acc*100))
## Misclassification Rate (Error Rate)
## How often is the model wrong
print ("Error rate: %.3f" % (((FP + FN))/ float(len(y_test))*100))
## Precision
## Precision: of the instances predicted as a given class, the fraction that truly belong to it
from sklearn.metrics import precision_score
pcs = precision_score(y_test, y_pred)
print ("Precision: %.3f" %(pcs*100))
## Recall
## Recall: the fraction of truly positive instances the classifier correctly identifies
from sklearn.metrics import recall_score
rcs = recall_score(y_test, y_pred)
print ("Recall: %.3f" % (rcs*100))
print (classification_report(y_test, y_pred, digits=3))
###Output
precision recall f1-score support
0 0.942 0.961 0.951 51
1 0.975 0.963 0.969 80
avg / total 0.962 0.962 0.962 131
###Markdown
ROC and AUC---
###Code
from sklearn.metrics import roc_curve, auc
# Get out the predicted probabilities for the X_test matrix
y_pp = r_d_logistic.predict_proba(X_test)[:,1]
# roc_curve returns the false positive rate and true positive rates as the threshold changes
# takes in the y and the predicted probabilities of the positive class from your model.
fpr, tpr, _ = roc_curve(y_test, y_pp)
roc_auc = auc(fpr, tpr)
plt.figure(figsize=[9,9])
plt.plot(fpr, tpr, label='ROC curve (area = %0.2f)' % roc_auc, linewidth=10, color='g')
plt.plot([0, 1], [0, 1], linestyle='--', color='gray', linewidth=2)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate', fontsize=16)
plt.ylabel('True Positive Rate', fontsize=16)
plt.title('Receiver operating characteristic curve', fontsize=20)
plt.legend(loc="lower right")
plt.show()
###Output
_____no_output_____
agla/mat/Kurven_R2.ipynb | ###Markdown
Materials for aglaAuthor: Holger Böttcher - [email protected] Examples of plane curves
###Code
%run agla/start
###Output
_____no_output_____
###Markdown
Witch of Agnesi (Versiera der Agnesi)
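###Markdown
For reference, the parametrization below traces the Cartesian equation $y = \dfrac{a^3}{a^2 + x^2}$, which follows directly from its component functions.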
###Code
versiera = lambda a, ber: Kurve( v(t, a^3/(a^2+t^2)), (t, ber[0], ber[1]))
v5 = versiera(5, (-10, 10))
zeichne([v5, 2, 'b'])
###Output
_____no_output_____
###Markdown
Semicubical parabola (Neilsche Parabel)
###Code
neil_parabel = lambda a, ber: Kurve( v(t^2, a*t^3), (t, ber[0], ber[1]),
imp = y^2 - a*x^3)
np1 = neil_parabel(1, (-10, 10))
np1.imp
zeichne([np1, 2, 'b'])
###Output
_____no_output_____
###Markdown
Folium of Descartes (Kartesisches Blatt)
###Code
def kartesisches_blatt(a, ber):
# the parameter must be embedded in the string
gl = 3*a * cos(phi)*sin(phi) / (cos(phi)^3+sin(phi)^3)
gl = 'r = ' + str(gl)
prg = v(3*a*t/(t^3+1), 3*a*t^2/(t^3+1))
imp = x^3 + y^3 - 3*a*x*y
return Kurve(gl, (phi, ber[0], ber[1]), prg=prg, imp=imp)
kb1 = kartesisches_blatt(1, (0, 3*pi/4))
kb2 = kartesisches_blatt(1, (3/4*pi+0.0001, pi))
# reduce the parameter range to avoid a plotting artifact
zeichne([kb1, 2, 'b'], [kb2, 2, 'b'])
# plot based on the parametric equation
kb1.prg
zeichne(Kurve(kb2.pkt(t), (t, -pi/4+10^-6, pi*3/4-10^-6)))
# correction of the parameter range
# plot based on the implicit equation
kb1.imp
gl = str(kb1.imp)
zeichne(Kurve(gl), achsen=nein)
###Output
_____no_output_____
###Markdown
Cissoid of Diocles (Kissoide)
###Code
def kissoide(a, ber):
gl = a * sin(phi)^2 / cos(phi)
gl = 'r = ' + str(gl)
prg = v(a*t^2/(t^2+1), a*t^3/(t^2+1))
imp = x^3 - y^2*(a-x)
return Kurve(gl, (t, ber[0], ber[1]), prg=prg, imp=imp)
ki51 = kissoide(5, (0, pi/2))
ki52 = kissoide(5, (pi/2+0.0001, pi)) # avoid a plotting artifact
zeichne([ki51, 2, 'b'], [ki52, 2, 'b'])
# plot based on the parametric equation
ki51.pkt()
zeichne(Kurve(ki51.pkt(t), (t, -pi/2, pi/2)))
ki51.imp
# plot based on the implicit equation
gl = str(ki51.imp)
zeichne([Kurve(gl), 'punkte=(2000, 2000)']) # with a finer grid
###Output
_____no_output_____
###Markdown
Strophoid (Strophoide)
###Code
def strophoide(a, ber):
gl = - a * cos(2 * phi) / cos(phi)
gl = 'r = ' + str(gl)
prg = v(a*(t^2-1)/(t^2+1), a*t*(t^2-1)/(t^2+1))
imp = (a + x)*x^2 - (a - x)*y^2
return Kurve(gl, (phi, ber[0], ber[1]), prg=prg, imp=imp)
s31 = strophoide(3, (0, pi/2))
s32 = strophoide(3, (pi/2+0.0001, pi)) # avoid a plotting artifact
zeichne([s31, 2, 'b'], [s32, 2, 'b'])
s31.pkt()
gl = s31.pkt()
zeichne(Kurve(gl, (t, -pi/2, pi/2)), achsen=nein)
###Output
_____no_output_____
###Markdown
Lemniscate of Bernoulli (Lemniskate)
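###Markdown
The implicit form used below, $(x^2 + y^2)^2 = 2a^2(x^2 - y^2)$, is the standard equation of the lemniscate of Bernoulli; the parametric and polar forms in the code describe the same curve.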
###Code
def lemniskate(a, ber):
p = ( sqrt(2)*a*cos(t)/(1+sin(t)^2), sqrt(2)*a*sin(t)*cos(t)/(1+sin(t)^2) )
pol = a*sqrt(2*cos(2*phi))
imp = (x^2 + y^2)^2 - 2*a^2*(x^2 - y^2)
return Kurve(p, (t, ber[0], ber[1]), pol=pol, imp=imp)
le6 = lemniskate(6, (0, 2*pi))
zeichne([le6, blau, 2])
###Output
_____no_output_____
###Markdown
Conchoid of Nicomedes (German: Konchoide des Nikomedes)
###Code
def konchoide_nikodemes(a, b, ber):
gl = v(a+b*cos(t), a*tan(t)+b*sin(t))
pol = a/cos(phi)+b
imp = (x-a)^2 * (x^2+y^2) - b^2*x^2
return Kurve(gl, (t, ber[0], ber[1]), pol=pol, imp=imp)
ber = (10^-3, 2*pi-10^-3)
ko3051 = konchoide_nikodemes(3, 0.5, ber)
ko315 = konchoide_nikodemes(3, 1.5, ber)
ko330 = konchoide_nikodemes(3, 3, ber)
ko36 = konchoide_nikodemes(3, 6, ber)
zeichne( [ko3051, gruen],
[ko315, blau],
[ko330, rot],
[ko36, 'k'],
achsen=nein
)
# the vertical line does not belong to the curve
# plot based on the implicit equation
ko3051.imp
zeichne( [Kurve(str(ko3051.imp)), grün],
[Kurve(str(ko315.imp)), blau],
[Kurve(str(ko330.imp)), rot, 'punkte=(1000,1000)'],
[Kurve(str(ko36.imp)), 'k']
)
###Output
_____no_output_____
###Markdown
Heart curves (Herzkurven)
###Code
herz1 = Kurve(x^2+(y-abs(x)^Rational(2, 3))^2-1)
herz1.imp
sicht_box(-2, 2)
zeichne([herz1, rot, 2], achsen=nein)
herz2 = Kurve(17*x^2-20*abs(x)*y+17*y^2-200)
herz2.imp
sicht_box(-7, 7)
zeichne([herz2, rot, 3], achsen=nein)
xk = 4*sin(t)^3
yk = 3*cos(t)-1.3*cos(2*t)-0.6*cos(3*t)-0.2*cos(4*t)
herz3 = Kurve(v(xk, yk), (t, 0, 2*pi))
sicht_box(-6, 6)
zeichne([herz3, 3, rot], achsen=nein)
t0 = 2-10^-8
herz41 = Kurve(v(t, sqrt(1-(abs(t)-1)^2)), (t, -t0, 0))
herz42 = Kurve(v(-t, sqrt(1-(abs(t)-1)^2)), (t, -t0, 0))
herz43 = Kurve(v(t, arccos(1-(abs(t)))-pi), (t, -t0, 0))
herz44 = Kurve(v(-t, arccos(1-(abs(t)))-pi), (t, -t0, 0))
# slight correction of the parameter values, and splitting of the two
# curves into two pieces each
herz41.prg, herz42.prg
sicht_box(-2.5, 2.5, -3, 2)
zeichne([herz41, 2, rot],
[herz42, 2, rot],
[herz43, 2, rot],
[herz44, 2, rot],
achsen=nein)
###Output
_____no_output_____ |
ensembles/Examples/Number Of Layers/Number Of Layers.ipynb | ###Markdown
Example 1
###Code
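## assumes the ensembles package was imported as `en` in an earlier cell (not shown in this excerpt)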
#3 Layers
en.set_no_of_layers(3)
###Output
_____no_output_____
###Markdown
Example 2
###Code
#4 Layers
en.set_no_of_layers(4)
###Output
_____no_output_____
###Markdown
Example 3
###Code
#5 Layers
en.set_no_of_layers(5)
###Output
_____no_output_____ |
UnsupervisedAnalysis.ipynb | ###Markdown
Unsupervised Analysis of Days of Week Treating crossings each day as features to learn about the relationships between various days. Based on Jake Vanderplas's [Youtube videos](https://www.youtube.com/watch?v=DjpCHNYQodY&index=5&list=PLYCpMb24GpOC704uO9svUrihl-HY1tTJJ)
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn')
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
###Output
_____no_output_____
###Markdown
Get Data
###Code
from jupyterworkflow.data import get_fremont_data
data = get_fremont_data()
pivoted = data.pivot_table('Total', index=data.index.time, columns=data.index.date)
pivoted.plot(legend=False, alpha=0.01);
pivoted.head()
###Output
_____no_output_____
###Markdown
Principal Component Analysis
###Code
X = pivoted.fillna(0).T.values
X.shape
X2 = PCA(2, svd_solver='full').fit_transform(X)
X2.shape
plt.scatter(X2[:, 0], X2[:, 1])
###Output
_____no_output_____
###Markdown
Unsupervised Clustering
###Code
gmm = GaussianMixture(2)
gmm.fit(X)
labels = gmm.predict(X)
np.unique(labels)
plt.scatter(X2[:, 0], X2[:, 1], c=labels, cmap='rainbow')
plt.colorbar();
fig, ax = plt.subplots(1, 2, figsize=(14, 6))
pivoted.T[labels == 1].T.plot(legend=False, alpha=0.01, ax=ax[0])
pivoted.T[labels == 0].T.plot(legend=False, alpha=0.01, ax=ax[1]);
ax[0].set_title('Red Cluster')
ax[1].set_title('Purple Cluster');
###Output
_____no_output_____
###Markdown
Comparing the Day of the Week
###Code
dayofweek = pd.DatetimeIndex(pivoted.columns).dayofweek
plt.scatter(X2[:, 0], X2[:, 1], c=dayofweek, cmap='rainbow')
plt.colorbar()
###Output
_____no_output_____
###Markdown
Analyzing OutliersThe following points are weekdays with a holiday-like pattern
###Code
dates = pd.DatetimeIndex(pivoted.columns)
dates[(labels == 1) & (dayofweek < 5)]
###Output
_____no_output_____
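###Markdown
These dates can be cross-checked against the US federal holiday calendar bundled with pandas (a sketch; `USFederalHolidayCalendar` is assumed to be available in your pandas version, and a `False` just means the outlier is holiday-adjacent rather than a holiday itself):
###Code
from pandas.tseries.holiday import USFederalHolidayCalendar

holidays = USFederalHolidayCalendar().holidays(start=dates.min(), end=dates.max())
outliers = dates[(labels == 1) & (dayofweek < 5)]
print(outliers.isin(holidays))
###Output
_____no_output_____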
###Markdown
Unsupervised Analysis of Days of WeekTreating crossings each day as features to learn about the relationships between various days
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
# plot parameters
FIGSIZE = (12,7)
plt.rcParams['figure.figsize'] = FIGSIZE
plt.style.use('seaborn')
###Output
_____no_output_____
###Markdown
Get Data
###Code
from jupyterworkflow.data import get_fremont_data
data = get_fremont_data()
pivoted = data.pivot_table('Total', index=data.index.time, columns=data.index.date)
pivoted.iloc[:,:500].plot(legend=False, alpha=.05, figsize=FIGSIZE);
###Output
_____no_output_____
###Markdown
Principal Component Analysis
###Code
X = pivoted.fillna(0).T.values
X.shape
X2 = PCA(2,svd_solver='full').fit_transform(X)
X2.shape
plt.scatter(X2[:,0], X2[:,1]);
###Output
_____no_output_____
###Markdown
Unsupervised Clustering
###Code
gmm = GaussianMixture(2)
gmm.fit(X)
labels = gmm.predict(X)
plt.scatter(X2[:,0], X2[:,1], c=labels, cmap='rainbow')
plt.colorbar();
fig, ax = plt.subplots(1, 2, figsize=(15,8))
pivoted.T[labels == 0].T.plot(legend=False, alpha=0.1, ax=ax[0])
pivoted.T[labels == 1].T.plot(legend=False, alpha=0.1, ax=ax[1])
ax[0].set_title("Purple Cluster")
ax[0].set_title("Red Cluster");
###Output
_____no_output_____
###Markdown
Comparing with Day of the week
###Code
dayofweek = pd.DatetimeIndex(pivoted.columns).dayofweek
plt.scatter(X2[:,0], X2[:,1], c=dayofweek, cmap='rainbow')
plt.colorbar();
###Output
_____no_output_____
###Markdown
Analyzing outliersThe following points are weekdays with a holiday-like pattern
###Code
dates = pd.DatetimeIndex(pivoted.columns)
dates[(labels == 0) & (dayofweek <5)]
###Output
_____no_output_____
###Markdown
Unsupervised Analysis of Days of Week Treating crossings each day as features to learn about the relationships between various days.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn')
import pandas as pd
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
###Output
_____no_output_____
###Markdown
Get Data
###Code
from jupyterworkflow.data import get_fremont_data
data = get_fremont_data()
pivoted = data.pivot_table('Total', index=data.index.time, columns=data.index.date)
pivoted.plot(legend=False, alpha=0.01);
###Output
_____no_output_____
###Markdown
Principal Component Analysis
###Code
X = pivoted.fillna(0).T.values
X.shape
X2 = PCA(2, svd_solver='full').fit_transform(X)
X2.shape
plt.scatter(X2[:,0],X2[:,1]);
###Output
_____no_output_____
###Markdown
Unsupervised Clustering
###Code
gmm = GaussianMixture(2).fit(X)
labels = gmm.predict(X)
plt.scatter(X2[:,0],X2[:,1], c=labels, cmap='rainbow')
plt.colorbar();
fig, ax = plt.subplots(1, 2, figsize=(14, 6))
pivoted.T[labels == 0].T.plot(legend = False, alpha=0.1, ax=ax[0]);
pivoted.T[labels == 1].T.plot(legend = False, alpha=0.1, ax=ax[1]);
ax[0].set_title('Purple Cluster')
ax[1].set_title('Red Cluster')
###Output
_____no_output_____
###Markdown
Comparing with Day of Week
###Code
dayofweek = pd.DatetimeIndex(pivoted.columns).dayofweek
plt.scatter(X2[:,0],X2[:,1], c=dayofweek, cmap='rainbow')
plt.colorbar();
###Output
_____no_output_____
###Markdown
Analysing OutliersThe following are weekdays with a holiday-like pattern
###Code
dates = pd.DatetimeIndex(pivoted.columns)
dates[(labels ==1) & (dayofweek < 5)]
###Output
_____no_output_____
###Markdown
Unsupervised Analysis of Days of WeekTreating crossings each day as features to learn about the relationships between various days.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn')
import pandas as pd
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
###Output
_____no_output_____
###Markdown
Get Data
###Code
from jupyterworkflow.data import get_freemont_data
data = get_freemont_data()
pivoted = data.pivot_table('Total', index=data.index.time, columns=data.index.date)
pivoted.plot(legend=False, alpha=0.01)
###Output
_____no_output_____
###Markdown
Principal Component Analysis
###Code
X = pivoted.fillna(0).T.values
X.shape
X2 = PCA(2, svd_solver='full').fit_transform(X)
X2.shape
plt.scatter(X2[:, 0], X2[:, 1]) # 2 distinct clusters of days...
###Output
_____no_output_____
###Markdown
Unsupervised Clustering
###Code
gmm = GaussianMixture(2).fit(X)
labels = gmm.predict(X)
plt.scatter(X2[:, 0], X2[:, 1], c=labels, cmap='rainbow')
plt.colorbar()
fig, ax = plt.subplots(1, 2, figsize=(14, 6))
pivoted.T[labels == 1].T.plot(legend=False, alpha=0.1, ax=ax[0]);
pivoted.T[labels == 0].T.plot(legend=False, alpha=0.1, ax=ax[1]);
ax[0].set_title('Red Cluster')
ax[1].set_title('Purple Cluster');
###Output
_____no_output_____
###Markdown
Comparing with Day of Week
###Code
dayofweek = pd.DatetimeIndex(pivoted.columns).dayofweek
plt.scatter(X2[:, 0], X2[:, 1], c=dayofweek, cmap='rainbow')
plt.colorbar()
###Output
_____no_output_____
###Markdown
Analyzing OutliersThe following points are weekdays with a holiday-like pattern.
###Code
dates = pd.DatetimeIndex(pivoted.columns)
dates[(labels == 0) & (dayofweek < 5)] # makes sense - Thanksgiving, Christmas, New Year weeks evident
###Output
_____no_output_____
###Markdown
Unsupervised Analysis of Days of WeekTreating crossings each day as features to learn about the relationships between various days.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn')
import pandas as pd
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
###Output
_____no_output_____
###Markdown
Get Data
###Code
from fremont_packages.data import get_fremont_data
data = get_fremont_data()
pivoted = data.pivot_table('Total', index=data.index.time, columns=data.index.date)
pivoted.plot(legend=False, alpha=0.01);
###Output
_____no_output_____
###Markdown
Principle Component Analysis
###Code
pivoted.shape
X = pivoted.fillna(0).T.values
X.shape
X2 = PCA(2, svd_solver='full').fit_transform(X)
X2.shape
plt.scatter(X2[:, 0], X2[:, 1])
###Output
_____no_output_____
###Markdown
Unsupervised Clustering
###Code
gmm = GaussianMixture(2).fit(X)
## gmm.fit(X)
labels = gmm.predict(X)
plt.scatter(X2[:, 0], X2[:, 1], c=labels, cmap='rainbow')
plt.colorbar()
fig, ax = plt.subplots(1, 2, figsize=(14, 6))
pivoted.T[labels == 0].T.plot(legend=False, alpha=0.1, ax=ax[0]);
pivoted.T[labels == 1].T.plot(legend=False, alpha=0.1, ax=ax[1]);
ax[0].set_title('Purple Cluster')
ax[1].set_title('Red Cluster');
###Output
_____no_output_____
###Markdown
Comparing with Day of Week
###Code
dayofweek = pd.DatetimeIndex(pivoted.columns).dayofweek
plt.scatter(X2[:, 0], X2[:, 1], c=dayofweek, cmap='rainbow')
plt.colorbar()
###Output
_____no_output_____
###Markdown
Analyzing OutliersThe following points are weekdays with a holiday-like pattern
###Code
dates = pd.DatetimeIndex(pivoted.columns)
dates[(labels == 1) & (dayofweek < 5)]
###Output
_____no_output_____
###Markdown
Unsupervised Analysis of Days of WeekTreating crossings each day as features to learn about the relationships between various days. Import Libraries
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn')
import pandas as pd
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
###Output
_____no_output_____
###Markdown
Get Data
###Code
from functions.data import get_fremont_data
data = get_fremont_data()
pivoted = data.pivot_table('Total', index=data.index.time, columns=data.index.date)
pivoted.plot(legend=False, alpha=0.01)
###Output
_____no_output_____
###Markdown
Principal Component Analysis
###Code
## Check if NA values exist
data.isna().sum()
X = pivoted.fillna(0).T.values
X.shape
## Reduce dimensionality using PCA: 24 => 2
X2 = PCA(2, svd_solver='full').fit_transform(X)
X2.shape
plt.scatter(X2[:, 0], X2[:, 1])
###Output
_____no_output_____
###Markdown
Unsupervised Clustering
###Code
gmm = GaussianMixture(2).fit(X)
labels = gmm.predict(X)
plt.scatter(X2[:, 0], X2[:, 1], c=labels, cmap='rainbow')
plt.colorbar()
fig, ax = plt.subplots(1, 2, figsize=(14, 6))
pivoted.T[labels == 0].T.plot(legend=False, alpha=0.1, ax=ax[0]);
pivoted.T[labels == 1].T.plot(legend=False, alpha=0.1, ax=ax[1]);
ax[0].set_title('Purple Cluster');
ax[1].set_title('Red Cluster');
###Output
_____no_output_____
###Markdown
Comparing with Day of Week
###Code
dayofweek = pd.DatetimeIndex(pivoted.columns).dayofweek
plt.scatter(X2[:, 0], X2[:, 1], c=dayofweek, cmap='rainbow')
plt.colorbar()
###Output
_____no_output_____
###Markdown
Analyzing Outliers
###Code
dates = pd.DatetimeIndex(pivoted.columns)
dates[(labels == 1) & (dayofweek < 5)]
###Output
_____no_output_____
###Markdown
Unsupervised Analysis of Days of WeekTreating crossings each day as features to learn about the relationships between various days.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn')
import pandas as pd
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
###Output
_____no_output_____
###Markdown
Get Data
###Code
from jupyterworkflow.data import get_fremont_data
data = get_fremont_data()
pivoted = data.pivot_table('Total', index=data.index.time, columns=data.index.date)
pivoted.plot(legend=False, alpha=0.01);
###Output
_____no_output_____
###Markdown
Principal Component Analysis
###Code
X = pivoted.fillna(0).T.values
X.shape
X2 = PCA(2, svd_solver='full').fit_transform(X)
X2.shape
plt.scatter(X2[:, 0], X2[:, 1])
###Output
_____no_output_____
###Markdown
Unsupervised Clustering
###Code
gmm = GaussianMixture(2)
gmm.fit(X)
labels = gmm.predict(X)
plt.scatter(X2[:, 0], X2[:, 1], c=labels, cmap='rainbow')
plt.colorbar();
fig, ax = plt.subplots(1, 2, figsize=(14, 6))
pivoted.T[labels == 0].T.plot(legend=False, alpha=0.1, ax=ax[0]);
pivoted.T[labels == 1].T.plot(legend=False, alpha=0.1, ax=ax[1]);
ax[0].set_title('Purple Cluster')
ax[1].set_title('Red Cluster');
###Output
_____no_output_____
###Markdown
Comparing with Day of Week
###Code
dayofweek = pd.DatetimeIndex(pivoted.columns).dayofweek
plt.scatter(X2[:, 0], X2[:, 1], c=dayofweek, cmap='rainbow')
plt.colorbar();
###Output
_____no_output_____
###Markdown
Analyzing OutliersThe following points are weekdays with a holiday-like pattern
###Code
dates = pd.DatetimeIndex(pivoted.columns)
dates[(labels==1) & (dayofweek < 5)]
###Output
_____no_output_____
###Markdown
Unsupervised Analysis of Days of WeekTreating crossings each day as features to learn about the relationships between various days.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn')
import pandas as pd
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
###Output
_____no_output_____
###Markdown
Get Data
###Code
from jupyterworkflow.data import get_fremont_data
data = get_fremont_data()
pivoted = data.pivot_table('Total', index=data.index.time, columns=data.index.date)
pivoted.plot(legend=False, alpha=0.01);
###Output
_____no_output_____
###Markdown
Principal Component Analysis
###Code
X = pivoted.fillna(0).T.values
X.shape
x2 = PCA(2, svd_solver='full').fit_transform(X)
x2.shape
plt.scatter(x2[:, 0], x2[:, 1])
###Output
_____no_output_____
###Markdown
Unsupervised Clustering
###Code
gmm = GaussianMixture(2).fit(X)
labels = gmm.predict(X)
labels
plt.scatter(x2[:, 0], x2[:, 1], c=labels, cmap='rainbow')
plt.colorbar();
fig, ax = plt.subplots(1, 2, figsize=(20, 8))
pivoted.T[labels == 0].T.plot(legend=False, alpha=0.1, ax=ax[0]);
pivoted.T[labels == 1].T.plot(legend=False, alpha=0.1, ax=ax[1]);
ax[0].set_title('Purple Cluster')
ax[1].set_title('Red Cluster')
###Output
_____no_output_____
###Markdown
Comparing with Day of Week
###Code
dayofweek = pd.DatetimeIndex(pivoted.columns).dayofweek
plt.scatter(x2[:, 0], x2[:, 1], c=dayofweek, cmap='rainbow')
plt.colorbar();
###Output
_____no_output_____
###Markdown
Analyzing OutliersThe following points are weekdays with a holiday-like pattern
###Code
dates = pd.DatetimeIndex(pivoted.columns)
dates[(labels == 1) & (dayofweek < 5)]
###Output
_____no_output_____
###Markdown
Unsupervised Clustering of Days of the Week
###Code
from jupyterworkflow.data import get_fremont_data
import pandas as pd
import matplotlib.pyplot as plt
plt.style.use('seaborn')
from sklearn.mixture import GaussianMixture
from sklearn.decomposition import PCA
###Output
_____no_output_____
###Markdown
Get Data
###Code
data = get_fremont_data()
data.head()
pivoted = data.pivot_table('Total', index=data.index.time, columns=data.index.date)
pivoted.plot(legend=False, alpha=0.01)
X = pivoted.fillna(0).T.values
X.shape
###Output
_____no_output_____
###Markdown
If we take the transpose of our pivoted dataset, we can begin to look at how the days relate to each other. By the shape of our data, we can see that there are 2340 days with 24 hours each. Principal Component Analysis
###Code
X2 = PCA(2).fit_transform(X)
###Output
_____no_output_____
###Markdown
Now we have a 2d projection of our original data. Of course 2d data is very conducive to plotting and visualization, so this is the next step we will take.
###Code
plt.scatter(X2[:,0],X2[:,1])
###Output
_____no_output_____
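###Markdown
How much of the original 24-dimensional structure survives the 2-d projection? PCA reports this through its explained variance ratio (a minimal sketch that refits the same decomposition as above):
###Code
pca = PCA(2).fit(X)
# fraction of the total variance captured by each of the two components
print(pca.explained_variance_ratio_)
###Output
_____no_output_____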
###Markdown
As is predictable, we have 2 very obvious clusters of days. It turns out that PCA is much more of an EDA step and less of a prediction algorithm. Unsupervised Clustering
###Code
gmm = GaussianMixture(2)
gmm.fit(X)
labels = gmm.predict(X)
labels
plt.scatter(X2[:,0],X2[:,1], c=labels, cmap='rainbow')
plt.colorbar()
fig, ax = plt.subplots(1, 2, figsize=(14,6))
pivoted.T[labels==0].T.plot(legend=False, alpha=0.1, ax=ax[0])
pivoted.T[labels==1].T.plot(legend=False, alpha=0.1, ax=ax[1])
ax[0].set_title('Purple Cluster')
ax[1].set_title('Red Cluster')
###Output
_____no_output_____
###Markdown
Comparing Days of the Week
###Code
dayofweek = pd.DatetimeIndex(pivoted.columns).dayofweek
# get the days of the week
plt.scatter(X2[:,0],X2[:,1], c=dayofweek, cmap='rainbow')
plt.colorbar()
###Output
_____no_output_____
###Markdown
Notice that we have the weekdays (0-4) in 1 group, with the weekends (5 and 6) in the other group. However, we also have some weekdays interspersed with the weekends. Perhaps these represent holidays? Analyzing Outliers
###Code
dates = pd.DatetimeIndex(pivoted.columns)
dates[(labels==0) & (dayofweek<5)]
###Output
_____no_output_____
###Markdown
Unsupervised Analysis of Days of WeekTreating crossings each day as features to learn about the relationships between various days
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn')
import pandas as pd
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
###Output
_____no_output_____
###Markdown
Get Data
###Code
from jupyterworkflow.data import get_fremont_data
data = get_fremont_data()
pivoted = data.pivot_table('Total', index=data.index.time, columns = data.index.date)
pivoted.plot(legend=False,alpha=0.01)
###Output
_____no_output_____
###Markdown
Principal Components Analysis
###Code
X = pivoted.fillna(0).T.values
X.shape
X2 =PCA(2, svd_solver='full').fit_transform(X)
X2.shape
import matplotlib.pyplot as plt
plt.scatter(X2[:, 0], X2[:, 1])
###Output
_____no_output_____
###Markdown
Unsupervised Clustering
###Code
gmm = GaussianMixture(2).fit(X)
labels = gmm.predict(X)
np.unique(labels)
plt.scatter(X2[:, 0], X2[:, 1], c=labels, cmap='rainbow')
plt.colorbar()
fig, ax = plt.subplots(1, 2, figsize=(14, 6))
pivoted.T[labels == 0].T.plot(legend=False, alpha=0.1, ax=ax[0])
pivoted.T[labels == 1].T.plot(legend=False, alpha=0.1, ax=ax[1])
ax[0].set_title('Purple Cluster')
ax[1].set_title('Red Cluster')
###Output
_____no_output_____
###Markdown
Comparing with Day of Week
###Code
pd.DatetimeIndex(pivoted.columns).dayofweek
dayofweek = pd.DatetimeIndex(pivoted.columns).dayofweek
plt.scatter(X2[:, 0], X2[:, 1], c=dayofweek, cmap='rainbow')
plt.colorbar()
###Output
_____no_output_____
###Markdown
- 0-4: weekdays- 5, 6: weekend Analyzing OutliersThe following points are weekdays with a holiday-like pattern
###Code
dates = pd.DatetimeIndex(pivoted.columns)
dates[(labels==1) & (dayofweek<5)]
###Output
_____no_output_____
###Markdown
Unsupervised Analysis of Days of WeekTreating crossings each day as features to learn about the relationships between various days.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn')
import pandas as pd
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
###Output
_____no_output_____
###Markdown
Get Data
###Code
from jupyterworkflow.data import get_fremont_data
data = get_fremont_data()
pivoted = data.pivot_table('Total', index=data.index.time, columns=data.index.date)
pivoted.plot(legend=False, alpha=0.01)
###Output
_____no_output_____
###Markdown
Principal Component Analysis
###Code
X = pivoted.fillna(0).T.values
X.shape
X2 = PCA(2, svd_solver='full').fit_transform(X)
X2.shape
plt.scatter(X2[:, 0], X2[:, 1])
###Output
_____no_output_____
###Markdown
Unsupervised Clustering
###Code
gmm = GaussianMixture(2).fit(X)
labels = gmm.predict(X)
labels
plt.scatter(X2[:, 0], X2[:, 1], c=labels, cmap='rainbow')
plt.colorbar()
fig, ax = plt.subplots(1, 2, figsize=(14, 6))
pivoted.T[labels == 0].T.plot(legend=False, alpha=0.1, ax=ax[0])
pivoted.T[labels == 1].T.plot(legend=False, alpha=0.1, ax=ax[1])
ax[0].set_title('Purple Cluster')
ax[1].set_title('Red Cluster')
###Output
_____no_output_____
###Markdown
Comparing with Day of Week
###Code
dayofweek = pd.DatetimeIndex(pivoted.columns).dayofweek
plt.scatter(X2[:, 0], X2[:, 1], c=dayofweek, cmap='rainbow')
plt.colorbar()
###Output
_____no_output_____
###Markdown
Analyzing OutliersThe following points are weekdays with a holiday-like pattern
###Code
dates = pd.DatetimeIndex(pivoted.columns)
dates[(labels == 1) & (dayofweek < 5)]
###Output
_____no_output_____
###Markdown
Unsupervised Analysis of Days of WeekTreating crossings each day as features to learn about the relationships between various days.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn')
import pandas as pd
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
###Output
_____no_output_____
###Markdown
Get Data
###Code
from jworkflow.data import get_fremont_data
data = get_fremont_data()
pivoted = data.pivot_table('Total', index=data.index.time, columns=data.index.date)
pivoted.plot(legend=False, alpha=0.01)
###Output
_____no_output_____
###Markdown
Principal Component Analysis
###Code
X = pivoted.fillna(0).T.values
X.shape
X2 = PCA(2, svd_solver='full').fit_transform(X)
X2.shape
plt.scatter(X2[:,0], X2[:,1])
###Output
_____no_output_____
###Markdown
Unsupervised Clustering
###Code
gmm = GaussianMixture(2).fit(X)
labels = gmm.predict(X)
plt.scatter(X2[:, 0], X2[:, 1], c=labels, cmap='rainbow')
plt.colorbar()
fig, ax = plt.subplots(1, 2, figsize=(14, 6)) # 1 x 2 grid
pivoted.T[labels == 0].T.plot(legend=False, alpha=0.1, ax=ax[0]);
pivoted.T[labels == 1].T.plot(legend=False, alpha=0.1, ax=ax[1]);
ax[0].set_title('Purple Cluster')
ax[1].set_title('Red Cluster');
###Output
_____no_output_____
###Markdown
Comparing with Day of Week
###Code
dayofweek = pd.DatetimeIndex(pivoted.columns).dayofweek
plt.scatter(X2[:, 0], X2[:, 1], c=dayofweek, cmap='rainbow')
plt.colorbar()
###Output
_____no_output_____
###Markdown
Analyzing OutliersThe following points are weekdays with a holiday-like pattern
###Code
dates = pd.DatetimeIndex(pivoted.columns)
dates[(labels == 1) & (dayofweek < 5)]
###Output
_____no_output_____
###Markdown
Unsupervised Analysis of Fremont Bridge Data Codealong from Jake Vanderplas YouTube videoMust use Python 3, because the urllib.request module used to download the data is only available in Python 3Treating bridge crossings for each day as features to understand relationships between various days.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('seaborn')
from jupyterworkflow.data import get_fremont_data
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
###Output
_____no_output_____
###Markdown
Get Data
###Code
data = get_fremont_data()
pivoted = data.pivot_table('Total', index=data.index.time, columns=data.index.date)
pivoted.plot(legend=False, alpha=0.01)
np.unique(data.index.time)
pivoted.shape
# if we transpose it, the days become observations and hours become columns
pivoted.T.shape
###Output
_____no_output_____
###Markdown
Principal Component Analysis
###Code
X = pivoted.fillna(0).T.values
X.shape
PCA(2).fit(X)
# transforming X into 2 dimensions (features) using PCA
X2 = PCA(2, svd_solver='full').fit_transform(X)
X2.shape
plt.scatter(X2[:, 0], X2[:,1])
plt.title('Pivoted Transposed Fremont Data with PCA/full SVD')
# 2 clusters
# use a Gaussian mixture to classify each observation into 2 groups
# what would it look like if we did with 'auto' svd_solver?
X2_auto = PCA(2).fit_transform(X)
X2_auto.shape
plt.scatter(X2_auto[:, 0], X2_auto[:,1])
plt.title('Pivoted Transposed Fremont Data with PCA/auto SVD')
# looks to be no difference.
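# (Added note) With svd_solver='auto', scikit-learn chooses a solver based on the
# data shape; for a tall matrix like this one with n_components=2 it typically
# uses randomized SVD, which closely approximates the full SVD -- hence the
# near-identical plots.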
###Output
_____no_output_____
###Markdown
Unsupervised Clustering - Gaussian Mixture
###Code
gmm = GaussianMixture(2).fit(X)
labels = gmm.predict(X)
plt.scatter(X2[:, 0], X2[:,1], c=labels, cmap='rainbow')
plt.title('Pivoted Transposed Fremont Data with PCA/full SVD and Gaussian Mixture Labels')
plt.colorbar();
fig, ax = plt.subplots(1, 2, figsize=(14,6))
pivoted.T[labels == 0].T.plot(legend=False, alpha=0.1, ax=ax[0])
pivoted.T[labels == 1].T.plot(legend=False, alpha=0.1, ax=ax[1])
ax[0].set_title('Purple Cluster - Weekday Pattern')
ax[1].set_title('Red Cluster - Weekend Pattern')
###Output
_____no_output_____
###Markdown
Comparing with Day of Week Looking at the clusters with points labeled by day of the week
###Code
dayofweek = pd.DatetimeIndex(pivoted.columns).dayofweek
plt.scatter(X2[:, 0], X2[:,1], c=dayofweek, cmap='rainbow')
plt.title('Pivoted Transposed Fremont Data with PCA/full SVD and DayOfWeek Labels')
plt.colorbar();
###Output
_____no_output_____
###Markdown
Analyzing Outliers
###Code
dates = pd.DatetimeIndex(pivoted.columns)
dates[(labels == 1) & (dayofweek<5)]
# cluster [1] with weekday labels
# look up 2017-02-06 to see why this weekday shows up in the 'weekend' cluster
# it turned out to be a snow day, one of the worst weather events in Seattle.
# All other dates are related to holidays
###Output
_____no_output_____
###Markdown
Thank you Jake Vanderplas for this wonderful tutorial!
###Code
# checking versions
# import sklearn
# sklearn.__version__
###Output
_____no_output_____
###Markdown
Unsupervised Analysis of Days of WeekTreating crossings each day as features.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use("seaborn")
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
###Output
_____no_output_____
###Markdown
Get Data
###Code
from jupyterworkflow.data import get_fremont_data
data = get_fremont_data()
pivoted = data.pivot_table('Total', index=data.index.time, columns=data.index.date)
pivoted.plot(legend=False, alpha=0.01)
###Output
_____no_output_____
###Markdown
Principal Component Analysis
###Code
X = pivoted.fillna(0).T.values
X.shape
X2 = PCA(2, svd_solver='full').fit_transform(X)
X2.shape
plt.scatter(X2[:,0], X2[:,1])
###Output
_____no_output_____
###Markdown
Unsupervised Clustering
###Code
gmm = GaussianMixture(2).fit(X)
labels = gmm.predict(X)
plt.scatter(X2[:,0], X2[:,1], c=labels, cmap='rainbow')
plt.colorbar()
###Output
_____no_output_____
###Markdown
Weekdays and Weekends
###Code
fig, ax = plt.subplots(1,2, figsize=(14, 6))
pivoted.T[labels == 0].T.plot(legend=False, alpha=0.1, ax=ax[0]);
pivoted.T[labels == 1].T.plot(legend=False, alpha=0.1, ax=ax[1]);
ax[0].set_title("Purple Cluster")
ax[1].set_title("Red Cluster")
###Output
_____no_output_____
###Markdown
Comparing with Day of Week
###Code
dayofweek = pd.DatetimeIndex(pivoted.columns).dayofweek
plt.scatter(X2[:,0], X2[:,1], c=dayofweek, cmap='rainbow')
plt.colorbar()
###Output
_____no_output_____
###Markdown
Analyzing OutliersThe following days are weekdays with a holiday-like pattern
###Code
dates = pd.DatetimeIndex(pivoted.columns)
dates[(labels == 1) & (dayofweek < 5)]
###Output
_____no_output_____
###Markdown
Unsupervised Analysis of Days of Week (Fremont Bike Rides - Seattle, Washington)Treating crossings each day as features to learn about the relationships between various days.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn')
import pandas as pd
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
###Output
_____no_output_____
###Markdown
Get Data
###Code
from jupyterworkflow.data import get_fremont_data
data = get_fremont_data()
pivoted = data.pivot_table('Total', index=data.index.time, columns=data.index.date)
pivoted.plot(legend=False, alpha=0.01)
###Output
_____no_output_____
###Markdown
Principal Component Analysis
###Code
X = pivoted.fillna(0).T.values
X.shape
X2 = PCA(2, svd_solver='full').fit_transform(X)
X2.shape
plt.scatter(X2[:, 0], X2[:, 1])
###Output
_____no_output_____
###Markdown
We can see from above that we have two clusters of day types. It would be nice if we could automatically identify those clusters. One way to do this is to use a Gaussian mixture model. Unsupervised Clustering
###Code
gmm = GaussianMixture(2).fit(X)
labels = gmm.predict(X)
# lets make it more colorful by adding cmap='rainbow'
plt.scatter(X2[:, 0], X2[:, 1], c=labels, cmap='rainbow')
plt.colorbar()
###Output
_____no_output_____
###Markdown
Let's explore this further by examining what's going on within these clusters.
###Code
fig, ax = plt.subplots(1, 2, figsize=(14, 6))
pivoted.T[labels == 0].T.plot(legend=False, alpha=0.1, ax=ax[0]);
pivoted.T[labels == 1].T.plot(legend=False, alpha=0.1, ax=ax[1]);
ax[0].set_title('Purple Cluster')
ax[1].set_title('Red Cluster');
###Output
_____no_output_____
###Markdown
Comparing with Day of Week
###Code
dayof_week = pd.DatetimeIndex(pivoted.columns).dayofweek
plt.scatter(X2[:, 0], X2[:, 1], c=dayof_week, cmap='rainbow')
plt.colorbar()
###Output
_____no_output_____
###Markdown
Analyzing OutliersThe following points are weekdays with holiday-like pattern
###Code
dates = pd.DatetimeIndex(pivoted.columns)
dates[(labels == 1) & (dayof_week < 5)]
###Output
_____no_output_____
###Markdown
Unsupervised Analysis of Days of Week
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn')
import pandas as pd
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
###Output
_____no_output_____
###Markdown
Get data
###Code
from jupyterworkflow.data import get_data
data = get_data()
pivoted = data.pivot_table('Total', index= data.index.time, columns=data.index.date)
pivoted.plot(legend=False, alpha=0.01)
###Output
_____no_output_____
###Markdown
Principal Component Analysis
###Code
X = pivoted.fillna(0).T.values
X.shape
X2 = PCA(2, svd_solver="full").fit_transform(X)
X2.shape
plt.scatter(X2[:, 0], X2[:, 1]);
###Output
_____no_output_____
###Markdown
Unsupervised Clustering
###Code
gmm = GaussianMixture(2).fit(X)
labels = gmm.predict(X)
labels
plt.scatter(X2[:,0], X2[:,1], c=labels, cmap='rainbow')
plt.colorbar()
fig, ax = plt.subplots(1, 2, figsize=(14, 6))
pivoted.T[labels == 0].T.plot(legend=False, alpha=0.1, ax=ax[0]);
pivoted.T[labels == 1].T.plot(legend=False, alpha=0.1, ax=ax[1]);
ax[0].set_title('Purple Cluster')
ax[1].set_title('Red Cluster');
###Output
_____no_output_____
###Markdown
Comparing with Days of Week
###Code
dayofweek = pd.DatetimeIndex(pivoted.columns).dayofweek
plt.scatter(X2[:,0], X2[:,1], c=dayofweek, cmap='rainbow')
plt.colorbar()
###Output
_____no_output_____
###Markdown
Analyzing OutliersThe following points are weekdays with a holiday-like pattern
###Code
dates = pd.DatetimeIndex(pivoted.columns)
dates[(labels == 1) & (dayofweek < 5)]
###Output
_____no_output_____
###Markdown
Unsupervised Analysis of Days of WeekTreating crossings each day as features to learn about the relationships between various days
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn-pastel')
import pandas as pd
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
###Output
_____no_output_____
###Markdown
Get Data
###Code
from jupyterworkflow.data import get_fremont_data
data = get_fremont_data()
pivoted = data.pivot_table('Total', index=data.index.time, columns=data.index.date)
pivoted.plot(legend=False, alpha = 0.01);
###Output
_____no_output_____
###Markdown
Principal Component Analysis
###Code
X = pivoted.fillna(0).T.values
X.shape
X2 = PCA(2, svd_solver='full').fit_transform(X)
X2.shape
plt.scatter(X2[:,0], X2[:, 1]);
###Output
_____no_output_____
###Markdown
Unsupervised Clustering
###Code
gmm = GaussianMixture(2).fit(X)
labels = gmm.predict(X)
plt.scatter(X2[:,0], X2[:, 1], c = labels, cmap = 'rainbow')
plt.colorbar();
fig, ax = plt.subplots(1,2, figsize=(14,6))
pivoted.T[labels == 0].T.plot(legend=False, alpha = 0.1, ax = ax[0]);
pivoted.T[labels == 1].T.plot(legend=False, alpha = 0.1, ax = ax[1]);
ax[0].set_title('Red Cluster');
ax[1].set_title('Purple Cluster');
###Output
_____no_output_____
###Markdown
Comparing with Day of Week
###Code
dayofweek=pd.DatetimeIndex(pivoted.columns).dayofweek
plt.scatter(X2[:,0], X2[:, 1], c = dayofweek, cmap = 'rainbow')
plt.colorbar();
###Output
_____no_output_____
###Markdown
Analyzing OutliersThe following points are weekdays with holiday-like pattern
###Code
dates = pd.DatetimeIndex(pivoted.columns)
dates[(labels == 0) & (dayofweek < 5)]
###Output
_____no_output_____
###Markdown
Unsupervised Analysis of Days of week
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn')
import pandas as pd
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
###Output
_____no_output_____
###Markdown
Get Data
###Code
from jupyterworkflow.data import get_fremont_data
data = get_fremont_data()
pivoted = data.pivot_table('Total', index = data.index.time, columns = data.index.date)
pivoted.plot(legend = False, alpha = 0.01)
###Output
_____no_output_____
###Markdown
Principal Component Analysis
###Code
X = pivoted.fillna(0).T.values
X.shape
X2 = PCA(2).fit_transform(X)
X2.shape
plt.scatter(X2[:, 0], X2[:, 1])
###Output
_____no_output_____
###Markdown
Unsupervised Clustering
###Code
gmm = GaussianMixture(2).fit(X)
labels = gmm.predict(X)
labels
plt.scatter(X2[:, 0], X2[:, 1], c = labels, cmap = 'rainbow')
plt.colorbar()
fig, ax = plt.subplots(1, 2, figsize=(14,6))
pivoted.T[labels == 0].T.plot(legend = False, alpha = 0.01, ax=ax[0])
pivoted.T[labels == 1].T.plot(legend = False, alpha = 0.01, ax=ax[1])
ax[0].set_title('Purple Cluster')
ax[1].set_title('Red Cluster')
###Output
_____no_output_____
###Markdown
Comparing with Day of week
###Code
dayofweek = pd.DatetimeIndex(pivoted.columns).dayofweek
plt.scatter(X2[:, 0], X2[:, 1], c = dayofweek, cmap = 'rainbow')
plt.colorbar()
###Output
_____no_output_____
###Markdown
Analyzing OutliersThe following points are weekdays with a holiday-like pattern
###Code
dates = pd.DatetimeIndex(pivoted.columns)
dates[(labels == 1) & (dayofweek < 5)]
###Output
_____no_output_____
###Markdown
Unsupervised Analysis of Days of WeekTreating crossings each day as features to learn about the relationships between various days
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn')
from jupyterworkflow.data import get_fremont_data
import pandas as pd
import numpy as np
from sklearn.mixture import GaussianMixture
###Output
_____no_output_____
###Markdown
Get Data
###Code
data = get_fremont_data()
pivoted = data.pivot_table('Total', index=data.index.time, columns=data.index.date)
pivoted.plot(legend=False, alpha=0.01)
pivoted.index[:24]
data.index
np.unique(data.index.time)
!head fremont.csv
pivoted.shape
X = pivoted.T.fillna(0).values
X.shape
from sklearn.decomposition import PCA
X2 = PCA(2, svd_solver='full').fit_transform(X)
X2.shape
import matplotlib.pyplot as plt
plt.scatter(X2[:, 0], X2[:, 1])
gmm = GaussianMixture(2).fit(X)
labels = gmm.predict(X)
labels
plt.scatter(X2[:, 0], X2[:, 1], c=labels, cmap='rainbow')
plt.colorbar()
fig, ax = plt.subplots(1, 2, figsize=(14, 6))
pivoted.T[labels == 0].T.plot(legend=False, alpha=0.01, ax=ax[0])
pivoted.T[labels == 1].T.plot(legend=False, alpha=0.01, ax=ax[1])
ax[0].set_title('Purple Cluster')
ax[1].set_title('Red Cluster')
dayofweek = pd.DatetimeIndex(pivoted.columns).dayofweek
plt.scatter(X2[:, 0], X2[:, 1], c=dayofweek, cmap='rainbow')
plt.colorbar()
###Output
_____no_output_____
###Markdown
Analyzing OutliersThe following days are weekdays (mostly holidays) with a weekend-like pattern
###Code
dates = pd.DatetimeIndex(pivoted.columns)
dates[(labels == 1) & (dayofweek < 5)]
###Output
_____no_output_____
###Markdown
Unsupervised Analysis of Days of WeekTreating crossings each day as features to learn about the relationships between various days
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn')
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
###Output
_____no_output_____
###Markdown
Get Data
###Code
from jupyterworkflow.data import get_fremont_data
data = get_fremont_data()
pivoted = data.pivot_table('Total', index=data.index.time, columns = data.index.date)
pivoted.plot(legend=False, alpha=0.01);
###Output
_____no_output_____
###Markdown
Principal Components Analysis
###Code
X = pivoted.fillna(0).T.values
X.shape
X2 = PCA(2, svd_solver='full').fit_transform(X)
X2.shape
plt.scatter(X2[:, 0], X2[:, 1]);
###Output
_____no_output_____
###Markdown
Unsupervised Clustering
###Code
gmm = GaussianMixture(2).fit(X)
labels = gmm.predict(X)
labels
plt.scatter(X2[:, 0], X2[:, 1], c=labels, cmap='rainbow')
plt.colorbar();
fig, ax = plt.subplots(1, 2, figsize=(14, 6))
pivoted.T[labels == 0].T.plot(legend=False, alpha=0.1, ax=ax[0])
pivoted.T[labels == 1].T.plot(legend=False, alpha=0.05, ax=ax[1])
ax[0].set_title('Red Cluster')
ax[1].set_title('Purple Cluster');
###Output
_____no_output_____
###Markdown
Comparing with Day of Week
###Code
dayofweek = pd.DatetimeIndex(pivoted.columns).dayofweek
plt.scatter(X2[:, 0], X2[:, 1], c=dayofweek, cmap='rainbow')
plt.colorbar();
###Output
_____no_output_____
###Markdown
Analyzing OutliersThe following points are weekdays with a holiday-like pattern
###Code
dates = pd.DatetimeIndex(pivoted.columns)
dates[(labels == 0) & (dayofweek < 5)]
###Output
_____no_output_____
###Markdown
Unsupervised Analysis of Days of WeekTreating crossings each day as features to learn about the relationships between various days.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn')
import pandas as pd
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
###Output
_____no_output_____
###Markdown
Get Data
###Code
from jupyterworkflow.data import get_fremont_data
data = get_fremont_data()
pivoted = data.pivot_table('Total', index=data.index.time, columns=data.index.date)
pivoted.plot(legend=False, alpha=0.01);
###Output
_____no_output_____
###Markdown
Principal Component Analysis
###Code
X = pivoted.fillna(0).T.values
X.shape
X2 = PCA(2, svd_solver='full').fit_transform(X)
X2.shape
plt.scatter(X2[:, 0], X2[:, 1]);
###Output
_____no_output_____
###Markdown
Unsupervised Clustering
###Code
gmm = GaussianMixture(2).fit(X)
labels = gmm.predict(X)
plt.scatter(X2[:, 0], X2[:, 1], c=labels, cmap='rainbow')
plt.colorbar();
fig, ax = plt.subplots(1, 2, figsize=(14, 6))
pivoted.T[labels == 0].T.plot(legend=False, alpha=0.1, ax=ax[0]);
pivoted.T[labels == 1].T.plot(legend=False, alpha=0.1, ax=ax[1]);
ax[0].set_title('Purple Cluster')
ax[1].set_title('Red Cluster');
###Output
_____no_output_____
###Markdown
Comparing with Day of Week
###Code
dayofweek = pd.DatetimeIndex(pivoted.columns).dayofweek
plt.scatter(X2[:, 0], X2[:, 1], c=dayofweek, cmap='rainbow')
plt.colorbar();
###Output
_____no_output_____
###Markdown
Analyzing OutliersThe following points are weekdays with a holiday-like pattern
###Code
dates = pd.DatetimeIndex(pivoted.columns)
dates[(labels == 1) & (dayofweek < 5)]
###Output
_____no_output_____
###Markdown
Unsupervised Analysis of Days of WeekTreating crossings each day as features to learn about the relationships between various days.
###Code
from jupyterworkflow.data import get_fremont_data
from sklearn.decomposition import PCA
import pandas as pd
from sklearn.mixture import GaussianMixture
import matplotlib.pyplot as plt
plt.style.use('seaborn')
%matplotlib inline
###Output
_____no_output_____
###Markdown
Get Data
###Code
data = get_fremont_data()
pivoted = data.pivot_table('Total', index=data.index.time, columns=data.index.date)
pivoted.plot(legend=False, alpha=0.01)
###Output
_____no_output_____
###Markdown
Principal Component Analysis
###Code
# Checking shape of data...24 Hours in a day, 1975 Days in the data.
pivoted.shape
# Transpose data to allow us to compare each hour of the day to other
# days. Result: 1975 observations with 24 features each.
X = pivoted.fillna(0).T.values
X.shape
X2 = PCA(2, svd_solver='full').fit_transform(X)
X2.shape
plt.scatter(X2[:, 0], X2[:, 1]);
###Output
_____no_output_____
###Markdown
Unsupervised Clustering
###Code
gmm = GaussianMixture(2).fit(X)
labels = gmm.predict(X)
plt.scatter(X2[:, 0], X2[:, 1], c=labels, cmap='rainbow')
plt.colorbar();
fig, ax = plt.subplots(1, 2, figsize=(14, 6))
pivoted.T[labels == 1].T.plot(legend=False, alpha=0.2, ax=ax[0]);
pivoted.T[labels == 0].T.plot(legend=False, alpha=0.2, ax=ax[1]);
ax[0].set_title('Red Cluster')
ax[1].set_title('Purple Cluster');
###Output
_____no_output_____
###Markdown
Comparing with Day of Week
###Code
dayofweek = pd.DatetimeIndex(pivoted.columns).dayofweek
plt.scatter(X2[:, 0], X2[:, 1], c=dayofweek, cmap='rainbow')
plt.colorbar();
###Output
_____no_output_____
###Markdown
Analyzing OutliersThe following points are weekdays with holiday-like pattern.
###Code
dates = pd.DatetimeIndex(pivoted.columns)
dates[(labels == 0) & (dayofweek < 5)]
###Output
_____no_output_____
###Markdown
Unsupervised Analysis of Days of WeekTreating crossings each day as features to learn about the relationships between various days.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn')
import pandas as pd
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
###Output
_____no_output_____
###Markdown
Get Data
###Code
from jupyterworkflow.data import get_fremont_data
data = get_fremont_data()
pivoted = data.pivot_table('Total', index=data.index.time, columns=data.index.date)
pivoted.plot(legend=False, alpha=0.01);
###Output
_____no_output_____
###Markdown
Principal Component Analysis
###Code
X = pivoted.fillna(0).T.values
X.shape
X2 = PCA(2, svd_solver='full').fit_transform(X)
X2.shape
plt.scatter(X2[:, 0], X2[:, 1]);
###Output
_____no_output_____
###Markdown
Unsupervised Clustering
###Code
gmm = GaussianMixture(2).fit(X)
labels = gmm.predict(X)
plt.scatter(X2[:, 0], X2[:, 1], c=labels, cmap='rainbow')
plt.colorbar();
fig, ax = plt.subplots(1, 2, figsize=(14,6))
pivoted.T[labels == 0].T.plot(legend=False, alpha=0.1, ax=ax[0]);
pivoted.T[labels == 1].T.plot(legend=False, alpha=0.1, ax=ax[1]);
ax[0].set_title('Purple Cluster');
ax[1].set_title('Red Cluster');
###Output
_____no_output_____
###Markdown
Comparing with day of week
###Code
dayofweek = pd.DatetimeIndex(pivoted.columns).dayofweek
plt.scatter(X2[:, 0], X2[:, 1], c=dayofweek, cmap='rainbow')
plt.colorbar();
###Output
_____no_output_____
###Markdown
Analyzing OutliersThe following points are weekdays with a holiday-like pattern
###Code
dates = pd.DatetimeIndex(pivoted.columns)
dates[(labels == 1) & (dayofweek < 5)]
###Output
_____no_output_____
###Markdown
Unsupervised Analysis of Days of WeekTreating crossings each day as features to learn about the relationships between various days.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn')
import pandas as pd
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
###Output
_____no_output_____
###Markdown
Get Data
###Code
from jupyterworkflow.data import get_fremont_data
data = get_fremont_data()
pivoted = data.pivot_table('Total', index=data.index.time, columns=data.index.date)
pivoted.plot(legend=False, alpha=0.01);
###Output
_____no_output_____
###Markdown
Principal Component Analysis
###Code
X = pivoted.fillna(0).T.values
X.shape
X2 = PCA(2, svd_solver='full').fit_transform(X)
X2.shape
plt.scatter(X2[:, 0], X2[:, 1]);
###Output
_____no_output_____
###Markdown
Unsupervised Clustering
###Code
gmm = GaussianMixture(2).fit(X)
labels = gmm.predict(X)
plt.scatter(X2[:, 0], X2[:, 1], c=labels, cmap='rainbow')
plt.colorbar();
fig, ax = plt.subplots(1, 2, figsize=(14, 6))
pivoted.T[labels==0].T.plot(legend=False, alpha=0.1, ax=ax[0]);
pivoted.T[labels==1].T.plot(legend=False, alpha=0.1, ax=ax[1]);
ax[0].set_title('Purple Cluster')
ax[1].set_title('Red Cluster');
###Output
_____no_output_____
###Markdown
Comparing with Day of Week
###Code
dayofweek = pd.DatetimeIndex(pivoted.columns).dayofweek
plt.scatter(X2[:, 0], X2[:, 1], c=dayofweek, cmap='rainbow')
plt.colorbar();
###Output
_____no_output_____
###Markdown
Analyzing OutliersThe following points are weekdays with a holiday-like pattern (except 2020)
###Code
dates = pd.DatetimeIndex(pivoted.columns)
dates[(labels == 1) & (dayofweek < 5)]
###Output
_____no_output_____
###Markdown
Unsupervised Analysis of Days of WeekTreating crossings each day as features to learn the relationships between various days.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn')
import pandas as pd
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
###Output
_____no_output_____
###Markdown
Get Data
###Code
from jupyterworkflow.data import get_fremont_data
data = get_fremont_data()
pivoted = data.pivot_table('Total', index=data.index.time, columns=data.index.date)
pivoted.plot(legend=False, alpha=0.01);
###Output
_____no_output_____
###Markdown
Principal Component Analysis
###Code
X = pivoted.fillna(0).T.values
X.shape
X2 = PCA(2, svd_solver='full').fit_transform(X)
X2.shape
plt.scatter(X2[:, 0], X2[:, 1]);
###Output
_____no_output_____
###Markdown
Unsupervised Clustering
###Code
gmm = GaussianMixture(2).fit(X)
labels = gmm.predict(X)
plt.scatter(X2[:, 0], X2[:, 1], c=labels, cmap='rainbow')
plt.colorbar();
fig, ax = plt.subplots(1, 2, figsize=(14, 6))
pivoted.T[labels == 0].T.plot(legend=False, alpha=0.1, ax=ax[0]);
pivoted.T[labels == 1].T.plot(legend=False, alpha=0.1, ax=ax[1]);
ax[0].set_title('Purple Cluster')
ax[1].set_title('Red Cluster');
###Output
_____no_output_____
###Markdown
Comparing with Day of Week
###Code
dayofweek = pd.DatetimeIndex(pivoted.columns).dayofweek
plt.scatter(X2[:, 0], X2[:, 1], c=dayofweek, cmap='rainbow')
plt.colorbar();
###Output
_____no_output_____
###Markdown
Analyzing OutliersThe following points are weekdays with a holiday-like pattern
###Code
dates = pd.DatetimeIndex(pivoted.columns)
dates[(labels == 1) & (dayofweek < 5)]
###Output
_____no_output_____
###Markdown
Unsupervised Days of Week
###Code
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
plt.style.use('seaborn')
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
###Output
_____no_output_____
###Markdown
Get Data
###Code
from jupyterworkflow.data import get_fremont_data
data=get_fremont_data()
pivoted = data.pivot_table('Total',index=data.index.time, columns=data.index.date)
pivoted.plot(legend = False, alpha = 0.01);
###Output
_____no_output_____
###Markdown
Principal Component Analysis
###Code
X=pivoted.fillna(0).T.values
X.shape
X2 = PCA(2, svd_solver='full').fit_transform(X)  # Kim, 03/02: an earlier attempt used svd_solver='dense', which is not a valid option (valid choices include 'auto', 'full', 'arpack', 'randomized')
X2.shape
plt.scatter(X2[:,0], X2[:,1])
###Output
_____no_output_____
###Markdown
Unsupervised Clustering
###Code
gmm = GaussianMixture(2)
gmm.fit(X)
labels = gmm.predict(X)
labels
plt.scatter(X2[:,0], X2[:,1], c=labels, cmap ='rainbow')
plt.colorbar()
fig, ax=plt.subplots(1,2,figsize=(14,6))
pivoted.T[labels==0].T.plot(legend=False,alpha= 0.1, ax = ax[0]);
pivoted.T[labels==1].T.plot(legend=False,alpha= 0.1, ax = ax[1]);
ax[0].set_title('Purple Cluster')
ax[1].set_title('Red Cluster');
###Output
_____no_output_____
###Markdown
Comparing with Day of Week
###Code
dayofweek = pd.DatetimeIndex(pivoted.columns).dayofweek
plt.scatter(X2[:,0], X2[:,1], c=dayofweek, cmap ='rainbow')
plt.colorbar()
###Output
_____no_output_____
###Markdown
Analyzing OutliersThe following points are weekdays with a holiday-like pattern
###Code
dates = pd.DatetimeIndex(pivoted.columns)
dates[(labels == 1) & (dayofweek < 5)]
###Output
_____no_output_____
###Markdown
Unsupervised Analysis of Days of WeekTreating crossings each day as features to learn about the relationships between various days
###Code
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('seaborn')
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
###Output
_____no_output_____
###Markdown
Get Data
###Code
from jwf.data import get_fremont_data
data = get_fremont_data()
pivoted= data.pivot_table('Total',index=data.index.time, columns=data.index.date)
pivoted.plot(legend=False,alpha=0.01);
###Output
_____no_output_____
###Markdown
Principal Component Analysis
###Code
X = pivoted.fillna(0).T.values
X.shape
X2 = PCA(2, svd_solver='full').fit_transform(X)
X2.shape
plt.scatter(X2[:, 0], X2[:, 1])
###Output
_____no_output_____
###Markdown
Unsupervised Clustering
###Code
gmm = GaussianMixture(2).fit(X)
labels = gmm.predict(X)
plt.scatter(X2[:, 0], X2[:, 1],c=labels, cmap='rainbow')
plt.colorbar();
fig, ax = plt.subplots(1, 2, figsize=(14, 6))
pivoted.T[labels ==0].T.plot(legend=False, alpha=0.1, ax=ax[0]);
pivoted.T[labels ==1].T.plot(legend=False, alpha=0.1, ax=ax[1]);
ax[0].set_title('Purple Cluster')
ax[1].set_title('Red Cluster');
###Output
_____no_output_____
###Markdown
Comparing with Day of week
###Code
dayofweek = pd.DatetimeIndex(pivoted.columns).dayofweek
plt.scatter(X2[:, 0], X2[:, 1],c=dayofweek, cmap='rainbow')
plt.colorbar();
###Output
_____no_output_____
###Markdown
Analyzing OutliersThe following points are weekdays with a holiday-like pattern
###Code
dates = pd.DatetimeIndex(pivoted.columns)
dates[(labels == 1) & (dayofweek < 5)]
###Output
_____no_output_____
###Markdown
Unsupervised Analysis of Days of WeekTreating crossings each day as features to learn about the relationships between various days.
###Code
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity='all'
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn')
import pandas as pd
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
###Output
_____no_output_____
###Markdown
Get Data
###Code
from jupyterworkflow.data import get_fremont_data
data = get_fremont_data()
pivoted = data.pivot_table('Total', index=data.index.time, columns=data.index.date)
pivoted.plot(legend=False, alpha=.01);
###Output
_____no_output_____
###Markdown
Principal Component Analysis
###Code
X = pivoted.fillna(0).T.values
X.shape
X2 = PCA(2, svd_solver='full').fit_transform(X)
X2.shape
plt.scatter(X2[:,0], X2[:,1]);
###Output
_____no_output_____
###Markdown
Unsupervised Clustering
###Code
gmm = GaussianMixture(2).fit(X)
labels = gmm.predict(X)
plt.scatter(X2[:,0], X2[:,1], c=labels, cmap='rainbow')
plt.colorbar();
fig, ax = plt.subplots(1, 2, figsize=(12, 6))
pivoted.T[labels==0].T.plot(legend=False, alpha=0.1, ax=ax[0])
pivoted.T[labels==1].T.plot(legend=False, alpha=0.1, ax=ax[1])
ax[0].set_title('Purple Cluster')
ax[1].set_title('Red Cluster');
###Output
_____no_output_____
###Markdown
Comparing with Day of Week
###Code
dayofweek = pd.DatetimeIndex(pivoted.columns).dayofweek
plt.scatter(X2[:,0], X2[:,1], c=dayofweek, cmap='rainbow')
plt.colorbar();
###Output
_____no_output_____
###Markdown
Analyzing OutliersThe following points are weekdays with a holiday-like pattern.
###Code
dates = pd.DatetimeIndex(pivoted.columns)
dates[(labels == 1) & (dayofweek < 5)]  # there are 63 such days which are not weekends
###Output
_____no_output_____
###Markdown
Unsupervised Analysis of Days of WeekTreating crossings each day as features to learn about the relationships between various days
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use("seaborn")
import pandas as pd
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
###Output
_____no_output_____
###Markdown
Get Data
###Code
from jupyterworkflow.data import get_freemont_data
data = get_freemont_data()
pivoted = data.pivot_table('Total', index=data.index.time, columns=data.index.date)
pivoted.plot(legend=False, alpha=0.01);
###Output
_____no_output_____
###Markdown
Principal Component Analysis
###Code
X = pivoted.fillna(0).T.values
X.shape
X2 = PCA(2, svd_solver="full").fit_transform(X)
X2.shape
plt.scatter(X2[:,0], X2[:,1]);
###Output
_____no_output_____
###Markdown
Unsupervised Clustering
###Code
gmm = GaussianMixture(2).fit(X)
labels = gmm.predict(X)
plt.scatter(X2[:,0], X2[:,1], c=labels, cmap="rainbow");
plt.colorbar();
fig, ax = plt.subplots(1, 2, figsize=(14, 6));
pivoted.T[labels == 0].T.plot(legend=False, alpha=0.1, ax=ax[0]);
pivoted.T[labels == 1].T.plot(legend=False, alpha=0.1, ax=ax[1]);
ax[0].set_title('Purple Cluster');
ax[1].set_title('Red Cluster');
###Output
_____no_output_____
###Markdown
Comparing with Day of Week
###Code
dayofweek = pd.DatetimeIndex(pivoted.columns).dayofweek
plt.scatter(X2[:,0], X2[:,1], c=dayofweek, cmap="rainbow");
plt.colorbar();
###Output
_____no_output_____
###Markdown
Analyzing OutliersThe following points are weekdays with a holiday-like pattern
###Code
dates = pd.DatetimeIndex(pivoted.columns)
dates[(labels == 1) & (dayofweek < 5)]
###Output
_____no_output_____
###Markdown
Unsupervised Analysis of Days of WeekTreating crossings each day as features to learn about the relationships between days of the week
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn')
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
###Output
_____no_output_____
###Markdown
Get Data
###Code
from jupyterworkflow.data import get_fremont_data
data = get_fremont_data()
pivoted = data.pivot_table('Total', index=data.index.time, columns=data.index.date)
pivoted.plot(legend=False, alpha=0.01);
###Output
_____no_output_____
###Markdown
Principal Component Analysis
###Code
X = pivoted.fillna(0).T.values
X.shape
X2 = PCA(2, svd_solver='full').fit_transform(X)
X2.shape
plt.scatter(X2[:, 0], X2[:, 1]);
###Output
_____no_output_____
###Markdown
Unsupervised Clustering
###Code
gmm = GaussianMixture(2)
gmm.fit(X)
labels = gmm.predict(X)
plt.scatter(X2[:, 0], X2[:, 1], c=labels, cmap='rainbow');
plt.colorbar();
fig, ax = plt.subplots(1, 2, figsize=(14, 6))
pivoted.T[labels == 0].T.plot(legend=False, alpha=0.01, ax=ax[0]);
pivoted.T[labels == 1].T.plot(legend=False, alpha=0.01, ax=ax[1]);
ax[0].set_title('Purple Cluster');
ax[1].set_title('Red Cluster');
###Output
_____no_output_____
###Markdown
Comparing with Day of the week
###Code
dayofweek = pd.DatetimeIndex(pivoted.columns).dayofweek
plt.scatter(X2[:, 0], X2[:, 1], c=dayofweek, cmap='rainbow');
plt.colorbar();
###Output
_____no_output_____
###Markdown
Analyzing OutliersThe following points are weekdays with a holiday-like pattern
###Code
# Determine which GMM label corresponds to weekends: the label order is arbitrary,
# so pick the cluster whose peak hourly count is higher as the weekday pattern.
weekday_label = 0
weekend_label = 1
if pivoted.T[labels == 1].max().max() > pivoted.T[labels == 0].max().max():
    weekday_label = 1
    weekend_label = 0
dates = pd.DatetimeIndex(pivoted.columns)
dates[(labels == weekend_label) & (dayofweek < 5)]
###Output
_____no_output_____ |
docs/html/tutorials/echo_data.ipynb | ###Markdown
echo_dataecho_data is a data plugin that echoes the data passed into it. It is useful for debugging grouped tasks. Example
###Code
from nornir import InitNornir
from nornir.core.filter import F
from nornir_utils.plugins.tasks.data import echo_data
from nornir_utils.plugins.functions import print_result
nr = InitNornir(
inventory={
"plugin": "SimpleInventory",
"options": {"host_file": "data/hosts.yaml", "group_file": "data/groups.yaml"},
}
)
nr = nr.filter(~F(name="dev5.no_group"))
def grouped_task(task):
task.run(task=echo_data, name=task.host.name, role=task.host["role"])
r = nr.run(task=grouped_task)
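# (Added note) echo_data simply returns the keyword arguments it receives, so
# print_result below displays each host's name and role as task output.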
print_result(r)
###Output
[1m[36mgrouped_task********************************************************************[0m
[0m[1m[34m* dev1.group_1 ** changed : False **********************************************[0m
[0m[1m[32mvvvv grouped_task ** changed : False vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv INFO[0m
[0m[1m[32m---- dev1.group_1 ** changed : False ------------------------------------------- INFO[0m
[0m{'role': 'www'}[0m
[0m[1m[32m^^^^ END grouped_task ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^[0m
[0m[1m[34m* dev2.group_1 ** changed : False **********************************************[0m
[0m[1m[32mvvvv grouped_task ** changed : False vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv INFO[0m
[0m[1m[32m---- dev2.group_1 ** changed : False ------------------------------------------- INFO[0m
[0m{'role': 'db'}[0m
[0m[1m[32m^^^^ END grouped_task ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^[0m
[0m[1m[34m* dev3.group_2 ** changed : False **********************************************[0m
[0m[1m[32mvvvv grouped_task ** changed : False vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv INFO[0m
[0m[1m[32m---- dev3.group_2 ** changed : False ------------------------------------------- INFO[0m
[0m{'role': 'www'}[0m
[0m[1m[32m^^^^ END grouped_task ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^[0m
[0m[1m[34m* dev4.group_2 ** changed : False **********************************************[0m
[0m[1m[32mvvvv grouped_task ** changed : False vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv INFO[0m
[0m[1m[32m---- dev4.group_2 ** changed : False ------------------------------------------- INFO[0m
[0m{'role': 'db'}[0m
[0m[1m[32m^^^^ END grouped_task ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^[0m
[0m |
examples/2D/structN2V_2D_convallaria/01_training.ipynb | ###Markdown
StructN2V - 2D Example for Convallaria data
###Code
# We import all our dependencies.
from n2v.models import N2VConfig, N2V
import numpy as np
from csbdeep.utils import plot_history
from n2v.utils.n2v_utils import manipulate_val_data
from n2v.internals.N2V_DataGenerator import N2V_DataGenerator
from matplotlib import pyplot as plt
import urllib
import os
import zipfile
from tifffile import imread
###Output
Using TensorFlow backend.
###Markdown
Download Example Data*C. majalis* data acquired by Britta Schroth-Diez of the MPI-CBG Light Microscopy Facility.Thank you Britta!
###Code
# create a folder for our data.
if not os.path.isdir('./data'):
os.mkdir('./data')
# check if data has been downloaded already
zipPath="data/flower.tif"
if not os.path.exists(zipPath):
urllib.request.urlretrieve('https://cloud.mpi-cbg.de/index.php/s/MJPMow0bk8iv95O/download', zipPath)
data = imread("data/flower.tif")
###Output
_____no_output_____
###Markdown
Training Data Preparation For training we use the N2V_DataGenerator to extract training X and validation X_val patches.
###Code
datagen = N2V_DataGenerator()
imgs = datagen.load_imgs_from_directory(directory = "data/", dims="TYX")
print(imgs[0].shape)
# The function automatically added an extra "channels" dimension to the images at the end
# Let's look at the images.
# Select channel=0 in the last dimension, as `imshow()` doesn't really understand channels
plt.imshow(imgs[0][0,...,0], cmap='magma')
plt.show()
# split up image into little non-overlapping patches for training.
# y<832 (top of image) is training, y>=832 (bottom of image) is validation
imgs_train = [imgs[0][:,:832]]
X = datagen.generate_patches_from_list(imgs_train,shape=(96,96))
imgs_vali = [imgs[0][:,832:]]
X_val = datagen.generate_patches_from_list(imgs_vali,shape=(96,96))
# Patches are created so they do not overlap.
# (Note: this is not the case if you specify a number of patches. See the docstring for details!)
# Just in case you don't know how to access the docstring of a method:
datagen.generate_patches_from_list?
# Let's look at one of our training and validation patches.
plt.figure(figsize=(14,7))
plt.subplot(1,2,1)
plt.imshow(X[0,...,0], cmap='magma')
plt.title('Training Patch');
plt.subplot(1,2,2)
plt.imshow(X_val[0,...,0], cmap='magma')
plt.title('Validation Patch');
###Output
_____no_output_____
###Markdown
Configure Noise2Void comes with a special config-object, where we store network-architecture and training-specific parameters. See the docstring of the N2VConfig constructor for a description of all parameters.When creating the config-object, we provide the training data X. From X we extract the mean and std that will be used to normalize all data before it is processed by the network. We also extract the dimensionality and number of channels from X.Compared to supervised training (i.e. traditional CARE), we recommend using N2V with an increased train_batch_size and batch_norm.To keep the network from learning the identity we have to manipulate the input pixels during training. For this we have the parameter n2v_manipulator with default value 'uniform_withCP'. Most pixel manipulators will compute the replacement value based on a neighborhood. With n2v_neighborhood_radius we can control its size. Other pixel manipulators:* normal_withoutCP: samples the neighborhood according to a normal Gaussian distribution, but without the center pixel* normal_additive: adds a random number to the original pixel value. The random number is sampled from a Gaussian distribution with zero mean and sigma = n2v_neighborhood_radius* normal_fitted: uses a random value from a Gaussian normal distribution with mean equal to the mean of the neighborhood and standard deviation equal to the standard deviation of the neighborhood.* identity: performs no pixel manipulationFor faster training multiple pixels per input patch can be manipulated. In our experiments we manipulated about 0.198% of the input pixels per patch. For a patch size of 64 by 64 pixels this corresponds to about 8 pixels. This fraction can be tuned via n2v_perc_pix.For Noise2Void training it is possible to pass arbitrarily large patches to the training method. From these patches random subpatches of size n2v_patch_shape are extracted during training. The default patch shape is set to (64, 64). In the past we experienced bleedthrough artifacts between channels if training was terminated too early. To counter bleedthrough we added the `single_net_per_channel` option, which is turned on by default. Behind the scenes a single U-Net for each channel is created and trained independently, thereby removing the possibility of bleedthrough. __Note:__ Essentially the network gets multiplied by the number of channels, which increases the memory requirements. If your GPU runs out of memory, you can always split the channels manually and train a network for each channel one after another. Warning: to make this example notebook execute faster, we have set train_epochs to only 10. For better results we suggest 100 to 200 train_epochs.
###Code
# train_steps_per_epoch is set to (number of training patches)/(batch size), like this each training patch
# is shown once per epoch.
config = N2VConfig(X, unet_kern_size=3,
train_steps_per_epoch=int(X.shape[0]/128), train_epochs=10, train_loss='mse', batch_norm=True,
train_batch_size=128, n2v_perc_pix=0.198, n2v_patch_shape=(64, 64),
n2v_manipulator='uniform_withCP', n2v_neighborhood_radius=5, structN2Vmask = [[0,1,1,1,1,1,1,1,1,1,0]])
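# (Added note) structN2Vmask extends the blind spot along a structured pattern --
# here a horizontal 1x11 stripe (1 = masked, 0 = left visible at the ends) -- so
# the network cannot exploit horizontally correlated noise to predict the center pixel.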
# Let's look at the parameters stored in the config-object.
vars(config)
# a name used to identify the model
model_name = 'n2v_2D'
# the base directory in which our model will live
basedir = 'models'
# We are now creating our network model.
model = N2V(config, model_name, basedir=basedir)
###Output
/home/tbuchhol/Gitrepos/n2v/n2v/models/n2v_standard.py:428: UserWarning: output path for model already exists, files may be overwritten: /home/tbuchhol/Gitrepos/n2v/examples/2D/structN2V_2D_convallaria/models/n2v_2D
warnings.warn('output path for model already exists, files may be overwritten: %s' % str(self.logdir.resolve()))
###Markdown
TrainingTraining the model will likely take some time. We recommend monitoring the progress with TensorBoard, which allows you to inspect the losses during training. Furthermore, you can look at the predictions for some of the validation images, which can be helpful for recognizing problems early on.You can start TensorBoard in a terminal from the current working directory with tensorboard --logdir=. Then connect to http://localhost:6006/ with your browser.
###Code
# We are ready to start training now.
history = model.train(X, X_val)
###Output
StructN2V Mask is: [[0 1 1 1 1 1 1 1 1 1 0]]
8 blind-spots will be generated per training patch of size (64, 64).
###Markdown
After training, let's plot the training and validation loss.
###Code
print(sorted(list(history.history.keys())))
plt.figure(figsize=(16,5))
plot_history(history,['loss','val_loss']);
###Output
['loss', 'lr', 'n2v_abs', 'n2v_mse', 'val_loss', 'val_n2v_abs', 'val_n2v_mse']
###Markdown
Export Model in BioImage ModelZoo FormatSee https://imagej.net/N2VPrediction for details.
###Code
model.export_TF(name='Struct Noise2Void - Convallaria Example',
description='This is the Struct Noise2Void example trained on the Convallaria data in python.',
authors=["Coleman Broaddus"],
test_img=X_val[0], axes='YXC',
patch_shape=(96,96))
###Output
INFO:tensorflow:No assets to save.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: /tmp/tmpa2nls8w6/model/saved_model.pb
Model exported in BioImage ModelZoo format:
/home/tbuchhol/Gitrepos/n2v/examples/2D/structN2V_2D_convallaria/models/n2v_2D/export.bioimage.io.zip
###Markdown
StructN2V - 2D Example for Convallaria data
###Code
# We import all our dependencies.
from n2v.models import N2VConfig, N2V
import numpy as np
from csbdeep.utils import plot_history
from n2v.utils.n2v_utils import manipulate_val_data
from n2v.internals.N2V_DataGenerator import N2V_DataGenerator
from matplotlib import pyplot as plt
import urllib
import os
import zipfile
from tifffile import imread
import ssl
ssl._create_default_https_context = ssl._create_unverified_context
###Output
2021-11-04 13:15:00.624789: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.1
###Markdown
Download Example Data*C. majalis* data acquired by Britta Schroth-Diez of the MPI-CBG Light Microscopy Facility.Thank you Britta!
###Code
# create a folder for our data.
if not os.path.isdir('./data'):
os.mkdir('./data')
# check if data has been downloaded already
zipPath="data/flower.tif"
if not os.path.exists(zipPath):
urllib.request.urlretrieve('https://download.fht.org/jug/n2v/flower.tif', zipPath)
data = imread("data/flower.tif")
###Output
_____no_output_____
###Markdown
Training Data Preparation For training we use the N2V_DataGenerator to extract training X and validation X_val patches.
###Code
datagen = N2V_DataGenerator()
imgs = datagen.load_imgs_from_directory(directory = "data/", dims="TYX")
print(imgs[0].shape)
# The function automatically added an extra "channels" dimension to the images at the end
# Let's look at the images.
# Select channel=0 in the last dimension, as `imshow()` doesn't really understand channels
plt.imshow(imgs[0][0,...,0], cmap='magma')
plt.show()
# split up image into little non-overlapping patches for training.
# y<832 (top of image) is training, y>=832 (bottom of image) is validation
imgs_train = [imgs[0][:,:832]]
X = datagen.generate_patches_from_list(imgs_train,shape=(96,96))
imgs_vali = [imgs[0][:,832:]]
X_val = datagen.generate_patches_from_list(imgs_vali,shape=(96,96))
# Patches are created so they do not overlap.
# (Note: this is not the case if you specify a number of patches. See the docstring for details!)
# Just in case you don't know how to access the docstring of a method:
datagen.generate_patches_from_list?
# Let's look at one of our training and validation patches.
plt.figure(figsize=(14,7))
plt.subplot(1,2,1)
plt.imshow(X[0,...,0], cmap='magma')
plt.title('Training Patch');
plt.subplot(1,2,2)
plt.imshow(X_val[0,...,0], cmap='magma')
plt.title('Validation Patch');
###Output
_____no_output_____
###Markdown
Configure Noise2Void comes with a special config-object, where we store network-architecture and training-specific parameters. See the docstring of the N2VConfig constructor for a description of all parameters.When creating the config-object, we provide the training data X. From X we extract the mean and std that will be used to normalize all data before it is processed by the network. We also extract the dimensionality and number of channels from X.Compared to supervised training (i.e. traditional CARE), we recommend using N2V with an increased train_batch_size and batch_norm.To keep the network from learning the identity we have to manipulate the input pixels during training. For this we have the parameter n2v_manipulator with default value 'uniform_withCP'. Most pixel manipulators will compute the replacement value based on a neighborhood. With n2v_neighborhood_radius we can control its size. Other pixel manipulators:* normal_withoutCP: samples the neighborhood according to a normal Gaussian distribution, but without the center pixel* normal_additive: adds a random number to the original pixel value. The random number is sampled from a Gaussian distribution with zero mean and sigma = n2v_neighborhood_radius* normal_fitted: uses a random value from a Gaussian normal distribution with mean equal to the mean of the neighborhood and standard deviation equal to the standard deviation of the neighborhood.* identity: performs no pixel manipulationFor faster training multiple pixels per input patch can be manipulated. In our experiments we manipulated about 0.198% of the input pixels per patch. For a patch size of 64 by 64 pixels this corresponds to about 8 pixels. This fraction can be tuned via n2v_perc_pix.For Noise2Void training it is possible to pass arbitrarily large patches to the training method. From these patches random subpatches of size n2v_patch_shape are extracted during training. The default patch shape is set to (64, 64). In the past we experienced bleedthrough artifacts between channels if training was terminated too early. To counter bleedthrough we added the `single_net_per_channel` option, which is turned on by default. Behind the scenes a single U-Net for each channel is created and trained independently, thereby removing the possibility of bleedthrough. __Note:__ Essentially the network gets multiplied by the number of channels, which increases the memory requirements. If your GPU runs out of memory, you can always split the channels manually and train a network for each channel one after another. Warning: to make this example notebook execute faster, we have set train_epochs to only 10. For better results we suggest 100 to 200 train_epochs.
###Code
# train_steps_per_epoch is set to (number of training patches)/(batch size), like this each training patch
# is shown once per epoch.
config = N2VConfig(X, unet_kern_size=3,
train_steps_per_epoch=int(X.shape[0]/128), train_epochs=10, train_loss='mse', batch_norm=True,
train_batch_size=128, n2v_perc_pix=0.198, n2v_patch_shape=(64, 64),
n2v_manipulator='uniform_withCP', n2v_neighborhood_radius=5, structN2Vmask = [[0,1,1,1,1,1,1,1,1,1,0]])
# Let's look at the parameters stored in the config-object.
vars(config)
# a name used to identify the model
model_name = 'n2v_2D'
# the base directory in which our model will live
basedir = 'models'
# We are now creating our network model.
model = N2V(config, model_name, basedir=basedir)
###Output
/home/tbuchhol/Gitrepos/n2v/n2v/models/n2v_standard.py:405: UserWarning: output path for model already exists, files may be overwritten: /home/tbuchhol/Gitrepos/n2v/examples/2D/structN2V_2D_convallaria/models/n2v_2D
'output path for model already exists, files may be overwritten: %s' % str(self.logdir.resolve()))
###Markdown
TrainingTraining the model will likely take some time. We recommend monitoring the progress with TensorBoard, which allows you to inspect the losses during training. Furthermore, you can look at the predictions for some of the validation images, which can be helpful for recognizing problems early on.You can start TensorBoard in a terminal from the current working directory with tensorboard --logdir=. Then connect to http://localhost:6006/ with your browser.
###Code
# We are ready to start training now.
history = model.train(X, X_val)
###Output
Using TensorFlow backend.
###Markdown
After training, let's plot the training and validation loss.
###Code
print(sorted(list(history.history.keys())))
plt.figure(figsize=(16,5))
plot_history(history,['loss','val_loss']);
###Output
['loss', 'lr', 'n2v_abs', 'n2v_mse', 'val_loss', 'val_n2v_abs', 'val_n2v_mse']
###Markdown
Export Model in BioImage ModelZoo FormatSee https://imagej.net/N2VPrediction for details.
###Code
model.export_TF(name='Struct Noise2Void - Convallaria Example',
description='This is the Struct Noise2Void example trained on the Convallaria data in python.',
authors=["Coleman Broaddus"],
test_img=X_val[0], axes='YXC',
patch_shape=(96,96))
###Output
WARNING:tensorflow:From /home/tbuchhol/Programs/miniconda3/envs/n2v_tf2/lib/python3.7/site-packages/tensorflow/python/saved_model/signature_def_utils_impl.py:201: build_tensor_info (from tensorflow.python.saved_model.utils_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.utils.build_tensor_info or tf.compat.v1.saved_model.build_tensor_info.
INFO:tensorflow:No assets to save.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: /tmp/tmp73bqmprp/model/saved_model.pb
Model exported in BioImage ModelZoo format:
/home/tbuchhol/Gitrepos/n2v/examples/2D/structN2V_2D_convallaria/models/n2v_2D/export.bioimage.io.zip
###Markdown
StructN2V - 2D Example for Convallaria data
###Code
# We import all our dependencies.
from n2v.models import N2VConfig, N2V
import numpy as np
from csbdeep.utils import plot_history
from n2v.utils.n2v_utils import manipulate_val_data
from n2v.internals.N2V_DataGenerator import N2V_DataGenerator
from matplotlib import pyplot as plt
import urllib
import os
import zipfile
from tifffile import imread
###Output
_____no_output_____
###Markdown
Download Example Data*C. majalis* data acquired by Britta Schroth-Diez of the MPI-CBG Light Microscopy Facility.Thank you Britta!
###Code
# create a folder for our data.
if not os.path.isdir('./data'):
os.mkdir('./data')
# check if data has been downloaded already
zipPath="data/flower.tif"
if not os.path.exists(zipPath):
urllib.request.urlretrieve('https://cloud.mpi-cbg.de/index.php/s/MJPMow0bk8iv95O/download', zipPath)
data = imread("data/flower.tif")
###Output
_____no_output_____
###Markdown
Training Data Preparation For training we use the N2V_DataGenerator to extract training X and validation X_val patches.
###Code
datagen = N2V_DataGenerator()
imgs = datagen.load_imgs_from_directory(directory = "data/", dims="TYX")
print(imgs[0].shape)
# The function automatically added an extra "channels" dimension to the images at the end
# Let's look at the images.
# Select channel=0 in the last dimension, as `imshow()` doesn't really understand channels
plt.imshow(imgs[0][0,...,0], cmap='magma')
plt.show()
# split up image into little non-overlapping patches for training.
# y<832 (top of image) is training, y>=832 (bottom of image) is validation
imgs_train = [imgs[0][:,:832]]
X = datagen.generate_patches_from_list(imgs_train,shape=(96,96))
imgs_vali = [imgs[0][:,832:]]
X_val = datagen.generate_patches_from_list(imgs_vali,shape=(96,96))
# Patches are created so they do not overlap.
# (Note: this is not the case if you specify a number of patches. See the docstring for details!)
# Just in case you don't know how to access the docstring of a method:
datagen.generate_patches_from_list?
# Let's look at one of our training and validation patches.
plt.figure(figsize=(14,7))
plt.subplot(1,2,1)
plt.imshow(X[0,...,0], cmap='magma')
plt.title('Training Patch');
plt.subplot(1,2,2)
plt.imshow(X_val[0,...,0], cmap='magma')
plt.title('Validation Patch');
###Output
_____no_output_____
###Markdown
Configure Noise2Void comes with a special config-object, where we store network-architecture and training-specific parameters. See the docstring of the N2VConfig constructor for a description of all parameters.When creating the config-object, we provide the training data X. From X we extract the mean and std that will be used to normalize all data before it is processed by the network. We also extract the dimensionality and number of channels from X.Compared to supervised training (i.e. traditional CARE), we recommend using N2V with an increased train_batch_size and batch_norm.To keep the network from learning the identity we have to manipulate the input pixels during training. For this we have the parameter n2v_manipulator with default value 'uniform_withCP'. Most pixel manipulators will compute the replacement value based on a neighborhood. With n2v_neighborhood_radius we can control its size. Other pixel manipulators:* normal_withoutCP: samples the neighborhood according to a normal Gaussian distribution, but without the center pixel* normal_additive: adds a random number to the original pixel value. The random number is sampled from a Gaussian distribution with zero mean and sigma = n2v_neighborhood_radius* normal_fitted: uses a random value from a Gaussian normal distribution with mean equal to the mean of the neighborhood and standard deviation equal to the standard deviation of the neighborhood.* identity: performs no pixel manipulationFor faster training multiple pixels per input patch can be manipulated. In our experiments we manipulated about 0.198% of the input pixels per patch. For a patch size of 64 by 64 pixels this corresponds to about 8 pixels. This fraction can be tuned via n2v_perc_pix.For Noise2Void training it is possible to pass arbitrarily large patches to the training method. From these patches random subpatches of size n2v_patch_shape are extracted during training. The default patch shape is set to (64, 64). In the past we experienced bleedthrough artifacts between channels if training was terminated too early. To counter bleedthrough we added the `single_net_per_channel` option, which is turned on by default. Behind the scenes a single U-Net for each channel is created and trained independently, thereby removing the possibility of bleedthrough. __Note:__ Essentially the network gets multiplied by the number of channels, which increases the memory requirements. If your GPU runs out of memory, you can always split the channels manually and train a network for each channel one after another. Warning: to make this example notebook execute faster, we have set train_epochs to only 10. For better results we suggest 100 to 200 train_epochs.
###Code
# train_steps_per_epoch is set to (number of training patches)/(batch size), like this each training patch
# is shown once per epoch.
config = N2VConfig(X, unet_kern_size=3,
train_steps_per_epoch=int(X.shape[0]/128), train_epochs=10, train_loss='mse', batch_norm=True,
train_batch_size=128, n2v_perc_pix=0.198, n2v_patch_shape=(64, 64),
n2v_manipulator='uniform_withCP', n2v_neighborhood_radius=5, structN2Vmask = [[0,1,1,1,1,1,1,1,1,1,0]])
# Let's look at the parameters stored in the config-object.
vars(config)
# a name used to identify the model
model_name = 'n2v_2D'
# the base directory in which our model will live
basedir = 'models'
# We are now creating our network model.
model = N2V(config, model_name, basedir=basedir)
###Output
/home/tbuchhol/Gitrepos/n2v/n2v/models/n2v_standard.py:405: UserWarning: output path for model already exists, files may be overwritten: /home/tbuchhol/Gitrepos/n2v/examples/2D/structN2V_2D_convallaria/models/n2v_2D
'output path for model already exists, files may be overwritten: %s' % str(self.logdir.resolve()))
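###Markdown
As a quick sanity check of the n2v_perc_pix arithmetic mentioned above (a minimal sketch; the numbers come from the text, not from the config API):
###Code
# Sketch: how many pixels does n2v_perc_pix = 0.198 (given in percent)
# manipulate in a 64x64 patch? Should be roughly 8, as stated above.
patch_pixels = 64 * 64
n2v_perc_pix = 0.198
print(patch_pixels * n2v_perc_pix / 100)  # ~8.1 pixels per patch
###Output
_____no_output_____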
###Markdown
TrainingTraining the model will likely take some time. We recommend monitoring the progress with TensorBoard, which allows you to inspect the losses during training. Furthermore, you can look at the predictions for some of the validation images, which can be helpful to recognize problems early on.You can start TensorBoard in a terminal from the current working directory with tensorboard --logdir=. Then connect to http://localhost:6006/ with your browser.
###Code
# We are ready to start training now.
history = model.train(X, X_val)
###Output
Using TensorFlow backend.
###Markdown
After training, let's plot the training and validation loss.
###Code
print(sorted(list(history.history.keys())))
plt.figure(figsize=(16,5))
plot_history(history,['loss','val_loss']);
###Output
['loss', 'lr', 'n2v_abs', 'n2v_mse', 'val_loss', 'val_n2v_abs', 'val_n2v_mse']
###Markdown
Export Model in BioImage ModelZoo FormatSee https://imagej.net/N2VPrediction for details.
###Code
model.export_TF(name='Struct Noise2Void - Convallaria Example',
description='This is the Struct Noise2Void example trained on the Convallaria data in python.',
authors=["Coleman Broaddus"],
test_img=X_val[0], axes='YXC',
patch_shape=(96,96))
###Output
WARNING:tensorflow:From /home/tbuchhol/Programs/miniconda3/envs/n2v_tf2/lib/python3.7/site-packages/tensorflow/python/saved_model/signature_def_utils_impl.py:201: build_tensor_info (from tensorflow.python.saved_model.utils_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.utils.build_tensor_info or tf.compat.v1.saved_model.build_tensor_info.
INFO:tensorflow:No assets to save.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: /tmp/tmp73bqmprp/model/saved_model.pb
Model exported in BioImage ModelZoo format:
/home/tbuchhol/Gitrepos/n2v/examples/2D/structN2V_2D_convallaria/models/n2v_2D/export.bioimage.io.zip
|
02_thermo/.ipynb_checkpoints/01_Thermo-checkpoint.ipynb | ###Markdown
Review of the main principles of thermodynamics. Summary of the key principles of thermodynamics1. Thermodynamics is a **phenomenological** theory. Phenomenological means that the macroscopic phenomena are described in terms of a few quantities which can be observed and measured by macroscopic devices without any reference to microscopic details.2. The variables one deals with in thermodynamics can be classified as **extensive** and **intensive**. The former depend on the size of the system (volume, number of particles, energy, entropy); the latter are size independent (temperature, pressure, magnetic field, etc). **Extensive variables** are a privileged set of variables because they uniquely describe the equilibrium states of matter. **Intensive variables** are derived from extensive ones and are conjugate pairs to extensive ones. E.g. $V-P$, $S-T$, $N-\mu$ are conjugate pairs. Conjugate means one can replace extensive variables by intensive variables through a Legendre transformation.3. **Equilibrium** is a special state of matter where the simplest description is possible in terms of extensive variables or a properly chosen set of extensive+intensive variables. An equilibrium state is defined as a state where, on the timescale of interest, no measurable variable displays any change over time. In particular there are no macroscopic fluxes or flows of any form of energy or matter. In equilibrium, macroscopic matter assumes a particularly simple description in terms of a **few extensive quantities**. 4. The **fundamental equation** of thermodynamics is the equation that binds together all extensive variables, e.g. $E(S,V,N_1, N_2, ...)$. 5. **Transformations between equilibrium states** are the central task of thermodynamics. Thermodynamics is fully equipped to predict the equilibrium state B which results from equilibrium state A through a spontaneous transformation upon removal of a **constraint.** 6. **Quasi-static path: a dense succession of equilibrium states** that connects A with B in the space of extensive variables is constructed in order to compute changes in thermodynamic variables between states A and B. This equilibrium path is necessarily quasi-static to ensure that the system does not deviate from equilibrium during the transformation. The quasi-static path can also be reversible when the path from B to A can be re-traced with zero change in the universe while the system remains in a state of equilibrium. This necessitates the introduction of entropy, which differentiates reversible from non-reversible changes. 7. Thermodynamic space is foliated into non-crossing **adiabats**. These adiabats are surfaces on which the system can be transformed reversibly. The only way to "jump" from one adiabat to another is by heating or cooling the system, i.e. by transfer of heat. 8. The second law establishes the directionality of processes. The first law is a reflection of the conservation of "mechanical energy" in many-body systems such as those studied in thermodynamics. 9. 
Any change in an adiabatic system is accompanied either by an entropy increase (non-equilibrium change) or by the entropy remaining the same (equilibrium change). Basic features of macrosystems Let us list some of the most conspicuous features of macroscopic systems consisting of many particles:- **Additivity of energy**- **Irreversibility of time evolution.** - **Simplicity and stability of equilibrium states.**- **"Invisibility" of fluctuations** On Additivity of EnergyThe additivity of energy holds if we assume a pairwise potential description between particles and that these potentials decay with distance faster than $r^{-3}$ in 3D.
###Code
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
from ipywidgets import widgets
import matplotlib.pyplot as plt
import numpy as np
import scipy as sci
def U_LJ6_12(r, sig=1, eps=1):
'''Classic 6-12 Lennard-Jones potential
INPUT
r: interatomic distance in units sigma
sigma: atomic/particle size
OUTPUT
E: energy in units of epsilon
'''
x=r/sig
inv_r6 = 1/x**6
inv_r12 = inv_r6**2
return 4*eps*(inv_r12 - inv_r6)
def U_DH(r, a=1):
'''Screened electrostatic potential
'''
return 1/r * np.exp(-a*r)
fig, ax = plt.subplots(nrows=1, ncols=2,figsize=(11,4))
dist = np.linspace(1, 4,100)
ax[0].plot(dist, U_LJ6_12(dist,1,1),'--',lw=3,color='orange')
ax[0].set_xlabel('$r, [\sigma]$',fontsize=12)
ax[0].set_ylabel('$U_{LJ}(r)$',fontsize=12)
ax[1].plot(dist, U_DH(dist,1),'--',lw=3,color='green')
ax[1].set_xlabel('$r, [\sigma]$',fontsize=12)
ax[1].set_ylabel('$U_{DH}(r)$',fontsize=12)
ax[0].grid('on')
ax[1].grid('on')
###Output
_____no_output_____
###Markdown
On Irreversibility[Poincaré recurrence theorem](https://en.wikipedia.org/wiki/Poincar%C3%A9_recurrence_theorem)If you play bridge long enough you will eventually be dealt any grand-slam hand, not once but several times. A similar thing is true for mechanical systems governed by Newton's laws, as the French mathematician Henri Poincaré (1854-1912) showed with his recurrence theorem in 1890: if the system has a fixed total energy that restricts its dynamics to bounded subsets of its phase space, the system will eventually return as closely as you like to any given initial set of molecular positions and velocities. If the entropy is determined by these variables, then it must also return to its original value, so if it increases during one period of time it must decrease during another.```{figure} ./figs/recurrence.jpg---height: 400pxname: directive-fig---```- Zermelo is right for small systems: a dynamical system will always return to its starting configuration, hence irreversibility is not a property of microscopic systems. - Boltzmann is right for large systems because the likelihood of a recurrence happening for a macrosystem is beyond the lifetime of the universe. Case closed. Extensive vs IntensiveThe **extensive variables (E,V,N)** are a privileged set of variables in thermodynamic space because:- They are proportional to the size of the system - They uniquely describe macroscopic states - Only mechanics/electromagnetism is needed, without introducing the derived notions of heat and temperature. The **intensive variables (T, P, $\mu$)** are derived from extensive variables and are therefore derived, convenient variables for controlling experiments. Thus, intensive variables do not have the same status as extensive variables. - A glass of water with and without an ice cube can both be under 1 atm and 0 C, whereas the values of energy, entropy and volume will be different. Thermodynamic coordinates and thermodynamic space. - A state of equilibrium is completely defined as a point in the space of thermodynamic coordinates: $E, V, N, S$. These coordinates have **unique** and well defined values for each equilibrium state irrespective of how such a state was created, whether through a violent non-equilibrium process or a calm quasi-static sequence of equilibrium states. This is why the functions of extensive variables $E(S,V,N)$ or $S(E,V,N)$ are called **state functions** and their changes are given by the difference between final and initial states only: $\Delta E = E_f - E_i$, $\Delta S = S_f - S_i$. The work $W$ or heat $Q$, on the other hand, are process dependent, characterizing the way energy is transferred to the system and not characterizing the equilibrium states themselves. - The study of thermodynamic processes then boils down to the study of transformations from equilibrium A to equilibrium B in the **thermodynamic space** spanned by the thermodynamic coordinates, e.g. computing $\Delta E = E_B - E_A$. - To compute changes between equilibrium states A and B we construct a reversible (read: equilibrium) and quasi-static path connecting the two states, which allows writing down exact differentials for the state changes. 
Reversible, quasistatic process```{figure} ./figs/adiabat.png---height: 400pxname: directive-fig---``` Planck's statement of the 2nd law```{figure} ./figs/plank.png---height: 400pxname: directive-fig---```> "Planck's principle: For any adiabatic process with all the work coordinates returning to their original values, $\Delta E \geq 0$." M. Planck > In other words, doing pure mechanical work on an insulated (read: adiabatic) system with no net change in mechanical variables results in the energy either going up or remaining unchanged, $\Delta E \geq 0$. Thus we cannot, through mechanical work, "steal" energy away from a closed system without any other change in the environment. Thermodynamic space is made up of non-crossing adiabats. ```{figure} ./figs/Adiabats.png---height: 400pxname: directive-fig---``` Nope-1```{figure} ./figs/NO1.png---height: 400pxname: directive-fig---``` Nope-2```{figure} ./figs/NO2.png---height: 400pxname: directive-fig---``` First LawThe mechanical energy conservation law extended to many-body thermal systems$$dE = \delta Q +\delta W$$ Second LawFor an adiabatic quasi-static process the entropy always increases or remains the same (equilibrium change) $$dS \geq 0$$ Gibbs relationGiven the energy as a function of extensive variables $E(S,V,N)$ we can write down its full differential. $$dE = \Big(\frac{\partial E}{\partial S} \Big)_{V,N}dS+ \Big(\frac{\partial E}{\partial V} \Big)_{S,N}dV+\Big(\frac{\partial E}{\partial N} \Big)_{S,V}dN$$We identify **intensive variables** conjugate to extensive variables:- $$T = \Big(\frac{\partial E}{\partial S} \Big)_{V,N}$$- $$P = -\Big(\frac{\partial E}{\partial V} \Big)_{S,N}$$- $$\mu = \Big(\frac{\partial E}{\partial N} \Big)_{S,V}$$This is known as the **Gibbs relation** in thermodynamics and is a starting point for thermodynamic calculations$$\boxed{dE= TdS - pdV +\mu dN}$$ Gibbs-Duhem relationThe extensivity property implies linear scaling with respect to extensive variables. In other words extensive variables are additive quantities $$E(\lambda S,\lambda V,\lambda N) = \lambda E(S,V,N)$$ Differentiating with respect to $\lambda$ (Euler's theorem) gives $$E = \Big(\frac{\partial E}{\partial (\lambda S)} \Big)_{V,N}S+ \Big(\frac{\partial E}{\partial (\lambda V)} \Big)_{S,N}V+\Big(\frac{\partial E}{\partial (\lambda N)} \Big)_{S,V}N$$$$E = TS -PV +\mu N$$Now take the differential of E and compare with the Gibbs relation$$\boxed{SdT-VdP+Nd\mu =0}$$ Other useful thermodynamic derivatives Heat capacities at constant P and V. Thermal stability requires $c_v,c_p\geq 0$$$C_p = \Big(\frac{\delta Q}{dT} \Big)_{p,N}$$$$C_v = \Big(\frac{\delta Q}{dT} \Big)_{V,N}$$ Expansion and compression coefficients. Mechanical stability requires $\kappa_T\geq 0$- **Thermal expansion coeff:** $$\alpha = \frac{1}{V}\Big(\frac{\partial V}{\partial T} \Big)_{p,N}$$- **Isothermal compressibility coeff:** $$\kappa_T = -\frac{1}{V}\Big(\frac{\partial V}{\partial P} \Big)_{T,N}$$ Ideal Gas entropy example$$dS = \frac{1}{T}dE + \frac{P}{T}dV$$- $E = \frac{3}{2}Nk_B T$ and $PV = Nk_BT$ for a monoatomic gas$$dS = \frac{3Nk_B}{2E}dE + \frac{Nk_B}{V}dV$$$$S(E,V,N) = \frac{3}{2}Nk_B \log \frac{E}{N} +Nk_B \log \frac{V}{N} + const$$ Convexity of Entropy and Concavity of Energy The entropy S(E,V,N) of a composite system is additive over each one of the individual components. 
The entropy is therefore a continuous, differentiable, and monotonically increasing function of the energy $S(E)$![](./figs/concave_convex.png) Exercise find your equilibriumThe fundamental equations of both systems $A$ and $B$ are $$ S = \left (\frac{R^2}{v_0\theta} \right )^{1/3} \left ( N V U \right )^{1/3} $$- The volume and mole number of system $A$ are $ 9 \times 10^{-6}\ m^3 $ and $3$ mol, respectively, - and of system $B$ are $ 4 \times 10^{-6}\ m^3 $ and $2$ mol, respectively. First suppose $A$ and $B$ are completely isolated from one another. Plot the total entropy $S_A + S_B$ as a function of $U_A/(U_A + U_B)$, where $U_A + U_B = 80$ J. If $A$ and $B$ were connected by a diathermal wall and the pair allowed to come to equilibrium, what would $U_A$ and $U_B$ be? Call$$ X = \frac{U_A}{U_A + U_B}$$we know $U_A + U_B = 80$, therefore$$ U_A = 80X,\hspace{20pt} U_B = 80(1 - X) $$Then setting $R, v_0, \theta = 1 $ and plugging in $V_A$, $V_B$, $N_A$ and $N_B$:$S = S_A + S_B = \left(3 \times 9 \times 10^{-6} \times 80X \right)^{1/3} + \left(2 \times 4 \times 10^{-6} \times 80(1-X)\right)^{1/3} = 0.086(1-X)^{1/3} + 0.129X^{1/3}$Entropy is maximized when $X = 0.65$, which is where we would expect the system to go at equilibrium once the internal wall is made diathermal.
###Code
import matplotlib.pyplot as plt
import numpy as np
X = np.linspace(0,1,100)
S = 0.086 * (1 - X)**(1./3) + 0.129 * (X**(1./3))
plt.plot(X, S,'-o')
plt.xlabel('X')
plt.ylabel('S(X)')
###Output
_____no_output_____
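###Markdown
We can also locate the maximum numerically, reusing the X and S arrays defined above (a quick check, not part of the original exercise statement):
###Code
# The grid point with the largest total entropy; should agree with X ~ 0.65
i_max = np.argmax(S)
print(f"S is maximized at X = {X[i_max]:.2f}")
###Output
_____no_output_____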
###Markdown
Free Energies: Swapping extensive variables for intensive ones$$E(S,V,N) \rightarrow A(T,V,N)$$$$E(S,V,N) \rightarrow G(T,p,N)$$$$E(S,V,N) \rightarrow \Omega(T,V,\mu)$$ Legendre Transform of convex functions. Generally speaking, the Legendre transform turns one convex function $f(x)$ into another $f^*(\alpha)$. Moreover, the transformation is involutive, meaning it is its own inverse: if we apply the Legendre transform to a function of a single variable twice, we get back the original function! ```{figure} ./figs/Legendre.png---height: 500pxname: directive-fig---```$$f^*(\alpha) = \max_x \big [{\alpha x - f(x)} \big ]$$$$f(x) = \max_{\alpha} \big [ {\alpha x - f^*(\alpha)} \big ]$$ Example of Legendre transform-1$$f(x) = x^2$$$$a = f'(x) =2x \rightarrow x = a/2 $$$$g(a) = f^*(a) = \max_x \Big[ a x - f(x) \Big ] = a^2/2 - a^2/4 = a^2/4$$
###Code
f = lambda x: x**2
g = lambda a: a*(a/2) - f(a/2) # deriv f(x) = 2x = a ---> x = a/2
@widgets.interact(a=(0,2,0.2))
def legendre_transf(a):
fig,ax =plt.subplots(nrows=1,ncols=2, figsize = (10,4))
x = np.linspace(0,1,100)
ax[0].plot(x,f(x),lw=3)
ax[0].plot(x, a*x-g(a),'--')
ax[0].set_title('$f(x) = x^2$')
ax[0].legend(['f(x)', f"$y = ax-g(a)$ = {a}x -{g(a):.2f}"])
ax[0].set_xlim(0,1.2)
ax[0].set_ylim(0,1.2)
ax[0].set_xlabel('x',fontsize=20)
ax[0].set_ylabel('f(x)',fontsize=20)
ax[0].grid('on')
ax[1].set_title('$g(a) = max_x [ax-f(x)] = a^2/4$')
ax[1].plot(a,g(a),'o',color='orange',ms=12)
ax[1].plot(np.linspace(0,2,10),g(np.linspace(0,2,10)),'-',lw=3, color='red')
ax[1].set_xlim(0,1.2)
ax[1].set_ylim(0,1.2)
ax[1].set_xlabel('a',fontsize=20)
ax[1].set_ylabel('g(a)',fontsize=20)
ax[1].grid('on')
###Output
_____no_output_____
###Markdown
Example of Legendre transform-2$$f(x) = e^x$$$$a = f'(x) =e^x \rightarrow x = \log a$$$$g(a) = \max_x \Big[ a x - f(x) \Big ] = a(\log a-1)$$
###Code
f2 = lambda x: np.exp(x)
g2 = lambda a: a*np.log(a) - f2(np.log(a)) # deriv f(x) = e^x = a ---> x = log a
@widgets.interact(a=(1,3,0.2))
def legendre_transf(a):
fig,ax =plt.subplots(nrows=1,ncols=2, figsize = (10,4))
x = np.linspace(0,1,100)
ax[0].plot(x,f2(x),lw=3)
ax[0].plot(x, a*x-g2(a),'--')
ax[0].set_title('$f(x) = e^x$')
ax[0].legend(['f(x)', f"$y = ax-g(a)$ = {a:.2f}x-{g2(a):.2f}"])
ax[0].set_xlim(0,1.2)
ax[0].set_ylim(0,3)
ax[0].set_xlabel('x',fontsize=20)
ax[0].set_ylabel('f(x)',fontsize=20)
ax[0].grid('on')
ax[1].set_title('$g(a) = max_x [ax-f(x)]= a(log a-1)$')
ax[1].plot(a, g2(a), 'o', color='orange', ms=12)  # fixed: use g2, not g from the previous example
aa = np.linspace(0.1, 3, 30)  # start above 0 to avoid log(0)
ax[1].plot(aa, g2(aa), '-', lw=3, color='red')
ax[1].set_xlim(0,3)
ax[1].set_ylim(0,3)
ax[1].set_xlabel('a',fontsize=20)
ax[1].set_ylabel('g(a)',fontsize=20)
ax[1].grid('on')
###Output
_____no_output_____
###Markdown
Legendre Transform numerically (via numpy/scipy)
###Code
import scipy.optimize  # note: 'import scipy' alone does not load the optimize submodule
def legendre_transf(f, a=1, guess_0=0):
    '''Legendre transform of function f
    INPUT:
    f <-- function
    a <-- value of the new variable
    guess_0 <-- initial guess for the optimizer
    OUTPUT:
    g(a) = max_x[a*x - f(x)], the Legendre transform at point a
    '''
    min_x, = scipy.optimize.fmin(lambda x: f(x) - a*x, guess_0)
    return a*min_x - f(min_x)
f = lambda x: x**2+x**4
#g = [legendre_transf(f, a) for a in np.linspace(0,1,100)]
###Output
_____no_output_____
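###Markdown
As a quick check of the numerical routine (a sketch; we reuse the analytic result $g(a) = a^2/4$ for $f(x) = x^2$ derived earlier):
###Code
# Numerical Legendre transform of f(x) = x^2 at a = 1; analytic answer is 1/4
g_num = legendre_transf(lambda x: x**2, a=1)
print(g_num, 'vs analytic', 0.25)
###Output
_____no_output_____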
###Markdown
Legendre Transform via SymPy
###Code
from sympy import *
x, x_min, a, a_min, f, g = symbols('x x_min a a_min f g') # Define symbols
f = x**2 # Define function of x
x_min, = solve(a-diff(f,x), x) # solve a = f'(x) for the maximizing x
g = a*x_min - f.subs(x,x_min) # Define function of a as the Legendre transform of f(x)
f, g
ff = lambdify(x, f)
gg = lambdify(a, g)
xs = np.linspace(0, 1, 100)  # numeric grid for the lambdified functions
plt.plot(xs, ff(xs), label='f(x)')
plt.plot(xs, gg(xs), label='g(a)')
plt.legend();
###Output
_____no_output_____ |
2_Robot_Localization/.ipynb_checkpoints/8_1. Multiple Movements, exercise-checkpoint.ipynb | ###Markdown
Multiple MovementsLet's see how our robot responds to moving multiple times without sensing! First let's include our usual resource imports and display function.
###Code
# importing resources
import matplotlib.pyplot as plt
import numpy as np
###Output
_____no_output_____
###Markdown
A helper function for visualizing a distribution.
###Code
def display_map(grid, bar_width=1):
if(len(grid) > 0):
x_labels = range(len(grid))
plt.bar(x_labels, height=grid, width=bar_width, color='b')
plt.xlabel('Grid Cell')
plt.ylabel('Probability')
plt.ylim(0, 1) # range of 0-1 for probability values
plt.title('Probability of the robot being at each cell in the grid')
plt.xticks(np.arange(min(x_labels), max(x_labels)+1, 1))
plt.show()
else:
print('Grid is empty')
###Output
_____no_output_____
###Markdown
QUIZ: Write code that moves 1000 times and then prints the resulting probability distribution.You are given the initial variables and a complete `move` function (that incorporates uncertainty), below.
###Code
# given initial variables
p=[0, 1, 0, 0, 0]
# the color of each grid cell in the 1D world
world=['green', 'red', 'red', 'green', 'green']
# Z, the sensor reading ('red' or 'green')
Z = 'red'
pHit = 0.6
pMiss = 0.2
pExact = 0.8
pOvershoot = 0.1
pUndershoot = 0.1
# Complete the move function
def move(p, U):
q=[]
# iterate through all values in p
for i in range(len(p)):
# use the modulo operator to find the new location for a p value
# this finds an index that is shifted by the correct amount
index = (i-U) % len(p)
nextIndex = (index+1) % len(p)
prevIndex = (index-1) % len(p)
s = pExact * p[index]
s = s + pOvershoot * p[nextIndex]
s = s + pUndershoot * p[prevIndex]
# append the correct, modified value of p to q
q.append(s)
return q
# Move 1000 times and then print/display the resulting distribution
for i in range(1000):
    p = move(p, 1)

print(p)
display_map(p)
###Output
_____no_output_____ |
Jupyter_Notebooks/Second_Ops_Parsergen_Notebook.ipynb | ###Markdown
GENIE ParsergenIn addition to using the Ops package to retrieve and parse operational state of a device, the Genie Parsergen Class provides a one-step parsing mechanism that is capable of parsing dynamic tabular and non-tabular device outputs in noticeably fewer lines of code compared to standard parsing mechanisms. The Parsergen Class is particularly useful where Genie Ops does not have a model for the particular state you are looking to parse. As an example, there is currently no Genie Ops Model for NVE/VXLAN. This gap can be overcome by creating the parser, which can then be leveraged by pyATS/GENIE. The objective of the remaining exercises is to * Parse VXLAN relevant state* Create an Ops library* Run a pyATS easypy script to test the condition of VXLAN stateTabular ParsingThe Genie Parsergen Class can deal with both tabular and non-tabular device output from a networking device. We shall initially explore tabular parsing.Consider the output from the show command 'show nve vni'```Interface VNI Multicast-group VNI state Mode BD cfg vrf nve1 6001 N/A Up L2DP 1 CLI N/A ```As can be seen above this is a column based/tabular output. In order to parse this output we need to instruct parsergen as to the titles of the columns. Follow the commands below to parse the output of 'show nve vni'. As in previous sections initiate the testbed topology and import the relevant libraries for this exercise
###Code
import pprint
from genie.conf import Genie
from genie import parsergen
from genie.libs.ops.interface.iosxe.interface import Interface
testbed = Genie.init('../scripts/vagrant_single_ios.yaml')
uut = testbed.devices.iosxe1
uut.connect()
###Output
_____no_output_____
###Markdown
The testbed object 'uut.device' has a method named execute. Execute will run the command on the device and return a string as the result of the command
###Code
output = uut.device.execute('show nve vni')
###Output
_____no_output_____
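###Markdown
A quick check (sketch) that execute indeed returned a plain string we can parse:
###Code
# The raw CLI output is just a string; show its type and first line
print(type(output))
print(output.splitlines()[0])
###Output
_____no_output_____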
###Markdown
A list identifying the headers of the expected column output is created
###Code
header = ['Interface', 'VNI', 'Multicast-group', 'VNI state', 'Mode', 'BD', 'cfg', 'vrf']
###Output
_____no_output_____
###Markdown
We will now use the parsergen oper_fill_tabular method to parse the string and store as structured data
###Code
result = parsergen.oper_fill_tabular(device_output=output, device_os='iosxe', header_fields=header, index=[0])
###Output
_____no_output_____
###Markdown
Now print the structured data returned
###Code
pprint.pprint(result.entries)
###Output
_____no_output_____
###Markdown
Determine the type of the result object entries attribute
###Code
type(result.entries)
###Output
_____no_output_____
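###Markdown
Because result.entries is a plain dictionary keyed by the index column (Interface), individual fields can be read directly. A minimal sketch, assuming the sample output shown earlier (interface nve1, VNI 6001):
###Code
# Access one parsed field; the inner keys follow the header list defined above
print(result.entries['nve1']['VNI'])  # expected: '6001'
###Output
_____no_output_____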
###Markdown
As you will see the returned data is now structured data in the form of a dictionary GENIE Non Tabular ParsingNot all output from the device will be in tabular form. Parsergen can deal with non-tabular returned data. Parsergen tries to match a given set of data using regular expressions that describe the values found in the show command output.Consider the following output from the _show nve interface nve 1_ command. We shall parse the data to retrieve Source_Interface and Primary address based upon an encapsulation of Vxlan
###Code
from pprint import pprint
from genie.conf import Genie
from genie import parsergen
testbed = Genie.init('../scripts/vagrant_single_ios.yaml')
uut = testbed.devices.iosxe1
uut.connect()
###Output
_____no_output_____
###Markdown
Create a dictionary of show commands. Only one show command for IOSXE in this instance
###Code
show_cmds = {
'iosxe': {
'show_int' : "show nve interface {}",
}
}
###Output
_____no_output_____
###Markdown
Create a dictionary of regular expressions to capture the elements required in the output. The example has regular expressions that will capture the encapsulation type, the source interface and the primary address. A useful tool for creating and validating Python _re_ based regular expressions can be found here: [Pythex](https://pythex.org/)
###Code
regex = {
'iosxe': {
'nve.intf.if_encap': r'[a-zA-Z0-9\:\,\s]+Encapsulation:\s+(\w+),',
'nve.intf.source_intf': r'^source-interface:\s+(\w+)',
'nve.intf.primary': r'[a-zA-Z0-9\:\,a-zA-Z0-9\s]+\(primary:([A-Fa-f0-9:\.]+)'
}
}
regex_tags = {
'iosxe': ['nve.intf.if_encap', 'nve.intf.source_intf', 'nve.intf.primary']
}
###Output
_____no_output_____
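###Markdown
Before extending parsergen, the regular expressions can be tried out directly with Python's re module (a quick sketch against a sample line from the show output above):
###Code
import re
# Test the source-interface pattern against a sample line of device output
sample = 'source-interface: Loopback10 (primary:172.16.10.1 vrf:0)'
match = re.search(regex['iosxe']['nve.intf.source_intf'], sample)
print(match.group(1))  # expected: 'Loopback10'
###Output
_____no_output_____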
###Markdown
'Extend' the Parsergen Class to include the show commands and the regular expressions
###Code
parsergen.extend(show_cmds=show_cmds, regex_ext=regex, regex_tags=regex_tags)
###Output
_____no_output_____
###Markdown
Now determine the parameters you wish to start the regex search on. The first item in the tuple is the key name of the regex value, the second item is the value being searched; in this case all interfaces with Vxlan encapsulation
###Code
attrValPairsToParse = [('nve.intf.if_encap', 'Vxlan')]
###Output
_____no_output_____
###Markdown
Finally we create the object pgfill by calling the _parsergen.oper\_fill_ method. The arguments in this method will* determine the device to be called (uut)* determine which show command to call from the key show_int and use nve1 as the interface name for the show command* provide the attribute value pairs to search on* and use the defined regular expressions that begin with _nve.intf_
###Code
pgfill = parsergen.oper_fill (
uut,
('show_int', ['nve1']),
attrValPairsToParse,
refresh_cache=True,
regex_tag_fill_pattern='nve\.intf')
###Output
_____no_output_____
###Markdown
Now call the parse method of pgfill to populate the parsergen ext_dictio attribute with the parsed items
###Code
pgfill.parse()
###Output
_____no_output_____
###Markdown
Display the completed parse with
###Code
pprint(parsergen.ext_dictio)
###Output
_____no_output_____
###Markdown
Disconnect from the device
###Code
uut.disconnect()
###Output
_____no_output_____
###Markdown
Using Markup Text to parse Non Tabular OutputRather than explicitly defining regular expressions for each item to retrieve, as an alternative we can use a special CLI command markup format that will automatically generate the regular expressions. If you have an iPython session running, close and restart iPython. Initiate an iPython interactive session and initialise the testbed
###Code
import pprint
from genie.conf import Genie
from genie import parsergen
testbed = Genie.init('../scripts/vagrant_single_ios.yaml')
uut = testbed.devices.iosxe1
uut.connect()
###Output
_____no_output_____
###Markdown
Enter the following to assign the _marked up_ string to the variable markedupIOSX
###Code
markedupIOSX = '''
OS: iosxe
CMD: show_nve_interface
SHOWCMD: show nve interface {ifname}
PREFIX: nve.intf
ACTUAL:
Interface: nve1, State: Admin Up, Oper Up, Encapsulation: Vxlan,
BGP host reachability: Disable, VxLAN dport: 10000
VNI number: L3CP 0 L2CP 0 L2DP 1
source-interface: Loopback10 (primary:1.1.1.1 vrf:22)
MARKUP:
Interface: XW<ifname>Xnve1, State: Admin XW<state>XUp, Oper Up, Encapsulation: XW<encap>XVxlan,
BGP host reachability: Disable, VxLAN dport: XN<udp_port>X1000
VNI number: L3CP 0 L2CP 0 L2DP 1
source-interface: XW<source_interface>XLoopback0 (primary:XA<primary_address>X1.1.1.1 vrf:XN<VRF>X22)'''
###Output
_____no_output_____
###Markdown
You will notice in the string that there are some key components**OS:** Define the operating system being used **CMD:** Used by parsergen as the dict key for the _SHOWCMD_ **SHOWCMD:** The actual show command to be issued **PREFIX:** Will be used to prefix the keys for each item parsed **ACTUAL:** Output expected from the device (optional) **MARKUP:** The output with markup added. Will be used to identify items to parse.The markup itself begins and ends with **X**, with a type letter and the key name in between. For example **XW\<ifname\>X** will assign a word value to the key nve.intf.**ifname**. A full list of markup tags is included at the bottom of this file.The remaining commands are similar to those used for parsing with regular expressions.'Extend' the Parsergen Class to include the show commands and the regular expressions
###Code
parsergen.extend_markup(markedupIOSX)
###Output
_____no_output_____
###Markdown
Now determine the parameters you wish to start the regex search on. The first item in the tuple is the key name of the regex value, the second item is the value being searched. In this instance only nve interfaces that have a Vxlan encapsulation are being considered
###Code
attrValPairsToCheck = [('nve.intf.encap', 'Vxlan'),]
###Output
_____no_output_____
###Markdown
Create an object called pgfill from the parsergen.oper_fill method in order to create a dictionary of the parsed output.
###Code
pgfill = parsergen.oper_fill(device=uut,
show_command=('show_nve_interface', [], {'ifname':'nve1'}),
attrvalpairs=attrValPairsToCheck,
refresh_cache=True,
regex_tag_fill_pattern='nve\.intf')
###Output
_____no_output_____
###Markdown
Now call the parse method for the object pgfill
###Code
pgfill.parse()
###Output
_____no_output_____
###Markdown
Print the parsed output
###Code
print(parsergen.ext_dictio)
###Output
_____no_output_____
###Markdown
Disconnect from the device
###Code
uut.disconnect()
###Output
_____no_output_____
###Markdown
**Mark Up Reference**The following are the available values for x in the XxX notation:* A - IPv4 or IPv6 address. * B - Value terminated with a close brace, bracket, or parenthesis.* C - Value terminated with a comma.* F - Floating point number.* H - Hexadecimal number.* I - Interface name.* M - Mac address.* N - Decimal number.* R - everything else to the newline.* P - IPv4 or IPv6 prefix.* Q - Value terminated by a double quote.* S - Non-space value.* T - Time (00:00:00)* W - A word. --- GENIE Creating an OPS objectWe are now going to create a VxLAN OPS object that will collate the output of the two parsers we created earlier.For the sake of brevity these two parsers have been defined within classes in the file [iosxevxlan.py](../scripts/iosxevxlan.py). The parsers also inherit from Genie Metaparser. The configuration of Metaparser is outside the scope of this workshop but further details can be found at - [Metaparser](https://pubhub.devnetcloud.com/media/pyats-packages/docs/metaparser/index.html)
###Code
import pprint
from genie.conf import Genie
testbed = Genie.init('../scripts/vagrant_single_ios.yaml')
uut = testbed.devices.iosxe1
uut.connect()
###Output
_____no_output_____
###Markdown
First we shall import the Base class from Genie ops. We will create a class that inherits from 'Base' to leverage the 'Maker' functionality. 'Maker' simplifies the process of mapping parser output to the ops object attributes. Further information on the Maker class can be found at [Maker](https://pubhub.devnetcloud.com/media/pyats-packages/docs/genie/Ops/developer/maker.html) In addition we will import the parsers that were created earlier.Enter the code below into your ipython session
###Code
from genie.ops.base import Base
from iosxevxlan import ShowNveVni,ShowNvePeers
###Output
_____no_output_____
###Markdown
We now create a class that will be our Ops object, named Vxlan. This class inherits from the Base class of Genie Ops. A method referred to as _learn_ is created. The remaining code performs the following functions * Runs a for loop issuing the commands for the parsers and then adds data (add_leaf) to the new Ops object structure.* src is the dictionary item from the parsed output. For example '[(?P\<interf\>.*)][VNI]' will equate to the value of VNI (6001)* dest is where the data will be placed in the new object structure referenced as *info*. In this case the src and dest keys are the same but this does not have to be the case* Finally make() is invoked to finalise the new object structure.
###Code
class Vxlan(Base):
def learn(self, custom=None):
# Capture output from ShowNveVni parser
src = '[(?P<interf>.*)]'
dest = 'info[(?P<interf>.*)]'
req_keys = ['[VNI]','[Multicast-group]','[VNIstate]','[Mode]']
for key in req_keys:
self.add_leaf(cmd=ShowNveVni,
src=src + key,  # req_keys entries already include brackets, e.g. '[VNI]'
dest=dest + key)
# Capture output from the ShowNvePeers parser
src = '[(?P<nvename>.*)]'
dest = 'info[(?P<nvename>.*)]'
req_keys = ['[Peer-IP]','[Router-RMAC]','[Type]','[state]']
for key in req_keys:
self.add_leaf(cmd=ShowNvePeers,
src=src + key,  # req_keys entries already include brackets, e.g. '[Peer-IP]'
dest=dest + key)
#Add ops data to the Vxlan object
self.make()
###Output
_____no_output_____
###Markdown
Finally create a new ops object called myvxlan and learn from the device
###Code
myvxlan = Vxlan(device=uut)
myvxlan.learn()
myvxlan.info
###Output
_____no_output_____
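###Markdown
The learned state is held as a plain dictionary under the info attribute, so it can be inspected or asserted against like any other dict (a minimal usage sketch):
###Code
# Walk the learned VXLAN state; keys come from the parsers' src/dest mappings
for name, attrs in myvxlan.info.items():
    print(name, attrs)
###Output
_____no_output_____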
###Markdown
Disconnect from the device
###Code
uut.disconnect()
###Output
_____no_output_____ |
tutorials/speaker_tasks/Speaker_Diarization_Inference.ipynb | ###Markdown
IntroductionWho Speaks When? Speaker Diarization is the task of segmenting audio recordings by speaker labels. A diarization system consists of a Voice Activity Detection (VAD) model, to get the time stamps of audio where speech is being spoken while ignoring the background, and a Speaker Embeddings model, to get speaker embeddings on segments that were previously time stamped. These speaker embeddings are then clustered based on the number of speakers present in the audio recording.In NeMo we support both **oracle VAD** and **non-oracle VAD** diarization. In this tutorial, we shall first demonstrate how to perform diarization with oracle VAD time stamps (we assume we already have speech time stamps) and a pretrained speaker verification model, which can be found in the tutorial for [Speaker Identification and Verification in NeMo](https://github.com/NVIDIA/NeMo/blob/main/tutorials/speaker_tasks/Speaker_Identification_Verification.ipynb).In ORACLE-VAD-DIARIZATION we show how to perform VAD and then diarization when ground truth timestamped speech is not available (non-oracle VAD). We also have tutorials for [VAD training in NeMo](https://github.com/NVIDIA/NeMo/blob/main/tutorials/asr/Voice_Activity_Detection.ipynb) and [online offline microphone inference](https://github.com/NVIDIA/NeMo/blob/main/tutorials/asr/Online_Offline_Microphone_VAD_Demo.ipynb), where you can customize your model and train/finetune on your own data.For demonstration purposes we will be using simulated audio from the [an4 dataset](http://www.speech.cs.cmu.edu/databases/an4/)
###Code
import os
import wget
ROOT = os.getcwd()
data_dir = os.path.join(ROOT,'data')
os.makedirs(data_dir, exist_ok=True)
an4_audio = os.path.join(data_dir,'an4_diarize_test.wav')
an4_rttm = os.path.join(data_dir,'an4_diarize_test.rttm')
if not os.path.exists(an4_audio):
an4_audio_url = "https://nemo-public.s3.us-east-2.amazonaws.com/an4_diarize_test.wav"
an4_audio = wget.download(an4_audio_url, data_dir)
if not os.path.exists(an4_rttm):
an4_rttm_url = "https://nemo-public.s3.us-east-2.amazonaws.com/an4_diarize_test.rttm"
an4_rttm = wget.download(an4_rttm_url, data_dir)
###Output
_____no_output_____
###Markdown
Let's plot and listen to the audio and visualize the RTTM speaker labels
###Code
import IPython
import matplotlib.pyplot as plt
import numpy as np
import librosa
sr = 16000
signal, sr = librosa.load(an4_audio,sr=sr)
fig,ax = plt.subplots(1,1)
fig.set_figwidth(20)
fig.set_figheight(2)
plt.plot(np.arange(len(signal)),signal,'gray')
fig.suptitle('Reference merged an4 audio', fontsize=16)
plt.xlabel('time (secs)', fontsize=18)
ax.margins(x=0)
plt.ylabel('signal strength', fontsize=16);
a,_ = plt.xticks();plt.xticks(a,a/sr);
IPython.display.Audio(an4_audio)
###Output
_____no_output_____
###Markdown
We will use [pyannote_metrics](https://pyannote.github.io/pyannote-metrics/) for visualization and score calculation purposes. Hence all labels in RTTM format will eventually be converted to pyannote objects; we created two helper functions, rttm_to_labels (for NeMo intermediate processing) and labels_to_pyannote_object (for scoring and visualization format)
###Code
from nemo.collections.asr.parts.utils.speaker_utils import rttm_to_labels, labels_to_pyannote_object
###Output
_____no_output_____
###Markdown
Let's load ground truth RTTM labels and view the reference Annotation timestamps visually
###Code
# view the sample rttm file
!cat {an4_rttm}
labels = rttm_to_labels(an4_rttm)
reference = labels_to_pyannote_object(labels)
print(labels)
reference
###Output
_____no_output_____
###Markdown
Speaker Diarization scripts commonly expect the following arguments:1. manifest_filepath : Path to manifest file containing json lines of format: {'audio_filepath': /path/to/audio_file, 'offset': 0, 'duration':None, 'label': 'infer', 'text': '-', 'num_speakers': None, 'rttm_filepath': /path/to/rttm/file, 'uem_filepath'='/path/to/uem/filepath'}2. out_dir : directory where outputs and intermediate files are stored. 3. oracle_vad: If this is true then we extract speech activity labels from rttm files; if False then either vad.model_path or an external manifest path containing speech activity labels has to be passed. Mandatory fields are audio_filepath, offset, duration, label and text. For the rest, if you would like to evaluate with a known number of speakers pass the value, else None. If you would like to score the system with known rttms then that should be passed as well, else None. A uem file is used to score only part of your audio for evaluation purposes, hence pass it if you would like to evaluate on it, else None.**Note** we expect the audio and corresponding RTTM to have the **same base name** and the name should be **unique**. For eg: if the audio file name is **test_an4**.wav, we expect the corresponding rttm file name to be **test_an4**.rttm (note the matching **test_an4** base name) Let's create a manifest with the an4 audio and rttm available. If you have more than one file you may also use the script `pathsfiles_to_manifest.py` to generate a manifest file from a list of audio files and optionally rttm files
###Code
# Create a manifest for input with below format.
# {'audio_filepath': /path/to/audio_file, 'offset': 0, 'duration':None, 'label': 'infer', 'text': '-',
# 'num_speakers': None, 'rttm_filepath': /path/to/rttm/file, 'uem_filepath'='/path/to/uem/filepath'}
import json
meta = {
'audio_filepath': an4_audio,
'offset': 0,
'duration':None,
'label': 'infer',
'text': '-',
'num_speakers': 2,
'rttm_filepath': an4_rttm,
'uem_filepath' : None
}
with open('data/input_manifest.json','w') as fp:
json.dump(meta,fp)
fp.write('\n')
!cat data/input_manifest.json
output_dir = os.path.join(ROOT, 'oracle_vad')
os.makedirs(output_dir,exist_ok=True)
###Output
_____no_output_____
###Markdown
ORACLE-VAD DIARIZATION Oracle-VAD diarization computes speaker embeddings from known speech label timestamps rather than depending on VAD output. This step can also be used to run speaker diarization with rttms generated from any external VAD, not just the VAD model from NeMo.The first step is to convert the reference audio rttm (VAD) time stamps to an oracle manifest file. This manifest file will be sent to our speaker diarizer to extract embeddings.This is just an argument in our config, and the system automatically computes the oracle manifest based on the rttms provided through the input manifest file. Our config file is based on [hydra](https://hydra.cc/docs/intro/). With a hydra config, we ask users to provide values for the variables filled with **???**; these are mandatory fields and the scripts expect them for successful runs. And notice some variables are filled with **null**; these are optional variables that could be provided if needed but are not mandatory.
###Code
from omegaconf import OmegaConf
MODEL_CONFIG = os.path.join(data_dir,'offline_diarization.yaml')
if not os.path.exists(MODEL_CONFIG):
config_url = "https://raw.githubusercontent.com/NVIDIA/NeMo/main/examples/speaker_tasks/diarization/conf/offline_diarization.yaml"
MODEL_CONFIG = wget.download(config_url,data_dir)
config = OmegaConf.load(MODEL_CONFIG)
print(OmegaConf.to_yaml(config))
###Output
_____no_output_____
###Markdown
Now we can perform speaker diarization based on timestamps generated from ground truth rttms rather than generating through VAD
###Code
pretrained_speaker_model='titanet_large'
config.diarizer.manifest_filepath = 'data/input_manifest.json'
config.diarizer.out_dir = output_dir # Directory to store intermediate files and prediction outputs
config.diarizer.speaker_embeddings.model_path = pretrained_speaker_model
config.diarizer.speaker_embeddings.parameters.window_length_in_sec = 1.5
config.diarizer.speaker_embeddings.parameters.shift_length_in_sec = 0.75
config.diarizer.oracle_vad = True # ----> ORACLE VAD
config.diarizer.clustering.parameters.oracle_num_speakers = True
from nemo.collections.asr.models import ClusteringDiarizer
oracle_model = ClusteringDiarizer(cfg=config)
# And lets diarize
oracle_model.diarize()
###Output
_____no_output_____
###Markdown
A DER of 0 means the speaker embeddings were clustered correctly. Let's view the prediction
###Code
!cat {output_dir}/pred_rttms/an4_diarize_test.rttm
pred_labels = rttm_to_labels(output_dir+'/pred_rttms/an4_diarize_test.rttm')
hypothesis = labels_to_pyannote_object(pred_labels)
hypothesis
reference
###Output
_____no_output_____
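###Markdown
The DER reported above can also be recomputed directly from the two pyannote Annotation objects (a sketch using pyannote.metrics, which the helper functions already target):
###Code
from pyannote.metrics.diarization import DiarizationErrorRate

# Score the predicted annotation against the ground truth reference
metric = DiarizationErrorRate()
der = metric(reference, hypothesis)
print(f"DER: {der:.4f}")  # 0.0 means a perfect match
###Output
_____no_output_____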
###Markdown
VAD DIARIZATION In this method we compute VAD time stamps using the NeMo VAD model on the input manifest file and then use these speech time stamps to find speaker embeddings, followed by clustering them into the number of speakers. Before we proceed let's look at the speaker diarization config, which we will be depending upon for VAD computation and speaker embedding extraction
###Code
print(OmegaConf.to_yaml(config))
###Output
_____no_output_____
###Markdown
As can be seen most of the variables in the config are self explanatory, with VAD variables under the vad section and speaker related variables under the speaker embeddings section. To perform VAD based diarization we can ignore `oracle_vad_manifest` in the `speaker_embeddings` section for now and need to fill in the rest. We also need to provide the pretrained `model_path` of the vad and speaker embeddings .nemo models
###Code
pretrained_vad = 'vad_marblenet'
pretrained_speaker_model = 'titanet_large'
###Output
_____no_output_____
###Markdown
Note in this tutorial, we use the VAD model MarbleNet-3x2 introduced and published in [ICASSP MarbleNet](https://arxiv.org/pdf/2010.13886.pdf). You might need to tune on a dev set similar to your dataset if you would like to improve the performance.The speakerNet-M-Diarization model achieves a 7.3% confusion error rate on the CH109 set with oracle VAD. This model is trained on the voxceleb1, voxceleb2, Fisher and SwitchBoard datasets. So for improved performance specific to your dataset, finetune the speaker verification model with a dev set similar to your test set.
###Code
output_dir = os.path.join(ROOT,'outputs')
config.diarizer.manifest_filepath = 'data/input_manifest.json'
config.diarizer.out_dir = output_dir #Directory to store intermediate files and prediction outputs
config.diarizer.speaker_embeddings.model_path = pretrained_speaker_model
config.diarizer.speaker_embeddings.parameters.window_length_in_sec = 1.5
config.diarizer.speaker_embeddings.parameters.shift_length_in_sec = 0.75
config.diarizer.oracle_vad = False # compute VAD provided with model_path to vad config
config.diarizer.clustering.parameters.oracle_num_speakers=True
#Here we use our inhouse pretrained NeMo VAD
config.diarizer.vad.model_path = pretrained_vad
config.diarizer.vad.window_length_in_sec = 0.15
config.diarizer.vad.shift_length_in_sec = 0.01
config.diarizer.vad.parameters.onset = 0.8
config.diarizer.vad.parameters.offset = 0.6
config.diarizer.vad.parameters.min_duration_on = 0.1
config.diarizer.vad.parameters.min_duration_off = 0.4
###Output
_____no_output_____
###Markdown
Now that we have passed all the variables we needed, let's initialize the clustering model with the above config
###Code
from nemo.collections.asr.models import ClusteringDiarizer
sd_model = ClusteringDiarizer(cfg=config)
###Output
_____no_output_____
###Markdown
And diarize with a single line of code
###Code
sd_model.diarize()
###Output
_____no_output_____
###Markdown
As can be seen, we first performed VAD, then with the timestamps created in `{output_dir}/vad_outputs` by VAD we calculated speaker embeddings (`{output_dir}/speaker_outputs/embeddings/`) which are then clustered using spectral clustering. To generate VAD predicted time steps, we perform VAD inference to get frame level predictions → (optional: use decision smoothing) → given `threshold`, write speech segments to an RTTM-like time stamps manifest.We use VAD decision smoothing (87.5% overlap median) as described [here](https://github.com/NVIDIA/NeMo/blob/stable/nemo/collections/asr/parts/utils/vad_utils.py).You can also tune the threshold on your dev set. Use this provided [script](https://github.com/NVIDIA/NeMo/blob/stable/scripts/voice_activity_detection/vad_tune_threshold.py)
###Code
# VAD predicted time stamps
# you can also use single threshold(=onset=offset) for binarization and plot here
from nemo.collections.asr.parts.utils.vad_utils import plot
plot(
an4_audio,
'outputs/vad_outputs/overlap_smoothing_output_median_0.875/an4_diarize_test.median',
an4_rttm,
per_args = config.diarizer.vad.parameters, #threshold
)
print(f"postprocessing_params: {config.diarizer.vad.parameters}")
###Output
_____no_output_____
###Markdown
Predicted outputs are written to `output_dir/pred_rttms`; let's see what we predicted along with the VAD prediction
###Code
!cat outputs/pred_rttms/an4_diarize_test.rttm
pred_labels = rttm_to_labels('outputs/pred_rttms/an4_diarize_test.rttm')
hypothesis = labels_to_pyannote_object(pred_labels)
hypothesis
reference
###Output
_____no_output_____
###Markdown
Storing and Restoring models Now we can save the whole config and model parameters in a single .nemo and restore from it anytime.
###Code
oracle_model.save_to(os.path.join(output_dir,'diarize.nemo'))
###Output
_____no_output_____
###Markdown
Restore from saved model
###Code
del oracle_model
import nemo.collections.asr as nemo_asr
restored_model = nemo_asr.models.ClusteringDiarizer.restore_from(os.path.join(output_dir,'diarize.nemo'))
###Output
_____no_output_____
###Markdown
IntroductionWho Speaks When? Speaker Diarization is the task of segmenting audio recordings by speaker labels. A diarization system consists of Voice Activity Detection (VAD) model to get the time stamps of audio where speech is being spoken ignoring the background and Speaker Embeddings model to get speaker embeddings on segments that were previously time stamped. These speaker embeddings would then be clustered into clusters based on number of speakers present in the audio recording.In NeMo we support both **oracle VAD** and **non-oracle VAD** diarization. In this tutorial, we shall first demonstrate how to perform diarization with a oracle VAD time stamps (we assume we already have speech time stamps) and pretrained speaker verification model which can be found in tutorial for [Speaker Identification and Verification in NeMo](https://github.com/NVIDIA/NeMo/blob/main/tutorials/speaker_tasks/Speaker_Identification_Verification.ipynb).In ORACLE-VAD-DIARIZATION we show how to perform VAD and then diarization if ground truth timestamped speech were not available (non-oracle VAD). We also have tutorials for [VAD training in NeMo](https://github.com/NVIDIA/NeMo/blob/main/tutorials/asr/Voice_Activity_Detection.ipynb) and [online offline microphone inference](https://github.com/NVIDIA/NeMo/blob/main/tutorials/asr/Online_Offline_Microphone_VAD_Demo.ipynb), where you can custom your model and training/finetuning on your own data.For demonstration purposes we would be using simulated audio from [an4 dataset](http://www.speech.cs.cmu.edu/databases/an4/)
###Code
import os
import wget
ROOT = os.getcwd()
data_dir = os.path.join(ROOT,'data')
os.makedirs(data_dir, exist_ok=True)
an4_audio = os.path.join(data_dir,'an4_diarize_test.wav')
an4_rttm = os.path.join(data_dir,'an4_diarize_test.rttm')
if not os.path.exists(an4_audio):
an4_audio_url = "https://nemo-public.s3.us-east-2.amazonaws.com/an4_diarize_test.wav"
an4_audio = wget.download(an4_audio_url, data_dir)
if not os.path.exists(an4_rttm):
an4_rttm_url = "https://nemo-public.s3.us-east-2.amazonaws.com/an4_diarize_test.rttm"
an4_rttm = wget.download(an4_rttm_url, data_dir)
###Output
_____no_output_____
###Markdown
Let's plot and listen to the audio and visualize the RTTM speaker labels
###Code
import IPython
import matplotlib.pyplot as plt
import numpy as np
import librosa
sr = 16000
signal, sr = librosa.load(an4_audio,sr=sr)
fig,ax = plt.subplots(1,1)
fig.set_figwidth(20)
fig.set_figheight(2)
plt.plot(np.arange(len(signal)),signal,'gray')
fig.suptitle('Reference merged an4 audio', fontsize=16)
plt.xlabel('time (secs)', fontsize=18)
ax.margins(x=0)
plt.ylabel('signal strength', fontsize=16);
a,_ = plt.xticks();plt.xticks(a,a/sr);
IPython.display.Audio(an4_audio)
###Output
_____no_output_____
###Markdown
We would use [pyannote_metrics](https://pyannote.github.io/pyannote-metrics/) for visualization and score calculation purposes. Hence all the labels in rttm formats would eventually be converted to pyannote objects, we created two helper functions rttm_to_labels (for NeMo intermediate processing) and labels_to_pyannote_object for scoring and visualization format
###Code
from nemo.collections.asr.parts.utils.speaker_utils import rttm_to_labels, labels_to_pyannote_object
###Output
_____no_output_____
###Markdown
Let's load ground truth RTTM labels and view the reference Annotation timestamps visually
###Code
# view the sample rttm file
!cat {an4_rttm}
labels = rttm_to_labels(an4_rttm)
reference = labels_to_pyannote_object(labels)
print(labels)
reference
###Output
_____no_output_____
###Markdown
Speaker Diarization scripts commonly expects following arguments:1. manifest_filepath : Path to manifest file containing json lines of format: {'audio_filepath': /path/to/audio_file, 'offset': 0, 'duration':None, 'label': 'infer', 'text': '-', 'num_speakers': None, 'rttm_filepath': /path/to/rttm/file, 'uem_filepath'='/path/to/uem/filepath'}2. out_dir : directory where outputs and intermediate files are stored. 3. oracle_vad: If this is true then we extract speech activity labels from rttm files, if False then either 4. vad.model_path or external_manifestpath containing speech activity labels has to be passed. Mandatory fields are audio_filepath, offset, duration, label and text. For the rest if you would like to evaluate with known number of speakers pass the value else None. If you would like to score the system with known rttms then that should be passed as well, else None. uem file is used to score only part of your audio for evaluation purposes, hence pass if you would like to evaluate on it else None.**Note** we expect audio and corresponding RTTM have **same base name** and the name should be **unique**. For eg: if audio file name is **test_an4**.wav, if provided we expect corresponding rttm file name to be **test_an4**.rttm (note the matching **test_an4** base name) Lets create manifest with the an4 audio and rttm available. If you have more than one files you may also use the script `pathfiles_to_diarize_manifest.py` to generate manifest file from list of audio files and optionally rttm files
###Code
# Create a manifest for input with below format.
# {'audio_filepath': /path/to/audio_file, 'offset': 0, 'duration':None, 'label': 'infer', 'text': '-',
# 'num_speakers': None, 'rttm_filepath': /path/to/rttm/file, 'uem_filepath'='/path/to/uem/filepath'}
import json
meta = {
'audio_filepath': an4_audio,
'offset': 0,
'duration':None,
'label': 'infer',
'text': '-',
'num_speakers': 2,
'rttm_filepath': an4_rttm,
'uem_filepath' : None
}
with open('data/input_manifest.json','w') as fp:
json.dump(meta,fp)
fp.write('\n')
!cat data/input_manifest.json
output_dir = os.path.join(ROOT, 'oracle_vad')
os.makedirs(output_dir,exist_ok=True)
###Output
_____no_output_____
###Markdown
ORACLE-VAD DIARIZATION Oracle-vad diarization is to compute speaker embeddings from known speech label timestamps rather than depending on VAD output. This step can also be used to run speaker diarization with rttms generated from any external VAD, not just VAD model from NeMo.For it, the first step is to start converting reference audio rttm(vad) time stamps to oracle manifest file. This manifest file would be sent to our speaker diarizer to extract embeddings.This is just an argument in our config, and system automatically computes oracle manifest based on the rttms provided through input manifest file Our config file is based on [hydra](https://hydra.cc/docs/intro/). With hydra config, we ask users to provide values to variables that were filled with **???**, these are mandatory fields and scripts expect them for successful runs. And notice some variables were filled with **null** are optional variables. Those could be provided if needed but are not mandatory.
###Code
from omegaconf import OmegaConf
MODEL_CONFIG = os.path.join(data_dir,'offline_diarization.yaml')
if not os.path.exists(MODEL_CONFIG):
config_url = "https://raw.githubusercontent.com/NVIDIA/NeMo/main/examples/speaker_tasks/diarization/conf/offline_diarization.yaml"
MODEL_CONFIG = wget.download(config_url,data_dir)
config = OmegaConf.load(MODEL_CONFIG)
print(OmegaConf.to_yaml(config))
###Output
_____no_output_____
###Markdown
Now we can perform speaker diarization based on timestamps generated from ground truth rttms rather than generating through VAD
###Code
pretrained_speaker_model='titanet_large'
config.diarizer.manifest_filepath = 'data/input_manifest.json'
config.diarizer.out_dir = output_dir # Directory to store intermediate files and prediction outputs
config.diarizer.speaker_embeddings.model_path = pretrained_speaker_model
config.diarizer.speaker_embeddings.parameters.window_length_in_sec = 1.5
config.diarizer.speaker_embeddings.parameters.shift_length_in_sec = 0.75
config.diarizer.oracle_vad = True # ----> ORACLE VAD
config.diarizer.clustering.parameters.oracle_num_speakers = True
from nemo.collections.asr.models import ClusteringDiarizer
oracle_model = ClusteringDiarizer(cfg=config)
# And lets diarize
oracle_model.diarize()
###Output
_____no_output_____
###Markdown
With DER 0 -> means it clustered speaker embeddings correctly. Let's view
###Code
!cat {output_dir}/pred_rttms/an4_diarize_test.rttm
pred_labels = rttm_to_labels(output_dir+'/pred_rttms/an4_diarize_test.rttm')
hypothesis = labels_to_pyannote_object(pred_labels)
hypothesis
reference
###Output
_____no_output_____
###Markdown
VAD DIARIZATION In this method we compute VAD time stamps using NeMo VAD model on input manifest file and then use these time stamps of speech label to find speaker embeddings followed by clustering them into num of speakers Before we proceed let's look at the speaker diarization config, which we would be depending up on for vad computationand speaker embedding extraction
###Code
print(OmegaConf.to_yaml(config))
###Output
_____no_output_____
###Markdown
As can be seen most of the variables in config are self explanatory with VAD variables under vad section and speaker related variables under speaker embeddings section. To perform VAD based diarization we can ignore `oracle_vad_manifest` in `speaker_embeddings` section for now and needs to fill up the rest. We also needs to provide pretrained `model_path` of vad and speaker embeddings .nemo models
###Code
pretrained_vad = 'vad_marblenet'
pretrained_speaker_model = 'titanet_large'
###Output
_____no_output_____
###Markdown
Note in this tutorial, we use the VAD model MarbleNet-3x2 introduced and published in [ICASSP MarbleNet](https://arxiv.org/pdf/2010.13886.pdf). You might need to tune on dev set similar to your dataset if you would like to improve the performance.And the speakerNet-M-Diarization model achieves 7.3% confusion error rate on CH109 set with oracle vad. This model is trained on voxceleb1, voxceleb2, Fisher, SwitchBoard datasets. So for more improved performance specific to your dataset, finetune speaker verification model with a devset similar to your test set.
###Code
output_dir = os.path.join(ROOT,'outputs')
config.diarizer.manifest_filepath = 'data/input_manifest.json'
config.diarizer.out_dir = output_dir # Directory to store intermediate files and prediction outputs
config.diarizer.speaker_embeddings.model_path = pretrained_speaker_model
config.diarizer.speaker_embeddings.parameters.window_length_in_sec = 1.5
config.diarizer.speaker_embeddings.parameters.shift_length_in_sec = 0.75
config.diarizer.oracle_vad = False # compute VAD provided with model_path to vad config
config.diarizer.clustering.parameters.oracle_num_speakers=True
# Here we use our in-house pretrained NeMo VAD
config.diarizer.vad.model_path = pretrained_vad
config.diarizer.vad.window_length_in_sec = 0.15
config.diarizer.vad.shift_length_in_sec = 0.01
config.diarizer.vad.parameters.onset = 0.8
config.diarizer.vad.parameters.offset = 0.6
config.diarizer.vad.parameters.min_duration_on = 0.1
config.diarizer.vad.parameters.min_duration_off = 0.4
###Output
_____no_output_____
###Markdown
Now that we have set all the variables we need, let's initialize the clustering model with the above config
###Code
from nemo.collections.asr.models import ClusteringDiarizer
sd_model = ClusteringDiarizer(cfg=config)
###Output
_____no_output_____
###Markdown
And diarize with a single line of code
###Code
sd_model.diarize()
###Output
_____no_output_____
###Markdown
As can be seen, we first performed VAD; then, with the timestamps created in `{output_dir}/vad_outputs` by VAD, we calculated speaker embeddings (`{output_dir}/speaker_outputs/embeddings/`), which are then clustered using spectral clustering. To generate the VAD-predicted timestamps, we perform VAD inference to get frame-level predictions → (optional: apply decision smoothing) → given a `threshold`, write speech segments to an RTTM-like timestamp manifest. We use VAD decision smoothing (87.5% overlap median) as described [here](https://github.com/NVIDIA/NeMo/blob/stable/nemo/collections/asr/parts/utils/vad_utils.py). You can also tune the threshold on your dev set using this provided [script](https://github.com/NVIDIA/NeMo/blob/stable/scripts/voice_activity_detection/vad_tune_threshold.py)
###Code
# VAD predicted time stamps
# you can also use a single threshold (onset = offset) for binarization and plot here
from nemo.collections.asr.parts.utils.vad_utils import plot
plot(
an4_audio,
'outputs/vad_outputs/overlap_smoothing_output_median_0.875/an4_diarize_test.median',
an4_rttm,
per_args = config.diarizer.vad.parameters, #threshold
)
print(f"postprocessing_params: {config.diarizer.vad.parameters}")
###Output
_____no_output_____
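###Markdown
To make the onset/offset parameters above concrete, here is a toy sketch of hysteresis binarization (not NeMo's actual implementation, which lives in `vad_utils.py`): enter speech when the frame probability rises above `onset`, leave when it falls below `offset`, fill non-speech gaps shorter than `min_duration_off`, and drop segments shorter than `min_duration_on`. The probabilities and the 0.05 s frame step are made-up example values:
###Code
# A toy re-implementation for illustration only.
def binarize(probs, frame_sec, onset=0.8, offset=0.6,
             min_duration_on=0.1, min_duration_off=0.4):
    segments, start, in_speech = [], 0.0, False
    for i, p in enumerate(probs):
        t = i * frame_sec
        if not in_speech and p >= onset:
            in_speech, start = True, t
        elif in_speech and p < offset:
            in_speech = False
            segments.append([start, t])
    if in_speech:
        segments.append([start, len(probs) * frame_sec])
    merged = []  # fill short non-speech gaps
    for seg in segments:
        if merged and seg[0] - merged[-1][1] < min_duration_off:
            merged[-1][1] = seg[1]
        else:
            merged.append(seg)
    return [s for s in merged if s[1] - s[0] >= min_duration_on]

print(binarize([0.1, 0.9, 0.95, 0.5, 0.2, 0.9], frame_sec=0.05))
###Output
_____no_output_____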
###Markdown
Predicted outputs are written to `output_dir/pred_rttms`. Let's see how our prediction compares with the reference, along with the VAD prediction
###Code
!cat outputs/pred_rttms/an4_diarize_test.rttm
pred_labels = rttm_to_labels('outputs/pred_rttms/an4_diarize_test.rttm')
hypothesis = labels_to_pyannote_object(pred_labels)
hypothesis
reference
###Output
_____no_output_____
###Markdown
Storing and Restoring models Now we can save the whole config and model parameters in a single .nemo file and restore from it at any time.
###Code
oracle_model.save_to(os.path.join(output_dir,'diarize.nemo'))
###Output
_____no_output_____
###Markdown
Restore from saved model
###Code
del oracle_model
import nemo.collections.asr as nemo_asr
restored_model = nemo_asr.models.ClusteringDiarizer.restore_from(os.path.join(output_dir,'diarize.nemo'))
###Output
_____no_output_____
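###Markdown
As a quick sanity check (a sketch), the restored model carries the same config and should be usable exactly like the original:
###Code
# The restored diarizer should reproduce the earlier results.
print(type(restored_model))
restored_model.diarize()
###Output
_____no_output_____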
###Markdown
Introduction Who speaks when? Speaker diarization is the task of segmenting audio recordings by speaker labels. A diarization system consists of a Voice Activity Detection (VAD) model, which finds the timestamps of audio where speech is spoken while ignoring the background, and a Speaker Embeddings model, which extracts speaker embeddings from the segments that were previously timestamped. These speaker embeddings are then clustered according to the number of speakers present in the audio recording. In NeMo we support both **oracle VAD** and **non-oracle VAD** diarization. In this tutorial, we first demonstrate how to perform diarization with oracle VAD timestamps (we assume we already have speech timestamps) and a pretrained speaker verification model, which is covered in the tutorial [Speaker Identification and Verification in NeMo](https://github.com/NVIDIA/NeMo/blob/main/tutorials/speaker_tasks/Speaker_Identification_Verification.ipynb). In the VAD DIARIZATION section we show how to perform VAD and then diarization when ground truth timestamped speech is not available (non-oracle VAD). We also have tutorials for [VAD training in NeMo](https://github.com/NVIDIA/NeMo/blob/main/tutorials/asr/Voice_Activity_Detection.ipynb) and [online offline microphone inference](https://github.com/NVIDIA/NeMo/blob/main/tutorials/asr/Online_Offline_Microphone_VAD_Demo.ipynb), where you can customize your model and train/finetune on your own data. For demonstration purposes we use simulated audio from the [an4 dataset](http://www.speech.cs.cmu.edu/databases/an4/)
###Code
import os
import wget
ROOT = os.getcwd()
data_dir = os.path.join(ROOT,'data')
os.makedirs(data_dir, exist_ok=True)
an4_audio = os.path.join(data_dir,'an4_diarize_test.wav')
an4_rttm = os.path.join(data_dir,'an4_diarize_test.rttm')
if not os.path.exists(an4_audio):
an4_audio_url = "https://nemo-public.s3.us-east-2.amazonaws.com/an4_diarize_test.wav"
an4_audio = wget.download(an4_audio_url, data_dir)
if not os.path.exists(an4_rttm):
an4_rttm_url = "https://nemo-public.s3.us-east-2.amazonaws.com/an4_diarize_test.rttm"
an4_rttm = wget.download(an4_rttm_url, data_dir)
###Output
_____no_output_____
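###Markdown
A quick check (a sketch) that both files were downloaded to `data_dir`:
###Code
# Fail fast if either download is missing.
for path in (an4_audio, an4_rttm):
    assert os.path.exists(path), f"missing: {path}"
print(an4_audio, an4_rttm)
###Output
_____no_output_____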
###Markdown
Let's plot and listen to the audio and visualize the RTTM speaker labels
###Code
import IPython
import matplotlib.pyplot as plt
import numpy as np
import librosa
sr = 16000
signal, sr = librosa.load(an4_audio,sr=sr)
fig,ax = plt.subplots(1,1)
fig.set_figwidth(20)
fig.set_figheight(2)
plt.plot(np.arange(len(signal)),signal,'gray')
fig.suptitle('Reference merged an4 audio', fontsize=16)
plt.xlabel('time (secs)', fontsize=18)
ax.margins(x=0)
plt.ylabel('signal strength', fontsize=16);
a,_ = plt.xticks();plt.xticks(a,a/sr);
IPython.display.Audio(an4_audio)
###Output
_____no_output_____
###Markdown
We use [pyannote_metrics](https://pyannote.github.io/pyannote-metrics/) for visualization and score calculation. Hence, all labels in RTTM format are eventually converted to pyannote objects; we created two helper functions, rttm_to_labels (for NeMo intermediate processing) and labels_to_pyannote_object (for scoring and visualization)
###Code
from nemo.collections.asr.parts.utils.speaker_utils import rttm_to_labels, labels_to_pyannote_object
###Output
_____no_output_____
###Markdown
Let's load ground truth RTTM labels and view the reference Annotation timestamps visually
###Code
# view the sample rttm file
!cat {an4_rttm}
labels = rttm_to_labels(an4_rttm)
reference = labels_to_pyannote_object(labels)
print(labels)
reference
###Output
_____no_output_____
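###Markdown
RTTM is a simple space-separated format; each SPEAKER line carries the file id, channel, onset, duration, and speaker label. A minimal sketch of parsing it by hand (the helper above does this for us):
###Code
# A sketch: extract (start, end, speaker) tuples from SPEAKER lines.
with open(an4_rttm) as f:
    for line in f:
        fields = line.split()
        if fields[0] == 'SPEAKER':
            onset, dur, speaker = float(fields[3]), float(fields[4]), fields[7]
            print(onset, onset + dur, speaker)
###Output
_____no_output_____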
###Markdown
Speaker diarization scripts commonly expect two files:
1. paths2audio_files: either a list of audio file paths, or a file containing paths to the audio files on which to perform diarization.
2. path2groundtruth_rttm_files (optional): either a list of RTTM file paths, or a file containing paths to RTTM files (pass this if you want to calculate DER against your ground truth RTTMs).

**Note**: we expect the audio file and its corresponding RTTM to have the **same base name**, and the name should be **unique**. For example, if the audio file is named **test_an4**.wav, the corresponding RTTM file should be named **test_an4**.rttm (note the matching **test_an4** base name).

Now let's create the paths2audio_files list (or file) for which we need to perform diarization
###Code
paths2audio_files = [an4_audio]
print(paths2audio_files)
###Output
_____no_output_____
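###Markdown
As noted above, the scripts accept either a Python list or a file of paths. A sketch of writing the equivalent file (one path per line; the filename is arbitrary):
###Code
# Persist the list as a text file of audio paths, one per line.
with open(os.path.join(data_dir, 'audio_paths.txt'), 'w') as f:
    for p in paths2audio_files:
        f.write(p + '\n')
!cat {data_dir}/audio_paths.txt
###Output
_____no_output_____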
###Markdown
Similarly, create a `path2groundtruth_rttm_files` list (this is optional, and needed for score calculation)
###Code
path2groundtruth_rttm_files = [an4_rttm]
print(path2groundtruth_rttm_files)
###Output
_____no_output_____
###Markdown
ORACLE-VAD DIARIZATION Oracle-VAD diarization computes speaker embeddings from known speech-label timestamps rather than depending on VAD output. This step can also be used to run speaker diarization with RTTMs generated by any external VAD, not just the VAD model from NeMo. The first step is to convert the reference audio RTTM (VAD) timestamps into an oracle manifest file, which is then sent to our speaker diarizer to extract embeddings. For that, let's use the write_rttm2manifest function, which takes paths2audio_files and paths2rttm_files as arguments
###Code
from nemo.collections.asr.parts.utils.speaker_utils import write_rttm2manifest
output_dir = os.path.join(ROOT, 'oracle_vad')
os.makedirs(output_dir,exist_ok=True)
oracle_manifest = os.path.join(output_dir,'oracle_manifest.json')
write_rttm2manifest(paths2audio_files=paths2audio_files,
paths2rttm_files=path2groundtruth_rttm_files,
manifest_file=oracle_manifest)
!cat {oracle_manifest}
###Output
_____no_output_____
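###Markdown
The manifest is a JSON-lines file. A sketch of reading it back programmatically (field names assume the standard NeMo manifest keys such as `audio_filepath`, `offset`, and `duration`):
###Code
import json
# Each line of the manifest is one JSON object describing a speech segment.
with open(oracle_manifest) as f:
    for line in f:
        entry = json.loads(line)
        print(entry['audio_filepath'], entry.get('offset'), entry.get('duration'))
###Output
_____no_output_____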
###Markdown
Our config file is based on [hydra](https://hydra.cc/docs/intro/). With a hydra config, users must provide values for the variables filled with **???**; these are mandatory fields, and the scripts expect them for a successful run. Notice that some variables are filled with **null**; these are optional and can be provided if needed, but are not mandatory.
###Code
from omegaconf import OmegaConf
MODEL_CONFIG = os.path.join(data_dir,'speaker_diarization.yaml')
if not os.path.exists(MODEL_CONFIG):
config_url = "https://raw.githubusercontent.com/NVIDIA/NeMo/main/examples/speaker_tasks/diarization/conf/speaker_diarization.yaml"
MODEL_CONFIG = wget.download(config_url,data_dir)
config = OmegaConf.load(MODEL_CONFIG)
print(OmegaConf.to_yaml(config))
###Output
_____no_output_____
###Markdown
Now we can perform speaker diarization based on timestamps taken from the ground truth RTTMs rather than generated through VAD
###Code
pretrained_speaker_model='ecapa_tdnn'
config.diarizer.paths2audio_files = paths2audio_files
config.diarizer.path2groundtruth_rttm_files = path2groundtruth_rttm_files
config.diarizer.out_dir = output_dir # Directory to store intermediate files and prediction outputs
config.diarizer.speaker_embeddings.model_path = pretrained_speaker_model
# Ignoring VAD, we just need to pass the oracle manifest file we created
config.diarizer.speaker_embeddings.oracle_vad_manifest = oracle_manifest
config.diarizer.oracle_num_speakers=2
from nemo.collections.asr.models import ClusteringDiarizer
oracle_model = ClusteringDiarizer(cfg=config)
# And lets diarize
oracle_model.diarize()
###Output
_____no_output_____
###Markdown
A DER of 0 means the speaker embeddings were clustered correctly. Let's view the predictions
###Code
!cat {output_dir}/pred_rttms/an4_diarize_test.rttm
pred_labels = rttm_to_labels(output_dir+'/pred_rttms/an4_diarize_test.rttm')
hypothesis = labels_to_pyannote_object(pred_labels)
hypothesis
reference
###Output
_____no_output_____
###Markdown
VAD DIARIZATION In this method we compute VAD timestamps using the NeMo VAD model on `paths2audio_files`, and then use these speech-label timestamps to extract speaker embeddings, which are clustered into the number of speakers. Before we proceed, let's look at the speaker diarization config, which we depend on for VAD computation and speaker embedding extraction
###Code
print(OmegaConf.to_yaml(config))
###Output
_____no_output_____
###Markdown
As can be seen, most of the variables in the config are self-explanatory, with VAD variables under the vad section and speaker-related variables under the speaker embeddings section. To perform VAD-based diarization we can ignore `oracle_vad_manifest` in the `speaker_embeddings` section for now, but we need to fill in the rest. We also need to provide the pretrained `model_path` of the VAD and speaker embeddings .nemo models
###Code
pretrained_vad = 'vad_marblenet'
pretrained_speaker_model = 'ecapa_tdnn'
###Output
_____no_output_____
###Markdown
Note that in this tutorial we use the VAD model MarbleNet-3x2, introduced and published in [ICASSP MarbleNet](https://arxiv.org/pdf/2010.13886.pdf). You might need to tune it on a dev set similar to your dataset if you would like to improve performance. The speakerNet-M-Diarization model achieves a 7.3% confusion error rate on the CH109 set with oracle VAD. This model is trained on the voxceleb1, voxceleb2, Fisher, and SwitchBoard datasets. For better performance on your specific dataset, finetune the speaker verification model on a dev set similar to your test set.
###Code
output_dir = os.path.join(ROOT,'outputs')
config.diarizer.paths2audio_files = paths2audio_files
config.diarizer.path2groundtruth_rttm_files = path2groundtruth_rttm_files
config.diarizer.out_dir = output_dir # Directory to store intermediate files and prediction outputs
config.diarizer.speaker_embeddings.model_path = pretrained_speaker_model
# Here we use our in-house pretrained NeMo VAD
config.diarizer.vad.model_path = pretrained_vad
config.diarizer.vad.window_length_in_sec = 0.15
config.diarizer.vad.shift_length_in_sec = 0.01
# config.diarizer.vad.threshold = 0.8  # the single `threshold` option is deprecated in release 1.5; use onset/offset below
config.diarizer.vad.postprocessing_params.onset = 0.8
config.diarizer.vad.postprocessing_params.offset = 0.7
config.diarizer.vad.postprocessing_params.min_duration_on = 0.1
config.diarizer.vad.postprocessing_params.min_duration_off = 0.3
###Output
_____no_output_____
###Markdown
Now that we have set all the variables we need, let's initialize the clustering model with the above config
###Code
from nemo.collections.asr.models import ClusteringDiarizer
sd_model = ClusteringDiarizer(cfg=config)
###Output
_____no_output_____
###Markdown
And diarize with a single line of code
###Code
sd_model.diarize()
###Output
_____no_output_____
###Markdown
As can be seen, we first performed VAD; then, with the timestamps created in `{output_dir}/vad_outputs` by VAD, we calculated speaker embeddings (`{output_dir}/speaker_outputs/embeddings/`), which are then clustered using spectral clustering. To generate the VAD-predicted timestamps, we perform VAD inference to get frame-level predictions → (optional: apply decision smoothing) → given a `threshold`, write speech segments to an RTTM-like timestamp manifest. We use VAD decision smoothing (87.5% overlap median) as described [here](https://github.com/NVIDIA/NeMo/blob/stable/nemo/collections/asr/parts/utils/vad_utils.py). You can also tune the threshold on your dev set using this provided [script](https://github.com/NVIDIA/NeMo/blob/stable/scripts/voice_activity_detection/vad_tune_threshold.py)
###Code
# VAD predicted time stamps
# you can also use a single threshold (onset = offset) for binarization and plot here
from nemo.collections.asr.parts.utils.vad_utils import plot
plot(
paths2audio_files[0],
'outputs/vad_outputs/overlap_smoothing_output_median_0.875/an4_diarize_test.median',
path2groundtruth_rttm_files[0],
per_args = config.diarizer.vad.postprocessing_params, #threshold
)
print(f"postprocessing_params: {config.diarizer.vad.postprocessing_params}")
###Output
_____no_output_____
###Markdown
Predicted outputs are written to `output_dir/pred_rttms`. Let's see how our prediction compares with the reference, along with the VAD prediction
###Code
!cat outputs/pred_rttms/an4_diarize_test.rttm
pred_labels = rttm_to_labels('outputs/pred_rttms/an4_diarize_test.rttm')
hypothesis = labels_to_pyannote_object(pred_labels)
hypothesis
reference
###Output
_____no_output_____
###Markdown
Storing and Restoring models Now we can save the whole config and model parameters in a single .nemo file and restore from it at any time.
###Code
oracle_model.save_to(os.path.join(output_dir,'diarize.nemo'))
###Output
_____no_output_____
###Markdown
Restore from saved model
###Code
del oracle_model
import nemo.collections.asr as nemo_asr
restored_model = nemo_asr.models.ClusteringDiarizer.restore_from(os.path.join(output_dir,'diarize.nemo'))
###Output
_____no_output_____
###Markdown
ADD ON - ASR
###Code
IPython.display.Audio(an4_audio)
quartznet = nemo_asr.models.EncDecCTCModel.from_pretrained(model_name="QuartzNet15x5Base-En")
for fname, transcription in zip(paths2audio_files, quartznet.transcribe(paths2audio_files=paths2audio_files)):
print(f"Audio in {fname} was recognized as: {transcription}")
###Output
_____no_output_____
IntroductionWho Speaks When? Speaker Diarization is the task of segmenting audio recordings by speaker labels. A diarization system consists of Voice Activity Detection (VAD) model to get the time stamps of audio where speech is being spoken ignoring the background and Speaker Embeddings model to get speaker embeddings on segments that were previously time stamped. These speaker embeddings would then be clustered into clusters based on number of speakers present in the audio recording.In NeMo we support both **oracle VAD** and **non-oracle VAD** diarization. In this tutorial, we shall first demonstrate how to perform diarization with a oracle VAD time stamps (we assume we already have speech time stamps) and pretrained speaker verification model which can be found in tutorial for [Speaker Identification and Verification in NeMo](https://github.com/NVIDIA/NeMo/blob/main/tutorials/speaker_tasks/Speaker_Identification_Verification.ipynb).In ORACLE-VAD-DIARIZATION we show how to perform VAD and then diarization if ground truth timestamped speech were not available (non-oracle VAD). We also have tutorials for [VAD training in NeMo](https://github.com/NVIDIA/NeMo/blob/main/tutorials/asr/Voice_Activity_Detection.ipynb) and [online offline microphone inference](https://github.com/NVIDIA/NeMo/blob/main/tutorials/asr/Online_Offline_Microphone_VAD_Demo.ipynb), where you can custom your model and training/finetuning on your own data.For demonstration purposes we would be using simulated audio from [an4 dataset](http://www.speech.cs.cmu.edu/databases/an4/)
###Code
import os
import wget
ROOT = os.getcwd()
data_dir = os.path.join(ROOT,'data')
os.makedirs(data_dir, exist_ok=True)
an4_audio = os.path.join(data_dir,'an4_diarize_test.wav')
an4_rttm = os.path.join(data_dir,'an4_diarize_test.rttm')
if not os.path.exists(an4_audio):
an4_audio_url = "https://nemo-public.s3.us-east-2.amazonaws.com/an4_diarize_test.wav"
an4_audio = wget.download(an4_audio_url, data_dir)
if not os.path.exists(an4_rttm):
an4_rttm_url = "https://nemo-public.s3.us-east-2.amazonaws.com/an4_diarize_test.rttm"
an4_rttm = wget.download(an4_rttm_url, data_dir)
###Output
_____no_output_____
###Markdown
Let's plot and listen to the audio and visualize the RTTM speaker labels
###Code
import IPython
import matplotlib.pyplot as plt
import numpy as np
import librosa
sr = 16000
signal, sr = librosa.load(an4_audio,sr=sr)
fig,ax = plt.subplots(1,1)
fig.set_figwidth(20)
fig.set_figheight(2)
plt.plot(np.arange(len(signal)),signal,'gray')
fig.suptitle('Reference merged an4 audio', fontsize=16)
plt.xlabel('time (secs)', fontsize=18)
ax.margins(x=0)
plt.ylabel('signal strength', fontsize=16);
a,_ = plt.xticks();plt.xticks(a,a/sr);
IPython.display.Audio(an4_audio)
###Output
_____no_output_____
###Markdown
We would use [pyannote_metrics](https://pyannote.github.io/pyannote-metrics/) for visualization and score calculation purposes. Hence all the labels in rttm formats would eventually be converted to pyannote objects, we created two helper functions rttm_to_labels (for NeMo intermediate processing) and labels_to_pyannote_object for scoring and visualization format
###Code
from nemo.collections.asr.parts.utils.speaker_utils import rttm_to_labels, labels_to_pyannote_object
###Output
_____no_output_____
###Markdown
Let's load ground truth RTTM labels and view the reference Annotation timestamps visually
###Code
# view the sample rttm file
!cat {an4_rttm}
labels = rttm_to_labels(an4_rttm)
reference = labels_to_pyannote_object(labels)
print(labels)
reference
###Output
_____no_output_____
###Markdown
Speaker Diarization scripts commonly expects following arguments:1. manifest_filepath : Path to manifest file containing json lines of format: {'audio_filepath': /path/to/audio_file, 'offset': 0, 'duration':None, 'label': 'infer', 'text': '-', 'num_speakers': None, 'rttm_filepath': /path/to/rttm/file, 'uem_filepath'='/path/to/uem/filepath'}2. out_dir : directory where outputs and intermediate files are stored. 3. oracle_vad: If this is true then we extract speech activity labels from rttm files, if False then either 4. vad.model_path or external_manifestpath containing speech activity labels has to be passed. Mandatory fields are audio_filepath, offset, duration, label and text. For the rest if you would like to evaluate with known number of speakers pass the value else None. If you would like to score the system with known rttms then that should be passed as well, else None. uem file is used to score only part of your audio for evaluation purposes, hence pass if you would like to evaluate on it else None.**Note** we expect audio and corresponding RTTM have **same base name** and the name should be **unique**. For eg: if audio file name is **test_an4**.wav, if provided we expect corresponding rttm file name to be **test_an4**.rttm (note the matching **test_an4** base name) Lets create manifest with the an4 audio and rttm available. If you have more than one files you may also use the script `pathsfiles_to_manifest.py` to generate manifest file from list of audio files and optionally rttm files
###Code
# Create a manifest for input with below format.
# {'audio_filepath': /path/to/audio_file, 'offset': 0, 'duration':None, 'label': 'infer', 'text': '-',
# 'num_speakers': None, 'rttm_filepath': /path/to/rttm/file, 'uem_filepath'='/path/to/uem/filepath'}
import json
meta = {
'audio_filepath': an4_audio,
'offset': 0,
'duration':None,
'label': 'infer',
'text': '-',
'num_speakers': 2,
'rttm_filepath': an4_rttm,
'uem_filepath' : None
}
with open('data/input_manifest.json','w') as fp:
json.dump(meta,fp)
fp.write('\n')
!cat data/input_manifest.json
output_dir = os.path.join(ROOT, 'oracle_vad')
os.makedirs(output_dir,exist_ok=True)
###Output
_____no_output_____
###Markdown
ORACLE-VAD DIARIZATION Oracle-vad diarization is to compute speaker embeddings from known speech label timestamps rather than depending on VAD output. This step can also be used to run speaker diarization with rttms generated from any external VAD, not just VAD model from NeMo.For it, the first step is to start converting reference audio rttm(vad) time stamps to oracle manifest file. This manifest file would be sent to our speaker diarizer to extract embeddings.This is just an argument in our config, and system automatically computes oracle manifest based on the rttms provided through input manifest file Our config file is based on [hydra](https://hydra.cc/docs/intro/). With hydra config, we ask users to provide values to variables that were filled with **???**, these are mandatory fields and scripts expect them for successful runs. And notice some variables were filled with **null** are optional variables. Those could be provided if needed but are not mandatory.
###Code
from omegaconf import OmegaConf
MODEL_CONFIG = os.path.join(data_dir,'offline_diarization.yaml')
if not os.path.exists(MODEL_CONFIG):
config_url = "https://raw.githubusercontent.com/NVIDIA/NeMo/main/examples/speaker_tasks/diarization/conf/offline_diarization.yaml"
MODEL_CONFIG = wget.download(config_url,data_dir)
config = OmegaConf.load(MODEL_CONFIG)
print(OmegaConf.to_yaml(config))
###Output
_____no_output_____
###Markdown
Now we can perform speaker diarization based on timestamps generated from ground truth rttms rather than generating through VAD
###Code
pretrained_speaker_model='titanet_large'
config.diarizer.manifest_filepath = 'data/input_manifest.json'
config.diarizer.out_dir = output_dir # Directory to store intermediate files and prediction outputs
config.diarizer.speaker_embeddings.model_path = pretrained_speaker_model
config.diarizer.speaker_embeddings.parameters.window_length_in_sec = 1.5
config.diarizer.speaker_embeddings.parameters.shift_length_in_sec = 0.75
config.diarizer.oracle_vad = True # ----> ORACLE VAD
config.diarizer.clustering.parameters.oracle_num_speakers = True
from nemo.collections.asr.models import ClusteringDiarizer
oracle_model = ClusteringDiarizer(cfg=config)
# And lets diarize
oracle_model.diarize()
###Output
_____no_output_____
###Markdown
With DER 0 -> means it clustered speaker embeddings correctly. Let's view
###Code
!cat {output_dir}/pred_rttms/an4_diarize_test.rttm
pred_labels = rttm_to_labels(output_dir+'/pred_rttms/an4_diarize_test.rttm')
hypothesis = labels_to_pyannote_object(pred_labels)
hypothesis
reference
###Output
_____no_output_____
###Markdown
VAD DIARIZATION In this method we compute VAD time stamps using NeMo VAD model on input manifest file and then use these time stamps of speech label to find speaker embeddings followed by clustering them into num of speakers Before we proceed let's look at the speaker diarization config, which we would be depending up on for vad computationand speaker embedding extraction
###Code
print(OmegaConf.to_yaml(config))
###Output
_____no_output_____
###Markdown
As can be seen most of the variables in config are self explanatory with VAD variables under vad section and speaker related variables under speaker embeddings section. To perform VAD based diarization we can ignore `oracle_vad_manifest` in `speaker_embeddings` section for now and needs to fill up the rest. We also needs to provide pretrained `model_path` of vad and speaker embeddings .nemo models
###Code
pretrained_vad = 'vad_marblenet'
pretrained_speaker_model = 'titanet_large'
###Output
_____no_output_____
###Markdown
Note in this tutorial, we use the VAD model MarbleNet-3x2 introduced and published in [ICASSP MarbleNet](https://arxiv.org/pdf/2010.13886.pdf). You might need to tune on dev set similar to your dataset if you would like to improve the performance.And the speakerNet-M-Diarization model achieves 7.3% confusion error rate on CH109 set with oracle vad. This model is trained on voxceleb1, voxceleb2, Fisher, SwitchBoard datasets. So for more improved performance specific to your dataset, finetune speaker verification model with a devset similar to your test set.
###Code
output_dir = os.path.join(ROOT,'outputs')
config.diarizer.manifest_filepath = 'data/input_manifest.json'
config.diarizer.out_dir = output_dir #Directory to store intermediate files and prediction outputs
config.diarizer.speaker_embeddings.model_path = pretrained_speaker_model
config.diarizer.speaker_embeddings.parameters.window_length_in_sec = 1.5
config.diarizer.speaker_embeddings.parameters.shift_length_in_sec = 0.75
config.diarizer.oracle_vad = False # compute VAD provided with model_path to vad config
config.diarizer.clustering.parameters.oracle_num_speakers=True
#Here we use our inhouse pretrained NeMo VAD
config.diarizer.vad.model_path = pretrained_vad
config.diarizer.vad.window_length_in_sec = 0.15
config.diarizer.vad.shift_length_in_sec = 0.01
config.diarizer.vad.parameters.onset = 0.8
config.diarizer.vad.parameters.offset = 0.6
config.diarizer.vad.parameters.min_duration_on = 0.1
config.diarizer.vad.parameters.min_duration_off = 0.4
###Output
_____no_output_____
###Markdown
Now that we passed all the variables we needed lets initialize the clustering model with above config
###Code
from nemo.collections.asr.models import ClusteringDiarizer
sd_model = ClusteringDiarizer(cfg=config)
###Output
_____no_output_____
###Markdown
And Diarize with single line of code
###Code
sd_model.diarize()
###Output
_____no_output_____
###Markdown
As can be seen, we first performed VAD, then with the timestamps created in `{output_dir}/vad_outputs` by VAD we calculated speaker embeddings (`{output_dir}/speaker_outputs/embeddings/`) which are then clustered using spectral clustering. To generate VAD predicted time step. We perform VAD inference to have frame level prediction &8594; (optional: use decision smoothing) &8594; given `threshold`, write speech segment to RTTM-like time stamps manifest.we use vad decision smoothing (87.5% overlap median) as described [here](https://github.com/NVIDIA/NeMo/blob/stable/nemo/collections/asr/parts/utils/vad_utils.py)you can also tune the threshold on your dev set. Use this provided [script](https://github.com/NVIDIA/NeMo/blob/stable/scripts/voice_activity_detection/vad_tune_threshold.py)
###Code
# VAD predicted time stamps
# you can also use single threshold(=onset=offset) for binarization and plot here
from nemo.collections.asr.parts.utils.vad_utils import plot
plot(
an4_audio,
'outputs/vad_outputs/overlap_smoothing_output_median_0.875/an4_diarize_test.median',
an4_rttm,
per_args = config.diarizer.vad.parameters, #threshold
)
print(f"postprocessing_params: {config.diarizer.vad.parameters}")
###Output
_____no_output_____
###Markdown
Predicted outputs are written to `output_dir/pred_rttms` and see how we predicted along with VAD prediction
###Code
!cat outputs/pred_rttms/an4_diarize_test.rttm
pred_labels = rttm_to_labels('outputs/pred_rttms/an4_diarize_test.rttm')
hypothesis = labels_to_pyannote_object(pred_labels)
hypothesis
reference
###Output
_____no_output_____
###Markdown
Storing and Restoring models Now we can save the whole config and model parameters in a single .nemo and restore from it anytime.
###Code
oracle_model.save_to(os.path.join(output_dir,'diarize.nemo'))
###Output
_____no_output_____
###Markdown
Restore from saved model
###Code
del oracle_model
import nemo.collections.asr as nemo_asr
restored_model = nemo_asr.models.ClusteringDiarizer.restore_from(os.path.join(output_dir,'diarize.nemo'))
###Output
_____no_output_____
###Markdown
IntroductionWho Speaks When? Speaker Diarization is the task of segmenting audio recordings by speaker labels. A diarization system consists of Voice Activity Detection (VAD) model to get the time stamps of audio where speech is being spoken ignoring the background and Speaker Embeddings model to get speaker embeddings on segments that were previously time stamped. These speaker embeddings would then be clustered into clusters based on number of speakers present in the audio recording.In NeMo we support both **oracle VAD** and **non-oracle VAD** diarization. In this tutorial, we shall first demonstrate how to perform diarization with a oracle VAD time stamps (we assume we already have speech time stamps) and pretrained speaker verification model which can be found in tutorial for [Speaker Identification and Verification in NeMo](https://github.com/NVIDIA/NeMo/blob/main/tutorials/speaker_tasks/Speaker_Identification_Verification.ipynb).In ORACLE-VAD-DIARIZATION we show how to perform VAD and then diarization if ground truth timestamped speech were not available (non-oracle VAD). We also have tutorials for [VAD training in NeMo](https://github.com/NVIDIA/NeMo/blob/main/tutorials/asr/Voice_Activity_Detection.ipynb) and [online offline microphone inference](https://github.com/NVIDIA/NeMo/blob/main/tutorials/asr/Online_Offline_Microphone_VAD_Demo.ipynb), where you can custom your model and training/finetuning on your own data.For demonstration purposes we would be using simulated audio from [an4 dataset](http://www.speech.cs.cmu.edu/databases/an4/)
###Code
import os
import wget
ROOT = os.getcwd()
data_dir = os.path.join(ROOT,'data')
os.makedirs(data_dir, exist_ok=True)
an4_audio = os.path.join(data_dir,'an4_diarize_test.wav')
an4_rttm = os.path.join(data_dir,'an4_diarize_test.rttm')
if not os.path.exists(an4_audio):
an4_audio_url = "https://nemo-public.s3.us-east-2.amazonaws.com/an4_diarize_test.wav"
an4_audio = wget.download(an4_audio_url, data_dir)
if not os.path.exists(an4_rttm):
an4_rttm_url = "https://nemo-public.s3.us-east-2.amazonaws.com/an4_diarize_test.rttm"
an4_rttm = wget.download(an4_rttm_url, data_dir)
###Output
_____no_output_____
###Markdown
Let's plot and listen to the audio and visualize the RTTM speaker labels
###Code
import IPython
import matplotlib.pyplot as plt
import numpy as np
import librosa
sr = 16000
signal, sr = librosa.load(an4_audio,sr=sr)
fig,ax = plt.subplots(1,1)
fig.set_figwidth(20)
fig.set_figheight(2)
plt.plot(np.arange(len(signal)),signal,'gray')
fig.suptitle('Reference merged an4 audio', fontsize=16)
plt.xlabel('time (secs)', fontsize=18)
ax.margins(x=0)
plt.ylabel('signal strength', fontsize=16);
a,_ = plt.xticks();plt.xticks(a,a/sr);
IPython.display.Audio(an4_audio)
###Output
_____no_output_____
###Markdown
We will use [pyannote_metrics](https://pyannote.github.io/pyannote-metrics/) for visualization and score calculation. Hence, all labels in RTTM format will eventually be converted to pyannote objects. We created two helper functions: rttm_to_labels (for NeMo intermediate processing) and labels_to_pyannote_object (for scoring and visualization)
###Code
from nemo.collections.asr.parts.utils.speaker_utils import rttm_to_labels, labels_to_pyannote_object
###Output
_____no_output_____
###Markdown
Let's load ground truth RTTM labels and view the reference Annotation timestamps visually
###Code
# view the sample rttm file
!cat {an4_rttm}
labels = rttm_to_labels(an4_rttm)
reference = labels_to_pyannote_object(labels)
print(labels)
reference
###Output
_____no_output_____
###Markdown
Speaker Diarization scripts commonly expect the following arguments:1. manifest_filepath: path to a manifest file containing JSON lines of the format: {'audio_filepath': /path/to/audio_file, 'offset': 0, 'duration': None, 'label': 'infer', 'text': '-', 'num_speakers': None, 'rttm_filepath': /path/to/rttm/file, 'uem_filepath': '/path/to/uem/filepath'}2. out_dir: directory where outputs and intermediate files are stored.3. oracle_vad: if true, we extract speech activity labels from RTTM files; if false, either vad.model_path or an external manifest path containing speech activity labels has to be passed.The mandatory fields are audio_filepath, offset, duration, label, and text. For the rest: pass the number of speakers if you would like to evaluate with a known speaker count, else None; pass the ground truth RTTMs if you would like to score the system against them, else None; a UEM file is used to score only part of your audio for evaluation purposes, so pass one if you would like to evaluate on it, else None.**Note** we expect the audio and the corresponding RTTM to have the **same base name**, and the name should be **unique**. For example, if the audio file name is **test_an4**.wav, we expect the corresponding RTTM file name to be **test_an4**.rttm (note the matching **test_an4** base name). Let's create a manifest with the an4 audio and RTTM available. If you have more than one file, you may also use the script `pathsfiles_to_manifest.py` to generate a manifest file from a list of audio files and, optionally, RTTM files (a minimal loop alternative is sketched after the next cell).
###Code
# Create a manifest for input with below format.
# {'audio_filepath': /path/to/audio_file, 'offset': 0, 'duration':None, 'label': 'infer', 'text': '-',
# 'num_speakers': None, 'rttm_filepath': /path/to/rttm/file, 'uem_filepath'='/path/to/uem/filepath'}
import json
meta = {
'audio_filepath': an4_audio,
'offset': 0,
'duration':None,
'label': 'infer',
'text': '-',
'num_speakers': 2,
'rttm_filepath': an4_rttm,
'uem_filepath' : None
}
with open('data/input_manifest.json','w') as fp:
json.dump(meta,fp)
fp.write('\n')
!cat data/input_manifest.json
output_dir = os.path.join(ROOT, 'oracle_vad')
os.makedirs(output_dir,exist_ok=True)
###Output
_____no_output_____
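###Markdown
As mentioned above, if you have several audio files (and optionally matching RTTMs) you can also write the manifest with a small loop instead of the script. A minimal sketch; `audio_files` and `rttm_files` below are placeholder lists of paths with matching base names.
###Code
# Sketch: build a multi-line manifest from lists of files (one JSON object per line)
import json

audio_files = [an4_audio]   # replace with your list of .wav paths
rttm_files = [an4_rttm]     # replace with your list of .rttm paths, or None entries

with open('data/multi_input_manifest.json', 'w') as fp:
    for audio, rttm in zip(audio_files, rttm_files):
        entry = {
            'audio_filepath': audio,
            'offset': 0,
            'duration': None,
            'label': 'infer',
            'text': '-',
            'num_speakers': None,    # set if known
            'rttm_filepath': rttm,   # None if no ground truth is available
            'uem_filepath': None
        }
        fp.write(json.dumps(entry) + '\n')
###Output
_____no_output_____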
###Markdown
ORACLE-VAD DIARIZATION Oracle-VAD diarization computes speaker embeddings from known speech label timestamps rather than depending on VAD output. This step can also be used to run speaker diarization with RTTMs generated from any external VAD, not just the VAD model from NeMo.The first step is to convert the reference audio RTTM (VAD) time stamps to an oracle manifest file. This manifest file is sent to our speaker diarizer to extract embeddings.This is just an argument in our config; the system automatically computes the oracle manifest based on the RTTMs provided through the input manifest file. Our config file is based on [hydra](https://hydra.cc/docs/intro/). With a hydra config, we ask users to provide values for the variables filled with **???**; these are mandatory fields, and the scripts expect them for successful runs. Notice that the variables filled with **null** are optional; they can be provided if needed but are not mandatory.
###Code
from omegaconf import OmegaConf
MODEL_CONFIG = os.path.join(data_dir,'offline_diarization.yaml')
if not os.path.exists(MODEL_CONFIG):
config_url = "https://raw.githubusercontent.com/NVIDIA/NeMo/main/examples/speaker_tasks/diarization/conf/offline_diarization.yaml"
MODEL_CONFIG = wget.download(config_url,data_dir)
config = OmegaConf.load(MODEL_CONFIG)
print(OmegaConf.to_yaml(config))
###Output
_____no_output_____
###Markdown
Now we can perform speaker diarization based on timestamps generated from ground truth RTTMs rather than generating them through VAD
###Code
pretrained_speaker_model='ecapa_tdnn'
config.diarizer.manifest_filepath = 'data/input_manifest.json'
config.diarizer.out_dir = output_dir #Directory to store intermediate files and prediction outputs
config.diarizer.speaker_embeddings.model_path = pretrained_speaker_model
config.diarizer.speaker_embeddings.parameters.window_length_in_sec = 1.5
config.diarizer.speaker_embeddings.parameters.shift_length_in_sec = 0.75
config.diarizer.oracle_vad = True # ----> ORACLE VAD
config.diarizer.clustering.parameters.oracle_num_speakers = True
from nemo.collections.asr.models import ClusteringDiarizer
oracle_model = ClusteringDiarizer(cfg=config)
# And lets diarize
oracle_model.diarize()
###Output
_____no_output_____
###Markdown
A DER of 0 means it clustered the speaker embeddings correctly. Let's view the predicted RTTM
###Code
!cat {output_dir}/pred_rttms/an4_diarize_test.rttm
pred_labels = rttm_to_labels(output_dir+'/pred_rttms/an4_diarize_test.rttm')
hypothesis = labels_to_pyannote_object(pred_labels)
hypothesis
reference
###Output
_____no_output_____
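###Markdown
Since `reference` and `hypothesis` are both pyannote `Annotation` objects, we can also compute the DER ourselves with [pyannote.metrics](https://pyannote.github.io/pyannote-metrics/), the library behind the visualizations above. A short sketch, assuming pyannote.metrics is installed:
###Code
# Compute the Diarization Error Rate between reference and hypothesis directly
from pyannote.metrics.diarization import DiarizationErrorRate

metric = DiarizationErrorRate()
der = metric(reference, hypothesis)   # a fraction; 0.0 means a perfect match
print(f"DER: {der:.4f}")
###Output
_____no_output_____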
###Markdown
VAD DIARIZATION In this method we compute VAD time stamps with the NeMo VAD model on the input manifest file, then use these speech time stamps to compute speaker embeddings, which are clustered into the number of speakers. Before we proceed, let's look at the speaker diarization config, which we depend on for VAD computation and speaker embedding extraction
###Code
print(OmegaConf.to_yaml(config))
###Output
_____no_output_____
###Markdown
As can be seen, most of the variables in the config are self-explanatory, with VAD variables under the vad section and speaker-related variables under the speaker embeddings section. To perform VAD-based diarization we can ignore `oracle_vad_manifest` in the `speaker_embeddings` section for now and need to fill in the rest. We also need to provide the pretrained `model_path` of the VAD and speaker embeddings .nemo models
###Code
pretrained_vad = 'vad_marblenet'
pretrained_speaker_model = 'ecapa_tdnn'
###Output
_____no_output_____
###Markdown
Note that in this tutorial we use the VAD model MarbleNet-3x2, introduced and published in [ICASSP MarbleNet](https://arxiv.org/pdf/2010.13886.pdf). You might need to tune it on a dev set similar to your dataset if you would like to improve performance.The speakerNet-M-Diarization model achieves a 7.3% confusion error rate on the CH109 set with oracle VAD. This model is trained on the voxceleb1, voxceleb2, Fisher, and SwitchBoard datasets. For improved performance specific to your dataset, finetune the speaker verification model with a dev set similar to your test set.
###Code
output_dir = os.path.join(ROOT,'outputs')
config.diarizer.manifest_filepath = 'data/input_manifest.json'
config.diarizer.out_dir = output_dir #Directory to store intermediate files and prediction outputs
config.diarizer.speaker_embeddings.model_path = pretrained_speaker_model
config.diarizer.speaker_embeddings.parameters.window_length_in_sec = 1.5
config.diarizer.speaker_embeddings.parameters.shift_length_in_sec = 0.75
config.diarizer.oracle_vad = False # compute VAD provided with model_path to vad config
config.diarizer.clustering.parameters.oracle_num_speakers=True
#Here we use our inhouse pretrained NeMo VAD
config.diarizer.vad.model_path = pretrained_vad
config.diarizer.vad.window_length_in_sec = 0.15
config.diarizer.vad.shift_length_in_sec = 0.01
config.diarizer.vad.parameters.onset = 0.8
config.diarizer.vad.parameters.offset = 0.6
config.diarizer.vad.parameters.min_duration_on = 0.1
config.diarizer.vad.parameters.min_duration_off = 0.4
###Output
_____no_output_____
###Markdown
Now that we have passed all the variables we need, let's initialize the clustering model with the above config
###Code
from nemo.collections.asr.models import ClusteringDiarizer
sd_model = ClusteringDiarizer(cfg=config)
###Output
_____no_output_____
###Markdown
And diarize with a single line of code
###Code
sd_model.diarize()
###Output
_____no_output_____
###Markdown
As can be seen, we first performed VAD; then, with the timestamps created in `{output_dir}/vad_outputs` by VAD, we calculated speaker embeddings (`{output_dir}/speaker_outputs/embeddings/`), which are then clustered using spectral clustering. To generate the VAD predicted time stamps, we perform VAD inference to get frame-level predictions → (optional: apply decision smoothing) → given a `threshold`, write the speech segments to an RTTM-like time stamp manifest. We use VAD decision smoothing (87.5% overlap median) as described [here](https://github.com/NVIDIA/NeMo/blob/stable/nemo/collections/asr/parts/utils/vad_utils.py). You can also tune the threshold on your dev set with this provided [script](https://github.com/NVIDIA/NeMo/blob/stable/scripts/voice_activity_detection/vad_tune_threshold.py)
###Code
# VAD predicted time stamps
# you can also use single threshold(=onset=offset) for binarization and plot here
from nemo.collections.asr.parts.utils.vad_utils import plot
plot(
an4_audio,
'outputs/vad_outputs/overlap_smoothing_output_median_0.875/an4_diarize_test.median',
an4_rttm,
per_args = config.diarizer.vad.parameters, #threshold
)
print(f"postprocessing_params: {config.diarizer.vad.parameters}")
###Output
_____no_output_____
###Markdown
Predicted outputs are written to `output_dir/pred_rttms`. Let's see how we predicted, along with the VAD prediction
###Code
!cat outputs/pred_rttms/an4_diarize_test.rttm
pred_labels = rttm_to_labels('outputs/pred_rttms/an4_diarize_test.rttm')
hypothesis = labels_to_pyannote_object(pred_labels)
hypothesis
reference
###Output
_____no_output_____
###Markdown
Storing and Restoring models Now we can save the whole config and model parameters in a single .nemo file and restore from it anytime.
###Code
oracle_model.save_to(os.path.join(output_dir,'diarize.nemo'))
###Output
_____no_output_____
###Markdown
Restore from saved model
###Code
del oracle_model
import nemo.collections.asr as nemo_asr
restored_model = nemo_asr.models.ClusteringDiarizer.restore_from(os.path.join(output_dir,'diarize.nemo'))
###Output
_____no_output_____
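###Markdown
As a quick sanity check (a sketch, assuming the saved config still points to the same manifest and output directory), the restored model can diarize just like the original one:
###Code
# Verify the .nemo round trip by diarizing with the restored model
restored_model.diarize()
###Output
_____no_output_____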
###Markdown
ADD ON - ASR
###Code
IPython.display.Audio(an4_audio)
quartznet = nemo_asr.models.EncDecCTCModel.from_pretrained(model_name="QuartzNet15x5Base-En")
for fname, transcription in zip([an4_audio], quartznet.transcribe(paths2audio_files=[an4_audio])):
print(f"Audio in {fname} was recognized as:\n{transcription}")
###Output
_____no_output_____
###Markdown
IntroductionWho Speaks When? Speaker Diarization is the task of segmenting audio recordings by speaker labels. A diarization system consists of a Voice Activity Detection (VAD) model, to get the time stamps of audio where speech is being spoken while ignoring the background, and a Speaker Embeddings model, to get speaker embeddings for the segments that were previously time stamped. These speaker embeddings are then clustered according to the number of speakers present in the audio recording. In NeMo we support both **oracle VAD** and **non-oracle VAD** diarization. In this tutorial, we first demonstrate how to perform diarization with oracle VAD time stamps (we assume we already have speech time stamps) and a pretrained speaker verification model, which can be found in the tutorial for [Speaker Identification and Verification in NeMo](https://github.com/NVIDIA/NeMo/blob/main/tutorials/speaker_tasks/Speaker_Identification_Verification.ipynb). In ORACLE-VAD-DIARIZATION we show how to perform VAD and then diarization when ground truth timestamped speech is not available (non-oracle VAD). We also have tutorials for [VAD training in NeMo](https://github.com/NVIDIA/NeMo/blob/main/tutorials/asr/Voice_Activity_Detection.ipynb) and [online offline microphone inference](https://github.com/NVIDIA/NeMo/blob/main/tutorials/asr/Online_Offline_Microphone_VAD_Demo.ipynb), where you can customize your model and train/finetune on your own data. For demonstration purposes we will be using simulated audio from the [an4 dataset](http://www.speech.cs.cmu.edu/databases/an4/)
###Code
import os
import wget
ROOT = os.getcwd()
data_dir = os.path.join(ROOT,'data')
os.makedirs(data_dir, exist_ok=True)
an4_audio = os.path.join(data_dir,'an4_diarize_test.wav')
an4_rttm = os.path.join(data_dir,'an4_diarize_test.rttm')
if not os.path.exists(an4_audio):
an4_audio_url = "https://nemo-public.s3.us-east-2.amazonaws.com/an4_diarize_test.wav"
an4_audio = wget.download(an4_audio_url, data_dir)
if not os.path.exists(an4_rttm):
an4_rttm_url = "https://nemo-public.s3.us-east-2.amazonaws.com/an4_diarize_test.rttm"
an4_rttm = wget.download(an4_rttm_url, data_dir)
###Output
_____no_output_____
###Markdown
Let's plot and listen to the audio and visualize the RTTM speaker labels
###Code
import IPython
import matplotlib.pyplot as plt
import numpy as np
import librosa
sr = 16000
signal, sr = librosa.load(an4_audio,sr=sr)
fig,ax = plt.subplots(1,1)
fig.set_figwidth(20)
fig.set_figheight(2)
plt.plot(np.arange(len(signal)),signal,'gray')
fig.suptitle('Reference merged an4 audio', fontsize=16)
plt.xlabel('time (secs)', fontsize=18)
ax.margins(x=0)
plt.ylabel('signal strength', fontsize=16);
a,_ = plt.xticks();plt.xticks(a,a/sr);
IPython.display.Audio(an4_audio)
###Output
_____no_output_____
###Markdown
We will use [pyannote_metrics](https://pyannote.github.io/pyannote-metrics/) for visualization and score calculation. Hence, all labels in RTTM format will eventually be converted to pyannote objects. We created two helper functions: rttm_to_labels (for NeMo intermediate processing) and labels_to_pyannote_object (for scoring and visualization)
###Code
from nemo.collections.asr.parts.utils.speaker_utils import rttm_to_labels, labels_to_pyannote_object
###Output
_____no_output_____
###Markdown
Let's load ground truth RTTM labels and view the reference Annotation timestamps visually
###Code
# view the sample rttm file
!cat {an4_rttm}
labels = rttm_to_labels(an4_rttm)
reference = labels_to_pyannote_object(labels)
print(labels)
reference
###Output
_____no_output_____
###Markdown
Speaker Diarization scripts commonly expect the following arguments:1. manifest_filepath: path to a manifest file containing JSON lines of the format: {'audio_filepath': /path/to/audio_file, 'offset': 0, 'duration': None, 'label': 'infer', 'text': '-', 'num_speakers': None, 'rttm_filepath': /path/to/rttm/file, 'uem_filepath': '/path/to/uem/filepath'}2. out_dir: directory where outputs and intermediate files are stored.3. oracle_vad: if true, we extract speech activity labels from RTTM files; if false, either vad.model_path or an external manifest path containing speech activity labels has to be passed.The mandatory fields are audio_filepath, offset, duration, label, and text. For the rest: pass the number of speakers if you would like to evaluate with a known speaker count, else None; pass the ground truth RTTMs if you would like to score the system against them, else None; a UEM file is used to score only part of your audio for evaluation purposes, so pass one if you would like to evaluate on it, else None.**Note** we expect the audio and the corresponding RTTM to have the **same base name**, and the name should be **unique**. For example, if the audio file name is **test_an4**.wav, we expect the corresponding RTTM file name to be **test_an4**.rttm (note the matching **test_an4** base name). Let's create a manifest with the an4 audio and RTTM available. If you have more than one file, you may also use the script `pathfiles_to_diarize_manifest.py` to generate a manifest file from a list of audio files and, optionally, RTTM files.
###Code
# Create a manifest for input with below format.
# {'audio_filepath': /path/to/audio_file, 'offset': 0, 'duration':None, 'label': 'infer', 'text': '-',
# 'num_speakers': None, 'rttm_filepath': /path/to/rttm/file, 'uem_filepath'='/path/to/uem/filepath'}
import json
meta = {
'audio_filepath': an4_audio,
'offset': 0,
'duration':None,
'label': 'infer',
'text': '-',
'num_speakers': 2,
'rttm_filepath': an4_rttm,
'uem_filepath' : None
}
with open('data/input_manifest.json','w') as fp:
json.dump(meta,fp)
fp.write('\n')
!cat data/input_manifest.json
output_dir = os.path.join(ROOT, 'oracle_vad')
os.makedirs(output_dir,exist_ok=True)
###Output
_____no_output_____
###Markdown
ORACLE-VAD DIARIZATION Oracle-VAD diarization computes speaker embeddings from known speech label timestamps rather than depending on VAD output. This step can also be used to run speaker diarization with RTTMs generated from any external VAD, not just the VAD model from NeMo.The first step is to convert the reference audio RTTM (VAD) time stamps to an oracle manifest file. This manifest file is sent to our speaker diarizer to extract embeddings.This is just an argument in our config; the system automatically computes the oracle manifest based on the RTTMs provided through the input manifest file. Our config file is based on [hydra](https://hydra.cc/docs/intro/). With a hydra config, we ask users to provide values for the variables filled with **???**; these are mandatory fields, and the scripts expect them for successful runs. Notice that the variables filled with **null** are optional; they can be provided if needed but are not mandatory.
###Code
from omegaconf import OmegaConf
MODEL_CONFIG = os.path.join(data_dir,'offline_diarization.yaml')
if not os.path.exists(MODEL_CONFIG):
config_url = "https://raw.githubusercontent.com/NVIDIA/NeMo/main/examples/speaker_tasks/diarization/conf/offline_diarization.yaml"
MODEL_CONFIG = wget.download(config_url,data_dir)
config = OmegaConf.load(MODEL_CONFIG)
print(OmegaConf.to_yaml(config))
###Output
_____no_output_____
###Markdown
Now we can perform speaker diarization based on timestamps generated from ground truth RTTMs rather than generating them through VAD
###Code
pretrained_speaker_model='titanet_large'
config.diarizer.manifest_filepath = 'data/input_manifest.json'
config.diarizer.out_dir = output_dir # Directory to store intermediate files and prediction outputs
config.diarizer.speaker_embeddings.model_path = pretrained_speaker_model
config.diarizer.speaker_embeddings.parameters.window_length_in_sec = 1.5
config.diarizer.speaker_embeddings.parameters.shift_length_in_sec = 0.75
config.diarizer.oracle_vad = True # ----> ORACLE VAD
config.diarizer.clustering.parameters.oracle_num_speakers = True
from nemo.collections.asr.models import ClusteringDiarizer
oracle_model = ClusteringDiarizer(cfg=config)
# And lets diarize
oracle_model.diarize()
###Output
_____no_output_____
###Markdown
A DER of 0 means it clustered the speaker embeddings correctly. Let's view the predicted RTTM
###Code
!cat {output_dir}/pred_rttms/an4_diarize_test.rttm
pred_labels = rttm_to_labels(output_dir+'/pred_rttms/an4_diarize_test.rttm')
hypothesis = labels_to_pyannote_object(pred_labels)
hypothesis
reference
###Output
_____no_output_____
###Markdown
VAD DIARIZATION In this method we compute VAD time stamps with the NeMo VAD model on the input manifest file, then use these speech time stamps to compute speaker embeddings, which are clustered into the number of speakers. Before we proceed, let's look at the speaker diarization config, which we depend on for VAD computation and speaker embedding extraction
###Code
print(OmegaConf.to_yaml(config))
###Output
_____no_output_____
###Markdown
As can be seen, most of the variables in the config are self-explanatory, with VAD variables under the vad section and speaker-related variables under the speaker embeddings section. To perform VAD-based diarization we can ignore `oracle_vad_manifest` in the `speaker_embeddings` section for now and need to fill in the rest. We also need to provide the pretrained `model_path` of the VAD and speaker embeddings .nemo models
###Code
pretrained_vad = 'vad_marblenet'
pretrained_speaker_model = 'titanet_large'
###Output
_____no_output_____
###Markdown
Note that in this tutorial we use the VAD model MarbleNet-3x2, introduced and published in [ICASSP MarbleNet](https://arxiv.org/pdf/2010.13886.pdf). You might need to tune it on a dev set similar to your dataset if you would like to improve performance.The speakerNet-M-Diarization model achieves a 7.3% confusion error rate on the CH109 set with oracle VAD. This model is trained on the voxceleb1, voxceleb2, Fisher, and SwitchBoard datasets. For improved performance specific to your dataset, finetune the speaker verification model with a dev set similar to your test set.
###Code
output_dir = os.path.join(ROOT,'outputs')
config.diarizer.manifest_filepath = 'data/input_manifest.json'
config.diarizer.out_dir = output_dir #Directory to store intermediate files and prediction outputs
config.diarizer.speaker_embeddings.model_path = pretrained_speaker_model
config.diarizer.speaker_embeddings.parameters.window_length_in_sec = 1.5
config.diarizer.speaker_embeddings.parameters.shift_length_in_sec = 0.75
config.diarizer.oracle_vad = False # compute VAD provided with model_path to vad config
config.diarizer.clustering.parameters.oracle_num_speakers=True
#Here we use our inhouse pretrained NeMo VAD
config.diarizer.vad.model_path = pretrained_vad
config.diarizer.vad.window_length_in_sec = 0.15
config.diarizer.vad.shift_length_in_sec = 0.01
config.diarizer.vad.parameters.onset = 0.8
config.diarizer.vad.parameters.offset = 0.6
config.diarizer.vad.parameters.min_duration_on = 0.1
config.diarizer.vad.parameters.min_duration_off = 0.4
###Output
_____no_output_____
###Markdown
Now that we have passed all the variables we need, let's initialize the clustering model with the above config
###Code
from nemo.collections.asr.models import ClusteringDiarizer
sd_model = ClusteringDiarizer(cfg=config)
###Output
_____no_output_____
###Markdown
And diarize with a single line of code
###Code
sd_model.diarize()
###Output
_____no_output_____
###Markdown
As can be seen, we first performed VAD; then, with the timestamps created in `{output_dir}/vad_outputs` by VAD, we calculated speaker embeddings (`{output_dir}/speaker_outputs/embeddings/`), which are then clustered using spectral clustering. To generate the VAD predicted time stamps, we perform VAD inference to get frame-level predictions → (optional: apply decision smoothing) → given a `threshold`, write the speech segments to an RTTM-like time stamp manifest. We use VAD decision smoothing (87.5% overlap median) as described [here](https://github.com/NVIDIA/NeMo/blob/stable/nemo/collections/asr/parts/utils/vad_utils.py). You can also tune the threshold on your dev set with this provided [script](https://github.com/NVIDIA/NeMo/blob/stable/scripts/voice_activity_detection/vad_tune_threshold.py)
###Code
# VAD predicted time stamps
# you can also use single threshold(=onset=offset) for binarization and plot here
from nemo.collections.asr.parts.utils.vad_utils import plot
plot(
an4_audio,
'outputs/vad_outputs/overlap_smoothing_output_median_0.875/an4_diarize_test.median',
an4_rttm,
per_args = config.diarizer.vad.parameters, #threshold
)
print(f"postprocessing_params: {config.diarizer.vad.parameters}")
###Output
_____no_output_____
###Markdown
Predicted outputs are written to `output_dir/pred_rttms`. Let's see how we predicted, along with the VAD prediction
###Code
!cat outputs/pred_rttms/an4_diarize_test.rttm
pred_labels = rttm_to_labels('outputs/pred_rttms/an4_diarize_test.rttm')
hypothesis = labels_to_pyannote_object(pred_labels)
hypothesis
reference
###Output
_____no_output_____
###Markdown
Storing and Restoring models Now we can save the whole config and model parameters in a single .nemo file and restore from it anytime.
###Code
oracle_model.save_to(os.path.join(output_dir,'diarize.nemo'))
###Output
_____no_output_____
###Markdown
Restore from saved model
###Code
del oracle_model
import nemo.collections.asr as nemo_asr
restored_model = nemo_asr.models.ClusteringDiarizer.restore_from(os.path.join(output_dir,'diarize.nemo'))
###Output
_____no_output_____
###Markdown
IntroductionWho Speaks When? Speaker Diarization is the task of segmenting audio recordings by speaker labels. A diarization system consists of a Voice Activity Detection (VAD) model, to get the time stamps of audio where speech is being spoken while ignoring the background, and a Speaker Embeddings model, to get speaker embeddings for the segments that were previously time stamped. These speaker embeddings are then clustered according to the number of speakers present in the audio recording. In NeMo we support both **oracle VAD** and **non-oracle VAD** diarization. In this tutorial, we first demonstrate how to perform diarization with oracle VAD time stamps (we assume we already have speech time stamps) and a pretrained speaker verification model, which can be found in the tutorial for [Speaker Recognition and Verification in NeMo](https://github.com/NVIDIA/NeMo/blob/main/tutorials/speaker_tasks/Speaker_Recognition_Verification.ipynb). In the [second part](ORACLE-VAD-DIARIZATION) we show how to perform VAD and then diarization when ground truth timestamped speech is not available (non-oracle VAD). We also have tutorials for [VAD training in NeMo](https://github.com/NVIDIA/NeMo/blob/main/tutorials/asr/Voice_Activity_Detection.ipynb) and [online offline microphone inference](https://github.com/NVIDIA/NeMo/blob/main/tutorials/asr/Online_Offline_Microphone_VAD_Demo.ipynb), where you can customize your model and train/finetune on your own data. For demonstration purposes we will be using simulated audio from the [an4 dataset](http://www.speech.cs.cmu.edu/databases/an4/)
###Code
import os
import wget
ROOT = os.getcwd()
data_dir = os.path.join(ROOT,'data')
os.makedirs(data_dir, exist_ok=True)
an4_audio = os.path.join(data_dir,'an4_diarize_test.wav')
an4_rttm = os.path.join(data_dir,'an4_diarize_test.rttm')
if not os.path.exists(an4_audio):
an4_audio_url = "https://nemo-public.s3.us-east-2.amazonaws.com/an4_diarize_test.wav"
an4_audio = wget.download(an4_audio_url, data_dir)
if not os.path.exists(an4_rttm):
an4_rttm_url = "https://nemo-public.s3.us-east-2.amazonaws.com/an4_diarize_test.rttm"
an4_rttm = wget.download(an4_rttm_url, data_dir)
###Output
_____no_output_____
###Markdown
Let's plot and listen to the audio and visualize the RTTM speaker labels
###Code
import IPython
import matplotlib.pyplot as plt
import numpy as np
import librosa
sr = 16000
signal, sr = librosa.load(an4_audio,sr=sr)
fig,ax = plt.subplots(1,1)
fig.set_figwidth(20)
fig.set_figheight(2)
plt.plot(np.arange(len(signal)),signal,'gray')
fig.suptitle('Reference merged an4 audio', fontsize=16)
plt.xlabel('time (secs)', fontsize=18)
ax.margins(x=0)
plt.ylabel('signal strength', fontsize=16);
a,_ = plt.xticks();plt.xticks(a,a/sr);
IPython.display.Audio(an4_audio)
###Output
_____no_output_____
###Markdown
We will use [pyannote_metrics](https://pyannote.github.io/pyannote-metrics/) for visualization and score calculation. Hence, all labels in RTTM format will eventually be converted to pyannote objects. We created two helper functions: rttm_to_labels (for NeMo intermediate processing) and labels_to_pyannote_object (for scoring and visualization)
###Code
from nemo.collections.asr.parts.utils.speaker_utils import rttm_to_labels, labels_to_pyannote_object
###Output
_____no_output_____
###Markdown
Let's load ground truth RTTM labels and view the reference Annotation timestamps visually
###Code
# view the sample rttm file
!cat {an4_rttm}
labels = rttm_to_labels(an4_rttm)
reference = labels_to_pyannote_object(labels)
print(labels)
reference
###Output
_____no_output_____
###Markdown
Speaker Diarization scripts commonly expect two files:1. paths2audio_files: either a list of audio file paths or a file containing paths to the audio files for which we need to perform diarization.2. path2groundtruth_rttm_files (optional): either a list of RTTM file paths or a file containing paths to RTTM files (this can be passed if we need to calculate the DER based on our ground truth RTTM files).**Note** we expect the audio and the corresponding RTTM to have the **same base name**, and the name should be **unique**. For example, if the audio file name is **test_an4**.wav, we expect the corresponding RTTM file name to be **test_an4**.rttm (note the matching **test_an4** base name).Now let's create the paths2audio_files list (or file) for which we need to perform diarization
###Code
paths2audio_files = [an4_audio]
print(paths2audio_files)
###Output
_____no_output_____
###Markdown
Similarly, create the `path2groundtruth_rttm_files` list (this is optional, and needed for score calculation)
###Code
path2groundtruth_rttm_files = [an4_rttm]
print(path2groundtruth_rttm_files)
###Output
_____no_output_____
###Markdown
ORACLE-VAD DIARIZATION Oracle-VAD diarization computes speaker embeddings from known speech label timestamps rather than depending on VAD output. This step can also be used to run speaker diarization with RTTMs generated from any external VAD, not just the VAD model from NeMo.The first step is to convert the reference audio RTTM (VAD) time stamps to an oracle manifest file. This manifest file is sent to our speaker diarizer to extract embeddings.For that, let's use the write_rttm2manifest function, which takes paths2audio_files and paths2rttm_files as arguments
###Code
from nemo.collections.asr.parts.utils.speaker_utils import write_rttm2manifest
output_dir = os.path.join(ROOT, 'oracle_vad')
os.makedirs(output_dir,exist_ok=True)
oracle_manifest = os.path.join(output_dir,'oracle_manifest.json')
write_rttm2manifest(paths2audio_files=paths2audio_files,
paths2rttm_files=path2groundtruth_rttm_files,
manifest_file=oracle_manifest)
!cat {oracle_manifest}
###Output
_____no_output_____
###Markdown
Our config file is based on [hydra](https://hydra.cc/docs/intro/). With a hydra config, we ask users to provide values for the variables filled with **???**; these are mandatory fields, and the scripts expect them for successful runs. Notice that the variables filled with **null** are optional; they can be provided if needed but are not mandatory.
###Code
from omegaconf import OmegaConf
MODEL_CONFIG = os.path.join(data_dir,'speaker_diarization.yaml')
if not os.path.exists(MODEL_CONFIG):
config_url = "https://raw.githubusercontent.com/NVIDIA/NeMo/main/examples/speaker_tasks/diarization/conf/speaker_diarization.yaml"
MODEL_CONFIG = wget.download(config_url,data_dir)
config = OmegaConf.load(MODEL_CONFIG)
print(OmegaConf.to_yaml(config))
###Output
_____no_output_____
###Markdown
Now we can perform speaker diarization based on timestamps generated from ground truth RTTMs rather than generating them through VAD
###Code
pretrained_speaker_model='speakerdiarization_speakernet'
config.diarizer.paths2audio_files = paths2audio_files
config.diarizer.path2groundtruth_rttm_files = path2groundtruth_rttm_files
config.diarizer.out_dir = output_dir #Directory to store intermediate files and prediction outputs
config.diarizer.speaker_embeddings.model_path = pretrained_speaker_model
# Ignoring vad we just need to pass the manifest file we created
config.diarizer.speaker_embeddings.oracle_vad_manifest = oracle_manifest
config.diarizer.oracle_num_speakers=2
from nemo.collections.asr.models import ClusteringDiarizer
oracle_model = ClusteringDiarizer(cfg=config)
# And lets diarize
oracle_model.diarize()
###Output
_____no_output_____
###Markdown
A DER of 0 means it clustered the speaker embeddings correctly. Let's view the predicted RTTM
###Code
!cat {output_dir}/pred_rttms/an4_diarize_test.rttm
pred_labels = rttm_to_labels(output_dir+'/pred_rttms/an4_diarize_test.rttm')
hypothesis = labels_to_pyannote_object(pred_labels)
hypothesis
reference
###Output
_____no_output_____
###Markdown
VAD DIARIZATION In this method we compute VAD time stamps with the NeMo VAD model on `paths2audio_files`, then use these speech time stamps to compute speaker embeddings, which are clustered into the number of speakers. Before we proceed, let's look at the speaker diarization config, which we depend on for VAD computation and speaker embedding extraction
###Code
print(OmegaConf.to_yaml(config))
###Output
_____no_output_____
###Markdown
As can be seen, most of the variables in the config are self-explanatory, with VAD variables under the vad section and speaker-related variables under the speaker embeddings section. To perform VAD-based diarization we can ignore `oracle_vad_manifest` in the `speaker_embeddings` section for now and need to fill in the rest. We also need to provide the pretrained `model_path` of the VAD and speaker embeddings .nemo models
###Code
pretrained_vad = 'vad_marblenet'
pretrained_speaker_model = 'speakerdiarization_speakernet'
###Output
_____no_output_____
###Markdown
Note that in this tutorial we use the VAD model MarbleNet-3x2, introduced and published in [ICASSP MarbleNet](https://arxiv.org/pdf/2010.13886.pdf). You might need to tune it on a dev set similar to your dataset if you would like to improve performance.The speakerNet-M-Diarization model achieves a 7.3% confusion error rate on the CH109 set with oracle VAD. This model is trained on the voxceleb1, voxceleb2, Fisher, and SwitchBoard datasets. For improved performance specific to your dataset, finetune the speaker verification model with a dev set similar to your test set.
###Code
output_dir = os.path.join(ROOT,'outputs')
config.diarizer.paths2audio_files = paths2audio_files
config.diarizer.path2groundtruth_rttm_files = path2groundtruth_rttm_files
config.diarizer.out_dir = output_dir # Directory to store intermediate files and prediction outputs
config.diarizer.speaker_embeddings.model_path = pretrained_speaker_model
#Here we use our inhouse pretrained NeMo VAD
config.diarizer.vad.model_path = pretrained_vad
config.diarizer.vad.window_length_in_sec = 0.15
config.diarizer.vad.shift_length_in_sec = 0.01
# config.diarizer.vad.threshold = 0.8 threshold would be deprecated in release 1.5
config.diarizer.vad.postprocessing_params.onset = 0.8
config.diarizer.vad.postprocessing_params.offset = 0.7
config.diarizer.vad.postprocessing_params.min_duration_on = 0.1
config.diarizer.vad.postprocessing_params.min_duration_off = 0.3
###Output
_____no_output_____
###Markdown
Now that we have passed all the variables we need, let's initialize the clustering model with the above config
###Code
from nemo.collections.asr.models import ClusteringDiarizer
sd_model = ClusteringDiarizer(cfg=config)
###Output
_____no_output_____
###Markdown
And diarize with a single line of code
###Code
sd_model.diarize()
###Output
_____no_output_____
###Markdown
As can be seen, we first performed VAD; then, with the timestamps created in `{output_dir}/vad_outputs` by VAD, we calculated speaker embeddings (`{output_dir}/speaker_outputs/embeddings/`), which are then clustered using spectral clustering. To generate the VAD predicted time stamps, we perform VAD inference to get frame-level predictions → (optional: apply decision smoothing) → given a `threshold`, write the speech segments to an RTTM-like time stamp manifest. We use VAD decision smoothing (87.5% overlap median) as described [here](https://github.com/NVIDIA/NeMo/blob/stable/nemo/collections/asr/parts/utils/vad_utils.py). You can also tune the threshold on your dev set with this provided [script](https://github.com/NVIDIA/NeMo/blob/stable/scripts/voice_activity_detection/vad_tune_threshold.py)
###Code
# VAD predicted time stamps
# you can also use single threshold(=onset=offset) for binarization and plot here
from nemo.collections.asr.parts.utils.vad_utils import plot
plot(
paths2audio_files[0],
'outputs/vad_outputs/overlap_smoothing_output_median_0.875/an4_diarize_test.median',
path2groundtruth_rttm_files[0],
per_args = config.diarizer.vad.postprocessing_params, #threshold
)
print(f"postprocessing_params: {config.diarizer.vad.postprocessing_params}")
###Output
_____no_output_____
###Markdown
Predicted outputs are written to `output_dir/pred_rttms`. Let's see how we predicted, along with the VAD prediction
###Code
!cat outputs/pred_rttms/an4_diarize_test.rttm
pred_labels = rttm_to_labels('outputs/pred_rttms/an4_diarize_test.rttm')
hypothesis = labels_to_pyannote_object(pred_labels)
hypothesis
reference
###Output
_____no_output_____
###Markdown
Storing and Restoring models Now we can save the whole config and model parameters in a single .nemo file and restore from it anytime.
###Code
oracle_model.save_to(os.path.join(output_dir,'diarize.nemo'))
###Output
_____no_output_____
###Markdown
Restore from saved model
###Code
del oracle_model
import nemo.collections.asr as nemo_asr
restored_model = nemo_asr.models.ClusteringDiarizer.restore_from(os.path.join(output_dir,'diarize.nemo'))
###Output
_____no_output_____
###Markdown
ADD ON - ASR
###Code
IPython.display.Audio(an4_audio)
quartznet = nemo_asr.models.EncDecCTCModel.from_pretrained(model_name="QuartzNet15x5Base-En")
for fname, transcription in zip(paths2audio_files, quartznet.transcribe(paths2audio_files=paths2audio_files)):
print(f"Audio in {fname} was recognized as: {transcription}")
###Output
_____no_output_____
###Markdown
IntroductionWho Speaks When? Speaker Diarization is the task of segmenting audio recordings by speaker labels. A diarization system consists of a Voice Activity Detection (VAD) model, to get the time stamps of audio where speech is being spoken while ignoring the background, and a Speaker Embeddings model, to get speaker embeddings for the segments that were previously time stamped. These speaker embeddings are then clustered according to the number of speakers present in the audio recording. In NeMo we support both **oracle VAD** and **non-oracle VAD** diarization. In this tutorial, we first demonstrate how to perform diarization with oracle VAD time stamps (we assume we already have speech time stamps) and a pretrained speaker verification model, which can be found in the tutorial for [Speaker Identification and Verification in NeMo](https://github.com/NVIDIA/NeMo/blob/main/tutorials/speaker_tasks/Speaker_Identification_Verification.ipynb). In ORACLE-VAD-DIARIZATION we show how to perform VAD and then diarization when ground truth timestamped speech is not available (non-oracle VAD). We also have tutorials for [VAD training in NeMo](https://github.com/NVIDIA/NeMo/blob/main/tutorials/asr/Voice_Activity_Detection.ipynb) and [online offline microphone inference](https://github.com/NVIDIA/NeMo/blob/main/tutorials/asr/Online_Offline_Microphone_VAD_Demo.ipynb), where you can customize your model and train/finetune on your own data. For demonstration purposes we will be using simulated audio from the [an4 dataset](http://www.speech.cs.cmu.edu/databases/an4/)
###Code
import os
import wget
ROOT = os.getcwd()
data_dir = os.path.join(ROOT,'data')
os.makedirs(data_dir, exist_ok=True)
an4_audio = os.path.join(data_dir,'an4_diarize_test.wav')
an4_rttm = os.path.join(data_dir,'an4_diarize_test.rttm')
if not os.path.exists(an4_audio):
an4_audio_url = "https://nemo-public.s3.us-east-2.amazonaws.com/an4_diarize_test.wav"
an4_audio = wget.download(an4_audio_url, data_dir)
if not os.path.exists(an4_rttm):
an4_rttm_url = "https://nemo-public.s3.us-east-2.amazonaws.com/an4_diarize_test.rttm"
an4_rttm = wget.download(an4_rttm_url, data_dir)
###Output
_____no_output_____
###Markdown
Let's plot and listen to the audio and visualize the RTTM speaker labels
###Code
import IPython
import matplotlib.pyplot as plt
import numpy as np
import librosa
sr = 16000
signal, sr = librosa.load(an4_audio,sr=sr)
fig,ax = plt.subplots(1,1)
fig.set_figwidth(20)
fig.set_figheight(2)
plt.plot(np.arange(len(signal)),signal,'gray')
fig.suptitle('Reference merged an4 audio', fontsize=16)
plt.xlabel('time (secs)', fontsize=18)
ax.margins(x=0)
plt.ylabel('signal strength', fontsize=16);
a,_ = plt.xticks();plt.xticks(a,a/sr);
IPython.display.Audio(an4_audio)
###Output
_____no_output_____
###Markdown
We will use [pyannote_metrics](https://pyannote.github.io/pyannote-metrics/) for visualization and score calculation. Hence, all labels in RTTM format will eventually be converted to pyannote objects. We created two helper functions: rttm_to_labels (for NeMo intermediate processing) and labels_to_pyannote_object (for scoring and visualization)
###Code
from nemo.collections.asr.parts.utils.speaker_utils import rttm_to_labels, labels_to_pyannote_object
###Output
_____no_output_____
###Markdown
Let's load ground truth RTTM labels and view the reference Annotation timestamps visually
###Code
# view the sample rttm file
!cat {an4_rttm}
labels = rttm_to_labels(an4_rttm)
reference = labels_to_pyannote_object(labels)
print(labels)
reference
###Output
_____no_output_____
###Markdown
Speaker Diarization scripts commonly expect the following arguments:1. manifest_filepath: path to a manifest file containing JSON lines of the format: {'audio_filepath': /path/to/audio_file, 'offset': 0, 'duration': None, 'label': 'infer', 'text': '-', 'num_speakers': None, 'rttm_filepath': /path/to/rttm/file, 'uem_filepath': '/path/to/uem/filepath'}2. out_dir: directory where outputs and intermediate files are stored.3. oracle_vad: if true, we extract speech activity labels from RTTM files; if false, either vad.model_path or an external manifest path containing speech activity labels has to be passed.The mandatory fields are audio_filepath, offset, duration, label, and text. For the rest: pass the number of speakers if you would like to evaluate with a known speaker count, else None; pass the ground truth RTTMs if you would like to score the system against them, else None; a UEM file is used to score only part of your audio for evaluation purposes, so pass one if you would like to evaluate on it, else None.**Note** we expect the audio and the corresponding RTTM to have the **same base name**, and the name should be **unique**. For example, if the audio file name is **test_an4**.wav, we expect the corresponding RTTM file name to be **test_an4**.rttm (note the matching **test_an4** base name). Let's create a manifest with the an4 audio and RTTM available. If you have more than one file, you may also use the script `pathsfiles_to_manifest.py` to generate a manifest file from a list of audio files and, optionally, RTTM files.
###Code
# Create a manifest for input with below format.
# {'audio_filepath': /path/to/audio_file, 'offset': 0, 'duration':None, 'label': 'infer', 'text': '-',
# 'num_speakers': None, 'rttm_filepath': /path/to/rttm/file, 'uem_filepath'='/path/to/uem/filepath'}
import json
meta = {
'audio_filepath': an4_audio,
'offset': 0,
'duration':None,
'label': 'infer',
'text': '-',
'num_speakers': 2,
'rttm_filepath': an4_rttm,
'uem_filepath' : None
}
with open('data/input_manifest.json','w') as fp:
json.dump(meta,fp)
fp.write('\n')
!cat data/input_manifest.json
output_dir = os.path.join(ROOT, 'oracle_vad')
os.makedirs(output_dir,exist_ok=True)
###Output
_____no_output_____
###Markdown
ORACLE-VAD DIARIZATION Oracle-VAD diarization computes speaker embeddings from known speech label timestamps rather than depending on VAD output. This step can also be used to run speaker diarization with RTTMs generated from any external VAD, not just the VAD model from NeMo.The first step is to convert the reference audio RTTM (VAD) time stamps to an oracle manifest file. This manifest file is sent to our speaker diarizer to extract embeddings.This is just an argument in our config; the system automatically computes the oracle manifest based on the RTTMs provided through the input manifest file. Our config file is based on [hydra](https://hydra.cc/docs/intro/). With a hydra config, we ask users to provide values for the variables filled with **???**; these are mandatory fields, and the scripts expect them for successful runs. Notice that the variables filled with **null** are optional; they can be provided if needed but are not mandatory.
###Code
from omegaconf import OmegaConf
MODEL_CONFIG = os.path.join(data_dir,'offline_diarization.yaml')
if not os.path.exists(MODEL_CONFIG):
config_url = "https://raw.githubusercontent.com/NVIDIA/NeMo/modify_speaker_input/examples/speaker_tasks/diarization/conf/offline_diarization.yaml"
MODEL_CONFIG = wget.download(config_url,data_dir)
config = OmegaConf.load(MODEL_CONFIG)
print(OmegaConf.to_yaml(config))
###Output
_____no_output_____
###Markdown
Now we can perform speaker diarization based on timestamps generated from ground truth RTTMs rather than generating them through VAD
###Code
pretrained_speaker_model='ecapa_tdnn'
config.diarizer.manifest_filepath = 'data/input_manifest.json'
config.diarizer.out_dir = output_dir #Directory to store intermediate files and prediction outputs
config.diarizer.speaker_embeddings.model_path = pretrained_speaker_model
config.diarizer.speaker_embeddings.parameters.window_length_in_sec = 1.5
config.diarizer.speaker_embeddings.parameters.shift_length_in_sec = 0.75
config.diarizer.oracle_vad = True # ----> ORACLE VAD
config.diarizer.clustering.parameters.oracle_num_speakers = True
from nemo.collections.asr.models import ClusteringDiarizer
oracle_model = ClusteringDiarizer(cfg=config)
# And lets diarize
oracle_model.diarize()
###Output
_____no_output_____
###Markdown
A DER of 0 means it clustered the speaker embeddings correctly. Let's view the predicted RTTM
###Code
!cat {output_dir}/pred_rttms/an4_diarize_test.rttm
pred_labels = rttm_to_labels(output_dir+'/pred_rttms/an4_diarize_test.rttm')
hypothesis = labels_to_pyannote_object(pred_labels)
hypothesis
reference
###Output
_____no_output_____
###Markdown
VAD DIARIZATION In this method we compute VAD time stamps with the NeMo VAD model on the input manifest file, then use these speech time stamps to compute speaker embeddings, which are clustered into the number of speakers. Before we proceed, let's look at the speaker diarization config, which we depend on for VAD computation and speaker embedding extraction
###Code
print(OmegaConf.to_yaml(config))
###Output
_____no_output_____
###Markdown
As can be seen, most of the variables in the config are self-explanatory, with VAD variables under the vad section and speaker-related variables under the speaker embeddings section. To perform VAD-based diarization we can ignore `oracle_vad_manifest` in the `speaker_embeddings` section for now and need to fill in the rest. We also need to provide the pretrained `model_path` of the VAD and speaker embeddings .nemo models
###Code
pretrained_vad = 'vad_marblenet'
pretrained_speaker_model = 'ecapa_tdnn'
###Output
_____no_output_____
###Markdown
Note that in this tutorial we use the VAD model MarbleNet-3x2, introduced and published in [ICASSP MarbleNet](https://arxiv.org/pdf/2010.13886.pdf). You might need to tune it on a dev set similar to your dataset if you would like to improve performance.The speakerNet-M-Diarization model achieves a 7.3% confusion error rate on the CH109 set with oracle VAD. This model is trained on the voxceleb1, voxceleb2, Fisher, and SwitchBoard datasets. For improved performance specific to your dataset, finetune the speaker verification model with a dev set similar to your test set.
###Code
output_dir = os.path.join(ROOT,'outputs')
config.diarizer.manifest_filepath = 'data/input_manifest.json'
config.diarizer.out_dir = output_dir #Directory to store intermediate files and prediction outputs
config.diarizer.speaker_embeddings.model_path = pretrained_speaker_model
config.diarizer.speaker_embeddings.parameters.window_length_in_sec = 1.5
config.diarizer.speaker_embeddings.parameters.shift_length_in_sec = 0.75
config.diarizer.oracle_vad = False # compute VAD provided with model_path to vad config
config.diarizer.clustering.parameters.oracle_num_speakers=True
#Here we use our inhouse pretrained NeMo VAD
config.diarizer.vad.model_path = pretrained_vad
config.diarizer.vad.window_length_in_sec = 0.15
config.diarizer.vad.shift_length_in_sec = 0.01
config.diarizer.vad.parameters.onset = 0.8
config.diarizer.vad.parameters.offset = 0.6
config.diarizer.vad.parameters.min_duration_on = 0.1
config.diarizer.vad.parameters.min_duration_off = 0.4
###Output
_____no_output_____
###Markdown
Now that we have passed all the variables we need, let's initialize the clustering model with the above config
###Code
from nemo.collections.asr.models import ClusteringDiarizer
sd_model = ClusteringDiarizer(cfg=config)
###Output
_____no_output_____
###Markdown
And diarize with a single line of code
###Code
sd_model.diarize()
###Output
_____no_output_____
###Markdown
As can be seen, we first performed VAD; then, with the timestamps created in `{output_dir}/vad_outputs` by VAD, we calculated speaker embeddings (`{output_dir}/speaker_outputs/embeddings/`), which are then clustered using spectral clustering. To generate the VAD predicted time stamps, we perform VAD inference to get frame-level predictions → (optional: apply decision smoothing) → given a `threshold`, write the speech segments to an RTTM-like time stamp manifest. We use VAD decision smoothing (87.5% overlap median) as described [here](https://github.com/NVIDIA/NeMo/blob/stable/nemo/collections/asr/parts/utils/vad_utils.py). You can also tune the threshold on your dev set with this provided [script](https://github.com/NVIDIA/NeMo/blob/stable/scripts/voice_activity_detection/vad_tune_threshold.py)
###Code
# VAD predicted time stamps
# you can also use single threshold(=onset=offset) for binarization and plot here
from nemo.collections.asr.parts.utils.vad_utils import plot
plot(
an4_audio,
'outputs/vad_outputs/overlap_smoothing_output_median_0.875/an4_diarize_test.median',
an4_rttm,
per_args = config.diarizer.vad.parameters, #threshold
)
print(f"postprocessing_params: {config.diarizer.vad.parameters}")
###Output
_____no_output_____
###Markdown
Predicted outputs are written to `output_dir/pred_rttms`. Let's see how we predicted, along with the VAD prediction
###Code
!cat outputs/pred_rttms/an4_diarize_test.rttm
pred_labels = rttm_to_labels('outputs/pred_rttms/an4_diarize_test.rttm')
hypothesis = labels_to_pyannote_object(pred_labels)
hypothesis
reference
###Output
_____no_output_____
###Markdown
Storing and Restoring models Now we can save the whole config and model parameters in a single .nemo file and restore from it anytime.
###Code
oracle_model.save_to(os.path.join(output_dir,'diarize.nemo'))
###Output
_____no_output_____
###Markdown
Restore from saved model
###Code
del oracle_model
import nemo.collections.asr as nemo_asr
restored_model = nemo_asr.models.ClusteringDiarizer.restore_from(os.path.join(output_dir,'diarize.nemo'))
###Output
_____no_output_____
###Markdown
ADD ON - ASR
###Code
IPython.display.Audio(an4_audio)
quartznet = nemo_asr.models.EncDecCTCModel.from_pretrained(model_name="QuartzNet15x5Base-En")
for fname, transcription in zip([an4_audio], quartznet.transcribe(paths2audio_files=[an4_audio])):
print(f"Audio in {fname} was recognized as:\n{transcription}")
###Output
_____no_output_____ |
LS_DS_234.ipynb | ###Markdown
Lambda School Data Science*Unit 2, Sprint 3, Module 4*--- Model Interpretation- Visualize and interpret **partial dependence plots**- Explain individual predictions with **shapley value plots** SetupRun the code cell below. You can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab.Libraries:- category_encoders- matplotlib- numpy- pandas- [**pdpbox**](https://github.com/SauceCat/PDPbox)- plotly- scikit-learn- scipy.stats- [**shap**](https://github.com/slundberg/shap)- xgboost
###Code
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
!pip install category_encoders==2.*
!pip install pdpbox
!pip install shap
# If you're working locally:
else:
DATA_PATH = '../data/'
# Ignore this warning: https://github.com/dmlc/xgboost/issues/4300
# xgboost/core.py:587: FutureWarning: Series.base is deprecated and will be removed in a future version
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='xgboost')
###Output
_____no_output_____
###Markdown
Visualize and interpret partial dependence plots Overview Partial dependence plots show the relationship between 1-2 individual features and the target — how predictions partially depend on the isolated features. It's explained well by [PDPbox library documentation](https://pdpbox.readthedocs.io/en/latest/):>**The common headache**: When using black box machine learning algorithms like random forest and boosting, it is hard to understand the relations between predictors and model outcome. For example, in terms of random forest, all we get is the feature importance. Although we can know which feature is significantly influencing the outcome based on the importance calculation, it really sucks that we don’t know in which direction it is influencing. And in most of the real cases, the effect is non-monotonic. We need some powerful tools to help understanding the complex relations between predictors and model prediction. Let's also look at an [animation by Christoph Molnar](https://twitter.com/ChristophMolnar/status/1066398522608635904), author of [_Interpretable Machine Learning_](https://christophm.github.io/interpretable-ml-book/pdp.htmlexamples):> Partial dependence plots show how a feature affects predictions of a Machine Learning model on average.> 1. Define grid along feature> 2. Model predictions at grid points> 3. Line per data instance -> ICE (Individual Conditional Expectation) curve> 4. Average curves to get a PDP (Partial Dependence Plot) To demonstrate, we'll use a Lending Club dataset, to predict interest rates. (Like [this example](https://rrherr-project2-example.herokuapp.com/).)
###Code
import pandas as pd
# Stratified sample, 10% of expired Lending Club loans, grades A-D
# Source: https://www.lendingclub.com/info/download-data.action
history = pd.read_csv(DATA_PATH+'lending-club/lending-club-subset.csv')
history['issue_d'] = pd.to_datetime(history['issue_d'], infer_datetime_format=True)
# Just use 36 month loans
history = history[history.term==' 36 months']
# Index & sort by issue date
history = history.set_index('issue_d').sort_index()
# Clean data, engineer feature, & select subset of features
history = history.rename(columns=
{'annual_inc': 'Annual Income',
'fico_range_high': 'Credit Score',
'funded_amnt': 'Loan Amount',
'title': 'Loan Purpose'})
history['Interest Rate'] = history['int_rate'].str.strip('%').astype(float)
history['Monthly Debts'] = history['Annual Income'] / 12 * history['dti'] / 100
columns = ['Annual Income',
'Credit Score',
'Loan Amount',
'Loan Purpose',
'Monthly Debts',
'Interest Rate']
history = history[columns]
history = history.dropna()
# Test on the last 10,000 loans,
# Validate on the 10,000 before that,
# Train on the rest
test = history[-10000:]
val = history[-20000:-10000]
train = history[:-20000]
# Assign to X, y
target = 'Interest Rate'
features = history.columns.drop('Interest Rate')
X_train = train[features]
y_train = train[target]
X_val = val[features]
y_val = val[target]
X_test = test[features]
y_test = test[target]
X_train.info()
import numpy as np
# The target has some right skew, but it's not too bad
%matplotlib inline
import seaborn as sns
sns.distplot(y_train);
###Output
_____no_output_____
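###Markdown
To put a number on that skew, a quick check with scipy.stats (listed among this module's libraries):
###Code
# Quantify the right skew of the target (0 = symmetric, > 0 = right-skewed)
from scipy.stats import skew
print('Skewness of Interest Rate:', skew(y_train))
###Output
_____no_output_____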
###Markdown
Fit Linear Regression model
###Code
import category_encoders as ce
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
lr = make_pipeline(
ce.TargetEncoder(),
StandardScaler(),
LinearRegression()
)
lr.fit(X_train, y_train)
print('Linear Regression R^2', lr.score(X_val, y_val))
###Output
_____no_output_____
###Markdown
Explaining Linear Regression
###Code
coefficients = lr.named_steps['linearregression'].coef_
pd.Series(coefficients, features).sort_values()
###Output
_____no_output_____
###Markdown
Fit Gradient Boosting model
###Code
from sklearn.metrics import r2_score
from xgboost import XGBRegressor
gb = make_pipeline(
ce.OrdinalEncoder(),
XGBRegressor(n_estimators=200, objective='reg:squarederror', n_jobs=-1)
)
gb.fit(X_train, y_train)
y_pred = gb.predict(X_val)
print('Gradient Boosting R^2', r2_score(y_val, y_pred))
###Output
_____no_output_____
###Markdown
Explaining Gradient Boosting??? Linear models have coefficients, but trees do not. Instead, to see the relationship between individual feature(s) and the target, we can use partial dependence plots. Follow Along Partial Dependence Plots with 1 feature PDPbox- [Gallery](https://github.com/SauceCat/PDPbox#gallery)- [API Reference: pdp_isolate](https://pdpbox.readthedocs.io/en/latest/pdp_isolate.html)- [API Reference: pdp_plot](https://pdpbox.readthedocs.io/en/latest/pdp_plot.html)
###Code
# Later, when you save matplotlib images to include in blog posts or web apps,
# increase the dots per inch (double it), so the text isn't so fuzzy
import matplotlib.pyplot as plt
plt.rcParams['figure.dpi'] = 72
from pdpbox.pdp import pdp_isolate, pdp_plot
feature = 'Annual Income'
isolated = pdp_isolate(
model=gb,
dataset=X_val,
model_features=X_val.columns,
feature=feature
)
pdp_plot(isolated, feature_name=feature, plot_lines=True);
###Output
_____no_output_____
###Markdown
Partial Dependence Plots with 2 features See interactions! PDPbox- [Gallery](https://github.com/SauceCat/PDPbox#gallery)- [API Reference: pdp_interact](https://pdpbox.readthedocs.io/en/latest/pdp_interact.html)- [API Reference: pdp_interact_plot](https://pdpbox.readthedocs.io/en/latest/pdp_interact_plot.html) Be aware of a bug in PDPbox version <= 0.20 with some versions of matplotlib:- With the `pdp_interact_plot` function, `plot_type='contour'` gets an error, but `plot_type='grid'` works- This will be fixed in the next release of PDPbox: https://github.com/SauceCat/PDPbox/issues/40
###Code
from pdpbox.pdp import pdp_interact, pdp_interact_plot
features = ['Annual Income', 'Credit Score']
interaction = pdp_interact(
model=gb,
dataset=X_val,
model_features=X_val.columns,
features=features
)
pdp_interact_plot(interaction, plot_type='grid', feature_names=features)
###Output
_____no_output_____
###Markdown
BONUS: 3D with Plotly! Just for your future reference, here's how you can make it 3D! (Like [this example](https://rrherr-project2-example.herokuapp.com/).)
###Code
# First, make the 2D plot above. Then ...
pdp = interaction.pdp.pivot_table(
values='preds',
columns=features[0],
index=features[1]
)[::-1] # Slice notation to reverse index order so y axis is ascending
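# Drop the two extreme 'Annual Income' grid columns (outlier endpoints that stretch the surface)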
pdp = pdp.drop(columns=[1000.0, 751329.0])
import plotly.graph_objs as go
surface = go.Surface(
x=pdp.columns,
y=pdp.index,
z=pdp.values
)
layout = go.Layout(
scene=dict(
xaxis=dict(title=features[0]),
yaxis=dict(title=features[1]),
zaxis=dict(title=target)
)
)
fig = go.Figure(surface, layout)
fig.show()
###Output
_____no_output_____
###Markdown
BONUS: PDPs with categorical features Just for your future reference, here's a bonus example to demonstrate partial dependence plots with categorical features. 1. I recommend you use Ordinal Encoder or Target Encoder, outside of a pipeline, to encode your data first. (If there is a natural ordering, then take the time to encode it that way, instead of random integers.) Then use the encoded data with pdpbox. 2. There's some extra work to get readable category names on your plot, instead of integer category codes.
###Code
# Fit a model on Titanic data
import category_encoders as ce
import seaborn as sns
from sklearn.ensemble import RandomForestClassifier
df = sns.load_dataset('titanic')
df.age = df.age.fillna(df.age.median())
df = df.drop(columns='deck')
df = df.dropna()
target = 'survived'
features = df.columns.drop(['survived', 'alive'])
X = df[features]
y = df[target]
X.head()
# Use Ordinal Encoder, outside of a pipeline
encoder = ce.OrdinalEncoder()
X_encoded = encoder.fit_transform(X)
model = RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=4)
model.fit(X_encoded, y)
# Use Pdpbox
%matplotlib inline
import matplotlib.pyplot as plt
from pdpbox import pdp
feature = 'sex'
pdp_dist = pdp.pdp_isolate(model=model, dataset=X_encoded, model_features=features, feature=feature)
pdp.pdp_plot(pdp_dist, feature);
# Look at the encoder's mappings
encoder.mapping[0]
pdp.pdp_plot(pdp_dist, feature)
# Manually change the xticks labels
plt.xticks([1, 2], ['male', 'female']);
# Let's automate it
feature = 'sex'
for item in encoder.mapping:
if item['col'] == feature:
feature_mapping = item['mapping']
feature_mapping = feature_mapping[feature_mapping.index.dropna()]
category_names = feature_mapping.index.tolist()
category_codes = feature_mapping.values.tolist()
pdp.pdp_plot(pdp_dist, feature)
# Automatically change the xticks labels
plt.xticks(category_codes, category_names);
features = ['sex', 'age']
interaction = pdp_interact(
model=model,
dataset=X_encoded,
model_features=X_encoded.columns,
features=features
)
pdp_interact_plot(interaction, plot_type='grid', feature_names=features);
pdp = interaction.pdp.pivot_table(
values='preds',
columns=features[0], # First feature on x axis
index=features[1] # Next feature on y axis
)[::-1] # Reverse the index order so y axis is ascending
pdp = pdp.rename(columns=dict(zip(category_codes, category_names)))
plt.figure(figsize=(10,8))
sns.heatmap(pdp, annot=True, fmt='.2f', cmap='viridis')
plt.title('Partial Dependence of Titanic survival, on sex & age');
###Output
_____no_output_____
###Markdown
Explain individual predictions with shapley value plots Overview We’ll use TreeExplainer from an awesome library called [SHAP](https://github.com/slundberg/shap), for “additive explanations” — we can explain individual predictions by seeing how the features add up! Regression example We're coming full circle, with the NYC Apartment Rent dataset! Remember this code you wrote for your first assignment?
```python
# Arrange X features matrix & y target vector
features = ['bedrooms', 'bathrooms']
target = 'price'
X = df[features]
y = df[target]

# Fit model
from sklearn.linear_model import LinearRegression
model = LinearRegression()
model.fit(X, y)

def predict(bedrooms, bathrooms):
    y_pred = model.predict([[bedrooms, bathrooms]])
    estimate = y_pred[0]
    bed_coef = model.coef_[0]
    bath_coef = model.coef_[1]
    # Format with $ and comma separators. No decimals.
    result = f'Rent for a {bedrooms}-bed, {bathrooms}-bath apartment in NYC is estimated at ${estimate:,.0f}.'
    explanation = f' In this model, each bedroom adds ${bed_coef:,.0f} & each bathroom adds ${bath_coef:,.0f}.'
    return result + explanation
```
Let’s do something similar, but with a tuned Random Forest and Shapley Values.
###Code
import numpy as np
import pandas as pd
# Read New York City apartment rental listing data
df = pd.read_csv(DATA_PATH+'apartments/renthop-nyc.csv')
assert df.shape == (49352, 34)
# Remove the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
df = df[(df['price'] >= np.percentile(df['price'], 0.5)) &
(df['price'] <= np.percentile(df['price'], 99.5)) &
(df['latitude'] >= np.percentile(df['latitude'], 0.05)) &
(df['latitude'] < np.percentile(df['latitude'], 99.95)) &
(df['longitude'] >= np.percentile(df['longitude'], 0.05)) &
(df['longitude'] <= np.percentile(df['longitude'], 99.95))]
# Do train/test split
# Use data from April & May 2016 to train
# Use data from June 2016 to test
df['created'] = pd.to_datetime(df['created'], infer_datetime_format=True)
cutoff = pd.to_datetime('2016-06-01')
train = df[df.created < cutoff]
test = df[df.created >= cutoff]
# Assign to X, y
features = ['bedrooms', 'bathrooms', 'longitude', 'latitude']
target = 'price'
X_train = train[features]
y_train = train[target]
X_test = test[features]
y_test = test[target]
from scipy.stats import randint, uniform
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV
param_distributions = {
'n_estimators': randint(50, 500),
'max_depth': [5, 10, 15, 20, None],
'max_features': uniform(0, 1),
}
search = RandomizedSearchCV(
RandomForestRegressor(random_state=42),
param_distributions=param_distributions,
n_iter=5,
cv=2,
scoring='neg_mean_absolute_error',
verbose=10,
return_train_score=True,
n_jobs=6,
random_state=42
)
search.fit(X_train, y_train);
print('Best hyperparameters', search.best_params_)
print('Cross-validation MAE', -search.best_score_)
model = search.best_estimator_
###Output
_____no_output_____
###Markdown
Follow Along [Dan Becker explains Shapley Values:](https://www.kaggle.com/dansbecker/shap-values)>You've seen (and used) techniques to extract general insights from a machine learning model. But what if you want to break down how the model works for an individual prediction?>>SHAP Values (an acronym from SHapley Additive exPlanations) break down a prediction to show the impact of each feature. >>There is some complexity to the technique ... We won't go into that detail here, since it isn't critical for using the technique. [This blog post](https://towardsdatascience.com/one-feature-attribution-method-to-supposedly-rule-them-all-shapley-values-f3e04534983d) has a longer theoretical explanation.
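The "additive" part is literal: for a regression TreeExplainer, the base value plus a row's SHAP values reconstructs the model's prediction for that row. A quick sanity check you can run once the next cell has created `explainer`, `shap_values`, and `row` (a sketch, not part of the original lesson):
```python
import numpy as np

# Additivity property: base value + sum of per-feature SHAP values
# should equal the model's prediction for this row.
# (In some shap versions expected_value is a length-1 array, hence np.ravel.)
base = np.ravel(explainer.expected_value)[0]
print(base + shap_values.sum())   # compare with model.predict(row)[0]
```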
###Code
# Get an individual observation to explain.
# For example, the 0th row from the test set.
row = X_test.iloc[[0]]
row
# What was the actual rent for this apartment?
y_test.iloc[[0]]
# What does the model predict for this apartment?
model.predict(row)
# Why did the model predict this?
# Look at a Shapley Values Force Plot
import shap
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(row)
shap.initjs()
shap.force_plot(
base_value=explainer.expected_value,
shap_values=shap_values,
features=row
)
###Output
_____no_output_____
###Markdown
Define the predict function
###Code
def predict(bedrooms, bathrooms, longitude, latitude):
# Make dataframe from the inputs
df = pd.DataFrame(
data=[[bedrooms, bathrooms, longitude, latitude]],
columns=['bedrooms', 'bathrooms', 'longitude', 'latitude']
)
# Get the model's prediction
pred = model.predict(df)[0]
# Calculate shap values
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(df)
# Get series with shap values, feature names, & feature values
feature_names = df.columns
feature_values = df.values[0]
shaps = pd.Series(shap_values[0], zip(feature_names, feature_values))
# Print results
result = f'${pred:,.0f} estimated rent for this NYC apartment. \n\n'
#result += f'Starting from baseline of ${explainer.expected_value:,.0f} \n'
result += shaps.to_string()
print(result)
# Show shapley values force plot
shap.initjs()
return shap.force_plot(
base_value=explainer.expected_value,
shap_values=shap_values,
features=df
)
predict(3, 1.5, -73.9425, 40.7145)
# What if it was a 2 bedroom?
predict(2, 1.5, -73.9425, 40.7145)
# What if it was a 1 bedroom?
predict(1, 1.5, -73.9425, 40.7145)
###Output
_____no_output_____
###Markdown
BONUS: Classification example Just for your future reference, here's a bonus example for a classification problem. This uses Lending Club data, historical and current. The goal: Predict if peer-to-peer loans are charged off or fully paid. Decide which loans to invest in.
###Code
import pandas as pd
# Stratified sample, 10% of expired Lending Club loans, grades A-D
# Source: https://www.lendingclub.com/info/download-data.action
history = pd.read_csv(DATA_PATH+'lending-club/lending-club-subset.csv')
history['issue_d'] = pd.to_datetime(history['issue_d'], infer_datetime_format=True)
# Current loans available for manual investing, June 17, 2019
# Source: https://www.lendingclub.com/browse/browse.action
current = pd.read_csv(DATA_PATH+'lending-club/primaryMarketNotes_browseNotes_1-RETAIL.csv')
# Transform earliest_cr_line to an integer:
# How many days the earliest credit line was open, before the loan was issued.
# For current loans available for manual investing, assume the loan will be issued today.
history['earliest_cr_line'] = pd.to_datetime(history['earliest_cr_line'], infer_datetime_format=True)
history['earliest_cr_line'] = history['issue_d'] - history['earliest_cr_line']
history['earliest_cr_line'] = history['earliest_cr_line'].dt.days
current['earliest_cr_line'] = pd.to_datetime(current['earliest_cr_line'], infer_datetime_format=True)
current['earliest_cr_line'] = pd.Timestamp.today() - current['earliest_cr_line']
current['earliest_cr_line'] = current['earliest_cr_line'].dt.days
# Transform earliest_cr_line for the secondary applicant
history['sec_app_earliest_cr_line'] = pd.to_datetime(history['sec_app_earliest_cr_line'], infer_datetime_format=True, errors='coerce')
history['sec_app_earliest_cr_line'] = history['issue_d'] - history['sec_app_earliest_cr_line']
history['sec_app_earliest_cr_line'] = history['sec_app_earliest_cr_line'].dt.days
current['sec_app_earliest_cr_line'] = pd.to_datetime(current['sec_app_earliest_cr_line'], infer_datetime_format=True, errors='coerce')
current['sec_app_earliest_cr_line'] = pd.Timestamp.today() - current['sec_app_earliest_cr_line']
current['sec_app_earliest_cr_line'] = current['sec_app_earliest_cr_line'].dt.days
# Engineer features for issue date year & month
history['issue_d_year'] = history['issue_d'].dt.year
history['issue_d_month'] = history['issue_d'].dt.month
current['issue_d_year'] = pd.Timestamp.today().year
current['issue_d_month'] = pd.Timestamp.today().month
# Calculate percent of each loan repaid
history['percent_paid'] = history['total_pymnt'] / history['funded_amnt']
# Train on the historical data.
# For the target, use `loan_status` ('Fully Paid' or 'Charged Off')
target = 'loan_status'
X = history.drop(columns=target)
y = history[target]
# Do train/validate/test 3-way split
from sklearn.model_selection import train_test_split
X_trainval, X_test, y_trainval, y_test = train_test_split(
X, y, test_size=20000, stratify=y, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(
X_trainval, y_trainval, test_size=20000,
stratify=y_trainval, random_state=42)
print('X_train shape', X_train.shape)
print('y_train shape', y_train.shape)
print('X_val shape', X_val.shape)
print('y_val shape', y_val.shape)
print('X_test shape', X_test.shape)
print('y_test shape', y_test.shape)
# Save the ids for later, so we can look up actual results,
# to compare with predicted results
train_id = X_train['id']
val_id = X_val['id']
test_id = X_test['id']
# Use Python sets to compare the historical columns & current columns
common_columns = set(history.columns) & set(current.columns)
just_history = set(history.columns) - set(current.columns)
just_current = set(current.columns) - set(history.columns)
# For features, use only the common columns shared by the historical & current data.
features = list(common_columns)
X_train = X_train[features]
X_val = X_val[features]
X_test = X_test[features]
def wrangle(X):
X = X.copy()
# Engineer new feature for every feature: is the feature null?
for col in X:
X[col+'_NULL'] = X[col].isnull()
# Convert percentages from strings to floats
X['int_rate'] = X['int_rate'].str.strip('%').astype(float)
X['revol_util'] = X['revol_util'].str.strip('%').astype(float)
# Convert employment length from string to float
    X['emp_length'] = X['emp_length'].str.replace(r'\D', '', regex=True).astype(float)
# Create features for three employee titles: teacher, manager, owner
X['emp_title'] = X['emp_title'].str.lower()
X['emp_title_teacher'] = X['emp_title'].str.contains('teacher', na=False)
X['emp_title_manager'] = X['emp_title'].str.contains('manager', na=False)
X['emp_title_owner'] = X['emp_title'].str.contains('owner', na=False)
# Get length of free text fields
X['title'] = X['title'].str.len()
X['desc'] = X['desc'].str.len()
X['emp_title'] = X['emp_title'].str.len()
# Convert sub_grade from string "A1"-"D5" to numbers
sub_grade_ranks = {'A1': 1.1, 'A2': 1.2, 'A3': 1.3, 'A4': 1.4, 'A5': 1.5,
'B1': 2.1, 'B2': 2.2, 'B3': 2.3, 'B4': 2.4, 'B5': 2.5,
'C1': 3.1, 'C2': 3.2, 'C3': 3.3, 'C4': 3.4, 'C5': 3.5,
'D1': 4.1, 'D2': 4.2, 'D3': 4.3, 'D4': 4.4, 'D5': 4.5}
X['sub_grade'] = X['sub_grade'].map(sub_grade_ranks)
# Drop some columns
X = X.drop(columns='id') # Always unique
X = X.drop(columns='url') # Always unique
X = X.drop(columns='member_id') # Always null
X = X.drop(columns='grade') # Duplicative of sub_grade
X = X.drop(columns='zip_code') # High cardinality
# Only use these features which had nonzero permutation importances in earlier models
features = ['acc_open_past_24mths', 'addr_state', 'all_util', 'annual_inc',
'annual_inc_joint', 'avg_cur_bal', 'bc_open_to_buy', 'bc_util',
'collections_12_mths_ex_med', 'delinq_amnt', 'desc_NULL', 'dti',
'dti_joint', 'earliest_cr_line', 'emp_length', 'emp_length_NULL',
'emp_title', 'emp_title_NULL', 'emp_title_owner', 'fico_range_high',
'funded_amnt', 'home_ownership', 'inq_last_12m', 'inq_last_6mths',
'installment', 'int_rate', 'issue_d_month', 'issue_d_year', 'loan_amnt',
'max_bal_bc', 'mo_sin_old_il_acct', 'mo_sin_old_rev_tl_op',
'mo_sin_rcnt_rev_tl_op', 'mort_acc', 'mths_since_last_major_derog_NULL',
'mths_since_last_record', 'mths_since_recent_bc', 'mths_since_recent_inq',
'num_actv_bc_tl', 'num_actv_rev_tl', 'num_op_rev_tl', 'num_rev_tl_bal_gt_0',
'num_tl_120dpd_2m_NULL', 'open_rv_12m_NULL', 'open_rv_24m',
'pct_tl_nvr_dlq', 'percent_bc_gt_75', 'pub_rec_bankruptcies', 'purpose',
'revol_bal', 'revol_bal_joint', 'sec_app_earliest_cr_line',
'sec_app_fico_range_high', 'sec_app_open_acc', 'sec_app_open_act_il',
'sub_grade', 'term', 'title', 'title_NULL', 'tot_coll_amt',
'tot_hi_cred_lim', 'total_acc', 'total_bal_il', 'total_bc_limit',
'total_cu_tl', 'total_rev_hi_lim']
X = X[features]
# Reset index
X = X.reset_index(drop=True)
# Return the wrangled dataframe
return X
X_train = wrangle(X_train)
X_val = wrangle(X_val)
X_test = wrangle(X_test)
print('X_train shape', X_train.shape)
print('X_val shape', X_val.shape)
print('X_test shape', X_test.shape)
import category_encoders as ce
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline
from xgboost import XGBClassifier
processor = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='median')
)
X_train_processed = processor.fit_transform(X_train)
X_val_processed = processor.transform(X_val)
eval_set = [(X_train_processed, y_train),
(X_val_processed, y_val)]
model = XGBClassifier(n_estimators=1000, n_jobs=-1)
model.fit(X_train_processed, y_train, eval_set=eval_set, eval_metric='auc',
early_stopping_rounds=10)
# THIS CELL ISN'T ABOUT THE NEW OBJECTIVES FOR TODAY
# BUT IT IS IMPORTANT FOR YOUR SPRINT CHALLENGE
from sklearn.metrics import roc_auc_score
X_test_processed = processor.transform(X_test)
class_index = 1
y_pred_proba = model.predict_proba(X_test_processed)[:, class_index]
print(f'Test ROC AUC for class {class_index}:')
print(roc_auc_score(y_test, y_pred_proba)) # Ranges from 0-1, higher is better
###Output
_____no_output_____
###Markdown
Look at predictions vs actuals
###Code
df = pd.DataFrame({
'id': test_id,
'pred_proba': y_pred_proba,
'status_group': y_test
})
df = df.merge(
history[['id', 'issue_d', 'sub_grade', 'percent_paid', 'term', 'int_rate']],
how='left'
)
df.head()
fully_paid = df['status_group'] == 'Fully Paid'
charged_off = ~fully_paid
right = (fully_paid) == (df['pred_proba'] > 0.50)
wrong = ~right
###Output
_____no_output_____
###Markdown
Loan was fully paid, model's prediction was right
###Code
df[fully_paid & right].sample(n=10, random_state=1).sort_values(by='pred_proba')
# To explain the prediction for test observation with index #3094,
# first, get all of the features for that observation
row = X_test.iloc[[3094]]
row
###Output
_____no_output_____
###Markdown
Explain individual predictions with shapley value plots
###Code
# STUDY/PRACTICE THIS CELL FOR THE SPRINT CHALLENGE
import shap
explainer = shap.TreeExplainer(model)
row_processed = processor.transform(row)
shap_values = explainer.shap_values(row_processed)
shap.initjs()
shap.force_plot(
base_value=explainer.expected_value,
shap_values=shap_values,
features=row,
link='logit' # For classification, this shows predicted probabilities
)
###Output
_____no_output_____
###Markdown
Make a function to explain predictions Goal output:
```
The model predicts this loan is Fully Paid, with 74% probability.

Top 3 reasons for prediction:
1. dti is 10.97.
2. term is 36 months.
3. total_acc is 45.0.

Top counter-argument against prediction:
- sub_grade is 4.2.
```
###Code
feature_names = row.columns
feature_values = row.values[0]
shaps = pd.Series(shap_values[0], zip(feature_names, feature_values))
pros = shaps.sort_values(ascending=False)[:3].index
cons = shaps.sort_values(ascending=True)[:3].index
print('Top 3 reasons for fully paid:')
for i, pro in enumerate(pros, start=1):
feature_name, feature_value = pro
print(f'{i}. {feature_name} is {feature_value}.')
print('\n')
print('Cons:')
for i, con in enumerate(cons, start=1):
feature_name, feature_value = con
print(f'{i}. {feature_name} is {feature_value}.')
def explain(row_number):
positive_class = 'Fully Paid'
positive_class_index = 1
# Get & process the data for the row
row = X_test.iloc[[row_number]]
row_processed = processor.transform(row)
# Make predictions (includes predicted probability)
pred = model.predict(row_processed)[0]
pred_proba = model.predict_proba(row_processed)[0, positive_class_index]
pred_proba *= 100
if pred != positive_class:
pred_proba = 100 - pred_proba
# Show prediction & probability
print(f'The model predicts this loan is {pred}, with {pred_proba:.0f}% probability.')
# Get shapley additive explanations
shap_values = explainer.shap_values(row_processed)
# Get top 3 "pros & cons" for fully paid
feature_names = row.columns
feature_values = row.values[0]
shaps = pd.Series(shap_values[0], zip(feature_names, feature_values))
pros = shaps.sort_values(ascending=False)[:3].index
cons = shaps.sort_values(ascending=True)[:3].index
# Show top 3 reason for prediction
print('\n')
print('Top 3 reasons for prediction:')
evidence = pros if pred == positive_class else cons
for i, info in enumerate(evidence, start=1):
feature_name, feature_value = info
print(f'{i}. {feature_name} is {feature_value}.')
# Show top 1 counter-argument against prediction
print('\n')
print('Top counter-argument against prediction:')
evidence = cons if pred == positive_class else pros
feature_name, feature_value = evidence[0]
print(f'- {feature_name} is {feature_value}.')
# Show Shapley Values Force Plot
shap.initjs()
return shap.force_plot(
base_value=explainer.expected_value,
shap_values=shap_values,
features=row,
link='logit' # For classification, this shows predicted probabilities
)
explain(3094)
###Output
_____no_output_____
###Markdown
Look at more examples You can choose an example from each quadrant of the confusion matrix, and get an explanation for the model's prediction. Loan was charged off, model's prediction was right
###Code
df[charged_off & right].sample(n=10, random_state=1).sort_values(by='pred_proba')
explain(8383)
###Output
_____no_output_____
###Markdown
Loan was fully paid, model's prediction was wrong
###Code
df[fully_paid & wrong].sample(n=10, random_state=1).sort_values(by='pred_proba')
explain(18061)
explain(6763)
###Output
_____no_output_____
###Markdown
Loan was charged off, model's prediction was wrong
###Code
df[charged_off & wrong].sample(n=10, random_state=1).sort_values(by='pred_proba')
explain(19883)
###Output
_____no_output_____ |
spark-example/Spark_preproc.ipynb | ###Markdown
0. Importing the pyspark library
###Code
import os
import pandas as pd
import pyspark.sql.functions as F
import pyspark.sql.types as T
from pyspark.sql import SparkSession
###Output
_____no_output_____
###Markdown
1. Creating a SparkSession and SparkContext
###Code
spark = SparkSession.builder.master('local').getOrCreate()
sc = spark.sparkContext
###Output
_____no_output_____
###Markdown
2. Configuring access to S3 storage Specify the access parameters for your S3 storage: endpoint, access key, secret key
###Code
sc._jsc.hadoopConfiguration().set("fs.s3.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
sc._jsc.hadoopConfiguration().set("fs.s3a.endpoint", "https://your_endpoint_name")
sc._jsc.hadoopConfiguration().set("fs.s3a.access.key", "your_access_key")
sc._jsc.hadoopConfiguration().set("fs.s3a.secret.key", "your_secret_key")
###Output
_____no_output_____
###Markdown
3. Loading the dataset Let's define the dataset schema and create a Spark DataFrame through which we will work with the data:
###Code
schema = T.StructType([
T.StructField('num', T.IntegerType(), True),
T.StructField('sensor_id', T.IntegerType(), True),
T.StructField('location', T.IntegerType(), True),
T.StructField('lat', T.DoubleType(), True),
T.StructField('lon', T.DoubleType(), True),
T.StructField('timestamp', T.TimestampType(), True),
T.StructField('pressure', T.DoubleType(), True),
T.StructField('temperature', T.DoubleType(), True),
T.StructField('humidity', T.DoubleType(), True)
])
###Output
_____no_output_____
###Markdown
As the data, we use a 7.8 GB CSV file assembled from the data available at https://www.kaggle.com/hmavrodiev/sofia-air-quality-dataset. The data contains weather sensor readings. Specify the path to the dataset in your S3 bucket:
###Code
path = 's3a://your_bucket_name/path/dataset.csv'
df = spark \
.read \
.format('csv') \
.options(header='true') \
.schema(schema) \
.load(path)
df = df.drop('num').withColumn('hour', F.hour(F.col('timestamp')))
df.printSchema()
###Output
root
|-- sensor_id: integer (nullable = true)
|-- location: integer (nullable = true)
|-- lat: double (nullable = true)
|-- lon: double (nullable = true)
|-- timestamp: timestamp (nullable = true)
|-- pressure: double (nullable = true)
|-- temperature: double (nullable = true)
|-- humidity: double (nullable = true)
|-- hour: integer (nullable = true)
###Markdown
Now we can take a look at the Spark DataFrame:
###Code
df.show(10)
###Output
+---------+--------+------------------+------------------+-------------------+--------+-----------+--------+----+
|sensor_id|location| lat| lon| timestamp|pressure|temperature|humidity|hour|
+---------+--------+------------------+------------------+-------------------+--------+-----------+--------+----+
| 2266| 1140| 42.738| 23.272|2017-07-01 00:00:07|95270.27| 23.46| 62.48| 0|
| 2292| 1154|42.663000000000004|23.273000000000003|2017-07-01 00:00:08|94355.83| 23.06| 59.46| 0|
| 3096| 1558| 42.7| 23.36|2017-07-01 00:00:10|95155.81| 26.53| 44.38| 0|
| 3428| 1727|42.623999999999995| 23.406|2017-07-01 00:00:12|94679.57| 28.34| 38.28| 0|
| 3472| 1750| 42.669| 23.318|2017-07-01 00:00:13|94327.88| 26.31| 46.37| 0|
| 1952| 976|42.708999999999996|23.398000000000003|2017-07-01 00:00:13|95314.52| 22.66| 56.55| 0|
| 1846| 923| 42.64| 23.31|2017-07-01 00:00:15|93616.77| 23.87| 50.76| 0|
| 3512| 1770| 42.683| 23.335|2017-07-01 00:00:24|94962.39| 24.92| 55.53| 0|
| 2228| 1120|42.693000000000005|23.333000000000002|2017-07-01 00:00:28|94982.91| 26.29| 45.7| 0|
| 3438| 1732| 42.738|23.293000000000003|2017-07-01 00:00:37|95099.81| 24.62| 57.97| 0|
+---------+--------+------------------+------------------+-------------------+--------+-----------+--------+----+
only showing top 10 rows
###Markdown
Let's count the number of rows before preprocessing (this may take a while):
###Code
df.count()
###Output
_____no_output_____
###Markdown
4. Data preprocessing If we want to use SQL syntax for Spark queries, we must register a temporary view for the data (scoped to your Spark session). After that, we can refer to it by name:
###Code
df.createOrReplaceTempView('weather')
###Output
_____no_output_____
###Markdown
The command below launches a typical Spark job and collects the results on the Spark driver. The query selects data from daytime periods, groups it by location, and computes several statistics for each location (this may take a while):
###Code
result = spark.sql('''
select
location as location_id,
count(1) as data_num,
avg(pressure) as mean_pressure,
avg(humidity) as mean_humidity,
max(temperature) as max_temp
from weather
where hour > 9 and hour < 20
group by location
''').collect()
print(len(result))
###Output
485
###Markdown
Preprocessing completed successfully; the dataset shrank from 97,288,452 to 485 rows. Now we can, for example, load the data into a Pandas DataFrame to keep working with it on the Spark driver or anywhere else:
###Code
import pandas as pd
pd.DataFrame.from_records(map(lambda x: x.asDict(), result))
###Output
_____no_output_____ |
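###Markdown
Alternatively, Spark can return the result as a Pandas DataFrame directly via `toPandas()`. A minimal sketch against the same temporary view (`toPandas()` collects everything onto the driver, so reserve it for small results like this 485-row aggregate):
```python
result_df = spark.sql('''
    select location as location_id, count(1) as data_num
    from weather
    where hour > 9 and hour < 20
    group by location
''').toPandas()
```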
trading_exercise/.ipynb_checkpoints/strategy_under_business_times-checkpoint.ipynb | ###Markdown
Packages What I imported
###Code
import personal_pkg as per
import pandas as pd
import numpy as np
from IPython.display import display , Markdown
import requests
from bs4 import BeautifulSoup
from scrapy.http import TextResponse
from datetime import datetime, timedelta
import nltk
import matplotlib.pylab as plt
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
from nltk.sentiment.vader import SentimentIntensityAnalyzer
# nltk.download('vader_lexicon')
sia = SentimentIntensityAnalyzer()
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import datetime
import pandas as pd
import FinanceDataReader as fdr
from datetime import datetime
# Figure settings
plt.rcParams['figure.figsize'] = (14,4)
plt.rcParams['lines.linewidth'] = 2
plt.rcParams['lines.color'] = 'b'
plt.rcParams['axes.grid'] = True
###Output
_____no_output_____
###Markdown
Adjusting the methodology of scoring
- Normalize the stock lexicon so that its maximum and minimum scores become 4.0 and -4.0, respectively.
- Then assign a score of 2.0 to LM_positive words and -2.0 to LM_negative words.
###Code
stock_lex = pd.read_csv('file_for_dictionary/stock_lex.csv')
stock_lex['sentiment'] = (stock_lex['Aff_Score'] + stock_lex['Neg_Score'])/2
stock_lex = dict(zip(stock_lex.Item, stock_lex.sentiment))
stock_lex = {k:v for k,v in stock_lex.items() if len(k.split(' '))==1}
stock_lex_scaled = {}
for k, v in stock_lex.items():
if v > 0:
stock_lex_scaled[k] = v / max(stock_lex.values()) * 4
else:
stock_lex_scaled[k] = v / min(stock_lex.values()) * -4
display(pd.DataFrame(np.array(list(stock_lex_scaled.values()))).describe().loc[['min','max'],:])
# Loughran and McDonald
negative_ls = [i.strip() for i in per.convert_pdf_to_txt('file_for_dictionary/LM_Negative.pdf').split('\n')]
negative_ls = [i for i in negative_ls if i and 'Negative' not in i]
positive_ls = [i.strip() for i in per.convert_pdf_to_txt('file_for_dictionary/LM_Positive.pdf').split('\n')]
positive_ls = [i for i in positive_ls if i and 'Positive' not in i]
final_lex = {}
final_lex.update({word:2.0 for word in positive_ls})
final_lex.update({word:-2.0 for word in negative_ls})
final_lex.update(stock_lex_scaled)
final_lex.update(sia.lexicon)
sia.lexicon = final_lex
###Output
_____no_output_____
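###Markdown
With the merged lexicon in place, `sia.polarity_scores` returns a dict with `neg`, `neu`, `pos`, and `compound` fields; only the normalized `compound` score (in [-1, 1]) is used below. A quick illustrative check (the sentence is made up):
```python
score = sia.polarity_scores('Shares rallied after a strong quarterly profit')
print(score['compound'])  # one normalized sentiment score in [-1, 1]
```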
###Markdown
Crawling the data
###Code
%%time
url_ls = []
date_ls = []
content_ls = []
score_ls = []
for page in range(1,10+1):
req = requests.get("https://www.businesstimes.com.sg/search/facebook?page={}".format(page))
http = TextResponse(req.url , body=req.text, encoding='utf-8')
url_ls.append(http.xpath('//*[@id="sph-search-results"]/div/div/h4/a/@href').extract())
date_ls.append(http.xpath('//*[@id="sph-search-results"]/div/div/time/text()').extract())
real_date_ls = [j for i in date_ls for j in i]
real_url_ls = [j for i in url_ls for j in i]
for idx,url in enumerate(real_url_ls) :
if idx % 30 == 0 : print(idx,url)
req = requests.get(url)
dom = BeautifulSoup(req.text, 'lxml')
content = ','.join([i.text for i in dom.findAll('p')]).replace(',',' ')
content_ls.append(content)
score_ls.append(sia.polarity_scores(content)['compound'])
date_ls = [datetime.strptime(i, '%d %b %Y').date() + timedelta(days=1) for i in real_date_ls]
date_sentiment = dict(zip(date_ls,score_ls))
earliest_date = min(date_sentiment.keys())
score_df = pd.DataFrame.from_dict(date_sentiment,orient='index')
score_df.rename(columns={0:'score'},inplace=True)
score_df.sort_index(inplace=True)
score_df['1day_before'] = score_df.shift(periods=1)['score'].tolist()
score_df['diff'] = score_df['score'] - score_df['1day_before']
score_df.fillna(method='bfill',inplace=True)
signal_ls = []
for idx,val in enumerate(score_df['diff'].tolist()) :
if val >=1 :
signal_ls.append('buy')
elif val <= -1 :
signal_ls.append('sell')
else : signal_ls.append('0')
score_df['signal'] = signal_ls
score_df.reset_index(inplace=True)
score_df.rename(columns={'index':'Date'},inplace=True)
start = earliest_date
end = datetime.now()
# FaceBook
df = fdr.DataReader("FB", start, end)
df = df[['Close', 'Volume']]
df.reset_index(inplace=True)
revised_date_ls = []
for idx,val in enumerate(df['Date'].tolist()) :
day = str(df['Date'][idx].day) + ' ' + str(df['Date'][idx].month) + ' ' + str(df['Date'][idx].year)
revised_date_ls.append(datetime.strptime(day, '%d %m %Y').date())
df['Date'] = revised_date_ls
trade_df = pd.merge(score_df,df,on='Date')
ax = trade_df[['Close']].plot(figsize=(16,6))
for key, val in trade_df['signal'].iteritems():
    if val == '0':
continue
if val == 'buy' :
ax.annotate('Buy', xy=(key, trade_df['Close'][key]), xytext=(10,-30),
textcoords='offset points', arrowprops=dict(arrowstyle='-|>'))
elif val == 'sell':
ax.annotate('Sell', xy=(key, trade_df['Close'][key]), xytext=(10,30),
textcoords='offset points', arrowprops=dict(arrowstyle='-|>'))
trade_df[['diff']].plot()
###Output
_____no_output_____
###Markdown
Trading strategy:
- Assume the sentiment scores are fetched every morning.
- Compare today's sentiment score with the previous day's; if today's is larger, buy 50 shares.
- Compare today's sentiment score with the previous day's; if today's is smaller, sell 50 shares.
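Restated as a standalone function, here is a sketch mirroring the signal rule as implemented in the cell that built `trade_df['signal']` above (which uses a threshold of 1.0 on the day-over-day change):
```python
def daily_signal(today_score, yesterday_score, threshold=1.0):
    """Buy/sell on large day-over-day sentiment moves, else hold ('0')."""
    diff = today_score - yesterday_score
    if diff >= threshold:
        return 'buy'
    if diff <= -threshold:
        return 'sell'
    return '0'
```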
###Code
from functools import reduce
profit_ls = []
for idx,val in enumerate(trade_df['Close'].tolist()) :
if trade_df['signal'][idx] != '0' :
if trade_df['signal'][idx] == 'sell' :
profit_ls.append(trade_df['Close'][idx-1] - trade_df['Close'][idx])
else :
profit_ls.append(trade_df['Close'][idx] - trade_df['Close'][idx-1])
else :
profit_ls.append(0)
trade_df['profit'] = profit_ls
trade_df.tail()
testing_df = pd.DataFrame()
testing_df['return'] = trade_df['Close'].pct_change()
testing_df['return'] = [i+1 for i in testing_df['return']]
testing_df.fillna(1,inplace=True)
testing_df.head()
new_return_ls = []
for idx in range(len(trade_df)) :
if trade_df['signal'][idx] != '0' :
if trade_df['signal'][idx] == 'buy' :
new_return_ls.append(testing_df['return'][idx])
else :
new_return_ls.append( 2- testing_df['return'][idx])
else :
new_return_ls.append(testing_df['return'][idx])
testing_df['new_return'] = new_return_ls
print(reduce(lambda x,y:x*y, testing_df['return'].tolist()))
print(reduce(lambda x,y:x*y, testing_df['new_return'].tolist()))
plt.plot(testing_df['return'].cumprod(),label='default')
plt.plot(testing_df['new_return'].cumprod(),label='mine')
plt.legend() ; plt.grid(True)
###Output
_____no_output_____ |
content/NOTES 06.02 - PCA.ipynb | ###Markdown
06.02 - PCA
###Code
!wget --no-cache -O init.py -q https://raw.githubusercontent.com/rramosp/ai4eng.v1.20211.udea/main/content/init.py
import init; init.init(force_download=False); init.get_weblink()
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
See [A Tutorial on Principal Component Analysis](https://www.cs.princeton.edu/picasso/mats/PCA-Tutorial-Intuition_jp.pdf) for an intuitive and detailed description of PCA and SVD. Intuition We have the following 2D data, and we would like to find a 1D projection that preserves the maximum amount of variability.
###Code
np.random.seed(1)
X = np.dot(np.random.random(size=(2, 2)), np.random.normal(size=(2, 200))).T+10
# center data on 0,0
X=X-np.mean(X, axis=0)
print (X.shape)
plt.scatter(X[:,0], X[:,1])
###Output
(200, 2)
###Markdown
Recall that the projection of a vector $\vec{x}$ onto another vector $\vec{v}$ (see [here](https://matthew-brett.github.io/teaching/vector_projection.html)) is given by:$$c = \frac{\vec{v} \cdot \vec{x}}{||\vec{v}||^2}$$$$proj_\vec{v} \vec{x} = \vec{v} c$$where $c$ is the length of the projection of $\vec{x}$ onto $\vec{v}$. Let's inspect some projections.
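As a quick numeric check of these two formulas (hand-picked values, not part of the original notebook):
```python
import numpy as np

x = np.array([2.0, 1.0])
v = np.array([1.0, 0.0])               # project onto the x-axis
c = v.dot(x) / np.linalg.norm(v)**2    # c = 2.0
proj = v * c                           # proj = [2.0, 0.0]
print(c, proj)
```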
###Code
plt.figure(figsize=(15,3))
unit_vector = lambda angle: np.array([np.cos(angle), np.sin(angle)])
for i in range(3):
plt.subplot(1,3,i+1)
angle = np.random.random()*np.pi*2 if i!=0 else 1.8
v = unit_vector(angle)
c = X.dot(v.reshape(-1,1))/(np.linalg.norm(v)**2)
Xp = np.repeat(v.reshape(-1,2),len(X),axis=0)*c
plt.scatter(X[:,0], X[:,1], color="blue", alpha=.5, label="original data")
plt.scatter(Xp[:,0], Xp[:,1], color="red", alpha=.5, label="projected data")
plt.axvline(0, color="gray")
plt.axhline(0, color="gray")
plt.plot([0,v[0]], [0,v[1]], color="black", lw=3, label="projection vector")
plt.axis('equal')
plt.ylim(-2,2)
plt.title("$\\alpha$=%.2f rads, proj std=%.3f"%(angle, np.std(c)))
if i==2:
plt.legend(loc="center left", bbox_to_anchor=(1.01,.5))
###Output
_____no_output_____
###Markdown
Let's find the projections with the largest and smallest standard deviation by brute force
###Code
def get_maxmin_projections(X):
stds = []
angles = np.linspace(0,np.pi*2, 100)
for a in angles:
v = np.array([np.cos(a), np.sin(a)])
c = X.dot(v.reshape(-1,1))/(np.linalg.norm(v)**2)
stds.append(np.std(c))
v2 = unit_vector(angles[np.argmin(stds)])
v1 = unit_vector(angles[np.argmax(stds)])
return angles, stds, v1, v2
angles, stds, v1, v2 = get_maxmin_projections(X)
plt.plot(angles, stds)
plt.xlabel("projection $\\alpha$ (in rads)")
plt.ylabel("projection std")
plt.scatter(X[:,0], X[:,1], color="blue", alpha=.5, label="original data")
plt.axvline(0, color="gray")
plt.axhline(0, color="gray")
plt.plot([0,v1[0]], [0,v1[1]], color="black", lw=5, label="max std projection vector")
plt.plot([0,v2[0]], [0,v2[1]], color="black", ls="--", lw=2, label="min std projection vector")
plt.axis('equal')
plt.ylim(-2,2)
plt.legend(loc="center left", bbox_to_anchor=(1.01,.5))
###Output
_____no_output_____
###Markdown
**These are the principal components!** **Note that their dimensionality is the same as that of the original data.** This is what PCA gives us
###Code
from sklearn.decomposition import PCA
pca = PCA(n_components=1)
pca.fit(X)
print ("sklearn PCA components")
print (pca.components_)
print ("brute force components")
print (v1)
print (v2)
c = pca.transform(X)
print (c.shape)
c
###Output
(200, 1)
###Markdown
but in a much more efficient way
###Code
%timeit pca.fit(X)
%timeit get_maxmin_projections(X)
###Output
3.1 ms ± 62.1 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
###Markdown
We can use the largest component to reduce our data from 2D to 1D. Note that:$$\mathbf{X_t} = \mathbf{X} \times \mathbf{V}$$where:

- $\mathbf{X}$ is our data
- $\mathbf{V}$ is the matrix of selected components
- $\mathbf{X_t}$ is the transformed data

So we are restricting ourselves to **linear transformations** (rotations and scaling).
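A quick sanity check of this identity against sklearn (it holds exactly here because `X` was centered at the top of the notebook; `pca` is the 1-component model fit above):
```python
import numpy as np

print(np.allclose(pca.transform(X), X @ pca.components_.T))  # True
```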
###Code
pca = PCA(n_components=1)
pca.fit(X)
Xt = pca.transform(X)[:,0]
plt.scatter(X[:,0], X[:,1], color="blue", alpha=.5, label="$\mathbf{X}$: original data")
plt.scatter(Xt, [0]*len(Xt), color="red", alpha=.5, label="$\mathbf{X_t}$: reduced data")
plt.axis("equal");
plt.legend(loc="center left", bbox_to_anchor=(1.01,.5))
###Output
_____no_output_____
###Markdown
And we can also reconstruct the 2D data after the transformation
###Code
v0 = pca.components_[0]
c = X.dot(v0)
Xr = np.r_[[i*v0 for i in c]]
plt.scatter(X[:,0], X[:,1], color="blue", alpha=.5, label="original data")
plt.scatter(Xr[:,0], Xr[:,1], color="red", alpha=.5, label="reconstructed data from largest component")
plt.legend(loc="center left", bbox_to_anchor=(1.01,.5))
###Output
_____no_output_____
examples/LegendControl.ipynb | ###Markdown
###Code
from ipyleaflet import Map, LegendControl
mymap = Map(center=(-10, -45), zoom=4)
mymap
###Output
_____no_output_____
###Markdown
step 2: create a legend By default, you need to provide at least a dictionary with pair key=> the label to display and value=> the desired color. By default, it is named 'Legend', but you can pass a name as argument as well.
###Code
a_legend = LegendControl(
{"low": "#FAA", "medium": "#A55", "High": "#500"},
name="Legend",
position="bottomright",
)
mymap.add_control(a_legend)
###Output
_____no_output_____
###Markdown
Step 3: manipulate Legend Name
###Code
a_legend.name = "Risk" ## set name
a_legend.name # get name
###Output
_____no_output_____
###Markdown
Legend content
###Code
a_legend.legends = {"el1": "#FAA", "el2": "#A55", "el3": "#500"} # set content
a_legend.legends # get content
a_legend.add_legend_element("el5", "#000") # add a legend element
a_legend.remove_legend_element("el5") # remove a legend element
###Output
_____no_output_____
###Markdown
Positioning
###Code
a_legend.positioning = "topright" # set positioning : possible values are topleft, topright, bottomleft, bottomright
a_legend.positioning # get current positioning
###Output
_____no_output_____
S3_PID_Tuning/A2_APSO/PID Tuning.ipynb | ###Markdown
LOAD PARAMETER
###Code
# Steady State Response
param_ssr = np.load('../model/ssr.npy')[-1]
# Dynamics
param_dynamics = np.load('../model/sys_id.npy')[-1]
###Output
_____no_output_____
###Markdown
Generate Trajectory Step & ramp function
###Code
def step(tt):
out = np.zeros_like(tt)
out[tt >= 0] = 1
return out
def ramp(tt):
out = np.array(tt)
out[tt < 0] = 0
return out
def jitter(gain, omega, tt, t0, tf):
out = np.array(tt)
out = gain * np.sin(omega*(tt-t0))
out[tt-t0 < 0] = 0
out[tt-tf > 0] = 0
return out
###Output
_____no_output_____
###Markdown
Continuous acceleration
###Code
t0 = np.arange(3, 288, 0.02)
a0 = ramp(t0-3) - ramp(t0-4.5) - ramp(t0-8) + ramp(t0-9.5) \
- 0.25*ramp(t0-27) + 0.25*ramp(t0-30) + 0.25*ramp(t0-32) - 0.25*ramp(t0-35) \
+ 0.5*ramp(t0-40) - 1.*ramp(t0-44) + 0.5*ramp(t0-48) \
- 1*ramp(t0-60) + 2*ramp(t0 - 62) - 1*ramp(t0-64) \
- 0.1*ramp(t0-79) + 0.4*ramp(t0-85) - 0.3*ramp(t0-87) \
+ 0.35*ramp(t0-95) - 0.7*ramp(t0-98) + 0.35*ramp(t0-101) \
- 0.5*ramp(t0-101) + 1*ramp(t0-102.5) - 0.5*ramp(t0-104) \
+ 0.35*ramp(t0-104) - 0.7*ramp(t0-107) + 0.35*ramp(t0-110) \
- 0.15*ramp(t0-110) + 0.3*ramp(t0-114) - 0.15*ramp(t0-118) \
+ jitter(0.25, np.pi / 2.0, t0, 132, 152) \
+ 2.*ramp(t0-160) - 2.*ramp(t0-161) - 2.*ramp(t0-163) + 2.*ramp(t0-164) \
- 2.*ramp(t0 - 180) + 2*ramp(t0-181) + 2 *ramp(t0-183) - 2*ramp(t0-184) \
+ 2.0 * ramp(t0-210) - 2.0*ramp(t0-210.2) - 2.0*ramp(t0-216) + 2.0*ramp(t0-216.4)\
+ 2.0 * ramp(t0-218.4) - 2.0*ramp(t0-218.8) - 2.0*ramp(t0 - 230) + 2.0*ramp(t0-230.2) \
- 1.5*ramp(t0-240) + 1.5*ramp(t0-241) + 1.5*ramp(t0-243) - 1.5*ramp(t0-244)
t0 = np.arange(0, 285, 0.02)
v0 = cumtrapz(a0, t0, initial=0.) + 1.
fig, ax1 = plt.subplots()
ax1.set_xlabel('Time (s)')
ax1.plot(t0, v0, color='tab:blue', linewidth=2.0, label='Speed')
ax1.set_ylabel('Speed 'r'$(m/s)$', color='tab:blue')
ax2 = ax1.twinx()
ax2.plot(t0, a0, color='black', linestyle='--', linewidth=1.5, label='Acceleration')
ax2.set_ylabel('Acceleration '+r'$(m/s^2)$', color='black')
ax2.set_ylim(ax2.get_ylim()[0], 3 * ax2.get_ylim()[1])
fig.legend()
plt.title('Reference Trajectory')
plt.show()
###Output
_____no_output_____
###Markdown
MAKE FUNCTIONS

Generate Population
###Code
def generate_population(num, dim, rng):
"""
Generate population:
Input:
num: number of population (integer)
dim: number of parameters (integer)
rng: range number used in initialization (list or numpy array)
Output:
pop: initial position of the population (numpy array)
"""
pop = np.zeros((num,dim))
for i in range(dim):
lim = rng[i]
pop[:, i] = np.random.uniform(lim[0], lim[1], size=num)
return pop
###Output
_____no_output_____
###Markdown
Forward Propagation
###Code
@njit
def delayed_control_signal(i, u, u_list, td):
    # Return the control signal delayed by td samples (zero before enough history exists)
    if i < td:
        ut = 0.0
    else:
        if td == 0:
            ut = u
        else:
            ut = u_list[i-td]
    return ut
_ = delayed_control_signal(1, 0.1, np.array([0.1, 0.2]), 0)
@njit
def clip(a, a_min, a_max):
if a > a_max:
return a_max
elif a < a_min:
return a_min
else:
return a
_ = clip(2.0, -1.0, 1.0)
# Steady state response parameters
beta1, beta2, beta3 = param_ssr
# System parameters
a1, a2, a3, b1, b2, b3, b4, c1, c2, c3, c4, td11, td12, td13, td21, td22, td23 = param_dynamics
td11 = int(np.around(td11))
td12 = int(np.around(td12))
td13 = int(np.around(td13))
td21 = int(np.around(td21))
td22 = int(np.around(td22))
td23 = int(np.around(td23))
sat_min = -1.
sat_max = 1.
@njit
def forward_propagation(t, v, param):
kp, ki, kd = param
dt = np.mean(t[1:] - t[:-1])
ki = ki * dt
kd = kd / dt
e_sum = 0.0
e_last = 0.0
    e_int_state = 0 # 0 --> no saturation || 1 --> saturation (+) || -1 --> saturation (-)
is_start = True
u1_list = np.empty(t.shape)
u2_list = np.empty(t.shape)
out = np.empty(t.shape)
y = 0.0
for i in range(t.shape[0]):
# LONGITUDINAL CONTROLLER
sp = clip(v[i], 0.0, np.Inf)
sr = beta1 * (1 - np.exp(beta2*sp)) + beta3
sr = clip(sr, 0., sat_max) * 0.5
err = sp - y
if e_int_state == 0:
e_sum += err
elif e_int_state == 1:
if err < 0:
e_sum += err
elif e_int_state == -1:
if err > 0:
e_sum += err
if is_start:
temp = sr + kp * err + ki * e_sum + 0.
is_start = False
else:
temp = sr + kp * err + ki * e_sum + kd * (err - e_last)
e_last = err
if temp > sat_max: # Saturation (+)
temp = sat_max
e_int_state = 1
elif temp < sat_min: # Saturation (-)
temp = sat_min
e_int_state = -1
else: # Not saturated
e_int_state = 0
u1 = clip(temp, 0.0, sat_max)
u2 = clip(-temp, 0.0, -sat_min)
# DYNAMICS
u11t = delayed_control_signal(i, u1, u1_list, td11)
u12t = delayed_control_signal(i, u1, u1_list, td12)
u13t = delayed_control_signal(i, u1, u1_list, td13)
u21t = delayed_control_signal(i, u2, u2_list, td21)
u22t = delayed_control_signal(i, u2, u2_list, td22)
u23t = delayed_control_signal(i, u2, u2_list, td23)
temp = 0.
if y != 0.:
temp = a1
y_dot = temp + a2 * y + a3 * y**2 \
+ b1 * u11t + b2 * np.exp(b3 * y + b4 * u12t) * u13t \
+ c1 * u21t + c2 * np.exp(c3 * y + c4 * u22t) * u23t
y += y_dot * dt
if y < 0.0:
y = 0.0
u1_list[i] = u1
u2_list[i] = u2
out[i] = y
return out, u1_list, u2_list
_ = forward_propagation(np.arange(10, dtype=float), np.ones(10), np.array([0.1, 0.1, 0.1]))
%timeit forward_propagation(t0, v0, np.array([0.2, 0.1550, 0.1]))
###Output
382 µs ± 3.29 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
###Markdown
Constraint
###Code
@njit
def admissible(param):
kp, ki, kd = param
if kp < 0. or ki < 0. or kd < 0.:
return False
else:
return True
n_dim = 3
_ = admissible(np.random.randn(n_dim))
###Output
_____no_output_____
###Markdown
Cost
###Code
@njit
def gradient(a, t):
dt = np.mean(t[1:]-t[:-1])
out = np.zeros_like(a)
out[1:-1] = (a[2:] - a[:-2]) / 2 / dt
out[0] = out[1]
out[-1] = out[-2]
return out
_ = gradient(v0, t0)
idx = np.array([[9.5, 27.], [35., 40.], [48., 60.], [64., 79.], [87., 95.], [118., 132.], [164., 180.], [184., 210.], [230.2, 240.], [244., t0[-1]+3.]]) -3
direction = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 0])
@njit
def max_os_sim(mv):
out = 0.
for i in range(mv.shape[0]):
for j in range(idx.shape[0]):
if idx[j,0] <= t0[i] and t0[i] <= idx[j,1]:
if direction[j] > 0.5:
temp = mv[i] - v0[i]
else:
temp = v0[i] - mv[i]
temp = temp / v0[i] * 100
temp = clip(temp, 0.0, np.Inf)
if temp > out:
out = temp
return out
_ = max_os_sim(np.zeros(v0.shape[0]))
@njit
def cost(t, v, param, lamda):
    # lamda[0] weights the control-signal rate penalty; lamda[1] is the
    # maximum allowed percent overshoot (exceeding it gives infinite cost)
    mv, cs1, cs2 = forward_propagation(t, v, param)
    error = v - mv
    mj = gradient(cs1, t)
    max_os = max_os_sim(mv)
    if max_os > lamda[1]: # max_os %
        return np.Inf
    loss = np.sum(error**2) + lamda[0] * np.sum(np.abs(mj))
    M = t.shape[0]
    return loss / M
_ = cost(np.arange(10, dtype=float), np.ones(10), np.ones(3), np.array([0.001, 0.001]))
@njit
def mean_squared_error(t, v, param):
mv, _, _ = forward_propagation(t, v, param)
error = v - mv
cost = np.mean(error**2)
return cost
_ = mean_squared_error(np.arange(10, dtype=float), np.ones(10), np.array([0.1, 0.1, 0.1]))
@njit
def mean_absolute_error(t, v, param):
mv, _, _ = forward_propagation(t, v, param)
error = v - mv
out = np.mean(np.abs(error))
return out
_ = mean_absolute_error(np.arange(10, dtype=float), np.ones(10), np.array([0.1, 0.1, 0.1]))
@njit
def max_absolute_error(t, v, param):
mv, _, _ = forward_propagation(t, v, param)
error = v - mv
return np.max(np.abs(error))
_ = max_absolute_error(np.arange(10, dtype=float), np.ones(10), np.array([0.1, 0.1, 0.1]))
@njit
def mean_absolute_jerk(t, v, param):
mv, _, _ = forward_propagation(t, v, param)
ma = gradient(mv, t)
mj = gradient(ma, t)
return np.mean(np.abs(mj))
_ = mean_absolute_jerk(np.arange(10, dtype=float), np.ones(10), np.array([0.1, 0.1, 0.1]))
@njit
def mean_squared_jerk(t, v, param):
mv, _, _ = forward_propagation(t, v, param)
ma = gradient(mv, t)
mj = gradient(ma, t)
return np.mean(mj**2)
_ = mean_squared_jerk(np.arange(10, dtype=float), np.ones(10), np.array([0.1, 0.1, 0.1]))
@njit
def max_percent_overshoot(t, v, param):
mv, _, _ = forward_propagation(t, v, param)
return max_os_sim(mv)
_ = max_percent_overshoot(np.arange(10, dtype=float), np.ones(10), np.array([0.1, 0.1, 0.1]))
@njit
def mean_absolute_u_dot(t, v, param):
mv, cs1, cs2 = forward_propagation(t, v, param)
cs1_dot = gradient(cs1, t)
cs2_dot = gradient(cs2, t)
return np.mean(np.abs(cs1_dot)+np.abs(cs2_dot))
_ = mean_absolute_u_dot(np.arange(10, dtype=float), np.ones(10), np.array([0.1, 0.1, 0.1]))
@njit
def mean_squared_u_dot(t, v, param):
mv, cs1, cs2 = forward_propagation(t, v, param)
cs1_dot = gradient(cs1, t)
cs2_dot = gradient(cs2, t)
return np.mean(np.abs(cs1_dot)**2+np.abs(cs2_dot)**2)
_ = mean_squared_u_dot(np.arange(10, dtype=float), np.ones(10), np.array([0.1, 0.1, 0.1]))
@njit
def calculate_total_cost(param, lamda):
if admissible(param):
return cost(t0, v0, param, lamda)
return np.Inf
_ = calculate_total_cost(np.array([0.1, 0.1, 0.1]), np.array([0.001, 0.001]))
@njit(parallel=True)
def population_cost(population, lamda):
length = population.shape[0]
losses = np.zeros(length)
for ii in prange(length):
losses[ii] = calculate_total_cost(population[ii], lamda)
return losses
_ = population_cost(np.array([[0.1, 0.1, 0.1], [0.1, 0.1, 0.1]]), np.array([0.001, 0.001]))
###Output
_____no_output_____
###Markdown
APSO
###Code
@njit
def apso(population, loss_population, global_, global_loss, alpha0, beta, lamda):
num = population.shape[0]
dim = population.shape[1]
# Initial conditions
ppos_vector = np.copy(population)
pbest_pos = np.copy(ppos_vector)
pfit_value = np.copy(loss_population)
gbest_pos = np.copy(global_)
gfit_value = global_loss
L = np.empty_like(population[0])
for i in range(L.shape[0]):
L[i] = np.max(population[:, i]) - np.min(population[:, i])
for i in range(num):
# Update the alpha value
alpha = alpha0 * L
# Update the velocity and position vector
ppos_vector[i] = (1-beta)*ppos_vector[i] + alpha*np.random.normal(0,1) + beta*gbest_pos
cost_func = calculate_total_cost(ppos_vector[i], lamda)
# Update each values using the cost functions
if(pfit_value[i] > cost_func):
pfit_value[i] = cost_func
pbest_pos[i] = np.copy(ppos_vector[i])
if(gfit_value > cost_func):
gfit_value = cost_func
gbest_pos = np.copy(ppos_vector[i])
return pbest_pos, pfit_value, gbest_pos, gfit_value
xx1 = np.ones((2, n_dim))
xx2 = np.ones(2)
xx3 = np.random.randn(n_dim)
_ = apso(xx1, xx2, xx3, 100.0, 0.8, 1.5, np.array([0., np.Inf]))
###Output
_____no_output_____
###Markdown
SIMULATION (OPTIMIZATION)
###Code
num = 50
n_sim = 20
n_itr = 5000
r_kp = [0.0, 1.0]
r_ki = [0.0, 1.0]
r_kd = [0.0, 1.0]
rng = [r_kp, r_ki, r_kd]
dim = len(rng)
alpha0 = 0.8
beta = 0.15
lamda = np.array([0.0, np.Inf])
param_history = np.zeros((n_sim, dim))
loss_history = np.ones(n_sim) * np.Inf
the_best_param_history = np.zeros((n_itr, dim))
the_best_loss_history = np.zeros(n_itr)
for j in range(n_sim):
print(f'Optimization: {j+1} ------------------------------------------')
print('Initializing ...')
while True:
try:
population = generate_population(num, dim, rng)
global_ = None
global_loss_ = np.Inf
loss_population = population_cost(population, lamda)
loss_population[np.isnan(loss_population)] = np.Inf
min_idx = np.argmin(loss_population)
min_loss = loss_population[min_idx]
if global_loss_ > min_loss:
global_loss_ = min_loss
global_ = population[min_idx, :]
global_history = np.empty((n_itr, dim))
global_history[0] = global_
global_loss_history = np.empty(n_itr)
global_loss_history[0] = global_loss_
            # Note: initialization often fails here if the entire initial population violates the constraints
population, loss_population, global_, global_loss_ = apso(population, loss_population, global_, global_loss_, alpha0, beta, lamda)
break
        except Exception:
            print('Re-Initializing ...')
print('Continue ...')
for i in range(1, n_itr):
# APSO
population, loss_population, global_, global_loss_ = apso(population, loss_population, global_, global_loss_, alpha0, beta, lamda)
if (i-1) % 500 == 0:
print('simulation: {} || iteration: {} || global_loss: {:.5f}'.format(j+1, i, global_loss_))
global_history[i] = global_
global_loss_history[i] = global_loss_
if np.min(loss_history) > global_loss_history[-1]:
the_best_loss_history = np.copy(global_loss_history)
the_best_param_history = np.copy(global_history)
param_history[j] = np.copy(global_history[-1])
loss_history[j] = np.copy(global_loss_history[-1])
    print('simulation: {} || the best loss: {:.10f}'.format(j+1, the_best_loss_history[-1]))
# Save the simulation
np.save('result/param_history.npy', param_history)
np.save('result/loss_history.npy', loss_history)
np.save('result/the_best_loss_history.npy', the_best_loss_history)
np.save('result/the_best_param_history.npy', the_best_param_history)
f = open("result/sim.cfg", "w+")
f.writelines('num: {} # The number of particles\n'.format(num))
f.writelines('n_sim: {} # The number of simulation loop\n'.format(n_sim))
f.writelines('n_itr: {} # The number of iteration for each simulation\n'.format(n_itr))
f.writelines('\n# Lambda value\n')
f.writelines('lambda0: {}\n'.format(lamda[0]))
f.writelines('lambda1: {}\n'.format(lamda[1]))
f.writelines('\n# The boundary of the initialization value\n')
f.writelines('r_kp: {}\n'.format(r_kp))
f.writelines('r_ki: {}\n'.format(r_ki))
f.writelines('r_kd: {}\n'.format(r_kd))
f.writelines('\n# The APSO hyperparameters\n')
f.writelines('alpha0: {}\n'.format(alpha0))
f.writelines('beta: {}\n'.format(beta))
f.close()
print('Lambda')
print(lamda)
print('Parameters')
print(global_)
print('Total loss: {}'.format(global_loss_))
print('MAE: {}'.format(mean_absolute_error(t0, v0, global_)))
print('MAJ: {}'.format(mean_absolute_jerk(t0, v0, global_)))
print('MSJ: {}'.format(mean_squared_jerk(t0, v0, global_)))
print('MAUD: {}'.format(mean_absolute_u_dot(t0, v0, global_)))
print('maximum %OS: {}'.format(max_percent_overshoot(t0, v0, global_)))
print('MSUD: {}'.format(mean_squared_u_dot(t0, v0, np.array([0.56458294, 2.2533995, 0.07817718]))))
###Output
MSUD: 27.819735851337157
|
doc/auto_tutorials/plot_08-FurtherAnalysis.ipynb | ###Markdown
08: Further Analysis
====================

Analyze results from fitting power spectrum models.

Exploring Power Spectrum Model Results
--------------------------------------

So far we have explored how to parameterize neural power spectra as a method to extract parameters of interest from data - in particular measuring aperiodic and periodic activity. These measured parameters can then be examined within or between groups of interest, and/or fed into further analysis to examine if, for example, these parameters predict other behavioural or physiological features of interest.

Largely, it is up to you what to do after fitting power spectrum models, as it depends on your questions of interest. Here, we briefly introduce some analysis utilities that are included in the module, and explore some simple analyses that can be done with model parameters.

To start, we will load and fit some example data, as well as simulate a group of power spectra to fit with power spectrum models.
###Code
# General imports
import numpy as np
# Import the FOOOF and FOOOFGroup objects
from fooof import FOOOF, FOOOFGroup
# Import the Bands object, which is used to define frequency bands
from fooof.bands import Bands
# Import simulation code and utilities
from fooof.sim.params import param_sampler
from fooof.sim.gen import gen_group_power_spectra
from fooof.sim.utils import set_random_seed
# Import some analysis functions
from fooof.analysis import get_band_peak_fm, get_band_peak_fg
# Import utility to download and load example data
from fooof.utils.download import load_fooof_data
###Output
_____no_output_____
###Markdown
Load and Fit Example Data
~~~~~~~~~~~~~~~~~~~~~~~~~
###Code
# Load examples data files needed for this example
freqs = load_fooof_data('freqs.npy', folder='data')
spectrum = load_fooof_data('spectrum.npy', folder='data')
# Fit a power spectrum model
fm = FOOOF(peak_width_limits=[2, 8])
fm.fit(freqs, spectrum, [3, 30])
###Output
_____no_output_____
###Markdown
Simulate and Fit Group Data
~~~~~~~~~~~~~~~~~~~~~~~~~~~
###Code
# Set random seed, for consistency generating simulated data
set_random_seed(21)
# Generate some simulated power spectra
freqs, spectra = gen_group_power_spectra(n_spectra=10,
freq_range=[3, 40],
aperiodic_params=param_sampler([[20, 2], [35, 1.5]]),
periodic_params=param_sampler([[], [10, 0.5, 2]]))
# Initialize a FOOOFGroup object with desired settings
fg = FOOOFGroup(peak_width_limits=[1, 8], min_peak_height=0.05,
max_n_peaks=6, verbose=False)
# Fit power spectrum models across the group of simulated power spectra
fg.fit(freqs, spectra)
###Output
_____no_output_____
###Markdown
Analysis Utilities
------------------

The FOOOF module includes some analysis functions. Note that these utilities are generally relatively simple utilities that assist in accessing and investigating the model parameters.

In depth analysis of power spectrum model results is typically idiosyncratic to the goals of the project, and so we consider that this will typically require custom code, and seek here to offer the most general utilities, and not support all possible applications. Here we demonstrate some of these utility functions covering very general use cases.

Analyzing Periodic Components
-----------------------------

We will start by analyzing the periodic components. In particular, these utilities mostly serve to help organize and extract periodic components, for example extracting peaks that fall within defined frequency bands. This also includes using the :class:`~.Bands` object, that is provided to store band definitions.
###Code
# Define frequency bands of interest
bands = Bands({'theta' : [4, 8],
'alpha' : [8, 12],
'beta' : [15, 30]})
###Output
_____no_output_____
###Markdown
Extracting peaks from FOOOF Objects
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The :func:`~.get_band_peak_fm` function takes in a :class:`~.FOOOF` object and extracts peak(s) from a requested frequency range.

You can optionally specify:

- whether to return one peak from the specified band, in which case the highest peak is returned, or whether to return all peaks from within the band
- whether to apply a minimum threshold to extract peaks, for example, to extract peaks only above some minimum power threshold
###Code
# Extract any alpha band peaks from the power spectrum model
alpha = get_band_peak_fm(fm, bands.alpha)
print(alpha)
###Output
_____no_output_____
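###Markdown
As a minimal sketch of those optional arguments (the 0.1 power threshold here is an illustrative value, not one from the tutorial data), we can ask for all alpha peaks above a minimum power:
###Code
# Extract all alpha peaks (not just the highest), keeping only those above a power threshold
alphas_above = get_band_peak_fm(fm, bands.alpha, select_highest=False, threshold=0.1)
print(alphas_above)
###Output
_____no_output_____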
###Markdown
Extracting peaks from FOOOFGroup Objects
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Similarly, the :func:`~.get_band_peak_fg` function can be used to select peaks within specific frequency ranges, from :class:`~fooof.FOOOFGroup` objects.

Note that you can also apply a threshold to extract group peaks but, as discussed below, this approach will only extract one peak per individual model in the FOOOFGroup object.
###Code
# Get all alpha peaks from a group of power spectrum models
alphas = get_band_peak_fg(fg, bands.alpha)
# Check out some of the alpha data
print(alphas[0:5, :])
###Output
_____no_output_____
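###Markdown
As a small sketch of the thresholding mentioned above (again with an illustrative 0.1 power threshold):
###Code
# Get alpha peaks across the group, keeping only peaks with power above the threshold
alphas_thresh = get_band_peak_fg(fg, bands.alpha, threshold=0.1)
print(alphas_thresh[0:5, :])
###Output
_____no_output_____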
###Markdown
When selecting peaks from a group of model fits, we want to retain information about which model each peak comes from.

To do so, the output of :func:`~.get_band_peak_fg` is organized such that each row corresponds to a specific model fit. This means that the returned array has the shape [n_models, 3], and so the index of each row corresponds to the index of the model from the FOOOFGroup object.

For this to work, at most 1 peak is extracted for each model fit within the specified band. If more than 1 peak is found within the band, the peak with the highest power is extracted. If no peaks are found, that row is filled with 'nan'.
###Code
# Check descriptive statistics of extracted peak data
print('Alpha CF : {:1.2f}'.format(np.nanmean(alphas[:, 0])))
print('Alpha PW : {:1.2f}'.format(np.nanmean(alphas[:, 1])))
print('Alpha BW : {:1.2f}'.format(np.nanmean(alphas[:, 2])))
###Output
_____no_output_____
###Markdown
Customizing Peak Extraction
~~~~~~~~~~~~~~~~~~~~~~~~~~~

If you want to do more customized extraction of peaks, for example, extracting all peaks in a frequency band from each model in a FOOOFGroup object, you may need to use the underlying functions that operate on arrays of peak parameters. To explore these functions, check the listing in the API page.

A Note on Frequency Ranges
--------------------------

A benefit of fitting power spectrum models is that you do not have to define a priori frequency ranges from which to extract peaks. Nevertheless, it may still be useful to group extracted peaks into 'bands' of interest, which is why the aforementioned functions are offered.

Since this frequency-range selection can be done after model fitting, we do recommend checking the model results, for example by checking a histogram of the center frequencies extracted across a group, in order to ensure the frequency ranges you choose reflect the characteristics of the data under study.

Analyzing the Aperiodic Component
---------------------------------

Typically, for analyzing the aperiodic component of the data, aperiodic parameters just need to be extracted from FOOOF objects and fit into analyses of interest.
###Code
# Plot from the FOOOFGroup, to visualize the parameters
fg.plot()
# Extract aperiodic exponent data from group results
exps = fg.get_params('aperiodic_params', 'exponent')
# Check out the aperiodic exponent results
print(exps)
###Output
_____no_output_____ |
training/CellAttention_with_RoBERTa.ipynb | ###Markdown
All
###Code
from google.colab import drive
drive.mount('/content/drive')
from google.colab import auth
auth.authenticate_user()
%%capture
!pip install --force-reinstall git+https://github.com/raina-kikani/transformers.git
from transformers import EncoderDecoderModel, RobertaTokenizer, RobertaConfig, RobertaModel
import torch
import numpy as np
if torch.cuda.is_available():
device = torch.device('cuda')
print(torch.cuda.get_device_name())
else:
device = torch.device('cpu')
###Output
Tesla T4
###Markdown
Train
###Code
!pip install tokenizers
!mkdir cellAttention
#!gsutil cp gs://cytereader/preprocessed_cell_corpus_0.txt .
#!gsutil cp gs://cytereader/preprocessed_cell_corpus_1.txt .
### upload vocab list
from google.colab import files
uploaded = files.upload()
#from tokenizers import BertWordPieceTokenizer
#!mkdir cellAttention
#wb_tokenizer = BertWordPieceTokenizer(clean_text=True,
# strip_accents=True, lowercase=True)
#
#wb_tokenizer.train(['preprocessed_cell_corpus_0.txt'],
# vocab_size=10000, min_frequency=2,
# special_tokens=["[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]"])
#wb_tokenizer.save_model("./cellAttention")
from transformers import RobertaConfig, BertTokenizer, RobertaModel, RobertaForMaskedLM, RobertaForSequenceClassification
tokenizer = BertTokenizer.from_pretrained("./cellAttention", max_len=64)
configuration = RobertaConfig(vocab_size=1000)
model = RobertaForMaskedLM(configuration)
# tokenizer = CellBertTokenizer.from_pretrained('./cellAttention/',vocab_file="vocab.txt")
model.num_parameters()
cnt = 0
with open('small.txt', 'w') as file:
for line in open('preprocessed_cell_corpus_0.txt'):
file.write(line)
cnt += 1
if cnt == 100000:
break
%%time
from transformers import LineByLineTextDataset
dataset = LineByLineTextDataset(
tokenizer=tokenizer,
file_path="small.txt",
block_size=128,
)
from transformers import DataCollatorForLanguageModeling
data_collator = DataCollatorForLanguageModeling(
tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)
from transformers import Trainer, TrainingArguments
training_args = TrainingArguments(
output_dir="./cellAttention",
overwrite_output_dir=True,
num_train_epochs=1,
per_device_train_batch_size=64,
save_steps=10_000,
learning_rate=1e-4,
save_total_limit=2,
prediction_loss_only=True,
)
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=dataset,
)
# %%time
trainer.train()
trainer.save_model("./cellAttention")
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model="./cellAttention",
tokenizer="./cellAttention"
)
fill_mask("CD45+ CD196_CCR6+ CD181_CXCR1- HLA_DR- CD15- CD31_PECAM1- CD8a- CD182_CXCR2[MASK] CD66ace- CD63- CD14- CD66b- CD62L_Lselectin- CD3+ CD27- CD86+ CD10- CD197_CCR7+ CD28- CD11c- CD33- CD161- CD45RO- CD24- CD38+ CD278_ICOS- CD32- CD152_CTLA4+ IgM+ CD184_CXCR4+ CD279_PD1- CD56+ CD16-")
# Illustration of the task: given a cell's marker string with one marker's sign
# masked, the model fills in the masked expression level.
example_input = 'CD45+ CD196_CCR6+ CD181_CXCR1- HLA_DR- CD15-'
masked_input = 'CD45+ CD196_CCR6+ CD181_CXCR1[MASK] HLA_DR- CD15-'
expected_output = 'CD45+ CD196_CCR6+ CD181_CXCR1-/+ HLA_DR- CD15-'
###Output
_____no_output_____ |
create_sims.ipynb | ###Markdown
For each planet we need to calculate:

* mass
* density
* semimajor
* eccentricity
* inclination
* omega
* OMEGA
* mean anomaly
###Code
# specify star and planet parameters
# star
star = dict(
mass_msun=[0.0802,0.0073],
radius_rsun=[0.117,0.0036]
)
# planet b
planetb = dict(
period_days=[1.51087081,0.00000060],
t0=[7322.51736,0.00010],
impact=[0.126,0.090],
mass_mearth=[0.85,0.72],
ecc_max=[0.2], # 5-sigma upper limit
td_percent=[0.7266,0.0088],
)
# planet c
planetc = dict(
period_days=[2.4218233,0.0000017],
t0=[7282.80728,0.00019],
impact=[0.161,0.080],
mass_mearth=[1.38,0.61],
ecc_max=[0.2], # 5-sigma upper limit
td_percent=[0.687,0.010],
)
# planet d
planetd = dict(
period_days=[4.049610,0.000063],
t0=[7670.14165,0.00035],
impact=[0.17,0.11],
mass_mearth=[0.41,0.27],
ecc_max=[0.175], # 5-sigma upper limit
td_percent=[0.367,0.017],
)
# planet e
planete = dict(
period_days=[6.099615,0.000011],
t0=[7660.37859,0.00026],
impact=[0.12,0.10],
mass_mearth=[0.62,0.58],
ecc_max=[0.2], # 5-sigma upper limit
td_percent=[0.519,0.026],
)
# planet f
planetf = dict(
period_days=[9.206690,0.000015],
t0=[7671.39767,0.00023],
impact=[0.382,0.035],
mass_mearth=[0.68,0.18],
ecc_max=[0.12], # 5-sigma upper limit
td_percent=[0.673,0.023],
)
# planet g
planetg = dict(
period_days=[12.35294,0.00012],
t0=[7665.34937,0.00021],
impact=[0.421,0.031],
mass_mearth=[1.34,0.88],
ecc_max=[0.12], # 5-sigma upper limit
td_percent=[0.782,0.027],
)
# planet h
planeth = dict(
period_days_uniform=[14,35],
t0=[7662.55463,0.00056],
impact=[0.45,0.3],
mass_mearth=[0.4,1.0],
ecc_max=[0.3], # 5-sigma upper limit
td_percent=[0.353,0.0326],
)
def calc_mercury_parameters(pdicts, sdict, size=1):
# stellar radius
sradius_rsun = _get_property(sdict['radius_rsun'][0], sdict['radius_rsun'][1], 0.0, 100.0, size=size)
# stellar mass
smass_msun = _get_property(sdict['mass_msun'][0], sdict['mass_msun'][1], 0.0, 100.0, size=size)
nplanets = len(pdicts)
mercury_params = []
for pdict in pdicts:
mercury_params.append(_calc_planet_parameters(pdict, sradius_rsun, smass_msun, size=1))
if size == 1:
mercury_params = np.reshape(mercury_params, [nplanets, 8])
return mercury_params
def _calc_planet_parameters(pdict, sradius_rsun, smass_msun, size=1):
# mass
pmass_mearth = _get_property(pdict['mass_mearth'][0], pdict['mass_mearth'][1], 0.0, 5.0, size=size)
pmass_msun = pmass_mearth * 3.003467E-6
# density
rprs = (_get_property(pdict['td_percent'][0], pdict['td_percent'][1], 0.0, 50., size=size)/100.)**0.5
pradius_rsun = (rprs * sradius_rsun)
pdensity_cgs = (pmass_msun * 1.989E33) / ((4./3.) *np.pi* (pradius_rsun * 69.57E9)**3)
# semimajor
sdensity_cgs = (smass_msun * 1.989E33) / ((4./3.) * np.pi* (sradius_rsun * 69.57E9)**3)
if 'period_days' in pdict.keys():
pperiod_days = _get_property(pdict['period_days'][0], pdict['period_days'][1], 0.0, 10000.0, size=size)
elif 'period_days_uniform' in pdict.keys():
pperiod_days = np.random.uniform(pdict['period_days_uniform'][0], pdict['period_days_uniform'][1], size=size)
else:
raise 'period is missing'
ars = get_ar(sdensity_cgs, pperiod_days)
semimajor_au = ars * sradius_rsun * 0.00464913034
# ecc
ecc = np.random.uniform(0.0, pdict['ecc_max'], size=size)
# inclination
b = _get_property(pdict['impact'][0], pdict['impact'][1], 0.0, 1.0, size=size)
inc = np.degrees(np.arccos(b / ars))
# omega
omega = np.random.rand(size) * 360
# OMEGA
OMEGA = np.random.rand(size) * 360
# meananomaly
t0 = np.random.normal(pdict['t0'][0], pdict['t0'][1], size=size)
meananomaly = (t0 % pperiod_days) / pperiod_days * 360
return pmass_msun, pdensity_cgs, semimajor_au, ecc, inc, omega, OMEGA, meananomaly
def _get_property(mu, sigma, lower, upper, size):
X = stats.truncnorm.rvs(
(lower - mu) / sigma, (upper - mu) / sigma, loc=mu, scale=sigma, size=size)
return X
def get_ar(rho,period):
""" gets a/R* from period and mean stellar density"""
G = 6.67E-11
rho_SI = rho * 1000.
tpi = 3. * np.pi
period_s = period * 86400.
part1 = period_s**2 * G * rho_SI
ar = (part1 / tpi)**(1./3.)
return ar
pdicts = [planetb, planetc, planetd, planete, planetf, planetg, planeth]
q = calc_mercury_parameters(pdicts, star, size=1)
outstr = r''')O+_06 Big-body initial data (WARNING: Do not delete this line!!)
) Lines beginning with ) are ignored.
)---------------------------------------------------------------------
style (Cartesian, Asteroidal, Cometary) = Ast
epoch (in days) = 0
)---------------------------------------------------------------------
PL1 m={} d={}
{} {} {} {} {} {} 0. 0. 0.
PL2 m={} d={}
{} {} {} {} {} {} 0. 0. 0.
PL3 m={} d={}
{} {} {} {} {} {} 0. 0. 0.
PL4 m={} d={}
{} {} {} {} {} {} 0. 0. 0.
PL5 m={} d={}
{} {} {} {} {} {} 0. 0. 0.
PL6 m={} d={}
{} {} {} {} {} {} 0. 0. 0.
PL7 m={} d={}
{} {} {} {} {} {} 0. 0. 0.'''.format(*q.flatten())
print(outstr)
###Output
)O+_06 Big-body initial data (WARNING: Do not delete this line!!)
) Lines beginning with ) are ignored.
)---------------------------------------------------------------------
style (Cartesian, Asteroidal, Cometary) = Ast
epoch (in days) = 0
)---------------------------------------------------------------------
PL1 m=5.29335286427e-07 d=0.782887909861
0.0107451796487 0.0179840586075 89.2797335997 224.035863701 60.6264500675 199.11674704 0. 0. 0.
PL2 m=4.88567681607e-06 d=7.94430667632
0.0147171453652 0.147395189085 89.7259184124 103.969200702 96.4575250127 57.5750531725 0. 0. 0.
PL3 m=1.21847438558e-06 d=4.9277256779
0.0207334781259 0.166793011888 89.7455070975 19.7696641937 1.22945615356 13.0041705069 0. 0. 0.
PL4 m=2.0063932814e-06 d=5.00610502414
0.0272435245565 0.117199458824 89.9628577261 299.243220589 317.818542142 317.035789111 0. 0. 0.
PL5 m=1.77977018533e-06 d=2.71625038771
0.0358479982821 0.0737592707889 89.7565108299 177.869693568 83.5372887334 86.4829813948 0. 0. 0.
PL6 m=1.28635567445e-06 d=1.64677336305
0.0436088629565 0.1035643245 89.6824221699 259.447638481 263.665835547 190.854564442 0. 0. 0.
PL7 m=4.04909572015e-06 d=23.246594285
0.0531109427093 0.069732910376 89.6216162676 231.002562226 102.01793636 187.25905021 0. 0. 0.
|
examples/alanine_dipeptide_tps/AD_tps_3a_analysis_flex.ipynb | ###Markdown
Analyzing the flexible path length simulation

Load the file, and from the file pull out the engine (which tells us what the timestep was) and the move scheme (which gives us a starting point for much of the analysis).
###Code
filename = "tps_nc_files/alanine_dipeptide_tps.nc"
# note that this log will overwrite the log from the previous notebook
#import logging.config
#logging.config.fileConfig("logging.conf", disable_existing_loggers=False)
%%time
flexible = paths.AnalysisStorage(filename)
engine = flexible.engines[0]
flex_scheme = flexible.schemes[0]
print "File size: {0} for {1} steps, {2} snapshots".format(
flexible.file_size_str,
len(flexible.steps),
len(flexible.snapshots)
)
# rough estimate of total time
sum(step.change.details.timing for step in flexible.steps if step.change.details is not None)
step = flexible.steps[1]
step.change.details
###Output
_____no_output_____
###Markdown
That tells us a little about the file we're dealing with. Now we'll start analyzing the contents of that file. We used a very simple move scheme (only shooting), so the main information that the `move_summary` gives us is the acceptance of the only kind of move in that scheme. See the MSTIS examples for more complicated move schemes, where you want to make sure that the frequency at which the move runs is close to what was expected.
###Code
flex_scheme.move_summary(flexible.steps)
###Output
shooting ran 100.000% (expected 100.00%) of the cycles with acceptance 5639/10000 (56.39%)
###Markdown
Replica history tree and decorrelated trajectories

The `ReplicaHistoryTree` object gives us both the history tree (often called the "move tree") and the number of decorrelated trajectories.

A `ReplicaHistoryTree` is made for a certain set of Monte Carlo steps. First, we make a tree of only the first 25 steps in order to visualize it. (All 10000 steps would be unwieldy.) After the visualization, we make a second `ReplicaHistoryTree` of all the steps, in order to count the number of decorrelated trajectories.
###Code
replica_history = ops_vis.ReplicaEvolution(replica=0)
tree = ops_vis.PathTree(
flexible.steps[0:25],
replica_history
)
tree.options.css['scale_x'] = 3
SVG(tree.svg())
# can write to svg file and open with programs that can read SVG
with open("flex_tps_tree.svg", 'w') as f:
f.write(tree.svg())
tree.options.movers['default']['new'] = 'single'
tree.options.css['scale_x'] = 3
tree.options.css['horizontal_gap'] = 0.1 # True is the same as 0.05
SVG(tree.svg())
print "Decorrelated trajectories:", len(tree.generator.decorrelated_trajectories)
full_history = ops_vis.PathTree(
flexible.steps,
ops_vis.ReplicaEvolution(
replica=0
)
)
n_decorrelated = len(full_history.generator.decorrelated_trajectories)
print "All decorrelated trajectories:", n_decorrelated
###Output
All decorrelated trajectories: 893
###Markdown
Path length distribution

Flexible length TPS gives a distribution of path lengths. Here we calculate the length of every accepted trajectory, then histogram those lengths, and calculate the maximum and average path lengths.

We also use `engine.snapshot_timestep` to convert the count of frames to time, including correct units.
###Code
path_lengths = [len(step.active[0].trajectory) for step in flexible.steps]
plt.hist(path_lengths, bins=40, alpha=0.5);
print "Maximum:", max(path_lengths), "("+str(max(path_lengths)*engine.snapshot_timestep)+")"
print "Average:", "{0:.2f}".format(np.mean(path_lengths)), "("+(np.mean(path_lengths)*engine.snapshot_timestep).format("%.3f")+")"
###Output
Maximum: 505 (10.1 ps)
Average: 82.10 (1.642 ps)
###Markdown
Path density histogram

Next we will create a path density histogram. Calculating the histogram itself is quite easy: first we reload the collective variables we want to plot it in (we choose the phi and psi angles). Then we create the empty path density histogram, by telling it which CVs to use and how to make the histogram (bin sizes, etc). Finally, we build the histogram by giving it the list of active trajectories to histogram.
###Code
from openpathsampling.numerics import HistogramPlotter2D
psi = flexible.cvs['psi']
phi = flexible.cvs['phi']
deg = 180.0 / np.pi
path_density = paths.PathDensityHistogram(cvs=[phi, psi],
left_bin_edges=(-180/deg,-180/deg),
bin_widths=(2.0/deg,2.0/deg))
# TODO: can we pre-cache all the trajectories, too? That might make this faster....
flexible.trajectories.cache_all()
%%time
path_dens_counter = path_density.histogram([s.active[0].trajectory for s in flexible.steps])
# TODO: for the real thing, run over *all* steps -- just takes 10 times longer
###Output
CPU times: user 7min 7s, sys: 1.74 s, total: 7min 8s
Wall time: 7min 9s
###Markdown
Now we've built the path density histogram, and we want to visualize it. We have a convenient `plot_2d_histogram` function that works in this case, and takes the histogram, desired plot tick labels and limits, and additional `matplotlib` named arguments to `plt.pcolormesh`.
###Code
tick_labels = np.arange(-np.pi, np.pi+0.01, np.pi/4)
plotter = HistogramPlotter2D(path_density,
xticklabels=tick_labels,
yticklabels=tick_labels,
label_format="{:4.2f}")
ax = plotter.plot(cmap="Blues")
# this is the figure we actually publish
# change xlim and ylim (which are in radians) to get the figure you want
xlim = (-np.pi, 0)
ylim = (-np.pi/2, np.pi)
#xlim = ylim = (-np.pi, np.pi)
state_color = (0.953, 0.867, 0.878)
plt.rcParams.update({'font.size': 14})
# main plotting
ax = plotter.plot(xlim=xlim, ylim=ylim, cmap="Blues")
trajA = flexible.steps[3000].active[0].trajectory
trajB = flexible.steps[2000].active[0].trajectory
plotter.plot_trajectory(trajA, '-k', lw=0.5)
plotter.plot_trajectory(trajB, '-r', lw=0.5)
plt.xlabel("$\phi$")
plt.ylabel("$\psi$")
# adding something to show the states
alpha_R_xywh = (-180, -100, 180, 100)
# our states are rectangular, so we make rectangular patches
from matplotlib.patches import Rectangle
def state_patch(x, y, w, h):
xy = np.array([x, y]) / deg
wh = np.array([w, h]) / deg
plot_xy = [plotter.to_bins(val, i)
for (i, val) in enumerate(xy)]
plot_w, plot_h = wh / plotter.histogram.bin_widths
return Rectangle(plot_xy, plot_w, plot_h, color=state_color)
ax.axes.add_patch(state_patch(-180, -100, 180, 100)) # alpha_R
ax.axes.add_patch(state_patch(-180, 100, 180, 100)) # C7eq
ax.axes.add_patch(state_patch(-180, -260, 180, 100)) # C7eq, wrapped around
plt.text(x=plotter.to_bins(-100/deg, 0),
y=plotter.to_bins(-60/deg, 1),
s="$\\alpha_R$")
plt.text(x=plotter.to_bins(-100/deg, 0),
y=plotter.to_bins(130/deg, 1),
s="$C_{7eq}$")
# now we're going to clean up so our axes are in degrees
# save limits
xlim = plt.xlim()
ylim = plt.ylim()
# convert labels back to degree
def degree_ticks(locs_labels):
locs, labels = locs_labels
new_labels = []
for label in labels:
numeric = float(label.get_text())
label.set_text("{:.0f}".format(numeric*deg))
new_labels.append(label)
return locs, labels
xlocs, xlabels = degree_ticks(plt.xticks())
plt.xticks(xlocs, xlabels)
ylocs, ylabels = degree_ticks(plt.yticks())
plt.yticks(ylocs, ylabels)
plt.xlim(*xlim)
plt.ylim(*ylim);
plt.tight_layout()
plt.savefig("AD_tps_pathdensity.pdf")
#import nglview as nv
#nv.show_mdtraj(traj.to_mdtraj())
trajA.to_mdtraj()
###Output
_____no_output_____
###Markdown
Analyzing the flexible path length simulation
###Code
from __future__ import print_function
%matplotlib inline
import openpathsampling as paths
import numpy as np
import matplotlib.pyplot as plt
import os
import openpathsampling.visualize as ops_vis
from IPython.display import SVG
###Output
_____no_output_____
###Markdown
Load the file, and from the file pull out the engine (which tells us what the timestep was) and the move scheme (which gives us a starting point for much of the analysis).
###Code
# note that this log will overwrite the log from the previous notebook
#import logging.config
#logging.config.fileConfig("logging.conf", disable_existing_loggers=False)
%%time
flexible = paths.AnalysisStorage("ad_tps.nc")
# opening as AnalysisStorage is a little slower, but speeds up the move_summary
engine = flexible.engines[0]
flex_scheme = flexible.schemes[0]
print("File size: {0} for {1} steps, {2} snapshots".format(
flexible.file_size_str,
len(flexible.steps),
len(flexible.snapshots)
))
###Output
File size: 18.65GB for 10001 steps, 985686 snapshots
###Markdown
That tells us a little about the file we're dealing with. Now we'll start analyzing the contents of that file. We used a very simple move scheme (only shooting), so the main information that the `move_summary` gives us is the acceptance of the only kind of move in that scheme. See the MSTIS examples for more complicated move schemes, where you want to make sure that the frequency at which the move runs is close to what was expected.
###Code
flex_scheme.move_summary(flexible.steps)
###Output
_____no_output_____
###Markdown
Replica history tree and decorrelated trajectories

The `ReplicaHistoryTree` object gives us both the history tree (often called the "move tree") and the number of decorrelated trajectories.

A `ReplicaHistoryTree` is made for a certain set of Monte Carlo steps. First, we make a tree of only the first 25 steps in order to visualize it. (All 10000 steps would be unwieldy.) After the visualization, we make a second `PathTree` of all the steps. We won't visualize that; instead we use it to count the number of decorrelated trajectories.
###Code
replica_history = ops_vis.ReplicaEvolution(replica=0)
tree = ops_vis.PathTree(
flexible.steps[0:25],
replica_history
)
tree.options.css['scale_x'] = 3
SVG(tree.svg())
# can write to svg file and open with programs that can read SVG
with open("flex_tps_tree.svg", 'w') as f:
f.write(tree.svg())
print("Decorrelated trajectories:", len(tree.generator.decorrelated_trajectories))
%%time
full_history = ops_vis.PathTree(
flexible.steps,
ops_vis.ReplicaEvolution(
replica=0
)
)
n_decorrelated = len(full_history.generator.decorrelated_trajectories)
print("All decorrelated trajectories:", n_decorrelated)
###Output
All decorrelated trajectories: 846
CPU times: user 1min 22s, sys: 321 ms, total: 1min 22s
Wall time: 1min 22s
###Markdown
Path length distribution

Flexible length TPS gives a distribution of path lengths. Here we calculate the length of every accepted trajectory, then histogram those lengths, and calculate the maximum and average path lengths.

We also use `engine.snapshot_timestep` to convert the count of frames to time, including correct units.
###Code
path_lengths = [len(step.active[0].trajectory) for step in flexible.steps]
plt.hist(path_lengths, bins=40, alpha=0.5);
print("Maximum:", max(path_lengths),
"("+(max(path_lengths)*engine.snapshot_timestep).format("%.3f")+")")
print ("Average:", "{0:.2f}".format(np.mean(path_lengths)),
"("+(np.mean(path_lengths)*engine.snapshot_timestep).format("%.3f")+")")
###Output
Maximum: 449 (8.980 ps)
Average: 84.56 (1.691 ps)
###Markdown
Path density histogram

Next we will create a path density histogram. Calculating the histogram itself is quite easy: first we reload the collective variables we want to plot it in (we choose the phi and psi angles). Then we create the empty path density histogram, by telling it which CVs to use and how to make the histogram (bin sizes, etc). Finally, we build the histogram by giving it the list of active trajectories to histogram.
###Code
from openpathsampling.numerics import HistogramPlotter2D
psi = flexible.cvs['psi']
phi = flexible.cvs['phi']
deg = 180.0 / np.pi
path_density = paths.PathDensityHistogram(cvs=[phi, psi],
left_bin_edges=(-180/deg,-180/deg),
bin_widths=(2.0/deg,2.0/deg))
path_dens_counter = path_density.histogram([s.active[0].trajectory for s in flexible.steps])
###Output
_____no_output_____
###Markdown
Now we've built the path density histogram, and we want to visualize it. We have a convenient `plot_2d_histogram` function that works in this case, and takes the histogram, desired plot tick labels and limits, and additional `matplotlib` named arguments to `plt.pcolormesh`.
###Code
tick_labels = np.arange(-np.pi, np.pi+0.01, np.pi/4)
plotter = HistogramPlotter2D(path_density,
xticklabels=tick_labels,
yticklabels=tick_labels,
label_format="{:4.2f}")
ax = plotter.plot(cmap="Blues")
###Output
_____no_output_____
###Markdown
Convert to MDTraj for analysis by external tools

The trajectory can be converted to an MDTraj trajectory, and then used anywhere that MDTraj can be used. This includes writing it to a file (in any number of file formats) or visualizing the trajectory using, e.g., NGLView.
###Code
ops_traj = flexible.steps[1000].active[0].trajectory
traj = ops_traj.to_mdtraj()
traj
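# As a hedged illustration of writing to a file: MDTraj trajectories have a
# generic save() that infers the format from the extension (the filename here
# is just an example, not from the original notebook).
#traj.save('ad_tps_example.pdb')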
# Here's how you would then use NGLView:
#import nglview as nv
#view = nv.show_mdtraj(traj)
#view
flexible.close()
###Output
_____no_output_____ |
tutorials/TDC_103.1_Datasets_Small_Molecules.ipynb | ###Markdown
TDC 103: Datasets Part 1 - Small Molecules

[Kexin](https://twitter.com/KexinHuang5)

In this tutorial, we will walk through various small molecule datasets provided in TDC!

We assume you have familiarized yourself with the installations, data loaders, and data functions. If not, please visit [TDC 101 Data Loaders](https://github.com/mims-harvard/TDC/blob/master/tutorials/TDC_101_Data_Loader.ipynb) and [TDC 102 Data Functions](https://github.com/mims-harvard/TDC/blob/master/tutorials/TDC_102_Data_Functions.ipynb) first!

TDC has more than 60 datasets in the first release. In this tutorial, we highlight many of them and hopefully give users a sense of what TDC covers. We will start with small molecule drugs and move to biologics in the next part of the tutorial. For small molecules, we introduce the datasets in the order of the discovery and development pipeline.

Small Molecule Target Discovery

The first stage of small molecule drug discovery is target discovery, that is, identifying genes for the disease of interest. This is relatively underexplored for ML usage. One way to do it is by modeling it as a prediction problem for gene-disease association (GDA). TDC includes one high quality GDA dataset, [DisGeNET](https://www.disgenet.org/), which curates from UniProt, PsyGeNET, Orphanet, the CGI, CTD (human data), ClinGen, and the Genomics England PanelApp. We also generate disease definitions for diseases and amino acid sequences for genes as input features. You can access them via:
###Code
from tdc.multi_pred import GDA
data = GDA(name = 'DisGeNET')
data.get_data().head(2)
###Output
Downloading...
100%|██████████| 63.9M/63.9M [00:03<00:00, 18.2MiB/s]
Loading...
Done!
###Markdown
The Gene_ID is the GenBank GeneID and the Disease_ID is the Concept ID from MedGen. We can see the association distribution by:
###Code
data.label_distribution()
###Output
_____no_output_____
###Markdown
Now, you can build ML models to predict this association. Also, note that another way to phrase it is as a missing link prediction problem on the gene-disease association network, where you can apply recent graph ML to make interesting predictions. You can obtain the network object in edge list/DGL/PyG format using TDC data functions. For example, suppose we want to include all associations above 0.35 as edges. Then, to obtain a DGL object, type:
###Code
graph = data.to_graph(threshold = 0.35, format = 'dgl', split = True, frac = [0.7, 0.1, 0.2], seed = 'benchmark', order = 'ascending')
graph['dgl_graph']
###Output
The dataset label consists of affinity scores. Binarization using threshold 0.35 is conducted to construct the positive edges in the network. Adjust the threshold by to_graph(threshold = X)
Using backend: pytorch
###Markdown
In addition to predicting GDA, there is also research on target fishing using drug-target interaction datasets.

Activity

After we find the target, we want to screen a large set of compounds to identify the ones that have high binding affinity or activity against the disease target. The binding affinity is generated via high-throughput screening. There are huge amounts of wet lab data available out there for various disease targets. Instead of including all of them, TDC aims to include assays for diseases of current interest. For example, we include a SARS-CoV-2 in vitro dataset from Touret et al.:
###Code
from tdc.single_pred import HTS
data = HTS(name = 'SARSCoV2_Vitro_Touret')
data.get_data().head(2)
###Output
Downloading...
100%|██████████| 101k/101k [00:00<00:00, 626kiB/s]
Loading...
Done!
###Markdown
For HTS, we hope this to be a community-driven effort, where domain experts can point out diseases of interest and the corresponding assay data, and we can then quickly add them to TDC. This would make TDC reflect the state-of-the-art landscape of disease targets and allow machine learning scientists to build models that aid the development for those diseases. If you have any ideas, please don't hesitate to [contact us](mailto:[email protected]).

While HTS is restricted to one target protein, a drug-target interaction (DTI) dataset combines many assays. One huge advantage is that an ML model learned on an HTS dataset can only make predictions for one protein, whereas an ML model learned on a DTI dataset learns both disease proteins and drug chemicals and thus can generalize to unseen drugs/targets. TDC includes several DTI datasets, including the largest, BindingDB. Note that BindingDB is a collection of many assays. Since different assays use different units, TDC separates them into separate datasets. Specifically, it has four datasets with Kd, IC50, EC50, and Ki as the units. We load Kd here for the sake of a tutorial example (although IC50 has a much larger number of data points, ~1 million):
###Code
from tdc.multi_pred import DTI
data = DTI(name = 'BindingDB_Kd', print_stats = True)
###Output
Downloading...
100%|██████████| 54.4M/54.4M [00:03<00:00, 16.5MiB/s]
Loading...
--- Dataset Statistics ---
10665 unique drugs.
1413 unique targets.
66444 drug-target pairs.
--------------------------
Done!
###Markdown
Another way to find a compound that has affinity to a disease target is through molecule generation. A molecule generation model is roughly defined as a generative model that generates new molecular structures that achieve some desirable properties, such as high binding affinity to a target. There are mainly three paradigms: 1) goal-oriented learning, where the ML model generates new molecules individually that achieve high scores through oracles; 2) distribution learning, which aims to learn the distribution of the training set and generates molecules from this learnt distribution; 3) pair molecule generation, which formulates generation as a translation problem: translate from drug X to drug Y, where X and Y are similar but X has a low score and Y has a high score. The datasets for 1 and 2 can be any compound library. We provide several compound libraries and oracles in TDC. For compound libraries, we have MOSES, ChEMBL and ZINC. For example, to load MOSES:
###Code
from tdc.generation import MolGen
data = MolGen(name = 'MOSES', print_stats = True)
data.get_data().head(2)
###Output
Downloading...
100%|██████████| 75.3M/75.3M [00:04<00:00, 18.1MiB/s]
Loading...
There are 1936962 molecules
Done!
###Markdown
Using the same library, different tasks for goal-oriented and distribution learning are defined by different oracles. For example, for goal-oriented generation, we have an oracle that measures the affinity to target DRD2, another task has an oracle that measures the affinity to target GSK3B, and so on. We use the example of GSK3B here:
###Code
from tdc import Oracle
oracle = Oracle(name = 'GSK3B')
oracle(['CCOC1=CC(=C(C=C1C=CC(=O)O)Br)OCC',
'CC(=O)OC1=CC=CC=C1C(=O)O'])
###Output
Downloading...
100%|██████████| 27.8M/27.8M [00:01<00:00, 16.2MiB/s]
###Markdown
For all the goal-oriented and generation oracles, please check out the [TDC oracle webpage](https://zitniklab.hms.harvard.edu/TDC/functions/oracles/). We also provide three datasets for pair molecule generation: DRD2, QED and LogP. For example, to load the DRD2 dataset, you can type:
###Code
from tdc.generation import PairMolGen
data = PairMolGen(name = 'DRD2')
data.get_data().head(2)
###Output
Downloading...
100%|██████████| 3.14M/3.14M [00:00<00:00, 3.75MiB/s]
Loading...
Done!
###Markdown
The previous datasets assume a one-drug-fits-all-patients paradigm, whereas in reality different patients have different responses to the same drug, especially in the case of oncology, where patient genomics is a deciding factor for a drug's effectiveness. This is also coined precision oncology. In TDC, we include the Genomics of Drug Sensitivity in Cancer (GDSC) dataset, which measures the drug response in various cancer cell lines. In the dataset, we also include the SMILES string for each drug and the gene expression for each cell line. There are two versions of GDSC, where GDSC2 uses improved experimental procedures. To access the data, for example, type:
###Code
from tdc.multi_pred import DrugRes
data = DrugRes(name = 'GDSC2')
data.get_data().head(2)
###Output
Downloading...
100%|██████████| 117M/117M [00:06<00:00, 18.8MiB/s]
Loading...
Done!
###Markdown
Another important trend is drug combinations. Drug combinations can achieve synergistic effects and improve treatment outcomes. In the first version of TDC, we include one drug synergy dataset, OncoPolyPharmacology, which includes experimental results of drug pair combination responses in various cancer cell lines. You can obtain it via:
###Code
from tdc.multi_pred import DrugSyn
data = DrugSyn(name = 'OncoPolyPharmacology')
data.get_data().head(2)
###Output
Downloading...
100%|██████████| 1.62G/1.62G [01:29<00:00, 18.1MiB/s]
Loading...
Done!
###Markdown
Efficacy and Safety

After a compound is found to have high affinity to the disease target, it needs to have numerous drug-likeness properties for it to be delivered safely and efficaciously to the human body. That is, good ADME (Absorption, Distribution, Metabolism, and Excretion) properties. ADME datasets are scattered around the internet; there are several great resources on ADME prediction web services, but there is a limited set of organized data for machine learning scientists to build models upon and improve model performance. In TDC's first release, we collect 21 ADME datasets from various public sources such as eDrug3D, AqSolDB, MoleculeNet, and the supplementary materials of various papers. You can find all the datasets by typing:
###Code
from tdc import utils
utils.retrieve_dataset_names('ADME')
###Output
_____no_output_____
###Markdown
As always, you can load and process the data through TDC data loaders. For example, to load the P-glycoprotein Inhibition dataset, type:
###Code
from tdc.single_pred import ADME
data = ADME(name = 'Pgp_Broccatelli')
data.get_data().head(2)
###Output
Downloading...
100%|██████████| 129k/129k [00:00<00:00, 751kiB/s]
Loading...
Done!
###Markdown
In addition to ADME, the drug has to have low toxicity. We put all such datasets under the task `Tox`, where we collect Tox21, ToxCast, and ClinTox. Tox21 and ToxCast are wet lab results for various toxicity assays, so you can retrieve any of the assay outcomes by specifying the assay name. You can find all the assay names and retrieve the corresponding data via:
###Code
from tdc.utils import retrieve_label_name_list
label_list = retrieve_label_name_list('Tox21')
label_list[:3]
from tdc.single_pred import Tox
data = Tox(name = 'Tox21', label_name = label_list[0])
data.get_data().head(2)
###Output
Downloading...
100%|██████████| 712k/712k [00:00<00:00, 1.75MiB/s]
Loading...
Done!
###Markdown
Similar to using a molecule generation oracle for high binding affinity to a target, we can use generation for property improvement by simply switching the oracle. For example, for a drug to be synthesizable, we can use the Synthetic Accessibility oracle:
###Code
from tdc import Oracle
oracle = Oracle(name = 'SA')
oracle(['CCOC1=CC(=C(C=C1C=CC(=O)O)Br)OCC',
'CC(=O)OC1=CC=CC=C1C(=O)O'])
###Output
Downloading...
100%|██████████| 9.05M/9.05M [00:00<00:00, 10.0MiB/s]
###Markdown
In addition to individual efficacy and safety, drugs can clash with each other and cause adverse effects, i.e. drug-drug interactions (DDIs). This becomes more and more important as more people are taking combinations of drugs for various diseases, and it is impossible to screen all combinations in the wet lab, especially higher-order combinations. In TDC, we include the DrugBank and TWOSIDES datasets for DDI. For DrugBank, instead of the standard binary dataset, we use the full multi-typed DrugBank, where there are more than 80 DDI types:
###Code
from tdc.multi_pred import DDI
data = DDI(name = 'DrugBank')
data.get_data().head(2)
###Output
Downloading...
100%|██████████| 44.4M/44.4M [00:02<00:00, 15.6MiB/s]
Loading...
Done!
###Markdown
You can get what the label represents by typing:
###Code
from tdc.utils import get_label_map
label_map = get_label_map(name = 'DrugBank', task = 'DDI')
print(label_map[1])
print(label_map[2])
print(label_map[3])
###Output
#Drug1 may increase the photosensitizing activities of #Drug2.
#Drug1 may increase the anticholinergic activities of #Drug2.
The bioavailability of #Drug2 can be decreased when combined with #Drug1.
###Markdown
After finding a safe and efficacious compound, a compound lead usually goes to pre-clinical study and then clinical trials. TDC currently does not support any tasks in these stages, but we are actively looking to include them (e.g. one task coming in a few months is clinical trial outcome prediction). **If you have any dataset related to this, please [contact us](mailto:[email protected]).**

Manufacturing

After discovering a potential drug candidate, a big portion of drug development is manufacturing, that is, how to make the drug candidate from basic reactants and catalysts. TDC currently includes four tasks in this stage. The first is reaction prediction, where one wants to predict the reaction outcome given the reactants. TDC parses out the full USPTO dataset and obtains 1,939,253 reactions. You can load the data via:
###Code
from tdc.generation import Reaction
data = Reaction(name = 'USPTO')
data.get_data().head(2)
###Output
Downloading...
100%|██████████| 795M/795M [00:44<00:00, 17.8MiB/s]
Loading...
Done!
###Markdown
In addition to forward synthesis, a realistic scenario is that one has the product and wants to know the reactants that can generate this product. This is also called retrosynthesis. Using the same USPTO dataset above and flipping the input and output, we can get the retrosynthesis dataset. A popular smaller dataset is USPTO-50K, which is widely used in the ML community. USPTO-50K is a subset of USPTO. TDC also includes it:
###Code
from tdc.generation import RetroSyn
data = RetroSyn(name = 'USPTO-50K')
data.get_data().head(2)
###Output
Downloading...
100%|██████████| 5.22M/5.22M [00:00<00:00, 5.57MiB/s]
Loading...
Done!
###Markdown
In addition to reaction prediction, it is also important to predict the reaction conditions. One condition is the catalyst. Given the reactants and products, we want to predict the catalyst type. TDC again mines through the USPTO dataset and obtains 1,257,015 reactions with 888 common catalyst types.
###Code
from tdc.multi_pred import Catalyst
data = Catalyst(name = 'USPTO_Catalyst')
data.get_data().head(2)
###Output
Downloading...
100%|██████████| 565M/565M [00:35<00:00, 16.1MiB/s]
Loading...
Done!
###Markdown
As with the datasets above, we make it machine-learning ready, which means the labels are integer values. You can also see what each label index corresponds to by:
###Code
from tdc.utils import get_label_map
label_map = get_label_map(name = 'USPTO_Catalyst', task = 'Catalyst')
print(label_map[1])
print(label_map[2])
print(label_map[3])
###Output
C1COCC1
C(Cl)Cl
CN(C=O)C
###Markdown
Another important factor in drug manufacturing is yield. TDC includes two Yields datasets. One is mined from USPTO. But recent research from Schwaller et al. argues that USPTO is a bit too noisy, so we also include another dataset used in Schwaller et al., Buchwald-Hartwig. You can obtain it via:
###Code
from tdc.single_pred import Yields
data = Yields(name = 'Buchwald-Hartwig')
data.get_data().head(2)
###Output
Downloading...
100%|██████████| 15.0M/15.0M [00:01<00:00, 11.7MiB/s]
Loading...
Done!
|
Regression/Support Vector Machine/LinearSVR_QuantileTransformer.ipynb | ###Markdown
Linear Support Vector Regressor with QuantileTransformer

This code template is for the regression task using the Linear Support Vector Regressor (LinearSVR), based on the Support Vector Machine algorithm, with Quantile Transformer as the feature transformation technique in a pipeline.

Required Packages
###Code
import warnings
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as se
from sklearn.preprocessing import QuantileTransformer
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVR
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
InitializationFilepath of CSV file
###Code
#filepath
file_path=""
###Output
_____no_output_____
###Markdown
List of features which are required for model training.
###Code
#x_values
features=[]
###Output
_____no_output_____
###Markdown
Target feature for prediction.
###Code
#y_values
target=''
###Output
_____no_output_____
###Markdown
Data fetchingPandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.We will use the pandas library to read the CSV file from its storage path, and the head function to display the initial rows.
###Code
df=pd.read_csv(file_path)
df.head()
###Output
_____no_output_____
###Markdown
Feature SelectionIt is the process of reducing the number of input variables when developing a predictive model, used both to reduce the computational cost of modelling and, in some cases, to improve the performance of the model.We will assign all the required input features to X and the target/outcome to Y.
###Code
X=df[features]
Y=df[target]
###Output
_____no_output_____
###Markdown
Data preprocessingSince the majority of the machine learning models in the sklearn library don't handle string category data or null values, we have to explicitly remove or replace them. The snippet below defines functions that fill null values (numeric columns with the mean, other columns with the mode) and encode string categorical columns as dummy/indicator variables.
###Code
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
###Output
_____no_output_____
###Markdown
Calling preprocessing functions on the feature and target set.
###Code
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=NullClearner(Y)
X.head()
###Output
_____no_output_____
###Markdown
Correlation MapIn order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
###Code
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
###Output
_____no_output_____
###Markdown
Data SplittingThe train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
###Code
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)#performing datasplitting
###Output
_____no_output_____
###Markdown
ModelSupport vector machines (SVMs) are a set of supervised learning methods used for classification, regression and outlier detection.A Support Vector Machine is a discriminative classifier formally defined by a separating hyperplane. In other words, for given labelled data points, the SVM outputs an appropriate hyperplane that classifies new cases. In 2-dimensional space, this hyperplane is a line separating a plane into two segments, with each class or group on either side.LinearSVR is similar to SVR with kernel='linear'. It has more flexibility in the choice of penalties and loss functions and scales better to large numbers of samples. Model Tuning Parameters 1. epsilon : float, default=0.0> Epsilon parameter in the epsilon-insensitive loss function. 2. loss : {'epsilon_insensitive', 'squared_epsilon_insensitive'}, default='epsilon_insensitive' > Specifies the loss function: 'epsilon_insensitive' is the standard SVR (L1) loss, while 'squared_epsilon_insensitive' is its square (L2 loss). 3. C : float, default=1.0> Regularization parameter. The strength of the regularization is inversely proportional to C. Must be strictly positive. 4. tol : float, default=1e-4> Tolerance for stopping criteria. 5. dual : bool, default=True> Select the algorithm to either solve the dual or primal optimization problem. Prefer dual=False when n_samples > n_features.Feature TransformationQuantileTransformer transforms features using quantile information.This method transforms the features to follow a uniform or a normal distribution. Therefore, for a given feature, this transformation tends to spread out the most frequent values. It also reduces the impact of (marginal) outliers; it is therefore a robust preprocessing scheme.The transformation is applied to each feature independently.For more information... [click here](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.QuantileTransformer.html)
###Code
model=make_pipeline(QuantileTransformer(),LinearSVR())
model.fit(x_train, y_train)
###Output
_____no_output_____
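###Markdown
For illustration, here is a variant of the same pipeline with the tuning parameters above set explicitly (the values are arbitrary examples, not tuned for this data):
###Code
# Same pipeline, but with explicit (example) hyperparameters.
model_tuned = make_pipeline(
    QuantileTransformer(output_distribution='normal'),
    LinearSVR(epsilon=0.1, C=0.5, loss='squared_epsilon_insensitive',
              tol=1e-4, dual=True, random_state=123))
model_tuned.fit(x_train, y_train)
###Output
_____no_output_____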
###Markdown
Model AccuracyWe will use the trained model to make a prediction on the test set.Then use the predicted value for measuring the accuracy of our model.> **score**: The **score** function returns the coefficient of determination R2 of the prediction.
###Code
print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100))
###Output
Accuracy score 43.30 %
###Markdown
> **r2_score**: The **r2_score** function computes the percentage of variability in the target explained by our model. > **mae**: The **mean absolute error** function calculates the total error as the average absolute distance between the real data and the predicted data. > **mse**: The **mean squared error** function averages the squared errors, penalizing the model more for large errors.
###Code
y_pred=model.predict(x_test)
print("R2 Score: {:.2f} %".format(r2_score(y_test,y_pred)*100))
print("Mean Absolute Error {:.2f}".format(mean_absolute_error(y_test,y_pred)))
print("Mean Squared Error {:.2f}".format(mean_squared_error(y_test,y_pred)))
###Output
R2 Score: 43.30 %
Mean Absolute Error 28.41
Mean Squared Error 1198.91
###Markdown
Prediction PlotHere we plot the first 20 actual test observations (green) against the model's predictions for the same records (red), with the record number on the x-axis and the target on the y-axis.
###Code
plt.figure(figsize=(14,10))
plt.plot(range(20),y_test[0:20], color = "green")
plt.plot(range(20),model.predict(x_test[0:20]), color = "red")
plt.legend(["Actual","prediction"])
plt.title("Predicted vs True Value")
plt.xlabel("Record number")
plt.ylabel(target)
plt.show()
###Output
_____no_output_____ |
examples/products/bonds/BOND_MarketConventions.ipynb | ###Markdown
BOND MARKET CONVENTIONS You can get government bond market conventions from FinancePy. They are not guaranteed to be accurate, as they can change, so please treat them as advisory.Contact me if you see any errors.
###Code
import sys
sys.path.append("..")
sys.path.append("..\\..")
from financepy.finutils.FinDayCount import FinDayCountTypes
from financepy.finutils.FinDate import FinDate
from financepy.products.bonds.FinBondMarket import getTreasuryBondMarketConventions, FinBondMarkets
###Output
_____no_output_____
###Markdown
First we get a list of the available markets. The order is EU first, then non-EU.
###Code
for country in FinBondMarkets:
print(country.name)
###Output
AUSTRIA
BELGIUM
CYPRUS
ESTONIA
FINLAND
FRANCE
GERMANY
GREECE
IRELAND
ITALY
LATVIA
LITHUANIA
LUXEMBOURG
MALTA
NETHERLANDS
PORTUGAL
SLOVAKIA
SLOVENIA
SPAIN
ESM
EFSF
BULGARIA
CROATIA
CZECH_REPUBLIC
DENMARK
HUNGARY
POLAND
ROMANIA
SWEDEN
JAPAN
SWITZERLAND
UNITED_KINGDOM
UNITED_STATES
###Markdown
There is a function to get the accrual convention, the frequency and the settlement period for each market:
###Code
print("%20s %17s %15s %11s" % ("COUNTRY","ACCRUED","FREQUENCY","SETTLE DAYS"))
for country in FinBondMarkets:
accrualType, frequencyType, settlementDays = getTreasuryBondMarketConventions(country)
print("%20s %17s %15s %11d" %(country.name, accrualType.name, frequencyType.name, settlementDays))
###Output
COUNTRY ACCRUED FREQUENCY SETTLE DAYS
AUSTRIA ACT_ACT_ICMA ANNUAL 2
BELGIUM ACT_ACT_ICMA ANNUAL 2
CYPRUS ACT_ACT_ICMA SEMI_ANNUAL 2
ESTONIA ACT_ACT_ICMA ANNUAL 2
FINLAND ACT_ACT_ICMA ANNUAL 2
FRANCE ACT_ACT_ICMA ANNUAL 2
GERMANY ACT_ACT_ICMA ANNUAL 2
GREECE ACT_ACT_ICMA ANNUAL 3
IRELAND ACT_ACT_ICMA ANNUAL 2
ITALY ACT_ACT_ICMA SEMI_ANNUAL 2
LATVIA ACT_ACT_ICMA ANNUAL 2
LITHUANIA ACT_ACT_ICMA ANNUAL 1
LUXEMBOURG ACT_ACT_ICMA ANNUAL 2
MALTA ACT_ACT_ICMA SEMI_ANNUAL 2
NETHERLANDS ACT_ACT_ICMA ANNUAL 2
PORTUGAL ACT_ACT_ICMA ANNUAL 2
SLOVAKIA ACT_ACT_ICMA ANNUAL 2
SLOVENIA ACT_ACT_ICMA ANNUAL 2
SPAIN ACT_ACT_ICMA ANNUAL 2
ESM ACT_ACT_ICMA ANNUAL 2
EFSF ACT_ACT_ICMA ANNUAL 2
BULGARIA ACT_ACT_ICMA SEMI_ANNUAL 0
CROATIA ACT_ACT_ICMA SEMI_ANNUAL 3
CZECH_REPUBLIC ACT_ACT_ICMA SEMI_ANNUAL 2
DENMARK ACT_ACT_ICMA ANNUAL 2
HUNGARY ACT_ACT_ICMA ANNUAL 2
POLAND ACT_ACT_ICMA SEMI_ANNUAL 2
ROMANIA ACT_ACT_ICMA SEMI_ANNUAL 2
SWEDEN THIRTY_E_360 ANNUAL 2
JAPAN ACT_ACT_ICMA ANNUAL 2
SWITZERLAND ACT_ACT_ICMA ANNUAL 2
UNITED_KINGDOM ACT_ACT_ICMA SEMI_ANNUAL 1
UNITED_STATES ACT_ACT_ICMA SEMI_ANNUAL 2
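###Markdown
The same helper can also be called for a single market directly; for example (using the enum members listed above):
###Code
accrualType, frequencyType, settlementDays = getTreasuryBondMarketConventions(FinBondMarkets.UNITED_STATES)
print(accrualType.name, frequencyType.name, settlementDays)
###Output
_____no_output_____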
|
ProyectoFinal/.ipynb_checkpoints/Parte1-checkpoint.ipynb | ###Markdown
Feed-forward neural network / MLP**Objective:** predict the type of mobility (walk, train, taxi, etc.) used by users on their trips, based on the collected GPS information.The original data are in PLT format:* Line 1…6 are useless in this dataset, and can be ignored. Points are described in following lines, one for each line.* Field 1: Latitude in decimal degrees.* Field 2: Longitude in decimal degrees.* Field 3: All set to 0 for this dataset.* Field 4: Altitude in feet (-777 if not valid).* Field 5: Date - number of days (with fractional part) that have passed since 12/30/1899.* Field 6: Date as a string.* Field 7: Time as a string.* Number of users: 182* Number of trajectories: 18,670* Number of points: 24,876,978* Total distance: 1,292,951km* Total duration: 50,176 hoursOnly the records labeled with their transportation mode were taken from this data. Additionally, feature engineering was performed to process the data and shape the input for the MLP model. The initial structure of the model is:* StartDate* EndDate* Type* StarDifAprox* EndDifAprox* xStart* yStart* xEnd* yEndDataset referencehttps://www.microsoft.com/en-us/download/details.aspx?id=52367&from=http%3A%2F%2Fresearch.microsoft.com%2Fen-us%2Fdownloads%2Fb16d359d-d164-469e-9fd4-daa38f2b2e13%2F
###Code
import os
import numpy as np
import pandas as pd
from datetime import datetime
#from sklearn.model_selection import train_test_split
#from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import StandardScaler
df = pd.read_csv('../data2x.csv')
df.head(5)
df.describe()
###Output
_____no_output_____
###Markdown
Cleaning invalid records
###Code
valid = df[(df['StarDifAprox']!=500) | (df['EndDifAprox']!=500)]
valid.head(10)
###Output
_____no_output_____
###Markdown
Dataset categories
###Code
valid.Type.unique()
def tiempoMin(fec1,fec2):
    # Trip duration in minutes between two 'm/d/Y H:M' timestamps.
    # `datetime` here is the class imported via `from datetime import datetime`.
    val = datetime.strptime(fec2, '%m/%d/%Y %H:%M') - datetime.strptime(fec1, '%m/%d/%Y %H:%M')
    val = val.days * 24 * 60 + val.seconds / 60
    return val
valid['Tiempo'] = valid.apply(lambda row:
tiempoMin(row.StartDate, row.EndDate),
axis = 1)
valid
# remove unnecessary columns (explained in the feature engineering notes)
del valid["StarDifAprox"] # used during preprocessing and no longer needed
del valid["EndDifAprox"] # used during preprocessing and no longer needed
del valid["StartDate"] # the start time is combined with the end time to form the total trip time
del valid["EndDate"]
valid.head(10)
#valid['Type'] = pd.factorize(valid['Type'])[0]
Y = valid[['Type']]
X = valid[['xStart','yStart','xEnd','yEnd','Tiempo']]
# one-hot encoding
labels_hot = pd.get_dummies(Y, prefix='tipo_')
labels_hot
labels_hot.describe()
# normalizacion = MinMaxScaler(feature_range=(0, 1))
normalizacion = StandardScaler()
xin = normalizacion.fit_transform(X)
xin
xin.shape, labels_hot.shape
###Output
_____no_output_____
###Markdown
MLP
###Code
import tensorflow as tf
import keras
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras import optimizers
from tensorflow.python.keras.optimizers import TFOptimizer
from keras.callbacks import ModelCheckpoint
#from keras.optimizers import TFOptimizer
from keras.layers import Dense,BatchNormalization
from keras import initializers
from keras import backend as K
print(tf.__version__)
print(keras.__version__)
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
def recall_m(y_true, y_pred):
    # recall = TP / (TP + FN), computed batch-wise from rounded predictions
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
    recall = true_positives / (possible_positives + K.epsilon())
    return recall
def precision_m(y_true, y_pred):
    # precision = TP / (TP + FP), computed batch-wise from rounded predictions
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
    precision = true_positives / (predicted_positives + K.epsilon())
    return precision
def f1_m(y_true, y_pred):
    # F1 is the harmonic mean of precision and recall
    precision = precision_m(y_true, y_pred)
    recall = recall_m(y_true, y_pred)
    return 2*((precision*recall)/(precision+recall+K.epsilon()))
cols_input = xin.shape[1]  # number of input features (the original `xx1` was undefined)
###Output
_____no_output_____
###Markdown
Several activation functions were tried; because the input contains negative values, the **tanh** function produced the best results.Xavier-style initialization is used via **RandomNormal**.The **BatchNormalization** parameters were adjusted to try to improve the metrics.
###Code
# Architecture definition
mlp1 = Sequential()
mlp1.add(Dense(64,
input_dim=cols_input,
activation='tanh',
kernel_initializer=initializers.RandomNormal(stddev=0.3),
#kernel_initializer=initializers.RandomNormal(),
bias_initializer=initializers.Zeros(),
name="layer1"))
mlp1.add(BatchNormalization(momentum=0.09, epsilon=0.001))
mlp1.add(Dense(64, activation='tanh',kernel_initializer=initializers.RandomNormal(stddev=0.3),bias_initializer=initializers.Zeros()))
mlp1.add(BatchNormalization(momentum=0.05, epsilon=0.001))
mlp1.add(Dense(64, activation='tanh',kernel_initializer=initializers.RandomNormal(stddev=0.3),bias_initializer=initializers.Zeros()))
mlp1.add(BatchNormalization(momentum=0.05, epsilon=0.001))
mlp1.add(Dense(32, activation='tanh',kernel_initializer=initializers.RandomNormal(stddev=0.1),bias_initializer=initializers.Zeros()))
mlp1.add(BatchNormalization(momentum=0.02, epsilon=0.001))
mlp1.add(Dense(16, activation='tanh',kernel_initializer=initializers.RandomNormal(stddev=0.1),bias_initializer=initializers.Zeros()))
mlp1.add(BatchNormalization(momentum=0.01, epsilon=0.001))
mlp1.add(Dense(11, activation='softmax'))
###Output
_____no_output_____
###Markdown
For the optimization, **Adam** was selected with a configuration that was found to respond best to the input data. The loss is **Categorical Cross Entropy** because this is a multiclass classification problem and the *Y* data are one-hot encoded. The **F1**, **precision** and **recall** metrics are reported.
###Code
#RMSprop
#adam
#adagrad
#opt = tf.compat.v1.train.AdamOptimizer(learning_rate=0.0005)
#opt = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.09)
#loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
opt = optimizers.Adam(learning_rate=0.0005,amsgrad=True)
#opt = optimizers.SGD(learning_rate=0.0005, nesterov=True)
#opt = TFOptimizer(opt)
mlp1.compile(optimizer=opt,
#loss = 'categorical_crossentropy',
loss=tf.keras.losses.CategoricalCrossentropy(
from_logits=True,
label_smoothing=0,
reduction="auto"),
metrics=['accuracy',f1_m,precision_m, recall_m])
#metrics=['accuracy'])
mlp1.summary()
###Output
Model: "sequential_94"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
layer1 (Dense) (None, 64) 384
_________________________________________________________________
batch_normalization_46 (Batc (None, 64) 256
_________________________________________________________________
dense_343 (Dense) (None, 64) 4160
_________________________________________________________________
batch_normalization_47 (Batc (None, 64) 256
_________________________________________________________________
dense_344 (Dense) (None, 64) 4160
_________________________________________________________________
batch_normalization_48 (Batc (None, 64) 256
_________________________________________________________________
dense_345 (Dense) (None, 32) 2080
_________________________________________________________________
batch_normalization_49 (Batc (None, 32) 128
_________________________________________________________________
dense_346 (Dense) (None, 16) 528
_________________________________________________________________
batch_normalization_50 (Batc (None, 16) 64
_________________________________________________________________
dense_347 (Dense) (None, 11) 187
=================================================================
Total params: 12,459
Trainable params: 11,979
Non-trainable params: 480
_________________________________________________________________
###Markdown
Saving checkpoints
###Code
checkpoint_path = 'cp.ckpt'
checkpoint_dir = os.path.dirname(checkpoint_path)
save_prog = ModelCheckpoint(
filepath = checkpoint_path,
verbose = 1,
save_weights_only =True,
period=50
)
###Output
_____no_output_____
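###Markdown
To resume from the saved weights later, they can be loaded back into the model (a sketch; it assumes the same architecture has already been built as above):
###Code
# Restore the most recently saved checkpoint weights into the model.
mlp1.load_weights(checkpoint_path)
###Output
_____no_output_____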
###Markdown
Training
###Code
#batch_size=20,
#shuffle=True,
resultado = mlp1.fit(xin,labels_hot,validation_split=0.2,epochs=500, verbose=2, callbacks=[save_prog])
fechaHora = datetime.now().strftime("%Y%m%d-%H%M%S")
fechaHora
try:
mlp1.save('mlp_' + fechaHora)
except:
print("Genera una excepcion pero guarda el archivo")
def plot_graphs(history, string):
plt.plot(history.history[string])
plt.plot(history.history['val_'+string])
plt.xlabel("Epochs")
plt.ylabel(string)
plt.legend([string, 'val_'+string])
plt.show()
plot_graphs(resultado, 'accuracy')
plot_graphs(resultado, 'loss')
###Output
_____no_output_____ |
courses/machine_learning/deepdive2/building_production_ml_systems/labs/4a_streaming_data_training.ipynb | ###Markdown
Training a model with `traffic_last_5min` feature IntroductionIn this notebook, we'll train a taxifare prediction model but this time with an additional feature of `traffic_last_5min`.
###Code
import datetime
import os
import shutil
import pandas as pd
import tensorflow as tf
from matplotlib import pyplot as plt
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, DenseFeatures
from tensorflow.keras.callbacks import TensorBoard
print(tf.__version__)
%matplotlib inline
PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID
BUCKET = 'cloud-training-demos' # REPLACE WITH YOUR BUCKET NAME
REGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# For Bash Code
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
###Output
_____no_output_____
###Markdown
Load raw data
###Code
!ls -l ../data/taxi-traffic*
!head ../data/taxi-traffic*
###Output
_____no_output_____
###Markdown
Use tf.data to read the CSV filesThese functions for reading data from the csv files are similar to what we used in the Introduction to Tensorflow module. Note that here we have an additional feature `traffic_last_5min`.
###Code
CSV_COLUMNS = [
'fare_amount',
'dayofweek',
'hourofday',
'pickup_longitude',
'pickup_latitude',
'dropoff_longitude',
'dropoff_latitude',
'traffic_last_5min'
]
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0], [0.0], [0.0], [0.0], [0.0], [0.0], [0.0], [0.0]]
def features_and_labels(row_data):
label = row_data.pop(LABEL_COLUMN)
features = row_data
return features, label
def create_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
dataset = tf.data.experimental.make_csv_dataset(
pattern, batch_size, CSV_COLUMNS, DEFAULTS)
dataset = dataset.map(features_and_labels)
if mode == tf.estimator.ModeKeys.TRAIN:
dataset = dataset.shuffle(buffer_size=1000).repeat()
    # prefetch 1 batch to overlap preprocessing and training (tf.data.experimental.AUTOTUNE would tune this automatically)
dataset = dataset.prefetch(1)
return dataset
INPUT_COLS = [
'dayofweek',
'hourofday',
'pickup_longitude',
'pickup_latitude',
'dropoff_longitude',
'dropoff_latitude',
'traffic_last_5min'
]
# Create input layer of feature columns
feature_columns = {
colname: tf.feature_column.numeric_column(colname)
for colname in INPUT_COLS
}
###Output
_____no_output_____
###Markdown
Build a simple keras DNN model
###Code
# Build a keras DNN model using Sequential API
def build_model(dnn_hidden_units):
model = Sequential(DenseFeatures(feature_columns=feature_columns.values()))
for num_nodes in dnn_hidden_units:
model.add(Dense(units=num_nodes, activation="relu"))
model.add(Dense(units=1, activation="linear"))
    # Create a custom evaluation metric
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
# Compile the keras model
model.compile(optimizer="adam", loss="mse", metrics=[rmse, "mse"])
return model
###Output
_____no_output_____
###Markdown
Next, we can call the `build_model` to create the model. Here we'll have two hidden layers before our final output layer. And we'll train with the same parameters we used before.
###Code
HIDDEN_UNITS = [32, 8]
model = build_model(dnn_hidden_units=HIDDEN_UNITS)
BATCH_SIZE = 1000
NUM_TRAIN_EXAMPLES = 10000 * 6 # training dataset will repeat, wrap around
NUM_EVALS = 60 # how many times to evaluate
NUM_EVAL_EXAMPLES = 10000 # enough to get a reasonable sample
trainds = create_dataset(
pattern='../data/taxi-traffic-train*',
batch_size=BATCH_SIZE,
mode=tf.estimator.ModeKeys.TRAIN)
evalds = create_dataset(
pattern='../data/taxi-traffic-valid*',
batch_size=BATCH_SIZE,
mode=tf.estimator.ModeKeys.EVAL).take(NUM_EVAL_EXAMPLES//1000)
%%time
steps_per_epoch = NUM_TRAIN_EXAMPLES // (BATCH_SIZE * NUM_EVALS)
LOGDIR = "./taxi_trained"
history = model.fit(x=trainds,
steps_per_epoch=steps_per_epoch,
epochs=NUM_EVALS,
validation_data=evalds,
callbacks=[TensorBoard(LOGDIR)])
RMSE_COLS = ['rmse', 'val_rmse']
pd.DataFrame(history.history)[RMSE_COLS].plot()
model.predict(x={"dayofweek": tf.convert_to_tensor([6]),
"hourofday": tf.convert_to_tensor([17]),
"pickup_longitude": tf.convert_to_tensor([-73.982683]),
"pickup_latitude": tf.convert_to_tensor([40.742104]),
"dropoff_longitude": tf.convert_to_tensor([-73.983766]),
"dropoff_latitude": tf.convert_to_tensor([40.755174]),
"traffic_last_5min": tf.convert_to_tensor([114])},
steps=1)
###Output
_____no_output_____
###Markdown
Export and deploy model
###Code
OUTPUT_DIR = "./export/savedmodel"
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
EXPORT_PATH = os.path.join(OUTPUT_DIR,
datetime.datetime.now().strftime("%Y%m%d%H%M%S"))
tf.saved_model.save(model, EXPORT_PATH) # with default serving function
os.environ['EXPORT_PATH'] = EXPORT_PATH
%%bash
PROJECT=${PROJECT}
BUCKET=${BUCKET}
REGION=${REGION}
MODEL_NAME=taxifare
VERSION_NAME=traffic
if [[ $(gcloud ai-platform models list --format='value(name)' | grep $MODEL_NAME) ]]; then
echo "$MODEL_NAME already exists"
else
# create model
echo "Creating $MODEL_NAME"
gcloud ai-platform models create --regions=$REGION $MODEL_NAME
fi
if [[ $(gcloud ai-platform versions list --model $MODEL_NAME --format='value(name)' | grep $VERSION_NAME) ]]; then
echo "Deleting already existing $MODEL_NAME:$VERSION_NAME ... "
gcloud ai-platform versions delete --model=$MODEL_NAME $VERSION_NAME
echo "Please run this cell again if you don't see a Creating message ... "
sleep 2
fi
# create model
echo "Creating $MODEL_NAME:$VERSION_NAME"
gcloud ai-platform versions create --model=$MODEL_NAME $VERSION_NAME --async \
--framework=tensorflow --python-version=3.5 --runtime-version=1.14 \
--origin=${EXPORT_PATH} --staging-bucket=gs://$BUCKET
###Output
_____no_output_____
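###Markdown
Once the version finishes deploying, a quick smoke test can be sent from the shell (a sketch; it assumes the deployed serving signature accepts the same feature names used in training):
###Code
%%bash
cat > /tmp/instance.json <<EOF
{"dayofweek": 6, "hourofday": 17, "pickup_longitude": -73.982683, "pickup_latitude": 40.742104, "dropoff_longitude": -73.983766, "dropoff_latitude": 40.755174, "traffic_last_5min": 114}
EOF
gcloud ai-platform predict --model=taxifare --version=traffic --json-instances=/tmp/instance.json
###Output
_____no_output_____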
###Markdown
Training a model with `traffic_last_5min` feature IntroductionIn this notebook, we'll train a taxifare prediction model but this time with an additional feature of `traffic_last_5min`.
###Code
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
import datetime
import os
import shutil
import pandas as pd
import tensorflow as tf
from matplotlib import pyplot as plt
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, DenseFeatures
from tensorflow.keras.callbacks import TensorBoard
print(tf.__version__)
%matplotlib inline
PROJECT = 'qwiklabs-gcp-02-15ad15b6da61' # REPLACE WITH YOUR PROJECT ID
BUCKET = 'qwiklabs-gcp-02-15ad15b6da61' # REPLACE WITH YOUR BUCKET NAME
REGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# For Bash Code
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
###Output
Updated property [core/project].
Updated property [compute/region].
###Markdown
Load raw data
###Code
!ls -l ../data/taxi-traffic*
!head ../data/taxi-traffic*
###Output
==> ../data/taxi-traffic-test.csv <==
15.7,6,12,-73.990072,40.758199,-73.974686,40.742004,2089
6.1,7,2,-73.95647,40.771226,-73.971845,40.750089,1738
4.1,6,18,-73.987871,40.759855,-73.996375,40.763728,2971
5.7,2,18,-73.974177,40.761154,-73.980953,40.769357,2320
7.4,4,23,-73.924908,40.741879,-73.897524,40.747867,1491
20.5,1,15,-73.957528,40.766847,-73.870813,40.774044,1794
6.5,6,9,-73.996553,40.725558,-73.992503,40.737248,2341
4.1,4,11,-73.98353,40.746821000000004,-73.976831,40.751082000000004,2329
10.5,3,18,-73.863998,40.770439,-73.91671099999999,40.773011,2318
10.1,6,1,-73.979685,40.727247999999996,-73.952508,40.772492,1455
==> ../data/taxi-traffic-train.csv <==
6.1,2,0,-73.98689499999999,40.729723,-74.00631,40.739407,1129
9.7,7,0,-73.94578299999999,40.777807,-73.97539,40.757712,2876
5.3,6,0,-74.00644,40.739349,-73.999379,40.731804,3950
7.3,5,0,-73.96611800000001,40.753983000000005,-73.945605,40.782802000000004,1334
6.5,7,0,-73.974153,40.762767,-73.989152,40.742727,2623
22.9,1,0,-73.977188,40.774063,-73.962647,40.654768,2833
22.9,2,0,-74.00188,40.745946999999994,-73.968497,40.639375,2002
6.1,3,0,-73.994051,40.751077,-73.977333,40.778875,661
5.3,5,0,-73.980898,40.744515,-73.973383,40.753496999999996,1938
6.5,7,0,-74.00540600000001,40.708533,-74.005498,40.725617,2781
==> ../data/taxi-traffic-valid.csv <==
7.7,2,11,-73.97463,40.742118,-73.98544,40.760585999999996,1059
30.1,7,1,-73.956921,40.777588,-73.965109,40.673271,2225
7.7,6,13,-73.98073199999999,40.742109,-73.96415400000001,40.764891999999996,1994
24.67,4,4,-73.953387,40.822733,-73.878697,40.755373,321
7.7,2,1,-73.982304,40.723572,-73.972778,40.74928,1115
8.1,5,18,-73.98474300000001,40.749171999999994,-74.00232,40.72825,2697
6.1,4,1,-73.983588,40.72224,-73.997302,40.720786,868
19.07,3,1,-73.94446500000001,40.807284,-73.876339,40.763073999999996,711
12.5,4,10,-73.98696899999999,40.722343,-74.01621,40.715067,1990
5.7,7,18,-74.007972,40.738759,-73.991973,40.73704,2048
###Markdown
Use tf.data to read the CSV filesThese functions for reading data from the csv files are similar to what we used in the Introduction to Tensorflow module. Note that here we have an additional feature `traffic_last_5min`.
###Code
CSV_COLUMNS = [
'fare_amount',
'dayofweek',
'hourofday',
'pickup_longitude',
'pickup_latitude',
'dropoff_longitude',
'dropoff_latitude',
'traffic_last_5min'
]
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0], [0.0], [0.0], [0.0], [0.0], [0.0], [0.0], [0.0]]
def features_and_labels(row_data):
label = row_data.pop(LABEL_COLUMN)
features = row_data
return features, label
def create_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
dataset = tf.data.experimental.make_csv_dataset(
pattern, batch_size, CSV_COLUMNS, DEFAULTS)
dataset = dataset.map(features_and_labels)
if mode == tf.estimator.ModeKeys.TRAIN:
dataset = dataset.shuffle(buffer_size=1000).repeat()
    # prefetch 1 batch to overlap preprocessing and training (tf.data.experimental.AUTOTUNE would tune this automatically)
dataset = dataset.prefetch(1)
return dataset
INPUT_COLS = [
'dayofweek',
'hourofday',
'pickup_longitude',
'pickup_latitude',
'dropoff_longitude',
'dropoff_latitude',
'traffic_last_5min'
]
# Create input layer of feature columns
feature_columns = {
colname: tf.feature_column.numeric_column(colname)
for colname in INPUT_COLS
}
###Output
_____no_output_____
###Markdown
Build a simple keras DNN model
###Code
# Build a keras DNN model using Sequential API
def build_model(dnn_hidden_units):
model = Sequential(DenseFeatures(feature_columns=feature_columns.values()))
for num_nodes in dnn_hidden_units:
model.add(Dense(units=num_nodes, activation="relu"))
model.add(Dense(units=1, activation="linear"))
    # Create a custom evaluation metric
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
# Compile the keras model
model.compile(optimizer="adam", loss="mse", metrics=[rmse, "mse"])
return model
###Output
_____no_output_____
###Markdown
Next, we can call the `build_model` to create the model. Here we'll have three hidden layers before our final output layer. And we'll train with the same parameters we used before.
###Code
HIDDEN_UNITS = [64, 32, 16]
model = build_model(dnn_hidden_units=HIDDEN_UNITS)
BATCH_SIZE = 1000
NUM_TRAIN_EXAMPLES = 10000 * 6 # training dataset will repeat, wrap around
NUM_EVALS = 60 # how many times to evaluate
NUM_EVAL_EXAMPLES = 10000 # enough to get a reasonable sample
trainds = create_dataset(
pattern='../data/taxi-traffic-train*',
batch_size=BATCH_SIZE,
mode=tf.estimator.ModeKeys.TRAIN)
evalds = create_dataset(
pattern='../data/taxi-traffic-valid*',
batch_size=BATCH_SIZE,
mode=tf.estimator.ModeKeys.EVAL).take(NUM_EVAL_EXAMPLES//1000)
%%time
steps_per_epoch = NUM_TRAIN_EXAMPLES // (BATCH_SIZE * NUM_EVALS)
LOGDIR = "./taxi_trained"
history = model.fit(x=trainds,
steps_per_epoch=steps_per_epoch,
epochs=NUM_EVALS,
validation_data=evalds,
callbacks=[TensorBoard(LOGDIR)])
RMSE_COLS = ['rmse', 'val_rmse']
pd.DataFrame(history.history)[RMSE_COLS].plot()
model.predict(x={"dayofweek": tf.convert_to_tensor([6]),
"hourofday": tf.convert_to_tensor([17]),
"pickup_longitude": tf.convert_to_tensor([-73.982683]),
"pickup_latitude": tf.convert_to_tensor([40.742104]),
"dropoff_longitude": tf.convert_to_tensor([-73.983766]),
"dropoff_latitude": tf.convert_to_tensor([40.755174]),
"traffic_last_5min": tf.convert_to_tensor([114])},
steps=1)
###Output
WARNING:tensorflow:Layers in a Sequential model should only have a single input tensor, but we receive a <class 'dict'> input: {'dayofweek': <tf.Tensor 'ExpandDims:0' shape=(1, 1) dtype=int32>, 'hourofday': <tf.Tensor 'ExpandDims_3:0' shape=(1, 1) dtype=int32>, 'pickup_longitude': <tf.Tensor 'ExpandDims_5:0' shape=(1, 1) dtype=float32>, 'pickup_latitude': <tf.Tensor 'ExpandDims_4:0' shape=(1, 1) dtype=float32>, 'dropoff_longitude': <tf.Tensor 'ExpandDims_2:0' shape=(1, 1) dtype=float32>, 'dropoff_latitude': <tf.Tensor 'ExpandDims_1:0' shape=(1, 1) dtype=float32>, 'traffic_last_5min': <tf.Tensor 'ExpandDims_6:0' shape=(1, 1) dtype=int32>}
Consider rewriting this model with the Functional API.
###Markdown
Export and deploy model
###Code
OUTPUT_DIR = "./export/savedmodel"
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
EXPORT_PATH = os.path.join(OUTPUT_DIR,
datetime.datetime.now().strftime("%Y%m%d%H%M%S"))
tf.saved_model.save(model, EXPORT_PATH) # with default serving function
os.environ['EXPORT_PATH'] = EXPORT_PATH
%%bash
PROJECT=${PROJECT}
BUCKET=${BUCKET}
REGION=${REGION}
MODEL_NAME=taxifare
VERSION_NAME=traffic
if [[ $(gcloud ai-platform models list --format='value(name)' --region=$REGION | grep "^$MODEL_NAME$") ]]; then
echo "$MODEL_NAME already exists"
else
# create model
echo "Creating $MODEL_NAME"
gcloud ai-platform models create --region=$REGION $MODEL_NAME
fi
if [[ $(gcloud ai-platform versions list --model $MODEL_NAME --format='value(name)' --region=$REGION | grep "^$VERSION_NAME$") ]]; then
echo "Deleting already existing $MODEL_NAME:$VERSION_NAME ... "
gcloud ai-platform versions delete --model=$MODEL_NAME $VERSION_NAME --region=$REGION
echo "Please run this cell again if you don't see a Creating message ... "
sleep 2
fi
# create model
echo "Creating $MODEL_NAME:$VERSION_NAME"
gcloud ai-platform versions create --model=$MODEL_NAME $VERSION_NAME --async \
--framework=tensorflow --python-version=3.5 --runtime-version=1.14 \
--origin=${EXPORT_PATH} --staging-bucket=gs://$BUCKET --region=$REGION
###Output
Creating taxifare
Creating taxifare:traffic
###Markdown
Training a model with `traffic_last_5min` feature IntroductionIn this notebook, we'll train a taxifare prediction model but this time with an additional feature of `traffic_last_5min`.
###Code
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
!pip install --user google-cloud-bigquery==1.25.0
###Output
Collecting google-cloud-bigquery==1.25.0
Downloading https://files.pythonhosted.org/packages/48/6d/e8f5e5cd05ee968682d389cec3fdbccb920f1f8302464a46ef87b7b8fdad/google_cloud_bigquery-1.25.0-py2.py3-none-any.whl (169kB)
|████████████████████████████████| 174kB 3.2MB/s eta 0:00:01
Requirement already satisfied: google-cloud-core<2.0dev,>=1.1.0 in /usr/local/lib/python3.5/dist-packages (from google-cloud-bigquery==1.25.0) (1.2.0)
Requirement already satisfied: google-resumable-media<0.6dev,>=0.5.0 in /usr/local/lib/python3.5/dist-packages (from google-cloud-bigquery==1.25.0) (0.5.0)
Requirement already satisfied: google-auth<2.0dev,>=1.9.0 in /usr/local/lib/python3.5/dist-packages (from google-cloud-bigquery==1.25.0) (1.10.1)
Requirement already satisfied: protobuf>=3.6.0 in /usr/local/lib/python3.5/dist-packages (from google-cloud-bigquery==1.25.0) (3.11.2)
Requirement already satisfied: google-api-core<2.0dev,>=1.15.0 in /usr/local/lib/python3.5/dist-packages (from google-cloud-bigquery==1.25.0) (1.16.0)
Requirement already satisfied: six<2.0.0dev,>=1.13.0 in /usr/local/lib/python3.5/dist-packages (from google-cloud-bigquery==1.25.0) (1.14.0)
Requirement already satisfied: setuptools>=40.3.0 in /usr/local/lib/python3.5/dist-packages (from google-auth<2.0dev,>=1.9.0->google-cloud-bigquery==1.25.0) (45.0.0)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.5/dist-packages (from google-auth<2.0dev,>=1.9.0->google-cloud-bigquery==1.25.0) (4.0.0)
Requirement already satisfied: rsa<4.1,>=3.1.4 in /usr/local/lib/python3.5/dist-packages (from google-auth<2.0dev,>=1.9.0->google-cloud-bigquery==1.25.0) (4.0)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.5/dist-packages (from google-auth<2.0dev,>=1.9.0->google-cloud-bigquery==1.25.0) (0.2.8)
Requirement already satisfied: pytz in /usr/local/lib/python3.5/dist-packages (from google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (2019.3)
Requirement already satisfied: requests<3.0.0dev,>=2.18.0 in /usr/local/lib/python3.5/dist-packages (from google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (2.22.0)
Requirement already satisfied: googleapis-common-protos<2.0dev,>=1.6.0 in /usr/local/lib/python3.5/dist-packages (from google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (1.51.0)
Requirement already satisfied: pyasn1>=0.1.3 in /usr/local/lib/python3.5/dist-packages (from rsa<4.1,>=3.1.4->google-auth<2.0dev,>=1.9.0->google-cloud-bigquery==1.25.0) (0.4.8)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.5/dist-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (1.24.2)
Requirement already satisfied: idna<2.9,>=2.5 in /usr/local/lib/python3.5/dist-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (2.8)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /usr/local/lib/python3.5/dist-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (3.0.4)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.5/dist-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (2019.11.28)
Installing collected packages: google-cloud-bigquery
Successfully installed google-cloud-bigquery-1.25.0 google-resumable-media-0.5.1
WARNING: You are using pip version 19.3.1; however, version 20.2.3 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
###Markdown
**Note**: Restart your kernel to use updated packages. Kindly ignore the deprecation warnings and incompatibility errors related to google-cloud-storage.
###Code
import datetime
import os
import shutil
import pandas as pd
import tensorflow as tf
from matplotlib import pyplot as plt
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, DenseFeatures
from tensorflow.keras.callbacks import TensorBoard
print(tf.__version__)
%matplotlib inline
PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID
BUCKET = 'cloud-training-demos' # REPLACE WITH YOUR BUCKET NAME
REGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# For Bash Code
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
###Output
_____no_output_____
###Markdown
Load raw data
###Code
!ls -l ../data/taxi-traffic*
!head ../data/taxi-traffic*
###Output
_____no_output_____
###Markdown
Use tf.data to read the CSV filesThese functions for reading data from the csv files are similar to what we used in the Introduction to Tensorflow module. Note that here we have an additional feature `traffic_last_5min`.
###Code
CSV_COLUMNS = [
'fare_amount',
'dayofweek',
'hourofday',
'pickup_longitude',
'pickup_latitude',
'dropoff_longitude',
'dropoff_latitude',
'traffic_last_5min'
]
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0], [0.0], [0.0], [0.0], [0.0], [0.0], [0.0], [0.0]]
def features_and_labels(row_data):
label = row_data.pop(LABEL_COLUMN)
features = row_data
return features, label
def create_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
dataset = tf.data.experimental.make_csv_dataset(
pattern, batch_size, CSV_COLUMNS, DEFAULTS)
dataset = dataset.map(features_and_labels)
if mode == tf.estimator.ModeKeys.TRAIN:
dataset = dataset.shuffle(buffer_size=1000).repeat()
    # prefetch 1 batch to overlap preprocessing and training (tf.data.experimental.AUTOTUNE would tune this automatically)
dataset = dataset.prefetch(1)
return dataset
INPUT_COLS = [
'dayofweek',
'hourofday',
'pickup_longitude',
'pickup_latitude',
'dropoff_longitude',
'dropoff_latitude',
'traffic_last_5min'
]
# Create input layer of feature columns
feature_columns = {
colname: tf.feature_column.numeric_column(colname)
for colname in INPUT_COLS
}
###Output
_____no_output_____
###Markdown
Build a simple keras DNN model
###Code
# Build a keras DNN model using Sequential API
def build_model(dnn_hidden_units):
model = Sequential(DenseFeatures(feature_columns=feature_columns.values()))
for num_nodes in dnn_hidden_units:
model.add(Dense(units=num_nodes, activation="relu"))
model.add(Dense(units=1, activation="linear"))
    # Create a custom evaluation metric
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
# Compile the keras model
model.compile(optimizer="adam", loss="mse", metrics=[rmse, "mse"])
return model
###Output
_____no_output_____
###Markdown
Next, we can call the `build_model` to create the model. Here we'll have two hidden layers before our final output layer. And we'll train with the same parameters we used before.
###Code
HIDDEN_UNITS = [32, 8]
model = build_model(dnn_hidden_units=HIDDEN_UNITS)
BATCH_SIZE = 1000
NUM_TRAIN_EXAMPLES = 10000 * 6 # training dataset will repeat, wrap around
NUM_EVALS = 60 # how many times to evaluate
NUM_EVAL_EXAMPLES = 10000 # enough to get a reasonable sample
trainds = create_dataset(
pattern='../data/taxi-traffic-train*',
batch_size=BATCH_SIZE,
mode=tf.estimator.ModeKeys.TRAIN)
evalds = create_dataset(
pattern='../data/taxi-traffic-valid*',
batch_size=BATCH_SIZE,
mode=tf.estimator.ModeKeys.EVAL).take(NUM_EVAL_EXAMPLES//1000)
%%time
steps_per_epoch = NUM_TRAIN_EXAMPLES // (BATCH_SIZE * NUM_EVALS)
LOGDIR = "./taxi_trained"
history = model.fit(x=trainds,
steps_per_epoch=steps_per_epoch,
epochs=NUM_EVALS,
validation_data=evalds,
callbacks=[TensorBoard(LOGDIR)])
RMSE_COLS = ['rmse', 'val_rmse']
pd.DataFrame(history.history)[RMSE_COLS].plot()
model.predict(x={"dayofweek": tf.convert_to_tensor([6]),
"hourofday": tf.convert_to_tensor([17]),
"pickup_longitude": tf.convert_to_tensor([-73.982683]),
"pickup_latitude": tf.convert_to_tensor([40.742104]),
"dropoff_longitude": tf.convert_to_tensor([-73.983766]),
"dropoff_latitude": tf.convert_to_tensor([40.755174]),
"traffic_last_5min": tf.convert_to_tensor([114])},
steps=1)
###Output
_____no_output_____
###Markdown
Export and deploy model
###Code
OUTPUT_DIR = "./export/savedmodel"
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
EXPORT_PATH = os.path.join(OUTPUT_DIR,
datetime.datetime.now().strftime("%Y%m%d%H%M%S"))
tf.saved_model.save(model, EXPORT_PATH) # with default serving function
os.environ['EXPORT_PATH'] = EXPORT_PATH
%%bash
PROJECT=${PROJECT}
BUCKET=${BUCKET}
REGION=${REGION}
MODEL_NAME=taxifare
VERSION_NAME=traffic
if [[ $(gcloud ai-platform models list --format='value(name)' | grep $MODEL_NAME) ]]; then
echo "$MODEL_NAME already exists"
else
# create model
echo "Creating $MODEL_NAME"
gcloud ai-platform models create --regions=$REGION $MODEL_NAME
fi
if [[ $(gcloud ai-platform versions list --model $MODEL_NAME --format='value(name)' | grep $VERSION_NAME) ]]; then
echo "Deleting already existing $MODEL_NAME:$VERSION_NAME ... "
gcloud ai-platform versions delete --model=$MODEL_NAME $VERSION_NAME
echo "Please run this cell again if you don't see a Creating message ... "
sleep 2
fi
# create model
echo "Creating $MODEL_NAME:$VERSION_NAME"
gcloud ai-platform versions create --model=$MODEL_NAME $VERSION_NAME --async \
--framework=tensorflow --python-version=3.5 --runtime-version=1.14 \
--origin=${EXPORT_PATH} --staging-bucket=gs://$BUCKET
###Output
_____no_output_____
###Markdown
Training a model with `traffic_last_5min` feature IntroductionIn this notebook, we'll train a taxifare prediction model but this time with an additional feature of `traffic_last_5min`.
###Code
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
import datetime
import os
import shutil
import pandas as pd
import tensorflow as tf
from matplotlib import pyplot as plt
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, DenseFeatures
from tensorflow.keras.callbacks import TensorBoard
print(tf.__version__)
%matplotlib inline
PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID
BUCKET = 'cloud-training-demos' # REPLACE WITH YOUR BUCKET NAME
REGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# For Bash Code
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
###Output
_____no_output_____
###Markdown
Load raw data
###Code
!ls -l ../data/taxi-traffic*
!head ../data/taxi-traffic*
###Output
_____no_output_____
###Markdown
Use tf.data to read the CSV filesThese functions for reading data from the csv files are similar to what we used in the Introduction to Tensorflow module. Note that here we have an additional feature `traffic_last_5min`.
###Code
CSV_COLUMNS = [
'fare_amount',
'dayofweek',
'hourofday',
'pickup_longitude',
'pickup_latitude',
'dropoff_longitude',
'dropoff_latitude',
'traffic_last_5min'
]
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0], [0.0], [0.0], [0.0], [0.0], [0.0], [0.0], [0.0]]
def features_and_labels(row_data):
label = row_data.pop(LABEL_COLUMN)
features = row_data
return features, label
def create_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
dataset = tf.data.experimental.make_csv_dataset(
pattern, batch_size, CSV_COLUMNS, DEFAULTS)
dataset = dataset.map(features_and_labels)
if mode == tf.estimator.ModeKeys.TRAIN:
dataset = dataset.shuffle(buffer_size=1000).repeat()
    # prefetch 1 batch to overlap preprocessing and training (tf.data.experimental.AUTOTUNE would tune this automatically)
dataset = dataset.prefetch(1)
return dataset
INPUT_COLS = [
'dayofweek',
'hourofday',
'pickup_longitude',
'pickup_latitude',
'dropoff_longitude',
'dropoff_latitude',
'traffic_last_5min'
]
# Create input layer of feature columns
feature_columns = {
colname: tf.feature_column.numeric_column(colname)
for colname in INPUT_COLS
}
###Output
_____no_output_____
###Markdown
Build a simple keras DNN model
###Code
# Build a keras DNN model using Sequential API
def build_model(dnn_hidden_units):
model = Sequential(DenseFeatures(feature_columns=feature_columns.values()))
for num_nodes in dnn_hidden_units:
model.add(Dense(units=num_nodes, activation="relu"))
model.add(Dense(units=1, activation="linear"))
    # Create a custom evaluation metric
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
# Compile the keras model
model.compile(optimizer="adam", loss="mse", metrics=[rmse, "mse"])
return model
###Output
_____no_output_____
###Markdown
Next, we can call the `build_model` to create the model. Here we'll have two hidden layers before our final output layer. And we'll train with the same parameters we used before.
###Code
HIDDEN_UNITS = [32, 8]
model = build_model(dnn_hidden_units=HIDDEN_UNITS)
BATCH_SIZE = 1000
NUM_TRAIN_EXAMPLES = 10000 * 6 # training dataset will repeat, wrap around
NUM_EVALS = 60 # how many times to evaluate
NUM_EVAL_EXAMPLES = 10000 # enough to get a reasonable sample
trainds = create_dataset(
pattern='../data/taxi-traffic-train*',
batch_size=BATCH_SIZE,
mode=tf.estimator.ModeKeys.TRAIN)
evalds = create_dataset(
pattern='../data/taxi-traffic-valid*',
batch_size=BATCH_SIZE,
mode=tf.estimator.ModeKeys.EVAL).take(NUM_EVAL_EXAMPLES//1000)
%%time
steps_per_epoch = NUM_TRAIN_EXAMPLES // (BATCH_SIZE * NUM_EVALS)
LOGDIR = "./taxi_trained"
history = model.fit(x=trainds,
steps_per_epoch=steps_per_epoch,
epochs=NUM_EVALS,
validation_data=evalds,
callbacks=[TensorBoard(LOGDIR)])
RMSE_COLS = ['rmse', 'val_rmse']
pd.DataFrame(history.history)[RMSE_COLS].plot()
model.predict(x={"dayofweek": tf.convert_to_tensor([6]),
"hourofday": tf.convert_to_tensor([17]),
"pickup_longitude": tf.convert_to_tensor([-73.982683]),
"pickup_latitude": tf.convert_to_tensor([40.742104]),
"dropoff_longitude": tf.convert_to_tensor([-73.983766]),
"dropoff_latitude": tf.convert_to_tensor([40.755174]),
"traffic_last_5min": tf.convert_to_tensor([114])},
steps=1)
###Output
_____no_output_____
###Markdown
Export and deploy model
###Code
OUTPUT_DIR = "./export/savedmodel"
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
EXPORT_PATH = os.path.join(OUTPUT_DIR,
datetime.datetime.now().strftime("%Y%m%d%H%M%S"))
tf.saved_model.save(model, EXPORT_PATH) # with default serving function
os.environ['EXPORT_PATH'] = EXPORT_PATH
%%bash
PROJECT=${PROJECT}
BUCKET=${BUCKET}
REGION=${REGION}
MODEL_NAME=taxifare
VERSION_NAME=traffic
if [[ $(gcloud ai-platform models list --format='value(name)' --region=$REGION | grep "^$MODEL_NAME$") ]]; then
echo "$MODEL_NAME already exists"
else
# create model
echo "Creating $MODEL_NAME"
gcloud ai-platform models create --region=$REGION $MODEL_NAME
fi
if [[ $(gcloud ai-platform versions list --model $MODEL_NAME --format='value(name)' --region=$REGION | grep "^$VERSION_NAME$") ]]; then
echo "Deleting already existing $MODEL_NAME:$VERSION_NAME ... "
gcloud ai-platform versions delete --model=$MODEL_NAME $VERSION_NAME --region=$REGION
echo "Please run this cell again if you don't see a Creating message ... "
sleep 2
fi
# create model
echo "Creating $MODEL_NAME:$VERSION_NAME"
gcloud ai-platform versions create --model=$MODEL_NAME $VERSION_NAME --async \
--framework=tensorflow --python-version=3.5 --runtime-version=1.14 \
--origin=${EXPORT_PATH} --staging-bucket=gs://$BUCKET --region=$REGION
###Output
_____no_output_____ |
notebooks/mnist_quickshift_node_removal.ipynb | ###Markdown
MNIST Quickshift Node Removal
###Code
import sys, os
sys.path.insert(0, '..')
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
from skimage import color, draw
from skimage.future import graph
def _scale(image, scale=8):
if scale == 1:
return image
else:
image = np.repeat(image, scale, axis=0)
image = np.repeat(image, scale, axis=1)
return image
def draw_rag(image, segmentation):
image = color.gray2rgb(np.reshape(image, (28, 28)))
image = _scale(image)
segmentation = _scale(segmentation)
rag = graph.rag_mean_color(image, segmentation)
graph.draw_rag(segmentation, rag, image) # Calculate the centroids internally.
out_1 = image.copy()
for n1, n2, data in rag.edges_iter(data=True):
r1, c1 = map(int, rag.node[n1]['centroid'])
r2, c2 = map(int, rag.node[n2]['centroid'])
line = draw.line(r1, c1, r2, c2)
out_1[line] = [1, 0, 0]
for n, d in rag.nodes_iter(data=True):
r, c = map(int, rag.node[n]['centroid'])
if r > 1 and c > 1 and r < image.shape[0] - 1 and c < image.shape[1] - 1:
circle = draw.circle(r, c, 2)
out_1[circle] = [0, 1, 0]
out_2 = image.copy()
for n1, n2, data in rag.edges_iter(data=True):
mean_1 = rag.node[n1]['mean color'][0]
mean_2 = rag.node[n2]['mean color'][0]
if mean_1 < 0.01 or mean_2 < 0.01:
continue
r1, c1 = map(int, rag.node[n1]['centroid'])
r2, c2 = map(int, rag.node[n2]['centroid'])
line = draw.line(r1, c1, r2, c2)
out_2[line] = [1, 0, 0]
for n, d in rag.nodes_iter(data=True):
mean = rag.node[n]['mean color'][0]
if mean < 0.01:
continue
r, c = map(int, rag.node[n]['centroid'])
if r > 1 and c > 1 and r < image.shape[0] - 1 and c < image.shape[1] - 1:
circle = draw.circle(r, c, 2)
out_2[circle] = [0, 1, 0]
plt.rcParams['figure.figsize'] = (10, 5)
fig = plt.figure()
fig.add_subplot(121)
plt.xticks([])
plt.yticks([])
plt.imshow(out_1)
fig.add_subplot(122)
plt.xticks([])
plt.yticks([])
plt.imshow(out_2)
plt.show()
###Output
_____no_output_____
###Markdown
Load dataset
###Code
from lib.datasets import MNIST
mnist = MNIST('../data/mnist').test
images, _ = mnist.next_batch(3, shuffle=False)
image_1 = images[0]
image_2 = images[1]
image_3 = images[2]
from lib.segmentation import quickshift
image = images[0]
segmentation = quickshift(image, ratio=1, kernel_size=2, max_dist=2, sigma=0)
draw_rag(image, segmentation)
image = images[1]
segmentation = quickshift(image, ratio=1, kernel_size=2, max_dist=2, sigma=0)
draw_rag(image, segmentation)
image = images[2]
segmentation = quickshift(image, ratio=1, kernel_size=2, max_dist=2, sigma=0)
draw_rag(image, segmentation)
###Output
_____no_output_____
###Markdown
Validate different node removal algorithms**Important:** The graph should still be connected after node removal.
###Code
import scipy.sparse as sp
from lib.graph import filter_adj
from lib.segmentation import segmentation_adjacency, extract_features
def _validate_adj(adj):
adj.data = adj.data.astype(np.float32)
lap = sp.csgraph.laplacian(adj, normed=True)
# Check that lambda_1 > 0, so that adj is connected.
lambdas, _ = np.linalg.eig(lap.toarray())
lambdas = np.sort(lambdas)
lambda_1 = lambdas[1]
lambda_1 = round(lambda_1, 8)
return lambda_1 > 0
def node_removal(images, algorithm):
valids = []
removed = []
for image in images:
segmentation = quickshift(image, ratio=1, kernel_size=2, max_dist=2, sigma=0)
adj, _, _ = segmentation_adjacency(segmentation)
features = extract_features(segmentation, image, [0], scaler=None)
nodes = algorithm(adj, features)
adj_new = filter_adj(adj, nodes)
valid = _validate_adj(adj_new)
removed.append(adj.shape[0] - adj_new.shape[0])
valids.append(valid)
valids = np.array(valids, dtype=np.uint8)
valid = valids.mean()
print('Valid adjacencies: {:.2f}%'.format(valid * 100))
removed = np.array(removed, dtype=np.float32)
removed = removed.mean()
print('Mean nodes removed: {:.2f}'.format(removed))
images, _ = mnist.next_batch(1000, shuffle=False)
###Output
_____no_output_____
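###Markdown
The check in `_validate_adj` relies on a standard spectral fact: a graph is connected if and only if the second-smallest eigenvalue of its Laplacian (the Fiedler value, $\lambda_1$) is positive. A minimal sketch on two toy adjacency matrices (hypothetical graphs, not taken from the dataset):
###Code
import numpy as np
import scipy.sparse as sp

def is_connected(adj):
    # Normalized Laplacian; the graph is connected iff lambda_1 > 0.
    lap = sp.csgraph.laplacian(adj.astype(np.float32), normed=True)
    lambdas = np.sort(np.linalg.eigvalsh(lap.toarray()))
    return round(float(lambdas[1]), 8) > 0

path = sp.csr_matrix(np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]))   # path graph: connected
split = sp.csr_matrix(np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]]))  # third node isolated
print(is_connected(path), is_connected(split))  # True False
###Output
_____no_output_____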
###Markdown
Color threshold
###Code
from lib.graph import gray_color_threshold
def gray_color_threshold_fixed(adj, features):
return gray_color_threshold(adj, features, 0.01)
node_removal(images, gray_color_threshold_fixed)
###Output
Valid adjacencies: 99.90%
Mean nodes removed: 4.96
###Markdown
Degree threshold
###Code
from lib.graph import degree_threshold
def degree_threshold_fixed(adj, features):
return degree_threshold(adj, features, 15)
node_removal(images, degree_threshold_fixed)
###Output
Valid adjacencies: 45.40%
Mean nodes removed: 3.20
###Markdown
Area threshold
###Code
from lib.graph import area_threshold
def area_threshold_fixed(adj, features):
return area_threshold(adj, features, 60)
node_removal(images, area_threshold_fixed)
###Output
Valid adjacencies: 93.40%
Mean nodes removed: 4.22
|
4 Pandas - Groupby function.ipynb | ###Markdown
Groupby The `groupby` method is used to group rows together and apply aggregate functions to each group
###Code
import pandas as pd
# Create dataframe as given below
dat = {'CustID':['1001','1001','1002','1002','1003','1003'],
'CustName':['UIPat','DatRob','Goog','Chrysler','Ford','GM'],
'Profitinlakhs':[2005,3245,1245,8765,5463,3547]}
dataframe = pd.DataFrame(dat)
dataframe
###Output
_____no_output_____
###Markdown
**We can now use the `.groupby()` method to group rows together based on a column name. For example, let's group based on `CustID`. This will create a `DataFrameGroupBy` object:**
###Code
dataframe.groupby('CustID')
###Output
_____no_output_____
###Markdown
This object can be saved as a variable
###Code
CustID_grouped = dataframe.groupby("CustID")
###Output
_____no_output_____
###Markdown
Now we can aggregate using the variable
###Code
CustID_grouped.mean()
###Output
_____no_output_____
###Markdown
Or we can call the groupby function for each aggregation
###Code
dataframe.groupby('CustID').mean()
###Output
_____no_output_____
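###Markdown
Several aggregations can also be computed in a single call with `.agg()` (a small sketch using the same grouped object):
###Code
CustID_grouped['Profitinlakhs'].agg(['mean', 'min', 'max', 'count'])
###Output
_____no_output_____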
###Markdown
Some more examples
###Code
CustID_grouped.std()
CustID_grouped.min()
CustID_grouped.max()
CustID_grouped.count()
CustID_grouped.describe()
CustID_grouped.describe().transpose()
CustID_grouped.describe().transpose()['1001']
###Output
_____no_output_____ |
exercices/JIT.ipynb | ###Markdown
JIT Exercise Use `jit` (either in function or decorator form) to speed up the Mandelbrot code below, then time and compare the results
###Code
import numba
from numba import jit
import numpy
from matplotlib import pyplot, cm
%matplotlib inline
@jit(nopython=True)
def mandel(x, y, max_iters):
i = 0
c = complex(x, y)
z = 0.0j
for i in range(max_iters):
z = z * z + c
if (z.real * z.real + z.imag * z.imag) >= 4:
return i
return 255
def create_fractal(min_x, max_x, min_y, max_y, image, iters):
height = image.shape[0]
width = image.shape[1]
pixel_size_x = (max_x - min_x) / width
pixel_size_y = (max_y - min_y) / height
for x in range(width):
real = min_x + x * pixel_size_x
for y in range(height):
imag = min_y + y * pixel_size_y
color = mandel(real, imag, iters)
image[y, x] = color
return image
%%time
import numpy
image = numpy.zeros((500 * 2, 750 * 2), dtype=numpy.uint8)
image = create_fractal(-2.0, 1.0, -1.0, 1.0, image, 20)
%timeit image
###Output
17.7 ns ± 0.624 ns per loop (mean ± std. dev. of 7 runs, 100000000 loops each)
CPU times: user 15.6 s, sys: 10.2 ms, total: 15.7 s
Wall time: 16.4 s
###Markdown
The result shows that the Numba JIT-compiled function runs faster than the pure Python version on the CPU. The comparison measure is the function's execution time; note that the `%timeit image` call above only times the variable lookup, while the `%%time` cell measures the full run, including Numba's one-time compilation.
###Code
pyplot.figure(figsize=(10,8))
pyplot.imshow(image, cmap=cm.viridis)
pyplot.colorbar()
###Output
_____no_output_____
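###Markdown
A fairer timing excludes Numba's one-time compilation cost: call the function once so the jitted `mandel` kernel compiles, then time the steady-state runs (a sketch assuming the definitions above):
###Code
import numpy

image = numpy.zeros((500 * 2, 750 * 2), dtype=numpy.uint8)
create_fractal(-2.0, 1.0, -1.0, 1.0, image, 20)  # warm-up call triggers JIT compilation
%timeit create_fractal(-2.0, 1.0, -1.0, 1.0, image, 20)  # compiled execution only
###Output
_____no_output_____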
###Markdown
JIT Exercise Use `jit` (either in function or decorator form) to speed up the Mandelbrot code below, then time and compare the results
###Code
from numba import jit
import numpy
import time
from matplotlib import pyplot, cm
%matplotlib inline
start = time.time()
@jit
def mandel(x, y, max_iters):
i = 0
c = complex(x, y)
z = 0.0j
for i in range(max_iters):
z = z * z + c
if (z.real * z.real + z.imag * z.imag) >= 4:
return i
return 255
def create_fractal(min_x, max_x, min_y, max_y, image, iters):
height = image.shape[0]
width = image.shape[1]
pixel_size_x = (max_x - min_x) / width
pixel_size_y = (max_y - min_y) / height
for x in range(width):
real = min_x + x * pixel_size_x
for y in range(height):
imag = min_y + y * pixel_size_y
color = mandel(real, imag, iters)
image[y, x] = color
return image
image = numpy.zeros((500 * 2, 750 * 2), dtype=numpy.uint8)
image = create_fractal(-2.0, 1.0, -1.0, 1.0, image, 20)
end = time.time()
print("Duree = ",end - start,"seconds")
pyplot.figure(figsize=(10,8))
pyplot.imshow(image, cmap=cm.viridis)
pyplot.colorbar();
###Output
_____no_output_____
###Markdown
JIT Exercise Use `jit` (either in function or decorator form) to speed up the Mandelbrot code below, then time and compare the results
###Code
from numba import jit
import numpy
from matplotlib import pyplot, cm
%matplotlib inline
%load_ext line_profiler
@jit(nopython=True)
def mandel(x, y, max_iters):
i = 0
c = complex(x, y)
z = 0.0j
for i in range(max_iters):
z = z * z + c
if (z.real * z.real + z.imag * z.imag) >= 4:
return i
return 255
@jit(nopython=True)
def create_fractal(min_x, max_x, min_y, max_y, image, iters):
height = image.shape[0]
width = image.shape[1]
pixel_size_x = (max_x - min_x) / width
pixel_size_y = (max_y - min_y) / height
for x in range(width):
real = min_x + x * pixel_size_x
for y in range(height):
imag = min_y + y * pixel_size_y
color = mandel(real, imag, iters)
image[y, x] = color
return image
%%time
image = numpy.zeros((500 * 2, 750 * 2), dtype=numpy.uint8)
image = create_fractal(-2.0, 1.0, -1.0, 1.0, image, 20)
%%time
pyplot.figure(figsize=(10,8))
pyplot.imshow(image, cmap=cm.viridis)
pyplot.colorbar();
###Output
CPU times: user 73 ms, sys: 0 ns, total: 73 ms
Wall time: 72.1 ms
###Markdown
JIT Exercise Use `jit` (either in function or decorator form) to speed up the Mandelbrot code below, then time and compare the results
###Code
from numba import jit
import numpy
from matplotlib import pyplot, cm
%matplotlib inline
def mandel(x, y, max_iters):
i = 0
c = complex(x, y)
z = 0.0j
for i in range(max_iters):
z = z * z + c
if (z.real * z.real + z.imag * z.imag) >= 4:
return i
return 255
def create_fractal(min_x, max_x, min_y, max_y, image, iters):
height = image.shape[0]
width = image.shape[1]
pixel_size_x = (max_x - min_x) / width
pixel_size_y = (max_y - min_y) / height
for x in range(width):
real = min_x + x * pixel_size_x
for y in range(height):
imag = min_y + y * pixel_size_y
color = mandel(real, imag, iters)
image[y, x] = color
return image
import time
start = time.perf_counter()
image = numpy.zeros((500 * 2, 750 * 2), dtype=numpy.uint8)
image = create_fractal(-2.0, 1.0, -1.0, 1.0, image, 20)
end = time.perf_counter()
print("Duree = ", end-start," secondes")
pyplot.figure(figsize=(10,8))
pyplot.imshow(image, cmap=cm.viridis)
pyplot.colorbar();
#using jit method
from numba import jit
import numpy
from matplotlib import pyplot, cm
%matplotlib inline
@jit(nopython=True)
def mandel(x, y, max_iters):
i = 0
c = complex(x, y)
z = 0.0j
for i in range(max_iters):
z = z * z + c
if (z.real * z.real + z.imag * z.imag) >= 4:
return i
return 255
def create_fractal(min_x, max_x, min_y, max_y, image, iters):
height = image.shape[0]
width = image.shape[1]
pixel_size_x = (max_x - min_x) / width
pixel_size_y = (max_y - min_y) / height
for x in range(width):
real = min_x + x * pixel_size_x
for y in range(height):
imag = min_y + y * pixel_size_y
color = mandel(real, imag, iters)
image[y, x] = color
return image
import time
start = time.perf_counter()
image = numpy.zeros((500 * 2, 750 * 2), dtype=numpy.uint8)
image = create_fractal(-2.0, 1.0, -1.0, 1.0, image, 20)
end = time.perf_counter()
print("Duree = ", end-start," secondes")
pyplot.figure(figsize=(10,8))
pyplot.imshow(image, cmap=cm.viridis)
pyplot.colorbar();
Comparing the results: without `jit`, the Mandelbrot code took about 8.19 seconds, but with `jit` it took only about 1.79 seconds. So the JIT-compiled version is clearly faster; Numba is a high-performance Python compiler.
###Output
_____no_output_____
###Markdown
JIT Exercise Use `jit` (either in function or decorator form) to speed up the Mandelbrot code below, then time and compare the results
###Code
from numba import jit
import numpy
from matplotlib import pyplot, cm
%matplotlib inline
%load_ext line_profiler
@jit(nopython=True)
def mandel(x, y, max_iters):
i = 0
c = complex(x, y)
z = 0.0j
for i in range(max_iters):
z = z * z + c
if (z.real * z.real + z.imag * z.imag) >= 4:
return i
return 255
@jit(nopython=True)
def create_fractal(min_x, max_x, min_y, max_y, image, iters):
height = image.shape[0]
width = image.shape[1]
pixel_size_x = (max_x - min_x) / width
pixel_size_y = (max_y - min_y) / height
for x in range(width):
real = min_x + x * pixel_size_x
for y in range(height):
imag = min_y + y * pixel_size_y
color = mandel(real, imag, iters)
image[y, x] = color
return image
%%time
image = numpy.zeros((500 * 2, 750 * 2), dtype=numpy.uint8)
image = create_fractal(-2.0, 1.0, -1.0, 1.0, image, 20)
%%time
pyplot.figure(figsize=(10,8))
pyplot.imshow(image, cmap=cm.viridis)
pyplot.colorbar();
###Output
_____no_output_____
###Markdown
JIT Exercise Use `jit` (either in function or decorator form) to speed up the Mandelbrot code below, then time and compare the results
###Code
from numba import jit
import numpy
from matplotlib import pyplot, cm
%matplotlib inline
def mandel(x, y, max_iters):
i = 0
c = complex(x, y)
z = 0.0j
for i in range(max_iters):
z = z * z + c
if (z.real * z.real + z.imag * z.imag) >= 4:
return i
return 255
def create_fractal(min_x, max_x, min_y, max_y, image, iters):
height = image.shape[0]
width = image.shape[1]
pixel_size_x = (max_x - min_x) / width
pixel_size_y = (max_y - min_y) / height
for x in range(width):
real = min_x + x * pixel_size_x
for y in range(height):
imag = min_y + y * pixel_size_y
color = mandel(real, imag, iters)
image[y, x] = color
return image
image = numpy.zeros((500 * 2, 750 * 2), dtype=numpy.uint8)
image = create_fractal(-2.0, 1.0, -1.0, 1.0, image, 20)
pyplot.figure(figsize=(10,8))
pyplot.imshow(image, cmap=cm.viridis)
pyplot.colorbar();
###Output
_____no_output_____
###Markdown
JIT Exercise Use `jit` (either in function or decorator form) to speed up the Mandelbrot code below, then time and compare the results
###Code
from numba import jit
import numpy
from matplotlib import pyplot, cm
%matplotlib inline
def mandel(x, y, max_iters):
i = 0
c = complex(x, y)
z = 0.0j
for i in range(max_iters):
z = z * z + c
if (z.real * z.real + z.imag * z.imag) >= 4:
return i
return 255
def create_fractal(min_x, max_x, min_y, max_y, image, iters):
height = image.shape[0]
width = image.shape[1]
pixel_size_x = (max_x - min_x) / width
pixel_size_y = (max_y - min_y) / height
for x in range(width):
real = min_x + x * pixel_size_x
for y in range(height):
imag = min_y + y * pixel_size_y
color = mandel(real, imag, iters)
image[y, x] = color
return image
image = numpy.zeros((500 * 2, 750 * 2), dtype=numpy.uint8)
image = create_fractal(-2.0, 1.0, -1.0, 1.0, image, 20)
pyplot.figure(figsize=(10,8))
pyplot.imshow(image, cmap=cm.viridis)
pyplot.colorbar();
###Output
_____no_output_____
###Markdown
JIT Exercise Use `jit` (either in function or decorator form) to speed up the Mandelbrot code below, then time and compare the results
###Code
from numba import jit
import numpy
from matplotlib import pyplot, cm
%matplotlib inline
%load_ext line_profiler
@jit(nopython=True)
def mandel(x, y, max_iters):
i = 0
c = complex(x, y)
z = 0.0j
for i in range(max_iters):
z = z * z + c
if (z.real * z.real + z.imag * z.imag) >= 4:
return i
return 255
@jit(nopython=True)
def create_fractal(min_x, max_x, min_y, max_y, image, iters):
height = image.shape[0]
width = image.shape[1]
pixel_size_x = (max_x - min_x) / width
pixel_size_y = (max_y - min_y) / height
for x in range(width):
real = min_x + x * pixel_size_x
for y in range(height):
imag = min_y + y * pixel_size_y
color = mandel(real, imag, iters)
image[y, x] = color
return image
%%time
image = numpy.zeros((500 * 2, 750 * 2), dtype=numpy.uint8)
image = create_fractal(-2.0, 1.0, -1.0, 1.0, image, 20)
%%time
pyplot.figure(figsize=(10,8))
pyplot.imshow(image, cmap=cm.viridis)
pyplot.colorbar();
###Output
Wall time: 386 ms
###Markdown
JIT Exercise Use `jit` (either in function or decorator form) to speed up the Mandelbrot code below, then time and compare the results
###Code
from numba import jit
import numpy
from matplotlib import pyplot, cm
%matplotlib inline
%load_ext line_profiler
@jit(nopython=True)
def mandel(x, y, max_iters):
i = 0
c = complex(x, y)
z = 0.0j
for i in range(max_iters):
z = z * z + c
if (z.real * z.real + z.imag * z.imag) >= 4:
return i
return 255
@jit(nopython=True)
def create_fractal(min_x, max_x, min_y, max_y, image, iters):
height = image.shape[0]
width = image.shape[1]
pixel_size_x = (max_x - min_x) / width
pixel_size_y = (max_y - min_y) / height
for x in range(width):
real = min_x + x * pixel_size_x
for y in range(height):
imag = min_y + y * pixel_size_y
color = mandel(real, imag, iters)
image[y, x] = color
return image
%%time
image = numpy.zeros((500 * 2, 750 * 2), dtype=numpy.uint8)
image = create_fractal(-2.0, 1.0, -1.0, 1.0, image, 20)
pyplot.figure(figsize=(10,8))
pyplot.imshow(image, cmap=cm.viridis)
pyplot.colorbar();
###Output
_____no_output_____
###Markdown
JIT Exercise Use `jit` (either in function or decorator form) to speed up the Mandelbrot code below, then time and compare the results
###Code
from numba import jit
import numpy
from matplotlib import pyplot, cm
%matplotlib inline
@jit(nopython=True)
def mandel(x, y, max_iters):
i = 0
c = complex(x, y)
z = 0.0j
for i in range(max_iters):
z = z * z + c
if (z.real * z.real + z.imag * z.imag) >= 4:
return i
return 255
def create_fractal(min_x, max_x, min_y, max_y, image, iters):
height = image.shape[0]
width = image.shape[1]
pixel_size_x = (max_x - min_x) / width
pixel_size_y = (max_y - min_y) / height
for x in range(width):
real = min_x + x * pixel_size_x
for y in range(height):
imag = min_y + y * pixel_size_y
color = mandel(real, imag, iters)
image[y, x] = color
return image
image = numpy.zeros((500 * 2, 750 * 2), dtype=numpy.uint8)
image = create_fractal(-2.0, 1.0, -1.0, 1.0, image, 20)
pyplot.figure(figsize=(10,8))
pyplot.imshow(image, cmap=cm.viridis)
pyplot.colorbar();
###Output
_____no_output_____
###Markdown
JIT Exercise Use `jit` (either in function or decorator form) to speed up the Mandelbrot code below, then time and compare the results
###Code
from numba import jit
import numpy
from matplotlib import pyplot, cm
%matplotlib inline
def mandel(x, y, max_iters):
i = 0
c = complex(x, y)
z = 0.0j
for i in range(max_iters):
z = z * z + c
if (z.real * z.real + z.imag * z.imag) >= 4:
return i
return 255
def create_fractal(min_x, max_x, min_y, max_y, image, iters):
height = image.shape[0]
width = image.shape[1]
pixel_size_x = (max_x - min_x) / width
pixel_size_y = (max_y - min_y) / height
for x in range(width):
real = min_x + x * pixel_size_x
for y in range(height):
imag = min_y + y * pixel_size_y
color = mandel(real, imag, iters)
image[y, x] = color
return image
image = numpy.zeros((500 * 2, 750 * 2), dtype=numpy.uint8)
image = create_fractal(-2.0, 1.0, -1.0, 1.0, image, 20)
pyplot.figure(figsize=(10,8))
pyplot.imshow(image, cmap=cm.viridis)
pyplot.colorbar();
###Output
_____no_output_____
###Markdown
JIT Exercise Use `jit` (either in function or decorator form) to speed up the Mandelbrot code below, then time and compare the results
###Code
%%timeit
total = 0
for i in range(1000):
for j in range(1000):
total += i * (-1) ** j
%timeit sum(range(100))
from numba import jit
import numpy
from matplotlib import pyplot, cm
%matplotlib inline
%load_ext line_profiler
@jit(nopython=True)
def mandel(x, y, max_iters):
i = 0
c = complex(x, y)
z = 0.0j
for i in range(max_iters):
z = z * z + c
if (z.real * z.real + z.imag * z.imag) >= 4:
return i
return 255
@jit(nopython=True)
def create_fractal(min_x, max_x, min_y, max_y, image, iters):
height = image.shape[0]
width = image.shape[1]
pixel_size_x = (max_x - min_x) / width
pixel_size_y = (max_y - min_y) / height
for x in range(width):
real = min_x + x * pixel_size_x
for y in range(height):
imag = min_y + y * pixel_size_y
color = mandel(real, imag, iters)
image[y, x] = color
return image
%%time
image = numpy.zeros((500 * 2, 750 * 2), dtype=numpy.uint8)
image = create_fractal(-2.0, 1.0, -1.0, 1.0, image, 20)
pyplot.figure(figsize=(10,8))
pyplot.imshow(image, cmap=cm.viridis)
pyplot.colorbar();
###Output
_____no_output_____ |
logic.ipynb | ###Markdown
Logic: `logic.py`; Chapters 6-8 This notebook describes the [logic.py](https://github.com/aimacode/aima-python/blob/master/logic.py) module, which covers Chapters 6 (Logical Agents), 7 (First-Order Logic) and 8 (Inference in First-Order Logic) of *[Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu)*. See the [intro notebook](https://github.com/aimacode/aima-python/blob/master/intro.ipynb) for instructions.We'll start by looking at `Expr`, the data type for logical sentences, and the convenience function `expr`. We'll be covering two types of knowledge bases, `PropKB` - Propositional logic knowledge base and `FolKB` - First order logic knowledge base. We will construct a propositional knowledge base of a specific situation in the Wumpus World. We will next go through the `tt_entails` function and experiment with it a bit. The `pl_resolution` and `pl_fc_entails` functions will come next. We'll study forward chaining and backward chaining algorithms for `FolKB` and use them on `crime_kb` knowledge base.But the first step is to load the code:
###Code
from utils import *
from logic import *
from notebook import psource
###Output
_____no_output_____
###Markdown
Logical Sentences The `Expr` class is designed to represent any kind of mathematical expression. The simplest type of `Expr` is a symbol, which can be defined with the function `Symbol`:
###Code
Symbol('x')
###Output
_____no_output_____
###Markdown
Or we can define multiple symbols at the same time with the function `symbols`:
###Code
(x, y, P, Q, f) = symbols('x, y, P, Q, f')
###Output
_____no_output_____
###Markdown
We can combine `Expr`s with the regular Python infix and prefix operators. Here's how we would form the logical sentence "P and not Q":
###Code
P & ~Q
###Output
_____no_output_____
###Markdown
This works because the `Expr` class overloads the `&` operator with this definition: `def __and__(self, other): return Expr('&', self, other)`, and does similar overloads for the other operators. An `Expr` has two fields: `op` for the operator, which is always a string, and `args` for the arguments, which is a tuple of 0 or more expressions. By "expression," I mean either an instance of `Expr`, or a number. Let's take a look at the fields for some `Expr` examples:
###Code
sentence = P & ~Q
sentence.op
sentence.args
P.op
P.args
Pxy = P(x, y)
Pxy.op
Pxy.args
###Output
_____no_output_____
###Markdown
It is important to note that the `Expr` class does not define the *logic* of Propositional Logic sentences; it just gives you a way to *represent* expressions. Think of an `Expr` as an [abstract syntax tree](https://en.wikipedia.org/wiki/Abstract_syntax_tree). Each of the `args` in an `Expr` can be either a symbol, a number, or a nested `Expr`. We can nest these trees to any depth. Here is a deeply nested `Expr`:
###Code
3 * f(x, y) + P(y) / 2 + 1
###Output
_____no_output_____
###Markdown
Operators for Constructing Logical SentencesHere is a table of the operators that can be used to form sentences. Note that we have a problem: we want to use Python operators to make sentences, so that our programs (and our interactive sessions like the one here) will show simple code. But Python does not allow implication arrows as operators, so for now we have to use a more verbose notation that Python does allow: `|'==>'|` instead of just `==>`. Alternately, you can always use the more verbose `Expr` constructor forms:| Operation | Book | Python Infix Input | Python Output | Python `Expr` Input|--------------------------|----------------------|-------------------------|---|---|| Negation | ¬ P | `~P` | `~P` | `Expr('~', P)`| And | P ∧ Q | `P & Q` | `P & Q` | `Expr('&', P, Q)`| Or | P ∨ Q | `P` &#124; `Q`| `P` &#124; `Q` | `Expr('`&#124;`', P, Q)`| Inequality (Xor) | P ≠ Q | `P ^ Q` | `P ^ Q` | `Expr('^', P, Q)`| Implication | P → Q | `P` &#124;`'==>'`&#124; `Q` | `P ==> Q` | `Expr('==>', P, Q)`| Reverse Implication | Q ← P | `Q` &#124;`'<=='`&#124; `P` | `Q <== P` | `Expr('<==', Q, P)`| Equivalence | P ↔ Q | `P` &#124;`'<=>'`&#124; `Q` | `P <=> Q` | `Expr('<=>', P, Q)`Here's an example of defining a sentence with an implication arrow:
###Code
~(P & Q) |'==>'| (~P | ~Q)
###Output
_____no_output_____
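###Markdown
As a quick check, the verbose constructor form from the table builds exactly the same expression tree as the infix notation:
###Code
Expr('==>', P, Q) == (P |'==>'| Q)
###Output
_____no_output_____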
###Markdown
`expr`: a Shortcut for Constructing SentencesIf the `|'==>'|` notation looks ugly to you, you can use the function `expr` instead:
###Code
expr('~(P & Q) ==> (~P | ~Q)')
###Output
_____no_output_____
###Markdown
`expr` takes a string as input, and parses it into an `Expr`. The string can contain arrow operators: `==>`, `<==`, or `<=>`, which are handled as if they were regular Python infix operators. And `expr` automatically defines any symbols, so you don't need to pre-define them:
###Code
expr('sqrt(b ** 2 - 4 * a * c)')
###Output
_____no_output_____
###Markdown
For now that's all you need to know about `expr`. If you are interested, we explain the messy details of how `expr` is implemented and how `|'==>'|` is handled in the appendix. Propositional Knowledge Bases: `PropKB`The class `PropKB` can be used to represent a knowledge base of propositional logic sentences.We see that the class `KB` has four methods, apart from `__init__`. A point to note here: the `ask` method simply calls the `ask_generator` method. Thus, this one has already been implemented, and what you'll have to actually implement when you create your own knowledge base class (though you'll probably never need to, considering the ones we've created for you) will be the `ask_generator` function and not the `ask` function itself.The class `PropKB` now.* `__init__(self, sentence=None)` : The constructor `__init__` creates a single field `clauses` which will be a list of all the sentences of the knowledge base. Note that each one of these sentences will be a 'clause' i.e. a sentence which is made up of only literals and `or`s.* `tell(self, sentence)` : When you want to add a sentence to the KB, you use the `tell` method. This method takes a sentence, converts it to its CNF, extracts all the clauses, and adds all these clauses to the `clauses` field. So, you need not worry about `tell`ing only clauses to the knowledge base. You can `tell` the knowledge base a sentence in any form that you wish; converting it to CNF and adding the resulting clauses will be handled by the `tell` method.* `ask_generator(self, query)` : The `ask_generator` function is used by the `ask` function. It calls the `tt_entails` function, which in turn returns `True` if the knowledge base entails query and `False` otherwise. The `ask_generator` itself returns an empty dict `{}` if the knowledge base entails query and `None` otherwise. This might seem a little bit weird to you. After all, it makes more sense just to return a `True` or a `False` instead of the `{}` or `None` But this is done to maintain consistency with the way things are in First-Order Logic, where an `ask_generator` function is supposed to return all the substitutions that make the query true. Hence the dict, to return all these substitutions. I will be mostly be using the `ask` function which returns a `{}` or a `False`, but if you don't like this, you can always use the `ask_if_true` function which returns a `True` or a `False`.* `retract(self, sentence)` : This function removes all the clauses of the sentence given, from the knowledge base. Like the `tell` function, you don't have to pass clauses to remove them from the knowledge base; any sentence will do fine. The function will take care of converting that sentence to clauses and then remove those. Wumpus World KBLet us create a `PropKB` for the wumpus world with the sentences mentioned in `section 7.4.3`.
###Code
wumpus_kb = PropKB()
###Output
_____no_output_____
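###Markdown
Before filling in the wumpus-world sentences, here is a toy illustration of the `tell`/`ask_if_true`/`retract` cycle described above (a sketch on throwaway symbols):
###Code
toy_kb = PropKB()
toy_kb.tell(expr('A & (A ==> B)'))    # stored as the clauses A and B | ~A
print(toy_kb.ask_if_true(expr('B')))  # True: the KB entails B
toy_kb.retract(expr('A'))             # remove the clause A
print(toy_kb.ask_if_true(expr('B')))  # False: B is no longer entailed
###Output
_____no_output_____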
###Markdown
We define the symbols we use in our clauses.$P_{x, y}$ is true if there is a pit in `[x, y]`.$B_{x, y}$ is true if the agent senses breeze in `[x, y]`.
###Code
P11, P12, P21, P22, P31, B11, B21 = expr('P11, P12, P21, P22, P31, B11, B21')
###Output
_____no_output_____
###Markdown
Now we tell sentences based on `section 7.4.3`.There is no pit in `[1,1]`.
###Code
wumpus_kb.tell(~P11)
###Output
_____no_output_____
###Markdown
A square is breezy if and only if there is a pit in a neighboring square. This has to be stated for each square but for now, we include just the relevant squares.
###Code
wumpus_kb.tell(B11 | '<=>' | ((P12 | P21)))
wumpus_kb.tell(B21 | '<=>' | ((P11 | P22 | P31)))
###Output
_____no_output_____
###Markdown
Now we include the breeze percepts for the first two squares leading up to the situation in `Figure 7.3(b)`
###Code
wumpus_kb.tell(~B11)
wumpus_kb.tell(B21)
###Output
_____no_output_____
###Markdown
We can check the clauses stored in a `KB` by accessing its `clauses` variable
###Code
wumpus_kb.clauses
###Output
_____no_output_____
###Markdown
We see that the equivalence $B_{1, 1} \iff (P_{1, 2} \lor P_{2, 1})$ was automatically converted to two implications which were in turn converted to CNF which is stored in the `KB`.$B_{1, 1} \iff (P_{1, 2} \lor P_{2, 1})$ was split into $B_{1, 1} \implies (P_{1, 2} \lor P_{2, 1})$ and $B_{1, 1} \Longleftarrow (P_{1, 2} \lor P_{2, 1})$.$B_{1, 1} \implies (P_{1, 2} \lor P_{2, 1})$ was converted to $P_{1, 2} \lor P_{2, 1} \lor \neg B_{1, 1}$.$B_{1, 1} \Longleftarrow (P_{1, 2} \lor P_{2, 1})$ was converted to $\neg (P_{1, 2} \lor P_{2, 1}) \lor B_{1, 1}$ which becomes $(\neg P_{1, 2} \lor B_{1, 1}) \land (\neg P_{2, 1} \lor B_{1, 1})$ after applying De Morgan's laws and distributing the disjunction.$B_{2, 1} \iff (P_{1, 1} \lor P_{2, 2} \lor P_{3, 1})$ is converted in a similar manner. Inference in Propositional Knowledge BaseIn this section we will look at two algorithms to check if a sentence is entailed by the `KB`. Our goal is to decide whether $\text{KB} \vDash \alpha$ for some sentence $\alpha$. Truth Table EnumerationIt is a model-checking approach which, as the name suggests, enumerates all possible models in which the `KB` is true and checks if $\alpha$ is also true in these models. We list the $n$ symbols in the `KB` and enumerate the $2^{n}$ models in a depth-first manner and check the truth of `KB` and $\alpha$.
###Code
psource(tt_check_all)
###Output
_____no_output_____
###Markdown
The algorithm basically computes every line of the truth table $KB\implies \alpha$ and checks if it is true everywhere. If symbols are defined, the routine recursively constructs every combination of truth values for the symbols and then checks whether `model` is consistent with `kb`. The given models correspond to the lines in the truth table which have a `true` in the KB column, and for these lines it checks whether the query evaluates to true: `result = pl_true(alpha, model)`. In short, `tt_check_all` evaluates this logical expression for each `model`: `pl_true(kb, model) => pl_true(alpha, model)`. Equivalently, it checks that `pl_true(kb, model) & ~pl_true(alpha, model)` never holds, that is, that the knowledge base and the negation of the query are logically inconsistent.`tt_entails()` just extracts the symbols from the query and calls `tt_check_all()` with the proper parameters.
###Code
psource(tt_entails)
###Output
_____no_output_____
###Markdown
Keep in mind that for two symbols P and Q, P => Q is false only when P is `True` and Q is `False`.Example usage of `tt_entails()`:
###Code
tt_entails(P & Q, Q)
###Output
_____no_output_____
###Markdown
P & Q is True only when both P and Q are True. Hence, (P & Q) => Q is True
###Code
tt_entails(P | Q, Q)
tt_entails(P | Q, P)
###Output
_____no_output_____
###Markdown
If we know that P | Q is true, we cannot infer the truth values of P and Q. Hence (P | Q) => Q is False and so is (P | Q) => P.
###Code
(A, B, C, D, E, F, G) = symbols('A, B, C, D, E, F, G')
tt_entails(A & (B | C) & D & E & ~(F | G), A & D & E & ~F & ~G)
###Output
_____no_output_____
###Markdown
We can see that for the KB to be true, A, D, E have to be True and F and G have to be False.Nothing can be said about B or C. Coming back to our problem, note that `tt_entails()` takes an `Expr` which is a conjunction of clauses as the input instead of the `KB` itself. You can use the `ask_if_true()` method of `PropKB` which does all the required conversions. Let's check what `wumpus_kb` tells us about $P_{1, 1}$.
###Code
wumpus_kb.ask_if_true(~P11), wumpus_kb.ask_if_true(P11)
###Output
_____no_output_____
###Markdown
Looking at Figure 7.9 we see that in all models in which the knowledge base is `True`, $P_{1, 1}$ is `False`. It makes sense that `ask_if_true()` returns `True` for $\alpha = \neg P_{1, 1}$ and `False` for $\alpha = P_{1, 1}$. This begs the question, what if $\alpha$ is `True` in only a portion of all models. Do we return `True` or `False`? This doesn't rule out the possibility of $\alpha$ being `True` but it is not entailed by the `KB` so we return `False` in such cases. We can see this is the case for $P_{2, 2}$ and $P_{3, 1}$.
###Code
wumpus_kb.ask_if_true(~P22), wumpus_kb.ask_if_true(P22)
###Output
_____no_output_____
###Markdown
Proof by ResolutionRecall that our goal is to check whether $\text{KB} \vDash \alpha$ i.e. is $\text{KB} \implies \alpha$ true in every model. Suppose we wanted to check if $P \implies Q$ is valid. We check the satisfiability of $\neg (P \implies Q)$, which can be rewritten as $P \land \neg Q$. If $P \land \neg Q$ is unsatisfiable, then $P \implies Q$ must be true in all models. This gives us the result "$\text{KB} \vDash \alpha$ if and only if $\text{KB} \land \neg \alpha$ is unsatisfiable".This technique corresponds to proof by contradiction, a standard mathematical proof technique. We assume $\alpha$ to be false and show that this leads to a contradiction with known axioms in $\text{KB}$. We obtain a contradiction by making valid inferences using inference rules. In this proof we use a single inference rule, resolution, which states $(l_1 \lor \dots \lor l_k) \land (m_1 \lor \dots \lor m_n) \land (l_i \iff \neg m_j) \implies l_1 \lor \dots \lor l_{i - 1} \lor l_{i + 1} \lor \dots \lor l_k \lor m_1 \lor \dots \lor m_{j - 1} \lor m_{j + 1} \lor \dots \lor m_n$. Applying resolution yields a clause which we add to the KB. We keep doing this until:* There are no new clauses that can be added, in which case $\text{KB} \nvDash \alpha$.* Two clauses resolve to yield the empty clause, in which case $\text{KB} \vDash \alpha$.The empty clause is equivalent to False because it arises only from resolving two complementary unit clauses such as $P$ and $\neg P$ which is a contradiction as both $P$ and $\neg P$ can't be True at the same time. There is one catch, however: the algorithm that implements proof by resolution cannot handle complex sentences. Implications and bi-implications have to be simplified into simpler clauses. We already know that *every sentence of a propositional logic is logically equivalent to a conjunction of clauses*.We will use this fact to our advantage and simplify the input sentence into the **conjunctive normal form** (CNF) which is a conjunction of disjunctions of literals.For example:$$(A\lor B)\land (\neg B\lor C\lor\neg D)\land (D\lor\neg E)$$This is equivalent to the POS (Product of sums) form in digital electronics.Here's an outline of how the conversion is done:1. Convert bi-implications to implications$\alpha\iff\beta$ can be written as $(\alpha\implies\beta)\land(\beta\implies\alpha)$This also applies to compound sentences$\alpha\iff(\beta\lor\gamma)$ can be written as $(\alpha\implies(\beta\lor\gamma))\land((\beta\lor\gamma)\implies\alpha)$2. Convert implications to their logical equivalents$\alpha\implies\beta$ can be written as $\neg\alpha\lor\beta$3. Move negation inwardsCNF requires atomic literals. Hence, negation cannot appear on a compound statement.De Morgan's laws will be helpful here.$\neg(\alpha\land\beta)\equiv(\neg\alpha\lor\neg\beta)$$\neg(\alpha\lor\beta)\equiv(\neg\alpha\land\neg\beta)$4. Distribute disjunction over conjunctionDisjunction and conjunction are distributive over each other.Now that we only have conjunctions, disjunctions and negations in our expression, we will distribute disjunctions over conjunctions wherever possible as this will give us a sentence which is a conjunction of simpler clauses, which is what we wanted in the first place.We need a term of the form$(\alpha_{1}\lor\alpha_{2}\lor\alpha_{3}...)\land(\beta_{1}\lor\beta_{2}\lor\beta_{3}...)\land(\gamma_{1}\lor\gamma_{2}\lor\gamma_{3}...)\land...$The `to_cnf` function executes this conversion using helper subroutines.
###Code
psource(to_cnf)
###Output
_____no_output_____
###Markdown
`to_cnf` calls three subroutines.`eliminate_implications` converts bi-implications and implications to their logical equivalents.`move_not_inwards` removes negations from compound statements and moves them inwards using De Morgan's laws.`distribute_and_over_or` distributes disjunctions over conjunctions.Run the cells below for implementation details.
###Code
%psource eliminate_implications
%psource move_not_inwards
%psource distribute_and_over_or
###Output
_____no_output_____
###Markdown
Let's convert some sentences to see how it works
###Code
A, B, C, D = expr('A, B, C, D')
to_cnf(A |'<=>'| B)
to_cnf(A |'<=>'| (B & C))
to_cnf(A & (B | (C & D)))
to_cnf((A |'<=>'| ~B) |'==>'| (C | ~D))
###Output
_____no_output_____
###Markdown
Coming back to our resolution problem, we can see how the `to_cnf` function is utilized here
###Code
psource(pl_resolution)
pl_resolution(wumpus_kb, ~P11), pl_resolution(wumpus_kb, P11)
pl_resolution(wumpus_kb, ~P22), pl_resolution(wumpus_kb, P22)
###Output
_____no_output_____
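###Markdown
The same routine works on any propositional KB; as a small sketch, resolution confirms a modus ponens inference:
###Code
tiny_kb = PropKB()
tiny_kb.tell(expr('A ==> B'))
tiny_kb.tell(expr('A'))
pl_resolution(tiny_kb, expr('B'))  # True: KB & ~B resolves to the empty clause
###Output
_____no_output_____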
###Markdown
First-Order Logic Knowledge Bases: `FolKB`The class `FolKB` can be used to represent a knowledge base of First-order logic sentences. You would initialize and use it the same way as you would for `PropKB` except that the clauses are first-order definite clauses. We will see how to write such clauses to create a database and query them in the following sections. Criminal KBIn this section we create a `FolKB` based on the following paragraph.The law says that it is a crime for an American to sell weapons to hostile nations. The country Nono, an enemy of America, has some missiles, and all of its missiles were sold to it by Colonel West, who is American.The first step is to extract the facts and convert them into first-order definite clauses. Extracting the facts from data alone is a challenging task. Fortunately, we have a small paragraph and can do extraction and conversion manually. We'll store the clauses in list aptly named `clauses`.
###Code
clauses = []
###Output
_____no_output_____
###Markdown
“... it is a crime for an American to sell weapons to hostile nations”The keywords to look for here are 'crime', 'American', 'sell', 'weapon' and 'hostile'. We use predicate symbols to make meaning of them.* `Criminal(x)`: `x` is a criminal* `American(x)`: `x` is an American* `Sells(x ,y, z)`: `x` sells `y` to `z`* `Weapon(x)`: `x` is a weapon* `Hostile(x)`: `x` is a hostile nationLet us now combine them with appropriate variable naming to depict the meaning of the sentence. The criminal `x` is also the American `x` who sells weapon `y` to `z`, which is a hostile nation.$\text{American}(x) \land \text{Weapon}(y) \land \text{Sells}(x, y, z) \land \text{Hostile}(z) \implies \text{Criminal} (x)$
###Code
clauses.append(expr("(American(x) & Weapon(y) & Sells(x, y, z) & Hostile(z)) ==> Criminal(x)"))
###Output
_____no_output_____
###Markdown
"The country Nono, an enemy of America"We now know that Nono is an enemy of America. We represent these nations using the constant symbols `Nono` and `America`. the enemy relation is show using the predicate symbol `Enemy`.$\text{Enemy}(\text{Nono}, \text{America})$
###Code
clauses.append(expr("Enemy(Nono, America)"))
###Output
_____no_output_____
###Markdown
"Nono ... has some missiles"This states the existence of some missile which is owned by Nono. $\exists x \text{Owns}(\text{Nono}, x) \land \text{Missile}(x)$. We invoke existential instantiation to introduce a new constant `M1` which is the missile owned by Nono.$\text{Owns}(\text{Nono}, \text{M1}), \text{Missile}(\text{M1})$
###Code
clauses.append(expr("Owns(Nono, M1)"))
clauses.append(expr("Missile(M1)"))
###Output
_____no_output_____
###Markdown
"All of its missiles were sold to it by Colonel West"If Nono owns something and it classifies as a missile, then it was sold to Nono by West.$\text{Missile}(x) \land \text{Owns}(\text{Nono}, x) \implies \text{Sells}(\text{West}, x, \text{Nono})$
###Code
clauses.append(expr("(Missile(x) & Owns(Nono, x)) ==> Sells(West, x, Nono)"))
###Output
_____no_output_____
###Markdown
"West, who is American"West is an American.$\text{American}(\text{West})$
###Code
clauses.append(expr("American(West)"))
###Output
_____no_output_____
###Markdown
We also know, from our understanding of language, that missiles are weapons and that an enemy of America counts as “hostile”.$\text{Missile}(x) \implies \text{Weapon}(x), \text{Enemy}(x, \text{America}) \implies \text{Hostile}(x)$
###Code
clauses.append(expr("Missile(x) ==> Weapon(x)"))
clauses.append(expr("Enemy(x, America) ==> Hostile(x)"))
###Output
_____no_output_____
###Markdown
Now that we have converted the information into first-order definite clauses we can create our first-order logic knowledge base.
###Code
crime_kb = FolKB(clauses)
###Output
_____no_output_____
###Markdown
Inference in First-Order LogicIn this section we look at a forward chaining and a backward chaining algorithm for `FolKB`. Both aforementioned algorithms rely on a process called unification, a key component of all first-order inference algorithms. UnificationWe sometimes require finding substitutions that make different logical expressions look identical. This process, called unification, is done by the `unify` algorithm. It takes as input two sentences and returns a unifier for them if one exists. A unifier is a dictionary which stores the substitutions required to make the two sentences identical. It does so by recursively unifying the components of a sentence, where the unification of a variable symbol `var` with a constant symbol `Const` is the mapping `{var: Const}`. Let's look at a few examples.
###Code
unify(expr('x'), 3)
unify(expr('A(x)'), expr('A(B)'))
unify(expr('Cat(x) & Dog(Dobby)'), expr('Cat(Bella) & Dog(y)'))
###Output
_____no_output_____
###Markdown
In cases where there is no possible substitution that unifies the two sentences, the function returns `None`.
###Code
print(unify(expr('Cat(x)'), expr('Dog(Dobby)')))
###Output
None
###Markdown
We also need to take care that we do not unintentionally use the same variable name in both sentences. `unify` treats them as a single variable, which prevents it from taking multiple values.
###Code
print(unify(expr('Cat(x) & Dog(Dobby)'), expr('Cat(Bella) & Dog(x)')))
###Output
None
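###Markdown
Once a unifier is found, `subst` applies it; under the substitution from the earlier successful cat/dog example, both sentences become identical (a quick sketch):
###Code
s = unify(expr('Cat(x) & Dog(Dobby)'), expr('Cat(Bella) & Dog(y)'))
print(subst(s, expr('Cat(x) & Dog(Dobby)')))
print(subst(s, expr('Cat(Bella) & Dog(y)')))
###Output
_____no_output_____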
###Markdown
Forward Chaining AlgorithmWe consider the simple forward-chaining algorithm presented in Figure 9.3. We look at each rule in the knowledge base and see if the premises can be satisfied. This is done by finding a substitution which unifies each of the premises with a clause in the `KB`. If we are able to unify the premises, the conclusion (with the corresponding substitution) is added to the `KB`. This inference process is repeated until either the query can be answered or until no new sentences can be added. We test if the newly added clause unifies with the query, in which case the substitution yielded by `unify` is an answer to the query. If we run out of sentences to infer, this means the query was a failure.The function `fol_fc_ask` is a generator which yields all substitutions which validate the query.
###Code
%psource fol_fc_ask
###Output
_____no_output_____
###Markdown
Let's find out all the hostile nations. Note that we only told the `KB` that Nono was an enemy of America, not that it was hostile.
###Code
answer = fol_fc_ask(crime_kb, expr('Hostile(x)'))
print(list(answer))
###Output
[{x: Nono}]
###Markdown
The generator returned a single substitution which says that Nono is a hostile nation. See how after adding another enemy nation the generator returns two substitutions.
###Code
crime_kb.tell(expr('Enemy(JaJa, America)'))
answer = fol_fc_ask(crime_kb, expr('Hostile(x)'))
print(list(answer))
###Output
[{x: Nono}, {x: JaJa}]
###Markdown
Note: `fol_fc_ask` makes changes to the `KB` by adding sentences to it. Backward Chaining AlgorithmThis algorithm works backward from the goal, chaining through rules to find known facts that support the proof. Suppose `goal` is the query we want to find the substitution for. We find rules of the form $\text{lhs} \implies \text{goal}$ in the `KB` and try to prove `lhs`. There may be multiple clauses in the `KB` which give multiple `lhs`. It is sufficient to prove only one of these. But to prove a `lhs` all the conjuncts in the `lhs` of the clause must be proved. This makes it similar to And/Or search. ORThe OR part of the algorithm comes from our choice to select any clause of the form $\text{lhs} \implies \text{goal}$. Looking at all rules' `lhs` whose `rhs` unify with the `goal`, we yield a substitution which proves all the conjuncts in the `lhs`. We use `parse_definite_clause` to attain `lhs` and `rhs` from a clause of the form $\text{lhs} \implies \text{rhs}$. For atomic facts the `lhs` is an empty list.
###Code
%psource fol_bc_or
###Output
_____no_output_____
###Markdown
ANDThe AND corresponds to proving all the conjuncts in the `lhs`. We need to find a substitution which proves each and every clause in the list of conjuncts.
###Code
%psource fol_bc_and
###Output
_____no_output_____
###Markdown
Now the main function `fol_bc_ask` calls `fol_bc_or` with the substitution initialized as empty. The `ask` method of `FolKB` uses `fol_bc_ask` and fetches the first substitution returned by the generator to answer the query. Let's query the knowledge base we created from `clauses` to find hostile nations.
###Code
# Rebuild KB because running fol_fc_ask would add new facts to the KB
crime_kb = FolKB(clauses)
crime_kb.ask(expr('Hostile(x)'))
###Output
_____no_output_____
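###Markdown
We can also ask the canonical query for this knowledge base directly:
###Code
crime_kb.ask(expr('Criminal(x)'))
###Output
_____no_output_____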
###Markdown
You may notice some new variables in the substitution. They are introduced to standardize the variable names to prevent naming problems as discussed in the [Unification section](Unification) Appendix: The Implementation of `|'==>'|`Consider the `Expr` formed by this syntax:
###Code
P |'==>'| ~Q
###Output
_____no_output_____
###Markdown
What is the funny `|'==>'|` syntax? The trick is that "`|`" is just the regular Python or-operator, and so is exactly equivalent to this:
###Code
(P | '==>') | ~Q
###Output
_____no_output_____
###Markdown
In other words, there are two applications of or-operators. Here's the first one:
###Code
P | '==>'
###Output
_____no_output_____
###Markdown
What is going on here is that the `__or__` method of `Expr` serves a dual purpose. If the right-hand-side is another `Expr` (or a number), then the result is an `Expr`, as in `(P | Q)`. But if the right-hand-side is a string, then the string is taken to be an operator, and we create a node in the abstract syntax tree corresponding to a partially-filled `Expr`, one where we know the left-hand-side is `P` and the operator is `==>`, but we don't yet know the right-hand-side.The `PartialExpr` class has an `__or__` method that says to create an `Expr` node with the right-hand-side filled in. Here we can see the combination of the `PartialExpr` with `Q` to create a complete `Expr`:
###Code
partial = PartialExpr('==>', P)
partial | ~Q
###Output
_____no_output_____
###Markdown
This [trick](http://code.activestate.com/recipes/384122-infix-operators/) is due to [Ferdinand Jamitzky](http://code.activestate.com/recipes/users/98863/), with a modification by [C. G. Vedant](https://github.com/Chipe1),who suggested using a string inside the or-bars. Appendix: The Implementation of `expr`How does `expr` parse a string into an `Expr`? It turns out there are two tricks (besides the Jamitzky/Vedant trick):1. We do a string substitution, replacing "`==>`" with "`|'==>'|`" (and likewise for other operators).2. We `eval` the resulting string in an environment in which every identifieris bound to a symbol with that identifier as the `op`.In other words,
###Code
expr('~(P & Q) ==> (~P | ~Q)')
###Output
_____no_output_____
###Markdown
is equivalent to doing:
###Code
P, Q = symbols('P, Q')
~(P & Q) |'==>'| (~P | ~Q)
###Output
_____no_output_____
###Markdown
One thing to beware of: this puts `==>` at the same precedence level as `"|"`, which is not quite right. For example, we get this:
###Code
P & Q |'==>'| P | Q
###Output
_____no_output_____
###Markdown
which is probably not what we meant; when in doubt, put in extra parens:
###Code
(P & Q) |'==>'| (P | Q)
###Output
_____no_output_____
###Markdown
Examples
###Code
from notebook import Canvas_fol_bc_ask
canvas_bc_ask = Canvas_fol_bc_ask('canvas_bc_ask', crime_kb, expr('Criminal(x)'))
###Output
_____no_output_____
###Markdown
Logic This Jupyter notebook acts as supporting material for topics covered in __Chapter 6 Logical Agents__, __Chapter 7 First-Order Logic__ and __Chapter 8 Inference in First-Order Logic__ of the book *[Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu)*. We make use of the implementations in the [logic.py](https://github.com/aimacode/aima-python/blob/master/logic.py) module. See the [intro notebook](https://github.com/aimacode/aima-python/blob/master/intro.ipynb) for instructions.Let's first import everything from the `logic` module.
###Code
from utils import *
from logic import *
from notebook import psource
###Output
_____no_output_____
###Markdown
CONTENTS- Logical sentences - Expr - PropKB - Knowledge-based agents - Inference in propositional knowledge base - Truth table enumeration - Proof by resolution - Forward and backward chaining - DPLL - WalkSAT - SATPlan - FolKB - Inference in first order knowledge base - Unification - Forward chaining algorithm - Backward chaining algorithm Logical Sentences The `Expr` class is designed to represent any kind of mathematical expression. The simplest type of `Expr` is a symbol, which can be defined with the function `Symbol`:
###Code
Symbol('x')
###Output
_____no_output_____
###Markdown
Or we can define multiple symbols at the same time with the function `symbols`:
###Code
(x, y, P, Q, f) = symbols('x, y, P, Q, f')
###Output
_____no_output_____
###Markdown
We can combine `Expr`s with the regular Python infix and prefix operators. Here's how we would form the logical sentence "P and not Q":
###Code
P & ~Q
###Output
_____no_output_____
###Markdown
This works because the `Expr` class overloads the `&` operator with this definition: `def __and__(self, other): return Expr('&', self, other)`, and does similar overloads for the other operators. An `Expr` has two fields: `op` for the operator, which is always a string, and `args` for the arguments, which is a tuple of 0 or more expressions. By "expression," I mean either an instance of `Expr`, or a number. Let's take a look at the fields for some `Expr` examples:
###Code
sentence = P & ~Q
sentence.op
sentence.args
P.op
P.args
Pxy = P(x, y)
Pxy.op
Pxy.args
###Output
_____no_output_____
###Markdown
It is important to note that the `Expr` class does not define the *logic* of Propositional Logic sentences; it just gives you a way to *represent* expressions. Think of an `Expr` as an [abstract syntax tree](https://en.wikipedia.org/wiki/Abstract_syntax_tree). Each of the `args` in an `Expr` can be either a symbol, a number, or a nested `Expr`. We can nest these trees to any depth. Here is a deeply nested `Expr`:
###Code
3 * f(x, y) + P(y) / 2 + 1
###Output
_____no_output_____
###Markdown
Operators for Constructing Logical SentencesHere is a table of the operators that can be used to form sentences. Note that we have a problem: we want to use Python operators to make sentences, so that our programs (and our interactive sessions like the one here) will show simple code. But Python does not allow implication arrows as operators, so for now we have to use a more verbose notation that Python does allow: `|'==>'|` instead of just `==>`. Alternately, you can always use the more verbose `Expr` constructor forms:| Operation | Book | Python Infix Input | Python Output | Python `Expr` Input|--------------------------|----------------------|-------------------------|---|---|| Negation | ¬ P | `~P` | `~P` | `Expr('~', P)`| And | P ∧ Q | `P & Q` | `P & Q` | `Expr('&', P, Q)`| Or | P ∨ Q | `P` &#124; `Q`| `P` &#124; `Q` | `Expr('`&#124;`', P, Q)`| Inequality (Xor) | P ≠ Q | `P ^ Q` | `P ^ Q` | `Expr('^', P, Q)`| Implication | P → Q | `P` &#124;`'==>'`&#124; `Q` | `P ==> Q` | `Expr('==>', P, Q)`| Reverse Implication | Q ← P | `Q` &#124;`'<=='`&#124; `P` | `Q <== P` | `Expr('<==', Q, P)`| Equivalence | P ↔ Q | `P` &#124;`'<=>'`&#124; `Q` | `P <=> Q` | `Expr('<=>', P, Q)`Here's an example of defining a sentence with an implication arrow:
###Code
~(P & Q) |'==>'| (~P | ~Q)
###Output
_____no_output_____
###Markdown
`expr`: a Shortcut for Constructing SentencesIf the `|'==>'|` notation looks ugly to you, you can use the function `expr` instead:
###Code
expr('~(P & Q) ==> (~P | ~Q)')
###Output
_____no_output_____
###Markdown
`expr` takes a string as input, and parses it into an `Expr`. The string can contain arrow operators: `==>`, `<==`, or `<=>`, which are handled as if they were regular Python infix operators. And `expr` automatically defines any symbols, so you don't need to pre-define them:
###Code
expr('sqrt(b ** 2 - 4 * a * c)')
###Output
_____no_output_____
###Markdown
For now that's all you need to know about `expr`. If you are interested, we explain the messy details of how `expr` is implemented and how `|'==>'|` is handled in the appendix. Propositional Knowledge Bases: `PropKB`The class `PropKB` can be used to represent a knowledge base of propositional logic sentences.We see that the class `KB` has four methods, apart from `__init__`. A point to note here: the `ask` method simply calls the `ask_generator` method. Thus, this one has already been implemented, and what you'll have to actually implement when you create your own knowledge base class (though you'll probably never need to, considering the ones we've created for you) will be the `ask_generator` function and not the `ask` function itself.The class `PropKB` now.* `__init__(self, sentence=None)` : The constructor `__init__` creates a single field `clauses` which will be a list of all the sentences of the knowledge base. Note that each one of these sentences will be a 'clause' i.e. a sentence which is made up of only literals and `or`s.* `tell(self, sentence)` : When you want to add a sentence to the KB, you use the `tell` method. This method takes a sentence, converts it to its CNF, extracts all the clauses, and adds all these clauses to the `clauses` field. So, you need not worry about `tell`ing only clauses to the knowledge base. You can `tell` the knowledge base a sentence in any form that you wish; converting it to CNF and adding the resulting clauses will be handled by the `tell` method.* `ask_generator(self, query)` : The `ask_generator` function is used by the `ask` function. It calls the `tt_entails` function, which in turn returns `True` if the knowledge base entails query and `False` otherwise. The `ask_generator` itself returns an empty dict `{}` if the knowledge base entails query and `None` otherwise. This might seem a little bit weird to you. After all, it makes more sense just to return a `True` or a `False` instead of the `{}` or `None` But this is done to maintain consistency with the way things are in First-Order Logic, where an `ask_generator` function is supposed to return all the substitutions that make the query true. Hence the dict, to return all these substitutions. I will be mostly be using the `ask` function which returns a `{}` or a `False`, but if you don't like this, you can always use the `ask_if_true` function which returns a `True` or a `False`.* `retract(self, sentence)` : This function removes all the clauses of the sentence given, from the knowledge base. Like the `tell` function, you don't have to pass clauses to remove them from the knowledge base; any sentence will do fine. The function will take care of converting that sentence to clauses and then remove those. Wumpus World KBLet us create a `PropKB` for the wumpus world with the sentences mentioned in `section 7.4.3`.
###Code
wumpus_kb = PropKB()
###Output
_____no_output_____
###Markdown
We define the symbols we use in our clauses.

$P_{x, y}$ is true if there is a pit in `[x, y]`.

$B_{x, y}$ is true if the agent senses breeze in `[x, y]`.
###Code
P11, P12, P21, P22, P31, B11, B21 = expr('P11, P12, P21, P22, P31, B11, B21')
###Output
_____no_output_____
###Markdown
Now we tell sentences based on `section 7.4.3`.

There is no pit in `[1,1]`.
###Code
wumpus_kb.tell(~P11)
###Output
_____no_output_____
###Markdown
A square is breezy if and only if there is a pit in a neighboring square. This has to be stated for each square, but for now we include just the relevant squares.
###Code
wumpus_kb.tell(B11 | '<=>' | ((P12 | P21)))
wumpus_kb.tell(B21 | '<=>' | ((P11 | P22 | P31)))
###Output
_____no_output_____
###Markdown
Now we include the breeze percepts for the first two squares, leading up to the situation in `Figure 7.3(b)`.
###Code
wumpus_kb.tell(~B11)
wumpus_kb.tell(B21)
###Output
_____no_output_____
###Markdown
We can check the clauses stored in a `KB` by accessing its `clauses` variable.
###Code
wumpus_kb.clauses
###Output
_____no_output_____
###Markdown
We see that the equivalence $B_{1, 1} \iff (P_{1, 2} \lor P_{2, 1})$ was automatically converted to two implications, which were in turn converted to CNF, which is what is stored in the `KB`.

$B_{1, 1} \iff (P_{1, 2} \lor P_{2, 1})$ was split into $B_{1, 1} \implies (P_{1, 2} \lor P_{2, 1})$ and $B_{1, 1} \Longleftarrow (P_{1, 2} \lor P_{2, 1})$.

$B_{1, 1} \implies (P_{1, 2} \lor P_{2, 1})$ was converted to $P_{1, 2} \lor P_{2, 1} \lor \neg B_{1, 1}$.

$B_{1, 1} \Longleftarrow (P_{1, 2} \lor P_{2, 1})$ was converted to $\neg (P_{1, 2} \lor P_{2, 1}) \lor B_{1, 1}$, which becomes $(\neg P_{1, 2} \lor B_{1, 1}) \land (\neg P_{2, 1} \lor B_{1, 1})$ after applying De Morgan's laws and distributing the disjunction.

$B_{2, 1} \iff (P_{1, 1} \lor P_{2, 2} \lor P_{3, 1})$ is converted in a similar manner.

Knowledge-based agents

A knowledge-based agent is a simple generic agent that maintains and handles a knowledge base. The knowledge base may initially contain some background knowledge.

The purpose of a KB agent is to provide a level of abstraction over knowledge-base manipulation, and it is meant to be used as a base class for agents that work on a knowledge base.

Given a percept, the KB agent adds the percept to its knowledge base, asks the knowledge base for the best action, and tells the knowledge base that it has in fact taken that action.

Our implementation of `KB-Agent` is encapsulated in the class `KB_AgentProgram`, which inherits from the `KB` class. Let's have a look.
###Code
psource(KB_AgentProgram)
###Output
_____no_output_____
###Markdown
The helper functions `make_percept_sentence`, `make_action_query` and `make_action_sentence` are all aptly named and work as expected: `make_percept_sentence` makes first-order logic sentences about percepts we want our agent to receive, `make_action_query` asks the underlying `KB` about the action that should be taken, and `make_action_sentence` tells the underlying `KB` about the action it has just taken.

Inference in Propositional Knowledge Base

In this section we will look at two algorithms to check if a sentence is entailed by the `KB`. Our goal is to decide whether $\text{KB} \vDash \alpha$ for some sentence $\alpha$.

Truth Table Enumeration

It is a model-checking approach which, as the name suggests, enumerates all possible models in which the `KB` is true and checks if $\alpha$ is also true in these models. We list the $n$ symbols in the `KB`, enumerate the $2^{n}$ models in a depth-first manner, and check the truth of `KB` and $\alpha$ in each.
###Code
psource(tt_check_all)
###Output
_____no_output_____
###Markdown
The algorithm basically computes every line of the truth table for $KB\implies \alpha$ and checks that it is true everywhere.

If symbols are defined, the routine recursively constructs every combination of truth values for the symbols and then checks whether `model` is consistent with `kb`. The given models correspond to the lines in the truth table which have a `true` in the KB column, and for these lines it checks whether the query evaluates to true: `result = pl_true(alpha, model)`.

In short, `tt_check_all` evaluates this logical expression for each `model`:

`pl_true(kb, model) => pl_true(alpha, model)`

Entailment fails exactly when some model makes `pl_true(kb, model) & ~pl_true(alpha, model)` true; that is, the knowledge base entails the query precisely when the knowledge base and the negation of the query are jointly unsatisfiable.

`tt_entails()` just extracts the symbols from the query and calls `tt_check_all()` with the proper parameters.
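To make the enumeration concrete, here is a minimal non-recursive sketch of the same check (`tt_entails_sketch` is our own throwaway name, not part of `logic.py`; it relies on the library's `prop_symbols` and `pl_true`):

```python
from itertools import product

def tt_entails_sketch(kb, alpha):
    """Illustrative only: enumerate all 2^n models explicitly instead of
    the library's recursive depth-first enumeration."""
    syms = list(prop_symbols(kb & alpha))
    for values in product([True, False], repeat=len(syms)):
        model = dict(zip(syms, values))
        if pl_true(kb, model) and not pl_true(alpha, model):
            return False  # found a model of the KB in which alpha is false
    return True  # alpha is true in every model of the KB

# tt_entails_sketch(P & Q, Q)  # => True, matching tt_entails(P & Q, Q)
```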
###Code
psource(tt_entails)
###Output
_____no_output_____
###Markdown
Keep in mind that for two symbols P and Q, P => Q is false only when P is `True` and Q is `False`.

Example usage of `tt_entails()`:
###Code
tt_entails(P & Q, Q)
###Output
_____no_output_____
###Markdown
P & Q is True only when both P and Q are True. Hence, (P & Q) => Q is True.
###Code
tt_entails(P | Q, Q)
tt_entails(P | Q, P)
###Output
_____no_output_____
###Markdown
If we know that P | Q is true, we cannot infer the truth values of P and Q individually. Hence (P | Q) => Q is not valid, and neither is (P | Q) => P, so both calls return `False`.
###Code
(A, B, C, D, E, F, G) = symbols('A, B, C, D, E, F, G')
tt_entails(A & (B | C) & D & E & ~(F | G), A & D & E & ~F & ~G)
###Output
_____no_output_____
###Markdown
We can see that for the KB to be true, A, D, and E have to be True, and F and G have to be False. Nothing can be said about B or C.

Coming back to our problem, note that `tt_entails()` takes an `Expr` which is a conjunction of clauses as the input instead of the `KB` itself. You can use the `ask_if_true()` method of `PropKB`, which does all the required conversions. Let's check what `wumpus_kb` tells us about $P_{1, 1}$.
###Code
wumpus_kb.ask_if_true(~P11), wumpus_kb.ask_if_true(P11)
###Output
_____no_output_____
###Markdown
Looking at Figure 7.9 we see that in all models in which the knowledge base is `True`, $P_{1, 1}$ is `False`. It makes sense that `ask_if_true()` returns `True` for $\alpha = \neg P_{1, 1}$ and `False` for $\alpha = P_{1, 1}$. This begs the question: what if $\alpha$ is `True` in only a portion of all models? Do we return `True` or `False`? This doesn't rule out the possibility of $\alpha$ being `True`, but $\alpha$ is not entailed by the `KB`, so we return `False` in such cases. We can see that this is the case for $P_{2, 2}$ and $P_{3, 1}$.
###Code
wumpus_kb.ask_if_true(~P22), wumpus_kb.ask_if_true(P22)
###Output
_____no_output_____
###Markdown
Proof by Resolution

Recall that our goal is to check whether $\text{KB} \vDash \alpha$, i.e. whether $\text{KB} \implies \alpha$ is true in every model. Suppose we wanted to check if $P \implies Q$ is valid. We check the satisfiability of $\neg (P \implies Q)$, which can be rewritten as $P \land \neg Q$. If $P \land \neg Q$ is unsatisfiable, then $P \implies Q$ must be true in all models. This gives us the result "$\text{KB} \vDash \alpha$ if and only if $\text{KB} \land \neg \alpha$ is unsatisfiable".

This technique corresponds to proof by contradiction, a standard mathematical proof technique. We assume $\alpha$ to be false and show that this leads to a contradiction with known axioms in $\text{KB}$. We obtain a contradiction by making valid inferences using inference rules. In this proof we use a single inference rule, resolution, which states $(l_1 \lor \dots \lor l_k) \land (m_1 \lor \dots \lor m_n) \land (l_i \iff \neg m_j) \implies l_1 \lor \dots \lor l_{i - 1} \lor l_{i + 1} \lor \dots \lor l_k \lor m_1 \lor \dots \lor m_{j - 1} \lor m_{j + 1} \lor \dots \lor m_n$. For instance, resolving $(P \lor Q)$ with $(\neg P \lor R)$ on the complementary pair $P$, $\neg P$ yields $(Q \lor R)$. Applying resolution yields a clause which we add to the KB. We keep doing this until:

* There are no new clauses that can be added, in which case $\text{KB} \nvDash \alpha$.
* Two clauses resolve to yield the empty clause, in which case $\text{KB} \vDash \alpha$.

The empty clause is equivalent to False because it arises only from resolving two complementary unit clauses such as $P$ and $\neg P$, which is a contradiction as both $P$ and $\neg P$ can't be True at the same time.

There is one catch, however: the algorithm that implements proof by resolution cannot handle complex sentences. Implications and bi-implications have to be simplified into simpler clauses. We already know that *every sentence of propositional logic is logically equivalent to a conjunction of clauses*. We will use this fact to our advantage and simplify the input sentence into **conjunctive normal form** (CNF), which is a conjunction of disjunctions of literals. For example:

$$(A\lor B)\land (\neg B\lor C\lor\neg D)\land (D\lor\neg E)$$

This is equivalent to the POS (product of sums) form in digital electronics.

Here's an outline of how the conversion is done:

1. Convert bi-implications to implications. $\alpha\iff\beta$ can be written as $(\alpha\implies\beta)\land(\beta\implies\alpha)$. This also applies to compound sentences: $\alpha\iff(\beta\lor\gamma)$ can be written as $(\alpha\implies(\beta\lor\gamma))\land((\beta\lor\gamma)\implies\alpha)$.
2. Convert implications to their logical equivalents. $\alpha\implies\beta$ can be written as $\neg\alpha\lor\beta$.
3. Move negation inwards. CNF requires atomic literals, so negation cannot appear on a compound statement. De Morgan's laws are helpful here: $\neg(\alpha\land\beta)\equiv(\neg\alpha\lor\neg\beta)$ and $\neg(\alpha\lor\beta)\equiv(\neg\alpha\land\neg\beta)$.
4. Distribute disjunction over conjunction. Disjunction and conjunction are distributive over each other. Now that we only have conjunctions, disjunctions and negations in our expression, we distribute disjunctions over conjunctions wherever possible, as this gives us a sentence which is a conjunction of simpler clauses, which is what we wanted in the first place. We need a term of the form $(\alpha_{1}\lor\alpha_{2}\lor\alpha_{3}...)\land(\beta_{1}\lor\beta_{2}\lor\beta_{3}...)\land(\gamma_{1}\lor\gamma_{2}\lor\gamma_{3}...)\land...$

The `to_cnf` function executes this conversion using helper subroutines.
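Before looking at the implementation, here is the four-step conversion traced by hand on a small illustrative formula, $A \iff (B \lor C)$:

1. Eliminate the bi-implication: $(A \implies (B \lor C)) \land ((B \lor C) \implies A)$
2. Eliminate the implications: $(\neg A \lor B \lor C) \land (\neg(B \lor C) \lor A)$
3. Move negation inwards: $(\neg A \lor B \lor C) \land ((\neg B \land \neg C) \lor A)$
4. Distribute $\lor$ over $\land$: $(\neg A \lor B \lor C) \land (\neg B \lor A) \land (\neg C \lor A)$

The result is a conjunction of three clauses; `to_cnf(A |'<=>'| (B | C))` reproduces it, up to the ordering of literals.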
###Code
psource(to_cnf)
###Output
_____no_output_____
###Markdown
`to_cnf` calls three subroutines.

`eliminate_implications` converts bi-implications and implications to their logical equivalents.

`move_not_inwards` removes negations from compound statements and moves them inwards using De Morgan's laws.

`distribute_and_over_or` distributes disjunctions over conjunctions.

Run the cell below for implementation details.
###Code
psource(eliminate_implications)
psource(move_not_inwards)
psource(distribute_and_over_or)
###Output
_____no_output_____
###Markdown
Let's convert some sentences to see how it works.
###Code
A, B, C, D = expr('A, B, C, D')
to_cnf(A |'<=>'| B)
to_cnf(A |'<=>'| (B & C))
to_cnf(A & (B | (C & D)))
to_cnf((A |'<=>'| ~B) |'==>'| (C | ~D))
###Output
_____no_output_____
###Markdown
Coming back to our resolution problem, we can see how the `to_cnf` function is utilized here:
###Code
psource(pl_resolution)
pl_resolution(wumpus_kb, ~P11), pl_resolution(wumpus_kb, P11)
pl_resolution(wumpus_kb, ~P22), pl_resolution(wumpus_kb, P22)
###Output
_____no_output_____
###Markdown
Forward and backward chaining

Previously, we said we would look at two algorithms to check if a sentence is entailed by the `KB`. Here's a third one. The difference here is that our goal now is to determine if a knowledge base of definite clauses entails a single proposition symbol *q*, the query. There is a catch, however: the knowledge base can only contain **Horn clauses**.

Horn Clauses

Horn clauses can be defined as a *disjunction* of *literals* with **at most** one positive literal. A Horn clause with exactly one positive literal is called a *definite clause*.

A Horn clause might look like $\neg a\lor\neg b\lor\neg c\lor\neg d... \lor z$. This, coincidentally, is also a definite clause. Using De Morgan's laws, the example above can be rewritten as $a\land b\land c\land d ... \implies z$. This resembles how humans process known data and facts: assuming percepts `a`, `b`, `c`, `d` ... to be true simultaneously, we can infer `z` to also be true at that point in time.

There are some interesting aspects of Horn clauses that make algorithmic inference or *resolution* easier.

- Definite clauses can be written as implications. The most important simplification a definite clause provides is that it can be written as an implication, as in the rewriting above (a quick check in code follows below). The premise (the knowledge that leads to the implication) is a conjunction of positive literals, and the conclusion (the implied statement) is a single positive literal, so the sentence becomes easier to understand. The premise and the conclusion are conventionally called the *body* and the *head* respectively. A single positive literal is called a *fact*.
- Forward chaining and backward chaining can be used for inference from Horn clauses. Forward chaining is semantically identical to `AND-OR-Graph-Search` from the chapter on search algorithms. Implementation details will be explained shortly.
- Deciding entailment with Horn clauses is linear in the size of the knowledge base. Surprisingly, the forward and backward chaining algorithms traverse each element of the knowledge base at most once, greatly simplifying the problem.

The function `pl_fc_entails` implements forward chaining to see if a knowledge base `KB` entails a symbol `q`. Before we proceed further, note that `pl_fc_entails` doesn't use an ordinary `KB` instance. The knowledge base here is an instance of the `PropDefiniteKB` class, derived from the `PropKB` class, but modified to store definite clauses. The main point of difference is the inclusion of a helper method in `PropDefiniteKB` that returns a list of clauses in the KB that have a given symbol `p` in their premise.
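Before looking at that helper, here is the quick check promised above that a definite clause's implication form and its disjunction form coincide (the symbols are purely illustrative):

```python
to_cnf(expr('(A & B & C) ==> Z'))
# => a single Horn/definite clause such as (~A | ~B | ~C | Z), up to literal order
```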
###Code
psource(PropDefiniteKB.clauses_with_premise)
###Output
_____no_output_____
###Markdown
Let's now have a look at the `pl_fc_entails` algorithm.
###Code
psource(pl_fc_entails)
###Output
_____no_output_____
###Markdown
The function accepts a knowledge base `KB` (an instance of `PropDefiniteKB`) and a query `q` as inputs.

`count` initially stores the number of symbols in the premise of each sentence in the knowledge base. The `conjuncts` helper function separates a given sentence at conjunctions. `inferred` is initialized as a *boolean* defaultdict; this will be used later to check if we have inferred all premises of each clause of the agenda. `agenda` initially stores a list of clauses that the knowledge base knows to be true. The `is_prop_symbol` helper function checks if the given symbol is a valid propositional logic symbol.

We now iterate through `agenda`, popping a symbol `p` on each iteration. If the query `q` is the same as `p`, we know that entailment holds. The agenda is processed, reducing `count` by one for each implication with a premise `p`. A conclusion is added to the agenda when `count` reaches zero, which means we know all the premises of that particular implication to be true. `clauses_with_premise` is a helpful method of the `PropDefiniteKB` class; it returns a list of clauses in the knowledge base that have `p` in their premise.

Now that we have an idea of how this function works, let's see a few examples of its usage, but we first need to define our knowledge base. We assume we know the following clauses to be true.
###Code
clauses = ['(B & F)==>E',
'(A & E & F)==>G',
'(B & C)==>F',
'(A & B)==>D',
'(E & F)==>H',
'(H & I)==>J',
'A',
'B',
'C']
###Output
_____no_output_____
###Markdown
We will now `tell` this information to our knowledge base.
###Code
definite_clauses_KB = PropDefiniteKB()
for clause in clauses:
definite_clauses_KB.tell(expr(clause))
###Output
_____no_output_____
###Markdown
We can now check if our knowledge base entails the following queries.
###Code
pl_fc_entails(definite_clauses_KB, expr('G'))
pl_fc_entails(definite_clauses_KB, expr('H'))
pl_fc_entails(definite_clauses_KB, expr('I'))
pl_fc_entails(definite_clauses_KB, expr('J'))
###Output
_____no_output_____
###Markdown
As expected, the first two queries return `True` and the last two return `False`: from the facts A, B and C, forward chaining derives F (from B ∧ C), then E (from B ∧ F), and finally G and H; I is never inferred, so J, which needs both H and I, is not entailed.

Effective Propositional Model Checking

The previous segments elucidate the algorithmic procedure for model checking. In this segment, we look at ways of making it computationally efficient.

The problem we are trying to solve is conventionally called the _propositional satisfiability problem_, abbreviated as the _SAT_ problem. In layman's terms, if there exists a model that satisfies a given Boolean formula, the formula is called satisfiable. The SAT problem was the first problem to be proven _NP-complete_. The main characteristics of an NP-complete problem are:

- Given a solution to such a problem, it is easy to verify if the solution solves the problem.
- The time required to actually solve the problem using any known algorithm increases exponentially with respect to the size of the problem.

Due to these properties, heuristic and approximation methods are often applied to find solutions to these problems. It is extremely important to be able to solve large-scale SAT problems efficiently because many combinatorial problems in computer science can be conveniently reduced to checking the satisfiability of a propositional sentence under some constraints.

We will introduce two new algorithms that perform propositional model checking in a computationally effective way.

1. DPLL (Davis-Putnam-Logemann-Loveland) algorithm

This algorithm is very similar to Backtracking-Search. It recursively enumerates possible models in a depth-first fashion with the following improvements over algorithms like `tt_entails`:

1. Early termination: In certain cases, the algorithm can detect the truth value of a statement using just a partially completed model. For example, $(P\lor Q)\land(P\lor R)$ is true if P is true, regardless of other variables. This reduces the search space significantly.
2. Pure symbol heuristic: A symbol that has the same sign (positive or negative) in all clauses is called a _pure symbol_. It isn't difficult to see that any satisfiable model will have the pure symbols assigned such that their clauses become _true_. For example, $(P\lor\neg Q)\land(\neg Q\lor\neg R)\land(R\lor P)$ has P and Q as pure symbols, and for the sentence to be true, P _has_ to be true and Q _has_ to be false. The pure symbol heuristic thus simplifies the problem a bit (a sketch of this test follows below).
3. Unit clause heuristic: In the context of DPLL, clauses with just one literal and clauses in which all literals but one are _false_ are called unit clauses. If a clause is a unit clause, it can only be satisfied by assigning the necessary value to make the last literal true; we have no other choice. Assigning one unit clause can create another unit clause. For example, when P is false, $(P\lor Q)$ becomes a unit clause, causing _true_ to be assigned to Q. A series of forced assignments derived from previous unit clauses is called _unit propagation_. In this way, this heuristic simplifies the problem further.

The algorithm often employs other tricks to scale up to large problems. However, these tricks are currently out of the scope of this notebook. Refer to section 7.6 of the book for more details.

Let's have a look at the algorithm.
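But first, as promised, a minimal sketch of the pure-symbol test (`find_pure_symbol_sketch` is our own name, not the library's real helper, which `psource(dpll)` below reveals; the sketch leans on `disjuncts` from `logic.py`):

```python
def find_pure_symbol_sketch(symbols, clauses):
    """Illustrative only: a symbol is pure if it occurs with only one
    polarity across all clauses."""
    for s in symbols:
        pos = any(s in disjuncts(c) for c in clauses)
        neg = any(~s in disjuncts(c) for c in clauses)
        if pos != neg:
            # Pure: assign True if it only appears positively, else False.
            return s, pos
    return None, None

# e.g. find_pure_symbol_sketch(prop_symbols(s), conjuncts(to_cnf(s)))
```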
###Code
psource(dpll)
###Output
_____no_output_____
###Markdown
The algorithm uses the ideas described above to check the satisfiability of a sentence in propositional logic. It recursively calls itself, simplifying the problem at each step. It also uses the helper functions `find_pure_symbol` and `find_unit_clause` to carry out heuristics 2 and 3 above. The `dpll_satisfiable` helper function converts the input clauses to _conjunctive normal form_ and calls the `dpll` function with the correct parameters.
###Code
psource(dpll_satisfiable)
###Output
_____no_output_____
###Markdown
Let's see a few examples of usage.
###Code
A, B, C, D = expr('A, B, C, D')
dpll_satisfiable(A & B & ~C & D)
###Output
_____no_output_____
###Markdown
This is a simple case to highlight that the algorithm actually works.
###Code
dpll_satisfiable((A & B) | (C & ~A) | (B & ~D))
###Output
_____no_output_____
###Markdown
If a particular symbol isn't present in the solution, it means that the solution is independent of the value of that symbol. In this case, the solution is independent of A.
###Code
dpll_satisfiable(A |'<=>'| B)
dpll_satisfiable((A |'<=>'| B) |'==>'| (C & ~A))
dpll_satisfiable((A | (B & C)) |'<=>'| ((A | B) & (A | C)))
###Output
_____no_output_____
###Markdown
2. WalkSAT algorithm

This algorithm is very similar to hill climbing. On every iteration, the algorithm picks an unsatisfied clause and flips a symbol in the clause. This is similar to finding a neighboring state in the `hill_climbing` algorithm. The symbol to be flipped is decided by an evaluation function that counts the number of unsatisfied clauses. Sometimes, symbols are also flipped randomly to avoid local optima; a subtle balance between greediness and randomness is required. Alternatively, some versions of the algorithm restart with a completely new random assignment if no solution has been found for too long, as a way of escaping local minima in the number of unsatisfied clauses.

Let's have a look at the algorithm.
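But first, to make the flip rule concrete, here is a sketch of a single iteration (a simplified paraphrase under our own names, relying on `pl_true` and `prop_symbols` from `logic.py`; it is not the library's exact code):

```python
import random

def walksat_step_sketch(clauses, model, p=0.5):
    """One illustrative WalkSAT iteration over a complete assignment."""
    unsatisfied = [c for c in clauses if not pl_true(c, model)]
    if not unsatisfied:
        return model                      # already a satisfying assignment
    clause = random.choice(unsatisfied)   # pick an unsatisfied clause
    syms = list(prop_symbols(clause))
    if random.random() < p:
        sym = random.choice(syms)         # random-walk move
    else:
        def satisfied_after_flip(s):
            flipped = dict(model)
            flipped[s] = not flipped[s]
            return sum(pl_true(c, flipped) for c in clauses)
        sym = max(syms, key=satisfied_after_flip)  # greedy move
    model[sym] = not model[sym]
    return model
```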
###Code
psource(WalkSAT)
###Output
_____no_output_____
###Markdown
The function takes three arguments:

1. The `clauses` we want to satisfy.
2. The probability `p` of randomly changing a symbol.
3. The maximum number of flips (`max_flips`) the algorithm will run for. If the clauses are still unsatisfied after this many flips, the algorithm returns `None` to denote failure.

The algorithm is identical in concept to hill climbing and the code isn't difficult to understand. Let's see a few examples of usage.
###Code
A, B, C, D = expr('A, B, C, D')
WalkSAT([A, B, ~C, D], 0.5, 100)
###Output
_____no_output_____
###Markdown
This is a simple case to show that the algorithm converges.
###Code
WalkSAT([A & B, A & C], 0.5, 100)
WalkSAT([A & B, C & D, C & B], 0.5, 100)
WalkSAT([A & B, C | D, ~(D | B)], 0.5, 1000)
###Output
_____no_output_____
###Markdown
This one doesn't produce any output because WalkSAT did not find any model in which these clauses hold. Working the clauses out by hand shows that together they form a contradiction, so there is no solution to find.

One point of difference between this algorithm and `dpll_satisfiable` is the form of the input. For WalkSAT to take complete sentences as input, we can write a helper function that converts the input sentence into conjunctive normal form and then calls WalkSAT with the list of conjuncts of the CNF form of the sentence.
###Code
def WalkSAT_CNF(sentence, p=0.5, max_flips=10000):
    # Convert the sentence to CNF and hand its list of conjuncts to WalkSAT,
    # forwarding the random-flip probability p and the flip budget.
    return WalkSAT(conjuncts(to_cnf(sentence)), p, max_flips)
###Output
_____no_output_____
###Markdown
Now we can call `WalkSAT_CNF` and `dpll_satisfiable` with the same arguments.
###Code
WalkSAT_CNF((A & B) | (C & ~A) | (B & ~D), 0.5, 1000)
###Output
_____no_output_____
###Markdown
It works!

Notice that the solution generated by WalkSAT doesn't omit variables that the sentence doesn't depend upon. If the sentence is independent of a particular variable, the solution contains a random value for that variable because of the stochastic nature of the algorithm.

Let's compare the runtime of WalkSAT and DPLL for a few cases. We will use the `%%timeit` magic to do this.
###Code
sentence_1 = A |'<=>'| B
sentence_2 = (A & B) | (C & ~A) | (B & ~D)
sentence_3 = (A | (B & C)) |'<=>'| ((A | B) & (A | C))
%%timeit
dpll_satisfiable(sentence_1)
dpll_satisfiable(sentence_2)
dpll_satisfiable(sentence_3)
%%timeit
WalkSAT_CNF(sentence_1)
WalkSAT_CNF(sentence_2)
WalkSAT_CNF(sentence_3)
###Output
1.02 ms ± 6.92 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
###Markdown
On average, for solvable cases, `WalkSAT` is quite a bit faster than `dpll` because, for a small number of variables, `WalkSAT` can reduce the search space significantly. Results can be different for sentences with more symbols, though. Feel free to play around with this to understand the trade-offs of these algorithms better.

SATPlan

In this section we show how to make plans by logical inference. The basic idea is very simple and includes the following three steps:

1. Construct a sentence that includes:
   1. A collection of assertions about the initial state.
   2. The successor-state axioms for all the possible actions at each time up to some maximum time t.
   3. The assertion that the goal is achieved at time t.
2. Present the whole sentence to a SAT solver.
3. Assuming a model is found, extract from the model those variables that represent actions and are assigned true. Together they represent a plan to achieve the goals.

Let's have a look at the algorithm.
###Code
psource(SAT_plan)
###Output
_____no_output_____
###Markdown
Let's see a few examples of its usage. First we define a transition and then call `SAT_plan`.
###Code
transition = {'A': {'Left': 'A', 'Right': 'B'},
'B': {'Left': 'A', 'Right': 'C'},
'C': {'Left': 'B', 'Right': 'C'}}
print(SAT_plan('A', transition, 'C', 2))
print(SAT_plan('A', transition, 'B', 3))
print(SAT_plan('C', transition, 'A', 3))
###Output
None
['Right']
['Left', 'Left']
###Markdown
Let us do the same for another transition.
###Code
transition = {(0, 0): {'Right': (0, 1), 'Down': (1, 0)},
(0, 1): {'Left': (1, 0), 'Down': (1, 1)},
(1, 0): {'Right': (1, 0), 'Up': (1, 0), 'Left': (1, 0), 'Down': (1, 0)},
(1, 1): {'Left': (1, 0), 'Up': (0, 1)}}
print(SAT_plan((0, 0), transition, (1, 1), 4))
###Output
['Right', 'Down']
###Markdown
First-Order Logic Knowledge Bases: `FolKB`

The class `FolKB` can be used to represent a knowledge base of first-order logic sentences. You would initialize and use it the same way as you would for `PropKB`, except that the clauses are first-order definite clauses. We will see how to write such clauses to create a knowledge base and query it in the following sections.

Criminal KB

In this section we create a `FolKB` based on the following paragraph.

The law says that it is a crime for an American to sell weapons to hostile nations. The country Nono, an enemy of America, has some missiles, and all of its missiles were sold to it by Colonel West, who is American.

The first step is to extract the facts and convert them into first-order definite clauses. Extracting the facts from data alone is a challenging task. Fortunately, we have a small paragraph and can do the extraction and conversion manually. We'll store the clauses in a list aptly named `clauses`.
###Code
clauses = []
###Output
_____no_output_____
###Markdown
“... it is a crime for an American to sell weapons to hostile nations”

The keywords to look for here are 'crime', 'American', 'sell', 'weapon' and 'hostile'. We use predicate symbols to give them meaning:

* `Criminal(x)`: `x` is a criminal
* `American(x)`: `x` is an American
* `Sells(x, y, z)`: `x` sells `y` to `z`
* `Weapon(x)`: `x` is a weapon
* `Hostile(x)`: `x` is a hostile nation

Let us now combine them with appropriate variable naming to depict the meaning of the sentence. The criminal `x` is also the American `x` who sells weapon `y` to `z`, which is a hostile nation.

$\text{American}(x) \land \text{Weapon}(y) \land \text{Sells}(x, y, z) \land \text{Hostile}(z) \implies \text{Criminal} (x)$
###Code
clauses.append(expr("(American(x) & Weapon(y) & Sells(x, y, z) & Hostile(z)) ==> Criminal(x)"))
###Output
_____no_output_____
###Markdown
"The country Nono, an enemy of America"We now know that Nono is an enemy of America. We represent these nations using the constant symbols `Nono` and `America`. the enemy relation is show using the predicate symbol `Enemy`.$\text{Enemy}(\text{Nono}, \text{America})$
###Code
clauses.append(expr("Enemy(Nono, America)"))
###Output
_____no_output_____
###Markdown
"Nono ... has some missiles"This states the existence of some missile which is owned by Nono. $\exists x \text{Owns}(\text{Nono}, x) \land \text{Missile}(x)$. We invoke existential instantiation to introduce a new constant `M1` which is the missile owned by Nono.$\text{Owns}(\text{Nono}, \text{M1}), \text{Missile}(\text{M1})$
###Code
clauses.append(expr("Owns(Nono, M1)"))
clauses.append(expr("Missile(M1)"))
###Output
_____no_output_____
###Markdown
"All of its missiles were sold to it by Colonel West"If Nono owns something and it classifies as a missile, then it was sold to Nono by West.$\text{Missile}(x) \land \text{Owns}(\text{Nono}, x) \implies \text{Sells}(\text{West}, x, \text{Nono})$
###Code
clauses.append(expr("(Missile(x) & Owns(Nono, x)) ==> Sells(West, x, Nono)"))
###Output
_____no_output_____
###Markdown
"West, who is American"West is an American.$\text{American}(\text{West})$
###Code
clauses.append(expr("American(West)"))
###Output
_____no_output_____
###Markdown
We also know, from our understanding of language, that missiles are weapons and that an enemy of America counts as “hostile”.

$\text{Missile}(x) \implies \text{Weapon}(x), \quad \text{Enemy}(x, \text{America}) \implies \text{Hostile}(x)$
###Code
clauses.append(expr("Missile(x) ==> Weapon(x)"))
clauses.append(expr("Enemy(x, America) ==> Hostile(x)"))
###Output
_____no_output_____
###Markdown
Now that we have converted the information into first-order definite clauses, we can create our first-order logic knowledge base.
###Code
crime_kb = FolKB(clauses)
###Output
_____no_output_____
###Markdown
The `subst` helper function substitutes variables with given values in first-order logic statements. This will be useful in later algorithms. Its implementation is quite simple and self-explanatory.
###Code
psource(subst)
###Output
_____no_output_____
###Markdown
Here's an example of how `subst` can be used.
###Code
subst({x: expr('Nono'), y: expr('M1')}, expr('Owns(x, y)'))
###Output
_____no_output_____
###Markdown
Inference in First-Order Logic

In this section we look at a forward chaining and a backward chaining algorithm for `FolKB`. Both of these algorithms rely on a process called unification, a key component of all first-order inference algorithms.

Unification

We sometimes require finding substitutions that make different logical expressions look identical. This process, called unification, is done by the `unify` algorithm. It takes as input two sentences and returns a unifier for them if one exists. A unifier is a dictionary which stores the substitutions required to make the two sentences identical. It does so by recursively unifying the components of a sentence, where the unification of a variable symbol `var` with a constant symbol `Const` is the mapping `{var: Const}`. Let's look at a few examples.
###Code
unify(expr('x'), 3)
unify(expr('A(x)'), expr('A(B)'))
unify(expr('Cat(x) & Dog(Dobby)'), expr('Cat(Bella) & Dog(y)'))
###Output
_____no_output_____
###Markdown
In cases where there is no possible substitution that unifies the two sentences, the function returns `None`.
###Code
print(unify(expr('Cat(x)'), expr('Dog(Dobby)')))
###Output
None
###Markdown
We also need to take care not to unintentionally reuse a variable name: `unify` treats both occurrences as a single variable, which prevents it from taking two different values.
###Code
print(unify(expr('Cat(x) & Dog(Dobby)'), expr('Cat(Bella) & Dog(x)')))
###Output
None
###Markdown
Forward Chaining Algorithm

We consider the simple forward-chaining algorithm presented in Figure 9.3. We look at each rule in the knowledge base and see if the premises can be satisfied. This is done by finding a substitution which unifies each of the premises with a clause in the `KB`. If we are able to unify the premises, the conclusion (with the corresponding substitution) is added to the `KB`. This inference process is repeated until either the query can be answered or no new sentences can be added. We test if the newly added clause unifies with the query, in which case the substitution yielded by `unify` is an answer to the query. If we run out of sentences to infer, the query is a failure.

The function `fol_fc_ask` is a generator which yields all substitutions which validate the query.
###Code
psource(fol_fc_ask)
###Output
_____no_output_____
###Markdown
Let's find out all the hostile nations. Note that we only told the `KB` that Nono was an enemy of America, not that it was hostile.
###Code
answer = fol_fc_ask(crime_kb, expr('Hostile(x)'))
print(list(answer))
###Output
[{x: Nono}]
###Markdown
The generator returned a single substitution which says that Nono is a hostile nation. See how after adding another enemy nation the generator returns two substitutions.
###Code
crime_kb.tell(expr('Enemy(JaJa, America)'))
answer = fol_fc_ask(crime_kb, expr('Hostile(x)'))
print(list(answer))
###Output
[{x: Nono}, {x: JaJa}]
###Markdown
Note: `fol_fc_ask` makes changes to the `KB` by adding sentences to it.

Backward Chaining Algorithm

This algorithm works backward from the goal, chaining through rules to find known facts that support the proof. Suppose `goal` is the query we want to find the substitution for. We find rules of the form $\text{lhs} \implies \text{goal}$ in the `KB` and try to prove `lhs`. There may be multiple clauses in the `KB` which give multiple `lhs`; it is sufficient to prove only one of these. But to prove a `lhs`, all the conjuncts in the `lhs` of the clause must be proved. This makes it similar to And/Or search.

OR

The OR part of the algorithm comes from our choice to select any clause of the form $\text{lhs} \implies \text{goal}$. Looking at all rules whose `rhs` unifies with the `goal`, we yield a substitution which proves all the conjuncts in the `lhs`. We use `parse_definite_clause` to obtain `lhs` and `rhs` from a clause of the form $\text{lhs} \implies \text{rhs}$. For atomic facts the `lhs` is an empty list.
###Code
psource(fol_bc_or)
###Output
_____no_output_____
###Markdown
AND

The AND part corresponds to proving all the conjuncts in the `lhs`. We need to find a substitution which proves each and every clause in the list of conjuncts.
###Code
psource(fol_bc_and)
###Output
_____no_output_____
###Markdown
Now the main function `fol_bc_ask` calls `fol_bc_or` with the substitution initialized as empty. The `ask` method of `FolKB` uses `fol_bc_ask` and fetches the first substitution returned by the generator to answer the query. Let's query the knowledge base we created from `clauses` to find hostile nations.
###Code
# Rebuild KB because running fol_fc_ask would add new facts to the KB
crime_kb = FolKB(clauses)
crime_kb.ask(expr('Hostile(x)'))
###Output
_____no_output_____
###Markdown
You may notice some new variables in the substitution. They are introduced to standardize the variable names, to prevent naming problems as discussed in the [Unification section](Unification).

Appendix: The Implementation of `|'==>'|`

Consider the `Expr` formed by this syntax:
###Code
P |'==>'| ~Q
###Output
_____no_output_____
###Markdown
What is the funny `|'==>'|` syntax? The trick is that "`|`" is just the regular Python or-operator, so the expression is exactly equivalent to this:
###Code
(P | '==>') | ~Q
###Output
_____no_output_____
###Markdown
In other words, there are two applications of or-operators. Here's the first one:
###Code
P | '==>'
###Output
_____no_output_____
###Markdown
What is going on here is that the `__or__` method of `Expr` serves a dual purpose. If the right-hand-side is another `Expr` (or a number), then the result is an `Expr`, as in `(P | Q)`. But if the right-hand-side is a string, then the string is taken to be an operator, and we create a node in the abstract syntax tree corresponding to a partially-filled `Expr`, one where we know the left-hand-side is `P` and the operator is `==>`, but we don't yet know the right-hand-side.The `PartialExpr` class has an `__or__` method that says to create an `Expr` node with the right-hand-side filled in. Here we can see the combination of the `PartialExpr` with `Q` to create a complete `Expr`:
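The mechanism can be sketched in a few lines (a simplified paraphrase of the idea just described, not the library's exact class):

```python
class PartialExprSketch:
    """Remember an operator and its left-hand side until `| rhs`
    supplies the right-hand side and completes the Expr."""
    def __init__(self, op, lhs):
        self.op, self.lhs = op, lhs

    def __or__(self, rhs):
        # Completing the partially-filled node yields a full Expr.
        return Expr(self.op, self.lhs, rhs)
```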
###Code
partial = PartialExpr('==>', P)
partial | ~Q
###Output
_____no_output_____
###Markdown
Logic This Jupyter notebook acts as supporting material for topics covered in __Chapter 6 Logical Agents__, __Chapter 7 First-Order Logic__ and __Chapter 8 Inference in First-Order Logic__ of the book *[Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu)*. We make use of the implementations in the [logic.py](https://github.com/aimacode/aima-python/blob/master/logic.py) module. See the [intro notebook](https://github.com/aimacode/aima-python/blob/master/intro.ipynb) for instructions.Let's first import everything from the `logic` module.
###Code
from utils import *
from logic import *
from notebook import psource
###Output
_____no_output_____
###Markdown
CONTENTS- Logical sentences - Expr - PropKB - Knowledge-based agents - Inference in propositional knowledge base - Truth table enumeration - Proof by resolution - Forward and backward chaining - DPLL - WalkSAT - SATPlan - FolKB - Inference in first order knowledge base - Unification - Forward chaining algorithm - Backward chaining algorithm Logical Sentences The `Expr` class is designed to represent any kind of mathematical expression. The simplest type of `Expr` is a symbol, which can be defined with the function `Symbol`:
###Code
Symbol('x')
###Output
_____no_output_____
###Markdown
Or we can define multiple symbols at the same time with the function `symbols`:
###Code
(x, y, P, Q, f) = symbols('x, y, P, Q, f')
###Output
_____no_output_____
###Markdown
We can combine `Expr`s with the regular Python infix and prefix operators. Here's how we would form the logical sentence "P and not Q":
###Code
P & ~Q
###Output
_____no_output_____
###Markdown
This works because the `Expr` class overloads the `&` operator with this definition:```pythondef __and__(self, other): return Expr('&', self, other)``` and does similar overloads for the other operators. An `Expr` has two fields: `op` for the operator, which is always a string, and `args` for the arguments, which is a tuple of 0 or more expressions. By "expression," I mean either an instance of `Expr`, or a number. Let's take a look at the fields for some `Expr` examples:
###Code
sentence = P & ~Q
sentence.op
sentence.args
P.op
P.args
Pxy = P(x, y)
Pxy.op
Pxy.args
###Output
_____no_output_____
###Markdown
It is important to note that the `Expr` class does not define the *logic* of Propositional Logic sentences; it just gives you a way to *represent* expressions. Think of an `Expr` as an [abstract syntax tree](https://en.wikipedia.org/wiki/Abstract_syntax_tree). Each of the `args` in an `Expr` can be either a symbol, a number, or a nested `Expr`. We can nest these trees to any depth. Here is a deply nested `Expr`:
###Code
3 * f(x, y) + P(y) / 2 + 1
###Output
_____no_output_____
###Markdown
Operators for Constructing Logical SentencesHere is a table of the operators that can be used to form sentences. Note that we have a problem: we want to use Python operators to make sentences, so that our programs (and our interactive sessions like the one here) will show simple code. But Python does not allow implication arrows as operators, so for now we have to use a more verbose notation that Python does allow: `|'==>'|` instead of just `==>`. Alternately, you can always use the more verbose `Expr` constructor forms:| Operation | Book | Python Infix Input | Python Output | Python `Expr` Input|--------------------------|----------------------|-------------------------|---|---|| Negation | ¬ P | `~P` | `~P` | `Expr('~', P)`| And | P ∧ Q | `P & Q` | `P & Q` | `Expr('&', P, Q)`| Or | P ∨ Q | `P` &124; `Q`| `P` &124; `Q` | `Expr('`&124;`', P, Q)`| Inequality (Xor) | P ≠ Q | `P ^ Q` | `P ^ Q` | `Expr('^', P, Q)`| Implication | P → Q | `P` &124;`'==>'`&124; `Q` | `P ==> Q` | `Expr('==>', P, Q)`| Reverse Implication | Q ← P | `Q` &124;`'&124; `P` |`Q <== P` | `Expr('<==', Q, P)`| Equivalence | P ↔ Q | `P` &124;`''`&124; `Q` |`P Q` | `Expr('', P, Q)`Here's an example of defining a sentence with an implication arrow:
###Code
~(P & Q) |'==>'| (~P | ~Q)
###Output
_____no_output_____
###Markdown
`expr`: a Shortcut for Constructing SentencesIf the `|'==>'|` notation looks ugly to you, you can use the function `expr` instead:
###Code
expr('~(P & Q) ==> (~P | ~Q)')
###Output
_____no_output_____
###Markdown
`expr` takes a string as input, and parses it into an `Expr`. The string can contain arrow operators: `==>`, ``, which are handled as if they were regular Python infix operators. And `expr` automatically defines any symbols, so you don't need to pre-define them:
###Code
expr('sqrt(b ** 2 - 4 * a * c)')
###Output
_____no_output_____
###Markdown
For now that's all you need to know about `expr`. If you are interested, we explain the messy details of how `expr` is implemented and how `|'==>'|` is handled in the appendix. Propositional Knowledge Bases: `PropKB`The class `PropKB` can be used to represent a knowledge base of propositional logic sentences.We see that the class `KB` has four methods, apart from `__init__`. A point to note here: the `ask` method simply calls the `ask_generator` method. Thus, this one has already been implemented, and what you'll have to actually implement when you create your own knowledge base class (though you'll probably never need to, considering the ones we've created for you) will be the `ask_generator` function and not the `ask` function itself.The class `PropKB` now.* `__init__(self, sentence=None)` : The constructor `__init__` creates a single field `clauses` which will be a list of all the sentences of the knowledge base. Note that each one of these sentences will be a 'clause' i.e. a sentence which is made up of only literals and `or`s.* `tell(self, sentence)` : When you want to add a sentence to the KB, you use the `tell` method. This method takes a sentence, converts it to its CNF, extracts all the clauses, and adds all these clauses to the `clauses` field. So, you need not worry about `tell`ing only clauses to the knowledge base. You can `tell` the knowledge base a sentence in any form that you wish; converting it to CNF and adding the resulting clauses will be handled by the `tell` method.* `ask_generator(self, query)` : The `ask_generator` function is used by the `ask` function. It calls the `tt_entails` function, which in turn returns `True` if the knowledge base entails query and `False` otherwise. The `ask_generator` itself returns an empty dict `{}` if the knowledge base entails query and `None` otherwise. This might seem a little bit weird to you. After all, it makes more sense just to return a `True` or a `False` instead of the `{}` or `None` But this is done to maintain consistency with the way things are in First-Order Logic, where an `ask_generator` function is supposed to return all the substitutions that make the query true. Hence the dict, to return all these substitutions. I will be mostly be using the `ask` function which returns a `{}` or a `False`, but if you don't like this, you can always use the `ask_if_true` function which returns a `True` or a `False`.* `retract(self, sentence)` : This function removes all the clauses of the sentence given, from the knowledge base. Like the `tell` function, you don't have to pass clauses to remove them from the knowledge base; any sentence will do fine. The function will take care of converting that sentence to clauses and then remove those. Wumpus World KBLet us create a `PropKB` for the wumpus world with the sentences mentioned in `section 7.4.3`.
###Code
wumpus_kb = PropKB()
###Output
_____no_output_____
###Markdown
We define the symbols we use in our clauses.$P_{x, y}$ is true if there is a pit in `[x, y]`.$B_{x, y}$ is true if the agent senses breeze in `[x, y]`.
###Code
P11, P12, P21, P22, P31, B11, B21 = expr('P11, P12, P21, P22, P31, B11, B21')
###Output
_____no_output_____
###Markdown
Now we tell sentences based on `section 7.4.3`.There is no pit in `[1,1]`.
###Code
wumpus_kb.tell(~P11)
###Output
_____no_output_____
###Markdown
A square is breezy if and only if there is a pit in a neighboring square. This has to be stated for each square but for now, we include just the relevant squares.
###Code
wumpus_kb.tell(B11 | '<=>' | ((P12 | P21)))
wumpus_kb.tell(B21 | '<=>' | ((P11 | P22 | P31)))
###Output
_____no_output_____
###Markdown
Now we include the breeze percepts for the first two squares leading up to the situation in `Figure 7.3(b)`
###Code
wumpus_kb.tell(~B11)
wumpus_kb.tell(B21)
###Output
_____no_output_____
###Markdown
We can check the clauses stored in a `KB` by accessing its `clauses` variable
###Code
wumpus_kb.clauses
###Output
_____no_output_____
###Markdown
We see that the equivalence $B_{1, 1} \iff (P_{1, 2} \lor P_{2, 1})$ was automatically converted to two implications which were inturn converted to CNF which is stored in the `KB`.$B_{1, 1} \iff (P_{1, 2} \lor P_{2, 1})$ was split into $B_{1, 1} \implies (P_{1, 2} \lor P_{2, 1})$ and $B_{1, 1} \Longleftarrow (P_{1, 2} \lor P_{2, 1})$.$B_{1, 1} \implies (P_{1, 2} \lor P_{2, 1})$ was converted to $P_{1, 2} \lor P_{2, 1} \lor \neg B_{1, 1}$.$B_{1, 1} \Longleftarrow (P_{1, 2} \lor P_{2, 1})$ was converted to $\neg (P_{1, 2} \lor P_{2, 1}) \lor B_{1, 1}$ which becomes $(\neg P_{1, 2} \lor B_{1, 1}) \land (\neg P_{2, 1} \lor B_{1, 1})$ after applying De Morgan's laws and distributing the disjunction.$B_{2, 1} \iff (P_{1, 1} \lor P_{2, 2} \lor P_{3, 2})$ is converted in similar manner. Knowledge based agents A knowledge-based agent is a simple generic agent that maintains and handles a knowledge base.The knowledge base may initially contain some background knowledge.The purpose of a KB agent is to provide a level of abstraction over knowledge-base manipulation and is to be used as a base class for agents that work on a knowledge base.Given a percept, the KB agent adds the percept to its knowledge base, asks the knowledge base for the best action, and tells the knowledge base that it has in fact taken that action.Our implementation of `KB-Agent` is encapsulated in a class `KB_AgentProgram` which inherits from the `KB` class.Let's have a look.
###Code
psource(KB_AgentProgram)
###Output
_____no_output_____
###Markdown
The helper functions `make_percept_sentence`, `make_action_query` and `make_action_sentence` are all aptly named and as expected,`make_percept_sentence` makes first-order logic sentences about percepts we want our agent to receive,`make_action_query` asks the underlying `KB` about the action that should be taken and`make_action_sentence` tells the underlying `KB` about the action it has just taken. Inference in Propositional Knowledge BaseIn this section we will look at two algorithms to check if a sentence is entailed by the `KB`. Our goal is to decide whether $\text{KB} \vDash \alpha$ for some sentence $\alpha$. Truth Table EnumerationIt is a model-checking approach which, as the name suggests, enumerates all possible models in which the `KB` is true and checks if $\alpha$ is also true in these models. We list the $n$ symbols in the `KB` and enumerate the $2^{n}$ models in a depth-first manner and check the truth of `KB` and $\alpha$.
###Code
psource(tt_check_all)
###Output
_____no_output_____
###Markdown
The algorithm basically computes every line of the truth table $KB\implies \alpha$ and checks if it is true everywhere.If symbols are defined, the routine recursively constructs every combination of truth values for the symbols and then, it checks whether `model` is consistent with `kb`.The given models correspond to the lines in the truth table,which have a `true` in the KB column, and for these lines it checks whether the query evaluates to true`result = pl_true(alpha, model)`.In short, `tt_check_all` evaluates this logical expression for each `model``pl_true(kb, model) => pl_true(alpha, model)`which is logically equivalent to`pl_true(kb, model) & ~pl_true(alpha, model)` that is, the knowledge base and the negation of the query are logically inconsistent.`tt_entails()` just extracts the symbols from the query and calls `tt_check_all()` with the proper parameters.
###Code
psource(tt_entails)
###Output
_____no_output_____
###Markdown
Keep in mind that for two symbols P and Q, P => Q is false only when P is `True` and Q is `False`.Example usage of `tt_entails()`:
###Code
tt_entails(P & Q, Q)
###Output
_____no_output_____
###Markdown
P & Q is True only when both P and Q are True. Hence, (P & Q) => Q is True
###Code
tt_entails(P | Q, Q)
tt_entails(P | Q, P)
###Output
_____no_output_____
###Markdown
If we know that P | Q is true, we cannot infer the truth values of P and Q. Hence (P | Q) => Q is False and so is (P | Q) => P.
###Code
(A, B, C, D, E, F, G) = symbols('A, B, C, D, E, F, G')
tt_entails(A & (B | C) & D & E & ~(F | G), A & D & E & ~F & ~G)
###Output
_____no_output_____
###Markdown
We can see that for the KB to be true, A, D, E have to be True and F and G have to be False.Nothing can be said about B or C. Coming back to our problem, note that `tt_entails()` takes an `Expr` which is a conjunction of clauses as the input instead of the `KB` itself. You can use the `ask_if_true()` method of `PropKB` which does all the required conversions. Let's check what `wumpus_kb` tells us about $P_{1, 1}$.
###Code
wumpus_kb.ask_if_true(~P11), wumpus_kb.ask_if_true(P11)
###Output
_____no_output_____
###Markdown
Looking at Figure 7.9 we see that in all models in which the knowledge base is `True`, $P_{1, 1}$ is `False`. It makes sense that `ask_if_true()` returns `True` for $\alpha = \neg P_{1, 1}$ and `False` for $\alpha = P_{1, 1}$. This begs the question, what if $\alpha$ is `True` in only a portion of all models. Do we return `True` or `False`? This doesn't rule out the possibility of $\alpha$ being `True` but it is not entailed by the `KB` so we return `False` in such cases. We can see this is the case for $P_{2, 2}$ and $P_{3, 1}$.
###Code
wumpus_kb.ask_if_true(~P22), wumpus_kb.ask_if_true(P22)
###Output
_____no_output_____
###Markdown
Proof by ResolutionRecall that our goal is to check whether $\text{KB} \vDash \alpha$ i.e. is $\text{KB} \implies \alpha$ true in every model. Suppose we wanted to check if $P \implies Q$ is valid. We check the satisfiability of $\neg (P \implies Q)$, which can be rewritten as $P \land \neg Q$. If $P \land \neg Q$ is unsatisfiable, then $P \implies Q$ must be true in all models. This gives us the result "$\text{KB} \vDash \alpha$ if and only if $\text{KB} \land \neg \alpha$ is unsatisfiable".This technique corresponds to proof by contradiction, a standard mathematical proof technique. We assume $\alpha$ to be false and show that this leads to a contradiction with known axioms in $\text{KB}$. We obtain a contradiction by making valid inferences using inference rules. In this proof we use a single inference rule, resolution which states $(l_1 \lor \dots \lor l_k) \land (m_1 \lor \dots \lor m_n) \land (l_i \iff \neg m_j) \implies l_1 \lor \dots \lor l_{i - 1} \lor l_{i + 1} \lor \dots \lor l_k \lor m_1 \lor \dots \lor m_{j - 1} \lor m_{j + 1} \lor \dots \lor m_n$. Applying the resolution yields us a clause which we add to the KB. We keep doing this until:* There are no new clauses that can be added, in which case $\text{KB} \nvDash \alpha$.* Two clauses resolve to yield the empty clause, in which case $\text{KB} \vDash \alpha$.The empty clause is equivalent to False because it arises only from resolving two complementaryunit clauses such as $P$ and $\neg P$ which is a contradiction as both $P$ and $\neg P$ can't be True at the same time. There is one catch however, the algorithm that implements proof by resolution cannot handle complex sentences. Implications and bi-implications have to be simplified into simpler clauses. We already know that *every sentence of a propositional logic is logically equivalent to a conjunction of clauses*.We will use this fact to our advantage and simplify the input sentence into the **conjunctive normal form** (CNF) which is a conjunction of disjunctions of literals.For eg:$$(A\lor B)\land (\neg B\lor C\lor\neg D)\land (D\lor\neg E)$$This is equivalent to the POS (Product of sums) form in digital electronics.Here's an outline of how the conversion is done:1. Convert bi-implications to implications$\alpha\iff\beta$ can be written as $(\alpha\implies\beta)\land(\beta\implies\alpha)$This also applies to compound sentences$\alpha\iff(\beta\lor\gamma)$ can be written as $(\alpha\implies(\beta\lor\gamma))\land((\beta\lor\gamma)\implies\alpha)$2. Convert implications to their logical equivalents$\alpha\implies\beta$ can be written as $\neg\alpha\lor\beta$3. Move negation inwardsCNF requires atomic literals. Hence, negation cannot appear on a compound statement.De Morgan's laws will be helpful here.$\neg(\alpha\land\beta)\equiv(\neg\alpha\lor\neg\beta)$$\neg(\alpha\lor\beta)\equiv(\neg\alpha\land\neg\beta)$4. Distribute disjunction over conjunctionDisjunction and conjunction are distributive over each other.Now that we only have conjunctions, disjunctions and negations in our expression, we will distribute disjunctions over conjunctions wherever possible as this will give us a sentence which is a conjunction of simpler clauses, which is what we wanted in the first place.We need a term of the form$(\alpha_{1}\lor\alpha_{2}\lor\alpha_{3}...)\land(\beta_{1}\lor\beta_{2}\lor\beta_{3}...)\land(\gamma_{1}\lor\gamma_{2}\lor\gamma_{3}...)\land...$The `to_cnf` function executes this conversion using helper subroutines.
###Code
psource(to_cnf)
###Output
_____no_output_____
###Markdown
`to_cnf` calls three subroutines.`eliminate_implications` converts bi-implications and implications to their logical equivalents.`move_not_inwards` removes negations from compound statements and moves them inwards using De Morgan's laws.`distribute_and_over_or` distributes disjunctions over conjunctions.Run the cell below for implementation details.
###Code
psource(eliminate_implications)
psource(move_not_inwards)
psource(distribute_and_over_or)
###Output
_____no_output_____
###Markdown
Let's convert some sentences to see how it works
###Code
A, B, C, D = expr('A, B, C, D')
to_cnf(A |'<=>'| B)
to_cnf(A |'<=>'| (B & C))
to_cnf(A & (B | (C & D)))
to_cnf((A |'<=>'| ~B) |'==>'| (C | ~D))
###Output
_____no_output_____
###Markdown
Coming back to our resolution problem, we can see how the `to_cnf` function is utilized here
###Code
psource(pl_resolution)
pl_resolution(wumpus_kb, ~P11), pl_resolution(wumpus_kb, P11)
pl_resolution(wumpus_kb, ~P22), pl_resolution(wumpus_kb, P22)
###Output
_____no_output_____
###Markdown
Forward and backward chainingPreviously, we said we will look at two algorithms to check if a sentence is entailed by the `KB`. Here's a third one. The difference here is that our goal now is to determine if a knowledge base of definite clauses entails a single proposition symbol *q* - the query.There is a catch however - the knowledge base can only contain **Horn clauses**. Horn ClausesHorn clauses can be defined as a *disjunction* of *literals* with **at most** one positive literal. A Horn clause with exactly one positive literal is called a *definite clause*.A Horn clause might look like $\neg a\lor\neg b\lor\neg c\lor\neg d... \lor z$This, coincidentally, is also a definite clause.Using De Morgan's laws, the example above can be simplified to $a\land b\land c\land d ... \implies z$This seems like a logical representation of how humans process known data and facts. Assuming percepts `a`, `b`, `c`, `d` ... to be true simultaneously, we can infer `z` to also be true at that point in time. There are some interesting aspects of Horn clauses that make algorithmic inference or *resolution* easier.- Definite clauses can be written as implications:The most important simplification a definite clause provides is that it can be written as an implication.The premise (or the knowledge that leads to the implication) is a conjunction of positive literals.The conclusion (the implied statement) is also a positive literal.The sentence thus becomes easier to understand.The premise and the conclusion are conventionally called the *body* and the *head* respectively.A single positive literal is called a *fact*.- Forward chaining and backward chaining can be used for inference from Horn clauses:Forward chaining is semantically identical to `AND-OR-Graph-Search` from the chapter on search algorithms.Implementational details will be explained shortly.- Deciding entailment with Horn clauses is linear in size of the knowledge base:Surprisingly, the forward and backward chaining algorithms traverse each element of the knowledge base at most once, greatly simplifying the problem.The function `pl_fc_entails` implements forward chaining to see if a knowledge base `KB` entails a symbol `q`.Before we proceed further, note that `pl_fc_entails` doesn't use an ordinary `KB` instance. The knowledge base here is an instance of the `PropDefiniteKB` class, derived from the `PropKB` class, but modified to store definite clauses.The main point of difference arises in the inclusion of a helper method to `PropDefiniteKB` that returns a list of clauses in KB that have a given symbol `p` in their premise.
###Code
psource(PropDefiniteKB.clauses_with_premise)
###Output
_____no_output_____
###Markdown
Let's now have a look at the `pl_fc_entails` algorithm.
###Code
psource(pl_fc_entails)
###Output
_____no_output_____
###Markdown
The function accepts a knowledge base `KB` (an instance of `PropDefiniteKB`) and a query `q` as inputs. `count` initially stores the number of symbols in the premise of each sentence in the knowledge base. The `conjuncts` helper function separates a given sentence at conjunctions. `inferred` is initialized as a *boolean* defaultdict. This will be used later to check if we have inferred all premises of each clause of the agenda. `agenda` initially stores a list of clauses that the knowledge base knows to be true. The `is_prop_symbol` helper function checks if the given symbol is a valid propositional logic symbol. We now iterate through `agenda`, popping a symbol `p` on each iteration. If the query `q` is the same as `p`, we know that entailment holds. The agenda is processed, reducing `count` by one for each implication with a premise `p`. A conclusion is added to the agenda when `count` reaches zero. This means we know all the premises of that particular implication to be true. `clauses_with_premise` is a helpful method of the `PropDefiniteKB` class. It returns a list of clauses in the knowledge base that have `p` in their premise. Now that we have an idea of how this function works, let's see a few examples of its usage, but we first need to define our knowledge base. We assume we know the following clauses to be true.
###Code
clauses = ['(B & F)==>E',
'(A & E & F)==>G',
'(B & C)==>F',
'(A & B)==>D',
'(E & F)==>H',
'(H & I)==>J',
'A',
'B',
'C']
###Output
_____no_output_____
###Markdown
We will now `tell` this information to our knowledge base.
###Code
definite_clauses_KB = PropDefiniteKB()
for clause in clauses:
definite_clauses_KB.tell(expr(clause))
###Output
_____no_output_____
###Markdown
We can now check if our knowledge base entails the following queries.
###Code
pl_fc_entails(definite_clauses_KB, expr('G'))
pl_fc_entails(definite_clauses_KB, expr('H'))
pl_fc_entails(definite_clauses_KB, expr('I'))
pl_fc_entails(definite_clauses_KB, expr('J'))
###Output
_____no_output_____
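###Markdown
To make the premise-counting bookkeeping concrete, here is a minimal sketch of the same forward-chaining idea in plain Python, independent of the `PropDefiniteKB` class. The rules are hand-written (premises, conclusion) pairs mirroring the clauses above; this is an illustration, not the library implementation.
###Code
from collections import deque

def fc_entails(rules, facts, q):
    """Forward chaining over propositional definite clauses."""
    count = {i: len(prem) for i, (prem, _) in enumerate(rules)}  # unproven premises per rule
    inferred, agenda = set(), deque(facts)
    while agenda:
        p = agenda.popleft()
        if p == q:
            return True
        if p in inferred:
            continue
        inferred.add(p)
        for i, (prem, concl) in enumerate(rules):
            if p in prem:
                count[i] -= 1
                if count[i] == 0:        # every premise proven: infer the conclusion
                    agenda.append(concl)
    return False

rules = [(['B', 'F'], 'E'), (['A', 'E', 'F'], 'G'), (['B', 'C'], 'F'),
         (['A', 'B'], 'D'), (['E', 'F'], 'H'), (['H', 'I'], 'J')]
[fc_entails(rules, ['A', 'B', 'C'], query) for query in 'GHIJ']  # expect [True, True, False, False]
###Output
_____no_output_____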
###Markdown
Effective Propositional Model Checking The previous segments elucidate the algorithmic procedure for model checking. In this segment, we look at ways of making it computationally efficient. The problem we are trying to solve is conventionally called the _propositional satisfiability problem_, abbreviated as the _SAT_ problem. In layman's terms, if there exists a model that satisfies a given Boolean formula, the formula is called satisfiable. The SAT problem was the first problem to be proven _NP-complete_. The main characteristics of an NP-complete problem are: - Given a solution to such a problem, it is easy to verify if the solution solves the problem. - The time required to actually solve the problem using any known algorithm increases exponentially with respect to the size of the problem. Due to these properties, heuristic and approximate methods are often applied to find solutions to these problems. It is extremely important to be able to solve large-scale SAT problems efficiently because many combinatorial problems in computer science can be conveniently reduced to checking the satisfiability of a propositional sentence under some constraints. We will introduce two new algorithms that perform propositional model checking in a computationally effective way. 1. DPLL (Davis-Putnam-Logemann-Loveland) algorithm This algorithm is very similar to Backtracking-Search. It recursively enumerates possible models in a depth-first fashion with the following improvements over algorithms like `tt_entails`: 1. Early termination: In certain cases, the algorithm can detect the truth value of a statement using just a partially completed model. For example, $(P\lor Q)\land(P\lor R)$ is true if P is true, regardless of other variables. This reduces the search space significantly. 2. Pure symbol heuristic: A symbol that has the same sign (positive or negative) in all clauses is called a _pure symbol_. It isn't difficult to see that any satisfiable model will have the pure symbols assigned such that their parent clauses become _true_. For example, $(P\lor\neg Q)\land(\neg Q\lor\neg R)\land(R\lor P)$ has P and Q as pure symbols, and for the sentence to be true, P _has_ to be true and Q _has_ to be false. The pure symbol heuristic thus simplifies the problem a bit. 3. Unit clause heuristic: In the context of DPLL, clauses with just one literal and clauses with all but one _false_ literal are called unit clauses. If a clause is a unit clause, it can only be satisfied by assigning the necessary value to make the last literal true. We have no other choice. Assigning one unit clause can create another unit clause. For example, when P is false, $(P\lor Q)$ becomes a unit clause, causing _true_ to be assigned to Q. A series of forced assignments derived from previous unit clauses is called _unit propagation_. In this way, this heuristic simplifies the problem further. The algorithm often employs other tricks to scale up to large problems. However, these tricks are currently out of the scope of this notebook. Refer to section 7.6 of the book for more details. Let's have a look at the algorithm.
###Code
psource(dpll)
###Output
_____no_output_____
###Markdown
The algorithm uses the ideas described above to check satisfiability of a sentence in propositional logic. It recursively calls itself, simplifying the problem at each step. It also uses helper functions `find_pure_symbol` and `find_unit_clause` to carry out steps 2 and 3 above. The `dpll_satisfiable` helper function converts the input clauses to _conjunctive normal form_ and calls the `dpll` function with the correct parameters.
###Code
psource(dpll_satisfiable)
###Output
_____no_output_____
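###Markdown
Before looking at examples, here is a small sketch (not the library's `find_pure_symbol`) of the pure-symbol check from point 2 above, with clauses represented as sets of signed literal strings:
###Code
def pure_symbols(clauses):
    """Symbols occurring with only one sign across all clauses, with their forced value."""
    pos = {lit for clause in clauses for lit in clause if not lit.startswith('~')}
    neg = {lit[1:] for clause in clauses for lit in clause if lit.startswith('~')}
    return {(s, True) for s in pos - neg} | {(s, False) for s in neg - pos}

# The example from the text: (P | ~Q) & (~Q | ~R) & (R | P)
pure_symbols([{'P', '~Q'}, {'~Q', '~R'}, {'R', 'P'}])  # P forced True, Q forced False
###Output
_____no_output_____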
###Markdown
Let's see a few examples of usage.
###Code
A, B, C, D = expr('A, B, C, D')
dpll_satisfiable(A & B & ~C & D)
###Output
_____no_output_____
###Markdown
This is a simple case to highlight that the algorithm actually works.
###Code
dpll_satisfiable((A & B) | (C & ~A) | (B & ~D))
###Output
_____no_output_____
###Markdown
If a particular symbol isn't present in the solution, it means that the solution is independent of the value of that symbol. In this case, the solution is independent of A.
###Code
dpll_satisfiable(A |'<=>'| B)
dpll_satisfiable((A |'<=>'| B) |'==>'| (C & ~A))
dpll_satisfiable((A | (B & C)) |'<=>'| ((A | B) & (A | C)))
###Output
_____no_output_____
###Markdown
2. WalkSAT algorithm This algorithm is very similar to Hill climbing. On every iteration, the algorithm picks an unsatisfied clause and flips a symbol in the clause. This is similar to finding a neighboring state in the `hill_climbing` algorithm. The symbol to be flipped is decided by an evaluation function that counts the number of unsatisfied clauses. Sometimes, symbols are also flipped randomly to avoid local optima. A subtle balance between greediness and randomness is required. Alternatively, some versions of the algorithm restart with a completely new random assignment if no solution has been found for too long, as a way of getting out of local minima in the number of unsatisfied clauses. Let's have a look at the algorithm.
###Code
psource(WalkSAT)
###Output
_____no_output_____
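###Markdown
Since `WalkSAT` starts from a fresh random assignment each time it is called, the restart strategy mentioned above can be sketched as a simple wrapper. `restarts` is a hypothetical parameter for this sketch, not part of the library function:
###Code
def walksat_with_restarts(clauses, p=0.5, max_flips=1000, restarts=5):
    """Retry WalkSAT from a new random assignment a few times before giving up."""
    for _ in range(restarts):
        model = WalkSAT(clauses, p, max_flips)
        if model is not None:
            return model
    return None
###Output
_____no_output_____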
###Markdown
The function takes three arguments: 1. The `clauses` we want to satisfy. 2. The probability `p` of randomly changing a symbol. 3. The maximum number of flips (`max_flips`) the algorithm will run for. If the clauses are still unsatisfied, the algorithm returns `None` to denote failure. The algorithm is identical in concept to Hill climbing and the code isn't difficult to understand. Let's see a few examples of usage.
###Code
A, B, C, D = expr('A, B, C, D')
WalkSAT([A, B, ~C, D], 0.5, 100)
###Output
_____no_output_____
###Markdown
This is a simple case to show that the algorithm converges.
###Code
WalkSAT([A & B, A & C], 0.5, 100)
WalkSAT([A & B, C & D, C & B], 0.5, 100)
WalkSAT([A & B, C | D, ~(D | B)], 0.5, 1000)
###Output
_____no_output_____
###Markdown
This one doesn't give any output because WalkSAT did not find any model in which these clauses hold. We can solve these clauses by hand to see that together they form a contradiction, and hence there is no satisfying model. One point of difference between `WalkSAT` and `dpll_satisfiable` is that they take their inputs differently. For WalkSAT to take complete sentences as input, we can write a helper function that converts the input sentence into conjunctive normal form and then calls WalkSAT with the list of conjuncts of the CNF form of the sentence.
###Code
def WalkSAT_CNF(sentence, p=0.5, max_flips=10000):
    return WalkSAT(conjuncts(to_cnf(sentence)), p, max_flips)  # forward the flip probability p to WalkSAT
###Output
_____no_output_____
###Markdown
Now we can call `WalkSAT_CNF` and `dpll_satisfiable` with the same arguments.
###Code
WalkSAT_CNF((A & B) | (C & ~A) | (B & ~D), 0.5, 1000)
###Output
_____no_output_____
###Markdown
It works! Notice that the solution generated by WalkSAT doesn't omit variables that the sentence doesn't depend upon. If the sentence is independent of a particular variable, the solution contains a random value for that variable because of the stochastic nature of the algorithm. Let's compare the runtime of WalkSAT and DPLL for a few cases. We will use the `%%timeit` magic to do this.
###Code
sentence_1 = A |'<=>'| B
sentence_2 = (A & B) | (C & ~A) | (B & ~D)
sentence_3 = (A | (B & C)) |'<=>'| ((A | B) & (A | C))
%%timeit
dpll_satisfiable(sentence_1)
dpll_satisfiable(sentence_2)
dpll_satisfiable(sentence_3)
%%timeit
WalkSAT_CNF(sentence_1)
WalkSAT_CNF(sentence_2)
WalkSAT_CNF(sentence_3)
###Output
1.02 ms ± 6.92 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
###Markdown
On average, for solvable cases, `WalkSAT` is considerably faster than `dpll` because, for a small number of variables, `WalkSAT` can reduce the search space significantly. Results can be different for sentences with more symbols, though. Feel free to play around with this to understand the trade-offs of these algorithms better. SATPlan In this section we show how to make plans by logical inference. The basic idea is very simple. It includes the following three steps: 1. Construct a sentence that includes: 1. A collection of assertions about the initial state. 2. The successor-state axioms for all the possible actions at each time up to some maximum time t. 3. The assertion that the goal is achieved at time t. 2. Present the whole sentence to a SAT solver. 3. Assuming a model is found, extract from the model those variables that represent actions and are assigned true. Together they represent a plan to achieve the goals. Let's have a look at the algorithm.
###Code
psource(SAT_plan)
###Output
_____no_output_____
###Markdown
Let's see a few examples of its usage. First we define a transition and then call `SAT_plan`.
###Code
transition = {'A': {'Left': 'A', 'Right': 'B'},
'B': {'Left': 'A', 'Right': 'C'},
'C': {'Left': 'B', 'Right': 'C'}}
print(SAT_plan('A', transition, 'C', 2))
print(SAT_plan('A', transition, 'B', 3))
print(SAT_plan('C', transition, 'A', 3))
###Output
None
['Right']
['Left', 'Left']
###Markdown
Let us do the same for another transition.
###Code
transition = {(0, 0): {'Right': (0, 1), 'Down': (1, 0)},
(0, 1): {'Left': (1, 0), 'Down': (1, 1)},
(1, 0): {'Right': (1, 0), 'Up': (1, 0), 'Left': (1, 0), 'Down': (1, 0)},
(1, 1): {'Left': (1, 0), 'Up': (0, 1)}}
print(SAT_plan((0, 0), transition, (1, 1), 4))
###Output
['Right', 'Down']
###Markdown
First-Order Logic Knowledge Bases: `FolKB` The class `FolKB` can be used to represent a knowledge base of First-order logic sentences. You would initialize and use it the same way as you would for `PropKB` except that the clauses are first-order definite clauses. We will see how to write such clauses to create a database and query them in the following sections. Criminal KB In this section we create a `FolKB` based on the following paragraph. The law says that it is a crime for an American to sell weapons to hostile nations. The country Nono, an enemy of America, has some missiles, and all of its missiles were sold to it by Colonel West, who is American. The first step is to extract the facts and convert them into first-order definite clauses. Extracting the facts from data alone is a challenging task. Fortunately, we have a small paragraph and can do extraction and conversion manually. We'll store the clauses in a list aptly named `clauses`.
###Code
clauses = []
###Output
_____no_output_____
###Markdown
“... it is a crime for an American to sell weapons to hostile nations” The keywords to look for here are 'crime', 'American', 'sell', 'weapon' and 'hostile'. We use predicate symbols to capture their meaning. * `Criminal(x)`: `x` is a criminal * `American(x)`: `x` is an American * `Sells(x, y, z)`: `x` sells `y` to `z` * `Weapon(x)`: `x` is a weapon * `Hostile(x)`: `x` is a hostile nation. Let us now combine them with appropriate variable naming to depict the meaning of the sentence. The criminal `x` is also the American `x` who sells weapon `y` to `z`, which is a hostile nation. $\text{American}(x) \land \text{Weapon}(y) \land \text{Sells}(x, y, z) \land \text{Hostile}(z) \implies \text{Criminal}(x)$
###Code
clauses.append(expr("(American(x) & Weapon(y) & Sells(x, y, z) & Hostile(z)) ==> Criminal(x)"))
###Output
_____no_output_____
###Markdown
"The country Nono, an enemy of America"We now know that Nono is an enemy of America. We represent these nations using the constant symbols `Nono` and `America`. the enemy relation is show using the predicate symbol `Enemy`.$\text{Enemy}(\text{Nono}, \text{America})$
###Code
clauses.append(expr("Enemy(Nono, America)"))
###Output
_____no_output_____
###Markdown
"Nono ... has some missiles"This states the existence of some missile which is owned by Nono. $\exists x \text{Owns}(\text{Nono}, x) \land \text{Missile}(x)$. We invoke existential instantiation to introduce a new constant `M1` which is the missile owned by Nono.$\text{Owns}(\text{Nono}, \text{M1}), \text{Missile}(\text{M1})$
###Code
clauses.append(expr("Owns(Nono, M1)"))
clauses.append(expr("Missile(M1)"))
###Output
_____no_output_____
###Markdown
"All of its missiles were sold to it by Colonel West"If Nono owns something and it classifies as a missile, then it was sold to Nono by West.$\text{Missile}(x) \land \text{Owns}(\text{Nono}, x) \implies \text{Sells}(\text{West}, x, \text{Nono})$
###Code
clauses.append(expr("(Missile(x) & Owns(Nono, x)) ==> Sells(West, x, Nono)"))
###Output
_____no_output_____
###Markdown
"West, who is American"West is an American.$\text{American}(\text{West})$
###Code
clauses.append(expr("American(West)"))
###Output
_____no_output_____
###Markdown
We also know, from our understanding of language, that missiles are weapons and that an enemy of America counts as “hostile”.$\text{Missile}(x) \implies \text{Weapon}(x), \text{Enemy}(x, \text{America}) \implies \text{Hostile}(x)$
###Code
clauses.append(expr("Missile(x) ==> Weapon(x)"))
clauses.append(expr("Enemy(x, America) ==> Hostile(x)"))
###Output
_____no_output_____
###Markdown
Now that we have converted the information into first-order definite clauses we can create our first-order logic knowledge base.
###Code
crime_kb = FolKB(clauses)
###Output
_____no_output_____
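###Markdown
As a quick sanity check before we look at the inference machinery in detail, we can already pose the classic query for this knowledge base: is anyone a criminal? (`ask` is covered in the backward chaining section below; expect a substitution binding `x` to `West`, possibly alongside internally renamed variables.)
###Code
crime_kb.ask(expr('Criminal(x)'))
###Output
_____no_output_____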
###Markdown
The `subst` helper function substitutes variables with given values in first-order logic statements. This will be useful in later algorithms. Its implementation is quite simple and self-explanatory.
###Code
psource(subst)
###Output
_____no_output_____
###Markdown
Here's an example of how `subst` can be used.
###Code
subst({x: expr('Nono'), y: expr('M1')}, expr('Owns(x, y)'))
###Output
_____no_output_____
###Markdown
Inference in First-Order Logic In this section we look at a forward chaining and a backward chaining algorithm for `FolKB`. Both of the aforementioned algorithms rely on a process called unification, a key component of all first-order inference algorithms. Unification We sometimes require finding substitutions that make different logical expressions look identical. This process, called unification, is done by the `unify` algorithm. It takes as input two sentences and returns a unifier for them if one exists. A unifier is a dictionary which stores the substitutions required to make the two sentences identical. It does so by recursively unifying the components of a sentence, where the unification of a variable symbol `var` with a constant symbol `Const` is the mapping `{var: Const}`. Let's look at a few examples.
###Code
unify(expr('x'), 3)
unify(expr('A(x)'), expr('A(B)'))
unify(expr('Cat(x) & Dog(Dobby)'), expr('Cat(Bella) & Dog(y)'))
###Output
_____no_output_____
###Markdown
In cases where there is no possible substitution that unifies the two sentences, the function returns `None`.
###Code
print(unify(expr('Cat(x)'), expr('Dog(Dobby)')))
###Output
None
###Markdown
We also need to take care that we do not unintentionally reuse the same variable name in the two sentences. `unify` treats them as a single variable, which prevents it from taking multiple values.
###Code
print(unify(expr('Cat(x) & Dog(Dobby)'), expr('Cat(Bella) & Dog(x)')))
###Output
None
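###Markdown
The clash above disappears once the two occurrences of `x` are renamed apart. Here we simply do the renaming by hand with fresh variable names; first-order inference algorithms do this automatically (it is called standardizing apart).
###Code
print(unify(expr('Cat(x1) & Dog(Dobby)'), expr('Cat(Bella) & Dog(x2)')))
###Output
_____no_output_____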
###Markdown
Forward Chaining Algorithm We consider the simple forward-chaining algorithm presented in Figure 9.3. We look at each rule in the knowledge base and see if the premises can be satisfied. This is done by finding a substitution which unifies each of the premises with a clause in the `KB`. If we are able to unify the premises, the conclusion (with the corresponding substitution) is added to the `KB`. This inference process is repeated until either the query can be answered or until no new sentences can be added. We test if the newly added clause unifies with the query, in which case the substitution yielded by `unify` is an answer to the query. If we run out of sentences to infer, this means the query was a failure. The function `fol_fc_ask` is a generator which yields all substitutions which validate the query.
###Code
psource(fol_fc_ask)
###Output
_____no_output_____
###Markdown
Let's find out all the hostile nations. Note that we only told the `KB` that Nono was an enemy of America, not that it was hostile.
###Code
answer = fol_fc_ask(crime_kb, expr('Hostile(x)'))
print(list(answer))
###Output
[{x: Nono}]
###Markdown
The generator returned a single substitution which says that Nono is a hostile nation. See how after adding another enemy nation the generator returns two substitutions.
###Code
crime_kb.tell(expr('Enemy(JaJa, America)'))
answer = fol_fc_ask(crime_kb, expr('Hostile(x)'))
print(list(answer))
###Output
[{x: Nono}, {x: JaJa}]
###Markdown
Note: `fol_fc_ask` makes changes to the `KB` by adding sentences to it. Backward Chaining AlgorithmThis algorithm works backward from the goal, chaining through rules to find known facts that support the proof. Suppose `goal` is the query we want to find the substitution for. We find rules of the form $\text{lhs} \implies \text{goal}$ in the `KB` and try to prove `lhs`. There may be multiple clauses in the `KB` which give multiple `lhs`. It is sufficient to prove only one of these. But to prove a `lhs` all the conjuncts in the `lhs` of the clause must be proved. This makes it similar to And/Or search. ORThe OR part of the algorithm comes from our choice to select any clause of the form $\text{lhs} \implies \text{goal}$. Looking at all rules's `lhs` whose `rhs` unify with the `goal`, we yield a substitution which proves all the conjuncts in the `lhs`. We use `parse_definite_clause` to attain `lhs` and `rhs` from a clause of the form $\text{lhs} \implies \text{rhs}$. For atomic facts the `lhs` is an empty list.
###Code
psource(fol_bc_or)
###Output
_____no_output_____
###Markdown
ANDThe AND corresponds to proving all the conjuncts in the `lhs`. We need to find a substitution which proves each and every clause in the list of conjuncts.
###Code
psource(fol_bc_and)
###Output
_____no_output_____
###Markdown
Now the main function `fol_bc_ask` calls `fol_bc_or` with the substitution initialized as empty. The `ask` method of `FolKB` uses `fol_bc_ask` and fetches the first substitution returned by the generator to answer the query. Let's query the knowledge base we created from `clauses` to find hostile nations.
###Code
# Rebuild KB because running fol_fc_ask would add new facts to the KB
crime_kb = FolKB(clauses)
crime_kb.ask(expr('Hostile(x)'))
###Output
_____no_output_____
###Markdown
You may notice some new variables in the substitution. They are introduced to standardize the variable names to prevent naming problems as discussed in the [Unification section](#Unification). Appendix: The Implementation of `|'==>'|` Consider the `Expr` formed by this syntax:
###Code
P |'==>'| ~Q
###Output
_____no_output_____
###Markdown
What is the funny `|'==>'|` syntax? The trick is that "`|`" is just the regular Python or-operator, so the whole expression is exactly equivalent to this:
###Code
(P | '==>') | ~Q
###Output
_____no_output_____
###Markdown
In other words, there are two applications of or-operators. Here's the first one:
###Code
P | '==>'
###Output
_____no_output_____
###Markdown
What is going on here is that the `__or__` method of `Expr` serves a dual purpose. If the right-hand-side is another `Expr` (or a number), then the result is an `Expr`, as in `(P | Q)`. But if the right-hand-side is a string, then the string is taken to be an operator, and we create a node in the abstract syntax tree corresponding to a partially-filled `Expr`, one where we know the left-hand-side is `P` and the operator is `==>`, but we don't yet know the right-hand-side. The `PartialExpr` class has an `__or__` method that says to create an `Expr` node with the right-hand-side filled in. Here we can see the combination of the `PartialExpr` with `Q` to create a complete `Expr`:
###Code
partial = PartialExpr('==>', P)
partial | ~Q
###Output
_____no_output_____
###Markdown
This [trick](http://code.activestate.com/recipes/384122-infix-operators/) is due to [Ferdinand Jamitzky](http://code.activestate.com/recipes/users/98863/), with a modification by [C. G. Vedant](https://github.com/Chipe1), who suggested using a string inside the or-bars. Appendix: The Implementation of `expr` How does `expr` parse a string into an `Expr`? It turns out there are two tricks (besides the Jamitzky/Vedant trick): 1. We do a string substitution, replacing "`==>`" with "`|'==>'|`" (and likewise for other operators). 2. We `eval` the resulting string in an environment in which every identifier is bound to a symbol with that identifier as the `op`. In other words,
###Code
expr('~(P & Q) ==> (~P | ~Q)')
###Output
_____no_output_____
###Markdown
is equivalent to doing:
###Code
P, Q = symbols('P, Q')
~(P & Q) |'==>'| (~P | ~Q)
###Output
_____no_output_____
###Markdown
One thing to beware of: this puts `==>` at the same precedence level as `"|"`, which is not quite right. For example, we get this:
###Code
P & Q |'==>'| P | Q
###Output
_____no_output_____
###Markdown
which is probably not what we meant; when in doubt, put in extra parens:
###Code
(P & Q) |'==>'| (P | Q)
###Output
_____no_output_____
###Markdown
Examples
###Code
from notebook import Canvas_fol_bc_ask
canvas_bc_ask = Canvas_fol_bc_ask('canvas_bc_ask', crime_kb, expr('Criminal(x)'))
###Output
_____no_output_____
###Markdown
###Code
import numpy as np
def numerical_derivative(f, x):
    """Estimate the gradient of f at x with central differences, one element at a time."""
    delta_x = 1e-4
    grad = np.zeros_like(x)
    it = np.nditer(x, flags=['multi_index'], op_flags=['readwrite'])
    while not it.finished:
        idx = it.multi_index
        tmp_val = x[idx]
        x[idx] = float(tmp_val) + delta_x   # f(x + h)
        fx1 = f(x)
        x[idx] = float(tmp_val) - delta_x   # f(x - h)
        fx2 = f(x)
        grad[idx] = (fx1 - fx2) / (2 * delta_x)
        x[idx] = tmp_val                    # restore the original entry
        it.iternext()
    return grad
import numpy as np
def sigmoid(x):
return 1/(1+np.exp(-x))
class LogicGate:
def __init__(self,gate_name,xdata,tdata):
self.name = gate_name
self.__xdata = xdata.reshape(4,2)
self.__tdata = tdata.reshape(4,1)
self.__W = np.random.rand(2,1)
self.__b = np.random.rand(1)
self.__learning_rate = 1e-2
def __loss_func(self):
delta = 1e-7
z = np.dot(self.__xdata,self.__W) + self.__b
y = sigmoid(z)
return -np.sum(self.__tdata*np.log(y + delta) + (1-self.__tdata)*np.log((1-y)+delta))
def error_val(self):
delta = 1e-7
z = np.dot(self.__xdata,self.__W) + self.__b
y = sigmoid(z)
return -np.sum(self.__tdata*np.log(y + delta) + (1-self.__tdata)*np.log((1-y)+delta))
def train(self):
f = lambda x: self.__loss_func()
print("Initial error value = ", self.error_val())
for step in range(8001):
self.__W -= self.__learning_rate * numerical_derivative(f,self.__W)
self.__b -= self.__learning_rate * numerical_derivative(f,self.__b)
if(step % 400 == 0):
print("step = ", step, "error value = ", self.error_val())
def predict(self,input_data):
z = np.dot(input_data, self.__W) + self.__b
y = sigmoid(z)
if y > 0.5:
result = 1
else:
result = 0
return y, result
xdata = np.array([0,0,0,1,1,0,1,1])
tdata = np.array([0,0,0,1])
test = xdata.reshape(4,2)
print(test)
AND_obj = LogicGate("AND_GATE",xdata,tdata)
AND_obj.train()
print(AND_obj.name,"\n")
test_data = np.array([[0,0],[0,1],[1,0],[1,1,]])
for input_data in test_data:
(sigmoid_val, logical_val) = AND_obj.predict(input_data)
print(input_data," = ", logical_val,'\n')
xdata = np.array([0,0,0,1,1,0,1,1])
tdata = np.array([0,1,1,1])
test = xdata.reshape(4,2)
print(test)
OR_obj = LogicGate("OR_GATE",xdata,tdata)
OR_obj.train()
xdata = np.array([0,0,0,1,1,0,1,1])
tdata = np.array([1,1,1,0])
test = xdata.reshape(4,2)
print(test)
NAND_obj = LogicGate("NAND_GATE",xdata,tdata)
NAND_obj.train()
input_data = np.array([[0,0],[0,1],[1,0],[1,1]])
s1 = []
s2 = []
new_input_data = []
final_output = []
print (len(input_data))
for index in range(len(input_data)):
s1 = NAND_obj.predict(input_data[index])
s2 = OR_obj.predict(input_data[index])
print("s1 is {}".format(s1))
print("s2 is {}".format(s2))
    new_input_data.append(s1[-1])   # NAND output
    new_input_data.append(s2[-1])   # OR output: XOR(a, b) = AND(NAND(a, b), OR(a, b))
(sigmoid_val, logical_val) = AND_obj.predict(np.array(new_input_data))
final_output.append(logical_val)
new_input_data = []
for index in range(len(input_data)):
print(input_data[index], "=", final_output[index],end='')
print('\n')
###Output
4
s1 is (array([0.99960637]), 1)
s2 is (array([0.06338463]), 0)
s1 is (array([0.93907592]), 1)
s2 is (array([0.97473529]), 1)
s1 is (array([0.93907727]), 1)
s2 is (array([0.9748408]), 1)
s1 is (array([0.0855561]), 0)
s2 is (array([0.99995473]), 1)
[0 0] = 0
[0 1] = 1
[1 0] = 1
[1 1] = 0
###Markdown
Logic: `logic.py`; Chapters 6-8 This notebook describes the [logic.py](https://github.com/aimacode/aima-python/blob/master/logic.py) module, which covers Chapters 6 (Logical Agents), 7 (First-Order Logic) and 8 (Inference in First-Order Logic) of *[Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu)*. See the [intro notebook](https://github.com/aimacode/aima-python/blob/master/intro.ipynb) for instructions. We'll start by looking at `Expr`, the data type for logical sentences, and the convenience function `expr`. We'll be covering two types of knowledge bases, `PropKB` - Propositional logic knowledge base and `FolKB` - First order logic knowledge base. We will construct a propositional knowledge base of a specific situation in the Wumpus World. We will next go through the `tt_entails` function and experiment with it a bit. The `pl_resolution` and `pl_fc_entails` functions will come next. We'll study forward chaining and backward chaining algorithms for `FolKB` and use them on the `crime_kb` knowledge base. But the first step is to load the code:
###Code
from utils import *
from logic import *
from notebook import psource
###Output
_____no_output_____
###Markdown
Logical Sentences The `Expr` class is designed to represent any kind of mathematical expression. The simplest type of `Expr` is a symbol, which can be defined with the function `Symbol`:
###Code
Symbol('x')
###Output
_____no_output_____
###Markdown
Or we can define multiple symbols at the same time with the function `symbols`:
###Code
(x, y, P, Q, f) = symbols('x, y, P, Q, f')
###Output
_____no_output_____
###Markdown
We can combine `Expr`s with the regular Python infix and prefix operators. Here's how we would form the logical sentence "P and not Q":
###Code
P & ~Q
###Output
_____no_output_____
###Markdown
This works because the `Expr` class overloads the `&` operator with this definition:

```python
def __and__(self, other):
    return Expr('&', self, other)
```

and does similar overloads for the other operators. An `Expr` has two fields: `op` for the operator, which is always a string, and `args` for the arguments, which is a tuple of 0 or more expressions. By "expression," I mean either an instance of `Expr`, or a number. Let's take a look at the fields for some `Expr` examples:
###Code
sentence = P & ~Q
sentence.op
sentence.args
P.op
P.args
Pxy = P(x, y)
Pxy.op
Pxy.args
###Output
_____no_output_____
###Markdown
It is important to note that the `Expr` class does not define the *logic* of Propositional Logic sentences; it just gives you a way to *represent* expressions. Think of an `Expr` as an [abstract syntax tree](https://en.wikipedia.org/wiki/Abstract_syntax_tree). Each of the `args` in an `Expr` can be either a symbol, a number, or a nested `Expr`. We can nest these trees to any depth. Here is a deeply nested `Expr`:
###Code
3 * f(x, y) + P(y) / 2 + 1
###Output
_____no_output_____
###Markdown
Operators for Constructing Logical Sentences Here is a table of the operators that can be used to form sentences. Note that we have a problem: we want to use Python operators to make sentences, so that our programs (and our interactive sessions like the one here) will show simple code. But Python does not allow implication arrows as operators, so for now we have to use a more verbose notation that Python does allow: `|'==>'|` instead of just `==>`. Alternately, you can always use the more verbose `Expr` constructor forms:

| Operation                | Book  | Python Infix Input          | Python Output  | Python `Expr` Input      |
|--------------------------|-------|-----------------------------|----------------|--------------------------|
| Negation                 | ¬ P   | `~P`                        | `~P`           | `Expr('~', P)`           |
| And                      | P ∧ Q | `P & Q`                     | `P & Q`        | `Expr('&', P, Q)`        |
| Or                       | P ∨ Q | `P` &#124; `Q`              | `P` &#124; `Q` | `Expr('`&#124;`', P, Q)` |
| Inequality (Xor)         | P ≠ Q | `P ^ Q`                     | `P ^ Q`        | `Expr('^', P, Q)`        |
| Implication              | P → Q | `P` &#124;`'==>'`&#124; `Q` | `P ==> Q`      | `Expr('==>', P, Q)`      |
| Reverse Implication      | Q ← P | `Q` &#124;`'<=='`&#124; `P` | `Q <== P`      | `Expr('<==', Q, P)`      |
| Equivalence              | P ↔ Q | `P` &#124;`'<=>'`&#124; `Q` | `P <=> Q`      | `Expr('<=>', P, Q)`      |

Here's an example of defining a sentence with an implication arrow:
###Code
~(P & Q) |'==>'| (~P | ~Q)
###Output
_____no_output_____
###Markdown
`expr`: a Shortcut for Constructing SentencesIf the `|'==>'|` notation looks ugly to you, you can use the function `expr` instead:
###Code
expr('~(P & Q) ==> (~P | ~Q)')
###Output
_____no_output_____
###Markdown
`expr` takes a string as input, and parses it into an `Expr`. The string can contain arrow operators: `==>`, `<==`, and `<=>`, which are handled as if they were regular Python infix operators. And `expr` automatically defines any symbols, so you don't need to pre-define them:
###Code
expr('sqrt(b ** 2 - 4 * a * c)')
###Output
_____no_output_____
###Markdown
For now that's all you need to know about `expr`. If you are interested, we explain the messy details of how `expr` is implemented and how `|'==>'|` is handled in the appendix. Propositional Knowledge Bases: `PropKB` The class `PropKB` can be used to represent a knowledge base of propositional logic sentences. We see that the class `KB` has four methods, apart from `__init__`. A point to note here: the `ask` method simply calls the `ask_generator` method. Thus, this one has already been implemented, and what you'll have to actually implement when you create your own knowledge base class (though you'll probably never need to, considering the ones we've created for you) will be the `ask_generator` function and not the `ask` function itself. The class `PropKB` now. * `__init__(self, sentence=None)` : The constructor `__init__` creates a single field `clauses` which will be a list of all the sentences of the knowledge base. Note that each one of these sentences will be a 'clause', i.e. a sentence which is made up of only literals and `or`s. * `tell(self, sentence)` : When you want to add a sentence to the KB, you use the `tell` method. This method takes a sentence, converts it to its CNF, extracts all the clauses, and adds all these clauses to the `clauses` field. So, you need not worry about `tell`ing only clauses to the knowledge base. You can `tell` the knowledge base a sentence in any form that you wish; converting it to CNF and adding the resulting clauses will be handled by the `tell` method. * `ask_generator(self, query)` : The `ask_generator` function is used by the `ask` function. It calls the `tt_entails` function, which in turn returns `True` if the knowledge base entails the query and `False` otherwise. The `ask_generator` itself returns an empty dict `{}` if the knowledge base entails the query and `None` otherwise. This might seem a little bit weird to you. After all, it makes more sense just to return a `True` or a `False` instead of the `{}` or `None`. But this is done to maintain consistency with the way things are in First-Order Logic, where an `ask_generator` function is supposed to return all the substitutions that make the query true. Hence the dict, to return all these substitutions. I will mostly be using the `ask` function, which returns a `{}` or a `False`, but if you don't like this, you can always use the `ask_if_true` function, which returns a `True` or a `False`. * `retract(self, sentence)` : This function removes all the clauses of the given sentence from the knowledge base. Like the `tell` function, you don't have to pass clauses to remove them from the knowledge base; any sentence will do fine. The function will take care of converting that sentence to clauses and then remove those. Wumpus World KB Let us create a `PropKB` for the wumpus world with the sentences mentioned in `section 7.4.3`.
###Code
wumpus_kb = PropKB()
###Output
_____no_output_____
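###Markdown
Before we populate it, here is a quick illustration of the `tell`/`ask_if_true` interface on a throwaway KB, reusing the `P` and `Q` symbols defined earlier. `tell` stores the implication as the clause `Q | ~P`; asking then runs `tt_entails` behind the scenes.
###Code
scratch_kb = PropKB()
scratch_kb.tell(P |'==>'| Q)
scratch_kb.tell(P)
scratch_kb.ask_if_true(Q), scratch_kb.ask_if_true(~Q)   # expect (True, False)
###Output
_____no_output_____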
###Markdown
We define the symbols we use in our clauses. $P_{x, y}$ is true if there is a pit in `[x, y]`. $B_{x, y}$ is true if the agent senses breeze in `[x, y]`.
###Code
P11, P12, P21, P22, P31, B11, B21 = expr('P11, P12, P21, P22, P31, B11, B21')
###Output
_____no_output_____
###Markdown
Now we tell sentences based on `section 7.4.3`. There is no pit in `[1,1]`.
###Code
wumpus_kb.tell(~P11)
###Output
_____no_output_____
###Markdown
A square is breezy if and only if there is a pit in a neighboring square. This has to be stated for each square, but for now we include just the relevant squares.
###Code
wumpus_kb.tell(B11 | '<=>' | ((P12 | P21)))
wumpus_kb.tell(B21 | '<=>' | ((P11 | P22 | P31)))
###Output
_____no_output_____
###Markdown
Now we include the breeze percepts for the first two squares leading up to the situation in `Figure 7.3(b)`
###Code
wumpus_kb.tell(~B11)
wumpus_kb.tell(B21)
###Output
_____no_output_____
###Markdown
We can check the clauses stored in a `KB` by accessing its `clauses` variable
###Code
wumpus_kb.clauses
###Output
_____no_output_____
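###Markdown
We can reproduce the conversion that `tell` performed by calling `to_cnf` on one of the sentences directly:
###Code
to_cnf(B11 | '<=>' | (P12 | P21))
###Output
_____no_output_____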
###Markdown
We see that the equivalence $B_{1, 1} \iff (P_{1, 2} \lor P_{2, 1})$ was automatically converted to two implications which were in turn converted to CNF, which is stored in the `KB`. $B_{1, 1} \iff (P_{1, 2} \lor P_{2, 1})$ was split into $B_{1, 1} \implies (P_{1, 2} \lor P_{2, 1})$ and $B_{1, 1} \Longleftarrow (P_{1, 2} \lor P_{2, 1})$. $B_{1, 1} \implies (P_{1, 2} \lor P_{2, 1})$ was converted to $P_{1, 2} \lor P_{2, 1} \lor \neg B_{1, 1}$. $B_{1, 1} \Longleftarrow (P_{1, 2} \lor P_{2, 1})$ was converted to $\neg (P_{1, 2} \lor P_{2, 1}) \lor B_{1, 1}$, which becomes $(\neg P_{1, 2} \lor B_{1, 1}) \land (\neg P_{2, 1} \lor B_{1, 1})$ after applying De Morgan's laws and distributing the disjunction. $B_{2, 1} \iff (P_{1, 1} \lor P_{2, 2} \lor P_{3, 1})$ is converted in a similar manner. Inference in Propositional Knowledge Base In this section we will look at two algorithms to check if a sentence is entailed by the `KB`. Our goal is to decide whether $\text{KB} \vDash \alpha$ for some sentence $\alpha$. Truth Table Enumeration It is a model-checking approach which, as the name suggests, enumerates all possible models in which the `KB` is true and checks if $\alpha$ is also true in these models. We list the $n$ symbols in the `KB` and enumerate the $2^{n}$ models in a depth-first manner and check the truth of `KB` and $\alpha$.
###Code
psource(tt_check_all)
###Output
_____no_output_____
###Markdown
The algorithm computes every line of the truth table for $KB \implies \alpha$ and checks that it is true everywhere. If symbols remain to be assigned, the routine recursively constructs every combination of truth values for them; otherwise it checks whether `model` is consistent with `kb`. The given models correspond to the lines in the truth table which have a `true` in the KB column, and for these lines it checks whether the query evaluates to true: `result = pl_true(alpha, model)`. In short, `tt_check_all` checks, for each `model`, that `pl_true(kb, model) => pl_true(alpha, model)`, which is logically equivalent to requiring that `pl_true(kb, model) & ~pl_true(alpha, model)` is never true; that is, the knowledge base and the negation of the query are jointly unsatisfiable. `tt_entails()` just extracts the symbols from the query and calls `tt_check_all()` with the proper parameters.
###Code
psource(tt_entails)
###Output
_____no_output_____
###Markdown
Keep in mind that for two symbols P and Q, P => Q is false only when P is `True` and Q is `False`. Example usage of `tt_entails()`:
###Code
tt_entails(P & Q, Q)
###Output
_____no_output_____
###Markdown
P & Q is True only when both P and Q are True. Hence, (P & Q) => Q is True
###Code
tt_entails(P | Q, Q)
tt_entails(P | Q, P)
###Output
_____no_output_____
###Markdown
If we know that P | Q is true, we cannot infer the truth values of P and Q. Hence (P | Q) => Q is False and so is (P | Q) => P.
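Entailment is also directional, which we can check directly with `tt_entails`: knowing Q is enough to conclude P ==> Q, but knowing P is not.
###Code
tt_entails(Q, P |'==>'| Q), tt_entails(P, P |'==>'| Q)
###Output
_____no_output_____
###Markdown
A slightly larger example: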
###Code
(A, B, C, D, E, F, G) = symbols('A, B, C, D, E, F, G')
tt_entails(A & (B | C) & D & E & ~(F | G), A & D & E & ~F & ~G)
###Output
_____no_output_____
###Markdown
We can see that for the KB to be true, A, D, E have to be True and F and G have to be False. Nothing can be said about B or C. Coming back to our problem, note that `tt_entails()` takes an `Expr` which is a conjunction of clauses as the input instead of the `KB` itself. You can use the `ask_if_true()` method of `PropKB` which does all the required conversions. Let's check what `wumpus_kb` tells us about $P_{1, 1}$.
###Code
wumpus_kb.ask_if_true(~P11), wumpus_kb.ask_if_true(P11)
###Output
_____no_output_____
###Markdown
Looking at Figure 7.9 we see that in all models in which the knowledge base is `True`, $P_{1, 1}$ is `False`. It makes sense that `ask_if_true()` returns `True` for $\alpha = \neg P_{1, 1}$ and `False` for $\alpha = P_{1, 1}$. This raises the question: what if $\alpha$ is `True` in only a portion of all models? Do we return `True` or `False`? This doesn't rule out the possibility of $\alpha$ being `True`, but it is not entailed by the `KB`, so we return `False` in such cases. We can see this is the case for $P_{2, 2}$ and $P_{3, 1}$.
###Code
wumpus_kb.ask_if_true(~P22), wumpus_kb.ask_if_true(P22)
###Output
_____no_output_____
###Markdown
Proof by Resolution Recall that our goal is to check whether $\text{KB} \vDash \alpha$, i.e. whether $\text{KB} \implies \alpha$ is true in every model. Suppose we wanted to check if $P \implies Q$ is valid. We check the satisfiability of $\neg (P \implies Q)$, which can be rewritten as $P \land \neg Q$. If $P \land \neg Q$ is unsatisfiable, then $P \implies Q$ must be true in all models. This gives us the result "$\text{KB} \vDash \alpha$ if and only if $\text{KB} \land \neg \alpha$ is unsatisfiable". This technique corresponds to proof by contradiction, a standard mathematical proof technique. We assume $\alpha$ to be false and show that this leads to a contradiction with known axioms in $\text{KB}$. We obtain a contradiction by making valid inferences using inference rules. In this proof we use a single inference rule, resolution, which states $(l_1 \lor \dots \lor l_k) \land (m_1 \lor \dots \lor m_n) \land (l_i \iff \neg m_j) \implies l_1 \lor \dots \lor l_{i - 1} \lor l_{i + 1} \lor \dots \lor l_k \lor m_1 \lor \dots \lor m_{j - 1} \lor m_{j + 1} \lor \dots \lor m_n$. Applying resolution yields a clause, which we add to the KB. We keep doing this until: * There are no new clauses that can be added, in which case $\text{KB} \nvDash \alpha$. * Two clauses resolve to yield the empty clause, in which case $\text{KB} \vDash \alpha$. The empty clause is equivalent to False because it arises only from resolving two complementary unit clauses such as $P$ and $\neg P$, which is a contradiction as both $P$ and $\neg P$ can't be True at the same time. There is one catch, however: the algorithm that implements proof by resolution cannot handle complex sentences. Implications and bi-implications have to be simplified into simpler clauses. We already know that *every sentence of a propositional logic is logically equivalent to a conjunction of clauses*. We will use this fact to our advantage and simplify the input sentence into the **conjunctive normal form** (CNF), which is a conjunction of disjunctions of literals. For example: $$(A\lor B)\land (\neg B\lor C\lor\neg D)\land (D\lor\neg E)$$ This is equivalent to the POS (Product of sums) form in digital electronics. Here's an outline of how the conversion is done: 1. Convert bi-implications to implications: $\alpha\iff\beta$ can be written as $(\alpha\implies\beta)\land(\beta\implies\alpha)$. This also applies to compound sentences: $\alpha\iff(\beta\lor\gamma)$ can be written as $(\alpha\implies(\beta\lor\gamma))\land((\beta\lor\gamma)\implies\alpha)$. 2. Convert implications to their logical equivalents: $\alpha\implies\beta$ can be written as $\neg\alpha\lor\beta$. 3. Move negation inwards: CNF requires atomic literals, so negation cannot appear on a compound statement. De Morgan's laws will be helpful here: $\neg(\alpha\land\beta)\equiv(\neg\alpha\lor\neg\beta)$ and $\neg(\alpha\lor\beta)\equiv(\neg\alpha\land\neg\beta)$. 4. Distribute disjunction over conjunction: Disjunction and conjunction are distributive over each other. Now that we only have conjunctions, disjunctions and negations in our expression, we will distribute disjunctions over conjunctions wherever possible, as this will give us a sentence which is a conjunction of simpler clauses, which is what we wanted in the first place. We need a term of the form $(\alpha_{1}\lor\alpha_{2}\lor\alpha_{3}...)\land(\beta_{1}\lor\beta_{2}\lor\beta_{3}...)\land(\gamma_{1}\lor\gamma_{2}\lor\gamma_{3}...)\land...$ The `to_cnf` function executes this conversion using helper subroutines.
###Code
psource(to_cnf)
###Output
_____no_output_____
###Markdown
`to_cnf` calls three subroutines. `eliminate_implications` converts bi-implications and implications to their logical equivalents. `move_not_inwards` removes negations from compound statements and moves them inwards using De Morgan's laws. `distribute_and_over_or` distributes disjunctions over conjunctions. Run the cell below for implementation details.
###Code
%psource eliminate_implications
%psource move_not_inwards
%psource distribute_and_over_or
###Output
_____no_output_____
###Markdown
Let's convert some sentences to see how it works
###Code
A, B, C, D = expr('A, B, C, D')
to_cnf(A |'<=>'| B)
to_cnf(A |'<=>'| (B & C))
to_cnf(A & (B | (C & D)))
to_cnf((A |'<=>'| ~B) |'==>'| (C | ~D))
###Output
_____no_output_____
###Markdown
Coming back to our resolution problem, we can see how the `to_cnf` function is utilized here
###Code
psource(pl_resolution)
pl_resolution(wumpus_kb, ~P11), pl_resolution(wumpus_kb, P11)
pl_resolution(wumpus_kb, ~P22), pl_resolution(wumpus_kb, P22)
###Output
_____no_output_____
###Markdown
Effective Propositional Model Checking The previous segments elucidate the algorithmic procedure for model checking. In this segment, we look at ways of making it computationally efficient. The problem we are trying to solve is conventionally called the _propositional satisfiability problem_, abbreviated as the _SAT_ problem. In layman's terms, if there exists a model that satisfies a given Boolean formula, the formula is called satisfiable. The SAT problem was the first problem to be proven _NP-complete_. The main characteristics of an NP-complete problem are: - Given a solution to such a problem, it is easy to verify if the solution solves the problem. - The time required to actually solve the problem using any known algorithm increases exponentially with respect to the size of the problem. Due to these properties, heuristic and approximate methods are often applied to find solutions to these problems. It is extremely important to be able to solve large-scale SAT problems efficiently because many combinatorial problems in computer science can be conveniently reduced to checking the satisfiability of a propositional sentence under some constraints. We will introduce two new algorithms that perform propositional model checking in a computationally effective way. 1. DPLL (Davis-Putnam-Logemann-Loveland) algorithm This algorithm is very similar to Backtracking-Search. It recursively enumerates possible models in a depth-first fashion with the following improvements over algorithms like `tt_entails`: 1. Early termination: In certain cases, the algorithm can detect the truth value of a statement using just a partially completed model. For example, $(P\lor Q)\land(P\lor R)$ is true if P is true, regardless of other variables. This reduces the search space significantly. 2. Pure symbol heuristic: A symbol that has the same sign (positive or negative) in all clauses is called a _pure symbol_. It isn't difficult to see that any satisfiable model will have the pure symbols assigned such that their parent clauses become _true_. For example, $(P\lor\neg Q)\land(\neg Q\lor\neg R)\land(R\lor P)$ has P and Q as pure symbols, and for the sentence to be true, P _has_ to be true and Q _has_ to be false. The pure symbol heuristic thus simplifies the problem a bit. 3. Unit clause heuristic: In the context of DPLL, clauses with just one literal and clauses with all but one _false_ literal are called unit clauses. If a clause is a unit clause, it can only be satisfied by assigning the necessary value to make the last literal true. We have no other choice. Assigning one unit clause can create another unit clause. For example, when P is false, $(P\lor Q)$ becomes a unit clause, causing _true_ to be assigned to Q. A series of forced assignments derived from previous unit clauses is called _unit propagation_. In this way, this heuristic simplifies the problem further. The algorithm often employs other tricks to scale up to large problems. However, these tricks are currently out of the scope of this notebook. Refer to section 7.6 of the book for more details. Let's have a look at the algorithm.
###Code
psource(dpll)
###Output
_____no_output_____
###Markdown
The algorithm uses the ideas described above to check satisfiability of a sentence in propositional logic. It recursively calls itself, simplifying the problem at each step. It also uses helper functions `find_pure_symbol` and `find_unit_clause` to carry out steps 2 and 3 above. The `dpll_satisfiable` helper function converts the input clauses to _conjunctive normal form_ and calls the `dpll` function with the correct parameters.
###Code
psource(dpll_satisfiable)
###Output
_____no_output_____
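###Markdown
As a companion to the description of unit clauses above, here is a small sketch (not the library's `find_unit_clause`) of the check, with a clause as a list of signed literal strings and a partial model as a dict:
###Code
def find_unit(clause, model):
    """Return the (symbol, value) forced by a unit clause, or None."""
    unassigned = []
    for lit in clause:
        sym, positive = lit.lstrip('~'), not lit.startswith('~')
        if sym not in model:
            unassigned.append((sym, positive))
        elif model[sym] == positive:
            return None                      # clause already satisfied
    return unassigned[0] if len(unassigned) == 1 else None

find_unit(['P', 'Q'], {'P': False})          # with P false, Q is forced True
###Output
_____no_output_____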
###Markdown
Let's see a few examples of usage.
###Code
A, B, C, D = expr('A, B, C, D')
dpll_satisfiable(A & B & ~C & D)
###Output
_____no_output_____
###Markdown
This is a simple case to highlight that the algorithm actually works.
###Code
dpll_satisfiable((A & B) | (C & ~A) | (B & ~D))
###Output
_____no_output_____
###Markdown
If a particular symbol isn't present in the solution, it means that the solution is independent of the value of that symbol. In this case, the solution is independent of A.
###Code
dpll_satisfiable(A |'<=>'| B)
dpll_satisfiable((A |'<=>'| B) |'==>'| (C & ~A))
dpll_satisfiable((A | (B & C)) |'<=>'| ((A | B) & (A | C)))
###Output
_____no_output_____
###Markdown
2. WalkSAT algorithm This algorithm is very similar to Hill climbing. On every iteration, the algorithm picks an unsatisfied clause and flips a symbol in the clause. This is similar to finding a neighboring state in the `hill_climbing` algorithm. The symbol to be flipped is decided by an evaluation function that counts the number of unsatisfied clauses. Sometimes, symbols are also flipped randomly to avoid local optima. A subtle balance between greediness and randomness is required. Alternatively, some versions of the algorithm restart with a completely new random assignment if no solution has been found for too long, as a way of getting out of local minima in the number of unsatisfied clauses. Let's have a look at the algorithm.
###Code
psource(WalkSAT)
###Output
_____no_output_____
###Markdown
The function takes three arguments: 1. The `clauses` we want to satisfy. 2. The probability `p` of randomly changing a symbol. 3. The maximum number of flips (`max_flips`) the algorithm will run for. If the clauses are still unsatisfied, the algorithm returns `None` to denote failure. The algorithm is identical in concept to Hill climbing and the code isn't difficult to understand. Let's see a few examples of usage.
###Code
A, B, C, D = expr('A, B, C, D')
WalkSAT([A, B, ~C, D], 0.5, 100)
###Output
_____no_output_____
###Markdown
This is a simple case to show that the algorithm converges.
###Code
WalkSAT([A & B, A & C], 0.5, 100)
WalkSAT([A & B, C & D, C & B], 0.5, 100)
WalkSAT([A & B, C | D, ~(D | B)], 0.5, 1000)
###Output
_____no_output_____
###Markdown
This one doesn't give any output because WalkSAT did not find any model in which these clauses hold. We can solve these clauses by hand to see that together they form a contradiction, and hence there is no satisfying model. One point of difference between `WalkSAT` and `dpll_satisfiable` is that they take their inputs differently. For WalkSAT to take complete sentences as input, we can write a helper function that converts the input sentence into conjunctive normal form and then calls WalkSAT with the list of conjuncts of the CNF form of the sentence.
###Code
def WalkSAT_CNF(sentence, p=0.5, max_flips=10000):
    return WalkSAT(conjuncts(to_cnf(sentence)), p, max_flips)  # forward the flip probability p to WalkSAT
###Output
_____no_output_____
###Markdown
Now we can call `WalkSAT_CNF` and `dpll_satisfiable` with the same arguments.
###Code
WalkSAT_CNF((A & B) | (C & ~A) | (B & ~D), 0.5, 1000)
###Output
_____no_output_____
###Markdown
It works! Notice that the solution generated by WalkSAT doesn't omit variables that the sentence doesn't depend upon. If the sentence is independent of a particular variable, the solution contains a random value for that variable because of the stochastic nature of the algorithm. Let's compare the runtime of WalkSAT and DPLL for a few cases. We will use the `%%timeit` magic to do this.
###Code
sentence_1 = A |'<=>'| B
sentence_2 = (A & B) | (C & ~A) | (B & ~D)
sentence_3 = (A | (B & C)) |'<=>'| ((A | B) & (A | C))
%%timeit
dpll_satisfiable(sentence_1)
dpll_satisfiable(sentence_2)
dpll_satisfiable(sentence_3)
%%timeit
WalkSAT_CNF(sentence_1)
WalkSAT_CNF(sentence_2)
WalkSAT_CNF(sentence_3)
###Output
100 loops, best of 3: 1.91 ms per loop
###Markdown
On average, for solvable cases, `WalkSAT` is considerably faster than `dpll` because, for a small number of variables, `WalkSAT` can reduce the search space significantly. Results can be different for sentences with more symbols, though. Feel free to play around with this to understand the trade-offs of these algorithms better. First-Order Logic Knowledge Bases: `FolKB` The class `FolKB` can be used to represent a knowledge base of First-order logic sentences. You would initialize and use it the same way as you would for `PropKB` except that the clauses are first-order definite clauses. We will see how to write such clauses to create a database and query them in the following sections. Criminal KB In this section we create a `FolKB` based on the following paragraph. The law says that it is a crime for an American to sell weapons to hostile nations. The country Nono, an enemy of America, has some missiles, and all of its missiles were sold to it by Colonel West, who is American. The first step is to extract the facts and convert them into first-order definite clauses. Extracting the facts from data alone is a challenging task. Fortunately, we have a small paragraph and can do extraction and conversion manually. We'll store the clauses in a list aptly named `clauses`.
###Code
clauses = []
###Output
_____no_output_____
###Markdown
“... it is a crime for an American to sell weapons to hostile nations” The keywords to look for here are 'crime', 'American', 'sell', 'weapon' and 'hostile'. We use predicate symbols to capture their meaning. * `Criminal(x)`: `x` is a criminal * `American(x)`: `x` is an American * `Sells(x, y, z)`: `x` sells `y` to `z` * `Weapon(x)`: `x` is a weapon * `Hostile(x)`: `x` is a hostile nation. Let us now combine them with appropriate variable naming to depict the meaning of the sentence. The criminal `x` is also the American `x` who sells weapon `y` to `z`, which is a hostile nation. $\text{American}(x) \land \text{Weapon}(y) \land \text{Sells}(x, y, z) \land \text{Hostile}(z) \implies \text{Criminal}(x)$
###Code
clauses.append(expr("(American(x) & Weapon(y) & Sells(x, y, z) & Hostile(z)) ==> Criminal(x)"))
###Output
_____no_output_____
###Markdown
"The country Nono, an enemy of America"We now know that Nono is an enemy of America. We represent these nations using the constant symbols `Nono` and `America`. the enemy relation is show using the predicate symbol `Enemy`.$\text{Enemy}(\text{Nono}, \text{America})$
###Code
clauses.append(expr("Enemy(Nono, America)"))
###Output
_____no_output_____
###Markdown
"Nono ... has some missiles"This states the existence of some missile which is owned by Nono. $\exists x \text{Owns}(\text{Nono}, x) \land \text{Missile}(x)$. We invoke existential instantiation to introduce a new constant `M1` which is the missile owned by Nono.$\text{Owns}(\text{Nono}, \text{M1}), \text{Missile}(\text{M1})$
###Code
clauses.append(expr("Owns(Nono, M1)"))
clauses.append(expr("Missile(M1)"))
###Output
_____no_output_____
###Markdown
"All of its missiles were sold to it by Colonel West"If Nono owns something and it classifies as a missile, then it was sold to Nono by West.$\text{Missile}(x) \land \text{Owns}(\text{Nono}, x) \implies \text{Sells}(\text{West}, x, \text{Nono})$
###Code
clauses.append(expr("(Missile(x) & Owns(Nono, x)) ==> Sells(West, x, Nono)"))
###Output
_____no_output_____
###Markdown
"West, who is American"West is an American.$\text{American}(\text{West})$
###Code
clauses.append(expr("American(West)"))
###Output
_____no_output_____
###Markdown
We also know, from our understanding of language, that missiles are weapons and that an enemy of America counts as “hostile”.$\text{Missile}(x) \implies \text{Weapon}(x), \text{Enemy}(x, \text{America}) \implies \text{Hostile}(x)$
###Code
clauses.append(expr("Missile(x) ==> Weapon(x)"))
clauses.append(expr("Enemy(x, America) ==> Hostile(x)"))
###Output
_____no_output_____
###Markdown
Now that we have converted the information into first-order definite clauses we can create our first-order logic knowledge base.
###Code
crime_kb = FolKB(clauses)
###Output
_____no_output_____
###Markdown
Inference in First-Order Logic In this section we look at a forward chaining and a backward chaining algorithm for `FolKB`. Both of the aforementioned algorithms rely on a process called unification, a key component of all first-order inference algorithms. Unification We sometimes require finding substitutions that make different logical expressions look identical. This process, called unification, is done by the `unify` algorithm. It takes as input two sentences and returns a unifier for them if one exists. A unifier is a dictionary which stores the substitutions required to make the two sentences identical. It does so by recursively unifying the components of a sentence, where the unification of a variable symbol `var` with a constant symbol `Const` is the mapping `{var: Const}`. Let's look at a few examples.
###Code
unify(expr('x'), 3)
unify(expr('A(x)'), expr('A(B)'))
unify(expr('Cat(x) & Dog(Dobby)'), expr('Cat(Bella) & Dog(y)'))
###Output
_____no_output_____
###Markdown
In cases where there is no possible substitution that unifies the two sentences, the function returns `None`.
###Code
print(unify(expr('Cat(x)'), expr('Dog(Dobby)')))
###Output
None
###Markdown
We also need to take care not to use the same variable name in both sentences unintentionally. `unify` treats them as a single variable, which prevents it from taking multiple values.
###Code
print(unify(expr('Cat(x) & Dog(Dobby)'), expr('Cat(Bella) & Dog(x)')))
###Output
None
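###Markdown
Renaming the clashing variable in one of the sentences avoids the problem, since distinct variables can be bound independently (a quick illustration):
###Code
# With 'z' in place of the second 'x', each variable gets its own binding.
print(unify(expr('Cat(x) & Dog(Dobby)'), expr('Cat(Bella) & Dog(z)')))  # expected: {x: Bella, z: Dobby}
###Output
_____no_output_____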
###Markdown
Forward Chaining AlgorithmWe consider the simple forward-chaining algorithm presented in Figure 9.3. We look at each rule in the knowledge base and see if the premises can be satisfied. This is done by finding a substitution which unifies each of the premises with a clause in the `KB`. If we are able to unify the premises, the conclusion (with the corresponding substitution) is added to the `KB`. This inference process is repeated until either the query can be answered or no new sentences can be added. We test if the newly added clause unifies with the query, in which case the substitution yielded by `unify` is an answer to the query. If we run out of sentences to infer, this means the query was a failure.The function `fol_fc_ask` is a generator which yields all substitutions which validate the query.
###Code
%psource fol_fc_ask
###Output
_____no_output_____
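###Markdown
Conceptually, forward chaining is just this loop. Here is a deliberately simplified, self-contained sketch (`fc_ask_sketch` is a hypothetical teaching helper, not the library's `fol_fc_ask`: it skips standardizing variables apart, does no indexing, assumes the facts are ground, and returns only the first answer):
###Code
import itertools
# expr, parse_definite_clause, subst and unify are already in scope via the
# star-imports at the top of the notebook.

def fc_ask_sketch(kb_clauses, query):
    """Naive forward chaining over first-order definite clauses (sketch only)."""
    facts = [c for c in kb_clauses if c.op != '==>']
    rules = [c for c in kb_clauses if c.op == '==>']
    while True:
        new = []
        for rule in rules:
            premises, conclusion = parse_definite_clause(rule)
            # Try every way of pairing the premises with known facts.
            for combo in itertools.product(facts, repeat=len(premises)):
                theta = {}
                for premise, fact in zip(premises, combo):
                    theta = unify(premise, fact, theta)
                    if theta is None:
                        break          # this combination does not match
                if theta is None:
                    continue
                inferred = subst(theta, conclusion)
                if inferred not in facts and inferred not in new:
                    new.append(inferred)
                    answer = unify(inferred, query, {})
                    if answer is not None:
                        return answer  # the newly inferred fact answers the query
        if not new:
            return None                # nothing new can be inferred: failure
        facts.extend(new)

fc_ask_sketch(clauses, expr('Criminal(x)'))  # expected: {x: West}
###Output
_____no_output_____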
###Markdown
Let's find out all the hostile nations. Note that we only told the `KB` that Nono was an enemy of America, not that it was hostile.
###Code
answer = fol_fc_ask(crime_kb, expr('Hostile(x)'))
print(list(answer))
###Output
[{x: Nono}]
###Markdown
The generator returned a single substitution which says that Nono is a hostile nation. See how after adding another enemy nation the generator returns two substitutions.
###Code
crime_kb.tell(expr('Enemy(JaJa, America)'))
answer = fol_fc_ask(crime_kb, expr('Hostile(x)'))
print(list(answer))
###Output
[{x: Nono}, {x: JaJa}]
###Markdown
Note: `fol_fc_ask` makes changes to the `KB` by adding sentences to it. Backward Chaining AlgorithmThis algorithm works backward from the goal, chaining through rules to find known facts that support the proof. Suppose `goal` is the query we want to find the substitution for. We find rules of the form $\text{lhs} \implies \text{goal}$ in the `KB` and try to prove `lhs`. There may be multiple clauses in the `KB` which give multiple `lhs`. It is sufficient to prove only one of these. But to prove an `lhs`, all the conjuncts in the `lhs` of the clause must be proved. This makes it similar to And/Or search. ORThe OR part of the algorithm comes from our choice to select any clause of the form $\text{lhs} \implies \text{goal}$. Looking at the `lhs` of every rule whose `rhs` unifies with the `goal`, we yield a substitution which proves all the conjuncts in the `lhs`. We use `parse_definite_clause` to obtain the `lhs` and `rhs` from a clause of the form $\text{lhs} \implies \text{rhs}$. For atomic facts the `lhs` is an empty list.
###Code
%psource fol_bc_or
###Output
_____no_output_____
###Markdown
ANDThe AND corresponds to proving all the conjuncts in the `lhs`. We need to find a substitution which proves each and every clause in the list of conjuncts.
###Code
%psource fol_bc_and
###Output
_____no_output_____
###Markdown
Now the main function `fol_bc_ask` calls `fol_bc_or` with the substitution initialized as empty. The `ask` method of `FolKB` uses `fol_bc_ask` and fetches the first substitution returned by the generator to answer the query. Let's query the knowledge base we created from `clauses` to find hostile nations.
###Code
# Rebuild KB because running fol_fc_ask would add new facts to the KB
crime_kb = FolKB(clauses)
crime_kb.ask(expr('Hostile(x)'))
###Output
_____no_output_____
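###Markdown
We can also pose the query the knowledge base was built around; backward chaining works from $\text{Criminal}(x)$ back through the rules to the known facts:
###Code
crime_kb.ask(expr('Criminal(x)'))  # expected to bind x to West
###Output
_____no_output_____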
###Markdown
You may notice some new variables in the substitution. They are introduced to standardize the variable names to prevent naming problems as discussed in the [Unification section](#Unification). Appendix: The Implementation of `|'==>'|`Consider the `Expr` formed by this syntax:
###Code
P |'==>'| ~Q
###Output
_____no_output_____
###Markdown
What is the funny `|'==>'|` syntax? The trick is that "`|`" is just the regular Python or-operator, and so is exactly equivalent to this:
###Code
(P | '==>') | ~Q
###Output
_____no_output_____
###Markdown
In other words, there are two applications of or-operators. Here's the first one:
###Code
P | '==>'
###Output
_____no_output_____
###Markdown
What is going on here is that the `__or__` method of `Expr` serves a dual purpose. If the right-hand-side is another `Expr` (or a number), then the result is an `Expr`, as in `(P | Q)`. But if the right-hand-side is a string, then the string is taken to be an operator, and we create a node in the abstract syntax tree corresponding to a partially-filled `Expr`, one where we know the left-hand-side is `P` and the operator is `==>`, but we don't yet know the right-hand-side.The `PartialExpr` class has an `__or__` method that says to create an `Expr` node with the right-hand-side filled in. Here we can see the combination of the `PartialExpr` with `Q` to create a complete `Expr`:
###Code
partial = PartialExpr('==>', P)
partial | ~Q
###Output
_____no_output_____
###Markdown
This [trick](http://code.activestate.com/recipes/384122-infix-operators/) is due to [Ferdinand Jamitzky](http://code.activestate.com/recipes/users/98863/), with a modification by [C. G. Vedant](https://github.com/Chipe1), who suggested using a string inside the or-bars. Appendix: The Implementation of `expr`How does `expr` parse a string into an `Expr`? It turns out there are two tricks (besides the Jamitzky/Vedant trick):
1. We do a string substitution, replacing "`==>`" with "`|'==>'|`" (and likewise for other operators).
2. We `eval` the resulting string in an environment in which every identifier is bound to a symbol with that identifier as the `op`.

In other words,
###Code
expr('~(P & Q) ==> (~P | ~Q)')
###Output
_____no_output_____
###Markdown
is equivalent to doing:
###Code
P, Q = symbols('P, Q')
~(P & Q) |'==>'| (~P | ~Q)
###Output
_____no_output_____
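###Markdown
The following toy version illustrates both tricks (a deliberately simplified sketch; `expr_sketch` is a hypothetical stand-in for the real implementation and handles only the three arrow operators):
###Code
import re
# Symbol is already in scope via the notebook's star-imports.

def expr_sketch(s):
    # Trick 1: rewrite each arrow operator into the |'op'| infix form.
    for op in ('==>', '<==', '<=>'):
        s = s.replace(op, "|'{}'|".format(op))
    # Trick 2: eval the string in an environment where every identifier
    # is bound to a Symbol of the same name.
    names = set(re.findall(r'[A-Za-z_][A-Za-z0-9_]*', s))
    return eval(s, {name: Symbol(name) for name in names})

expr_sketch('~(P & Q) ==> (~P | ~Q)')
###Output
_____no_output_____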
###Markdown
One thing to beware of: this puts `==>` at the same precedence level as `"|"`, which is not quite right. For example, we get this:
###Code
P & Q |'==>'| P | Q
###Output
_____no_output_____
###Markdown
which is probably not what we meant; when in doubt, put in extra parens:
###Code
(P & Q) |'==>'| (P | Q)
###Output
_____no_output_____
###Markdown
Examples
###Code
from notebook import Canvas_fol_bc_ask
canvas_bc_ask = Canvas_fol_bc_ask('canvas_bc_ask', crime_kb, expr('Criminal(x)'))
###Output
_____no_output_____
###Markdown
Logic: `logic.py`; Chapters 6-8 This notebook describes the [logic.py](https://github.com/aimacode/aima-python/blob/master/logic.py) module, which covers Chapters 6 (Logical Agents), 7 (First-Order Logic) and 8 (Inference in First-Order Logic) of *[Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu)*. See the [intro notebook](https://github.com/aimacode/aima-python/blob/master/intro.ipynb) for instructions.We'll start by looking at `Expr`, the data type for logical sentences, and the convenience function `expr`. We'll be covering two types of knowledge bases, `PropKB` - Propositional logic knowledge base and `FolKB` - First order logic knowledge base. We will construct a propositional knowledge base of a specific situation in the Wumpus World. We will next go through the `tt_entails` function and experiment with it a bit. The `pl_resolution` and `pl_fc_entails` functions will come next. We'll study forward chaining and backward chaining algorithms for `FolKB` and use them on `crime_kb` knowledge base.But the first step is to load the code:
###Code
from utils import *
from logic import *
###Output
_____no_output_____
###Markdown
Logical Sentences The `Expr` class is designed to represent any kind of mathematical expression. The simplest type of `Expr` is a symbol, which can be defined with the function `Symbol`:
###Code
Symbol('x')
###Output
_____no_output_____
###Markdown
Or we can define multiple symbols at the same time with the function `symbols`:
###Code
(x, y, P, Q, f) = symbols('x, y, P, Q, f')
###Output
_____no_output_____
###Markdown
We can combine `Expr`s with the regular Python infix and prefix operators. Here's how we would form the logical sentence "P and not Q":
###Code
P & ~Q
###Output
_____no_output_____
###Markdown
This works because the `Expr` class overloads the `&` operator with this definition:```pythondef __and__(self, other): return Expr('&', self, other)``` and does similar overloads for the other operators. An `Expr` has two fields: `op` for the operator, which is always a string, and `args` for the arguments, which is a tuple of 0 or more expressions. By "expression," I mean either an instance of `Expr`, or a number. Let's take a look at the fields for some `Expr` examples:
###Code
sentence = P & ~Q
sentence.op
sentence.args
P.op
P.args
Pxy = P(x, y)
Pxy.op
Pxy.args
###Output
_____no_output_____
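###Markdown
Because the infix operators are just syntactic sugar for the constructor, building the same tree with `Expr` directly yields an equal object (a quick check):
###Code
# The overloaded operators build exactly the tree the constructor would.
Expr('&', P, Expr('~', Q)) == (P & ~Q)  # evaluates to True
###Output
_____no_output_____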
###Markdown
It is important to note that the `Expr` class does not define the *logic* of Propositional Logic sentences; it just gives you a way to *represent* expressions. Think of an `Expr` as an [abstract syntax tree](https://en.wikipedia.org/wiki/Abstract_syntax_tree). Each of the `args` in an `Expr` can be either a symbol, a number, or a nested `Expr`. We can nest these trees to any depth. Here is a deeply nested `Expr`:
###Code
3 * f(x, y) + P(y) / 2 + 1
###Output
_____no_output_____
###Markdown
Operators for Constructing Logical SentencesHere is a table of the operators that can be used to form sentences. Note that we have a problem: we want to use Python operators to make sentences, so that our programs (and our interactive sessions like the one here) will show simple code. But Python does not allow implication arrows as operators, so for now we have to use a more verbose notation that Python does allow: `|'==>'|` instead of just `==>`. Alternately, you can always use the more verbose `Expr` constructor forms:

| Operation | Book | Python Infix Input | Python Output | Python `Expr` Input
|--------------------------|----------------------|-------------------------|---|---|
| Negation | ¬ P | `~P` | `~P` | `Expr('~', P)`
| And | P ∧ Q | `P & Q` | `P & Q` | `Expr('&', P, Q)`
| Or | P ∨ Q | `P` &#124; `Q`| `P` &#124; `Q` | `Expr('`&#124;`', P, Q)`
| Inequality (Xor) | P ≠ Q | `P ^ Q` | `P ^ Q` | `Expr('^', P, Q)`
| Implication | P → Q | `P` &#124;`'==>'`&#124; `Q` | `P ==> Q` | `Expr('==>', P, Q)`
| Reverse Implication | Q ← P | `Q` &#124;`'<=='`&#124; `P` |`Q <== P` | `Expr('<==', Q, P)`
| Equivalence | P ↔ Q | `P` &#124;`'<=>'`&#124; `Q` |`P <=> Q` | `Expr('<=>', P, Q)`

Here's an example of defining a sentence with an implication arrow:
###Code
~(P & Q) |'==>'| (~P | ~Q)
###Output
_____no_output_____
###Markdown
`expr`: a Shortcut for Constructing SentencesIf the `|'==>'|` notation looks ugly to you, you can use the function `expr` instead:
###Code
expr('~(P & Q) ==> (~P | ~Q)')
###Output
_____no_output_____
###Markdown
`expr` takes a string as input, and parses it into an `Expr`. The string can contain arrow operators: `==>`, `<==` and `<=>`, which are handled as if they were regular Python infix operators. And `expr` automatically defines any symbols, so you don't need to pre-define them:
###Code
expr('sqrt(b ** 2 - 4 * a * c)')
###Output
_____no_output_____
###Markdown
For now that's all you need to know about `expr`. If you are interested, we explain the messy details of how `expr` is implemented and how `|'==>'|` is handled in the appendix. Propositional Knowledge Bases: `PropKB`The class `PropKB` can be used to represent a knowledge base of propositional logic sentences.We see that the class `KB` has four methods, apart from `__init__`. A point to note here: the `ask` method simply calls the `ask_generator` method. Thus, this one has already been implemented and what you'll have to actually implement when you create your own knowledge base class (if you want to, though I doubt you'll ever need to; just use the ones we've created for you) will be the `ask_generator` function and not the `ask` function itself.The class `PropKB` now.
* `__init__(self, sentence=None)` : The constructor `__init__` creates a single field `clauses` which will be a list of all the sentences of the knowledge base. Note that each one of these sentences will be a 'clause' i.e. a sentence which is made up of only literals and `or`s.
* `tell(self, sentence)` : When you want to add a sentence to the KB, you use the `tell` method. This method takes a sentence, converts it to its CNF, extracts all the clauses, and adds all these clauses to the `clauses` field. So, you need not worry about `tell`ing only clauses to the knowledge base. You can `tell` the knowledge base a sentence in any form that you wish; converting it to CNF and adding the resulting clauses will be handled by the `tell` method.
* `ask_generator(self, query)` : The `ask_generator` function is used by the `ask` function. It calls the `tt_entails` function, which in turn returns `True` if the knowledge base entails the query and `False` otherwise. The `ask_generator` itself returns an empty dict `{}` if the knowledge base entails the query and `None` otherwise. This might seem a little bit weird to you. After all, it makes more sense just to return a `True` or a `False` instead of the `{}` or `None`. But this is done to maintain consistency with the way things are in First-Order Logic, where an `ask_generator` function is supposed to return all the substitutions that make the query true. Hence the dict, to return all these substitutions. I will mostly be using the `ask` function which returns a `{}` or a `False`, but if you don't like this, you can always use the `ask_if_true` function which returns a `True` or a `False`.
* `retract(self, sentence)` : This function removes all the clauses of the sentence given, from the knowledge base. Like the `tell` function, you don't have to pass clauses to remove them from the knowledge base; any sentence will do fine. The function will take care of converting that sentence to clauses and then remove those.

Wumpus World KBLet us create a `PropKB` for the wumpus world with the sentences mentioned in `section 7.4.3`.
###Code
wumpus_kb = PropKB()
###Output
_____no_output_____
###Markdown
We define the symbols we use in our clauses.$P_{x, y}$ is true if there is a pit in `[x, y]`.$B_{x, y}$ is true if the agent senses breeze in `[x, y]`.
###Code
P11, P12, P21, P22, P31, B11, B21 = expr('P11, P12, P21, P22, P31, B11, B21')
###Output
_____no_output_____
###Markdown
Now we tell sentences based on `section 7.4.3`.There is no pit in `[1,1]`.
###Code
wumpus_kb.tell(~P11)
###Output
_____no_output_____
###Markdown
A square is breezy if and only if there is a pit in a neighboring square. This has to be stated for each square but for now, we include just the relevant squares.
###Code
wumpus_kb.tell(B11 | '<=>' | ((P12 | P21)))
wumpus_kb.tell(B21 | '<=>' | ((P11 | P22 | P31)))
###Output
_____no_output_____
###Markdown
Now we include the breeze percepts for the first two squares leading up to the situation in `Figure 7.3(b)`
###Code
wumpus_kb.tell(~B11)
wumpus_kb.tell(B21)
###Output
_____no_output_____
###Markdown
We can check the clauses stored in a `KB` by accessing its `clauses` variable
###Code
wumpus_kb.clauses
###Output
_____no_output_____
###Markdown
We see that the equivalence $B_{1, 1} \iff (P_{1, 2} \lor P_{2, 1})$ was automatically converted to two implications which were in turn converted to CNF and stored in the `KB`.$B_{1, 1} \iff (P_{1, 2} \lor P_{2, 1})$ was split into $B_{1, 1} \implies (P_{1, 2} \lor P_{2, 1})$ and $B_{1, 1} \Longleftarrow (P_{1, 2} \lor P_{2, 1})$.$B_{1, 1} \implies (P_{1, 2} \lor P_{2, 1})$ was converted to $P_{1, 2} \lor P_{2, 1} \lor \neg B_{1, 1}$.$B_{1, 1} \Longleftarrow (P_{1, 2} \lor P_{2, 1})$ was converted to $\neg (P_{1, 2} \lor P_{2, 1}) \lor B_{1, 1}$ which becomes $(\neg P_{1, 2} \lor B_{1, 1}) \land (\neg P_{2, 1} \lor B_{1, 1})$ after applying De Morgan's laws and distributing the disjunction.$B_{2, 1} \iff (P_{1, 1} \lor P_{2, 2} \lor P_{3, 1})$ is converted in a similar manner. Inference in Propositional Knowledge BaseIn this section we will look at two algorithms to check if a sentence is entailed by the `KB`. Our goal is to decide whether $\text{KB} \vDash \alpha$ for some sentence $\alpha$. Truth Table EnumerationIt is a model-checking approach which, as the name suggests, enumerates all possible models in which the `KB` is true and checks if $\alpha$ is also true in these models. We list the $n$ symbols in the `KB` and enumerate the $2^{n}$ models in a depth-first manner and check the truth of `KB` and $\alpha$.
###Code
%psource tt_check_all
###Output
_____no_output_____
###Markdown
Note that `tt_entails()` takes an `Expr` which is a conjunction of clauses as the input instead of the `KB` itself. You can use the `ask_if_true()` method of `PropKB` which does all the required conversions. Let's check what `wumpus_kb` tells us about $P_{1, 1}$.
###Code
wumpus_kb.ask_if_true(~P11), wumpus_kb.ask_if_true(P11)
###Output
_____no_output_____
###Markdown
Looking at Figure 7.9 we see that in all models in which the knowledge base is `True`, $P_{1, 1}$ is `False`. It makes sense that `ask_if_true()` returns `True` for $\alpha = \neg P_{1, 1}$ and `False` for $\alpha = P_{1, 1}$. This raises the question: what if $\alpha$ is `True` in only a portion of all models? Do we return `True` or `False`? This doesn't rule out the possibility of $\alpha$ being `True` but it is not entailed by the `KB` so we return `False` in such cases. We can see this is the case for $P_{2, 2}$ and $P_{3, 1}$.
###Code
wumpus_kb.ask_if_true(~P22), wumpus_kb.ask_if_true(P22)
###Output
_____no_output_____
###Markdown
Proof by ResolutionRecall that our goal is to check whether $\text{KB} \vDash \alpha$ i.e. is $\text{KB} \implies \alpha$ true in every model. Suppose we wanted to check if $P \implies Q$ is valid. We check the satisfiability of $\neg (P \implies Q)$, which can be rewritten as $P \land \neg Q$. If $P \land \neg Q$ is unsatisfiable, then $P \implies Q$ must be true in all models. This gives us the result "$\text{KB} \vDash \alpha$ if and only if $\text{KB} \land \neg \alpha$ is unsatisfiable".This technique corresponds to proof by contradiction, a standard mathematical proof technique. We assume $\alpha$ to be false and show that this leads to a contradiction with known axioms in $\text{KB}$. We obtain a contradiction by making valid inferences using inference rules. In this proof we use a single inference rule, resolution, which states $(l_1 \lor \dots \lor l_k) \land (m_1 \lor \dots \lor m_n) \land (l_i \iff \neg m_j) \implies l_1 \lor \dots \lor l_{i - 1} \lor l_{i + 1} \lor \dots \lor l_k \lor m_1 \lor \dots \lor m_{j - 1} \lor m_{j + 1} \lor \dots \lor m_n$. Applying resolution yields a clause which we add to the KB. We keep doing this until one of the following happens:
* There are no new clauses that can be added, in which case $\text{KB} \nvDash \alpha$.
* Two clauses resolve to yield the empty clause, in which case $\text{KB} \vDash \alpha$.

The empty clause is equivalent to False because it arises only from resolving two complementary unit clauses such as $P$ and $\neg P$, which is a contradiction as both $P$ and $\neg P$ can't be True at the same time.
###Code
%psource pl_resolution
pl_resolution(wumpus_kb, ~P11), pl_resolution(wumpus_kb, P11)
pl_resolution(wumpus_kb, ~P22), pl_resolution(wumpus_kb, P22)
###Output
_____no_output_____
###Markdown
First-Order Logic Knowledge Bases: `FolKB`The class `FolKB` can be used to represent a knowledge base of First-order logic sentences. You would initialize and use it the same way as you would for `PropKB` except that the clauses are first-order definite clauses. We will see how to write such clauses to create a database and query them in the following sections. Criminal KBIn this section we create a `FolKB` based on the following paragraph.The law says that it is a crime for an American to sell weapons to hostile nations. The country Nono, an enemy of America, has some missiles, and all of its missiles were sold to it by Colonel West, who is American.The first step is to extract the facts and convert them into first-order definite clauses. Extracting the facts from data alone is a challenging task. Fortunately, we have a small paragraph and can do the extraction and conversion manually. We'll store the clauses in a list aptly named `clauses`.
###Code
clauses = []
###Output
_____no_output_____
###Markdown
“... it is a crime for an American to sell weapons to hostile nations”The keywords to look for here are 'crime', 'American', 'sell', 'weapon' and 'hostile'. We introduce a predicate symbol for each of them:
* `Criminal(x)`: `x` is a criminal
* `American(x)`: `x` is an American
* `Sells(x, y, z)`: `x` sells `y` to `z`
* `Weapon(x)`: `x` is a weapon
* `Hostile(x)`: `x` is a hostile nation

Let us now combine them with appropriate variable naming to depict the meaning of the sentence. The criminal `x` is also the American `x` who sells weapon `y` to `z`, which is a hostile nation.$\text{American}(x) \land \text{Weapon}(y) \land \text{Sells}(x, y, z) \land \text{Hostile}(z) \implies \text{Criminal}(x)$
###Code
clauses.append(expr("(American(x) & Weapon(y) & Sells(x, y, z) & Hostile(z)) ==> Criminal(x)"))
###Output
_____no_output_____
###Markdown
"The country Nono, an enemy of America"We now know that Nono is an enemy of America. We represent these nations using the constant symbols `Nono` and `America`. the enemy relation is show using the predicate symbol `Enemy`.$\text{Enemy}(\text{Nono}, \text{America})$
###Code
clauses.append(expr("Enemy(Nono, America)"))
###Output
_____no_output_____
###Markdown
"Nono ... has some missiles"This states the existance of some missile which is owned by Nono. $\exists x \text{Owns}(\text{Nono}, x) \land \text{Missile}(x)$. We invoke existential instantiation to introduce a new constant `M1` which is the missile owned by Nono.$\text{Owns}(\text{Nono}, \text{M1}), \text{Missile}(\text{M1})$
###Code
clauses.append(expr("Owns(Nono, M1)"))
clauses.append(expr("Missile(M1)"))
###Output
_____no_output_____
###Markdown
"All of its missiles were sold to it by Colonel West"If Nono owns something and it classifies as a missile, then it was sold to Nono by West.$\text{Missile}(x) \land \text{Owns}(\text{Nono}, x) \implies \text{Sells}(\text{West}, x, \text{Nono})$
###Code
clauses.append(expr("(Missile(x) & Owns(Nono, x)) ==> Sells(West, x, Nono)"))
###Output
_____no_output_____
###Markdown
"West, who is American"West is an American.$\text{American}(\text{West})$
###Code
clauses.append(expr("American(West)"))
###Output
_____no_output_____
###Markdown
We also know, from our understanding of language, that missiles are weapons and that an enemy of America counts as “hostile”.$\text{Missile}(x) \implies \text{Weapon}(x), \text{Enemy}(x, \text{America}) \implies \text{Hostile}(x)$
###Code
clauses.append(expr("Missile(x) ==> Weapon(x)"))
clauses.append(expr("Enemy(x, America) ==> Hostile(x)"))
###Output
_____no_output_____
###Markdown
Now that we have converted the information into first-order definite clauses we can create our first-order logic knowledge base.
###Code
crime_kb = FolKB(clauses)
###Output
_____no_output_____
###Markdown
Inference in First-Order LogicIn this section we look at a forward chaining and a backward chaining algorithm for `FolKB`. Both the aforementioned algorithms rely on a process called unification, a key component of all first-order inference algorithms. UnificationWe sometimes require finding substitutions that make different logical expressions look identical. This process, called unification, is done by the `unify` algorithm. It takes as input two sentences and returns a unifier for them if one exists. A unifier is a dictionary which stores the substitutions required to make the two sentences identical. It does so by recursively unifying the components of a sentence, where the unification of a variable symbol `var` with a constant symbol `Const` is the mapping `{var: Const}`. Let's look at a few examples.
###Code
unify(expr('x'), 3)
unify(expr('A(x)'), expr('A(B)'))
unify(expr('Cat(x) & Dog(Dobby)'), expr('Cat(Bella) & Dog(y)'))
###Output
_____no_output_____
###Markdown
In cases where there is no possible substitution that unifies the two sentences, the function returns `None`.
###Code
print(unify(expr('Cat(x)'), expr('Dog(Dobby)')))
###Output
None
###Markdown
We also need to take care not to use the same variable name in both sentences unintentionally. `unify` treats them as a single variable, which prevents it from taking multiple values.
###Code
print(unify(expr('Cat(x) & Dog(Dobby)'), expr('Cat(Bella) & Dog(x)')))
###Output
None
###Markdown
Forward Chaining AlgorithmWe consider the simple forward-chaining algorithm presented in Figure 9.3. We look at each rule in the knowledge base and see if the premises can be satisfied. This is done by finding a substitution which unifies each of the premises with a clause in the `KB`. If we are able to unify the premises, the conclusion (with the corresponding substitution) is added to the `KB`. This inference process is repeated until either the query can be answered or no new sentences can be added. We test if the newly added clause unifies with the query, in which case the substitution yielded by `unify` is an answer to the query. If we run out of sentences to infer, this means the query was a failure.The function `fol_fc_ask` is a generator which yields all substitutions which validate the query.
###Code
%psource fol_fc_ask
###Output
_____no_output_____
###Markdown
Let's find out all the hostile nations. Note that we only told the `KB` that Nono was an enemy of America, not that it was hostile.
###Code
answer = fol_fc_ask(crime_kb, expr('Hostile(x)'))
print(list(answer))
###Output
[{x: Nono}]
###Markdown
The generator returned a single substitution which says that Nono is a hostile nation. See how after adding another enemy nation the generator returns two substitutions.
###Code
crime_kb.tell(expr('Enemy(JaJa, America)'))
answer = fol_fc_ask(crime_kb, expr('Hostile(x)'))
print(list(answer))
###Output
[{x: Nono}, {x: JaJa}]
###Markdown
Note: `fol_fc_ask` makes changes to the `KB` by adding sentences to it. Backward Chaining AlgorithmThis algorithm works backward from the goal, chaining through rules to find known facts that support the proof. Suppose `goal` is the query we want to find the substitution for. We find rules of the form $\text{lhs} \implies \text{goal}$ in the `KB` and try to prove `lhs`. There may be multiple clauses in the `KB` which give multiple `lhs`. It is sufficient to prove only one of these. But to prove an `lhs`, all the conjuncts in the `lhs` of the clause must be proved. This makes it similar to And/Or search. ORThe OR part of the algorithm comes from our choice to select any clause of the form $\text{lhs} \implies \text{goal}$. Looking at the `lhs` of every rule whose `rhs` unifies with the `goal`, we yield a substitution which proves all the conjuncts in the `lhs`. We use `parse_definite_clause` to obtain the `lhs` and `rhs` from a clause of the form $\text{lhs} \implies \text{rhs}$. For atomic facts the `lhs` is an empty list.
###Code
%psource fol_bc_or
###Output
_____no_output_____
###Markdown
ANDThe AND corresponds to proving all the conjuncts in the `lhs`. We need to find a substitution which proves each and every clause in the list of conjuncts.
###Code
%psource fol_bc_and
###Output
_____no_output_____
###Markdown
Now the main function `fol_bc_ask` calls `fol_bc_or` with the substitution initialized as empty. The `ask` method of `FolKB` uses `fol_bc_ask` and fetches the first substitution returned by the generator to answer the query. Let's query the knowledge base we created from `clauses` to find hostile nations.
###Code
# Rebuild KB because running fol_fc_ask would add new facts to the KB
crime_kb = FolKB(clauses)
crime_kb.ask(expr('Hostile(x)'))
###Output
_____no_output_____
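###Markdown
`ask` fetches only the first substitution, but since `fol_bc_ask` is a generator we can just as well enumerate every substitution it can prove:
###Code
list(fol_bc_ask(crime_kb, expr('Hostile(x)')))
###Output
_____no_output_____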
###Markdown
You may notice some new variables in the substitution. They are introduced to standardize the variable names to prevent naming problems as discussed in the [Unification section](#Unification). Appendix: The Implementation of `|'==>'|`Consider the `Expr` formed by this syntax:
###Code
P |'==>'| ~Q
###Output
_____no_output_____
###Markdown
What is the funny `|'==>'|` syntax? The trick is that "`|`" is just the regular Python or-operator, and so is exactly equivalent to this:
###Code
(P | '==>') | ~Q
###Output
_____no_output_____
###Markdown
In other words, there are two applications of or-operators. Here's the first one:
###Code
P | '==>'
###Output
_____no_output_____
###Markdown
What is going on here is that the `__or__` method of `Expr` serves a dual purpose. If the right-hand-side is another `Expr` (or a number), then the result is an `Expr`, as in `(P | Q)`. But if the right-hand-side is a string, then the string is taken to be an operator, and we create a node in the abstract syntax tree corresponding to a partially-filled `Expr`, one where we know the left-hand-side is `P` and the operator is `==>`, but we don't yet know the right-hand-side.The `PartialExpr` class has an `__or__` method that says to create an `Expr` node with the right-hand-side filled in. Here we can see the combination of the `PartialExpr` with `Q` to create a complete `Expr`:
###Code
partial = PartialExpr('==>', P)
partial | ~Q
###Output
_____no_output_____
###Markdown
This [trick](http://code.activestate.com/recipes/384122-infix-operators/) is due to [Ferdinand Jamitzky](http://code.activestate.com/recipes/users/98863/), with a modification by [C. G. Vedant](https://github.com/Chipe1), who suggested using a string inside the or-bars. Appendix: The Implementation of `expr`How does `expr` parse a string into an `Expr`? It turns out there are two tricks (besides the Jamitzky/Vedant trick):
1. We do a string substitution, replacing "`==>`" with "`|'==>'|`" (and likewise for other operators).
2. We `eval` the resulting string in an environment in which every identifier is bound to a symbol with that identifier as the `op`.

In other words,
###Code
expr('~(P & Q) ==> (~P | ~Q)')
###Output
_____no_output_____
###Markdown
is equivalent to doing:
###Code
P, Q = symbols('P, Q')
~(P & Q) |'==>'| (~P | ~Q)
###Output
_____no_output_____
###Markdown
One thing to beware of: this puts `==>` at the same precedence level as `"|"`, which is not quite right. For example, we get this:
###Code
P & Q |'==>'| P | Q
###Output
_____no_output_____
###Markdown
which is probably not what we meant; when in doubt, put in extra parens:
###Code
(P & Q) |'==>'| (P | Q)
###Output
_____no_output_____
###Markdown
Examples
###Code
from notebook import Canvas_fol_bc_ask
canvas_bc_ask = Canvas_fol_bc_ask('canvas_bc_ask', crime_kb, expr('Criminal(x)'))
###Output
_____no_output_____
###Markdown
Logic This Jupyter notebook acts as supporting material for topics covered in __Chapter 6 Logical Agents__, __Chapter 7 First-Order Logic__ and __Chapter 8 Inference in First-Order Logic__ of the book *[Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu)*. We make use of the implementations in the [logic.py](https://github.com/aimacode/aima-python/blob/master/logic.py) module. See the [intro notebook](https://github.com/aimacode/aima-python/blob/master/intro.ipynb) for instructions.Let's first import everything from the `logic` module.
###Code
from utils import *
from logic import *
from notebook import psource
###Output
_____no_output_____
###Markdown
CONTENTS
- Logical sentences
    - Expr
    - PropKB
    - Knowledge-based agents
    - Inference in propositional knowledge base
        - Truth table enumeration
        - Proof by resolution
        - Forward and backward chaining
        - DPLL
        - WalkSAT
        - SATPlan
    - FolKB
    - Inference in first order knowledge base
        - Unification
        - Forward chaining algorithm
        - Backward chaining algorithm

Logical Sentences The `Expr` class is designed to represent any kind of mathematical expression. The simplest type of `Expr` is a symbol, which can be defined with the function `Symbol`:
###Code
Symbol('x')
###Output
_____no_output_____
###Markdown
Or we can define multiple symbols at the same time with the function `symbols`:
###Code
(x, y, P, Q, f) = symbols('x, y, P, Q, f')
###Output
_____no_output_____
###Markdown
We can combine `Expr`s with the regular Python infix and prefix operators. Here's how we would form the logical sentence "P and not Q":
###Code
P & ~Q
###Output
_____no_output_____
###Markdown
This works because the `Expr` class overloads the `&` operator with this definition:```pythondef __and__(self, other): return Expr('&', self, other)``` and does similar overloads for the other operators. An `Expr` has two fields: `op` for the operator, which is always a string, and `args` for the arguments, which is a tuple of 0 or more expressions. By "expression," I mean either an instance of `Expr`, or a number. Let's take a look at the fields for some `Expr` examples:
###Code
sentence = P & ~Q
sentence.op
sentence.args
P.op
P.args
Pxy = P(x, y)
Pxy.op
Pxy.args
###Output
_____no_output_____
###Markdown
It is important to note that the `Expr` class does not define the *logic* of Propositional Logic sentences; it just gives you a way to *represent* expressions. Think of an `Expr` as an [abstract syntax tree](https://en.wikipedia.org/wiki/Abstract_syntax_tree). Each of the `args` in an `Expr` can be either a symbol, a number, or a nested `Expr`. We can nest these trees to any depth. Here is a deeply nested `Expr`:
###Code
3 * f(x, y) + P(y) / 2 + 1
###Output
_____no_output_____
###Markdown
Operators for Constructing Logical SentencesHere is a table of the operators that can be used to form sentences. Note that we have a problem: we want to use Python operators to make sentences, so that our programs (and our interactive sessions like the one here) will show simple code. But Python does not allow implication arrows as operators, so for now we have to use a more verbose notation that Python does allow: `|'==>'|` instead of just `==>`. Alternately, you can always use the more verbose `Expr` constructor forms:

| Operation | Book | Python Infix Input | Python Output | Python `Expr` Input
|--------------------------|----------------------|-------------------------|---|---|
| Negation | ¬ P | `~P` | `~P` | `Expr('~', P)`
| And | P ∧ Q | `P & Q` | `P & Q` | `Expr('&', P, Q)`
| Or | P ∨ Q | `P` &#124; `Q`| `P` &#124; `Q` | `Expr('`&#124;`', P, Q)`
| Inequality (Xor) | P ≠ Q | `P ^ Q` | `P ^ Q` | `Expr('^', P, Q)`
| Implication | P → Q | `P` &#124;`'==>'`&#124; `Q` | `P ==> Q` | `Expr('==>', P, Q)`
| Reverse Implication | Q ← P | `Q` &#124;`'<=='`&#124; `P` |`Q <== P` | `Expr('<==', Q, P)`
| Equivalence | P ↔ Q | `P` &#124;`'<=>'`&#124; `Q` |`P <=> Q` | `Expr('<=>', P, Q)`

Here's an example of defining a sentence with an implication arrow:
###Code
~(P & Q) |'==>'| (~P | ~Q)
###Output
_____no_output_____
###Markdown
`expr`: a Shortcut for Constructing SentencesIf the `|'==>'|` notation looks ugly to you, you can use the function `expr` instead:
###Code
expr('~(P & Q) ==> (~P | ~Q)')
###Output
_____no_output_____
###Markdown
`expr` takes a string as input, and parses it into an `Expr`. The string can contain arrow operators: `==>`, `<==` and `<=>`, which are handled as if they were regular Python infix operators. And `expr` automatically defines any symbols, so you don't need to pre-define them:
###Code
expr('sqrt(b ** 2 - 4 * a * c)')
###Output
_____no_output_____
###Markdown
For now that's all you need to know about `expr`. If you are interested, we explain the messy details of how `expr` is implemented and how `|'==>'|` is handled in the appendix. Propositional Knowledge Bases: `PropKB`The class `PropKB` can be used to represent a knowledge base of propositional logic sentences.We see that the class `KB` has four methods, apart from `__init__`. A point to note here: the `ask` method simply calls the `ask_generator` method. Thus, this one has already been implemented, and what you'll have to actually implement when you create your own knowledge base class (though you'll probably never need to, considering the ones we've created for you) will be the `ask_generator` function and not the `ask` function itself.The class `PropKB` now.
* `__init__(self, sentence=None)` : The constructor `__init__` creates a single field `clauses` which will be a list of all the sentences of the knowledge base. Note that each one of these sentences will be a 'clause' i.e. a sentence which is made up of only literals and `or`s.
* `tell(self, sentence)` : When you want to add a sentence to the KB, you use the `tell` method. This method takes a sentence, converts it to its CNF, extracts all the clauses, and adds all these clauses to the `clauses` field. So, you need not worry about `tell`ing only clauses to the knowledge base. You can `tell` the knowledge base a sentence in any form that you wish; converting it to CNF and adding the resulting clauses will be handled by the `tell` method.
* `ask_generator(self, query)` : The `ask_generator` function is used by the `ask` function. It calls the `tt_entails` function, which in turn returns `True` if the knowledge base entails the query and `False` otherwise. The `ask_generator` itself returns an empty dict `{}` if the knowledge base entails the query and `None` otherwise. This might seem a little bit weird to you. After all, it makes more sense just to return a `True` or a `False` instead of the `{}` or `None`. But this is done to maintain consistency with the way things are in First-Order Logic, where an `ask_generator` function is supposed to return all the substitutions that make the query true. Hence the dict, to return all these substitutions. I will mostly be using the `ask` function which returns a `{}` or a `False`, but if you don't like this, you can always use the `ask_if_true` function which returns a `True` or a `False`.
* `retract(self, sentence)` : This function removes all the clauses of the sentence given, from the knowledge base. Like the `tell` function, you don't have to pass clauses to remove them from the knowledge base; any sentence will do fine. The function will take care of converting that sentence to clauses and then remove those.

Wumpus World KBLet us create a `PropKB` for the wumpus world with the sentences mentioned in `section 7.4.3`.
###Code
wumpus_kb = PropKB()
###Output
_____no_output_____
###Markdown
We define the symbols we use in our clauses.$P_{x, y}$ is true if there is a pit in `[x, y]`.$B_{x, y}$ is true if the agent senses breeze in `[x, y]`.
###Code
P11, P12, P21, P22, P31, B11, B21 = expr('P11, P12, P21, P22, P31, B11, B21')
###Output
_____no_output_____
###Markdown
Now we tell sentences based on `section 7.4.3`.There is no pit in `[1,1]`.
###Code
wumpus_kb.tell(~P11)
###Output
_____no_output_____
###Markdown
A square is breezy if and only if there is a pit in a neighboring square. This has to be stated for each square but for now, we include just the relevant squares.
###Code
wumpus_kb.tell(B11 | '<=>' | ((P12 | P21)))
wumpus_kb.tell(B21 | '<=>' | ((P11 | P22 | P31)))
###Output
_____no_output_____
###Markdown
Now we include the breeze percepts for the first two squares leading up to the situation in `Figure 7.3(b)`
###Code
wumpus_kb.tell(~B11)
wumpus_kb.tell(B21)
###Output
_____no_output_____
###Markdown
We can check the clauses stored in a `KB` by accessing its `clauses` variable
###Code
wumpus_kb.clauses
###Output
_____no_output_____
###Markdown
We see that the equivalence $B_{1, 1} \iff (P_{1, 2} \lor P_{2, 1})$ was automatically converted to two implications which were in turn converted to CNF and stored in the `KB`.$B_{1, 1} \iff (P_{1, 2} \lor P_{2, 1})$ was split into $B_{1, 1} \implies (P_{1, 2} \lor P_{2, 1})$ and $B_{1, 1} \Longleftarrow (P_{1, 2} \lor P_{2, 1})$.$B_{1, 1} \implies (P_{1, 2} \lor P_{2, 1})$ was converted to $P_{1, 2} \lor P_{2, 1} \lor \neg B_{1, 1}$.$B_{1, 1} \Longleftarrow (P_{1, 2} \lor P_{2, 1})$ was converted to $\neg (P_{1, 2} \lor P_{2, 1}) \lor B_{1, 1}$ which becomes $(\neg P_{1, 2} \lor B_{1, 1}) \land (\neg P_{2, 1} \lor B_{1, 1})$ after applying De Morgan's laws and distributing the disjunction.$B_{2, 1} \iff (P_{1, 1} \lor P_{2, 2} \lor P_{3, 1})$ is converted in a similar manner. Knowledge based agents A knowledge-based agent is a simple generic agent that maintains and handles a knowledge base.The knowledge base may initially contain some background knowledge.The purpose of a KB agent is to provide a level of abstraction over knowledge-base manipulation and is to be used as a base class for agents that work on a knowledge base.Given a percept, the KB agent adds the percept to its knowledge base, asks the knowledge base for the best action, and tells the knowledge base that it has in fact taken that action.Our implementation of `KB-Agent` is encapsulated in `KB_AgentProgram`, which takes a knowledge base as its argument.Let's have a look.
###Code
psource(KB_AgentProgram)
###Output
_____no_output_____
###Markdown
The helper functions `make_percept_sentence`, `make_action_query` and `make_action_sentence` are all aptly named and do just what they say: `make_percept_sentence` makes first-order logic sentences about percepts we want our agent to receive, `make_action_query` asks the underlying `KB` about the action that should be taken, and `make_action_sentence` tells the underlying `KB` about the action it has just taken. Inference in Propositional Knowledge BaseIn this section we will look at two algorithms to check if a sentence is entailed by the `KB`. Our goal is to decide whether $\text{KB} \vDash \alpha$ for some sentence $\alpha$. Truth Table EnumerationIt is a model-checking approach which, as the name suggests, enumerates all possible models in which the `KB` is true and checks if $\alpha$ is also true in these models. We list the $n$ symbols in the `KB` and enumerate the $2^{n}$ models in a depth-first manner and check the truth of `KB` and $\alpha$.
###Code
psource(tt_check_all)
###Output
_____no_output_____
###Markdown
The algorithm basically computes every line of the truth table $KB\implies \alpha$ and checks if it is true everywhere. If symbols are defined, the routine recursively constructs every combination of truth values for the symbols and then checks whether `model` is consistent with `kb`. The given models correspond to the lines in the truth table which have a `true` in the KB column, and for these lines it checks whether the query evaluates to true: `result = pl_true(alpha, model)`. In short, `tt_check_all` evaluates this logical expression for each `model`: `pl_true(kb, model) => pl_true(alpha, model)`, which is logically equivalent to requiring that `pl_true(kb, model) & ~pl_true(alpha, model)` is always false, that is, that the knowledge base and the negation of the query are logically inconsistent. `tt_entails()` just extracts the symbols from the query and calls `tt_check_all()` with the proper parameters.
###Code
psource(tt_entails)
###Output
_____no_output_____
###Markdown
Keep in mind that for two symbols P and Q, P => Q is false only when P is `True` and Q is `False`.Example usage of `tt_entails()`:
###Code
tt_entails(P & Q, Q)
###Output
_____no_output_____
###Markdown
P & Q is True only when both P and Q are True. Hence, (P & Q) => Q is True
###Code
tt_entails(P | Q, Q)
tt_entails(P | Q, P)
###Output
_____no_output_____
###Markdown
If we know that P | Q is true, we cannot infer the truth values of P and Q. Hence (P | Q) => Q is False and so is (P | Q) => P.
###Code
(A, B, C, D, E, F, G) = symbols('A, B, C, D, E, F, G')
tt_entails(A & (B | C) & D & E & ~(F | G), A & D & E & ~F & ~G)
###Output
_____no_output_____
###Markdown
We can see that for the KB to be true, A, D, E have to be True and F and G have to be False.Nothing can be said about B or C. Coming back to our problem, note that `tt_entails()` takes an `Expr` which is a conjunction of clauses as the input instead of the `KB` itself. You can use the `ask_if_true()` method of `PropKB` which does all the required conversions. Let's check what `wumpus_kb` tells us about $P_{1, 1}$.
###Code
wumpus_kb.ask_if_true(~P11), wumpus_kb.ask_if_true(P11)
###Output
_____no_output_____
###Markdown
Looking at Figure 7.9 we see that in all models in which the knowledge base is `True`, $P_{1, 1}$ is `False`. It makes sense that `ask_if_true()` returns `True` for $\alpha = \neg P_{1, 1}$ and `False` for $\alpha = P_{1, 1}$. This raises the question: what if $\alpha$ is `True` in only a portion of all models? Do we return `True` or `False`? This doesn't rule out the possibility of $\alpha$ being `True` but it is not entailed by the `KB` so we return `False` in such cases. We can see this is the case for $P_{2, 2}$ and $P_{3, 1}$.
###Code
wumpus_kb.ask_if_true(~P22), wumpus_kb.ask_if_true(P22)
###Output
_____no_output_____
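###Markdown
As an aside, the `tell`/`retract`/`ask_if_true` interface described earlier accepts arbitrary sentences, which we can verify on a scratch knowledge base (a small illustrative example, separate from `wumpus_kb`):
###Code
scratch_kb = PropKB()
scratch_kb.tell(expr('(A & B) ==> C'))     # stored in CNF as C | ~A | ~B
scratch_kb.tell(expr('A'))
scratch_kb.tell(expr('B'))
print(scratch_kb.ask_if_true(expr('C')))   # True: A, B and the rule entail C
scratch_kb.retract(expr('B'))
print(scratch_kb.ask_if_true(expr('C')))   # False: without B, C is not entailed
###Output
_____no_output_____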
###Markdown
Proof by ResolutionRecall that our goal is to check whether $\text{KB} \vDash \alpha$ i.e. is $\text{KB} \implies \alpha$ true in every model. Suppose we wanted to check if $P \implies Q$ is valid. We check the satisfiability of $\neg (P \implies Q)$, which can be rewritten as $P \land \neg Q$. If $P \land \neg Q$ is unsatisfiable, then $P \implies Q$ must be true in all models. This gives us the result "$\text{KB} \vDash \alpha$ if and only if $\text{KB} \land \neg \alpha$ is unsatisfiable".This technique corresponds to proof by contradiction, a standard mathematical proof technique. We assume $\alpha$ to be false and show that this leads to a contradiction with known axioms in $\text{KB}$. We obtain a contradiction by making valid inferences using inference rules. In this proof we use a single inference rule, resolution, which states $(l_1 \lor \dots \lor l_k) \land (m_1 \lor \dots \lor m_n) \land (l_i \iff \neg m_j) \implies l_1 \lor \dots \lor l_{i - 1} \lor l_{i + 1} \lor \dots \lor l_k \lor m_1 \lor \dots \lor m_{j - 1} \lor m_{j + 1} \lor \dots \lor m_n$. Applying resolution yields a clause which we add to the KB. We keep doing this until:
* There are no new clauses that can be added, in which case $\text{KB} \nvDash \alpha$.
* Two clauses resolve to yield the empty clause, in which case $\text{KB} \vDash \alpha$.

The empty clause is equivalent to False because it arises only from resolving two complementary unit clauses such as $P$ and $\neg P$, which is a contradiction as both $P$ and $\neg P$ can't be True at the same time. There is one catch, however: the algorithm that implements proof by resolution cannot handle complex sentences. Implications and bi-implications have to be simplified into simpler clauses. We already know that *every sentence of propositional logic is logically equivalent to a conjunction of clauses*. We will use this fact to our advantage and simplify the input sentence into the **conjunctive normal form** (CNF), which is a conjunction of disjunctions of literals. For example:$$(A\lor B)\land (\neg B\lor C\lor\neg D)\land (D\lor\neg E)$$This is equivalent to the POS (product of sums) form in digital electronics.Here's an outline of how the conversion is done:
1. Convert bi-implications to implications: $\alpha\iff\beta$ can be written as $(\alpha\implies\beta)\land(\beta\implies\alpha)$. This also applies to compound sentences: $\alpha\iff(\beta\lor\gamma)$ can be written as $(\alpha\implies(\beta\lor\gamma))\land((\beta\lor\gamma)\implies\alpha)$.
2. Convert implications to their logical equivalents: $\alpha\implies\beta$ can be written as $\neg\alpha\lor\beta$.
3. Move negation inwards: CNF requires atomic literals, so negation cannot appear on a compound statement. De Morgan's laws will be helpful here: $\neg(\alpha\land\beta)\equiv(\neg\alpha\lor\neg\beta)$ and $\neg(\alpha\lor\beta)\equiv(\neg\alpha\land\neg\beta)$.
4. Distribute disjunction over conjunction: disjunction and conjunction are distributive over each other. Now that we only have conjunctions, disjunctions and negations in our expression, we distribute disjunctions over conjunctions wherever possible, as this gives us a sentence which is a conjunction of simpler clauses, which is what we wanted in the first place. We need a term of the form $(\alpha_{1}\lor\alpha_{2}\lor\alpha_{3}...)\land(\beta_{1}\lor\beta_{2}\lor\beta_{3}...)\land(\gamma_{1}\lor\gamma_{2}\lor\gamma_{3}...)\land...$

The `to_cnf` function executes this conversion using helper subroutines.
###Code
psource(to_cnf)
###Output
_____no_output_____
###Markdown
`to_cnf` calls three subroutines.`eliminate_implications` converts bi-implications and implications to their logical equivalents.`move_not_inwards` removes negations from compound statements and moves them inwards using De Morgan's laws.`distribute_and_over_or` distributes disjunctions over conjunctions.Run the cell below for implementation details.
###Code
psource(eliminate_implications)
psource(move_not_inwards)
psource(distribute_and_over_or)
###Output
_____no_output_____
###Markdown
Let's convert some sentences to see how it works
###Code
A, B, C, D = expr('A, B, C, D')
to_cnf(A |'<=>'| B)
to_cnf(A |'<=>'| (B & C))
to_cnf(A & (B | (C & D)))
to_cnf((A |'<=>'| ~B) |'==>'| (C | ~D))
###Output
_____no_output_____
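###Markdown
We can also trace the pipeline one subroutine at a time on the first example; the comments show the intermediate result each step should produce:
###Code
s = A |'<=>'| B
s1 = eliminate_implications(s)  # (A | ~B) & (B | ~A)
s2 = move_not_inwards(s1)       # unchanged here: negations already sit on literals
distribute_and_over_or(s2)      # same result as to_cnf(s)
###Output
_____no_output_____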
###Markdown
Coming back to our resolution problem, we can see how the `to_cnf` function is utilized here
###Code
psource(pl_resolution)
pl_resolution(wumpus_kb, ~P11), pl_resolution(wumpus_kb, P11)
pl_resolution(wumpus_kb, ~P22), pl_resolution(wumpus_kb, P22)
###Output
_____no_output_____
###Markdown
Forward and backward chainingPreviously, we said we would look at two algorithms to check if a sentence is entailed by the `KB`. Here's a third one. The difference here is that our goal now is to determine if a knowledge base of definite clauses entails a single proposition symbol *q*, the query. There is a catch, however: the knowledge base can only contain **Horn clauses**. Horn ClausesHorn clauses can be defined as a *disjunction* of *literals* with **at most** one positive literal. A Horn clause with exactly one positive literal is called a *definite clause*.A Horn clause might look like $\neg a\lor\neg b\lor\neg c\lor\neg d... \lor z$. This, coincidentally, is also a definite clause.Using De Morgan's laws, the example above can be simplified to $a\land b\land c\land d ... \implies z$. This seems like a logical representation of how humans process known data and facts. Assuming percepts `a`, `b`, `c`, `d` ... to be true simultaneously, we can infer `z` to also be true at that point in time. There are some interesting aspects of Horn clauses that make algorithmic inference or *resolution* easier.
- Definite clauses can be written as implications: the most important simplification a definite clause provides is that it can be written as an implication. The premise (or the knowledge that leads to the implication) is a conjunction of positive literals. The conclusion (the implied statement) is also a positive literal. The sentence thus becomes easier to understand. The premise and the conclusion are conventionally called the *body* and the *head* respectively. A single positive literal is called a *fact*.
- Forward chaining and backward chaining can be used for inference from Horn clauses: forward chaining is semantically identical to `AND-OR-Graph-Search` from the chapter on search algorithms. Implementational details will be explained shortly.
- Deciding entailment with Horn clauses is linear in the size of the knowledge base: surprisingly, the forward and backward chaining algorithms traverse each element of the knowledge base at most once, greatly simplifying the problem.

The function `pl_fc_entails` implements forward chaining to see if a knowledge base `KB` entails a symbol `q`.Before we proceed further, note that `pl_fc_entails` doesn't use an ordinary `KB` instance. The knowledge base here is an instance of the `PropDefiniteKB` class, derived from the `PropKB` class, but modified to store definite clauses.The main point of difference is the inclusion in `PropDefiniteKB` of a helper method that returns a list of clauses in KB that have a given symbol `p` in their premise.
###Code
psource(PropDefiniteKB.clauses_with_premise)
###Output
_____no_output_____
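###Markdown
The body/head split described above is exactly what the `parse_definite_clause` helper in `logic.py` computes; a quick demonstration:
###Code
# A definite clause splits into its body (list of premises) and its head.
parse_definite_clause(expr('(B & F) ==> E'))  # expected: ([B, F], E)
###Output
_____no_output_____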
###Markdown
Let's now have a look at the `pl_fc_entails` algorithm.
###Code
psource(pl_fc_entails)
###Output
_____no_output_____
###Markdown
The function accepts a knowledge base `KB` (an instance of `PropDefiniteKB`) and a query `q` as inputs.`count` initially stores the number of symbols in the premise of each sentence in the knowledge base.The `conjuncts` helper function separates a given sentence at conjunctions.`inferred` is initialized as a *boolean* defaultdict. This will be used later to check if we have inferred all premises of each clause of the agenda.`agenda` initially stores a list of clauses that the knowledge base knows to be true.The `is_prop_symbol` helper function checks if the given symbol is a valid propositional logic symbol.We now iterate through `agenda`, popping a symbol `p` on each iteration.If the query `q` is the same as `p`, we know that entailment holds.The agenda is processed, reducing `count` by one for each implication with a premise `p`.A conclusion is added to the agenda when `count` reaches zero. This means we know all the premises of that particular implication to be true.`clauses_with_premise` is a helpful method of the `PropKB` class.It returns a list of clauses in the knowledge base that have `p` in their premise.Now that we have an idea of how this function works, let's see a few examples of its usage, but we first need to define our knowledge base. We assume we know the following clauses to be true.
###Code
clauses = ['(B & F)==>E',
'(A & E & F)==>G',
'(B & C)==>F',
'(A & B)==>D',
'(E & F)==>H',
'(H & I)==>J',
'A',
'B',
'C']
###Output
_____no_output_____
###Markdown
We will now `tell` this information to our knowledge base.
###Code
definite_clauses_KB = PropDefiniteKB()
for clause in clauses:
definite_clauses_KB.tell(expr(clause))
###Output
_____no_output_____
###Markdown
We can now check if our knowledge base entails the following queries.
###Code
pl_fc_entails(definite_clauses_KB, expr('G'))
pl_fc_entails(definite_clauses_KB, expr('H'))
pl_fc_entails(definite_clauses_KB, expr('I'))
pl_fc_entails(definite_clauses_KB, expr('J'))
###Output
_____no_output_____
###Markdown
Effective Propositional Model CheckingThe previous segments elucidate the algorithmic procedure for model checking. In this segment, we look at ways of making them computationally efficient.The problem we are trying to solve is conventionally called the _propositional satisfiability problem_, abbreviated as the _SAT_ problem.In layman terms, if there exists a model that satisfies a given Boolean formula, the formula is called satisfiable.The SAT problem was the first problem to be proven _NP-complete_.The main characteristics of an NP-complete problem are:- Given a solution to such a problem, it is easy to verify if the solution solves the problem.- The time required to actually solve the problem using any known algorithm increases exponentially with respect to the size of the problem.Due to these properties, heuristic and approximational methods are often applied to find solutions to these problems.It is extremely important to be able to solve large scale SAT problems efficiently because many combinatorial problems in computer science can be conveniently reduced to checking the satisfiability of a propositional sentence under some constraints.We will introduce two new algorithms that perform propositional model checking in a computationally effective way. 1. DPLL (Davis-Putnam-Logeman-Loveland) algorithmThis algorithm is very similar to Backtracking-Search.It recursively enumerates possible models in a depth-first fashion with the following improvements over algorithms like `tt_entails`:1. Early termination:In certain cases, the algorithm can detect the truth value of a statement using just a partially completed model.For example, $(P\lor Q)\land(P\lor R)$ is true if P is true, regardless of other variables.This reduces the search space significantly.2. Pure symbol heuristic:A symbol that has the same sign (positive or negative) in all clauses is called a _pure symbol_.It isn't difficult to see that any satisfiable model will have the pure symbols assigned such that its parent clause becomes _true_.For example, $(P\lor\neg Q)\land(\neg Q\lor\neg R)\land(R\lor P)$ has P and Q as pure symbolsand for the sentence to be true, P _has_ to be true and Q _has_ to be false.The pure symbol heuristic thus simplifies the problem a bit.3. Unit clause heuristic:In the context of DPLL, clauses with just one literal and clauses with all but one _false_ literals are called unit clauses.If a clause is a unit clause, it can only be satisfied by assigning the necessary value to make the last literal true.We have no other choice.Assigning one unit clause can create another unit clause.For example, when P is false, $(P\lor Q)$ becomes a unit clause, causing _true_ to be assigned to Q.A series of forced assignments derived from previous unit clauses is called _unit propagation_.In this way, this heuristic simplifies the problem further.The algorithm often employs other tricks to scale up to large problems.However, these tricks are currently out of the scope of this notebook. Refer to section 7.6 of the book for more details.Let's have a look at the algorithm.
###Code
psource(dpll)
###Output
_____no_output_____
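###Markdown
To make early termination, the pure symbol heuristic, and unit propagation concrete, here is a minimal self-contained DPLL sketch over a toy CNF representation: a list of clauses, each a list of `(symbol, sign)` pairs, where the sign is the polarity of the literal. It illustrates the recursion and is not the module's `dpll`.
```python
def dpll_sketch(clauses, assignment=None):
    """Return a satisfying {symbol: bool} assignment, or None."""
    assignment = dict(assignment or {})
    simplified = []
    for clause in clauses:
        if any(assignment.get(s) == v for s, v in clause):
            continue                       # clause already satisfied
        remaining = [(s, v) for s, v in clause if s not in assignment]
        if not remaining:
            return None                    # clause falsified: backtrack
        simplified.append(remaining)
    if not simplified:
        return assignment                  # early termination: all satisfied
    for clause in simplified:              # unit clause heuristic
        if len(clause) == 1:
            s, v = clause[0]
            return dpll_sketch(simplified, {**assignment, s: v})
    signs = {}
    for clause in simplified:              # pure symbol heuristic
        for s, v in clause:
            signs.setdefault(s, set()).add(v)
    for s, vs in signs.items():
        if len(vs) == 1:
            return dpll_sketch(simplified, {**assignment, s: vs.pop()})
    s = simplified[0][0][0]                # otherwise branch on a symbol
    return (dpll_sketch(simplified, {**assignment, s: True})
            or dpll_sketch(simplified, {**assignment, s: False}))

# (P or Q) and (not P or Q) is satisfied by making Q true:
dpll_sketch([[('P', True), ('Q', True)], [('P', False), ('Q', True)]])
```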
###Markdown
The algorithm uses the ideas described above to check satisfiability of a sentence in propositional logic. It recursively calls itself, simplifying the problem at each step. It also uses helper functions `find_pure_symbol` and `find_unit_clause` to carry out steps 2 and 3 above. The `dpll_satisfiable` helper function converts the input sentence to conjunctive normal form and calls the `dpll` function with the correct parameters.
###Code
psource(dpll_satisfiable)
###Output
_____no_output_____
###Markdown
Let's see a few examples of usage.
###Code
A, B, C, D = expr('A, B, C, D')
dpll_satisfiable(A & B & ~C & D)
###Output
_____no_output_____
###Markdown
This is a simple case to highlight that the algorithm actually works.
###Code
dpll_satisfiable((A & B) | (C & ~A) | (B & ~D))
###Output
_____no_output_____
###Markdown
If a particular symbol isn't present in the solution, it means that the solution is independent of the value of that symbol.In this case, the solution is independent of A.
###Code
dpll_satisfiable(A |'<=>'| B)
dpll_satisfiable((A |'<=>'| B) |'==>'| (C & ~A))
dpll_satisfiable((A | (B & C)) |'<=>'| ((A | B) & (A | C)))
###Output
_____no_output_____
###Markdown
2. WalkSAT algorithm

This algorithm is very similar to Hill climbing. On every iteration, the algorithm picks an unsatisfied clause and flips a symbol in the clause. This is similar to finding a neighboring state in the `hill_climbing` algorithm. The symbol to be flipped is decided by an evaluation function that counts the number of unsatisfied clauses. Sometimes, symbols are also flipped randomly to avoid local optima. A subtle balance between greediness and randomness is required. Alternatively, some versions of the algorithm restart with a completely new random assignment if no solution has been found for too long, as a way of escaping local minima in the number of unsatisfied clauses.

Let's have a look at the algorithm.
###Code
psource(WalkSAT)
###Output
_____no_output_____
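###Markdown
As a complement to the source above, here is a minimal self-contained sketch of the same flip loop, reusing the toy `(symbol, sign)` CNF representation from the DPLL sketch; it illustrates the balance between the greedy and random moves and is not the module's `WalkSAT`.
```python
import random

def walksat_sketch(clauses, p=0.5, max_flips=1000):
    syms = {s for clause in clauses for s, _ in clause}
    model = {s: random.choice([True, False]) for s in syms}
    for _ in range(max_flips):
        unsatisfied = [c for c in clauses
                       if not any(model[s] == v for s, v in c)]
        if not unsatisfied:
            return model                     # every clause is satisfied
        clause = random.choice(unsatisfied)
        if random.random() < p:
            sym = random.choice(clause)[0]   # random-walk move
        else:
            def satisfied_after_flipping(s):
                model[s] = not model[s]      # tentatively flip ...
                n = sum(any(model[q] == v for q, v in c) for c in clauses)
                model[s] = not model[s]      # ... and flip back
                return n
            # greedy move: flip the symbol that satisfies the most clauses
            sym = max((s for s, _ in clause), key=satisfied_after_flipping)
        model[sym] = not model[sym]
    return None                              # give up after max_flips
```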
###Markdown
The function takes three arguments:

1. The `clauses` we want to satisfy.
2. The probability `p` of randomly changing a symbol.
3. The maximum number of flips (`max_flips`) the algorithm will run for. If the clauses are still unsatisfied, the algorithm returns `None` to denote failure.

The algorithm is identical in concept to Hill climbing and the code isn't difficult to understand.

Let's see a few examples of usage.
###Code
A, B, C, D = expr('A, B, C, D')
WalkSAT([A, B, ~C, D], 0.5, 100)
###Output
_____no_output_____
###Markdown
This is a simple case to show that the algorithm converges.
###Code
WalkSAT([A & B, A & C], 0.5, 100)
WalkSAT([A & B, C & D, C & B], 0.5, 100)
WalkSAT([A & B, C | D, ~(D | B)], 0.5, 1000)
###Output
_____no_output_____
###Markdown
This one doesn't give any output because WalkSAT did not find any model where these clauses hold. We can solve these clauses to see that they together form a contradiction and hence, it isn't supposed to have a solution. One point of difference between this algorithm and the `dpll_satisfiable` algorithms is that both these algorithms take inputs differently. For WalkSAT to take complete sentences as input, we can write a helper function that converts the input sentence into conjunctive normal form and then calls WalkSAT with the list of conjuncts of the CNF form of the sentence.
###Code
def WalkSAT_CNF(sentence, p=0.5, max_flips=10000):
    return WalkSAT(conjuncts(to_cnf(sentence)), p, max_flips)
###Output
_____no_output_____
###Markdown
Now we can call `WalkSAT_CNF` and `dpll_satisfiable` with the same arguments.
###Code
WalkSAT_CNF((A & B) | (C & ~A) | (B & ~D), 0.5, 1000)
###Output
_____no_output_____
###Markdown
It works! Notice that the solution generated by WalkSAT doesn't omit variables that the sentence doesn't depend upon. If the sentence is independent of a particular variable, the solution contains a random value for that variable because of the stochastic nature of the algorithm.

Let's compare the runtime of WalkSAT and DPLL for a few cases. We will use the `%%timeit` magic to do this.
###Code
sentence_1 = A |'<=>'| B
sentence_2 = (A & B) | (C & ~A) | (B & ~D)
sentence_3 = (A | (B & C)) |'<=>'| ((A | B) & (A | C))
%%timeit
dpll_satisfiable(sentence_1)
dpll_satisfiable(sentence_2)
dpll_satisfiable(sentence_3)
%%timeit
WalkSAT_CNF(sentence_1)
WalkSAT_CNF(sentence_2)
WalkSAT_CNF(sentence_3)
###Output
1.02 ms ± 6.92 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
###Markdown
On average, for solvable cases, `WalkSAT` is quite a bit faster than `dpll` because, for a small number of variables, `WalkSAT` can reduce the search space significantly. Results can be different for sentences with more symbols, though. Feel free to play around with this to understand the trade-offs of these algorithms better.

SATPlan

In this section we show how to make plans by logical inference. The basic idea is very simple. It includes the following three steps:

1. Construct a sentence that includes:
   1. A collection of assertions about the initial state.
   2. The successor-state axioms for all the possible actions at each time up to some maximum time t.
   3. The assertion that the goal is achieved at time t.
2. Present the whole sentence to a SAT solver.
3. Assuming a model is found, extract from the model those variables that represent actions and are assigned true. Together they represent a plan to achieve the goals.

Let's have a look at the algorithm.
###Code
psource(SAT_plan)
###Output
_____no_output_____
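###Markdown
As an end-to-end illustration of the three steps, here is a hypothetical sketch that builds a CNF encoding for the small deterministic transition dicts used below and hands it to the `dpll_sketch` defined in the DPLL section above. The variable names `('at', s, t)` and `('do', s, a, t)` are our own, and the handling of the horizon argument may differ from the module's `SAT_plan`.
```python
def sat_plan_sketch(init, transition, goal, horizon):
    """Try plan lengths T = 0..horizon; return the first plan found."""
    states = list(transition)
    for T in range(horizon + 1):
        clauses = []
        # Step 1a: assertions about the initial state.
        clauses.append([(('at', init, 0), True)])
        clauses += [[(('at', s, 0), False)] for s in states if s != init]
        for t in range(T):
            acts = [(s, a) for s in states for a in transition[s]]
            # Exactly one (state, action) pair is executed at each step.
            clauses.append([(('do', s, a, t), True) for s, a in acts])
            for i, x in enumerate(acts):
                for y in acts[i + 1:]:
                    clauses.append([(('do', *x, t), False),
                                    (('do', *y, t), False)])
            # Step 1b: successor-state axioms; an action requires its
            # state and produces the transition's result state.
            for s, a in acts:
                clauses.append([(('do', s, a, t), False),
                                (('at', s, t), True)])
                clauses.append([(('do', s, a, t), False),
                                (('at', transition[s][a], t + 1), True)])
            # At most one state holds at each time.
            for i, s in enumerate(states):
                for s2 in states[i + 1:]:
                    clauses.append([(('at', s, t + 1), False),
                                    (('at', s2, t + 1), False)])
        # Step 1c: the goal is achieved at time T.
        clauses.append([(('at', goal, T), True)])
        # Steps 2-3: ask the solver, then read the plan off the model.
        model = dpll_sketch(clauses)
        if model is not None:
            plan = {}
            for var, val in model.items():
                if var[0] == 'do' and val:
                    _, s, a, t = var
                    plan[t] = a
            return [plan[t] for t in sorted(plan)]
    return None
```
On the grid transition defined further below, `sat_plan_sketch((0, 0), transition, (1, 1), 4)` would return `['Right', 'Down']` under this sketch's semantics.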
###Markdown
Let's see a few examples of its usage. First we define a transition and then call `SAT_plan`.
###Code
transition = {'A': {'Left': 'A', 'Right': 'B'},
'B': {'Left': 'A', 'Right': 'C'},
'C': {'Left': 'B', 'Right': 'C'}}
print(SAT_plan('A', transition, 'C', 2))
print(SAT_plan('A', transition, 'B', 3))
print(SAT_plan('C', transition, 'A', 3))
###Output
None
['Right']
['Left', 'Left']
###Markdown
Let us do the same for another transition.
###Code
transition = {(0, 0): {'Right': (0, 1), 'Down': (1, 0)},
(0, 1): {'Left': (1, 0), 'Down': (1, 1)},
(1, 0): {'Right': (1, 0), 'Up': (1, 0), 'Left': (1, 0), 'Down': (1, 0)},
(1, 1): {'Left': (1, 0), 'Up': (0, 1)}}
print(SAT_plan((0, 0), transition, (1, 1), 4))
###Output
['Right', 'Down']
###Markdown
First-Order Logic Knowledge Bases: `FolKB`

The class `FolKB` can be used to represent a knowledge base of first-order logic sentences. You would initialize and use it the same way as you would for `PropKB` except that the clauses are first-order definite clauses. We will see how to write such clauses to create a database and query them in the following sections.

Criminal KB

In this section we create a `FolKB` based on the following paragraph.

The law says that it is a crime for an American to sell weapons to hostile nations. The country Nono, an enemy of America, has some missiles, and all of its missiles were sold to it by Colonel West, who is American.

The first step is to extract the facts and convert them into first-order definite clauses. Extracting the facts from data alone is a challenging task. Fortunately, we have a small paragraph and can do the extraction and conversion manually. We'll store the clauses in a list aptly named `clauses`.
###Code
clauses = []
###Output
_____no_output_____
###Markdown
“... it is a crime for an American to sell weapons to hostile nations”

The keywords to look for here are 'crime', 'American', 'sell', 'weapon' and 'hostile'. We use predicate symbols to give them meaning.

* `Criminal(x)`: `x` is a criminal
* `American(x)`: `x` is an American
* `Sells(x, y, z)`: `x` sells `y` to `z`
* `Weapon(x)`: `x` is a weapon
* `Hostile(x)`: `x` is a hostile nation

Let us now combine them with appropriate variable naming to depict the meaning of the sentence: the criminal `x` is the American `x` who sells weapon `y` to `z`, which is a hostile nation.

$\text{American}(x) \land \text{Weapon}(y) \land \text{Sells}(x, y, z) \land \text{Hostile}(z) \implies \text{Criminal}(x)$
###Code
clauses.append(expr("(American(x) & Weapon(y) & Sells(x, y, z) & Hostile(z)) ==> Criminal(x)"))
###Output
_____no_output_____
###Markdown
"The country Nono, an enemy of America"

We now know that Nono is an enemy of America. We represent these nations using the constant symbols `Nono` and `America`. The enemy relation is shown using the predicate symbol `Enemy`.

$\text{Enemy}(\text{Nono}, \text{America})$
###Code
clauses.append(expr("Enemy(Nono, America)"))
###Output
_____no_output_____
###Markdown
"Nono ... has some missiles"

This states the existence of some missile which is owned by Nono: $\exists x \; \text{Owns}(\text{Nono}, x) \land \text{Missile}(x)$. We invoke existential instantiation to introduce a new constant `M1` which is the missile owned by Nono.

$\text{Owns}(\text{Nono}, \text{M1}), \text{Missile}(\text{M1})$
###Code
clauses.append(expr("Owns(Nono, M1)"))
clauses.append(expr("Missile(M1)"))
###Output
_____no_output_____
###Markdown
"All of its missiles were sold to it by Colonel West"

If Nono owns something and it classifies as a missile, then it was sold to Nono by West.

$\text{Missile}(x) \land \text{Owns}(\text{Nono}, x) \implies \text{Sells}(\text{West}, x, \text{Nono})$
###Code
clauses.append(expr("(Missile(x) & Owns(Nono, x)) ==> Sells(West, x, Nono)"))
###Output
_____no_output_____
###Markdown
"West, who is American"

West is an American.

$\text{American}(\text{West})$
###Code
clauses.append(expr("American(West)"))
###Output
_____no_output_____
###Markdown
We also know, from our understanding of language, that missiles are weapons and that an enemy of America counts as “hostile”.

$\text{Missile}(x) \implies \text{Weapon}(x)$

$\text{Enemy}(x, \text{America}) \implies \text{Hostile}(x)$
###Code
clauses.append(expr("Missile(x) ==> Weapon(x)"))
clauses.append(expr("Enemy(x, America) ==> Hostile(x)"))
###Output
_____no_output_____
###Markdown
Now that we have converted the information into first-order definite clauses we can create our first-order logic knowledge base.
###Code
crime_kb = FolKB(clauses)
###Output
_____no_output_____
###Markdown
The `subst` helper function substitutes variables with given values in first-order logic statements. This will be useful in later algorithms. Its implementation is quite simple and self-explanatory.
###Code
psource(subst)
###Output
_____no_output_____
###Markdown
Here's an example of how `subst` can be used.
###Code
subst({x: expr('Nono'), y: expr('M1')}, expr('Owns(x, y)'))
###Output
_____no_output_____
###Markdown
Inference in First-Order Logic

In this section we look at a forward chaining and a backward chaining algorithm for `FolKB`. Both aforementioned algorithms rely on a process called unification, a key component of all first-order inference algorithms.

Unification

We sometimes require finding substitutions that make different logical expressions look identical. This process, called unification, is done by the `unify` algorithm. It takes as input two sentences and returns a unifier for them if one exists. A unifier is a dictionary which stores the substitutions required to make the two sentences identical. It does so by recursively unifying the components of a sentence, where the unification of a variable symbol `var` with a constant symbol `Const` is the mapping `{var: Const}`. Let's look at a few examples.
###Code
unify(expr('x'), 3)
unify(expr('A(x)'), expr('A(B)'))
unify(expr('Cat(x) & Dog(Dobby)'), expr('Cat(Bella) & Dog(y)'))
###Output
_____no_output_____
###Markdown
In cases where there is no possible substitution that unifies the two sentences, the function returns `None`.
###Code
print(unify(expr('Cat(x)'), expr('Dog(Dobby)')))
###Output
None
###Markdown
We also need to take care that we do not unintentionally use the same variable name in both sentences. `unify` treats them as a single variable, which prevents it from taking multiple values.
###Code
print(unify(expr('Cat(x) & Dog(Dobby)'), expr('Cat(Bella) & Dog(x)')))
###Output
None
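###Markdown
Before moving on to the chaining algorithms, here is a minimal unification sketch over a toy term representation: lowercase strings are variables, capitalized strings are constants, and compound terms are tuples such as `('Owns', 'Nono', 'x')`. It omits the occurs-check and works on tuples rather than `Expr` trees, so it is illustrative only.
```python
def unify_sketch(x, y, s=None):
    """Return a substitution dict that makes x and y identical, else None."""
    if s is None:
        s = {}
    if x == y:
        return s
    if _is_var(x):
        return _unify_var(x, y, s)
    if _is_var(y):
        return _unify_var(y, x, s)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):          # unify the components recursively
            s = unify_sketch(xi, yi, s)
            if s is None:
                return None
        return s
    return None

def _is_var(t):
    return isinstance(t, str) and t[:1].islower()

def _unify_var(var, t, s):
    if var in s:
        return unify_sketch(s[var], t, s)
    if _is_var(t) and t in s:
        return unify_sketch(var, s[t], s)
    return {**s, var: t}                  # no occurs-check in this sketch

unify_sketch(('Cat', 'x'), ('Cat', 'Bella'))   # -> {'x': 'Bella'}
```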
###Markdown
Forward Chaining Algorithm

We consider the simple forward-chaining algorithm presented in Figure 9.3. We look at each rule in the knowledge base and see if the premises can be satisfied. This is done by finding a substitution which unifies each of the premises with a clause in the `KB`. If we are able to unify the premises, the conclusion (with the corresponding substitution) is added to the `KB`. This inference process is repeated until either the query can be answered or no new sentences can be added. We test if the newly added clause unifies with the query, in which case the substitution yielded by `unify` is an answer to the query. If we run out of sentences to infer, the query is a failure.

The function `fol_fc_ask` is a generator which yields all substitutions which validate the query.
###Code
psource(fol_fc_ask)
###Output
_____no_output_____
###Markdown
Let's find out all the hostile nations. Note that we only told the `KB` that Nono was an enemy of America, not that it was hostile.
###Code
answer = fol_fc_ask(crime_kb, expr('Hostile(x)'))
print(list(answer))
###Output
[{x: Nono}]
###Markdown
The generator returned a single substitution which says that Nono is a hostile nation. See how after adding another enemy nation the generator returns two substitutions.
###Code
crime_kb.tell(expr('Enemy(JaJa, America)'))
answer = fol_fc_ask(crime_kb, expr('Hostile(x)'))
print(list(answer))
###Output
[{x: Nono}, {x: JaJa}]
###Markdown
Note: `fol_fc_ask` makes changes to the `KB` by adding sentences to it.

Backward Chaining Algorithm

This algorithm works backward from the goal, chaining through rules to find known facts that support the proof. Suppose `goal` is the query we want to find the substitution for. We find rules of the form $\text{lhs} \implies \text{goal}$ in the `KB` and try to prove `lhs`. There may be multiple clauses in the `KB` which give multiple `lhs`. It is sufficient to prove only one of these. But to prove a `lhs`, all the conjuncts in the `lhs` of the clause must be proved. This makes it similar to And/Or search.

OR

The OR part of the algorithm comes from our choice to select any clause of the form $\text{lhs} \implies \text{goal}$. Looking at all rules whose `rhs` unifies with the `goal`, we yield a substitution which proves all the conjuncts in the `lhs`. We use `parse_definite_clause` to obtain `lhs` and `rhs` from a clause of the form $\text{lhs} \implies \text{rhs}$. For atomic facts the `lhs` is an empty list.
###Code
psource(fol_bc_or)
###Output
_____no_output_____
###Markdown
AND

The AND corresponds to proving all the conjuncts in the `lhs`. We need to find a substitution which proves each and every clause in the list of conjuncts.
###Code
psource(fol_bc_and)
###Output
_____no_output_____
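###Markdown
To see the OR/AND decomposition in miniature, here is a toy backward-chaining sketch for propositional definite clauses, using the same `(premises, conclusion)` clause format as the forward-chaining sketch earlier in this notebook; it is an illustration, not the module's `fol_bc_or` and `fol_bc_and`.
```python
def bc_or_sketch(clauses, goal, visited=frozenset()):
    """Prove goal by finding some clause whose conclusion matches it."""
    if goal in visited:                   # guard against cyclic rules
        return False
    for premises, conclusion in clauses:
        if conclusion == goal and bc_and_sketch(clauses, premises,
                                                visited | {goal}):
            return True
    return False

def bc_and_sketch(clauses, goals, visited):
    """Prove every conjunct in the premise list."""
    return all(bc_or_sketch(clauses, g, visited) for g in goals)

# With A and B as facts and (A & B) => C:
bc_or_sketch([((), 'A'), ((), 'B'), (('A', 'B'), 'C')], 'C')   # -> True
```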
###Markdown
Now the main function `fol_bc_ask` calls `fol_bc_or` with the substitution initialized as empty. The `ask` method of `FolKB` uses `fol_bc_ask` and fetches the first substitution returned by the generator to answer the query. Let's query the knowledge base we created from `clauses` to find hostile nations.
###Code
# Rebuild KB because running fol_fc_ask would add new facts to the KB
crime_kb = FolKB(clauses)
crime_kb.ask(expr('Hostile(x)'))
###Output
_____no_output_____
###Markdown
You may notice some new variables in the substitution. They are introduced to standardize the variable names to prevent naming problems, as discussed in the [Unification section](Unification).

Appendix: The Implementation of `|'==>'|`

Consider the `Expr` formed by this syntax:
###Code
P |'==>'| ~Q
###Output
_____no_output_____
###Markdown
What is the funny `|'==>'|` syntax? The trick is that "`|`" is just the regular Python or-operator, and so is exactly equivalent to this:
###Code
(P | '==>') | ~Q
###Output
_____no_output_____
###Markdown
In other words, there are two applications of or-operators. Here's the first one:
###Code
P | '==>'
###Output
_____no_output_____
###Markdown
What is going on here is that the `__or__` method of `Expr` serves a dual purpose. If the right-hand-side is another `Expr` (or a number), then the result is an `Expr`, as in `(P | Q)`. But if the right-hand-side is a string, then the string is taken to be an operator, and we create a node in the abstract syntax tree corresponding to a partially-filled `Expr`, one where we know the left-hand-side is `P` and the operator is `==>`, but we don't yet know the right-hand-side.The `PartialExpr` class has an `__or__` method that says to create an `Expr` node with the right-hand-side filled in. Here we can see the combination of the `PartialExpr` with `Q` to create a complete `Expr`:
###Code
partial = PartialExpr('==>', P)
partial | ~Q
###Output
_____no_output_____
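###Markdown
To see the whole trick in one screen, here is a stripped-down, hypothetical `MiniExpr` that reproduces the behaviour just described; it is a sketch, not the module's `Expr` and `PartialExpr` classes.
```python
class MiniExpr:
    """A tiny stand-in for Expr: an operator plus a tuple of arguments."""
    def __init__(self, op, *args):
        self.op, self.args = op, args
    def __or__(self, rhs):
        if isinstance(rhs, str):            # P | '==>' yields a partial node
            return MiniPartial(rhs, self)
        return MiniExpr('|', self, rhs)     # ordinary disjunction
    def __invert__(self):
        return MiniExpr('~', self)
    def __repr__(self):
        if not self.args:
            return self.op
        if len(self.args) == 1:
            return self.op + repr(self.args[0])
        return '({} {} {})'.format(self.args[0], self.op, self.args[1])

class MiniPartial:
    """Holds the operator and left-hand side, waiting for the right."""
    def __init__(self, op, lhs):
        self.op, self.lhs = op, lhs
    def __or__(self, rhs):                  # ... | ~Q completes the Expr
        return MiniExpr(self.op, self.lhs, rhs)

p, q = MiniExpr('P'), MiniExpr('Q')         # lowercase, so the real P, Q stay intact
p |'==>'| ~q                                # -> (P ==> ~Q)
```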
###Markdown
This [trick](http://code.activestate.com/recipes/384122-infix-operators/) is due to [Ferdinand Jamitzky](http://code.activestate.com/recipes/users/98863/), with a modification by [C. G. Vedant](https://github.com/Chipe1), who suggested using a string inside the or-bars.

Appendix: The Implementation of `expr`

How does `expr` parse a string into an `Expr`? It turns out there are two tricks (besides the Jamitzky/Vedant trick):

1. We do a string substitution, replacing "`==>`" with "`|'==>'|`" (and likewise for other operators).
2. We `eval` the resulting string in an environment in which every identifier is bound to a symbol with that identifier as the `op`.

In other words,
###Code
expr('~(P & Q) ==> (~P | ~Q)')
###Output
_____no_output_____
###Markdown
is equivalent to doing:
###Code
P, Q = symbols('P, Q')
~(P & Q) |'==>'| (~P | ~Q)
###Output
_____no_output_____
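###Markdown
Both tricks fit in a few lines. The following toy parser builds on the hypothetical `MiniExpr` from the previous appendix and handles only the `==>` arrow; the module's `expr` supports the full operator table.
```python
import re

def mini_expr(s):
    # Trick 1: rewrite the arrow into or-bar form.
    s = re.sub(r'==>', r"|'==>'|", s)
    # Trick 2: eval with every identifier bound to a symbol.
    names = {n: MiniExpr(n) for n in re.findall(r'[A-Za-z_]\w*', s)}
    return eval(s, {}, names)

mini_expr('P ==> Q')   # -> (P ==> Q)
```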
###Markdown
One thing to beware of: this puts `==>` at the same precedence level as `"|"`, which is not quite right. For example, we get this:
###Code
P & Q |'==>'| P | Q
###Output
_____no_output_____
###Markdown
which is probably not what we meant; when in doubt, put in extra parens:
###Code
(P & Q) |'==>'| (P | Q)
###Output
_____no_output_____
###Markdown
Examples
###Code
from notebook import Canvas_fol_bc_ask
canvas_bc_ask = Canvas_fol_bc_ask('canvas_bc_ask', crime_kb, expr('Criminal(x)'))
###Output
_____no_output_____
Logic: `logic.py`; Chapters 6-8 This notebook describes the [logic.py](https://github.com/aimacode/aima-python/blob/master/logic.py) module, which covers Chapters 6 (Logical Agents), 7 (First-Order Logic) and 8 (Inference in First-Order Logic) of *[Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu)*. See the [intro notebook](https://github.com/aimacode/aima-python/blob/master/intro.ipynb) for instructions.We'll start by looking at `Expr`, the data type for logical sentences, and the convenience function `expr`. Then we'll cover `KB` and `ProbKB`, the classes for Knowledge Bases. Then, we will construct a knowledge base of a specific situation in the Wumpus World. We will next go through the `tt_entails` function and experiment with it a bit. The `pl_resolution` and `pl_fc_entails` functions will come next. But the first step is to load the code:
###Code
from utils import *
from logic import *
###Output
_____no_output_____
###Markdown
Logical Sentences The `Expr` class is designed to represent any kind of mathematical expression. The simplest type of `Expr` is a symbol, which can be defined with the function `Symbol`:
###Code
Symbol('x')
###Output
_____no_output_____
###Markdown
Or we can define multiple symbols at the same time with the function `symbols`:
###Code
(x, y, P, Q, f) = symbols('x, y, P, Q, f')
###Output
_____no_output_____
###Markdown
We can combine `Expr`s with the regular Python infix and prefix operators. Here's how we would form the logical sentence "P and not Q":
###Code
P & ~Q
###Output
_____no_output_____
###Markdown
This works because the `Expr` class overloads the `&` operator with this definition:```pythondef __and__(self, other): return Expr('&', self, other)``` and does similar overloads for the other operators. An `Expr` has two fields: `op` for the operator, which is always a string, and `args` for the arguments, which is a tuple of 0 or more expressions. By "expression," I mean either an instance of `Expr`, or a number. Let's take a look at the fields for some `Expr` examples:
###Code
sentence = P & ~Q
sentence.op
sentence.args
P.op
P.args
Pxy = P(x, y)
Pxy.op
Pxy.args
###Output
_____no_output_____
###Markdown
It is important to note that the `Expr` class does not define the *logic* of Propositional Logic sentences; it just gives you a way to *represent* expressions. Think of an `Expr` as an [abstract syntax tree](https://en.wikipedia.org/wiki/Abstract_syntax_tree). Each of the `args` in an `Expr` can be either a symbol, a number, or a nested `Expr`. We can nest these trees to any depth. Here is a deply nested `Expr`:
###Code
3 * f(x, y) + P(y) / 2 + 1
###Output
_____no_output_____
###Markdown
Operators for Constructing Logical SentencesHere is a table of the operators that can be used to form sentences. Note that we have a problem: we want to use Python operators to make sentences, so that our programs (and our interactive sessions like the one here) will show simple code. But Python does not allow implication arrows as operators, so for now we have to use a more verbose notation that Python does allow: `|'==>'|` instead of just `==>`. Alternately, you can always use the more verbose `Expr` constructor forms:| Operation | Book | Python Infix Input | Python Output | Python `Expr` Input|--------------------------|----------------------|-------------------------|---|---|| Negation | ¬ P | `~P` | `~P` | `Expr('~', P)`| And | P ∧ Q | `P & Q` | `P & Q` | `Expr('&', P, Q)`| Or | P ∨ Q | `P` &124; `Q`| `P` &124; `Q` | `Expr('`&124;`', P, Q)| Inequality (Xor) | P ≠ Q | `P ^ Q` | `P ^ Q` | `Expr('^', P, Q)`| Implication | P → Q | `P` &124;`'==>'`&124; `Q` | `P ==> Q` | `Expr('==>', P, Q)`| Reverse Implication | Q ← P | `Q` &124;`'&124; `P` |`Q <== P` | `Expr('<==', Q, P)`| Equivalence | P ↔ Q | `P` &124;`''`&124; `Q` |`P ==> Q` | `Expr('==>', P, Q)`Here's an example of defining a sentence with an implication arrow:
###Code
~(P & Q) |'==>'| (~P | ~Q)
###Output
_____no_output_____
###Markdown
`expr`: a Shortcut for Constructing SentencesIf the `|'==>'|` notation looks ugly to you, you can use the function `expr` instead:
###Code
expr('~(P & Q) ==> (~P | ~Q)')
###Output
_____no_output_____
###Markdown
`expr` takes a string as input, and parses it into an `Expr`. The string can contain arrow operators: `==>`, ``, which are handled as if they were regular Python infix operators. And `expr` automatically defines any symbols, so you don't need to pre-define them:
###Code
expr('sqrt(b ** 2 - 4 * a * c)')
###Output
_____no_output_____
###Markdown
For now that's all you need to know about `expr`. Later we will explain the messy details of how `expr` is implemented and how `|'==>'|` is handled. Propositional Knowledge Bases: `PropKB`The class `PropKB` can be used to represent a knowledge base of propositional logic sentences.We see that the class `KB` has four methods, apart from `__init__`. A point to note here: the `ask` method simply calls the `ask_generator` method. Thus, this one has already been implemented and what you'll have to actually implement when you create your own knowledge base class (if you want to, though I doubt you'll ever need to; just use the ones we've created for you), will be the `ask_generator` function and not the `ask` function itself.The class `PropKB` now.* `__init__(self, sentence=None)` : The constructor `__init__` creates a single field `clauses` which will be a list of all the sentences of the knowledge base. Note that each one of these sentences will be a 'clause' i.e. a sentence which is made up of only literals and `or`s.* `tell(self, sentence)` : When you want to add a sentence to the KB, you use the `tell` method. This method takes a sentence, converts it to its CNF, extracts all the clauses, and adds all these clauses to the `clauses` field. So, you need not worry about `tell`ing only clauses to the knowledge base. You can `tell` the knowledge base a sentence in any form that you wish; converting it to CNF and adding the resulting clauses will be handled by the `tell` method.* `ask_generator(self, query)` : The `ask_generator` function is used by the `ask` function. It calls the `tt_entails` function, which in turn returns `True` if the knowledge base entails query and `False` otherwise. The `ask_generator` itself returns an empty dict `{}` if the knowledge base entails query and `None` otherwise. This might seem a little bit weird to you. After all, it makes more sense just to return a `True` or a `False` instead of the `{}` or `None` But this is done to maintain consistency with the way things are in First-Order Logic, where, an `ask_generator` function, is supposed to return all the substitutions that make the query true. Hence the dict, to return all these substitutions. I will be mostly be using the `ask` function which returns a `{}` or a `False`, but if you don't like this, you can always use the `ask_if_true` function which returns a `True` or a `False`.* `retract(self, sentence)` : This function removes all the clauses of the sentence given, from the knowledge base. Like the `tell` function, you don't have to pass clauses to remove them from the knowledge base; any sentence will do fine. The function will take care of converting that sentence to clauses and then remove those. TODO: More on KBs, plus what was promised in Intro SectionTODO: fill in here ... Appendix: The Implementation of `|'==>'|`Consider the `Expr` formed by this syntax:
###Code
P |'==>'| ~Q
###Output
_____no_output_____
###Markdown
What is the funny `|'==>'|` syntax? The trick is that "`|`" is just the regular Python or-operator, and so is exactly equivalent to this:
###Code
(P | '==>') | ~Q
###Output
_____no_output_____
###Markdown
In other words, there are two applications of or-operators. Here's the first one:
###Code
P | '==>'
###Output
_____no_output_____
###Markdown
What is going on here is that the `__or__` method of `Expr` serves a dual purpose. If the right-hand-side is another `Expr` (or a number), then the result is an `Expr`, as in `(P | Q)`. But if the right-hand-side is a string, then the string is taken to be an operator, and we create a node in the abstract syntax tree corresponding to a partially-filled `Expr`, one where we know the left-hand-side is `P` and the operator is `==>`, but we don't yet know the right-hand-side.The `PartialExpr` class has an `__or__` method that says to create an `Expr` node with the right-hand-side filled in. Here we can see the combination of the `PartialExpr` with `Q` to create a complete `Expr`:
###Code
partial = PartialExpr('==>', P)
partial | ~Q
###Output
_____no_output_____
###Markdown
This [trick](http://code.activestate.com/recipes/384122-infix-operators/) is due to [Ferdinand Jamitzky](http://code.activestate.com/recipes/users/98863/), with a modification by [C. G. Vedant](https://github.com/Chipe1),who suggested using a string inside the or-bars. Appendix: The Implementation of `expr`How does `expr` parse a string into an `Expr`? It turns out there are two tricks (besides the Jamitzky/Vedant trick):1. We do a string substitution, replacing "`==>`" with "`|'==>'|`" (and likewise for other operators).2. We `eval` the resulting string in an environment in which every identifieris bound to a symbol with that identifier as the `op`.In other words,
###Code
expr('~(P & Q) ==> (~P | ~Q)')
###Output
_____no_output_____
###Markdown
is equivalent to doing:
###Code
P, Q = symbols('P, Q')
~(P & Q) |'==>'| (~P | ~Q)
###Output
_____no_output_____
###Markdown
One thing to beware of: this puts `==>` at the same precedence level as `"|"`, which is not quite right. For example, we get this:
###Code
P & Q |'==>'| P | Q
###Output
_____no_output_____
###Markdown
which is probably not what we meant; when in doubt, put in extra parens:
###Code
(P & Q) |'==>'| (P | Q)
###Output
_____no_output_____
###Markdown
Logic: `logic.py`; Chapters 6-8

This notebook describes the [logic.py](https://github.com/aimacode/aima-python/blob/master/logic.py) module, which covers Chapters 6 (Logical Agents), 7 (First-Order Logic) and 8 (Inference in First-Order Logic) of *[Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu)*. See the [intro notebook](https://github.com/aimacode/aima-python/blob/master/intro.ipynb) for instructions.

We'll start by looking at `Expr`, the data type for logical sentences, and the convenience function `expr`. Then we'll cover `KB` and `PropKB`, the classes for Knowledge Bases. Then, we will construct a knowledge base of a specific situation in the Wumpus World. We will next go through the `tt_entails` function and experiment with it a bit. The `pl_resolution` and `pl_fc_entails` functions will come next. But the first step is to load the code:
###Code
from utils import *
from logic import *
###Output
_____no_output_____
###Markdown
Logical Sentences The `Expr` class is designed to represent any kind of mathematical expression. The simplest type of `Expr` is a symbol, which can be defined with the function `Symbol`:
###Code
Symbol('x')
###Output
_____no_output_____
###Markdown
Or we can define multiple symbols at the same time with the function `symbols`:
###Code
(x, y, P, Q, f) = symbols('x, y, P, Q, f')
###Output
_____no_output_____
###Markdown
We can combine `Expr`s with the regular Python infix and prefix operators. Here's how we would form the logical sentence "P and not Q":
###Code
P & ~Q
###Output
_____no_output_____
###Markdown
This works because the `Expr` class overloads the `&` operator with this definition:

```python
def __and__(self, other):
    return Expr('&', self, other)
```

and does similar overloads for the other operators. An `Expr` has two fields: `op` for the operator, which is always a string, and `args` for the arguments, which is a tuple of 0 or more expressions. By "expression," I mean either an instance of `Expr`, or a number. Let's take a look at the fields for some `Expr` examples:
###Code
sentence = P & ~Q
sentence.op
sentence.args
P.op
P.args
Pxy = P(x, y)
Pxy.op
Pxy.args
###Output
_____no_output_____
###Markdown
It is important to note that the `Expr` class does not define the *logic* of Propositional Logic sentences; it just gives you a way to *represent* expressions. Think of an `Expr` as an [abstract syntax tree](https://en.wikipedia.org/wiki/Abstract_syntax_tree). Each of the `args` in an `Expr` can be either a symbol, a number, or a nested `Expr`. We can nest these trees to any depth. Here is a deeply nested `Expr`:
###Code
3 * f(x, y) + P(y) / 2 + 1
###Output
_____no_output_____
###Markdown
Operators for Constructing Logical Sentences

Here is a table of the operators that can be used to form sentences. Note that we have a problem: we want to use Python operators to make sentences, so that our programs (and our interactive sessions like the one here) will show simple code. But Python does not allow implication arrows as operators, so for now we have to use a more verbose notation that Python does allow: `|'==>'|` instead of just `==>`. Alternately, you can always use the more verbose `Expr` constructor forms:

| Operation                | Book  | Python Infix Input          | Python Output  | Python `Expr` Input      |
|--------------------------|-------|-----------------------------|----------------|--------------------------|
| Negation                 | ¬ P   | `~P`                        | `~P`           | `Expr('~', P)`           |
| And                      | P ∧ Q | `P & Q`                     | `P & Q`        | `Expr('&', P, Q)`        |
| Or                       | P ∨ Q | `P` &#124; `Q`              | `P` &#124; `Q` | `Expr('`&#124;`', P, Q)` |
| Inequality (Xor)         | P ≠ Q | `P ^ Q`                     | `P ^ Q`        | `Expr('^', P, Q)`        |
| Implication              | P → Q | `P` &#124;`'==>'`&#124; `Q` | `P ==> Q`      | `Expr('==>', P, Q)`      |
| Reverse Implication      | Q ← P | `Q` &#124;`'<=='`&#124; `P` | `Q <== P`      | `Expr('<==', Q, P)`      |
| Equivalence              | P ↔ Q | `P` &#124;`'<=>'`&#124; `Q` | `P <=> Q`      | `Expr('<=>', P, Q)`      |

Here's an example of defining a sentence with an implication arrow:
###Code
~(P & Q) |'==>'| (~P | ~Q)
###Output
_____no_output_____
###Markdown
`expr`: a Shortcut for Constructing SentencesIf the `|'==>'|` notation looks ugly to you, you can use the function `expr` instead:
###Code
expr('~(P & Q) ==> (~P | ~Q)')
###Output
_____no_output_____
###Markdown
`expr` takes a string as input, and parses it into an `Expr`. The string can contain arrow operators: `==>`, `<==`, and `<=>`, which are handled as if they were regular Python infix operators. And `expr` automatically defines any symbols, so you don't need to pre-define them:
###Code
expr('sqrt(b ** 2 - 4 * a * c)')
###Output
_____no_output_____
###Markdown
For now that's all you need to know about `expr`. Later we will explain the messy details of how `expr` is implemented and how `|'==>'|` is handled.

Propositional Knowledge Bases: `PropKB`

The class `PropKB` can be used to represent a knowledge base of propositional logic sentences. We see that the class `KB` has four methods, apart from `__init__`. A point to note here: the `ask` method simply calls the `ask_generator` method. Thus, `ask` has already been implemented, and what you'll have to actually implement when you create your own knowledge base class (if you want to, though I doubt you'll ever need to; just use the ones we've created for you) will be the `ask_generator` function and not the `ask` function itself.

Now for the class `PropKB`:

* `__init__(self, sentence=None)`: The constructor creates a single field `clauses` which will be a list of all the sentences of the knowledge base. Note that each one of these sentences will be a 'clause', i.e. a sentence which is made up of only literals and `or`s.
* `tell(self, sentence)`: When you want to add a sentence to the KB, you use the `tell` method. This method takes a sentence, converts it to its CNF, extracts all the clauses, and adds all these clauses to the `clauses` field. So, you need not worry about `tell`ing only clauses to the knowledge base. You can `tell` the knowledge base a sentence in any form that you wish; converting it to CNF and adding the resulting clauses will be handled by the `tell` method.
* `ask_generator(self, query)`: The `ask_generator` function is used by the `ask` function. It calls the `tt_entails` function, which in turn returns `True` if the knowledge base entails the query and `False` otherwise. The `ask_generator` itself returns an empty dict `{}` if the knowledge base entails the query and `None` otherwise. This might seem a little weird: after all, it makes more sense just to return a `True` or a `False` instead of the `{}` or `None`. But this is done to maintain consistency with the way things are in First-Order Logic, where an `ask_generator` function is supposed to return all the substitutions that make the query true. Hence the dict, to return all these substitutions. I will mostly be using the `ask` function, which returns a `{}` or a `False`, but if you don't like this, you can always use the `ask_if_true` function, which returns a `True` or a `False`.
* `retract(self, sentence)`: This function removes all the clauses of the given sentence from the knowledge base. Like the `tell` function, you don't have to pass clauses to remove them from the knowledge base; any sentence will do fine. The function will take care of converting that sentence to clauses and then removing those.

TODO: More on KBs, plus what was promised in Intro Section

TODO: fill in here ...

Appendix: The Implementation of `|'==>'|`

Consider the `Expr` formed by this syntax:
###Code
P |'==>'| ~Q
###Output
_____no_output_____
###Markdown
What is the funny `|'==>'|` syntax? The trick is that "`|`" is just the regular Python or-operator, and so is exactly equivalent to this:
###Code
(P | '==>') | ~Q
###Output
_____no_output_____
###Markdown
In other words, there are two applications of or-operators. Here's the first one:
###Code
P | '==>'
###Output
_____no_output_____
###Markdown
What is going on here is that the `__or__` method of `Expr` serves a dual purpose. If the right-hand-side is another `Expr` (or a number), then the result is an `Expr`, as in `(P | Q)`. But if the right-hand-side is a string, then the string is taken to be an operator, and we create a node in the abstract syntax tree corresponding to a partially-filled `Expr`, one where we know the left-hand-side is `P` and the operator is `==>`, but we don't yet know the right-hand-side.The `PartialExpr` class has an `__or__` method that says to create an `Expr` node with the right-hand-side filled in. Here we can see the combination of the `PartialExpr` with `Q` to create a complete `Expr`:
###Code
partial = PartialExpr('==>', P)
partial | ~Q
###Output
_____no_output_____
###Markdown
This [trick](http://code.activestate.com/recipes/384122-infix-operators/) is due to [Ferdinand Jamitzky](http://code.activestate.com/recipes/users/98863/), with a modification by [C. G. Vedant](https://github.com/Chipe1),who suggested using a string inside the or-bars. Appendix: The Implementation of `expr`How does `expr` parse a string into an `Expr`? It turns out there are two tricks (besides the Jamitzky/Vedant trick):1. We do a string substitution, replacing "`==>`" with "`|'==>'|`" (and likewise for other operators).2. We `eval` the resulting string in an environment in which every identifieris bound to a symbol with that identifier as the `op`.In other words,
###Code
expr('~(P & Q) ==> (~P | ~Q)')
###Output
_____no_output_____
###Markdown
is equivalent to doing:
###Code
P, Q = symbols('P, Q')
~(P & Q) |'==>'| (~P | ~Q)
###Output
_____no_output_____
###Markdown
One thing to beware of: this puts `==>` at the same precedence level as `"|"`, which is not quite right. For example, we get this:
###Code
P & Q |'==>'| P | Q
###Output
_____no_output_____
###Markdown
which is probably not what we meant; when in doubt, put in extra parens:
###Code
(P & Q) |'==>'| (P | Q)
###Output
_____no_output_____
###Markdown
Logic: `logic.py`; Chapters 6-8

This notebook describes the [logic.py](https://github.com/aimacode/aima-python/blob/master/logic.py) module, which covers Chapters 6 (Logical Agents), 7 (First-Order Logic) and 8 (Inference in First-Order Logic) of *[Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu)*. See the [intro notebook](https://github.com/aimacode/aima-python/blob/master/intro.ipynb) for instructions.

We'll start by looking at `Expr`, the data type for logical sentences, and the convenience function `expr`. We'll be covering two types of knowledge bases: `PropKB` (propositional logic knowledge base) and `FolKB` (first-order logic knowledge base). We will construct a propositional knowledge base of a specific situation in the Wumpus World. We will next go through the `tt_entails` function and experiment with it a bit. The `pl_resolution` and `pl_fc_entails` functions will come next. We'll study forward chaining and backward chaining algorithms for `FolKB` and use them on the `crime_kb` knowledge base. But the first step is to load the code:
###Code
from utils import *
from logic import *
from notebook import psource
###Output
_____no_output_____
###Markdown
Logical Sentences The `Expr` class is designed to represent any kind of mathematical expression. The simplest type of `Expr` is a symbol, which can be defined with the function `Symbol`:
###Code
Symbol('x')
###Output
_____no_output_____
###Markdown
Or we can define multiple symbols at the same time with the function `symbols`:
###Code
(x, y, P, Q, f) = symbols('x, y, P, Q, f')
###Output
_____no_output_____
###Markdown
We can combine `Expr`s with the regular Python infix and prefix operators. Here's how we would form the logical sentence "P and not Q":
###Code
P & ~Q
###Output
_____no_output_____
###Markdown
This works because the `Expr` class overloads the `&` operator with this definition:

```python
def __and__(self, other):
    return Expr('&', self, other)
```

and does similar overloads for the other operators. An `Expr` has two fields: `op` for the operator, which is always a string, and `args` for the arguments, which is a tuple of 0 or more expressions. By "expression," I mean either an instance of `Expr`, or a number. Let's take a look at the fields for some `Expr` examples:
###Code
sentence = P & ~Q
sentence.op
sentence.args
P.op
P.args
Pxy = P(x, y)
Pxy.op
Pxy.args
###Output
_____no_output_____
###Markdown
It is important to note that the `Expr` class does not define the *logic* of Propositional Logic sentences; it just gives you a way to *represent* expressions. Think of an `Expr` as an [abstract syntax tree](https://en.wikipedia.org/wiki/Abstract_syntax_tree). Each of the `args` in an `Expr` can be either a symbol, a number, or a nested `Expr`. We can nest these trees to any depth. Here is a deeply nested `Expr`:
###Code
3 * f(x, y) + P(y) / 2 + 1
###Output
_____no_output_____
###Markdown
Operators for Constructing Logical Sentences

Here is a table of the operators that can be used to form sentences. Note that we have a problem: we want to use Python operators to make sentences, so that our programs (and our interactive sessions like the one here) will show simple code. But Python does not allow implication arrows as operators, so for now we have to use a more verbose notation that Python does allow: `|'==>'|` instead of just `==>`. Alternately, you can always use the more verbose `Expr` constructor forms:

| Operation                | Book  | Python Infix Input          | Python Output  | Python `Expr` Input      |
|--------------------------|-------|-----------------------------|----------------|--------------------------|
| Negation                 | ¬ P   | `~P`                        | `~P`           | `Expr('~', P)`           |
| And                      | P ∧ Q | `P & Q`                     | `P & Q`        | `Expr('&', P, Q)`        |
| Or                       | P ∨ Q | `P` &#124; `Q`              | `P` &#124; `Q` | `Expr('`&#124;`', P, Q)` |
| Inequality (Xor)         | P ≠ Q | `P ^ Q`                     | `P ^ Q`        | `Expr('^', P, Q)`        |
| Implication              | P → Q | `P` &#124;`'==>'`&#124; `Q` | `P ==> Q`      | `Expr('==>', P, Q)`      |
| Reverse Implication      | Q ← P | `Q` &#124;`'<=='`&#124; `P` | `Q <== P`      | `Expr('<==', Q, P)`      |
| Equivalence              | P ↔ Q | `P` &#124;`'<=>'`&#124; `Q` | `P <=> Q`      | `Expr('<=>', P, Q)`      |

Here's an example of defining a sentence with an implication arrow:
###Code
~(P & Q) |'==>'| (~P | ~Q)
###Output
_____no_output_____
###Markdown
`expr`: a Shortcut for Constructing SentencesIf the `|'==>'|` notation looks ugly to you, you can use the function `expr` instead:
###Code
expr('~(P & Q) ==> (~P | ~Q)')
###Output
_____no_output_____
###Markdown
`expr` takes a string as input, and parses it into an `Expr`. The string can contain arrow operators: `==>`, `<==`, and `<=>`, which are handled as if they were regular Python infix operators. And `expr` automatically defines any symbols, so you don't need to pre-define them:
###Code
expr('sqrt(b ** 2 - 4 * a * c)')
###Output
_____no_output_____
###Markdown
For now that's all you need to know about `expr`. If you are interested, we explain the messy details of how `expr` is implemented and how `|'==>'|` is handled in the appendix.

Propositional Knowledge Bases: `PropKB`

The class `PropKB` can be used to represent a knowledge base of propositional logic sentences. We see that the class `KB` has four methods, apart from `__init__`. A point to note here: the `ask` method simply calls the `ask_generator` method. Thus, `ask` has already been implemented, and what you'll have to actually implement when you create your own knowledge base class (though you'll probably never need to, considering the ones we've created for you) will be the `ask_generator` function and not the `ask` function itself.

Now for the class `PropKB`:

* `__init__(self, sentence=None)`: The constructor creates a single field `clauses` which will be a list of all the sentences of the knowledge base. Note that each one of these sentences will be a 'clause', i.e. a sentence which is made up of only literals and `or`s.
* `tell(self, sentence)`: When you want to add a sentence to the KB, you use the `tell` method. This method takes a sentence, converts it to its CNF, extracts all the clauses, and adds all these clauses to the `clauses` field. So, you need not worry about `tell`ing only clauses to the knowledge base. You can `tell` the knowledge base a sentence in any form that you wish; converting it to CNF and adding the resulting clauses will be handled by the `tell` method.
* `ask_generator(self, query)`: The `ask_generator` function is used by the `ask` function. It calls the `tt_entails` function, which in turn returns `True` if the knowledge base entails the query and `False` otherwise. The `ask_generator` itself returns an empty dict `{}` if the knowledge base entails the query and `None` otherwise. This might seem a little weird: after all, it makes more sense just to return a `True` or a `False` instead of the `{}` or `None`. But this is done to maintain consistency with the way things are in First-Order Logic, where an `ask_generator` function is supposed to return all the substitutions that make the query true. Hence the dict, to return all these substitutions. I will mostly be using the `ask` function, which returns a `{}` or a `False`, but if you don't like this, you can always use the `ask_if_true` function, which returns a `True` or a `False`.
* `retract(self, sentence)`: This function removes all the clauses of the given sentence from the knowledge base. Like the `tell` function, you don't have to pass clauses to remove them from the knowledge base; any sentence will do fine. The function will take care of converting that sentence to clauses and then removing those.

Wumpus World KB

Let us create a `PropKB` for the wumpus world with the sentences mentioned in `section 7.4.3`.
###Code
wumpus_kb = PropKB()
###Output
_____no_output_____
###Markdown
We define the symbols we use in our clauses.$P_{x, y}$ is true if there is a pit in `[x, y]`.$B_{x, y}$ is true if the agent senses breeze in `[x, y]`.
###Code
P11, P12, P21, P22, P31, B11, B21 = expr('P11, P12, P21, P22, P31, B11, B21')
###Output
_____no_output_____
###Markdown
Now we tell sentences based on `section 7.4.3`.There is no pit in `[1,1]`.
###Code
wumpus_kb.tell(~P11)
###Output
_____no_output_____
###Markdown
A square is breezy if and only if there is a pit in a neighboring square. This has to be stated for each square, but for now we include just the relevant squares.
###Code
wumpus_kb.tell(B11 | '<=>' | (P12 | P21))
wumpus_kb.tell(B21 | '<=>' | (P11 | P22 | P31))
###Output
_____no_output_____
###Markdown
Now we include the breeze percepts for the first two squares leading up to the situation in `Figure 7.3(b)`
###Code
wumpus_kb.tell(~B11)
wumpus_kb.tell(B21)
###Output
_____no_output_____
###Markdown
We can check the clauses stored in a `KB` by accessing its `clauses` variable
###Code
wumpus_kb.clauses
###Output
_____no_output_____
###Markdown
We see that the equivalence $B_{1, 1} \iff (P_{1, 2} \lor P_{2, 1})$ was automatically converted to two implications, which were in turn converted to CNF and stored in the `KB`.

$B_{1, 1} \iff (P_{1, 2} \lor P_{2, 1})$ was split into $B_{1, 1} \implies (P_{1, 2} \lor P_{2, 1})$ and $B_{1, 1} \Longleftarrow (P_{1, 2} \lor P_{2, 1})$.

$B_{1, 1} \implies (P_{1, 2} \lor P_{2, 1})$ was converted to $P_{1, 2} \lor P_{2, 1} \lor \neg B_{1, 1}$.

$B_{1, 1} \Longleftarrow (P_{1, 2} \lor P_{2, 1})$ was converted to $\neg (P_{1, 2} \lor P_{2, 1}) \lor B_{1, 1}$, which becomes $(\neg P_{1, 2} \lor B_{1, 1}) \land (\neg P_{2, 1} \lor B_{1, 1})$ after applying De Morgan's laws and distributing the disjunction.

$B_{2, 1} \iff (P_{1, 1} \lor P_{2, 2} \lor P_{3, 1})$ is converted in a similar manner.

Inference in Propositional Knowledge Base

In this section we will look at two algorithms to check if a sentence is entailed by the `KB`. Our goal is to decide whether $\text{KB} \vDash \alpha$ for some sentence $\alpha$.

Truth Table Enumeration

It is a model-checking approach which, as the name suggests, enumerates all possible models in which the `KB` is true and checks if $\alpha$ is also true in these models. We list the $n$ symbols in the `KB`, enumerate the $2^{n}$ models in a depth-first manner, and check the truth of `KB` and $\alpha$ in each.
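To make the enumeration idea concrete, here is a minimal, self-contained sketch of entailment checking by full model enumeration. The `entails` helper and the lambda-encoded sentences are assumptions made for this illustration; they are not part of logic.py, which works with `Expr` objects instead.

```python
from itertools import product

def entails(kb, query, symbols):
    """True iff every model of kb is also a model of query (2^n enumeration)."""
    for values in product([False, True], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not query(model):
            return False            # found a model of kb where the query fails
    return True

# KB: (P | Q) & ~P  entails Q, but not P.
kb = lambda m: (m['P'] or m['Q']) and not m['P']
print(entails(kb, lambda m: m['Q'], ['P', 'Q']))   # True
print(entails(kb, lambda m: m['P'], ['P', 'Q']))   # False
```

`tt_check_all` follows the same idea, but builds the model recursively one symbol at a time, as shown next.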
###Code
psource(tt_check_all)
###Output
_____no_output_____
###Markdown
The algorithm basically computes every line of the truth table for $KB \implies \alpha$ and checks that it is true everywhere. If symbols remain to be assigned, the routine recursively constructs every combination of truth values for them; otherwise it checks whether the `model` is consistent with `kb`. The models of interest correspond to the lines of the truth table that have `true` in the KB column, and for these lines it checks whether the query evaluates to true: `result = pl_true(alpha, model)`.

In short, `tt_check_all` verifies, for each `model`, that `pl_true(kb, model) => pl_true(alpha, model)`. Equivalently, it confirms that `pl_true(kb, model) & ~pl_true(alpha, model)` never holds; that is, the knowledge base and the negation of the query are jointly unsatisfiable.

`tt_entails()` just extracts the symbols from the query and calls `tt_check_all()` with the proper parameters.
###Code
psource(tt_entails)
###Output
_____no_output_____
###Markdown
Keep in mind that for two symbols P and Q, P => Q is false only when P is `True` and Q is `False`.Example usage of `tt_entails()`:
###Code
tt_entails(P & Q, Q)
###Output
_____no_output_____
###Markdown
P & Q is True only when both P and Q are True. Hence, (P & Q) => Q is True.
###Code
tt_entails(P | Q, Q)
tt_entails(P | Q, P)
###Output
_____no_output_____
###Markdown
If we only know that P | Q is true, we cannot infer the truth values of P and Q individually. Hence P | Q entails neither Q nor P, and `tt_entails` returns `False` for both.
###Code
(A, B, C, D, E, F, G) = symbols('A, B, C, D, E, F, G')
tt_entails(A & (B | C) & D & E & ~(F | G), A & D & E & ~F & ~G)
###Output
_____no_output_____
###Markdown
We can see that for the KB to be true, A, D, and E have to be True, while F and G have to be False. Nothing can be said about B or C. Coming back to our problem, note that `tt_entails()` takes as input an `Expr` which is a conjunction of clauses, rather than the `KB` itself. You can use the `ask_if_true()` method of `PropKB`, which does all the required conversions. Let's check what `wumpus_kb` tells us about $P_{1, 1}$.
###Code
wumpus_kb.ask_if_true(~P11), wumpus_kb.ask_if_true(P11)
###Output
_____no_output_____
###Markdown
Looking at Figure 7.9 we see that in all models in which the knowledge base is `True`, $P_{1, 1}$ is `False`. It makes sense that `ask_if_true()` returns `True` for $\alpha = \neg P_{1, 1}$ and `False` for $\alpha = P_{1, 1}$. This raises the question: what if $\alpha$ is `True` in only a portion of all models? Do we return `True` or `False`? This doesn't rule out the possibility of $\alpha$ being `True`, but it is not entailed by the `KB`, so we return `False` in such cases. We can see this is the case for $P_{2, 2}$ and $P_{3, 1}$.
###Code
wumpus_kb.ask_if_true(~P22), wumpus_kb.ask_if_true(P22)
###Output
_____no_output_____
###Markdown
Proof by Resolution

Recall that our goal is to check whether $\text{KB} \vDash \alpha$, i.e. whether $\text{KB} \implies \alpha$ is true in every model. Suppose we wanted to check if $P \implies Q$ is valid. We check the satisfiability of $\neg (P \implies Q)$, which can be rewritten as $P \land \neg Q$. If $P \land \neg Q$ is unsatisfiable, then $P \implies Q$ must be true in all models. This gives us the result "$\text{KB} \vDash \alpha$ if and only if $\text{KB} \land \neg \alpha$ is unsatisfiable".

This technique corresponds to proof by contradiction, a standard mathematical proof technique. We assume $\alpha$ to be false and show that this leads to a contradiction with known axioms in $\text{KB}$. We obtain a contradiction by making valid inferences using inference rules. In this proof we use a single inference rule, resolution, which states $(l_1 \lor \dots \lor l_k) \land (m_1 \lor \dots \lor m_n) \land (l_i \iff \neg m_j) \implies l_1 \lor \dots \lor l_{i - 1} \lor l_{i + 1} \lor \dots \lor l_k \lor m_1 \lor \dots \lor m_{j - 1} \lor m_{j + 1} \lor \dots \lor m_n$. Applying resolution yields a new clause, which we add to the KB. We keep doing this until:

* There are no new clauses that can be added, in which case $\text{KB} \nvDash \alpha$.
* Two clauses resolve to yield the empty clause, in which case $\text{KB} \vDash \alpha$.

The empty clause is equivalent to False because it arises only from resolving two complementary unit clauses such as $P$ and $\neg P$, which is a contradiction as both $P$ and $\neg P$ can't be True at the same time.

There is one catch, however: the algorithm that implements proof by resolution cannot handle complex sentences. Implications and bi-implications have to be simplified into simpler clauses. We already know that *every sentence of propositional logic is logically equivalent to a conjunction of clauses*. We will use this fact to our advantage and simplify the input sentence into **conjunctive normal form** (CNF), which is a conjunction of disjunctions of literals. For example: $$(A\lor B)\land (\neg B\lor C\lor\neg D)\land (D\lor\neg E)$$ This is equivalent to the POS (Product of Sums) form in digital electronics.

Here's an outline of how the conversion is done:

1. Convert bi-implications to implications: $\alpha\iff\beta$ can be written as $(\alpha\implies\beta)\land(\beta\implies\alpha)$. This also applies to compound sentences: $\alpha\iff(\beta\lor\gamma)$ can be written as $(\alpha\implies(\beta\lor\gamma))\land((\beta\lor\gamma)\implies\alpha)$.
2. Convert implications to their logical equivalents: $\alpha\implies\beta$ can be written as $\neg\alpha\lor\beta$.
3. Move negation inwards: CNF requires atomic literals, so negation cannot appear on a compound statement. De Morgan's laws are helpful here: $\neg(\alpha\land\beta)\equiv(\neg\alpha\lor\neg\beta)$ and $\neg(\alpha\lor\beta)\equiv(\neg\alpha\land\neg\beta)$.
4. Distribute disjunction over conjunction: Disjunction and conjunction are distributive over each other. Now that we only have conjunctions, disjunctions and negations in our expression, we distribute disjunctions over conjunctions wherever possible, as this gives us a sentence which is a conjunction of simpler clauses, which is what we wanted in the first place. We need a term of the form $(\alpha_{1}\lor\alpha_{2}\lor\alpha_{3}...)\land(\beta_{1}\lor\beta_{2}\lor\beta_{3}...)\land(\gamma_{1}\lor\gamma_{2}\lor\gamma_{3}...)\land...$

The `to_cnf` function executes this conversion using helper subroutines.
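Before looking at `to_cnf`, here is a hedged, minimal sketch of the resolution rule itself. The representation (clauses as Python sets of signed literal strings) and the helper names are assumptions made for this illustration only; they are not the `Expr`-based representation used by logic.py.

```python
# Toy resolution step; clauses are sets of signed literals like {'P', '~Q'}.
def negate(lit):
    # '~P' -> 'P' and 'P' -> '~P'
    return lit[1:] if lit.startswith('~') else '~' + lit

def resolve(c1, c2):
    """Yield every resolvent of the two clauses."""
    for lit in c1:
        if negate(lit) in c2:
            yield frozenset(c1 - {lit}) | frozenset(c2 - {negate(lit)})

# Resolving (P v Q) with (~P v R) yields (Q v R).
print(list(resolve({'P', 'Q'}, {'~P', 'R'})))   # [frozenset({'Q', 'R'})]
```

A full resolution prover would repeatedly resolve pairs of clauses until it derives the empty clause or runs out of new clauses, exactly as described above.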
###Code
psource(to_cnf)
###Output
_____no_output_____
###Markdown
`to_cnf` calls three subroutines.`eliminate_implications` converts bi-implications and implications to their logical equivalents.`move_not_inwards` removes negations from compound statements and moves them inwards using De Morgan's laws.`distribute_and_over_or` distributes disjunctions over conjunctions.Run the cells below for implementation details.
###Code
%psource eliminate_implications
%psource move_not_inwards
%psource distribute_and_over_or
###Output
_____no_output_____
###Markdown
Let's convert some sentences to see how it works
###Code
A, B, C, D = expr('A, B, C, D')
to_cnf(A |'<=>'| B)
to_cnf(A |'<=>'| (B & C))
to_cnf(A & (B | (C & D)))
to_cnf((A |'<=>'| ~B) |'==>'| (C | ~D))
###Output
_____no_output_____
###Markdown
Coming back to our resolution problem, we can see how the `to_cnf` function is utilized here
###Code
psource(pl_resolution)
pl_resolution(wumpus_kb, ~P11), pl_resolution(wumpus_kb, P11)
pl_resolution(wumpus_kb, ~P22), pl_resolution(wumpus_kb, P22)
###Output
_____no_output_____
###Markdown
Forward and backward chaining

Previously, we said we would look at two algorithms to check if a sentence is entailed by the `KB`, but here's a third one. The difference here is that our goal now is to determine if a knowledge base of definite clauses entails a single proposition symbol *q* - the query. There is a catch, however: the knowledge base can only contain **Horn clauses**.

Horn Clauses

Horn clauses can be defined as a *disjunction* of *literals* with **at most** one positive literal. A Horn clause with exactly one positive literal is called a *definite clause*. A Horn clause might look like $\neg a\lor\neg b\lor\neg c\lor\neg d... \lor z$. This, coincidentally, is also a definite clause. Using De Morgan's laws together with the equivalence $\neg p \lor q \equiv p \implies q$, the example above can be rewritten as $a\land b\land c\land d ... \implies z$. This seems like a logical representation of how humans process known data and facts. Assuming percepts `a`, `b`, `c`, `d` ... to be true simultaneously, we can infer `z` to also be true at that point in time. There are some interesting aspects of Horn clauses that make algorithmic inference or *resolution* easier.

- Definite clauses can be written as implications: The most important simplification a definite clause provides is that it can be written as an implication. The premise (or the knowledge that leads to the implication) is a conjunction of positive literals, and the conclusion (the implied statement) is also a positive literal. The sentence thus becomes easier to understand. The premise and the conclusion are conventionally called the *body* and the *head* respectively. A single positive literal is called a *fact*.
- Forward chaining and backward chaining can be used for inference from Horn clauses: Forward chaining is semantically identical to `AND-OR-Graph-Search` from the chapter on search algorithms. Implementational details will be explained shortly.
- Deciding entailment with Horn clauses is linear in the size of the knowledge base: Surprisingly, the forward and backward chaining algorithms traverse each element of the knowledge base at most once, greatly simplifying the problem.

The function `pl_fc_entails` implements forward chaining to see if a knowledge base `KB` entails a symbol `q`. Before we proceed further, note that `pl_fc_entails` doesn't use an ordinary `KB` instance. The knowledge base here is an instance of the `PropDefiniteKB` class, derived from the `PropKB` class, but modified to store definite clauses. The main point of difference is the inclusion of a helper method on `PropDefiniteKB` that returns a list of clauses in the KB that have a given symbol `p` in their premise.
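Before the real machinery, here is a toy, hedged sketch of the forward-chaining loop over definite clauses. The representation — rules as (premises, conclusion) pairs over plain strings — is an assumption made just for this illustration, not the `PropDefiniteKB` API.

```python
# Toy forward chaining over definite clauses: body (set of symbols) -> head.
rules = [({'A', 'B'}, 'D'), ({'B', 'C'}, 'F'), ({'D', 'F'}, 'G')]
facts = {'A', 'B', 'C'}                 # the known positive literals

changed = True
while changed:
    changed = False
    for body, head in rules:
        if body <= facts and head not in facts:   # all premises already known
            facts.add(head)
            changed = True

print('G' in facts)   # True: A,B -> D;  B,C -> F;  D,F -> G
```

With that picture in mind, here is the `clauses_with_premise` helper: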
###Code
psource(PropDefiniteKB.clauses_with_premise)
###Output
_____no_output_____
###Markdown
Let's now have a look at the `pl_fc_entails` algorithm.
###Code
psource(pl_fc_entails)
###Output
_____no_output_____
###Markdown
The function accepts a knowledge base `KB` (an instance of `PropDefiniteKB`) and a query `q` as inputs.`count` initially stores the number of symbols in the premise of each sentence in the knowledge base.The `conjuncts` helper function separates a given sentence at conjunctions.`inferred` is initialized as a *boolean* defaultdict. This will be used later to check if we have inferred all premises of each clause of the agenda.`agenda` initially stores a list of clauses that the knowledge base knows to be true.The `is_prop_symbol` helper function checks if the given symbol is a valid propositional logic symbol.We now iterate through `agenda`, popping a symbol `p` on each iteration.If the query `q` is the same as `p`, we know that entailment holds.The agenda is processed, reducing `count` by one for each implication with a premise `p`.A conclusion is added to the agenda when `count` reaches zero. This means we know all the premises of that particular implication to be true.`clauses_with_premise` is a helpful method of the `PropKB` class.It returns a list of clauses in the knowledge base that have `p` in their premise.Now that we have an idea of how this function works, let's see a few examples of its usage, but we first need to define our knowledge base. We assume we know the following clauses to be true.
###Code
clauses = ['(B & F)==>E',
'(A & E & F)==>G',
'(B & C)==>F',
'(A & B)==>D',
'(E & F)==>H',
'(H & I)==>J',
'A',
'B',
'C']
###Output
_____no_output_____
###Markdown
We will now `tell` this information to our knowledge base.
###Code
definite_clauses_KB = PropDefiniteKB()
for clause in clauses:
definite_clauses_KB.tell(expr(clause))
###Output
_____no_output_____
###Markdown
We can now check if our knowledge base entails the following queries.
###Code
pl_fc_entails(definite_clauses_KB, expr('G'))
pl_fc_entails(definite_clauses_KB, expr('H'))
pl_fc_entails(definite_clauses_KB, expr('I'))
pl_fc_entails(definite_clauses_KB, expr('J'))
###Output
_____no_output_____
###Markdown
Effective Propositional Model Checking

The previous segments elucidate the algorithmic procedure for model checking. In this segment, we look at ways of making it computationally efficient. The problem we are trying to solve is conventionally called the _propositional satisfiability problem_, abbreviated as the _SAT_ problem. In layman's terms, if there exists a model that satisfies a given Boolean formula, the formula is called satisfiable. The SAT problem was the first problem to be proven _NP-complete_. The main characteristics of an NP-complete problem are:

- Given a solution to such a problem, it is easy to verify whether the solution solves the problem.
- The time required to actually solve the problem using any known algorithm increases exponentially with respect to the size of the problem.

Due to these properties, heuristic and approximate methods are often applied to find solutions to these problems. It is extremely important to be able to solve large-scale SAT problems efficiently because many combinatorial problems in computer science can be conveniently reduced to checking the satisfiability of a propositional sentence under some constraints. We will introduce two new algorithms that perform propositional model checking in a computationally effective way.

1. DPLL (Davis-Putnam-Logemann-Loveland) algorithm

This algorithm is very similar to Backtracking-Search. It recursively enumerates possible models in a depth-first fashion, with the following improvements over algorithms like `tt_entails`:

1. Early termination: In certain cases, the algorithm can detect the truth value of a statement using just a partially completed model. For example, $(P\lor Q)\land(P\lor R)$ is true if P is true, regardless of other variables. This reduces the search space significantly.
2. Pure symbol heuristic: A symbol that has the same sign (positive or negative) in all clauses is called a _pure symbol_. It isn't difficult to see that any satisfiable model will have the pure symbols assigned such that their parent clauses become _true_. For example, $(P\lor\neg Q)\land(\neg Q\lor\neg R)\land(R\lor P)$ has P and Q as pure symbols, and for the sentence to be true, P _has_ to be true and Q _has_ to be false. The pure symbol heuristic thus simplifies the problem a bit.
3. Unit clause heuristic: In the context of DPLL, clauses with just one literal, and clauses in which all but one literal is _false_, are called unit clauses. A unit clause can only be satisfied by assigning the value that makes its last literal true; we have no other choice. Assigning one unit clause can create another unit clause. For example, when P is false, $(P\lor Q)$ becomes a unit clause, causing _true_ to be assigned to Q. A series of forced assignments derived from previous unit clauses is called _unit propagation_. In this way, this heuristic simplifies the problem further.

The algorithm often employs other tricks to scale up to large problems. However, these tricks are currently out of the scope of this notebook; refer to section 7.6 of the book for more details. The sketch below illustrates unit propagation in miniature; after that, let's have a look at the algorithm.
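Here is that toy sketch of unit propagation. Clauses are represented as sets of signed literal strings, an assumption made for illustration only; the real `dpll` works with `Expr` objects via helpers such as `find_unit_clause`.

```python
def negate(lit):
    return lit[1:] if lit.startswith('~') else '~' + lit

def unit_propagate(clauses):
    """Repeatedly assign the literal forced by any unit clause.

    Returns the forced assignment and the simplified clause list.
    (Conflict detection -- an emptied clause -- is omitted for brevity.)
    """
    assignment = {}
    clauses = [set(c) for c in clauses]
    while True:
        units = [next(iter(c)) for c in clauses if len(c) == 1]
        if not units:
            return assignment, clauses
        lit = units[0]
        assignment[lit.lstrip('~')] = not lit.startswith('~')
        # Drop satisfied clauses; remove the falsified literal elsewhere.
        clauses = [c - {negate(lit)} for c in clauses if lit not in c]

print(unit_propagate([{'P'}, {'~P', 'Q'}, {'Q', 'R'}]))
# ({'P': True, 'Q': True}, [])
```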
###Code
psource(dpll)
###Output
_____no_output_____
###Markdown
The algorithm uses the ideas described above to check satisfiability of a sentence in propositional logic.It recursively calls itself, simplifying the problem at each step. It also uses helper functions `find_pure_symbol` and `find_unit_clause` to carry out steps 2 and 3 above.The `dpll_satisfiable` helper function converts the input clauses to _conjunctive normal form_ and calls the `dpll` function with the correct parameters.
###Code
psource(dpll_satisfiable)
###Output
_____no_output_____
###Markdown
Let's see a few examples of usage.
###Code
A, B, C, D = expr('A, B, C, D')
dpll_satisfiable(A & B & ~C & D)
###Output
_____no_output_____
###Markdown
This is a simple case to highlight that the algorithm actually works.
###Code
dpll_satisfiable((A & B) | (C & ~A) | (B & ~D))
###Output
_____no_output_____
###Markdown
If a particular symbol isn't present in the solution, it means that the solution is independent of the value of that symbol.In this case, the solution is independent of A.
###Code
dpll_satisfiable(A |'<=>'| B)
dpll_satisfiable((A |'<=>'| B) |'==>'| (C & ~A))
dpll_satisfiable((A | (B & C)) |'<=>'| ((A | B) & (A | C)))
###Output
_____no_output_____
###Markdown
2. WalkSAT algorithmThis algorithm is very similar to Hill climbing.On every iteration, the algorithm picks an unsatisfied clause and flips a symbol in the clause.This is similar to finding a neighboring state in the `hill_climbing` algorithm.The symbol to be flipped is decided by an evaluation function that counts the number of unsatisfied clauses.Sometimes, symbols are also flipped randomly, to avoid local optima. A subtle balance between greediness and randomness is required. Alternatively, some versions of the algorithm restart with a completely new random assignment if no solution has been found for too long, as a way of getting out of local minima of numbers of unsatisfied clauses.Let's have a look at the algorithm.
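First, though, here is a hedged toy sketch of a single WalkSAT-style flip. The representation (clauses as lists of signed literal strings, a dict-based model) and the helper names are assumptions for this illustration, not the logic.py API.

```python
import random

def satisfied(clause, model):
    # 'P' holds when model['P'] is True; '~P' holds when it is False.
    return any(model[lit.lstrip('~')] != lit.startswith('~') for lit in clause)

def walksat_step(clauses, model, p=0.5):
    """Flip one symbol, chosen from a random unsatisfied clause."""
    unsat = [c for c in clauses if not satisfied(c, model)]
    if not unsat:
        return model                       # every clause already holds
    clause = random.choice(unsat)
    if random.random() < p:                # random-walk move
        var = random.choice(clause).lstrip('~')
    else:                                  # greedy move: minimize unsatisfied clauses
        def unsat_after(v):
            m = dict(model)
            m[v] = not m[v]
            return sum(not satisfied(c, m) for c in clauses)
        var = min({lit.lstrip('~') for lit in clause}, key=unsat_after)
    model[var] = not model[var]
    return model

model = {'A': False, 'B': False}
print(walksat_step([['A'], ['~A', 'B']], model))   # flips A: {'A': True, 'B': False}
```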
###Code
psource(WalkSAT)
###Output
_____no_output_____
###Markdown
The function takes three arguments:

1. The `clauses` we want to satisfy.
2. The probability `p` of randomly changing a symbol.
3. The maximum number of flips (`max_flips`) the algorithm will run for. If the clauses are still unsatisfied, the algorithm returns `None` to denote failure.

The algorithm is identical in concept to Hill climbing and the code isn't difficult to understand. Let's see a few examples of usage.
###Code
A, B, C, D = expr('A, B, C, D')
WalkSAT([A, B, ~C, D], 0.5, 100)
###Output
_____no_output_____
###Markdown
This is a simple case to show that the algorithm converges.
###Code
WalkSAT([A & B, A & C], 0.5, 100)
WalkSAT([A & B, C & D, C & B], 0.5, 100)
WalkSAT([A & B, C | D, ~(D | B)], 0.5, 1000)
###Output
_____no_output_____
###Markdown
This one doesn't give any output because WalkSAT did not find any model in which these clauses hold. Solving these clauses by hand shows that together they form a contradiction, so there is no solution to find. One point of difference between `WalkSAT` and `dpll_satisfiable` is that the two take their inputs differently. For WalkSAT to take complete sentences as input, we can write a helper function that converts the input sentence into conjunctive normal form and then calls WalkSAT with the list of conjuncts of the CNF form of the sentence.
###Code
def WalkSAT_CNF(sentence, p=0.5, max_flips=10000):
    # Convert to CNF, split into conjuncts, and pass the probability p through.
    return WalkSAT(conjuncts(to_cnf(sentence)), p, max_flips)
###Output
_____no_output_____
###Markdown
Now we can call `WalkSAT_CNF` and `DPLL_Satisfiable` with the same arguments.
###Code
WalkSAT_CNF((A & B) | (C & ~A) | (B & ~D), 0.5, 1000)
###Output
_____no_output_____
###Markdown
It works!Notice that the solution generated by WalkSAT doesn't omit variables that the sentence doesn't depend upon. If the sentence is independent of a particular variable, the solution contains a random value for that variable because of the stochastic nature of the algorithm.Let's compare the runtime of WalkSAT and DPLL for a few cases. We will use the `%%timeit` magic to do this.
###Code
sentence_1 = A |'<=>'| B
sentence_2 = (A & B) | (C & ~A) | (B & ~D)
sentence_3 = (A | (B & C)) |'<=>'| ((A | B) & (A | C))
%%timeit
dpll_satisfiable(sentence_1)
dpll_satisfiable(sentence_2)
dpll_satisfiable(sentence_3)
%%timeit
WalkSAT_CNF(sentence_1)
WalkSAT_CNF(sentence_2)
WalkSAT_CNF(sentence_3)
###Output
4.64 ms ± 65.3 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
###Markdown
On average, for solvable cases, `WalkSAT` is considerably faster than `dpll` because, for a small number of variables, `WalkSAT` can reduce the search space significantly. Results can differ for sentences with more symbols, though. Feel free to play around with this to understand the trade-offs of these algorithms better.

SATPlan

In this section we show how to make plans by logical inference. The basic idea is very simple. It includes the following three steps:

1. Construct a sentence that includes:
   1. A collection of assertions about the initial state.
   2. The successor-state axioms for all the possible actions at each time up to some maximum time t.
   3. The assertion that the goal is achieved at time t.
2. Present the whole sentence to a SAT solver.
3. Assuming a model is found, extract from the model those variables that represent actions and are assigned true. Together they represent a plan to achieve the goals.

Let's have a look at the algorithm.
###Code
psource(SAT_plan)
###Output
_____no_output_____
###Markdown
Let's see a few examples of its usage. First we define a transition and then call `SAT_plan`.
###Code
transition = {'A': {'Left': 'A', 'Right': 'B'},
'B': {'Left': 'A', 'Right': 'C'},
'C': {'Left': 'B', 'Right': 'C'}}
print(SAT_plan('A', transition, 'C', 2))
print(SAT_plan('A', transition, 'B', 3))
print(SAT_plan('C', transition, 'A', 3))
###Output
None
['Right']
['Left', 'Left']
###Markdown
Let us do the same for another transition.
###Code
transition = {(0, 0): {'Right': (0, 1), 'Down': (1, 0)},
(0, 1): {'Left': (1, 0), 'Down': (1, 1)},
(1, 0): {'Right': (1, 0), 'Up': (1, 0), 'Left': (1, 0), 'Down': (1, 0)},
(1, 1): {'Left': (1, 0), 'Up': (0, 1)}}
print(SAT_plan((0, 0), transition, (1, 1), 4))
###Output
['Right', 'Down']
###Markdown
First-Order Logic Knowledge Bases: `FolKB`

The class `FolKB` can be used to represent a knowledge base of first-order logic sentences. You would initialize and use it the same way as you would for `PropKB`, except that the clauses are first-order definite clauses. We will see how to write such clauses to create a database and query them in the following sections.

Criminal KB

In this section we create a `FolKB` based on the following paragraph. The law says that it is a crime for an American to sell weapons to hostile nations. The country Nono, an enemy of America, has some missiles, and all of its missiles were sold to it by Colonel West, who is American. The first step is to extract the facts and convert them into first-order definite clauses. Extracting the facts from data alone is a challenging task. Fortunately, we have a small paragraph and can do extraction and conversion manually. We'll store the clauses in a list aptly named `clauses`.
###Code
clauses = []
###Output
_____no_output_____
###Markdown
“... it is a crime for an American to sell weapons to hostile nations”

The keywords to look for here are 'crime', 'American', 'sell', 'weapon' and 'hostile'. We use predicate symbols to capture their meaning.

* `Criminal(x)`: `x` is a criminal
* `American(x)`: `x` is an American
* `Sells(x, y, z)`: `x` sells `y` to `z`
* `Weapon(x)`: `x` is a weapon
* `Hostile(x)`: `x` is a hostile nation

Let us now combine them with appropriate variable naming to depict the meaning of the sentence. The criminal `x` is also the American `x` who sells weapon `y` to `z`, which is a hostile nation.

$\text{American}(x) \land \text{Weapon}(y) \land \text{Sells}(x, y, z) \land \text{Hostile}(z) \implies \text{Criminal}(x)$
###Code
clauses.append(expr("(American(x) & Weapon(y) & Sells(x, y, z) & Hostile(z)) ==> Criminal(x)"))
###Output
_____no_output_____
###Markdown
"The country Nono, an enemy of America"We now know that Nono is an enemy of America. We represent these nations using the constant symbols `Nono` and `America`. the enemy relation is show using the predicate symbol `Enemy`.$\text{Enemy}(\text{Nono}, \text{America})$
###Code
clauses.append(expr("Enemy(Nono, America)"))
###Output
_____no_output_____
###Markdown
"Nono ... has some missiles"This states the existence of some missile which is owned by Nono. $\exists x \text{Owns}(\text{Nono}, x) \land \text{Missile}(x)$. We invoke existential instantiation to introduce a new constant `M1` which is the missile owned by Nono.$\text{Owns}(\text{Nono}, \text{M1}), \text{Missile}(\text{M1})$
###Code
clauses.append(expr("Owns(Nono, M1)"))
clauses.append(expr("Missile(M1)"))
###Output
_____no_output_____
###Markdown
"All of its missiles were sold to it by Colonel West"If Nono owns something and it classifies as a missile, then it was sold to Nono by West.$\text{Missile}(x) \land \text{Owns}(\text{Nono}, x) \implies \text{Sells}(\text{West}, x, \text{Nono})$
###Code
clauses.append(expr("(Missile(x) & Owns(Nono, x)) ==> Sells(West, x, Nono)"))
###Output
_____no_output_____
###Markdown
"West, who is American"West is an American.$\text{American}(\text{West})$
###Code
clauses.append(expr("American(West)"))
###Output
_____no_output_____
###Markdown
We also know, from our understanding of language, that missiles are weapons and that an enemy of America counts as “hostile”.$\text{Missile}(x) \implies \text{Weapon}(x), \text{Enemy}(x, \text{America}) \implies \text{Hostile}(x)$
###Code
clauses.append(expr("Missile(x) ==> Weapon(x)"))
clauses.append(expr("Enemy(x, America) ==> Hostile(x)"))
###Output
_____no_output_____
###Markdown
Now that we have converted the information into first-order definite clauses we can create our first-order logic knowledge base.
###Code
crime_kb = FolKB(clauses)
###Output
_____no_output_____
###Markdown
Inference in First-Order LogicIn this section we look at a forward chaining and a backward chaining algorithm for `FolKB`. Both aforementioned algorithms rely on a process called unification, a key component of all first-order inference algorithms. UnificationWe sometimes require finding substitutions that make different logical expressions look identical. This process, called unification, is done by the `unify` algorithm. It takes as input two sentences and returns a unifier for them if one exists. A unifier is a dictionary which stores the substitutions required to make the two sentences identical. It does so by recursively unifying the components of a sentence, where the unification of a variable symbol `var` with a constant symbol `Const` is the mapping `{var: Const}`. Let's look at a few examples.
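Before those examples, here is a hedged toy unifier for flat terms, just to show the shape of the substitution dictionary. By assumption, lowercase strings are variables and everything else is a constant; the real `unify` also handles nested `Expr`s and argument lists.

```python
def toy_unify(x, y, subst=None):
    """Unify two flat terms, extending the substitution dict (None on failure)."""
    subst = {} if subst is None else subst
    x, y = subst.get(x, x), subst.get(y, y)   # follow one level of earlier bindings
    if x == y:
        return subst
    if isinstance(x, str) and x.islower():    # x is a variable
        return {**subst, x: y}
    if isinstance(y, str) and y.islower():    # y is a variable
        return {**subst, y: x}
    return None                               # two distinct constants

print(toy_unify('x', 'Bella'))      # {'x': 'Bella'}
print(toy_unify('Dobby', 'Bella'))  # None
```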
###Code
unify(expr('x'), 3)
unify(expr('A(x)'), expr('A(B)'))
unify(expr('Cat(x) & Dog(Dobby)'), expr('Cat(Bella) & Dog(y)'))
###Output
_____no_output_____
###Markdown
In cases where there is no possible substitution that unifies the two sentences, the function returns `None`.
###Code
print(unify(expr('Cat(x)'), expr('Dog(Dobby)')))
###Output
None
###Markdown
We also need to take care that we do not unintentionally reuse a variable name across the two sentences: `unify` treats repeated names as a single variable, which prevents that variable from taking multiple values.
###Code
print(unify(expr('Cat(x) & Dog(Dobby)'), expr('Cat(Bella) & Dog(x)')))
###Output
None
###Markdown
Forward Chaining Algorithm

We consider the simple forward-chaining algorithm presented in Figure 9.3. We look at each rule in the knowledge base and see if the premises can be satisfied. This is done by finding a substitution which unifies each of the premises with a clause in the `KB`. If we are able to unify the premises, the conclusion (with the corresponding substitution) is added to the `KB`. This inferencing process is repeated until either the query can be answered or no new sentences can be added. We test if a newly added clause unifies with the query, in which case the substitution yielded by `unify` is an answer to the query. If we run out of sentences to infer, the query is a failure.

The function `fol_fc_ask` is a generator which yields all substitutions that validate the query.
###Code
psource(fol_fc_ask)
###Output
_____no_output_____
###Markdown
Let's find out all the hostile nations. Note that we only told the `KB` that Nono was an enemy of America, not that it was hostile.
###Code
answer = fol_fc_ask(crime_kb, expr('Hostile(x)'))
print(list(answer))
###Output
[{x: Nono}]
###Markdown
The generator returned a single substitution which says that Nono is a hostile nation. See how after adding another enemy nation the generator returns two substitutions.
###Code
crime_kb.tell(expr('Enemy(JaJa, America)'))
answer = fol_fc_ask(crime_kb, expr('Hostile(x)'))
print(list(answer))
###Output
[{x: Nono}, {x: JaJa}]
###Markdown
Note: `fol_fc_ask` makes changes to the `KB` by adding sentences to it.

Backward Chaining Algorithm

This algorithm works backward from the goal, chaining through rules to find known facts that support the proof. Suppose `goal` is the query we want to find the substitution for. We find rules of the form $\text{lhs} \implies \text{goal}$ in the `KB` and try to prove `lhs`. There may be multiple clauses in the `KB` which give multiple `lhs`. It is sufficient to prove only one of these, but to prove an `lhs`, all the conjuncts in that `lhs` must be proved. This makes it similar to And/Or search.

OR

The OR part of the algorithm comes from our choice to select any clause of the form $\text{lhs} \implies \text{goal}$. Looking at every rule whose `rhs` unifies with the `goal`, we yield a substitution which proves all the conjuncts in its `lhs`. We use `parse_definite_clause` to obtain the `lhs` and `rhs` from a clause of the form $\text{lhs} \implies \text{rhs}$. For atomic facts the `lhs` is an empty list.
###Code
%psource fol_bc_or
###Output
_____no_output_____
###Markdown
ANDThe AND corresponds to proving all the conjuncts in the `lhs`. We need to find a substitution which proves each and every clause in the list of conjuncts.
###Code
%psource fol_bc_and
###Output
_____no_output_____
###Markdown
Now the main function `fol_bc_ask` calls `fol_bc_or` with the substitution initialized as empty. The `ask` method of `FolKB` uses `fol_bc_ask` and fetches the first substitution returned by the generator to answer the query. Let's query the knowledge base we created from `clauses` to find hostile nations.
###Code
# Rebuild KB because running fol_fc_ask would add new facts to the KB
crime_kb = FolKB(clauses)
crime_kb.ask(expr('Hostile(x)'))
###Output
_____no_output_____
###Markdown
You may notice some new variables in the substitution. They are introduced to standardize the variable names to prevent naming problems as discussed in the [Unification section](Unification) Appendix: The Implementation of `|'==>'|`Consider the `Expr` formed by this syntax:
###Code
P |'==>'| ~Q
###Output
_____no_output_____
###Markdown
What is the funny `|'==>'|` syntax? The trick is that "`|`" is just the regular Python or-operator, and so is exactly equivalent to this:
###Code
(P | '==>') | ~Q
###Output
_____no_output_____
###Markdown
In other words, there are two applications of or-operators. Here's the first one:
###Code
P | '==>'
###Output
_____no_output_____
###Markdown
What is going on here is that the `__or__` method of `Expr` serves a dual purpose. If the right-hand-side is another `Expr` (or a number), then the result is an `Expr`, as in `(P | Q)`. But if the right-hand-side is a string, then the string is taken to be an operator, and we create a node in the abstract syntax tree corresponding to a partially-filled `Expr`, one where we know the left-hand-side is `P` and the operator is `==>`, but we don't yet know the right-hand-side.The `PartialExpr` class has an `__or__` method that says to create an `Expr` node with the right-hand-side filled in. Here we can see the combination of the `PartialExpr` with `Q` to create a complete `Expr`:
###Code
partial = PartialExpr('==>', P)
partial | ~Q
###Output
_____no_output_____
###Markdown
This [trick](http://code.activestate.com/recipes/384122-infix-operators/) is due to [Ferdinand Jamitzky](http://code.activestate.com/recipes/users/98863/), with a modification by [C. G. Vedant](https://github.com/Chipe1),who suggested using a string inside the or-bars. Appendix: The Implementation of `expr`How does `expr` parse a string into an `Expr`? It turns out there are two tricks (besides the Jamitzky/Vedant trick):1. We do a string substitution, replacing "`==>`" with "`|'==>'|`" (and likewise for other operators).2. We `eval` the resulting string in an environment in which every identifieris bound to a symbol with that identifier as the `op`.In other words,
###Code
expr('~(P & Q) ==> (~P | ~Q)')
###Output
_____no_output_____
###Markdown
is equivalent to doing:
###Code
P, Q = symbols('P, Q')
~(P & Q) |'==>'| (~P | ~Q)
###Output
_____no_output_____
###Markdown
One thing to beware of: this puts `==>` at the same precedence level as `"|"`, which is not quite right. For example, we get this:
###Code
P & Q |'==>'| P | Q
###Output
_____no_output_____
###Markdown
which is probably not what we meant; when in doubt, put in extra parens:
###Code
(P & Q) |'==>'| (P | Q)
###Output
_____no_output_____
###Markdown
Examples
###Code
from notebook import Canvas_fol_bc_ask
canvas_bc_ask = Canvas_fol_bc_ask('canvas_bc_ask', crime_kb, expr('Criminal(x)'))
###Output
_____no_output_____
###Markdown
Car Loan Decision Support System using the KNN Algorithm
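Before reading the code, a brief note on what it implements (my summary of the code below; the 50% threshold is the author's choice): for an applicant's criteria vector $c$ and each training row $t$, the program computes the Euclidean distance $d(c, t) = \sqrt{\sum_{j=1}^{5} (c_j - t_j)^2}$, sorts the training rows by this distance, and declares the applicant eligible when more than half of the $k$ nearest rows carry the decision label 1 — a simple majority vote, the core of the k-nearest-neighbours method.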
###Code
import numpy as np
import pandas as pd
def datapemohon(x,k):
    while x<len(k):
        print("Enter criterion "+str(x+1)+" :",end=" ")  # prompt for criteria 1 through n
        try:  # check whether the user actually entered a number
            z=input()
            k[x]=int(z)
            if (k[x]>=0) and (k[x]<=5):  # check whether the value is within the allowed range
                x+=1
            else:
                print("Please enter a number from 0 to 5")  # shown when the value is out of range
        except ValueError:
            print("Please enter an integer.")  # shown when the input is not an integer
def masukkan_nilai_k(y):
    while y<=0:
        try:
            print("Enter the value of k :",end=" ")
            nilai_k=input()
            nilai_k=int(nilai_k)
            if nilai_k<=0:
                print("Please enter a number of at least 1")
            else:
                y+=1
        except ValueError:
            print("Please enter an integer.")
    return nilai_k
def hitung(hasil,i,d,k):
    while i<len(d):  # repeat for every row of the training data
        i2=0
        while i2<5:  # accumulate the squared difference for each of the five criteria
            hasil_sq=(k[i2]-d[i][i2+1])**2  # squared difference between test and training values
            hasil[i][0]+=hasil_sq
            i2=i2+1
        hasil[i][0]=np.sqrt(hasil[i][0])  # Euclidean distance: square root of the summed squares
        hasil[i][1]=i+1  # store the training-row number
        hasil[i][2]=d[i][6]  # store that row's decision label
        i=i+1
def urutkan(hasil, nilai_k, i3):
    kebenaran=0
    print("Before sorting by distance")
    print("No \t Distance")
    for x in hasil:  # show the computed distances before sorting
        print(int(x[1]),end="\t")
        print(x[0])
    hasil.sort(key=lambda x:x[0])  # sort the rows by distance to the test point
    print("After sorting by distance")
    print("No \t Distance")
    for x in hasil:  # show the distances after sorting
        print(int(x[1]),end="\t")
        print(x[0])
    while i3<nilai_k:  # look at the k nearest rows only
        kebenaran+=hasil[i3][2]  # sum their decision labels (0 = not qualified, 1 = qualified)
        i3=i3+1
    return kebenaran
hasil=np.zeros((4,3))  # one row per training row: distance, row number, decision label
k=np.zeros(5)  # array holding the applicant's criteria values
d=pd.read_csv('data_train_sample.csv',delimiter=";")
d=np.asarray(d)
i=0
x=0
y=0
datapemohon(x,k)
nilai_k = masukkan_nilai_k(y)
hitung(hasil,i,d,k)
print(hasil)
hasil = list(hasil)
kebenaran = urutkan(hasil, nilai_k, 0)
print(kebenaran, nilai_k)
keputusan=kebenaran/nilai_k*100  # share of the k nearest neighbours that were approved, in percent
print("Based on the calculation above, the system states that the applicant",end=" ")
if keputusan>50:
    print("Qualifies",end=" ")
else:
    print("Does Not Qualify",end=" ")
print("to apply for credit.")
print("Confidence percentage "+str(keputusan)+"%")
###Output
Before sorting by distance
No 	 Distance
1	5.477225575051661
2	3.872983346207417
3	3.4641016151377544
4	3.4641016151377544
After sorting by distance
No 	 Distance
3	3.4641016151377544
4	3.4641016151377544
2	3.872983346207417
1	5.477225575051661
2.0 3
Based on the calculation above, the system states that the applicant Qualifies to apply for credit.
Confidence percentage 66.66666666666666%
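###Markdown
As a design note, the distance loop above can be replaced by a single vectorized NumPy expression. This is a sketch only; it assumes the same array layout as the code above (column 0 an ID, columns 1-5 the criteria, column 6 the label), and the sample rows are fabricated for illustration:
###Code
import numpy as np

def knn_distances(train, test_criteria):
    """Euclidean distance from one test vector to every training row."""
    diffs = train[:, 1:6] - test_criteria      # broadcast the test vector over all rows
    return np.sqrt((diffs ** 2).sum(axis=1))   # one distance per training row

toy_train = np.array([[1, 5, 4, 3, 2, 1, 1],   # fabricated rows in the assumed layout
                      [2, 1, 1, 2, 1, 1, 0]], dtype=float)
knn_distances(toy_train, np.array([3, 3, 3, 3, 3], dtype=float))
###Output
_____no_output_____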
###Markdown
Logic This Jupyter notebook acts as supporting material for topics covered in __Chapter 6 Logical Agents__, __Chapter 7 First-Order Logic__ and __Chapter 8 Inference in First-Order Logic__ of the book *[Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu)*. We make use of the implementations in the [logic.py](https://github.com/aimacode/aima-python/blob/master/logic.py) module. See the [intro notebook](https://github.com/aimacode/aima-python/blob/master/intro.ipynb) for instructions. Let's first import everything from the `logic4e` module.
###Code
from utils import *
from logic4e import *
###Output
_____no_output_____
###Markdown
CONTENTS
- Logical sentences
- Expr
- PropKB
- Knowledge-based agents
- Inference in propositional knowledge base
- Truth table enumeration
- Proof by resolution
- Forward and backward chaining
- DPLL
- WalkSAT
- SATPlan
- FolKB
- Inference in first order knowledge base
- Unification
- Forward chaining algorithm
- Backward chaining algorithm

Logical Sentences The `Expr` class is designed to represent any kind of mathematical expression. The simplest type of `Expr` is a symbol, which can be defined with the function `Symbol`:
###Code
Symbol('x')
###Output
_____no_output_____
###Markdown
Or we can define multiple symbols at the same time with the function `symbols`:
###Code
(x, y, P, Q, f) = symbols('x, y, P, Q, f')
###Output
_____no_output_____
###Markdown
We can combine `Expr`s with the regular Python infix and prefix operators. Here's how we would form the logical sentence "P and not Q":
###Code
P & ~Q
###Output
_____no_output_____
###Markdown
This works because the `Expr` class overloads the `&` operator with this definition:
```python
def __and__(self, other):
    return Expr('&', self, other)
```
and similarly overloads the other operators. An `Expr` has two fields: `op` for the operator, which is always a string, and `args` for the arguments, which is a tuple of 0 or more expressions. By "expression," I mean either an instance of `Expr` or a number. Let's take a look at the fields for some `Expr` examples:
###Code
sentence = P & ~Q
sentence.op
sentence.args
P.op
P.args
Pxy = P(x, y)
Pxy.op
Pxy.args
###Output
_____no_output_____
###Markdown
It is important to note that the `Expr` class does not define the *logic* of Propositional Logic sentences; it just gives you a way to *represent* expressions. Think of an `Expr` as an [abstract syntax tree](https://en.wikipedia.org/wiki/Abstract_syntax_tree). Each of the `args` in an `Expr` can be either a symbol, a number, or a nested `Expr`. We can nest these trees to any depth. Here is a deeply nested `Expr`:
###Code
3 * f(x, y) + P(y) / 2 + 1
###Output
_____no_output_____
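###Markdown
Because an `Expr` is just a tree of `op` and `args`, we can process it with ordinary recursion. Here is a small sketch of a tree walk; it is my own helper, not part of the library, and it assumes the module's `is_symbol` predicate:
###Code
def walk_symbols(e):
    """Collect every symbol name occurring in an Expr tree.
    (Illustrative helper, not part of logic4e.)"""
    if not isinstance(e, Expr):
        return set()                              # numbers contribute no symbols
    found = {e.op} if is_symbol(e.op) else set()  # symbolic ops like 'P' or 'f' count
    for arg in e.args:
        found |= walk_symbols(arg)                # recurse into subtrees
    return found

walk_symbols(3 * f(x, y) + P(y) / 2 + 1)          # expected: {'f', 'x', 'y', 'P'}
###Output
_____no_output_____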
###Markdown
Operators for Constructing Logical Sentences Here is a table of the operators that can be used to form sentences. Note that we have a problem: we want to use Python operators to make sentences, so that our programs (and our interactive sessions like the one here) will show simple code. But Python does not allow implication arrows as operators, so for now we have to use a more verbose notation that Python does allow: `|'==>'|` instead of just `==>`. Alternately, you can always use the more verbose `Expr` constructor forms:

| Operation | Book | Python Infix Input | Python Output | Python `Expr` Input |
|--------------------------|----------------------|-------------------------|---|---|
| Negation | ¬ P | `~P` | `~P` | `Expr('~', P)` |
| And | P ∧ Q | `P & Q` | `P & Q` | `Expr('&', P, Q)` |
| Or | P ∨ Q | `P` &#124; `Q` | `P` &#124; `Q` | `Expr('`&#124;`', P, Q)` |
| Inequality (Xor) | P ≠ Q | `P ^ Q` | `P ^ Q` | `Expr('^', P, Q)` |
| Implication | P → Q | `P` &#124;`'==>'`&#124; `Q` | `P ==> Q` | `Expr('==>', P, Q)` |
| Reverse Implication | Q ← P | `Q` &#124;`'<=='`&#124; `P` | `Q <== P` | `Expr('<==', Q, P)` |
| Equivalence | P ↔ Q | `P` &#124;`'<=>'`&#124; `Q` | `P <=> Q` | `Expr('<=>', P, Q)` |

Here's an example of defining a sentence with an implication arrow:
###Code
~(P & Q) |'==>'| (~P | ~Q)
###Output
_____no_output_____
###Markdown
`expr`: a Shortcut for Constructing Sentences If the `|'==>'|` notation looks ugly to you, you can use the function `expr` instead:
###Code
expr('~(P & Q) ==> (~P | ~Q)')
###Output
_____no_output_____
###Markdown
`expr` takes a string as input and parses it into an `Expr`. The string can contain the arrow operators `==>`, `<==`, and `<=>`, which are handled as if they were regular Python infix operators. And `expr` automatically defines any symbols, so you don't need to pre-define them:
###Code
expr('sqrt(b ** 2 - 4 * a * c)')
###Output
_____no_output_____
###Markdown
For now that's all you need to know about `expr`. If you are interested, we explain the messy details of how `expr` is implemented and how `|'==>'|` is handled in the appendix. Propositional Knowledge Bases: `PropKB` The class `PropKB` can be used to represent a knowledge base of propositional logic sentences. We see that the class `KB` has four methods, apart from `__init__`. A point to note here: the `ask` method simply calls the `ask_generator` method. Thus, this one has already been implemented, and what you'll actually have to implement when you create your own knowledge base class (though you'll probably never need to, considering the ones we've created for you) will be the `ask_generator` function and not the `ask` function itself. Now for the class `PropKB` itself. * `__init__(self, sentence=None)` : The constructor `__init__` creates a single field `clauses` which will be a list of all the sentences of the knowledge base. Note that each one of these sentences will be a 'clause', i.e. a sentence which is made up of only literals and `or`s. * `tell(self, sentence)` : When you want to add a sentence to the KB, you use the `tell` method. This method takes a sentence, converts it to its CNF, extracts all the clauses, and adds all these clauses to the `clauses` field. So, you need not worry about `tell`ing only clauses to the knowledge base. You can `tell` the knowledge base a sentence in any form that you wish; converting it to CNF and adding the resulting clauses will be handled by the `tell` method. * `ask_generator(self, query)` : The `ask_generator` function is used by the `ask` function. It calls the `tt_entails` function, which in turn returns `True` if the knowledge base entails the query and `False` otherwise. The `ask_generator` itself returns an empty dict `{}` if the knowledge base entails the query and `None` otherwise. This might seem a little weird: after all, it makes more sense just to return a `True` or a `False` instead of the `{}` or `None`. But this is done to maintain consistency with the way things are in First-Order Logic, where an `ask_generator` function is supposed to return all the substitutions that make the query true; hence the dict, to return all these substitutions. I will mostly be using the `ask` function, which returns a `{}` or a `False`, but if you don't like this, you can always use the `ask_if_true` function, which returns a `True` or a `False`. * `retract(self, sentence)` : This function removes all the clauses of the given sentence from the knowledge base. Like the `tell` function, you don't have to pass clauses to remove them from the knowledge base; any sentence will do fine. The function will take care of converting that sentence to clauses and then remove those. Wumpus World KB Let us create a `PropKB` for the wumpus world with the sentences mentioned in `section 7.4.3`.
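Before building it, here is a quick round trip through that API on a throwaway knowledge base (a sketch; the expected results are noted as comments):
###Code
tiny_kb = PropKB()
tiny_kb.tell(expr('A ==> B'))     # stored in CNF as the clause (B | ~A)
tiny_kb.tell(expr('A'))
tiny_kb.ask_if_true(expr('B'))    # True: the KB now entails B
tiny_kb.retract(expr('A'))
tiny_kb.ask_if_true(expr('B'))    # False: without A, B is no longer entailed
###Output
_____no_output_____
###Markdown
With that in hand, we turn to the wumpus world.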
###Code
wumpus_kb = PropKB()
###Output
_____no_output_____
###Markdown
We define the symbols we use in our clauses.$P_{x, y}$ is true if there is a pit in `[x, y]`.$B_{x, y}$ is true if the agent senses breeze in `[x, y]`.
###Code
P11, P12, P21, P22, P31, B11, B21 = expr('P11, P12, P21, P22, P31, B11, B21')
###Output
_____no_output_____
###Markdown
Now we tell sentences based on `section 7.4.3`.There is no pit in `[1,1]`.
###Code
wumpus_kb.tell(~P11)
###Output
_____no_output_____
###Markdown
A square is breezy if and only if there is a pit in a neighboring square. This has to be stated for each square but for now, we include just the relevant squares.
###Code
wumpus_kb.tell(B11 | '<=>' | (P12 | P21))
wumpus_kb.tell(B21 | '<=>' | (P11 | P22 | P31))
###Output
_____no_output_____
###Markdown
Now we include the breeze percepts for the first two squares leading up to the situation in `Figure 7.3(b)`
###Code
wumpus_kb.tell(~B11)
wumpus_kb.tell(B21)
###Output
_____no_output_____
###Markdown
We can check the clauses stored in a `KB` by accessing its `clauses` variable
###Code
wumpus_kb.clauses
###Output
_____no_output_____
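###Markdown
Before unpacking why these clauses look the way they do, the conversion can be reproduced directly with `to_cnf` (a helper discussed in detail in the resolution section below; clause ordering may differ):
###Code
to_cnf(B11 |'<=>'| (P12 | P21))
###Output
_____no_output_____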
###Markdown
We see that the equivalence $B_{1, 1} \iff (P_{1, 2} \lor P_{2, 1})$ was automatically converted to two implications, which were in turn converted to CNF, which is what is stored in the `KB`. $B_{1, 1} \iff (P_{1, 2} \lor P_{2, 1})$ was split into $B_{1, 1} \implies (P_{1, 2} \lor P_{2, 1})$ and $B_{1, 1} \Longleftarrow (P_{1, 2} \lor P_{2, 1})$. $B_{1, 1} \implies (P_{1, 2} \lor P_{2, 1})$ was converted to $P_{1, 2} \lor P_{2, 1} \lor \neg B_{1, 1}$. $B_{1, 1} \Longleftarrow (P_{1, 2} \lor P_{2, 1})$ was converted to $\neg (P_{1, 2} \lor P_{2, 1}) \lor B_{1, 1}$, which becomes $(\neg P_{1, 2} \lor B_{1, 1}) \land (\neg P_{2, 1} \lor B_{1, 1})$ after applying De Morgan's laws and distributing the disjunction. $B_{2, 1} \iff (P_{1, 1} \lor P_{2, 2} \lor P_{3, 1})$ is converted in a similar manner. Knowledge based agents A knowledge-based agent is a simple generic agent that maintains and handles a knowledge base. The knowledge base may initially contain some background knowledge. The purpose of a KB agent is to provide a level of abstraction over knowledge-base manipulation, and it is to be used as a base class for agents that work on a knowledge base. Given a percept, the KB agent adds the percept to its knowledge base, asks the knowledge base for the best action, and tells the knowledge base that it has in fact taken that action. Our implementation of `KB-Agent` is encapsulated in a class `KB_AgentProgram` which inherits from the `KB` class. Let's have a look.
###Code
%psource KB_AgentProgram
###Output
[0;32mdef[0m [0mKB_AgentProgram[0m[0;34m([0m[0mKB[0m[0;34m)[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0;34m"""A generic logical knowledge-based agent program. [Figure 7.1]"""[0m[0;34m[0m
[0;34m[0m [0msteps[0m [0;34m=[0m [0mitertools[0m[0;34m.[0m[0mcount[0m[0;34m([0m[0;34m)[0m[0;34m[0m
[0;34m[0m[0;34m[0m
[0;34m[0m [0;32mdef[0m [0mprogram[0m[0;34m([0m[0mpercept[0m[0;34m)[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0mt[0m [0;34m=[0m [0mnext[0m[0;34m([0m[0msteps[0m[0;34m)[0m[0;34m[0m
[0;34m[0m [0mKB[0m[0;34m.[0m[0mtell[0m[0;34m([0m[0mmake_percept_sentence[0m[0;34m([0m[0mpercept[0m[0;34m,[0m [0mt[0m[0;34m)[0m[0;34m)[0m[0;34m[0m
[0;34m[0m [0maction[0m [0;34m=[0m [0mKB[0m[0;34m.[0m[0mask[0m[0;34m([0m[0mmake_action_query[0m[0;34m([0m[0mt[0m[0;34m)[0m[0;34m)[0m[0;34m[0m
[0;34m[0m [0mKB[0m[0;34m.[0m[0mtell[0m[0;34m([0m[0mmake_action_sentence[0m[0;34m([0m[0maction[0m[0;34m,[0m [0mt[0m[0;34m)[0m[0;34m)[0m[0;34m[0m
[0;34m[0m [0;32mreturn[0m [0maction[0m[0;34m[0m
[0;34m[0m[0;34m[0m
[0;34m[0m [0;32mdef[0m [0mmake_percept_sentence[0m[0;34m([0m[0mpercept[0m[0;34m,[0m [0mt[0m[0;34m)[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0;32mreturn[0m [0mExpr[0m[0;34m([0m[0;34m"Percept"[0m[0;34m)[0m[0;34m([0m[0mpercept[0m[0;34m,[0m [0mt[0m[0;34m)[0m[0;34m[0m
[0;34m[0m[0;34m[0m
[0;34m[0m [0;32mdef[0m [0mmake_action_query[0m[0;34m([0m[0mt[0m[0;34m)[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0;32mreturn[0m [0mexpr[0m[0;34m([0m[0;34m"ShouldDo(action, {})"[0m[0;34m.[0m[0mformat[0m[0;34m([0m[0mt[0m[0;34m)[0m[0;34m)[0m[0;34m[0m
[0;34m[0m[0;34m[0m
[0;34m[0m [0;32mdef[0m [0mmake_action_sentence[0m[0;34m([0m[0maction[0m[0;34m,[0m [0mt[0m[0;34m)[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0;32mreturn[0m [0mExpr[0m[0;34m([0m[0;34m"Did"[0m[0;34m)[0m[0;34m([0m[0maction[0m[0;34m[[0m[0mexpr[0m[0;34m([0m[0;34m'action'[0m[0;34m)[0m[0;34m][0m[0;34m,[0m [0mt[0m[0;34m)[0m[0;34m[0m
[0;34m[0m[0;34m[0m
[0;34m[0m [0;32mreturn[0m [0mprogram[0m[0;34m[0m[0;34m[0m[0m
###Markdown
The helper functions `make_percept_sentence`, `make_action_query` and `make_action_sentence` are all aptly named and, as expected, `make_percept_sentence` makes first-order logic sentences about percepts we want our agent to receive, `make_action_query` asks the underlying `KB` about the action that should be taken, and `make_action_sentence` tells the underlying `KB` about the action it has just taken. Inference in Propositional Knowledge Base In this section we will look at two algorithms to check if a sentence is entailed by the `KB`. Our goal is to decide whether $\text{KB} \vDash \alpha$ for some sentence $\alpha$. Truth Table Enumeration It is a model-checking approach which, as the name suggests, enumerates all possible models in which the `KB` is true and checks if $\alpha$ is also true in these models. We list the $n$ symbols in the `KB` and enumerate the $2^{n}$ models in a depth-first manner and check the truth of `KB` and $\alpha$.
###Code
%psource tt_check_all
###Output
[0;32mdef[0m [0mtt_check_all[0m[0;34m([0m[0mkb[0m[0;34m,[0m [0malpha[0m[0;34m,[0m [0msymbols[0m[0;34m,[0m [0mmodel[0m[0;34m)[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0;34m"""Auxiliary routine to implement tt_entails."""[0m[0;34m[0m
[0;34m[0m [0;32mif[0m [0;32mnot[0m [0msymbols[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0;32mif[0m [0mpl_true[0m[0;34m([0m[0mkb[0m[0;34m,[0m [0mmodel[0m[0;34m)[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0mresult[0m [0;34m=[0m [0mpl_true[0m[0;34m([0m[0malpha[0m[0;34m,[0m [0mmodel[0m[0;34m)[0m[0;34m[0m
[0;34m[0m [0;32massert[0m [0mresult[0m [0;32min[0m [0;34m([0m[0;32mTrue[0m[0;34m,[0m [0;32mFalse[0m[0;34m)[0m[0;34m[0m
[0;34m[0m [0;32mreturn[0m [0mresult[0m[0;34m[0m
[0;34m[0m [0;32melse[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0;32mreturn[0m [0;32mTrue[0m[0;34m[0m
[0;34m[0m [0;32melse[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0mP[0m[0;34m,[0m [0mrest[0m [0;34m=[0m [0msymbols[0m[0;34m[[0m[0;36m0[0m[0;34m][0m[0;34m,[0m [0msymbols[0m[0;34m[[0m[0;36m1[0m[0;34m:[0m[0;34m][0m[0;34m[0m
[0;34m[0m [0;32mreturn[0m [0;34m([0m[0mtt_check_all[0m[0;34m([0m[0mkb[0m[0;34m,[0m [0malpha[0m[0;34m,[0m [0mrest[0m[0;34m,[0m [0mextend[0m[0;34m([0m[0mmodel[0m[0;34m,[0m [0mP[0m[0;34m,[0m [0;32mTrue[0m[0;34m)[0m[0;34m)[0m [0;32mand[0m[0;34m[0m
[0;34m[0m [0mtt_check_all[0m[0;34m([0m[0mkb[0m[0;34m,[0m [0malpha[0m[0;34m,[0m [0mrest[0m[0;34m,[0m [0mextend[0m[0;34m([0m[0mmodel[0m[0;34m,[0m [0mP[0m[0;34m,[0m [0;32mFalse[0m[0;34m)[0m[0;34m)[0m[0;34m)[0m[0;34m[0m[0;34m[0m[0m
###Markdown
The algorithm computes every line of the truth table for $KB \implies \alpha$ and checks that it is true everywhere. If symbols remain, the routine recursively constructs every combination of truth values for them; once the model is complete, it checks whether `model` is consistent with `kb`. The complete models correspond to the lines of the truth table, and for the lines that have `true` in the KB column it checks whether the query also evaluates to true: `result = pl_true(alpha, model)`. In short, `tt_check_all` verifies that `pl_true(kb, model) => pl_true(alpha, model)` holds for every `model`; equivalently, that `pl_true(kb, model) & ~pl_true(alpha, model)` is false in every model, i.e. the knowledge base and the negation of the query are jointly unsatisfiable. `tt_entails()` just extracts the symbols from the query and calls `tt_check_all()` with the proper parameters.
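We can check that equivalence directly; the sketch below assumes the helper `tt_true` (validity checking via truth tables) from the same module:
###Code
kb, alpha = P & Q, Q
tt_entails(kb, alpha)    # True: every model of kb is a model of alpha
tt_true(~(kb & ~alpha))  # True: kb & ~alpha is unsatisfiable, the same condition
###Output
_____no_output_____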
###Code
%psource tt_entails
###Output
[0;32mdef[0m [0mtt_entails[0m[0;34m([0m[0mkb[0m[0;34m,[0m [0malpha[0m[0;34m)[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0;34m"""[0m
[0;34m Does kb entail the sentence alpha? Use truth tables. For propositional[0m
[0;34m kb's and sentences. [Figure 7.10]. Note that the 'kb' should be an[0m
[0;34m Expr which is a conjunction of clauses.[0m
[0;34m >>> tt_entails(expr('P & Q'), expr('Q'))[0m
[0;34m True[0m
[0;34m """[0m[0;34m[0m
[0;34m[0m [0;32massert[0m [0;32mnot[0m [0mvariables[0m[0;34m([0m[0malpha[0m[0;34m)[0m[0;34m[0m
[0;34m[0m [0msymbols[0m [0;34m=[0m [0mlist[0m[0;34m([0m[0mprop_symbols[0m[0;34m([0m[0mkb[0m [0;34m&[0m [0malpha[0m[0;34m)[0m[0;34m)[0m[0;34m[0m
[0;34m[0m [0;32mreturn[0m [0mtt_check_all[0m[0;34m([0m[0mkb[0m[0;34m,[0m [0malpha[0m[0;34m,[0m [0msymbols[0m[0;34m,[0m [0;34m{[0m[0;34m}[0m[0;34m)[0m[0;34m[0m[0;34m[0m[0m
###Markdown
Keep in mind that for two symbols P and Q, P => Q is false only when P is `True` and Q is `False`. Example usage of `tt_entails()`:
###Code
tt_entails(P & Q, Q)
###Output
_____no_output_____
###Markdown
P & Q is True only when both P and Q are True. Hence, (P & Q) => Q is True
###Code
tt_entails(P | Q, Q)
tt_entails(P | Q, P)
###Output
_____no_output_____
###Markdown
If we know that P | Q is true, we cannot infer the truth values of P and Q. Hence (P | Q) => Q is False and so is (P | Q) => P.
###Code
(A, B, C, D, E, F, G) = symbols('A, B, C, D, E, F, G')
tt_entails(A & (B | C) & D & E & ~(F | G), A & D & E & ~F & ~G)
###Output
_____no_output_____
###Markdown
We can see that for the KB to be true, A, D, E have to be True and F and G have to be False.Nothing can be said about B or C. Coming back to our problem, note that `tt_entails()` takes an `Expr` which is a conjunction of clauses as the input instead of the `KB` itself. You can use the `ask_if_true()` method of `PropKB` which does all the required conversions. Let's check what `wumpus_kb` tells us about $P_{1, 1}$.
###Code
wumpus_kb.ask_if_true(~P11), wumpus_kb.ask_if_true(P11)
###Output
_____no_output_____
###Markdown
Looking at Figure 7.9 we see that in all models in which the knowledge base is `True`, $P_{1, 1}$ is `False`. It makes sense that `ask_if_true()` returns `True` for $\alpha = \neg P_{1, 1}$ and `False` for $\alpha = P_{1, 1}$. This raises the question: what if $\alpha$ is `True` in only a portion of all models? Do we return `True` or `False`? This doesn't rule out the possibility of $\alpha$ being `True`, but $\alpha$ is not entailed by the `KB`, so we return `False` in such cases. We can see this is the case for $P_{2, 2}$ and $P_{3, 1}$.
###Code
wumpus_kb.ask_if_true(~P22), wumpus_kb.ask_if_true(P22)
###Output
_____no_output_____
###Markdown
Proof by Resolution Recall that our goal is to check whether $\text{KB} \vDash \alpha$, i.e. whether $\text{KB} \implies \alpha$ is true in every model. Suppose we wanted to check if $P \implies Q$ is valid. We check the satisfiability of $\neg (P \implies Q)$, which can be rewritten as $P \land \neg Q$. If $P \land \neg Q$ is unsatisfiable, then $P \implies Q$ must be true in all models. This gives us the result "$\text{KB} \vDash \alpha$ if and only if $\text{KB} \land \neg \alpha$ is unsatisfiable". This technique corresponds to proof by contradiction, a standard mathematical proof technique. We assume $\alpha$ to be false and show that this leads to a contradiction with known axioms in $\text{KB}$. We obtain a contradiction by making valid inferences using inference rules. In this proof we use a single inference rule, resolution, which states $(l_1 \lor \dots \lor l_k) \land (m_1 \lor \dots \lor m_n) \land (l_i \iff \neg m_j) \implies l_1 \lor \dots \lor l_{i - 1} \lor l_{i + 1} \lor \dots \lor l_k \lor m_1 \lor \dots \lor m_{j - 1} \lor m_{j + 1} \lor \dots \lor m_n$. Applying resolution yields a new clause, which we add to the KB. We keep doing this until:
* There are no new clauses that can be added, in which case $\text{KB} \nvDash \alpha$.
* Two clauses resolve to yield the empty clause, in which case $\text{KB} \vDash \alpha$.

The empty clause is equivalent to False because it arises only from resolving two complementary unit clauses such as $P$ and $\neg P$, which is a contradiction as $P$ and $\neg P$ can't both be True at the same time. There is one catch, however: the algorithm that implements proof by resolution cannot handle complex sentences. Implications and bi-implications have to be simplified into simpler clauses. We already know that *every sentence of propositional logic is logically equivalent to a conjunction of clauses*. We will use this fact to our advantage and simplify the input sentence into **conjunctive normal form** (CNF), a conjunction of disjunctions of literals. For example: $$(A\lor B)\land (\neg B\lor C\lor\neg D)\land (D\lor\neg E)$$ This is equivalent to the POS (product of sums) form in digital electronics. Here's an outline of how the conversion is done:
1. Convert bi-implications to implications: $\alpha\iff\beta$ can be written as $(\alpha\implies\beta)\land(\beta\implies\alpha)$. This also applies to compound sentences: $\alpha\iff(\beta\lor\gamma)$ can be written as $(\alpha\implies(\beta\lor\gamma))\land((\beta\lor\gamma)\implies\alpha)$.
2. Convert implications to their logical equivalents: $\alpha\implies\beta$ can be written as $\neg\alpha\lor\beta$.
3. Move negation inwards: CNF requires atomic literals, so negation cannot appear on a compound statement. De Morgan's laws are helpful here: $\neg(\alpha\land\beta)\equiv(\neg\alpha\lor\neg\beta)$ and $\neg(\alpha\lor\beta)\equiv(\neg\alpha\land\neg\beta)$.
4. Distribute disjunction over conjunction: disjunction and conjunction are distributive over each other. Now that we only have conjunctions, disjunctions and negations in our expression, we distribute disjunctions over conjunctions wherever possible; this gives a sentence which is a conjunction of simpler clauses, which is what we wanted in the first place. We need a term of the form $(\alpha_{1}\lor\alpha_{2}\lor\alpha_{3}...)\land(\beta_{1}\lor\beta_{2}\lor\beta_{3}...)\land(\gamma_{1}\lor\gamma_{2}\lor\gamma_{3}...)\land...$

The `to_cnf` function executes this conversion using helper subroutines.
###Code
%psource to_cnf
###Output
[0;32mdef[0m [0mto_cnf[0m[0;34m([0m[0ms[0m[0;34m)[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0;34m"""Convert a propositional logical sentence to conjunctive normal form.[0m
[0;34m That is, to the form ((A | ~B | ...) & (B | C | ...) & ...) [p. 253][0m
[0;34m >>> to_cnf('~(B | C)')[0m
[0;34m (~B & ~C)[0m
[0;34m """[0m[0;34m[0m
[0;34m[0m [0ms[0m [0;34m=[0m [0mexpr[0m[0;34m([0m[0ms[0m[0;34m)[0m[0;34m[0m
[0;34m[0m [0;32mif[0m [0misinstance[0m[0;34m([0m[0ms[0m[0;34m,[0m [0mstr[0m[0;34m)[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0ms[0m [0;34m=[0m [0mexpr[0m[0;34m([0m[0ms[0m[0;34m)[0m[0;34m[0m
[0;34m[0m [0ms[0m [0;34m=[0m [0meliminate_implications[0m[0;34m([0m[0ms[0m[0;34m)[0m [0;31m# Steps 1, 2 from p. 253[0m[0;34m[0m
[0;34m[0m [0ms[0m [0;34m=[0m [0mmove_not_inwards[0m[0;34m([0m[0ms[0m[0;34m)[0m [0;31m# Step 3[0m[0;34m[0m
[0;34m[0m [0;32mreturn[0m [0mdistribute_and_over_or[0m[0;34m([0m[0ms[0m[0;34m)[0m [0;31m# Step 4[0m[0;34m[0m[0;34m[0m[0m
###Markdown
`to_cnf` calls three subroutines. `eliminate_implications` converts bi-implications and implications to their logical equivalents. `move_not_inwards` removes negations from compound statements and moves them inwards using De Morgan's laws. `distribute_and_over_or` distributes disjunctions over conjunctions. Run the cell below for implementation details.
###Code
%psource eliminate_implications
%psource move_not_inwards
%psource distribute_and_over_or
###Output
[0;32mdef[0m [0mdistribute_and_over_or[0m[0;34m([0m[0ms[0m[0;34m)[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0;34m"""Given a sentence s consisting of conjunctions and disjunctions[0m
[0;34m of literals, return an equivalent sentence in CNF.[0m
[0;34m >>> distribute_and_over_or((A & B) | C)[0m
[0;34m ((A | C) & (B | C))[0m
[0;34m """[0m[0;34m[0m
[0;34m[0m [0ms[0m [0;34m=[0m [0mexpr[0m[0;34m([0m[0ms[0m[0;34m)[0m[0;34m[0m
[0;34m[0m [0;32mif[0m [0ms[0m[0;34m.[0m[0mop[0m [0;34m==[0m [0;34m'|'[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0ms[0m [0;34m=[0m [0massociate[0m[0;34m([0m[0;34m'|'[0m[0;34m,[0m [0ms[0m[0;34m.[0m[0margs[0m[0;34m)[0m[0;34m[0m
[0;34m[0m [0;32mif[0m [0ms[0m[0;34m.[0m[0mop[0m [0;34m!=[0m [0;34m'|'[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0;32mreturn[0m [0mdistribute_and_over_or[0m[0;34m([0m[0ms[0m[0;34m)[0m[0;34m[0m
[0;34m[0m [0;32mif[0m [0mlen[0m[0;34m([0m[0ms[0m[0;34m.[0m[0margs[0m[0;34m)[0m [0;34m==[0m [0;36m0[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0;32mreturn[0m [0;32mFalse[0m[0;34m[0m
[0;34m[0m [0;32mif[0m [0mlen[0m[0;34m([0m[0ms[0m[0;34m.[0m[0margs[0m[0;34m)[0m [0;34m==[0m [0;36m1[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0;32mreturn[0m [0mdistribute_and_over_or[0m[0;34m([0m[0ms[0m[0;34m.[0m[0margs[0m[0;34m[[0m[0;36m0[0m[0;34m][0m[0;34m)[0m[0;34m[0m
[0;34m[0m [0mconj[0m [0;34m=[0m [0mfirst[0m[0;34m([0m[0marg[0m [0;32mfor[0m [0marg[0m [0;32min[0m [0ms[0m[0;34m.[0m[0margs[0m [0;32mif[0m [0marg[0m[0;34m.[0m[0mop[0m [0;34m==[0m [0;34m'&'[0m[0;34m)[0m[0;34m[0m
[0;34m[0m [0;32mif[0m [0;32mnot[0m [0mconj[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0;32mreturn[0m [0ms[0m[0;34m[0m
[0;34m[0m [0mothers[0m [0;34m=[0m [0;34m[[0m[0ma[0m [0;32mfor[0m [0ma[0m [0;32min[0m [0ms[0m[0;34m.[0m[0margs[0m [0;32mif[0m [0ma[0m [0;32mis[0m [0;32mnot[0m [0mconj[0m[0;34m][0m[0;34m[0m
[0;34m[0m [0mrest[0m [0;34m=[0m [0massociate[0m[0;34m([0m[0;34m'|'[0m[0;34m,[0m [0mothers[0m[0;34m)[0m[0;34m[0m
[0;34m[0m [0;32mreturn[0m [0massociate[0m[0;34m([0m[0;34m'&'[0m[0;34m,[0m [0;34m[[0m[0mdistribute_and_over_or[0m[0;34m([0m[0mc[0m [0;34m|[0m [0mrest[0m[0;34m)[0m[0;34m[0m
[0;34m[0m [0;32mfor[0m [0mc[0m [0;32min[0m [0mconj[0m[0;34m.[0m[0margs[0m[0;34m][0m[0;34m)[0m[0;34m[0m
[0;34m[0m [0;32melif[0m [0ms[0m[0;34m.[0m[0mop[0m [0;34m==[0m [0;34m'&'[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0;32mreturn[0m [0massociate[0m[0;34m([0m[0;34m'&'[0m[0;34m,[0m [0mlist[0m[0;34m([0m[0mmap[0m[0;34m([0m[0mdistribute_and_over_or[0m[0;34m,[0m [0ms[0m[0;34m.[0m[0margs[0m[0;34m)[0m[0;34m)[0m[0;34m)[0m[0;34m[0m
[0;34m[0m [0;32melse[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0;32mreturn[0m [0ms[0m[0;34m[0m[0;34m[0m[0m
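###Markdown
The conversion can also be watched one step at a time, since the three subroutines are exposed individually. A sketch (the comments show the expected shape of each intermediate result, up to argument ordering):
###Code
s = A |'<=>'| (B | C)
s1 = eliminate_implications(s)   # ((B | C) | ~A) & (A | ~(B | C))  -- steps 1 and 2
s2 = move_not_inwards(s1)        # the inner ~(B | C) becomes (~B & ~C)  -- step 3
distribute_and_over_or(s2)       # the final CNF, same as to_cnf(s)  -- step 4
###Output
_____no_output_____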
###Markdown
Let's convert some sentences to see how it works
###Code
A, B, C, D = expr('A, B, C, D')
to_cnf(A |'<=>'| B)
to_cnf(A |'<=>'| (B & C))
to_cnf(A & (B | (C & D)))
to_cnf((A |'<=>'| ~B) |'==>'| (C | ~D))
###Output
_____no_output_____
###Markdown
Coming back to our resolution problem, we can see how the `to_cnf` function is utilized here
###Code
%psource pl_resolution
pl_resolution(wumpus_kb, ~P11), pl_resolution(wumpus_kb, P11)
pl_resolution(wumpus_kb, ~P22), pl_resolution(wumpus_kb, P22)
###Output
_____no_output_____
###Markdown
Forward and backward chaining Previously, we said we would look at two algorithms to check if a sentence is entailed by the `KB`. Here's a third one. The difference here is that our goal now is to determine whether a knowledge base of definite clauses entails a single proposition symbol *q*, the query. There is a catch, however: the knowledge base can only contain **Horn clauses**. Horn Clauses Horn clauses can be defined as a *disjunction* of *literals* with **at most** one positive literal. A Horn clause with exactly one positive literal is called a *definite clause*. A Horn clause might look like $\neg a\lor\neg b\lor\neg c\lor\neg d... \lor z$. This, coincidentally, is also a definite clause. Using the equivalence $\neg p \lor q \equiv (p \implies q)$ together with De Morgan's laws, the example above can be rewritten as $a\land b\land c\land d ... \implies z$. This seems like a logical representation of how humans process known data and facts. Assuming percepts `a`, `b`, `c`, `d` ... to be true simultaneously, we can infer `z` to also be true at that point in time. There are some interesting aspects of Horn clauses that make algorithmic inference or *resolution* easier.
- Definite clauses can be written as implications: the most important simplification a definite clause provides is that it can be written as an implication. The premise (the knowledge that leads to the implication) is a conjunction of positive literals, and the conclusion (the implied statement) is a single positive literal. The sentence thus becomes easier to understand. The premise and the conclusion are conventionally called the *body* and the *head* respectively. A single positive literal on its own is called a *fact*.
- Forward chaining and backward chaining can be used for inference from Horn clauses: forward chaining is semantically identical to `AND-OR-Graph-Search` from the chapter on search algorithms. Implementation details will be explained shortly.
- Deciding entailment with Horn clauses is linear in the size of the knowledge base: surprisingly, the forward and backward chaining algorithms traverse each element of the knowledge base at most once, greatly simplifying the problem.

The function `pl_fc_entails` implements forward chaining to see if a knowledge base `KB` entails a symbol `q`. Before we proceed further, note that `pl_fc_entails` doesn't use an ordinary `KB` instance. The knowledge base here is an instance of the `PropDefiniteKB` class, derived from the `PropKB` class but modified to store definite clauses. The main point of difference is a helper method on `PropDefiniteKB` that returns a list of the clauses in the KB that have a given symbol `p` in their premise.
###Code
%psource PropDefiniteKB.clauses_with_premise
###Output
[0;32mdef[0m [0mclauses_with_premise[0m[0;34m([0m[0mself[0m[0;34m,[0m [0mp[0m[0;34m)[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0;34m"""Return a list of the clauses in KB that have p in their premise.[0m
[0;34m This could be cached away for O(1) speed, but we'll recompute it."""[0m[0;34m[0m
[0;34m[0m [0;32mreturn[0m [0;34m[[0m[0mc[0m [0;32mfor[0m [0mc[0m [0;32min[0m [0mself[0m[0;34m.[0m[0mclauses[0m[0;34m[0m
[0;34m[0m [0;32mif[0m [0mc[0m[0;34m.[0m[0mop[0m [0;34m==[0m [0;34m'==>'[0m [0;32mand[0m [0mp[0m [0;32min[0m [0mconjuncts[0m[0;34m([0m[0mc[0m[0;34m.[0m[0margs[0m[0;34m[[0m[0;36m0[0m[0;34m][0m[0;34m)[0m[0;34m][0m[0;34m[0m[0;34m[0m[0m
###Markdown
Let's now have a look at the `pl_fc_entails` algorithm.
###Code
%psource pl_fc_entails
###Output
[0;32mdef[0m [0mpl_fc_entails[0m[0;34m([0m[0mKB[0m[0;34m,[0m [0mq[0m[0;34m)[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0;34m"""Use forward chaining to see if a PropDefiniteKB entails symbol q.[0m
[0;34m [Figure 7.15][0m
[0;34m >>> pl_fc_entails(horn_clauses_KB, expr('Q'))[0m
[0;34m True[0m
[0;34m """[0m[0;34m[0m
[0;34m[0m [0mcount[0m [0;34m=[0m [0;34m{[0m[0mc[0m[0;34m:[0m [0mlen[0m[0;34m([0m[0mconjuncts[0m[0;34m([0m[0mc[0m[0;34m.[0m[0margs[0m[0;34m[[0m[0;36m0[0m[0;34m][0m[0;34m)[0m[0;34m)[0m[0;34m[0m
[0;34m[0m [0;32mfor[0m [0mc[0m [0;32min[0m [0mKB[0m[0;34m.[0m[0mclauses[0m[0;34m[0m
[0;34m[0m [0;32mif[0m [0mc[0m[0;34m.[0m[0mop[0m [0;34m==[0m [0;34m'==>'[0m[0;34m}[0m[0;34m[0m
[0;34m[0m [0minferred[0m [0;34m=[0m [0mdefaultdict[0m[0;34m([0m[0mbool[0m[0;34m)[0m[0;34m[0m
[0;34m[0m [0magenda[0m [0;34m=[0m [0;34m[[0m[0ms[0m [0;32mfor[0m [0ms[0m [0;32min[0m [0mKB[0m[0;34m.[0m[0mclauses[0m [0;32mif[0m [0mis_prop_symbol[0m[0;34m([0m[0ms[0m[0;34m.[0m[0mop[0m[0;34m)[0m[0;34m][0m[0;34m[0m
[0;34m[0m [0;32mwhile[0m [0magenda[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0mp[0m [0;34m=[0m [0magenda[0m[0;34m.[0m[0mpop[0m[0;34m([0m[0;34m)[0m[0;34m[0m
[0;34m[0m [0;32mif[0m [0mp[0m [0;34m==[0m [0mq[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0;32mreturn[0m [0;32mTrue[0m[0;34m[0m
[0;34m[0m [0;32mif[0m [0;32mnot[0m [0minferred[0m[0;34m[[0m[0mp[0m[0;34m][0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0minferred[0m[0;34m[[0m[0mp[0m[0;34m][0m [0;34m=[0m [0;32mTrue[0m[0;34m[0m
[0;34m[0m [0;32mfor[0m [0mc[0m [0;32min[0m [0mKB[0m[0;34m.[0m[0mclauses_with_premise[0m[0;34m([0m[0mp[0m[0;34m)[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0mcount[0m[0;34m[[0m[0mc[0m[0;34m][0m [0;34m-=[0m [0;36m1[0m[0;34m[0m
[0;34m[0m [0;32mif[0m [0mcount[0m[0;34m[[0m[0mc[0m[0;34m][0m [0;34m==[0m [0;36m0[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0magenda[0m[0;34m.[0m[0mappend[0m[0;34m([0m[0mc[0m[0;34m.[0m[0margs[0m[0;34m[[0m[0;36m1[0m[0;34m][0m[0;34m)[0m[0;34m[0m
[0;34m[0m [0;32mreturn[0m [0;32mFalse[0m[0;34m[0m[0;34m[0m[0m
###Markdown
The function accepts a knowledge base `KB` (an instance of `PropDefiniteKB`) and a query `q` as inputs. `count` initially stores the number of symbols in the premise of each sentence in the knowledge base; the `conjuncts` helper function separates a given sentence at conjunctions. `inferred` is initialized as a *boolean* defaultdict, used later to check whether we have already inferred a given symbol. `agenda` initially stores the list of symbols that the knowledge base knows to be true, and the `is_prop_symbol` helper function checks if a given symbol is a valid propositional logic symbol. We now iterate through `agenda`, popping a symbol `p` on each iteration. If the query `q` is the same as `p`, we know that entailment holds. Otherwise `p` is processed, reducing `count` by one for each implication with `p` in its premise. A conclusion is added to the agenda when its `count` reaches zero, which means we know all the premises of that particular implication to be true. `clauses_with_premise` is a helpful method of the `PropDefiniteKB` class: it returns a list of the clauses in the knowledge base that have `p` in their premise. Now that we have an idea of how this function works, let's see a few examples of its usage, but first we need to define our knowledge base. We assume we know the following clauses to be true.
###Code
clauses = ['(B & F)==>E',
'(A & E & F)==>G',
'(B & C)==>F',
'(A & B)==>D',
'(E & F)==>H',
'(H & I)==>J',
'A',
'B',
'C']
###Output
_____no_output_____
###Markdown
We will now `tell` this information to our knowledge base.
###Code
definite_clauses_KB = PropDefiniteKB()
for clause in clauses:
definite_clauses_KB.tell(expr(clause))
###Output
_____no_output_____
###Markdown
We can now check if our knowledge base entails the following queries.
###Code
pl_fc_entails(definite_clauses_KB, expr('G'))
pl_fc_entails(definite_clauses_KB, expr('H'))
pl_fc_entails(definite_clauses_KB, expr('I'))
pl_fc_entails(definite_clauses_KB, expr('J'))
###Output
_____no_output_____
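###Markdown
Tracing the chain explains these results: from the facts `A`, `B`, `C` we get `F` (via `(B & C)==>F`), then `E` (via `(B & F)==>E`), and from those `G` and `H`; but no clause concludes `I`, so neither `I` nor `J` (which needs `H & I`) is entailed.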
###Markdown
Effective Propositional Model Checking The previous segments elucidate the algorithmic procedure for model checking. In this segment, we look at ways of making it computationally efficient. The problem we are trying to solve is conventionally called the _propositional satisfiability problem_, abbreviated as the _SAT_ problem. In layman's terms, if there exists a model that satisfies a given Boolean formula, the formula is called satisfiable. The SAT problem was the first problem to be proven _NP-complete_. The main characteristics of an NP-complete problem are:
- Given a solution to such a problem, it is easy to verify whether the solution solves the problem.
- The time required to actually solve the problem using any known algorithm increases exponentially with respect to the size of the problem.

Due to these properties, heuristic and approximation methods are often applied to find solutions to these problems. It is extremely important to be able to solve large-scale SAT problems efficiently because many combinatorial problems in computer science can be conveniently reduced to checking the satisfiability of a propositional sentence under some constraints. We will introduce two new algorithms that perform propositional model checking in a computationally effective way. 1. DPLL (Davis-Putnam-Logemann-Loveland) algorithm This algorithm is very similar to Backtracking-Search. It recursively enumerates possible models in a depth-first fashion, with the following improvements over algorithms like `tt_entails`:
1. Early termination: in certain cases, the algorithm can detect the truth value of a statement using just a partially completed model. For example, $(P\lor Q)\land(P\lor R)$ is true if P is true, regardless of the other variables. This reduces the search space significantly.
2. Pure symbol heuristic: a symbol that has the same sign (positive or negative) in all clauses is called a _pure symbol_. It isn't difficult to see that any satisfiable model will have the pure symbols assigned such that their parent clauses become _true_. For example, $(P\lor\neg Q)\land(\neg Q\lor\neg R)\land(R\lor P)$ has P and Q as pure symbols, and for the sentence to be true, P _has_ to be true and Q _has_ to be false. The pure symbol heuristic thus simplifies the problem a bit.
3. Unit clause heuristic: in the context of DPLL, clauses with just one literal, and clauses in which all but one literal is _false_, are called unit clauses. If a clause is a unit clause, it can only be satisfied by assigning the necessary value to make the last literal true; we have no other choice. Assigning one unit clause can create another unit clause. For example, when P is false, $(P\lor Q)$ becomes a unit clause, causing _true_ to be assigned to Q. A series of forced assignments derived from previous unit clauses is called _unit propagation_. In this way, this heuristic simplifies the problem further.

The algorithm often employs other tricks to scale up to large problems, but those are out of the scope of this notebook; refer to section 7.6 of the book for more details. Let's have a look at the algorithm.
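First, though, a quick probe of the two heuristics in isolation. The sketch below assumes the helpers `find_pure_symbol(symbols, clauses)` and `find_unit_clause(clauses, model)` from the same module, each returning a `(symbol, value)` pair:
###Code
A, B, C = symbols('A, B, C')
clauses = conjuncts(to_cnf((A | ~B) & (~B | ~C) & (C | A)))
find_pure_symbol([A, B, C], clauses)   # (A, True): A only ever appears positively
find_unit_clause(clauses, {A: False})  # (B, False): with A False, (A | ~B) forces B False
###Output
_____no_output_____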
###Code
%psource dpll
###Output
[0;32mdef[0m [0mdpll[0m[0;34m([0m[0mclauses[0m[0;34m,[0m [0msymbols[0m[0;34m,[0m [0mmodel[0m[0;34m)[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0;34m"""See if the clauses are true in a partial model."""[0m[0;34m[0m
[0;34m[0m [0munknown_clauses[0m [0;34m=[0m [0;34m[[0m[0;34m][0m [0;31m# clauses with an unknown truth value[0m[0;34m[0m
[0;34m[0m [0;32mfor[0m [0mc[0m [0;32min[0m [0mclauses[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0mval[0m [0;34m=[0m [0mpl_true[0m[0;34m([0m[0mc[0m[0;34m,[0m [0mmodel[0m[0;34m)[0m[0;34m[0m
[0;34m[0m [0;32mif[0m [0mval[0m [0;32mis[0m [0;32mFalse[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0;32mreturn[0m [0;32mFalse[0m[0;34m[0m
[0;34m[0m [0;32mif[0m [0mval[0m [0;32mis[0m [0;32mnot[0m [0;32mTrue[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0munknown_clauses[0m[0;34m.[0m[0mappend[0m[0;34m([0m[0mc[0m[0;34m)[0m[0;34m[0m
[0;34m[0m [0;32mif[0m [0;32mnot[0m [0munknown_clauses[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0;32mreturn[0m [0mmodel[0m[0;34m[0m
[0;34m[0m [0mP[0m[0;34m,[0m [0mvalue[0m [0;34m=[0m [0mfind_pure_symbol[0m[0;34m([0m[0msymbols[0m[0;34m,[0m [0munknown_clauses[0m[0;34m)[0m[0;34m[0m
[0;34m[0m [0;32mif[0m [0mP[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0;32mreturn[0m [0mdpll[0m[0;34m([0m[0mclauses[0m[0;34m,[0m [0mremove_all[0m[0;34m([0m[0mP[0m[0;34m,[0m [0msymbols[0m[0;34m)[0m[0;34m,[0m [0mextend[0m[0;34m([0m[0mmodel[0m[0;34m,[0m [0mP[0m[0;34m,[0m [0mvalue[0m[0;34m)[0m[0;34m)[0m[0;34m[0m
[0;34m[0m [0mP[0m[0;34m,[0m [0mvalue[0m [0;34m=[0m [0mfind_unit_clause[0m[0;34m([0m[0mclauses[0m[0;34m,[0m [0mmodel[0m[0;34m)[0m[0;34m[0m
[0;34m[0m [0;32mif[0m [0mP[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0;32mreturn[0m [0mdpll[0m[0;34m([0m[0mclauses[0m[0;34m,[0m [0mremove_all[0m[0;34m([0m[0mP[0m[0;34m,[0m [0msymbols[0m[0;34m)[0m[0;34m,[0m [0mextend[0m[0;34m([0m[0mmodel[0m[0;34m,[0m [0mP[0m[0;34m,[0m [0mvalue[0m[0;34m)[0m[0;34m)[0m[0;34m[0m
[0;34m[0m [0;32mif[0m [0;32mnot[0m [0msymbols[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0;32mraise[0m [0mTypeError[0m[0;34m([0m[0;34m"Argument should be of the type Expr."[0m[0;34m)[0m[0;34m[0m
[0;34m[0m [0mP[0m[0;34m,[0m [0msymbols[0m [0;34m=[0m [0msymbols[0m[0;34m[[0m[0;36m0[0m[0;34m][0m[0;34m,[0m [0msymbols[0m[0;34m[[0m[0;36m1[0m[0;34m:[0m[0;34m][0m[0;34m[0m
[0;34m[0m [0;32mreturn[0m [0;34m([0m[0mdpll[0m[0;34m([0m[0mclauses[0m[0;34m,[0m [0msymbols[0m[0;34m,[0m [0mextend[0m[0;34m([0m[0mmodel[0m[0;34m,[0m [0mP[0m[0;34m,[0m [0;32mTrue[0m[0;34m)[0m[0;34m)[0m [0;32mor[0m[0;34m[0m
[0;34m[0m [0mdpll[0m[0;34m([0m[0mclauses[0m[0;34m,[0m [0msymbols[0m[0;34m,[0m [0mextend[0m[0;34m([0m[0mmodel[0m[0;34m,[0m [0mP[0m[0;34m,[0m [0;32mFalse[0m[0;34m)[0m[0;34m)[0m[0;34m)[0m[0;34m[0m[0;34m[0m[0m
###Markdown
The algorithm uses the ideas described above to check satisfiability of a sentence in propositional logic.It recursively calls itself, simplifying the problem at each step. It also uses helper functions `find_pure_symbol` and `find_unit_clause` to carry out steps 2 and 3 above.The `dpll_satisfiable` helper function converts the input clauses to _conjunctive normal form_ and calls the `dpll` function with the correct parameters.
###Code
%psource dpll_satisfiable
###Output
[0;32mdef[0m [0mdpll_satisfiable[0m[0;34m([0m[0ms[0m[0;34m)[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0;34m"""Check satisfiability of a propositional sentence.[0m
[0;34m This differs from the book code in two ways: (1) it returns a model[0m
[0;34m rather than True when it succeeds; this is more useful. (2) The[0m
[0;34m function find_pure_symbol is passed a list of unknown clauses, rather[0m
[0;34m than a list of all clauses and the model; this is more efficient.[0m
[0;34m >>> dpll_satisfiable(A |'<=>'| B) == {A: True, B: True}[0m
[0;34m True[0m
[0;34m """[0m[0;34m[0m
[0;34m[0m [0mclauses[0m [0;34m=[0m [0mconjuncts[0m[0;34m([0m[0mto_cnf[0m[0;34m([0m[0ms[0m[0;34m)[0m[0;34m)[0m[0;34m[0m
[0;34m[0m [0msymbols[0m [0;34m=[0m [0mlist[0m[0;34m([0m[0mprop_symbols[0m[0;34m([0m[0ms[0m[0;34m)[0m[0;34m)[0m[0;34m[0m
[0;34m[0m [0;32mreturn[0m [0mdpll[0m[0;34m([0m[0mclauses[0m[0;34m,[0m [0msymbols[0m[0;34m,[0m [0;34m{[0m[0;34m}[0m[0;34m)[0m[0;34m[0m[0;34m[0m[0m
###Markdown
Let's see a few examples of usage.
###Code
A, B, C, D = expr('A, B, C, D')
dpll_satisfiable(A & B & ~C & D)
###Output
_____no_output_____
###Markdown
This is a simple case to highlight that the algorithm actually works.
###Code
dpll_satisfiable((A & B) | (C & ~A) | (B & ~D))
###Output
_____no_output_____
###Markdown
If a particular symbol isn't present in the solution, it means that the solution is independent of the value of that symbol.In this case, the solution is independent of A.
###Code
dpll_satisfiable(A |'<=>'| B)
dpll_satisfiable((A |'<=>'| B) |'==>'| (C & ~A))
dpll_satisfiable((A | (B & C)) |'<=>'| ((A | B) & (A | C)))
###Output
_____no_output_____
###Markdown
2. WalkSAT algorithmThis algorithm is very similar to Hill climbing.On every iteration, the algorithm picks an unsatisfied clause and flips a symbol in the clause.This is similar to finding a neighboring state in the `hill_climbing` algorithm.The symbol to be flipped is decided by an evaluation function that counts the number of unsatisfied clauses.Sometimes, symbols are also flipped randomly to avoid local optima. A subtle balance between greediness and randomness is required. Alternatively, some versions of the algorithm restart with a completely new random assignment if no solution has been found for too long as a way of getting out of local minima of numbers of unsatisfied clauses.Let's have a look at the algorithm.
###Code
%psource WalkSAT
###Output
[0;32mdef[0m [0mWalkSAT[0m[0;34m([0m[0mclauses[0m[0;34m,[0m [0mp[0m[0;34m=[0m[0;36m0.5[0m[0;34m,[0m [0mmax_flips[0m[0;34m=[0m[0;36m10000[0m[0;34m)[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0;34m"""[0m
[0;34m Checks for satisfiability of all clauses by randomly flipping values of variables[0m
[0;34m >>> WalkSAT([A & ~A], 0.5, 100) is None[0m
[0;34m True[0m
[0;34m """[0m[0;34m[0m
[0;34m[0m [0;31m# Set of all symbols in all clauses[0m[0;34m[0m
[0;34m[0m [0msymbols[0m [0;34m=[0m [0;34m{[0m[0msym[0m [0;32mfor[0m [0mclause[0m [0;32min[0m [0mclauses[0m [0;32mfor[0m [0msym[0m [0;32min[0m [0mprop_symbols[0m[0;34m([0m[0mclause[0m[0;34m)[0m[0;34m}[0m[0;34m[0m
[0;34m[0m [0;31m# model is a random assignment of true/false to the symbols in clauses[0m[0;34m[0m
[0;34m[0m [0mmodel[0m [0;34m=[0m [0;34m{[0m[0ms[0m[0;34m:[0m [0mrandom[0m[0;34m.[0m[0mchoice[0m[0;34m([0m[0;34m[[0m[0;32mTrue[0m[0;34m,[0m [0;32mFalse[0m[0;34m][0m[0;34m)[0m [0;32mfor[0m [0ms[0m [0;32min[0m [0msymbols[0m[0;34m}[0m[0;34m[0m
[0;34m[0m [0;32mfor[0m [0mi[0m [0;32min[0m [0mrange[0m[0;34m([0m[0mmax_flips[0m[0;34m)[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0msatisfied[0m[0;34m,[0m [0munsatisfied[0m [0;34m=[0m [0;34m[[0m[0;34m][0m[0;34m,[0m [0;34m[[0m[0;34m][0m[0;34m[0m
[0;34m[0m [0;32mfor[0m [0mclause[0m [0;32min[0m [0mclauses[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0;34m([0m[0msatisfied[0m [0;32mif[0m [0mpl_true[0m[0;34m([0m[0mclause[0m[0;34m,[0m [0mmodel[0m[0;34m)[0m [0;32melse[0m [0munsatisfied[0m[0;34m)[0m[0;34m.[0m[0mappend[0m[0;34m([0m[0mclause[0m[0;34m)[0m[0;34m[0m
[0;34m[0m [0;32mif[0m [0;32mnot[0m [0munsatisfied[0m[0;34m:[0m [0;31m# if model satisfies all the clauses[0m[0;34m[0m
[0;34m[0m [0;32mreturn[0m [0mmodel[0m[0;34m[0m
[0;34m[0m [0mclause[0m [0;34m=[0m [0mrandom[0m[0;34m.[0m[0mchoice[0m[0;34m([0m[0munsatisfied[0m[0;34m)[0m[0;34m[0m
[0;34m[0m [0;32mif[0m [0mprobability[0m[0;34m([0m[0mp[0m[0;34m)[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0msym[0m [0;34m=[0m [0mrandom[0m[0;34m.[0m[0mchoice[0m[0;34m([0m[0mlist[0m[0;34m([0m[0mprop_symbols[0m[0;34m([0m[0mclause[0m[0;34m)[0m[0;34m)[0m[0;34m)[0m[0;34m[0m
[0;34m[0m [0;32melse[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0;31m# Flip the symbol in clause that maximizes number of sat. clauses[0m[0;34m[0m
[0;34m[0m [0;32mdef[0m [0msat_count[0m[0;34m([0m[0msym[0m[0;34m)[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0;31m# Return the the number of clauses satisfied after flipping the symbol.[0m[0;34m[0m
[0;34m[0m [0mmodel[0m[0;34m[[0m[0msym[0m[0;34m][0m [0;34m=[0m [0;32mnot[0m [0mmodel[0m[0;34m[[0m[0msym[0m[0;34m][0m[0;34m[0m
[0;34m[0m [0mcount[0m [0;34m=[0m [0mlen[0m[0;34m([0m[0;34m[[0m[0mclause[0m [0;32mfor[0m [0mclause[0m [0;32min[0m [0mclauses[0m [0;32mif[0m [0mpl_true[0m[0;34m([0m[0mclause[0m[0;34m,[0m [0mmodel[0m[0;34m)[0m[0;34m][0m[0;34m)[0m[0;34m[0m
[0;34m[0m [0mmodel[0m[0;34m[[0m[0msym[0m[0;34m][0m [0;34m=[0m [0;32mnot[0m [0mmodel[0m[0;34m[[0m[0msym[0m[0;34m][0m[0;34m[0m
[0;34m[0m [0;32mreturn[0m [0mcount[0m[0;34m[0m
[0;34m[0m[0;34m[0m
[0;34m[0m [0msym[0m [0;34m=[0m [0mmax[0m[0;34m([0m[0mprop_symbols[0m[0;34m([0m[0mclause[0m[0;34m)[0m[0;34m,[0m [0mkey[0m[0;34m=[0m[0msat_count[0m[0;34m)[0m[0;34m[0m
[0;34m[0m [0mmodel[0m[0;34m[[0m[0msym[0m[0;34m][0m [0;34m=[0m [0;32mnot[0m [0mmodel[0m[0;34m[[0m[0msym[0m[0;34m][0m[0;34m[0m
[0;34m[0m [0;31m# If no solution is found within the flip limit, we return failure[0m[0;34m[0m
[0;34m[0m [0;32mreturn[0m [0;32mNone[0m[0;34m[0m[0;34m[0m[0m
###Markdown
The function takes three arguments: 1. The `clauses` we want to satisfy. 2. The probability `p` of randomly changing a symbol. 3. The maximum number of flips (`max_flips`) the algorithm will run for. If the clauses are still unsatisfied after that many flips, the algorithm returns `None` to denote failure. The algorithm is identical in concept to Hill climbing and the code isn't difficult to understand. Let's see a few examples of usage.
###Code
A, B, C, D = expr('A, B, C, D')
WalkSAT([A, B, ~C, D], 0.5, 100)
###Output
_____no_output_____
###Markdown
This is a simple case to show that the algorithm converges.
###Code
WalkSAT([A & B, A & C], 0.5, 100)
WalkSAT([A & B, C & D, C & B], 0.5, 100)
WalkSAT([A & B, C | D, ~(D | B)], 0.5, 1000)
###Output
_____no_output_____
###Markdown
This one doesn't give any output because WalkSAT did not find any model in which these clauses hold. We can work through these clauses to see that together they form a contradiction, so they are not supposed to have a solution. One point of difference between this algorithm and `dpll_satisfiable` is that they take their inputs differently. For WalkSAT to take complete sentences as input, we can write a helper function that converts the input sentence into conjunctive normal form and then calls WalkSAT with the list of conjuncts of the CNF form of the sentence.
###Code
def WalkSAT_CNF(sentence, p=0.5, max_flips=10000):
    # convert the sentence to CNF and hand its list of conjuncts to WalkSAT
    return WalkSAT(conjuncts(to_cnf(sentence)), p, max_flips)
###Output
_____no_output_____
###Markdown
Now we can call `WalkSAT_CNF` and `DPLL_Satisfiable` with the same arguments.
###Code
WalkSAT_CNF((A & B) | (C & ~A) | (B & ~D), 0.5, 1000)
###Output
_____no_output_____
###Markdown
It works! Notice that the solution generated by WalkSAT doesn't omit variables that the sentence doesn't depend upon. If the sentence is independent of a particular variable, the solution contains a random value for that variable because of the stochastic nature of the algorithm. Let's compare the runtime of WalkSAT and DPLL for a few cases. We will use the `%%timeit` magic to do this.
###Code
sentence_1 = A |'<=>'| B
sentence_2 = (A & B) | (C & ~A) | (B & ~D)
sentence_3 = (A | (B & C)) |'<=>'| ((A | B) & (A | C))
%%timeit
dpll_satisfiable(sentence_1)
dpll_satisfiable(sentence_2)
dpll_satisfiable(sentence_3)
%%timeit
WalkSAT_CNF(sentence_1)
WalkSAT_CNF(sentence_2)
WalkSAT_CNF(sentence_3)
###Output
509 µs ± 7.92 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
###Markdown
On average, for solvable cases, `WalkSAT` is quite a bit faster than `dpll` because, for a small number of variables, `WalkSAT` can reduce the search space significantly. Results can differ for sentences with more symbols, though. Feel free to play around with this to understand the trade-offs of these algorithms better. SATPlan In this section we show how to make plans by logical inference. The basic idea is very simple. It includes the following three steps:
1. Construct a sentence that includes:
    1. A collection of assertions about the initial state.
    2. The successor-state axioms for all the possible actions at each time up to some maximum time t.
    3. The assertion that the goal is achieved at time t.
2. Present the whole sentence to a SAT solver.
3. Assuming a model is found, extract from the model those variables that represent actions and are assigned true. Together they represent a plan to achieve the goals.

Let's have a look at the algorithm.
###Code
%psource SAT_plan
###Output
[0;32mdef[0m [0mSAT_plan[0m[0;34m([0m[0minit[0m[0;34m,[0m [0mtransition[0m[0;34m,[0m [0mgoal[0m[0;34m,[0m [0mt_max[0m[0;34m,[0m [0mSAT_solver[0m[0;34m=[0m[0mdpll_satisfiable[0m[0;34m)[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0;34m"""Converts a planning problem to Satisfaction problem by translating it to a cnf sentence.[0m
[0;34m [Figure 7.22][0m
[0;34m >>> transition = {'A': {'Left': 'A', 'Right': 'B'}, 'B': {'Left': 'A', 'Right': 'C'}, 'C': {'Left': 'B', 'Right': 'C'}}[0m
[0;34m >>> SAT_plan('A', transition, 'C', 2) is None[0m
[0;34m True[0m
[0;34m """[0m[0;34m[0m
[0;34m[0m[0;34m[0m
[0;34m[0m [0;31m# Functions used by SAT_plan[0m[0;34m[0m
[0;34m[0m [0;32mdef[0m [0mtranslate_to_SAT[0m[0;34m([0m[0minit[0m[0;34m,[0m [0mtransition[0m[0;34m,[0m [0mgoal[0m[0;34m,[0m [0mtime[0m[0;34m)[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0mclauses[0m [0;34m=[0m [0;34m[[0m[0;34m][0m[0;34m[0m
[0;34m[0m [0mstates[0m [0;34m=[0m [0;34m[[0m[0mstate[0m [0;32mfor[0m [0mstate[0m [0;32min[0m [0mtransition[0m[0;34m][0m[0;34m[0m
[0;34m[0m[0;34m[0m
[0;34m[0m [0;31m# Symbol claiming state s at time t[0m[0;34m[0m
[0;34m[0m [0mstate_counter[0m [0;34m=[0m [0mitertools[0m[0;34m.[0m[0mcount[0m[0;34m([0m[0;34m)[0m[0;34m[0m
        for s in states:
            for t in range(time + 1):
                state_sym[s, t] = Expr("State_{}".format(next(state_counter)))

        # Add initial state axiom
        clauses.append(state_sym[init, 0])

        # Add goal state axiom
        clauses.append(state_sym[goal, time])

        # All possible transitions
        transition_counter = itertools.count()
        for s in states:
            for action in transition[s]:
                s_ = transition[s][action]
                for t in range(time):
                    # Action 'action' taken from state 's' at time 't' to reach 's_'
                    action_sym[s, action, t] = Expr(
                        "Transition_{}".format(next(transition_counter)))

                    # Change the state from s to s_
                    clauses.append(action_sym[s, action, t] | '==>' | state_sym[s, t])
                    clauses.append(action_sym[s, action, t] | '==>' | state_sym[s_, t + 1])

        # Allow only one state at any time
        for t in range(time + 1):
            # must be a state at any time
            clauses.append(associate('|', [state_sym[s, t] for s in states]))

            for s in states:
                for s_ in states[states.index(s) + 1:]:
                    # for each pair of states s, s_ only one is possible at time t
                    clauses.append((~state_sym[s, t]) | (~state_sym[s_, t]))

        # Restrict to one transition per timestep
        for t in range(time):
            # list of possible transitions at time t
            transitions_t = [tr for tr in action_sym if tr[2] == t]

            # make sure at least one of the transitions happens
            clauses.append(associate('|', [action_sym[tr] for tr in transitions_t]))

            for tr in transitions_t:
                for tr_ in transitions_t[transitions_t.index(tr) + 1:]:
                    # there cannot be two transitions tr and tr_ at time t
                    clauses.append(~action_sym[tr] | ~action_sym[tr_])

        # Combine the clauses to form the cnf
        return associate('&', clauses)

    def extract_solution(model):
        true_transitions = [t for t in action_sym if model[action_sym[t]]]
        # Sort transitions based on time, which is the 3rd element of the tuple
        true_transitions.sort(key=lambda x: x[2])
        return [action for s, action, time in true_transitions]

    # Body of SAT_plan algorithm
    for t in range(t_max):
        # dictionaries to help extract the solution from model
        state_sym = {}
        action_sym = {}

        cnf = translate_to_SAT(init, transition, goal, t)
        model = SAT_solver(cnf)
        if model is not False:
            return extract_solution(model)
    return None
###Markdown
Let's see a few examples of its usage. First we define a transition and then call `SAT_plan`. Note that `SAT_plan` tries plan horizons `t = 0, 1, ..., t_max - 1`, so a goal that is two actions away needs a `t_max` of at least 3; that is why the first call below returns `None`.
###Code
transition = {'A': {'Left': 'A', 'Right': 'B'},
'B': {'Left': 'A', 'Right': 'C'},
'C': {'Left': 'B', 'Right': 'C'}}
print(SAT_plan('A', transition, 'C', 2))
print(SAT_plan('A', transition, 'B', 3))
print(SAT_plan('C', transition, 'A', 3))
###Output
None
['Right']
['Left', 'Left']
###Markdown
Let us do the same for another transition.
###Code
transition = {(0, 0): {'Right': (0, 1), 'Down': (1, 0)},
(0, 1): {'Left': (1, 0), 'Down': (1, 1)},
(1, 0): {'Right': (1, 0), 'Up': (1, 0), 'Left': (1, 0), 'Down': (1, 0)},
(1, 1): {'Left': (1, 0), 'Up': (0, 1)}}
print(SAT_plan((0, 0), transition, (1, 1), 4))
###Output
['Right', 'Down']
###Markdown
First-Order Logic Knowledge Bases: `FolKB`The class `FolKB` can be used to represent a knowledge base of First-order logic sentences. You would initialize and use it the same way as you would for `PropKB` except that the clauses are first-order definite clauses. We will see how to write such clauses to create a database and query them in the following sections. Criminal KBIn this section we create a `FolKB` based on the following paragraph. The law says that it is a crime for an American to sell weapons to hostile nations. The country Nono, an enemy of America, has some missiles, and all of its missiles were sold to it by Colonel West, who is American. The first step is to extract the facts and convert them into first-order definite clauses. Extracting the facts from data alone is a challenging task. Fortunately, we have a small paragraph and can do extraction and conversion manually. We'll store the clauses in a list aptly named `clauses`.
###Code
clauses = []
###Output
_____no_output_____
###Markdown
“... it is a crime for an American to sell weapons to hostile nations”The keywords to look for here are 'crime', 'American', 'sell', 'weapon' and 'hostile'. We use predicate symbols to capture their meaning.
* `Criminal(x)`: `x` is a criminal
* `American(x)`: `x` is an American
* `Sells(x, y, z)`: `x` sells `y` to `z`
* `Weapon(x)`: `x` is a weapon
* `Hostile(x)`: `x` is a hostile nation

Let us now combine them with appropriate variable naming to depict the meaning of the sentence. The criminal `x` is also the American `x` who sells weapon `y` to `z`, which is a hostile nation.$\text{American}(x) \land \text{Weapon}(y) \land \text{Sells}(x, y, z) \land \text{Hostile}(z) \implies \text{Criminal} (x)$
###Code
clauses.append(expr("(American(x) & Weapon(y) & Sells(x, y, z) & Hostile(z)) ==> Criminal(x)"))
###Output
_____no_output_____
###Markdown
"The country Nono, an enemy of America"We now know that Nono is an enemy of America. We represent these nations using the constant symbols `Nono` and `America`. the enemy relation is show using the predicate symbol `Enemy`.$\text{Enemy}(\text{Nono}, \text{America})$
###Code
clauses.append(expr("Enemy(Nono, America)"))
###Output
_____no_output_____
###Markdown
"Nono ... has some missiles"This states the existence of some missile which is owned by Nono. $\exists x \text{Owns}(\text{Nono}, x) \land \text{Missile}(x)$. We invoke existential instantiation to introduce a new constant `M1` which is the missile owned by Nono.$\text{Owns}(\text{Nono}, \text{M1}), \text{Missile}(\text{M1})$
###Code
clauses.append(expr("Owns(Nono, M1)"))
clauses.append(expr("Missile(M1)"))
###Output
_____no_output_____
###Markdown
"All of its missiles were sold to it by Colonel West"If Nono owns something and it classifies as a missile, then it was sold to Nono by West.$\text{Missile}(x) \land \text{Owns}(\text{Nono}, x) \implies \text{Sells}(\text{West}, x, \text{Nono})$
###Code
clauses.append(expr("(Missile(x) & Owns(Nono, x)) ==> Sells(West, x, Nono)"))
###Output
_____no_output_____
###Markdown
"West, who is American"West is an American.$\text{American}(\text{West})$
###Code
clauses.append(expr("American(West)"))
###Output
_____no_output_____
###Markdown
We also know, from our understanding of language, that missiles are weapons and that an enemy of America counts as “hostile”.$\text{Missile}(x) \implies \text{Weapon}(x), \text{Enemy}(x, \text{America}) \implies \text{Hostile}(x)$
###Code
clauses.append(expr("Missile(x) ==> Weapon(x)"))
clauses.append(expr("Enemy(x, America) ==> Hostile(x)"))
###Output
_____no_output_____
###Markdown
Now that we have converted the information into first-order definite clauses we can create our first-order logic knowledge base.
###Code
crime_kb = FolKB(clauses)
###Output
_____no_output_____
###Markdown
The `subst` helper function substitutes variables with given values in first-order logic statements. This will be useful in later algorithms. Its implementation is quite simple and self-explanatory.
###Code
psource(subst)
###Output
Object `(subst)` not found.
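###Markdown
Since the `psource` call above failed to display anything in this run, here is a minimal sketch of what `subst` does, written against the aima-python helpers `Expr` and `is_var_symbol`; treat it as an illustration rather than the library's exact code.
###Code
def subst_sketch(s, x):
    """Substitute the substitution dict s into the expression x (a sketch)."""
    if isinstance(x, list):                    # substitute into every element
        return [subst_sketch(s, xi) for xi in x]
    elif isinstance(x, tuple):
        return tuple(subst_sketch(s, xi) for xi in x)
    elif not isinstance(x, Expr):              # numbers etc. are left alone
        return x
    elif is_var_symbol(x.op):                  # a variable: replace it if bound in s
        return s.get(x, x)
    else:                                      # otherwise recurse into the arguments
        return Expr(x.op, *[subst_sketch(s, arg) for arg in x.args])
###Output
_____no_output_____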
###Markdown
Here's an example of how `subst` can be used.
###Code
subst({x: expr('Nono'), y: expr('M1')}, expr('Owns(x, y)'))
###Output
_____no_output_____
###Markdown
Inference in First-Order LogicIn this section we look at a forward chaining and a backward chaining algorithm for `FolKB`. Both aforementioned algorithms rely on a process called unification, a key component of all first-order inference algorithms. UnificationWe sometimes require finding substitutions that make different logical expressions look identical. This process, called unification, is done by the `unify` algorithm. It takes as input two sentences and returns a unifier for them if one exists. A unifier is a dictionary which stores the substitutions required to make the two sentences identical. It does so by recursively unifying the components of a sentence, where the unification of a variable symbol `var` with a constant symbol `Const` is the mapping `{var: Const}`. Let's look at a few examples.
###Code
unify(expr('x'), 3)
unify(expr('A(x)'), expr('A(B)'))
unify(expr('Cat(x) & Dog(Dobby)'), expr('Cat(Bella) & Dog(y)'))
###Output
_____no_output_____
###Markdown
In cases where there is no possible substitution that unifies the two sentences, the function returns `None`.
###Code
print(unify(expr('Cat(x)'), expr('Dog(Dobby)')))
###Output
None
###Markdown
We also need to take care that we do not unintentionally use the same variable name in both sentences, since `unify` treats them as a single variable, which prevents that variable from taking on multiple values.
###Code
print(unify(expr('Cat(x) & Dog(Dobby)'), expr('Cat(Bella) & Dog(x)')))
###Output
None
###Markdown
Forward Chaining AlgorithmWe consider the simple forward-chaining algorithm presented in Figure 9.3. We look at each rule in the knowledge base and see if its premises can be satisfied. This is done by finding a substitution which unifies each of the premises with a clause in the `KB`. If we are able to unify the premises, the conclusion (with the corresponding substitution) is added to the `KB`. This inferencing process is repeated until either the query can be answered or no new sentences can be added. We test whether each newly added clause unifies with the query, in which case the substitution yielded by `unify` is an answer to the query. If we run out of sentences to infer, the query is a failure.The function `fol_fc_ask` is a generator which yields all substitutions which validate the query.
###Code
psource(fol_fc_ask)
###Output
Object `(fol_fc_ask)` not found.
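###Markdown
The `psource` call failed to render in this run, so the following is a simplified sketch of naive first-order forward chaining, assuming the aima-python helpers `parse_definite_clause`, `unify` and `subst`; it omits the variable standardization and indexing that the real `fol_fc_ask` performs.
###Code
def fol_fc_ask_sketch(kb, alpha):
    """Naive forward chaining (sketch): repeatedly fire every rule whose
    premises all unify with known facts, yielding each substitution that
    makes the query alpha true."""
    def prove_premises(premises, theta):
        # Generate every substitution under which all premises are known facts.
        if not premises:
            yield theta
        else:
            first, rest = premises[0], premises[1:]
            for fact in kb.clauses:
                theta1 = unify(first, fact, theta)
                if theta1 is not None:
                    yield from prove_premises(rest, theta1)

    while True:
        new = []
        for rule in kb.clauses:
            lhs, rhs = parse_definite_clause(rule)
            for theta in prove_premises(lhs, {}):
                q = subst(theta, rhs)
                # only keep conclusions that are genuinely new
                if all(unify(q, known, {}) is None for known in kb.clauses + new):
                    new.append(q)
                    phi = unify(q, alpha, {})
                    if phi is not None:
                        yield phi
        if not new:          # fixed point reached: nothing left to infer
            return
        for fact in new:
            kb.tell(fact)
###Output
_____no_output_____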
###Markdown
Let's find out all the hostile nations. Note that we only told the `KB` that Nono was an enemy of America, not that it was hostile.
###Code
answer = fol_fc_ask(crime_kb, expr('Hostile(x)'))
print(list(answer))
###Output
[{x: Nono}]
###Markdown
The generator returned a single substitution which says that Nono is a hostile nation. See how after adding another enemy nation the generator returns two substitutions.
###Code
crime_kb.tell(expr('Enemy(JaJa, America)'))
answer = fol_fc_ask(crime_kb, expr('Hostile(x)'))
print(list(answer))
###Output
[{x: Nono}, {x: JaJa}]
###Markdown
Note: `fol_fc_ask` makes changes to the `KB` by adding sentences to it. Backward Chaining AlgorithmThis algorithm works backward from the goal, chaining through rules to find known facts that support the proof. Suppose `goal` is the query we want to find the substitution for. We find rules of the form $\text{lhs} \implies \text{goal}$ in the `KB` and try to prove `lhs`. There may be multiple clauses in the `KB` which give multiple `lhs`. It is sufficient to prove only one of these. But to prove a `lhs`, all the conjuncts in the `lhs` of the clause must be proved. This makes it similar to And/Or search. ORThe OR part of the algorithm comes from our choice to select any clause of the form $\text{lhs} \implies \text{goal}$. For each rule whose `rhs` unifies with the `goal`, we try to yield a substitution which proves all the conjuncts in its `lhs`. We use `parse_definite_clause` to obtain `lhs` and `rhs` from a clause of the form $\text{lhs} \implies \text{rhs}$. For atomic facts the `lhs` is an empty list.
###Code
psource(fol_bc_or)
###Output
Object `(fol_bc_or)` not found.
###Markdown
ANDThe AND corresponds to proving all the conjuncts in the `lhs`. We need to find a substitution which proves each and every clause in the list of conjuncts.
###Code
psource(fol_bc_and)
###Output
Object `(fol_bc_and)` not found.
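###Markdown
The two `psource` calls failed to render in this run, so here is a sketch of how the OR and AND steps fit together, assuming the aima-python helpers `fetch_rules_for_goal`, `parse_definite_clause`, `standardize_variables`, `unify` and `subst`; treat the exact code as illustrative.
###Code
def fol_bc_or_sketch(KB, goal, theta):
    # Try every rule whose conclusion can unify with the goal ...
    for rule in KB.fetch_rules_for_goal(goal):
        lhs, rhs = parse_definite_clause(standardize_variables(rule))
        # ... and prove all of its premises under the extended substitution.
        for theta1 in fol_bc_and_sketch(KB, lhs, unify(rhs, goal, theta)):
            yield theta1

def fol_bc_and_sketch(KB, goals, theta):
    if theta is None:          # unification failed upstream: dead branch
        return
    if not goals:              # nothing left to prove
        yield theta
    else:
        first, rest = goals[0], goals[1:]
        # prove the first conjunct, then the rest under each resulting theta
        for theta1 in fol_bc_or_sketch(KB, subst(theta, first), theta):
            for theta2 in fol_bc_and_sketch(KB, rest, theta1):
                yield theta2
###Output
_____no_output_____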
###Markdown
Now the main function `fol_bc_ask` calls `fol_bc_or` with the substitution initialized as empty. The `ask` method of `FolKB` uses `fol_bc_ask` and fetches the first substitution returned by the generator to answer the query. Let's query the knowledge base we created from `clauses` to find hostile nations.
###Code
# Rebuild KB because running fol_fc_ask would add new facts to the KB
crime_kb = FolKB(clauses)
crime_kb.ask(expr('Hostile(x)'))
###Output
_____no_output_____
###Markdown
You may notice some new variables in the substitution. They are introduced to standardize the variable names to prevent naming problems, as discussed in the [Unification section](#Unification). Appendix: The Implementation of `|'==>'|`Consider the `Expr` formed by this syntax:
###Code
P |'==>'| ~Q
###Output
_____no_output_____
###Markdown
What is the funny `|'==>'|` syntax? The trick is that "`|`" is just the regular Python or-operator, and so is exactly equivalent to this:
###Code
(P | '==>') | ~Q
###Output
_____no_output_____
###Markdown
In other words, there are two applications of or-operators. Here's the first one:
###Code
P | '==>'
###Output
_____no_output_____
###Markdown
What is going on here is that the `__or__` method of `Expr` serves a dual purpose. If the right-hand-side is another `Expr` (or a number), then the result is an `Expr`, as in `(P | Q)`. But if the right-hand-side is a string, then the string is taken to be an operator, and we create a node in the abstract syntax tree corresponding to a partially-filled `Expr`, one where we know the left-hand-side is `P` and the operator is `==>`, but we don't yet know the right-hand-side.The `PartialExpr` class has an `__or__` method that says to create an `Expr` node with the right-hand-side filled in. Here we can see the combination of the `PartialExpr` with `Q` to create a complete `Expr`:
###Code
partial = PartialExpr('==>', P)
partial | ~Q
###Output
_____no_output_____
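###Markdown
A minimal sketch of the `PartialExpr` class itself could look like this (the real class lives in the utilities module; the `__repr__` here is an assumption added for readability):
###Code
class PartialExprSketch:
    """Holds an operator and a left-hand side, waiting for `| rhs` (sketch)."""
    def __init__(self, op, lhs):
        self.op, self.lhs = op, lhs
    def __or__(self, rhs):
        # completing the expression produces an ordinary Expr node
        return Expr(self.op, self.lhs, rhs)
    def __repr__(self):
        return "PartialExpr('{}', {})".format(self.op, self.lhs)
###Output
_____no_output_____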
###Markdown
This [trick](http://code.activestate.com/recipes/384122-infix-operators/) is due to [Ferdinand Jamitzky](http://code.activestate.com/recipes/users/98863/), with a modification by [C. G. Vedant](https://github.com/Chipe1),who suggested using a string inside the or-bars. Appendix: The Implementation of `expr`How does `expr` parse a string into an `Expr`? It turns out there are two tricks (besides the Jamitzky/Vedant trick):1. We do a string substitution, replacing "`==>`" with "`|'==>'|`" (and likewise for other operators).2. We `eval` the resulting string in an environment in which every identifier is bound to a symbol with that identifier as the `op`. In other words,
###Code
expr('~(P & Q) ==> (~P | ~Q)')
###Output
_____no_output_____
###Markdown
is equivalent to doing:
###Code
P, Q = symbols('P, Q')
~(P & Q) |'==>'| (~P | ~Q)
###Output
_____no_output_____
###Markdown
One thing to beware of: this puts `==>` at the same precedence level as `"|"`, which is not quite right. For example, we get this:
###Code
P & Q |'==>'| P | Q
###Output
_____no_output_____
###Markdown
which is probably not what we meant; when in doubt, put in extra parens:
###Code
(P & Q) |'==>'| (P | Q)
###Output
_____no_output_____
###Markdown
Examples
###Code
from aima_notebook import Canvas_fol_bc_ask
canvas_bc_ask = Canvas_fol_bc_ask('canvas_bc_ask', crime_kb, expr('Criminal(x)'))
###Output
_____no_output_____
###Markdown
Logic This Jupyter notebook acts as supporting material for topics covered in __Chapter 7 Logical Agents__, __Chapter 8 First-Order Logic__ and __Chapter 9 Inference in First-Order Logic__ of the book *[Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu)*. We make use of the implementations in the [logic.py](https://github.com/aimacode/aima-python/blob/master/logic.py) module. See the [intro notebook](https://github.com/aimacode/aima-python/blob/master/intro.ipynb) for instructions. Let's first import everything from the `logic` module.
###Code
from utils import *
from logic import *
from notebook import psource
###Output
_____no_output_____
###Markdown
CONTENTS
- Logical sentences
    - Expr
    - PropKB
    - Knowledge-based agents
    - Inference in propositional knowledge base
        - Truth table enumeration
        - Proof by resolution
        - Forward and backward chaining
        - DPLL
        - WalkSAT
        - SATPlan
    - FolKB
    - Inference in first order knowledge base
        - Unification
        - Forward chaining algorithm
        - Backward chaining algorithm

Logical Sentences The `Expr` class is designed to represent any kind of mathematical expression. The simplest type of `Expr` is a symbol, which can be defined with the function `Symbol`:
###Code
Symbol('x')
###Output
_____no_output_____
###Markdown
Or we can define multiple symbols at the same time with the function `symbols`:
###Code
(x, y, P, Q, f) = symbols('x, y, P, Q, f')
psource(symbols)
###Output
_____no_output_____
###Markdown
We can combine `Expr`s with the regular Python infix and prefix operators. Here's how we would form the logical sentence "P and not Q":
###Code
P & ~Q
###Output
_____no_output_____
###Markdown
This works because the `Expr` class overloads the `&` operator with this definition:```pythondef __and__(self, other): return Expr('&', self, other)``` and does similar overloads for the other operators. An `Expr` has two fields: `op` for the operator, which is always a string, and `args` for the arguments, which is a tuple of 0 or more expressions. By "expression," I mean either an instance of `Expr`, or a number. Let's take a look at the fields for some `Expr` examples:
###Code
sentence = P & ~Q
sentence.op
sentence.args
P.op
P.args
Pxy = P(x, y)
Pxy.op
Pxy.args
###Output
_____no_output_____
###Markdown
It is important to note that the `Expr` class does not define the *logic* of Propositional Logic sentences; it just gives you a way to *represent* expressions. Think of an `Expr` as an [abstract syntax tree](https://en.wikipedia.org/wiki/Abstract_syntax_tree). Each of the `args` in an `Expr` can be either a symbol, a number, or a nested `Expr`. We can nest these trees to any depth. Here is a deeply nested `Expr`:
###Code
3 * f(x, y) + P(y) / 2 + 1
###Output
_____no_output_____
###Markdown
Operators for Constructing Logical SentencesHere is a table of the operators that can be used to form sentences. Note that we have a problem: we want to use Python operators to make sentences, so that our programs (and our interactive sessions like the one here) will show simple code. But Python does not allow implication arrows as operators, so for now we have to use a more verbose notation that Python does allow: `|'==>'|` instead of just `==>`. Alternately, you can always use the more verbose `Expr` constructor forms:

| Operation | Book | Python Infix Input | Python Output | Python `Expr` Input |
|--------------------------|----------------------|-------------------------|---------------|---------------------|
| Negation | ¬ P | `~P` | `~P` | `Expr('~', P)` |
| And | P ∧ Q | `P & Q` | `P & Q` | `Expr('&', P, Q)` |
| Or | P ∨ Q | `P` &#124; `Q` | `P` &#124; `Q` | `Expr('`&#124;`', P, Q)` |
| Inequality (Xor) | P ≠ Q | `P ^ Q` | `P ^ Q` | `Expr('^', P, Q)` |
| Implication | P → Q | `P` &#124;`'==>'`&#124; `Q` | `P ==> Q` | `Expr('==>', P, Q)` |
| Reverse Implication | Q ← P | `Q` &#124;`'<=='`&#124; `P` | `Q <== P` | `Expr('<==', Q, P)` |
| Equivalence | P ↔ Q | `P` &#124;`'<=>'`&#124; `Q` | `P <=> Q` | `Expr('<=>', P, Q)` |

Here's an example of defining a sentence with an implication arrow:
###Code
~(P & Q) |'==>'| (~P | ~Q)
###Output
_____no_output_____
###Markdown
`expr`: a Shortcut for Constructing SentencesIf the `|'==>'|` notation looks ugly to you, you can use the function `expr` instead:
###Code
expr('~(P & Q) ==> (~P | ~Q)')
###Output
_____no_output_____
###Markdown
`expr` takes a string as input, and parses it into an `Expr`. The string can contain arrow operators: `==>`, `<==`, or `<=>`, which are handled as if they were regular Python infix operators. And `expr` automatically defines any symbols, so you don't need to pre-define them:
###Code
expr('sqrt(b ** 2 - 4 * a * c)')
###Output
_____no_output_____
###Markdown
For now that's all you need to know about `expr`. If you are interested, we explain the messy details of how `expr` is implemented and how `|'==>'|` is handled in the appendix. Propositional Knowledge Bases: `PropKB`The class `PropKB` can be used to represent a knowledge base of propositional logic sentences. We see that the class `KB` has four methods, apart from `__init__`. A point to note here: the `ask` method simply calls the `ask_generator` method. Thus, this one has already been implemented, and what you'll have to actually implement when you create your own knowledge base class (though you'll probably never need to, considering the ones we've created for you) will be the `ask_generator` function and not the `ask` function itself. Now for the class `PropKB`:
* `__init__(self, sentence=None)`: The constructor `__init__` creates a single field `clauses` which will be a list of all the sentences of the knowledge base. Note that each one of these sentences will be a 'clause', i.e. a sentence which is made up of only literals and `or`s.
* `tell(self, sentence)`: When you want to add a sentence to the KB, you use the `tell` method. This method takes a sentence, converts it to its CNF, extracts all the clauses, and adds all these clauses to the `clauses` field. So, you need not worry about `tell`ing only clauses to the knowledge base. You can `tell` the knowledge base a sentence in any form that you wish; converting it to CNF and adding the resulting clauses will be handled by the `tell` method.
* `ask_generator(self, query)`: The `ask_generator` function is used by the `ask` function. It calls the `tt_entails` function, which in turn returns `True` if the knowledge base entails the query and `False` otherwise. The `ask_generator` itself returns an empty dict `{}` if the knowledge base entails the query and `None` otherwise. This might seem a little bit weird to you. After all, it makes more sense just to return a `True` or a `False` instead of `{}` or `None`. But this is done to maintain consistency with the way things are in First-Order Logic, where an `ask_generator` function is supposed to return all the substitutions that make the query true. Hence the dict, to return all these substitutions. I will mostly be using the `ask` function, which returns a `{}` or a `False`, but if you don't like this, you can always use the `ask_if_true` function, which returns a `True` or a `False`.
* `retract(self, sentence)`: This function removes all the clauses of the given sentence from the knowledge base. Like the `tell` function, you don't have to pass clauses to remove them from the knowledge base; any sentence will do fine. The function will take care of converting that sentence to clauses and then removing those.

Wumpus World KBLet us create a `PropKB` for the wumpus world with the sentences mentioned in `section 7.4.3`.
###Code
wumpus_kb = PropKB()
psource(PropKB)
###Output
_____no_output_____
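###Markdown
Since the source isn't shown in this rendering, here is a condensed sketch of how `PropKB` behaves, assuming the `to_cnf`, `conjuncts` and `tt_entails` helpers described in this notebook (the real class also inherits from `KB`):
###Code
class PropKB_sketch:
    """A propositional knowledge base storing a list of CNF clauses (sketch)."""
    def __init__(self, sentence=None):
        self.clauses = []
        if sentence:
            self.tell(sentence)

    def tell(self, sentence):
        # convert the sentence to CNF and store its individual clauses
        self.clauses.extend(conjuncts(to_cnf(sentence)))

    def ask_generator(self, query):
        # yield the empty substitution iff the KB entails the query
        if tt_entails(Expr('&', *self.clauses), query):
            yield {}

    def ask_if_true(self, query):
        for _ in self.ask_generator(query):
            return True
        return False

    def retract(self, sentence):
        # remove every clause of the sentence from the KB
        for c in conjuncts(to_cnf(sentence)):
            if c in self.clauses:
                self.clauses.remove(c)
###Output
_____no_output_____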
###Markdown
We define the symbols we use in our clauses. $P_{x, y}$ is true if there is a pit in `[x, y]`. $B_{x, y}$ is true if the agent senses breeze in `[x, y]`.
###Code
P11, P12, P21, P22, P31, B11, B21 = expr('P11, P12, P21, P22, P31, B11, B21')
###Output
_____no_output_____
###Markdown
Now we tell sentences based on `section 7.4.3`. There is no pit in `[1,1]`.
###Code
wumpus_kb.tell(~P11)
###Output
_____no_output_____
###Markdown
A square is breezy if and only if there is a pit in a neighboring square. This has to be stated for each square but for now, we include just the relevant squares.
###Code
wumpus_kb.tell(B11 | '<=>' | ((P12 | P21)))
wumpus_kb.tell(B21 | '<=>' | ((P11 | P22 | P31)))
###Output
_____no_output_____
###Markdown
Now we include the breeze percepts for the first two squares leading up to the situation in `Figure 7.3(b)`
###Code
wumpus_kb.tell(~B11)
wumpus_kb.tell(B21)
###Output
_____no_output_____
###Markdown
We can check the clauses stored in a `KB` by accessing its `clauses` variable
###Code
wumpus_kb.clauses
###Output
_____no_output_____
###Markdown
We see that the equivalence $B_{1, 1} \iff (P_{1, 2} \lor P_{2, 1})$ was automatically converted to two implications which were in turn converted to CNF and stored in the `KB`. $B_{1, 1} \iff (P_{1, 2} \lor P_{2, 1})$ was split into $B_{1, 1} \implies (P_{1, 2} \lor P_{2, 1})$ and $B_{1, 1} \Longleftarrow (P_{1, 2} \lor P_{2, 1})$. $B_{1, 1} \implies (P_{1, 2} \lor P_{2, 1})$ was converted to $P_{1, 2} \lor P_{2, 1} \lor \neg B_{1, 1}$. $B_{1, 1} \Longleftarrow (P_{1, 2} \lor P_{2, 1})$ was converted to $\neg (P_{1, 2} \lor P_{2, 1}) \lor B_{1, 1}$, which becomes $(\neg P_{1, 2} \lor B_{1, 1}) \land (\neg P_{2, 1} \lor B_{1, 1})$ after applying De Morgan's laws and distributing the disjunction. $B_{2, 1} \iff (P_{1, 1} \lor P_{2, 2} \lor P_{3, 1})$ is converted in a similar manner. Knowledge based agents A knowledge-based agent is a simple generic agent that maintains and handles a knowledge base. The knowledge base may initially contain some background knowledge. The purpose of a KB agent is to provide a level of abstraction over knowledge-base manipulation, and it is to be used as a base for agents that work on a knowledge base. Given a percept, the KB agent adds the percept to its knowledge base, asks the knowledge base for the best action, and tells the knowledge base that it has in fact taken that action. Our implementation of `KB-Agent` is encapsulated in the function `KB_AgentProgram`, which takes a `KB` instance and returns an agent program. Let's have a look.
###Code
psource(KB_AgentProgram)
###Output
_____no_output_____
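###Markdown
The source didn't render in this run; the structure of `KB_AgentProgram` is roughly the following sketch (the exact percept and action sentence formats are assumptions for illustration):
###Code
import itertools

def KB_AgentProgram_sketch(KB):
    """A generic knowledge-based agent program (sketch of Figure 7.1)."""
    steps = itertools.count()

    def program(percept):
        t = next(steps)
        KB.tell(make_percept_sentence(percept, t))   # record what we perceived
        action = KB.ask(make_action_query(t))        # ask the KB for the best action
        KB.tell(make_action_sentence(action, t))     # record what we did
        return action

    def make_percept_sentence(percept, t):
        return Expr('Percept')(percept, t)

    def make_action_query(t):
        return expr('ShouldDo(action, {})'.format(t))

    def make_action_sentence(action, t):
        # action is the substitution dict returned by KB.ask
        return Expr('Did')(action[expr('action')], t)

    return program
###Output
_____no_output_____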
###Markdown
The helper functions `make_percept_sentence`, `make_action_query` and `make_action_sentence` are all aptly named and, as expected, `make_percept_sentence` makes first-order logic sentences about percepts we want our agent to receive, `make_action_query` asks the underlying `KB` about the action that should be taken, and `make_action_sentence` tells the underlying `KB` about the action it has just taken. Inference in Propositional Knowledge BaseIn this section we will look at two algorithms to check if a sentence is entailed by the `KB`. Our goal is to decide whether $\text{KB} \vDash \alpha$ for some sentence $\alpha$. Truth Table EnumerationIt is a model-checking approach which, as the name suggests, enumerates all possible models in which the `KB` is true and checks if $\alpha$ is also true in these models. We list the $n$ symbols in the `KB` and enumerate the $2^{n}$ models in a depth-first manner and check the truth of `KB` and $\alpha$.
###Code
psource(tt_check_all)
###Output
_____no_output_____
###Markdown
The algorithm basically computes every line of the truth table for $KB \implies \alpha$ and checks that it is true everywhere. If symbols are defined, the routine recursively constructs every combination of truth values for the symbols and then checks whether `model` is consistent with `kb`. The given models correspond to the lines in the truth table which have a `true` in the KB column, and for these lines it checks whether the query evaluates to true: `result = pl_true(alpha, model)`. In short, `tt_check_all` evaluates this logical expression for each `model`: `pl_true(kb, model) => pl_true(alpha, model)`. Equivalently, it checks that `pl_true(kb, model) & ~pl_true(alpha, model)` is false for every model, that is, that the knowledge base and the negation of the query are jointly unsatisfiable. `tt_entails()` just extracts the symbols from the query and calls `tt_check_all()` with the proper parameters.
###Code
psource(tt_entails)
###Output
_____no_output_____
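###Markdown
As a compact reference, here is a sketch of the pair of routines, assuming the `prop_symbols`, `pl_true` and `extend` helpers from this code base:
###Code
def tt_entails_sketch(kb, alpha):
    """Does kb entail alpha? Enumerate all models over their symbols (sketch)."""
    symbols = list(prop_symbols(kb & alpha))
    return tt_check_all_sketch(kb, alpha, symbols, {})

def tt_check_all_sketch(kb, alpha, symbols, model):
    if not symbols:
        # complete model: if kb holds here, alpha must hold here too
        return pl_true(alpha, model) if pl_true(kb, model) else True
    P, rest = symbols[0], symbols[1:]
    # branch on the next unassigned symbol
    return (tt_check_all_sketch(kb, alpha, rest, extend(model, P, True)) and
            tt_check_all_sketch(kb, alpha, rest, extend(model, P, False)))
###Output
_____no_output_____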
###Markdown
Keep in mind that for two symbols P and Q, P => Q is false only when P is `True` and Q is `False`. Example usage of `tt_entails()`:
###Code
tt_entails(P & Q, Q)
###Output
_____no_output_____
###Markdown
P & Q is True only when both P and Q are True. Hence, (P & Q) => Q is True
###Code
tt_entails(P | Q, Q)
tt_entails(P | Q, P)
###Output
_____no_output_____
###Markdown
If we know that P | Q is true, we cannot infer the truth values of P and Q. Hence (P | Q) => Q is False and so is (P | Q) => P.
###Code
(A, B, C, D, E, F, G) = symbols('A, B, C, D, E, F, G')
tt_entails(A & (B | C) & D & E & ~(F | G), A & D & E & ~F & ~G)
###Output
_____no_output_____
###Markdown
We can see that for the KB to be true, A, D, E have to be True and F and G have to be False. Nothing can be said about B or C. Coming back to our problem, note that `tt_entails()` takes an `Expr` which is a conjunction of clauses as the input instead of the `KB` itself. You can use the `ask_if_true()` method of `PropKB` which does all the required conversions. Let's check what `wumpus_kb` tells us about $P_{1, 1}$.
###Code
wumpus_kb.ask_if_true(~P11), wumpus_kb.ask_if_true(P11)
###Output
_____no_output_____
###Markdown
Looking at Figure 7.9 we see that in all models in which the knowledge base is `True`, $P_{1, 1}$ is `False`. It makes sense that `ask_if_true()` returns `True` for $\alpha = \neg P_{1, 1}$ and `False` for $\alpha = P_{1, 1}$. This begs the question, what if $\alpha$ is `True` in only a portion of all models. Do we return `True` or `False`? This doesn't rule out the possibility of $\alpha$ being `True` but it is not entailed by the `KB` so we return `False` in such cases. We can see this is the case for $P_{2, 2}$ and $P_{3, 1}$.
###Code
wumpus_kb.ask_if_true(~P22), wumpus_kb.ask_if_true(P22)
###Output
_____no_output_____
###Markdown
Proof by ResolutionRecall that our goal is to check whether $\text{KB} \vDash \alpha$, i.e., whether $\text{KB} \implies \alpha$ is true in every model. Suppose we wanted to check if $P \implies Q$ is valid. We check the satisfiability of $\neg (P \implies Q)$, which can be rewritten as $P \land \neg Q$. If $P \land \neg Q$ is unsatisfiable, then $P \implies Q$ must be true in all models. This gives us the result "$\text{KB} \vDash \alpha$ if and only if $\text{KB} \land \neg \alpha$ is unsatisfiable". This technique corresponds to proof by contradiction, a standard mathematical proof technique. We assume $\alpha$ to be false and show that this leads to a contradiction with known axioms in $\text{KB}$. We obtain a contradiction by making valid inferences using inference rules. In this proof we use a single inference rule, resolution, which states $(l_1 \lor \dots \lor l_k) \land (m_1 \lor \dots \lor m_n) \land (l_i \iff \neg m_j) \implies l_1 \lor \dots \lor l_{i - 1} \lor l_{i + 1} \lor \dots \lor l_k \lor m_1 \lor \dots \lor m_{j - 1} \lor m_{j + 1} \lor \dots \lor m_n$. Applying resolution yields a clause which we add to the KB. We keep doing this until:
* There are no new clauses that can be added, in which case $\text{KB} \nvDash \alpha$.
* Two clauses resolve to yield the empty clause, in which case $\text{KB} \vDash \alpha$.

The empty clause is equivalent to False because it arises only from resolving two complementary unit clauses such as $P$ and $\neg P$, which is a contradiction as both $P$ and $\neg P$ can't be True at the same time. There is one catch, however: the algorithm that implements proof by resolution cannot handle complex sentences. Implications and bi-implications have to be simplified into simpler clauses. We already know that *every sentence of propositional logic is logically equivalent to a conjunction of clauses*. We will use this fact to our advantage and simplify the input sentence into **conjunctive normal form** (CNF), which is a conjunction of disjunctions of literals. For example:$$(A\lor B)\land (\neg B\lor C\lor\neg D)\land (D\lor\neg E)$$This is equivalent to the POS (Product of Sums) form in digital electronics. Here's an outline of how the conversion is done:
1. Convert bi-implications to implications: $\alpha\iff\beta$ can be written as $(\alpha\implies\beta)\land(\beta\implies\alpha)$. This also applies to compound sentences: $\alpha\iff(\beta\lor\gamma)$ can be written as $(\alpha\implies(\beta\lor\gamma))\land((\beta\lor\gamma)\implies\alpha)$.
2. Convert implications to their logical equivalents: $\alpha\implies\beta$ can be written as $\neg\alpha\lor\beta$.
3. Move negation inwards: CNF requires atomic literals, so negation cannot appear on a compound statement. De Morgan's laws are helpful here: $\neg(\alpha\land\beta)\equiv(\neg\alpha\lor\neg\beta)$ and $\neg(\alpha\lor\beta)\equiv(\neg\alpha\land\neg\beta)$.
4. Distribute disjunction over conjunction: disjunction and conjunction are distributive over each other. Now that we only have conjunctions, disjunctions and negations in our expression, we distribute disjunctions over conjunctions wherever possible, as this gives us a sentence which is a conjunction of simpler clauses, which is what we wanted in the first place. We need a term of the form $(\alpha_{1}\lor\alpha_{2}\lor\alpha_{3}\dots)\land(\beta_{1}\lor\beta_{2}\lor\beta_{3}\dots)\land(\gamma_{1}\lor\gamma_{2}\lor\gamma_{3}\dots)\land\dots$

The `to_cnf` function executes this conversion using helper subroutines.
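As a worked example of these four steps, consider $A \iff (B \lor C)$:
1. $(A \implies (B \lor C)) \land ((B \lor C) \implies A)$
2. $(\neg A \lor B \lor C) \land (\neg (B \lor C) \lor A)$
3. $(\neg A \lor B \lor C) \land ((\neg B \land \neg C) \lor A)$
4. $(\neg A \lor B \lor C) \land (\neg B \lor A) \land (\neg C \lor A)$

The result is a conjunction of three clauses, which is what `to_cnf` computes for this sentence (possibly with the clauses in a different order).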
###Code
psource(to_cnf)
###Output
_____no_output_____
###Markdown
`to_cnf` calls three subroutines.`eliminate_implications` converts bi-implications and implications to their logical equivalents.`move_not_inwards` removes negations from compound statements and moves them inwards using De Morgan's laws.`distribute_and_over_or` distributes disjunctions over conjunctions.Run the cell below for implementation details.
###Code
psource(eliminate_implications)
psource(move_not_inwards)
psource(distribute_and_over_or)
###Output
_____no_output_____
###Markdown
Let's convert some sentences to see how it works.
###Code
A, B, C, D = expr('A, B, C, D')
to_cnf(A |'<=>'| B)
to_cnf(A |'<=>'| (B & C))
to_cnf(A & (B | (C & D)))
to_cnf((A |'<=>'| ~B) |'==>'| (C | ~D))
###Output
_____no_output_____
###Markdown
Coming back to our resolution problem, we can see how the `to_cnf` function is utilized here
###Code
psource(pl_resolution)
pl_resolution(wumpus_kb, ~P11), pl_resolution(wumpus_kb, P11)
pl_resolution(wumpus_kb, ~P22), pl_resolution(wumpus_kb, P22)
###Output
_____no_output_____
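###Markdown
Since the source isn't displayed in this rendering, here is a sketch of the resolution loop, assuming the `conjuncts`, `to_cnf` and `pl_resolve` helpers (`pl_resolve` returns all resolvents of two clauses, with `False` standing for the empty clause):
###Code
def pl_resolution_sketch(KB, alpha):
    """Does KB entail alpha? Show KB & ~alpha unsatisfiable by resolution (sketch)."""
    clauses = KB.clauses + conjuncts(to_cnf(~alpha))
    new = set()
    while True:
        n = len(clauses)
        pairs = [(clauses[i], clauses[j])
                 for i in range(n) for j in range(i + 1, n)]
        for ci, cj in pairs:
            resolvents = pl_resolve(ci, cj)
            if False in resolvents:        # derived the empty clause
                return True
            new = new.union(set(resolvents))
        if new.issubset(set(clauses)):     # no new clauses: no entailment
            return False
        for c in new:
            if c not in clauses:
                clauses.append(c)
###Output
_____no_output_____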
###Markdown
Forward and backward chainingPreviously, we said we will look at two algorithms to check if a sentence is entailed by the `KB`. Here's a third one. The difference here is that our goal now is to determine if a knowledge base of definite clauses entails a single proposition symbol *q* - the query. There is a catch however - the knowledge base can only contain **Horn clauses**. Horn ClausesHorn clauses can be defined as a *disjunction* of *literals* with **at most** one positive literal. A Horn clause with exactly one positive literal is called a *definite clause*. A Horn clause might look like $\neg a\lor\neg b\lor\neg c\lor\neg d \dots \lor z$. This, coincidentally, is also a definite clause. Using De Morgan's laws, the example above can be rewritten as $a\land b\land c\land d \dots \implies z$. This seems like a logical representation of how humans process known data and facts. Assuming percepts `a`, `b`, `c`, `d` ... to be true simultaneously, we can infer `z` to also be true at that point in time. There are some interesting aspects of Horn clauses that make algorithmic inference or *resolution* easier.
- Definite clauses can be written as implications: the most important simplification a definite clause provides is that it can be written as an implication. The premise (or the knowledge that leads to the implication) is a conjunction of positive literals, and the conclusion (the implied statement) is also a positive literal. The sentence thus becomes easier to understand. The premise and the conclusion are conventionally called the *body* and the *head* respectively. A single positive literal is called a *fact*.
- Forward chaining and backward chaining can be used for inference from Horn clauses: forward chaining is semantically identical to `AND-OR-Graph-Search` from the chapter on search algorithms. Implementation details will be explained shortly.
- Deciding entailment with Horn clauses is linear in the size of the knowledge base: surprisingly, the forward and backward chaining algorithms traverse each element of the knowledge base at most once, greatly simplifying the problem.

The function `pl_fc_entails` implements forward chaining to see if a knowledge base `KB` entails a symbol `q`. Before we proceed further, note that `pl_fc_entails` doesn't use an ordinary `KB` instance. The knowledge base here is an instance of the `PropDefiniteKB` class, derived from the `PropKB` class, but modified to store definite clauses. The main point of difference is the inclusion of a helper method in `PropDefiniteKB` that returns a list of clauses in the KB that have a given symbol `p` in their premise.
###Code
psource(PropDefiniteKB.clauses_with_premise)
###Output
_____no_output_____
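###Markdown
The method is a one-liner along these lines (a sketch, assuming the `conjuncts` helper):
###Code
def clauses_with_premise_sketch(self, p):
    """Return the implications in the KB that have symbol p in their premise."""
    return [c for c in self.clauses
            if c.op == '==>' and p in conjuncts(c.args[0])]
###Output
_____no_output_____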
###Markdown
Let's now have a look at the `pl_fc_entails` algorithm.
###Code
psource(pl_fc_entails)
###Output
_____no_output_____
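###Markdown
Since the source isn't displayed in this rendering, here is a sketch of the procedure, assuming the `conjuncts` and `is_prop_symbol` helpers; the walkthrough below refers to the same names.
###Code
from collections import defaultdict

def pl_fc_entails_sketch(KB, q):
    """Forward chaining over definite clauses: does KB entail symbol q? (sketch)"""
    # number of premise symbols still unproven, per implication
    count = {c: len(conjuncts(c.args[0])) for c in KB.clauses if c.op == '==>'}
    inferred = defaultdict(bool)
    # start from the known facts (bare symbols)
    agenda = [s for s in KB.clauses if is_prop_symbol(s.op)]
    while agenda:
        p = agenda.pop()
        if p == q:
            return True
        if not inferred[p]:
            inferred[p] = True
            for c in KB.clauses_with_premise(p):
                count[c] -= 1
                if count[c] == 0:        # all premises proven: fire the rule
                    agenda.append(c.args[1])
    return False
###Output
_____no_output_____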
###Markdown
The function accepts a knowledge base `KB` (an instance of `PropDefiniteKB`) and a query `q` as inputs. `count` initially stores the number of symbols in the premise of each sentence in the knowledge base. The `conjuncts` helper function separates a given sentence at conjunctions. `inferred` is initialized as a *boolean* defaultdict. This will be used later to check if we have inferred all premises of each clause of the agenda. `agenda` initially stores a list of clauses that the knowledge base knows to be true. The `is_prop_symbol` helper function checks if the given symbol is a valid propositional logic symbol. We now iterate through `agenda`, popping a symbol `p` on each iteration. If the query `q` is the same as `p`, we know that entailment holds. The agenda is processed, reducing `count` by one for each implication with a premise `p`. A conclusion is added to the agenda when `count` reaches zero. This means we know all the premises of that particular implication to be true. `clauses_with_premise` is a helpful method of the `PropDefiniteKB` class. It returns a list of clauses in the knowledge base that have `p` in their premise. Now that we have an idea of how this function works, let's see a few examples of its usage, but we first need to define our knowledge base. We assume we know the following clauses to be true.
###Code
clauses = ['(B & F)==>E',
'(A & E & F)==>G',
'(B & C)==>F',
'(A & B)==>D',
'(E & F)==>H',
'(H & I)==>J',
'A',
'B',
'C']
###Output
_____no_output_____
###Markdown
We will now `tell` this information to our knowledge base.
###Code
definite_clauses_KB = PropDefiniteKB()
for clause in clauses:
definite_clauses_KB.tell(expr(clause))
###Output
_____no_output_____
###Markdown
We can now check if our knowledge base entails the following queries.
###Code
pl_fc_entails(definite_clauses_KB, expr('G'))
pl_fc_entails(definite_clauses_KB, expr('H'))
pl_fc_entails(definite_clauses_KB, expr('I'))
pl_fc_entails(definite_clauses_KB, expr('J'))
###Output
_____no_output_____
###Markdown
Effective Propositional Model CheckingThe previous segments elucidate the algorithmic procedure for model checking. In this segment, we look at ways of making it computationally efficient. The problem we are trying to solve is conventionally called the _propositional satisfiability problem_, abbreviated as the _SAT_ problem. In layman's terms, if there exists a model that satisfies a given Boolean formula, the formula is called satisfiable. The SAT problem was the first problem to be proven _NP-complete_. The main characteristics of an NP-complete problem are:
- Given a solution to such a problem, it is easy to verify if the solution solves the problem.
- The time required to actually solve the problem using any known algorithm increases exponentially with respect to the size of the problem.

Due to these properties, heuristic and approximation methods are often applied to find solutions to these problems. It is extremely important to be able to solve large scale SAT problems efficiently because many combinatorial problems in computer science can be conveniently reduced to checking the satisfiability of a propositional sentence under some constraints. We will introduce two new algorithms that perform propositional model checking in a computationally effective way. 1. DPLL (Davis-Putnam-Logemann-Loveland) algorithmThis algorithm is very similar to Backtracking-Search. It recursively enumerates possible models in a depth-first fashion with the following improvements over algorithms like `tt_entails`:
1. Early termination: in certain cases, the algorithm can detect the truth value of a statement using just a partially completed model. For example, $(P\lor Q)\land(P\lor R)$ is true if P is true, regardless of other variables. This reduces the search space significantly.
2. Pure symbol heuristic: a symbol that has the same sign (positive or negative) in all clauses is called a _pure symbol_. It isn't difficult to see that any satisfiable model can have the pure symbols assigned such that their parent clauses become _true_. For example, $(P\lor\neg Q)\land(\neg Q\lor\neg R)\land(R\lor P)$ has P and Q as pure symbols, and for the sentence to be true, P _has_ to be true and Q _has_ to be false. The pure symbol heuristic thus simplifies the problem a bit.
3. Unit clause heuristic: in the context of DPLL, clauses with just one literal and clauses with all but one _false_ literal are called unit clauses. If a clause is a unit clause, it can only be satisfied by assigning the necessary value to make the last literal true; we have no other choice. Assigning one unit clause can create another unit clause. For example, when P is false, $(P\lor Q)$ becomes a unit clause, causing _true_ to be assigned to Q. A series of forced assignments derived from previous unit clauses is called _unit propagation_. In this way, this heuristic simplifies the problem further.

The algorithm often employs other tricks to scale up to large problems. However, these tricks are currently out of the scope of this notebook. Refer to section 7.6 of the book for more details. Let's have a look at the algorithm.
###Code
psource(dpll)
###Output
_____no_output_____
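###Markdown
The source isn't displayed in this rendering; here is a sketch of the recursion, assuming the `pl_true`, `extend`, `remove_all`, `find_pure_symbol` and `find_unit_clause` helpers from this code base:
###Code
def dpll_sketch(clauses, symbols, model):
    """Is there a satisfying model extending `model`? (sketch of DPLL)"""
    unknown_clauses = []
    for c in clauses:
        val = pl_true(c, model)
        if val is False:
            return False            # early termination: a clause is already false
        if val is not True:
            unknown_clauses.append(c)
    if not unknown_clauses:
        return model                # every clause is already true
    P, value = find_pure_symbol(symbols, unknown_clauses)
    if P:
        return dpll_sketch(clauses, remove_all(P, symbols), extend(model, P, value))
    P, value = find_unit_clause(clauses, model)
    if P:
        return dpll_sketch(clauses, remove_all(P, symbols), extend(model, P, value))
    P, rest = symbols[0], symbols[1:]
    # branch: try both truth values for the chosen symbol
    return (dpll_sketch(clauses, rest, extend(model, P, True)) or
            dpll_sketch(clauses, rest, extend(model, P, False)))
###Output
_____no_output_____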
###Markdown
The algorithm uses the ideas described above to check satisfiability of a sentence in propositional logic. It recursively calls itself, simplifying the problem at each step. It also uses helper functions `find_pure_symbol` and `find_unit_clause` to carry out steps 2 and 3 above. The `dpll_satisfiable` helper function converts the input clauses to _conjunctive normal form_ and calls the `dpll` function with the correct parameters.
###Code
psource(dpll_satisfiable)
###Output
_____no_output_____
###Markdown
Let's see a few examples of usage.
###Code
A, B, C, D = expr('A, B, C, D')
dpll_satisfiable(A & B & ~C & D)
###Output
_____no_output_____
###Markdown
This is a simple case to highlight that the algorithm actually works.
###Code
dpll_satisfiable((A & B) | (C & ~A) | (B & ~D))
###Output
_____no_output_____
###Markdown
If a particular symbol isn't present in the solution, it means that the solution is independent of the value of that symbol. In this case, the solution is independent of A.
###Code
dpll_satisfiable(A |'<=>'| B)
dpll_satisfiable((A |'<=>'| B) |'==>'| (C & ~A))
dpll_satisfiable((A | (B & C)) |'<=>'| ((A | B) & (A | C)))
###Output
_____no_output_____
###Markdown
2. WalkSAT algorithmThis algorithm is very similar to Hill climbing. On every iteration, the algorithm picks an unsatisfied clause and flips a symbol in the clause. This is similar to finding a neighboring state in the `hill_climbing` algorithm. The symbol to be flipped is decided by an evaluation function that counts the number of unsatisfied clauses. Sometimes, symbols are also flipped randomly to avoid local optima. A subtle balance between greediness and randomness is required. Alternatively, some versions of the algorithm restart with a completely new random assignment if no solution has been found for too long, as a way of getting out of local minima of numbers of unsatisfied clauses. Let's have a look at the algorithm.
###Code
psource(WalkSAT)
###Output
_____no_output_____
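###Markdown
The source isn't displayed in this rendering; here is a sketch of the loop, assuming the `prop_symbols` and `pl_true` helpers:
###Code
import random

def WalkSAT_sketch(clauses, p=0.5, max_flips=10000):
    """Try to satisfy all clauses by repeated greedy/random flips (sketch)."""
    symbols = {sym for clause in clauses for sym in prop_symbols(clause)}
    model = {s: random.choice([True, False]) for s in symbols}
    for _ in range(max_flips):
        unsatisfied = [c for c in clauses if not pl_true(c, model)]
        if not unsatisfied:
            return model                                      # all clauses satisfied
        clause = random.choice(unsatisfied)
        if random.random() < p:
            sym = random.choice(list(prop_symbols(clause)))   # random walk step
        else:
            def sat_count(sym):
                # number of clauses satisfied after flipping sym
                model[sym] = not model[sym]
                count = sum(1 for c in clauses if pl_true(c, model))
                model[sym] = not model[sym]
                return count
            sym = max(prop_symbols(clause), key=sat_count)    # greedy flip
        model[sym] = not model[sym]
    return None                                               # give up after max_flips
###Output
_____no_output_____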
###Markdown
The function takes three arguments:
1. The `clauses` we want to satisfy.
2. The probability `p` of randomly changing a symbol.
3. The maximum number of flips (`max_flips`) the algorithm will run for; if the clauses are still unsatisfied after that, the algorithm returns `None` to denote failure.

The algorithm is identical in concept to Hill climbing and the code isn't difficult to understand. Let's see a few examples of usage.
###Code
A, B, C, D = expr('A, B, C, D')
WalkSAT([A, B, ~C, D], 0.5, 100)
###Output
_____no_output_____
###Markdown
This is a simple case to show that the algorithm converges.
###Code
WalkSAT([A & B, A & C], 0.5, 100)
WalkSAT([A & B, C & D, C & B], 0.5, 100)
WalkSAT([A & B, C | D, ~(D | B)], 0.5, 1000)
###Output
_____no_output_____
###Markdown
This one doesn't give any output because WalkSAT did not find any model where these clauses hold. We can solve these clauses to see that they together form a contradiction, and hence there is no solution. One point of difference between this algorithm and `dpll_satisfiable` is that the two take their inputs differently. For WalkSAT to take complete sentences as input, we can write a helper function that converts the input sentence into conjunctive normal form and then calls WalkSAT with the list of conjuncts of the CNF form of the sentence.
###Code
def WalkSAT_CNF(sentence, p=0.5, max_flips=10000):
    # convert the sentence to CNF and pass its conjuncts to WalkSAT
    return WalkSAT(conjuncts(to_cnf(sentence)), p, max_flips)
###Output
_____no_output_____
###Markdown
Now we can call `WalkSAT_CNF` and `dpll_satisfiable` with the same arguments.
###Code
WalkSAT_CNF((A & B) | (C & ~A) | (B & ~D), 0.5, 1000)
###Output
_____no_output_____
###Markdown
It works! Notice that the solution generated by WalkSAT doesn't omit variables that the sentence doesn't depend upon. If the sentence is independent of a particular variable, the solution contains a random value for that variable because of the stochastic nature of the algorithm. Let's compare the runtime of WalkSAT and DPLL for a few cases. We will use the `%%timeit` magic to do this.
###Code
sentence_1 = A |'<=>'| B
sentence_2 = (A & B) | (C & ~A) | (B & ~D)
sentence_3 = (A | (B & C)) |'<=>'| ((A | B) & (A | C))
%%timeit
dpll_satisfiable(sentence_1)
dpll_satisfiable(sentence_2)
dpll_satisfiable(sentence_3)
%%timeit
WalkSAT_CNF(sentence_1)
WalkSAT_CNF(sentence_2)
WalkSAT_CNF(sentence_3)
###Output
1.19 ms ± 64 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
###Markdown
On average, for solvable cases, `WalkSAT` is considerably faster than `dpll` because, for a small number of variables, `WalkSAT` can reduce the search space significantly. Results can be different for sentences with more symbols, though. Feel free to play around with this to understand the trade-offs of these algorithms better. SATPlan In this section we show how to make plans by logical inference. The basic idea is very simple. It includes the following three steps:
1. Construct a sentence that includes:
    1. A collection of assertions about the initial state.
    2. The successor-state axioms for all the possible actions at each time up to some maximum time t.
    3. The assertion that the goal is achieved at time t.
2. Present the whole sentence to a SAT solver.
3. Assuming a model is found, extract from the model those variables that represent actions and are assigned true. Together they represent a plan to achieve the goals.

Let's have a look at the algorithm.
###Code
psource(SAT_plan)
###Output
_____no_output_____
###Markdown
Let's see few examples of its usage. First we define a transition and then call `SAT_plan`.
###Code
transition = {'A': {'Left': 'A', 'Right': 'B'},
'B': {'Left': 'A', 'Right': 'C'},
'C': {'Left': 'B', 'Right': 'C'}}
print(SAT_plan('A', transition, 'C', 2))
print(SAT_plan('A', transition, 'B', 3))
print(SAT_plan('C', transition, 'A', 3))
###Output
_____no_output_____
###Markdown
Let us do the same for another transition.
###Code
transition = {(0, 0): {'Right': (0, 1), 'Down': (1, 0)},
(0, 1): {'Left': (1, 0), 'Down': (1, 1)},
(1, 0): {'Right': (1, 0), 'Up': (1, 0), 'Left': (1, 0), 'Down': (1, 0)},
(1, 1): {'Left': (1, 0), 'Up': (0, 1)}}
print(SAT_plan((0, 0), transition, (1, 1), 4))
###Output
_____no_output_____
###Markdown
First-Order Logic Knowledge Bases: `FolKB`The class `FolKB` can be used to represent a knowledge base of First-order logic sentences. You would initialize and use it the same way as you would for `PropKB` except that the clauses are first-order definite clauses. We will see how to write such clauses to create a database and query them in the following sections. Criminal KBIn this section we create a `FolKB` based on the following paragraph.The law says that it is a crime for an American to sell weapons to hostile nations. The country Nono, an enemy of America, has some missiles, and all of its missiles were sold to it by Colonel West, who is American.The first step is to extract the facts and convert them into first-order definite clauses. Extracting the facts from data alone is a challenging task. Fortunately, we have a small paragraph and can do extraction and conversion manually. We'll store the clauses in list aptly named `clauses`.
###Code
clauses = []
###Output
_____no_output_____
###Markdown
“... it is a crime for an American to sell weapons to hostile nations”The keywords to look for here are 'crime', 'American', 'sell', 'weapon' and 'hostile'. We use predicate symbols to make meaning of them.* `Criminal(x)`: `x` is a criminal* `American(x)`: `x` is an American* `Sells(x ,y, z)`: `x` sells `y` to `z`* `Weapon(x)`: `x` is a weapon* `Hostile(x)`: `x` is a hostile nationLet us now combine them with appropriate variable naming to depict the meaning of the sentence. The criminal `x` is also the American `x` who sells weapon `y` to `z`, which is a hostile nation.$\text{American}(x) \land \text{Weapon}(y) \land \text{Sells}(x, y, z) \land \text{Hostile}(z) \implies \text{Criminal} (x)$
###Code
clauses.append(expr("(American(x) & Weapon(y) & Sells(x, y, z) & Hostile(z)) ==> Criminal(x)"))
###Output
_____no_output_____
###Markdown
"The country Nono, an enemy of America"We now know that Nono is an enemy of America. We represent these nations using the constant symbols `Nono` and `America`. the enemy relation is show using the predicate symbol `Enemy`.$\text{Enemy}(\text{Nono}, \text{America})$
###Code
clauses.append(expr("Enemy(Nono, America)"))
###Output
_____no_output_____
###Markdown
"Nono ... has some missiles"This states the existence of some missile which is owned by Nono. $\exists x \text{Owns}(\text{Nono}, x) \land \text{Missile}(x)$. We invoke existential instantiation to introduce a new constant `M1` which is the missile owned by Nono.$\text{Owns}(\text{Nono}, \text{M1}), \text{Missile}(\text{M1})$
###Code
clauses.append(expr("Owns(Nono, M1)"))
clauses.append(expr("Missile(M1)"))
###Output
_____no_output_____
###Markdown
"All of its missiles were sold to it by Colonel West"If Nono owns something and it classifies as a missile, then it was sold to Nono by West.$\text{Missile}(x) \land \text{Owns}(\text{Nono}, x) \implies \text{Sells}(\text{West}, x, \text{Nono})$
###Code
clauses.append(expr("(Missile(x) & Owns(Nono, x)) ==> Sells(West, x, Nono)"))
###Output
_____no_output_____
###Markdown
"West, who is American"West is an American.$\text{American}(\text{West})$
###Code
clauses.append(expr("American(West)"))
###Output
_____no_output_____
###Markdown
We also know, from our understanding of language, that missiles are weapons and that an enemy of America counts as “hostile”.$\text{Missile}(x) \implies \text{Weapon}(x), \text{Enemy}(x, \text{America}) \implies \text{Hostile}(x)$
###Code
clauses.append(expr("Missile(x) ==> Weapon(x)"))
clauses.append(expr("Enemy(x, America) ==> Hostile(x)"))
###Output
_____no_output_____
###Markdown
Now that we have converted the information into first-order definite clauses we can create our first-order logic knowledge base.
###Code
crime_kb = FolKB(clauses)
###Output
_____no_output_____
###Markdown
The `subst` helper function substitutes variables with given values in first-order logic statements.This will be useful in later algorithms.It's implementation is quite simple and self-explanatory.
###Code
psource(subst)
###Output
_____no_output_____
###Markdown
Here's an example of how `subst` can be used.
###Code
subst({x: expr('Nono'), y: expr('M1')}, expr('Owns(x, y)'))
###Output
_____no_output_____
###Markdown
Inference in First-Order LogicIn this section we look at a forward chaining and a backward chaining algorithm for `FolKB`. Both aforementioned algorithms rely on a process called unification, a key component of all first-order inference algorithms. UnificationWe sometimes require finding substitutions that make different logical expressions look identical. This process, called unification, is done by the `unify` algorithm. It takes as input two sentences and returns a unifier for them if one exists. A unifier is a dictionary which stores the substitutions required to make the two sentences identical. It does so by recursively unifying the components of a sentence, where the unification of a variable symbol `var` with a constant symbol `Const` is the mapping `{var: Const}`. Let's look at a few examples.
###Code
unify(expr('x'), 3)
unify(expr('A(x)'), expr('A(B)'))
unify(expr('Cat(x) & Dog(Dobby)'), expr('Cat(Bella) & Dog(y)'))
###Output
_____no_output_____
###Markdown
In cases where there is no possible substitution that unifies the two sentences the function return `None`.
###Code
print(unify(expr('Cat(x)'), expr('Dog(Dobby)')))
###Output
_____no_output_____
###Markdown
We also need to take care we do not unintentionally use the same variable name. Unify treats them as a single variable which prevents it from taking multiple value.
###Code
print(unify(expr('Cat(x) & Dog(Dobby)'), expr('Cat(Bella) & Dog(x)')))
###Output
_____no_output_____
###Markdown
Forward Chaining AlgorithmWe consider the simple forward-chaining algorithm presented in Figure 9.3. We look at each rule in the knowledge base and see if the premises can be satisfied. This is done by finding a substitution which unifies each of the premise with a clause in the `KB`. If we are able to unify the premises, the conclusion (with the corresponding substitution) is added to the `KB`. This inferencing process is repeated until either the query can be answered or till no new sentences can be added. We test if the newly added clause unifies with the query in which case the substitution yielded by `unify` is an answer to the query. If we run out of sentences to infer, this means the query was a failure.The function `fol_fc_ask` is a generator which yields all substitutions which validate the query.
###Code
psource(fol_fc_ask)
###Output
_____no_output_____
###Markdown
Let's find out all the hostile nations. Note that we only told the `KB` that Nono was an enemy of America, not that it was hostile.
###Code
answer = fol_fc_ask(crime_kb, expr('Hostile(x)'))
print(list(answer))
###Output
_____no_output_____
###Markdown
The generator returned a single substitution which says that Nono is a hostile nation. See how after adding another enemy nation the generator returns two substitutions.
###Code
crime_kb.tell(expr('Enemy(JaJa, America)'))
answer = fol_fc_ask(crime_kb, expr('Hostile(x)'))
print(list(answer))
###Output
_____no_output_____
###Markdown
Note: `fol_fc_ask` makes changes to the `KB` by adding sentences to it. Backward Chaining AlgorithmThis algorithm works backward from the goal, chaining through rules to find known facts that support the proof. Suppose `goal` is the query we want to find the substitution for. We find rules of the form $\text{lhs} \implies \text{goal}$ in the `KB` and try to prove `lhs`. There may be multiple clauses in the `KB` which give multiple `lhs`. It is sufficient to prove only one of these. But to prove a `lhs` all the conjuncts in the `lhs` of the clause must be proved. This makes it similar to And/Or search. ORThe OR part of the algorithm comes from our choice to select any clause of the form $\text{lhs} \implies \text{goal}$. Looking at all rules's `lhs` whose `rhs` unify with the `goal`, we yield a substitution which proves all the conjuncts in the `lhs`. We use `parse_definite_clause` to attain `lhs` and `rhs` from a clause of the form $\text{lhs} \implies \text{rhs}$. For atomic facts the `lhs` is an empty list.
###Code
psource(fol_bc_or)
###Output
_____no_output_____
###Markdown
ANDThe AND corresponds to proving all the conjuncts in the `lhs`. We need to find a substitution which proves each and every clause in the list of conjuncts.
###Code
psource(fol_bc_and)
###Output
_____no_output_____
###Markdown
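Putting the two halves together, the mutual recursion can be pictured like this (a schematic sketch, not the code printed by `psource` above; it assumes `fetch_rules_for_goal`, `standardize_variables`, `parse_definite_clause`, `unify` and `subst` behave as described in this section):

```python
# Schematic of backward chaining as And/Or search (illustration only;
# assumes the logic.py helpers named in the lead-in are available).
def bc_or(kb, goal, theta):
    for rule in kb.fetch_rules_for_goal(goal):        # OR: any one rule may do
        lhs, rhs = parse_definite_clause(standardize_variables(rule))
        yield from bc_and(kb, lhs, unify(rhs, goal, theta))

def bc_and(kb, goals, theta):
    if theta is None:                                 # earlier unification failed
        return
    if not goals:                                     # AND: every conjunct proved
        yield theta
        return
    first, rest = goals[0], goals[1:]
    for theta1 in bc_or(kb, subst(theta, first), theta):
        yield from bc_and(kb, rest, theta1)
```
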
Now the main function `fol_bc_ask` calls `fol_bc_or` with the substitution initialized as empty. The `ask` method of `FolKB` uses `fol_bc_ask` and fetches the first substitution returned by the generator to answer the query. Let's query the knowledge base we created from `clauses` to find hostile nations.
###Code
# Rebuild KB because running fol_fc_ask would add new facts to the KB
crime_kb = FolKB(clauses)
crime_kb.ask(expr('Hostile(x)'))
###Output
_____no_output_____
###Markdown
You may notice some new variables in the substitution. They are introduced to standardize the variable names to prevent naming problems as discussed in the [Unification section](#Unification).

Appendix: The Implementation of `|'==>'|`

Consider the `Expr` formed by this syntax:
###Code
P |'==>'| ~Q
###Output
_____no_output_____
###Markdown
What is the funny `|'==>'|` syntax? The trick is that "`|`" is just the regular Python or-operator, and so is exactly equivalent to this:
###Code
(P | '==>') | ~Q
###Output
_____no_output_____
###Markdown
In other words, there are two applications of or-operators. Here's the first one:
###Code
P | '==>'
###Output
_____no_output_____
###Markdown
What is going on here is that the `__or__` method of `Expr` serves a dual purpose. If the right-hand-side is another `Expr` (or a number), then the result is an `Expr`, as in `(P | Q)`. But if the right-hand-side is a string, then the string is taken to be an operator, and we create a node in the abstract syntax tree corresponding to a partially-filled `Expr`, one where we know the left-hand-side is `P` and the operator is `==>`, but we don't yet know the right-hand-side.The `PartialExpr` class has an `__or__` method that says to create an `Expr` node with the right-hand-side filled in. Here we can see the combination of the `PartialExpr` with `Q` to create a complete `Expr`:
###Code
partial = PartialExpr('==>', P)
partial | ~Q
###Output
_____no_output_____
###Markdown
This [trick](http://code.activestate.com/recipes/384122-infix-operators/) is due to [Ferdinand Jamitzky](http://code.activestate.com/recipes/users/98863/), with a modification by [C. G. Vedant](https://github.com/Chipe1), who suggested using a string inside the or-bars.

Appendix: The Implementation of `expr`

How does `expr` parse a string into an `Expr`? It turns out there are two tricks (besides the Jamitzky/Vedant trick):
1. We do a string substitution, replacing "`==>`" with "`|'==>'|`" (and likewise for other operators).
2. We `eval` the resulting string in an environment in which every identifier is bound to a symbol with that identifier as the `op`.

In other words,
###Code
expr('~(P & Q) ==> (~P | ~Q)')
###Output
_____no_output_____
###Markdown
is equivalent to doing:
###Code
P, Q = symbols('P, Q')
~(P & Q) |'==>'| (~P | ~Q)
###Output
_____no_output_____
###Markdown
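Concretely, the two tricks can be imitated in a few lines (a simplified sketch; the real `expr` in `logic.py` handles all the arrow operators, not just `==>`):

```python
# Trick 1: rewrite the arrow into the or-bar notation.
s = '~(P & Q) ==> (~P | ~Q)'.replace('==>', "|'==>'|")
print(s)  # ~(P & Q) |'==>'| (~P | ~Q)

# Trick 2: evaluate the string with unknown names auto-bound to Symbols.
class SymbolTable(dict):
    def __missing__(self, key):
        self[key] = Symbol(key)   # first lookup of a name creates a Symbol
        return self[key]

eval(s, {}, SymbolTable())        # builds the same Expr as expr(...) does
```
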
One thing to beware of: this puts `==>` at the same precedence level as `"|"`, which is not quite right. For example, we get this:
###Code
P & Q |'==>'| P | Q
###Output
_____no_output_____
###Markdown
which is probably not what we meant; when in doubt, put in extra parens:
###Code
(P & Q) |'==>'| (P | Q)
###Output
_____no_output_____
###Markdown
Examples
###Code
from notebook import Canvas_fol_bc_ask
canvas_bc_ask = Canvas_fol_bc_ask('canvas_bc_ask', crime_kb, expr('Criminal(x)'))
###Output
_____no_output_____
###Markdown
Logic: `logic.py`; Chapters 6-8 This notebook describes the [logic.py](https://github.com/aimacode/aima-python/blob/master/logic.py) module, which covers Chapters 6 (Logical Agents), 7 (First-Order Logic) and 8 (Inference in First-Order Logic) of *[Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu)*. See the [intro notebook](https://github.com/aimacode/aima-python/blob/master/intro.ipynb) for instructions.We'll start by looking at `Expr`, the data type for logical sentences, and the convenience function `expr`. We'll be covering two types of knowledge bases, `PropKB` - Propositional logic knowledge base and `FolKB` - First order logic knowledge base. We will construct a propositional knowledge base of a specific situation in the Wumpus World. We will next go through the `tt_entails` function and experiment with it a bit. The `pl_resolution` and `pl_fc_entails` functions will come next. We'll study forward chaining and backward chaining algorithms for `FolKB` and use them on the `crime_kb` knowledge base.But the first step is to load the code:
###Code
from utils import *
from logic import *
from notebook import psource
###Output
_____no_output_____
###Markdown
Logical Sentences The `Expr` class is designed to represent any kind of mathematical expression. The simplest type of `Expr` is a symbol, which can be defined with the function `Symbol`:
###Code
Symbol('x')
###Output
_____no_output_____
###Markdown
Or we can define multiple symbols at the same time with the function `symbols`:
###Code
(x, y, P, Q, f) = symbols('x, y, P, Q, f')
###Output
_____no_output_____
###Markdown
We can combine `Expr`s with the regular Python infix and prefix operators. Here's how we would form the logical sentence "P and not Q":
###Code
P & ~Q
###Output
_____no_output_____
###Markdown
This works because the `Expr` class overloads the `&` operator with this definition:

```python
def __and__(self, other):
    return Expr('&', self, other)
```

and does similar overloads for the other operators. An `Expr` has two fields: `op` for the operator, which is always a string, and `args` for the arguments, which is a tuple of 0 or more expressions. By "expression," I mean either an instance of `Expr`, or a number. Let's take a look at the fields for some `Expr` examples:
###Code
sentence = P & ~Q
sentence.op
sentence.args
P.op
P.args
Pxy = P(x, y)
Pxy.op
Pxy.args
###Output
_____no_output_____
###Markdown
It is important to note that the `Expr` class does not define the *logic* of Propositional Logic sentences; it just gives you a way to *represent* expressions. Think of an `Expr` as an [abstract syntax tree](https://en.wikipedia.org/wiki/Abstract_syntax_tree). Each of the `args` in an `Expr` can be either a symbol, a number, or a nested `Expr`. We can nest these trees to any depth. Here is a deeply nested `Expr`:
###Code
3 * f(x, y) + P(y) / 2 + 1
###Output
_____no_output_____
###Markdown
Operators for Constructing Logical Sentences

Here is a table of the operators that can be used to form sentences. Note that we have a problem: we want to use Python operators to make sentences, so that our programs (and our interactive sessions like the one here) will show simple code. But Python does not allow implication arrows as operators, so for now we have to use a more verbose notation that Python does allow: `|'==>'|` instead of just `==>`. Alternately, you can always use the more verbose `Expr` constructor forms:

| Operation | Book | Python Infix Input | Python Output | Python `Expr` Input |
|---|---|---|---|---|
| Negation | ¬ P | `~P` | `~P` | `Expr('~', P)` |
| And | P ∧ Q | `P & Q` | `P & Q` | `Expr('&', P, Q)` |
| Or | P ∨ Q | `P` &#124; `Q` | `P` &#124; `Q` | `Expr('`&#124;`', P, Q)` |
| Inequality (Xor) | P ≠ Q | `P ^ Q` | `P ^ Q` | `Expr('^', P, Q)` |
| Implication | P → Q | `P` &#124;`'==>'`&#124; `Q` | `P ==> Q` | `Expr('==>', P, Q)` |
| Reverse Implication | Q ← P | `Q` &#124;`'<=='`&#124; `P` | `Q <== P` | `Expr('<==', Q, P)` |
| Equivalence | P ↔ Q | `P` &#124;`'<=>'`&#124; `Q` | `P <=> Q` | `Expr('<=>', P, Q)` |

Here's an example of defining a sentence with an implication arrow:
###Code
~(P & Q) |'==>'| (~P | ~Q)
###Output
_____no_output_____
###Markdown
`expr`: a Shortcut for Constructing SentencesIf the `|'==>'|` notation looks ugly to you, you can use the function `expr` instead:
###Code
expr('~(P & Q) ==> (~P | ~Q)')
###Output
_____no_output_____
###Markdown
`expr` takes a string as input, and parses it into an `Expr`. The string can contain arrow operators: `==>`, `<==`, `<=>`, which are handled as if they were regular Python infix operators. And `expr` automatically defines any symbols, so you don't need to pre-define them:
###Code
expr('sqrt(b ** 2 - 4 * a * c)')
###Output
_____no_output_____
###Markdown
For now that's all you need to know about `expr`. If you are interested, we explain the messy details of how `expr` is implemented and how `|'==>'|` is handled in the appendix.

Propositional Knowledge Bases: `PropKB`

The class `PropKB` can be used to represent a knowledge base of propositional logic sentences.We see that the class `KB` has four methods, apart from `__init__`. A point to note here: the `ask` method simply calls the `ask_generator` method. Thus, this one has already been implemented, and what you'll have to actually implement when you create your own knowledge base class (though you'll probably never need to, considering the ones we've created for you) will be the `ask_generator` function and not the `ask` function itself.

The class `PropKB` now.

* `__init__(self, sentence=None)` : The constructor `__init__` creates a single field `clauses` which will be a list of all the sentences of the knowledge base. Note that each one of these sentences will be a 'clause' i.e. a sentence which is made up of only literals and `or`s.
* `tell(self, sentence)` : When you want to add a sentence to the KB, you use the `tell` method. This method takes a sentence, converts it to its CNF, extracts all the clauses, and adds all these clauses to the `clauses` field. So, you need not worry about `tell`ing only clauses to the knowledge base. You can `tell` the knowledge base a sentence in any form that you wish; converting it to CNF and adding the resulting clauses will be handled by the `tell` method.
* `ask_generator(self, query)` : The `ask_generator` function is used by the `ask` function. It calls the `tt_entails` function, which in turn returns `True` if the knowledge base entails the query and `False` otherwise. The `ask_generator` itself returns an empty dict `{}` if the knowledge base entails the query and `None` otherwise. This might seem a little bit weird to you. After all, it makes more sense just to return a `True` or a `False` instead of the `{}` or `None`. But this is done to maintain consistency with the way things are in First-Order Logic, where an `ask_generator` function is supposed to return all the substitutions that make the query true. Hence the dict, to return all these substitutions. I will mostly be using the `ask` function which returns a `{}` or a `False`, but if you don't like this, you can always use the `ask_if_true` function which returns a `True` or a `False`.
* `retract(self, sentence)` : This function removes all the clauses of the sentence given, from the knowledge base. Like the `tell` function, you don't have to pass clauses to remove them from the knowledge base; any sentence will do fine. The function will take care of converting that sentence to clauses and then remove those.

Wumpus World KB

Let us create a `PropKB` for the wumpus world with the sentences mentioned in `section 7.4.3`.
###Code
wumpus_kb = PropKB()
###Output
_____no_output_____
###Markdown
We define the symbols we use in our clauses.$P_{x, y}$ is true if there is a pit in `[x, y]`.$B_{x, y}$ is true if the agent senses breeze in `[x, y]`.
###Code
P11, P12, P21, P22, P31, B11, B21 = expr('P11, P12, P21, P22, P31, B11, B21')
###Output
_____no_output_____
###Markdown
Now we tell sentences based on `section 7.4.3`.There is no pit in `[1,1]`.
###Code
wumpus_kb.tell(~P11)
###Output
_____no_output_____
###Markdown
A square is breezy if and only if there is a pit in a neighboring square. This has to be stated for each square but for now, we include just the relevant squares.
###Code
wumpus_kb.tell(B11 | '<=>' | ((P12 | P21)))
wumpus_kb.tell(B21 | '<=>' | ((P11 | P22 | P31)))
###Output
_____no_output_____
###Markdown
Now we include the breeze percepts for the first two squares leading up to the situation in `Figure 7.3(b)`
###Code
wumpus_kb.tell(~B11)
wumpus_kb.tell(B21)
###Output
_____no_output_____
###Markdown
We can check the clauses stored in a `KB` by accessing its `clauses` variable
###Code
wumpus_kb.clauses
###Output
_____no_output_____
###Markdown
We see that the equivalence $B_{1, 1} \iff (P_{1, 2} \lor P_{2, 1})$ was automatically converted to two implications which were in turn converted to CNF which is stored in the `KB`.$B_{1, 1} \iff (P_{1, 2} \lor P_{2, 1})$ was split into $B_{1, 1} \implies (P_{1, 2} \lor P_{2, 1})$ and $B_{1, 1} \Longleftarrow (P_{1, 2} \lor P_{2, 1})$.$B_{1, 1} \implies (P_{1, 2} \lor P_{2, 1})$ was converted to $P_{1, 2} \lor P_{2, 1} \lor \neg B_{1, 1}$.$B_{1, 1} \Longleftarrow (P_{1, 2} \lor P_{2, 1})$ was converted to $\neg (P_{1, 2} \lor P_{2, 1}) \lor B_{1, 1}$ which becomes $(\neg P_{1, 2} \lor B_{1, 1}) \land (\neg P_{2, 1} \lor B_{1, 1})$ after applying De Morgan's laws and distributing the disjunction.$B_{2, 1} \iff (P_{1, 1} \lor P_{2, 2} \lor P_{3, 1})$ is converted in a similar manner.

Inference in Propositional Knowledge Base

In this section we will look at two algorithms to check if a sentence is entailed by the `KB`. Our goal is to decide whether $\text{KB} \vDash \alpha$ for some sentence $\alpha$.

Truth Table Enumeration

It is a model-checking approach which, as the name suggests, enumerates all possible models in which the `KB` is true and checks if $\alpha$ is also true in these models. We list the $n$ symbols in the `KB` and enumerate the $2^{n}$ models in a depth-first manner and check the truth of `KB` and $\alpha$.
###Code
psource(tt_check_all)
###Output
_____no_output_____
###Markdown
The algorithm basically computes every line of the truth table $KB\implies \alpha$ and checks if it is true everywhere. If symbols are defined, the routine recursively constructs every combination of truth values for the symbols and then checks whether `model` is consistent with `kb`. The given models correspond to the lines in the truth table which have a `true` in the KB column, and for these lines it checks whether the query evaluates to true, `result = pl_true(alpha, model)`. In short, `tt_check_all` verifies, for each `model`, that `pl_true(kb, model) => pl_true(alpha, model)`; equivalently, it checks that `pl_true(kb, model) & ~pl_true(alpha, model)` never holds, that is, that the knowledge base and the negation of the query are jointly unsatisfiable. `tt_entails()` just extracts the symbols from the query and calls `tt_check_all()` with the proper parameters.
###Code
psource(tt_entails)
###Output
_____no_output_____
###Markdown
Keep in mind that for two symbols P and Q, P => Q is false only when P is `True` and Q is `False`.Example usage of `tt_entails()`:
###Code
tt_entails(P & Q, Q)
###Output
_____no_output_____
###Markdown
P & Q is True only when both P and Q are True. Hence, (P & Q) => Q is True
###Code
tt_entails(P | Q, Q)
tt_entails(P | Q, P)
###Output
_____no_output_____
###Markdown
If we know that P | Q is true, we cannot infer the truth values of P and Q. Hence (P | Q) => Q is False and so is (P | Q) => P.
###Code
(A, B, C, D, E, F, G) = symbols('A, B, C, D, E, F, G')
tt_entails(A & (B | C) & D & E & ~(F | G), A & D & E & ~F & ~G)
###Output
_____no_output_____
###Markdown
We can see that for the KB to be true, A, D, E have to be True and F and G have to be False.Nothing can be said about B or C. Coming back to our problem, note that `tt_entails()` takes an `Expr` which is a conjunction of clauses as the input instead of the `KB` itself. You can use the `ask_if_true()` method of `PropKB` which does all the required conversions. Let's check what `wumpus_kb` tells us about $P_{1, 1}$.
###Code
wumpus_kb.ask_if_true(~P11), wumpus_kb.ask_if_true(P11)
###Output
_____no_output_____
###Markdown
Looking at Figure 7.9 we see that in all models in which the knowledge base is `True`, $P_{1, 1}$ is `False`. It makes sense that `ask_if_true()` returns `True` for $\alpha = \neg P_{1, 1}$ and `False` for $\alpha = P_{1, 1}$. This begs the question: what if $\alpha$ is `True` in only a portion of all models? Do we return `True` or `False`? Such a result doesn't rule out the possibility of $\alpha$ being `True`, but $\alpha$ is not entailed by the `KB`, so we return `False` in such cases. We can see this is the case for $P_{2, 2}$ and $P_{3, 1}$.
###Code
wumpus_kb.ask_if_true(~P22), wumpus_kb.ask_if_true(P22)
###Output
_____no_output_____
###Markdown
Proof by Resolution

Recall that our goal is to check whether $\text{KB} \vDash \alpha$ i.e. is $\text{KB} \implies \alpha$ true in every model. Suppose we wanted to check if $P \implies Q$ is valid. We check the satisfiability of $\neg (P \implies Q)$, which can be rewritten as $P \land \neg Q$. If $P \land \neg Q$ is unsatisfiable, then $P \implies Q$ must be true in all models. This gives us the result "$\text{KB} \vDash \alpha$ if and only if $\text{KB} \land \neg \alpha$ is unsatisfiable".

This technique corresponds to proof by contradiction, a standard mathematical proof technique. We assume $\alpha$ to be false and show that this leads to a contradiction with known axioms in $\text{KB}$. We obtain a contradiction by making valid inferences using inference rules. In this proof we use a single inference rule, resolution, which states $(l_1 \lor \dots \lor l_k) \land (m_1 \lor \dots \lor m_n) \land (l_i \iff \neg m_j) \implies l_1 \lor \dots \lor l_{i - 1} \lor l_{i + 1} \lor \dots \lor l_k \lor m_1 \lor \dots \lor m_{j - 1} \lor m_{j + 1} \lor \dots \lor m_n$. Applying resolution yields a clause which we add to the KB. We keep doing this until:

* There are no new clauses that can be added, in which case $\text{KB} \nvDash \alpha$.
* Two clauses resolve to yield the empty clause, in which case $\text{KB} \vDash \alpha$.

The empty clause is equivalent to False because it arises only from resolving two complementary unit clauses such as $P$ and $\neg P$, which is a contradiction as both $P$ and $\neg P$ can't be True at the same time.

There is one catch, however: the algorithm that implements proof by resolution cannot handle complex sentences. Implications and bi-implications have to be simplified into simpler clauses. We already know that *every sentence of a propositional logic is logically equivalent to a conjunction of clauses*. We will use this fact to our advantage and simplify the input sentence into the **conjunctive normal form** (CNF), which is a conjunction of disjunctions of literals. For example:$$(A\lor B)\land (\neg B\lor C\lor\neg D)\land (D\lor\neg E)$$This is equivalent to the POS (Product of sums) form in digital electronics.

Here's an outline of how the conversion is done:

1. Convert bi-implications to implications: $\alpha\iff\beta$ can be written as $(\alpha\implies\beta)\land(\beta\implies\alpha)$. This also applies to compound sentences: $\alpha\iff(\beta\lor\gamma)$ can be written as $(\alpha\implies(\beta\lor\gamma))\land((\beta\lor\gamma)\implies\alpha)$.
2. Convert implications to their logical equivalents: $\alpha\implies\beta$ can be written as $\neg\alpha\lor\beta$.
3. Move negation inwards: CNF requires atomic literals, so negation cannot appear on a compound statement. De Morgan's laws will be helpful here: $\neg(\alpha\land\beta)\equiv(\neg\alpha\lor\neg\beta)$ and $\neg(\alpha\lor\beta)\equiv(\neg\alpha\land\neg\beta)$.
4. Distribute disjunction over conjunction: disjunction and conjunction are distributive over each other. Now that we only have conjunctions, disjunctions and negations in our expression, we distribute disjunctions over conjunctions wherever possible, as this gives us a sentence which is a conjunction of simpler clauses, which is what we wanted in the first place. We need a term of the form $(\alpha_{1}\lor\alpha_{2}\lor\alpha_{3}...)\land(\beta_{1}\lor\beta_{2}\lor\beta_{3}...)\land(\gamma_{1}\lor\gamma_{2}\lor\gamma_{3}...)\land...$

The `to_cnf` function executes this conversion using helper subroutines.
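As a worked instance of the four steps, consider $A \iff (B \lor C)$:

$$
\begin{aligned}
A \iff (B \lor C) &\equiv (A \implies (B \lor C)) \land ((B \lor C) \implies A) && \text{(step 1)}\\
&\equiv (\neg A \lor B \lor C) \land (\neg (B \lor C) \lor A) && \text{(step 2)}\\
&\equiv (\neg A \lor B \lor C) \land ((\neg B \land \neg C) \lor A) && \text{(step 3)}\\
&\equiv (\neg A \lor B \lor C) \land (\neg B \lor A) \land (\neg C \lor A) && \text{(step 4)}
\end{aligned}
$$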
###Code
psource(to_cnf)
###Output
_____no_output_____
###Markdown
`to_cnf` calls three subroutines.`eliminate_implications` converts bi-implications and implications to their logical equivalents.`move_not_inwards` removes negations from compound statements and moves them inwards using De Morgan's laws.`distribute_and_over_or` distributes disjunctions over conjunctions.Run the cells below for implementation details.
###Code
%psource eliminate_implications
%psource move_not_inwards
%psource distribute_and_over_or
###Output
_____no_output_____
###Markdown
Let's convert some sentences to see how it works
###Code
A, B, C, D = expr('A, B, C, D')
to_cnf(A |'<=>'| B)
to_cnf(A |'<=>'| (B & C))
to_cnf(A & (B | (C & D)))
to_cnf((A |'<=>'| ~B) |'==>'| (C | ~D))
###Output
_____no_output_____
###Markdown
Coming back to our resolution problem, we can see how the `to_cnf` function is utilized here
###Code
psource(pl_resolution)
pl_resolution(wumpus_kb, ~P11), pl_resolution(wumpus_kb, P11)
pl_resolution(wumpus_kb, ~P22), pl_resolution(wumpus_kb, P22)
###Output
_____no_output_____
###Markdown
Forward and backward chaining

Previously, we said we will look at two algorithms to check if a sentence is entailed by the `KB`, but here's a third one. The difference here is that our goal now is to determine if a knowledge base of definite clauses entails a single proposition symbol *q* - the query. There is a catch, however: the knowledge base can only contain **Horn clauses**.

Horn Clauses

Horn clauses can be defined as a *disjunction* of *literals* with **at most** one positive literal. A Horn clause with exactly one positive literal is called a *definite clause*. A Horn clause might look like $\neg a\lor\neg b\lor\neg c\lor\neg d... \lor z$. This, coincidentally, is also a definite clause. Using De Morgan's laws and the equivalence $\neg\alpha\lor\beta\equiv\alpha\implies\beta$, the example above can be rewritten as $a\land b\land c\land d ... \implies z$. This seems like a logical representation of how humans process known data and facts. Assuming percepts `a`, `b`, `c`, `d` ... to be true simultaneously, we can infer `z` to also be true at that point in time. There are some interesting aspects of Horn clauses that make algorithmic inference or *resolution* easier.

- Definite clauses can be written as implications: The most important simplification a definite clause provides is that it can be written as an implication. The premise (or the knowledge that leads to the implication) is a conjunction of positive literals. The conclusion (the implied statement) is also a positive literal. The sentence thus becomes easier to understand. The premise and the conclusion are conventionally called the *body* and the *head* respectively. A single positive literal is called a *fact*.
- Forward chaining and backward chaining can be used for inference from Horn clauses: Forward chaining is semantically identical to `AND-OR-Graph-Search` from the chapter on search algorithms. Implementational details will be explained shortly.
- Deciding entailment with Horn clauses is linear in the size of the knowledge base: Surprisingly, the forward and backward chaining algorithms traverse each element of the knowledge base at most once, greatly simplifying the problem.

The function `pl_fc_entails` implements forward chaining to see if a knowledge base `KB` entails a symbol `q`. Before we proceed further, note that `pl_fc_entails` doesn't use an ordinary `KB` instance. The knowledge base here is an instance of the `PropDefiniteKB` class, derived from the `PropKB` class, but modified to store definite clauses. The main point of difference arises in the inclusion of a helper method to `PropDefiniteKB` that returns a list of clauses in KB that have a given symbol `p` in their premise.
###Code
psource(PropDefiniteKB.clauses_with_premise)
###Output
_____no_output_____
###Markdown
Let's now have a look at the `pl_fc_entails` algorithm.
###Code
psource(pl_fc_entails)
###Output
_____no_output_____
###Markdown
The function accepts a knowledge base `KB` (an instance of `PropDefiniteKB`) and a query `q` as inputs.`count` initially stores the number of symbols in the premise of each sentence in the knowledge base.The `conjuncts` helper function separates a given sentence at conjunctions.`inferred` is initialized as a *boolean* defaultdict. This will be used later to check if we have inferred all premises of each clause of the agenda.`agenda` initially stores a list of clauses that the knowledge base knows to be true.The `is_prop_symbol` helper function checks if the given symbol is a valid propositional logic symbol.We now iterate through `agenda`, popping a symbol `p` on each iteration.If the query `q` is the same as `p`, we know that entailment holds.The agenda is processed, reducing `count` by one for each implication with a premise `p`.A conclusion is added to the agenda when `count` reaches zero. This means we know all the premises of that particular implication to be true.`clauses_with_premise` is a helpful method of the `PropKB` class.It returns a list of clauses in the knowledge base that have `p` in their premise.Now that we have an idea of how this function works, let's see a few examples of its usage, but we first need to define our knowledge base. We assume we know the following clauses to be true.
###Code
clauses = ['(B & F)==>E',
'(A & E & F)==>G',
'(B & C)==>F',
'(A & B)==>D',
'(E & F)==>H',
'(H & I)==>J',
'A',
'B',
'C']
###Output
_____no_output_____
###Markdown
We will now `tell` this information to our knowledge base.
###Code
definite_clauses_KB = PropDefiniteKB()
for clause in clauses:
definite_clauses_KB.tell(expr(clause))
###Output
_____no_output_____
###Markdown
We can now check if our knowledge base entails the following queries.
###Code
pl_fc_entails(definite_clauses_KB, expr('G'))
pl_fc_entails(definite_clauses_KB, expr('H'))
pl_fc_entails(definite_clauses_KB, expr('I'))
pl_fc_entails(definite_clauses_KB, expr('J'))
###Output
_____no_output_____
###Markdown
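Before moving on, the bookkeeping described above can be condensed into a short sketch over hand-rolled `(premises, conclusion)` pairs (an illustration of the idea only, not the `pl_fc_entails` source, which works on `Expr` clauses):

```python
from collections import deque

def sketch_fc_entails(rules, facts, q):
    """rules: list of (premises, conclusion); facts: symbols known true."""
    count = {i: len(prem) for i, (prem, _) in enumerate(rules)}
    inferred = set()
    agenda = deque(facts)
    while agenda:
        p = agenda.popleft()
        if p == q:
            return True
        if p not in inferred:
            inferred.add(p)
            for i, (prem, concl) in enumerate(rules):
                if p in prem:
                    count[i] -= 1            # one fewer premise left to prove
                    if count[i] == 0:        # all premises proved: fire the rule
                        agenda.append(concl)
    return False

rules = [(['B', 'F'], 'E'), (['B', 'C'], 'F'), (['E', 'F'], 'H')]
print(sketch_fc_entails(rules, ['A', 'B', 'C'], 'H'))   # True
```
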
Effective Propositional Model Checking

The previous segments elucidate the algorithmic procedure for model checking. In this segment, we look at ways of making it computationally efficient. The problem we are trying to solve is conventionally called the _propositional satisfiability problem_, abbreviated as the _SAT_ problem. In layman's terms, if there exists a model that satisfies a given Boolean formula, the formula is called satisfiable. The SAT problem was the first problem to be proven _NP-complete_. The main characteristics of an NP-complete problem are:

- Given a solution to such a problem, it is easy to verify if the solution solves the problem.
- The time required to actually solve the problem using any known algorithm increases exponentially with respect to the size of the problem.

Due to these properties, heuristic and approximational methods are often applied to find solutions to these problems. It is extremely important to be able to solve large scale SAT problems efficiently because many combinatorial problems in computer science can be conveniently reduced to checking the satisfiability of a propositional sentence under some constraints. We will introduce two new algorithms that perform propositional model checking in a computationally effective way.

1. DPLL (Davis-Putnam-Logemann-Loveland) algorithm

This algorithm is very similar to Backtracking-Search. It recursively enumerates possible models in a depth-first fashion with the following improvements over algorithms like `tt_entails`:

1. Early termination: In certain cases, the algorithm can detect the truth value of a statement using just a partially completed model. For example, $(P\lor Q)\land(P\lor R)$ is true if P is true, regardless of other variables. This reduces the search space significantly.
2. Pure symbol heuristic: A symbol that has the same sign (positive or negative) in all clauses is called a _pure symbol_. It isn't difficult to see that any satisfying model will have the pure symbols assigned such that the clauses containing them become _true_. For example, $(P\lor\neg Q)\land(\neg Q\lor\neg R)\land(R\lor P)$ has P and Q as pure symbols, and for the sentence to be true, P _has_ to be true and Q _has_ to be false. The pure symbol heuristic thus simplifies the problem a bit.
3. Unit clause heuristic: In the context of DPLL, clauses with just one literal, and clauses in which all literals but one are _false_, are called unit clauses. If a clause is a unit clause, it can only be satisfied by assigning the necessary value to make the last literal true. We have no other choice. Assigning one unit clause can create another unit clause. For example, when P is false, $(P\lor Q)$ becomes a unit clause, causing _true_ to be assigned to Q. A series of forced assignments derived from previous unit clauses is called _unit propagation_. In this way, this heuristic simplifies the problem further.

The algorithm often employs other tricks to scale up to large problems. However, these tricks are currently out of the scope of this notebook. Refer to section 7.6 of the book for more details. Let's have a look at the algorithm.
###Code
psource(dpll)
###Output
_____no_output_____
###Markdown
The algorithm uses the ideas described above to check satisfiability of a sentence in propositional logic.It recursively calls itself, simplifying the problem at each step. It also uses helper functions `find_pure_symbol` and `find_unit_clause` to carry out steps 2 and 3 above.The `dpll_satisfiable` helper function converts the input clauses to _conjunctive normal form_ and calls the `dpll` function with the correct parameters.
###Code
psource(dpll_satisfiable)
###Output
_____no_output_____
###Markdown
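As a quick illustration of the pure-symbol idea, here is a toy check over clauses written as sets of signed literal strings (a sketch only, not the library's `find_pure_symbol`):

```python
def pure_symbols(clauses):
    """Symbols occurring with only one sign across all clauses."""
    pos = {l for c in clauses for l in c if not l.startswith('~')}
    neg = {l[1:] for c in clauses for l in c if l.startswith('~')}
    return (pos - neg) | (neg - pos)

# (P | ~Q) & (~Q | ~R) & (R | P): P and Q are pure, R appears with both signs.
print(pure_symbols([{'P', '~Q'}, {'~Q', '~R'}, {'R', 'P'}]))  # {'P', 'Q'} (order may vary)
```
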
Let's see a few examples of usage.
###Code
A, B, C, D = expr('A, B, C, D')
dpll_satisfiable(A & B & ~C & D)
###Output
_____no_output_____
###Markdown
This is a simple case to highlight that the algorithm actually works.
###Code
dpll_satisfiable((A & B) | (C & ~A) | (B & ~D))
###Output
_____no_output_____
###Markdown
If a particular symbol isn't present in the solution, it means that the solution is independent of the value of that symbol.In this case, the solution is independent of A.
###Code
dpll_satisfiable(A |'<=>'| B)
dpll_satisfiable((A |'<=>'| B) |'==>'| (C & ~A))
dpll_satisfiable((A | (B & C)) |'<=>'| ((A | B) & (A | C)))
###Output
_____no_output_____
###Markdown
2. WalkSAT algorithm

This algorithm is very similar to Hill climbing. On every iteration, the algorithm picks an unsatisfied clause and flips a symbol in the clause. This is similar to finding a neighboring state in the `hill_climbing` algorithm. The symbol to be flipped is decided by an evaluation function that counts the number of unsatisfied clauses. Sometimes, symbols are also flipped randomly, to avoid local optima. A subtle balance between greediness and randomness is required. Alternatively, some versions of the algorithm restart with a completely new random assignment if no solution has been found for too long, as a way of escaping local minima in the number of unsatisfied clauses. Let's have a look at the algorithm.
###Code
psource(WalkSAT)
###Output
_____no_output_____
###Markdown
The function takes three arguments:1. The `clauses` we want to satisfy.2. The probability `p` of randomly changing a symbol.3. The maximum number of flips (`max_flips`) the algorithm will run for. If the clauses are still unsatisfied, the algorithm returns `None` to denote failure.The algorithm is identical in concept to Hill climbing and the code isn't difficult to understand.Let's see a few examples of usage.
###Code
A, B, C, D = expr('A, B, C, D')
WalkSAT([A, B, ~C, D], 0.5, 100)
###Output
_____no_output_____
###Markdown
This is a simple case to show that the algorithm converges.
###Code
WalkSAT([A & B, A & C], 0.5, 100)
WalkSAT([A & B, C & D, C & B], 0.5, 100)
WalkSAT([A & B, C | D, ~(D | B)], 0.5, 1000)
###Output
_____no_output_____
###Markdown
This one doesn't give any output because WalkSAT did not find any model where these clauses hold. Working through these clauses by hand shows that together they form a contradiction, so there is no solution to find. One point of difference between this algorithm and `dpll_satisfiable` is the form of the input: `WalkSAT` takes a list of clauses, while `dpll_satisfiable` takes a complete sentence. For WalkSAT to take complete sentences as input, we can write a helper function that converts the input sentence into conjunctive normal form and then calls WalkSAT with the list of conjuncts of the CNF form of the sentence.
###Code
def WalkSAT_CNF(sentence, p=0.5, max_flips=10000):
    return WalkSAT(conjuncts(to_cnf(sentence)), p, max_flips)
###Output
_____no_output_____
###Markdown
Now we can call `WalkSAT_CNF` and `dpll_satisfiable` with the same arguments.
###Code
WalkSAT_CNF((A & B) | (C & ~A) | (B & ~D), 0.5, 1000)
###Output
_____no_output_____
###Markdown
It works!Notice that the solution generated by WalkSAT doesn't omit variables that the sentence doesn't depend upon. If the sentence is independent of a particular variable, the solution contains a random value for that variable because of the stochastic nature of the algorithm.Let's compare the runtime of WalkSAT and DPLL for a few cases. We will use the `%%timeit` magic to do this.
###Code
sentence_1 = A |'<=>'| B
sentence_2 = (A & B) | (C & ~A) | (B & ~D)
sentence_3 = (A | (B & C)) |'<=>'| ((A | B) & (A | C))
%%timeit
dpll_satisfiable(sentence_1)
dpll_satisfiable(sentence_2)
dpll_satisfiable(sentence_3)
%%timeit
WalkSAT_CNF(sentence_1)
WalkSAT_CNF(sentence_2)
WalkSAT_CNF(sentence_3)
###Output
100 loops, best of 3: 1.91 ms per loop
###Markdown
On average, for solvable cases, `WalkSAT` is considerably faster than `dpll` because, for a small number of variables, `WalkSAT` can reduce the search space significantly. Results can be different for sentences with more symbols though. Feel free to play around with this to understand the trade-offs of these algorithms better.

First-Order Logic Knowledge Bases: `FolKB`

The class `FolKB` can be used to represent a knowledge base of First-order logic sentences. You would initialize and use it the same way as you would for `PropKB` except that the clauses are first-order definite clauses. We will see how to write such clauses to create a database and query them in the following sections.

Criminal KB

In this section we create a `FolKB` based on the following paragraph.The law says that it is a crime for an American to sell weapons to hostile nations. The country Nono, an enemy of America, has some missiles, and all of its missiles were sold to it by Colonel West, who is American.The first step is to extract the facts and convert them into first-order definite clauses. Extracting the facts from data alone is a challenging task. Fortunately, we have a small paragraph and can do extraction and conversion manually. We'll store the clauses in a list aptly named `clauses`.
###Code
clauses = []
###Output
_____no_output_____
###Markdown
“... it is a crime for an American to sell weapons to hostile nations”

The keywords to look for here are 'crime', 'American', 'sell', 'weapon' and 'hostile'. We use predicate symbols to capture their meanings.

* `Criminal(x)`: `x` is a criminal
* `American(x)`: `x` is an American
* `Sells(x ,y, z)`: `x` sells `y` to `z`
* `Weapon(x)`: `x` is a weapon
* `Hostile(x)`: `x` is a hostile nation

Let us now combine them with appropriate variable naming to depict the meaning of the sentence. The criminal `x` is also the American `x` who sells weapon `y` to `z`, which is a hostile nation.

$\text{American}(x) \land \text{Weapon}(y) \land \text{Sells}(x, y, z) \land \text{Hostile}(z) \implies \text{Criminal} (x)$
###Code
clauses.append(expr("(American(x) & Weapon(y) & Sells(x, y, z) & Hostile(z)) ==> Criminal(x)"))
###Output
_____no_output_____
###Markdown
"The country Nono, an enemy of America"We now know that Nono is an enemy of America. We represent these nations using the constant symbols `Nono` and `America`. the enemy relation is show using the predicate symbol `Enemy`.$\text{Enemy}(\text{Nono}, \text{America})$
###Code
clauses.append(expr("Enemy(Nono, America)"))
###Output
_____no_output_____
###Markdown
"Nono ... has some missiles"This states the existence of some missile which is owned by Nono. $\exists x \text{Owns}(\text{Nono}, x) \land \text{Missile}(x)$. We invoke existential instantiation to introduce a new constant `M1` which is the missile owned by Nono.$\text{Owns}(\text{Nono}, \text{M1}), \text{Missile}(\text{M1})$
###Code
clauses.append(expr("Owns(Nono, M1)"))
clauses.append(expr("Missile(M1)"))
###Output
_____no_output_____
###Markdown
"All of its missiles were sold to it by Colonel West"If Nono owns something and it classifies as a missile, then it was sold to Nono by West.$\text{Missile}(x) \land \text{Owns}(\text{Nono}, x) \implies \text{Sells}(\text{West}, x, \text{Nono})$
###Code
clauses.append(expr("(Missile(x) & Owns(Nono, x)) ==> Sells(West, x, Nono)"))
###Output
_____no_output_____
###Markdown
"West, who is American"West is an American.$\text{American}(\text{West})$
###Code
clauses.append(expr("American(West)"))
###Output
_____no_output_____
###Markdown
We also know, from our understanding of language, that missiles are weapons and that an enemy of America counts as “hostile”.$\text{Missile}(x) \implies \text{Weapon}(x), \text{Enemy}(x, \text{America}) \implies \text{Hostile}(x)$
###Code
clauses.append(expr("Missile(x) ==> Weapon(x)"))
clauses.append(expr("Enemy(x, America) ==> Hostile(x)"))
###Output
_____no_output_____
###Markdown
Now that we have converted the information into first-order definite clauses we can create our first-order logic knowledge base.
###Code
crime_kb = FolKB(clauses)
###Output
_____no_output_____
###Markdown
Inference in First-Order LogicIn this section we look at a forward chaining and a backward chaining algorithm for `FolKB`. Both aforementioned algorithms rely on a process called unification, a key component of all first-order inference algorithms. UnificationWe sometimes require finding substitutions that make different logical expressions look identical. This process, called unification, is done by the `unify` algorithm. It takes as input two sentences and returns a unifier for them if one exists. A unifier is a dictionary which stores the substitutions required to make the two sentences identical. It does so by recursively unifying the components of a sentence, where the unification of a variable symbol `var` with a constant symbol `Const` is the mapping `{var: Const}`. Let's look at a few examples.
###Code
unify(expr('x'), 3)
unify(expr('A(x)'), expr('A(B)'))
unify(expr('Cat(x) & Dog(Dobby)'), expr('Cat(Bella) & Dog(y)'))
###Output
_____no_output_____
###Markdown
In cases where there is no possible substitution that unifies the two sentences, the function returns `None`.
###Code
print(unify(expr('Cat(x)'), expr('Dog(Dobby)')))
###Output
None
###Markdown
We also need to take care not to unintentionally reuse the same variable name across the two sentences. `unify` treats them as a single variable, which prevents it from taking multiple values.
###Code
print(unify(expr('Cat(x) & Dog(Dobby)'), expr('Cat(Bella) & Dog(x)')))
###Output
None
###Markdown
Forward Chaining Algorithm

We consider the simple forward-chaining algorithm presented in Figure 9.3. We look at each rule in the knowledge base and see if the premises can be satisfied. This is done by finding a substitution which unifies each of the premises with a clause in the `KB`. If we are able to unify the premises, the conclusion (with the corresponding substitution) is added to the `KB`. This inference process is repeated until either the query can be answered or until no new sentences can be added. We test if the newly added clause unifies with the query, in which case the substitution yielded by `unify` is an answer to the query. If we run out of sentences to infer, the query was a failure.The function `fol_fc_ask` is a generator which yields all substitutions which validate the query.
###Code
psource(fol_fc_ask)
###Output
_____no_output_____
###Markdown
Let's find out all the hostile nations. Note that we only told the `KB` that Nono was an enemy of America, not that it was hostile.
###Code
answer = fol_fc_ask(crime_kb, expr('Hostile(x)'))
print(list(answer))
###Output
[{x: Nono}]
###Markdown
The generator returned a single substitution which says that Nono is a hostile nation. See how after adding another enemy nation the generator returns two substitutions.
###Code
crime_kb.tell(expr('Enemy(JaJa, America)'))
answer = fol_fc_ask(crime_kb, expr('Hostile(x)'))
print(list(answer))
###Output
[{x: Nono}, {x: JaJa}]
###Markdown
Note: `fol_fc_ask` makes changes to the `KB` by adding sentences to it.

Backward Chaining Algorithm

This algorithm works backward from the goal, chaining through rules to find known facts that support the proof. Suppose `goal` is the query we want to find the substitution for. We find rules of the form $\text{lhs} \implies \text{goal}$ in the `KB` and try to prove `lhs`. There may be multiple clauses in the `KB` which give multiple `lhs`. It is sufficient to prove only one of these. But to prove an `lhs`, all the conjuncts in the `lhs` of the clause must be proved. This makes it similar to And/Or search.

OR

The OR part of the algorithm comes from our choice to select any clause of the form $\text{lhs} \implies \text{goal}$. Looking at all rules whose `rhs` unifies with the `goal`, we yield a substitution which proves all the conjuncts in the corresponding `lhs`. We use `parse_definite_clause` to obtain `lhs` and `rhs` from a clause of the form $\text{lhs} \implies \text{rhs}$. For atomic facts the `lhs` is an empty list.
###Code
%psource fol_bc_or
###Output
_____no_output_____
###Markdown
ANDThe AND corresponds to proving all the conjuncts in the `lhs`. We need to find a substitution which proves each and every clause in the list of conjuncts.
###Code
%psource fol_bc_and
###Output
_____no_output_____
###Markdown
Now the main function `fol_bc_ask` calls `fol_bc_or` with the substitution initialized as empty. The `ask` method of `FolKB` uses `fol_bc_ask` and fetches the first substitution returned by the generator to answer the query. Let's query the knowledge base we created from `clauses` to find hostile nations.
###Code
# Rebuild KB because running fol_fc_ask would add new facts to the KB
crime_kb = FolKB(clauses)
crime_kb.ask(expr('Hostile(x)'))
###Output
_____no_output_____
###Markdown
You may notice some new variables in the substitution. They are introduced to standardize the variable names to prevent naming problems as discussed in the [Unification section](#Unification).

Appendix: The Implementation of `|'==>'|`

Consider the `Expr` formed by this syntax:
###Code
P |'==>'| ~Q
###Output
_____no_output_____
###Markdown
What is the funny `|'==>'|` syntax? The trick is that "`|`" is just the regular Python or-operator, and so is exactly equivalent to this:
###Code
(P | '==>') | ~Q
###Output
_____no_output_____
###Markdown
In other words, there are two applications of or-operators. Here's the first one:
###Code
P | '==>'
###Output
_____no_output_____
###Markdown
What is going on here is that the `__or__` method of `Expr` serves a dual purpose. If the right-hand-side is another `Expr` (or a number), then the result is an `Expr`, as in `(P | Q)`. But if the right-hand-side is a string, then the string is taken to be an operator, and we create a node in the abstract syntax tree corresponding to a partially-filled `Expr`, one where we know the left-hand-side is `P` and the operator is `==>`, but we don't yet know the right-hand-side.The `PartialExpr` class has an `__or__` method that says to create an `Expr` node with the right-hand-side filled in. Here we can see the combination of the `PartialExpr` with `Q` to create a complete `Expr`:
###Code
partial = PartialExpr('==>', P)
partial | ~Q
###Output
_____no_output_____
###Markdown
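In essence, the class can be imagined as something like this (a minimal sketch, not the actual `PartialExpr` in `logic.py`):

```python
class PartialExprSketch:
    def __init__(self, op, lhs):
        self.op, self.lhs = op, lhs              # remember operator and left side
    def __or__(self, rhs):
        return Expr(self.op, self.lhs, rhs)      # fill in the right side

PartialExprSketch('==>', P) | ~Q                 # same shape as P |'==>'| ~Q
```
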
This [trick](http://code.activestate.com/recipes/384122-infix-operators/) is due to [Ferdinand Jamitzky](http://code.activestate.com/recipes/users/98863/), with a modification by [C. G. Vedant](https://github.com/Chipe1), who suggested using a string inside the or-bars.

Appendix: The Implementation of `expr`

How does `expr` parse a string into an `Expr`? It turns out there are two tricks (besides the Jamitzky/Vedant trick):
1. We do a string substitution, replacing "`==>`" with "`|'==>'|`" (and likewise for other operators).
2. We `eval` the resulting string in an environment in which every identifier is bound to a symbol with that identifier as the `op`.

In other words,
###Code
expr('~(P & Q) ==> (~P | ~Q)')
###Output
_____no_output_____
###Markdown
is equivalent to doing:
###Code
P, Q = symbols('P, Q')
~(P & Q) |'==>'| (~P | ~Q)
###Output
_____no_output_____
###Markdown
One thing to beware of: this puts `==>` at the same precedence level as `"|"`, which is not quite right. For example, we get this:
###Code
P & Q |'==>'| P | Q
###Output
_____no_output_____
###Markdown
which is probably not what we meant; when in doubt, put in extra parens:
###Code
(P & Q) |'==>'| (P | Q)
###Output
_____no_output_____
###Markdown
Examples
###Code
from notebook import Canvas_fol_bc_ask
canvas_bc_ask = Canvas_fol_bc_ask('canvas_bc_ask', crime_kb, expr('Criminal(x)'))
###Output
_____no_output_____
###Markdown
Logic This Jupyter notebook acts as supporting material for topics covered in __Chapter 6 Logical Agents__, __Chapter 7 First-Order Logic__ and __Chapter 8 Inference in First-Order Logic__ of the book *[Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu)*. We make use of the implementations in the [logic.py](https://github.com/aimacode/aima-python/blob/master/logic.py) module. See the [intro notebook](https://github.com/aimacode/aima-python/blob/master/intro.ipynb) for instructions.Let's first import everything from the `logic` module.
###Code
from utils import *
from logic import *
from notebook import psource
###Output
_____no_output_____
###Markdown
CONTENTS

- Logical sentences
    - Expr
- PropKB
- Knowledge-based agents
- Inference in propositional knowledge base
    - Truth table enumeration
    - Proof by resolution
    - Forward and backward chaining
    - DPLL
    - WalkSAT
    - SATPlan
- FolKB
- Inference in first order knowledge base
    - Unification
    - Forward chaining algorithm
    - Backward chaining algorithm

Logical Sentences

The `Expr` class is designed to represent any kind of mathematical expression. The simplest type of `Expr` is a symbol, which can be defined with the function `Symbol`:
###Code
Symbol('x')
###Output
_____no_output_____
###Markdown
Or we can define multiple symbols at the same time with the function `symbols`:
###Code
(x, y, P, Q, f) = symbols('x, y, P, Q, f')
###Output
_____no_output_____
###Markdown
We can combine `Expr`s with the regular Python infix and prefix operators. Here's how we would form the logical sentence "P and not Q":
###Code
P & ~Q
###Output
_____no_output_____
###Markdown
This works because the `Expr` class overloads the `&` operator with this definition:

```python
def __and__(self, other):
    return Expr('&', self, other)
```

and does similar overloads for the other operators. An `Expr` has two fields: `op` for the operator, which is always a string, and `args` for the arguments, which is a tuple of 0 or more expressions. By "expression," I mean either an instance of `Expr`, or a number. Let's take a look at the fields for some `Expr` examples:
###Code
sentence = P & ~Q
sentence.op
sentence.args
P.op
P.args
Pxy = P(x, y)
Pxy.op
Pxy.args
###Output
_____no_output_____
###Markdown
It is important to note that the `Expr` class does not define the *logic* of Propositional Logic sentences; it just gives you a way to *represent* expressions. Think of an `Expr` as an [abstract syntax tree](https://en.wikipedia.org/wiki/Abstract_syntax_tree). Each of the `args` in an `Expr` can be either a symbol, a number, or a nested `Expr`. We can nest these trees to any depth. Here is a deeply nested `Expr`:
###Code
3 * f(x, y) + P(y) / 2 + 1
###Output
_____no_output_____
###Markdown
Operators for Constructing Logical Sentences

Here is a table of the operators that can be used to form sentences. Note that we have a problem: we want to use Python operators to make sentences, so that our programs (and our interactive sessions like the one here) will show simple code. But Python does not allow implication arrows as operators, so for now we have to use a more verbose notation that Python does allow: `|'==>'|` instead of just `==>`. Alternately, you can always use the more verbose `Expr` constructor forms:

| Operation | Book | Python Infix Input | Python Output | Python `Expr` Input |
|---|---|---|---|---|
| Negation | ¬ P | `~P` | `~P` | `Expr('~', P)` |
| And | P ∧ Q | `P & Q` | `P & Q` | `Expr('&', P, Q)` |
| Or | P ∨ Q | `P` &#124; `Q` | `P` &#124; `Q` | `Expr('`&#124;`', P, Q)` |
| Inequality (Xor) | P ≠ Q | `P ^ Q` | `P ^ Q` | `Expr('^', P, Q)` |
| Implication | P → Q | `P` &#124;`'==>'`&#124; `Q` | `P ==> Q` | `Expr('==>', P, Q)` |
| Reverse Implication | Q ← P | `Q` &#124;`'<=='`&#124; `P` | `Q <== P` | `Expr('<==', Q, P)` |
| Equivalence | P ↔ Q | `P` &#124;`'<=>'`&#124; `Q` | `P <=> Q` | `Expr('<=>', P, Q)` |

Here's an example of defining a sentence with an implication arrow:
###Code
~(P & Q) |'==>'| (~P | ~Q)
###Output
_____no_output_____
###Markdown
`expr`: a Shortcut for Constructing SentencesIf the `|'==>'|` notation looks ugly to you, you can use the function `expr` instead:
###Code
expr('~(P & Q) ==> (~P | ~Q)')
###Output
_____no_output_____
###Markdown
`expr` takes a string as input, and parses it into an `Expr`. The string can contain arrow operators: `==>`, `<==`, `<=>`, which are handled as if they were regular Python infix operators. And `expr` automatically defines any symbols, so you don't need to pre-define them:
###Code
expr('sqrt(b ** 2 - 4 * a * c)')
###Output
_____no_output_____
###Markdown
For now that's all you need to know about `expr`. If you are interested, we explain the messy details of how `expr` is implemented and how `|'==>'|` is handled in the appendix.

Propositional Knowledge Bases: `PropKB`

The class `PropKB` can be used to represent a knowledge base of propositional logic sentences.We see that the class `KB` has four methods, apart from `__init__`. A point to note here: the `ask` method simply calls the `ask_generator` method. Thus, this one has already been implemented, and what you'll have to actually implement when you create your own knowledge base class (though you'll probably never need to, considering the ones we've created for you) will be the `ask_generator` function and not the `ask` function itself.

The class `PropKB` now.

* `__init__(self, sentence=None)` : The constructor `__init__` creates a single field `clauses` which will be a list of all the sentences of the knowledge base. Note that each one of these sentences will be a 'clause' i.e. a sentence which is made up of only literals and `or`s.
* `tell(self, sentence)` : When you want to add a sentence to the KB, you use the `tell` method. This method takes a sentence, converts it to its CNF, extracts all the clauses, and adds all these clauses to the `clauses` field. So, you need not worry about `tell`ing only clauses to the knowledge base. You can `tell` the knowledge base a sentence in any form that you wish; converting it to CNF and adding the resulting clauses will be handled by the `tell` method.
* `ask_generator(self, query)` : The `ask_generator` function is used by the `ask` function. It calls the `tt_entails` function, which in turn returns `True` if the knowledge base entails the query and `False` otherwise. The `ask_generator` itself returns an empty dict `{}` if the knowledge base entails the query and `None` otherwise. This might seem a little bit weird to you. After all, it makes more sense just to return a `True` or a `False` instead of the `{}` or `None`. But this is done to maintain consistency with the way things are in First-Order Logic, where an `ask_generator` function is supposed to return all the substitutions that make the query true. Hence the dict, to return all these substitutions. I will mostly be using the `ask` function which returns a `{}` or a `False`, but if you don't like this, you can always use the `ask_if_true` function which returns a `True` or a `False`.
* `retract(self, sentence)` : This function removes all the clauses of the sentence given, from the knowledge base. Like the `tell` function, you don't have to pass clauses to remove them from the knowledge base; any sentence will do fine. The function will take care of converting that sentence to clauses and then remove those.

Wumpus World KB

Let us create a `PropKB` for the wumpus world with the sentences mentioned in `section 7.4.3`.
###Code
wumpus_kb = PropKB()
###Output
_____no_output_____
###Markdown
We define the symbols we use in our clauses.$P_{x, y}$ is true if there is a pit in `[x, y]`.$B_{x, y}$ is true if the agent senses breeze in `[x, y]`.
###Code
P11, P12, P21, P22, P31, B11, B21 = expr('P11, P12, P21, P22, P31, B11, B21')
###Output
_____no_output_____
###Markdown
Now we tell sentences based on `section 7.4.3`.There is no pit in `[1,1]`.
###Code
wumpus_kb.tell(~P11)
###Output
_____no_output_____
###Markdown
A square is breezy if and only if there is a pit in a neighboring square. This has to be stated for each square but for now, we include just the relevant squares.
###Code
wumpus_kb.tell(B11 | '<=>' | ((P12 | P21)))
wumpus_kb.tell(B21 | '<=>' | ((P11 | P22 | P31)))
###Output
_____no_output_____
###Markdown
Now we include the breeze percepts for the first two squares leading up to the situation in `Figure 7.3(b)`
###Code
wumpus_kb.tell(~B11)
wumpus_kb.tell(B21)
###Output
_____no_output_____
###Markdown
We can check the clauses stored in a `KB` by accessing its `clauses` variable
###Code
wumpus_kb.clauses
###Output
_____no_output_____
###Markdown
We see that the equivalence $B_{1, 1} \iff (P_{1, 2} \lor P_{2, 1})$ was automatically converted to two implications which were in turn converted to CNF which is stored in the `KB`.$B_{1, 1} \iff (P_{1, 2} \lor P_{2, 1})$ was split into $B_{1, 1} \implies (P_{1, 2} \lor P_{2, 1})$ and $B_{1, 1} \Longleftarrow (P_{1, 2} \lor P_{2, 1})$.$B_{1, 1} \implies (P_{1, 2} \lor P_{2, 1})$ was converted to $P_{1, 2} \lor P_{2, 1} \lor \neg B_{1, 1}$.$B_{1, 1} \Longleftarrow (P_{1, 2} \lor P_{2, 1})$ was converted to $\neg (P_{1, 2} \lor P_{2, 1}) \lor B_{1, 1}$ which becomes $(\neg P_{1, 2} \lor B_{1, 1}) \land (\neg P_{2, 1} \lor B_{1, 1})$ after applying De Morgan's laws and distributing the disjunction.$B_{2, 1} \iff (P_{1, 1} \lor P_{2, 2} \lor P_{3, 1})$ is converted in a similar manner.

Knowledge based agents

A knowledge-based agent is a simple generic agent that maintains and handles a knowledge base. The knowledge base may initially contain some background knowledge. The purpose of a KB agent is to provide a level of abstraction over knowledge-base manipulation and is to be used as a base for agents that work on a knowledge base. Given a percept, the KB agent adds the percept to its knowledge base, asks the knowledge base for the best action, and tells the knowledge base that it has in fact taken that action. Our implementation of `KB-Agent` is encapsulated in `KB_AgentProgram`, which takes a knowledge base and returns the agent's program. Let's have a look.
###Code
psource(KB_AgentProgram)
###Output
_____no_output_____
###Markdown
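Schematically, the returned agent program behaves something like this sketch (an illustration under the assumption that the three helper functions described next are in scope; see the `psource` output above for the real code):

```python
import itertools

def KB_agent_sketch(kb):
    # assumes make_percept_sentence, make_action_query and
    # make_action_sentence (described below) are available
    steps = itertools.count()
    def program(percept):
        t = next(steps)
        kb.tell(make_percept_sentence(percept, t))   # record what was perceived
        action = kb.ask(make_action_query(t))        # ask the KB for an action
        kb.tell(make_action_sentence(action, t))     # record the chosen action
        return action
    return program
```
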
The helper functions `make_percept_sentence`, `make_action_query` and `make_action_sentence` are all aptly named and as expected,`make_percept_sentence` makes first-order logic sentences about percepts we want our agent to receive,`make_action_query` asks the underlying `KB` about the action that should be taken and`make_action_sentence` tells the underlying `KB` about the action it has just taken. Inference in Propositional Knowledge BaseIn this section we will look at two algorithms to check if a sentence is entailed by the `KB`. Our goal is to decide whether $\text{KB} \vDash \alpha$ for some sentence $\alpha$. Truth Table EnumerationIt is a model-checking approach which, as the name suggests, enumerates all possible models in which the `KB` is true and checks if $\alpha$ is also true in these models. We list the $n$ symbols in the `KB` and enumerate the $2^{n}$ models in a depth-first manner and check the truth of `KB` and $\alpha$.
###Code
psource(tt_check_all)
###Output
_____no_output_____
###Markdown
The algorithm basically computes every line of the truth table $KB\implies \alpha$ and checks if it is true everywhere. If symbols are defined, the routine recursively constructs every combination of truth values for the symbols and then checks whether `model` is consistent with `kb`. The given models correspond to the lines in the truth table which have a `true` in the KB column, and for these lines it checks whether the query evaluates to true, `result = pl_true(alpha, model)`. In short, `tt_check_all` verifies, for each `model`, that `pl_true(kb, model) => pl_true(alpha, model)`; equivalently, it checks that `pl_true(kb, model) & ~pl_true(alpha, model)` never holds, that is, that the knowledge base and the negation of the query are jointly unsatisfiable. `tt_entails()` just extracts the symbols from the query and calls `tt_check_all()` with the proper parameters.
###Code
psource(tt_entails)
###Output
_____no_output_____
###Markdown
Keep in mind that for two symbols P and Q, P => Q is false only when P is `True` and Q is `False`. Example usage of `tt_entails()`:
###Code
tt_entails(P & Q, Q)
###Output
_____no_output_____
###Markdown
P & Q is True only when both P and Q are True. Hence, (P & Q) => Q is True
###Code
tt_entails(P | Q, Q)
tt_entails(P | Q, P)
###Output
_____no_output_____
###Markdown
If we know that P | Q is true, we cannot infer the truth values of P and Q. Hence (P | Q) => Q is False and so is (P | Q) => P.
###Code
(A, B, C, D, E, F, G) = symbols('A, B, C, D, E, F, G')
tt_entails(A & (B | C) & D & E & ~(F | G), A & D & E & ~F & ~G)
###Output
_____no_output_____
###Markdown
We can see that for the KB to be true, A, D, and E have to be True, while F and G have to be False. Nothing can be said about B or C.

Coming back to our problem, note that `tt_entails()` takes an `Expr` which is a conjunction of clauses as its input, instead of the `KB` itself. You can use the `ask_if_true()` method of `PropKB`, which does all the required conversions; the sketch below shows the manual equivalent. Let's check what `wumpus_kb` tells us about $P_{1, 1}$.
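As a sketch of what `ask_if_true` does internally (an assumption based on the `PropKB` description above, not a verbatim copy of its source), the manual equivalent of such a query conjoins the stored clauses into a single `Expr` and tests entailment directly:

```python
# Manual equivalent of wumpus_kb.ask_if_true(~P11), assuming ask_if_true
# boils down to a tt_entails call on the conjunction of stored clauses.
tt_entails(Expr('&', *wumpus_kb.clauses), ~P11)
```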
###Code
wumpus_kb.ask_if_true(~P11), wumpus_kb.ask_if_true(P11)
###Output
_____no_output_____
###Markdown
Looking at Figure 7.9 we see that in all models in which the knowledge base is `True`, $P_{1, 1}$ is `False`. It makes sense that `ask_if_true()` returns `True` for $\alpha = \neg P_{1, 1}$ and `False` for $\alpha = P_{1, 1}$. This raises the question: what if $\alpha$ is `True` in only some of the models in which the `KB` is `True`? Do we return `True` or `False`? Such an $\alpha$ could still be `True`, but it is not entailed by the `KB`, so we return `False` in those cases. We can see that this is the case for $P_{2, 2}$ and $P_{3, 1}$.
###Code
wumpus_kb.ask_if_true(~P22), wumpus_kb.ask_if_true(P22)
###Output
_____no_output_____
###Markdown
Proof by Resolution

Recall that our goal is to check whether $\text{KB} \vDash \alpha$, i.e. whether $\text{KB} \implies \alpha$ is true in every model. Suppose we wanted to check if $P \implies Q$ is valid. We check the satisfiability of $\neg (P \implies Q)$, which can be rewritten as $P \land \neg Q$. If $P \land \neg Q$ is unsatisfiable, then $P \implies Q$ must be true in all models. This gives us the result "$\text{KB} \vDash \alpha$ if and only if $\text{KB} \land \neg \alpha$ is unsatisfiable".

This technique corresponds to proof by contradiction, a standard mathematical proof technique. We assume $\alpha$ to be false and show that this leads to a contradiction with known axioms in $\text{KB}$. We obtain a contradiction by making valid inferences using inference rules. In this proof we use a single inference rule, resolution, which states that $(l_1 \lor \dots \lor l_k) \land (m_1 \lor \dots \lor m_n) \land (l_i \iff \neg m_j) \implies l_1 \lor \dots \lor l_{i - 1} \lor l_{i + 1} \lor \dots \lor l_k \lor m_1 \lor \dots \lor m_{j - 1} \lor m_{j + 1} \lor \dots \lor m_n$. Applying resolution yields a new clause, which we add to the KB. We keep doing this until:

* There are no new clauses that can be added, in which case $\text{KB} \nvDash \alpha$.
* Two clauses resolve to yield the empty clause, in which case $\text{KB} \vDash \alpha$.

The empty clause is equivalent to False because it arises only from resolving two complementary unit clauses such as $P$ and $\neg P$, which is a contradiction as $P$ and $\neg P$ can't both be True at the same time.

There is one catch however: the algorithm that implements proof by resolution cannot handle complex sentences. Implications and bi-implications have to be simplified into simpler clauses. We already know that *every sentence of propositional logic is logically equivalent to a conjunction of clauses*. We will use this fact to our advantage and simplify the input sentence into the **conjunctive normal form** (CNF), which is a conjunction of disjunctions of literals. For example:

$$(A\lor B)\land (\neg B\lor C\lor\neg D)\land (D\lor\neg E)$$

This is equivalent to the POS (Product of sums) form in digital electronics.

Here's an outline of how the conversion is done:

1. Convert bi-implications to implications. $\alpha\iff\beta$ can be written as $(\alpha\implies\beta)\land(\beta\implies\alpha)$. This also applies to compound sentences: $\alpha\iff(\beta\lor\gamma)$ can be written as $(\alpha\implies(\beta\lor\gamma))\land((\beta\lor\gamma)\implies\alpha)$.
2. Convert implications to their logical equivalents. $\alpha\implies\beta$ can be written as $\neg\alpha\lor\beta$.
3. Move negation inwards. CNF requires atomic literals, so negation cannot appear on a compound statement. De Morgan's laws are helpful here: $\neg(\alpha\land\beta)\equiv(\neg\alpha\lor\neg\beta)$ and $\neg(\alpha\lor\beta)\equiv(\neg\alpha\land\neg\beta)$.
4. Distribute disjunction over conjunction. Disjunction and conjunction are distributive over each other. Now that we only have conjunctions, disjunctions and negations in our expression, we distribute disjunctions over conjunctions wherever possible, as this gives us a sentence which is a conjunction of simpler clauses, which is what we wanted in the first place. We need a term of the form $(\alpha_{1}\lor\alpha_{2}\lor\alpha_{3}...)\land(\beta_{1}\lor\beta_{2}\lor\beta_{3}...)\land(\gamma_{1}\lor\gamma_{2}\lor\gamma_{3}...)\land...$

The `to_cnf` function executes this conversion using helper subroutines; a toy illustration of a single resolution step is shown first, below.
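Before examining `to_cnf`, here is a toy illustration of a single resolution step. The clause representation used here (Python sets of signed string literals) is an assumption of this sketch, not the representation `logic.py` uses:

```python
# One resolution step: clauses are sets of literals, where '~X' negates 'X'.
def resolve(c1, c2):
    """Yield every resolvent of the two clauses."""
    for lit in c1:
        complement = lit[1:] if lit.startswith('~') else '~' + lit
        if complement in c2:
            yield (c1 - {lit}) | (c2 - {complement})

# Resolving (P | Q) with (~P | R) on P yields (Q | R).
print(list(resolve({'P', 'Q'}, {'~P', 'R'})))  # [{'Q', 'R'}] (set order may vary)
```

`pl_resolution` repeats exactly this step on the CNF clauses of $\text{KB} \land \neg \alpha$ until it either derives the empty clause or runs out of new clauses. Now, on to `to_cnf`.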
###Code
psource(to_cnf)
###Output
_____no_output_____
###Markdown
`to_cnf` calls three subroutines: `eliminate_implications` converts bi-implications and implications to their logical equivalents, `move_not_inwards` removes negations from compound statements and moves them inwards using De Morgan's laws, and `distribute_and_over_or` distributes disjunctions over conjunctions. Run the cell below for implementation details.
###Code
psource(eliminate_implications)
psource(move_not_inwards)
psource(distribute_and_over_or)
###Output
_____no_output_____
###Markdown
Let's convert some sentences to see how it works
###Code
A, B, C, D = expr('A, B, C, D')
to_cnf(A |'<=>'| B)
to_cnf(A |'<=>'| (B & C))
to_cnf(A & (B | (C & D)))
to_cnf((A |'<=>'| ~B) |'==>'| (C | ~D))
###Output
_____no_output_____
###Markdown
Coming back to our resolution problem, we can see how the `to_cnf` function is utilized here
###Code
psource(pl_resolution)
pl_resolution(wumpus_kb, ~P11), pl_resolution(wumpus_kb, P11)
pl_resolution(wumpus_kb, ~P22), pl_resolution(wumpus_kb, P22)
###Output
_____no_output_____
###Markdown
Forward and backward chaining

Previously, we said we would look at two algorithms to check if a sentence is entailed by the `KB`. Here's a third one. The difference is that our goal now is to determine whether a knowledge base of definite clauses entails a single proposition symbol *q*, the query. There is a catch however: the knowledge base can only contain **Horn clauses**.

Horn Clauses

Horn clauses can be defined as a *disjunction* of *literals* with **at most** one positive literal. A Horn clause with exactly one positive literal is called a *definite clause*. A Horn clause might look like $\neg a\lor\neg b\lor\neg c\lor\neg d... \lor z$. This, coincidentally, is also a definite clause. Using De Morgan's laws and the definition of implication, the example above can be rewritten as $a\land b\land c\land d ... \implies z$. This resembles how humans process known data and facts: assuming percepts `a`, `b`, `c`, `d` ... to be true simultaneously, we can infer `z` to also be true at that point in time. There are some interesting aspects of Horn clauses that make algorithmic inference or *resolution* easier.

- Definite clauses can be written as implications. This is the most important simplification a definite clause provides. The premise (or the knowledge that leads to the implication) is a conjunction of positive literals, and the conclusion (the implied statement) is a single positive literal. The sentence thus becomes easier to understand. The premise and the conclusion are conventionally called the *body* and the *head* respectively. A single positive literal is called a *fact*.
- Forward chaining and backward chaining can be used for inference from Horn clauses. Backward chaining, in particular, is semantically identical to `AND-OR-Graph-Search` from the chapter on search algorithms. Implementation details will be explained shortly.
- Deciding entailment with Horn clauses is linear in the size of the knowledge base. Surprisingly, the forward and backward chaining algorithms traverse each element of the knowledge base at most once, greatly simplifying the problem.

The function `pl_fc_entails` implements forward chaining to see if a knowledge base `KB` entails a symbol `q`. Before we proceed further, note that `pl_fc_entails` doesn't use an ordinary `KB` instance. The knowledge base here is an instance of the `PropDefiniteKB` class, derived from the `PropKB` class, but modified to store definite clauses. The main point of difference is the inclusion of a helper method in `PropDefiniteKB` that returns a list of clauses in the KB that have a given symbol `p` in their premise.
###Code
psource(PropDefiniteKB.clauses_with_premise)
###Output
_____no_output_____
###Markdown
Let's now have a look at the `pl_fc_entails` algorithm.
###Code
psource(pl_fc_entails)
###Output
_____no_output_____
###Markdown
The function accepts a knowledge base `KB` (an instance of `PropDefiniteKB`) and a query `q` as inputs.

`count` initially stores the number of symbols in the premise of each sentence in the knowledge base; the `conjuncts` helper function separates a given sentence at conjunctions. `inferred` is initialized as a *boolean* defaultdict, used later to check whether a symbol has already been processed. `agenda` initially stores the list of symbols the knowledge base knows to be true (the `is_prop_symbol` helper function checks whether a given symbol is a valid propositional logic symbol).

We then iterate through `agenda`, popping a symbol `p` on each iteration. If the query `q` is the same as `p`, we know that entailment holds. Otherwise the agenda is processed, reducing `count` by one for each implication with `p` in its premise. When a `count` reaches zero, we know all the premises of that particular implication to be true, so its conclusion is added to the agenda. `clauses_with_premise` is a helpful method of the `PropDefiniteKB` class: it returns a list of clauses in the knowledge base that have `p` in their premise.

Now that we have an idea of how this function works, let's see a few examples of its usage. A stripped-down sketch of the same loop appears below; after it, we define a knowledge base for the examples, assuming we know the following clauses to be true.
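Here is the sketch. The `(premises, conclusion)` tuple representation is a simplification for illustration only; `pl_fc_entails` works on `Expr` objects instead:

```python
# A stripped-down sketch of forward chaining over definite clauses.
# rules: list of (set_of_premise_symbols, conclusion_symbol) pairs.
from collections import deque

def fc_entails_sketch(rules, facts, q):
    count = {i: len(premises) for i, (premises, _) in enumerate(rules)}
    inferred = set()
    agenda = deque(facts)
    while agenda:
        p = agenda.popleft()
        if p == q:
            return True
        if p in inferred:
            continue
        inferred.add(p)
        for i, (premises, conclusion) in enumerate(rules):
            if p in premises:
                count[i] -= 1
                if count[i] == 0:          # all premises proved
                    agenda.append(conclusion)
    return False

rules = [({'B', 'F'}, 'E'), ({'A', 'E', 'F'}, 'G'), ({'B', 'C'}, 'F'),
         ({'A', 'B'}, 'D'), ({'E', 'F'}, 'H'), ({'H', 'I'}, 'J')]
print(fc_entails_sketch(rules, ['A', 'B', 'C'], 'G'))  # True
print(fc_entails_sketch(rules, ['A', 'B', 'C'], 'J'))  # False
```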
###Code
clauses = ['(B & F)==>E',
'(A & E & F)==>G',
'(B & C)==>F',
'(A & B)==>D',
'(E & F)==>H',
'(H & I)==>J',
'A',
'B',
'C']
###Output
_____no_output_____
###Markdown
We will now `tell` this information to our knowledge base.
###Code
definite_clauses_KB = PropDefiniteKB()
for clause in clauses:
definite_clauses_KB.tell(expr(clause))
###Output
_____no_output_____
###Markdown
We can now check if our knowledge base entails the following queries.
###Code
pl_fc_entails(definite_clauses_KB, expr('G'))
pl_fc_entails(definite_clauses_KB, expr('H'))
pl_fc_entails(definite_clauses_KB, expr('I'))
pl_fc_entails(definite_clauses_KB, expr('J'))
###Output
_____no_output_____
###Markdown
Effective Propositional Model Checking

The previous segments elucidate the algorithmic procedure for model checking. In this segment, we look at ways of making it computationally efficient. The problem we are trying to solve is conventionally called the _propositional satisfiability problem_, abbreviated as the _SAT_ problem. In layman's terms, if there exists a model that satisfies a given Boolean formula, the formula is called satisfiable.

The SAT problem was the first problem to be proven _NP-complete_. The main characteristics of an NP-complete problem are:
- Given a solution to such a problem, it is easy to verify that the solution solves the problem.
- The time required to actually solve the problem using any known algorithm increases exponentially with the size of the problem.

Due to these properties, heuristic and approximate methods are often applied to find solutions to these problems. It is extremely important to be able to solve large-scale SAT problems efficiently because many combinatorial problems in computer science can be conveniently reduced to checking the satisfiability of a propositional sentence under some constraints. We will introduce two new algorithms that perform propositional model checking in a computationally effective way.

1. DPLL (Davis-Putnam-Logemann-Loveland) algorithm

This algorithm is very similar to Backtracking-Search. It recursively enumerates possible models in a depth-first fashion, with the following improvements over algorithms like `tt_entails`:

1. Early termination: In certain cases, the algorithm can detect the truth value of a statement using just a partially completed model. For example, $(P\lor Q)\land(P\lor R)$ is true if P is true, regardless of the other variables. This reduces the search space significantly.
2. Pure symbol heuristic: A symbol that has the same sign (positive or negative) in all clauses is called a _pure symbol_. It isn't difficult to see that any satisfiable model will have the pure symbols assigned such that their parent clauses become _true_. For example, $(P\lor\neg Q)\land(\neg Q\lor\neg R)\land(R\lor P)$ has P and Q as pure symbols, and for the sentence to be true, P _has_ to be true and Q _has_ to be false. The pure symbol heuristic thus simplifies the problem a bit.
3. Unit clause heuristic: In the context of DPLL, clauses with just one literal, and clauses in which all but one literal is already _false_, are called unit clauses. If a clause is a unit clause, it can only be satisfied by assigning the necessary value to make its last literal true; we have no other choice. Assigning one unit clause can create another unit clause. For example, when P is false, $(P\lor Q)$ becomes a unit clause, causing _true_ to be assigned to Q. A series of forced assignments derived from previous unit clauses is called _unit propagation_. In this way, this heuristic simplifies the problem further.

The algorithm often employs other tricks to scale up to large problems, but those are out of the scope of this notebook; refer to section 7.6 of the book for more details. A tiny sketch of the pure-symbol check appears below; after it, let's have a look at the algorithm.
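Here is the sketch of the pure-symbol check (improvement 2), under a hypothetical representation of clauses as sets of signed string literals:

```python
# A minimal sketch of the pure-symbol check: a symbol is pure when it
# occurs with only one sign across all clauses.
def pure_symbols(clauses):
    pos, neg = set(), set()
    for clause in clauses:
        for lit in clause:
            (neg if lit.startswith('~') else pos).add(lit.lstrip('~'))
    return (pos | neg) - (pos & neg)

# (P | ~Q) & (~Q | ~R) & (R | P): P and Q are pure, R is not.
print(pure_symbols([{'P', '~Q'}, {'~Q', '~R'}, {'R', 'P'}]))  # {'P', 'Q'}
```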
###Code
psource(dpll)
###Output
_____no_output_____
###Markdown
The algorithm uses the ideas described above to check the satisfiability of a sentence in propositional logic. It recursively calls itself, simplifying the problem at each step, and uses the helper functions `find_pure_symbol` and `find_unit_clause` to carry out improvements 2 and 3 above; a sketch of the unit-clause check follows. After that, we look at the `dpll_satisfiable` helper function, which converts the input clauses to _conjunctive normal form_ and calls the `dpll` function with the correct parameters.
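A minimal sketch of the unit-clause check (improvement 3), under the same hypothetical set-of-literals representation: a clause is a unit clause relative to a partial model when it is not yet satisfied and exactly one of its literals is unassigned.

```python
def find_unit_clause_sketch(clauses, model):
    """Return a forced (symbol, value) assignment, or None."""
    for clause in clauses:
        unassigned, satisfied = [], False
        for lit in clause:
            sym, want = lit.lstrip('~'), not lit.startswith('~')
            if sym not in model:
                unassigned.append((sym, want))
            elif model[sym] == want:
                satisfied = True
        if not satisfied and len(unassigned) == 1:
            return unassigned[0]
    return None

# With P already False, the clause (P | Q) forces Q = True.
print(find_unit_clause_sketch([{'P', 'Q'}], {'P': False}))  # ('Q', True)
```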
###Code
psource(dpll_satisfiable)
###Output
_____no_output_____
###Markdown
Let's see a few examples of usage.
###Code
A, B, C, D = expr('A, B, C, D')
dpll_satisfiable(A & B & ~C & D)
###Output
_____no_output_____
###Markdown
This is a simple case to highlight that the algorithm actually works.
###Code
dpll_satisfiable((A & B) | (C & ~A) | (B & ~D))
###Output
_____no_output_____
###Markdown
If a particular symbol isn't present in the solution, it means that the solution is independent of the value of that symbol. In this case, the solution is independent of A.
###Code
dpll_satisfiable(A |'<=>'| B)
dpll_satisfiable((A |'<=>'| B) |'==>'| (C & ~A))
dpll_satisfiable((A | (B & C)) |'<=>'| ((A | B) & (A | C)))
###Output
_____no_output_____
###Markdown
2. WalkSAT algorithm

This algorithm is very similar to Hill climbing. On every iteration, the algorithm picks an unsatisfied clause and flips a symbol in that clause; this is similar to finding a neighboring state in the `hill_climbing` algorithm. The symbol to be flipped is decided by an evaluation function that counts the number of unsatisfied clauses. Sometimes, symbols are also flipped randomly to avoid local optima; a subtle balance between greediness and randomness is required. Alternatively, some versions of the algorithm restart with a completely new random assignment if no solution has been found for too long, as a way of escaping local minima in the number of unsatisfied clauses.

A compact from-scratch sketch of the loop appears below; after it, let's have a look at the algorithm.
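The sketch below re-creates the loop, again assuming clauses are sets of signed string literals (a hypothetical representation; the real `WalkSAT` works on `Expr` clauses):

```python
import random

def walksat_sketch(clauses, symbols, p=0.5, max_flips=1000):
    model = {s: random.choice([True, False]) for s in symbols}
    sat = lambda c: any(model[l.lstrip('~')] != l.startswith('~') for l in c)
    for _ in range(max_flips):
        unsatisfied = [c for c in clauses if not sat(c)]
        if not unsatisfied:
            return model                          # every clause satisfied
        clause = random.choice(unsatisfied)
        if random.random() < p:                   # random-walk step
            sym = random.choice([l.lstrip('~') for l in clause])
        else:                                     # greedy step
            def unsat_count(s):
                model[s] = not model[s]           # tentatively flip s
                n = sum(not sat(c) for c in clauses)
                model[s] = not model[s]           # flip back
                return n
            sym = min((l.lstrip('~') for l in clause), key=unsat_count)
        model[sym] = not model[sym]               # commit the chosen flip
    return None                                   # failure

print(walksat_sketch([{'A', 'B'}, {'~A', 'C'}], ['A', 'B', 'C']))
```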
###Code
psource(WalkSAT)
###Output
_____no_output_____
###Markdown
The function takes three arguments:
1. The `clauses` we want to satisfy.
2. The probability `p` of randomly changing a symbol.
3. The maximum number of flips (`max_flips`) the algorithm will run for.

If the clauses are still unsatisfied after `max_flips`, the algorithm returns `None` to denote failure. The algorithm is identical in concept to Hill climbing and the code isn't difficult to understand. Let's see a few examples of usage.
###Code
A, B, C, D = expr('A, B, C, D')
WalkSAT([A, B, ~C, D], 0.5, 100)
###Output
_____no_output_____
###Markdown
This is a simple case to show that the algorithm converges.
###Code
WalkSAT([A & B, A & C], 0.5, 100)
WalkSAT([A & B, C & D, C & B], 0.5, 100)
WalkSAT([A & B, C | D, ~(D | B)], 0.5, 1000)
###Output
_____no_output_____
###Markdown
This one doesn't give any output because WalkSAT did not find any model in which these clauses hold. Working the clauses out by hand shows that together they form a contradiction, so no solution is supposed to exist. One point of difference between this algorithm and `dpll_satisfiable` is that they take their inputs differently: `dpll_satisfiable` accepts a complete sentence, while `WalkSAT` expects a list of clauses. For WalkSAT to take complete sentences as input, we can write a helper function that converts the input sentence into conjunctive normal form and then calls WalkSAT with the list of conjuncts of the CNF form of the sentence.
###Code
def WalkSAT_CNF(sentence, p=0.5, max_flips=10000):
    # Convert the sentence to CNF, split it into clauses, and forward the
    # parameters (note that p must be passed through; hard-coding 0 here
    # would silently disable the random-walk step).
    return WalkSAT(conjuncts(to_cnf(sentence)), p, max_flips)
###Output
_____no_output_____
###Markdown
Now we can call `WalkSAT_CNF` and `dpll_satisfiable` with the same arguments.
###Code
WalkSAT_CNF((A & B) | (C & ~A) | (B & ~D), 0.5, 1000)
###Output
_____no_output_____
###Markdown
It works! Notice that the solution generated by WalkSAT doesn't omit variables that the sentence doesn't depend upon. If the sentence is independent of a particular variable, the solution contains a random value for that variable because of the stochastic nature of the algorithm.

Let's compare the runtimes of WalkSAT and DPLL for a few cases. We will use the `%%timeit` magic to do this.
###Code
sentence_1 = A |'<=>'| B
sentence_2 = (A & B) | (C & ~A) | (B & ~D)
sentence_3 = (A | (B & C)) |'<=>'| ((A | B) & (A | C))
%%timeit
dpll_satisfiable(sentence_1)
dpll_satisfiable(sentence_2)
dpll_satisfiable(sentence_3)
%%timeit
WalkSAT_CNF(sentence_1)
WalkSAT_CNF(sentence_2)
WalkSAT_CNF(sentence_3)
###Output
1.02 ms ± 6.92 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
###Markdown
On average, for solvable cases, `WalkSAT` is quite a bit faster than `dpll` because, for a small number of variables, `WalkSAT` can reduce the search space significantly. Results can be different for sentences with more symbols, though. Feel free to play around with this to understand the trade-offs of these algorithms better.

SATPlan

In this section we show how to make plans by logical inference. The basic idea is very simple; it involves the following three steps:
1. Construct a sentence that includes:
   1. A collection of assertions about the initial state.
   2. The successor-state axioms for all the possible actions at each time up to some maximum time t.
   3. The assertion that the goal is achieved at time t.
2. Present the whole sentence to a SAT solver.
3. Assuming a model is found, extract from the model those variables that represent actions and are assigned true. Together they represent a plan to achieve the goals.

The sketch below illustrates step 1 on a toy problem; after it, let's have a look at the algorithm.
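To make step 1 concrete, here is a hand-built encoding of a one-step plan in a toy two-state world (we start in state S, and the action Go moves from S to T), solved with `dpll_satisfiable`. The symbol names are hypothetical, and this only conveys the flavor of the encoding, not what `SAT_plan` literally builds:

```python
# Toy SATPlan-style encoding: find a one-step plan from S to T.
S0, T0, Go0, S1, T1 = expr('S0, T0, Go0, S1, T1')
sentence = (S0 & ~T0                     # initial state: in S at t=0
            & (T1 |'<=>'| (S0 & Go0))    # successor-state axiom for T
            & (S1 |'<=>'| (S0 & ~Go0))   # successor-state axiom for S
            & T1)                        # goal: in T at t=1
dpll_satisfiable(sentence)               # the model assigns Go0 = True
```

Reading the action variables assigned true out of the returned model gives the plan: take `Go` at time 0.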
###Code
psource(SAT_plan)
###Output
_____no_output_____
###Markdown
Let's see a few examples of its usage. First we define a transition model and then call `SAT_plan`.
###Code
transition = {'A': {'Left': 'A', 'Right': 'B'},
'B': {'Left': 'A', 'Right': 'C'},
'C': {'Left': 'B', 'Right': 'C'}}
print(SAT_plan('A', transition, 'C', 2))
print(SAT_plan('A', transition, 'B', 3))
print(SAT_plan('C', transition, 'A', 3))
###Output
None
['Right']
['Left', 'Left']
###Markdown
Let us do the same for another transition.
###Code
transition = {(0, 0): {'Right': (0, 1), 'Down': (1, 0)},
(0, 1): {'Left': (1, 0), 'Down': (1, 1)},
(1, 0): {'Right': (1, 0), 'Up': (1, 0), 'Left': (1, 0), 'Down': (1, 0)},
(1, 1): {'Left': (1, 0), 'Up': (0, 1)}}
print(SAT_plan((0, 0), transition, (1, 1), 4))
###Output
['Right', 'Down']
###Markdown
First-Order Logic Knowledge Bases: `FolKB`

The class `FolKB` can be used to represent a knowledge base of first-order logic sentences. You would initialize and use it the same way as you would `PropKB`, except that the clauses are first-order definite clauses. We will see how to write such clauses to create a database and query them in the following sections.

Criminal KB

In this section we create a `FolKB` based on the following paragraph.

The law says that it is a crime for an American to sell weapons to hostile nations. The country Nono, an enemy of America, has some missiles, and all of its missiles were sold to it by Colonel West, who is American.

The first step is to extract the facts and convert them into first-order definite clauses. Extracting the facts from data alone is a challenging task. Fortunately, we have a small paragraph and can do the extraction and conversion manually. We'll store the clauses in a list aptly named `clauses`.
###Code
clauses = []
###Output
_____no_output_____
###Markdown
“... it is a crime for an American to sell weapons to hostile nations”

The keywords to look for here are 'crime', 'American', 'sell', 'weapon' and 'hostile'. We use predicate symbols to capture their meaning.

* `Criminal(x)`: `x` is a criminal
* `American(x)`: `x` is an American
* `Sells(x, y, z)`: `x` sells `y` to `z`
* `Weapon(x)`: `x` is a weapon
* `Hostile(x)`: `x` is a hostile nation

Let us now combine them, with appropriate variable naming, to depict the meaning of the sentence: the criminal `x` is an American `x` who sells weapon `y` to `z`, which is a hostile nation.

$\text{American}(x) \land \text{Weapon}(y) \land \text{Sells}(x, y, z) \land \text{Hostile}(z) \implies \text{Criminal} (x)$
###Code
clauses.append(expr("(American(x) & Weapon(y) & Sells(x, y, z) & Hostile(z)) ==> Criminal(x)"))
###Output
_____no_output_____
###Markdown
"The country Nono, an enemy of America"We now know that Nono is an enemy of America. We represent these nations using the constant symbols `Nono` and `America`. the enemy relation is show using the predicate symbol `Enemy`.$\text{Enemy}(\text{Nono}, \text{America})$
###Code
clauses.append(expr("Enemy(Nono, America)"))
###Output
_____no_output_____
###Markdown
"Nono ... has some missiles"This states the existence of some missile which is owned by Nono. $\exists x \text{Owns}(\text{Nono}, x) \land \text{Missile}(x)$. We invoke existential instantiation to introduce a new constant `M1` which is the missile owned by Nono.$\text{Owns}(\text{Nono}, \text{M1}), \text{Missile}(\text{M1})$
###Code
clauses.append(expr("Owns(Nono, M1)"))
clauses.append(expr("Missile(M1)"))
###Output
_____no_output_____
###Markdown
"All of its missiles were sold to it by Colonel West"If Nono owns something and it classifies as a missile, then it was sold to Nono by West.$\text{Missile}(x) \land \text{Owns}(\text{Nono}, x) \implies \text{Sells}(\text{West}, x, \text{Nono})$
###Code
clauses.append(expr("(Missile(x) & Owns(Nono, x)) ==> Sells(West, x, Nono)"))
###Output
_____no_output_____
###Markdown
"West, who is American"West is an American.$\text{American}(\text{West})$
###Code
clauses.append(expr("American(West)"))
###Output
_____no_output_____
###Markdown
We also know, from our understanding of language, that missiles are weapons and that an enemy of America counts as “hostile”.

$\text{Missile}(x) \implies \text{Weapon}(x), \quad \text{Enemy}(x, \text{America}) \implies \text{Hostile}(x)$
###Code
clauses.append(expr("Missile(x) ==> Weapon(x)"))
clauses.append(expr("Enemy(x, America) ==> Hostile(x)"))
###Output
_____no_output_____
###Markdown
Now that we have converted the information into first-order definite clauses we can create our first-order logic knowledge base.
###Code
crime_kb = FolKB(clauses)
###Output
_____no_output_____
###Markdown
The `subst` helper function substitutes variables with given values in first-order logic statements. This will be useful in later algorithms. Its implementation is quite simple and self-explanatory.
###Code
psource(subst)
###Output
_____no_output_____
###Markdown
Here's an example of how `subst` can be used.
###Code
subst({x: expr('Nono'), y: expr('M1')}, expr('Owns(x, y)'))
###Output
_____no_output_____
###Markdown
Inference in First-Order Logic

In this section we look at a forward chaining and a backward chaining algorithm for `FolKB`. Both of the aforementioned algorithms rely on a process called unification, a key component of all first-order inference algorithms.

Unification

We sometimes need to find substitutions that make different logical expressions look identical. This process, called unification, is done by the `unify` algorithm. It takes as input two sentences and returns a unifier for them if one exists. A unifier is a dictionary which stores the substitutions required to make the two sentences identical. It does so by recursively unifying the components of a sentence, where the unification of a variable symbol `var` with a constant symbol `Const` is the mapping `{var: Const}`. A bare-bones sketch of the recursion appears below; after it, let's look at a few examples.
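Here is the sketch, on a hypothetical term representation (lowercase strings are variables, capitalized strings are constants, and tuples are compound terms). It omits the occur check and other details of the real `unify`:

```python
def is_var(t):
    return isinstance(t, str) and t[0].islower()

def unify_sketch(a, b, theta):
    """Return a substitution dict unifying a and b, or None."""
    if theta is None:
        return None
    if a == b:
        return theta
    if is_var(a):
        return unify_var(a, b, theta)
    if is_var(b):
        return unify_var(b, a, theta)
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            theta = unify_sketch(x, y, theta)
        return theta
    return None

def unify_var(var, val, theta):
    if var in theta:                      # variable already bound
        return unify_sketch(theta[var], val, theta)
    return {**theta, var: val}            # extend the substitution

# Cat(x) & Dog(Dobby) unifies with Cat(Bella) & Dog(y):
print(unify_sketch(('&', ('Cat', 'x'), ('Dog', 'Dobby')),
                   ('&', ('Cat', 'Bella'), ('Dog', 'y')), {}))
# {'x': 'Bella', 'y': 'Dobby'}
```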
###Code
unify(expr('x'), 3)
unify(expr('A(x)'), expr('A(B)'))
unify(expr('Cat(x) & Dog(Dobby)'), expr('Cat(Bella) & Dog(y)'))
###Output
_____no_output_____
###Markdown
In cases where there is no possible substitution that unifies the two sentences, the function returns `None`.
###Code
print(unify(expr('Cat(x)'), expr('Dog(Dobby)')))
###Output
None
###Markdown
We also need to take care not to unintentionally reuse a variable name across the two sentences. Unify treats repeated names as a single variable, which prevents that variable from taking multiple values.
###Code
print(unify(expr('Cat(x) & Dog(Dobby)'), expr('Cat(Bella) & Dog(x)')))
###Output
None
###Markdown
Forward Chaining Algorithm

We consider the simple forward-chaining algorithm presented in Figure 9.3. We look at each rule in the knowledge base and see if its premises can be satisfied. This is done by finding a substitution which unifies each premise with a clause in the `KB`. If we are able to unify the premises, the conclusion (with the corresponding substitution) is added to the `KB`. This inference process is repeated until either the query can be answered or no new sentences can be added. We test whether each newly added clause unifies with the query, in which case the substitution yielded by `unify` is an answer to the query. If we run out of sentences to infer, the query is a failure.

The function `fol_fc_ask` is a generator which yields all substitutions which validate the query.
###Code
psource(fol_fc_ask)
###Output
_____no_output_____
###Markdown
Let's find out all the hostile nations. Note that we only told the `KB` that Nono was an enemy of America, not that it was hostile.
###Code
answer = fol_fc_ask(crime_kb, expr('Hostile(x)'))
print(list(answer))
###Output
[{x: Nono}]
###Markdown
The generator returned a single substitution which says that Nono is a hostile nation. See how after adding another enemy nation the generator returns two substitutions.
###Code
crime_kb.tell(expr('Enemy(JaJa, America)'))
answer = fol_fc_ask(crime_kb, expr('Hostile(x)'))
print(list(answer))
###Output
[{x: Nono}, {x: JaJa}]
###Markdown
Note: `fol_fc_ask` makes changes to the `KB` by adding sentences to it.

Backward Chaining Algorithm

This algorithm works backward from the goal, chaining through rules to find known facts that support the proof. Suppose `goal` is the query we want to find the substitution for. We find rules of the form $\text{lhs} \implies \text{goal}$ in the `KB` and try to prove `lhs`. There may be multiple clauses in the `KB` which give multiple `lhs`. It is sufficient to prove only one of them, but to prove an `lhs`, all the conjuncts in that `lhs` must be proved. This makes it similar to And/Or search.

OR

The OR part of the algorithm comes from our choice to select any one clause of the form $\text{lhs} \implies \text{goal}$. Looking at all rules whose `rhs` unifies with the `goal`, we yield a substitution which proves all the conjuncts in the corresponding `lhs`. We use `parse_definite_clause` to obtain the `lhs` and `rhs` from a clause of the form $\text{lhs} \implies \text{rhs}$. For atomic facts the `lhs` is an empty list; a quick illustration follows.
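Based on the description above, here is a quick look at `parse_definite_clause` on a definite clause and on an atomic fact:

```python
# lhs is the list of premise conjuncts; rhs is the conclusion.
parse_definite_clause(expr('(American(x) & Weapon(y)) ==> Criminal(x)'))
parse_definite_clause(expr('Enemy(Nono, America)'))  # atomic fact: lhs == []
```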
###Code
psource(fol_bc_or)
###Output
_____no_output_____
###Markdown
AND

The AND part corresponds to proving all the conjuncts in the `lhs`. We need to find a substitution which proves each and every clause in the list of conjuncts.
###Code
psource(fol_bc_and)
###Output
_____no_output_____
###Markdown
Now the main function `fol_bc_ask` calls `fol_bc_or` with the substitution initialized as empty. The `ask` method of `FolKB` uses `fol_bc_ask` and fetches the first substitution returned by the generator to answer the query. Let's query the knowledge base we created from `clauses` to find hostile nations.
###Code
# Rebuild KB because running fol_fc_ask would add new facts to the KB
crime_kb = FolKB(clauses)
crime_kb.ask(expr('Hostile(x)'))
###Output
_____no_output_____
###Markdown
You may notice some new variables in the substitution. They are introduced to standardize the variable names apart, preventing the naming problems discussed in the [Unification section](#Unification).

Appendix: The Implementation of `|'==>'|`

Consider the `Expr` formed by this syntax:
###Code
P |'==>'| ~Q
###Output
_____no_output_____
###Markdown
What is the funny `|'==>'|` syntax? The trick is that "`|`" is just the regular Python or-operator, and so is exactly equivalent to this:
###Code
(P | '==>') | ~Q
###Output
_____no_output_____
###Markdown
In other words, there are two applications of or-operators. Here's the first one:
###Code
P | '==>'
###Output
_____no_output_____
###Markdown
What is going on here is that the `__or__` method of `Expr` serves a dual purpose. If the right-hand-side is another `Expr` (or a number), then the result is an `Expr`, as in `(P | Q)`. But if the right-hand-side is a string, then the string is taken to be an operator, and we create a node in the abstract syntax tree corresponding to a partially-filled `Expr`, one where we know the left-hand-side is `P` and the operator is `==>`, but we don't yet know the right-hand-side.The `PartialExpr` class has an `__or__` method that says to create an `Expr` node with the right-hand-side filled in. Here we can see the combination of the `PartialExpr` with `Q` to create a complete `Expr`:
###Code
partial = PartialExpr('==>', P)
partial | ~Q
###Output
_____no_output_____
###Markdown
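To see the whole mechanism in one self-contained place, here is a re-creation of the trick; `Node` and `Partial` are hypothetical stand-ins for `Expr` and `PartialExpr`:

```python
class Node:
    def __init__(self, op, *args):
        self.op, self.args = op, args
    def __or__(self, other):
        if isinstance(other, str):         # P | '==>'  ->  partial node
            return Partial(other, self)
        return Node('|', self, other)      # ordinary disjunction
    def __repr__(self):
        if len(self.args) == 2:
            return '(%s %s %s)' % (self.args[0], self.op, self.args[1])
        return self.op

class Partial:
    def __init__(self, op, lhs):
        self.op, self.lhs = op, lhs
    def __or__(self, rhs):                 # (P | '==>') | Q  ->  full node
        return Node(self.op, self.lhs, rhs)

P, Q = Node('P'), Node('Q')
print(P | '==>' | Q)                       # (P ==> Q)
```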
This [trick](http://code.activestate.com/recipes/384122-infix-operators/) is due to [Ferdinand Jamitzky](http://code.activestate.com/recipes/users/98863/), with a modification by [C. G. Vedant](https://github.com/Chipe1), who suggested using a string inside the or-bars.

Appendix: The Implementation of `expr`

How does `expr` parse a string into an `Expr`? It turns out there are two tricks (besides the Jamitzky/Vedant trick):
1. We do a string substitution, replacing "`==>`" with "`|'==>'|`" (and likewise for the other operators).
2. We `eval` the resulting string in an environment in which every identifier is bound to a symbol with that identifier as the `op`.

In other words,
###Code
expr('~(P & Q) ==> (~P | ~Q)')
###Output
_____no_output_____
###Markdown
is equivalent to doing:
###Code
P, Q = symbols('P, Q')
~(P & Q) |'==>'| (~P | ~Q)
###Output
_____no_output_____
###Markdown
One thing to beware of: this puts `==>` at the same precedence level as `"|"`, which is not quite right. For example, we get this:
###Code
P & Q |'==>'| P | Q
###Output
_____no_output_____
###Markdown
which is probably not what we meant; when in doubt, put in extra parens:
###Code
(P & Q) |'==>'| (P | Q)
###Output
_____no_output_____
###Markdown
Examples
###Code
from notebook import Canvas_fol_bc_ask
canvas_bc_ask = Canvas_fol_bc_ask('canvas_bc_ask', crime_kb, expr('Criminal(x)'))
###Output
_____no_output_____
###Markdown
Logic This Jupyter notebook acts as supporting material for topics covered in __Chapter 6 Logical Agents__, __Chapter 7 First-Order Logic__ and __Chapter 8 Inference in First-Order Logic__ of the book *[Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu)*. We make use of the implementations in the [logic.py](https://github.com/aimacode/aima-python/blob/master/logic.py) module. See the [intro notebook](https://github.com/aimacode/aima-python/blob/master/intro.ipynb) for instructions.Let's first import everything from the `logic` module.
###Code
from utils import *
from logic import *
from notebook import psource
###Output
_____no_output_____
###Markdown
CONTENTS- Logical sentences - Expr - PropKB - Knowledge-based agents - Inference in propositional knowledge base - Truth table enumeration - Proof by resolution - Forward and backward chaining - DPLL - WalkSAT - SATPlan - FolKB - Inference in first order knowledge base - Unification - Forward chaining algorithm - Backward chaining algorithm Logical Sentences The `Expr` class is designed to represent any kind of mathematical expression. The simplest type of `Expr` is a symbol, which can be defined with the function `Symbol`:
###Code
Symbol('x')
###Output
_____no_output_____
###Markdown
Or we can define multiple symbols at the same time with the function `symbols`:
###Code
(x, y, P, Q, f) = symbols('x, y, P, Q, f')
###Output
_____no_output_____
###Markdown
We can combine `Expr`s with the regular Python infix and prefix operators. Here's how we would form the logical sentence "P and not Q":
###Code
P & ~Q
###Output
_____no_output_____
###Markdown
This works because the `Expr` class overloads the `&` operator with this definition:```pythondef __and__(self, other): return Expr('&', self, other)``` and does similar overloads for the other operators. An `Expr` has two fields: `op` for the operator, which is always a string, and `args` for the arguments, which is a tuple of 0 or more expressions. By "expression," I mean either an instance of `Expr`, or a number. Let's take a look at the fields for some `Expr` examples:
###Code
sentence = P & ~Q
sentence.op
sentence.args
P.op
P.args
Pxy = P(x, y)
Pxy.op
Pxy.args
###Output
_____no_output_____
###Markdown
It is important to note that the `Expr` class does not define the *logic* of Propositional Logic sentences; it just gives you a way to *represent* expressions. Think of an `Expr` as an [abstract syntax tree](https://en.wikipedia.org/wiki/Abstract_syntax_tree). Each of the `args` in an `Expr` can be either a symbol, a number, or a nested `Expr`. We can nest these trees to any depth. Here is a deply nested `Expr`:
###Code
3 * f(x, y) + P(y) / 2 + 1
###Output
_____no_output_____
###Markdown
Operators for Constructing Logical SentencesHere is a table of the operators that can be used to form sentences. Note that we have a problem: we want to use Python operators to make sentences, so that our programs (and our interactive sessions like the one here) will show simple code. But Python does not allow implication arrows as operators, so for now we have to use a more verbose notation that Python does allow: `|'==>'|` instead of just `==>`. Alternately, you can always use the more verbose `Expr` constructor forms:| Operation | Book | Python Infix Input | Python Output | Python `Expr` Input|--------------------------|----------------------|-------------------------|---|---|| Negation | ¬ P | `~P` | `~P` | `Expr('~', P)`| And | P ∧ Q | `P & Q` | `P & Q` | `Expr('&', P, Q)`| Or | P ∨ Q | `P` &124; `Q`| `P` &124; `Q` | `Expr('`&124;`', P, Q)`| Inequality (Xor) | P ≠ Q | `P ^ Q` | `P ^ Q` | `Expr('^', P, Q)`| Implication | P → Q | `P` &124;`'==>'`&124; `Q` | `P ==> Q` | `Expr('==>', P, Q)`| Reverse Implication | Q ← P | `Q` &124;`'&124; `P` |`Q <== P` | `Expr('<==', Q, P)`| Equivalence | P ↔ Q | `P` &124;`''`&124; `Q` |`P Q` | `Expr('', P, Q)`Here's an example of defining a sentence with an implication arrow:
###Code
~(P & Q) |'==>'| (~P | ~Q)
###Output
_____no_output_____
###Markdown
`expr`: a Shortcut for Constructing SentencesIf the `|'==>'|` notation looks ugly to you, you can use the function `expr` instead:
###Code
expr('~(P & Q) ==> (~P | ~Q)')
###Output
_____no_output_____
###Markdown
`expr` takes a string as input, and parses it into an `Expr`. The string can contain arrow operators: `==>`, ``, which are handled as if they were regular Python infix operators. And `expr` automatically defines any symbols, so you don't need to pre-define them:
###Code
expr('sqrt(b ** 2 - 4 * a * c)')
###Output
_____no_output_____
###Markdown
For now that's all you need to know about `expr`. If you are interested, we explain the messy details of how `expr` is implemented and how `|'==>'|` is handled in the appendix. Propositional Knowledge Bases: `PropKB`The class `PropKB` can be used to represent a knowledge base of propositional logic sentences.We see that the class `KB` has four methods, apart from `__init__`. A point to note here: the `ask` method simply calls the `ask_generator` method. Thus, this one has already been implemented, and what you'll have to actually implement when you create your own knowledge base class (though you'll probably never need to, considering the ones we've created for you) will be the `ask_generator` function and not the `ask` function itself.The class `PropKB` now.* `__init__(self, sentence=None)` : The constructor `__init__` creates a single field `clauses` which will be a list of all the sentences of the knowledge base. Note that each one of these sentences will be a 'clause' i.e. a sentence which is made up of only literals and `or`s.* `tell(self, sentence)` : When you want to add a sentence to the KB, you use the `tell` method. This method takes a sentence, converts it to its CNF, extracts all the clauses, and adds all these clauses to the `clauses` field. So, you need not worry about `tell`ing only clauses to the knowledge base. You can `tell` the knowledge base a sentence in any form that you wish; converting it to CNF and adding the resulting clauses will be handled by the `tell` method.* `ask_generator(self, query)` : The `ask_generator` function is used by the `ask` function. It calls the `tt_entails` function, which in turn returns `True` if the knowledge base entails query and `False` otherwise. The `ask_generator` itself returns an empty dict `{}` if the knowledge base entails query and `None` otherwise. This might seem a little bit weird to you. After all, it makes more sense just to return a `True` or a `False` instead of the `{}` or `None` But this is done to maintain consistency with the way things are in First-Order Logic, where an `ask_generator` function is supposed to return all the substitutions that make the query true. Hence the dict, to return all these substitutions. I will be mostly be using the `ask` function which returns a `{}` or a `False`, but if you don't like this, you can always use the `ask_if_true` function which returns a `True` or a `False`.* `retract(self, sentence)` : This function removes all the clauses of the sentence given, from the knowledge base. Like the `tell` function, you don't have to pass clauses to remove them from the knowledge base; any sentence will do fine. The function will take care of converting that sentence to clauses and then remove those. Wumpus World KBLet us create a `PropKB` for the wumpus world with the sentences mentioned in `section 7.4.3`.
###Code
wumpus_kb = PropKB()
###Output
_____no_output_____
###Markdown
We define the symbols we use in our clauses.$P_{x, y}$ is true if there is a pit in `[x, y]`.$B_{x, y}$ is true if the agent senses breeze in `[x, y]`.
###Code
P11, P12, P21, P22, P31, B11, B21 = expr('P11, P12, P21, P22, P31, B11, B21')
###Output
_____no_output_____
###Markdown
Now we tell sentences based on `section 7.4.3`.There is no pit in `[1,1]`.
###Code
wumpus_kb.tell(~P11)
###Output
_____no_output_____
###Markdown
A square is breezy if and only if there is a pit in a neighboring square. This has to be stated for each square but for now, we include just the relevant squares.
###Code
wumpus_kb.tell(B11 | '<=>' | ((P12 | P21)))
wumpus_kb.tell(B21 | '<=>' | ((P11 | P22 | P31)))
###Output
_____no_output_____
###Markdown
Now we include the breeze percepts for the first two squares leading up to the situation in `Figure 7.3(b)`
###Code
wumpus_kb.tell(~B11)
wumpus_kb.tell(B21)
###Output
_____no_output_____
###Markdown
We can check the clauses stored in a `KB` by accessing its `clauses` variable
###Code
wumpus_kb.clauses
###Output
_____no_output_____
###Markdown
We see that the equivalence $B_{1, 1} \iff (P_{1, 2} \lor P_{2, 1})$ was automatically converted to two implications which were inturn converted to CNF which is stored in the `KB`.$B_{1, 1} \iff (P_{1, 2} \lor P_{2, 1})$ was split into $B_{1, 1} \implies (P_{1, 2} \lor P_{2, 1})$ and $B_{1, 1} \Longleftarrow (P_{1, 2} \lor P_{2, 1})$.$B_{1, 1} \implies (P_{1, 2} \lor P_{2, 1})$ was converted to $P_{1, 2} \lor P_{2, 1} \lor \neg B_{1, 1}$.$B_{1, 1} \Longleftarrow (P_{1, 2} \lor P_{2, 1})$ was converted to $\neg (P_{1, 2} \lor P_{2, 1}) \lor B_{1, 1}$ which becomes $(\neg P_{1, 2} \lor B_{1, 1}) \land (\neg P_{2, 1} \lor B_{1, 1})$ after applying De Morgan's laws and distributing the disjunction.$B_{2, 1} \iff (P_{1, 1} \lor P_{2, 2} \lor P_{3, 2})$ is converted in similar manner. Knowledge based agents A knowledge-based agent is a simple generic agent that maintains and handles a knowledge base.The knowledge base may initially contain some background knowledge.The purpose of a KB agent is to provide a level of abstraction over knowledge-base manipulation and is to be used as a base class for agents that work on a knowledge base.Given a percept, the KB agent adds the percept to its knowledge base, asks the knowledge base for the best action, and tells the knowledge base that it has in fact taken that action.Our implementation of `KB-Agent` is encapsulated in a class `KB_AgentProgram` which inherits from the `KB` class.Let's have a look.
###Code
psource(KB_AgentProgram)
###Output
_____no_output_____
###Markdown
The helper functions `make_percept_sentence`, `make_action_query` and `make_action_sentence` are all aptly named and as expected,`make_percept_sentence` makes first-order logic sentences about percepts we want our agent to receive,`make_action_query` asks the underlying `KB` about the action that should be taken and`make_action_sentence` tells the underlying `KB` about the action it has just taken. Inference in Propositional Knowledge BaseIn this section we will look at two algorithms to check if a sentence is entailed by the `KB`. Our goal is to decide whether $\text{KB} \vDash \alpha$ for some sentence $\alpha$. Truth Table EnumerationIt is a model-checking approach which, as the name suggests, enumerates all possible models in which the `KB` is true and checks if $\alpha$ is also true in these models. We list the $n$ symbols in the `KB` and enumerate the $2^{n}$ models in a depth-first manner and check the truth of `KB` and $\alpha$.
###Code
psource(tt_check_all)
###Output
_____no_output_____
###Markdown
The algorithm basically computes every line of the truth table $KB\implies \alpha$ and checks if it is true everywhere.If symbols are defined, the routine recursively constructs every combination of truth values for the symbols and then, it checks whether `model` is consistent with `kb`.The given models correspond to the lines in the truth table,which have a `true` in the KB column, and for these lines it checks whether the query evaluates to true`result = pl_true(alpha, model)`.In short, `tt_check_all` evaluates this logical expression for each `model``pl_true(kb, model) => pl_true(alpha, model)`which is logically equivalent to`pl_true(kb, model) & ~pl_true(alpha, model)` that is, the knowledge base and the negation of the query are logically inconsistent.`tt_entails()` just extracts the symbols from the query and calls `tt_check_all()` with the proper parameters.
###Code
psource(tt_entails)
###Output
_____no_output_____
###Markdown
Keep in mind that for two symbols P and Q, P => Q is false only when P is `True` and Q is `False`.Example usage of `tt_entails()`:
###Code
tt_entails(P & Q, Q)
###Output
_____no_output_____
###Markdown
P & Q is True only when both P and Q are True. Hence, (P & Q) => Q is True
###Code
tt_entails(P | Q, Q)
tt_entails(P | Q, P)
###Output
_____no_output_____
###Markdown
If we know that P | Q is true, we cannot infer the truth values of P and Q. Hence (P | Q) => Q is False and so is (P | Q) => P.
###Code
(A, B, C, D, E, F, G) = symbols('A, B, C, D, E, F, G')
tt_entails(A & (B | C) & D & E & ~(F | G), A & D & E & ~F & ~G)
###Output
_____no_output_____
###Markdown
We can see that for the KB to be true, A, D, E have to be True and F and G have to be False.Nothing can be said about B or C. Coming back to our problem, note that `tt_entails()` takes an `Expr` which is a conjunction of clauses as the input instead of the `KB` itself. You can use the `ask_if_true()` method of `PropKB` which does all the required conversions. Let's check what `wumpus_kb` tells us about $P_{1, 1}$.
###Code
wumpus_kb.ask_if_true(~P11), wumpus_kb.ask_if_true(P11)
###Output
_____no_output_____
###Markdown
Looking at Figure 7.9 we see that in all models in which the knowledge base is `True`, $P_{1, 1}$ is `False`. It makes sense that `ask_if_true()` returns `True` for $\alpha = \neg P_{1, 1}$ and `False` for $\alpha = P_{1, 1}$. This begs the question, what if $\alpha$ is `True` in only a portion of all models. Do we return `True` or `False`? This doesn't rule out the possibility of $\alpha$ being `True` but it is not entailed by the `KB` so we return `False` in such cases. We can see this is the case for $P_{2, 2}$ and $P_{3, 1}$.
###Code
wumpus_kb.ask_if_true(~P22), wumpus_kb.ask_if_true(P22)
###Output
_____no_output_____
###Markdown
Proof by ResolutionRecall that our goal is to check whether $\text{KB} \vDash \alpha$ i.e. is $\text{KB} \implies \alpha$ true in every model. Suppose we wanted to check if $P \implies Q$ is valid. We check the satisfiability of $\neg (P \implies Q)$, which can be rewritten as $P \land \neg Q$. If $P \land \neg Q$ is unsatisfiable, then $P \implies Q$ must be true in all models. This gives us the result "$\text{KB} \vDash \alpha$ if and only if $\text{KB} \land \neg \alpha$ is unsatisfiable".This technique corresponds to proof by contradiction, a standard mathematical proof technique. We assume $\alpha$ to be false and show that this leads to a contradiction with known axioms in $\text{KB}$. We obtain a contradiction by making valid inferences using inference rules. In this proof we use a single inference rule, resolution which states $(l_1 \lor \dots \lor l_k) \land (m_1 \lor \dots \lor m_n) \land (l_i \iff \neg m_j) \implies l_1 \lor \dots \lor l_{i - 1} \lor l_{i + 1} \lor \dots \lor l_k \lor m_1 \lor \dots \lor m_{j - 1} \lor m_{j + 1} \lor \dots \lor m_n$. Applying the resolution yields us a clause which we add to the KB. We keep doing this until:* There are no new clauses that can be added, in which case $\text{KB} \nvDash \alpha$.* Two clauses resolve to yield the empty clause, in which case $\text{KB} \vDash \alpha$.The empty clause is equivalent to False because it arises only from resolving two complementaryunit clauses such as $P$ and $\neg P$ which is a contradiction as both $P$ and $\neg P$ can't be True at the same time. There is one catch however, the algorithm that implements proof by resolution cannot handle complex sentences. Implications and bi-implications have to be simplified into simpler clauses. We already know that *every sentence of a propositional logic is logically equivalent to a conjunction of clauses*.We will use this fact to our advantage and simplify the input sentence into the **conjunctive normal form** (CNF) which is a conjunction of disjunctions of literals.For eg:$$(A\lor B)\land (\neg B\lor C\lor\neg D)\land (D\lor\neg E)$$This is equivalent to the POS (Product of sums) form in digital electronics.Here's an outline of how the conversion is done:1. Convert bi-implications to implications$\alpha\iff\beta$ can be written as $(\alpha\implies\beta)\land(\beta\implies\alpha)$This also applies to compound sentences$\alpha\iff(\beta\lor\gamma)$ can be written as $(\alpha\implies(\beta\lor\gamma))\land((\beta\lor\gamma)\implies\alpha)$2. Convert implications to their logical equivalents$\alpha\implies\beta$ can be written as $\neg\alpha\lor\beta$3. Move negation inwardsCNF requires atomic literals. Hence, negation cannot appear on a compound statement.De Morgan's laws will be helpful here.$\neg(\alpha\land\beta)\equiv(\neg\alpha\lor\neg\beta)$$\neg(\alpha\lor\beta)\equiv(\neg\alpha\land\neg\beta)$4. Distribute disjunction over conjunctionDisjunction and conjunction are distributive over each other.Now that we only have conjunctions, disjunctions and negations in our expression, we will distribute disjunctions over conjunctions wherever possible as this will give us a sentence which is a conjunction of simpler clauses, which is what we wanted in the first place.We need a term of the form$(\alpha_{1}\lor\alpha_{2}\lor\alpha_{3}...)\land(\beta_{1}\lor\beta_{2}\lor\beta_{3}...)\land(\gamma_{1}\lor\gamma_{2}\lor\gamma_{3}...)\land...$The `to_cnf` function executes this conversion using helper subroutines.
###Code
psource(to_cnf)
###Output
_____no_output_____
###Markdown
`to_cnf` calls three subroutines.`eliminate_implications` converts bi-implications and implications to their logical equivalents.`move_not_inwards` removes negations from compound statements and moves them inwards using De Morgan's laws.`distribute_and_over_or` distributes disjunctions over conjunctions.Run the cell below for implementation details.
###Code
psource(eliminate_implications)
psource(move_not_inwards)
psource(distribute_and_over_or)
###Output
_____no_output_____
###Markdown
Let's convert some sentences to see how it works
###Code
A, B, C, D = expr('A, B, C, D')
to_cnf(A |'<=>'| B)
to_cnf(A |'<=>'| (B & C))
to_cnf(A & (B | (C & D)))
to_cnf((A |'<=>'| ~B) |'==>'| (C | ~D))
###Output
_____no_output_____
###Markdown
Coming back to our resolution problem, we can see how the `to_cnf` function is utilized here
###Code
psource(pl_resolution)
pl_resolution(wumpus_kb, ~P11), pl_resolution(wumpus_kb, P11)
pl_resolution(wumpus_kb, ~P22), pl_resolution(wumpus_kb, P22)
###Output
_____no_output_____
###Markdown
Forward and backward chainingPreviously, we said we will look at two algorithms to check if a sentence is entailed by the `KB`. Here's a third one. The difference here is that our goal now is to determine if a knowledge base of definite clauses entails a single proposition symbol *q* - the query.There is a catch however - the knowledge base can only contain **Horn clauses**. Horn ClausesHorn clauses can be defined as a *disjunction* of *literals* with **at most** one positive literal. A Horn clause with exactly one positive literal is called a *definite clause*.A Horn clause might look like $\neg a\lor\neg b\lor\neg c\lor\neg d... \lor z$This, coincidentally, is also a definite clause.Using De Morgan's laws, the example above can be simplified to $a\land b\land c\land d ... \implies z$This seems like a logical representation of how humans process known data and facts. Assuming percepts `a`, `b`, `c`, `d` ... to be true simultaneously, we can infer `z` to also be true at that point in time. There are some interesting aspects of Horn clauses that make algorithmic inference or *resolution* easier.- Definite clauses can be written as implications:The most important simplification a definite clause provides is that it can be written as an implication.The premise (or the knowledge that leads to the implication) is a conjunction of positive literals.The conclusion (the implied statement) is also a positive literal.The sentence thus becomes easier to understand.The premise and the conclusion are conventionally called the *body* and the *head* respectively.A single positive literal is called a *fact*.- Forward chaining and backward chaining can be used for inference from Horn clauses:Forward chaining is semantically identical to `AND-OR-Graph-Search` from the chapter on search algorithms.Implementational details will be explained shortly.- Deciding entailment with Horn clauses is linear in size of the knowledge base:Surprisingly, the forward and backward chaining algorithms traverse each element of the knowledge base at most once, greatly simplifying the problem.The function `pl_fc_entails` implements forward chaining to see if a knowledge base `KB` entails a symbol `q`.Before we proceed further, note that `pl_fc_entails` doesn't use an ordinary `KB` instance. The knowledge base here is an instance of the `PropDefiniteKB` class, derived from the `PropKB` class, but modified to store definite clauses.The main point of difference arises in the inclusion of a helper method to `PropDefiniteKB` that returns a list of clauses in KB that have a given symbol `p` in their premise.
###Code
psource(PropDefiniteKB.clauses_with_premise)
###Output
_____no_output_____
###Markdown
Let's now have a look at the `pl_fc_entails` algorithm.
###Code
psource(pl_fc_entails)
###Output
_____no_output_____
###Markdown
The function accepts a knowledge base `KB` (an instance of `PropDefiniteKB`) and a query `q` as inputs.`count` initially stores the number of symbols in the premise of each sentence in the knowledge base.The `conjuncts` helper function separates a given sentence at conjunctions.`inferred` is initialized as a *boolean* defaultdict. This will be used later to check if we have inferred all premises of each clause of the agenda.`agenda` initially stores a list of clauses that the knowledge base knows to be true.The `is_prop_symbol` helper function checks if the given symbol is a valid propositional logic symbol.We now iterate through `agenda`, popping a symbol `p` on each iteration.If the query `q` is the same as `p`, we know that entailment holds.The agenda is processed, reducing `count` by one for each implication with a premise `p`.A conclusion is added to the agenda when `count` reaches zero. This means we know all the premises of that particular implication to be true.`clauses_with_premise` is a helpful method of the `PropKB` class.It returns a list of clauses in the knowledge base that have `p` in their premise.Now that we have an idea of how this function works, let's see a few examples of its usage, but we first need to define our knowledge base. We assume we know the following clauses to be true.
###Code
clauses = ['(B & F)==>E',
'(A & E & F)==>G',
'(B & C)==>F',
'(A & B)==>D',
'(E & F)==>H',
'(H & I)==>J',
'A',
'B',
'C']
###Output
_____no_output_____
###Markdown
We will now `tell` this information to our knowledge base.
###Code
definite_clauses_KB = PropDefiniteKB()
for clause in clauses:
definite_clauses_KB.tell(expr(clause))
###Output
_____no_output_____
###Markdown
We can now check if our knowledge base entails the following queries.
###Code
pl_fc_entails(definite_clauses_KB, expr('G'))
pl_fc_entails(definite_clauses_KB, expr('H'))
pl_fc_entails(definite_clauses_KB, expr('I'))
pl_fc_entails(definite_clauses_KB, expr('J'))
###Output
_____no_output_____
###Markdown
Effective Propositional Model CheckingThe previous segments elucidate the algorithmic procedure for model checking. In this segment, we look at ways of making them computationally efficient.The problem we are trying to solve is conventionally called the _propositional satisfiability problem_, abbreviated as the _SAT_ problem.In layman terms, if there exists a model that satisfies a given Boolean formula, the formula is called satisfiable.The SAT problem was the first problem to be proven _NP-complete_.The main characteristics of an NP-complete problem are:- Given a solution to such a problem, it is easy to verify if the solution solves the problem.- The time required to actually solve the problem using any known algorithm increases exponentially with respect to the size of the problem.Due to these properties, heuristic and approximational methods are often applied to find solutions to these problems.It is extremely important to be able to solve large scale SAT problems efficiently because many combinatorial problems in computer science can be conveniently reduced to checking the satisfiability of a propositional sentence under some constraints.We will introduce two new algorithms that perform propositional model checking in a computationally effective way. 1. DPLL (Davis-Putnam-Logeman-Loveland) algorithmThis algorithm is very similar to Backtracking-Search.It recursively enumerates possible models in a depth-first fashion with the following improvements over algorithms like `tt_entails`:1. Early termination:In certain cases, the algorithm can detect the truth value of a statement using just a partially completed model.For example, $(P\lor Q)\land(P\lor R)$ is true if P is true, regardless of other variables.This reduces the search space significantly.2. Pure symbol heuristic:A symbol that has the same sign (positive or negative) in all clauses is called a _pure symbol_.It isn't difficult to see that any satisfiable model will have the pure symbols assigned such that its parent clause becomes _true_.For example, $(P\lor\neg Q)\land(\neg Q\lor\neg R)\land(R\lor P)$ has P and Q as pure symbolsand for the sentence to be true, P _has_ to be true and Q _has_ to be false.The pure symbol heuristic thus simplifies the problem a bit.3. Unit clause heuristic:In the context of DPLL, clauses with just one literal and clauses with all but one _false_ literals are called unit clauses.If a clause is a unit clause, it can only be satisfied by assigning the necessary value to make the last literal true.We have no other choice.Assigning one unit clause can create another unit clause.For example, when P is false, $(P\lor Q)$ becomes a unit clause, causing _true_ to be assigned to Q.A series of forced assignments derived from previous unit clauses is called _unit propagation_.In this way, this heuristic simplifies the problem further.The algorithm often employs other tricks to scale up to large problems.However, these tricks are currently out of the scope of this notebook. Refer to section 7.6 of the book for more details.Let's have a look at the algorithm.
###Code
psource(dpll)
###Output
_____no_output_____
###Markdown
The algorithm uses the ideas described above to check satisfiability of a sentence in propositional logic.It recursively calls itself, simplifying the problem at each step. It also uses helper functions `find_pure_symbol` and `find_unit_clause` to carry out steps 2 and 3 above.The `dpll_satisfiable` helper function converts the input clauses to _conjunctive normal form_ and calls the `dpll` function with the correct parameters.
###Code
psource(dpll_satisfiable)
###Output
_____no_output_____
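###Markdown
Before trying full sentences, we can poke at the two heuristics in isolation. This is a small sketch: `find_pure_symbol` and `find_unit_clause` are the helpers that `dpll` calls internally, and the expected behaviour is noted in the comments.
###Code
# the example sentence from the pure symbol discussion above
s = expr('(P | ~Q) & (~Q | ~R) & (R | P)')
demo_clauses = conjuncts(to_cnf(s))

# P occurs only positively and Q only negatively, so a pure symbol is reported
print(find_pure_symbol(list(prop_symbols(s)), demo_clauses))

# with P already False in the model, (P | Q) is a unit clause forcing Q = True
print(find_unit_clause(conjuncts(expr('(P | Q) & (~P | R)')), {expr('P'): False}))
###Output
_____no_output_____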
###Markdown
Let's see a few examples of usage.
###Code
A, B, C, D = expr('A, B, C, D')
dpll_satisfiable(A & B & ~C & D)
###Output
_____no_output_____
###Markdown
This is a simple case to highlight that the algorithm actually works.
###Code
dpll_satisfiable((A & B) | (C & ~A) | (B & ~D))
###Output
_____no_output_____
###Markdown
If a particular symbol isn't present in the solution, it means that the solution is independent of the value of that symbol. In this case, the solution is independent of A.
###Code
dpll_satisfiable(A |'<=>'| B)
dpll_satisfiable((A |'<=>'| B) |'==>'| (C & ~A))
dpll_satisfiable((A | (B & C)) |'<=>'| ((A | B) & (A | C)))
###Output
_____no_output_____
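###Markdown
For an unsatisfiable sentence there is no model at all, and `dpll_satisfiable` signals this by returning `False`:
###Code
dpll_satisfiable(A & ~A)
###Output
_____no_output_____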
###Markdown
2. WalkSAT algorithm This algorithm is very similar to Hill climbing. On every iteration, the algorithm picks an unsatisfied clause and flips a symbol in the clause. This is similar to finding a neighboring state in the `hill_climbing` algorithm. The symbol to be flipped is decided by an evaluation function that counts the number of unsatisfied clauses. Sometimes, symbols are also flipped randomly to avoid local optima. A subtle balance between greediness and randomness is required. Alternatively, some versions of the algorithm restart with a completely new random assignment if no solution has been found for too long, as a way of escaping local minima in the number of unsatisfied clauses. Let's have a look at the algorithm.
###Code
psource(WalkSAT)
###Output
_____no_output_____
###Markdown
The function takes three arguments:1. The `clauses` we want to satisfy.2. The probability `p` of randomly changing a symbol.3. The maximum number of flips (`max_flips`) the algorithm will run for. If the clauses are still unsatisfied, the algorithm returns `None` to denote failure. The algorithm is identical in concept to Hill climbing and the code isn't difficult to understand. Let's see a few examples of usage.
###Code
A, B, C, D = expr('A, B, C, D')
WalkSAT([A, B, ~C, D], 0.5, 100)
###Output
_____no_output_____
###Markdown
This is a simple case to show that the algorithm converges.
###Code
WalkSAT([A & B, A & C], 0.5, 100)
WalkSAT([A & B, C & D, C & B], 0.5, 100)
WalkSAT([A & B, C | D, ~(D | B)], 0.5, 1000)
###Output
_____no_output_____
###Markdown
This one doesn't give any output because WalkSAT did not find any model in which these clauses hold. Working through the clauses by hand shows that together they form a contradiction, so they aren't supposed to have a solution. One point of difference between this algorithm and `dpll_satisfiable` is that the two take their inputs differently. For WalkSAT to take complete sentences as input, we can write a helper function that converts the input sentence into conjunctive normal form and then calls WalkSAT with the list of conjuncts of the CNF form of the sentence.
###Code
def WalkSAT_CNF(sentence, p=0.5, max_flips=10000):
    # convert the sentence to CNF and hand the conjuncts, along with p and max_flips, to WalkSAT
    return WalkSAT(conjuncts(to_cnf(sentence)), p, max_flips)
###Output
_____no_output_____
###Markdown
Now we can call `WalkSAT_CNF` and `dpll_satisfiable` with the same arguments.
###Code
WalkSAT_CNF((A & B) | (C & ~A) | (B & ~D), 0.5, 1000)
###Output
_____no_output_____
###Markdown
It works! Notice that the solution generated by WalkSAT doesn't omit variables that the sentence doesn't depend upon. If the sentence is independent of a particular variable, the solution contains a random value for that variable because of the stochastic nature of the algorithm. Let's compare the runtime of WalkSAT and DPLL for a few cases. We will use the `%%timeit` magic to do this.
###Code
sentence_1 = A |'<=>'| B
sentence_2 = (A & B) | (C & ~A) | (B & ~D)
sentence_3 = (A | (B & C)) |'<=>'| ((A | B) & (A | C))
%%timeit
dpll_satisfiable(sentence_1)
dpll_satisfiable(sentence_2)
dpll_satisfiable(sentence_3)
%%timeit
WalkSAT_CNF(sentence_1)
WalkSAT_CNF(sentence_2)
WalkSAT_CNF(sentence_3)
###Output
1.02 ms ± 6.92 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
###Markdown
On average, for solvable cases, `WalkSAT` is considerably faster than `dpll` because, for a small number of variables, `WalkSAT` can reduce the search space significantly. Results can be different for sentences with more symbols, though. Feel free to play around with this to understand the trade-offs of these algorithms better. SATPlan In this section we show how to make plans by logical inference. The basic idea is very simple. It includes the following three steps:1. Construct a sentence that includes: 1. A collection of assertions about the initial state. 2. The successor-state axioms for all the possible actions at each time up to some maximum time t. 3. The assertion that the goal is achieved at time t.2. Present the whole sentence to a SAT solver.3. Assuming a model is found, extract from the model those variables that represent actions and are assigned true. Together they represent a plan to achieve the goals. Let's have a look at the algorithm.
###Code
psource(SAT_plan)
###Output
_____no_output_____
###Markdown
Let's see a few examples of its usage. First we define a transition and then call `SAT_plan`.
###Code
transition = {'A': {'Left': 'A', 'Right': 'B'},
'B': {'Left': 'A', 'Right': 'C'},
'C': {'Left': 'B', 'Right': 'C'}}
print(SAT_plan('A', transition, 'C', 2))
print(SAT_plan('A', transition, 'B', 3))
print(SAT_plan('C', transition, 'A', 3))
###Output
None
['Right']
['Left', 'Left']
###Markdown
Let us do the same for another transition.
###Code
transition = {(0, 0): {'Right': (0, 1), 'Down': (1, 0)},
(0, 1): {'Left': (1, 0), 'Down': (1, 1)},
(1, 0): {'Right': (1, 0), 'Up': (1, 0), 'Left': (1, 0), 'Down': (1, 0)},
(1, 1): {'Left': (1, 0), 'Up': (0, 1)}}
print(SAT_plan((0, 0), transition, (1, 1), 4))
###Output
['Right', 'Down']
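###Markdown
To see why these answers make sense, we can replay a plan by hand on the transition table. `follow_plan` below is not part of `logic.py`; it is a small hypothetical helper for this check.
###Code
def follow_plan(state, plan, transition):
    """Replay a plan action by action on a transition table (illustrative helper)."""
    for action in plan:
        state = transition[state][action]
    return state

follow_plan((0, 0), ['Right', 'Down'], transition)  # should end in the goal state (1, 1)
###Output
_____no_output_____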
###Markdown
First-Order Logic Knowledge Bases: `FolKB` The class `FolKB` can be used to represent a knowledge base of first-order logic sentences. You would initialize and use it the same way as you would for `PropKB` except that the clauses are first-order definite clauses. We will see how to write such clauses to create a database and query them in the following sections. Criminal KB In this section we create a `FolKB` based on the following paragraph. The law says that it is a crime for an American to sell weapons to hostile nations. The country Nono, an enemy of America, has some missiles, and all of its missiles were sold to it by Colonel West, who is American. The first step is to extract the facts and convert them into first-order definite clauses. Extracting the facts from data alone is a challenging task. Fortunately, we have a small paragraph and can do the extraction and conversion manually. We'll store the clauses in a list aptly named `clauses`.
###Code
clauses = []
###Output
_____no_output_____
###Markdown
“... it is a crime for an American to sell weapons to hostile nations” The keywords to look for here are 'crime', 'American', 'sell', 'weapon' and 'hostile'. We use predicate symbols to capture their meaning.* `Criminal(x)`: `x` is a criminal* `American(x)`: `x` is an American* `Sells(x, y, z)`: `x` sells `y` to `z`* `Weapon(x)`: `x` is a weapon* `Hostile(x)`: `x` is a hostile nation Let us now combine them with appropriate variable naming to depict the meaning of the sentence. The criminal `x` is also the American `x` who sells weapon `y` to `z`, which is a hostile nation.$\text{American}(x) \land \text{Weapon}(y) \land \text{Sells}(x, y, z) \land \text{Hostile}(z) \implies \text{Criminal}(x)$
###Code
clauses.append(expr("(American(x) & Weapon(y) & Sells(x, y, z) & Hostile(z)) ==> Criminal(x)"))
###Output
_____no_output_____
###Markdown
"The country Nono, an enemy of America"We now know that Nono is an enemy of America. We represent these nations using the constant symbols `Nono` and `America`. the enemy relation is show using the predicate symbol `Enemy`.$\text{Enemy}(\text{Nono}, \text{America})$
###Code
clauses.append(expr("Enemy(Nono, America)"))
###Output
_____no_output_____
###Markdown
"Nono ... has some missiles"This states the existence of some missile which is owned by Nono. $\exists x \text{Owns}(\text{Nono}, x) \land \text{Missile}(x)$. We invoke existential instantiation to introduce a new constant `M1` which is the missile owned by Nono.$\text{Owns}(\text{Nono}, \text{M1}), \text{Missile}(\text{M1})$
###Code
clauses.append(expr("Owns(Nono, M1)"))
clauses.append(expr("Missile(M1)"))
###Output
_____no_output_____
###Markdown
"All of its missiles were sold to it by Colonel West"If Nono owns something and it classifies as a missile, then it was sold to Nono by West.$\text{Missile}(x) \land \text{Owns}(\text{Nono}, x) \implies \text{Sells}(\text{West}, x, \text{Nono})$
###Code
clauses.append(expr("(Missile(x) & Owns(Nono, x)) ==> Sells(West, x, Nono)"))
###Output
_____no_output_____
###Markdown
"West, who is American"West is an American.$\text{American}(\text{West})$
###Code
clauses.append(expr("American(West)"))
###Output
_____no_output_____
###Markdown
We also know, from our understanding of language, that missiles are weapons and that an enemy of America counts as “hostile”.$\text{Missile}(x) \implies \text{Weapon}(x), \text{Enemy}(x, \text{America}) \implies \text{Hostile}(x)$
###Code
clauses.append(expr("Missile(x) ==> Weapon(x)"))
clauses.append(expr("Enemy(x, America) ==> Hostile(x)"))
###Output
_____no_output_____
###Markdown
Now that we have converted the information into first-order definite clauses we can create our first-order logic knowledge base.
###Code
crime_kb = FolKB(clauses)
###Output
_____no_output_____
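###Markdown
As a quick sanity check before moving on, `ask` (which internally runs the backward chaining procedure described later in this section) should already identify the criminal; we expect `x` to be bound to `West`:
###Code
crime_kb.ask(expr('Criminal(x)'))
###Output
_____no_output_____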
###Markdown
The `subst` helper function substitutes variables with given values in first-order logic statements. This will be useful in later algorithms. Its implementation is quite simple and self-explanatory.
###Code
psource(subst)
###Output
_____no_output_____
###Markdown
Here's an example of how `subst` can be used.
###Code
subst({x: expr('Nono'), y: expr('M1')}, expr('Owns(x, y)'))
###Output
_____no_output_____
###Markdown
Inference in First-Order Logic In this section we look at a forward chaining and a backward chaining algorithm for `FolKB`. Both aforementioned algorithms rely on a process called unification, a key component of all first-order inference algorithms. Unification We sometimes require finding substitutions that make different logical expressions look identical. This process, called unification, is done by the `unify` algorithm. It takes as input two sentences and returns a unifier for them if one exists. A unifier is a dictionary which stores the substitutions required to make the two sentences identical. It does so by recursively unifying the components of a sentence, where the unification of a variable symbol `var` with a constant symbol `Const` is the mapping `{var: Const}`. Let's look at a few examples.
###Code
unify(expr('x'), 3)
unify(expr('A(x)'), expr('A(B)'))
unify(expr('Cat(x) & Dog(Dobby)'), expr('Cat(Bella) & Dog(y)'))
###Output
_____no_output_____
###Markdown
In cases where there is no possible substitution that unifies the two sentences, the function returns `None`.
###Code
print(unify(expr('Cat(x)'), expr('Dog(Dobby)')))
###Output
None
###Markdown
We also need to take care that we do not unintentionally use the same variable name in different sentences. `unify` treats them as a single variable, which prevents it from taking multiple values.
###Code
print(unify(expr('Cat(x) & Dog(Dobby)'), expr('Cat(Bella) & Dog(x)')))
###Output
None
###Markdown
Forward Chaining Algorithm We consider the simple forward-chaining algorithm presented in Figure 9.3. We look at each rule in the knowledge base and see if the premises can be satisfied. This is done by finding a substitution which unifies each of the premises with a clause in the `KB`. If we are able to unify the premises, the conclusion (with the corresponding substitution) is added to the `KB`. This inference process is repeated until either the query can be answered or no new sentences can be added. We test if the newly added clause unifies with the query, in which case the substitution yielded by `unify` is an answer to the query. If we run out of sentences to infer, the query is a failure. The function `fol_fc_ask` is a generator which yields all substitutions which validate the query.
###Code
psource(fol_fc_ask)
###Output
_____no_output_____
###Markdown
Let's find out all the hostile nations. Note that we only told the `KB` that Nono was an enemy of America, not that it was hostile.
###Code
answer = fol_fc_ask(crime_kb, expr('Hostile(x)'))
print(list(answer))
###Output
[{x: Nono}]
###Markdown
The generator returned a single substitution which says that Nono is a hostile nation. See how after adding another enemy nation the generator returns two substitutions.
###Code
crime_kb.tell(expr('Enemy(JaJa, America)'))
answer = fol_fc_ask(crime_kb, expr('Hostile(x)'))
print(list(answer))
###Output
[{x: Nono}, {x: JaJa}]
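###Markdown
The same generator also answers the question we started the section with; forward chaining derives `Criminal(West)` from the definite clauses (the expected answer is noted as a comment):
###Code
answer = fol_fc_ask(crime_kb, expr('Criminal(x)'))
print(list(answer))  # expected: [{x: West}]
###Output
_____no_output_____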
###Markdown
Note: `fol_fc_ask` makes changes to the `KB` by adding sentences to it. Backward Chaining Algorithm This algorithm works backward from the goal, chaining through rules to find known facts that support the proof. Suppose `goal` is the query we want to find the substitution for. We find rules of the form $\text{lhs} \implies \text{goal}$ in the `KB` and try to prove `lhs`. There may be multiple clauses in the `KB` which give multiple `lhs`. It is sufficient to prove only one of these. But to prove a `lhs`, all the conjuncts in the `lhs` of the clause must be proved. This makes it similar to And/Or search. OR The OR part of the algorithm comes from our choice to select any clause of the form $\text{lhs} \implies \text{goal}$. For every rule whose `rhs` unifies with the `goal`, we yield a substitution which proves all the conjuncts in its `lhs`. We use `parse_definite_clause` to obtain the `lhs` and `rhs` from a clause of the form $\text{lhs} \implies \text{rhs}$. For atomic facts the `lhs` is an empty list.
###Code
psource(fol_bc_or)
###Output
_____no_output_____
###Markdown
AND The AND part corresponds to proving all the conjuncts in the `lhs`. We need to find a substitution which proves each and every clause in the list of conjuncts.
###Code
psource(fol_bc_and)
###Output
_____no_output_____
###Markdown
Now the main function `fol_bc_ask` calls `fol_bc_or` with the substitution initialized as empty. The `ask` method of `FolKB` uses `fol_bc_ask` and fetches the first substitution returned by the generator to answer the query. Let's query the knowledge base we created from `clauses` to find hostile nations.
###Code
# Rebuild KB because running fol_fc_ask would add new facts to the KB
crime_kb = FolKB(clauses)
crime_kb.ask(expr('Hostile(x)'))
###Output
_____no_output_____
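###Markdown
We can also drive the generator directly: `fol_bc_ask` yields every substitution that proves the query, so wrapping it in `list` shows all answers at once (a quick check; the exact bindings may include standardized variables):
###Code
list(fol_bc_ask(crime_kb, expr('Hostile(x)')))
###Output
_____no_output_____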
###Markdown
You may notice some new variables in the substitution. They are introduced to standardize the variable names to prevent naming problems as discussed in the [Unification section](#Unification). Appendix: The Implementation of `|'==>'|` Consider the `Expr` formed by this syntax:
###Code
P |'==>'| ~Q
###Output
_____no_output_____
###Markdown
What is the funny `|'==>'|` syntax? The trick is that "`|`" is just the regular Python or-operator, and so is exactly equivalent to this:
###Code
(P | '==>') | ~Q
###Output
_____no_output_____
###Markdown
In other words, there are two applications of or-operators. Here's the first one:
###Code
P | '==>'
###Output
_____no_output_____
###Markdown
What is going on here is that the `__or__` method of `Expr` serves a dual purpose. If the right-hand-side is another `Expr` (or a number), then the result is an `Expr`, as in `(P | Q)`. But if the right-hand-side is a string, then the string is taken to be an operator, and we create a node in the abstract syntax tree corresponding to a partially-filled `Expr`, one where we know the left-hand-side is `P` and the operator is `==>`, but we don't yet know the right-hand-side. The `PartialExpr` class has an `__or__` method that says to create an `Expr` node with the right-hand-side filled in. Here we can see the combination of the `PartialExpr` with `Q` to create a complete `Expr`:
###Code
partial = PartialExpr('==>', P)
partial | ~Q
###Output
_____no_output_____
###Markdown
This [trick](http://code.activestate.com/recipes/384122-infix-operators/) is due to [Ferdinand Jamitzky](http://code.activestate.com/recipes/users/98863/), with a modification by [C. G. Vedant](https://github.com/Chipe1), who suggested using a string inside the or-bars. Appendix: The Implementation of `expr` How does `expr` parse a string into an `Expr`? It turns out there are two tricks (besides the Jamitzky/Vedant trick):1. We do a string substitution, replacing "`==>`" with "`|'==>'|`" (and likewise for other operators).2. We `eval` the resulting string in an environment in which every identifier is bound to a symbol with that identifier as the `op`. In other words,
###Code
expr('~(P & Q) ==> (~P | ~Q)')
###Output
_____no_output_____
###Markdown
is equivalent to doing:
###Code
P, Q = symbols('P, Q')
~(P & Q) |'==>'| (~P | ~Q)
###Output
_____no_output_____
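###Markdown
The first trick can be observed in isolation. This is a sketch that assumes the string-rewriting step is exposed as the helper `expr_handle_infix_ops` in `utils.py`:
###Code
# rewrite the arrow operators into their or-bar spellings; expr then evals the result
expr_handle_infix_ops('~(P & Q) ==> (~P | ~Q)')
###Output
_____no_output_____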
###Markdown
One thing to beware of: this puts `==>` at the same precedence level as `"|"`, which is not quite right. For example, we get this:
###Code
P & Q |'==>'| P | Q
###Output
_____no_output_____
###Markdown
which is probably not what we meant; when in doubt, put in extra parens:
###Code
(P & Q) |'==>'| (P | Q)
###Output
_____no_output_____
###Markdown
Examples
###Code
from notebook import Canvas_fol_bc_ask
canvas_bc_ask = Canvas_fol_bc_ask('canvas_bc_ask', crime_kb, expr('Criminal(x)'))
###Output
_____no_output_____
###Markdown
Logic: `logic.py`; Chapters 6-8 This notebook describes the [logic.py](https://github.com/aimacode/aima-python/blob/master/logic.py) module, which covers Chapters 6 (Logical Agents), 7 (First-Order Logic) and 8 (Inference in First-Order Logic) of *[Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu)*. See the [intro notebook](https://github.com/aimacode/aima-python/blob/master/intro.ipynb) for instructions. We'll start by looking at `Expr`, the data type for logical sentences, and the convenience function `expr`. We'll be covering two types of knowledge bases, `PropKB` - Propositional logic knowledge base and `FolKB` - First order logic knowledge base. We will construct a propositional knowledge base of a specific situation in the Wumpus World. We will next go through the `tt_entails` function and experiment with it a bit. The `pl_resolution` and `pl_fc_entails` functions will come next. We'll study forward chaining and backward chaining algorithms for `FolKB` and use them on the `crime_kb` knowledge base. But the first step is to load the code:
###Code
from utils import *
from logic import *
###Output
_____no_output_____
###Markdown
Logical Sentences The `Expr` class is designed to represent any kind of mathematical expression. The simplest type of `Expr` is a symbol, which can be defined with the function `Symbol`:
###Code
Symbol('x')
###Output
_____no_output_____
###Markdown
Or we can define multiple symbols at the same time with the function `symbols`:
###Code
(x, y, P, Q, f) = symbols('x, y, P, Q, f')
###Output
_____no_output_____
###Markdown
We can combine `Expr`s with the regular Python infix and prefix operators. Here's how we would form the logical sentence "P and not Q":
###Code
P & ~Q
###Output
_____no_output_____
###Markdown
This works because the `Expr` class overloads the `&` operator with this definition:

```python
def __and__(self, other):
    return Expr('&', self, other)
```

and does similar overloads for the other operators. An `Expr` has two fields: `op` for the operator, which is always a string, and `args` for the arguments, which is a tuple of 0 or more expressions. By "expression," I mean either an instance of `Expr`, or a number. Let's take a look at the fields for some `Expr` examples:
###Code
sentence = P & ~Q
sentence.op
sentence.args
P.op
P.args
Pxy = P(x, y)
Pxy.op
Pxy.args
###Output
_____no_output_____
###Markdown
It is important to note that the `Expr` class does not define the *logic* of Propositional Logic sentences; it just gives you a way to *represent* expressions. Think of an `Expr` as an [abstract syntax tree](https://en.wikipedia.org/wiki/Abstract_syntax_tree). Each of the `args` in an `Expr` can be either a symbol, a number, or a nested `Expr`. We can nest these trees to any depth. Here is a deeply nested `Expr`:
###Code
3 * f(x, y) + P(y) / 2 + 1
###Output
_____no_output_____
###Markdown
Operators for Constructing Logical Sentences Here is a table of the operators that can be used to form sentences. Note that we have a problem: we want to use Python operators to make sentences, so that our programs (and our interactive sessions like the one here) will show simple code. But Python does not allow implication arrows as operators, so for now we have to use a more verbose notation that Python does allow: `|'==>'|` instead of just `==>`. Alternately, you can always use the more verbose `Expr` constructor forms:

| Operation | Book | Python Infix Input | Python Output | Python `Expr` Input |
|--------------------------|----------------------|-------------------------|---|---|
| Negation | ¬ P | `~P` | `~P` | `Expr('~', P)` |
| And | P ∧ Q | `P & Q` | `P & Q` | `Expr('&', P, Q)` |
| Or | P ∨ Q | `P` &#124; `Q` | `P` &#124; `Q` | `Expr('`&#124;`', P, Q)` |
| Inequality (Xor) | P ≠ Q | `P ^ Q` | `P ^ Q` | `Expr('^', P, Q)` |
| Implication | P → Q | `P` &#124;`'==>'`&#124; `Q` | `P ==> Q` | `Expr('==>', P, Q)` |
| Reverse Implication | Q ← P | `Q` &#124;`'<=='`&#124; `P` | `Q <== P` | `Expr('<==', Q, P)` |
| Equivalence | P ↔ Q | `P` &#124;`'<=>'`&#124; `Q` | `P <=> Q` | `Expr('<=>', P, Q)` |

Here's an example of defining a sentence with an implication arrow:
###Code
~(P & Q) |'==>'| (~P | ~Q)
###Output
_____no_output_____
###Markdown
`expr`: a Shortcut for Constructing Sentences If the `|'==>'|` notation looks ugly to you, you can use the function `expr` instead:
###Code
expr('~(P & Q) ==> (~P | ~Q)')
###Output
_____no_output_____
###Markdown
`expr` takes a string as input, and parses it into an `Expr`. The string can contain the arrow operators `==>`, `<==` and `<=>`, which are handled as if they were regular Python infix operators. And `expr` automatically defines any symbols, so you don't need to pre-define them:
###Code
expr('sqrt(b ** 2 - 4 * a * c)')
###Output
_____no_output_____
###Markdown
For now that's all you need to know about `expr`. If you are interested, we explain the messy details of how `expr` is implemented and how `|'==>'|` is handled in the appendix. Propositional Knowledge Bases: `PropKB` The class `PropKB` can be used to represent a knowledge base of propositional logic sentences. We see that the class `KB` has four methods, apart from `__init__`. A point to note here: the `ask` method simply calls the `ask_generator` method. Thus, this one has already been implemented, and what you'll have to actually implement when you create your own knowledge base class (though you'll probably never need to, considering the ones we've created for you) will be the `ask_generator` function and not the `ask` function itself. The class `PropKB` now.* `__init__(self, sentence=None)` : The constructor `__init__` creates a single field `clauses` which will be a list of all the sentences of the knowledge base. Note that each one of these sentences will be a 'clause', i.e. a sentence which is made up of only literals and `or`s.* `tell(self, sentence)` : When you want to add a sentence to the KB, you use the `tell` method. This method takes a sentence, converts it to its CNF, extracts all the clauses, and adds all these clauses to the `clauses` field. So, you need not worry about `tell`ing only clauses to the knowledge base. You can `tell` the knowledge base a sentence in any form that you wish; converting it to CNF and adding the resulting clauses will be handled by the `tell` method.* `ask_generator(self, query)` : The `ask_generator` function is used by the `ask` function. It calls the `tt_entails` function, which in turn returns `True` if the knowledge base entails the query and `False` otherwise. The `ask_generator` itself returns an empty dict `{}` if the knowledge base entails the query and `None` otherwise. This might seem a little bit weird to you. After all, it makes more sense just to return a `True` or a `False` instead of the `{}` or `None`. But this is done to maintain consistency with the way things are in First-Order Logic, where an `ask_generator` function is supposed to return all the substitutions that make the query true. Hence the dict, to return all these substitutions. I will mostly be using the `ask` function which returns a `{}` or a `False`, but if you don't like this, you can always use the `ask_if_true` function which returns a `True` or a `False`.* `retract(self, sentence)` : This function removes all the clauses of the given sentence from the knowledge base. Like the `tell` function, you don't have to pass clauses to remove them from the knowledge base; any sentence will do fine. The function will take care of converting that sentence to clauses and then remove those. Wumpus World KB Let us create a `PropKB` for the wumpus world with the sentences mentioned in `section 7.4.3`.
###Code
wumpus_kb = PropKB()
###Output
_____no_output_____
###Markdown
We define the symbols we use in our clauses.$P_{x, y}$ is true if there is a pit in `[x, y]`.$B_{x, y}$ is true if the agent senses breeze in `[x, y]`.
###Code
P11, P12, P21, P22, P31, B11, B21 = expr('P11, P12, P21, P22, P31, B11, B21')
###Output
_____no_output_____
###Markdown
Now we tell sentences based on `section 7.4.3`. There is no pit in `[1,1]`.
###Code
wumpus_kb.tell(~P11)
###Output
_____no_output_____
###Markdown
A square is breezy if and only if there is a pit in a neighboring square. This has to be stated for each square but for now, we include just the relevant squares.
###Code
wumpus_kb.tell(B11 | '<=>' | ((P12 | P21)))
wumpus_kb.tell(B21 | '<=>' | ((P11 | P22 | P31)))
###Output
_____no_output_____
###Markdown
Now we include the breeze percepts for the first two squares leading up to the situation in `Figure 7.3(b)`
###Code
wumpus_kb.tell(~B11)
wumpus_kb.tell(B21)
###Output
_____no_output_____
###Markdown
We can check the clauses stored in a `KB` by accessing its `clauses` variable
###Code
wumpus_kb.clauses
###Output
_____no_output_____
###Markdown
We see that the equivalence $B_{1, 1} \iff (P_{1, 2} \lor P_{2, 1})$ was automatically converted to two implications which were in turn converted to CNF, which is stored in the `KB`. $B_{1, 1} \iff (P_{1, 2} \lor P_{2, 1})$ was split into $B_{1, 1} \implies (P_{1, 2} \lor P_{2, 1})$ and $B_{1, 1} \Longleftarrow (P_{1, 2} \lor P_{2, 1})$. $B_{1, 1} \implies (P_{1, 2} \lor P_{2, 1})$ was converted to $P_{1, 2} \lor P_{2, 1} \lor \neg B_{1, 1}$. $B_{1, 1} \Longleftarrow (P_{1, 2} \lor P_{2, 1})$ was converted to $\neg (P_{1, 2} \lor P_{2, 1}) \lor B_{1, 1}$, which becomes $(\neg P_{1, 2} \lor B_{1, 1}) \land (\neg P_{2, 1} \lor B_{1, 1})$ after applying De Morgan's laws and distributing the disjunction. $B_{2, 1} \iff (P_{1, 1} \lor P_{2, 2} \lor P_{3, 1})$ is converted in a similar manner. Inference in Propositional Knowledge Base In this section we will look at two algorithms to check if a sentence is entailed by the `KB`. Our goal is to decide whether $\text{KB} \vDash \alpha$ for some sentence $\alpha$. Truth Table Enumeration It is a model-checking approach which, as the name suggests, enumerates all possible models in which the `KB` is true and checks if $\alpha$ is also true in these models. We list the $n$ symbols in the `KB` and enumerate the $2^{n}$ models in a depth-first manner and check the truth of `KB` and $\alpha$.
###Code
%psource tt_check_all
###Output
_____no_output_____
###Markdown
Note that `tt_entails()` takes an `Expr` which is a conjunction of clauses as the input instead of the `KB` itself. You can use the `ask_if_true()` method of `PropKB` which does all the required conversions. Let's check what `wumpus_kb` tells us about $P_{1, 1}$.
###Code
wumpus_kb.ask_if_true(~P11), wumpus_kb.ask_if_true(P11)
###Output
_____no_output_____
###Markdown
Looking at Figure 7.9 we see that in all models in which the knowledge base is `True`, $P_{1, 1}$ is `False`. It makes sense that `ask_if_true()` returns `True` for $\alpha = \neg P_{1, 1}$ and `False` for $\alpha = P_{1, 1}$. This begs the question, what if $\alpha$ is `True` in only a portion of all models. Do we return `True` or `False`? This doesn't rule out the possibility of $\alpha$ being `True` but it is not entailed by the `KB` so we return `False` in such cases. We can see this is the case for $P_{2, 2}$ and $P_{3, 1}$.
###Code
wumpus_kb.ask_if_true(~P22), wumpus_kb.ask_if_true(P22)
###Output
_____no_output_____
###Markdown
Proof by Resolution Recall that our goal is to check whether $\text{KB} \vDash \alpha$, i.e. is $\text{KB} \implies \alpha$ true in every model. Suppose we wanted to check if $P \implies Q$ is valid. We check the satisfiability of $\neg (P \implies Q)$, which can be rewritten as $P \land \neg Q$. If $P \land \neg Q$ is unsatisfiable, then $P \implies Q$ must be true in all models. This gives us the result "$\text{KB} \vDash \alpha$ if and only if $\text{KB} \land \neg \alpha$ is unsatisfiable". This technique corresponds to proof by contradiction, a standard mathematical proof technique. We assume $\alpha$ to be false and show that this leads to a contradiction with known axioms in $\text{KB}$. We obtain a contradiction by making valid inferences using inference rules. In this proof we use a single inference rule, resolution, which states $(l_1 \lor \dots \lor l_k) \land (m_1 \lor \dots \lor m_n) \land (l_i \iff \neg m_j) \implies l_1 \lor \dots \lor l_{i - 1} \lor l_{i + 1} \lor \dots \lor l_k \lor m_1 \lor \dots \lor m_{j - 1} \lor m_{j + 1} \lor \dots \lor m_n$. Applying resolution yields a clause which we add to the KB. We keep doing this until:* There are no new clauses that can be added, in which case $\text{KB} \nvDash \alpha$.* Two clauses resolve to yield the empty clause, in which case $\text{KB} \vDash \alpha$. The empty clause is equivalent to False because it arises only from resolving two complementary unit clauses such as $P$ and $\neg P$, which is a contradiction as both $P$ and $\neg P$ can't be True at the same time.
###Code
%psource pl_resolution
pl_resolution(wumpus_kb, ~P11), pl_resolution(wumpus_kb, P11)
pl_resolution(wumpus_kb, ~P22), pl_resolution(wumpus_kb, P22)
###Output
_____no_output_____
###Markdown
Logic This Jupyter notebook acts as supporting material for topics covered in __Chapter 6 Logical Agents__, __Chapter 7 First-Order Logic__ and __Chapter 8 Inference in First-Order Logic__ of the book *[Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu)*. We make use of the implementations in the [logic.py](https://github.com/aimacode/aima-python/blob/master/logic.py) module. See the [intro notebook](https://github.com/aimacode/aima-python/blob/master/intro.ipynb) for instructions.Let's first import everything from the `logic` module.
###Code
from utils import *
from logic import *
from notebook import psource
###Output
_____no_output_____
###Markdown
CONTENTS- Logical sentences - Expr - PropKB - Knowledge-based agents - Inference in propositional knowledge base - Truth table enumeration - Proof by resolution - Forward and backward chaining - DPLL - WalkSAT - SATPlan - FolKB - Inference in first order knowledge base - Unification - Forward chaining algorithm - Backward chaining algorithm Logical Sentences The `Expr` class is designed to represent any kind of mathematical expression. The simplest type of `Expr` is a symbol, which can be defined with the function `Symbol`:
###Code
Symbol('x')
###Output
_____no_output_____
###Markdown
Or we can define multiple symbols at the same time with the function `symbols`:
###Code
(x, y, P, Q, f) = symbols('x, y, P, Q, f')
###Output
_____no_output_____
###Markdown
We can combine `Expr`s with the regular Python infix and prefix operators. Here's how we would form the logical sentence "P and not Q":
###Code
P & ~Q
###Output
_____no_output_____
###Markdown
This works because the `Expr` class overloads the `&` operator with this definition:

```python
def __and__(self, other):
    return Expr('&', self, other)
```

and does similar overloads for the other operators. An `Expr` has two fields: `op` for the operator, which is always a string, and `args` for the arguments, which is a tuple of 0 or more expressions. By "expression," I mean either an instance of `Expr`, or a number. Let's take a look at the fields for some `Expr` examples:
###Code
sentence = P & ~Q
sentence.op
sentence.args
P.op
P.args
Pxy = P(x, y)
Pxy.op
Pxy.args
###Output
_____no_output_____
###Markdown
It is important to note that the `Expr` class does not define the *logic* of Propositional Logic sentences; it just gives you a way to *represent* expressions. Think of an `Expr` as an [abstract syntax tree](https://en.wikipedia.org/wiki/Abstract_syntax_tree). Each of the `args` in an `Expr` can be either a symbol, a number, or a nested `Expr`. We can nest these trees to any depth. Here is a deeply nested `Expr`:
###Code
3 * f(x, y) + P(y) / 2 + 1
###Output
_____no_output_____
###Markdown
Operators for Constructing Logical Sentences Here is a table of the operators that can be used to form sentences. Note that we have a problem: we want to use Python operators to make sentences, so that our programs (and our interactive sessions like the one here) will show simple code. But Python does not allow implication arrows as operators, so for now we have to use a more verbose notation that Python does allow: `|'==>'|` instead of just `==>`. Alternately, you can always use the more verbose `Expr` constructor forms:

| Operation | Book | Python Infix Input | Python Output | Python `Expr` Input |
|--------------------------|----------------------|-------------------------|---|---|
| Negation | ¬ P | `~P` | `~P` | `Expr('~', P)` |
| And | P ∧ Q | `P & Q` | `P & Q` | `Expr('&', P, Q)` |
| Or | P ∨ Q | `P` &#124; `Q` | `P` &#124; `Q` | `Expr('`&#124;`', P, Q)` |
| Inequality (Xor) | P ≠ Q | `P ^ Q` | `P ^ Q` | `Expr('^', P, Q)` |
| Implication | P → Q | `P` &#124;`'==>'`&#124; `Q` | `P ==> Q` | `Expr('==>', P, Q)` |
| Reverse Implication | Q ← P | `Q` &#124;`'<=='`&#124; `P` | `Q <== P` | `Expr('<==', Q, P)` |
| Equivalence | P ↔ Q | `P` &#124;`'<=>'`&#124; `Q` | `P <=> Q` | `Expr('<=>', P, Q)` |

Here's an example of defining a sentence with an implication arrow:
###Code
~(P & Q) |'==>'| (~P | ~Q)
###Output
_____no_output_____
###Markdown
`expr`: a Shortcut for Constructing Sentences If the `|'==>'|` notation looks ugly to you, you can use the function `expr` instead:
###Code
expr('~(P & Q) ==> (~P | ~Q)')
###Output
_____no_output_____
###Markdown
`expr` takes a string as input, and parses it into an `Expr`. The string can contain the arrow operators `==>`, `<==` and `<=>`, which are handled as if they were regular Python infix operators. And `expr` automatically defines any symbols, so you don't need to pre-define them:
###Code
expr('sqrt(b ** 2 - 4 * a * c)')
###Output
_____no_output_____
###Markdown
For now that's all you need to know about `expr`. If you are interested, we explain the messy details of how `expr` is implemented and how `|'==>'|` is handled in the appendix. Propositional Knowledge Bases: `PropKB` The class `PropKB` can be used to represent a knowledge base of propositional logic sentences. We see that the class `KB` has four methods, apart from `__init__`. A point to note here: the `ask` method simply calls the `ask_generator` method. Thus, this one has already been implemented, and what you'll have to actually implement when you create your own knowledge base class (though you'll probably never need to, considering the ones we've created for you) will be the `ask_generator` function and not the `ask` function itself. The class `PropKB` now.* `__init__(self, sentence=None)` : The constructor `__init__` creates a single field `clauses` which will be a list of all the sentences of the knowledge base. Note that each one of these sentences will be a 'clause', i.e. a sentence which is made up of only literals and `or`s.* `tell(self, sentence)` : When you want to add a sentence to the KB, you use the `tell` method. This method takes a sentence, converts it to its CNF, extracts all the clauses, and adds all these clauses to the `clauses` field. So, you need not worry about `tell`ing only clauses to the knowledge base. You can `tell` the knowledge base a sentence in any form that you wish; converting it to CNF and adding the resulting clauses will be handled by the `tell` method.* `ask_generator(self, query)` : The `ask_generator` function is used by the `ask` function. It calls the `tt_entails` function, which in turn returns `True` if the knowledge base entails the query and `False` otherwise. The `ask_generator` itself returns an empty dict `{}` if the knowledge base entails the query and `None` otherwise. This might seem a little bit weird to you. After all, it makes more sense just to return a `True` or a `False` instead of the `{}` or `None`. But this is done to maintain consistency with the way things are in First-Order Logic, where an `ask_generator` function is supposed to return all the substitutions that make the query true. Hence the dict, to return all these substitutions. I will mostly be using the `ask` function which returns a `{}` or a `False`, but if you don't like this, you can always use the `ask_if_true` function which returns a `True` or a `False`.* `retract(self, sentence)` : This function removes all the clauses of the given sentence from the knowledge base. Like the `tell` function, you don't have to pass clauses to remove them from the knowledge base; any sentence will do fine. The function will take care of converting that sentence to clauses and then remove those. Wumpus World KB Let us create a `PropKB` for the wumpus world with the sentences mentioned in `section 7.4.3`.
###Code
wumpus_kb = PropKB()
###Output
_____no_output_____
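###Markdown
Before adding the wumpus world sentences, here is a tiny aside (a sketch) showing how `tell` stores CNF clauses and how `retract` removes them again:
###Code
demo_kb = PropKB()
demo_kb.tell(expr('(A & B) ==> C'))
print(demo_kb.clauses)    # the implication stored as a single CNF clause: [(C | ~A | ~B)]
demo_kb.retract(expr('(A & B) ==> C'))
print(demo_kb.clauses)    # back to an empty list: []
###Output
_____no_output_____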
###Markdown
We define the symbols we use in our clauses.$P_{x, y}$ is true if there is a pit in `[x, y]`.$B_{x, y}$ is true if the agent senses breeze in `[x, y]`.
###Code
P11, P12, P21, P22, P31, B11, B21 = expr('P11, P12, P21, P22, P31, B11, B21')
###Output
_____no_output_____
###Markdown
Now we tell sentences based on `section 7.4.3`. There is no pit in `[1,1]`.
###Code
wumpus_kb.tell(~P11)
###Output
_____no_output_____
###Markdown
A square is breezy if and only if there is a pit in a neighboring square. This has to be stated for each square but for now, we include just the relevant squares.
###Code
wumpus_kb.tell(B11 | '<=>' | ((P12 | P21)))
wumpus_kb.tell(B21 | '<=>' | ((P11 | P22 | P31)))
###Output
_____no_output_____
###Markdown
Now we include the breeze percepts for the first two squares leading up to the situation in `Figure 7.3(b)`
###Code
wumpus_kb.tell(~B11)
wumpus_kb.tell(B21)
###Output
_____no_output_____
###Markdown
We can check the clauses stored in a `KB` by accessing its `clauses` variable
###Code
wumpus_kb.clauses
###Output
_____no_output_____
###Markdown
We see that the equivalence $B_{1, 1} \iff (P_{1, 2} \lor P_{2, 1})$ was automatically converted to two implications which were in turn converted to CNF, which is stored in the `KB`. $B_{1, 1} \iff (P_{1, 2} \lor P_{2, 1})$ was split into $B_{1, 1} \implies (P_{1, 2} \lor P_{2, 1})$ and $B_{1, 1} \Longleftarrow (P_{1, 2} \lor P_{2, 1})$. $B_{1, 1} \implies (P_{1, 2} \lor P_{2, 1})$ was converted to $P_{1, 2} \lor P_{2, 1} \lor \neg B_{1, 1}$. $B_{1, 1} \Longleftarrow (P_{1, 2} \lor P_{2, 1})$ was converted to $\neg (P_{1, 2} \lor P_{2, 1}) \lor B_{1, 1}$, which becomes $(\neg P_{1, 2} \lor B_{1, 1}) \land (\neg P_{2, 1} \lor B_{1, 1})$ after applying De Morgan's laws and distributing the disjunction. $B_{2, 1} \iff (P_{1, 1} \lor P_{2, 2} \lor P_{3, 1})$ is converted in a similar manner. Knowledge based agents A knowledge-based agent is a simple generic agent that maintains and handles a knowledge base. The knowledge base may initially contain some background knowledge. The purpose of a KB agent is to provide a level of abstraction over knowledge-base manipulation, and it is to be used as a base class for agents that work on a knowledge base. Given a percept, the KB agent adds the percept to its knowledge base, asks the knowledge base for the best action, and tells the knowledge base that it has in fact taken that action. Our implementation of `KB-Agent` is encapsulated in the class `KB_AgentProgram`, which inherits from the `KB` class. Let's have a look.
###Code
psource(KB_AgentProgram)
###Output
_____no_output_____
###Markdown
The helper functions `make_percept_sentence`, `make_action_query` and `make_action_sentence` are all aptly named and, as expected, `make_percept_sentence` makes first-order logic sentences about percepts we want our agent to receive, `make_action_query` asks the underlying `KB` about the action that should be taken, and `make_action_sentence` tells the underlying `KB` about the action it has just taken. Inference in Propositional Knowledge Base In this section we will look at two algorithms to check if a sentence is entailed by the `KB`. Our goal is to decide whether $\text{KB} \vDash \alpha$ for some sentence $\alpha$. Truth Table Enumeration It is a model-checking approach which, as the name suggests, enumerates all possible models in which the `KB` is true and checks if $\alpha$ is also true in these models. We list the $n$ symbols in the `KB` and enumerate the $2^{n}$ models in a depth-first manner and check the truth of `KB` and $\alpha$.
###Code
psource(tt_check_all)
###Output
_____no_output_____
###Markdown
The algorithm basically computes every line of the truth table for $KB \implies \alpha$ and checks that it is true everywhere. If symbols remain to be assigned, the routine recursively constructs every combination of truth values for them; otherwise it checks whether `model` is consistent with `kb`. The given models correspond to the lines in the truth table which have a `true` in the KB column, and for these lines it checks whether the query evaluates to true, `result = pl_true(alpha, model)`. In short, `tt_check_all` checks the expression `pl_true(kb, model) => pl_true(alpha, model)` for each `model`; entailment holds exactly when `pl_true(kb, model) & ~pl_true(alpha, model)` is satisfied by no model, that is, when the knowledge base and the negation of the query are logically inconsistent. `tt_entails()` just extracts the symbols from the query and calls `tt_check_all()` with the proper parameters.
###Code
psource(tt_entails)
###Output
_____no_output_____
###Markdown
Keep in mind that for two symbols P and Q, P => Q is false only when P is `True` and Q is `False`. Example usage of `tt_entails()`:
###Code
tt_entails(P & Q, Q)
###Output
_____no_output_____
###Markdown
P & Q is True only when both P and Q are True. Hence, (P & Q) => Q is True
###Code
tt_entails(P | Q, Q)
tt_entails(P | Q, P)
###Output
_____no_output_____
###Markdown
If we know that P | Q is true, we cannot infer the truth values of P and Q. Hence `tt_entails(P | Q, Q)` is False, and so is `tt_entails(P | Q, P)`.
###Code
(A, B, C, D, E, F, G) = symbols('A, B, C, D, E, F, G')
tt_entails(A & (B | C) & D & E & ~(F | G), A & D & E & ~F & ~G)
###Output
_____no_output_____
###Markdown
We can see that for the KB to be true, A, D, E have to be True and F and G have to be False.Nothing can be said about B or C. Coming back to our problem, note that `tt_entails()` takes an `Expr` which is a conjunction of clauses as the input instead of the `KB` itself. You can use the `ask_if_true()` method of `PropKB` which does all the required conversions. Let's check what `wumpus_kb` tells us about $P_{1, 1}$.
###Code
wumpus_kb.ask_if_true(~P11), wumpus_kb.ask_if_true(P11)
###Output
_____no_output_____
###Markdown
Looking at Figure 7.9 we see that in all models in which the knowledge base is `True`, $P_{1, 1}$ is `False`. It makes sense that `ask_if_true()` returns `True` for $\alpha = \neg P_{1, 1}$ and `False` for $\alpha = P_{1, 1}$. This begs the question, what if $\alpha$ is `True` in only a portion of all models. Do we return `True` or `False`? This doesn't rule out the possibility of $\alpha$ being `True` but it is not entailed by the `KB` so we return `False` in such cases. We can see this is the case for $P_{2, 2}$ and $P_{3, 1}$.
###Code
wumpus_kb.ask_if_true(~P22), wumpus_kb.ask_if_true(P22)
###Output
_____no_output_____
###Markdown
Proof by Resolution Recall that our goal is to check whether $\text{KB} \vDash \alpha$, i.e. is $\text{KB} \implies \alpha$ true in every model. Suppose we wanted to check if $P \implies Q$ is valid. We check the satisfiability of $\neg (P \implies Q)$, which can be rewritten as $P \land \neg Q$. If $P \land \neg Q$ is unsatisfiable, then $P \implies Q$ must be true in all models. This gives us the result "$\text{KB} \vDash \alpha$ if and only if $\text{KB} \land \neg \alpha$ is unsatisfiable". This technique corresponds to proof by contradiction, a standard mathematical proof technique. We assume $\alpha$ to be false and show that this leads to a contradiction with known axioms in $\text{KB}$. We obtain a contradiction by making valid inferences using inference rules. In this proof we use a single inference rule, resolution, which states $(l_1 \lor \dots \lor l_k) \land (m_1 \lor \dots \lor m_n) \land (l_i \iff \neg m_j) \implies l_1 \lor \dots \lor l_{i - 1} \lor l_{i + 1} \lor \dots \lor l_k \lor m_1 \lor \dots \lor m_{j - 1} \lor m_{j + 1} \lor \dots \lor m_n$. Applying resolution yields a clause which we add to the KB. We keep doing this until:* There are no new clauses that can be added, in which case $\text{KB} \nvDash \alpha$.* Two clauses resolve to yield the empty clause, in which case $\text{KB} \vDash \alpha$. The empty clause is equivalent to False because it arises only from resolving two complementary unit clauses such as $P$ and $\neg P$, which is a contradiction as both $P$ and $\neg P$ can't be True at the same time. There is one catch, however: the algorithm that implements proof by resolution cannot handle complex sentences. Implications and bi-implications have to be simplified into simpler clauses. We already know that *every sentence of propositional logic is logically equivalent to a conjunction of clauses*. We will use this fact to our advantage and simplify the input sentence into the **conjunctive normal form** (CNF), which is a conjunction of disjunctions of literals. For example:$$(A\lor B)\land (\neg B\lor C\lor\neg D)\land (D\lor\neg E)$$This is equivalent to the POS (Product of sums) form in digital electronics. Here's an outline of how the conversion is done:1. Convert bi-implications to implications: $\alpha\iff\beta$ can be written as $(\alpha\implies\beta)\land(\beta\implies\alpha)$. This also applies to compound sentences: $\alpha\iff(\beta\lor\gamma)$ can be written as $(\alpha\implies(\beta\lor\gamma))\land((\beta\lor\gamma)\implies\alpha)$.2. Convert implications to their logical equivalents: $\alpha\implies\beta$ can be written as $\neg\alpha\lor\beta$.3. Move negation inwards: CNF requires atomic literals, so negation cannot appear on a compound statement. De Morgan's laws will be helpful here: $\neg(\alpha\land\beta)\equiv(\neg\alpha\lor\neg\beta)$ and $\neg(\alpha\lor\beta)\equiv(\neg\alpha\land\neg\beta)$.4. Distribute disjunction over conjunction: disjunction and conjunction are distributive over each other. Now that we only have conjunctions, disjunctions and negations in our expression, we will distribute disjunctions over conjunctions wherever possible, as this will give us a sentence which is a conjunction of simpler clauses, which is what we wanted in the first place. We need a term of the form $(\alpha_{1}\lor\alpha_{2}\lor\alpha_{3}...)\land(\beta_{1}\lor\beta_{2}\lor\beta_{3}...)\land(\gamma_{1}\lor\gamma_{2}\lor\gamma_{3}...)\land...$ The `to_cnf` function executes this conversion using helper subroutines.
###Code
psource(to_cnf)
###Output
_____no_output_____
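###Markdown
Before looking at `to_cnf`'s subroutines, here is the resolution rule itself on a tiny pair of clauses. This is a minimal sketch assuming the `pl_resolve` helper from `logic.py`, which returns the clauses obtained by resolving its two arguments.
###Code
# (P | Q) and (~P | R) contain the complementary pair P / ~P,
# so resolving them should yield the single resolvent (Q | R).
P, Q, R = expr('P, Q, R')
pl_resolve(P | Q, ~P | R)
###Output
_____no_output_____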
###Markdown
`to_cnf` calls three subroutines:
* `eliminate_implications` converts bi-implications and implications to their logical equivalents.
* `move_not_inwards` removes negations from compound statements and moves them inwards using De Morgan's laws.
* `distribute_and_over_or` distributes disjunctions over conjunctions.
Run the cell below for implementation details.
###Code
psource(eliminate_implications)
psource(move_not_inwards)
psource(distribute_and_over_or)
###Output
_____no_output_____
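###Markdown
As a quick sanity check, here is a minimal sketch of the subroutines applied one at a time; the expected results in the comments follow the rewrite rules outlined above.
###Code
A, B, C = expr('A, B, C')
eliminate_implications(A |'<=>'| B)    # expect (A | ~B) & (B | ~A)
move_not_inwards(~(A & B))             # expect ~A | ~B, by De Morgan's laws
distribute_and_over_or((A & B) | C)    # expect (A | C) & (B | C)
###Output
_____no_output_____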
###Markdown
Let's convert some sentences to see how it works:
###Code
A, B, C, D = expr('A, B, C, D')
to_cnf(A |'<=>'| B)
to_cnf(A |'<=>'| (B & C))
to_cnf(A & (B | (C & D)))
to_cnf((A |'<=>'| ~B) |'==>'| (C | ~D))
###Output
_____no_output_____
###Markdown
Coming back to our resolution problem, we can see how the `to_cnf` function is utilized here:
###Code
psource(pl_resolution)
pl_resolution(wumpus_kb, ~P11), pl_resolution(wumpus_kb, P11)
pl_resolution(wumpus_kb, ~P22), pl_resolution(wumpus_kb, P22)
###Output
_____no_output_____
###Markdown
Forward and backward chaining
Previously, we said we would look at two algorithms to check if a sentence is entailed by the `KB`. Here's a third one. The difference is that our goal now is to determine whether a knowledge base of definite clauses entails a single proposition symbol *q* - the query. There is a catch, however: the knowledge base can only contain **Horn clauses**.
Horn Clauses
Horn clauses can be defined as a *disjunction* of *literals* with **at most** one positive literal. A Horn clause with exactly one positive literal is called a *definite clause*.
A Horn clause might look like $\neg a\lor\neg b\lor\neg c\lor\neg d... \lor z$. This, coincidentally, is also a definite clause. Using De Morgan's laws together with the equivalence $\neg p\lor q\equiv p\implies q$, the example above can be rewritten as $a\land b\land c\land d ... \implies z$. This seems like a logical representation of how humans process known data and facts: assuming percepts `a`, `b`, `c`, `d` ... to be true simultaneously, we can infer `z` to also be true at that point in time.
There are some interesting aspects of Horn clauses that make algorithmic inference or *resolution* easier.
- Definite clauses can be written as implications. The most important simplification a definite clause provides is that it can be written as an implication. The premise (or the knowledge that leads to the implication) is a conjunction of positive literals, and the conclusion (the implied statement) is a single positive literal. The sentence thus becomes easier to understand. The premise and the conclusion are conventionally called the *body* and the *head* respectively. A single positive literal is called a *fact*. (A quick mechanical check of this equivalence follows the next code cell.)
- Forward chaining and backward chaining can be used for inference from Horn clauses. Forward chaining is semantically identical to `AND-OR-Graph-Search` from the chapter on search algorithms. Implementational details will be explained shortly.
- Deciding entailment with Horn clauses is linear in the size of the knowledge base. Surprisingly, the forward and backward chaining algorithms traverse each element of the knowledge base at most once, greatly simplifying the problem.
The function `pl_fc_entails` implements forward chaining to see if a knowledge base `KB` entails a symbol `q`. Before we proceed further, note that `pl_fc_entails` doesn't use an ordinary `KB` instance. The knowledge base here is an instance of the `PropDefiniteKB` class, derived from the `PropKB` class, but modified to store definite clauses. The main point of difference is the inclusion of a helper method in `PropDefiniteKB` that returns a list of clauses in the KB that have a given symbol `p` in their premise.
###Code
psource(PropDefiniteKB.clauses_with_premise)
###Output
_____no_output_____
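###Markdown
Coming back to the implication form of a definite clause (the quick check promised above): converting an implication with `to_cnf` should give back a disjunction with exactly one positive literal, i.e. a Horn clause.
###Code
# (A & B) ==> Z should convert to a disjunction of ~A, ~B and Z,
# whose single positive literal is the head Z.
to_cnf(expr('(A & B) ==> Z'))
###Output
_____no_output_____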
###Markdown
Let's now have a look at the `pl_fc_entails` algorithm.
###Code
psource(pl_fc_entails)
###Output
_____no_output_____
###Markdown
The function accepts a knowledge base `KB` (an instance of `PropDefiniteKB`) and a query `q` as inputs.
* `count` initially stores the number of symbols in the premise of each sentence in the knowledge base. The `conjuncts` helper function separates a given sentence at conjunctions.
* `inferred` is initialized as a *boolean* defaultdict. This will be used later to check if we have inferred all premises of each clause of the agenda.
* `agenda` initially stores a list of clauses that the knowledge base knows to be true. The `is_prop_symbol` helper function checks if the given symbol is a valid propositional logic symbol.
We now iterate through `agenda`, popping a symbol `p` on each iteration. If the query `q` is the same as `p`, we know that entailment holds. The agenda is processed, reducing `count` by one for each implication with a premise `p`. A conclusion is added to the agenda when `count` reaches zero: this means we know all the premises of that particular implication to be true. `clauses_with_premise` is a helpful method of the `PropKB` class; it returns a list of clauses in the knowledge base that have `p` in their premise.
Now that we have an idea of how this function works, let's see a few examples of its usage, but we first need to define our knowledge base. We assume we know the following clauses to be true.
###Code
clauses = ['(B & F)==>E',
'(A & E & F)==>G',
'(B & C)==>F',
'(A & B)==>D',
'(E & F)==>H',
'(H & I)==>J',
'A',
'B',
'C']
###Output
_____no_output_____
###Markdown
We will now `tell` this information to our knowledge base.
###Code
definite_clauses_KB = PropDefiniteKB()
for clause in clauses:
    definite_clauses_KB.tell(expr(clause))
###Output
_____no_output_____
###Markdown
We can now check if our knowledge base entails the following queries.
###Code
pl_fc_entails(definite_clauses_KB, expr('G'))
pl_fc_entails(definite_clauses_KB, expr('H'))
pl_fc_entails(definite_clauses_KB, expr('I'))
pl_fc_entails(definite_clauses_KB, expr('J'))
###Output
_____no_output_____
###Markdown
Effective Propositional Model Checking
The previous segments elucidate the algorithmic procedure for model checking. In this segment, we look at ways of making it computationally efficient.
The problem we are trying to solve is conventionally called the _propositional satisfiability problem_, abbreviated as the _SAT_ problem. In layman's terms, if there exists a model that satisfies a given Boolean formula, the formula is called satisfiable. The SAT problem was the first problem to be proven _NP-complete_. The main characteristics of an NP-complete problem are:
- Given a solution to such a problem, it is easy to verify if the solution solves the problem.
- The time required to actually solve the problem using any known algorithm increases exponentially with respect to the size of the problem.
Due to these properties, heuristic and approximate methods are often applied to find solutions to these problems. It is extremely important to be able to solve large-scale SAT problems efficiently because many combinatorial problems in computer science can be conveniently reduced to checking the satisfiability of a propositional sentence under some constraints.
We will introduce two new algorithms that perform propositional model checking in a computationally effective way.
1. DPLL (Davis-Putnam-Logemann-Loveland) algorithm
This algorithm is very similar to Backtracking-Search. It recursively enumerates possible models in a depth-first fashion, with the following improvements over algorithms like `tt_entails`:
1. Early termination: in certain cases, the algorithm can detect the truth value of a statement using just a partially completed model. For example, $(P\lor Q)\land(P\lor R)$ is true if P is true, regardless of other variables. This reduces the search space significantly.
2. Pure symbol heuristic: a symbol that has the same sign (positive or negative) in all clauses is called a _pure symbol_. It isn't difficult to see that any satisfiable model will have the pure symbols assigned such that their parent clauses become _true_. For example, $(P\lor\neg Q)\land(\neg Q\lor\neg R)\land(R\lor P)$ has P and Q as pure symbols, and for the sentence to be true, P _has_ to be true and Q _has_ to be false. The pure symbol heuristic thus simplifies the problem a bit.
3. Unit clause heuristic: in the context of DPLL, clauses with just one literal and clauses with all but one _false_ literal are called unit clauses. If a clause is a unit clause, it can only be satisfied by assigning the necessary value to make the last literal true. We have no other choice. Assigning one unit clause can create another unit clause. For example, when P is false, $(P\lor Q)$ becomes a unit clause, causing _true_ to be assigned to Q. A series of forced assignments derived from previous unit clauses is called _unit propagation_. In this way, this heuristic simplifies the problem further.
The algorithm often employs other tricks to scale up to large problems; these are currently out of the scope of this notebook. Refer to section 7.6 of the book for more details.
Let's have a look at the algorithm.
###Code
psource(dpll)
###Output
_____no_output_____
###Markdown
The algorithm uses the ideas described above to check the satisfiability of a sentence in propositional logic. It recursively calls itself, simplifying the problem at each step. It also uses the helper functions `find_pure_symbol` and `find_unit_clause` to carry out steps 2 and 3 above.
The `dpll_satisfiable` helper function converts the input clauses to _conjunctive normal form_ and calls the `dpll` function with the correct parameters.
###Code
psource(dpll_satisfiable)
###Output
_____no_output_____
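###Markdown
The two heuristics can also be tried on their own. The sketch below assumes the `find_pure_symbol` and `find_unit_clause` helpers from `logic.py`: the first takes a list of symbols and a list of clauses, the second a list of clauses and a partial model.
###Code
P, Q, R = expr('P, Q, R')
# In (P | ~Q) & (~Q | ~R) & (R | P), P occurs only positively,
# so it is a pure symbol and should be assigned True.
find_pure_symbol([P, Q, R], [P | ~Q, ~Q | ~R, R | P])
###Output
_____no_output_____
###Markdown
Similarly, once P is fixed to False in the partial model, the clause $(P \lor Q)$ becomes a unit clause and forces Q to True.
###Code
find_unit_clause([P | Q], {P: False})
###Output
_____no_output_____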
###Markdown
Let's see a few examples of usage.
###Code
A, B, C, D = expr('A, B, C, D')
dpll_satisfiable(A & B & ~C & D)
###Output
_____no_output_____
###Markdown
This is a simple case to highlight that the algorithm actually works.
###Code
dpll_satisfiable((A & B) | (C & ~A) | (B & ~D))
###Output
_____no_output_____
###Markdown
If a particular symbol isn't present in the solution, it means that the solution is independent of the value of that symbol. In this case, the solution is independent of A.
###Code
dpll_satisfiable(A |'<=>'| B)
dpll_satisfiable((A |'<=>'| B) |'==>'| (C & ~A))
dpll_satisfiable((A | (B & C)) |'<=>'| ((A | B) & (A | C)))
###Output
_____no_output_____
###Markdown
2. WalkSAT algorithm
This algorithm is very similar to Hill climbing. On every iteration, the algorithm picks an unsatisfied clause and flips a symbol in the clause. This is similar to finding a neighboring state in the `hill_climbing` algorithm. The symbol to be flipped is decided by an evaluation function that counts the number of unsatisfied clauses. Sometimes, symbols are also flipped randomly to avoid local optima. A subtle balance between greediness and randomness is required. Alternatively, some versions of the algorithm restart with a completely new random assignment if no solution has been found for too long, as a way of getting out of local minima of the number of unsatisfied clauses.
Let's have a look at the algorithm.
###Code
psource(WalkSAT)
###Output
_____no_output_____
###Markdown
The function takes three arguments:
1. The `clauses` we want to satisfy.
2. The probability `p` of randomly changing a symbol.
3. The maximum number of flips (`max_flips`) the algorithm will run for.
If the clauses are still unsatisfied, the algorithm returns `None` to denote failure. The algorithm is identical in concept to Hill climbing and the code isn't difficult to understand. Let's see a few examples of usage.
###Code
A, B, C, D = expr('A, B, C, D')
WalkSAT([A, B, ~C, D], 0.5, 100)
###Output
_____no_output_____
###Markdown
This is a simple case to show that the algorithm converges.
###Code
WalkSAT([A & B, A & C], 0.5, 100)
WalkSAT([A & B, C & D, C & B], 0.5, 100)
WalkSAT([A & B, C | D, ~(D | B)], 0.5, 1000)
###Output
_____no_output_____
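###Markdown
The last call above is expected to find no model. As a cross-check, we can hand the conjunction of those same clauses to `dpll_satisfiable`, which performs a complete search and should therefore return `False` for a genuine contradiction.
###Code
# A & B forces B to be True, while ~(D | B) forces B to be False,
# so the conjunction is unsatisfiable.
dpll_satisfiable((A & B) & (C | D) & ~(D | B))
###Output
_____no_output_____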
###Markdown
The last WalkSAT call doesn't give any output because WalkSAT did not find any model in which these clauses hold; as the `dpll_satisfiable` cross-check above confirms, they together form a contradiction, so there is no solution to find. One point of difference between WalkSAT and `dpll_satisfiable` is that the two take their inputs differently. For WalkSAT to take complete sentences as input, we can write a helper function that converts the input sentence into conjunctive normal form and then calls WalkSAT with the list of conjuncts of the CNF form of the sentence.
###Code
def WalkSAT_CNF(sentence, p=0.5, max_flips=10000):
    # Convert the sentence to CNF and pass the flip probability p
    # through to WalkSAT (rather than hard-coding it to 0).
    return WalkSAT(conjuncts(to_cnf(sentence)), p, max_flips)
###Output
_____no_output_____
###Markdown
Now we can call `WalkSAT_CNF` and `dpll_satisfiable` with the same arguments.
###Code
WalkSAT_CNF((A & B) | (C & ~A) | (B & ~D), 0.5, 1000)
###Output
_____no_output_____
###Markdown
It works!
Notice that the solution generated by WalkSAT doesn't omit variables that the sentence doesn't depend upon. If the sentence is independent of a particular variable, the solution contains a random value for that variable because of the stochastic nature of the algorithm.
Let's compare the runtime of WalkSAT and DPLL for a few cases, using the `%timeit` magic.
###Code
sentence_1 = A |'<=>'| B
sentence_2 = (A & B) | (C & ~A) | (B & ~D)
sentence_3 = (A | (B & C)) |'<=>'| ((A | B) & (A | C))
%timeit dpll_satisfiable(sentence_1)
%timeit dpll_satisfiable(sentence_2)
%timeit dpll_satisfiable(sentence_3)
%timeit WalkSAT_CNF(sentence_1)
%timeit WalkSAT_CNF(sentence_2)
%timeit WalkSAT_CNF(sentence_3)
###Output
_____no_output_____
###Markdown
On an average, for solvable cases, `WalkSAT` is quite faster than `dpll` because, for a small number of variables, `WalkSAT` can reduce the search space significantly. Results can be different for sentences with more symbols though.Feel free to play around with this to understand the trade-offs of these algorithms better. SATPlan In this section we show how to make plans by logical inference. The basic idea is very simple. It includes the following three steps:1. Constuct a sentence that includes: 1. A colection of assertions about the initial state. 2. The successor-state axioms for all the possible actions at each time up to some maximum time t. 3. The assertion that the goal is achieved at time t.2. Present the whole sentence to a SAT solver.3. Assuming a model is found, extract from the model those variables that represent actions and are assigned true. Together they represent a plan to achieve the goals.Lets have a look at the algorithm
###Code
psource(SAT_plan)
###Output
_____no_output_____
###Markdown
Let's see a few examples of its usage. First we define a transition and then call `SAT_plan`.
###Code
transition = {'A': {'Left': 'A', 'Right': 'B'},
'B': {'Left': 'A', 'Right': 'C'},
'C': {'Left': 'B', 'Right': 'C'}}
print(SAT_plan('A', transition, 'C', 2))
print(SAT_plan('A', transition, 'B', 3))
print(SAT_plan('C', transition, 'A', 3))
###Output
_____no_output_____
###Markdown
Let us do the same for another transition.
###Code
transition = {(0, 0): {'Right': (0, 1), 'Down': (1, 0)},
(0, 1): {'Left': (1, 0), 'Down': (1, 1)},
(1, 0): {'Right': (1, 0), 'Up': (1, 0), 'Left': (1, 0), 'Down': (1, 0)},
(1, 1): {'Left': (1, 0), 'Up': (0, 1)}}
print(SAT_plan((0, 0), transition, (1, 1), 4))
###Output
_____no_output_____
###Markdown
First-Order Logic Knowledge Bases: `FolKB`
The class `FolKB` can be used to represent a knowledge base of first-order logic sentences. You would initialize and use it the same way as you would for `PropKB`, except that the clauses are first-order definite clauses. We will see how to write such clauses to create a database and query them in the following sections.
Criminal KB
In this section we create a `FolKB` based on the following paragraph.
The law says that it is a crime for an American to sell weapons to hostile nations. The country Nono, an enemy of America, has some missiles, and all of its missiles were sold to it by Colonel West, who is American.
The first step is to extract the facts and convert them into first-order definite clauses. Extracting the facts from data alone is a challenging task. Fortunately, we have a small paragraph and can do the extraction and conversion manually. We'll store the clauses in a list aptly named `clauses`.
###Code
clauses = []
###Output
_____no_output_____
###Markdown
“... it is a crime for an American to sell weapons to hostile nations”
The keywords to look for here are 'crime', 'American', 'sell', 'weapon' and 'hostile'. We use predicate symbols to make meaning of them.
* `Criminal(x)`: `x` is a criminal
* `American(x)`: `x` is an American
* `Sells(x, y, z)`: `x` sells `y` to `z`
* `Weapon(x)`: `x` is a weapon
* `Hostile(x)`: `x` is a hostile nation
Let us now combine them with appropriate variable naming to depict the meaning of the sentence. The criminal `x` is also the American `x` who sells weapon `y` to `z`, which is a hostile nation.
$\text{American}(x) \land \text{Weapon}(y) \land \text{Sells}(x, y, z) \land \text{Hostile}(z) \implies \text{Criminal}(x)$
###Code
clauses.append(expr("(American(x) & Weapon(y) & Sells(x, y, z) & Hostile(z)) ==> Criminal(x)"))
###Output
_____no_output_____
###Markdown
"The country Nono, an enemy of America"We now know that Nono is an enemy of America. We represent these nations using the constant symbols `Nono` and `America`. the enemy relation is show using the predicate symbol `Enemy`.$\text{Enemy}(\text{Nono}, \text{America})$
###Code
clauses.append(expr("Enemy(Nono, America)"))
###Output
_____no_output_____
###Markdown
"Nono ... has some missiles"This states the existence of some missile which is owned by Nono. $\exists x \text{Owns}(\text{Nono}, x) \land \text{Missile}(x)$. We invoke existential instantiation to introduce a new constant `M1` which is the missile owned by Nono.$\text{Owns}(\text{Nono}, \text{M1}), \text{Missile}(\text{M1})$
###Code
clauses.append(expr("Owns(Nono, M1)"))
clauses.append(expr("Missile(M1)"))
###Output
_____no_output_____
###Markdown
"All of its missiles were sold to it by Colonel West"If Nono owns something and it classifies as a missile, then it was sold to Nono by West.$\text{Missile}(x) \land \text{Owns}(\text{Nono}, x) \implies \text{Sells}(\text{West}, x, \text{Nono})$
###Code
clauses.append(expr("(Missile(x) & Owns(Nono, x)) ==> Sells(West, x, Nono)"))
###Output
_____no_output_____
###Markdown
"West, who is American"West is an American.$\text{American}(\text{West})$
###Code
clauses.append(expr("American(West)"))
###Output
_____no_output_____
###Markdown
We also know, from our understanding of language, that missiles are weapons and that an enemy of America counts as “hostile”.
$\text{Missile}(x) \implies \text{Weapon}(x)$
$\text{Enemy}(x, \text{America}) \implies \text{Hostile}(x)$
###Code
clauses.append(expr("Missile(x) ==> Weapon(x)"))
clauses.append(expr("Enemy(x, America) ==> Hostile(x)"))
###Output
_____no_output_____
###Markdown
Now that we have converted the information into first-order definite clauses, we can create our first-order logic knowledge base.
###Code
crime_kb = FolKB(clauses)
###Output
_____no_output_____
###Markdown
The `subst` helper function substitutes variables with given values in first-order logic statements. This will be useful in later algorithms. Its implementation is quite simple and self-explanatory.
###Code
psource(subst)
###Output
_____no_output_____
###Markdown
Here's an example of how `subst` can be used.
###Code
subst({x: expr('Nono'), y: expr('M1')}, expr('Owns(x, y)'))
###Output
_____no_output_____
###Markdown
Inference in First-Order Logic
In this section we look at a forward chaining and a backward chaining algorithm for `FolKB`. Both aforementioned algorithms rely on a process called unification, a key component of all first-order inference algorithms.
Unification
We sometimes require finding substitutions that make different logical expressions look identical. This process, called unification, is done by the `unify` algorithm. It takes as input two sentences and returns a unifier for them if one exists. A unifier is a dictionary which stores the substitutions required to make the two sentences identical. It does so by recursively unifying the components of a sentence, where the unification of a variable symbol `var` with a constant symbol `Const` is the mapping `{var: Const}`. Let's look at a few examples.
###Code
unify(expr('x'), 3)
unify(expr('A(x)'), expr('A(B)'))
unify(expr('Cat(x) & Dog(Dobby)'), expr('Cat(Bella) & Dog(y)'))
###Output
_____no_output_____
###Markdown
In cases where there is no possible substitution that unifies the two sentences, the function returns `None`.
###Code
print(unify(expr('Cat(x)'), expr('Dog(Dobby)')))
###Output
_____no_output_____
###Markdown
We also need to take care that we do not unintentionally use the same variable name in two sentences. `unify` treats them as a single variable, which prevents it from taking multiple values.
###Code
print(unify(expr('Cat(x) & Dog(Dobby)'), expr('Cat(Bella) & Dog(x)')))
###Output
_____no_output_____
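###Markdown
Internally, the inference algorithms below avoid such accidental clashes by renaming variables apart before unifying. Here is a quick sketch assuming the `standardize_variables` helper from `logic.py`, which replaces every variable in a sentence with a freshly generated one.
###Code
# The variable x is replaced by a new, unique variable (the exact
# generated name, e.g. v_1, depends on an internal counter).
standardize_variables(expr('Missile(x) ==> Weapon(x)'))
###Output
_____no_output_____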
###Markdown
Forward Chaining Algorithm
We consider the simple forward-chaining algorithm presented in Figure 9.3. We look at each rule in the knowledge base and see if its premises can be satisfied. This is done by finding a substitution which unifies each of the premises with a clause in the `KB`. If we are able to unify the premises, the conclusion (with the corresponding substitution) is added to the `KB`. This inferencing process is repeated until either the query can be answered or no new sentences can be added. We test if the newly added clause unifies with the query, in which case the substitution yielded by `unify` is an answer to the query. If we run out of sentences to infer, the query is a failure.
The function `fol_fc_ask` is a generator which yields all substitutions which validate the query.
###Code
psource(fol_fc_ask)
###Output
_____no_output_____
###Markdown
Let's find out all the hostile nations. Note that we only told the `KB` that Nono was an enemy of America, not that it was hostile.
###Code
answer = fol_fc_ask(crime_kb, expr('Hostile(x)'))
print(list(answer))
###Output
_____no_output_____
###Markdown
The generator returned a single substitution, which says that Nono is a hostile nation. See how, after adding another enemy nation, the generator returns two substitutions.
###Code
crime_kb.tell(expr('Enemy(JaJa, America)'))
answer = fol_fc_ask(crime_kb, expr('Hostile(x)'))
print(list(answer))
###Output
_____no_output_____
###Markdown
Note: `fol_fc_ask` makes changes to the `KB` by adding sentences to it.
Backward Chaining Algorithm
This algorithm works backward from the goal, chaining through rules to find known facts that support the proof. Suppose `goal` is the query we want to find the substitution for. We find rules of the form $\text{lhs} \implies \text{goal}$ in the `KB` and try to prove `lhs`. There may be multiple clauses in the `KB` which give multiple `lhs`. It is sufficient to prove only one of these, but to prove an `lhs`, all the conjuncts in the `lhs` of the clause must be proved. This makes it similar to And/Or search.
OR
The OR part of the algorithm comes from our choice to select any clause of the form $\text{lhs} \implies \text{goal}$. Looking at all rules whose `rhs` unifies with the `goal`, we yield a substitution which proves all the conjuncts in the `lhs`. We use `parse_definite_clause` to attain `lhs` and `rhs` from a clause of the form $\text{lhs} \implies \text{rhs}$. For atomic facts the `lhs` is an empty list.
###Code
psource(fol_bc_or)
###Output
_____no_output_____
###Markdown
AND
The AND part corresponds to proving all the conjuncts in the `lhs`. We need to find a substitution which proves each and every clause in the list of conjuncts.
###Code
psource(fol_bc_and)
###Output
_____no_output_____
###Markdown
Now the main function `fol_bc_ask` calls `fol_bc_or` with the substitution initialized as empty. The `ask` method of `FolKB` uses `fol_bc_ask` and fetches the first substitution returned by the generator to answer the query. Let's query the knowledge base we created from `clauses` to find hostile nations.
###Code
# Rebuild KB because running fol_fc_ask would add new facts to the KB
crime_kb = FolKB(clauses)
crime_kb.ask(expr('Hostile(x)'))
###Output
_____no_output_____
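###Markdown
We can also pose the query from the original paragraph and ask who the criminal is; backward chaining should bind `x` to `West` (alongside some internally renamed variables).
###Code
crime_kb.ask(expr('Criminal(x)'))
###Output
_____no_output_____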
###Markdown
You may notice some new variables in the substitution. They are introduced to standardize the variable names and to prevent naming problems, as discussed in the [Unification section](#Unification).
Appendix: The Implementation of `|'==>'|`
Consider the `Expr` formed by this syntax:
###Code
P |'==>'| ~Q
###Output
_____no_output_____
###Markdown
What is the funny `|'==>'|` syntax? The trick is that "`|`" is just the regular Python or-operator, and so the expression above is exactly equivalent to this:
###Code
(P | '==>') | ~Q
###Output
_____no_output_____
###Markdown
In other words, there are two applications of or-operators. Here's the first one:
###Code
P | '==>'
###Output
_____no_output_____
###Markdown
What is going on here is that the `__or__` method of `Expr` serves a dual purpose. If the right-hand side is another `Expr` (or a number), then the result is an `Expr`, as in `(P | Q)`. But if the right-hand side is a string, then the string is taken to be an operator, and we create a node in the abstract syntax tree corresponding to a partially-filled `Expr`, one where we know the left-hand side is `P` and the operator is `==>`, but we don't yet know the right-hand side.
The `PartialExpr` class has an `__or__` method that says to create an `Expr` node with the right-hand side filled in. Here we can see the combination of the `PartialExpr` with `~Q` to create a complete `Expr`:
###Code
partial = PartialExpr('==>', P)
partial | ~Q
###Output
_____no_output_____
###Markdown
This [trick](http://code.activestate.com/recipes/384122-infix-operators/) is due to [Ferdinand Jamitzky](http://code.activestate.com/recipes/users/98863/), with a modification by [C. G. Vedant](https://github.com/Chipe1), who suggested using a string inside the or-bars.
Appendix: The Implementation of `expr`
How does `expr` parse a string into an `Expr`? It turns out there are two tricks (besides the Jamitzky/Vedant trick):
1. We do a string substitution, replacing "`==>`" with "`|'==>'|`" (and likewise for other operators).
2. We `eval` the resulting string in an environment in which every identifier is bound to a symbol with that identifier as the `op`.
In other words,
###Code
expr('~(P & Q) ==> (~P | ~Q)')
###Output
_____no_output_____
###Markdown
is equivalent to doing:
###Code
P, Q = symbols('P, Q')
~(P & Q) |'==>'| (~P | ~Q)
###Output
_____no_output_____
###Markdown
One thing to beware of: this puts `==>` at the same precedence level as `"|"`, which is not quite right. For example, we get this:
###Code
P & Q |'==>'| P | Q
###Output
_____no_output_____
###Markdown
which is probably not what we meant; when in doubt, put in extra parens:
###Code
(P & Q) |'==>'| (P | Q)
###Output
_____no_output_____
###Markdown
Examples
###Code
from notebook import Canvas_fol_bc_ask
canvas_bc_ask = Canvas_fol_bc_ask('canvas_bc_ask', crime_kb, expr('Criminal(x)'))
###Output
_____no_output_____
###Markdown
Logic This Jupyter notebook acts as supporting material for topics covered in __Chapter 6 Logical Agents__, __Chapter 7 First-Order Logic__ and __Chapter 8 Inference in First-Order Logic__ of the book *[Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu)*. We make use the implementations in the [logic.py](https://github.com/aimacode/aima-python/blob/master/logic.py) module. See the [intro notebook](https://github.com/aimacode/aima-python/blob/master/intro.ipynb) for instructions.Let's first import everything from the `logic` module.
###Code
from utils import *
from logic import *
from notebook import psource
###Output
_____no_output_____
###Markdown
CONTENTS- Logical sentences - Expr - PropKB - Knowledge-based agents - Inference in propositional knowledge base - Truth table enumeration - Proof by resolution - Forward and backward chaining - DPLL - WalkSAT - SATPlan - FolKB - Inference in first order knowledge base - Unification - Forward chaining algorithm - Backward chaining algorithm Logical Sentences The `Expr` class is designed to represent any kind of mathematical expression. The simplest type of `Expr` is a symbol, which can be defined with the function `Symbol`:
###Code
Symbol('x')
###Output
_____no_output_____
###Markdown
Or we can define multiple symbols at the same time with the function `symbols`:
###Code
(x, y, P, Q, f) = symbols('x, y, P, Q, f')
###Output
_____no_output_____
###Markdown
We can combine `Expr`s with the regular Python infix and prefix operators. Here's how we would form the logical sentence "P and not Q":
###Code
P & ~Q
###Output
_____no_output_____
###Markdown
This works because the `Expr` class overloads the `&` operator with this definition:```pythondef __and__(self, other): return Expr('&', self, other)``` and does similar overloads for the other operators. An `Expr` has two fields: `op` for the operator, which is always a string, and `args` for the arguments, which is a tuple of 0 or more expressions. By "expression," I mean either an instance of `Expr`, or a number. Let's take a look at the fields for some `Expr` examples:
###Code
sentence = P & ~Q
sentence.op
sentence.args
P.op
P.args
Pxy = P(x, y)
Pxy.op
Pxy.args
###Output
_____no_output_____
###Markdown
It is important to note that the `Expr` class does not define the *logic* of Propositional Logic sentences; it just gives you a way to *represent* expressions. Think of an `Expr` as an [abstract syntax tree](https://en.wikipedia.org/wiki/Abstract_syntax_tree). Each of the `args` in an `Expr` can be either a symbol, a number, or a nested `Expr`. We can nest these trees to any depth. Here is a deply nested `Expr`:
###Code
3 * f(x, y) + P(y) / 2 + 1
###Output
_____no_output_____
###Markdown
Operators for Constructing Logical SentencesHere is a table of the operators that can be used to form sentences. Note that we have a problem: we want to use Python operators to make sentences, so that our programs (and our interactive sessions like the one here) will show simple code. But Python does not allow implication arrows as operators, so for now we have to use a more verbose notation that Python does allow: `|'==>'|` instead of just `==>`. Alternately, you can always use the more verbose `Expr` constructor forms:| Operation | Book | Python Infix Input | Python Output | Python `Expr` Input|--------------------------|----------------------|-------------------------|---|---|| Negation | ¬ P | `~P` | `~P` | `Expr('~', P)`| And | P ∧ Q | `P & Q` | `P & Q` | `Expr('&', P, Q)`| Or | P ∨ Q | `P` &124; `Q`| `P` &124; `Q` | `Expr('`&124;`', P, Q)`| Inequality (Xor) | P ≠ Q | `P ^ Q` | `P ^ Q` | `Expr('^', P, Q)`| Implication | P → Q | `P` &124;`'==>'`&124; `Q` | `P ==> Q` | `Expr('==>', P, Q)`| Reverse Implication | Q ← P | `Q` &124;`'&124; `P` |`Q <== P` | `Expr('<==', Q, P)`| Equivalence | P ↔ Q | `P` &124;`''`&124; `Q` |`P Q` | `Expr('', P, Q)`Here's an example of defining a sentence with an implication arrow:
###Code
~(P & Q) |'==>'| (~P | ~Q)
###Output
_____no_output_____
###Markdown
`expr`: a Shortcut for Constructing SentencesIf the `|'==>'|` notation looks ugly to you, you can use the function `expr` instead:
###Code
expr('~(P & Q) ==> (~P | ~Q)')
###Output
_____no_output_____
###Markdown
`expr` takes a string as input, and parses it into an `Expr`. The string can contain arrow operators: `==>`, ``, which are handled as if they were regular Python infix operators. And `expr` automatically defines any symbols, so you don't need to pre-define them:
###Code
expr('sqrt(b ** 2 - 4 * a * c)')
###Output
_____no_output_____
###Markdown
For now that's all you need to know about `expr`. If you are interested, we explain the messy details of how `expr` is implemented and how `|'==>'|` is handled in the appendix. Propositional Knowledge Bases: `PropKB`The class `PropKB` can be used to represent a knowledge base of propositional logic sentences.We see that the class `KB` has four methods, apart from `__init__`. A point to note here: the `ask` method simply calls the `ask_generator` method. Thus, this one has already been implemented, and what you'll have to actually implement when you create your own knowledge base class (though you'll probably never need to, considering the ones we've created for you) will be the `ask_generator` function and not the `ask` function itself.The class `PropKB` now.* `__init__(self, sentence=None)` : The constructor `__init__` creates a single field `clauses` which will be a list of all the sentences of the knowledge base. Note that each one of these sentences will be a 'clause' i.e. a sentence which is made up of only literals and `or`s.* `tell(self, sentence)` : When you want to add a sentence to the KB, you use the `tell` method. This method takes a sentence, converts it to its CNF, extracts all the clauses, and adds all these clauses to the `clauses` field. So, you need not worry about `tell`ing only clauses to the knowledge base. You can `tell` the knowledge base a sentence in any form that you wish; converting it to CNF and adding the resulting clauses will be handled by the `tell` method.* `ask_generator(self, query)` : The `ask_generator` function is used by the `ask` function. It calls the `tt_entails` function, which in turn returns `True` if the knowledge base entails query and `False` otherwise. The `ask_generator` itself returns an empty dict `{}` if the knowledge base entails query and `None` otherwise. This might seem a little bit weird to you. After all, it makes more sense just to return a `True` or a `False` instead of the `{}` or `None` But this is done to maintain consistency with the way things are in First-Order Logic, where an `ask_generator` function is supposed to return all the substitutions that make the query true. Hence the dict, to return all these substitutions. I will be mostly be using the `ask` function which returns a `{}` or a `False`, but if you don't like this, you can always use the `ask_if_true` function which returns a `True` or a `False`.* `retract(self, sentence)` : This function removes all the clauses of the sentence given, from the knowledge base. Like the `tell` function, you don't have to pass clauses to remove them from the knowledge base; any sentence will do fine. The function will take care of converting that sentence to clauses and then remove those. Wumpus World KBLet us create a `PropKB` for the wumpus world with the sentences mentioned in `section 7.4.3`.
###Code
wumpus_kb = PropKB()
###Output
_____no_output_____
###Markdown
We define the symbols we use in our clauses.$P_{x, y}$ is true if there is a pit in `[x, y]`.$B_{x, y}$ is true if the agent senses breeze in `[x, y]`.
###Code
P11, P12, P21, P22, P31, B11, B21 = expr('P11, P12, P21, P22, P31, B11, B21')
###Output
_____no_output_____
###Markdown
Now we tell sentences based on `section 7.4.3`.There is no pit in `[1,1]`.
###Code
wumpus_kb.tell(~P11)
###Output
_____no_output_____
###Markdown
A square is breezy if and only if there is a pit in a neighboring square. This has to be stated for each square but for now, we include just the relevant squares.
###Code
wumpus_kb.tell(B11 | '<=>' | ((P12 | P21)))
wumpus_kb.tell(B21 | '<=>' | ((P11 | P22 | P31)))
###Output
_____no_output_____
###Markdown
Now we include the breeze percepts for the first two squares leading up to the situation in `Figure 7.3(b)`
###Code
wumpus_kb.tell(~B11)
wumpus_kb.tell(B21)
###Output
_____no_output_____
###Markdown
We can check the clauses stored in a `KB` by accessing its `clauses` variable
###Code
wumpus_kb.clauses
###Output
_____no_output_____
###Markdown
We see that the equivalence $B_{1, 1} \iff (P_{1, 2} \lor P_{2, 1})$ was automatically converted to two implications which were inturn converted to CNF which is stored in the `KB`.$B_{1, 1} \iff (P_{1, 2} \lor P_{2, 1})$ was split into $B_{1, 1} \implies (P_{1, 2} \lor P_{2, 1})$ and $B_{1, 1} \Longleftarrow (P_{1, 2} \lor P_{2, 1})$.$B_{1, 1} \implies (P_{1, 2} \lor P_{2, 1})$ was converted to $P_{1, 2} \lor P_{2, 1} \lor \neg B_{1, 1}$.$B_{1, 1} \Longleftarrow (P_{1, 2} \lor P_{2, 1})$ was converted to $\neg (P_{1, 2} \lor P_{2, 1}) \lor B_{1, 1}$ which becomes $(\neg P_{1, 2} \lor B_{1, 1}) \land (\neg P_{2, 1} \lor B_{1, 1})$ after applying De Morgan's laws and distributing the disjunction.$B_{2, 1} \iff (P_{1, 1} \lor P_{2, 2} \lor P_{3, 2})$ is converted in similar manner. Knowledge based agents A knowledge-based agent is a simple generic agent that maintains and handles a knowledge base.The knowledge base may initially contain some background knowledge.The purpose of a KB agent is to provide a level of abstraction over knowledge-base manipulation and is to be used as a base class for agents that work on a knowledge base.Given a percept, the KB agent adds the percept to its knowledge base, asks the knowledge base for the best action, and tells the knowledge base that it has infact taken that action.Our implementation of `KB-Agent` is encapsulated in a class `KB_AgentProgram` which inherits from the `KB` class.Let's have a look.
###Code
psource(KB_AgentProgram)
###Output
_____no_output_____
###Markdown
The helper functions `make_percept_sentence`, `make_action_query` and `make_action_sentence` are all aptly named and as expected,`make_percept_sentence` makes first-order logic sentences about percepts we want our agent to receive,`make_action_query` asks the underlying `KB` about the action that should be taken and`make_action_sentence` tells the underlying `KB` about the action it has just taken. Inference in Propositional Knowledge BaseIn this section we will look at two algorithms to check if a sentence is entailed by the `KB`. Our goal is to decide whether $\text{KB} \vDash \alpha$ for some sentence $\alpha$. Truth Table EnumerationIt is a model-checking approach which, as the name suggests, enumerates all possible models in which the `KB` is true and checks if $\alpha$ is also true in these models. We list the $n$ symbols in the `KB` and enumerate the $2^{n}$ models in a depth-first manner and check the truth of `KB` and $\alpha$.
###Code
psource(tt_check_all)
###Output
_____no_output_____
###Markdown
The algorithm basically computes every line of the truth table $KB\implies \alpha$ and checks if it is true everywhere.If symbols are defined, the routine recursively constructs every combination of truth values for the symbols and then, it checks whether `model` is consistent with `kb`.The given models correspond to the lines in the truth table,which have a `true` in the KB column, and for these lines it checks whether the query evaluates to true`result = pl_true(alpha, model)`.In short, `tt_check_all` evaluates this logical expression for each `model``pl_true(kb, model) => pl_true(alpha, model)`which is logically equivalent to`pl_true(kb, model) & ~pl_true(alpha, model)` that is, the knowledge base and the negation of the query are logically inconsistent.`tt_entails()` just extracts the symbols from the query and calls `tt_check_all()` with the proper parameters.
###Code
psource(tt_entails)
###Output
_____no_output_____
###Markdown
Keep in mind that for two symbols P and Q, P => Q is false only when P is `True` and Q is `False`.Example usage of `tt_entails()`:
###Code
tt_entails(P & Q, Q)
###Output
_____no_output_____
###Markdown
P & Q is True only when both P and Q are True. Hence, (P & Q) => Q is True
###Code
tt_entails(P | Q, Q)
tt_entails(P | Q, P)
###Output
_____no_output_____
###Markdown
If we know that P | Q is true, we cannot infer the truth values of P and Q. Hence (P | Q) => Q is False and so is (P | Q) => P.
###Code
(A, B, C, D, E, F, G) = symbols('A, B, C, D, E, F, G')
tt_entails(A & (B | C) & D & E & ~(F | G), A & D & E & ~F & ~G)
###Output
_____no_output_____
###Markdown
We can see that for the KB to be true, A, D, E have to be True and F and G have to be False.Nothing can be said about B or C. Coming back to our problem, note that `tt_entails()` takes an `Expr` which is a conjunction of clauses as the input instead of the `KB` itself. You can use the `ask_if_true()` method of `PropKB` which does all the required conversions. Let's check what `wumpus_kb` tells us about $P_{1, 1}$.
###Code
wumpus_kb.ask_if_true(~P11), wumpus_kb.ask_if_true(P11)
###Output
_____no_output_____
###Markdown
Looking at Figure 7.9 we see that in all models in which the knowledge base is `True`, $P_{1, 1}$ is `False`. It makes sense that `ask_if_true()` returns `True` for $\alpha = \neg P_{1, 1}$ and `False` for $\alpha = P_{1, 1}$. This begs the question, what if $\alpha$ is `True` in only a portion of all models. Do we return `True` or `False`? This doesn't rule out the possibility of $\alpha$ being `True` but it is not entailed by the `KB` so we return `False` in such cases. We can see this is the case for $P_{2, 2}$ and $P_{3, 1}$.
###Code
wumpus_kb.ask_if_true(~P22), wumpus_kb.ask_if_true(P22)
###Output
_____no_output_____
###Markdown
Proof by ResolutionRecall that our goal is to check whether $\text{KB} \vDash \alpha$ i.e. is $\text{KB} \implies \alpha$ true in every model. Suppose we wanted to check if $P \implies Q$ is valid. We check the satisfiability of $\neg (P \implies Q)$, which can be rewritten as $P \land \neg Q$. If $P \land \neg Q$ is unsatisfiable, then $P \implies Q$ must be true in all models. This gives us the result "$\text{KB} \vDash \alpha$ if and only if $\text{KB} \land \neg \alpha$ is unsatisfiable".This technique corresponds to proof by contradiction, a standard mathematical proof technique. We assume $\alpha$ to be false and show that this leads to a contradiction with known axioms in $\text{KB}$. We obtain a contradiction by making valid inferences using inference rules. In this proof we use a single inference rule, resolution which states $(l_1 \lor \dots \lor l_k) \land (m_1 \lor \dots \lor m_n) \land (l_i \iff \neg m_j) \implies l_1 \lor \dots \lor l_{i - 1} \lor l_{i + 1} \lor \dots \lor l_k \lor m_1 \lor \dots \lor m_{j - 1} \lor m_{j + 1} \lor \dots \lor m_n$. Applying the resolution yeilds us a clause which we add to the KB. We keep doing this until:* There are no new clauses that can be added, in which case $\text{KB} \nvDash \alpha$.* Two clauses resolve to yield the empty clause, in which case $\text{KB} \vDash \alpha$.The empty clause is equivalent to False because it arises only from resolving two complementaryunit clauses such as $P$ and $\neg P$ which is a contradiction as both $P$ and $\neg P$ can't be True at the same time. There is one catch however, the algorithm that implements proof by resolution cannot handle complex sentences. Implications and bi-implications have to be simplified into simpler clauses. We already know that *every sentence of a propositional logic is logically equivalent to a conjunction of clauses*.We will use this fact to our advantage and simplify the input sentence into the **conjunctive normal form** (CNF) which is a conjunction of disjunctions of literals.For eg:$$(A\lor B)\land (\neg B\lor C\lor\neg D)\land (D\lor\neg E)$$This is equivalent to the POS (Product of sums) form in digital electronics.Here's an outline of how the conversion is done:1. Convert bi-implications to implications$\alpha\iff\beta$ can be written as $(\alpha\implies\beta)\land(\beta\implies\alpha)$This also applies to compound sentences$\alpha\iff(\beta\lor\gamma)$ can be written as $(\alpha\implies(\beta\lor\gamma))\land((\beta\lor\gamma)\implies\alpha)$2. Convert implications to their logical equivalents$\alpha\implies\beta$ can be written as $\neg\alpha\lor\beta$3. Move negation inwardsCNF requires atomic literals. Hence, negation cannot appear on a compound statement.De Morgan's laws will be helpful here.$\neg(\alpha\land\beta)\equiv(\neg\alpha\lor\neg\beta)$$\neg(\alpha\lor\beta)\equiv(\neg\alpha\land\neg\beta)$4. Distribute disjunction over conjunctionDisjunction and conjunction are distributive over each other.Now that we only have conjunctions, disjunctions and negations in our expression, we will distribute disjunctions over conjunctions wherever possible as this will give us a sentence which is a conjunction of simpler clauses, which is what we wanted in the first place.We need a term of the form$(\alpha_{1}\lor\alpha_{2}\lor\alpha_{3}...)\land(\beta_{1}\lor\beta_{2}\lor\beta_{3}...)\land(\gamma_{1}\lor\gamma_{2}\lor\gamma_{3}...)\land...$The `to_cnf` function executes this conversion using helper subroutines.
###Code
psource(to_cnf)
###Output
_____no_output_____
###Markdown
`to_cnf` calls three subroutines.`eliminate_implications` converts bi-implications and implications to their logical equivalents.`move_not_inwards` removes negations from compound statements and moves them inwards using De Morgan's laws.`distribute_and_over_or` distributes disjunctions over conjunctions.Run the cell below for implementation details.
###Code
psource(eliminate_implications)
psource(move_not_inwards)
psource(distribute_and_over_or)
###Output
_____no_output_____
###Markdown
Let's convert some sentences to see how it works
###Code
A, B, C, D = expr('A, B, C, D')
to_cnf(A |'<=>'| B)
to_cnf(A |'<=>'| (B & C))
to_cnf(A & (B | (C & D)))
to_cnf((A |'<=>'| ~B) |'==>'| (C | ~D))
###Output
_____no_output_____
###Markdown
Coming back to our resolution problem, we can see how the `to_cnf` function is utilized here
###Code
psource(pl_resolution)
pl_resolution(wumpus_kb, ~P11), pl_resolution(wumpus_kb, P11)
pl_resolution(wumpus_kb, ~P22), pl_resolution(wumpus_kb, P22)
###Output
_____no_output_____
###Markdown
Forward and backward chainingPreviously, we said we will look at two algorithms to check if a sentence is entailed by the `KB`, but here's a third one. The difference here is that our goal now is to determine if a knowledge base of definite clauses entails a single proposition symbol *q* - the query.There is a catch however, the knowledge base can only contain **Horn clauses**. Horn ClausesHorn clauses can be defined as a *disjunction* of *literals* with **at most** one positive literal. A Horn clause with exactly one positive literal is called a *definite clause*.A Horn clause might look like $\neg a\lor\neg b\lor\neg c\lor\neg d... \lor z$This, coincidentally, is also a definite clause.Using De Morgan's laws, the example above can be simplified to $a\land b\land c\land d ... \implies z$This seems like a logical representation of how humans process known data and facts. Assuming percepts `a`, `b`, `c`, `d` ... to be true simultaneously, we can infer `z` to also be true at that point in time. There are some interesting aspects of Horn clauses that make algorithmic inference or *resolution* easier.- Definite clauses can be written as implications:The most important simplification a definite clause provides is that it can be written as an implication.The premise (or the knowledge that leads to the implication) is a conjunction of positive literals.The conclusion (the implied statement) is also a positive literal.The sentence thus becomes easier to understand.The premise and the conclusion are conventionally called the *body* and the *head* respectively.A single positive literal is called a *fact*.- Forward chaining and backward chaining can be used for inference from Horn clauses:Forward chaining is semantically identical to `AND-OR-Graph-Search` from the chapter on search algorithms.Implementational details will be explained shortly.- Deciding entailment with Horn clauses is linear in size of the knowledge base:Surprisingly, the forward and backward chaining algorithms traverse each element of the knowledge base at most once, greatly simplifying the problem.The function `pl_fc_entails` implements forward chaining to see if a knowledge base `KB` entails a symbol `q`.Before we proceed further, note that `pl_fc_entails` doesn't use an ordinary `KB` instance. The knowledge base here is an instance of the `PropDefiniteKB` class, derived from the `PropKB` class, but modified to store definite clauses.The main point of difference arises in the inclusion of a helper method to `PropDefiniteKB` that returns a list of clauses in KB that have a given symbol `p` in their premise.
###Code
psource(PropDefiniteKB.clauses_with_premise)
###Output
_____no_output_____
###Markdown
Let's now have a look at the `pl_fc_entails` algorithm.
###Code
psource(pl_fc_entails)
###Output
_____no_output_____
###Markdown
The function accepts a knowledge base `KB` (an instance of `PropDefiniteKB`) and a query `q` as inputs.`count` initially stores the number of symbols in the premise of each sentence in the knowledge base.The `conjuncts` helper function separates a given sentence at conjunctions.`inferred` is initialized as a *boolean* defaultdict. This will be used later to check if we have inferred all premises of each clause of the agenda.`agenda` initially stores a list of clauses that the knowledge base knows to be true.The `is_prop_symbol` helper function checks if the given symbol is a valid propositional logic symbol.We now iterate through `agenda`, popping a symbol `p` on each iteration.If the query `q` is the same as `p`, we know that entailment holds.The agenda is processed, reducing `count` by one for each implication with a premise `p`.A conclusion is added to the agenda when `count` reaches zero. This means we know all the premises of that particular implication to be true.`clauses_with_premise` is a helpful method of the `PropKB` class.It returns a list of clauses in the knowledge base that have `p` in their premise.Now that we have an idea of how this function works, let's see a few examples of its usage, but we first need to define our knowledge base. We assume we know the following clauses to be true.
###Code
clauses = ['(B & F)==>E',
'(A & E & F)==>G',
'(B & C)==>F',
'(A & B)==>D',
'(E & F)==>H',
'(H & I)==>J',
'A',
'B',
'C']
###Output
_____no_output_____
###Markdown
We will now `tell` this information to our knowledge base.
###Code
definite_clauses_KB = PropDefiniteKB()
for clause in clauses:
definite_clauses_KB.tell(expr(clause))
###Output
_____no_output_____
###Markdown
We can now check if our knowledge base entails the following queries.
###Code
pl_fc_entails(definite_clauses_KB, expr('G'))
pl_fc_entails(definite_clauses_KB, expr('H'))
pl_fc_entails(definite_clauses_KB, expr('I'))
pl_fc_entails(definite_clauses_KB, expr('J'))
###Output
_____no_output_____
###Markdown
Effective Propositional Model CheckingThe previous segments elucidate the algorithmic procedure for model checking. In this segment, we look at ways of making them computationally efficient.The problem we are trying to solve is conventionally called the _propositional satisfiability problem_, abbreviated as the _SAT_ problem.In layman terms, if there exists a model that satisfies a given Boolean formula, the formula is called satisfiable.The SAT problem was the first problem to be proven _NP-complete_.The main characteristics of an NP-complete problem are:- Given a solution to such a problem, it is easy to verify if the solution solves the problem.- The time required to actually solve the problem using any known algorithm increases exponentially with respect to the size of the problem.Due to these properties, heuristic and approximational methods are often applied to find solutions to these problems.It is extremely important to be able to solve large scale SAT problems efficiently because many combinatorial problems in computer science can be conveniently reduced to checking the satisfiability of a propositional sentence under some constraints.We will introduce two new algorithms that perform propositional model checking in a computationally effective way. 1. DPLL (Davis-Putnam-Logeman-Loveland) algorithmThis algorithm is very similar to Backtracking-Search.It recursively enumerates possible models in a depth-first fashion with the following improvements over algorithms like `tt_entails`:1. Early termination:In certain cases, the algorithm can detect the truth value of a statement using just a partially completed model.For example, $(P\lor Q)\land(P\lor R)$ is true if P is true, regardless of other variables.This reduces the search space significantly.2. Pure symbol heuristic:A symbol that has the same sign (positive or negative) in all clauses is called a _pure symbol_.It isn't difficult to see that any satisfiable model will have the pure symbols assigned such that its parent clause becomes _true_.For example, $(P\lor\neg Q)\land(\neg Q\lor\neg R)\land(R\lor P)$ has P and Q as pure symbolsand for the sentence to be true, P _has_ to be true and Q _has_ to be false.The pure symbol heuristic thus simplifies the problem a bit.3. Unit clause heuristic:In the context of DPLL, clauses with just one literal and clauses with all but one _false_ literals are called unit clauses.If a clause is a unit clause, it can only be satisfied by assigning the necessary value to make the last literal true.We have no other choice.Assigning one unit clause can create another unit clause.For example, when P is false, $(P\lor Q)$ becomes a unit clause, causing _true_ to be assigned to Q.A series of forced assignments derived from previous unit clauses is called _unit propagation_.In this way, this heuristic simplifies the problem further.The algorithm often employs other tricks to scale up to large problems.However, these tricks are currently out of the scope of this notebook. Refer to section 7.6 of the book for more details.Let's have a look at the algorithm.
###Code
psource(dpll)
###Output
_____no_output_____
###Markdown
The algorithm uses the ideas described above to check satisfiability of a sentence in propositional logic.It recursively calls itself, simplifying the problem at each step. It also uses helper functions `find_pure_symbol` and `find_unit_clause` to carry out steps 2 and 3 above.The `dpll_satisfiable` helper function converts the input clauses to _conjunctive normal form_ and calls the `dpll` function with the correct parameters.
###Code
psource(dpll_satisfiable)
###Output
_____no_output_____
###Markdown
Let's see a few examples of usage.
###Code
A, B, C, D = expr('A, B, C, D')
dpll_satisfiable(A & B & ~C & D)
###Output
_____no_output_____
###Markdown
This is a simple case to highlight that the algorithm actually works.
###Code
dpll_satisfiable((A & B) | (C & ~A) | (B & ~D))
###Output
_____no_output_____
###Markdown
If a particular symbol isn't present in the solution, it means that the solution is independent of the value of that symbol.In this case, the solution is independent of A.
###Code
dpll_satisfiable(A |'<=>'| B)
dpll_satisfiable((A |'<=>'| B) |'==>'| (C & ~A))
dpll_satisfiable((A | (B & C)) |'<=>'| ((A | B) & (A | C)))
###Output
_____no_output_____
###Markdown
2. WalkSAT algorithmThis algorithm is very similar to Hill climbing.On every iteration, the algorithm picks an unsatisfied clause and flips a symbol in the clause.This is similar to finding a neighboring state in the `hill_climbing` algorithm.The symbol to be flipped is decided by an evaluation function that counts the number of unsatisfied clauses.Sometimes, symbols are also flipped randomly, to avoid local optima. A subtle balance between greediness and randomness is required. Alternatively, some versions of the algorithm restart with a completely new random assignment if no solution has been found for too long, as a way of getting out of local minima of numbers of unsatisfied clauses.Let's have a look at the algorithm.
###Code
psource(WalkSAT)
###Output
_____no_output_____
###Markdown
The function takes three arguments:1. The `clauses` we want to satisfy.2. The probability `p` of randomly changing a symbol.3. The maximum number of flips (`max_flips`) the algorithm will run for. If the clauses are still unsatisfied, the algorithm returns `None` to denote failure.The algorithm is identical in concept to Hill climbing and the code isn't difficult to understand.Let's see a few examples of usage.
###Code
A, B, C, D = expr('A, B, C, D')
WalkSAT([A, B, ~C, D], 0.5, 100)
###Output
_____no_output_____
###Markdown
This is a simple case to show that the algorithm converges.
###Code
WalkSAT([A & B, A & C], 0.5, 100)
WalkSAT([A & B, C & D, C & B], 0.5, 100)
WalkSAT([A & B, C | D, ~(D | B)], 0.5, 1000)
###Output
_____no_output_____
###Markdown
This one doesn't give any output because WalkSAT did not find any model where these clauses hold. We can solve these clauses to see that they together form a contradiction and hence, it isn't supposed to have a solution. One point of difference between this algorithm and the `dpll_satisfiable` algorithms is that both these algorithms take inputs differently. For WalkSAT to take complete sentences as input, we can write a helper function that converts the input sentence into conjunctive normal form and then calls WalkSAT with the list of conjuncts of the CNF form of the sentence.
###Code
def WalkSAT_CNF(sentence, p=0.5, max_flips=10000):
    # convert the sentence to CNF and pass its conjuncts (and p) on to WalkSAT
    return WalkSAT(conjuncts(to_cnf(sentence)), p, max_flips)
###Output
_____no_output_____
###Markdown
Now we can call `WalkSAT_CNF` and `dpll_satisfiable` with the same arguments.
###Code
WalkSAT_CNF((A & B) | (C & ~A) | (B & ~D), 0.5, 1000)
###Output
_____no_output_____
###Markdown
It works! Notice that the solution generated by WalkSAT doesn't omit variables that the sentence doesn't depend upon. If the sentence is independent of a particular variable, the solution still contains a value for that variable, chosen at random because of the stochastic nature of the algorithm. Let's compare the runtime of WalkSAT and DPLL for a few cases. We will use the `%%timeit` magic to do this.
###Code
sentence_1 = A |'<=>'| B
sentence_2 = (A & B) | (C & ~A) | (B & ~D)
sentence_3 = (A | (B & C)) |'<=>'| ((A | B) & (A | C))
%%timeit
dpll_satisfiable(sentence_1)
dpll_satisfiable(sentence_2)
dpll_satisfiable(sentence_3)
%%timeit
WalkSAT_CNF(sentence_1)
WalkSAT_CNF(sentence_2)
WalkSAT_CNF(sentence_3)
###Output
1.02 ms ± 6.92 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
###Markdown
On average, for solvable cases, `WalkSAT` is significantly faster than `dpll` because, for a small number of variables, `WalkSAT` can reduce the search space considerably. Results can be different for sentences with more symbols, though. Feel free to play around with this to understand the trade-offs of these algorithms better. SATPlan In this section we show how to make plans by logical inference. The basic idea is very simple. It includes the following three steps:1. Construct a sentence that includes: 1. A collection of assertions about the initial state. 2. The successor-state axioms for all the possible actions at each time up to some maximum time t. 3. The assertion that the goal is achieved at time t.2. Present the whole sentence to a SAT solver.3. Assuming a model is found, extract from the model those variables that represent actions and are assigned true. Together they represent a plan to achieve the goals. Before the actual implementation, here is a tiny sketch (our own illustration) of how states and actions can be encoded as time-indexed propositional symbols.
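###Code
# Toy illustration (ours) of how a SAT_plan-style encoding names its
# propositional symbols: one symbol per (state, time) and per (action, time).
def state_sym(state, time):
    return f'State_{state}_{time}'

def action_sym(action, time):
    return f'Action_{action}_{time}'

# For a transition A --Right--> B, a successor-state clause at time 0 reads:
#   State_A_0 & Action_Right_0 ==> State_B_1
print(state_sym('A', 0), '&', action_sym('Right', 0), '==>', state_sym('B', 1))
###Output
_____no_output_____
###Markdown
Now let's have a look at the actual algorithm.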
###Code
psource(SAT_plan)
###Output
_____no_output_____
###Markdown
Let's see a few examples of its usage. First we define a transition and then call `SAT_plan`.
###Code
transition = {'A': {'Left': 'A', 'Right': 'B'},
'B': {'Left': 'A', 'Right': 'C'},
'C': {'Left': 'B', 'Right': 'C'}}
print(SAT_plan('A', transition, 'C', 2))
print(SAT_plan('A', transition, 'B', 3))
print(SAT_plan('C', transition, 'A', 3))
###Output
None
['Right']
['Left', 'Left']
###Markdown
Let us do the same for another transition.
###Code
transition = {(0, 0): {'Right': (0, 1), 'Down': (1, 0)},
(0, 1): {'Left': (1, 0), 'Down': (1, 1)},
(1, 0): {'Right': (1, 0), 'Up': (1, 0), 'Left': (1, 0), 'Down': (1, 0)},
(1, 1): {'Left': (1, 0), 'Up': (0, 1)}}
print(SAT_plan((0, 0), transition, (1, 1), 4))
###Output
['Right', 'Down']
###Markdown
First-Order Logic Knowledge Bases: `FolKB` The class `FolKB` can be used to represent a knowledge base of first-order logic sentences. You would initialize and use it the same way as you would for `PropKB`, except that the clauses are first-order definite clauses. We will see how to write such clauses to create a database and query them in the following sections. Criminal KB In this section we create a `FolKB` based on the following paragraph. The law says that it is a crime for an American to sell weapons to hostile nations. The country Nono, an enemy of America, has some missiles, and all of its missiles were sold to it by Colonel West, who is American. The first step is to extract the facts and convert them into first-order definite clauses. Extracting the facts from data alone is a challenging task; fortunately, we have a small paragraph and can do the extraction and conversion manually. We'll store the clauses in a list aptly named `clauses`.
###Code
clauses = []
###Output
_____no_output_____
###Markdown
“... it is a crime for an American to sell weapons to hostile nations” The keywords to look for here are 'crime', 'American', 'sell', 'weapon' and 'hostile'. We use predicate symbols to capture their meaning.* `Criminal(x)`: `x` is a criminal* `American(x)`: `x` is an American* `Sells(x, y, z)`: `x` sells `y` to `z`* `Weapon(x)`: `x` is a weapon* `Hostile(x)`: `x` is a hostile nationLet us now combine them with appropriate variable naming to express the meaning of the sentence: the criminal `x` is the American `x` who sells a weapon `y` to `z`, a hostile nation.$\text{American}(x) \land \text{Weapon}(y) \land \text{Sells}(x, y, z) \land \text{Hostile}(z) \implies \text{Criminal}(x)$
###Code
clauses.append(expr("(American(x) & Weapon(y) & Sells(x, y, z) & Hostile(z)) ==> Criminal(x)"))
###Output
_____no_output_____
###Markdown
"The country Nono, an enemy of America" We now know that Nono is an enemy of America. We represent these nations using the constant symbols `Nono` and `America`. The enemy relation is shown using the predicate symbol `Enemy`.$\text{Enemy}(\text{Nono}, \text{America})$
###Code
clauses.append(expr("Enemy(Nono, America)"))
###Output
_____no_output_____
###Markdown
"Nono ... has some missiles"This states the existence of some missile which is owned by Nono. $\exists x \text{Owns}(\text{Nono}, x) \land \text{Missile}(x)$. We invoke existential instantiation to introduce a new constant `M1` which is the missile owned by Nono.$\text{Owns}(\text{Nono}, \text{M1}), \text{Missile}(\text{M1})$
###Code
clauses.append(expr("Owns(Nono, M1)"))
clauses.append(expr("Missile(M1)"))
###Output
_____no_output_____
###Markdown
"All of its missiles were sold to it by Colonel West"If Nono owns something and it classifies as a missile, then it was sold to Nono by West.$\text{Missile}(x) \land \text{Owns}(\text{Nono}, x) \implies \text{Sells}(\text{West}, x, \text{Nono})$
###Code
clauses.append(expr("(Missile(x) & Owns(Nono, x)) ==> Sells(West, x, Nono)"))
###Output
_____no_output_____
###Markdown
"West, who is American"West is an American.$\text{American}(\text{West})$
###Code
clauses.append(expr("American(West)"))
###Output
_____no_output_____
###Markdown
We also know, from our understanding of language, that missiles are weapons and that an enemy of America counts as “hostile”.$\text{Missile}(x) \implies \text{Weapon}(x), \text{Enemy}(x, \text{America}) \implies \text{Hostile}(x)$
###Code
clauses.append(expr("Missile(x) ==> Weapon(x)"))
clauses.append(expr("Enemy(x, America) ==> Hostile(x)"))
###Output
_____no_output_____
###Markdown
Now that we have converted the information into first-order definite clauses we can create our first-order logic knowledge base.
###Code
crime_kb = FolKB(clauses)
###Output
_____no_output_____
###Markdown
The `subst` helper function substitutes variables with given values in first-order logic statements. This will be useful in later algorithms. Its implementation is quite simple and self-explanatory.
###Code
psource(subst)
###Output
_____no_output_____
###Markdown
Here's an example of how `subst` can be used.
###Code
subst({x: expr('Nono'), y: expr('M1')}, expr('Owns(x, y)'))
###Output
_____no_output_____
###Markdown
Inference in First-Order Logic In this section we look at a forward chaining and a backward chaining algorithm for `FolKB`. Both aforementioned algorithms rely on a process called unification, a key component of all first-order inference algorithms. Unification We sometimes need to find substitutions that make different logical expressions look identical. This process, called unification, is done by the `unify` algorithm. It takes as input two sentences and returns a unifier for them if one exists. A unifier is a dictionary which stores the substitutions required to make the two sentences identical. It does so by recursively unifying the components of a sentence, where the unification of a variable symbol `var` with a constant symbol `Const` is the mapping `{var: Const}`. To convey the idea, here is a stripped-down unifier over nested tuples (our own sketch; the library's `unify` works on `Expr` objects and handles more cases, such as the occurs check):
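###Code
# A stripped-down unifier over nested tuples (our own sketch; the occurs
# check is omitted). Lowercase strings act as variables.
def toy_unify(x, y, theta=None):
    if theta is None:
        theta = {}
    if x == y:
        return theta
    if isinstance(x, str) and x.islower():
        return toy_unify_var(x, y, theta)
    if isinstance(y, str) and y.islower():
        return toy_unify_var(y, x, theta)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):
            theta = toy_unify(xi, yi, theta)
            if theta is None:
                return None
        return theta
    return None                              # failure

def toy_unify_var(var, val, theta):
    if var in theta:
        return toy_unify(theta[var], val, theta)
    return {**theta, var: val}

# Cat(x) & Dog(Dobby)  unified with  Cat(Bella) & Dog(y)
toy_unify(('Cat', 'x', ('Dog', 'Dobby')), ('Cat', 'Bella', ('Dog', 'y')))
###Output
_____no_output_____
###Markdown
Now let's look at a few examples using the library's `unify`.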
###Code
unify(expr('x'), 3)
unify(expr('A(x)'), expr('A(B)'))
unify(expr('Cat(x) & Dog(Dobby)'), expr('Cat(Bella) & Dog(y)'))
###Output
_____no_output_____
###Markdown
In cases where there is no possible substitution that unifies the two sentences, the function returns `None`.
###Code
print(unify(expr('Cat(x)'), expr('Dog(Dobby)')))
###Output
None
###Markdown
We also need to take care that we do not unintentionally use the same variable name in different sentences. `unify` treats them as a single variable, which prevents it from taking multiple values.
###Code
print(unify(expr('Cat(x) & Dog(Dobby)'), expr('Cat(Bella) & Dog(x)')))
###Output
None
###Markdown
Forward Chaining Algorithm We consider the simple forward-chaining algorithm presented in Figure 9.3. We look at each rule in the knowledge base and see if the premises can be satisfied. This is done by finding a substitution which unifies each of the premises with a clause in the `KB`. If we are able to unify the premises, the conclusion (with the corresponding substitution) is added to the `KB`. This inferencing process is repeated until either the query can be answered or no new sentences can be added. We test whether the newly added clause unifies with the query, in which case the substitution yielded by `unify` is an answer to the query. If we run out of sentences to infer, the query was a failure. The function `fol_fc_ask` is a generator which yields all substitutions which validate the query. To make the premise-matching step concrete, here is a tiny, self-contained illustration (our own, over ground facts represented as plain tuples rather than `Expr` objects) of one forward-chaining pass for the crime rule:
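###Code
from itertools import product

# Tiny, self-contained illustration (ours) of one forward-chaining pass for:
# American(x) & Weapon(y) & Sells(x, y, z) & Hostile(z) ==> Criminal(x)
facts = {('American', 'West'), ('Weapon', 'M1'),
         ('Sells', 'West', 'M1', 'Nono'), ('Hostile', 'Nono')}

def crime_rule_pass(facts):
    inferred = set()
    americans = [f[1] for f in facts if f[0] == 'American']
    weapons = [f[1] for f in facts if f[0] == 'Weapon']
    hostiles = [f[1] for f in facts if f[0] == 'Hostile']
    for x, y, z in product(americans, weapons, hostiles):
        if ('Sells', x, y, z) in facts:      # all premises satisfied
            inferred.add(('Criminal', x))
    return inferred

crime_rule_pass(facts)
###Output
_____no_output_____
###Markdown
The real `fol_fc_ask` generalizes this pattern with unification instead of exhaustive enumeration. Let's look at it.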
###Code
psource(fol_fc_ask)
###Output
_____no_output_____
###Markdown
Let's find out all the hostile nations. Note that we only told the `KB` that Nono was an enemy of America, not that it was hostile.
###Code
answer = fol_fc_ask(crime_kb, expr('Hostile(x)'))
print(list(answer))
###Output
[{x: Nono}]
###Markdown
The generator returned a single substitution which says that Nono is a hostile nation. See how after adding another enemy nation the generator returns two substitutions.
###Code
crime_kb.tell(expr('Enemy(JaJa, America)'))
answer = fol_fc_ask(crime_kb, expr('Hostile(x)'))
print(list(answer))
###Output
[{x: Nono}, {x: JaJa}]
###Markdown
Note: `fol_fc_ask` makes changes to the `KB` by adding sentences to it. Backward Chaining Algorithm This algorithm works backward from the goal, chaining through rules to find known facts that support the proof. Suppose `goal` is the query we want to find the substitution for. We find rules of the form $\text{lhs} \implies \text{goal}$ in the `KB` and try to prove `lhs`. There may be multiple clauses in the `KB` which give multiple `lhs`. It is sufficient to prove only one of these, but to prove an `lhs` all the conjuncts in the `lhs` of the clause must be proved. This makes it similar to And/Or search. OR The OR part of the algorithm comes from our choice to select any clause of the form $\text{lhs} \implies \text{goal}$. Looking at all rules whose `rhs` unifies with the `goal`, we yield a substitution which proves all the conjuncts in the `lhs`. We use `parse_definite_clause` to obtain `lhs` and `rhs` from a clause of the form $\text{lhs} \implies \text{rhs}$. For atomic facts the `lhs` is an empty list.
###Code
psource(fol_bc_or)
###Output
_____no_output_____
###Markdown
ANDThe AND corresponds to proving all the conjuncts in the `lhs`. We need to find a substitution which proves each and every clause in the list of conjuncts.
###Code
psource(fol_bc_and)
###Output
_____no_output_____
###Markdown
Now the main function `fol_bc_ask` calls `fol_bc_or` with the substitution initialized as empty. The `ask` method of `FolKB` uses `fol_bc_ask` and fetches the first substitution returned by the generator to answer the query. Let's query the knowledge base we created from `clauses` to find hostile nations.
###Code
# Rebuild KB because running fol_fc_ask would add new facts to the KB
crime_kb = FolKB(clauses)
crime_kb.ask(expr('Hostile(x)'))
###Output
_____no_output_____
###Markdown
You may notice some new variables in the substitution. They are introduced to standardize the variable names and prevent naming clashes, as discussed in the [Unification section](#Unification). Appendix: The Implementation of `|'==>'|` Consider the `Expr` formed by this syntax:
###Code
P |'==>'| ~Q
###Output
_____no_output_____
###Markdown
What is the funny `|'==>'|` syntax? The trick is that "`|`" is just the regular Python or-operator, so the expression above is exactly equivalent to this:
###Code
(P | '==>') | ~Q
###Output
_____no_output_____
###Markdown
In other words, there are two applications of or-operators. Here's the first one:
###Code
P | '==>'
###Output
_____no_output_____
###Markdown
What is going on here is that the `__or__` method of `Expr` serves a dual purpose. If the right-hand-side is another `Expr` (or a number), then the result is an `Expr`, as in `(P | Q)`. But if the right-hand-side is a string, then the string is taken to be an operator, and we create a node in the abstract syntax tree corresponding to a partially-filled `Expr`, one where we know the left-hand-side is `P` and the operator is `==>`, but we don't yet know the right-hand-side. The `PartialExpr` class has an `__or__` method that says to create an `Expr` node with the right-hand-side filled in. Here we can see the combination of the `PartialExpr` with `~Q` to create a complete `Expr`:
###Code
partial = PartialExpr('==>', P)
partial | ~Q
###Output
_____no_output_____
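###Markdown
To see the whole trick in one place, here is a bare-bones sketch (our own toy classes, far simpler than the library's `Expr` and `PartialExpr`) of the two-step or-overloading idea:
###Code
class ToyPartial:
    def __init__(self, op, lhs):
        self.op, self.lhs = op, lhs
    def __or__(self, rhs):                   # the second '|' completes the expression
        return ('expr', self.op, self.lhs, rhs)

class ToyExpr:
    def __init__(self, name):
        self.name = name
    def __or__(self, rhs):
        if isinstance(rhs, str):             # the first '|': rhs is an operator string
            return ToyPartial(rhs, self)
        return ('expr', '|', self, rhs)      # ordinary disjunction
    def __repr__(self):
        return self.name

tP, tQ = ToyExpr('P'), ToyExpr('Q')
tP | '==>' | tQ
###Output
_____no_output_____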
###Markdown
This [trick](http://code.activestate.com/recipes/384122-infix-operators/) is due to [Ferdinand Jamitzky](http://code.activestate.com/recipes/users/98863/), with a modification by [C. G. Vedant](https://github.com/Chipe1), who suggested using a string inside the or-bars. Appendix: The Implementation of `expr` How does `expr` parse a string into an `Expr`? It turns out there are two tricks (besides the Jamitzky/Vedant trick):1. We do a string substitution, replacing "`==>`" with "`|'==>'|`" (and likewise for other operators).2. We `eval` the resulting string in an environment in which every identifier is bound to a symbol with that identifier as the `op`.In other words,
###Code
expr('~(P & Q) ==> (~P | ~Q)')
###Output
_____no_output_____
###Markdown
is equivalent to doing:
###Code
P, Q = symbols('P, Q')
~(P & Q) |'==>'| (~P | ~Q)
###Output
_____no_output_____
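###Markdown
Here is a rough, runnable illustration of those two tricks (our own simplification; the real `expr` handles many more operators and builds genuine `Expr` symbols), reusing the `ToyExpr` class from the sketch above:
###Code
import re

def toy_expr(s):
    s = s.replace('==>', "|'==>'|")                     # trick 1: string substitution
    names = set(re.findall(r'[A-Za-z_][A-Za-z0-9_]*', s))
    env = {name: ToyExpr(name) for name in names}       # trick 2: eval environment
    return eval(s, {}, env)

toy_expr('P ==> Q')
###Output
_____no_output_____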
###Markdown
One thing to beware of: this puts `==>` at the same precedence level as `"|"`, which is not quite right. For example, we get this:
###Code
P & Q |'==>'| P | Q
###Output
_____no_output_____
###Markdown
which is probably not what we meant; when in doubt, put in extra parens:
###Code
(P & Q) |'==>'| (P | Q)
###Output
_____no_output_____
###Markdown
Examples
###Code
from notebook import Canvas_fol_bc_ask
canvas_bc_ask = Canvas_fol_bc_ask('canvas_bc_ask', crime_kb, expr('Criminal(x)'))
###Output
_____no_output_____ |
q-learning/cart_pole/sand.ipynb | ###Markdown
* DQN in CartPole by PyTorch* See https://arxiv.org/abs/1312.5602
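The training loop below uses the standard DQN update: for a stored transition $(s_j, a_j, r_j, s_{j+1})$, the target for the chosen action is$$y_j = r_j + \gamma \, (1 - \text{done}_j) \max_{a'} Q_{\text{target}}(s_{j+1}, a')$$which is exactly what `target[j, pacts[j]] = rewards[j]+GAMMA*maxq[j]*(not dones[j])` computes further down, with the target network `Q_ast` refreshed every `UPDATE_TARGET_Q_FREQ` steps.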
###Code
import copy
import time
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable
import gym
from gym import wrappers
env = gym.make("CartPole-v0")
print("observation space num: ", env.observation_space.shape[0])
print("action space num: ", env.action_space.n)
print("-"*50)
pobs = env.reset()
done = False
while not done:
act = env.action_space.sample()
obs, reward, done, _ = env.step(act)
print(pobs, act, reward, obs, done)
pobs = obs
# Environment
MONITOR = False
env = gym.make("CartPole-v0")
if MONITOR:
env = wrappers.Monitor(env, "./result", force=True)
obs_num = env.observation_space.shape[0]
acts_num = env.action_space.n
HIDDEN_SIZE = 100
class NN(nn.Module):
def __init__(self):
super(NN, self).__init__()
self.fc1 = nn.Linear(obs_num, HIDDEN_SIZE)
self.fc2 = nn.Linear(HIDDEN_SIZE, HIDDEN_SIZE)
self.fc3 = nn.Linear(HIDDEN_SIZE, HIDDEN_SIZE)
self.fc4 = nn.Linear(HIDDEN_SIZE, acts_num)
def __call__(self, x):
h = F.relu(self.fc1(x))
h = F.relu(self.fc2(h))
h = F.relu(self.fc3(h))
y = F.relu(self.fc4(h))
return y
# Constants
EPOCH_NUM = 2000 # number of training episodes (epochs)
STEP_MAX = 200 # maximum number of steps per episode
MEMORY_SIZE = 200 # replay memory size (learning starts once it is full)
BATCH_SIZE = 50 # minibatch size
EPSILON = 1.0 # epsilon for the epsilon-greedy policy
EPSILON_DECREASE = 0.001 # amount epsilon decreases per step
EPSILON_MIN = 0.1 # lower bound of epsilon
START_REDUCE_EPSILON = 200 # step at which epsilon starts decreasing
TRAIN_FREQ = 10 # training interval of the Q network (in steps)
UPDATE_TARGET_Q_FREQ = 20 # update interval of the target Q network (in steps)
GAMMA = 0.97 # discount factor
LOG_FREQ = 1000 # logging interval (in episodes)
# Model
Q = NN() # approximate Q function
Q_ast = copy.deepcopy(Q)
optimizer = optim.RMSprop(Q.parameters(), lr=0.00015, alpha=0.95, eps=0.01)
total_step = 0 # total number of steps (actions) taken
memory = [] # replay memory
total_rewards = [] # list for recording cumulative rewards
# Start training
print("\t".join(["epoch", "epsilon", "reward", "total_step", "elapsed_time"]))
start = time.time()
for epoch in range(EPOCH_NUM):
    pobs = env.reset() # initialize the environment
    step = 0 # step counter
    done = False # episode-finished flag
    total_reward = 0 # cumulative reward
while not done and step < STEP_MAX:
if MONITOR:
env.render()
        # Select an action
pact = env.action_space.sample()
        # epsilon-greedy policy
if np.random.rand() > EPSILON:
            # predict the best action with the Q network
pobs_ = np.array(pobs, dtype="float32").reshape((1, obs_num))
pobs_ = Variable(torch.from_numpy(pobs_))
pact = Q(pobs_)
maxs, indices = torch.max(pact.data, 1)
pact = indices.numpy()[0]
        # Take the action
obs, reward, done, _ = env.step(pact)
if done:
reward = -1
        # Store the transition in the replay memory
        memory.append((pobs, pact, reward, obs, done)) # state, action, reward, next state, done flag
        if len(memory) > MEMORY_SIZE: # evict the oldest entry once the memory is full
memory.pop(0)
        # Learning
        if len(memory) == MEMORY_SIZE: # start learning once the replay memory is full
            # Experience replay
if total_step % TRAIN_FREQ == 0:
memory_ = np.random.permutation(memory)
memory_idx = range(len(memory_))
for i in memory_idx[::BATCH_SIZE]:
                    batch = np.array(memory_[i:i+BATCH_SIZE]) # minibatch of experiences
pobss = np.array(batch[:,0].tolist(), dtype="float32").reshape((BATCH_SIZE, obs_num))
pacts = np.array(batch[:,1].tolist(), dtype="int32")
rewards = np.array(batch[:,2].tolist(), dtype="int32")
obss = np.array(batch[:,3].tolist(), dtype="float32").reshape((BATCH_SIZE, obs_num))
dones = np.array(batch[:,4].tolist(), dtype="bool")
# set y
pobss_ = Variable(torch.from_numpy(pobss))
q = Q(pobss_)
obss_ = Variable(torch.from_numpy(obss))
maxs, indices = torch.max(Q_ast(obss_).data, 1)
maxq = maxs.numpy() # maxQ
target = copy.deepcopy(q.data.numpy())
for j in range(BATCH_SIZE):
                        target[j, pacts[j]] = rewards[j]+GAMMA*maxq[j]*(not dones[j]) # training target (Bellman update)
# Perform a gradient descent step
optimizer.zero_grad()
loss = nn.MSELoss()(q, Variable(torch.from_numpy(target)))
loss.backward()
optimizer.step()
            # Update the target Q network
if total_step % UPDATE_TARGET_Q_FREQ == 0:
Q_ast = copy.deepcopy(Q)
        # Decay epsilon
if EPSILON > EPSILON_MIN and total_step > START_REDUCE_EPSILON:
EPSILON -= EPSILON_DECREASE
        # Move on to the next step
total_reward += reward
step += 1
total_step += 1
pobs = obs
    total_rewards.append(total_reward) # record the cumulative reward
if (epoch+1) % LOG_FREQ == 0:
        r = sum(total_rewards[((epoch+1)-LOG_FREQ):(epoch+1)])/LOG_FREQ # average cumulative reward over the logging interval
elapsed_time = time.time()-start
print("\t".join(map(str,[epoch+1, EPSILON, r, total_step, str(elapsed_time)+"[sec]"]))) # ログ出力
start = time.time()
if MONITOR:
env.render(close=True)
plt.figure(figsize=(10,5))
resize = (len(total_rewards)//10, 10)
tmp = np.array(total_rewards, dtype="float32").reshape(resize)
tmp = np.average(tmp, axis=1)
plt.plot(tmp)
plt.show()
###Output
_____no_output_____ |
Matrix_in_Python.ipynb | ###Markdown
Linear Algebra for ChE Assignment 3: Matrices We'll try to look into matrices in further depth, keeping in mind a basic understanding of Python. Objectives At the end of this activity you will be able to: 1. Perform basic matrix computations. 2. Understand matrices and how they relate to linear equations. 3. Interpret and utilize matrix equations and operations. Discussion
###Code
import numpy as np
import matplotlib.pyplot as plt
import scipy.linalg as la
%matplotlib inline
###Output
_____no_output_____
###Markdown
$$A=\begin{bmatrix} 1 & 1 \\ 4 & -10 \end{bmatrix} \\ B=\begin{bmatrix} 1 & 1 & 1 \\ 3 & -2 & -1 \\ -1 & 4 & 2 \end{bmatrix} \\ C=\begin{bmatrix} 1 & -2 & 3 & -4 \\ 3 & -1 & -2 & 1 \\ 2 & -1 & 3 & -2 \end{bmatrix}$$
###Code
## Since we'll keep on describing matrices. Let's make a function.
def describe_mat(matrix):
print(f'Matrix:\n{matrix}\n\nShape:\t{matrix.shape}\nRank:\t{matrix.ndim}\n')
## Declaring a 4 x 5 matrix
L = np.array ([
[96, 68, 33, 39, 51],
[60, 19 ,30, 45, 86],
[62, 57, 68, 93, 31],
[57, 23, 19, 58, 23]
])
describe_mat(L)
N = np.array([[[
[31, 32, 33, 34, 35],
[41, 42, 53, 53, 13],
[71, 24, 65, 73, 31],
[31, 89, 13, 54, 14]
]]])
describe_mat(N)
E = np.array([
[81, 92, 73, 44, 35],
[11, 22, 13, 63, 73],
[71, 54, 45, 23, 41],
[71, 89, 13, 84, 34]
])
describe_mat(E)
A = np.array([
[32, 34, 37, 32, 36],
[41, 42, 57, 52, 13],
[71, 24, 65, 73, 31],
[31, 89, 13, 54, 14]
])
describe_mat(A)
R = np.array([
[31, 32, 33, 34, 35],
[43, 42, 50, 57, 13],
[72, 24, 65, 73, 39],
[51, 89, 13, 74, 54]
])
describe_mat(R)
###Output
Matrix:
[[31 32 33 34 35]
[43 42 50 57 13]
[72 24 65 73 39]
[51 89 13 74 54]]
Shape: (4, 5)
Rank: 2
###Markdown
Categorizing Matrices
###Code
## Declaring a Row Matrix
row_mat_1D = np.array([
5, 10, 15
]) ## this is a 1-D Matrix with a shape of (3,), it's not really considered as a row matrix.
row_mat_2D = np.array([
[10,20,30]
]) ## this is a 2-D Matrix with a shape of (1,3)
describe_mat(row_mat_1D)
describe_mat(row_mat_2D)
## Declaring a column matrix
colmat = np.array([
[3],
[5],
[7]
]) ## This is a 2-D Matrix with a shape of (3, 1)
describe_mat(colmat)
###Output
_____no_output_____
###Markdown
Square Matrix Square matrices are matrices that have the same row and column sizes. We could say a matrix is square if $m = n$, where $m$ is the number of rows and $n$ the number of columns. We can tweak our matrix descriptor function to determine square matrices.
###Code
def describe_mat(matrix):
is_square = True if matrix.shape[0] == matrix.shape[1] else False
print(f'Matrix:\n{matrix}\n\nShape:\t{matrix.shape}\nRank:\t{matrix.ndim}\nIs Square: {is_square}\n')
square_mat = np.array([
[5, 3, 2, 1],
[3, 2, 4, 9],
[8, 4, 7, 5],
[4, 8, 9, 1]
])
not_square_mat = np.array([
[3, 4, 1, 4],
[9, 6, 1, 3]
])
describe_mat (square_mat)
describe_mat (not_square_mat)
###Output
_____no_output_____
###Markdown
Null Matrix A null matrix is a matrix that has no elements. It can be regarded as the trivial subspace of any vector space.
###Code
def describe_mat(matrix):
if matrix.size > 0:
        is_square = True if matrix.shape[0] == matrix.shape[1] else False
print(f'Matrix:\n{matrix}\n\nShape:\t{matrix.shape}\nRank:\t{matrix.ndim}\nIs Square: {is_square}\n')
else:
print('Matrix is Null.')
null_mat = np.array([])
describe_mat(null_mat)
###Output
_____no_output_____
###Markdown
Zero Matrix
###Code
zero_mat_row = np.zeros ((1,2))
zero_mat_sqr = np.zeros ((2,2))
zero_mat_rct = np.zeros ((3,2))
print(f'Zero Row Matrix: \n{zero_mat_row}')
print(f'Zero Square Matrix: \n{zero_mat_sqr}')
print(f'Zero Rectangular Matrix: \n{zero_mat_rct}')
###Output
_____no_output_____
###Markdown
Ones Matrix
###Code
## Ones Matrix - to get the identity
ones_mat_row = np.ones ((1,2))
ones_mat_sqr = np.ones ((2,2))
ones_mat_rct = np.ones ((3,2))
print(f'Ones Row Matrix: \n{ones_mat_row}')
print(f'Ones Square Matrix: \n{ones_mat_sqr}')
print(f'Ones Rectangular Matrix: \n{ones_mat_rct}')
###Output
_____no_output_____
###Markdown
Diagonal Matrix
###Code
## Diagonal Matrix
np.array([
[2, 0, 0],
[0, 2, 0],
[0, 0, 2]
])
# a[1,1], a[2,2], a[3,3], ... a[n-1, n-1]
#Other way to declare a diagonal matrix
d = np.diag([1, 9, 9, 7])
d
###Output
_____no_output_____
###Markdown
Identity Matrix
###Code
#Identity Matrix
np.eye(3)
#Other way to Declare an Identity Matrix
np.identity(7)
###Output
_____no_output_____
###Markdown
Upper Triangular Matrix
###Code
#Upper Triangular Matrix
np.array([
[9, 0, 0, 0],
[1, 3, 0, 0],
[5, 3, 1, 0],
[1, 3, 6, 8]
])
#Other Way to Declare Upper Triangular Matrix
M = np.array([
[32, -65, 15, -69, 20],
[32, -65, 15, -69, 20],
[32, -65, 15, -69, 20],
[32, -65, 15, -69, 20],
[32, -65, 15, -69, 20]
])
np.triu(M)
###Output
_____no_output_____
###Markdown
Lower Triangular Matrix
###Code
#Lower Triangular Matrix
np.array([
[42, 0, 0, 0],
[69, 42, 0, 0,],
[69, 42, 42, 0],
[69, 69, 69, 69]
])
#Other Way to Declare a Lower Triangular Matrix
np.tril (M)
###Output
_____no_output_____
###Markdown
Practice Given the linear combination below, try to create a corresponding matrix representing it.$$\theta = 5x + 3y -z$$ Given the system of linear combinations below, try to encode it as a matrix. Also describe the matrix.$$A = \left\{ \begin{array}\ x_1 + 2x_2 + x_3 \\ 4x_2 - x_3 \\ 10x_3 \end{array}\right. $$
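For the first exercise, one possible encoding (our own suggestion) is a row matrix holding the coefficients of $\theta$ over $(x, y, z)$:
###Code
import numpy as np

# theta = 5x + 3y - z as a 1 x 3 coefficient matrix over (x, y, z)
theta_mat = np.array([[5, 3, -1]])
theta_mat.shape   # (1, 3)
###Output
_____no_output_____
###Markdown
For the second exercise, the coefficient matrix of the system can be written row by row.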
###Code
def describe_mat(matrix):
print(f'Matrix:\n{matrix}\n\nShape:\t{matrix.shape}\nRank:\t{matrix.ndim}\n')
A = np.array([
    [1, 2, 1],   # x1 + 2x2 + x3
    [0, 4, -1],  # 4x2 - x3
    [0, 0, 10]   # 10x3
])
A
describe_mat(A)
###Output
_____no_output_____
###Markdown
Given the matrix below, express it as a linear combination in markdown. `G = np.array([[1, 7, 8], [2, 2, 2], [4, 6, 7]])`:$$G=\begin{bmatrix} 1 & 7 & 8 \\ 2 & 2 & 2 \\ 4 & 6 & 7\end{bmatrix}$$$$G = \left\{ \begin{array}\ x_1 + 7x_2 + 8x_3 \\ 2x_1 + 2x_2 + 2x_3 \\ 4x_1 + 6x_2 + 7x_3 \end{array}\right.$$ Given the matrix below, display the output as LaTeX markdown and also express it as a system of linear combinations. `H = np.tril(G)` gives `array([[1, 0, 0], [2, 2, 0], [4, 6, 7]])`:$$H=\begin{bmatrix} 1 & 0 & 0 \\ 2 & 2 & 0 \\ 4 & 6 & 7\end{bmatrix}$$$$H = \left\{ \begin{array}\ x_1 \\ 2x_1 + 2x_2 \\ 4x_1 + 6x_2 + 7x_3 \end{array}\right.$$ Matrix Algebra Addition
###Code
#Declaring Matrix M and J and Addition of Matrices
M = np.array([
[8, 24],
[5, 0],
[32, 10]
])
J = np.array([
[12, 15],
[0, 0],
[23, 65]
])
M+J
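# Broadcasting: a scalar operand is stretched to the matrix's shape,
# so the next line adds 3 to every element of M.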
3 + M
#3*np.ones(J.shape)+M
#Other Way to Perform Addition of Matrices
np.add(M,J)
###Output
_____no_output_____
###Markdown
Subtraction
###Code
#Subtraction of Matrices
M-J
2-M
#2*np.ones(M.shape)-J
###Output
_____no_output_____
###Markdown
Element-wise Multiplication
###Code
#Element-wise Multiplication
M*J
#np.multiply(M,J)
###Output
_____no_output_____
###Markdown
Element-wise Division
###Code
#Element-wise Division
M/J
#Division Involving Matrices
omega=10**-2
M/(omega+J)
###Output
_____no_output_____
###Markdown
Activity This part applies what we have learned about matrices in Python. Task 1 Create a function named mat_desc() that thoroughly describes a matrix. It should: display the shape, size, and rank of the matrix; display whether the matrix is square or non-square; display whether the matrix is an empty matrix; and display whether the matrix is an identity, ones, or zeros matrix. Use 5 sample matrices whose shapes are not lower than (3,3). In your methodology, create a flowchart and discuss the functions and methods you have used. Present your results in the results section, showing the description of each matrix you have declared.
###Code
## Function Area
import numpy as np
import matplotlib.pyplot as plt
import scipy.linalg as la
%matplotlib inline
def describe_mat(matrix):
if matrix.size > 0:
if matrix.shape[0] == matrix.shape[1]:
is_square = True
else:
is_square = False
        if is_square and np.array_equal(matrix, np.identity(matrix.shape[0])):
sp = "Identity Matrix"
elif np.all(matrix == np.zeros(matrix.shape)):
sp = "Zero Matrix."
elif np.all(matrix == np.ones(matrix.shape)):
sp = "Ones Matrix."
else:
sp = "None."
        print(f'Matrix:\n{matrix}\n\nShape:\t{matrix.shape}\nRank:\t{matrix.ndim}\nIs Square: {is_square}\nSpecial Type: {sp}\n')
else:
print('Matrix is Null.')
## Matrix Declarations
L = np.array([[[[
[96, 68, 33, 39, 51],
[60, 19 ,30, 45, 86],
[62, 57, 68, 93, 31],
[57, 23, 19, 58, 23]
]]]])
N = np.array([[[
[31, 32, 33, 34, 35],
[41, 42, 53, 53, 13],
[71, 24, 65, 73, 31],
[31, 89, 13, 54, 14]
]]])
E = np.array([[
[81, 92, 73, 44, 35],
[11, 22, 13, 63, 73],
[71, 54, 45, 23, 41],
[71, 89, 13, 84, 34]
]])
A = np.array([
[32, 34, 37, 32, 36],
[41, 42, 57, 52, 13],
[71, 24, 65, 73, 31],
[31, 89, 13, 54, 14]
])
R = np.array([
[31, 32, 33, 34, 35],
[43, 42, 50, 57, 13],
[72, 24, 65, 73, 39],
[51, 89, 13, 74, 54]
])
## Test Areas
describe_mat(L)
describe_mat(N)
describe_mat(E)
describe_mat(A)
describe_mat(R)
###Output
_____no_output_____
###Markdown
Task 2 Create a function named mat_operations() that takes in two matrices as input parameters. It should: determine whether the matrices are viable for the operation and return your own error message if they are not; return the sum of the matrices; return the difference of the matrices; return the element-wise multiplication of the matrices; and return the element-wise division of the matrices. Use 5 sample matrices whose shapes are not lower than (3,3). In your methodology, create a flowchart and discuss the functions and methods you have used. Present your results in the results section, showing the description of each matrix you have declared.
###Code
## Function Area
import numpy as np
def mat_operation(M, J):
print(f'Matrix 1:\n {M} \n')
print(f'Matrix 2:\n {J} \n')
if(M.shape != J.shape):
        print('The shapes of the two matrices are not the same. The system could not perform any operation.')
return
plus = M + J
print(f'Sum of the chosen matrices:\n {plus} \n')
minus = M - J
print(f'Difference of the chosen matrices:\n {minus} \n')
matpro = np.multiply(M, J)
print(f'Element-wise multiplication of the chosen matrices:\n {matpro} \n')
matdiv = np.divide(M, J)
print(f'Element-wise division of the chosen matrices:\n {matdiv} \n')
## Matrix Declarations
M = np.array([
[8, 24],
[5, 0],
[32, 10]
])
J = np.array([
[12, 15],
[0, 0],
[23, 65]
])
## Test Areas
mat_operation(M, J)
###Output
_____no_output_____ |
Prediction_SP.ipynb | ###Markdown
Forecasting Time Series using the Prophet package: trend via linear or logistic regression ($g(t)$) and seasonality ($s(t)$)
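Under the hood, Prophet fits an additive decomposition$$y(t) = g(t) + s(t) + h(t) + \epsilon_t$$where $g(t)$ is the trend (linear or logistic growth), $s(t)$ the periodic seasonal component, $h(t)$ the holiday effects, and $\epsilon_t$ the error term.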
###Code
import pandas as pd #pandas for reading csv files
import numpy as np #numpy for numerical calculation
from fbprophet import Prophet #Prophet for forecasting data
import matplotlib.pyplot as plt #matplotlib for plotting
df = pd.read_csv('/home/aakash/Music/SP500.csv') #reading a csv file
df['y'] = np.log(df['y']) # log-transform the data
m = Prophet(growth='linear',weekly_seasonality=False ) #additive regression model
m.add_seasonality(name='monthly', period=30.5, fourier_order=5).fit(df) #train the model
future = m.make_future_dataframe(periods=30, freq='D') # making future dataframes for 30 days with day frequency
fcst = m.predict(future) # predicting the values
m.plot(fcst); #plot the values
m.plot_components(fcst); #plot trend ,monthly and yearly prediction
###Output
_____no_output_____
###Markdown
Holidays (h(t))
###Code
SickLeave = pd.DataFrame({ # Holidays which can change Accuracy
'holiday': 'playoff',
'ds': pd.to_datetime(['2008-01-13', '2009-01-03', '2010-01-16',
'2010-01-24', '2010-02-07', '2011-01-08',
'2013-01-12', '2014-01-12', '2014-01-19',
'2014-02-02', '2015-01-11', '2016-01-17',
'2016-01-24', '2016-02-07']),
'lower_window': 0,
'upper_window': 1,
})
Vacation = pd.DataFrame({
'holiday': 'superbowl',
'ds': pd.to_datetime(['2010-02-07', '2014-02-02', '2016-02-07']),
'lower_window': 0,
'upper_window': 1,
})
holidays = pd.concat((SickLeave, Vacation))
m = Prophet(holidays=holidays, holidays_prior_scale=0.05).fit(df) # train our data
forecast = m.predict(future)
m.plot_components(forecast); #plot the components trend ,holidays,yearly and weekly
###Output
_____no_output_____ |
11_tf_serving/4_df2_tflite_inference.ipynb | ###Markdown
Dataset CSV API
###Code
import tensorflow as tf
import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
from tensorflow import keras
from tensorflow.python.keras.callbacks import History
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
housing = fetch_california_housing()
x_train_all, x_test, y_train_all, y_test = train_test_split(housing.data, housing.target, random_state = 7)
x_train, x_valid, y_train, y_valid = train_test_split(x_train_all, y_train_all, random_state = 11)
print(x_valid.shape, y_valid.shape)
print(x_train.shape, y_train.shape)
print(x_test.shape, y_test.shape)
from sklearn.preprocessing import StandardScaler
# standardize the features (zero mean, unit variance)
scaler = StandardScaler()
x_train_scaled = scaler.fit_transform(x_train)
x_valid_scaled = scaler.transform(x_valid)
x_test_scaled = scaler.transform(x_test)
###Output
_____no_output_____
###Markdown
Generate csv files by using numpy lib
###Code
import os
output_dir = "generated_csv"
if not os.path.exists(output_dir):
os.mkdir(output_dir)
def save_to_csv(output_dir, data, name_prefix,
header=None, n_parts=10):
path_format = os.path.join(output_dir, "{}_{:02d}.csv")
filenames = []
for file_idx, row_indices in enumerate(np.array_split(np.arange(len(data)), n_parts)):
part_csv = path_format.format(name_prefix, file_idx)
filenames.append(part_csv)
with open(part_csv, "wt", encoding="utf-8") as f:
if header is not None:
f.write(header + "\n")
for row_index in row_indices:
f.write(",".join(
[repr(col) for col in data[row_index]]
))
f.write('\n')
return filenames
# merge two dataset
train_data = np.c_[x_train_scaled, y_train]
valid_data = np.c_[x_valid_scaled, y_valid]
test_data = np.c_[x_test_scaled, y_test]
header_cols = housing.feature_names + ["MidianHouseValue"]
header_str = ",".join(header_cols)
train_filenames = save_to_csv(output_dir,
data = train_data,
name_prefix = "train",
header = header_str,
n_parts = 20)
valid_filenames = save_to_csv(output_dir,
data = valid_data,
name_prefix = "valid",
header = header_str,
n_parts = 10)
test_filenames = save_to_csv(output_dir,
data = test_data,
name_prefix = "test",
header = header_str,
n_parts = 10)
import pprint
pprint.pprint(train_filenames)
pprint.pprint(test_filenames)
pprint.pprint(valid_filenames)
###Output
_____no_output_____
###Markdown
Read csv files with tensorflow API
###Code
# 1. read filename to dataset
# 2. read file -> dataset -> datasets -> merge
# 3. parse csv
# 1. read filename to dataset
filename_dataset = tf.data.Dataset.list_files(train_filenames)
for name in filename_dataset:
print(name)
# 2. read file -> dataset -> datasets -> merge
n_readers = 5
#.skip(1) -> remove header
dataset = filename_dataset.interleave(
lambda filename: tf.data.TextLineDataset(filenames=filename).skip(1),
cycle_length = n_readers,
)
for line in dataset.take(15):
print(line.numpy())
# 3. parse csv
sample_str = '1, 2, 3, 4, 5'
record_defaults = [tf.constant(0, dtype=tf.int32)] * 5
parsed_fields = tf.io.decode_csv(sample_str, record_defaults)
###Output
_____no_output_____
###Markdown
Use tf.data together with Keras
###Code
def parse_csv_line(line, n_fields = 9):
defs = [tf.constant(np.nan)] * n_fields
parsed_fields = tf.io.decode_csv(line, defs)
x = tf.stack(parsed_fields[0: -1]) # train
y = tf.stack(parsed_fields[-1:]) # label
return x, y
###Output
_____no_output_____
###Markdown
Build the data preprocessing pipeline
###Code
def csv_reader_dataset(filenames, n_readers = 5, batch_size=32,
n_parse_threads = 5, shuffle_buffer_size = 10000):
dataset = tf.data.Dataset.list_files(filenames)
    dataset = dataset.repeat() # repeat() with no argument repeats the dataset indefinitely
dataset = dataset.interleave(
lambda filename: tf.data.TextLineDataset(filename).skip(1),
cycle_length = n_readers
)
    dataset = dataset.shuffle(shuffle_buffer_size)
    # map is similar to interleave, but it does not flatten multiple datasets into one
dataset = dataset.map(parse_csv_line,
num_parallel_calls=n_parse_threads)
dataset = dataset.batch(batch_size)
return dataset
train_set = csv_reader_dataset(train_filenames, batch_size = 3)
for x_batch, y_batch in train_set.take(2):
print("x:")
pprint.pprint(x_batch)
print("y:")
pprint.pprint(y_batch)
batch_size = 32
train_set = csv_reader_dataset(train_filenames, batch_size=batch_size)
test_set = csv_reader_dataset(test_filenames, batch_size=batch_size)
valid_set = csv_reader_dataset(valid_filenames, batch_size=batch_size)
model = keras.models.Sequential()
model.add(keras.layers.Dense(30, activation = 'relu', input_shape=[8]))
model.add(keras.layers.Dense(1))
model.summary()
# mean_squared_error makes this a regression model
model.compile(loss = "mean_squared_error", optimizer = "sgd", metrics = ["accuracy"])
#callbacks = [
# keras.callbacks.EarlyStopping(patience = 5, min_delta = 1e-3)
#]
logdir = './graph_def_and_weights'
if not os.path.exists(logdir):
os.mkdir(logdir)
output_model_file = os.path.join(logdir,
"example_model.h5")
callbacks = [
keras.callbacks.TensorBoard(logdir),
keras.callbacks.ModelCheckpoint(output_model_file,
save_best_only = True,
save_weights_only = False),
keras.callbacks.EarlyStopping(patience=5, min_delta=1e-3),
]
history = model.fit(train_set,
validation_data = valid_set,
steps_per_epoch = 11160 // batch_size,
validation_steps = 3870 // batch_size,
epochs = 10,
callbacks = callbacks)
model.evaluate(test_set, steps = 5160//batch_size)
del model
loaded_model = keras.models.load_model(output_model_file)
###Output
_____no_output_____
###Markdown
Convert to a TFLite model in the common way
###Code
keras_to_tflite_converter = tf.lite.TFLiteConverter.from_keras_model(loaded_model)
keras_tflite = keras_to_tflite_converter.convert()
if not os.path.exists('./tflite_models'):
os.mkdir('./tflite_models')
with open('./tflite_models/keras_tflite', 'wb') as f:
f.write(keras_tflite)
with open('./tflite_models/keras_tflite', 'rb') as f:
concrete_func_tflite = f.read()
interpreter = tf.lite.Interpreter(model_content = concrete_func_tflite)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
print(input_details)
print(output_details)
input_data = tf.constant(valid_data[1][0:8].reshape([1,8]), dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
output_results = interpreter.get_tensor(output_details[0]['index'])
print(output_results)
###Output
_____no_output_____
###Markdown
Convert the model to a quantized TFLite model* need to convert to a quantized concrete function* set the optimization option on the converter* run model inference...
###Code
keras_to_tflite_converter = tf.lite.TFLiteConverter.from_keras_model(loaded_model)
keras_to_tflite_converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
keras_tflite = keras_to_tflite_converter.convert()
if not os.path.exists('./tflite_models'):
os.mkdir('./tflite_models')
with open('./tflite_models/quantized_keras_tflite', 'wb') as f:
f.write(keras_tflite)
with open('./tflite_models/quantized_keras_tflite', 'rb') as f:
concrete_func_tflite = f.read()
interpreter = tf.lite.Interpreter(
model_content = concrete_func_tflite)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
print(input_details)
print(output_details)
input_data = tf.constant(valid_data[1][0:8].reshape([1,8]), dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
output_results = interpreter.get_tensor(output_details[0]['index'])
print(output_results)
###Output
_____no_output_____ |
how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb | ###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing.png) Automated Machine Learning_**Classification with Deployment using a Bank Marketing Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Deploy](Deploy)1. [Test](Test)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the UCI Bank Marketing dataset to showcase how you can use AutoML for a classification problem and deploy it to an Azure Container Instance (ACI). The classification goal is to predict if the client will subscribe to a term deposit with the bank.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. Please find the ONNX related documentations [here](https://github.com/onnx/onnx).In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model using local compute with ONNX compatible config on.4. Explore the results, featurization transparency options and save the ONNX model5. Inference with the ONNX model.6. Register the model.7. Create a container image.8. Create an Azure Container Instance (ACI) service.9. Test the ACI service.In addition this notebook showcases the following features- **Blocking** certain pipelines- Specifying **target metrics** to indicate stopping criteria- Handling **missing data** in the input SetupAs part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
import logging
from matplotlib import pyplot as plt
import pandas as pd
import os
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.automl.core.featurization import FeaturizationConfig
from azureml.core.dataset import Dataset
from azureml.train.automl import AutoMLConfig
from azureml.interpret import ExplanationClient
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.16.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
Accessing the Azure ML workspace requires authentication with Azure.The default authentication is interactive authentication using the default tenant. Executing the `ws = Workspace.from_config()` line in the cell below will prompt for authentication the first time that it is run.If you have multiple Azure tenants, you can specify the tenant by replacing the `ws = Workspace.from_config()` line in the cell below with the following:```from azureml.core.authentication import InteractiveLoginAuthenticationauth = InteractiveLoginAuthentication(tenant_id = 'mytenantid')ws = Workspace.from_config(auth = auth)```If you need to run in an environment where interactive login is not possible, you can use Service Principal authentication by replacing the `ws = Workspace.from_config()` line in the cell below with the following:```from azureml.core.authentication import ServicePrincipalAuthenticationauth = auth = ServicePrincipalAuthentication('mytenantid', 'myappid', 'mypassword')ws = Workspace.from_config(auth = auth)```For more details, see [aka.ms/aml-notebook-auth](http://aka.ms/aml-notebook-auth)
###Code
ws = Workspace.from_config()
# choose a name for experiment
experiment_name = 'automl-classification-bmarketing-all'
experiment=Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
Create or Attach existing AmlComputeYou will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read this article on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
cpu_cluster_name = "cpu-cluster-4"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Data Load Data Leverage Azure compute to load the bank marketing dataset as a Tabular Dataset into the dataset variable. Training Data
###Code
data = pd.read_csv("https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv")
data.head()
# Add missing values in 75% of the lines.
import numpy as np
missing_rate = 0.75
n_missing_samples = int(np.floor(data.shape[0] * missing_rate))
missing_samples = np.hstack((np.zeros(data.shape[0] - n_missing_samples, dtype=np.bool), np.ones(n_missing_samples, dtype=np.bool)))
rng = np.random.RandomState(0)
rng.shuffle(missing_samples)
missing_features = rng.randint(0, data.shape[1], n_missing_samples)
data.values[np.where(missing_samples)[0], missing_features] = np.nan
if not os.path.isdir('data'):
os.mkdir('data')
# Save the train data to a csv to be uploaded to the datastore
pd.DataFrame(data).to_csv("data/train_data.csv", index=False)
ds = ws.get_default_datastore()
ds.upload(src_dir='./data', target_path='bankmarketing', overwrite=True, show_progress=True)
# Upload the training data as a tabular dataset for access during training on remote compute
train_data = Dataset.Tabular.from_delimited_files(path=ds.path('bankmarketing/train_data.csv'))
label = "y"
###Output
_____no_output_____
###Markdown
Validation Data
###Code
validation_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_validate.csv"
validation_dataset = Dataset.Tabular.from_delimited_files(validation_data)
###Output
_____no_output_____
###Markdown
Test Data
###Code
test_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_test.csv"
test_dataset = Dataset.Tabular.from_delimited_files(test_data)
###Output
_____no_output_____
###Markdown
Train Instantiate an AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression or forecasting||**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracy, AUC_weighted, average_precision_score_weighted, norm_macro_recall, precision_score_weighted||**iteration_timeout_minutes**|Time limit in minutes for each iteration.||**blocked_models**|*List* of *strings* indicating machine learning algorithms for AutoML to avoid in this run. Allowed values for **Classification**: LogisticRegression, SGD, MultinomialNaiveBayes, BernoulliNaiveBayes, SVM, LinearSVM, KNN, DecisionTree, RandomForest, ExtremeRandomTrees, LightGBM, GradientBoosting, TensorFlowDNN, TensorFlowLinearClassifier. Allowed values for **Regression**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN. Allowed values for **Forecasting**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN, Arima, Prophet||**allowed_models**|*List* of *strings* indicating machine learning algorithms for AutoML to use in this run. Same values listed above for **blocked_models** allowed for **allowed_models**.||**experiment_exit_score**|Value indicating the target for *primary_metric*. Once the target is surpassed the run terminates.||**experiment_timeout_hours**|Maximum amount of time in hours that all iterations combined can take before the experiment terminates.||**enable_early_stopping**|Flag to enable early termination if the score is not improving in the short term.||**featurization**|'auto' / 'off' Indicator for whether the featurization step should be done automatically or not. Note: If the input data is sparse, featurization cannot be turned on.||**n_cross_validations**|Number of cross validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.|**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)
###Code
automl_settings = {
"experiment_timeout_hours" : 0.3,
"enable_early_stopping" : True,
"iteration_timeout_minutes": 5,
"max_concurrent_iterations": 4,
"max_cores_per_iteration": -1,
#"n_cross_validations": 2,
"primary_metric": 'AUC_weighted',
"featurization": 'auto',
"verbosity": logging.INFO,
}
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
compute_target=compute_target,
experiment_exit_score = 0.9984,
blocked_models = ['KNN','LinearSVM'],
enable_onnx_compatible_models=True,
training_data = train_data,
label_column_name = label,
validation_data = validation_dataset,
**automl_settings
)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while. Validation errors and current status will be shown when setting `show_output=True` and the execution will be synchronous.
###Code
remote_run = experiment.submit(automl_config, show_output = False)
remote_run
###Output
_____no_output_____
###Markdown
Run the following cell to access previous runs. Uncomment the cell below and update the run_id.
###Code
#from azureml.train.automl.run import AutoMLRun
#remote_run = AutoMLRun(experiment=experiment, run_id='<run_ID_goes_here')
#remote_run
# Wait for the remote run to complete
remote_run.wait_for_completion()
best_run_customized, fitted_model_customized = remote_run.get_output()
###Output
_____no_output_____
###Markdown
TransparencyView updated featurization summary
###Code
custom_featurizer = fitted_model_customized.named_steps['datatransformer']
df = custom_featurizer.get_featurization_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Set `is_user_friendly=False` to get a more detailed summary for the transforms being applied.
###Code
df = custom_featurizer.get_featurization_summary(is_user_friendly=False)
pd.DataFrame(data=df)
df = custom_featurizer.get_stats_feature_type_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Results
###Code
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
###Output
_____no_output_____
###Markdown
Retrieve the Best Model's explanationRetrieve the explanation from the best_run which includes explanations for engineered features and raw features. Make sure that the run for generating explanations for the best model is completed.
###Code
# Wait for the best model explanation run to complete
from azureml.core.run import Run
model_explainability_run_id = remote_run.id + "_" + "ModelExplain"
print(model_explainability_run_id)
model_explainability_run = Run(experiment=experiment, run_id=model_explainability_run_id)
model_explainability_run.wait_for_completion()
# Get the best run object
best_run, fitted_model = remote_run.get_output()
###Output
_____no_output_____
###Markdown
Download engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=False)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Download raw feature importance from artifact storeYou can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=True)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Retrieve the Best ONNX ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.Set the parameter return_onnx_model=True to retrieve the best ONNX model, instead of the Python model.
###Code
best_run, onnx_mdl = remote_run.get_output(return_onnx_model=True)
###Output
_____no_output_____
###Markdown
Save the best ONNX model
###Code
from azureml.automl.runtime.onnx_convert import OnnxConverter
onnx_fl_path = "./best_model.onnx"
OnnxConverter.save_onnx_model(onnx_mdl, onnx_fl_path)
###Output
_____no_output_____
###Markdown
Predict with the ONNX model, using onnxruntime package
###Code
import sys
import json
from azureml.automl.core.onnx_convert import OnnxConvertConstants
from azureml.train.automl import constants
if sys.version_info < OnnxConvertConstants.OnnxIncompatiblePythonVersion:
python_version_compatible = True
else:
python_version_compatible = False
import onnxruntime
from azureml.automl.runtime.onnx_convert import OnnxInferenceHelper
def get_onnx_res(run):
res_path = 'onnx_resource.json'
run.download_file(name=constants.MODEL_RESOURCE_PATH_ONNX, output_file_path=res_path)
with open(res_path) as f:
onnx_res = json.load(f)
return onnx_res
if python_version_compatible:
test_df = test_dataset.to_pandas_dataframe()
mdl_bytes = onnx_mdl.SerializeToString()
onnx_res = get_onnx_res(best_run)
onnxrt_helper = OnnxInferenceHelper(mdl_bytes, onnx_res)
pred_onnx, pred_prob_onnx = onnxrt_helper.predict(test_df)
print(pred_onnx)
print(pred_prob_onnx)
else:
print('Please use Python version 3.6 or 3.7 to run the inference helper.')
###Output
_____no_output_____
###Markdown
Deploy Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details
###Code
best_run, fitted_model = remote_run.get_output()
model_name = best_run.properties['model_name']
script_file_name = 'inference/score.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', 'inference/score.py')
###Output
_____no_output_____
###Markdown
Register the Fitted Model for DeploymentIf neither `metric` nor `iteration` are specified in the `register_model` call, the iteration with the best primary metric is registered.
###Code
description = 'AutoML Model trained on bank marketing data to predict if a client will subscribe to a term deposit'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id) # This will be written to the script file later in the notebook.
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
from azureml.core.environment import Environment
inference_config = InferenceConfig(entry_script=script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 1,
tags = {'area': "bmData", 'type': "automl_classification"},
description = 'sample service for Automl Classification')
aci_service_name = 'automl-sample-bankmarketing-all'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
###Output
_____no_output_____
###Markdown
Get Logs from a Deployed Web ServiceGets logs from a deployed web service.
###Code
#aci_service.get_logs()
###Output
_____no_output_____
###Markdown
TestNow that the model is trained, run the test data through the trained model to get the predicted values. This calls the ACI web service to do the prediction.Note that the JSON passed to the ACI web service is an array of rows of data. Each row should either be an array of values in the same order that was used for training or a dictionary where the keys are the same as the column names used for training. The example below uses dictionary rows.
###Code
# Load the bank marketing datasets.
from numpy import array
X_test = test_dataset.drop_columns(columns=['y'])
y_test = test_dataset.keep_columns(columns=['y'], validate=True)
test_dataset.take(5).to_pandas_dataframe()
X_test = X_test.to_pandas_dataframe()
y_test = y_test.to_pandas_dataframe()
import json
import requests
X_test_json = X_test.to_json(orient='records')
data = "{\"data\": " + X_test_json +"}"
headers = {'Content-Type': 'application/json'}
resp = requests.post(aci_service.scoring_uri, data, headers=headers)
y_pred = json.loads(json.loads(resp.text))['result']
actual = array(y_test)
actual = actual[:,0]
print(len(y_pred), " ", len(actual))
###Output
_____no_output_____
###Markdown
Calculate metrics for the prediction

Now visualize the data as a confusion matrix that compares the predicted values with the actual values.
###Code
%matplotlib notebook
from sklearn.metrics import confusion_matrix
import numpy as np
import itertools
cf = confusion_matrix(actual, y_pred)
plt.imshow(cf, cmap=plt.cm.Blues, interpolation='nearest')
plt.colorbar()
plt.title('Confusion Matrix')
plt.xlabel('Predicted')
plt.ylabel('Actual')
class_labels = ['no', 'yes']
tick_marks = np.arange(len(class_labels))
plt.xticks(tick_marks, class_labels)
plt.yticks([-0.5, 0, 1, 1.5], ['', 'no', 'yes', ''])
# plotting text value inside cells
thresh = cf.max() / 2.
for i, j in itertools.product(range(cf.shape[0]), range(cf.shape[1])):
    plt.text(j, i, format(cf[i, j], 'd'), horizontalalignment='center',
             color='white' if cf[i, j] > thresh else 'black')
plt.show()
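# In addition to the plot, a few scalar metrics can be printed directly.
# A small sketch using scikit-learn on the same `actual`/`y_pred` arrays
# built above; accuracy_score and classification_report are standard
# sklearn.metrics functions.
from sklearn.metrics import accuracy_score, classification_report
print('Accuracy:', accuracy_score(actual, y_pred))
print(classification_report(actual, y_pred))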
###Output
_____no_output_____
###Markdown
Delete a Web Service

Deletes the specified web service.
###Code
aci_service.delete()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License.

![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing.png)

Automated Machine Learning

_**Classification with Deployment using a Bank Marketing Dataset**_

Contents
1. [Introduction](Introduction)
1. [Setup](Setup)
1. [Train](Train)
1. [Results](Results)
1. [Deploy](Deploy)
1. [Test](Test)
1. [Acknowledgements](Acknowledgements)

Introduction

In this example we use the UCI Bank Marketing dataset to showcase how you can use AutoML for a classification problem and deploy it to an Azure Container Instance (ACI). The classification goal is to predict if the client will subscribe to a term deposit with the bank.

If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. Please find the ONNX related documentation [here](https://github.com/onnx/onnx).

In this notebook you will learn how to:
1. Create an experiment using an existing workspace.
2. Configure AutoML using `AutoMLConfig`.
3. Train the model using local compute with an ONNX compatible config on.
4. Explore the results and featurization transparency options, and save the ONNX model.
5. Run inference with the ONNX model.
6. Register the model.
7. Create a container image.
8. Create an Azure Container Instance (ACI) service.
9. Test the ACI service.

In addition, this notebook showcases the following features:
- **Blacklisting** certain pipelines
- Specifying **target metrics** to indicate stopping criteria
- Handling **missing data** in the input

Setup

As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
import logging
from matplotlib import pyplot as plt
import pandas as pd
import os
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.automl.core.featurization import FeaturizationConfig
from azureml.core.dataset import Dataset
from azureml.train.automl import AutoMLConfig
from azureml.explain.model._internal.explanation_client import ExplanationClient
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.7.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
Accessing the Azure ML workspace requires authentication with Azure.

The default authentication is interactive authentication using the default tenant. Executing the `ws = Workspace.from_config()` line in the cell below will prompt for authentication the first time that it is run.

If you have multiple Azure tenants, you can specify the tenant by replacing the `ws = Workspace.from_config()` line in the cell below with the following:

```
from azureml.core.authentication import InteractiveLoginAuthentication
auth = InteractiveLoginAuthentication(tenant_id='mytenantid')
ws = Workspace.from_config(auth=auth)
```

If you need to run in an environment where interactive login is not possible, you can use Service Principal authentication by replacing the `ws = Workspace.from_config()` line in the cell below with the following:

```
from azureml.core.authentication import ServicePrincipalAuthentication
auth = ServicePrincipalAuthentication('mytenantid', 'myappid', 'mypassword')
ws = Workspace.from_config(auth=auth)
```

For more details, see [aka.ms/aml-notebook-auth](http://aka.ms/aml-notebook-auth)
###Code
ws = Workspace.from_config()
# choose a name for experiment
experiment_name = 'automl-classification-bmarketing-all'
experiment=Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
Create or Attach existing AmlCompute

You will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.

Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace, this code will skip the creation process.

As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
cpu_cluster_name = "cpu-cluster-4"
# Verify that cluster does not exist already
try:
    compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)
    print('Found existing cluster, use it.')
except ComputeTargetException:
    compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
                                                           max_nodes=6)
    compute_target = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Data

Load Data

Leverage azure compute to load the bank marketing dataset as a Tabular Dataset into the dataset variable.

Training Data
###Code
data = pd.read_csv("https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv")
data.head()
# Add missing values in 75% of the lines.
import numpy as np
missing_rate = 0.75
n_missing_samples = int(np.floor(data.shape[0] * missing_rate))
missing_samples = np.hstack((np.zeros(data.shape[0] - n_missing_samples, dtype=np.bool), np.ones(n_missing_samples, dtype=np.bool)))
rng = np.random.RandomState(0)
rng.shuffle(missing_samples)
missing_features = rng.randint(0, data.shape[1], n_missing_samples)
data.values[np.where(missing_samples)[0], missing_features] = np.nan
if not os.path.isdir('data'):
    os.mkdir('data')
# Save the train data to a csv to be uploaded to the datastore
pd.DataFrame(data).to_csv("data/train_data.csv", index=False)
ds = ws.get_default_datastore()
ds.upload(src_dir='./data', target_path='bankmarketing', overwrite=True, show_progress=True)
# Upload the training data as a tabular dataset for access during training on remote compute
train_data = Dataset.Tabular.from_delimited_files(path=ds.path('bankmarketing/train_data.csv'))
label = "y"
###Output
_____no_output_____
###Markdown
Validation Data
###Code
validation_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_validate.csv"
validation_dataset = Dataset.Tabular.from_delimited_files(validation_data)
###Output
_____no_output_____
###Markdown
Test Data
###Code
test_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_test.csv"
test_dataset = Dataset.Tabular.from_delimited_files(test_data)
###Output
_____no_output_____
###Markdown
Train

Instantiate an AutoMLConfig object. This defines the settings and data used to run the experiment.

|Property|Description|
|-|-|
|**task**|classification, regression, or forecasting|
|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracy, AUC_weighted, average_precision_score_weighted, norm_macro_recall, precision_score_weighted|
|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|
|**blacklist_models**|*List* of *strings* indicating machine learning algorithms for AutoML to avoid in this run. Allowed values for **Classification**: LogisticRegression, SGD, MultinomialNaiveBayes, BernoulliNaiveBayes, SVM, LinearSVM, KNN, DecisionTree, RandomForest, ExtremeRandomTrees, LightGBM, GradientBoosting, TensorFlowDNN, TensorFlowLinearClassifier. Allowed values for **Regression**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN. Allowed values for **Forecasting**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN, Arima, Prophet|
|**whitelist_models**|*List* of *strings* indicating machine learning algorithms for AutoML to use in this run. Same values listed above for **blacklist_models** allowed for **whitelist_models**.|
|**experiment_exit_score**|Value indicating the target for *primary_metric*. Once the target is surpassed the run terminates.|
|**experiment_timeout_hours**|Maximum amount of time in hours that all iterations combined can take before the experiment terminates.|
|**enable_early_stopping**|Flag to enable early termination if the score is not improving in the short term.|
|**featurization**|'auto' / 'off' Indicator for whether the featurization step should be done automatically or not. Note: if the input data is sparse, featurization cannot be turned on.|
|**n_cross_validations**|Number of cross validation splits.|
|**training_data**|Input dataset, containing both features and label column.|
|**label_column_name**|The name of the label column.|

**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)
###Code
automl_settings = {
"experiment_timeout_hours" : 0.3,
"enable_early_stopping" : True,
"iteration_timeout_minutes": 5,
"max_concurrent_iterations": 4,
"max_cores_per_iteration": -1,
#"n_cross_validations": 2,
"primary_metric": 'AUC_weighted',
"featurization": 'auto',
"verbosity": logging.INFO,
}
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
compute_target=compute_target,
experiment_exit_score = 0.9984,
blacklist_models = ['KNN','LinearSVM'],
enable_onnx_compatible_models=True,
training_data = train_data,
label_column_name = label,
validation_data = validation_dataset,
**automl_settings
)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.
###Code
remote_run = experiment.submit(automl_config, show_output = False)
remote_run
###Output
_____no_output_____
###Markdown
Run the following cell to access previous runs. Uncomment the cell below and update the run_id.
###Code
#from azureml.train.automl.run import AutoMLRun
#remote_run = AutoMLRun(experiment=experiment, run_id='<run_ID_goes_here>')
#remote_run
# Wait for the remote run to complete
remote_run.wait_for_completion()
best_run_customized, fitted_model_customized = remote_run.get_output()
###Output
_____no_output_____
###Markdown
TransparencyView updated featurization summary
###Code
custom_featurizer = fitted_model_customized.named_steps['datatransformer']
df = custom_featurizer.get_featurization_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Set `is_user_friendly=False` to get a more detailed summary for the transforms being applied.
###Code
df = custom_featurizer.get_featurization_summary(is_user_friendly=False)
pd.DataFrame(data=df)
df = custom_featurizer.get_stats_feature_type_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Results
###Code
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
###Output
_____no_output_____
###Markdown
Retrieve the Best Model's explanation

Retrieve the explanation from the best_run which includes explanations for engineered features and raw features. Make sure that the run for generating explanations for the best model is completed.
###Code
# Wait for the best model explanation run to complete
from azureml.core.run import Run
model_explainability_run_id = remote_run.get_properties().get('ModelExplainRunId')
print(model_explainability_run_id)
if model_explainability_run_id is not None:
    model_explainability_run = Run(experiment=experiment, run_id=model_explainability_run_id)
    model_explainability_run.wait_for_completion()
# Get the best run object
best_run, fitted_model = remote_run.get_output()
###Output
_____no_output_____
###Markdown
Download engineered feature importance from artifact store

You can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=False)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Download raw feature importance from artifact store

You can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=True)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Retrieve the Best ONNX Model

Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.

Set the parameter `return_onnx_model=True` to retrieve the best ONNX model, instead of the Python model.
###Code
best_run, onnx_mdl = remote_run.get_output(return_onnx_model=True)
###Output
_____no_output_____
###Markdown
Save the best ONNX model
###Code
from azureml.automl.runtime.onnx_convert import OnnxConverter
onnx_fl_path = "./best_model.onnx"
OnnxConverter.save_onnx_model(onnx_mdl, onnx_fl_path)
###Output
_____no_output_____
###Markdown
Predict with the ONNX model, using onnxruntime package
###Code
import sys
import json
from azureml.automl.core.onnx_convert import OnnxConvertConstants
from azureml.train.automl import constants
if sys.version_info < OnnxConvertConstants.OnnxIncompatiblePythonVersion:
    python_version_compatible = True
else:
    python_version_compatible = False

import onnxruntime
from azureml.automl.runtime.onnx_convert import OnnxInferenceHelper

def get_onnx_res(run):
    res_path = 'onnx_resource.json'
    run.download_file(name=constants.MODEL_RESOURCE_PATH_ONNX, output_file_path=res_path)
    with open(res_path) as f:
        onnx_res = json.load(f)
    return onnx_res

if python_version_compatible:
    test_df = test_dataset.to_pandas_dataframe()
    mdl_bytes = onnx_mdl.SerializeToString()
    onnx_res = get_onnx_res(best_run)
    onnxrt_helper = OnnxInferenceHelper(mdl_bytes, onnx_res)
    pred_onnx, pred_prob_onnx = onnxrt_helper.predict(test_df)
    print(pred_onnx)
    print(pred_prob_onnx)
else:
    print('Please use Python version 3.6 or 3.7 to run the inference helper.')
###Output
_____no_output_____
###Markdown
Deploy

Retrieve the Best Model

Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.

Widget for Monitoring Runs

The widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.

**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details.
###Code
best_run, fitted_model = remote_run.get_output()
model_name = best_run.properties['model_name']
script_file_name = 'inference/score.py'
conda_env_file_name = 'inference/env.yml'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', 'inference/score.py')
best_run.download_file('outputs/conda_env_v_1_0_0.yml', 'inference/env.yml')
###Output
_____no_output_____
###Markdown
Register the Fitted Model for Deployment

If neither `metric` nor `iteration` are specified in the `register_model` call, the iteration with the best primary metric is registered.
###Code
description = 'AutoML Model trained on bank marketing data to predict if a client will subscribe to a term deposit'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id) # This will be written to the script file later in the notebook.
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
from azureml.core.environment import Environment
myenv = Environment.from_conda_specification(name="myenv", file_path=conda_env_file_name)
inference_config = InferenceConfig(entry_script=script_file_name, environment=myenv)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 1,
tags = {'area': "bmData", 'type': "automl_classification"},
description = 'sample service for Automl Classification')
aci_service_name = 'automl-sample-bankmarketing-all'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
###Output
_____no_output_____
###Markdown
Delete a Web Service

Deletes the specified web service.
###Code
#aci_service.delete()
###Output
_____no_output_____
###Markdown
Get Logs from a Deployed Web Service

Gets logs from a deployed web service.
###Code
#aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Test

Now that the model is trained, run the test data through the trained model to get the predicted values.
###Code
# Load the bank marketing datasets.
from numpy import array
X_test = test_dataset.drop_columns(columns=['y'])
y_test = test_dataset.keep_columns(columns=['y'], validate=True)
test_dataset.take(5).to_pandas_dataframe()
X_test = X_test.to_pandas_dataframe()
y_test = y_test.to_pandas_dataframe()
y_pred = fitted_model.predict(X_test)
actual = array(y_test)
actual = actual[:,0]
print(y_pred.shape, " ", actual.shape)
###Output
_____no_output_____
###Markdown
Calculate metrics for the prediction

Now visualize the predictions on a scatter plot, comparing the truth (actual) values with the values predicted by the trained model.
###Code
%matplotlib notebook
test_pred = plt.scatter(actual, y_pred, color='b')
test_test = plt.scatter(actual, actual, color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
###Markdown
Automated Machine Learning

_**Classification with Deployment using a Bank Marketing Dataset**_

Contents
1. [Introduction](Introduction)
1. [Setup](Setup)
1. [Train](Train)
1. [Results](Results)
1. [Deploy](Deploy)
1. [Test](Test)
1. [Use auto-generated code for retraining](Using-the-auto-generated-model-training-code-for-retraining-on-new-data)
1. [Acknowledgements](Acknowledgements)

Introduction

In this example we use the UCI Bank Marketing dataset to showcase how you can use AutoML for a classification problem and deploy it to an Azure Container Instance (ACI). The classification goal is to predict if the client will subscribe to a term deposit with the bank.

If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. Please find the ONNX related documentation [here](https://github.com/onnx/onnx).

In this notebook you will learn how to:
1. Create an experiment using an existing workspace.
2. Configure AutoML using `AutoMLConfig`.
3. Train the model using local compute with an ONNX compatible config on.
4. Explore the results and featurization transparency options, and save the ONNX model.
5. Run inference with the ONNX model.
6. Register the model.
7. Create a container image.
8. Create an Azure Container Instance (ACI) service.
9. Test the ACI service.
10. Leverage the auto generated training code and use it for retraining on an updated dataset.

In addition, this notebook showcases the following features:
- **Blocking** certain pipelines
- Specifying **target metrics** to indicate stopping criteria
- Handling **missing data** in the input

Setup

As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
import json
import logging
from matplotlib import pyplot as plt
import pandas as pd
import os
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.core.dataset import Dataset
from azureml.train.automl import AutoMLConfig
from azureml.interpret import ExplanationClient
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.

Accessing the Azure ML workspace requires authentication with Azure.

The default authentication is interactive authentication using the default tenant. Executing the `ws = Workspace.from_config()` line in the cell below will prompt for authentication the first time that it is run.

If you have multiple Azure tenants, you can specify the tenant by replacing the `ws = Workspace.from_config()` line in the cell below with the following:

```
from azureml.core.authentication import InteractiveLoginAuthentication
auth = InteractiveLoginAuthentication(tenant_id='mytenantid')
ws = Workspace.from_config(auth=auth)
```

If you need to run in an environment where interactive login is not possible, you can use Service Principal authentication by replacing the `ws = Workspace.from_config()` line in the cell below with the following:

```
from azureml.core.authentication import ServicePrincipalAuthentication
auth = ServicePrincipalAuthentication('mytenantid', 'myappid', 'mypassword')
ws = Workspace.from_config(auth=auth)
```

For more details, see [aka.ms/aml-notebook-auth](http://aka.ms/aml-notebook-auth)
###Code
ws = Workspace.from_config()
# choose a name for experiment
experiment_name = "automl-classification-bmarketing-all"
experiment = Experiment(ws, experiment_name)
output = {}
output["Subscription ID"] = ws.subscription_id
output["Workspace"] = ws.name
output["Resource Group"] = ws.resource_group
output["Location"] = ws.location
output["Experiment Name"] = experiment.name
output["SDK Version"] = azureml.core.VERSION
pd.set_option("display.max_colwidth", None)
outputDf = pd.DataFrame(data=output, index=[""])
outputDf.T
###Output
_____no_output_____
###Markdown
Create or Attach existing AmlCompute

You will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.

> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist.

Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace, this code will skip the creation process.

As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
cpu_cluster_name = "cpu-cluster-4"
# Verify that cluster does not exist already
try:
    compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)
    print("Found existing cluster, use it.")
except ComputeTargetException:
    compute_config = AmlCompute.provisioning_configuration(
        vm_size="STANDARD_DS12_V2", max_nodes=6
    )
    compute_target = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Data

Load Data

Leverage azure compute to load the bank marketing dataset as a Tabular Dataset into the dataset variable.

Training Data
###Code
data = pd.read_csv(
"https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv"
)
data.head()
# Add missing values in 75% of the lines.
import numpy as np
missing_rate = 0.75
n_missing_samples = int(np.floor(data.shape[0] * missing_rate))
missing_samples = np.hstack(
(
np.zeros(data.shape[0] - n_missing_samples, dtype=np.bool),
np.ones(n_missing_samples, dtype=np.bool),
)
)
rng = np.random.RandomState(0)
rng.shuffle(missing_samples)
missing_features = rng.randint(0, data.shape[1], n_missing_samples)
data.values[np.where(missing_samples)[0], missing_features] = np.nan
if not os.path.isdir("data"):
os.mkdir("data")
# Save the train data to a csv to be uploaded to the datastore
pd.DataFrame(data).to_csv("data/train_data.csv", index=False)
ds = ws.get_default_datastore()
ds.upload(
src_dir="./data", target_path="bankmarketing", overwrite=True, show_progress=True
)
# Upload the training data as a tabular dataset for access during training on remote compute
train_data = Dataset.Tabular.from_delimited_files(
path=ds.path("bankmarketing/train_data.csv")
)
label = "y"
###Output
_____no_output_____
###Markdown
Validation Data
###Code
validation_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_validate.csv"
validation_dataset = Dataset.Tabular.from_delimited_files(validation_data)
###Output
_____no_output_____
###Markdown
Test Data
###Code
test_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_test.csv"
test_dataset = Dataset.Tabular.from_delimited_files(test_data)
###Output
_____no_output_____
###Markdown
Train

Instantiate an AutoMLConfig object. This defines the settings and data used to run the experiment.

|Property|Description|
|-|-|
|**task**|classification, regression, or forecasting|
|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracy, AUC_weighted, average_precision_score_weighted, norm_macro_recall, precision_score_weighted|
|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|
|**blocked_models**|*List* of *strings* indicating machine learning algorithms for AutoML to avoid in this run. Allowed values for **Classification**: LogisticRegression, SGD, MultinomialNaiveBayes, BernoulliNaiveBayes, SVM, LinearSVM, KNN, DecisionTree, RandomForest, ExtremeRandomTrees, LightGBM, GradientBoosting, TensorFlowDNN, TensorFlowLinearClassifier. Allowed values for **Regression**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN. Allowed values for **Forecasting**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN, Arima, Prophet|
|**allowed_models**|*List* of *strings* indicating machine learning algorithms for AutoML to use in this run. Same values listed above for **blocked_models** allowed for **allowed_models**.|
|**experiment_exit_score**|Value indicating the target for *primary_metric*. Once the target is surpassed the run terminates.|
|**experiment_timeout_hours**|Maximum amount of time in hours that all iterations combined can take before the experiment terminates.|
|**enable_early_stopping**|Flag to enable early termination if the score is not improving in the short term.|
|**featurization**|'auto' / 'off' Indicator for whether the featurization step should be done automatically or not. Note: if the input data is sparse, featurization cannot be turned on.|
|**n_cross_validations**|Number of cross validation splits.|
|**training_data**|Input dataset, containing both features and label column.|
|**label_column_name**|The name of the label column.|
|**enable_code_generation**|Flag to enable generation of training code for each of the models that AutoML is creating.|

**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)
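As a hedged variant of the configuration below, `n_cross_validations` can be used in place of a separate validation set (a sketch only, not the configuration this notebook actually runs):

```
automl_config_cv = AutoMLConfig(
    task="classification",
    primary_metric="AUC_weighted",
    n_cross_validations=2,          # replaces validation_data
    training_data=train_data,
    label_column_name=label,
    compute_target=compute_target,
)
```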
###Code
automl_settings = {
"experiment_timeout_hours": 0.3,
"enable_early_stopping": True,
"iteration_timeout_minutes": 5,
"max_concurrent_iterations": 4,
"max_cores_per_iteration": -1,
# "n_cross_validations": 2,
"primary_metric": "AUC_weighted",
"featurization": "auto",
"verbosity": logging.INFO,
"enable_code_generation": True,
}
automl_config = AutoMLConfig(
task="classification",
debug_log="automl_errors.log",
compute_target=compute_target,
experiment_exit_score=0.9984,
blocked_models=["KNN", "LinearSVM"],
enable_onnx_compatible_models=True,
training_data=train_data,
label_column_name=label,
validation_data=validation_dataset,
**automl_settings,
)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while. Validation errors and current status will be shown when setting `show_output=True` and the execution will be synchronous.
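For example, to stream that status to the console instead of running quietly:

```
# remote_run = experiment.submit(automl_config, show_output=True)
```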
###Code
remote_run = experiment.submit(automl_config, show_output=False)
###Output
_____no_output_____
###Markdown
Run the following cell to access previous runs. Uncomment the cell below and update the run_id.
###Code
# from azureml.train.automl.run import AutoMLRun
# remote_run = AutoMLRun(experiment=experiment, run_id='<run_ID_goes_here>')
# remote_run
# Wait for the remote run to complete
remote_run.wait_for_completion()
# Retrieve the best Run object
best_run = remote_run.get_best_child()
###Output
_____no_output_____
###Markdown
Transparency

View featurization summary for the best model - to study how different features were transformed. This is stored as a JSON file in the outputs directory for the run.
###Code
# Download the featurization summary JSON file locally
best_run.download_file(
"outputs/featurization_summary.json", "featurization_summary.json"
)
# Render the JSON as a pandas DataFrame
with open("featurization_summary.json", "r") as f:
records = json.load(f)
pd.DataFrame.from_records(records)
###Output
_____no_output_____
###Markdown
Results
###Code
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
###Output
_____no_output_____
###Markdown
Retrieve the Best Model's explanation

Retrieve the explanation from the best_run which includes explanations for engineered features and raw features. Make sure that the run for generating explanations for the best model is completed.
###Code
# Wait for the best model explanation run to complete
from azureml.core.run import Run
model_explainability_run_id = remote_run.id + "_" + "ModelExplain"
print(model_explainability_run_id)
model_explainability_run = Run(
experiment=experiment, run_id=model_explainability_run_id
)
model_explainability_run.wait_for_completion()
# Get the best run object
best_run = remote_run.get_best_child()
###Output
_____no_output_____
###Markdown
Download engineered feature importance from artifact store

You can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=False)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Download raw feature importance from artifact store

You can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=True)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Retrieve the Best ONNX Model

Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.

Set the parameter `return_onnx_model=True` to retrieve the best ONNX model, instead of the Python model.
###Code
best_run, onnx_mdl = remote_run.get_output(return_onnx_model=True)
###Output
_____no_output_____
###Markdown
Save the best ONNX model
###Code
from azureml.automl.runtime.onnx_convert import OnnxConverter
onnx_fl_path = "./best_model.onnx"
OnnxConverter.save_onnx_model(onnx_mdl, onnx_fl_path)
###Output
_____no_output_____
###Markdown
Predict with the ONNX model, using onnxruntime package
###Code
import sys
import json
from azureml.automl.core.onnx_convert import OnnxConvertConstants
from azureml.train.automl import constants
from azureml.automl.runtime.onnx_convert import OnnxInferenceHelper
def get_onnx_res(run):
    res_path = "onnx_resource.json"
    run.download_file(
        name=constants.MODEL_RESOURCE_PATH_ONNX, output_file_path=res_path
    )
    with open(res_path) as f:
        result = json.load(f)
    return result


if sys.version_info < OnnxConvertConstants.OnnxIncompatiblePythonVersion:
    test_df = test_dataset.to_pandas_dataframe()
    mdl_bytes = onnx_mdl.SerializeToString()
    onnx_result = get_onnx_res(best_run)
    onnxrt_helper = OnnxInferenceHelper(mdl_bytes, onnx_result)
    pred_onnx, pred_prob_onnx = onnxrt_helper.predict(test_df)
    print(pred_onnx)
    print(pred_prob_onnx)
else:
    print("Please use Python version 3.6 or 3.7 to run the inference helper.")
###Output
_____no_output_____
###Markdown
Deploy

Retrieve the Best Model

Below we select the best pipeline from our iterations. The `get_best_child` method returns the Run object for the best model based on the default primary metric. There are additional flags that can be passed to the method if we want to retrieve the best Run based on any of the other supported metrics, or if we are just interested in the best run among the ONNX compatible runs. As always, you can execute `??remote_run.get_best_child` in a new cell to view the source or docs for the function.
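A hedged sketch of those flags (parameter names as documented for `AutoMLRun.get_best_child`; the metric value is illustrative):

```
# best child by a different supported metric
best_by_accuracy = remote_run.get_best_child(metric='accuracy')
# best child restricted to ONNX compatible runs
best_onnx_run = remote_run.get_best_child(onnx_compatible=True)
```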
###Code
??remote_run.get_best_child
###Output
_____no_output_____
###Markdown
Widget for Monitoring Runs

The widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.

**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details.
###Code
best_run = remote_run.get_best_child()
model_name = best_run.properties["model_name"]
script_file_name = "inference/score.py"
best_run.download_file("outputs/scoring_file_v_1_0_0.py", "inference/score.py")
###Output
_____no_output_____
###Markdown
Register the Fitted Model for Deployment

If neither `metric` nor `iteration` are specified in the `register_model` call, the iteration with the best primary metric is registered.
###Code
description = "AutoML Model trained on bank marketing data to predict if a client will subscribe to a term deposit"
tags = None
model = remote_run.register_model(
model_name=model_name, description=description, tags=tags
)
print(
remote_run.model_id
) # This will be written to the script file later in the notebook.
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
from azureml.core.environment import Environment
inference_config = InferenceConfig(entry_script=script_file_name)
aciconfig = AciWebservice.deploy_configuration(
cpu_cores=2,
memory_gb=2,
tags={"area": "bmData", "type": "automl_classification"},
description="sample service for Automl Classification",
)
aci_service_name = model_name.lower()
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
###Output
_____no_output_____
###Markdown
Get Logs from a Deployed Web Service

Gets logs from a deployed web service.
###Code
# aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Test

Now that the model is trained, run the test data through the trained model to get the predicted values. This calls the ACI web service to do the prediction.

Note that the JSON passed to the ACI web service is an array of rows of data. Each row should either be an array of values in the same order that was used for training or a dictionary where the keys are the same as the column names used for training. The example below uses dictionary rows.
###Code
# Load the bank marketing datasets.
from numpy import array
X_test = test_dataset.drop_columns(columns=["y"])
y_test = test_dataset.keep_columns(columns=["y"], validate=True)
test_dataset.take(5).to_pandas_dataframe()
X_test = X_test.to_pandas_dataframe()
y_test = y_test.to_pandas_dataframe()
import requests
X_test_json = X_test.to_json(orient="records")
data = '{"data": ' + X_test_json + "}"
headers = {"Content-Type": "application/json"}
resp = requests.post(aci_service.scoring_uri, data, headers=headers)
y_pred = json.loads(json.loads(resp.text))["result"]
actual = array(y_test)
actual = actual[:, 0]
print(len(y_pred), " ", len(actual))
###Output
_____no_output_____
###Markdown
Calculate metrics for the prediction

Now visualize the data as a confusion matrix that compares the predicted values with the actual values.
###Code
%matplotlib notebook
from sklearn.metrics import confusion_matrix
import itertools
cf = confusion_matrix(actual, y_pred)
plt.imshow(cf, cmap=plt.cm.Blues, interpolation="nearest")
plt.colorbar()
plt.title("Confusion Matrix")
plt.xlabel("Predicted")
plt.ylabel("Actual")
class_labels = ["no", "yes"]
tick_marks = np.arange(len(class_labels))
plt.xticks(tick_marks, class_labels)
plt.yticks([-0.5, 0, 1, 1.5], ["", "no", "yes", ""])
# plotting text value inside cells
thresh = cf.max() / 2.0
for i, j in itertools.product(range(cf.shape[0]), range(cf.shape[1])):
    plt.text(
        j,
        i,
        format(cf[i, j], "d"),
        horizontalalignment="center",
        color="white" if cf[i, j] > thresh else "black",
    )
plt.show()
###Output
_____no_output_____
###Markdown
Delete a Web Service

Deletes the specified web service.
###Code
aci_service.delete()
###Output
_____no_output_____
###Markdown
Using the auto generated model training code for retraining on new data

Because we enabled code generation when the original experiment was created, we now have access to the code that was used to train any of the models AutoML tried. Below we'll use the generated training script of the best model to retrain on a new dataset.

For this demo, we'll begin by creating a new retraining dataset by combining the Train & Validation datasets that were used in the original experiment.
###Code
original_train_data = pd.read_csv(
"https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv"
)
valid_data = pd.read_csv(
"https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_validate.csv"
)
# we'll emulate an updated dataset for retraining by combining the Train & Validation datasets into a new one
retrain_pd = pd.concat([original_train_data, valid_data])
retrain_pd.to_csv("data/retrain_data.csv", index=False)
ds.upload_files(
files=["data/retrain_data.csv"],
target_path="bankmarketing/",
overwrite=True,
show_progress=True,
)
retrain_dataset = Dataset.Tabular.from_delimited_files(
path=ds.path("bankmarketing/retrain_data.csv")
)
# after creating and uploading the retraining dataset, let's register it with the workspace for reuse
retrain_dataset = retrain_dataset.register(
workspace=ws,
name="Bankmarketing_retrain",
description="Updated training dataset, includes validation data",
create_new_version=True,
)
###Output
_____no_output_____
###Markdown
Next, we'll download the generated script for the best run and use it for retraining. For more advanced scenarios, you can customize the training script as you need: change the featurization pipeline, change the learner algorithm or its hyperparameters, etc. For this exercise, we'll leave the script as it was generated.
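Purely as a hypothetical illustration of such a customization (the structure and names inside the generated script vary per model, so the lines below are made up, not taken from a real generated script):

```
# inside generated_code/training_script.py -- e.g. tweak a hyperparameter:
# model = LightGBMClassifier(min_data_in_leaf=20)   # as generated (hypothetical)
# model = LightGBMClassifier(min_data_in_leaf=50)   # customized   (hypothetical)
```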
###Code
# download the autogenerated training script into the generated_code folder
best_run.download_file(
"outputs/generated_code/script.py", "generated_code/training_script.py"
)
# view the contents of the autogenerated training script
! cat generated_code/training_script.py
import uuid
from azureml.core import ScriptRunConfig
from azureml._restclient.models import RunTypeV2
from azureml._restclient.models.create_run_dto import CreateRunDto
from azureml._restclient.run_client import RunClient
codegen_runid = str(uuid.uuid4())
client = RunClient(
experiment.workspace.service_context,
experiment.name,
codegen_runid,
experiment_id=experiment.id,
)
# override the training_dataset_id to point to our new retraining dataset we just registered above
dataset_arguments = ["--training_dataset_id", retrain_dataset.id]
# create the retraining run as a child of the AutoML generated training run
create_run_dto = CreateRunDto(
run_id=codegen_runid,
parent_run_id=best_run.id,
description="AutoML Codegen Script Run using an updated training dataset",
target=cpu_cluster_name,
run_type_v2=RunTypeV2(orchestrator="Execution", traits=["automl-codegen"]),
)
# the script for retraining run is pointing to the AutoML generated script
src = ScriptRunConfig(
source_directory="generated_code",
script="training_script.py",
arguments=dataset_arguments,
compute_target=cpu_cluster_name,
environment=best_run.get_environment(),
)
run_dto = client.create_run(run_id=codegen_runid, create_run_dto=create_run_dto)
# submit the experiment
retraining_run = experiment.submit(config=src, run_id=codegen_runid)
retraining_run
###Output
_____no_output_____
###Markdown
After the run completes, we can download, test, and deploy the model it has built.
###Code
retraining_run.wait_for_completion()
retraining_run.download_file("outputs/model.pkl", "generated_code/model.pkl")
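# A hedged local smoke test of the retrained model. Assumptions: the pickle
# was written with joblib/pickle, as scikit-learn based AutoML models
# typically are, and the local environment has the packages the model
# depends on (otherwise loading will fail).
import joblib
retrained_model = joblib.load("generated_code/model.pkl")
print(retrained_model.predict(X_test.head(5)))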
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License.

![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing.png)

Automated Machine Learning

_**Classification with Deployment using a Bank Marketing Dataset**_

Contents
1. [Introduction](Introduction)
1. [Setup](Setup)
1. [Train](Train)
1. [Results](Results)
1. [Deploy](Deploy)
1. [Test](Test)
1. [Acknowledgements](Acknowledgements)

Introduction

In this example we use the UCI Bank Marketing dataset to showcase how you can use AutoML for a classification problem and deploy it to an Azure Container Instance (ACI). The classification goal is to predict if the client will subscribe to a term deposit with the bank.

If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. Please find the ONNX related documentation [here](https://github.com/onnx/onnx).

In this notebook you will learn how to:
1. Create an experiment using an existing workspace.
2. Configure AutoML using `AutoMLConfig`.
3. Train the model using local compute with an ONNX compatible config on.
4. Explore the results and featurization transparency options, and save the ONNX model.
5. Run inference with the ONNX model.
6. Register the model.
7. Create a container image.
8. Create an Azure Container Instance (ACI) service.
9. Test the ACI service.

In addition, this notebook showcases the following features:
- **Blacklisting** certain pipelines
- Specifying **target metrics** to indicate stopping criteria
- Handling **missing data** in the input

Setup

As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
import logging
from matplotlib import pyplot as plt
import pandas as pd
import os
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.automl.core.featurization import FeaturizationConfig
from azureml.core.dataset import Dataset
from azureml.train.automl import AutoMLConfig
from azureml.explain.model._internal.explanation_client import ExplanationClient
ws = Workspace.from_config()
# choose a name for experiment
experiment_name = 'automl-classification-bmarketing-all'
experiment=Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
Create or Attach existing AmlCompute

You will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.

Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace, this code will skip the creation process.

As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import AmlCompute
from azureml.core.compute import ComputeTarget
# Choose a name for your cluster.
amlcompute_cluster_name = "cpu-cluster-4"
found = False
# Check if this compute target already exists in the workspace.
cts = ws.compute_targets
if amlcompute_cluster_name in cts and cts[amlcompute_cluster_name].type == 'AmlCompute':
    found = True
    print('Found existing compute target.')
    compute_target = cts[amlcompute_cluster_name]

if not found:
    print('Creating a new compute target...')
    provisioning_config = AmlCompute.provisioning_configuration(vm_size = "STANDARD_D2_V2", # for GPU, use "STANDARD_NC6"
                                                                #vm_priority = 'lowpriority', # optional
                                                                max_nodes = 6)
    # Create the cluster.
    compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, provisioning_config)

print('Checking cluster status...')
# Can poll for a minimum number of nodes and for a specific timeout.
# If no min_node_count is provided, it will use the scale settings for the cluster.
compute_target.wait_for_completion(show_output = True, min_node_count = None, timeout_in_minutes = 20)

# For a more detailed view of current AmlCompute status, use get_status().
###Output
_____no_output_____
###Markdown
Data

Load Data

Leverage azure compute to load the bank marketing dataset as a Tabular Dataset into the dataset variable.

Training Data
###Code
data = pd.read_csv("https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv")
data.head()
# Add missing values in 75% of the lines.
import numpy as np
missing_rate = 0.75
n_missing_samples = int(np.floor(data.shape[0] * missing_rate))
missing_samples = np.hstack((np.zeros(data.shape[0] - n_missing_samples, dtype=np.bool), np.ones(n_missing_samples, dtype=np.bool)))
rng = np.random.RandomState(0)
rng.shuffle(missing_samples)
missing_features = rng.randint(0, data.shape[1], n_missing_samples)
data.values[np.where(missing_samples)[0], missing_features] = np.nan
if not os.path.isdir('data'):
    os.mkdir('data')
# Save the train data to a csv to be uploaded to the datastore
pd.DataFrame(data).to_csv("data/train_data.csv", index=False)
ds = ws.get_default_datastore()
ds.upload(src_dir='./data', target_path='bankmarketing', overwrite=True, show_progress=True)
# Upload the training data as a tabular dataset for access during training on remote compute
train_data = Dataset.Tabular.from_delimited_files(path=ds.path('bankmarketing/train_data.csv'))
label = "y"
###Output
_____no_output_____
###Markdown
Validation Data
###Code
validation_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_validate.csv"
validation_dataset = Dataset.Tabular.from_delimited_files(validation_data)
###Output
_____no_output_____
###Markdown
Test Data
###Code
test_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_test.csv"
test_dataset = Dataset.Tabular.from_delimited_files(test_data)
###Output
_____no_output_____
###Markdown
Train

Instantiate an AutoMLConfig object. This defines the settings and data used to run the experiment.

|Property|Description|
|-|-|
|**task**|classification, regression, or forecasting|
|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracy, AUC_weighted, average_precision_score_weighted, norm_macro_recall, precision_score_weighted|
|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|
|**blacklist_models** or **whitelist_models**|*List* of *strings* indicating machine learning algorithms for AutoML to avoid (blacklist) or to use (whitelist) in this run. Allowed values for **Classification**: LogisticRegression, SGD, MultinomialNaiveBayes, BernoulliNaiveBayes, SVM, LinearSVM, KNN, DecisionTree, RandomForest, ExtremeRandomTrees, LightGBM, GradientBoosting, TensorFlowDNN, TensorFlowLinearClassifier. Allowed values for **Regression**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN. Allowed values for **Forecasting**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN, Arima, Prophet|
|**experiment_exit_score**|Value indicating the target for *primary_metric*. Once the target is surpassed the run terminates.|
|**experiment_timeout_minutes**|Maximum amount of time in minutes that all iterations combined can take before the experiment terminates.|
|**enable_early_stopping**|Flag to enable early termination if the score is not improving in the short term.|
|**featurization**|'auto' / 'off' Indicator for whether the featurization step should be done automatically or not. Note: if the input data is sparse, featurization cannot be turned on.|
|**n_cross_validations**|Number of cross validation splits.|
|**training_data**|Input dataset, containing both features and label column.|
|**label_column_name**|The name of the label column.|
|**model_explainability**|Indicate to explain each trained pipeline or not.|

**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)
###Code
automl_settings = {
"experiment_timeout_minutes" : 20,
"enable_early_stopping" : True,
"iteration_timeout_minutes": 5,
"max_concurrent_iterations": 4,
"max_cores_per_iteration": -1,
#"n_cross_validations": 2,
"primary_metric": 'AUC_weighted',
"featurization": 'auto',
"verbosity": logging.INFO,
}
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
compute_target=compute_target,
experiment_exit_score = 0.9984,
blacklist_models = ['KNN','LinearSVM'],
enable_onnx_compatible_models=True,
training_data = train_data,
label_column_name = label,
validation_data = validation_dataset,
model_explainability=True,
**automl_settings
)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.

In this example, we specify `show_output = False`; set it to `True` to print currently running iterations to the console.
###Code
remote_run = experiment.submit(automl_config, show_output = False)
remote_run
###Output
_____no_output_____
###Markdown
Run the following cell to access previous runs. Uncomment the cell below and update the run_id.
###Code
#from azureml.train.automl.run import AutoMLRun
#experiment_name = 'automl-classification-bmarketing'
#experiment = Experiment(ws, experiment_name)
#remote_run = AutoMLRun(experiment=experiment, run_id='<run_ID_goes_here>')
#remote_run
# Wait for the remote run to complete
remote_run.wait_for_completion()
best_run_customized, fitted_model_customized = remote_run.get_output()
###Output
_____no_output_____
###Markdown
Transparency

View updated featurization summary
###Code
custom_featurizer = fitted_model_customized.named_steps['datatransformer']
df = custom_featurizer.get_featurization_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Set `is_user_friendly=False` to get a more detailed summary for the transforms being applied.
###Code
df = custom_featurizer.get_featurization_summary(is_user_friendly=False)
pd.DataFrame(data=df)
df = custom_featurizer.get_stats_feature_type_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Results
###Code
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
###Output
_____no_output_____
###Markdown
Retrieve the Best Model's explanation

Retrieve the explanation from the best_run which includes explanations for engineered features and raw features. Make sure that the run for generating explanations for the best model is completed.
###Code
# Wait for the best model explanation run to complete
from azureml.train.automl.run import AutoMLRun
model_explainability_run_id = remote_run.get_properties().get('ModelExplainRunId')
print(model_explainability_run_id)
if model_explainability_run_id is not None:
    model_explainability_run = AutoMLRun(experiment=experiment, run_id=model_explainability_run_id)
    model_explainability_run.wait_for_completion()
# Get the best run object
best_run, fitted_model = remote_run.get_output()
###Output
_____no_output_____
###Markdown
Download engineered feature importance from artifact store

You can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=False)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Download raw feature importance from artifact storeYou can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
raw_explanations = client.download_model_explanation(raw=True)
exp_data = raw_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Retrieve the Best ONNX ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.Set the parameter return_onnx_model=True to retrieve the best ONNX model, instead of the Python model.
###Code
best_run, onnx_mdl = remote_run.get_output(return_onnx_model=True)
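# A hedged sketch of the other `get_output` overloads mentioned above; the
# keyword names here are assumptions based on the AutoMLRun API:
# best_run, fitted_model = remote_run.get_output(metric='accuracy')  # best run for a specific logged metric
# best_run, fitted_model = remote_run.get_output(iteration=3)        # run for a particular iteration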
###Output
_____no_output_____
###Markdown
Save the best ONNX model
###Code
from azureml.automl.core.onnx_convert import OnnxConverter
onnx_fl_path = "./best_model.onnx"
OnnxConverter.save_onnx_model(onnx_mdl, onnx_fl_path)
###Output
_____no_output_____
###Markdown
Predict with the ONNX model, using the onnxruntime package Note: The code below will install onnxruntime==0.4.0 if it is not already installed. Newer versions of onnxruntime have compatibility issues.
###Code
test_df = test_dataset.to_pandas_dataframe()
import sys
import json
from azureml.automl.core.onnx_convert import OnnxConvertConstants
from azureml.train.automl import constants
if sys.version_info < OnnxConvertConstants.OnnxIncompatiblePythonVersion:
python_version_compatible = True
else:
python_version_compatible = False
onnxrt_present = False
try:
import onnxruntime
from azureml.automl.core.onnx_convert import OnnxInferenceHelper
from onnxruntime import __version__ as ORT_VER
if ORT_VER == '0.4.0':
onnxrt_present = True
except ImportError:
onnxrt_present = False
# Install the onnxruntime if the version 0.4.0 is not installed.
if not onnxrt_present:
print("Installing the onnxruntime version 0.4.0.")
!{sys.executable} -m pip install --user --force-reinstall onnxruntime==0.4.0
onnxrt_present = True
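    # Hedged caveat: after installing a package into the running kernel with
    # pip, a kernel restart may be needed before `import onnxruntime` succeeds.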
def get_onnx_res(run):
res_path = 'onnx_resource.json'
run.download_file(name=constants.MODEL_RESOURCE_PATH_ONNX, output_file_path=res_path)
with open(res_path) as f:
onnx_res = json.load(f)
return onnx_res
if onnxrt_present and python_version_compatible:
mdl_bytes = onnx_mdl.SerializeToString()
onnx_res = get_onnx_res(best_run)
onnxrt_helper = OnnxInferenceHelper(mdl_bytes, onnx_res)
pred_onnx, pred_prob_onnx = onnxrt_helper.predict(test_df)
print(pred_onnx)
print(pred_prob_onnx)
else:
if not python_version_compatible:
print('Please use Python version 3.6 or 3.7 to run the inference helper.')
if not onnxrt_present:
print('Please install the onnxruntime package to do the prediction with ONNX model.')
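# Hedged extra check, a minimal sketch assuming best_model.onnx was saved
# above and onnxruntime is importable: inspect the model's expected input
# names and types directly with an InferenceSession before scoring.
if onnxrt_present:
    import onnxruntime as rt
    sess = rt.InferenceSession(onnx_fl_path)
    print([(i.name, i.type) for i in sess.get_inputs()])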
###Output
_____no_output_____
###Markdown
Deploy Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method on `remote_run` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details
###Code
best_run, fitted_model = remote_run.get_output()
import os
import shutil
# Create a local 'inference' folder to hold the downloaded deployment assets
script_folder = os.path.join(os.getcwd(), 'inference')
os.makedirs(script_folder, exist_ok=True)
model_name = best_run.properties['model_name']
script_file_name = 'inference/score.py'
conda_env_file_name = 'inference/env.yml'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', 'inference/score.py')
best_run.download_file('outputs/conda_env_v_1_0_0.yml', 'inference/env.yml')
###Output
_____no_output_____
###Markdown
Register the Fitted Model for DeploymentIf neither `metric` nor `iteration` is specified in the `register_model` call, the iteration with the best primary metric is registered.
###Code
description = 'AutoML Model trained on bank marketing data to predict if a client will subscribe to a term deposit'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
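# Hedged alternatives (keyword names are assumptions based on the AutoMLRun
# API): register the model from a specific iteration or by another metric.
# model = remote_run.register_model(model_name = model_name, iteration = 3)
# model = remote_run.register_model(model_name = model_name, metric = 'accuracy')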
print(remote_run.model_id) # This will be written to the script file later in the notebook.
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
inference_config = InferenceConfig(runtime = "python",
entry_script = script_file_name,
conda_file = conda_env_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 1,
tags = {'area': "bmData", 'type': "automl_classification"},
description = 'sample service for Automl Classification')
aci_service_name = 'automl-sample-bankmarketing-all'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
###Output
_____no_output_____
###Markdown
Delete a Web ServiceDeletes the specified web service.
###Code
#aci_service.delete()
###Output
_____no_output_____
###Markdown
Get Logs from a Deployed Web ServiceGets logs from a deployed web service.
###Code
#aci_service.get_logs()
###Output
_____no_output_____
###Markdown
TestNow that the model is trained, run the test data through the trained model to get the predicted values.
###Code
# Load the bank marketing datasets.
from numpy import array
X_test = test_dataset.drop_columns(columns=['y'])
y_test = test_dataset.keep_columns(columns=['y'], validate=True)
test_dataset.take(5).to_pandas_dataframe()
X_test = X_test.to_pandas_dataframe()
y_test = y_test.to_pandas_dataframe()
y_pred = fitted_model.predict(X_test)
actual = array(y_test)
actual = actual[:,0]
print(y_pred.shape, " ", actual.shape)
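# Hedged sketch: the same rows could instead be scored through the deployed
# ACI service (assumes aci_service from the Deploy section is still running;
# the payload shape follows the generated scoring script's 'data' convention).
# import json
# payload = json.dumps({'data': X_test.to_dict(orient='records')})
# service_result = aci_service.run(input_data=payload)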
###Output
_____no_output_____
###Markdown
Calculate metrics for the predictionNow visualize the data on a scatter plot to compare the truth (actual) values with the predicted values returned by the trained model.
###Code
%matplotlib notebook
test_pred = plt.scatter(actual, y_pred, color='b')
test_test = plt.scatter(actual, actual, color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
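# Hedged addition: a scatter plot is an unusual view for a binary label, so
# also report plain accuracy as a sanity check (sklearn is assumed available).
from sklearn.metrics import accuracy_score
print("Accuracy:", accuracy_score(actual, y_pred))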
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing.png) Automated Machine Learning_**Classification with Deployment using a Bank Marketing Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Deploy](Deploy)1. [Test](Test)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the UCI Bank Marketing dataset to showcase how you can use AutoML for a classification problem and deploy it to an Azure Container Instance (ACI). The classification goal is to predict if the client will subscribe to a term deposit with the bank.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. Please find the ONNX-related documentation [here](https://github.com/onnx/onnx).In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model using remote compute with ONNX-compatible config on.4. Explore the results, featurization transparency options and save the ONNX model.5. Inference with the ONNX model.6. Register the model.7. Create a container image.8. Create an Azure Container Instance (ACI) service.9. Test the ACI service.In addition, this notebook showcases the following features:- **Blocking** certain pipelines- Specifying **target metrics** to indicate stopping criteria- Handling **missing data** in the input SetupAs part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
import logging
from matplotlib import pyplot as plt
import pandas as pd
import os
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.automl.core.featurization import FeaturizationConfig
from azureml.core.dataset import Dataset
from azureml.train.automl import AutoMLConfig
from azureml.interpret import ExplanationClient
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.26.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
Accessing the Azure ML workspace requires authentication with Azure. The default authentication is interactive authentication using the default tenant. Executing the `ws = Workspace.from_config()` line in the cell below will prompt for authentication the first time that it is run. If you have multiple Azure tenants, you can specify the tenant by replacing the `ws = Workspace.from_config()` line in the cell below with the following:
```
from azureml.core.authentication import InteractiveLoginAuthentication
auth = InteractiveLoginAuthentication(tenant_id = 'mytenantid')
ws = Workspace.from_config(auth = auth)
```
If you need to run in an environment where interactive login is not possible, you can use Service Principal authentication by replacing the `ws = Workspace.from_config()` line in the cell below with the following:
```
from azureml.core.authentication import ServicePrincipalAuthentication
auth = ServicePrincipalAuthentication('mytenantid', 'myappid', 'mypassword')
ws = Workspace.from_config(auth = auth)
```
For more details, see [aka.ms/aml-notebook-auth](http://aka.ms/aml-notebook-auth)
###Code
ws = Workspace.from_config()
# choose a name for experiment
experiment_name = 'automl-classification-bmarketing-all'
experiment=Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', None)  # -1 is deprecated; None means no truncation
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
Create or Attach existing AmlComputeYou will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
cpu_cluster_name = "cpu-cluster-4"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Data Load DataLeverage Azure compute to load the bank marketing dataset as a Tabular Dataset into the dataset variable. Training Data
###Code
data = pd.read_csv("https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv")
data.head()
# Add missing values in 75% of the lines.
import numpy as np
missing_rate = 0.75
n_missing_samples = int(np.floor(data.shape[0] * missing_rate))
missing_samples = np.hstack((np.zeros(data.shape[0] - n_missing_samples, dtype=bool), np.ones(n_missing_samples, dtype=bool)))  # np.bool is deprecated; use the builtin bool
rng = np.random.RandomState(0)
rng.shuffle(missing_samples)
missing_features = rng.randint(0, data.shape[1], n_missing_samples)
data.values[np.where(missing_samples)[0], missing_features] = np.nan
if not os.path.isdir('data'):
os.mkdir('data')
# Save the train data to a csv to be uploaded to the datastore
pd.DataFrame(data).to_csv("data/train_data.csv", index=False)
ds = ws.get_default_datastore()
ds.upload(src_dir='./data', target_path='bankmarketing', overwrite=True, show_progress=True)
# Upload the training data as a tabular dataset for access during training on remote compute
train_data = Dataset.Tabular.from_delimited_files(path=ds.path('bankmarketing/train_data.csv'))
label = "y"
###Output
_____no_output_____
###Markdown
Validation Data
###Code
validation_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_validate.csv"
validation_dataset = Dataset.Tabular.from_delimited_files(validation_data)
###Output
_____no_output_____
###Markdown
Test Data
###Code
test_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_test.csv"
test_dataset = Dataset.Tabular.from_delimited_files(test_data)
###Output
_____no_output_____
###Markdown
Train

Instantiate an AutoMLConfig object. This defines the settings and data used to run the experiment.

|Property|Description|
|-|-|
|**task**|classification or regression or forecasting|
|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracy, AUC_weighted, average_precision_score_weighted, norm_macro_recall, precision_score_weighted|
|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|
|**blocked_models**|*List* of *strings* indicating machine learning algorithms for AutoML to avoid in this run. Allowed values for **Classification**: LogisticRegression, SGD, MultinomialNaiveBayes, BernoulliNaiveBayes, SVM, LinearSVM, KNN, DecisionTree, RandomForest, ExtremeRandomTrees, LightGBM, GradientBoosting, TensorFlowDNN, TensorFlowLinearClassifier. Allowed values for **Regression**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN. Allowed values for **Forecasting**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN, Arima, Prophet|
|**allowed_models**|*List* of *strings* indicating machine learning algorithms for AutoML to use in this run. Same values listed above for **blocked_models** allowed for **allowed_models**.|
|**experiment_exit_score**|Value indicating the target for *primary_metric*. Once the target is surpassed the run terminates.|
|**experiment_timeout_hours**|Maximum amount of time in hours that all iterations combined can take before the experiment terminates.|
|**enable_early_stopping**|Flag to enable early termination if the score is not improving in the short term.|
|**featurization**|'auto' / 'off'. Indicator for whether the featurization step should be done automatically or not. Note: if the input data is sparse, featurization cannot be turned on.|
|**n_cross_validations**|Number of cross validation splits.|
|**training_data**|Input dataset, containing both features and label column.|
|**label_column_name**|The name of the label column.|

**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)
###Code
automl_settings = {
"experiment_timeout_hours" : 0.3,
"enable_early_stopping" : True,
"iteration_timeout_minutes": 5,
"max_concurrent_iterations": 4,
"max_cores_per_iteration": -1,
#"n_cross_validations": 2,
"primary_metric": 'AUC_weighted',
"featurization": 'auto',
"verbosity": logging.INFO,
}
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
compute_target=compute_target,
experiment_exit_score = 0.9984,
blocked_models = ['KNN','LinearSVM'],
enable_onnx_compatible_models=True,
training_data = train_data,
label_column_name = label,
validation_data = validation_dataset,
**automl_settings
)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while. Validation errors and current status will be shown when setting `show_output=True` and the execution will be synchronous.
###Code
remote_run = experiment.submit(automl_config, show_output = False)
###Output
_____no_output_____
###Markdown
Run the following cell to access previous runs. Uncomment the cell below and update the run_id.
###Code
#from azureml.train.automl.run import AutoMLRun
#remote_run = AutoMLRun(experiment=experiment, run_id='<run_ID_goes_here>')
#remote_run
# Wait for the remote run to complete
remote_run.wait_for_completion()
best_run_customized, fitted_model_customized = remote_run.get_output()
###Output
_____no_output_____
###Markdown
TransparencyView updated featurization summary
###Code
custom_featurizer = fitted_model_customized.named_steps['datatransformer']
df = custom_featurizer.get_featurization_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Set `is_user_friendly=False` to get a more detailed summary for the transforms being applied.
###Code
df = custom_featurizer.get_featurization_summary(is_user_friendly=False)
pd.DataFrame(data=df)
df = custom_featurizer.get_stats_feature_type_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Results
###Code
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
###Output
_____no_output_____
###Markdown
Retrieve the Best Model's explanationRetrieve the explanation from the best_run which includes explanations for engineered features and raw features. Make sure that the run for generating explanations for the best model is completed.
###Code
# Wait for the best model explanation run to complete
from azureml.core.run import Run
model_explainability_run_id = remote_run.id + "_" + "ModelExplain"
print(model_explainability_run_id)
model_explainability_run = Run(experiment=experiment, run_id=model_explainability_run_id)
model_explainability_run.wait_for_completion()
# Get the best run object
best_run, fitted_model = remote_run.get_output()
###Output
_____no_output_____
###Markdown
Download engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=False)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Download raw feature importance from artifact storeYou can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
raw_explanations = client.download_model_explanation(raw=True)
exp_data = raw_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Retrieve the Best ONNX ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.Set the parameter return_onnx_model=True to retrieve the best ONNX model, instead of the Python model.
###Code
best_run, onnx_mdl = remote_run.get_output(return_onnx_model=True)
###Output
_____no_output_____
###Markdown
Save the best ONNX model
###Code
from azureml.automl.runtime.onnx_convert import OnnxConverter
onnx_fl_path = "./best_model.onnx"
OnnxConverter.save_onnx_model(onnx_mdl, onnx_fl_path)
###Output
_____no_output_____
###Markdown
Predict with the ONNX model, using onnxruntime package
###Code
import sys
import json
from azureml.automl.core.onnx_convert import OnnxConvertConstants
from azureml.train.automl import constants
if sys.version_info < OnnxConvertConstants.OnnxIncompatiblePythonVersion:
python_version_compatible = True
else:
python_version_compatible = False
import onnxruntime
from azureml.automl.runtime.onnx_convert import OnnxInferenceHelper
def get_onnx_res(run):
res_path = 'onnx_resource.json'
run.download_file(name=constants.MODEL_RESOURCE_PATH_ONNX, output_file_path=res_path)
with open(res_path) as f:
onnx_res = json.load(f)
return onnx_res
if python_version_compatible:
test_df = test_dataset.to_pandas_dataframe()
mdl_bytes = onnx_mdl.SerializeToString()
onnx_res = get_onnx_res(best_run)
onnxrt_helper = OnnxInferenceHelper(mdl_bytes, onnx_res)
pred_onnx, pred_prob_onnx = onnxrt_helper.predict(test_df)
print(pred_onnx)
print(pred_prob_onnx)
else:
print('Please use Python version 3.6 or 3.7 to run the inference helper.')
###Output
_____no_output_____
###Markdown
Deploy Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details
###Code
best_run, fitted_model = remote_run.get_output()
model_name = best_run.properties['model_name']
script_file_name = 'inference/score.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', 'inference/score.py')
###Output
_____no_output_____
###Markdown
Register the Fitted Model for DeploymentIf neither `metric` nor `iteration` is specified in the `register_model` call, the iteration with the best primary metric is registered.
###Code
description = 'AutoML Model trained on bank marketing data to predict if a client will subscribe to a term deposit'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id) # This will be written to the script file later in the notebook.
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
from azureml.core.environment import Environment
inference_config = InferenceConfig(entry_script=script_file_name)
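# Hedged note: Environment is imported above but unused here; the scoring
# environment could be made explicit from a conda spec. The env name and
# file path below are assumptions, mirroring the env.yml downloaded in other
# variants of this notebook:
# myenv = Environment.from_conda_specification(name='automl-inference-env', file_path='inference/env.yml')
# inference_config = InferenceConfig(entry_script=script_file_name, environment=myenv)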
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 1,
tags = {'area': "bmData", 'type': "automl_classification"},
description = 'sample service for Automl Classification')
aci_service_name = 'automl-sample-bankmarketing-all'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
###Output
_____no_output_____
###Markdown
Get Logs from a Deployed Web ServiceGets logs from a deployed web service.
###Code
#aci_service.get_logs()
###Output
_____no_output_____
###Markdown
TestNow that the model is trained, run the test data through the trained model to get the predicted values. This calls the ACI web service to do the prediction.Note that the JSON passed to the ACI web service is an array of rows of data. Each row should either be an array of values in the same order that was used for training or a dictionary where the keys are the same as the column names used for training. The example below uses dictionary rows.
###Code
# Load the bank marketing datasets.
from numpy import array
X_test = test_dataset.drop_columns(columns=['y'])
y_test = test_dataset.keep_columns(columns=['y'], validate=True)
test_dataset.take(5).to_pandas_dataframe()
X_test = X_test.to_pandas_dataframe()
y_test = y_test.to_pandas_dataframe()
import json
import requests
X_test_json = X_test.to_json(orient='records')
data = "{\"data\": " + X_test_json +"}"
headers = {'Content-Type': 'application/json'}
resp = requests.post(aci_service.scoring_uri, data, headers=headers)
y_pred = json.loads(json.loads(resp.text))['result']
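# Note: the generated scoring script returns a JSON-encoded string inside the
# HTTP JSON body, hence the double json.loads above.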
actual = array(y_test)
actual = actual[:,0]
print(len(y_pred), " ", len(actual))
###Output
_____no_output_____
###Markdown
Calculate metrics for the predictionNow visualize the data as a confusion matrix that compares the predicted values against the actual values.
###Code
%matplotlib notebook
from sklearn.metrics import confusion_matrix
import numpy as np
import itertools
cf = confusion_matrix(actual, y_pred)
plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest')
plt.colorbar()
plt.title('Confusion Matrix')
plt.xlabel('Predicted')
plt.ylabel('Actual')
class_labels = ['no','yes']
tick_marks = np.arange(len(class_labels))
plt.xticks(tick_marks,class_labels)
plt.yticks([-0.5,0,1,1.5],['','no','yes',''])
# plotting text value inside cells
thresh = cf.max() / 2.
for i,j in itertools.product(range(cf.shape[0]),range(cf.shape[1])):
plt.text(j,i,format(cf[i,j],'d'),horizontalalignment='center',color='white' if cf[i,j] >thresh else 'black')
plt.show()
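# Hedged addition: summary metrics to accompany the confusion matrix
# (sklearn.metrics is already imported above for confusion_matrix).
from sklearn.metrics import accuracy_score, classification_report
print('Accuracy:', accuracy_score(actual, y_pred))
print(classification_report(actual, y_pred))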
###Output
_____no_output_____
###Markdown
Delete a Web ServiceDeletes the specified web service.
###Code
aci_service.delete()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing.png) Automated Machine Learning_**Classification with Deployment using a Bank Marketing Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Deploy](Deploy)1. [Test](Test)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the UCI Bank Marketing dataset to showcase how you can use AutoML for a classification problem and deploy it to an Azure Container Instance (ACI). The classification goal is to predict if the client will subscribe to a term deposit with the bank.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. Please find the ONNX-related documentation [here](https://github.com/onnx/onnx).In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model using remote compute with ONNX-compatible config on.4. Explore the results, featurization transparency options and save the ONNX model.5. Inference with the ONNX model.6. Register the model.7. Create a container image.8. Create an Azure Container Instance (ACI) service.9. Test the ACI service.In addition, this notebook showcases the following features:- **Blocking** certain pipelines- Specifying **target metrics** to indicate stopping criteria- Handling **missing data** in the input SetupAs part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
import logging
from matplotlib import pyplot as plt
import pandas as pd
import os
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.automl.core.featurization import FeaturizationConfig
from azureml.core.dataset import Dataset
from azureml.train.automl import AutoMLConfig
from azureml.interpret import ExplanationClient
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.32.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
Accessing the Azure ML workspace requires authentication with Azure. The default authentication is interactive authentication using the default tenant. Executing the `ws = Workspace.from_config()` line in the cell below will prompt for authentication the first time that it is run. If you have multiple Azure tenants, you can specify the tenant by replacing the `ws = Workspace.from_config()` line in the cell below with the following:
```
from azureml.core.authentication import InteractiveLoginAuthentication
auth = InteractiveLoginAuthentication(tenant_id = 'mytenantid')
ws = Workspace.from_config(auth = auth)
```
If you need to run in an environment where interactive login is not possible, you can use Service Principal authentication by replacing the `ws = Workspace.from_config()` line in the cell below with the following:
```
from azureml.core.authentication import ServicePrincipalAuthentication
auth = ServicePrincipalAuthentication('mytenantid', 'myappid', 'mypassword')
ws = Workspace.from_config(auth = auth)
```
For more details, see [aka.ms/aml-notebook-auth](http://aka.ms/aml-notebook-auth)
###Code
ws = Workspace.from_config()
# choose a name for experiment
experiment_name = 'automl-classification-bmarketing-all'
experiment=Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', None)  # -1 is deprecated; None means no truncation
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
Create or Attach existing AmlComputeYou will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
cpu_cluster_name = "cpu-cluster-4"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Data Load DataLeverage Azure compute to load the bank marketing dataset as a Tabular Dataset into the dataset variable. Training Data
###Code
data = pd.read_csv("https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv")
data.head()
# Add missing values in 75% of the lines.
import numpy as np
missing_rate = 0.75
n_missing_samples = int(np.floor(data.shape[0] * missing_rate))
missing_samples = np.hstack((np.zeros(data.shape[0] - n_missing_samples, dtype=bool), np.ones(n_missing_samples, dtype=bool)))  # np.bool is deprecated; use the builtin bool
rng = np.random.RandomState(0)
rng.shuffle(missing_samples)
missing_features = rng.randint(0, data.shape[1], n_missing_samples)
data.values[np.where(missing_samples)[0], missing_features] = np.nan
if not os.path.isdir('data'):
os.mkdir('data')
# Save the train data to a csv to be uploaded to the datastore
pd.DataFrame(data).to_csv("data/train_data.csv", index=False)
ds = ws.get_default_datastore()
ds.upload(src_dir='./data', target_path='bankmarketing', overwrite=True, show_progress=True)
# Upload the training data as a tabular dataset for access during training on remote compute
train_data = Dataset.Tabular.from_delimited_files(path=ds.path('bankmarketing/train_data.csv'))
label = "y"
###Output
_____no_output_____
###Markdown
Validation Data
###Code
validation_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_validate.csv"
validation_dataset = Dataset.Tabular.from_delimited_files(validation_data)
###Output
_____no_output_____
###Markdown
Test Data
###Code
test_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_test.csv"
test_dataset = Dataset.Tabular.from_delimited_files(test_data)
###Output
_____no_output_____
###Markdown
Train

Instantiate an AutoMLConfig object. This defines the settings and data used to run the experiment.

|Property|Description|
|-|-|
|**task**|classification or regression or forecasting|
|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracy, AUC_weighted, average_precision_score_weighted, norm_macro_recall, precision_score_weighted|
|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|
|**blocked_models**|*List* of *strings* indicating machine learning algorithms for AutoML to avoid in this run. Allowed values for **Classification**: LogisticRegression, SGD, MultinomialNaiveBayes, BernoulliNaiveBayes, SVM, LinearSVM, KNN, DecisionTree, RandomForest, ExtremeRandomTrees, LightGBM, GradientBoosting, TensorFlowDNN, TensorFlowLinearClassifier. Allowed values for **Regression**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN. Allowed values for **Forecasting**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN, Arima, Prophet|
|**allowed_models**|*List* of *strings* indicating machine learning algorithms for AutoML to use in this run. Same values listed above for **blocked_models** allowed for **allowed_models**.|
|**experiment_exit_score**|Value indicating the target for *primary_metric*. Once the target is surpassed the run terminates.|
|**experiment_timeout_hours**|Maximum amount of time in hours that all iterations combined can take before the experiment terminates.|
|**enable_early_stopping**|Flag to enable early termination if the score is not improving in the short term.|
|**featurization**|'auto' / 'off'. Indicator for whether the featurization step should be done automatically or not. Note: if the input data is sparse, featurization cannot be turned on.|
|**n_cross_validations**|Number of cross validation splits.|
|**training_data**|Input dataset, containing both features and label column.|
|**label_column_name**|The name of the label column.|

**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)
###Code
automl_settings = {
"experiment_timeout_hours" : 0.3,
"enable_early_stopping" : True,
"iteration_timeout_minutes": 5,
"max_concurrent_iterations": 4,
"max_cores_per_iteration": -1,
#"n_cross_validations": 2,
"primary_metric": 'AUC_weighted',
"featurization": 'auto',
"verbosity": logging.INFO,
}
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
compute_target=compute_target,
experiment_exit_score = 0.9984,
blocked_models = ['KNN','LinearSVM'],
enable_onnx_compatible_models=True,
training_data = train_data,
label_column_name = label,
validation_data = validation_dataset,
**automl_settings
)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while. Validation errors and current status will be shown when setting `show_output=True` and the execution will be synchronous.
###Code
remote_run = experiment.submit(automl_config, show_output = False)
###Output
_____no_output_____
###Markdown
Run the following cell to access previous runs. Uncomment the cell below and update the run_id.
###Code
#from azureml.train.automl.run import AutoMLRun
#remote_run = AutoMLRun(experiment=experiment, run_id='<run_ID_goes_here>')
#remote_run
# Wait for the remote run to complete
remote_run.wait_for_completion()
best_run_customized, fitted_model_customized = remote_run.get_output()
###Output
_____no_output_____
###Markdown
TransparencyView updated featurization summary
###Code
custom_featurizer = fitted_model_customized.named_steps['datatransformer']
df = custom_featurizer.get_featurization_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Set `is_user_friendly=False` to get a more detailed summary for the transforms being applied.
###Code
df = custom_featurizer.get_featurization_summary(is_user_friendly=False)
pd.DataFrame(data=df)
df = custom_featurizer.get_stats_feature_type_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Results
###Code
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
###Output
_____no_output_____
###Markdown
Retrieve the Best Model's explanationRetrieve the explanation from the best_run which includes explanations for engineered features and raw features. Make sure that the run for generating explanations for the best model is completed.
###Code
# Wait for the best model explanation run to complete
from azureml.core.run import Run
model_explainability_run_id = remote_run.id + "_" + "ModelExplain"
print(model_explainability_run_id)
model_explainability_run = Run(experiment=experiment, run_id=model_explainability_run_id)
model_explainability_run.wait_for_completion()
# Get the best run object
best_run, fitted_model = remote_run.get_output()
###Output
_____no_output_____
###Markdown
Download engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=False)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Download raw feature importance from artifact storeYou can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
raw_explanations = client.download_model_explanation(raw=True)
exp_data = raw_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Retrieve the Best ONNX ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.Set the parameter return_onnx_model=True to retrieve the best ONNX model, instead of the Python model.
###Code
best_run, onnx_mdl = remote_run.get_output(return_onnx_model=True)
###Output
_____no_output_____
###Markdown
Save the best ONNX model
###Code
from azureml.automl.runtime.onnx_convert import OnnxConverter
onnx_fl_path = "./best_model.onnx"
OnnxConverter.save_onnx_model(onnx_mdl, onnx_fl_path)
###Output
_____no_output_____
###Markdown
Predict with the ONNX model, using onnxruntime package
###Code
import sys
import json
from azureml.automl.core.onnx_convert import OnnxConvertConstants
from azureml.train.automl import constants
if sys.version_info < OnnxConvertConstants.OnnxIncompatiblePythonVersion:
python_version_compatible = True
else:
python_version_compatible = False
import onnxruntime
from azureml.automl.runtime.onnx_convert import OnnxInferenceHelper
def get_onnx_res(run):
res_path = 'onnx_resource.json'
run.download_file(name=constants.MODEL_RESOURCE_PATH_ONNX, output_file_path=res_path)
with open(res_path) as f:
onnx_res = json.load(f)
return onnx_res
if python_version_compatible:
test_df = test_dataset.to_pandas_dataframe()
mdl_bytes = onnx_mdl.SerializeToString()
onnx_res = get_onnx_res(best_run)
onnxrt_helper = OnnxInferenceHelper(mdl_bytes, onnx_res)
pred_onnx, pred_prob_onnx = onnxrt_helper.predict(test_df)
print(pred_onnx)
print(pred_prob_onnx)
else:
print('Please use Python version 3.6 or 3.7 to run the inference helper.')
###Output
_____no_output_____
###Markdown
Deploy Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details
###Code
best_run, fitted_model = remote_run.get_output()
model_name = best_run.properties['model_name']
script_file_name = 'inference/score.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', 'inference/score.py')
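# Hedged note: other variants of this notebook also download the generated
# conda environment spec for deployment; the paths below mirror them:
# best_run.download_file('outputs/conda_env_v_1_0_0.yml', 'inference/env.yml')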
###Output
_____no_output_____
###Markdown
Register the Fitted Model for DeploymentIf neither `metric` nor `iteration` is specified in the `register_model` call, the iteration with the best primary metric is registered.
###Code
description = 'AutoML Model trained on bank marketing data to predict if a client will subscribe to a term deposit'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id) # This will be written to the script file later in the notebook.
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
from azureml.core.environment import Environment
inference_config = InferenceConfig(entry_script=script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 1,
tags = {'area': "bmData", 'type': "automl_classification"},
description = 'sample service for Automl Classification')
aci_service_name = 'automl-sample-bankmarketing-all'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
###Output
_____no_output_____
###Markdown
Get Logs from a Deployed Web ServiceGets logs from a deployed web service.
###Code
#aci_service.get_logs()
###Output
_____no_output_____
###Markdown
TestNow that the model is trained, run the test data through the trained model to get the predicted values. This calls the ACI web service to do the prediction.Note that the JSON passed to the ACI web service is an array of rows of data. Each row should either be an array of values in the same order that was used for training or a dictionary where the keys are the same as the column names used for training. The example below uses dictionary rows.
###Code
# Load the bank marketing datasets.
from numpy import array
X_test = test_dataset.drop_columns(columns=['y'])
y_test = test_dataset.keep_columns(columns=['y'], validate=True)
test_dataset.take(5).to_pandas_dataframe()
X_test = X_test.to_pandas_dataframe()
y_test = y_test.to_pandas_dataframe()
import json
import requests
X_test_json = X_test.to_json(orient='records')
data = "{\"data\": " + X_test_json +"}"
headers = {'Content-Type': 'application/json'}
resp = requests.post(aci_service.scoring_uri, data, headers=headers)
y_pred = json.loads(json.loads(resp.text))['result']
actual = array(y_test)
actual = actual[:,0]
print(len(y_pred), " ", len(actual))
###Output
_____no_output_____
###Markdown
Calculate metrics for the predictionNow visualize the data as a confusion matrix that compares the predicted values against the actual values.
###Code
%matplotlib notebook
from sklearn.metrics import confusion_matrix
import numpy as np
import itertools
cf = confusion_matrix(actual, y_pred)
plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest')
plt.colorbar()
plt.title('Confusion Matrix')
plt.xlabel('Predicted')
plt.ylabel('Actual')
class_labels = ['no','yes']
tick_marks = np.arange(len(class_labels))
plt.xticks(tick_marks,class_labels)
plt.yticks([-0.5,0,1,1.5],['','no','yes',''])
# plotting text value inside cells
thresh = cf.max() / 2.
for i,j in itertools.product(range(cf.shape[0]),range(cf.shape[1])):
plt.text(j,i,format(cf[i,j],'d'),horizontalalignment='center',color='white' if cf[i,j] >thresh else 'black')
plt.show()
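# Hedged addition: a row-normalized view of the same confusion matrix, which
# reads as per-class recall along the diagonal.
cf_norm = cf.astype('float') / cf.sum(axis=1, keepdims=True)
print(np.round(cf_norm, 3))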
###Output
_____no_output_____
###Markdown
Delete a Web ServiceDeletes the specified web service.
###Code
aci_service.delete()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing.png) Automated Machine Learning_**Classification with Deployment using a Bank Marketing Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Deploy](Deploy)1. [Test](Test)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the UCI Bank Marketing dataset to showcase how you can use AutoML for a classification problem and deploy it to an Azure Container Instance (ACI). The classification goal is to predict if the client will subscribe to a term deposit with the bank.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. Please find the ONNX-related documentation [here](https://github.com/onnx/onnx).In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model using remote compute with ONNX-compatible config on.4. Explore the results, featurization transparency options and save the ONNX model.5. Inference with the ONNX model.6. Register the model.7. Create a container image.8. Create an Azure Container Instance (ACI) service.9. Test the ACI service.In addition, this notebook showcases the following features:- **Blocking** certain pipelines- Specifying **target metrics** to indicate stopping criteria- Handling **missing data** in the input SetupAs part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
import logging
from matplotlib import pyplot as plt
import pandas as pd
import os
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.automl.core.featurization import FeaturizationConfig
from azureml.core.dataset import Dataset
from azureml.train.automl import AutoMLConfig
from azureml.interpret import ExplanationClient
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.19.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
Accessing the Azure ML workspace requires authentication with Azure. The default authentication is interactive authentication using the default tenant. Executing the `ws = Workspace.from_config()` line in the cell below will prompt for authentication the first time that it is run. If you have multiple Azure tenants, you can specify the tenant by replacing the `ws = Workspace.from_config()` line in the cell below with the following:
```
from azureml.core.authentication import InteractiveLoginAuthentication
auth = InteractiveLoginAuthentication(tenant_id = 'mytenantid')
ws = Workspace.from_config(auth = auth)
```
If you need to run in an environment where interactive login is not possible, you can use Service Principal authentication by replacing the `ws = Workspace.from_config()` line in the cell below with the following:
```
from azureml.core.authentication import ServicePrincipalAuthentication
auth = ServicePrincipalAuthentication('mytenantid', 'myappid', 'mypassword')
ws = Workspace.from_config(auth = auth)
```
For more details, see [aka.ms/aml-notebook-auth](http://aka.ms/aml-notebook-auth)
###Code
ws = Workspace.from_config()
# choose a name for experiment
experiment_name = 'automl-classification-bmarketing-all'
experiment=Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', None)  # -1 is deprecated; None means no truncation
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
Create or Attach existing AmlComputeYou will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read this article on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
cpu_cluster_name = "cpu-cluster-4"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Data Load DataLeverage Azure compute to load the bank marketing dataset as a Tabular Dataset into the dataset variable. Training Data
###Code
data = pd.read_csv("https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv")
data.head()
# Add missing values in 75% of the lines.
import numpy as np
missing_rate = 0.75
n_missing_samples = int(np.floor(data.shape[0] * missing_rate))
missing_samples = np.hstack((np.zeros(data.shape[0] - n_missing_samples, dtype=bool), np.ones(n_missing_samples, dtype=bool)))  # np.bool is deprecated; use the builtin bool
rng = np.random.RandomState(0)
rng.shuffle(missing_samples)
missing_features = rng.randint(0, data.shape[1], n_missing_samples)
data.values[np.where(missing_samples)[0], missing_features] = np.nan
if not os.path.isdir('data'):
os.mkdir('data')
# Save the train data to a csv to be uploaded to the datastore
pd.DataFrame(data).to_csv("data/train_data.csv", index=False)
ds = ws.get_default_datastore()
ds.upload(src_dir='./data', target_path='bankmarketing', overwrite=True, show_progress=True)
# Upload the training data as a tabular dataset for access during training on remote compute
train_data = Dataset.Tabular.from_delimited_files(path=ds.path('bankmarketing/train_data.csv'))
label = "y"
###Output
_____no_output_____
###Markdown
Validation Data
###Code
validation_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_validate.csv"
validation_dataset = Dataset.Tabular.from_delimited_files(validation_data)
###Output
_____no_output_____
###Markdown
Test Data
###Code
test_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_test.csv"
test_dataset = Dataset.Tabular.from_delimited_files(test_data)
###Output
_____no_output_____
###Markdown
Train

Instantiate an AutoMLConfig object. This defines the settings and data used to run the experiment.

|Property|Description|
|-|-|
|**task**|classification or regression or forecasting|
|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracy, AUC_weighted, average_precision_score_weighted, norm_macro_recall, precision_score_weighted|
|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|
|**blocked_models**|*List* of *strings* indicating machine learning algorithms for AutoML to avoid in this run. Allowed values for **Classification**: LogisticRegression, SGD, MultinomialNaiveBayes, BernoulliNaiveBayes, SVM, LinearSVM, KNN, DecisionTree, RandomForest, ExtremeRandomTrees, LightGBM, GradientBoosting, TensorFlowDNN, TensorFlowLinearClassifier. Allowed values for **Regression**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN. Allowed values for **Forecasting**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN, Arima, Prophet|
|**allowed_models**|*List* of *strings* indicating machine learning algorithms for AutoML to use in this run. Same values listed above for **blocked_models** allowed for **allowed_models**.|
|**experiment_exit_score**|Value indicating the target for *primary_metric*. Once the target is surpassed the run terminates.|
|**experiment_timeout_hours**|Maximum amount of time in hours that all iterations combined can take before the experiment terminates.|
|**enable_early_stopping**|Flag to enable early termination if the score is not improving in the short term.|
|**featurization**|'auto' / 'off'. Indicator for whether the featurization step should be done automatically or not. Note: if the input data is sparse, featurization cannot be turned on.|
|**n_cross_validations**|Number of cross validation splits.|
|**training_data**|Input dataset, containing both features and label column.|
|**label_column_name**|The name of the label column.|

**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)
###Code
automl_settings = {
"experiment_timeout_hours" : 0.3,
"enable_early_stopping" : True,
"iteration_timeout_minutes": 5,
"max_concurrent_iterations": 4,
"max_cores_per_iteration": -1,
#"n_cross_validations": 2,
"primary_metric": 'AUC_weighted',
"featurization": 'auto',
"verbosity": logging.INFO,
}
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
compute_target=compute_target,
experiment_exit_score = 0.9984,
blocked_models = ['KNN','LinearSVM'],
enable_onnx_compatible_models=True,
training_data = train_data,
label_column_name = label,
validation_data = validation_dataset,
**automl_settings
)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while. Setting `show_output=True` makes the execution synchronous and shows validation errors and the current status as the run progresses.
###Code
remote_run = experiment.submit(automl_config, show_output = False)
remote_run
###Output
_____no_output_____
###Markdown
Run the following cell to access previous runs. Uncomment the cell below and update the run_id.
###Code
#from azureml.train.automl.run import AutoMLRun
#remote_run = AutoMLRun(experiment=experiment, run_id='<run_ID_goes_here>')
#remote_run
# Wait for the remote run to complete
remote_run.wait_for_completion()
best_run_customized, fitted_model_customized = remote_run.get_output()
###Output
_____no_output_____
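###Markdown
After the run completes, you can also inspect the metrics logged for the best child run. A minimal sketch; the 'AUC_weighted' key assumes the primary metric configured above:
###Code
# Illustrative: look up the primary metric recorded for the best run
metrics = best_run_customized.get_metrics()
print(metrics.get('AUC_weighted'))
###Output
_____no_output_____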
###Markdown
Transparency

View the updated featurization summary.
###Code
custom_featurizer = fitted_model_customized.named_steps['datatransformer']
df = custom_featurizer.get_featurization_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Set `is_user_friendly=False` to get a more detailed summary for the transforms being applied.
###Code
df = custom_featurizer.get_featurization_summary(is_user_friendly=False)
pd.DataFrame(data=df)
df = custom_featurizer.get_stats_feature_type_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Results
###Code
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
###Output
_____no_output_____
###Markdown
Retrieve the Best Model's explanation

Retrieve the explanation from the best_run, which includes explanations for engineered features and raw features. Make sure that the run for generating explanations for the best model is completed.
###Code
# Wait for the best model explanation run to complete
from azureml.core.run import Run
model_explainability_run_id = remote_run.id + "_" + "ModelExplain"
print(model_explainability_run_id)
model_explainability_run = Run(experiment=experiment, run_id=model_explainability_run_id)
model_explainability_run.wait_for_completion()
# Get the best run object
best_run, fitted_model = remote_run.get_output()
###Output
_____no_output_____
###Markdown
Download engineered feature importance from artifact store

You can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=False)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
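###Markdown
The importance values come back as a plain dict mapping feature names to scores, so standard Python is enough to rank them. A minimal sketch:
###Code
# Illustrative: print the ten most important engineered features
top10 = sorted(exp_data.items(), key=lambda kv: kv[1], reverse=True)[:10]
for name, score in top10:
    print(f"{name}: {score:.4f}")
###Output
_____no_output_____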
###Markdown
Download raw feature importance from artifact store

You can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
# raw=True returns importances for the original input columns rather than the engineered ones
raw_explanations = client.download_model_explanation(raw=True)
exp_data = raw_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Retrieve the Best ONNX Model

Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.

Set the parameter `return_onnx_model=True` to retrieve the best ONNX model, instead of the Python model.
###Code
best_run, onnx_mdl = remote_run.get_output(return_onnx_model=True)
###Output
_____no_output_____
###Markdown
Save the best ONNX model
###Code
from azureml.automl.runtime.onnx_convert import OnnxConverter
onnx_fl_path = "./best_model.onnx"
OnnxConverter.save_onnx_model(onnx_mdl, onnx_fl_path)
###Output
_____no_output_____
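###Markdown
Optionally, verify that the saved file loads back as a valid ONNX graph before shipping it anywhere. A minimal sketch, assuming the `onnx` package is installed:
###Code
import onnx
# Illustrative: reload the file we just wrote and run ONNX's structural checks
onnx_model = onnx.load(onnx_fl_path)
onnx.checker.check_model(onnx_model)
print("opset:", onnx_model.opset_import[0].version)
###Output
_____no_output_____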
###Markdown
Predict with the ONNX model, using onnxruntime package
###Code
import sys
import json
from azureml.automl.core.onnx_convert import OnnxConvertConstants
from azureml.train.automl import constants
if sys.version_info < OnnxConvertConstants.OnnxIncompatiblePythonVersion:
python_version_compatible = True
else:
python_version_compatible = False
import onnxruntime
from azureml.automl.runtime.onnx_convert import OnnxInferenceHelper
def get_onnx_res(run):
res_path = 'onnx_resource.json'
run.download_file(name=constants.MODEL_RESOURCE_PATH_ONNX, output_file_path=res_path)
with open(res_path) as f:
onnx_res = json.load(f)
return onnx_res
if python_version_compatible:
test_df = test_dataset.to_pandas_dataframe()
mdl_bytes = onnx_mdl.SerializeToString()
onnx_res = get_onnx_res(best_run)
onnxrt_helper = OnnxInferenceHelper(mdl_bytes, onnx_res)
pred_onnx, pred_prob_onnx = onnxrt_helper.predict(test_df)
print(pred_onnx)
print(pred_prob_onnx)
else:
print('Please use Python version 3.6 or 3.7 to run the inference helper.')
###Output
_____no_output_____
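###Markdown
If you prefer to call onnxruntime directly rather than going through the helper, you can open a session on the saved model and inspect the input signature it expects. A minimal sketch:
###Code
import onnxruntime as ort
# Illustrative: CPUExecutionProvider keeps this runnable on any machine
sess = ort.InferenceSession(onnx_fl_path, providers=["CPUExecutionProvider"])
for inp in sess.get_inputs():
    print(inp.name, inp.shape, inp.type)
###Output
_____no_output_____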
###Markdown
Deploy

Retrieve the Best Model

Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.

Widget for Monitoring Runs

The widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.

**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details.
###Code
best_run, fitted_model = remote_run.get_output()
model_name = best_run.properties['model_name']
script_file_name = 'inference/score.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', 'inference/score.py')
###Output
_____no_output_____
###Markdown
Register the Fitted Model for Deployment

If neither `metric` nor `iteration` are specified in the `register_model` call, the iteration with the best primary metric is registered.
###Code
description = 'AutoML Model trained on bank marketing data to predict if a client will subscribe to a term deposit'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id) # This will be written to the script file later in the notebook.
###Output
_____no_output_____
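###Markdown
Once registered, the model can be fetched by name in a later session rather than through the run object. A minimal sketch:
###Code
from azureml.core.model import Model
# Illustrative: retrieves the latest registered version of the model
registered = Model(ws, name=model_name)
print(registered.name, registered.version)
###Output
_____no_output_____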
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig, Model
from azureml.core.webservice import AciWebservice

inference_config = InferenceConfig(entry_script=script_file_name)

aciconfig = AciWebservice.deploy_configuration(cpu_cores=1,
                                               memory_gb=1,
                                               tags={'area': "bmData", 'type': "automl_classification"},
                                               description='sample service for Automl Classification')

aci_service_name = 'automl-sample-bankmarketing-all'
print(aci_service_name)

aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
###Output
_____no_output_____
###Markdown
Get Logs from a Deployed Web Service

Gets logs from a deployed web service.
###Code
#aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Test

Now that the model is trained, run the test data through the trained model to get the predicted values. This calls the ACI web service to do the prediction.

Note that the JSON passed to the ACI web service is an array of rows of data. Each row should either be an array of values in the same order that was used for training, or a dictionary where the keys are the same as the column names used for training. The example below uses dictionary rows.
###Code
# Load the bank marketing datasets.
from numpy import array

X_test = test_dataset.drop_columns(columns=['y'])
y_test = test_dataset.keep_columns(columns=['y'], validate=True)
test_dataset.take(5).to_pandas_dataframe()

X_test = X_test.to_pandas_dataframe()
y_test = y_test.to_pandas_dataframe()

import json
import requests

# Build the request body; a separate name keeps the training DataFrame `data` from being clobbered
X_test_json = X_test.to_json(orient='records')
payload = "{\"data\": " + X_test_json + "}"
headers = {'Content-Type': 'application/json'}

resp = requests.post(aci_service.scoring_uri, payload, headers=headers)

# The service returns a JSON-encoded string, hence the double json.loads
y_pred = json.loads(json.loads(resp.text))['result']

actual = array(y_test)
actual = actual[:, 0]
print(len(y_pred), " ", len(actual))
###Output
_____no_output_____
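###Markdown
For reference, the same request body can be built with `json.dumps` instead of string concatenation, which handles quoting for you. A sketch for a single dictionary-style row:
###Code
# Illustrative: score just the first test row
single_row = json.loads(X_test.head(1).to_json(orient='records'))
payload_one = json.dumps({"data": single_row})
resp_one = requests.post(aci_service.scoring_uri, payload_one, headers=headers)
print(resp_one.text)
###Output
_____no_output_____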
###Markdown
Calculate metrics for the prediction

Now visualize the data as a confusion matrix that compares the predicted values against the actual values.
###Code
%matplotlib notebook
from sklearn.metrics import confusion_matrix
import numpy as np
import itertools

cf = confusion_matrix(actual, y_pred)
plt.imshow(cf, cmap=plt.cm.Blues, interpolation='nearest')
plt.colorbar()
plt.title('Confusion Matrix')
plt.xlabel('Predicted')
plt.ylabel('Actual')
class_labels = ['no', 'yes']
tick_marks = np.arange(len(class_labels))
plt.xticks(tick_marks, class_labels)
plt.yticks([-0.5, 0, 1, 1.5], ['', 'no', 'yes', ''])
# plotting text value inside cells
thresh = cf.max() / 2.
for i, j in itertools.product(range(cf.shape[0]), range(cf.shape[1])):
    plt.text(j, i, format(cf[i, j], 'd'), horizontalalignment='center',
             color='white' if cf[i, j] > thresh else 'black')
plt.show()
###Output
_____no_output_____
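###Markdown
A few scalar metrics are often easier to compare across runs than the matrix itself. A minimal sketch using scikit-learn:
###Code
from sklearn.metrics import accuracy_score, f1_score
# Illustrative: summarize the same predictions as scalar scores
print("accuracy:", accuracy_score(actual, y_pred))
print("weighted F1:", f1_score(actual, y_pred, average='weighted'))
###Output
_____no_output_____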
###Markdown
Delete a Web Service

Deletes the specified web service.
###Code
aci_service.delete()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License.

![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing.png)

Automated Machine Learning

_**Classification with Deployment using a Bank Marketing Dataset**_

Contents

1. [Introduction](#Introduction)
1. [Setup](#Setup)
1. [Train](#Train)
1. [Results](#Results)
1. [Deploy](#Deploy)
1. [Test](#Test)
1. [Acknowledgements](#Acknowledgements)

Introduction

In this example we use the UCI Bank Marketing dataset to showcase how you can use AutoML for a classification problem and deploy it to an Azure Container Instance (ACI). The classification goal is to predict if the client will subscribe to a term deposit with the bank.

If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. Please find the ONNX related documentation [here](https://github.com/onnx/onnx).

In this notebook you will learn how to:

1. Create an experiment using an existing workspace.
2. Configure AutoML using `AutoMLConfig`.
3. Train the model using local compute with ONNX compatible config on.
4. Explore the results, featurization transparency options and save the ONNX model.
5. Inference with the ONNX model.
6. Register the model.
7. Create a container image.
8. Create an Azure Container Instance (ACI) service.
9. Test the ACI service.

In addition this notebook showcases the following features:

- **Blocking** certain pipelines
- Specifying **target metrics** to indicate stopping criteria
- Handling **missing data** in the input

Setup

As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
import json
import logging
from matplotlib import pyplot as plt
import pandas as pd
import os
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.core.dataset import Dataset
from azureml.train.automl import AutoMLConfig
from azureml.interpret import ExplanationClient
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.37.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
Accessing the Azure ML workspace requires authentication with Azure.

The default authentication is interactive authentication using the default tenant. Executing the `ws = Workspace.from_config()` line in the cell below will prompt for authentication the first time that it is run.

If you have multiple Azure tenants, you can specify the tenant by replacing the `ws = Workspace.from_config()` line in the cell below with the following:

```
from azureml.core.authentication import InteractiveLoginAuthentication
auth = InteractiveLoginAuthentication(tenant_id='mytenantid')
ws = Workspace.from_config(auth=auth)
```

If you need to run in an environment where interactive login is not possible, you can use Service Principal authentication by replacing the `ws = Workspace.from_config()` line in the cell below with the following:

```
from azureml.core.authentication import ServicePrincipalAuthentication
auth = ServicePrincipalAuthentication('mytenantid', 'myappid', 'mypassword')
ws = Workspace.from_config(auth=auth)
```

For more details, see [aka.ms/aml-notebook-auth](http://aka.ms/aml-notebook-auth)
###Code
ws = Workspace.from_config()
# choose a name for experiment
experiment_name = 'automl-classification-bmarketing-all'
experiment=Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', None)  # -1 is deprecated; None means no truncation
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
Create or Attach existing AmlCompute

You will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.

> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist.

Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.

As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
cpu_cluster_name = "cpu-cluster-4"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
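###Markdown
Once the cluster exists, you can check its provisioning state and node counts before submitting work. A minimal sketch:
###Code
# Illustrative: inspect the current state of the compute target
status = compute_target.get_status()
print(status.serialize())
###Output
_____no_output_____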
###Markdown
Data

Load Data

Leverage azure compute to load the bank marketing dataset as a Tabular Dataset into the dataset variable.

Training Data
###Code
data = pd.read_csv("https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv")
data.head()
# Add missing values in 75% of the lines.
import numpy as np
missing_rate = 0.75
n_missing_samples = int(np.floor(data.shape[0] * missing_rate))
missing_samples = np.hstack((np.zeros(data.shape[0] - n_missing_samples, dtype=bool), np.ones(n_missing_samples, dtype=bool)))  # np.bool was removed in NumPy >= 1.24; use the builtin bool
rng = np.random.RandomState(0)
rng.shuffle(missing_samples)
missing_features = rng.randint(0, data.shape[1], n_missing_samples)
data.values[np.where(missing_samples)[0], missing_features] = np.nan
if not os.path.isdir('data'):
os.mkdir('data')
# Save the train data to a csv to be uploaded to the datastore
pd.DataFrame(data).to_csv("data/train_data.csv", index=False)
ds = ws.get_default_datastore()
ds.upload(src_dir='./data', target_path='bankmarketing', overwrite=True, show_progress=True)
# Upload the training data as a tabular dataset for access during training on remote compute
train_data = Dataset.Tabular.from_delimited_files(path=ds.path('bankmarketing/train_data.csv'))
label = "y"
###Output
_____no_output_____
###Markdown
Validation Data
###Code
validation_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_validate.csv"
validation_dataset = Dataset.Tabular.from_delimited_files(validation_data)
###Output
_____no_output_____
###Markdown
Test Data
###Code
test_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_test.csv"
test_dataset = Dataset.Tabular.from_delimited_files(test_data)
###Output
_____no_output_____
###Markdown
TrainInstantiate an AutoMLConfig object. This defines the settings and data used to run the experiment.

|Property|Description|
|-|-|
|**task**|classification or regression or forecasting|
|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracy, AUC_weighted, average_precision_score_weighted, norm_macro_recall, precision_score_weighted|
|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|
|**blocked_models**|*List* of *strings* indicating machine learning algorithms for AutoML to avoid in this run. Allowed values for **Classification**: LogisticRegression, SGD, MultinomialNaiveBayes, BernoulliNaiveBayes, SVM, LinearSVM, KNN, DecisionTree, RandomForest, ExtremeRandomTrees, LightGBM, GradientBoosting, TensorFlowDNN, TensorFlowLinearClassifier. Allowed values for **Regression**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN. Allowed values for **Forecasting**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN, Arima, Prophet|
|**allowed_models**|*List* of *strings* indicating machine learning algorithms for AutoML to use in this run. Same values listed above for **blocked_models** allowed for **allowed_models**.|
|**experiment_exit_score**|Value indicating the target for *primary_metric*. Once the target is surpassed the run terminates.|
|**experiment_timeout_hours**|Maximum amount of time in hours that all iterations combined can take before the experiment terminates.|
|**enable_early_stopping**|Flag to enable early termination if the score is not improving in the short term.|
|**featurization**|'auto' / 'off' Indicator for whether featurization step should be done automatically or not. Note: If the input data is sparse, featurization cannot be turned on.|
|**n_cross_validations**|Number of cross validation splits.|
|**training_data**|Input dataset, containing both features and label column.|
|**label_column_name**|The name of the label column.|

**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric)
###Code
automl_settings = {
"experiment_timeout_hours" : 0.3,
"enable_early_stopping" : True,
"iteration_timeout_minutes": 5,
"max_concurrent_iterations": 4,
"max_cores_per_iteration": -1,
#"n_cross_validations": 2,
"primary_metric": 'AUC_weighted',
"featurization": 'auto',
"verbosity": logging.INFO,
}
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
compute_target=compute_target,
experiment_exit_score = 0.9984,
blocked_models = ['KNN','LinearSVM'],
enable_onnx_compatible_models=True,
training_data = train_data,
label_column_name = label,
validation_data = validation_dataset,
**automl_settings
)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while. Validation errors and current status will be shown when setting `show_output=True` and the execution will be synchronous.
###Code
remote_run = experiment.submit(automl_config, show_output = False)
###Output
_____no_output_____
###Markdown
Run the following cell to access previous runs. Uncomment the cell below and update the run_id.
###Code
#from azureml.train.automl.run import AutoMLRun
#remote_run = AutoMLRun(experiment=experiment, run_id='<run_ID_goes_here>')
#remote_run
# Wait for the remote run to complete
remote_run.wait_for_completion()
# Retrieve the best Run object
best_run = remote_run.get_best_child()
###Output
_____no_output_____
###Markdown
TransparencyView featurization summary for the best model - to study how different features were transformed. This is stored as a JSON file in the outputs directory for the run.
###Code
# Download the featurization summary JSON file locally
best_run.download_file("outputs/featurization_summary.json", "featurization_summary.json")
# Render the JSON as a pandas DataFrame
with open("featurization_summary.json", "r") as f:
records = json.load(f)
pd.DataFrame.from_records(records)
###Output
_____no_output_____
###Markdown
Results
###Code
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
###Output
_____no_output_____
###Markdown
Retrieve the Best Model's explanationRetrieve the explanation from the best_run which includes explanations for engineered features and raw features. Make sure that the run for generating explanations for the best model is completed.
###Code
# Wait for the best model explanation run to complete
from azureml.core.run import Run
model_explainability_run_id = remote_run.id + "_" + "ModelExplain"
print(model_explainability_run_id)
model_explainability_run = Run(experiment=experiment, run_id=model_explainability_run_id)
model_explainability_run.wait_for_completion()
# Get the best run object
best_run = remote_run.get_best_child()
###Output
_____no_output_____
###Markdown
Download engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=False)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
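The returned dictionary can be large; to scan it more easily you can, for example, sort it by absolute importance (a brief sketch, not part of the original notebook):

```
# exp_data is the {engineered_feature_name: importance} dict retrieved above;
# print the ten features with the largest absolute importance.
top_features = sorted(exp_data.items(), key=lambda kv: abs(kv[1]), reverse=True)[:10]
for name, importance in top_features:
    print(f"{name}: {importance:.4f}")
```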
###Markdown
Download raw feature importance from artifact storeYou can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=True)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Retrieve the Best ONNX ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.Set the parameter return_onnx_model=True to retrieve the best ONNX model, instead of the Python model.
###Code
best_run, onnx_mdl = remote_run.get_output(return_onnx_model=True)
###Output
_____no_output_____
###Markdown
Save the best ONNX model
###Code
from azureml.automl.runtime.onnx_convert import OnnxConverter
onnx_fl_path = "./best_model.onnx"
OnnxConverter.save_onnx_model(onnx_mdl, onnx_fl_path)
###Output
_____no_output_____
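As a quick sanity check on the saved file, you can reload it and run ONNX's structural validator (a sketch assuming the `onnx` package is installed in the environment):

```
import onnx

# Reload the serialized model and verify that its graph is structurally valid.
loaded_model = onnx.load(onnx_fl_path)
onnx.checker.check_model(loaded_model)
print("Opset version:", loaded_model.opset_import[0].version)
```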
###Markdown
Predict with the ONNX model, using onnxruntime package
###Code
import sys
import json
from azureml.automl.core.onnx_convert import OnnxConvertConstants
from azureml.train.automl import constants
from azureml.automl.runtime.onnx_convert import OnnxInferenceHelper
def get_onnx_res(run):
res_path = 'onnx_resource.json'
run.download_file(name=constants.MODEL_RESOURCE_PATH_ONNX, output_file_path=res_path)
with open(res_path) as f:
result = json.load(f)
return result
if sys.version_info < OnnxConvertConstants.OnnxIncompatiblePythonVersion:
test_df = test_dataset.to_pandas_dataframe()
mdl_bytes = onnx_mdl.SerializeToString()
onnx_result = get_onnx_res(best_run)
onnxrt_helper = OnnxInferenceHelper(mdl_bytes, onnx_result)
pred_onnx, pred_prob_onnx = onnxrt_helper.predict(test_df)
print(pred_onnx)
print(pred_prob_onnx)
else:
print('Please use Python version 3.6 or 3.7 to run the inference helper.')
###Output
_____no_output_____
###Markdown
Deploy Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_best_child` method returns the Run object for the best model based on the default primary metric. There are additional flags that can be passed to the method if we want to retrieve the best Run based on any of the other supported metrics, or if we are just interested in the best run among the ONNX compatible runs. As always, you can execute `remote_run.get_best_child??` in a new cell to view the source or docs for the function.
###Code
remote_run.get_best_child??
###Output
_____no_output_____
###Markdown
Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details
###Code
best_run = remote_run.get_best_child()
model_name = best_run.properties['model_name']
script_file_name = 'inference/score.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', 'inference/score.py')
###Output
_____no_output_____
###Markdown
Register the Fitted Model for DeploymentIf neither `metric` nor `iteration` are specified in the `register_model` call, the iteration with the best primary metric is registered.
###Code
description = 'AutoML Model trained on bank marketing data to predict if a client will subscribe to a term deposit'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id) # This will be written to the script file later in the notebook.
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.model import Model
inference_config = InferenceConfig(environment = best_run.get_environment(), entry_script=script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 2,
memory_gb = 2,
tags = {'area': "bmData", 'type': "automl_classification"},
description = 'sample service for Automl Classification')
aci_service_name = 'automl-sample-bankmarketing-all'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
###Output
_____no_output_____
###Markdown
Get Logs from a Deployed Web ServiceGets logs from a deployed web service.
###Code
#aci_service.get_logs()
###Output
_____no_output_____
###Markdown
TestNow that the model is trained, run the test data through the trained model to get the predicted values. This calls the ACI web service to do the prediction. Note that the JSON passed to the ACI web service is an array of rows of data. Each row should either be an array of values in the same order that was used for training or a dictionary where the keys are the same as the column names used for training. The example below uses dictionary rows; an array-based payload is sketched after the cell below.
###Code
# Load the bank marketing datasets.
from numpy import array
X_test = test_dataset.drop_columns(columns=['y'])
y_test = test_dataset.keep_columns(columns=['y'], validate=True)
test_dataset.take(5).to_pandas_dataframe()
X_test = X_test.to_pandas_dataframe()
y_test = y_test.to_pandas_dataframe()
import requests
X_test_json = X_test.to_json(orient='records')
data = "{\"data\": " + X_test_json +"}"
headers = {'Content-Type': 'application/json'}
resp = requests.post(aci_service.scoring_uri, data, headers=headers)
y_pred = json.loads(json.loads(resp.text))['result']
actual = array(y_test)
actual = actual[:,0]
print(len(y_pred), " ", len(actual))
###Output
_____no_output_____
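For reference, the array-of-values payload mentioned above can be built from the same `X_test` frame; a sketch (not part of the original flow):

```
import json
import requests

# Alternative payload: each row as a bare array of values, in training-column order.
payload = json.dumps({"data": json.loads(X_test.to_json(orient='values'))})
resp = requests.post(aci_service.scoring_uri, payload,
                     headers={'Content-Type': 'application/json'})
```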
###Markdown
Calculate metrics for the predictionNow visualize the data as a confusion matrix that compares the predicted values against the actual values.
###Code
%matplotlib notebook
from sklearn.metrics import confusion_matrix
import itertools
cf = confusion_matrix(actual, y_pred)
plt.imshow(cf, cmap=plt.cm.Blues, interpolation='nearest')
plt.colorbar()
plt.title('Confusion Matrix')
plt.xlabel('Predicted')
plt.ylabel('Actual')
class_labels = ['no', 'yes']
tick_marks = np.arange(len(class_labels))
plt.xticks(tick_marks, class_labels)
plt.yticks([-0.5, 0, 1, 1.5], ['', 'no', 'yes', ''])
# Plot the count inside each cell of the matrix
thresh = cf.max() / 2.
for i, j in itertools.product(range(cf.shape[0]), range(cf.shape[1])):
    plt.text(j, i, format(cf[i, j], 'd'), horizontalalignment='center',
             color='white' if cf[i, j] > thresh else 'black')
plt.show()
###Output
_____no_output_____
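Beyond the visualization, scalar metrics can be computed from the same predictions, for example with scikit-learn (a small sketch):

```
from sklearn.metrics import accuracy_score, classification_report

# Summarize the same predictions numerically.
print("Accuracy:", accuracy_score(actual, y_pred))
print(classification_report(actual, y_pred))
```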
###Markdown
Delete a Web ServiceDeletes the specified web service.
###Code
aci_service.delete()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing.png) Automated Machine Learning_**Classification with Deployment using a Bank Marketing Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Deploy](Deploy)1. [Test](Test)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the UCI Bank Marketing dataset to showcase how you can use AutoML for a classification problem and deploy it to an Azure Container Instance (ACI). The classification goal is to predict if the client will subscribe to a term deposit with the bank. If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. Please find the ONNX related documentation [here](https://github.com/onnx/onnx). In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model using local compute with ONNX compatible config on.4. Explore the results, featurization transparency options and save the ONNX model.5. Inference with the ONNX model.6. Register the model.7. Create a container image.8. Create an Azure Container Instance (ACI) service.9. Test the ACI service. In addition this notebook showcases the following features: - **Blocking** certain pipelines- Specifying **target metrics** to indicate stopping criteria- Handling **missing data** in the input SetupAs part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
import logging
from matplotlib import pyplot as plt
import pandas as pd
import os
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.automl.core.featurization import FeaturizationConfig
from azureml.core.dataset import Dataset
from azureml.train.automl import AutoMLConfig
from azureml.explain.model._internal.explanation_client import ExplanationClient
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.10.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
Accessing the Azure ML workspace requires authentication with Azure. The default authentication is interactive authentication using the default tenant. Executing the `ws = Workspace.from_config()` line in the cell below will prompt for authentication the first time that it is run.

If you have multiple Azure tenants, you can specify the tenant by replacing the `ws = Workspace.from_config()` line in the cell below with the following:

```
from azureml.core.authentication import InteractiveLoginAuthentication
auth = InteractiveLoginAuthentication(tenant_id = 'mytenantid')
ws = Workspace.from_config(auth = auth)
```

If you need to run in an environment where interactive login is not possible, you can use Service Principal authentication by replacing the `ws = Workspace.from_config()` line in the cell below with the following:

```
from azureml.core.authentication import ServicePrincipalAuthentication
auth = ServicePrincipalAuthentication('mytenantid', 'myappid', 'mypassword')
ws = Workspace.from_config(auth = auth)
```

For more details, see [aka.ms/aml-notebook-auth](http://aka.ms/aml-notebook-auth)
###Code
ws = Workspace.from_config()
# choose a name for experiment
experiment_name = 'automl-classification-bmarketing-all'
experiment=Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', None)  # -1 is deprecated in pandas; None means no width limit
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
Create or Attach existing AmlComputeYou will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.

As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
cpu_cluster_name = "cpu-cluster-4"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Data Load DataLeverage Azure compute to load the bank marketing dataset as a Tabular Dataset into the dataset variable. Training Data
###Code
data = pd.read_csv("https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv")
data.head()
# Add missing values in 75% of the lines.
import numpy as np
missing_rate = 0.75
n_missing_samples = int(np.floor(data.shape[0] * missing_rate))
missing_samples = np.hstack((np.zeros(data.shape[0] - n_missing_samples, dtype=bool), np.ones(n_missing_samples, dtype=bool)))  # np.bool was removed in NumPy >= 1.24; use the builtin bool
rng = np.random.RandomState(0)
rng.shuffle(missing_samples)
missing_features = rng.randint(0, data.shape[1], n_missing_samples)
data.values[np.where(missing_samples)[0], missing_features] = np.nan
if not os.path.isdir('data'):
os.mkdir('data')
# Save the train data to a csv to be uploaded to the datastore
pd.DataFrame(data).to_csv("data/train_data.csv", index=False)
ds = ws.get_default_datastore()
ds.upload(src_dir='./data', target_path='bankmarketing', overwrite=True, show_progress=True)
# Upload the training data as a tabular dataset for access during training on remote compute
train_data = Dataset.Tabular.from_delimited_files(path=ds.path('bankmarketing/train_data.csv'))
label = "y"
###Output
_____no_output_____
###Markdown
Validation Data
###Code
validation_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_validate.csv"
validation_dataset = Dataset.Tabular.from_delimited_files(validation_data)
###Output
_____no_output_____
###Markdown
Test Data
###Code
test_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_test.csv"
test_dataset = Dataset.Tabular.from_delimited_files(test_data)
###Output
_____no_output_____
###Markdown
TrainInstantiate an AutoMLConfig object. This defines the settings and data used to run the experiment.

|Property|Description|
|-|-|
|**task**|classification or regression or forecasting|
|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracy, AUC_weighted, average_precision_score_weighted, norm_macro_recall, precision_score_weighted|
|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|
|**blocked_models**|*List* of *strings* indicating machine learning algorithms for AutoML to avoid in this run. Allowed values for **Classification**: LogisticRegression, SGD, MultinomialNaiveBayes, BernoulliNaiveBayes, SVM, LinearSVM, KNN, DecisionTree, RandomForest, ExtremeRandomTrees, LightGBM, GradientBoosting, TensorFlowDNN, TensorFlowLinearClassifier. Allowed values for **Regression**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN. Allowed values for **Forecasting**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN, Arima, Prophet|
|**allowed_models**|*List* of *strings* indicating machine learning algorithms for AutoML to use in this run. Same values listed above for **blocked_models** allowed for **allowed_models**.|
|**experiment_exit_score**|Value indicating the target for *primary_metric*. Once the target is surpassed the run terminates.|
|**experiment_timeout_hours**|Maximum amount of time in hours that all iterations combined can take before the experiment terminates.|
|**enable_early_stopping**|Flag to enable early termination if the score is not improving in the short term.|
|**featurization**|'auto' / 'off' Indicator for whether featurization step should be done automatically or not. Note: If the input data is sparse, featurization cannot be turned on.|
|**n_cross_validations**|Number of cross validation splits.|
|**training_data**|Input dataset, containing both features and label column.|
|**label_column_name**|The name of the label column.|

**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric)
###Code
automl_settings = {
"experiment_timeout_hours" : 0.3,
"enable_early_stopping" : True,
"iteration_timeout_minutes": 5,
"max_concurrent_iterations": 4,
"max_cores_per_iteration": -1,
#"n_cross_validations": 2,
"primary_metric": 'AUC_weighted',
"featurization": 'auto',
"verbosity": logging.INFO,
}
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
compute_target=compute_target,
experiment_exit_score = 0.9984,
blocked_models = ['KNN','LinearSVM'],
enable_onnx_compatible_models=True,
training_data = train_data,
label_column_name = label,
validation_data = validation_dataset,
**automl_settings
)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.
###Code
remote_run = experiment.submit(automl_config, show_output = False)
remote_run
###Output
_____no_output_____
###Markdown
Run the following cell to access previous runs. Uncomment the cell below and update the run_id.
###Code
#from azureml.train.automl.run import AutoMLRun
#remote_run = AutoMLRun(experiment=experiment, run_id='<run_ID_goes_here>')
#remote_run
# Wait for the remote run to complete
remote_run.wait_for_completion()
best_run_customized, fitted_model_customized = remote_run.get_output()
###Output
_____no_output_____
###Markdown
TransparencyView updated featurization summary
###Code
custom_featurizer = fitted_model_customized.named_steps['datatransformer']
df = custom_featurizer.get_featurization_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Set `is_user_friendly=False` to get a more detailed summary for the transforms being applied.
###Code
df = custom_featurizer.get_featurization_summary(is_user_friendly=False)
pd.DataFrame(data=df)
df = custom_featurizer.get_stats_feature_type_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Results
###Code
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
###Output
_____no_output_____
###Markdown
Retrieve the Best Model's explanationRetrieve the explanation from the best_run which includes explanations for engineered features and raw features. Make sure that the run for generating explanations for the best model is completed.
###Code
# Wait for the best model explanation run to complete
from azureml.core.run import Run
model_explainability_run_id = remote_run.get_properties().get('ModelExplainRunId')
print(model_explainability_run_id)
if model_explainability_run_id is not None:
model_explainability_run = Run(experiment=experiment, run_id=model_explainability_run_id)
model_explainability_run.wait_for_completion()
# Get the best run object
best_run, fitted_model = remote_run.get_output()
###Output
_____no_output_____
###Markdown
Download engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=False)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Download raw feature importance from artifact storeYou can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=True)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Retrieve the Best ONNX ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.Set the parameter return_onnx_model=True to retrieve the best ONNX model, instead of the Python model.
###Code
best_run, onnx_mdl = remote_run.get_output(return_onnx_model=True)
###Output
_____no_output_____
###Markdown
Save the best ONNX model
###Code
from azureml.automl.runtime.onnx_convert import OnnxConverter
onnx_fl_path = "./best_model.onnx"
OnnxConverter.save_onnx_model(onnx_mdl, onnx_fl_path)
###Output
_____no_output_____
###Markdown
Predict with the ONNX model, using onnxruntime package
###Code
import sys
import json
from azureml.automl.core.onnx_convert import OnnxConvertConstants
from azureml.train.automl import constants
if sys.version_info < OnnxConvertConstants.OnnxIncompatiblePythonVersion:
python_version_compatible = True
else:
python_version_compatible = False
import onnxruntime
from azureml.automl.runtime.onnx_convert import OnnxInferenceHelper
def get_onnx_res(run):
res_path = 'onnx_resource.json'
run.download_file(name=constants.MODEL_RESOURCE_PATH_ONNX, output_file_path=res_path)
with open(res_path) as f:
onnx_res = json.load(f)
return onnx_res
if python_version_compatible:
test_df = test_dataset.to_pandas_dataframe()
mdl_bytes = onnx_mdl.SerializeToString()
onnx_res = get_onnx_res(best_run)
onnxrt_helper = OnnxInferenceHelper(mdl_bytes, onnx_res)
pred_onnx, pred_prob_onnx = onnxrt_helper.predict(test_df)
print(pred_onnx)
print(pred_prob_onnx)
else:
print('Please use Python version 3.6 or 3.7 to run the inference helper.')
###Output
_____no_output_____
###Markdown
Deploy Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details
###Code
best_run, fitted_model = remote_run.get_output()
model_name = best_run.properties['model_name']
script_file_name = 'inference/score.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', 'inference/score.py')
###Output
_____no_output_____
###Markdown
Register the Fitted Model for DeploymentIf neither `metric` nor `iteration` are specified in the `register_model` call, the iteration with the best primary metric is registered.
###Code
description = 'AutoML Model trained on bank marketing data to predict if a client will subscribe to a term deposit'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id) # This will be written to the script file later in the notebook.
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
from azureml.core.environment import Environment
inference_config = InferenceConfig(entry_script=script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 1,
tags = {'area': "bmData", 'type': "automl_classification"},
description = 'sample service for Automl Classification')
aci_service_name = 'automl-sample-bankmarketing-all'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
###Output
_____no_output_____
###Markdown
Delete a Web ServiceDeletes the specified web service.
###Code
#aci_service.delete()
###Output
_____no_output_____
###Markdown
Get Logs from a Deployed Web ServiceGets logs from a deployed web service.
###Code
#aci_service.get_logs()
###Output
_____no_output_____
###Markdown
TestNow that the model is trained, run the test data through the trained model to get the predicted values.
###Code
# Load the bank marketing datasets.
from numpy import array
X_test = test_dataset.drop_columns(columns=['y'])
y_test = test_dataset.keep_columns(columns=['y'], validate=True)
test_dataset.take(5).to_pandas_dataframe()
X_test = X_test.to_pandas_dataframe()
y_test = y_test.to_pandas_dataframe()
y_pred = fitted_model.predict(X_test)
actual = array(y_test)
actual = actual[:,0]
print(y_pred.shape, " ", actual.shape)
###Output
_____no_output_____
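Because the labels here are categorical ('yes'/'no') rather than continuous, a quick cross-tabulation can be more informative than the scatter plot below; for example (a pandas sketch):

```
# Cross-tabulate actual vs. predicted labels as a quick accuracy check.
print(pd.crosstab(pd.Series(actual, name='actual'),
                  pd.Series(y_pred, name='predicted')))
```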
###Markdown
Calculate metrics for the predictionNow visualize the data on a scatter plot to compare the truth (actual) values with the predicted values returned by the trained model.
###Code
%matplotlib notebook
test_pred = plt.scatter(actual, y_pred, color='b')
test_test = plt.scatter(actual, actual, color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing.png) Automated Machine Learning_**Classification with Deployment using a Bank Marketing Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Deploy](Deploy)1. [Test](Test)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the UCI Bank Marketing dataset to showcase how you can use AutoML for a classification problem and deploy it to an Azure Container Instance (ACI). The classification goal is to predict if the client will subscribe to a term deposit with the bank. If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. Please find the ONNX related documentation [here](https://github.com/onnx/onnx). In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model using local compute with ONNX compatible config on.4. Explore the results, featurization transparency options and save the ONNX model.5. Inference with the ONNX model.6. Register the model.7. Create a container image.8. Create an Azure Container Instance (ACI) service.9. Test the ACI service. In addition this notebook showcases the following features: - **Blocking** certain pipelines- Specifying **target metrics** to indicate stopping criteria- Handling **missing data** in the input SetupAs part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
import json
import logging
from matplotlib import pyplot as plt
import pandas as pd
import os
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.core.dataset import Dataset
from azureml.train.automl import AutoMLConfig
from azureml.interpret import ExplanationClient
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.39.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
Accessing the Azure ML workspace requires authentication with Azure. The default authentication is interactive authentication using the default tenant. Executing the `ws = Workspace.from_config()` line in the cell below will prompt for authentication the first time that it is run.

If you have multiple Azure tenants, you can specify the tenant by replacing the `ws = Workspace.from_config()` line in the cell below with the following:

```
from azureml.core.authentication import InteractiveLoginAuthentication
auth = InteractiveLoginAuthentication(tenant_id = 'mytenantid')
ws = Workspace.from_config(auth = auth)
```

If you need to run in an environment where interactive login is not possible, you can use Service Principal authentication by replacing the `ws = Workspace.from_config()` line in the cell below with the following:

```
from azureml.core.authentication import ServicePrincipalAuthentication
auth = ServicePrincipalAuthentication('mytenantid', 'myappid', 'mypassword')
ws = Workspace.from_config(auth = auth)
```

For more details, see [aka.ms/aml-notebook-auth](http://aka.ms/aml-notebook-auth)
###Code
ws = Workspace.from_config()
# choose a name for experiment
experiment_name = 'automl-classification-bmarketing-all'
experiment=Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', None)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
Create or Attach existing AmlComputeYou will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.

> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist.

Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.

As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
cpu_cluster_name = "cpu-cluster-4"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Data Load DataLeverage Azure compute to load the bank marketing dataset as a Tabular Dataset into the dataset variable. Training Data
###Code
data = pd.read_csv("https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv")
data.head()
# Add missing values in 75% of the lines.
import numpy as np
missing_rate = 0.75
n_missing_samples = int(np.floor(data.shape[0] * missing_rate))
missing_samples = np.hstack((np.zeros(data.shape[0] - n_missing_samples, dtype=bool), np.ones(n_missing_samples, dtype=bool)))  # np.bool was removed in NumPy >= 1.24; use the builtin bool
rng = np.random.RandomState(0)
rng.shuffle(missing_samples)
missing_features = rng.randint(0, data.shape[1], n_missing_samples)
data.values[np.where(missing_samples)[0], missing_features] = np.nan
if not os.path.isdir('data'):
os.mkdir('data')
# Save the train data to a csv to be uploaded to the datastore
pd.DataFrame(data).to_csv("data/train_data.csv", index=False)
ds = ws.get_default_datastore()
ds.upload(src_dir='./data', target_path='bankmarketing', overwrite=True, show_progress=True)
# Upload the training data as a tabular dataset for access during training on remote compute
train_data = Dataset.Tabular.from_delimited_files(path=ds.path('bankmarketing/train_data.csv'))
label = "y"
###Output
_____no_output_____
###Markdown
Validation Data
###Code
validation_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_validate.csv"
validation_dataset = Dataset.Tabular.from_delimited_files(validation_data)
###Output
_____no_output_____
###Markdown
Test Data
###Code
test_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_test.csv"
test_dataset = Dataset.Tabular.from_delimited_files(test_data)
###Output
_____no_output_____
###Markdown
TrainInstantiate an AutoMLConfig object. This defines the settings and data used to run the experiment.

|Property|Description|
|-|-|
|**task**|classification or regression or forecasting|
|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracy, AUC_weighted, average_precision_score_weighted, norm_macro_recall, precision_score_weighted|
|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|
|**blocked_models**|*List* of *strings* indicating machine learning algorithms for AutoML to avoid in this run. Allowed values for **Classification**: LogisticRegression, SGD, MultinomialNaiveBayes, BernoulliNaiveBayes, SVM, LinearSVM, KNN, DecisionTree, RandomForest, ExtremeRandomTrees, LightGBM, GradientBoosting, TensorFlowDNN, TensorFlowLinearClassifier. Allowed values for **Regression**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN. Allowed values for **Forecasting**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN, Arima, Prophet|
|**allowed_models**|*List* of *strings* indicating machine learning algorithms for AutoML to use in this run. Same values listed above for **blocked_models** allowed for **allowed_models**.|
|**experiment_exit_score**|Value indicating the target for *primary_metric*. Once the target is surpassed the run terminates.|
|**experiment_timeout_hours**|Maximum amount of time in hours that all iterations combined can take before the experiment terminates.|
|**enable_early_stopping**|Flag to enable early termination if the score is not improving in the short term.|
|**featurization**|'auto' / 'off' Indicator for whether featurization step should be done automatically or not. Note: If the input data is sparse, featurization cannot be turned on.|
|**n_cross_validations**|Number of cross validation splits.|
|**training_data**|Input dataset, containing both features and label column.|
|**label_column_name**|The name of the label column.|

**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric)
###Code
automl_settings = {
"experiment_timeout_hours" : 0.3,
"enable_early_stopping" : True,
"iteration_timeout_minutes": 5,
"max_concurrent_iterations": 4,
"max_cores_per_iteration": -1,
#"n_cross_validations": 2,
"primary_metric": 'AUC_weighted',
"featurization": 'auto',
"verbosity": logging.INFO,
}
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
compute_target=compute_target,
experiment_exit_score = 0.9984,
blocked_models = ['KNN','LinearSVM'],
enable_onnx_compatible_models=True,
training_data = train_data,
label_column_name = label,
validation_data = validation_dataset,
**automl_settings
)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while. Validation errors and current status will be shown when setting `show_output=True` and the execution will be synchronous.
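For example, the streaming behaviour described above is a one-flag variant of the cell below (a sketch):

```
# Block until completion, printing validation errors and status inline.
remote_run = experiment.submit(automl_config, show_output=True)
```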
###Code
remote_run = experiment.submit(automl_config, show_output = False)
###Output
_____no_output_____
###Markdown
Run the following cell to access previous runs. Uncomment the cell below and update the run_id.
###Code
#from azureml.train.automl.run import AutoMLRun
#remote_run = AutoMLRun(experiment=experiment, run_id='<run_ID_goes_here>')
#remote_run
# Wait for the remote run to complete
remote_run.wait_for_completion()
# Retrieve the best Run object
best_run = remote_run.get_best_child()
###Output
_____no_output_____
###Markdown
TransparencyView featurization summary for the best model - to study how different features were transformed. This is stored as a JSON file in the outputs directory for the run.
###Code
# Download the featurization summary JSON file locally
best_run.download_file("outputs/featurization_summary.json", "featurization_summary.json")
# Render the JSON as a pandas DataFrame
with open("featurization_summary.json", "r") as f:
records = json.load(f)
pd.DataFrame.from_records(records)
###Output
_____no_output_____
###Markdown
Results
###Code
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
###Output
_____no_output_____
###Markdown
Retrieve the Best Model's explanationRetrieve the explanation from the best_run which includes explanations for engineered features and raw features. Make sure that the run for generating explanations for the best model is completed.
###Code
# Wait for the best model explanation run to complete
from azureml.core.run import Run
model_explainability_run_id = remote_run.id + "_" + "ModelExplain"
print(model_explainability_run_id)
model_explainability_run = Run(experiment=experiment, run_id=model_explainability_run_id)
model_explainability_run.wait_for_completion()
# Get the best run object
best_run = remote_run.get_best_child()
###Output
_____no_output_____
###Markdown
Download engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=False)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Download raw feature importance from artifact storeYou can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=True)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Retrieve the Best ONNX ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.Set the parameter return_onnx_model=True to retrieve the best ONNX model, instead of the Python model.
###Code
best_run, onnx_mdl = remote_run.get_output(return_onnx_model=True)
###Output
_____no_output_____
###Markdown
Save the best ONNX model
###Code
from azureml.automl.runtime.onnx_convert import OnnxConverter
onnx_fl_path = "./best_model.onnx"
OnnxConverter.save_onnx_model(onnx_mdl, onnx_fl_path)
###Output
_____no_output_____
###Markdown
Predict with the ONNX model, using onnxruntime package
###Code
import sys
import json
from azureml.automl.core.onnx_convert import OnnxConvertConstants
from azureml.train.automl import constants
from azureml.automl.runtime.onnx_convert import OnnxInferenceHelper
def get_onnx_res(run):
res_path = 'onnx_resource.json'
run.download_file(name=constants.MODEL_RESOURCE_PATH_ONNX, output_file_path=res_path)
with open(res_path) as f:
result = json.load(f)
return result
if sys.version_info < OnnxConvertConstants.OnnxIncompatiblePythonVersion:
test_df = test_dataset.to_pandas_dataframe()
mdl_bytes = onnx_mdl.SerializeToString()
onnx_result = get_onnx_res(best_run)
onnxrt_helper = OnnxInferenceHelper(mdl_bytes, onnx_result)
pred_onnx, pred_prob_onnx = onnxrt_helper.predict(test_df)
print(pred_onnx)
print(pred_prob_onnx)
else:
print('Please use Python version 3.6 or 3.7 to run the inference helper.')
###Output
_____no_output_____
###Markdown
Deploy Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_best_child` method returns the Run object for the best model based on the default primary metric. There are additional flags that can be passed to the method if we want to retrieve the best Run based on any of the other supported metrics, or if we are just interested in the best run among the ONNX compatible runs. As always, you can execute `remote_run.get_best_child??` in a new cell to view the source or docs for the function.
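For example, the flags mentioned above can be used as follows (a brief sketch; verify the parameter names against your SDK version):

```
# Best run judged by a specific metric, and best run among ONNX-compatible runs.
best_by_accuracy = remote_run.get_best_child(metric='accuracy')
best_onnx_run = remote_run.get_best_child(onnx_compatible=True)
```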
###Code
remote_run.get_best_child??
###Output
_____no_output_____
###Markdown
Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details
###Code
best_run = remote_run.get_best_child()
model_name = best_run.properties['model_name']
script_file_name = 'inference/score.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', 'inference/score.py')
###Output
_____no_output_____
###Markdown
Register the Fitted Model for DeploymentIf neither `metric` nor `iteration` are specified in the `register_model` call, the iteration with the best primary metric is registered.
###Code
description = 'AutoML Model trained on bank marketing data to predict if a client will subscribe to a term deposit'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id) # This will be written to the script file later in the notebook.
###Output
_____no_output_____
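Once registered, the model can be retrieved by name later, independently of the run that produced it; for example (a short sketch):

```
from azureml.core.model import Model

# Fetch the latest registered version of the model back from the workspace.
registered_model = Model(ws, name=model_name)
print(registered_model.name, registered_model.version)
```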
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.model import Model
inference_config = InferenceConfig(environment = best_run.get_environment(), entry_script=script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 2,
memory_gb = 2,
tags = {'area': "bmData", 'type': "automl_classification"},
description = 'sample service for Automl Classification')
aci_service_name = 'automl-sample-bankmarketing-all'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
###Output
_____no_output_____
###Markdown
Get Logs from a Deployed Web ServiceGets logs from a deployed web service.
###Code
#aci_service.get_logs()
###Output
_____no_output_____
###Markdown
TestNow that the model is trained, run the test data through the trained model to get the predicted values. This calls the ACI web service to do the prediction. Note that the JSON passed to the ACI web service is an array of rows of data. Each row should either be an array of values in the same order that was used for training or a dictionary where the keys are the same as the column names used for training. The example below uses dictionary rows.
###Code
# Load the bank marketing datasets.
from numpy import array
X_test = test_dataset.drop_columns(columns=['y'])
y_test = test_dataset.keep_columns(columns=['y'], validate=True)
test_dataset.take(5).to_pandas_dataframe()
X_test = X_test.to_pandas_dataframe()
y_test = y_test.to_pandas_dataframe()
import requests
X_test_json = X_test.to_json(orient='records')
data = "{\"data\": " + X_test_json +"}"
headers = {'Content-Type': 'application/json'}
resp = requests.post(aci_service.scoring_uri, data, headers=headers)
y_pred = json.loads(json.loads(resp.text))['result']
actual = array(y_test)
actual = actual[:,0]
print(len(y_pred), " ", len(actual))
###Output
_____no_output_____
###Markdown
Calculate metrics for the predictionNow visualize the data as a confusion matrix that compares the predicted values against the actual values.
###Code
%matplotlib notebook
from sklearn.metrics import confusion_matrix
import itertools
cf = confusion_matrix(actual, y_pred)
plt.imshow(cf, cmap=plt.cm.Blues, interpolation='nearest')
plt.colorbar()
plt.title('Confusion Matrix')
plt.xlabel('Predicted')
plt.ylabel('Actual')
class_labels = ['no', 'yes']
tick_marks = np.arange(len(class_labels))
plt.xticks(tick_marks, class_labels)
plt.yticks([-0.5, 0, 1, 1.5], ['', 'no', 'yes', ''])
# Plot the count inside each cell of the matrix
thresh = cf.max() / 2.
for i, j in itertools.product(range(cf.shape[0]), range(cf.shape[1])):
    plt.text(j, i, format(cf[i, j], 'd'), horizontalalignment='center',
             color='white' if cf[i, j] > thresh else 'black')
plt.show()
###Output
_____no_output_____
###Markdown
Delete a Web ServiceDeletes the specified web service.
###Code
aci_service.delete()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing.png) Automated Machine Learning_**Classification with Deployment using a Bank Marketing Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Deploy](Deploy)1. [Test](Test)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the UCI Bank Marketing dataset to showcase how you can use AutoML for a classification problem and deploy it to an Azure Container Instance (ACI). The classification goal is to predict if the client will subscribe to a term deposit with the bank. If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. Please find the ONNX related documentation [here](https://github.com/onnx/onnx). In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model using local compute with ONNX compatible config on.4. Explore the results, featurization transparency options and save the ONNX model.5. Inference with the ONNX model.6. Register the model.7. Create a container image.8. Create an Azure Container Instance (ACI) service.9. Test the ACI service. In addition this notebook showcases the following features: - **Blacklisting** certain pipelines- Specifying **target metrics** to indicate stopping criteria- Handling **missing data** in the input SetupAs part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
import logging
from matplotlib import pyplot as plt
import pandas as pd
import os
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.automl.core.featurization import FeaturizationConfig
from azureml.core.dataset import Dataset
from azureml.train.automl import AutoMLConfig
from azureml.explain.model._internal.explanation_client import ExplanationClient
ws = Workspace.from_config()
# choose a name for experiment
experiment_name = 'automl-classification-bmarketing-all'
experiment=Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', None)  # -1 is deprecated in pandas; None means no width limit
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
Create or Attach existing AmlComputeYou will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.

As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import AmlCompute
from azureml.core.compute import ComputeTarget
# Choose a name for your cluster.
amlcompute_cluster_name = "cpu-cluster-4"
found = False
# Check if this compute target already exists in the workspace.
cts = ws.compute_targets
if amlcompute_cluster_name in cts and cts[amlcompute_cluster_name].type == 'AmlCompute':
    found = True
    print('Found existing compute target.')
    compute_target = cts[amlcompute_cluster_name]
if not found:
    print('Creating a new compute target...')
    provisioning_config = AmlCompute.provisioning_configuration(vm_size = "STANDARD_D2_V2", # for GPU, use "STANDARD_NC6"
                                                                #vm_priority = 'lowpriority', # optional
                                                                max_nodes = 6)
    # Create the cluster.
    compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, provisioning_config)
print('Checking cluster status...')
# Can poll for a minimum number of nodes and for a specific timeout.
# If no min_node_count is provided, it will use the scale settings for the cluster.
compute_target.wait_for_completion(show_output = True, min_node_count = None, timeout_in_minutes = 20)
# For a more detailed view of current AmlCompute status, use get_status().
###Output
_____no_output_____
###Markdown
Data Load DataLeverage Azure compute to load the bank marketing dataset as a Tabular Dataset into the dataset variable. Training Data
###Code
data = pd.read_csv("https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv")
data.head()
# Add missing values to 75% of the rows.
import numpy as np
missing_rate = 0.75
n_missing_samples = int(np.floor(data.shape[0] * missing_rate))
# Build a boolean mask marking which rows receive a missing value
# (np.bool is deprecated in recent NumPy releases; the builtin bool is equivalent here).
missing_samples = np.hstack((np.zeros(data.shape[0] - n_missing_samples, dtype=bool), np.ones(n_missing_samples, dtype=bool)))
rng = np.random.RandomState(0)
rng.shuffle(missing_samples)
# For each affected row, pick one random feature column to blank out.
missing_features = rng.randint(0, data.shape[1], n_missing_samples)
data.values[np.where(missing_samples)[0], missing_features] = np.nan
if not os.path.isdir('data'):
    os.mkdir('data')
# Save the train data to a csv to be uploaded to the datastore
pd.DataFrame(data).to_csv("data/train_data.csv", index=False)
ds = ws.get_default_datastore()
ds.upload(src_dir='./data', target_path='bankmarketing', overwrite=True, show_progress=True)
# Upload the training data as a tabular dataset for access during training on remote compute
train_data = Dataset.Tabular.from_delimited_files(path=ds.path('bankmarketing/train_data.csv'))
label = "y"
###Output
_____no_output_____
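###Markdown
If separate validation files are not available, a `TabularDataset` can also be split directly. The cell below is a minimal sketch (not part of the original walkthrough) assuming the `train_data` dataset defined above; `random_split` returns two datasets and the seed makes the split reproducible.
###Code
# Illustrative alternative: carve a validation set out of the training dataset.
train_subset, validation_subset = train_data.random_split(percentage=0.8, seed=223)
print(train_subset.to_pandas_dataframe().shape, validation_subset.to_pandas_dataframe().shape)
###Output
_____no_output_____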
###Markdown
Validation Data
###Code
validation_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_validate.csv"
validation_dataset = Dataset.Tabular.from_delimited_files(validation_data)
###Output
_____no_output_____
###Markdown
Test Data
###Code
test_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_test.csv"
test_dataset = Dataset.Tabular.from_delimited_files(test_data)
###Output
_____no_output_____
###Markdown
TrainInstantiate an AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification, regression, or forecasting||**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracy, AUC_weighted, average_precision_score_weighted, norm_macro_recall, precision_score_weighted||**iteration_timeout_minutes**|Time limit in minutes for each iteration.||**blacklist_models** | *List* of *strings* indicating machine learning algorithms for AutoML to avoid in this run. Allowed values for **Classification**: LogisticRegression, SGD, MultinomialNaiveBayes, BernoulliNaiveBayes, SVM, LinearSVM, KNN, DecisionTree, RandomForest, ExtremeRandomTrees, LightGBM, GradientBoosting, TensorFlowDNN, TensorFlowLinearClassifier. Allowed values for **Regression**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN. Allowed values for **Forecasting**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN, Arima, Prophet|| **whitelist_models** | *List* of *strings* indicating machine learning algorithms for AutoML to use in this run. The same values listed above for **blacklist_models** are allowed for **whitelist_models**.||**experiment_exit_score**| Value indicating the target for *primary_metric*. Once the target is surpassed the run terminates.||**experiment_timeout_minutes**| Maximum amount of time in minutes that all iterations combined can take before the experiment terminates.||**enable_early_stopping**| Flag to enable early termination if the score is not improving in the short term.||**featurization**| 'auto' / 'off' Indicator for whether the featurization step should be done automatically or not. Note: If the input data is sparse, featurization cannot be turned on.||**n_cross_validations**|Number of cross validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.||**model_explainability**|Indicate to explain each trained pipeline or not.|**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric)
###Code
automl_settings = {
"experiment_timeout_minutes" : 20,
"enable_early_stopping" : True,
"iteration_timeout_minutes": 5,
"max_concurrent_iterations": 4,
"max_cores_per_iteration": -1,
#"n_cross_validations": 2,
"primary_metric": 'AUC_weighted',
"featurization": 'auto',
"verbosity": logging.INFO,
}
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
compute_target=compute_target,
experiment_exit_score = 0.9984,
blacklist_models = ['KNN','LinearSVM'],
enable_onnx_compatible_models=True,
training_data = train_data,
label_column_name = label,
validation_data = validation_dataset,
model_explainability=True,
**automl_settings
)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.
###Code
remote_run = experiment.submit(automl_config, show_output = False)
remote_run
###Output
_____no_output_____
###Markdown
Run the following cell to access previous runs. Uncomment the cell below and update the run_id.
###Code
#from azureml.train.automl.run import AutoMLRun
#experiment_name = 'automl-classification-bmarketing'
#experiment = Experiment(ws, experiment_name)
#remote_run = AutoMLRun(experiment=experiment, run_id='<run_ID_goes_here>')
#remote_run
# Wait for the remote run to complete
remote_run.wait_for_completion()
best_run_customized, fitted_model_customized = remote_run.get_output()
###Output
_____no_output_____
###Markdown
TransparencyView updated featurization summary
###Code
custom_featurizer = fitted_model_customized.named_steps['datatransformer']
df = custom_featurizer.get_featurization_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Set `is_user_friendly=False` to get a more detailed summary of the transforms being applied.
###Code
df = custom_featurizer.get_featurization_summary(is_user_friendly=False)
pd.DataFrame(data=df)
df = custom_featurizer.get_stats_feature_type_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Results
###Code
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
###Output
_____no_output_____
###Markdown
Retrieve the Best Model's explanationRetrieve the explanation from the best_run which includes explanations for engineered features and raw features. Make sure that the run for generating explanations for the best model is completed.
###Code
# Wait for the best model explanation run to complete
from azureml.train.automl.run import AutoMLRun
model_explainability_run_id = remote_run.get_properties().get('ModelExplainRunId')
print(model_explainability_run_id)
if model_explainability_run_id is not None:
    model_explainability_run = AutoMLRun(experiment=experiment, run_id=model_explainability_run_id)
    model_explainability_run.wait_for_completion()
# Get the best run object
best_run, fitted_model = remote_run.get_output()
###Output
_____no_output_____
###Markdown
Download engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=False)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
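###Markdown
The explanation is returned as a {feature: importance} dictionary; a quick way to inspect it is to sort it into a DataFrame. This is a small illustrative sketch assuming the `exp_data` dictionary from the cell above.
###Code
# Sort the importance dictionary and show the ten most important engineered features.
importance_df = pd.DataFrame(sorted(exp_data.items(), key=lambda kv: kv[1], reverse=True),
                             columns=['feature', 'importance'])
importance_df.head(10)
###Output
_____no_output_____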
###Markdown
Download raw feature importance from artifact storeYou can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=True)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Retrieve the Best ONNX ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.Set the parameter return_onnx_model=True to retrieve the best ONNX model, instead of the Python model.
###Code
best_run, onnx_mdl = remote_run.get_output(return_onnx_model=True)
###Output
_____no_output_____
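###Markdown
As a hedged illustration of the `get_output` overloads mentioned above, you can pass `iteration` or `metric` to retrieve a specific model instead of the overall best one; the argument values below are examples, not part of the original run.
###Code
# Retrieve the model from a specific child iteration (iteration 3 is illustrative).
#specific_run, specific_model = remote_run.get_output(iteration=3)
# Retrieve the best model according to a different logged metric.
#metric_run, metric_model = remote_run.get_output(metric='accuracy')
###Output
_____no_output_____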
###Markdown
Save the best ONNX model
###Code
from azureml.automl.runtime.onnx_convert import OnnxConverter
onnx_fl_path = "./best_model.onnx"
OnnxConverter.save_onnx_model(onnx_mdl, onnx_fl_path)
###Output
_____no_output_____
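###Markdown
As an optional sanity check, you can reload the saved file and run the ONNX structural checker before using it for inference. This is a sketch assuming the `onnx` package is installed in your environment.
###Code
import onnx
# Load the model we just saved and validate its graph structure.
reloaded_onnx_mdl = onnx.load(onnx_fl_path)
onnx.checker.check_model(reloaded_onnx_mdl)
print("The saved ONNX model is well formed.")
###Output
_____no_output_____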
###Markdown
Predict with the ONNX model, using onnxruntime package
###Code
import sys
import json
from azureml.automl.core.onnx_convert import OnnxConvertConstants
from azureml.train.automl import constants
if sys.version_info < OnnxConvertConstants.OnnxIncompatiblePythonVersion:
    python_version_compatible = True
else:
    python_version_compatible = False
import onnxruntime
from azureml.automl.runtime.onnx_convert import OnnxInferenceHelper
def get_onnx_res(run):
    res_path = 'onnx_resource.json'
    run.download_file(name=constants.MODEL_RESOURCE_PATH_ONNX, output_file_path=res_path)
    with open(res_path) as f:
        onnx_res = json.load(f)
    return onnx_res
if python_version_compatible:
    test_df = test_dataset.to_pandas_dataframe()
    mdl_bytes = onnx_mdl.SerializeToString()
    onnx_res = get_onnx_res(best_run)
    onnxrt_helper = OnnxInferenceHelper(mdl_bytes, onnx_res)
    pred_onnx, pred_prob_onnx = onnxrt_helper.predict(test_df)
    print(pred_onnx)
    print(pred_prob_onnx)
else:
    print('Please use Python version 3.6 or 3.7 to run the inference helper.')
###Output
_____no_output_____
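###Markdown
The `OnnxInferenceHelper` above wraps featurization and input binding for you. For reference, this is the generic raw `onnxruntime` pattern for scoring a saved ONNX file; it is a sketch only, since an AutoML ONNX model expects one named input per raw column, so the commented scoring call is illustrative rather than specific to this model.
###Code
import onnxruntime as rt
# Open an inference session on the saved model file (CPU provider).
sess = rt.InferenceSession(onnx_fl_path, providers=['CPUExecutionProvider'])
# Inspect the input names, shapes, and types the model actually expects.
for model_input in sess.get_inputs():
    print(model_input.name, model_input.shape, model_input.type)
# Scoring would then look like:
#outputs = sess.run(None, feed_dict)  # feed_dict maps each input name to a NumPy array
###Output
_____no_output_____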
###Markdown
Deploy Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details.
###Code
best_run, fitted_model = remote_run.get_output()
model_name = best_run.properties['model_name']
script_file_name = 'inference/score.py'
conda_env_file_name = 'inference/env.yml'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', 'inference/score.py')
best_run.download_file('outputs/conda_env_v_1_0_0.yml', 'inference/env.yml')
###Output
_____no_output_____
###Markdown
Register the Fitted Model for DeploymentIf neither `metric` nor `iteration` are specified in the `register_model` call, the iteration with the best primary metric is registered.
###Code
description = 'AutoML Model trained on bank marketing data to predict if a client will subscribe to a term deposit'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id) # This will be written to the script file later in the notebook.
###Output
_____no_output_____
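###Markdown
A hedged variant of the call above: `register_model` also accepts `iteration` or `metric` arguments if you want to register something other than the overall best model. The values shown are illustrative.
###Code
# Register the model from a specific iteration, or the best model for a given metric.
#model = remote_run.register_model(model_name = model_name, iteration = 3)
#model = remote_run.register_model(model_name = model_name, metric = 'accuracy')
###Output
_____no_output_____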
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
inference_config = InferenceConfig(runtime = "python",
entry_script = script_file_name,
conda_file = conda_env_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 1,
tags = {'area': "bmData", 'type': "automl_classification"},
description = 'sample service for Automl Classification')
aci_service_name = 'automl-sample-bankmarketing-all'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
###Output
_____no_output_____
###Markdown
Delete a Web ServiceDeletes the specified web service.
###Code
#aci_service.delete()
###Output
_____no_output_____
###Markdown
Get Logs from a Deployed Web ServiceGets logs from a deployed web service.
###Code
#aci_service.get_logs()
###Output
_____no_output_____
###Markdown
TestNow that the model is trained, run the test data through the trained model to get the predicted values.
###Code
# Load the bank marketing datasets.
from numpy import array
X_test = test_dataset.drop_columns(columns=['y'])
y_test = test_dataset.keep_columns(columns=['y'], validate=True)
test_dataset.take(5).to_pandas_dataframe()
X_test = X_test.to_pandas_dataframe()
y_test = y_test.to_pandas_dataframe()
y_pred = fitted_model.predict(X_test)
actual = array(y_test)
actual = actual[:,0]
print(y_pred.shape, " ", actual.shape)
###Output
_____no_output_____
###Markdown
Calculate metrics for the predictionNow visualize the data on a scatter plot to compare the truth (actual) values with the predicted values from the trained model that was returned.
###Code
%matplotlib notebook
test_pred = plt.scatter(actual, y_pred, color='b')
test_test = plt.scatter(actual, actual, color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing.png) Automated Machine Learning_**Classification with Deployment using a Bank Marketing Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Deploy](Deploy)1. [Test](Test)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the UCI Bank Marketing dataset to showcase how you can use AutoML for a classification problem and deploy it to an Azure Container Instance (ACI). The classification goal is to predict if the client will subscribe to a term deposit with the bank.If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. Please find the ONNX-related documentation [here](https://github.com/onnx/onnx).In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model using local compute with an ONNX-compatible configuration enabled.4. Explore the results and featurization transparency options, and save the ONNX model.5. Run inference with the ONNX model.6. Register the model.7. Create a container image.8. Create an Azure Container Instance (ACI) service.9. Test the ACI service.In addition, this notebook showcases the following features:- **Blacklisting** certain pipelines- Specifying **target metrics** to indicate stopping criteria- Handling **missing data** in the input SetupAs part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
import logging
from matplotlib import pyplot as plt
import pandas as pd
import os
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.automl.core.featurization import FeaturizationConfig
from azureml.core.dataset import Dataset
from azureml.train.automl import AutoMLConfig
from azureml.explain.model._internal.explanation_client import ExplanationClient
###Output
_____no_output_____
###Markdown
Accessing the Azure ML workspace requires authentication with Azure.The default authentication is interactive authentication using the default tenant. Executing the `ws = Workspace.from_config()` line in the cell below will prompt for authentication the first time that it is run.If you have multiple Azure tenants, you can specify the tenant by replacing the `ws = Workspace.from_config()` line in the cell below with the following:```from azureml.core.authentication import InteractiveLoginAuthenticationauth = InteractiveLoginAuthentication(tenant_id = 'mytenantid')ws = Workspace.from_config(auth = auth)```If you need to run in an environment where interactive login is not possible, you can use Service Principal authentication by replacing the `ws = Workspace.from_config()` line in the cell below with the following:```from azureml.core.authentication import ServicePrincipalAuthenticationauth = ServicePrincipalAuthentication('mytenantid', 'myappid', 'mypassword')ws = Workspace.from_config(auth = auth)```For more details, see [aka.ms/aml-notebook-auth](http://aka.ms/aml-notebook-auth)
###Code
ws = Workspace.from_config()
# choose a name for experiment
experiment_name = 'automl-classification-bmarketing-all'
experiment=Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', None)  # -1 is deprecated in recent pandas; None means no truncation
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
Create or Attach existing AmlComputeYou will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import AmlCompute
from azureml.core.compute import ComputeTarget
# Choose a name for your cluster.
amlcompute_cluster_name = "cpu-cluster-4"
found = False
# Check if this compute target already exists in the workspace.
cts = ws.compute_targets
if amlcompute_cluster_name in cts and cts[amlcompute_cluster_name].type == 'AmlCompute':
    found = True
    print('Found existing compute target.')
    compute_target = cts[amlcompute_cluster_name]
if not found:
    print('Creating a new compute target...')
    provisioning_config = AmlCompute.provisioning_configuration(vm_size = "STANDARD_D2_V2", # for GPU, use "STANDARD_NC6"
                                                                #vm_priority = 'lowpriority', # optional
                                                                max_nodes = 6)
    # Create the cluster.
    compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, provisioning_config)
print('Checking cluster status...')
# Can poll for a minimum number of nodes and for a specific timeout.
# If no min_node_count is provided, it will use the scale settings for the cluster.
compute_target.wait_for_completion(show_output = True, min_node_count = None, timeout_in_minutes = 20)
# For a more detailed view of current AmlCompute status, use get_status().
###Output
_____no_output_____
###Markdown
Data Load DataLeverage Azure compute to load the bank marketing dataset as a Tabular Dataset into the dataset variable. Training Data
###Code
data = pd.read_csv("https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv")
data.head()
# Add missing values to 75% of the rows.
import numpy as np
missing_rate = 0.75
n_missing_samples = int(np.floor(data.shape[0] * missing_rate))
# Build a boolean mask marking which rows receive a missing value
# (np.bool is deprecated in recent NumPy releases; the builtin bool is equivalent here).
missing_samples = np.hstack((np.zeros(data.shape[0] - n_missing_samples, dtype=bool), np.ones(n_missing_samples, dtype=bool)))
rng = np.random.RandomState(0)
rng.shuffle(missing_samples)
# For each affected row, pick one random feature column to blank out.
missing_features = rng.randint(0, data.shape[1], n_missing_samples)
data.values[np.where(missing_samples)[0], missing_features] = np.nan
if not os.path.isdir('data'):
    os.mkdir('data')
# Save the train data to a csv to be uploaded to the datastore
pd.DataFrame(data).to_csv("data/train_data.csv", index=False)
ds = ws.get_default_datastore()
ds.upload(src_dir='./data', target_path='bankmarketing', overwrite=True, show_progress=True)
# Upload the training data as a tabular dataset for access during training on remote compute
train_data = Dataset.Tabular.from_delimited_files(path=ds.path('bankmarketing/train_data.csv'))
label = "y"
###Output
_____no_output_____
###Markdown
Validation Data
###Code
validation_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_validate.csv"
validation_dataset = Dataset.Tabular.from_delimited_files(validation_data)
###Output
_____no_output_____
###Markdown
Test Data
###Code
test_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_test.csv"
test_dataset = Dataset.Tabular.from_delimited_files(test_data)
###Output
_____no_output_____
###Markdown
TrainInstantiate an AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification, regression, or forecasting||**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracy, AUC_weighted, average_precision_score_weighted, norm_macro_recall, precision_score_weighted||**iteration_timeout_minutes**|Time limit in minutes for each iteration.||**blacklist_models** | *List* of *strings* indicating machine learning algorithms for AutoML to avoid in this run. Allowed values for **Classification**: LogisticRegression, SGD, MultinomialNaiveBayes, BernoulliNaiveBayes, SVM, LinearSVM, KNN, DecisionTree, RandomForest, ExtremeRandomTrees, LightGBM, GradientBoosting, TensorFlowDNN, TensorFlowLinearClassifier. Allowed values for **Regression**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN. Allowed values for **Forecasting**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN, Arima, Prophet|| **whitelist_models** | *List* of *strings* indicating machine learning algorithms for AutoML to use in this run. The same values listed above for **blacklist_models** are allowed for **whitelist_models**.||**experiment_exit_score**| Value indicating the target for *primary_metric*. Once the target is surpassed the run terminates.||**experiment_timeout_hours**| Maximum amount of time in hours that all iterations combined can take before the experiment terminates.||**enable_early_stopping**| Flag to enable early termination if the score is not improving in the short term.||**featurization**| 'auto' / 'off' Indicator for whether the featurization step should be done automatically or not. Note: If the input data is sparse, featurization cannot be turned on.||**n_cross_validations**|Number of cross validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.|**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric)
###Code
automl_settings = {
"experiment_timeout_hours" : 0.3,
"enable_early_stopping" : True,
"iteration_timeout_minutes": 5,
"max_concurrent_iterations": 4,
"max_cores_per_iteration": -1,
#"n_cross_validations": 2,
"primary_metric": 'AUC_weighted',
"featurization": 'auto',
"verbosity": logging.INFO,
}
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
compute_target=compute_target,
experiment_exit_score = 0.9984,
blacklist_models = ['KNN','LinearSVM'],
enable_onnx_compatible_models=True,
training_data = train_data,
label_column_name = label,
validation_data = validation_dataset,
**automl_settings
)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.
###Code
remote_run = experiment.submit(automl_config, show_output = False)
remote_run
###Output
_____no_output_____
###Markdown
Run the following cell to access previous runs. Uncomment the cell below and update the run_id.
###Code
#from azureml.train.automl.run import AutoMLRun
#experiment_name = 'automl-classification-bmarketing'
#experiment = Experiment(ws, experiment_name)
#remote_run = AutoMLRun(experiment=experiment, run_id='<run_ID_goes_here>')
#remote_run
# Wait for the remote run to complete
remote_run.wait_for_completion()
best_run_customized, fitted_model_customized = remote_run.get_output()
###Output
_____no_output_____
###Markdown
TransparencyView updated featurization summary
###Code
custom_featurizer = fitted_model_customized.named_steps['datatransformer']
df = custom_featurizer.get_featurization_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Set `is_user_friendly=False` to get a more detailed summary of the transforms being applied.
###Code
df = custom_featurizer.get_featurization_summary(is_user_friendly=False)
pd.DataFrame(data=df)
df = custom_featurizer.get_stats_feature_type_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Results
###Code
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
###Output
_____no_output_____
###Markdown
Retrieve the Best Model's explanationRetrieve the explanation from the best_run which includes explanations for engineered features and raw features. Make sure that the run for generating explanations for the best model is completed.
###Code
# Wait for the best model explanation run to complete
from azureml.core.run import Run
model_explainability_run_id = remote_run.get_properties().get('ModelExplainRunId')
print(model_explainability_run_id)
if model_explainability_run_id is not None:
    model_explainability_run = Run(experiment=experiment, run_id=model_explainability_run_id)
    model_explainability_run.wait_for_completion()
# Get the best run object
best_run, fitted_model = remote_run.get_output()
###Output
_____no_output_____
###Markdown
Download engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=False)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Download raw feature importance from artifact storeYou can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=True)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Retrieve the Best ONNX ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.Set the parameter return_onnx_model=True to retrieve the best ONNX model, instead of the Python model.
###Code
best_run, onnx_mdl = remote_run.get_output(return_onnx_model=True)
###Output
_____no_output_____
###Markdown
Save the best ONNX model
###Code
from azureml.automl.runtime.onnx_convert import OnnxConverter
onnx_fl_path = "./best_model.onnx"
OnnxConverter.save_onnx_model(onnx_mdl, onnx_fl_path)
###Output
_____no_output_____
###Markdown
Predict with the ONNX model, using onnxruntime package
###Code
import sys
import json
from azureml.automl.core.onnx_convert import OnnxConvertConstants
from azureml.train.automl import constants
if sys.version_info < OnnxConvertConstants.OnnxIncompatiblePythonVersion:
    python_version_compatible = True
else:
    python_version_compatible = False
import onnxruntime
from azureml.automl.runtime.onnx_convert import OnnxInferenceHelper
def get_onnx_res(run):
    res_path = 'onnx_resource.json'
    run.download_file(name=constants.MODEL_RESOURCE_PATH_ONNX, output_file_path=res_path)
    with open(res_path) as f:
        onnx_res = json.load(f)
    return onnx_res
if python_version_compatible:
    test_df = test_dataset.to_pandas_dataframe()
    mdl_bytes = onnx_mdl.SerializeToString()
    onnx_res = get_onnx_res(best_run)
    onnxrt_helper = OnnxInferenceHelper(mdl_bytes, onnx_res)
    pred_onnx, pred_prob_onnx = onnxrt_helper.predict(test_df)
    print(pred_onnx)
    print(pred_prob_onnx)
else:
    print('Please use Python version 3.6 or 3.7 to run the inference helper.')
###Output
_____no_output_____
###Markdown
Deploy Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details.
###Code
best_run, fitted_model = remote_run.get_output()
model_name = best_run.properties['model_name']
script_file_name = 'inference/score.py'
conda_env_file_name = 'inference/env.yml'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', 'inference/score.py')
best_run.download_file('outputs/conda_env_v_1_0_0.yml', 'inference/env.yml')
###Output
_____no_output_____
###Markdown
Register the Fitted Model for DeploymentIf neither `metric` nor `iteration` are specified in the `register_model` call, the iteration with the best primary metric is registered.
###Code
description = 'AutoML Model trained on bank marketing data to predict if a client will subscribe to a term deposit'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id) # This will be written to the script file later in the notebook.
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
from azureml.core.environment import Environment
myenv = Environment.from_conda_specification(name="myenv", file_path=conda_env_file_name)
inference_config = InferenceConfig(entry_script=script_file_name, environment=myenv)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 1,
tags = {'area': "bmData", 'type': "automl_classification"},
description = 'sample service for Automl Classification')
aci_service_name = 'automl-sample-bankmarketing-all'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
###Output
_____no_output_____
###Markdown
Delete a Web ServiceDeletes the specified web service.
###Code
#aci_service.delete()
###Output
_____no_output_____
###Markdown
Get Logs from a Deployed Web ServiceGets logs from a deployed web service.
###Code
#aci_service.get_logs()
###Output
_____no_output_____
###Markdown
TestNow that the model is trained, run the test data through the trained model to get the predicted values.
###Code
# Load the bank marketing datasets.
from numpy import array
X_test = test_dataset.drop_columns(columns=['y'])
y_test = test_dataset.keep_columns(columns=['y'], validate=True)
test_dataset.take(5).to_pandas_dataframe()
X_test = X_test.to_pandas_dataframe()
y_test = y_test.to_pandas_dataframe()
y_pred = fitted_model.predict(X_test)
actual = array(y_test)
actual = actual[:,0]
print(y_pred.shape, " ", actual.shape)
###Output
_____no_output_____
###Markdown
Calculate metrics for the predictionNow visualize the data on a scatter plot to compare the truth (actual) values with the predicted values from the trained model that was returned.
###Code
%matplotlib notebook
test_pred = plt.scatter(actual, y_pred, color='b')
test_test = plt.scatter(actual, actual, color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing.png) Automated Machine Learning_**Classification with Deployment using a Bank Marketing Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Deploy](Deploy)1. [Test](Test)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the UCI Bank Marketing dataset to showcase how you can use AutoML for a classification problem and deploy it to an Azure Container Instance (ACI). The classification goal is to predict if the client will subscribe to a term deposit with the bank.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. Please find the ONNX-related documentation [here](https://github.com/onnx/onnx).In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model using local compute with an ONNX-compatible configuration enabled.4. Explore the results and featurization transparency options, and save the ONNX model.5. Run inference with the ONNX model.6. Register the model.7. Create a container image.8. Create an Azure Container Instance (ACI) service.9. Test the ACI service.In addition, this notebook showcases the following features:- **Blocking** certain pipelines- Specifying **target metrics** to indicate stopping criteria- Handling **missing data** in the input SetupAs part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
import logging
from matplotlib import pyplot as plt
import pandas as pd
import os
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.core.dataset import Dataset
from azureml.train.automl import AutoMLConfig
from azureml.interpret import ExplanationClient
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.36.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
Accessing the Azure ML workspace requires authentication with Azure.The default authentication is interactive authentication using the default tenant. Executing the `ws = Workspace.from_config()` line in the cell below will prompt for authentication the first time that it is run.If you have multiple Azure tenants, you can specify the tenant by replacing the `ws = Workspace.from_config()` line in the cell below with the following:```from azureml.core.authentication import InteractiveLoginAuthenticationauth = InteractiveLoginAuthentication(tenant_id = 'mytenantid')ws = Workspace.from_config(auth = auth)```If you need to run in an environment where interactive login is not possible, you can use Service Principal authentication by replacing the `ws = Workspace.from_config()` line in the cell below with the following:```from azureml.core.authentication import ServicePrincipalAuthenticationauth = ServicePrincipalAuthentication('mytenantid', 'myappid', 'mypassword')ws = Workspace.from_config(auth = auth)```For more details, see [aka.ms/aml-notebook-auth](http://aka.ms/aml-notebook-auth)
###Code
ws = Workspace.from_config()
# choose a name for experiment
experiment_name = 'automl-classification-bmarketing-all'
experiment=Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', None)  # -1 is deprecated in recent pandas; None means no truncation
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
Create or Attach existing AmlComputeYou will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
cpu_cluster_name = "cpu-cluster-4"
# Verify that cluster does not exist already
try:
    compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)
    print('Found existing cluster, use it.')
except ComputeTargetException:
    compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2',
                                                           max_nodes=6)
    compute_target = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Data Load DataLeverage Azure compute to load the bank marketing dataset as a Tabular Dataset into the dataset variable. Training Data
###Code
data = pd.read_csv("https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv")
data.head()
# Add missing values to 75% of the rows.
import numpy as np
missing_rate = 0.75
n_missing_samples = int(np.floor(data.shape[0] * missing_rate))
# Build a boolean mask marking which rows receive a missing value
# (np.bool is deprecated in recent NumPy releases; the builtin bool is equivalent here).
missing_samples = np.hstack((np.zeros(data.shape[0] - n_missing_samples, dtype=bool), np.ones(n_missing_samples, dtype=bool)))
rng = np.random.RandomState(0)
rng.shuffle(missing_samples)
# For each affected row, pick one random feature column to blank out.
missing_features = rng.randint(0, data.shape[1], n_missing_samples)
data.values[np.where(missing_samples)[0], missing_features] = np.nan
if not os.path.isdir('data'):
    os.mkdir('data')
# Save the train data to a csv to be uploaded to the datastore
pd.DataFrame(data).to_csv("data/train_data.csv", index=False)
ds = ws.get_default_datastore()
ds.upload(src_dir='./data', target_path='bankmarketing', overwrite=True, show_progress=True)
# Upload the training data as a tabular dataset for access during training on remote compute
train_data = Dataset.Tabular.from_delimited_files(path=ds.path('bankmarketing/train_data.csv'))
label = "y"
###Output
_____no_output_____
###Markdown
Validation Data
###Code
validation_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_validate.csv"
validation_dataset = Dataset.Tabular.from_delimited_files(validation_data)
###Output
_____no_output_____
###Markdown
Test Data
###Code
test_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_test.csv"
test_dataset = Dataset.Tabular.from_delimited_files(test_data)
###Output
_____no_output_____
###Markdown
TrainInstantiate an AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification, regression, or forecasting||**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracy, AUC_weighted, average_precision_score_weighted, norm_macro_recall, precision_score_weighted||**iteration_timeout_minutes**|Time limit in minutes for each iteration.||**blocked_models** | *List* of *strings* indicating machine learning algorithms for AutoML to avoid in this run. Allowed values for **Classification**: LogisticRegression, SGD, MultinomialNaiveBayes, BernoulliNaiveBayes, SVM, LinearSVM, KNN, DecisionTree, RandomForest, ExtremeRandomTrees, LightGBM, GradientBoosting, TensorFlowDNN, TensorFlowLinearClassifier. Allowed values for **Regression**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN. Allowed values for **Forecasting**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN, Arima, Prophet||**allowed_models** | *List* of *strings* indicating machine learning algorithms for AutoML to use in this run. The same values listed above for **blocked_models** are allowed for **allowed_models**.||**experiment_exit_score**| Value indicating the target for *primary_metric*. Once the target is surpassed the run terminates.||**experiment_timeout_hours**| Maximum amount of time in hours that all iterations combined can take before the experiment terminates.||**enable_early_stopping**| Flag to enable early termination if the score is not improving in the short term.||**featurization**| 'auto' / 'off' Indicator for whether the featurization step should be done automatically or not. Note: If the input data is sparse, featurization cannot be turned on.||**n_cross_validations**|Number of cross validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.|**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric)
###Code
automl_settings = {
"experiment_timeout_hours" : 0.3,
"enable_early_stopping" : True,
"iteration_timeout_minutes": 5,
"max_concurrent_iterations": 4,
"max_cores_per_iteration": -1,
#"n_cross_validations": 2,
"primary_metric": 'AUC_weighted',
"featurization": 'auto',
"verbosity": logging.INFO,
}
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
compute_target=compute_target,
experiment_exit_score = 0.9984,
blocked_models = ['KNN','LinearSVM'],
enable_onnx_compatible_models=True,
training_data = train_data,
label_column_name = label,
validation_data = validation_dataset,
**automl_settings
)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations, this can run for a while. When `show_output=True` is set, validation errors and current status are shown and execution is synchronous.
###Code
remote_run = experiment.submit(automl_config, show_output = False)
###Output
_____no_output_____
###Markdown
Run the following cell to access previous runs. Uncomment the cell below and update the run_id.
###Code
#from azureml.train.automl.run import AutoMLRun
#remote_run = AutoMLRun(experiment=experiment, run_id='<run_ID_goes_here>')
#remote_run
# Wait for the remote run to complete
remote_run.wait_for_completion()
best_run_customized, fitted_model_customized = remote_run.get_output()
###Output
_____no_output_____
###Markdown
TransparencyView updated featurization summary
###Code
custom_featurizer = fitted_model_customized.named_steps['datatransformer']
df = custom_featurizer.get_featurization_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Set `is_user_friendly=False` to get a more detailed summary of the transforms being applied.
###Code
df = custom_featurizer.get_featurization_summary(is_user_friendly=False)
pd.DataFrame(data=df)
df = custom_featurizer.get_stats_feature_type_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Results
###Code
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
###Output
_____no_output_____
###Markdown
Retrieve the Best Model's explanationRetrieve the explanation from the best_run which includes explanations for engineered features and raw features. Make sure that the run for generating explanations for the best model is completed.
###Code
# Wait for the best model explanation run to complete
from azureml.core.run import Run
model_explainability_run_id = remote_run.id + "_" + "ModelExplain"
print(model_explainability_run_id)
model_explainability_run = Run(experiment=experiment, run_id=model_explainability_run_id)
model_explainability_run.wait_for_completion()
# Get the best run object
best_run, fitted_model = remote_run.get_output()
###Output
_____no_output_____
###Markdown
Download engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=False)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Download raw feature importance from artifact storeYou can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=True)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Retrieve the Best ONNX ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.Set the parameter return_onnx_model=True to retrieve the best ONNX model, instead of the Python model.
###Code
best_run, onnx_mdl = remote_run.get_output(return_onnx_model=True)
###Output
_____no_output_____
###Markdown
Save the best ONNX model
###Code
from azureml.automl.runtime.onnx_convert import OnnxConverter
onnx_fl_path = "./best_model.onnx"
OnnxConverter.save_onnx_model(onnx_mdl, onnx_fl_path)
###Output
_____no_output_____
###Markdown
Predict with the ONNX model, using onnxruntime package
###Code
import sys
import json
from azureml.automl.core.onnx_convert import OnnxConvertConstants
from azureml.train.automl import constants
from azureml.automl.runtime.onnx_convert import OnnxInferenceHelper
def get_onnx_res(run):
    res_path = 'onnx_resource.json'
    run.download_file(name=constants.MODEL_RESOURCE_PATH_ONNX, output_file_path=res_path)
    with open(res_path) as f:
        result = json.load(f)
    return result
if sys.version_info < OnnxConvertConstants.OnnxIncompatiblePythonVersion:
    test_df = test_dataset.to_pandas_dataframe()
    mdl_bytes = onnx_mdl.SerializeToString()
    onnx_result = get_onnx_res(best_run)
    onnxrt_helper = OnnxInferenceHelper(mdl_bytes, onnx_result)
    pred_onnx, pred_prob_onnx = onnxrt_helper.predict(test_df)
    print(pred_onnx)
    print(pred_prob_onnx)
else:
    print('Please use Python version 3.6 or 3.7 to run the inference helper.')
###Output
_____no_output_____
###Markdown
Deploy Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details
###Code
best_run, fitted_model = remote_run.get_output()
model_name = best_run.properties['model_name']
script_file_name = 'inference/score.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', 'inference/score.py')
###Output
_____no_output_____
###Markdown
Register the Fitted Model for DeploymentIf neither `metric` nor `iteration` are specified in the `register_model` call, the iteration with the best primary metric is registered.
###Code
description = 'AutoML Model trained on bank marketing data to predict if a client will subscribe to a term deposit'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id) # This will be written to the script file later in the notebook.
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.model import Model
inference_config = InferenceConfig(environment = best_run.get_environment(), entry_script=script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 2,
memory_gb = 2,
tags = {'area': "bmData", 'type': "automl_classification"},
description = 'sample service for Automl Classification')
aci_service_name = 'automl-sample-bankmarketing-all'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
###Output
_____no_output_____
###Markdown
Get Logs from a Deployed Web ServiceGets logs from a deployed web service.
###Code
#aci_service.get_logs()
###Output
_____no_output_____
###Markdown
TestNow that the model is trained, run the test data through the trained model to get the predicted values. This calls the ACI web service to do the prediction.Note that the JSON passed to the ACI web service is an array of rows of data. Each row should either be an array of values in the same order that was used for training or a dictionary where the keys are the same as the column names used for training. The example below uses dictionary rows.
###Code
# Load the bank marketing datasets.
from numpy import array
X_test = test_dataset.drop_columns(columns=['y'])
y_test = test_dataset.keep_columns(columns=['y'], validate=True)
test_dataset.take(5).to_pandas_dataframe()
X_test = X_test.to_pandas_dataframe()
y_test = y_test.to_pandas_dataframe()
import requests
X_test_json = X_test.to_json(orient='records')
data = "{\"data\": " + X_test_json +"}"
headers = {'Content-Type': 'application/json'}
resp = requests.post(aci_service.scoring_uri, data, headers=headers)
y_pred = json.loads(json.loads(resp.text))['result']
actual = array(y_test)
actual = actual[:,0]
print(len(y_pred), " ", len(actual))
###Output
_____no_output_____
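###Markdown
Instead of posting to the scoring URI with `requests`, the `Webservice` object exposes a `run` method that accepts the same JSON payload. The commented sketch below shows that alternative; the exact shape of the returned value depends on the scoring script, so treat it as an assumption to verify.
###Code
# Alternative scoring call via the Webservice object (same payload format as above).
#resp_text = aci_service.run(data)
#y_pred_alt = json.loads(resp_text)['result']
###Output
_____no_output_____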
###Markdown
Calculate metrics for the predictionNow visualize the data as a confusion matrix that compares the predicted values against the actual values.
###Code
%matplotlib notebook
from sklearn.metrics import confusion_matrix
import itertools
cf = confusion_matrix(actual, y_pred)
plt.imshow(cf, cmap=plt.cm.Blues, interpolation='nearest')
plt.colorbar()
plt.title('Confusion Matrix')
plt.xlabel('Predicted')
plt.ylabel('Actual')
class_labels = ['no', 'yes']
tick_marks = np.arange(len(class_labels))
plt.xticks(tick_marks, class_labels)
plt.yticks([-0.5, 0, 1, 1.5], ['', 'no', 'yes', ''])
# Plot the count value inside each cell.
thresh = cf.max() / 2.
for i, j in itertools.product(range(cf.shape[0]), range(cf.shape[1])):
    plt.text(j, i, format(cf[i, j], 'd'), horizontalalignment='center', color='white' if cf[i, j] > thresh else 'black')
plt.show()
###Output
_____no_output_____
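###Markdown
On recent scikit-learn releases (1.0 and later), the same plot can be produced in a single call; this is a minimal sketch assuming such a version is available in your environment.
###Code
from sklearn.metrics import ConfusionMatrixDisplay
# Build and render the confusion matrix directly from the label arrays.
ConfusionMatrixDisplay.from_predictions(actual, y_pred, display_labels=['no', 'yes'], cmap=plt.cm.Blues)
plt.show()
###Output
_____no_output_____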
###Markdown
Delete a Web ServiceDeletes the specified web service.
###Code
aci_service.delete()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing.png) Automated Machine Learning_**Classification with Deployment using a Bank Marketing Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Deploy](Deploy)1. [Test](Test)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the UCI Bank Marketing dataset to showcase how you can use AutoML for a classification problem and deploy it to an Azure Container Instance (ACI). The classification goal is to predict if the client will subscribe to a term deposit with the bank.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. Please find the ONNX-related documentation [here](https://github.com/onnx/onnx).In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model using local compute with an ONNX-compatible configuration enabled.4. Explore the results and featurization transparency options, and save the ONNX model.5. Run inference with the ONNX model.6. Register the model.7. Create a container image.8. Create an Azure Container Instance (ACI) service.9. Test the ACI service.In addition, this notebook showcases the following features:- **Blocking** certain pipelines- Specifying **target metrics** to indicate stopping criteria- Handling **missing data** in the input SetupAs part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
import logging
from matplotlib import pyplot as plt
import pandas as pd
import os
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.automl.core.featurization import FeaturizationConfig
from azureml.core.dataset import Dataset
from azureml.train.automl import AutoMLConfig
from azureml.interpret import ExplanationClient
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.20.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
Accessing the Azure ML workspace requires authentication with Azure.The default authentication is interactive authentication using the default tenant. Executing the `ws = Workspace.from_config()` line in the cell below will prompt for authentication the first time that it is run.If you have multiple Azure tenants, you can specify the tenant by replacing the `ws = Workspace.from_config()` line in the cell below with the following:```from azureml.core.authentication import InteractiveLoginAuthenticationauth = InteractiveLoginAuthentication(tenant_id = 'mytenantid')ws = Workspace.from_config(auth = auth)```If you need to run in an environment where interactive login is not possible, you can use Service Principal authentication by replacing the `ws = Workspace.from_config()` line in the cell below with the following:```from azureml.core.authentication import ServicePrincipalAuthenticationauth = auth = ServicePrincipalAuthentication('mytenantid', 'myappid', 'mypassword')ws = Workspace.from_config(auth = auth)```For more details, see [aka.ms/aml-notebook-auth](http://aka.ms/aml-notebook-auth)
###Code
ws = Workspace.from_config()
# choose a name for experiment
experiment_name = 'automl-classification-bmarketing-all'
experiment=Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', None)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
Create or Attach existing AmlComputeYou will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
cpu_cluster_name = "cpu-cluster-4"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Data Load DataLeverage Azure compute to load the bank marketing dataset as a Tabular Dataset into the dataset variable. Training Data
###Code
data = pd.read_csv("https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv")
data.head()
# Add missing values in 75% of the lines.
import numpy as np
missing_rate = 0.75
n_missing_samples = int(np.floor(data.shape[0] * missing_rate))
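# Build a boolean mask with exactly n_missing_samples True entries, then shuffle so the affected rows are random.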
missing_samples = np.hstack((np.zeros(data.shape[0] - n_missing_samples, dtype=bool), np.ones(n_missing_samples, dtype=bool)))
rng = np.random.RandomState(0)
rng.shuffle(missing_samples)
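# For each masked row, pick one random feature column to overwrite with NaN.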
missing_features = rng.randint(0, data.shape[1], n_missing_samples)
data.values[np.where(missing_samples)[0], missing_features] = np.nan
if not os.path.isdir('data'):
os.mkdir('data')
# Save the train data to a csv to be uploaded to the datastore
pd.DataFrame(data).to_csv("data/train_data.csv", index=False)
ds = ws.get_default_datastore()
ds.upload(src_dir='./data', target_path='bankmarketing', overwrite=True, show_progress=True)
# Upload the training data as a tabular dataset for access during training on remote compute
train_data = Dataset.Tabular.from_delimited_files(path=ds.path('bankmarketing/train_data.csv'))
label = "y"
###Output
_____no_output_____
###Markdown
Validation Data
###Code
validation_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_validate.csv"
validation_dataset = Dataset.Tabular.from_delimited_files(validation_data)
###Output
_____no_output_____
###Markdown
Test Data
###Code
test_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_test.csv"
test_dataset = Dataset.Tabular.from_delimited_files(test_data)
###Output
_____no_output_____
###Markdown
TrainInstantiate an `AutoMLConfig` object. This defines the settings and data used to run the experiment.

|Property|Description|
|-|-|
|**task**|classification or regression or forecasting|
|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracy, AUC_weighted, average_precision_score_weighted, norm_macro_recall, precision_score_weighted|
|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|
|**blocked_models**|*List* of *strings* indicating machine learning algorithms for AutoML to avoid in this run. Allowed values for **Classification**: LogisticRegression, SGD, MultinomialNaiveBayes, BernoulliNaiveBayes, SVM, LinearSVM, KNN, DecisionTree, RandomForest, ExtremeRandomTrees, LightGBM, GradientBoosting, TensorFlowDNN, TensorFlowLinearClassifier. Allowed values for **Regression**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN. Allowed values for **Forecasting**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN, Arima, Prophet|
|**allowed_models**|*List* of *strings* indicating machine learning algorithms for AutoML to use in this run. Same values listed above for **blocked_models** allowed for **allowed_models**.|
|**experiment_exit_score**|Value indicating the target for *primary_metric*. Once the target is surpassed the run terminates.|
|**experiment_timeout_hours**|Maximum amount of time in hours that all iterations combined can take before the experiment terminates.|
|**enable_early_stopping**|Flag to enable early termination if the score is not improving in the short term.|
|**featurization**|'auto' / 'off'. Indicator for whether the featurization step should be done automatically or not. Note: if the input data is sparse, featurization cannot be turned on.|
|**n_cross_validations**|Number of cross validation splits.|
|**training_data**|Input dataset, containing both features and label column.|
|**label_column_name**|The name of the label column.|

**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)
###Code
automl_settings = {
"experiment_timeout_hours" : 0.3,
"enable_early_stopping" : True,
"iteration_timeout_minutes": 5,
"max_concurrent_iterations": 4,
"max_cores_per_iteration": -1,
#"n_cross_validations": 2,
"primary_metric": 'AUC_weighted',
"featurization": 'auto',
"verbosity": logging.INFO,
}
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
compute_target=compute_target,
experiment_exit_score = 0.9984,
blocked_models = ['KNN','LinearSVM'],
enable_onnx_compatible_models=True,
training_data = train_data,
label_column_name = label,
validation_data = validation_dataset,
**automl_settings
)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous; depending on the data and the number of iterations, this can run for a while. Setting `show_output=True` displays validation errors and the current status.
###Code
remote_run = experiment.submit(automl_config, show_output = False)
remote_run
###Output
_____no_output_____
###Markdown
Run the following cell to access previous runs. Uncomment the cell below and update the run_id.
###Code
#from azureml.train.automl.run import AutoMLRun
#remote_run = AutoMLRun(experiment=experiment, run_id='<run_ID_goes_here>')
#remote_run
# Wait for the remote run to complete
remote_run.wait_for_completion()
best_run_customized, fitted_model_customized = remote_run.get_output()
###Output
_____no_output_____
###Markdown
TransparencyView updated featurization summary
###Code
custom_featurizer = fitted_model_customized.named_steps['datatransformer']
df = custom_featurizer.get_featurization_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Set `is_user_friendly=False` to get a more detailed summary for the transforms being applied.
###Code
df = custom_featurizer.get_featurization_summary(is_user_friendly=False)
pd.DataFrame(data=df)
df = custom_featurizer.get_stats_feature_type_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Results
###Code
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
###Output
_____no_output_____
###Markdown
Retrieve the Best Model's explanationRetrieve the explanation from the best_run which includes explanations for engineered features and raw features. Make sure that the run for generating explanations for the best model is completed.
###Code
# Wait for the best model explanation run to complete
from azureml.core.run import Run
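# AutoML names the model-explanation child run as the parent run ID plus the "_ModelExplain" suffix.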
model_explainability_run_id = remote_run.id + "_" + "ModelExplain"
print(model_explainability_run_id)
model_explainability_run = Run(experiment=experiment, run_id=model_explainability_run_id)
model_explainability_run.wait_for_completion()
# Get the best run object
best_run, fitted_model = remote_run.get_output()
###Output
_____no_output_____
###Markdown
Download engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=False)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Download raw feature importance from artifact storeYou can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=True)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Retrieve the Best ONNX ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.Set the parameter return_onnx_model=True to retrieve the best ONNX model, instead of the Python model.
###Code
best_run, onnx_mdl = remote_run.get_output(return_onnx_model=True)
###Output
_____no_output_____
###Markdown
Save the best ONNX model
###Code
from azureml.automl.runtime.onnx_convert import OnnxConverter
onnx_fl_path = "./best_model.onnx"
OnnxConverter.save_onnx_model(onnx_mdl, onnx_fl_path)
###Output
_____no_output_____
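As a quick sanity check on the saved file, it can be reloaded and validated with the `onnx` package (a sketch, assuming `onnx` is installed in the environment):

```python
import onnx

# Reload the serialized model and run ONNX's structural validator.
reloaded = onnx.load(onnx_fl_path)
onnx.checker.check_model(reloaded)
```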
###Markdown
Predict with the ONNX model, using the onnxruntime package
###Code
import sys
import json
from azureml.automl.core.onnx_convert import OnnxConvertConstants
from azureml.train.automl import constants
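# Python versions below OnnxIncompatiblePythonVersion can use the ONNX inference helper.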
if sys.version_info < OnnxConvertConstants.OnnxIncompatiblePythonVersion:
python_version_compatible = True
else:
python_version_compatible = False
import onnxruntime
from azureml.automl.runtime.onnx_convert import OnnxInferenceHelper
def get_onnx_res(run):
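    # Download the ONNX resource JSON that AutoML stored with the run's artifacts.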
res_path = 'onnx_resource.json'
run.download_file(name=constants.MODEL_RESOURCE_PATH_ONNX, output_file_path=res_path)
with open(res_path) as f:
onnx_res = json.load(f)
return onnx_res
if python_version_compatible:
test_df = test_dataset.to_pandas_dataframe()
mdl_bytes = onnx_mdl.SerializeToString()
onnx_res = get_onnx_res(best_run)
onnxrt_helper = OnnxInferenceHelper(mdl_bytes, onnx_res)
pred_onnx, pred_prob_onnx = onnxrt_helper.predict(test_df)
print(pred_onnx)
print(pred_prob_onnx)
else:
print('Please use Python version 3.6 or 3.7 to run the inference helper.')
###Output
_____no_output_____
###Markdown
Deploy Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details
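As an illustration of the `get_output` overloads mentioned above, here is a sketch; the metric name and iteration index are placeholders, not values taken from this run:

```python
# Best run/model as judged by a specific logged metric (placeholder name).
metric_run, metric_model = remote_run.get_output(metric='AUC_weighted')

# Run/model produced by one particular child iteration (placeholder index).
iter_run, iter_model = remote_run.get_output(iteration=3)
```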
###Code
best_run, fitted_model = remote_run.get_output()
model_name = best_run.properties['model_name']
script_file_name = 'inference/score.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', 'inference/score.py')
###Output
_____no_output_____
###Markdown
Register the Fitted Model for DeploymentIf neither `metric` nor `iteration` is specified in the `register_model` call, the iteration with the best primary metric is registered.
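For illustration, either selector can be passed explicitly. A sketch with placeholder values (the metric name and iteration index are illustrative, not taken from this run):

```python
# Register the best iteration for a specific metric (placeholder name) ...
model = remote_run.register_model(model_name=model_name, metric='AUC_weighted')

# ... or register the model produced by one particular iteration (placeholder index).
model = remote_run.register_model(model_name=model_name, iteration=3)
```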
###Code
description = 'AutoML Model trained on bank marketing data to predict if a client will subscribe to a term deposit'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id) # This will be written to the script file later in the notebook.
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
from azureml.core.environment import Environment
inference_config = InferenceConfig(entry_script=script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 1,
tags = {'area': "bmData", 'type': "automl_classification"},
description = 'sample service for Automl Classification')
aci_service_name = 'automl-sample-bankmarketing-all'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
###Output
_____no_output_____
###Markdown
Get Logs from a Deployed Web ServiceGets logs from a deployed web service.
###Code
#aci_service.get_logs()
###Output
_____no_output_____
###Markdown
TestNow that the model is trained, run the test data through the trained model to get the predicted values. This calls the ACI web service to do the prediction.Note that the JSON passed to the ACI web service is an array of rows of data. Each row should either be an array of values in the same order that was used for training or a dictionary where the keys are the same as the column names used for training. The example below uses dictionary rows.
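As a sketch of the two accepted row formats (the column names and values below are illustrative, not taken from the dataset):

```python
import json

# Format 1: each row is an array of values in the training column order.
payload_arrays = json.dumps({"data": [[57, "technician", "married"]]})

# Format 2: each row is a dictionary keyed by the training column names.
payload_dicts = json.dumps({"data": [{"age": 57, "job": "technician", "marital": "married"}]})
```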
###Code
# Load the bank marketing datasets.
from numpy import array
X_test = test_dataset.drop_columns(columns=['y'])
y_test = test_dataset.keep_columns(columns=['y'], validate=True)
test_dataset.take(5).to_pandas_dataframe()
X_test = X_test.to_pandas_dataframe()
y_test = y_test.to_pandas_dataframe()
import json
import requests
X_test_json = X_test.to_json(orient='records')
data = "{\"data\": " + X_test_json +"}"
headers = {'Content-Type': 'application/json'}
resp = requests.post(aci_service.scoring_uri, data, headers=headers)
y_pred = json.loads(json.loads(resp.text))['result']
actual = array(y_test)
actual = actual[:,0]
print(len(y_pred), " ", len(actual))
###Output
_____no_output_____
###Markdown
Calculate metrics for the predictionNow visualize the data as a confusion matrix that compared the predicted values against the actual values.
###Code
%matplotlib notebook
from sklearn.metrics import confusion_matrix
import numpy as np
import itertools
cf = confusion_matrix(actual, y_pred)
plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest')
plt.colorbar()
plt.title('Confusion Matrix')
plt.xlabel('Predicted')
plt.ylabel('Actual')
class_labels = ['no','yes']
tick_marks = np.arange(len(class_labels))
plt.xticks(tick_marks,class_labels)
plt.yticks([-0.5,0,1,1.5],['','no','yes',''])
# plotting text value inside cells
thresh = cf.max() / 2.
for i, j in itertools.product(range(cf.shape[0]), range(cf.shape[1])):
    plt.text(j, i, format(cf[i, j], 'd'), horizontalalignment='center', color='white' if cf[i, j] > thresh else 'black')
plt.show()
###Output
_____no_output_____
###Markdown
Delete a Web ServiceDeletes the specified web service.
###Code
aci_service.delete()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing.png) Automated Machine Learning_**Classification with Deployment using a Bank Marketing Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Deploy](Deploy)1. [Test](Test)1. [Acknowledgements](Acknowledgements) IntroductionIn this example, we use the UCI Bank Marketing dataset to showcase how you can use AutoML for a classification problem and deploy the resulting model to an Azure Container Instance (ACI). The classification goal is to predict whether the client will subscribe to a term deposit with the bank.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first, if you haven't already, to establish your connection to the AzureML Workspace. Please find the ONNX-related documentation [here](https://github.com/onnx/onnx).In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model using local compute with an ONNX-compatible configuration.4. Explore the results and featurization transparency options, and save the ONNX model.5. Run inference with the ONNX model.6. Register the model.7. Create a container image.8. Create an Azure Container Instance (ACI) service.9. Test the ACI service.In addition, this notebook showcases the following features:- **Blocking** certain pipelines- Specifying **target metrics** to indicate stopping criteria- Handling **missing data** in the input SetupAs part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
import logging
from matplotlib import pyplot as plt
import pandas as pd
import os
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.automl.core.featurization import FeaturizationConfig
from azureml.core.dataset import Dataset
from azureml.train.automl import AutoMLConfig
from azureml.interpret import ExplanationClient
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.17.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
Accessing the Azure ML workspace requires authentication with Azure.The default authentication is interactive authentication using the default tenant. Executing the `ws = Workspace.from_config()` line in the cell below will prompt for authentication the first time that it is run.If you have multiple Azure tenants, you can specify the tenant by replacing the `ws = Workspace.from_config()` line in the cell below with the following:```from azureml.core.authentication import InteractiveLoginAuthenticationauth = InteractiveLoginAuthentication(tenant_id = 'mytenantid')ws = Workspace.from_config(auth = auth)```If you need to run in an environment where interactive login is not possible, you can use Service Principal authentication by replacing the `ws = Workspace.from_config()` line in the cell below with the following:```from azureml.core.authentication import ServicePrincipalAuthenticationauth = auth = ServicePrincipalAuthentication('mytenantid', 'myappid', 'mypassword')ws = Workspace.from_config(auth = auth)```For more details, see [aka.ms/aml-notebook-auth](http://aka.ms/aml-notebook-auth)
###Code
ws = Workspace.from_config()
# choose a name for experiment
experiment_name = 'automl-classification-bmarketing-all'
experiment=Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', None)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
Create or Attach existing AmlComputeYou will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace, this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
cpu_cluster_name = "cpu-cluster-4"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Data Load DataLeverage Azure compute to load the bank marketing dataset as a Tabular Dataset into the dataset variable. Training Data
###Code
data = pd.read_csv("https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv")
data.head()
# Add missing values in 75% of the lines.
import numpy as np
missing_rate = 0.75
n_missing_samples = int(np.floor(data.shape[0] * missing_rate))
missing_samples = np.hstack((np.zeros(data.shape[0] - n_missing_samples, dtype=bool), np.ones(n_missing_samples, dtype=bool)))
rng = np.random.RandomState(0)
rng.shuffle(missing_samples)
missing_features = rng.randint(0, data.shape[1], n_missing_samples)
data.values[np.where(missing_samples)[0], missing_features] = np.nan
if not os.path.isdir('data'):
os.mkdir('data')
# Save the train data to a csv to be uploaded to the datastore
pd.DataFrame(data).to_csv("data/train_data.csv", index=False)
ds = ws.get_default_datastore()
ds.upload(src_dir='./data', target_path='bankmarketing', overwrite=True, show_progress=True)
# Upload the training data as a tabular dataset for access during training on remote compute
train_data = Dataset.Tabular.from_delimited_files(path=ds.path('bankmarketing/train_data.csv'))
label = "y"
###Output
_____no_output_____
###Markdown
Validation Data
###Code
validation_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_validate.csv"
validation_dataset = Dataset.Tabular.from_delimited_files(validation_data)
###Output
_____no_output_____
###Markdown
Test Data
###Code
test_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_test.csv"
test_dataset = Dataset.Tabular.from_delimited_files(test_data)
###Output
_____no_output_____
###Markdown
TrainInstantiate an `AutoMLConfig` object. This defines the settings and data used to run the experiment.

|Property|Description|
|-|-|
|**task**|classification or regression or forecasting|
|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracy, AUC_weighted, average_precision_score_weighted, norm_macro_recall, precision_score_weighted|
|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|
|**blocked_models**|*List* of *strings* indicating machine learning algorithms for AutoML to avoid in this run. Allowed values for **Classification**: LogisticRegression, SGD, MultinomialNaiveBayes, BernoulliNaiveBayes, SVM, LinearSVM, KNN, DecisionTree, RandomForest, ExtremeRandomTrees, LightGBM, GradientBoosting, TensorFlowDNN, TensorFlowLinearClassifier. Allowed values for **Regression**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN. Allowed values for **Forecasting**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN, Arima, Prophet|
|**allowed_models**|*List* of *strings* indicating machine learning algorithms for AutoML to use in this run. Same values listed above for **blocked_models** allowed for **allowed_models**.|
|**experiment_exit_score**|Value indicating the target for *primary_metric*. Once the target is surpassed the run terminates.|
|**experiment_timeout_hours**|Maximum amount of time in hours that all iterations combined can take before the experiment terminates.|
|**enable_early_stopping**|Flag to enable early termination if the score is not improving in the short term.|
|**featurization**|'auto' / 'off'. Indicator for whether the featurization step should be done automatically or not. Note: if the input data is sparse, featurization cannot be turned on.|
|**n_cross_validations**|Number of cross validation splits.|
|**training_data**|Input dataset, containing both features and label column.|
|**label_column_name**|The name of the label column.|

**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)
###Code
automl_settings = {
"experiment_timeout_hours" : 0.3,
"enable_early_stopping" : True,
"iteration_timeout_minutes": 5,
"max_concurrent_iterations": 4,
"max_cores_per_iteration": -1,
#"n_cross_validations": 2,
"primary_metric": 'AUC_weighted',
"featurization": 'auto',
"verbosity": logging.INFO,
}
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
compute_target=compute_target,
experiment_exit_score = 0.9984,
blocked_models = ['KNN','LinearSVM'],
enable_onnx_compatible_models=True,
training_data = train_data,
label_column_name = label,
validation_data = validation_dataset,
**automl_settings
)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous; depending on the data and the number of iterations, this can run for a while. Setting `show_output=True` displays validation errors and the current status.
###Code
remote_run = experiment.submit(automl_config, show_output = False)
remote_run
###Output
_____no_output_____
###Markdown
Run the following cell to access previous runs. Uncomment the cell below and update the run_id.
###Code
#from azureml.train.automl.run import AutoMLRun
#remote_run = AutoMLRun(experiment=experiment, run_id='<run_ID_goes_here>')
#remote_run
# Wait for the remote run to complete
remote_run.wait_for_completion()
best_run_customized, fitted_model_customized = remote_run.get_output()
###Output
_____no_output_____
###Markdown
TransparencyView updated featurization summary
###Code
custom_featurizer = fitted_model_customized.named_steps['datatransformer']
df = custom_featurizer.get_featurization_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Set `is_user_friendly=False` to get a more detailed summary for the transforms being applied.
###Code
df = custom_featurizer.get_featurization_summary(is_user_friendly=False)
pd.DataFrame(data=df)
df = custom_featurizer.get_stats_feature_type_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Results
###Code
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
###Output
_____no_output_____
###Markdown
Retrieve the Best Model's explanationRetrieve the explanation from the best_run which includes explanations for engineered features and raw features. Make sure that the run for generating explanations for the best model is completed.
###Code
# Wait for the best model explanation run to complete
from azureml.core.run import Run
model_explainability_run_id = remote_run.id + "_" + "ModelExplain"
print(model_explainability_run_id)
model_explainability_run = Run(experiment=experiment, run_id=model_explainability_run_id)
model_explainability_run.wait_for_completion()
# Get the best run object
best_run, fitted_model = remote_run.get_output()
###Output
_____no_output_____
###Markdown
Download engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=False)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Download raw feature importance from artifact storeYou can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=True)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Retrieve the Best ONNX ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.Set the parameter return_onnx_model=True to retrieve the best ONNX model, instead of the Python model.
###Code
best_run, onnx_mdl = remote_run.get_output(return_onnx_model=True)
###Output
_____no_output_____
###Markdown
Save the best ONNX model
###Code
from azureml.automl.runtime.onnx_convert import OnnxConverter
onnx_fl_path = "./best_model.onnx"
OnnxConverter.save_onnx_model(onnx_mdl, onnx_fl_path)
###Output
_____no_output_____
###Markdown
Predict with the ONNX model, using the onnxruntime package
###Code
import sys
import json
from azureml.automl.core.onnx_convert import OnnxConvertConstants
from azureml.train.automl import constants
if sys.version_info < OnnxConvertConstants.OnnxIncompatiblePythonVersion:
python_version_compatible = True
else:
python_version_compatible = False
import onnxruntime
from azureml.automl.runtime.onnx_convert import OnnxInferenceHelper
def get_onnx_res(run):
res_path = 'onnx_resource.json'
run.download_file(name=constants.MODEL_RESOURCE_PATH_ONNX, output_file_path=res_path)
with open(res_path) as f:
onnx_res = json.load(f)
return onnx_res
if python_version_compatible:
test_df = test_dataset.to_pandas_dataframe()
mdl_bytes = onnx_mdl.SerializeToString()
onnx_res = get_onnx_res(best_run)
onnxrt_helper = OnnxInferenceHelper(mdl_bytes, onnx_res)
pred_onnx, pred_prob_onnx = onnxrt_helper.predict(test_df)
print(pred_onnx)
print(pred_prob_onnx)
else:
print('Please use Python version 3.6 or 3.7 to run the inference helper.')
###Output
_____no_output_____
###Markdown
Deploy Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details
###Code
best_run, fitted_model = remote_run.get_output()
model_name = best_run.properties['model_name']
script_file_name = 'inference/score.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', 'inference/score.py')
###Output
_____no_output_____
###Markdown
Register the Fitted Model for DeploymentIf neither `metric` nor `iteration` is specified in the `register_model` call, the iteration with the best primary metric is registered.
###Code
description = 'AutoML Model trained on bank marketing data to predict if a client will subscribe to a term deposit'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id) # This will be written to the script file later in the notebook.
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
from azureml.core.environment import Environment
inference_config = InferenceConfig(entry_script=script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 1,
tags = {'area': "bmData", 'type': "automl_classification"},
description = 'sample service for Automl Classification')
aci_service_name = 'automl-sample-bankmarketing-all'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
###Output
_____no_output_____
###Markdown
Get Logs from a Deployed Web ServiceGets logs from a deployed web service.
###Code
#aci_service.get_logs()
###Output
_____no_output_____
###Markdown
TestNow that the model is trained, run the test data through the trained model to get the predicted values. This calls the ACI web service to do the prediction.Note that the JSON passed to the ACI web service is an array of rows of data. Each row should either be an array of values in the same order that was used for training or a dictionary where the keys are the same as the column names used for training. The example below uses dictionary rows.
###Code
# Load the bank marketing datasets.
from numpy import array
X_test = test_dataset.drop_columns(columns=['y'])
y_test = test_dataset.keep_columns(columns=['y'], validate=True)
test_dataset.take(5).to_pandas_dataframe()
X_test = X_test.to_pandas_dataframe()
y_test = y_test.to_pandas_dataframe()
import json
import requests
X_test_json = X_test.to_json(orient='records')
data = "{\"data\": " + X_test_json +"}"
headers = {'Content-Type': 'application/json'}
resp = requests.post(aci_service.scoring_uri, data, headers=headers)
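# The service returns a JSON-encoded string that itself contains JSON, hence the double json.loads below.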
y_pred = json.loads(json.loads(resp.text))['result']
actual = array(y_test)
actual = actual[:,0]
print(len(y_pred), " ", len(actual))
###Output
_____no_output_____
###Markdown
Calculate metrics for the predictionNow visualize the data as a confusion matrix that compared the predicted values against the actual values.
###Code
%matplotlib notebook
from sklearn.metrics import confusion_matrix
import numpy as np
import itertools
cf = confusion_matrix(actual, y_pred)
plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest')
plt.colorbar()
plt.title('Confusion Matrix')
plt.xlabel('Predicted')
plt.ylabel('Actual')
class_labels = ['no','yes']
tick_marks = np.arange(len(class_labels))
plt.xticks(tick_marks,class_labels)
plt.yticks([-0.5,0,1,1.5],['','no','yes',''])
# plotting text value inside cells
thresh = cf.max() / 2.
for i, j in itertools.product(range(cf.shape[0]), range(cf.shape[1])):
    plt.text(j, i, format(cf[i, j], 'd'), horizontalalignment='center', color='white' if cf[i, j] > thresh else 'black')
plt.show()
###Output
_____no_output_____
###Markdown
Delete a Web ServiceDeletes the specified web service.
###Code
aci_service.delete()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing.png) Automated Machine Learning_**Classification with Deployment using a Bank Marketing Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Deploy](Deploy)1. [Test](Test)1. [Acknowledgements](Acknowledgements) IntroductionIn this example, we use the UCI Bank Marketing dataset to showcase how you can use AutoML for a classification problem and deploy the resulting model to an Azure Container Instance (ACI). The classification goal is to predict whether the client will subscribe to a term deposit with the bank.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first, if you haven't already, to establish your connection to the AzureML Workspace. Please find the ONNX-related documentation [here](https://github.com/onnx/onnx).In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model using local compute with an ONNX-compatible configuration.4. Explore the results and featurization transparency options, and save the ONNX model.5. Run inference with the ONNX model.6. Register the model.7. Create a container image.8. Create an Azure Container Instance (ACI) service.9. Test the ACI service.In addition, this notebook showcases the following features:- **Blocking** certain pipelines- Specifying **target metrics** to indicate stopping criteria- Handling **missing data** in the input SetupAs part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
import logging
from matplotlib import pyplot as plt
import pandas as pd
import os
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.core.dataset import Dataset
from azureml.train.automl import AutoMLConfig
from azureml.interpret import ExplanationClient
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.34.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
Accessing the Azure ML workspace requires authentication with Azure.The default authentication is interactive authentication using the default tenant. Executing the `ws = Workspace.from_config()` line in the cell below will prompt for authentication the first time that it is run.If you have multiple Azure tenants, you can specify the tenant by replacing the `ws = Workspace.from_config()` line in the cell below with the following:```from azureml.core.authentication import InteractiveLoginAuthenticationauth = InteractiveLoginAuthentication(tenant_id = 'mytenantid')ws = Workspace.from_config(auth = auth)```If you need to run in an environment where interactive login is not possible, you can use Service Principal authentication by replacing the `ws = Workspace.from_config()` line in the cell below with the following:```from azureml.core.authentication import ServicePrincipalAuthenticationauth = auth = ServicePrincipalAuthentication('mytenantid', 'myappid', 'mypassword')ws = Workspace.from_config(auth = auth)```For more details, see [aka.ms/aml-notebook-auth](http://aka.ms/aml-notebook-auth)
###Code
ws = Workspace.from_config()
# choose a name for experiment
experiment_name = 'automl-classification-bmarketing-all'
experiment=Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', None)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
Create or Attach existing AmlComputeYou will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
cpu_cluster_name = "cpu-cluster-4"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Data Load DataLeverage Azure compute to load the bank marketing dataset as a Tabular Dataset into the dataset variable. Training Data
###Code
data = pd.read_csv("https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv")
data.head()
# Add missing values in 75% of the lines.
import numpy as np
missing_rate = 0.75
n_missing_samples = int(np.floor(data.shape[0] * missing_rate))
missing_samples = np.hstack((np.zeros(data.shape[0] - n_missing_samples, dtype=bool), np.ones(n_missing_samples, dtype=bool)))
rng = np.random.RandomState(0)
rng.shuffle(missing_samples)
missing_features = rng.randint(0, data.shape[1], n_missing_samples)
data.values[np.where(missing_samples)[0], missing_features] = np.nan
if not os.path.isdir('data'):
os.mkdir('data')
# Save the train data to a csv to be uploaded to the datastore
pd.DataFrame(data).to_csv("data/train_data.csv", index=False)
ds = ws.get_default_datastore()
ds.upload(src_dir='./data', target_path='bankmarketing', overwrite=True, show_progress=True)
# Upload the training data as a tabular dataset for access during training on remote compute
train_data = Dataset.Tabular.from_delimited_files(path=ds.path('bankmarketing/train_data.csv'))
label = "y"
###Output
_____no_output_____
###Markdown
Validation Data
###Code
validation_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_validate.csv"
validation_dataset = Dataset.Tabular.from_delimited_files(validation_data)
###Output
_____no_output_____
###Markdown
Test Data
###Code
test_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_test.csv"
test_dataset = Dataset.Tabular.from_delimited_files(test_data)
###Output
_____no_output_____
###Markdown
TrainInstantiate an `AutoMLConfig` object. This defines the settings and data used to run the experiment.

|Property|Description|
|-|-|
|**task**|classification or regression or forecasting|
|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracy, AUC_weighted, average_precision_score_weighted, norm_macro_recall, precision_score_weighted|
|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|
|**blocked_models**|*List* of *strings* indicating machine learning algorithms for AutoML to avoid in this run. Allowed values for **Classification**: LogisticRegression, SGD, MultinomialNaiveBayes, BernoulliNaiveBayes, SVM, LinearSVM, KNN, DecisionTree, RandomForest, ExtremeRandomTrees, LightGBM, GradientBoosting, TensorFlowDNN, TensorFlowLinearClassifier. Allowed values for **Regression**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN. Allowed values for **Forecasting**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN, Arima, Prophet|
|**allowed_models**|*List* of *strings* indicating machine learning algorithms for AutoML to use in this run. Same values listed above for **blocked_models** allowed for **allowed_models**.|
|**experiment_exit_score**|Value indicating the target for *primary_metric*. Once the target is surpassed the run terminates.|
|**experiment_timeout_hours**|Maximum amount of time in hours that all iterations combined can take before the experiment terminates.|
|**enable_early_stopping**|Flag to enable early termination if the score is not improving in the short term.|
|**featurization**|'auto' / 'off'. Indicator for whether the featurization step should be done automatically or not. Note: if the input data is sparse, featurization cannot be turned on.|
|**n_cross_validations**|Number of cross validation splits.|
|**training_data**|Input dataset, containing both features and label column.|
|**label_column_name**|The name of the label column.|

**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)
###Code
automl_settings = {
"experiment_timeout_hours" : 0.3,
"enable_early_stopping" : True,
"iteration_timeout_minutes": 5,
"max_concurrent_iterations": 4,
"max_cores_per_iteration": -1,
#"n_cross_validations": 2,
"primary_metric": 'AUC_weighted',
"featurization": 'auto',
"verbosity": logging.INFO,
}
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
compute_target=compute_target,
experiment_exit_score = 0.9984,
blocked_models = ['KNN','LinearSVM'],
enable_onnx_compatible_models=True,
training_data = train_data,
label_column_name = label,
validation_data = validation_dataset,
**automl_settings
)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous; depending on the data and the number of iterations, this can run for a while. Setting `show_output=True` displays validation errors and the current status.
###Code
remote_run = experiment.submit(automl_config, show_output = False)
###Output
_____no_output_____
###Markdown
Run the following cell to access previous runs. Uncomment the cell below and update the run_id.
###Code
#from azureml.train.automl.run import AutoMLRun
#remote_run = AutoMLRun(experiment=experiment, run_id='<run_ID_goes_here>')
#remote_run
# Wait for the remote run to complete
remote_run.wait_for_completion()
best_run_customized, fitted_model_customized = remote_run.get_output()
###Output
_____no_output_____
###Markdown
TransparencyView updated featurization summary
###Code
custom_featurizer = fitted_model_customized.named_steps['datatransformer']
df = custom_featurizer.get_featurization_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Set `is_user_friendly=False` to get a more detailed summary for the transforms being applied.
###Code
df = custom_featurizer.get_featurization_summary(is_user_friendly=False)
pd.DataFrame(data=df)
df = custom_featurizer.get_stats_feature_type_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Results
###Code
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
###Output
_____no_output_____
###Markdown
Retrieve the Best Model's explanationRetrieve the explanation from the best_run which includes explanations for engineered features and raw features. Make sure that the run for generating explanations for the best model is completed.
###Code
# Wait for the best model explanation run to complete
from azureml.core.run import Run
model_explainability_run_id = remote_run.id + "_" + "ModelExplain"
print(model_explainability_run_id)
model_explainability_run = Run(experiment=experiment, run_id=model_explainability_run_id)
model_explainability_run.wait_for_completion()
# Get the best run object
best_run, fitted_model = remote_run.get_output()
###Output
_____no_output_____
###Markdown
Download engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=False)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Download raw feature importance from artifact storeYou can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=True)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Retrieve the Best ONNX ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.Set the parameter return_onnx_model=True to retrieve the best ONNX model, instead of the Python model.
###Code
best_run, onnx_mdl = remote_run.get_output(return_onnx_model=True)
###Output
_____no_output_____
###Markdown
Save the best ONNX model
###Code
from azureml.automl.runtime.onnx_convert import OnnxConverter
onnx_fl_path = "./best_model.onnx"
OnnxConverter.save_onnx_model(onnx_mdl, onnx_fl_path)
###Output
_____no_output_____
###Markdown
Predict with the ONNX model, using the onnxruntime package
###Code
import sys
import json
from azureml.automl.core.onnx_convert import OnnxConvertConstants
from azureml.train.automl import constants
from azureml.automl.runtime.onnx_convert import OnnxInferenceHelper
def get_onnx_res(run):
res_path = 'onnx_resource.json'
run.download_file(name=constants.MODEL_RESOURCE_PATH_ONNX, output_file_path=res_path)
with open(res_path) as f:
result = json.load(f)
return result
if sys.version_info < OnnxConvertConstants.OnnxIncompatiblePythonVersion:
test_df = test_dataset.to_pandas_dataframe()
mdl_bytes = onnx_mdl.SerializeToString()
onnx_result = get_onnx_res(best_run)
onnxrt_helper = OnnxInferenceHelper(mdl_bytes, onnx_result)
pred_onnx, pred_prob_onnx = onnxrt_helper.predict(test_df)
print(pred_onnx)
print(pred_prob_onnx)
else:
print('Please use Python version 3.6 or 3.7 to run the inference helper.')
###Output
_____no_output_____
###Markdown
Deploy Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details
###Code
best_run, fitted_model = remote_run.get_output()
model_name = best_run.properties['model_name']
script_file_name = 'inference/score.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', 'inference/score.py')
###Output
_____no_output_____
###Markdown
Register the Fitted Model for DeploymentIf neither `metric` nor `iteration` is specified in the `register_model` call, the iteration with the best primary metric is registered.
###Code
description = 'AutoML Model trained on bank marketing data to predict if a client will subscribe to a term deposit'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id) # This will be written to the script file later in the notebook.
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.model import Model
inference_config = InferenceConfig(entry_script=script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 2,
memory_gb = 2,
tags = {'area': "bmData", 'type': "automl_classification"},
description = 'sample service for Automl Classification')
aci_service_name = 'automl-sample-bankmarketing-all'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
###Output
_____no_output_____
###Markdown
Get Logs from a Deployed Web ServiceGets logs from a deployed web service.
###Code
#aci_service.get_logs()
###Output
_____no_output_____
###Markdown
TestNow that the model is trained, run the test data through the trained model to get the predicted values. This calls the ACI web service to do the prediction.Note that the JSON passed to the ACI web service is an array of rows of data. Each row should either be an array of values in the same order that was used for training or a dictionary where the keys are the same as the column names used for training. The example below uses dictionary rows.
###Code
# Load the bank marketing datasets.
from numpy import array
X_test = test_dataset.drop_columns(columns=['y'])
y_test = test_dataset.keep_columns(columns=['y'], validate=True)
test_dataset.take(5).to_pandas_dataframe()
X_test = X_test.to_pandas_dataframe()
y_test = y_test.to_pandas_dataframe()
import requests
X_test_json = X_test.to_json(orient='records')
data = "{\"data\": " + X_test_json +"}"
headers = {'Content-Type': 'application/json'}
resp = requests.post(aci_service.scoring_uri, data, headers=headers)
y_pred = json.loads(json.loads(resp.text))['result']
actual = array(y_test)
actual = actual[:,0]
print(len(y_pred), " ", len(actual))
###Output
_____no_output_____
###Markdown
Calculate metrics for the predictionNow visualize the data as a confusion matrix that compares the predicted values against the actual values.
###Code
%matplotlib notebook
from sklearn.metrics import confusion_matrix
import numpy as np
import itertools
cf = confusion_matrix(actual, y_pred)
plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest')
plt.colorbar()
plt.title('Confusion Matrix')
plt.xlabel('Predicted')
plt.ylabel('Actual')
class_labels = ['no','yes']
tick_marks = np.arange(len(class_labels))
plt.xticks(tick_marks,class_labels)
plt.yticks([-0.5,0,1,1.5],['','no','yes',''])
# plotting text value inside cells
thresh = cf.max() / 2.
for i,j in itertools.product(range(cf.shape[0]),range(cf.shape[1])):
plt.text(j,i,format(cf[i,j],'d'),horizontalalignment='center',color='white' if cf[i,j] >thresh else 'black')
plt.show()
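# Optional summary metrics to accompany the confusion matrix (a sketch;
# scikit-learn is already imported above for confusion_matrix).
from sklearn.metrics import accuracy_score, classification_report
print('accuracy:', accuracy_score(actual, y_pred))
print(classification_report(actual, y_pred, labels=class_labels))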
###Output
_____no_output_____
###Markdown
Delete a Web ServiceDeletes the specified web service.
###Code
aci_service.delete()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License.

![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing.png)

Automated Machine Learning
_**Classification with Deployment using a Bank Marketing Dataset**_

Contents
1. [Introduction](Introduction)
1. [Setup](Setup)
1. [Train](Train)
1. [Results](Results)
1. [Deploy](Deploy)
1. [Test](Test)
1. [Acknowledgements](Acknowledgements)

Introduction
In this example we use the UCI Bank Marketing dataset to showcase how you can use AutoML for a classification problem and deploy it to an Azure Container Instance (ACI). The classification goal is to predict if the client will subscribe to a term deposit with the bank.

If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. Please find the ONNX related documentation [here](https://github.com/onnx/onnx).

In this notebook you will learn how to:
1. Create an experiment using an existing workspace.
2. Configure AutoML using `AutoMLConfig`.
3. Train the model using local compute with ONNX compatible config on.
4. Explore the results, featurization transparency options and save the ONNX model.
5. Inference with the ONNX model.
6. Register the model.
7. Create a container image.
8. Create an Azure Container Instance (ACI) service.
9. Test the ACI service.

In addition, this notebook showcases the following features:
- **Blacklisting** certain pipelines
- Specifying **target metrics** to indicate stopping criteria
- Handling **missing data** in the input

Setup
As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
import logging
from matplotlib import pyplot as plt
import pandas as pd
import os
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.automl.core.featurization import FeaturizationConfig
from azureml.core.dataset import Dataset
from azureml.train.automl import AutoMLConfig
from azureml.explain.model._internal.explanation_client import ExplanationClient
ws = Workspace.from_config()
# choose a name for experiment
experiment_name = 'automl-classification-bmarketing-all'
experiment=Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', None)  # None replaces the deprecated -1
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
Create or Attach existing AmlComputeYou will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read this article on the default limits and how to request more quota.
###Code
from azureml.core.compute import AmlCompute
from azureml.core.compute import ComputeTarget
# Choose a name for your cluster.
amlcompute_cluster_name = "cpu-cluster-4"
found = False
# Check if this compute target already exists in the workspace.
cts = ws.compute_targets
if amlcompute_cluster_name in cts and cts[amlcompute_cluster_name].type == 'AmlCompute':
found = True
print('Found existing compute target.')
compute_target = cts[amlcompute_cluster_name]
if not found:
print('Creating a new compute target...')
provisioning_config = AmlCompute.provisioning_configuration(vm_size = "STANDARD_D2_V2", # for GPU, use "STANDARD_NC6"
#vm_priority = 'lowpriority', # optional
max_nodes = 6)
# Create the cluster.
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, provisioning_config)
print('Checking cluster status...')
# Can poll for a minimum number of nodes and for a specific timeout.
# If no min_node_count is provided, it will use the scale settings for the cluster.
compute_target.wait_for_completion(show_output = True, min_node_count = None, timeout_in_minutes = 20)
# For a more detailed view of current AmlCompute status, use get_status().
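# For example (sketch): get_status() reports the cluster's provisioning
# state, node counts and any errors.
print(compute_target.get_status().serialize())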
###Output
_____no_output_____
###Markdown
Data

Load Data
Leverage Azure compute to load the bank marketing dataset as a Tabular Dataset into the dataset variable.

Training Data
###Code
data = pd.read_csv("https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv")
data.head()
# Add missing values in 75% of the lines.
import numpy as np
missing_rate = 0.75
n_missing_samples = int(np.floor(data.shape[0] * missing_rate))
missing_samples = np.hstack((np.zeros(data.shape[0] - n_missing_samples, dtype=bool), np.ones(n_missing_samples, dtype=bool)))  # builtin bool: the np.bool alias was removed in newer NumPy
rng = np.random.RandomState(0)
rng.shuffle(missing_samples)
missing_features = rng.randint(0, data.shape[1], n_missing_samples)
data.values[np.where(missing_samples)[0], missing_features] = np.nan
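# Sanity check (sketch): the fraction of rows containing at least one NaN
# should be close to missing_rate (0.75).
print(data.isnull().any(axis=1).mean())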
if not os.path.isdir('data'):
os.mkdir('data')
# Save the train data to a csv to be uploaded to the datastore
pd.DataFrame(data).to_csv("data/train_data.csv", index=False)
ds = ws.get_default_datastore()
ds.upload(src_dir='./data', target_path='bankmarketing', overwrite=True, show_progress=True)
# Upload the training data as a tabular dataset for access during training on remote compute
train_data = Dataset.Tabular.from_delimited_files(path=ds.path('bankmarketing/train_data.csv'))
label = "y"
###Output
_____no_output_____
###Markdown
Validation Data
###Code
validation_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_validate.csv"
validation_dataset = Dataset.Tabular.from_delimited_files(validation_data)
###Output
_____no_output_____
###Markdown
Test Data
###Code
test_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_test.csv"
test_dataset = Dataset.Tabular.from_delimited_files(test_data)
###Output
_____no_output_____
###Markdown
Train
Instantiate an `AutoMLConfig` object. This defines the settings and data used to run the experiment.

|Property|Description|
|-|-|
|**task**|classification or regression or forecasting|
|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracy, AUC_weighted, average_precision_score_weighted, norm_macro_recall, precision_score_weighted|
|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|
|**blacklist_models** or **whitelist_models**|*List* of *strings* indicating machine learning algorithms for AutoML to avoid in this run. Allowed values for **Classification**: LogisticRegression, SGD, MultinomialNaiveBayes, BernoulliNaiveBayes, SVM, LinearSVM, KNN, DecisionTree, RandomForest, ExtremeRandomTrees, LightGBM, GradientBoosting, TensorFlowDNN, TensorFlowLinearClassifier. Allowed values for **Regression**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN. Allowed values for **Forecasting**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN, Arima, Prophet|
|**experiment_exit_score**|Value indicating the target for *primary_metric*. Once the target is surpassed the run terminates.|
|**experiment_timeout_minutes**|Maximum amount of time in minutes that all iterations combined can take before the experiment terminates.|
|**enable_early_stopping**|Flag to enable early termination if the score is not improving in the short term.|
|**featurization**|'auto' / 'off' Indicator for whether the featurization step should be done automatically or not. Note: If the input data is sparse, featurization cannot be turned on.|
|**n_cross_validations**|Number of cross validation splits.|
|**training_data**|Input dataset, containing both features and label column.|
|**label_column_name**|The name of the label column.|

**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric)
###Code
automl_settings = {
"experiment_timeout_minutes" : 20,
"enable_early_stopping" : True,
"iteration_timeout_minutes": 5,
"max_concurrent_iterations": 4,
"max_cores_per_iteration": -1,
#"n_cross_validations": 2,
"primary_metric": 'AUC_weighted',
"featurization": 'auto',
"verbosity": logging.INFO,
}
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
compute_target=compute_target,
experiment_exit_score = 0.9984,
blacklist_models = ['KNN','LinearSVM'],
enable_onnx_compatible_models=True,
training_data = train_data,
label_column_name = label,
validation_data = validation_dataset,
**automl_settings
)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.
###Code
remote_run = experiment.submit(automl_config, show_output = False)
remote_run
###Output
_____no_output_____
###Markdown
Run the following cell to access previous runs. Uncomment the cell below and update the run_id.
###Code
#from azureml.train.automl.run import AutoMLRun
#experiment_name = 'automl-classification-bmarketing'
#experiment = Experiment(ws, experiment_name)
#remote_run = AutoMLRun(experiment=experiment, run_id='<run_ID_goes_here>')
#remote_run
# Wait for the remote run to complete
remote_run.wait_for_completion()
best_run_customized, fitted_model_customized = remote_run.get_output()
###Output
_____no_output_____
###Markdown
TransparencyView updated featurization summary
###Code
custom_featurizer = fitted_model_customized.named_steps['datatransformer']
df = custom_featurizer.get_featurization_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Set `is_user_friendly=False` to get a more detailed summary for the transforms being applied.
###Code
df = custom_featurizer.get_featurization_summary(is_user_friendly=False)
pd.DataFrame(data=df)
df = custom_featurizer.get_stats_feature_type_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Results
###Code
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
###Output
_____no_output_____
###Markdown
Retrieve the Best ONNX ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.Set the parameter return_onnx_model=True to retrieve the best ONNX model, instead of the Python model.
###Code
best_run, onnx_mdl = remote_run.get_output(return_onnx_model=True)
###Output
_____no_output_____
###Markdown
Save the best ONNX model
###Code
from azureml.automl.runtime.onnx_convert import OnnxConverter
onnx_fl_path = "./best_model.onnx"
OnnxConverter.save_onnx_model(onnx_mdl, onnx_fl_path)
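# Optional validation of the saved file (sketch; assumes the onnx package
# is installed, as the converter above depends on it):
# import onnx
# onnx.checker.check_model(onnx.load(onnx_fl_path))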
###Output
_____no_output_____
###Markdown
Predict with the ONNX model, using onnxruntime package
###Code
import sys
import json
from azureml.automl.core.onnx_convert import OnnxConvertConstants
from azureml.train.automl import constants
python_version_compatible = sys.version_info < OnnxConvertConstants.OnnxIncompatiblePythonVersion
import onnxruntime
from azureml.automl.runtime.onnx_convert import OnnxInferenceHelper
def get_onnx_res(run):
res_path = 'onnx_resource.json'
run.download_file(name=constants.MODEL_RESOURCE_PATH_ONNX, output_file_path=res_path)
with open(res_path) as f:
onnx_res = json.load(f)
return onnx_res
if python_version_compatible:
test_df = test_dataset.to_pandas_dataframe()
mdl_bytes = onnx_mdl.SerializeToString()
onnx_res = get_onnx_res(best_run)
onnxrt_helper = OnnxInferenceHelper(mdl_bytes, onnx_res)
pred_onnx, pred_prob_onnx = onnxrt_helper.predict(test_df)
print(pred_onnx)
print(pred_prob_onnx)
else:
print('Please use Python version 3.6 or 3.7 to run the inference helper.')
###Output
_____no_output_____
###Markdown
Deploy

Retrieve the Best Model
Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model from the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.

Widget for Monitoring Runs
The widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.

**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details.
###Code
best_run, fitted_model = remote_run.get_output()
import os
import shutil
script_folder = os.path.join(os.getcwd(), 'inference')
os.makedirs(script_folder, exist_ok=True)  # create the local 'inference' folder for the downloaded artifacts
model_name = best_run.properties['model_name']
script_file_name = 'inference/score.py'
conda_env_file_name = 'inference/env.yml'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', 'inference/score.py')
best_run.download_file('outputs/conda_env_v_1_0_0.yml', 'inference/env.yml')
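# Quick check (sketch): both downloaded artifacts should now be present.
print(os.listdir(script_folder))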
###Output
_____no_output_____
###Markdown
Register the Fitted Model for DeploymentIf neither `metric` nor `iteration` are specified in the `register_model` call, the iteration with the best primary metric is registered.
###Code
description = 'AutoML Model trained on bank marketing data to predict if a client will subscribe to a term deposit'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id) # This will be written to the script file later in the notebook.
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
inference_config = InferenceConfig(runtime = "python",
entry_script = script_file_name,
conda_file = conda_env_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 1,
tags = {'area': "bmData", 'type': "automl_classification"},
description = 'sample service for Automl Classification')
aci_service_name = 'automl-sample-bankmarketing-all'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
###Output
_____no_output_____
###Markdown
Delete a Web ServiceDeletes the specified web service.
###Code
#aci_service.delete()
###Output
_____no_output_____
###Markdown
Get Logs from a Deployed Web ServiceGets logs from a deployed web service.
###Code
#aci_service.get_logs()
###Output
_____no_output_____
###Markdown
TestNow that the model is trained, run the test data through the trained model to get the predicted values.
###Code
# Load the bank marketing datasets.
from numpy import array
X_test = test_dataset.drop_columns(columns=['y'])
y_test = test_dataset.keep_columns(columns=['y'], validate=True)
test_dataset.take(5).to_pandas_dataframe()
X_test = X_test.to_pandas_dataframe()
y_test = y_test.to_pandas_dataframe()
y_pred = fitted_model.predict(X_test)
actual = array(y_test)
actual = actual[:,0]
print(y_pred.shape, " ", actual.shape)
###Output
_____no_output_____
###Markdown
Calculate metrics for the prediction
Now visualize the results on a scatter plot, comparing the actual (truth) values with the values predicted by the trained model.
###Code
%matplotlib notebook
test_pred = plt.scatter(actual, y_pred, color='b')
test_test = plt.scatter(actual, actual, color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License.

![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing.png)

Automated Machine Learning
_**Classification with Deployment using a Bank Marketing Dataset**_

Contents
1. [Introduction](Introduction)
1. [Setup](Setup)
1. [Train](Train)
1. [Results](Results)
1. [Deploy](Deploy)
1. [Test](Test)
1. [Acknowledgements](Acknowledgements)

Introduction
In this example we use the UCI Bank Marketing dataset to showcase how you can use AutoML for a classification problem and deploy it to an Azure Container Instance (ACI). The classification goal is to predict if the client will subscribe to a term deposit with the bank.

If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. Please find the ONNX related documentation [here](https://github.com/onnx/onnx).

In this notebook you will learn how to:
1. Create an experiment using an existing workspace.
2. Configure AutoML using `AutoMLConfig`.
3. Train the model using local compute with ONNX compatible config on.
4. Explore the results, featurization transparency options and save the ONNX model.
5. Inference with the ONNX model.
6. Register the model.
7. Create a container image.
8. Create an Azure Container Instance (ACI) service.
9. Test the ACI service.

In addition, this notebook showcases the following features:
- **Blocking** certain pipelines
- Specifying **target metrics** to indicate stopping criteria
- Handling **missing data** in the input

Setup
As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
import logging
from matplotlib import pyplot as plt
import pandas as pd
import os
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.core.dataset import Dataset
from azureml.train.automl import AutoMLConfig
from azureml.interpret import ExplanationClient
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.35.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
Accessing the Azure ML workspace requires authentication with Azure. The default authentication is interactive authentication using the default tenant. Executing the `ws = Workspace.from_config()` line in the cell below will prompt for authentication the first time that it is run.

If you have multiple Azure tenants, you can specify the tenant by replacing the `ws = Workspace.from_config()` line in the cell below with the following:

```
from azureml.core.authentication import InteractiveLoginAuthentication
auth = InteractiveLoginAuthentication(tenant_id = 'mytenantid')
ws = Workspace.from_config(auth = auth)
```

If you need to run in an environment where interactive login is not possible, you can use Service Principal authentication by replacing the `ws = Workspace.from_config()` line in the cell below with the following:

```
from azureml.core.authentication import ServicePrincipalAuthentication
auth = ServicePrincipalAuthentication('mytenantid', 'myappid', 'mypassword')
ws = Workspace.from_config(auth = auth)
```

For more details, see [aka.ms/aml-notebook-auth](http://aka.ms/aml-notebook-auth)
###Code
ws = Workspace.from_config()
# choose a name for experiment
experiment_name = 'automl-classification-bmarketing-all'
experiment=Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', None)  # None replaces the deprecated -1
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
Create or Attach existing AmlComputeYou will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
cpu_cluster_name = "cpu-cluster-4"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Data

Load Data
Leverage Azure compute to load the bank marketing dataset as a Tabular Dataset into the dataset variable.

Training Data
###Code
data = pd.read_csv("https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv")
data.head()
# Add missing values in 75% of the lines.
import numpy as np
missing_rate = 0.75
n_missing_samples = int(np.floor(data.shape[0] * missing_rate))
missing_samples = np.hstack((np.zeros(data.shape[0] - n_missing_samples, dtype=bool), np.ones(n_missing_samples, dtype=bool)))  # builtin bool: the np.bool alias was removed in newer NumPy
rng = np.random.RandomState(0)
rng.shuffle(missing_samples)
missing_features = rng.randint(0, data.shape[1], n_missing_samples)
data.values[np.where(missing_samples)[0], missing_features] = np.nan
if not os.path.isdir('data'):
os.mkdir('data')
# Save the train data to a csv to be uploaded to the datastore
pd.DataFrame(data).to_csv("data/train_data.csv", index=False)
ds = ws.get_default_datastore()
ds.upload(src_dir='./data', target_path='bankmarketing', overwrite=True, show_progress=True)
# Upload the training data as a tabular dataset for access during training on remote compute
train_data = Dataset.Tabular.from_delimited_files(path=ds.path('bankmarketing/train_data.csv'))
label = "y"
###Output
_____no_output_____
###Markdown
Validation Data
###Code
validation_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_validate.csv"
validation_dataset = Dataset.Tabular.from_delimited_files(validation_data)
###Output
_____no_output_____
###Markdown
Test Data
###Code
test_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_test.csv"
test_dataset = Dataset.Tabular.from_delimited_files(test_data)
###Output
_____no_output_____
###Markdown
Train
Instantiate an `AutoMLConfig` object. This defines the settings and data used to run the experiment.

|Property|Description|
|-|-|
|**task**|classification or regression or forecasting|
|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracy, AUC_weighted, average_precision_score_weighted, norm_macro_recall, precision_score_weighted|
|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|
|**blocked_models**|*List* of *strings* indicating machine learning algorithms for AutoML to avoid in this run. Allowed values for **Classification**: LogisticRegression, SGD, MultinomialNaiveBayes, BernoulliNaiveBayes, SVM, LinearSVM, KNN, DecisionTree, RandomForest, ExtremeRandomTrees, LightGBM, GradientBoosting, TensorFlowDNN, TensorFlowLinearClassifier. Allowed values for **Regression**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN. Allowed values for **Forecasting**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN, Arima, Prophet|
|**allowed_models**|*List* of *strings* indicating machine learning algorithms for AutoML to use in this run. Same values listed above for **blocked_models** allowed for **allowed_models**.|
|**experiment_exit_score**|Value indicating the target for *primary_metric*. Once the target is surpassed the run terminates.|
|**experiment_timeout_hours**|Maximum amount of time in hours that all iterations combined can take before the experiment terminates.|
|**enable_early_stopping**|Flag to enable early termination if the score is not improving in the short term.|
|**featurization**|'auto' / 'off' Indicator for whether the featurization step should be done automatically or not. Note: If the input data is sparse, featurization cannot be turned on.|
|**n_cross_validations**|Number of cross validation splits.|
|**training_data**|Input dataset, containing both features and label column.|
|**label_column_name**|The name of the label column.|

**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric)
###Code
automl_settings = {
"experiment_timeout_hours" : 0.3,
"enable_early_stopping" : True,
"iteration_timeout_minutes": 5,
"max_concurrent_iterations": 4,
"max_cores_per_iteration": -1,
#"n_cross_validations": 2,
"primary_metric": 'AUC_weighted',
"featurization": 'auto',
"verbosity": logging.INFO,
}
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
compute_target=compute_target,
experiment_exit_score = 0.9984,
blocked_models = ['KNN','LinearSVM'],
enable_onnx_compatible_models=True,
training_data = train_data,
label_column_name = label,
validation_data = validation_dataset,
**automl_settings
)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while. When `show_output = True` is set, validation errors and current status are shown and the execution is synchronous.
###Code
remote_run = experiment.submit(automl_config, show_output = False)
###Output
_____no_output_____
###Markdown
Run the following cell to access previous runs. Uncomment the cell below and update the run_id.
###Code
#from azureml.train.automl.run import AutoMLRun
#remote_run = AutoMLRun(experiment=experiment, run_id='<run_ID_goes_here>')
#remote_run
# Wait for the remote run to complete
remote_run.wait_for_completion()
best_run_customized, fitted_model_customized = remote_run.get_output()
###Output
_____no_output_____
###Markdown
TransparencyView updated featurization summary
###Code
custom_featurizer = fitted_model_customized.named_steps['datatransformer']
df = custom_featurizer.get_featurization_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Set `is_user_friendly=False` to get a more detailed summary for the transforms being applied.
###Code
df = custom_featurizer.get_featurization_summary(is_user_friendly=False)
pd.DataFrame(data=df)
df = custom_featurizer.get_stats_feature_type_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Results
###Code
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
###Output
_____no_output_____
###Markdown
Retrieve the Best Model's explanationRetrieve the explanation from the best_run which includes explanations for engineered features and raw features. Make sure that the run for generating explanations for the best model is completed.
###Code
# Wait for the best model explanation run to complete
from azureml.core.run import Run
model_explainability_run_id = remote_run.id + "_" + "ModelExplain"
print(model_explainability_run_id)
model_explainability_run = Run(experiment=experiment, run_id=model_explainability_run_id)
model_explainability_run.wait_for_completion()
# Get the best run object
best_run, fitted_model = remote_run.get_output()
###Output
_____no_output_____
###Markdown
Download engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=False)
exp_data = engineered_explanations.get_feature_importance_dict()
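# Sketch: print the five most important engineered features.
for feature, importance in sorted(exp_data.items(), key=lambda kv: kv[1], reverse=True)[:5]:
    print(feature, importance)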
exp_data
###Output
_____no_output_____
###Markdown
Download raw feature importance from artifact storeYou can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
raw_explanations = client.download_model_explanation(raw=True)
exp_data = raw_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Retrieve the Best ONNX ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.Set the parameter return_onnx_model=True to retrieve the best ONNX model, instead of the Python model.
###Code
best_run, onnx_mdl = remote_run.get_output(return_onnx_model=True)
###Output
_____no_output_____
###Markdown
Save the best ONNX model
###Code
from azureml.automl.runtime.onnx_convert import OnnxConverter
onnx_fl_path = "./best_model.onnx"
OnnxConverter.save_onnx_model(onnx_mdl, onnx_fl_path)
###Output
_____no_output_____
###Markdown
Predict with the ONNX model, using onnxruntime package
###Code
import sys
import json
from azureml.automl.core.onnx_convert import OnnxConvertConstants
from azureml.train.automl import constants
from azureml.automl.runtime.onnx_convert import OnnxInferenceHelper
def get_onnx_res(run):
res_path = 'onnx_resource.json'
run.download_file(name=constants.MODEL_RESOURCE_PATH_ONNX, output_file_path=res_path)
with open(res_path) as f:
result = json.load(f)
return result
if sys.version_info < OnnxConvertConstants.OnnxIncompatiblePythonVersion:
test_df = test_dataset.to_pandas_dataframe()
mdl_bytes = onnx_mdl.SerializeToString()
onnx_result = get_onnx_res(best_run)
onnxrt_helper = OnnxInferenceHelper(mdl_bytes, onnx_result)
pred_onnx, pred_prob_onnx = onnxrt_helper.predict(test_df)
print(pred_onnx)
print(pred_prob_onnx)
else:
print('Please use Python version 3.6 or 3.7 to run the inference helper.')
###Output
_____no_output_____
###Markdown
Deploy

Retrieve the Best Model
Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.

Widget for Monitoring Runs
The widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.

**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details.
###Code
best_run, fitted_model = remote_run.get_output()
model_name = best_run.properties['model_name']
script_file_name = 'inference/score.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', 'inference/score.py')
###Output
_____no_output_____
###Markdown
Register the Fitted Model for DeploymentIf neither `metric` nor `iteration` are specified in the `register_model` call, the iteration with the best primary metric is registered.
###Code
description = 'AutoML Model trained on bank marketing data to predict if a client will subscribe to a term deposit'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id) # This will be written to the script file later in the notebook.
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.model import Model
inference_config = InferenceConfig(environment = best_run.get_environment(), entry_script=script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 2,
memory_gb = 2,
tags = {'area': "bmData", 'type': "automl_classification"},
description = 'sample service for Automl Classification')
aci_service_name = 'automl-sample-bankmarketing-all'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
###Output
_____no_output_____
###Markdown
Get Logs from a Deployed Web ServiceGets logs from a deployed web service.
###Code
#aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Test
Now that the model is trained, run the test data through the trained model to get the predicted values. This calls the ACI web service to do the prediction.

Note that the JSON passed to the ACI web service is an array of rows of data. Each row should either be an array of values in the same order that was used for training or a dictionary where the keys are the same as the column names used for training. The example below uses dictionary rows.
###Code
# Load the bank marketing datasets.
from numpy import array
X_test = test_dataset.drop_columns(columns=['y'])
y_test = test_dataset.keep_columns(columns=['y'], validate=True)
test_dataset.take(5).to_pandas_dataframe()
X_test = X_test.to_pandas_dataframe()
y_test = y_test.to_pandas_dataframe()
import requests
X_test_json = X_test.to_json(orient='records')
data = "{\"data\": " + X_test_json +"}"
headers = {'Content-Type': 'application/json'}
resp = requests.post(aci_service.scoring_uri, data, headers=headers)
y_pred = json.loads(json.loads(resp.text))['result']
actual = array(y_test)
actual = actual[:,0]
print(len(y_pred), " ", len(actual))
###Output
_____no_output_____
###Markdown
Calculate metrics for the prediction
Now visualize the data as a confusion matrix that compares the predicted values against the actual values.
###Code
%matplotlib notebook
from sklearn.metrics import confusion_matrix
import itertools
cf = confusion_matrix(actual, y_pred)
plt.imshow(cf, cmap=plt.cm.Blues, interpolation='nearest')
plt.colorbar()
plt.title('Confusion Matrix')
plt.xlabel('Predicted')
plt.ylabel('Actual')
class_labels = ['no', 'yes']
tick_marks = np.arange(len(class_labels))
plt.xticks(tick_marks, class_labels)
plt.yticks([-0.5, 0, 1, 1.5], ['', 'no', 'yes', ''])
# plotting text value inside cells
thresh = cf.max() / 2.
for i, j in itertools.product(range(cf.shape[0]), range(cf.shape[1])):
    plt.text(j, i, format(cf[i, j], 'd'), horizontalalignment='center',
             color='white' if cf[i, j] > thresh else 'black')
plt.show()
###Output
_____no_output_____
###Markdown
Delete a Web ServiceDeletes the specified web service.
###Code
aci_service.delete()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License.

![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing.png)

Automated Machine Learning
_**Classification with Deployment using a Bank Marketing Dataset**_

Contents
1. [Introduction](Introduction)
1. [Setup](Setup)
1. [Train](Train)
1. [Results](Results)
1. [Deploy](Deploy)
1. [Test](Test)
1. [Acknowledgements](Acknowledgements)

Introduction
In this example we use the UCI Bank Marketing dataset to showcase how you can use AutoML for a classification problem and deploy it to an Azure Container Instance (ACI). The classification goal is to predict if the client will subscribe to a term deposit with the bank.

If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. Please find the ONNX related documentation [here](https://github.com/onnx/onnx).

In this notebook you will learn how to:
1. Create an experiment using an existing workspace.
2. Configure AutoML using `AutoMLConfig`.
3. Train the model using local compute with ONNX compatible config on.
4. Explore the results, featurization transparency options and save the ONNX model.
5. Inference with the ONNX model.
6. Register the model.
7. Create a container image.
8. Create an Azure Container Instance (ACI) service.
9. Test the ACI service.

In addition, this notebook showcases the following features:
- **Blacklisting** certain pipelines
- Specifying **target metrics** to indicate stopping criteria
- Handling **missing data** in the input

Setup
As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
import logging
from matplotlib import pyplot as plt
import pandas as pd
import os
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.automl.core.featurization import FeaturizationConfig
from azureml.core.dataset import Dataset
from azureml.train.automl import AutoMLConfig
from azureml.explain.model._internal.explanation_client import ExplanationClient
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.4.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
Accessing the Azure ML workspace requires authentication with Azure. The default authentication is interactive authentication using the default tenant. Executing the `ws = Workspace.from_config()` line in the cell below will prompt for authentication the first time that it is run.

If you have multiple Azure tenants, you can specify the tenant by replacing the `ws = Workspace.from_config()` line in the cell below with the following:

```
from azureml.core.authentication import InteractiveLoginAuthentication
auth = InteractiveLoginAuthentication(tenant_id = 'mytenantid')
ws = Workspace.from_config(auth = auth)
```

If you need to run in an environment where interactive login is not possible, you can use Service Principal authentication by replacing the `ws = Workspace.from_config()` line in the cell below with the following:

```
from azureml.core.authentication import ServicePrincipalAuthentication
auth = ServicePrincipalAuthentication('mytenantid', 'myappid', 'mypassword')
ws = Workspace.from_config(auth = auth)
```

For more details, see [aka.ms/aml-notebook-auth](http://aka.ms/aml-notebook-auth)
###Code
ws = Workspace.from_config()
# choose a name for experiment
experiment_name = 'automl-classification-bmarketing-all'
experiment=Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', None)  # None replaces the deprecated -1
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
Create or Attach existing AmlComputeYou will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read this article on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
cpu_cluster_name = "cpu-cluster-4"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Data

Load Data
Leverage Azure compute to load the bank marketing dataset as a Tabular Dataset into the dataset variable.

Training Data
###Code
data = pd.read_csv("https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv")
data.head()
# Add missing values in 75% of the lines.
import numpy as np
missing_rate = 0.75
n_missing_samples = int(np.floor(data.shape[0] * missing_rate))
missing_samples = np.hstack((np.zeros(data.shape[0] - n_missing_samples, dtype=bool), np.ones(n_missing_samples, dtype=bool)))  # builtin bool: the np.bool alias was removed in newer NumPy
rng = np.random.RandomState(0)
rng.shuffle(missing_samples)
missing_features = rng.randint(0, data.shape[1], n_missing_samples)
data.values[np.where(missing_samples)[0], missing_features] = np.nan
if not os.path.isdir('data'):
os.mkdir('data')
# Save the train data to a csv to be uploaded to the datastore
pd.DataFrame(data).to_csv("data/train_data.csv", index=False)
ds = ws.get_default_datastore()
ds.upload(src_dir='./data', target_path='bankmarketing', overwrite=True, show_progress=True)
# Upload the training data as a tabular dataset for access during training on remote compute
train_data = Dataset.Tabular.from_delimited_files(path=ds.path('bankmarketing/train_data.csv'))
label = "y"
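# Optional (sketch): register the dataset so later runs can fetch it by name;
# the name 'bankmarketing_train' is illustrative.
# train_data = train_data.register(workspace=ws, name='bankmarketing_train',
#                                  create_new_version=True)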
###Output
_____no_output_____
###Markdown
Validation Data
###Code
validation_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_validate.csv"
validation_dataset = Dataset.Tabular.from_delimited_files(validation_data)
###Output
_____no_output_____
###Markdown
Test Data
###Code
test_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_test.csv"
test_dataset = Dataset.Tabular.from_delimited_files(test_data)
###Output
_____no_output_____
###Markdown
Train
Instantiate an `AutoMLConfig` object. This defines the settings and data used to run the experiment.

|Property|Description|
|-|-|
|**task**|classification or regression or forecasting|
|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracy, AUC_weighted, average_precision_score_weighted, norm_macro_recall, precision_score_weighted|
|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|
|**blacklist_models**|*List* of *strings* indicating machine learning algorithms for AutoML to avoid in this run. Allowed values for **Classification**: LogisticRegression, SGD, MultinomialNaiveBayes, BernoulliNaiveBayes, SVM, LinearSVM, KNN, DecisionTree, RandomForest, ExtremeRandomTrees, LightGBM, GradientBoosting, TensorFlowDNN, TensorFlowLinearClassifier. Allowed values for **Regression**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN. Allowed values for **Forecasting**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN, Arima, Prophet|
|**whitelist_models**|*List* of *strings* indicating machine learning algorithms for AutoML to use in this run. Same values listed above for **blacklist_models** allowed for **whitelist_models**.|
|**experiment_exit_score**|Value indicating the target for *primary_metric*. Once the target is surpassed the run terminates.|
|**experiment_timeout_hours**|Maximum amount of time in hours that all iterations combined can take before the experiment terminates.|
|**enable_early_stopping**|Flag to enable early termination if the score is not improving in the short term.|
|**featurization**|'auto' / 'off' Indicator for whether the featurization step should be done automatically or not. Note: If the input data is sparse, featurization cannot be turned on.|
|**n_cross_validations**|Number of cross validation splits.|
|**training_data**|Input dataset, containing both features and label column.|
|**label_column_name**|The name of the label column.|

**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric)
###Code
automl_settings = {
"experiment_timeout_hours" : 0.3,
"enable_early_stopping" : True,
"iteration_timeout_minutes": 5,
"max_concurrent_iterations": 4,
"max_cores_per_iteration": -1,
#"n_cross_validations": 2,
"primary_metric": 'AUC_weighted',
"featurization": 'auto',
"verbosity": logging.INFO,
}
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
compute_target=compute_target,
experiment_exit_score = 0.9984,
blacklist_models = ['KNN','LinearSVM'],
enable_onnx_compatible_models=True,
training_data = train_data,
label_column_name = label,
validation_data = validation_dataset,
**automl_settings
)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.
###Code
remote_run = experiment.submit(automl_config, show_output = False)
remote_run
###Output
_____no_output_____
###Markdown
Run the following cell to access previous runs. Uncomment the cell below and update the run_id.
###Code
#from azureml.train.automl.run import AutoMLRun
#remote_run = AutoMLRun(experiment=experiment, run_id='<run_ID_goes_here>')
#remote_run
# Wait for the remote run to complete
remote_run.wait_for_completion()
best_run_customized, fitted_model_customized = remote_run.get_output()
###Output
_____no_output_____
###Markdown
TransparencyView updated featurization summary
###Code
custom_featurizer = fitted_model_customized.named_steps['datatransformer']
df = custom_featurizer.get_featurization_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Set `is_user_friendly=False` to get a more detailed summary for the transforms being applied.
###Code
df = custom_featurizer.get_featurization_summary(is_user_friendly=False)
pd.DataFrame(data=df)
df = custom_featurizer.get_stats_feature_type_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Results
###Code
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
###Output
_____no_output_____
###Markdown
Retrieve the Best Model's explanationRetrieve the explanation from the best_run which includes explanations for engineered features and raw features. Make sure that the run for generating explanations for the best model is completed.
###Code
# Wait for the best model explanation run to complete
from azureml.core.run import Run
model_explainability_run_id = remote_run.get_properties().get('ModelExplainRunId')
print(model_explainability_run_id)
if model_explainability_run_id is not None:
model_explainability_run = Run(experiment=experiment, run_id=model_explainability_run_id)
model_explainability_run.wait_for_completion()
# Get the best run object
best_run, fitted_model = remote_run.get_output()
###Output
_____no_output_____
###Markdown
Download engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=False)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Download raw feature importance from artifact storeYou can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
raw_explanations = client.download_model_explanation(raw=True)
exp_data = raw_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Retrieve the Best ONNX ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.Set the parameter return_onnx_model=True to retrieve the best ONNX model, instead of the Python model.
###Code
best_run, onnx_mdl = remote_run.get_output(return_onnx_model=True)
###Output
_____no_output_____
###Markdown
Save the best ONNX model
###Code
from azureml.automl.runtime.onnx_convert import OnnxConverter
onnx_fl_path = "./best_model.onnx"
OnnxConverter.save_onnx_model(onnx_mdl, onnx_fl_path)
###Output
_____no_output_____
###Markdown
Predict with the ONNX model, using onnxruntime package
###Code
import sys
import json
from azureml.automl.core.onnx_convert import OnnxConvertConstants
from azureml.train.automl import constants
python_version_compatible = sys.version_info < OnnxConvertConstants.OnnxIncompatiblePythonVersion
import onnxruntime
from azureml.automl.runtime.onnx_convert import OnnxInferenceHelper
def get_onnx_res(run):
res_path = 'onnx_resource.json'
run.download_file(name=constants.MODEL_RESOURCE_PATH_ONNX, output_file_path=res_path)
with open(res_path) as f:
onnx_res = json.load(f)
return onnx_res
if python_version_compatible:
test_df = test_dataset.to_pandas_dataframe()
mdl_bytes = onnx_mdl.SerializeToString()
onnx_res = get_onnx_res(best_run)
onnxrt_helper = OnnxInferenceHelper(mdl_bytes, onnx_res)
pred_onnx, pred_prob_onnx = onnxrt_helper.predict(test_df)
print(pred_onnx)
print(pred_prob_onnx)
else:
print('Please use Python version 3.6 or 3.7 to run the inference helper.')
###Output
_____no_output_____
###Markdown
Deploy

Retrieve the Best Model
Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model from the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.

Widget for Monitoring Runs
The widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.

**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details.
###Code
best_run, fitted_model = remote_run.get_output()
model_name = best_run.properties['model_name']
script_file_name = 'inference/score.py'
conda_env_file_name = 'inference/env.yml'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', 'inference/score.py')
best_run.download_file('outputs/conda_env_v_1_0_0.yml', 'inference/env.yml')
###Output
_____no_output_____
###Markdown
Register the Fitted Model for DeploymentIf neither `metric` nor `iteration` are specified in the `register_model` call, the iteration with the best primary metric is registered.
###Code
description = 'AutoML Model trained on bank marketing data to predict if a client will subscribe to a term deposit'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id) # This will be written to the script file later in the notebook.
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
from azureml.core.environment import Environment
myenv = Environment.from_conda_specification(name="myenv", file_path=conda_env_file_name)
inference_config = InferenceConfig(entry_script=script_file_name, environment=myenv)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 1,
tags = {'area': "bmData", 'type': "automl_classification"},
description = 'sample service for Automl Classification')
aci_service_name = 'automl-sample-bankmarketing-all'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
###Output
_____no_output_____
###Markdown
Delete a Web ServiceDeletes the specified web service.
###Code
#aci_service.delete()
###Output
_____no_output_____
###Markdown
Get Logs from a Deployed Web ServiceGets logs from a deployed web service.
###Code
#aci_service.get_logs()
###Output
_____no_output_____
###Markdown
TestNow that the model is trained, run the test data through the trained model to get the predicted values.
###Code
# Load the bank marketing datasets.
from numpy import array
X_test = test_dataset.drop_columns(columns=['y'])
y_test = test_dataset.keep_columns(columns=['y'], validate=True)
test_dataset.take(5).to_pandas_dataframe()
X_test = X_test.to_pandas_dataframe()
y_test = y_test.to_pandas_dataframe()
y_pred = fitted_model.predict(X_test)
actual = array(y_test)
actual = actual[:,0]
print(y_pred.shape, " ", actual.shape)
###Output
_____no_output_____
###Markdown
Calculate metrics for the prediction
Now visualize the results on a scatter plot, comparing the actual (truth) values with the values predicted by the trained model.
###Code
%matplotlib notebook
test_pred = plt.scatter(actual, y_pred, color='b')
test_test = plt.scatter(actual, actual, color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License.

![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing.png)

Automated Machine Learning
_**Classification with Deployment using a Bank Marketing Dataset**_

Contents
1. [Introduction](Introduction)
1. [Setup](Setup)
1. [Train](Train)
1. [Results](Results)
1. [Deploy](Deploy)
1. [Test](Test)
1. [Acknowledgements](Acknowledgements)

Introduction
In this example we use the UCI Bank Marketing dataset to showcase how you can use AutoML for a classification problem and deploy it to an Azure Container Instance (ACI). The classification goal is to predict if the client will subscribe to a term deposit with the bank.

If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. Please find the ONNX related documentation [here](https://github.com/onnx/onnx).

In this notebook you will learn how to:
1. Create an experiment using an existing workspace.
2. Configure AutoML using `AutoMLConfig`.
3. Train the model using local compute with ONNX compatible config on.
4. Explore the results, featurization transparency options and save the ONNX model.
5. Inference with the ONNX model.
6. Register the model.
7. Create a container image.
8. Create an Azure Container Instance (ACI) service.
9. Test the ACI service.

In addition, this notebook showcases the following features:
- **Blocking** certain pipelines
- Specifying **target metrics** to indicate stopping criteria
- Handling **missing data** in the input

Setup
As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
import logging
from matplotlib import pyplot as plt
import pandas as pd
import os
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.automl.core.featurization import FeaturizationConfig
from azureml.core.dataset import Dataset
from azureml.train.automl import AutoMLConfig
from azureml.interpret import ExplanationClient
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.15.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
Accessing the Azure ML workspace requires authentication with Azure.

The default authentication is interactive authentication using the default tenant. Executing the `ws = Workspace.from_config()` line in the cell below will prompt for authentication the first time that it is run.

If you have multiple Azure tenants, you can specify the tenant by replacing the `ws = Workspace.from_config()` line in the cell below with the following:

```
from azureml.core.authentication import InteractiveLoginAuthentication
auth = InteractiveLoginAuthentication(tenant_id='mytenantid')
ws = Workspace.from_config(auth=auth)
```

If you need to run in an environment where interactive login is not possible, you can use Service Principal authentication by replacing the `ws = Workspace.from_config()` line in the cell below with the following:

```
from azureml.core.authentication import ServicePrincipalAuthentication
auth = ServicePrincipalAuthentication('mytenantid', 'myappid', 'mypassword')
ws = Workspace.from_config(auth=auth)
```

For more details, see [aka.ms/aml-notebook-auth](http://aka.ms/aml-notebook-auth).
###Code
ws = Workspace.from_config()
# choose a name for experiment
experiment_name = 'automl-classification-bmarketing-all'
experiment=Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', None)  # None replaces the deprecated -1 sentinel
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
Create or Attach existing AmlCompute

You will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace, this code will skip the creation process.

As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
cpu_cluster_name = "cpu-cluster-4"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)
    print('Found existing cluster, using it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Data

Load Data

Leverage Azure compute to load the bank marketing dataset as a Tabular Dataset into the dataset variable.

Training Data
###Code
data = pd.read_csv("https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv")
data.head()
# Add missing values in 75% of the lines.
import numpy as np
missing_rate = 0.75
n_missing_samples = int(np.floor(data.shape[0] * missing_rate))
# Use the builtin bool; the np.bool alias is removed in recent NumPy releases.
missing_samples = np.hstack((np.zeros(data.shape[0] - n_missing_samples, dtype=bool), np.ones(n_missing_samples, dtype=bool)))
rng = np.random.RandomState(0)
rng.shuffle(missing_samples)
missing_features = rng.randint(0, data.shape[1], n_missing_samples)
data.values[np.where(missing_samples)[0], missing_features] = np.nan
if not os.path.isdir('data'):
os.mkdir('data')
# Save the train data to a csv to be uploaded to the datastore
pd.DataFrame(data).to_csv("data/train_data.csv", index=False)
ds = ws.get_default_datastore()
ds.upload(src_dir='./data', target_path='bankmarketing', overwrite=True, show_progress=True)
# Upload the training data as a tabular dataset for access during training on remote compute
train_data = Dataset.Tabular.from_delimited_files(path=ds.path('bankmarketing/train_data.csv'))
label = "y"
###Output
_____no_output_____
###Markdown
Validation Data
###Code
validation_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_validate.csv"
validation_dataset = Dataset.Tabular.from_delimited_files(validation_data)
###Output
_____no_output_____
###Markdown
Test Data
###Code
test_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_test.csv"
test_dataset = Dataset.Tabular.from_delimited_files(test_data)
###Output
_____no_output_____
###Markdown
Train

Instantiate an AutoMLConfig object. This defines the settings and data used to run the experiment.

|Property|Description|
|-|-|
|**task**|classification or regression or forecasting|
|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracy, AUC_weighted, average_precision_score_weighted, norm_macro_recall, precision_score_weighted|
|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|
|**blocked_models**|*List* of *strings* indicating machine learning algorithms for AutoML to avoid in this run. Allowed values for **Classification**: LogisticRegression, SGD, MultinomialNaiveBayes, BernoulliNaiveBayes, SVM, LinearSVM, KNN, DecisionTree, RandomForest, ExtremeRandomTrees, LightGBM, GradientBoosting, TensorFlowDNN, TensorFlowLinearClassifier. Allowed values for **Regression**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN. Allowed values for **Forecasting**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN, Arima, Prophet.|
|**allowed_models**|*List* of *strings* indicating machine learning algorithms for AutoML to use in this run. Same values listed above for **blocked_models** allowed for **allowed_models**.|
|**experiment_exit_score**|Value indicating the target for *primary_metric*. Once the target is surpassed the run terminates.|
|**experiment_timeout_hours**|Maximum amount of time in hours that all iterations combined can take before the experiment terminates.|
|**enable_early_stopping**|Flag to enable early termination if the score is not improving in the short term.|
|**featurization**|'auto' / 'off' Indicator for whether the featurization step should be done automatically or not. Note: if the input data is sparse, featurization cannot be turned on.|
|**n_cross_validations**|Number of cross validation splits.|
|**training_data**|Input dataset, containing both features and label column.|
|**label_column_name**|The name of the label column.|

**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)
###Code
automl_settings = {
"experiment_timeout_hours" : 0.3,
"enable_early_stopping" : True,
"iteration_timeout_minutes": 5,
"max_concurrent_iterations": 4,
"max_cores_per_iteration": -1,
#"n_cross_validations": 2,
"primary_metric": 'AUC_weighted',
"featurization": 'auto',
"verbosity": logging.INFO,
}
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
compute_target=compute_target,
experiment_exit_score = 0.9984,
blocked_models = ['KNN','LinearSVM'],
enable_onnx_compatible_models=True,
training_data = train_data,
label_column_name = label,
validation_data = validation_dataset,
**automl_settings
)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations, this can run for a while. Setting `show_output=True` streams validation errors and the current status.
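For reference, a minimal sketch of the streaming variant described above; it is identical to the cell below except for the flag, and blocks until the run completes:

```
# Sketch: same submission, but streaming validation errors and status.
# Execution is synchronous when show_output=True.
remote_run = experiment.submit(automl_config, show_output=True)
```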
###Code
remote_run = experiment.submit(automl_config, show_output = False)
remote_run
###Output
_____no_output_____
###Markdown
Run the following cell to access previous runs. Uncomment the cell below and update the run_id.
###Code
#from azureml.train.automl.run import AutoMLRun
#remote_run = AutoMLRun(experiment=experiment, run_id='<run_ID_goes_here>')
#remote_run
# Wait for the remote run to complete
remote_run.wait_for_completion()
best_run_customized, fitted_model_customized = remote_run.get_output()
###Output
_____no_output_____
###Markdown
Transparency

View updated featurization summary
###Code
custom_featurizer = fitted_model_customized.named_steps['datatransformer']
df = custom_featurizer.get_featurization_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Set `is_user_friendly=False` to get a more detailed summary for the transforms being applied.
###Code
df = custom_featurizer.get_featurization_summary(is_user_friendly=False)
pd.DataFrame(data=df)
df = custom_featurizer.get_stats_feature_type_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Results
###Code
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
###Output
_____no_output_____
###Markdown
Retrieve the Best Model's explanation

Retrieve the explanation from the best_run which includes explanations for engineered features and raw features. Make sure that the run for generating explanations for the best model is completed.
###Code
# Wait for the best model explanation run to complete
from azureml.core.run import Run
model_explainability_run_id = remote_run.id + "_" + "ModelExplain"
print(model_explainability_run_id)
model_explainability_run = Run(experiment=experiment, run_id=model_explainability_run_id)
model_explainability_run.wait_for_completion()
# Get the best run object
best_run, fitted_model = remote_run.get_output()
###Output
_____no_output_____
###Markdown
Download engineered feature importance from artifact store

You can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=False)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Download raw feature importance from artifact store

You can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
raw_explanations = client.download_model_explanation(raw=True)
exp_data = raw_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Retrieve the Best ONNX Model

Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.

Set the parameter `return_onnx_model=True` to retrieve the best ONNX model, instead of the Python model.
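As a sketch of the `get_output` overloads mentioned above (the `metric` and `iteration` parameter names follow that description; the metric string and iteration number are illustrative):

```
# Sketch: best run/model for a specific logged metric, or for a specific iteration.
metric_run, metric_model = remote_run.get_output(metric='AUC_weighted')
iteration_run, iteration_model = remote_run.get_output(iteration=3)
```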
###Code
best_run, onnx_mdl = remote_run.get_output(return_onnx_model=True)
###Output
_____no_output_____
###Markdown
Save the best ONNX model
###Code
from azureml.automl.runtime.onnx_convert import OnnxConverter
onnx_fl_path = "./best_model.onnx"
OnnxConverter.save_onnx_model(onnx_mdl, onnx_fl_path)
###Output
_____no_output_____
###Markdown
Predict with the ONNX model, using onnxruntime package
###Code
import sys
import json
from azureml.automl.core.onnx_convert import OnnxConvertConstants
from azureml.train.automl import constants
if sys.version_info < OnnxConvertConstants.OnnxIncompatiblePythonVersion:
python_version_compatible = True
else:
python_version_compatible = False
import onnxruntime
from azureml.automl.runtime.onnx_convert import OnnxInferenceHelper
def get_onnx_res(run):
res_path = 'onnx_resource.json'
run.download_file(name=constants.MODEL_RESOURCE_PATH_ONNX, output_file_path=res_path)
with open(res_path) as f:
onnx_res = json.load(f)
return onnx_res
if python_version_compatible:
test_df = test_dataset.to_pandas_dataframe()
mdl_bytes = onnx_mdl.SerializeToString()
onnx_res = get_onnx_res(best_run)
onnxrt_helper = OnnxInferenceHelper(mdl_bytes, onnx_res)
pred_onnx, pred_prob_onnx = onnxrt_helper.predict(test_df)
print(pred_onnx)
print(pred_prob_onnx)
else:
print('Please use Python version 3.6 or 3.7 to run the inference helper.')
###Output
_____no_output_____
###Markdown
Deploy

Retrieve the Best Model

Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.

Widget for Monitoring Runs

The widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.

**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details.
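A minimal sketch of displaying the monitoring widget described above, reusing the same `RunDetails` class shown in the Results section:

```
# Sketch: show the auto-updating monitoring widget for the run.
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
```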
###Code
best_run, fitted_model = remote_run.get_output()
model_name = best_run.properties['model_name']
script_file_name = 'inference/score.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', 'inference/score.py')
###Output
_____no_output_____
###Markdown
Register the Fitted Model for Deployment

If neither `metric` nor `iteration` are specified in the `register_model` call, the iteration with the best primary metric is registered.
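As a sketch of the alternatives mentioned above (the `iteration` and `metric` parameter names follow the `register_model` description; the iteration number and metric string are illustrative):

```
# Sketch: register the model from a specific iteration, or the model that is
# best on a specific logged metric, instead of the default best primary metric.
model_from_iteration = remote_run.register_model(model_name=model_name, iteration=3)
model_from_metric = remote_run.register_model(model_name=model_name, metric='AUC_weighted')
```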
###Code
description = 'AutoML Model trained on bank marketing data to predict if a client will subscribe to a term deposit'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id) # This will be written to the script file later in the notebook.
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
from azureml.core.environment import Environment
inference_config = InferenceConfig(entry_script=script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 1,
tags = {'area': "bmData", 'type': "automl_classification"},
description = 'sample service for Automl Classification')
aci_service_name = 'automl-sample-bankmarketing-all'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
###Output
_____no_output_____
###Markdown
Get Logs from a Deployed Web Service

Gets logs from a deployed web service.
###Code
#aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Test

Now that the model is trained, run the test data through the trained model to get the predicted values. This calls the ACI web service to do the prediction.

Note that the JSON passed to the ACI web service is an array of rows of data. Each row should either be an array of values in the same order that was used for training or a dictionary where the keys are the same as the column names used for training. The example below uses dictionary rows.
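A minimal sketch of the dictionary-row payload shape described above, assuming the `X_test` DataFrame built in the cell below; the keys come from the training column names:

```
# Sketch: build the {"data": [...]} payload from dictionary rows.
import json
rows = X_test.to_dict(orient='records')   # list of {column_name: value} dicts
payload = json.dumps({"data": rows})
```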
###Code
# Load the bank marketing datasets.
from numpy import array
X_test = test_dataset.drop_columns(columns=['y'])
y_test = test_dataset.keep_columns(columns=['y'], validate=True)
test_dataset.take(5).to_pandas_dataframe()
X_test = X_test.to_pandas_dataframe()
y_test = y_test.to_pandas_dataframe()
import json
import requests
X_test_json = X_test.to_json(orient='records')
data = "{\"data\": " + X_test_json +"}"
headers = {'Content-Type': 'application/json'}
resp = requests.post(aci_service.scoring_uri, data, headers=headers)
y_pred = json.loads(json.loads(resp.text))['result']
actual = array(y_test)
actual = actual[:,0]
print(len(y_pred), " ", len(actual))
###Output
_____no_output_____
###Markdown
Calculate metrics for the prediction

Now visualize the data as a confusion matrix, comparing the predicted values against the actual values.
###Code
%matplotlib notebook
from sklearn.metrics import confusion_matrix
import numpy as np
import itertools
cf = confusion_matrix(actual, y_pred)
plt.imshow(cf, cmap=plt.cm.Blues, interpolation='nearest')
plt.colorbar()
plt.title('Confusion Matrix')
plt.xlabel('Predicted')
plt.ylabel('Actual')
class_labels = ['no', 'yes']
tick_marks = np.arange(len(class_labels))
plt.xticks(tick_marks, class_labels)
plt.yticks([-0.5, 0, 1, 1.5], ['', 'no', 'yes', ''])
# plot the count inside each cell of the confusion matrix
thresh = cf.max() / 2.
for i, j in itertools.product(range(cf.shape[0]), range(cf.shape[1])):
    plt.text(j, i, format(cf[i, j], 'd'), horizontalalignment='center',
             color='white' if cf[i, j] > thresh else 'black')
plt.show()
###Output
_____no_output_____
###Markdown
Delete a Web Service

Deletes the specified web service.
###Code
aci_service.delete()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.

![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing.png)

Automated Machine Learning

_**Classification with Deployment using a Bank Marketing Dataset**_

Contents
1. [Introduction](#Introduction)
1. [Setup](#Setup)
1. [Train](#Train)
1. [Results](#Results)
1. [Deploy](#Deploy)
1. [Test](#Test)
1. [Acknowledgements](#Acknowledgements)

Introduction

In this example we use the UCI Bank Marketing dataset to showcase how you can use AutoML for a classification problem and deploy it to an Azure Container Instance (ACI). The classification goal is to predict if the client will subscribe to a term deposit with the bank.

If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already, to establish your connection to the AzureML Workspace. Please find the ONNX-related documentation [here](https://github.com/onnx/onnx).

In this notebook you will learn how to:
1. Create an experiment using an existing workspace.
2. Configure AutoML using `AutoMLConfig`.
3. Train the model using local compute with ONNX-compatible config on.
4. Explore the results, featurization transparency options and save the ONNX model.
5. Inference with the ONNX model.
6. Register the model.
7. Create a container image.
8. Create an Azure Container Instance (ACI) service.
9. Test the ACI service.

In addition, this notebook showcases the following features:
- **Blacklisting** certain pipelines
- Specifying **target metrics** to indicate stopping criteria
- Handling **missing data** in the input

Setup

As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
import logging
from matplotlib import pyplot as plt
import pandas as pd
import os
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.automl.core.featurization import FeaturizationConfig
from azureml.core.dataset import Dataset
from azureml.train.automl import AutoMLConfig
from azureml.explain.model._internal.explanation_client import ExplanationClient
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
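A minimal sketch of turning the version comparison printed below into an explicit check; the `packaging` dependency and the pinned version string are assumptions for illustration:

```
# Sketch: fail fast if the installed SDK is older than the notebook's version.
from packaging import version
import azureml.core
assert version.parse(azureml.core.VERSION) >= version.parse("1.6.0"), \
    "Please upgrade the Azure ML SDK"
```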
###Code
print("This notebook was created using version 1.6.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
Accessing the Azure ML workspace requires authentication with Azure.

The default authentication is interactive authentication using the default tenant. Executing the `ws = Workspace.from_config()` line in the cell below will prompt for authentication the first time that it is run.

If you have multiple Azure tenants, you can specify the tenant by replacing the `ws = Workspace.from_config()` line in the cell below with the following:

```
from azureml.core.authentication import InteractiveLoginAuthentication
auth = InteractiveLoginAuthentication(tenant_id='mytenantid')
ws = Workspace.from_config(auth=auth)
```

If you need to run in an environment where interactive login is not possible, you can use Service Principal authentication by replacing the `ws = Workspace.from_config()` line in the cell below with the following:

```
from azureml.core.authentication import ServicePrincipalAuthentication
auth = ServicePrincipalAuthentication('mytenantid', 'myappid', 'mypassword')
ws = Workspace.from_config(auth=auth)
```

For more details, see [aka.ms/aml-notebook-auth](http://aka.ms/aml-notebook-auth).
###Code
ws = Workspace.from_config()
# choose a name for experiment
experiment_name = 'automl-classification-bmarketing-all'
experiment=Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', None)  # None replaces the deprecated -1 sentinel
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
Create or Attach existing AmlCompute

You will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace, this code will skip the creation process.

As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
cpu_cluster_name = "cpu-cluster-4"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)
    print('Found existing cluster, using it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Data

Load Data

Leverage Azure compute to load the bank marketing dataset as a Tabular Dataset into the dataset variable.

Training Data
###Code
data = pd.read_csv("https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv")
data.head()
# Add missing values in 75% of the lines.
import numpy as np
missing_rate = 0.75
n_missing_samples = int(np.floor(data.shape[0] * missing_rate))
# Use the builtin bool; the np.bool alias is removed in recent NumPy releases.
missing_samples = np.hstack((np.zeros(data.shape[0] - n_missing_samples, dtype=bool), np.ones(n_missing_samples, dtype=bool)))
rng = np.random.RandomState(0)
rng.shuffle(missing_samples)
missing_features = rng.randint(0, data.shape[1], n_missing_samples)
data.values[np.where(missing_samples)[0], missing_features] = np.nan
if not os.path.isdir('data'):
os.mkdir('data')
# Save the train data to a csv to be uploaded to the datastore
pd.DataFrame(data).to_csv("data/train_data.csv", index=False)
ds = ws.get_default_datastore()
ds.upload(src_dir='./data', target_path='bankmarketing', overwrite=True, show_progress=True)
# Upload the training data as a tabular dataset for access during training on remote compute
train_data = Dataset.Tabular.from_delimited_files(path=ds.path('bankmarketing/train_data.csv'))
label = "y"
###Output
_____no_output_____
###Markdown
Validation Data
###Code
validation_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_validate.csv"
validation_dataset = Dataset.Tabular.from_delimited_files(validation_data)
###Output
_____no_output_____
###Markdown
Test Data
###Code
test_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_test.csv"
test_dataset = Dataset.Tabular.from_delimited_files(test_data)
###Output
_____no_output_____
###Markdown
Train

Instantiate an AutoMLConfig object. This defines the settings and data used to run the experiment.

|Property|Description|
|-|-|
|**task**|classification or regression or forecasting|
|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracy, AUC_weighted, average_precision_score_weighted, norm_macro_recall, precision_score_weighted|
|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|
|**blacklist_models**|*List* of *strings* indicating machine learning algorithms for AutoML to avoid in this run. Allowed values for **Classification**: LogisticRegression, SGD, MultinomialNaiveBayes, BernoulliNaiveBayes, SVM, LinearSVM, KNN, DecisionTree, RandomForest, ExtremeRandomTrees, LightGBM, GradientBoosting, TensorFlowDNN, TensorFlowLinearClassifier. Allowed values for **Regression**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN. Allowed values for **Forecasting**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN, Arima, Prophet.|
|**whitelist_models**|*List* of *strings* indicating machine learning algorithms for AutoML to use in this run. Same values listed above for **blacklist_models** allowed for **whitelist_models**.|
|**experiment_exit_score**|Value indicating the target for *primary_metric*. Once the target is surpassed the run terminates.|
|**experiment_timeout_hours**|Maximum amount of time in hours that all iterations combined can take before the experiment terminates.|
|**enable_early_stopping**|Flag to enable early termination if the score is not improving in the short term.|
|**featurization**|'auto' / 'off' Indicator for whether the featurization step should be done automatically or not. Note: if the input data is sparse, featurization cannot be turned on.|
|**n_cross_validations**|Number of cross validation splits.|
|**training_data**|Input dataset, containing both features and label column.|
|**label_column_name**|The name of the label column.|

**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)
###Code
automl_settings = {
"experiment_timeout_hours" : 0.3,
"enable_early_stopping" : True,
"iteration_timeout_minutes": 5,
"max_concurrent_iterations": 4,
"max_cores_per_iteration": -1,
#"n_cross_validations": 2,
"primary_metric": 'AUC_weighted',
"featurization": 'auto',
"verbosity": logging.INFO,
}
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
compute_target=compute_target,
experiment_exit_score = 0.9984,
blacklist_models = ['KNN','LinearSVM'],
enable_onnx_compatible_models=True,
training_data = train_data,
label_column_name = label,
validation_data = validation_dataset,
**automl_settings
)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations, this can run for a while.
###Code
remote_run = experiment.submit(automl_config, show_output = False)
remote_run
###Output
_____no_output_____
###Markdown
Run the following cell to access previous runs. Uncomment the cell below and update the run_id.
###Code
#from azureml.train.automl.run import AutoMLRun
#remote_run = AutoMLRun(experiment=experiment, run_id='<run_ID_goes_here>')
#remote_run
# Wait for the remote run to complete
remote_run.wait_for_completion()
best_run_customized, fitted_model_customized = remote_run.get_output()
###Output
_____no_output_____
###Markdown
Transparency

View updated featurization summary
###Code
custom_featurizer = fitted_model_customized.named_steps['datatransformer']
df = custom_featurizer.get_featurization_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Set `is_user_friendly=False` to get a more detailed summary for the transforms being applied.
###Code
df = custom_featurizer.get_featurization_summary(is_user_friendly=False)
pd.DataFrame(data=df)
df = custom_featurizer.get_stats_feature_type_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Results
###Code
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
###Output
_____no_output_____
###Markdown
Retrieve the Best Model's explanation

Retrieve the explanation from the best_run which includes explanations for engineered features and raw features. Make sure that the run for generating explanations for the best model is completed.
###Code
# Wait for the best model explanation run to complete
from azureml.core.run import Run
model_explainability_run_id = remote_run.get_properties().get('ModelExplainRunId')
print(model_explainability_run_id)
if model_explainability_run_id is not None:
model_explainability_run = Run(experiment=experiment, run_id=model_explainability_run_id)
model_explainability_run.wait_for_completion()
# Get the best run object
best_run, fitted_model = remote_run.get_output()
###Output
_____no_output_____
###Markdown
Download engineered feature importance from artifact store

You can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=False)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Download raw feature importance from artifact store

You can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
raw_explanations = client.download_model_explanation(raw=True)
exp_data = raw_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Retrieve the Best ONNX Model

Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.

Set the parameter `return_onnx_model=True` to retrieve the best ONNX model, instead of the Python model.
###Code
best_run, onnx_mdl = remote_run.get_output(return_onnx_model=True)
###Output
_____no_output_____
###Markdown
Save the best ONNX model
###Code
from azureml.automl.runtime.onnx_convert import OnnxConverter
onnx_fl_path = "./best_model.onnx"
OnnxConverter.save_onnx_model(onnx_mdl, onnx_fl_path)
###Output
_____no_output_____
###Markdown
Predict with the ONNX model, using onnxruntime package
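Before using the AzureML inference helper below, a minimal sketch of inspecting the saved model directly with the onnxruntime package; input and output names depend on the featurized pipeline, so this is illustrative only:

```
# Sketch: load ./best_model.onnx with onnxruntime and list its I/O names.
import onnxruntime
sess = onnxruntime.InferenceSession("./best_model.onnx")
print([inp.name for inp in sess.get_inputs()])
print([out.name for out in sess.get_outputs()])
```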
###Code
import sys
import json
from azureml.automl.core.onnx_convert import OnnxConvertConstants
from azureml.train.automl import constants
if sys.version_info < OnnxConvertConstants.OnnxIncompatiblePythonVersion:
python_version_compatible = True
else:
python_version_compatible = False
import onnxruntime
from azureml.automl.runtime.onnx_convert import OnnxInferenceHelper
def get_onnx_res(run):
res_path = 'onnx_resource.json'
run.download_file(name=constants.MODEL_RESOURCE_PATH_ONNX, output_file_path=res_path)
with open(res_path) as f:
onnx_res = json.load(f)
return onnx_res
if python_version_compatible:
test_df = test_dataset.to_pandas_dataframe()
mdl_bytes = onnx_mdl.SerializeToString()
onnx_res = get_onnx_res(best_run)
onnxrt_helper = OnnxInferenceHelper(mdl_bytes, onnx_res)
pred_onnx, pred_prob_onnx = onnxrt_helper.predict(test_df)
print(pred_onnx)
print(pred_prob_onnx)
else:
print('Please use Python version 3.6 or 3.7 to run the inference helper.')
###Output
_____no_output_____
###Markdown
Deploy

Retrieve the Best Model

Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.

Widget for Monitoring Runs

The widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.

**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details.
###Code
best_run, fitted_model = remote_run.get_output()
model_name = best_run.properties['model_name']
script_file_name = 'inference/score.py'
conda_env_file_name = 'inference/env.yml'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', 'inference/score.py')
best_run.download_file('outputs/conda_env_v_1_0_0.yml', 'inference/env.yml')
###Output
_____no_output_____
###Markdown
Register the Fitted Model for Deployment

If neither `metric` nor `iteration` are specified in the `register_model` call, the iteration with the best primary metric is registered.
###Code
description = 'AutoML Model trained on bank marketing data to predict if a client will subscribe to a term deposit'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id) # This will be written to the script file later in the notebook.
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
from azureml.core.environment import Environment
myenv = Environment.from_conda_specification(name="myenv", file_path=conda_env_file_name)
inference_config = InferenceConfig(entry_script=script_file_name, environment=myenv)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 1,
tags = {'area': "bmData", 'type': "automl_classification"},
description = 'sample service for Automl Classification')
aci_service_name = 'automl-sample-bankmarketing-all'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
###Output
_____no_output_____
###Markdown
Delete a Web Service

Deletes the specified web service.
###Code
#aci_service.delete()
###Output
_____no_output_____
###Markdown
Get Logs from a Deployed Web Service

Gets logs from a deployed web service.
###Code
#aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Test

Now that the model is trained, run the test data through the trained model to get the predicted values.
###Code
# Load the bank marketing datasets.
from numpy import array
X_test = test_dataset.drop_columns(columns=['y'])
y_test = test_dataset.keep_columns(columns=['y'], validate=True)
test_dataset.take(5).to_pandas_dataframe()
X_test = X_test.to_pandas_dataframe()
y_test = y_test.to_pandas_dataframe()
y_pred = fitted_model.predict(X_test)
actual = array(y_test)
actual = actual[:,0]
print(y_pred.shape, " ", actual.shape)
###Output
_____no_output_____
###Markdown
Calculate metrics for the prediction

Now visualize the data on a scatter plot, comparing the truth (actual) values with the predicted values from the trained model.
###Code
%matplotlib notebook
test_pred = plt.scatter(actual, y_pred, color='b')
test_test = plt.scatter(actual, actual, color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.

![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing.png)

Automated Machine Learning

_**Classification with Deployment using a Bank Marketing Dataset**_

Contents
1. [Introduction](#Introduction)
1. [Setup](#Setup)
1. [Train](#Train)
1. [Results](#Results)
1. [Deploy](#Deploy)
1. [Test](#Test)
1. [Acknowledgements](#Acknowledgements)

Introduction

In this example we use the UCI Bank Marketing dataset to showcase how you can use AutoML for a classification problem and deploy it to an Azure Container Instance (ACI). The classification goal is to predict if the client will subscribe to a term deposit with the bank.

If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already, to establish your connection to the AzureML Workspace. Please find the ONNX-related documentation [here](https://github.com/onnx/onnx).

In this notebook you will learn how to:
1. Create an experiment using an existing workspace.
2. Configure AutoML using `AutoMLConfig`.
3. Train the model using local compute with ONNX-compatible config on.
4. Explore the results, featurization transparency options and save the ONNX model.
5. Inference with the ONNX model.
6. Register the model.
7. Create a container image.
8. Create an Azure Container Instance (ACI) service.
9. Test the ACI service.

In addition, this notebook showcases the following features:
- **Blocking** certain pipelines
- Specifying **target metrics** to indicate stopping criteria
- Handling **missing data** in the input

Setup

As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
import logging
from matplotlib import pyplot as plt
import pandas as pd
import os
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.core.dataset import Dataset
from azureml.train.automl import AutoMLConfig
from azureml.interpret import ExplanationClient
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.33.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
Accessing the Azure ML workspace requires authentication with Azure.

The default authentication is interactive authentication using the default tenant. Executing the `ws = Workspace.from_config()` line in the cell below will prompt for authentication the first time that it is run.

If you have multiple Azure tenants, you can specify the tenant by replacing the `ws = Workspace.from_config()` line in the cell below with the following:

```
from azureml.core.authentication import InteractiveLoginAuthentication
auth = InteractiveLoginAuthentication(tenant_id='mytenantid')
ws = Workspace.from_config(auth=auth)
```

If you need to run in an environment where interactive login is not possible, you can use Service Principal authentication by replacing the `ws = Workspace.from_config()` line in the cell below with the following:

```
from azureml.core.authentication import ServicePrincipalAuthentication
auth = ServicePrincipalAuthentication('mytenantid', 'myappid', 'mypassword')
ws = Workspace.from_config(auth=auth)
```

For more details, see [aka.ms/aml-notebook-auth](http://aka.ms/aml-notebook-auth).
###Code
ws = Workspace.from_config()
# choose a name for experiment
experiment_name = 'automl-classification-bmarketing-all'
experiment=Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', None)  # None replaces the deprecated -1 sentinel
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
Create or Attach existing AmlCompute

You will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.

> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist.

Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace, this code will skip the creation process.

As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
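A minimal sketch of checking what already exists before creating anything, using the workspace handle from earlier in the notebook:

```
# Sketch: list compute targets already present in the workspace.
for name, target in ws.compute_targets.items():
    print(name, target.type)
```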
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
cpu_cluster_name = "cpu-cluster-4"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)
    print('Found existing cluster, using it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Data

Load Data

Leverage Azure compute to load the bank marketing dataset as a Tabular Dataset into the dataset variable.

Training Data
###Code
data = pd.read_csv("https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv")
data.head()
# Add missing values in 75% of the lines.
import numpy as np
missing_rate = 0.75
n_missing_samples = int(np.floor(data.shape[0] * missing_rate))
# Use the builtin bool; the np.bool alias is removed in recent NumPy releases.
missing_samples = np.hstack((np.zeros(data.shape[0] - n_missing_samples, dtype=bool), np.ones(n_missing_samples, dtype=bool)))
rng = np.random.RandomState(0)
rng.shuffle(missing_samples)
missing_features = rng.randint(0, data.shape[1], n_missing_samples)
data.values[np.where(missing_samples)[0], missing_features] = np.nan
if not os.path.isdir('data'):
os.mkdir('data')
# Save the train data to a csv to be uploaded to the datastore
pd.DataFrame(data).to_csv("data/train_data.csv", index=False)
ds = ws.get_default_datastore()
ds.upload(src_dir='./data', target_path='bankmarketing', overwrite=True, show_progress=True)
# Upload the training data as a tabular dataset for access during training on remote compute
train_data = Dataset.Tabular.from_delimited_files(path=ds.path('bankmarketing/train_data.csv'))
label = "y"
###Output
_____no_output_____
###Markdown
Validation Data
###Code
validation_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_validate.csv"
validation_dataset = Dataset.Tabular.from_delimited_files(validation_data)
###Output
_____no_output_____
###Markdown
Test Data
###Code
test_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_test.csv"
test_dataset = Dataset.Tabular.from_delimited_files(test_data)
###Output
_____no_output_____
###Markdown
Train

Instantiate an AutoMLConfig object. This defines the settings and data used to run the experiment.

|Property|Description|
|-|-|
|**task**|classification or regression or forecasting|
|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracy, AUC_weighted, average_precision_score_weighted, norm_macro_recall, precision_score_weighted|
|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|
|**blocked_models**|*List* of *strings* indicating machine learning algorithms for AutoML to avoid in this run. Allowed values for **Classification**: LogisticRegression, SGD, MultinomialNaiveBayes, BernoulliNaiveBayes, SVM, LinearSVM, KNN, DecisionTree, RandomForest, ExtremeRandomTrees, LightGBM, GradientBoosting, TensorFlowDNN, TensorFlowLinearClassifier. Allowed values for **Regression**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN. Allowed values for **Forecasting**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN, Arima, Prophet.|
|**allowed_models**|*List* of *strings* indicating machine learning algorithms for AutoML to use in this run. Same values listed above for **blocked_models** allowed for **allowed_models**.|
|**experiment_exit_score**|Value indicating the target for *primary_metric*. Once the target is surpassed the run terminates.|
|**experiment_timeout_hours**|Maximum amount of time in hours that all iterations combined can take before the experiment terminates.|
|**enable_early_stopping**|Flag to enable early termination if the score is not improving in the short term.|
|**featurization**|'auto' / 'off' Indicator for whether the featurization step should be done automatically or not. Note: if the input data is sparse, featurization cannot be turned on.|
|**n_cross_validations**|Number of cross validation splits.|
|**training_data**|Input dataset, containing both features and label column.|
|**label_column_name**|The name of the label column.|

**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)
###Code
automl_settings = {
"experiment_timeout_hours" : 0.3,
"enable_early_stopping" : True,
"iteration_timeout_minutes": 5,
"max_concurrent_iterations": 4,
"max_cores_per_iteration": -1,
#"n_cross_validations": 2,
"primary_metric": 'AUC_weighted',
"featurization": 'auto',
"verbosity": logging.INFO,
}
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
compute_target=compute_target,
experiment_exit_score = 0.9984,
blocked_models = ['KNN','LinearSVM'],
enable_onnx_compatible_models=True,
training_data = train_data,
label_column_name = label,
validation_data = validation_dataset,
**automl_settings
)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations, this can run for a while. Setting `show_output=True` streams validation errors and the current status.
###Code
remote_run = experiment.submit(automl_config, show_output = False)
###Output
_____no_output_____
###Markdown
Run the following cell to access previous runs. Uncomment the cell below and update the run_id.
###Code
#from azureml.train.automl.run import AutoMLRun
#remote_run = AutoMLRun(experiment=experiment, run_id='<run_ID_goes_here>')
#remote_run
# Wait for the remote run to complete
remote_run.wait_for_completion()
best_run_customized, fitted_model_customized = remote_run.get_output()
###Output
_____no_output_____
###Markdown
Transparency

View updated featurization summary
###Code
custom_featurizer = fitted_model_customized.named_steps['datatransformer']
df = custom_featurizer.get_featurization_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Set `is_user_friendly=False` to get a more detailed summary for the transforms being applied.
###Code
df = custom_featurizer.get_featurization_summary(is_user_friendly=False)
pd.DataFrame(data=df)
df = custom_featurizer.get_stats_feature_type_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Results
###Code
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
###Output
_____no_output_____
###Markdown
Retrieve the Best Model's explanation

Retrieve the explanation from the best_run which includes explanations for engineered features and raw features. Make sure that the run for generating explanations for the best model is completed.
###Code
# Wait for the best model explanation run to complete
from azureml.core.run import Run
model_explainability_run_id = remote_run.id + "_" + "ModelExplain"
print(model_explainability_run_id)
model_explainability_run = Run(experiment=experiment, run_id=model_explainability_run_id)
model_explainability_run.wait_for_completion()
# Get the best run object
best_run, fitted_model = remote_run.get_output()
###Output
_____no_output_____
###Markdown
Download engineered feature importance from artifact store

You can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=False)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Download raw feature importance from artifact store

You can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
raw_explanations = client.download_model_explanation(raw=True)
exp_data = raw_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Retrieve the Best ONNX Model

Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.

Set the parameter `return_onnx_model=True` to retrieve the best ONNX model, instead of the Python model.
###Code
best_run, onnx_mdl = remote_run.get_output(return_onnx_model=True)
###Output
_____no_output_____
###Markdown
Save the best ONNX model
###Code
from azureml.automl.runtime.onnx_convert import OnnxConverter
onnx_fl_path = "./best_model.onnx"
OnnxConverter.save_onnx_model(onnx_mdl, onnx_fl_path)
###Output
_____no_output_____
###Markdown
Predict with the ONNX model, using onnxruntime package
###Code
import sys
import json
from azureml.automl.core.onnx_convert import OnnxConvertConstants
from azureml.train.automl import constants
from azureml.automl.runtime.onnx_convert import OnnxInferenceHelper
def get_onnx_res(run):
res_path = 'onnx_resource.json'
run.download_file(name=constants.MODEL_RESOURCE_PATH_ONNX, output_file_path=res_path)
with open(res_path) as f:
result = json.load(f)
return result
if sys.version_info < OnnxConvertConstants.OnnxIncompatiblePythonVersion:
test_df = test_dataset.to_pandas_dataframe()
mdl_bytes = onnx_mdl.SerializeToString()
onnx_result = get_onnx_res(best_run)
onnxrt_helper = OnnxInferenceHelper(mdl_bytes, onnx_result)
pred_onnx, pred_prob_onnx = onnxrt_helper.predict(test_df)
print(pred_onnx)
print(pred_prob_onnx)
else:
print('Please use Python version 3.6 or 3.7 to run the inference helper.')
###Output
_____no_output_____
###Markdown
Deploy

Retrieve the Best Model

Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.

Widget for Monitoring Runs

The widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.

**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details.
###Code
best_run, fitted_model = remote_run.get_output()
model_name = best_run.properties['model_name']
script_file_name = 'inference/score.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', 'inference/score.py')
###Output
_____no_output_____
###Markdown
Register the Fitted Model for Deployment

If neither `metric` nor `iteration` are specified in the `register_model` call, the iteration with the best primary metric is registered.
###Code
description = 'AutoML Model trained on bank marketing data to predict if a client will subscribe to a term deposit'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id) # This will be written to the script file later in the notebook.
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.model import Model
inference_config = InferenceConfig(entry_script=script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 2,
memory_gb = 2,
tags = {'area': "bmData", 'type': "automl_classification"},
description = 'sample service for Automl Classification')
aci_service_name = 'automl-sample-bankmarketing-all'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
###Output
_____no_output_____
###Markdown
Get Logs from a Deployed Web Service

Gets logs from a deployed web service.
###Code
#aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Test

Now that the model is trained, run the test data through the trained model to get the predicted values. This calls the ACI web service to do the prediction.

Note that the JSON passed to the ACI web service is an array of rows of data. Each row should either be an array of values in the same order that was used for training or a dictionary where the keys are the same as the column names used for training. The example below uses dictionary rows.
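As a complement to the dictionary rows used below, a sketch of the array-of-values variant described above; the column order must match the order used for training, and rows containing NaN may need special handling:

```
# Sketch: array-row payload, each row in training column order.
import json
payload = json.dumps({"data": X_test.values.tolist()})
```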
###Code
# Load the bank marketing datasets.
from numpy import array
X_test = test_dataset.drop_columns(columns=['y'])
y_test = test_dataset.keep_columns(columns=['y'], validate=True)
test_dataset.take(5).to_pandas_dataframe()
X_test = X_test.to_pandas_dataframe()
y_test = y_test.to_pandas_dataframe()
import requests
X_test_json = X_test.to_json(orient='records')
data = "{\"data\": " + X_test_json +"}"
headers = {'Content-Type': 'application/json'}
resp = requests.post(aci_service.scoring_uri, data, headers=headers)
y_pred = json.loads(json.loads(resp.text))['result']
actual = array(y_test)
actual = actual[:,0]
print(len(y_pred), " ", len(actual))
###Output
_____no_output_____
###Markdown
Calculate metrics for the prediction

Now visualize the data as a confusion matrix, comparing the predicted values against the actual values.
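Alongside the confusion matrix below, a quick numeric summary can be computed with the same scikit-learn dependency; a minimal sketch:

```
# Sketch: overall accuracy and per-class precision/recall for the predictions.
from sklearn.metrics import accuracy_score, classification_report
print("accuracy:", accuracy_score(actual, y_pred))
print(classification_report(actual, y_pred))
```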
###Code
%matplotlib notebook
from sklearn.metrics import confusion_matrix
import itertools
cf = confusion_matrix(actual, y_pred)
plt.imshow(cf, cmap=plt.cm.Blues, interpolation='nearest')
plt.colorbar()
plt.title('Confusion Matrix')
plt.xlabel('Predicted')
plt.ylabel('Actual')
class_labels = ['no', 'yes']
tick_marks = np.arange(len(class_labels))
plt.xticks(tick_marks, class_labels)
plt.yticks([-0.5, 0, 1, 1.5], ['', 'no', 'yes', ''])
# plot the count inside each cell of the confusion matrix
thresh = cf.max() / 2.
for i, j in itertools.product(range(cf.shape[0]), range(cf.shape[1])):
    plt.text(j, i, format(cf[i, j], 'd'), horizontalalignment='center',
             color='white' if cf[i, j] > thresh else 'black')
plt.show()
###Output
_____no_output_____
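###Markdown
Beyond the confusion matrix, scikit-learn can summarize the same predictions as scalar metrics. A minimal sketch using the `actual` and `y_pred` arrays from above.
###Code
from sklearn.metrics import accuracy_score, classification_report

# Overall accuracy plus per-class precision, recall and F1.
print('Accuracy:', accuracy_score(actual, y_pred))
print(classification_report(actual, y_pred))
###Output
_____no_output_____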
###Markdown
Delete a Web ServiceDeletes the specified web service.
###Code
aci_service.delete()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing.png) Automated Machine Learning_**Classification with Deployment using a Bank Marketing Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Deploy](Deploy)1. [Test](Test)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the UCI Bank Marketing dataset to showcase how you can use AutoML for a classification problem and deploy it to an Azure Container Instance (ACI). The classification goal is to predict if the client will subscribe to a term deposit with the bank.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. Please find the ONNX related documentations [here](https://github.com/onnx/onnx).In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model using local compute with ONNX compatible config on.4. Explore the results, featurization transparency options and save the ONNX model5. Inference with the ONNX model.6. Register the model.7. Create a container image.8. Create an Azure Container Instance (ACI) service.9. Test the ACI service.In addition this notebook showcases the following features- **Blocking** certain pipelines- Specifying **target metrics** to indicate stopping criteria- Handling **missing data** in the input SetupAs part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
import logging
from matplotlib import pyplot as plt
import pandas as pd
import os
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.automl.core.featurization import FeaturizationConfig
from azureml.core.dataset import Dataset
from azureml.train.automl import AutoMLConfig
from azureml.interpret import ExplanationClient
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.29.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
Accessing the Azure ML workspace requires authentication with Azure.The default authentication is interactive authentication using the default tenant. Executing the `ws = Workspace.from_config()` line in the cell below will prompt for authentication the first time that it is run.If you have multiple Azure tenants, you can specify the tenant by replacing the `ws = Workspace.from_config()` line in the cell below with the following:```from azureml.core.authentication import InteractiveLoginAuthenticationauth = InteractiveLoginAuthentication(tenant_id = 'mytenantid')ws = Workspace.from_config(auth = auth)```If you need to run in an environment where interactive login is not possible, you can use Service Principal authentication by replacing the `ws = Workspace.from_config()` line in the cell below with the following:```from azureml.core.authentication import ServicePrincipalAuthenticationauth = auth = ServicePrincipalAuthentication('mytenantid', 'myappid', 'mypassword')ws = Workspace.from_config(auth = auth)```For more details, see [aka.ms/aml-notebook-auth](http://aka.ms/aml-notebook-auth)
###Code
ws = Workspace.from_config()
# choose a name for experiment
experiment_name = 'automl-classification-bmarketing-all'
experiment=Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', None)  # -1 is deprecated in newer pandas; None means no truncation
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
Create or Attach existing AmlComputeYou will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
cpu_cluster_name = "cpu-cluster-4"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
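###Markdown
To double-check what ended up in the workspace, you can enumerate its attached compute targets. A small sketch.
###Code
# List the compute targets already attached to this workspace.
for name, target in ws.compute_targets.items():
    print(name, '-', target.type, '-', target.provisioning_state)
###Output
_____no_output_____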
###Markdown
Data Load DataLeverage Azure compute to load the bank marketing dataset as a Tabular Dataset into the dataset variable. Training Data
###Code
data = pd.read_csv("https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv")
data.head()
# Add missing values in 75% of the lines.
import numpy as np
missing_rate = 0.75
n_missing_samples = int(np.floor(data.shape[0] * missing_rate))
missing_samples = np.hstack((np.zeros(data.shape[0] - n_missing_samples, dtype=bool), np.ones(n_missing_samples, dtype=bool)))
rng = np.random.RandomState(0)
rng.shuffle(missing_samples)
missing_features = rng.randint(0, data.shape[1], n_missing_samples)
data.values[np.where(missing_samples)[0], missing_features] = np.nan
if not os.path.isdir('data'):
os.mkdir('data')
# Save the train data to a csv to be uploaded to the datastore
pd.DataFrame(data).to_csv("data/train_data.csv", index=False)
ds = ws.get_default_datastore()
ds.upload(src_dir='./data', target_path='bankmarketing', overwrite=True, show_progress=True)
# Upload the training data as a tabular dataset for access during training on remote compute
train_data = Dataset.Tabular.from_delimited_files(path=ds.path('bankmarketing/train_data.csv'))
label = "y"
###Output
_____no_output_____
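###Markdown
As a quick sanity check on the injected missing values, the fraction of rows containing at least one NaN should be close to the 75% missing rate used above.
###Code
# Fraction of rows with at least one missing value; expected to be near 0.75.
print(data.isnull().any(axis=1).mean())
###Output
_____no_output_____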
###Markdown
Validation Data
###Code
validation_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_validate.csv"
validation_dataset = Dataset.Tabular.from_delimited_files(validation_data)
###Output
_____no_output_____
###Markdown
Test Data
###Code
test_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_test.csv"
test_dataset = Dataset.Tabular.from_delimited_files(test_data)
###Output
_____no_output_____
###Markdown
TrainInstantiate an AutoMLConfig object. This defines the settings and data used to run the experiment.

|Property|Description|
|-|-|
|**task**|classification or regression or forecasting|
|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracy, AUC_weighted, average_precision_score_weighted, norm_macro_recall, precision_score_weighted|
|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|
|**blocked_models**|*List* of *strings* indicating machine learning algorithms for AutoML to avoid in this run. Allowed values for **Classification**: LogisticRegression, SGD, MultinomialNaiveBayes, BernoulliNaiveBayes, SVM, LinearSVM, KNN, DecisionTree, RandomForest, ExtremeRandomTrees, LightGBM, GradientBoosting, TensorFlowDNN, TensorFlowLinearClassifier. Allowed values for **Regression**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN. Allowed values for **Forecasting**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN, Arima, Prophet|
|**allowed_models**|*List* of *strings* indicating machine learning algorithms for AutoML to use in this run. Same values listed above for **blocked_models** allowed for **allowed_models**.|
|**experiment_exit_score**|Value indicating the target for *primary_metric*. Once the target is surpassed the run terminates.|
|**experiment_timeout_hours**|Maximum amount of time in hours that all iterations combined can take before the experiment terminates.|
|**enable_early_stopping**|Flag to enable early termination if the score is not improving in the short term.|
|**featurization**|'auto' / 'off' Indicator for whether the featurization step should be done automatically or not. Note: if the input data is sparse, featurization cannot be turned on.|
|**n_cross_validations**|Number of cross validation splits.|
|**training_data**|Input dataset, containing both features and label column.|
|**label_column_name**|The name of the label column.|

**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)
###Code
automl_settings = {
"experiment_timeout_hours" : 0.3,
"enable_early_stopping" : True,
"iteration_timeout_minutes": 5,
"max_concurrent_iterations": 4,
"max_cores_per_iteration": -1,
#"n_cross_validations": 2,
"primary_metric": 'AUC_weighted',
"featurization": 'auto',
"verbosity": logging.INFO,
}
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
compute_target=compute_target,
experiment_exit_score = 0.9984,
blocked_models = ['KNN','LinearSVM'],
enable_onnx_compatible_models=True,
training_data = train_data,
label_column_name = label,
validation_data = validation_dataset,
**automl_settings
)
###Output
_____no_output_____
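###Markdown
If a separate validation set were not available, the commented-out `n_cross_validations` setting above could replace `validation_data`. A sketch of that alternative configuration, left commented out so it does not override the config used in this run.
###Code
# Sketch: cross-validation instead of an explicit validation dataset.
# automl_config_cv = AutoMLConfig(task = 'classification',
#                                 compute_target = compute_target,
#                                 training_data = train_data,
#                                 label_column_name = label,
#                                 n_cross_validations = 2,
#                                 primary_metric = 'AUC_weighted')
###Output
_____no_output_____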
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while. Setting `show_output=True` makes the execution synchronous and displays validation errors and the current status as the run progresses.
###Code
remote_run = experiment.submit(automl_config, show_output = False)
###Output
_____no_output_____
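###Markdown
While the run is in progress you can poll its status from the notebook or follow it in the Azure ML studio. A minimal sketch.
###Code
# Poll the run status and print a link to the run in the Azure ML studio.
print(remote_run.get_status())
print(remote_run.get_portal_url())
###Output
_____no_output_____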
###Markdown
Run the following cell to access previous runs. Uncomment the cell below and update the run_id.
###Code
#from azureml.train.automl.run import AutoMLRun
#remote_run = AutoMLRun(experiment=experiment, run_id='<run_ID_goes_here>')
#remote_run
# Wait for the remote run to complete
remote_run.wait_for_completion()
best_run_customized, fitted_model_customized = remote_run.get_output()
###Output
_____no_output_____
###Markdown
TransparencyView updated featurization summary
###Code
custom_featurizer = fitted_model_customized.named_steps['datatransformer']
df = custom_featurizer.get_featurization_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
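###Markdown
The summary is a list of per-feature records, so loading it into a DataFrame makes it easy to filter. A sketch that lists the raw features AutoML dropped; the `Dropped` and `RawFeatureName` column names are an assumption about the summary schema, so the sketch guards for them.
###Code
summary_df = pd.DataFrame(data=df)
# Assumed column names; inspect summary_df.columns if they differ.
if {'Dropped', 'RawFeatureName'}.issubset(summary_df.columns):
    print(summary_df.loc[summary_df['Dropped'] == 'Yes', 'RawFeatureName'])
###Output
_____no_output_____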
###Markdown
Set `is_user_friendly=False` to get a more detailed summary for the transforms being applied.
###Code
df = custom_featurizer.get_featurization_summary(is_user_friendly=False)
pd.DataFrame(data=df)
df = custom_featurizer.get_stats_feature_type_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Results
###Code
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
###Output
_____no_output_____
###Markdown
Retrieve the Best Model's explanationRetrieve the explanation from the best_run which includes explanations for engineered features and raw features. Make sure that the run for generating explanations for the best model is completed.
###Code
# Wait for the best model explanation run to complete
from azureml.core.run import Run
model_explainability_run_id = remote_run.id + "_" + "ModelExplain"
print(model_explainability_run_id)
model_explainability_run = Run(experiment=experiment, run_id=model_explainability_run_id)
model_explainability_run.wait_for_completion()
# Get the best run object
best_run, fitted_model = remote_run.get_output()
###Output
_____no_output_____
###Markdown
Download engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=False)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
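###Markdown
`exp_data` maps engineered feature names to importance scores, so the features can be ranked directly. A small sketch that prints the ten most important ones.
###Code
# Sort the importance dictionary and show the top ten features.
top_features = sorted(exp_data.items(), key=lambda kv: abs(kv[1]), reverse=True)[:10]
for name, importance in top_features:
    print(f'{name}: {importance:.4f}')
###Output
_____no_output_____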
###Markdown
Download raw feature importance from artifact storeYou can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
raw_explanations = client.download_model_explanation(raw=True)
exp_data = raw_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Retrieve the Best ONNX ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.Set the parameter return_onnx_model=True to retrieve the best ONNX model, instead of the Python model.
###Code
best_run, onnx_mdl = remote_run.get_output(return_onnx_model=True)
###Output
_____no_output_____
###Markdown
Save the best ONNX model
###Code
from azureml.automl.runtime.onnx_convert import OnnxConverter
onnx_fl_path = "./best_model.onnx"
OnnxConverter.save_onnx_model(onnx_mdl, onnx_fl_path)
###Output
_____no_output_____
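###Markdown
To confirm that the file on disk is a well-formed ONNX graph, it can be loaded back and validated with the `onnx` package, which is assumed to be installed alongside the ONNX-enabled SDK. A minimal sketch.
###Code
import onnx

# Load the saved model and validate its graph structure.
loaded_model = onnx.load(onnx_fl_path)
onnx.checker.check_model(loaded_model)
print('Model graph validated:', onnx_fl_path)
###Output
_____no_output_____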
###Markdown
Predict with the ONNX model, using onnxruntime package
###Code
import sys
import json
from azureml.automl.core.onnx_convert import OnnxConvertConstants
from azureml.train.automl import constants
if sys.version_info < OnnxConvertConstants.OnnxIncompatiblePythonVersion:
python_version_compatible = True
else:
python_version_compatible = False
import onnxruntime
from azureml.automl.runtime.onnx_convert import OnnxInferenceHelper
def get_onnx_res(run):
res_path = 'onnx_resource.json'
run.download_file(name=constants.MODEL_RESOURCE_PATH_ONNX, output_file_path=res_path)
with open(res_path) as f:
onnx_res = json.load(f)
return onnx_res
if python_version_compatible:
test_df = test_dataset.to_pandas_dataframe()
mdl_bytes = onnx_mdl.SerializeToString()
onnx_res = get_onnx_res(best_run)
onnxrt_helper = OnnxInferenceHelper(mdl_bytes, onnx_res)
pred_onnx, pred_prob_onnx = onnxrt_helper.predict(test_df)
print(pred_onnx)
print(pred_prob_onnx)
else:
print('Please use Python version 3.6 or 3.7 to run the inference helper.')
###Output
_____no_output_____
###Markdown
Deploy Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details
###Code
best_run, fitted_model = remote_run.get_output()
model_name = best_run.properties['model_name']
script_file_name = 'inference/score.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', 'inference/score.py')
###Output
_____no_output_____
###Markdown
Register the Fitted Model for DeploymentIf neither `metric` nor `iteration` are specified in the `register_model` call, the iteration with the best primary metric is registered.
###Code
description = 'AutoML Model trained on bank marketing data to predict if a client will subscribe to a term deposit'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id) # This will be written to the script file later in the notebook.
###Output
_____no_output_____
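###Markdown
The note above says the best iteration is registered when neither argument is given. As a hedged sketch (kept commented out so the model is not registered twice), registering by an explicit metric or iteration would look like this; the metric name and iteration number are illustrative.
###Code
# Sketch: register the model that was best for a specific logged metric ...
#model = remote_run.register_model(model_name = model_name, metric = 'AUC_weighted')
# ... or the model produced by a particular iteration (3 is arbitrary).
#model = remote_run.register_model(model_name = model_name, iteration = 3)
###Output
_____no_output_____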
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
from azureml.core.environment import Environment
inference_config = InferenceConfig(entry_script=script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 1,
tags = {'area': "bmData", 'type': "automl_classification"},
description = 'sample service for Automl Classification')
aci_service_name = 'automl-sample-bankmarketing-all'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
###Output
_____no_output_____
###Markdown
Get Logs from a Deployed Web ServiceGets logs from a deployed web service.
###Code
#aci_service.get_logs()
###Output
_____no_output_____
###Markdown
TestNow that the model is trained, run the test data through the trained model to get the predicted values. This calls the ACI web service to do the prediction.Note that the JSON passed to the ACI web service is an array of rows of data. Each row should either be an array of values in the same order that was used for training or a dictionary where the keys are the same as the column names used for training. The example below uses dictionary rows.
###Code
# Load the bank marketing datasets.
from numpy import array
X_test = test_dataset.drop_columns(columns=['y'])
y_test = test_dataset.keep_columns(columns=['y'], validate=True)
test_dataset.take(5).to_pandas_dataframe()
X_test = X_test.to_pandas_dataframe()
y_test = y_test.to_pandas_dataframe()
import json
import requests
X_test_json = X_test.to_json(orient='records')
data = "{\"data\": " + X_test_json +"}"
headers = {'Content-Type': 'application/json'}
resp = requests.post(aci_service.scoring_uri, data, headers=headers)
y_pred = json.loads(json.loads(resp.text))['result']
actual = array(y_test)
actual = actual[:,0]
print(len(y_pred), " ", len(actual))
###Output
_____no_output_____
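###Markdown
As an alternative to posting with `requests`, the `Webservice` object returned by `Model.deploy` exposes a `run` method that accepts the same JSON payload. A minimal sketch (an addition, not part of the original notebook):
###Code
# Sketch: call the deployed ACI service through the SDK instead of raw HTTP.
# The response mirrors what the scoring script returns.
sdk_resp = aci_service.run(input_data=data)
print(sdk_resp)
###Output
_____no_output_____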
###Markdown
Calculate metrics for the predictionNow visualize the data as a confusion matrix that compares the predicted values against the actual values.
###Code
%matplotlib notebook
from sklearn.metrics import confusion_matrix
import numpy as np
import itertools
cf = confusion_matrix(actual, y_pred)
plt.imshow(cf, cmap=plt.cm.Blues, interpolation='nearest')
plt.colorbar()
plt.title('Confusion Matrix')
plt.xlabel('Predicted')
plt.ylabel('Actual')
class_labels = ['no', 'yes']
tick_marks = np.arange(len(class_labels))
plt.xticks(tick_marks, class_labels)
plt.yticks([-0.5, 0, 1, 1.5], ['', 'no', 'yes', ''])
# Plot the count inside each cell of the matrix.
thresh = cf.max() / 2.
for i, j in itertools.product(range(cf.shape[0]), range(cf.shape[1])):
    plt.text(j, i, format(cf[i, j], 'd'), horizontalalignment='center',
             color='white' if cf[i, j] > thresh else 'black')
plt.show()
###Output
_____no_output_____
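###Markdown
Beyond the confusion matrix, summary metrics can be computed from the same arrays. A short sketch using scikit-learn (an addition, not part of the original notebook):
###Code
from sklearn.metrics import accuracy_score, classification_report
# Sketch: overall accuracy plus per-class precision, recall and F1.
print('accuracy:', accuracy_score(actual, y_pred))
print(classification_report(actual, y_pred))
###Output
_____no_output_____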
###Markdown
Delete a Web ServiceDeletes the specified web service.
###Code
aci_service.delete()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing.png) Automated Machine Learning_**Classification with Deployment using a Bank Marketing Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Deploy](Deploy)1. [Test](Test)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the UCI Bank Marketing dataset to showcase how you can use AutoML for a classification problem and deploy it to an Azure Container Instance (ACI). The classification goal is to predict if the client will subscribe to a term deposit with the bank.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. Please find the ONNX-related documentation [here](https://github.com/onnx/onnx).In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model using local compute with the ONNX-compatible config turned on.4. Explore the results, featurization transparency options and save the ONNX model.5. Run inference with the ONNX model.6. Register the model.7. Create a container image.8. Create an Azure Container Instance (ACI) service.9. Test the ACI service.In addition, this notebook showcases the following features:- **Blocking** certain pipelines- Specifying **target metrics** to indicate stopping criteria- Handling **missing data** in the input SetupAs part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
import logging
from matplotlib import pyplot as plt
import pandas as pd
import os
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.automl.core.featurization import FeaturizationConfig
from azureml.core.dataset import Dataset
from azureml.train.automl import AutoMLConfig
from azureml.interpret import ExplanationClient
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.21.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
Accessing the Azure ML workspace requires authentication with Azure. The default authentication is interactive authentication using the default tenant. Executing the `ws = Workspace.from_config()` line in the cell below will prompt for authentication the first time that it is run. If you have multiple Azure tenants, you can specify the tenant by replacing the `ws = Workspace.from_config()` line in the cell below with the following:```from azureml.core.authentication import InteractiveLoginAuthenticationauth = InteractiveLoginAuthentication(tenant_id = 'mytenantid')ws = Workspace.from_config(auth = auth)```If you need to run in an environment where interactive login is not possible, you can use Service Principal authentication by replacing the `ws = Workspace.from_config()` line in the cell below with the following:```from azureml.core.authentication import ServicePrincipalAuthenticationauth = ServicePrincipalAuthentication('mytenantid', 'myappid', 'mypassword')ws = Workspace.from_config(auth = auth)```For more details, see [aka.ms/aml-notebook-auth](http://aka.ms/aml-notebook-auth)
###Code
ws = Workspace.from_config()
# choose a name for experiment
experiment_name = 'automl-classification-bmarketing-all'
experiment=Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', None)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
Create or Attach existing AmlComputeYou will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
cpu_cluster_name = "cpu-cluster-4"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Data Load DataLeverage Azure compute to load the bank marketing dataset as a Tabular Dataset into the dataset variable. Training Data
###Code
data = pd.read_csv("https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv")
data.head()
# Add missing values in 75% of the lines.
import numpy as np
missing_rate = 0.75
n_missing_samples = int(np.floor(data.shape[0] * missing_rate))
missing_samples = np.hstack((np.zeros(data.shape[0] - n_missing_samples, dtype=bool), np.ones(n_missing_samples, dtype=bool)))
rng = np.random.RandomState(0)
rng.shuffle(missing_samples)
missing_features = rng.randint(0, data.shape[1], n_missing_samples)
data.values[np.where(missing_samples)[0], missing_features] = np.nan
if not os.path.isdir('data'):
os.mkdir('data')
# Save the train data to a csv to be uploaded to the datastore
pd.DataFrame(data).to_csv("data/train_data.csv", index=False)
ds = ws.get_default_datastore()
ds.upload(src_dir='./data', target_path='bankmarketing', overwrite=True, show_progress=True)
# Upload the training data as a tabular dataset for access during training on remote compute
train_data = Dataset.Tabular.from_delimited_files(path=ds.path('bankmarketing/train_data.csv'))
label = "y"
###Output
_____no_output_____
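###Markdown
As a quick sanity check (an addition to the original flow), you can measure how many rows of the modified frame actually contain a missing value; the fraction should be close to the `missing_rate` used above.
###Code
# Sketch: fraction of rows with at least one NaN after the injection step.
rows_with_nan = data.isnull().any(axis=1).mean()
print(f"rows with at least one missing value: {rows_with_nan:.2%}")
###Output
_____no_output_____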
###Markdown
Validation Data
###Code
validation_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_validate.csv"
validation_dataset = Dataset.Tabular.from_delimited_files(validation_data)
###Output
_____no_output_____
###Markdown
Test Data
###Code
test_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_test.csv"
test_dataset = Dataset.Tabular.from_delimited_files(test_data)
###Output
_____no_output_____
###Markdown
TrainInstantiate an AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression or forecasting||**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracy, AUC_weighted, average_precision_score_weighted, norm_macro_recall, precision_score_weighted||**iteration_timeout_minutes**|Time limit in minutes for each iteration.||**blocked_models** | *List* of *strings* indicating machine learning algorithms for AutoML to avoid in this run. Allowed values for **Classification**: LogisticRegression, SGD, MultinomialNaiveBayes, BernoulliNaiveBayes, SVM, LinearSVM, KNN, DecisionTree, RandomForest, ExtremeRandomTrees, LightGBM, GradientBoosting, TensorFlowDNN, TensorFlowLinearClassifier. Allowed values for **Regression**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN. Allowed values for **Forecasting**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN, Arima, Prophet||**allowed_models** | *List* of *strings* indicating machine learning algorithms for AutoML to use in this run. Same values listed above for **blocked_models** allowed for **allowed_models**.||**experiment_exit_score**| Value indicating the target for *primary_metric*. Once the target is surpassed the run terminates.||**experiment_timeout_hours**| Maximum amount of time in hours that all iterations combined can take before the experiment terminates.||**enable_early_stopping**| Flag to enable early termination if the score is not improving in the short term.||**featurization**| 'auto' / 'off' Indicator for whether the featurization step should be done automatically or not. Note: If the input data is sparse, featurization cannot be turned on.||**n_cross_validations**|Number of cross validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.|**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric)
###Code
automl_settings = {
"experiment_timeout_hours" : 0.3,
"enable_early_stopping" : True,
"iteration_timeout_minutes": 5,
"max_concurrent_iterations": 4,
"max_cores_per_iteration": -1,
#"n_cross_validations": 2,
"primary_metric": 'AUC_weighted',
"featurization": 'auto',
"verbosity": logging.INFO,
}
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
compute_target=compute_target,
experiment_exit_score = 0.9984,
blocked_models = ['KNN','LinearSVM'],
enable_onnx_compatible_models=True,
training_data = train_data,
label_column_name = label,
validation_data = validation_dataset,
**automl_settings
)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while. When `show_output=True`, validation errors and current status will be shown and the execution will be synchronous.
###Code
remote_run = experiment.submit(automl_config, show_output = False)
remote_run
###Output
_____no_output_____
###Markdown
Run the following cell to access previous runs. Uncomment the cell below and update the run_id.
###Code
#from azureml.train.automl.run import AutoMLRun
#remote_run = AutoMLRun(experiment=experiment, run_id='<run_ID_goes_here>')
#remote_run
# Wait for the remote run to complete
remote_run.wait_for_completion()
best_run_customized, fitted_model_customized = remote_run.get_output()
###Output
_____no_output_____
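###Markdown
Once the run has completed, the metrics logged for the best child run can be inspected directly. A small sketch (added for illustration):
###Code
# Sketch: print every metric logged by the best child run.
metrics = best_run_customized.get_metrics()
for name, value in metrics.items():
    print(name, value)
###Output
_____no_output_____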
###Markdown
TransparencyView updated featurization summary
###Code
custom_featurizer = fitted_model_customized.named_steps['datatransformer']
df = custom_featurizer.get_featurization_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Set `is_user_friendly=False` to get a more detailed summary for the transforms being applied.
###Code
df = custom_featurizer.get_featurization_summary(is_user_friendly=False)
pd.DataFrame(data=df)
df = custom_featurizer.get_stats_feature_type_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Results
###Code
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
###Output
_____no_output_____
###Markdown
Retrieve the Best Model's explanationRetrieve the explanation from the best_run which includes explanations for engineered features and raw features. Make sure that the run for generating explanations for the best model is completed.
###Code
# Wait for the best model explanation run to complete
from azureml.core.run import Run
model_explainability_run_id = remote_run.id + "_" + "ModelExplain"
print(model_explainability_run_id)
model_explainability_run = Run(experiment=experiment, run_id=model_explainability_run_id)
model_explainability_run.wait_for_completion()
# Get the best run object
best_run, fitted_model = remote_run.get_output()
###Output
_____no_output_____
###Markdown
Download engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=False)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Download raw feature importance from artifact storeYou can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=True)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
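###Markdown
The explanation comes back as a plain feature-to-importance dictionary, so it is easy to rank. A small sketch (an addition, not part of the original notebook):
###Code
# Sketch: show the ten most important raw features, largest importance first.
top10 = sorted(exp_data.items(), key=lambda kv: kv[1], reverse=True)[:10]
for name, importance in top10:
    print(f"{name}: {importance:.4f}")
###Output
_____no_output_____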
###Markdown
Retrieve the Best ONNX ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.Set the parameter return_onnx_model=True to retrieve the best ONNX model, instead of the Python model.
###Code
best_run, onnx_mdl = remote_run.get_output(return_onnx_model=True)
###Output
_____no_output_____
###Markdown
Save the best ONNX model
###Code
from azureml.automl.runtime.onnx_convert import OnnxConverter
onnx_fl_path = "./best_model.onnx"
OnnxConverter.save_onnx_model(onnx_mdl, onnx_fl_path)
###Output
_____no_output_____
###Markdown
Predict with the ONNX model, using onnxruntime package
###Code
import sys
import json
from azureml.automl.core.onnx_convert import OnnxConvertConstants
from azureml.train.automl import constants
if sys.version_info < OnnxConvertConstants.OnnxIncompatiblePythonVersion:
python_version_compatible = True
else:
python_version_compatible = False
import onnxruntime
from azureml.automl.runtime.onnx_convert import OnnxInferenceHelper
def get_onnx_res(run):
res_path = 'onnx_resource.json'
run.download_file(name=constants.MODEL_RESOURCE_PATH_ONNX, output_file_path=res_path)
with open(res_path) as f:
onnx_res = json.load(f)
return onnx_res
if python_version_compatible:
test_df = test_dataset.to_pandas_dataframe()
mdl_bytes = onnx_mdl.SerializeToString()
onnx_res = get_onnx_res(best_run)
onnxrt_helper = OnnxInferenceHelper(mdl_bytes, onnx_res)
pred_onnx, pred_prob_onnx = onnxrt_helper.predict(test_df)
print(pred_onnx)
print(pred_prob_onnx)
else:
print('Please use Python version 3.6 or 3.7 to run the inference helper.')
###Output
_____no_output_____
###Markdown
Deploy Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details
###Code
best_run, fitted_model = remote_run.get_output()
model_name = best_run.properties['model_name']
script_file_name = 'inference/score.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', 'inference/score.py')
###Output
_____no_output_____
###Markdown
Register the Fitted Model for DeploymentIf neither `metric` nor `iteration` are specified in the `register_model` call, the iteration with the best primary metric is registered.
###Code
description = 'AutoML Model trained on bank marketing data to predict if a client will subscribe to a term deposit'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id) # This will be written to the script file later in the notebook.
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
from azureml.core.environment import Environment
inference_config = InferenceConfig(entry_script=script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 1,
tags = {'area': "bmData", 'type': "automl_classification"},
description = 'sample service for Automl Classification')
aci_service_name = 'automl-sample-bankmarketing-all'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
###Output
_____no_output_____
###Markdown
Get Logs from a Deployed Web ServiceGets logs from a deployed web service.
###Code
#aci_service.get_logs()
###Output
_____no_output_____
###Markdown
TestNow that the model is trained, run the test data through the trained model to get the predicted values. This calls the ACI web service to do the prediction.Note that the JSON passed to the ACI web service is an array of rows of data. Each row should either be an array of values in the same order that was used for training or a dictionary where the keys are the same as the column names used for training. The example below uses dictionary rows.
###Code
# Load the bank marketing datasets.
from numpy import array
X_test = test_dataset.drop_columns(columns=['y'])
y_test = test_dataset.keep_columns(columns=['y'], validate=True)
test_dataset.take(5).to_pandas_dataframe()
X_test = X_test.to_pandas_dataframe()
y_test = y_test.to_pandas_dataframe()
import json
import requests
X_test_json = X_test.to_json(orient='records')
data = "{\"data\": " + X_test_json +"}"
headers = {'Content-Type': 'application/json'}
resp = requests.post(aci_service.scoring_uri, data, headers=headers)
y_pred = json.loads(json.loads(resp.text))['result']
actual = array(y_test)
actual = actual[:,0]
print(len(y_pred), " ", len(actual))
###Output
_____no_output_____
###Markdown
Calculate metrics for the predictionNow visualize the data as a confusion matrix that compares the predicted values against the actual values.
###Code
%matplotlib notebook
from sklearn.metrics import confusion_matrix
import numpy as np
import itertools
cf = confusion_matrix(actual, y_pred)
plt.imshow(cf, cmap=plt.cm.Blues, interpolation='nearest')
plt.colorbar()
plt.title('Confusion Matrix')
plt.xlabel('Predicted')
plt.ylabel('Actual')
class_labels = ['no', 'yes']
tick_marks = np.arange(len(class_labels))
plt.xticks(tick_marks, class_labels)
plt.yticks([-0.5, 0, 1, 1.5], ['', 'no', 'yes', ''])
# Plot the count inside each cell of the matrix.
thresh = cf.max() / 2.
for i, j in itertools.product(range(cf.shape[0]), range(cf.shape[1])):
    plt.text(j, i, format(cf[i, j], 'd'), horizontalalignment='center',
             color='white' if cf[i, j] > thresh else 'black')
plt.show()
###Output
_____no_output_____
###Markdown
Delete a Web ServiceDeletes the specified web service.
###Code
aci_service.delete()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing.png) Automated Machine Learning_**Classification with Deployment using a Bank Marketing Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Deploy](Deploy)1. [Test](Test)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the UCI Bank Marketing dataset to showcase how you can use AutoML for a classification problem and deploy it to an Azure Container Instance (ACI). The classification goal is to predict if the client will subscribe to a term deposit with the bank.If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. Please find the ONNX-related documentation [here](https://github.com/onnx/onnx).In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model using local compute with the ONNX-compatible config turned on.4. Explore the results, featurization transparency options and save the ONNX model.5. Run inference with the ONNX model.6. Register the model.7. Create a container image.8. Create an Azure Container Instance (ACI) service.9. Test the ACI service.In addition, this notebook showcases the following features:- **Blacklisting** certain pipelines- Specifying **target metrics** to indicate stopping criteria- Handling **missing data** in the input SetupAs part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
import logging
from matplotlib import pyplot as plt
import pandas as pd
import os
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.automl.core.featurization import FeaturizationConfig
from azureml.core.dataset import Dataset
from azureml.train.automl import AutoMLConfig
from azureml.explain.model._internal.explanation_client import ExplanationClient
###Output
_____no_output_____
###Markdown
Accessing the Azure ML workspace requires authentication with Azure. The default authentication is interactive authentication using the default tenant. Executing the `ws = Workspace.from_config()` line in the cell below will prompt for authentication the first time that it is run. If you have multiple Azure tenants, you can specify the tenant by replacing the `ws = Workspace.from_config()` line in the cell below with the following:```from azureml.core.authentication import InteractiveLoginAuthenticationauth = InteractiveLoginAuthentication(tenant_id = 'mytenantid')ws = Workspace.from_config(auth = auth)```If you need to run in an environment where interactive login is not possible, you can use Service Principal authentication by replacing the `ws = Workspace.from_config()` line in the cell below with the following:```from azureml.core.authentication import ServicePrincipalAuthenticationauth = ServicePrincipalAuthentication('mytenantid', 'myappid', 'mypassword')ws = Workspace.from_config(auth = auth)```For more details, see [aka.ms/aml-notebook-auth](http://aka.ms/aml-notebook-auth)
###Code
ws = Workspace.from_config()
# choose a name for experiment
experiment_name = 'automl-classification-bmarketing-all'
experiment=Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', None)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
Create or Attach existing AmlComputeYou will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read this article on the default limits and how to request more quota.
###Code
from azureml.core.compute import AmlCompute
from azureml.core.compute import ComputeTarget
# Choose a name for your cluster.
amlcompute_cluster_name = "cpu-cluster-4"
found = False
# Check if this compute target already exists in the workspace.
cts = ws.compute_targets
if amlcompute_cluster_name in cts and cts[amlcompute_cluster_name].type == 'AmlCompute':
found = True
print('Found existing compute target.')
compute_target = cts[amlcompute_cluster_name]
if not found:
print('Creating a new compute target...')
provisioning_config = AmlCompute.provisioning_configuration(vm_size = "STANDARD_D2_V2", # for GPU, use "STANDARD_NC6"
#vm_priority = 'lowpriority', # optional
max_nodes = 6)
# Create the cluster.
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, provisioning_config)
print('Checking cluster status...')
# Can poll for a minimum number of nodes and for a specific timeout.
# If no min_node_count is provided, it will use the scale settings for the cluster.
compute_target.wait_for_completion(show_output = True, min_node_count = None, timeout_in_minutes = 20)
# For a more detailed view of current AmlCompute status, use get_status().
###Output
_____no_output_____
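###Markdown
As the closing comment in the cell above suggests, `get_status()` gives a more detailed view of the cluster. A short sketch (an addition; this assumes the AmlCompute status object supports `serialize()` for a readable dictionary):
###Code
# Sketch: inspect current node counts and provisioning state of the cluster.
status = compute_target.get_status()
print(status.serialize())
###Output
_____no_output_____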
###Markdown
Data Load DataLeverage Azure compute to load the bank marketing dataset as a Tabular Dataset into the dataset variable. Training Data
###Code
data = pd.read_csv("https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv")
data.head()
# Add missing values in 75% of the lines.
import numpy as np
missing_rate = 0.75
n_missing_samples = int(np.floor(data.shape[0] * missing_rate))
missing_samples = np.hstack((np.zeros(data.shape[0] - n_missing_samples, dtype=bool), np.ones(n_missing_samples, dtype=bool)))
rng = np.random.RandomState(0)
rng.shuffle(missing_samples)
missing_features = rng.randint(0, data.shape[1], n_missing_samples)
data.values[np.where(missing_samples)[0], missing_features] = np.nan
if not os.path.isdir('data'):
os.mkdir('data')
# Save the train data to a csv to be uploaded to the datastore
pd.DataFrame(data).to_csv("data/train_data.csv", index=False)
ds = ws.get_default_datastore()
ds.upload(src_dir='./data', target_path='bankmarketing', overwrite=True, show_progress=True)
# Upload the training data as a tabular dataset for access during training on remote compute
train_data = Dataset.Tabular.from_delimited_files(path=ds.path('bankmarketing/train_data.csv'))
label = "y"
###Output
_____no_output_____
###Markdown
Validation Data
###Code
validation_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_validate.csv"
validation_dataset = Dataset.Tabular.from_delimited_files(validation_data)
###Output
_____no_output_____
###Markdown
Test Data
###Code
test_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_test.csv"
test_dataset = Dataset.Tabular.from_delimited_files(test_data)
###Output
_____no_output_____
###Markdown
TrainInstantiate an AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression or forecasting||**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracy, AUC_weighted, average_precision_score_weighted, norm_macro_recall, precision_score_weighted||**iteration_timeout_minutes**|Time limit in minutes for each iteration.||**blacklist_models** | *List* of *strings* indicating machine learning algorithms for AutoML to avoid in this run. Allowed values for **Classification**: LogisticRegression, SGD, MultinomialNaiveBayes, BernoulliNaiveBayes, SVM, LinearSVM, KNN, DecisionTree, RandomForest, ExtremeRandomTrees, LightGBM, GradientBoosting, TensorFlowDNN, TensorFlowLinearClassifier. Allowed values for **Regression**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN. Allowed values for **Forecasting**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN, Arima, Prophet|| **whitelist_models** | *List* of *strings* indicating machine learning algorithms for AutoML to use in this run. Same values listed above for **blacklist_models** allowed for **whitelist_models**.||**experiment_exit_score**| Value indicating the target for *primary_metric*. Once the target is surpassed the run terminates.||**experiment_timeout_hours**| Maximum amount of time in hours that all iterations combined can take before the experiment terminates.||**enable_early_stopping**| Flag to enable early termination if the score is not improving in the short term.||**featurization**| 'auto' / 'off' Indicator for whether the featurization step should be done automatically or not. Note: If the input data is sparse, featurization cannot be turned on.||**n_cross_validations**|Number of cross validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.||**model_explainability**|Indicate to explain each trained pipeline or not.|**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric)
###Code
automl_settings = {
"experiment_timeout_hours" : 0.3,
"enable_early_stopping" : True,
"iteration_timeout_minutes": 5,
"max_concurrent_iterations": 4,
"max_cores_per_iteration": -1,
#"n_cross_validations": 2,
"primary_metric": 'AUC_weighted',
"featurization": 'auto',
"verbosity": logging.INFO,
}
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
compute_target=compute_target,
experiment_exit_score = 0.9984,
blacklist_models = ['KNN','LinearSVM'],
enable_onnx_compatible_models=True,
training_data = train_data,
label_column_name = label,
validation_data = validation_dataset,
model_explainability=True,
**automl_settings
)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.
###Code
remote_run = experiment.submit(automl_config, show_output = False)
remote_run
###Output
_____no_output_____
###Markdown
Run the following cell to access previous runs. Uncomment the cell below and update the run_id.
###Code
#from azureml.train.automl.run import AutoMLRun
#experiment_name = 'automl-classification-bmarketing'
#experiment = Experiment(ws, experiment_name)
#remote_run = AutoMLRun(experiment=experiment, run_id='<run_ID_goes_here>')
#remote_run
# Wait for the remote run to complete
remote_run.wait_for_completion()
best_run_customized, fitted_model_customized = remote_run.get_output()
###Output
_____no_output_____
###Markdown
TransparencyView updated featurization summary
###Code
custom_featurizer = fitted_model_customized.named_steps['datatransformer']
df = custom_featurizer.get_featurization_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Set `is_user_friendly=False` to get a more detailed summary for the transforms being applied.
###Code
df = custom_featurizer.get_featurization_summary(is_user_friendly=False)
pd.DataFrame(data=df)
df = custom_featurizer.get_stats_feature_type_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Results
###Code
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
###Output
_____no_output_____
###Markdown
Retrieve the Best Model's explanationRetrieve the explanation from the best_run which includes explanations for engineered features and raw features. Make sure that the run for generating explanations for the best model is completed.
###Code
# Wait for the best model explanation run to complete
from azureml.train.automl.run import AutoMLRun
model_explainability_run_id = remote_run.get_properties().get('ModelExplainRunId')
print(model_explainability_run_id)
if model_explainability_run_id is not None:
model_explainability_run = AutoMLRun(experiment=experiment, run_id=model_explainability_run_id)
model_explainability_run.wait_for_completion()
# Get the best run object
best_run, fitted_model = remote_run.get_output()
###Output
_____no_output_____
###Markdown
Download engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=False)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Download raw feature importance from artifact storeYou can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=True)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Retrieve the Best ONNX ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.Set the parameter return_onnx_model=True to retrieve the best ONNX model, instead of the Python model.
###Code
best_run, onnx_mdl = remote_run.get_output(return_onnx_model=True)
###Output
_____no_output_____
###Markdown
Save the best ONNX model
###Code
from azureml.automl.runtime.onnx_convert import OnnxConverter
onnx_fl_path = "./best_model.onnx"
OnnxConverter.save_onnx_model(onnx_mdl, onnx_fl_path)
###Output
_____no_output_____
###Markdown
Predict with the ONNX model, using onnxruntime package
###Code
import sys
import json
from azureml.automl.core.onnx_convert import OnnxConvertConstants
from azureml.train.automl import constants
if sys.version_info < OnnxConvertConstants.OnnxIncompatiblePythonVersion:
python_version_compatible = True
else:
python_version_compatible = False
import onnxruntime
from azureml.automl.runtime.onnx_convert import OnnxInferenceHelper
def get_onnx_res(run):
res_path = 'onnx_resource.json'
run.download_file(name=constants.MODEL_RESOURCE_PATH_ONNX, output_file_path=res_path)
with open(res_path) as f:
onnx_res = json.load(f)
return onnx_res
if python_version_compatible:
test_df = test_dataset.to_pandas_dataframe()
mdl_bytes = onnx_mdl.SerializeToString()
onnx_res = get_onnx_res(best_run)
onnxrt_helper = OnnxInferenceHelper(mdl_bytes, onnx_res)
pred_onnx, pred_prob_onnx = onnxrt_helper.predict(test_df)
print(pred_onnx)
print(pred_prob_onnx)
else:
print('Please use Python version 3.6 or 3.7 to run the inference helper.')
###Output
_____no_output_____
###Markdown
Deploy Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method on `remote_run` returns the best run and the fitted model. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details
###Code
best_run, fitted_model = remote_run.get_output()
model_name = best_run.properties['model_name']
script_file_name = 'inference/score.py'
conda_env_file_name = 'inference/env.yml'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', 'inference/score.py')
best_run.download_file('outputs/conda_env_v_1_0_0.yml', 'inference/env.yml')
###Output
_____no_output_____
###Markdown
Register the Fitted Model for DeploymentIf neither `metric` nor `iteration` are specified in the `register_model` call, the iteration with the best primary metric is registered.
###Code
description = 'AutoML Model trained on bank marketing data to predict if a client will subscribe to a term deposit'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id) # This will be written to the script file later in the notebook.
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
from azureml.core.environment import Environment
myenv = Environment.from_conda_specification(name="myenv", file_path=conda_env_file_name)
inference_config = InferenceConfig(entry_script=script_file_name, environment=myenv)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 1,
tags = {'area': "bmData", 'type': "automl_classification"},
description = 'sample service for Automl Classification')
aci_service_name = 'automl-sample-bankmarketing-all'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
###Output
_____no_output_____
###Markdown
Delete a Web ServiceDeletes the specified web service.
###Code
#aci_service.delete()
###Output
_____no_output_____
###Markdown
Get Logs from a Deployed Web ServiceGets logs from a deployed web service.
###Code
#aci_service.get_logs()
###Output
_____no_output_____
###Markdown
TestNow that the model is trained, run the test data through the trained model to get the predicted values.
###Code
# Load the bank marketing datasets.
from numpy import array
X_test = test_dataset.drop_columns(columns=['y'])
y_test = test_dataset.keep_columns(columns=['y'], validate=True)
test_dataset.take(5).to_pandas_dataframe()
X_test = X_test.to_pandas_dataframe()
y_test = y_test.to_pandas_dataframe()
y_pred = fitted_model.predict(X_test)
actual = array(y_test)
actual = actual[:,0]
print(y_pred.shape, " ", actual.shape)
###Output
_____no_output_____
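###Markdown
Since `y_pred` and `actual` are class labels here, a simple agreement rate is a useful companion to the plot below. A short sketch (an addition, not part of the original notebook):
###Code
# Sketch: fraction of test rows where the local prediction matches the label.
accuracy = (y_pred == actual).mean()
print(f"test accuracy: {accuracy:.4f}")
###Output
_____no_output_____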
###Markdown
Calculate metrics for the predictionNow visualize the data on a scatter plot to show how the predicted values from the trained model compare to the truth (actual) values.
###Code
%matplotlib notebook
test_pred = plt.scatter(actual, y_pred, color='b')
test_test = plt.scatter(actual, actual, color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing.png) Automated Machine Learning_**Classification with Deployment using a Bank Marketing Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Deploy](Deploy)1. [Test](Test)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the UCI Bank Marketing dataset to showcase how you can use AutoML for a classification problem and deploy it to an Azure Container Instance (ACI). The classification goal is to predict if the client will subscribe to a term deposit with the bank.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. Please find the ONNX-related documentation [here](https://github.com/onnx/onnx).In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model using local compute with the ONNX-compatible config turned on.4. Explore the results, featurization transparency options and save the ONNX model.5. Run inference with the ONNX model.6. Register the model.7. Create a container image.8. Create an Azure Container Instance (ACI) service.9. Test the ACI service.In addition, this notebook showcases the following features:- **Blacklisting** certain pipelines- Specifying **target metrics** to indicate stopping criteria- Handling **missing data** in the input SetupAs part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
import logging
from matplotlib import pyplot as plt
import pandas as pd
import os
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.automl.core.featurization import FeaturizationConfig
from azureml.core.dataset import Dataset
from azureml.train.automl import AutoMLConfig
from azureml.explain.model._internal.explanation_client import ExplanationClient
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.9.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
Accessing the Azure ML workspace requires authentication with Azure. The default authentication is interactive authentication using the default tenant. Executing the `ws = Workspace.from_config()` line in the cell below will prompt for authentication the first time that it is run. If you have multiple Azure tenants, you can specify the tenant by replacing the `ws = Workspace.from_config()` line in the cell below with the following:```from azureml.core.authentication import InteractiveLoginAuthenticationauth = InteractiveLoginAuthentication(tenant_id = 'mytenantid')ws = Workspace.from_config(auth = auth)```If you need to run in an environment where interactive login is not possible, you can use Service Principal authentication by replacing the `ws = Workspace.from_config()` line in the cell below with the following:```from azureml.core.authentication import ServicePrincipalAuthenticationauth = ServicePrincipalAuthentication('mytenantid', 'myappid', 'mypassword')ws = Workspace.from_config(auth = auth)```For more details, see [aka.ms/aml-notebook-auth](http://aka.ms/aml-notebook-auth)
###Code
ws = Workspace.from_config()
# choose a name for experiment
experiment_name = 'automl-classification-bmarketing-all'
experiment=Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', None)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
Create or Attach existing AmlComputeYou will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read this article on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
cpu_cluster_name = "cpu-cluster-4"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Data Load DataLeverage Azure compute to load the bank marketing dataset as a Tabular Dataset into the dataset variable. Training Data
###Code
data = pd.read_csv("https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv")
data.head()
# Add missing values in 75% of the lines.
import numpy as np
missing_rate = 0.75
n_missing_samples = int(np.floor(data.shape[0] * missing_rate))
missing_samples = np.hstack((np.zeros(data.shape[0] - n_missing_samples, dtype=bool), np.ones(n_missing_samples, dtype=bool)))
rng = np.random.RandomState(0)
rng.shuffle(missing_samples)
missing_features = rng.randint(0, data.shape[1], n_missing_samples)
data.values[np.where(missing_samples)[0], missing_features] = np.nan
if not os.path.isdir('data'):
os.mkdir('data')
# Save the train data to a csv to be uploaded to the datastore
pd.DataFrame(data).to_csv("data/train_data.csv", index=False)
ds = ws.get_default_datastore()
ds.upload(src_dir='./data', target_path='bankmarketing', overwrite=True, show_progress=True)
# Upload the training data as a tabular dataset for access during training on remote compute
train_data = Dataset.Tabular.from_delimited_files(path=ds.path('bankmarketing/train_data.csv'))
label = "y"
###Output
_____no_output_____
###Markdown
Validation Data
###Code
validation_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_validate.csv"
validation_dataset = Dataset.Tabular.from_delimited_files(validation_data)
###Output
_____no_output_____
###Markdown
Test Data
###Code
test_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_test.csv"
test_dataset = Dataset.Tabular.from_delimited_files(test_data)
###Output
_____no_output_____
###Markdown
TrainInstantiate an AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression or forecasting||**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracy, AUC_weighted, average_precision_score_weighted, norm_macro_recall, precision_score_weighted||**iteration_timeout_minutes**|Time limit in minutes for each iteration.||**blacklist_models** | *List* of *strings* indicating machine learning algorithms for AutoML to avoid in this run. Allowed values for **Classification**: LogisticRegression, SGD, MultinomialNaiveBayes, BernoulliNaiveBayes, SVM, LinearSVM, KNN, DecisionTree, RandomForest, ExtremeRandomTrees, LightGBM, GradientBoosting, TensorFlowDNN, TensorFlowLinearClassifier. Allowed values for **Regression**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN. Allowed values for **Forecasting**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN, Arima, Prophet|| **whitelist_models** | *List* of *strings* indicating machine learning algorithms for AutoML to use in this run. Same values listed above for **blacklist_models** allowed for **whitelist_models**.||**experiment_exit_score**| Value indicating the target for *primary_metric*. Once the target is surpassed the run terminates.||**experiment_timeout_hours**| Maximum amount of time in hours that all iterations combined can take before the experiment terminates.||**enable_early_stopping**| Flag to enable early termination if the score is not improving in the short term.||**featurization**| 'auto' / 'off' Indicator for whether the featurization step should be done automatically or not. Note: If the input data is sparse, featurization cannot be turned on.||**n_cross_validations**|Number of cross validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.|**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric)
###Code
automl_settings = {
"experiment_timeout_hours" : 0.3,
"enable_early_stopping" : True,
"iteration_timeout_minutes": 5,
"max_concurrent_iterations": 4,
"max_cores_per_iteration": -1,
#"n_cross_validations": 2,
"primary_metric": 'AUC_weighted',
"featurization": 'auto',
"verbosity": logging.INFO,
}
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
compute_target=compute_target,
experiment_exit_score = 0.9984,
blacklist_models = ['KNN','LinearSVM'],
enable_onnx_compatible_models=True,
training_data = train_data,
label_column_name = label,
validation_data = validation_dataset,
**automl_settings
)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.
###Code
remote_run = experiment.submit(automl_config, show_output = False)
remote_run
###Output
_____no_output_____
###Markdown
Run the following cell to access previous runs. Uncomment the cell below and update the run_id.
###Code
#from azureml.train.automl.run import AutoMLRun
#remote_run = AutoMLRun(experiment=experiment, run_id='<run_ID_goes_here>')
#remote_run
# Wait for the remote run to complete
remote_run.wait_for_completion()
best_run_customized, fitted_model_customized = remote_run.get_output()
###Output
_____no_output_____
###Markdown
TransparencyView updated featurization summary
###Code
custom_featurizer = fitted_model_customized.named_steps['datatransformer']
df = custom_featurizer.get_featurization_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Set `is_user_friendly=False` to get a more detailed summary for the transforms being applied.
###Code
df = custom_featurizer.get_featurization_summary(is_user_friendly=False)
pd.DataFrame(data=df)
df = custom_featurizer.get_stats_feature_type_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Results
###Code
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
###Output
_____no_output_____
###Markdown
Retrieve the Best Model's explanationRetrieve the explanation from the best_run which includes explanations for engineered features and raw features. Make sure that the run for generating explanations for the best model is completed.
###Code
# Wait for the best model explanation run to complete
from azureml.core.run import Run
model_explainability_run_id = remote_run.get_properties().get('ModelExplainRunId')
print(model_explainability_run_id)
if model_explainability_run_id is not None:
model_explainability_run = Run(experiment=experiment, run_id=model_explainability_run_id)
model_explainability_run.wait_for_completion()
# Get the best run object
best_run, fitted_model = remote_run.get_output()
###Output
_____no_output_____
###Markdown
Download engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=False)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Download raw feature importance from artifact storeYou can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
raw_explanations = client.download_model_explanation(raw=True)
exp_data = raw_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Retrieve the Best ONNX Model

Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.

Set the parameter `return_onnx_model=True` to retrieve the best ONNX model, instead of the Python model.
###Code
best_run, onnx_mdl = remote_run.get_output(return_onnx_model=True)
###Output
_____no_output_____
###Markdown
Save the best ONNX model
###Code
from azureml.automl.runtime.onnx_convert import OnnxConverter
onnx_fl_path = "./best_model.onnx"
OnnxConverter.save_onnx_model(onnx_mdl, onnx_fl_path)
###Output
_____no_output_____
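###Markdown
As an optional sanity check (not part of the original sample), you can load the saved file back and run the ONNX checker on it. This is a minimal sketch, assuming the `onnx` Python package is installed alongside `onnxruntime`.
###Code
# Validate the exported ONNX file (sketch; assumes the `onnx` package is available)
import onnx
onnx_model = onnx.load(onnx_fl_path)
onnx.checker.check_model(onnx_model)  # raises if the model is structurally invalid
print("ONNX IR version:", onnx_model.ir_version)
###Output
_____no_output_____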
###Markdown
Predict with the ONNX model, using onnxruntime package
###Code
import sys
import json
from azureml.automl.core.onnx_convert import OnnxConvertConstants
from azureml.train.automl import constants
if sys.version_info < OnnxConvertConstants.OnnxIncompatiblePythonVersion:
python_version_compatible = True
else:
python_version_compatible = False
import onnxruntime
from azureml.automl.runtime.onnx_convert import OnnxInferenceHelper
def get_onnx_res(run):
res_path = 'onnx_resource.json'
run.download_file(name=constants.MODEL_RESOURCE_PATH_ONNX, output_file_path=res_path)
with open(res_path) as f:
onnx_res = json.load(f)
return onnx_res
if python_version_compatible:
test_df = test_dataset.to_pandas_dataframe()
mdl_bytes = onnx_mdl.SerializeToString()
onnx_res = get_onnx_res(best_run)
onnxrt_helper = OnnxInferenceHelper(mdl_bytes, onnx_res)
pred_onnx, pred_prob_onnx = onnxrt_helper.predict(test_df)
print(pred_onnx)
print(pred_prob_onnx)
else:
print('Please use Python version 3.6 or 3.7 to run the inference helper.')
###Output
_____no_output_____
###Markdown
Deploy

Retrieve the Best Model

Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.

Widget for Monitoring Runs

The widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.

**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details.
###Code
best_run, fitted_model = remote_run.get_output()
model_name = best_run.properties['model_name']
script_file_name = 'inference/score.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', 'inference/score.py')
###Output
_____no_output_____
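###Markdown
One caveat: depending on the SDK version, `download_file` may assume the target directory already exists. If the download above fails for that reason, create the directory first and re-run the cell. A minimal, defensive sketch:
###Code
# Ensure the target directory for the scoring script exists (defensive; may be unnecessary)
import os
os.makedirs('inference', exist_ok=True)
###Output
_____no_output_____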
###Markdown
Register the Fitted Model for Deployment

If neither `metric` nor `iteration` are specified in the `register_model` call, the iteration with the best primary metric is registered. A sketch of registering by an explicit metric follows the cell below.
###Code
description = 'AutoML Model trained on bank marketing data to predict if a client will subscribe to a term deposit'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id) # This will be written to the script file later in the notebook.
###Output
_____no_output_____
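###Markdown
If you want to register based on a specific metric or iteration instead of the default, `register_model` also accepts `metric` and `iteration` parameters. A hedged sketch, assuming `'accuracy'` was among the computed metrics:
###Code
# Sketch: register the model from the iteration that scored best on accuracy
model_by_metric = remote_run.register_model(model_name = model_name,
                                            description = description,
                                            tags = tags,
                                            metric = 'accuracy')
print(model_by_metric.name, model_by_metric.version)
###Output
_____no_output_____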
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
from azureml.core.environment import Environment
inference_config = InferenceConfig(entry_script=script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 1,
tags = {'area': "bmData", 'type': "automl_classification"},
description = 'sample service for Automl Classification')
aci_service_name = 'automl-sample-bankmarketing-all'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
###Output
_____no_output_____
###Markdown
Delete a Web Service

Deletes the specified web service.
###Code
#aci_service.delete()
###Output
_____no_output_____
###Markdown
Get Logs from a Deployed Web Service

Gets logs from a deployed web service.
###Code
#aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Test

Now that the model is trained, run the test data through the trained model to get the predicted values.
###Code
# Load the bank marketing datasets.
from numpy import array
X_test = test_dataset.drop_columns(columns=['y'])
y_test = test_dataset.keep_columns(columns=['y'], validate=True)
test_dataset.take(5).to_pandas_dataframe()
X_test = X_test.to_pandas_dataframe()
y_test = y_test.to_pandas_dataframe()
y_pred = fitted_model.predict(X_test)
actual = array(y_test)
actual = actual[:,0]
print(y_pred.shape, " ", actual.shape)
###Output
_____no_output_____
###Markdown
Calculate metrics for the prediction

Now visualize the data on a scatter plot to show how the truth (actual) values compare to the predicted values from the trained model.
###Code
%matplotlib notebook
test_pred = plt.scatter(actual, y_pred, color='b')
test_test = plt.scatter(actual, actual, color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.

Licensed under the MIT License.

![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing.png)

Automated Machine Learning

_**Classification with Deployment using a Bank Marketing Dataset**_

Contents
1. [Introduction](Introduction)
1. [Setup](Setup)
1. [Train](Train)
1. [Results](Results)
1. [Deploy](Deploy)
1. [Test](Test)
1. [Acknowledgements](Acknowledgements)

Introduction

In this example we use the UCI Bank Marketing dataset to showcase how you can use AutoML for a classification problem and deploy it to an Azure Container Instance (ACI). The classification goal is to predict if the client will subscribe to a term deposit with the bank.

If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first, if you haven't already, to establish your connection to the AzureML Workspace. Please find the ONNX related documentation [here](https://github.com/onnx/onnx).

In this notebook you will learn how to:
1. Create an experiment using an existing workspace.
2. Configure AutoML using `AutoMLConfig`.
3. Train the model with an ONNX compatible config turned on.
4. Explore the results, featurization transparency options and save the ONNX model.
5. Run inference with the ONNX model.
6. Register the model.
7. Create a container image.
8. Create an Azure Container Instance (ACI) service.
9. Test the ACI service.

In addition, this notebook showcases the following features:
- **Blocking** certain pipelines
- Specifying **target metrics** to indicate stopping criteria
- Handling **missing data** in the input

Setup

As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
import logging
from matplotlib import pyplot as plt
import pandas as pd
import os
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.automl.core.featurization import FeaturizationConfig
from azureml.core.dataset import Dataset
from azureml.train.automl import AutoMLConfig
from azureml.interpret import ExplanationClient
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.18.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
Accessing the Azure ML workspace requires authentication with Azure.

The default authentication is interactive authentication using the default tenant. Executing the `ws = Workspace.from_config()` line in the cell below will prompt for authentication the first time that it is run.

If you have multiple Azure tenants, you can specify the tenant by replacing the `ws = Workspace.from_config()` line in the cell below with the following:

```
from azureml.core.authentication import InteractiveLoginAuthentication
auth = InteractiveLoginAuthentication(tenant_id = 'mytenantid')
ws = Workspace.from_config(auth = auth)
```

If you need to run in an environment where interactive login is not possible, you can use Service Principal authentication by replacing the `ws = Workspace.from_config()` line in the cell below with the following:

```
from azureml.core.authentication import ServicePrincipalAuthentication
auth = ServicePrincipalAuthentication('mytenantid', 'myappid', 'mypassword')
ws = Workspace.from_config(auth = auth)
```

For more details, see [aka.ms/aml-notebook-auth](http://aka.ms/aml-notebook-auth).
###Code
ws = Workspace.from_config()
# choose a name for experiment
experiment_name = 'automl-classification-bmarketing-all'
experiment=Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', None)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
Create or Attach existing AmlCompute

You will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.

Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.

As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota. A sketch that lists the workspace's existing compute targets follows the cell below.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
cpu_cluster_name = "cpu-cluster-4"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
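###Markdown
To see which compute targets already exist in the workspace (for example, to confirm that the cluster above was created or reused), the workspace exposes them as a dictionary. A minimal sketch:
###Code
# List the workspace's compute targets: name, type, and provisioning state
for name, target in ws.compute_targets.items():
    print(name, target.type, target.provisioning_state)
###Output
_____no_output_____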
###Markdown
Data

Load Data

Leverage azure compute to load the bank marketing dataset as a Tabular Dataset into the dataset variable.

Training Data
###Code
data = pd.read_csv("https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv")
data.head()
# Add missing values in 75% of the lines.
import numpy as np
missing_rate = 0.75
n_missing_samples = int(np.floor(data.shape[0] * missing_rate))
missing_samples = np.hstack((np.zeros(data.shape[0] - n_missing_samples, dtype=bool), np.ones(n_missing_samples, dtype=bool)))
rng = np.random.RandomState(0)
rng.shuffle(missing_samples)
missing_features = rng.randint(0, data.shape[1], n_missing_samples)
data.values[np.where(missing_samples)[0], missing_features] = np.nan
if not os.path.isdir('data'):
os.mkdir('data')
# Save the train data to a csv to be uploaded to the datastore
pd.DataFrame(data).to_csv("data/train_data.csv", index=False)
ds = ws.get_default_datastore()
ds.upload(src_dir='./data', target_path='bankmarketing', overwrite=True, show_progress=True)
# Upload the training data as a tabular dataset for access during training on remote compute
train_data = Dataset.Tabular.from_delimited_files(path=ds.path('bankmarketing/train_data.csv'))
label = "y"
###Output
_____no_output_____
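###Markdown
As a quick sanity check (not in the original sample), you can confirm roughly how many missing values were injected before training. This sketch uses the in-memory `data` DataFrame from the cell above.
###Code
# Inspect the injected missing values: total count and the most affected columns
print("total missing cells:", int(data.isnull().sum().sum()))
print(data.isnull().mean().sort_values(ascending=False).head())
###Output
_____no_output_____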
###Markdown
Validation Data
###Code
validation_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_validate.csv"
validation_dataset = Dataset.Tabular.from_delimited_files(validation_data)
###Output
_____no_output_____
###Markdown
Test Data
###Code
test_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_test.csv"
test_dataset = Dataset.Tabular.from_delimited_files(test_data)
###Output
_____no_output_____
###Markdown
Train

Instantiate an AutoMLConfig object. This defines the settings and data used to run the experiment.

|Property|Description|
|-|-|
|**task**|classification, regression, or forecasting|
|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracy, AUC_weighted, average_precision_score_weighted, norm_macro_recall, precision_score_weighted|
|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|
|**blocked_models**|*List* of *strings* indicating machine learning algorithms for AutoML to avoid in this run. Allowed values for **Classification**: LogisticRegression, SGD, MultinomialNaiveBayes, BernoulliNaiveBayes, SVM, LinearSVM, KNN, DecisionTree, RandomForest, ExtremeRandomTrees, LightGBM, GradientBoosting, TensorFlowDNN, TensorFlowLinearClassifier. Allowed values for **Regression**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN. Allowed values for **Forecasting**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN, Arima, Prophet|
|**allowed_models**|*List* of *strings* indicating machine learning algorithms for AutoML to use in this run. The same values listed above for **blocked_models** are allowed for **allowed_models**.|
|**experiment_exit_score**|Value indicating the target for *primary_metric*. Once the target is surpassed the run terminates.|
|**experiment_timeout_hours**|Maximum amount of time in hours that all iterations combined can take before the experiment terminates.|
|**enable_early_stopping**|Flag to enable early termination if the score is not improving in the short term.|
|**featurization**|'auto' / 'off'. Indicator for whether the featurization step should be done automatically or not. Note: if the input data is sparse, featurization cannot be turned on.|
|**n_cross_validations**|Number of cross-validation splits.|
|**training_data**|Input dataset, containing both features and the label column.|
|**label_column_name**|The name of the label column.|

**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)
###Code
automl_settings = {
"experiment_timeout_hours" : 0.3,
"enable_early_stopping" : True,
"iteration_timeout_minutes": 5,
"max_concurrent_iterations": 4,
"max_cores_per_iteration": -1,
#"n_cross_validations": 2,
"primary_metric": 'AUC_weighted',
"featurization": 'auto',
"verbosity": logging.INFO,
}
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
compute_target=compute_target,
experiment_exit_score = 0.9984,
blocked_models = ['KNN','LinearSVM'],
enable_onnx_compatible_models=True,
training_data = train_data,
label_column_name = label,
validation_data = validation_dataset,
**automl_settings
)
###Output
_____no_output_____
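###Markdown
The configuration above supplies an explicit `validation_data` set, which is why `n_cross_validations` is commented out in the settings (supplying both would conflict). If you do not have a separate validation set, a minimal sketch of the cross-validation variant, assuming the same `train_data` and `label`, looks like this:
###Code
# Sketch: cross-validation instead of an explicit validation set (do not pass both)
automl_settings_cv = dict(automl_settings, n_cross_validations=2)
automl_config_cv = AutoMLConfig(task = 'classification',
                                compute_target = compute_target,
                                training_data = train_data,
                                label_column_name = label,
                                **automl_settings_cv)
###Output
_____no_output_____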
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations, this can run for a while. If you set `show_output=True`, validation errors and the current status will be shown and the execution will be synchronous.
###Code
remote_run = experiment.submit(automl_config, show_output = False)
remote_run
###Output
_____no_output_____
###Markdown
Run the following cell to access previous runs. Uncomment the cell below and update the run_id.
###Code
#from azureml.train.automl.run import AutoMLRun
#remote_run = AutoMLRun(experiment=experiment, run_id='<run_ID_goes_here')
#remote_run
# Wait for the remote run to complete
remote_run.wait_for_completion()
best_run_customized, fitted_model_customized = remote_run.get_output()
###Output
_____no_output_____
###Markdown
Transparency

View the updated featurization summary.
###Code
custom_featurizer = fitted_model_customized.named_steps['datatransformer']
df = custom_featurizer.get_featurization_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Set `is_user_friendly=False` to get a more detailed summary for the transforms being applied.
###Code
df = custom_featurizer.get_featurization_summary(is_user_friendly=False)
pd.DataFrame(data=df)
df = custom_featurizer.get_stats_feature_type_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Results
###Code
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
###Output
_____no_output_____
###Markdown
Retrieve the Best Model's Explanation

Retrieve the explanation from the best_run, which includes explanations for engineered features and raw features. Make sure that the run generating explanations for the best model has completed.
###Code
# Wait for the best model explanation run to complete
from azureml.core.run import Run
model_explainability_run_id = remote_run.id + "_" + "ModelExplain"
print(model_explainability_run_id)
model_explainability_run = Run(experiment=experiment, run_id=model_explainability_run_id)
model_explainability_run.wait_for_completion()
# Get the best run object
best_run, fitted_model = remote_run.get_output()
###Output
_____no_output_____
###Markdown
Download engineered feature importance from artifact store

You can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=False)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Download raw feature importance from artifact store

You can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
raw_explanations = client.download_model_explanation(raw=True)
exp_data = raw_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Retrieve the Best ONNX Model

Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.

Set the parameter `return_onnx_model=True` to retrieve the best ONNX model, instead of the Python model.
###Code
best_run, onnx_mdl = remote_run.get_output(return_onnx_model=True)
###Output
_____no_output_____
###Markdown
Save the best ONNX model
###Code
from azureml.automl.runtime.onnx_convert import OnnxConverter
onnx_fl_path = "./best_model.onnx"
OnnxConverter.save_onnx_model(onnx_mdl, onnx_fl_path)
###Output
_____no_output_____
###Markdown
Predict with the ONNX model, using onnxruntime package
###Code
import sys
import json
from azureml.automl.core.onnx_convert import OnnxConvertConstants
from azureml.train.automl import constants
if sys.version_info < OnnxConvertConstants.OnnxIncompatiblePythonVersion:
python_version_compatible = True
else:
python_version_compatible = False
import onnxruntime
from azureml.automl.runtime.onnx_convert import OnnxInferenceHelper
def get_onnx_res(run):
res_path = 'onnx_resource.json'
run.download_file(name=constants.MODEL_RESOURCE_PATH_ONNX, output_file_path=res_path)
with open(res_path) as f:
onnx_res = json.load(f)
return onnx_res
if python_version_compatible:
test_df = test_dataset.to_pandas_dataframe()
mdl_bytes = onnx_mdl.SerializeToString()
onnx_res = get_onnx_res(best_run)
onnxrt_helper = OnnxInferenceHelper(mdl_bytes, onnx_res)
pred_onnx, pred_prob_onnx = onnxrt_helper.predict(test_df)
print(pred_onnx)
print(pred_prob_onnx)
else:
print('Please use Python version 3.6 or 3.7 to run the inference helper.')
###Output
_____no_output_____
###Markdown
Deploy

Retrieve the Best Model

Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.

Widget for Monitoring Runs

The widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.

**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details.
###Code
best_run, fitted_model = remote_run.get_output()
model_name = best_run.properties['model_name']
script_file_name = 'inference/score.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', 'inference/score.py')
###Output
_____no_output_____
###Markdown
Register the Fitted Model for Deployment

If neither `metric` nor `iteration` are specified in the `register_model` call, the iteration with the best primary metric is registered.
###Code
description = 'AutoML Model trained on bank marketing data to predict if a client will subscribe to a term deposit'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id) # This will be written to the script file later in the notebook.
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
from azureml.core.environment import Environment
inference_config = InferenceConfig(entry_script=script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 1,
tags = {'area': "bmData", 'type': "automl_classification"},
description = 'sample service for Automl Classification')
aci_service_name = 'automl-sample-bankmarketing-all'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
###Output
_____no_output_____
###Markdown
Get Logs from a Deployed Web Service

Gets logs from a deployed web service.
###Code
#aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Test

Now that the model is trained, run the test data through the trained model to get the predicted values. This calls the ACI web service to do the prediction.

Note that the JSON passed to the ACI web service is an array of rows of data. Each row should either be an array of values in the same order that was used for training or a dictionary where the keys are the same as the column names used for training. The example below uses dictionary rows; a sketch of the array-of-values form follows the scoring cell.
###Code
# Load the bank marketing datasets.
from numpy import array
X_test = test_dataset.drop_columns(columns=['y'])
y_test = test_dataset.keep_columns(columns=['y'], validate=True)
test_dataset.take(5).to_pandas_dataframe()
X_test = X_test.to_pandas_dataframe()
y_test = y_test.to_pandas_dataframe()
import json
import requests
X_test_json = X_test.to_json(orient='records')
# Use a distinct name so we don't clobber the `data` DataFrame loaded earlier
payload = "{\"data\": " + X_test_json + "}"
headers = {'Content-Type': 'application/json'}
resp = requests.post(aci_service.scoring_uri, payload, headers=headers)
y_pred = json.loads(json.loads(resp.text))['result']
actual = array(y_test)
actual = actual[:,0]
print(len(y_pred), " ", len(actual))
###Output
_____no_output_____
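###Markdown
As noted above, rows may also be plain arrays of values in the training column order rather than dictionaries. A hedged sketch of that form, scoring just the first five rows against the same endpoint (and assuming those rows contain no NaNs, which JSON cannot represent):
###Code
# Sketch: score with array-of-values rows instead of dictionary rows
import json
import requests
sample_rows = X_test.head(5).values.tolist()
array_payload = json.dumps({"data": sample_rows})
resp_arrays = requests.post(aci_service.scoring_uri, array_payload,
                            headers={'Content-Type': 'application/json'})
print(resp_arrays.text)
###Output
_____no_output_____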
###Markdown
Calculate metrics for the prediction

Now visualize the data as a confusion matrix that compares the predicted values against the actual values. A numeric summary sketch follows the plot.
###Code
%matplotlib notebook
from sklearn.metrics import confusion_matrix
import numpy as np
import itertools
cf = confusion_matrix(actual, y_pred)
plt.imshow(cf, cmap=plt.cm.Blues, interpolation='nearest')
plt.colorbar()
plt.title('Confusion Matrix')
plt.xlabel('Predicted')
plt.ylabel('Actual')
class_labels = ['no', 'yes']
tick_marks = np.arange(len(class_labels))
plt.xticks(tick_marks, class_labels)
plt.yticks([-0.5, 0, 1, 1.5], ['', 'no', 'yes', ''])
# plotting text value inside cells
thresh = cf.max() / 2.
for i, j in itertools.product(range(cf.shape[0]), range(cf.shape[1])):
    plt.text(j, i, format(cf[i, j], 'd'), horizontalalignment='center',
             color='white' if cf[i, j] > thresh else 'black')
plt.show()
###Output
_____no_output_____
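###Markdown
Beyond the plot, scikit-learn can summarize the same predictions numerically; a short sketch using the `actual` and `y_pred` arrays from above:
###Code
# Numeric summary of the service predictions: accuracy plus per-class precision/recall
from sklearn.metrics import accuracy_score, classification_report
print("accuracy:", accuracy_score(actual, y_pred))
print(classification_report(actual, y_pred))
###Output
_____no_output_____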
###Markdown
Delete a Web Service

Deletes the specified web service.
###Code
aci_service.delete()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.

Licensed under the MIT License.

![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing.png)

Automated Machine Learning

_**Classification with Deployment using a Bank Marketing Dataset**_

Contents
1. [Introduction](Introduction)
1. [Setup](Setup)
1. [Train](Train)
1. [Results](Results)
1. [Deploy](Deploy)
1. [Test](Test)
1. [Acknowledgements](Acknowledgements)

Introduction

In this example we use the UCI Bank Marketing dataset to showcase how you can use AutoML for a classification problem and deploy it to an Azure Container Instance (ACI). The classification goal is to predict if the client will subscribe to a term deposit with the bank.

If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first, if you haven't already, to establish your connection to the AzureML Workspace. Please find the ONNX related documentation [here](https://github.com/onnx/onnx).

In this notebook you will learn how to:
1. Create an experiment using an existing workspace.
2. Configure AutoML using `AutoMLConfig`.
3. Train the model with an ONNX compatible config turned on.
4. Explore the results, featurization transparency options and save the ONNX model.
5. Run inference with the ONNX model.
6. Register the model.
7. Create a container image.
8. Create an Azure Container Instance (ACI) service.
9. Test the ACI service.

In addition, this notebook showcases the following features:
- **Blocking** certain pipelines
- Specifying **target metrics** to indicate stopping criteria
- Handling **missing data** in the input

Setup

As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
import json
import logging
from matplotlib import pyplot as plt
import pandas as pd
import os
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.core.dataset import Dataset
from azureml.train.automl import AutoMLConfig
from azureml.interpret import ExplanationClient
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.38.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
Accessing the Azure ML workspace requires authentication with Azure.

The default authentication is interactive authentication using the default tenant. Executing the `ws = Workspace.from_config()` line in the cell below will prompt for authentication the first time that it is run.

If you have multiple Azure tenants, you can specify the tenant by replacing the `ws = Workspace.from_config()` line in the cell below with the following:

```
from azureml.core.authentication import InteractiveLoginAuthentication
auth = InteractiveLoginAuthentication(tenant_id = 'mytenantid')
ws = Workspace.from_config(auth = auth)
```

If you need to run in an environment where interactive login is not possible, you can use Service Principal authentication by replacing the `ws = Workspace.from_config()` line in the cell below with the following:

```
from azureml.core.authentication import ServicePrincipalAuthentication
auth = ServicePrincipalAuthentication('mytenantid', 'myappid', 'mypassword')
ws = Workspace.from_config(auth = auth)
```

For more details, see [aka.ms/aml-notebook-auth](http://aka.ms/aml-notebook-auth).
###Code
ws = Workspace.from_config()
# choose a name for experiment
experiment_name = 'automl-classification-bmarketing-all'
experiment=Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', None)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
Create or Attach existing AmlCompute

You will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.

> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist.

Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.

As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
cpu_cluster_name = "cpu-cluster-4"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Data

Load Data

Leverage azure compute to load the bank marketing dataset as a Tabular Dataset into the dataset variable.

Training Data
###Code
data = pd.read_csv("https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv")
data.head()
# Add missing values in 75% of the lines.
import numpy as np
missing_rate = 0.75
n_missing_samples = int(np.floor(data.shape[0] * missing_rate))
missing_samples = np.hstack((np.zeros(data.shape[0] - n_missing_samples, dtype=bool), np.ones(n_missing_samples, dtype=bool)))
rng = np.random.RandomState(0)
rng.shuffle(missing_samples)
missing_features = rng.randint(0, data.shape[1], n_missing_samples)
data.values[np.where(missing_samples)[0], missing_features] = np.nan
if not os.path.isdir('data'):
os.mkdir('data')
# Save the train data to a csv to be uploaded to the datastore
pd.DataFrame(data).to_csv("data/train_data.csv", index=False)
ds = ws.get_default_datastore()
ds.upload(src_dir='./data', target_path='bankmarketing', overwrite=True, show_progress=True)
# Upload the training data as a tabular dataset for access during training on remote compute
train_data = Dataset.Tabular.from_delimited_files(path=ds.path('bankmarketing/train_data.csv'))
label = "y"
###Output
_____no_output_____
###Markdown
Validation Data
###Code
validation_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_validate.csv"
validation_dataset = Dataset.Tabular.from_delimited_files(validation_data)
###Output
_____no_output_____
###Markdown
Test Data
###Code
test_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_test.csv"
test_dataset = Dataset.Tabular.from_delimited_files(test_data)
###Output
_____no_output_____
###Markdown
Train

Instantiate an AutoMLConfig object. This defines the settings and data used to run the experiment.

|Property|Description|
|-|-|
|**task**|classification, regression, or forecasting|
|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracy, AUC_weighted, average_precision_score_weighted, norm_macro_recall, precision_score_weighted|
|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|
|**blocked_models**|*List* of *strings* indicating machine learning algorithms for AutoML to avoid in this run. Allowed values for **Classification**: LogisticRegression, SGD, MultinomialNaiveBayes, BernoulliNaiveBayes, SVM, LinearSVM, KNN, DecisionTree, RandomForest, ExtremeRandomTrees, LightGBM, GradientBoosting, TensorFlowDNN, TensorFlowLinearClassifier. Allowed values for **Regression**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN. Allowed values for **Forecasting**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN, Arima, Prophet|
|**allowed_models**|*List* of *strings* indicating machine learning algorithms for AutoML to use in this run. The same values listed above for **blocked_models** are allowed for **allowed_models**.|
|**experiment_exit_score**|Value indicating the target for *primary_metric*. Once the target is surpassed the run terminates.|
|**experiment_timeout_hours**|Maximum amount of time in hours that all iterations combined can take before the experiment terminates.|
|**enable_early_stopping**|Flag to enable early termination if the score is not improving in the short term.|
|**featurization**|'auto' / 'off'. Indicator for whether the featurization step should be done automatically or not. Note: if the input data is sparse, featurization cannot be turned on.|
|**n_cross_validations**|Number of cross-validation splits.|
|**training_data**|Input dataset, containing both features and the label column.|
|**label_column_name**|The name of the label column.|

**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)
###Code
automl_settings = {
"experiment_timeout_hours" : 0.3,
"enable_early_stopping" : True,
"iteration_timeout_minutes": 5,
"max_concurrent_iterations": 4,
"max_cores_per_iteration": -1,
#"n_cross_validations": 2,
"primary_metric": 'AUC_weighted',
"featurization": 'auto',
"verbosity": logging.INFO,
}
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
compute_target=compute_target,
experiment_exit_score = 0.9984,
blocked_models = ['KNN','LinearSVM'],
enable_onnx_compatible_models=True,
training_data = train_data,
label_column_name = label,
validation_data = validation_dataset,
**automl_settings
)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations, this can run for a while. If you set `show_output=True`, validation errors and the current status will be shown and the execution will be synchronous.
###Code
remote_run = experiment.submit(automl_config, show_output = False)
###Output
_____no_output_____
###Markdown
Run the following cell to access previous runs. Uncomment the cell below and update the run_id.
###Code
#from azureml.train.automl.run import AutoMLRun
#remote_run = AutoMLRun(experiment=experiment, run_id='<run_ID_goes_here')
#remote_run
# Wait for the remote run to complete
remote_run.wait_for_completion()
# Retrieve the best Run object
best_run = remote_run.get_best_child()
###Output
_____no_output_____
###Markdown
Transparency

View the featurization summary for the best model to study how different features were transformed. This is stored as a JSON file in the outputs directory for the run.
###Code
# Download the featurization summary JSON file locally
best_run.download_file("outputs/featurization_summary.json", "featurization_summary.json")
# Render the JSON as a pandas DataFrame
with open("featurization_summary.json", "r") as f:
records = json.load(f)
pd.DataFrame.from_records(records)
###Output
_____no_output_____
###Markdown
Results
###Code
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
###Output
_____no_output_____
###Markdown
Retrieve the Best Model's Explanation

Retrieve the explanation from the best_run, which includes explanations for engineered features and raw features. Make sure that the run generating explanations for the best model has completed.
###Code
# Wait for the best model explanation run to complete
from azureml.core.run import Run
model_explainability_run_id = remote_run.id + "_" + "ModelExplain"
print(model_explainability_run_id)
model_explainability_run = Run(experiment=experiment, run_id=model_explainability_run_id)
model_explainability_run.wait_for_completion()
# Get the best run object
best_run = remote_run.get_best_child()
###Output
_____no_output_____
###Markdown
Download engineered feature importance from artifact store

You can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=False)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Download raw feature importance from artifact store

You can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
raw_explanations = client.download_model_explanation(raw=True)
exp_data = raw_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Retrieve the Best ONNX Model

Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.

Set the parameter `return_onnx_model=True` to retrieve the best ONNX model, instead of the Python model.
###Code
best_run, onnx_mdl = remote_run.get_output(return_onnx_model=True)
###Output
_____no_output_____
###Markdown
Save the best ONNX model
###Code
from azureml.automl.runtime.onnx_convert import OnnxConverter
onnx_fl_path = "./best_model.onnx"
OnnxConverter.save_onnx_model(onnx_mdl, onnx_fl_path)
###Output
_____no_output_____
###Markdown
Predict with the ONNX model, using onnxruntime package
###Code
import sys
import json
from azureml.automl.core.onnx_convert import OnnxConvertConstants
from azureml.train.automl import constants
from azureml.automl.runtime.onnx_convert import OnnxInferenceHelper
def get_onnx_res(run):
res_path = 'onnx_resource.json'
run.download_file(name=constants.MODEL_RESOURCE_PATH_ONNX, output_file_path=res_path)
with open(res_path) as f:
result = json.load(f)
return result
if sys.version_info < OnnxConvertConstants.OnnxIncompatiblePythonVersion:
test_df = test_dataset.to_pandas_dataframe()
mdl_bytes = onnx_mdl.SerializeToString()
onnx_result = get_onnx_res(best_run)
onnxrt_helper = OnnxInferenceHelper(mdl_bytes, onnx_result)
pred_onnx, pred_prob_onnx = onnxrt_helper.predict(test_df)
print(pred_onnx)
print(pred_prob_onnx)
else:
print('Please use Python version 3.6 or 3.7 to run the inference helper.')
###Output
_____no_output_____
###Markdown
Deploy

Retrieve the Best Model

Below we select the best pipeline from our iterations. The `get_best_child` method returns the Run object for the best model based on the default primary metric. Additional flags can be passed to the method to retrieve the best run based on any of the other supported metrics, or to restrict the search to ONNX-compatible runs. As always, you can execute `remote_run.get_best_child??` in a new cell to view the source or docs for the function. A sketch of those flags follows the next cell.
###Code
remote_run.get_best_child??
###Output
_____no_output_____
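###Markdown
A sketch of those flags, with parameter names as documented for `AutoMLRun.get_best_child` to the best of our understanding: retrieve the best child by another metric, or the best ONNX-compatible child.
###Code
# Sketch: get_best_child with explicit flags (assumes 'accuracy' was computed)
best_by_accuracy = remote_run.get_best_child(metric='accuracy')
best_onnx_run = remote_run.get_best_child(onnx_compatible=True)
print(best_by_accuracy.id)
print(best_onnx_run.id)
###Output
_____no_output_____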
###Markdown
Widget for Monitoring Runs

The widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.

**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details.
###Code
best_run = remote_run.get_best_child()
model_name = best_run.properties['model_name']
script_file_name = 'inference/score.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', 'inference/score.py')
###Output
_____no_output_____
###Markdown
Register the Fitted Model for Deployment

If neither `metric` nor `iteration` are specified in the `register_model` call, the iteration with the best primary metric is registered.
###Code
description = 'AutoML Model trained on bank marketing data to predict if a client will subscribe to a term deposit'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id) # This will be written to the script file later in the notebook.
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.model import Model
inference_config = InferenceConfig(environment = best_run.get_environment(), entry_script=script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 2,
memory_gb = 2,
tags = {'area': "bmData", 'type': "automl_classification"},
description = 'sample service for Automl Classification')
aci_service_name = 'automl-sample-bankmarketing-all'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
###Output
_____no_output_____
###Markdown
Get Logs from a Deployed Web Service

Gets logs from a deployed web service.
###Code
#aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Test

Now that the model is trained, run the test data through the trained model to get the predicted values. This calls the ACI web service to do the prediction.

Note that the JSON passed to the ACI web service is an array of rows of data. Each row should either be an array of values in the same order that was used for training or a dictionary where the keys are the same as the column names used for training. The example below uses dictionary rows.
###Code
# Load the bank marketing datasets.
from numpy import array
X_test = test_dataset.drop_columns(columns=['y'])
y_test = test_dataset.keep_columns(columns=['y'], validate=True)
test_dataset.take(5).to_pandas_dataframe()
X_test = X_test.to_pandas_dataframe()
y_test = y_test.to_pandas_dataframe()
import requests
X_test_json = X_test.to_json(orient='records')
# Use a distinct name so we don't clobber the `data` DataFrame loaded earlier
payload = "{\"data\": " + X_test_json + "}"
headers = {'Content-Type': 'application/json'}
resp = requests.post(aci_service.scoring_uri, payload, headers=headers)
y_pred = json.loads(json.loads(resp.text))['result']
actual = array(y_test)
actual = actual[:,0]
print(len(y_pred), " ", len(actual))
###Output
_____no_output_____
###Markdown
Calculate metrics for the prediction

Now visualize the data as a confusion matrix that compares the predicted values against the actual values.
###Code
%matplotlib notebook
from sklearn.metrics import confusion_matrix
import itertools
cf = confusion_matrix(actual, y_pred)
plt.imshow(cf, cmap=plt.cm.Blues, interpolation='nearest')
plt.colorbar()
plt.title('Confusion Matrix')
plt.xlabel('Predicted')
plt.ylabel('Actual')
class_labels = ['no', 'yes']
tick_marks = np.arange(len(class_labels))
plt.xticks(tick_marks, class_labels)
plt.yticks([-0.5, 0, 1, 1.5], ['', 'no', 'yes', ''])
# plotting text value inside cells
thresh = cf.max() / 2.
for i, j in itertools.product(range(cf.shape[0]), range(cf.shape[1])):
    plt.text(j, i, format(cf[i, j], 'd'), horizontalalignment='center',
             color='white' if cf[i, j] > thresh else 'black')
plt.show()
###Output
_____no_output_____
###Markdown
Delete a Web Service

Deletes the specified web service.
###Code
aci_service.delete()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.

Licensed under the MIT License.

![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing.png)

Automated Machine Learning

_**Classification with Deployment using a Bank Marketing Dataset**_

Contents
1. [Introduction](Introduction)
1. [Setup](Setup)
1. [Train](Train)
1. [Results](Results)
1. [Deploy](Deploy)
1. [Test](Test)
1. [Acknowledgements](Acknowledgements)

Introduction

In this example we use the UCI Bank Marketing dataset to showcase how you can use AutoML for a classification problem and deploy it to an Azure Container Instance (ACI). The classification goal is to predict if the client will subscribe to a term deposit with the bank.

If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first, if you haven't already, to establish your connection to the AzureML Workspace. Please find the ONNX related documentation [here](https://github.com/onnx/onnx).

In this notebook you will learn how to:
1. Create an experiment using an existing workspace.
2. Configure AutoML using `AutoMLConfig`.
3. Train the model with an ONNX compatible config turned on.
4. Explore the results, featurization transparency options and save the ONNX model.
5. Run inference with the ONNX model.
6. Register the model.
7. Create a container image.
8. Create an Azure Container Instance (ACI) service.
9. Test the ACI service.

In addition, this notebook showcases the following features:
- **Blocking** certain pipelines
- Specifying **target metrics** to indicate stopping criteria
- Handling **missing data** in the input

Setup

As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
import logging
from matplotlib import pyplot as plt
import pandas as pd
import os
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.automl.core.featurization import FeaturizationConfig
from azureml.core.dataset import Dataset
from azureml.train.automl import AutoMLConfig
from azureml.interpret._internal.explanation_client import ExplanationClient
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.12.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
Accessing the Azure ML workspace requires authentication with Azure.

The default authentication is interactive authentication using the default tenant. Executing the `ws = Workspace.from_config()` line in the cell below will prompt for authentication the first time that it is run.

If you have multiple Azure tenants, you can specify the tenant by replacing the `ws = Workspace.from_config()` line in the cell below with the following:

```
from azureml.core.authentication import InteractiveLoginAuthentication
auth = InteractiveLoginAuthentication(tenant_id = 'mytenantid')
ws = Workspace.from_config(auth = auth)
```

If you need to run in an environment where interactive login is not possible, you can use Service Principal authentication by replacing the `ws = Workspace.from_config()` line in the cell below with the following:

```
from azureml.core.authentication import ServicePrincipalAuthentication
auth = ServicePrincipalAuthentication('mytenantid', 'myappid', 'mypassword')
ws = Workspace.from_config(auth = auth)
```

For more details, see [aka.ms/aml-notebook-auth](http://aka.ms/aml-notebook-auth).
###Code
ws = Workspace.from_config()
# choose a name for experiment
experiment_name = 'automl-classification-bmarketing-all'
experiment=Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', None)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
Create or Attach existing AmlCompute

You will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.

Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.

As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
cpu_cluster_name = "cpu-cluster-4"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Data

Load Data

Leverage azure compute to load the bank marketing dataset as a Tabular Dataset into the dataset variable.

Training Data
###Code
data = pd.read_csv("https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv")
data.head()
# Add missing values in 75% of the lines.
import numpy as np
missing_rate = 0.75
n_missing_samples = int(np.floor(data.shape[0] * missing_rate))
missing_samples = np.hstack((np.zeros(data.shape[0] - n_missing_samples, dtype=bool), np.ones(n_missing_samples, dtype=bool)))
rng = np.random.RandomState(0)
rng.shuffle(missing_samples)
missing_features = rng.randint(0, data.shape[1], n_missing_samples)
data.values[np.where(missing_samples)[0], missing_features] = np.nan
if not os.path.isdir('data'):
os.mkdir('data')
# Save the train data to a csv to be uploaded to the datastore
pd.DataFrame(data).to_csv("data/train_data.csv", index=False)
ds = ws.get_default_datastore()
ds.upload(src_dir='./data', target_path='bankmarketing', overwrite=True, show_progress=True)
# Upload the training data as a tabular dataset for access during training on remote compute
train_data = Dataset.Tabular.from_delimited_files(path=ds.path('bankmarketing/train_data.csv'))
label = "y"
###Output
_____no_output_____
###Markdown
Validation Data
###Code
validation_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_validate.csv"
validation_dataset = Dataset.Tabular.from_delimited_files(validation_data)
###Output
_____no_output_____
###Markdown
Test Data
###Code
test_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_test.csv"
test_dataset = Dataset.Tabular.from_delimited_files(test_data)
###Output
_____no_output_____
###Markdown
Train

Instantiate an AutoMLConfig object. This defines the settings and data used to run the experiment.

|Property|Description|
|-|-|
|**task**|classification, regression, or forecasting|
|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracy, AUC_weighted, average_precision_score_weighted, norm_macro_recall, precision_score_weighted|
|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|
|**blocked_models**|*List* of *strings* indicating machine learning algorithms for AutoML to avoid in this run. Allowed values for **Classification**: LogisticRegression, SGD, MultinomialNaiveBayes, BernoulliNaiveBayes, SVM, LinearSVM, KNN, DecisionTree, RandomForest, ExtremeRandomTrees, LightGBM, GradientBoosting, TensorFlowDNN, TensorFlowLinearClassifier. Allowed values for **Regression**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN. Allowed values for **Forecasting**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN, Arima, Prophet|
|**allowed_models**|*List* of *strings* indicating machine learning algorithms for AutoML to use in this run. The same values listed above for **blocked_models** are allowed for **allowed_models**.|
|**experiment_exit_score**|Value indicating the target for *primary_metric*. Once the target is surpassed the run terminates.|
|**experiment_timeout_hours**|Maximum amount of time in hours that all iterations combined can take before the experiment terminates.|
|**enable_early_stopping**|Flag to enable early termination if the score is not improving in the short term.|
|**featurization**|'auto' / 'off'. Indicator for whether the featurization step should be done automatically or not. Note: if the input data is sparse, featurization cannot be turned on.|
|**n_cross_validations**|Number of cross-validation splits.|
|**training_data**|Input dataset, containing both features and the label column.|
|**label_column_name**|The name of the label column.|

**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)
###Code
automl_settings = {
"experiment_timeout_hours" : 0.3,
"enable_early_stopping" : True,
"iteration_timeout_minutes": 5,
"max_concurrent_iterations": 4,
"max_cores_per_iteration": -1,
#"n_cross_validations": 2,
"primary_metric": 'AUC_weighted',
"featurization": 'auto',
"verbosity": logging.INFO,
}
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
compute_target=compute_target,
experiment_exit_score = 0.9984,
blocked_models = ['KNN','LinearSVM'],
enable_onnx_compatible_models=True,
training_data = train_data,
label_column_name = label,
validation_data = validation_dataset,
**automl_settings
)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations, this can run for a while. If you set `show_output=True`, validation errors and the current status will be shown and the execution will be synchronous.
###Code
remote_run = experiment.submit(automl_config, show_output = False)
remote_run
###Output
_____no_output_____
###Markdown
Run the following cell to access previous runs. Uncomment the cell below and update the run_id.
###Code
#from azureml.train.automl.run import AutoMLRun
#remote_run = AutoMLRun(experiment=experiment, run_id='<run_ID_goes_here')
#remote_run
# Wait for the remote run to complete
remote_run.wait_for_completion()
best_run_customized, fitted_model_customized = remote_run.get_output()
###Output
_____no_output_____
###Markdown
TransparencyView updated featurization summary
###Code
custom_featurizer = fitted_model_customized.named_steps['datatransformer']
df = custom_featurizer.get_featurization_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Set `is_user_friendly=False` to get a more detailed summary for the transforms being applied.
###Code
df = custom_featurizer.get_featurization_summary(is_user_friendly=False)
pd.DataFrame(data=df)
df = custom_featurizer.get_stats_feature_type_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Results
###Code
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
###Output
_____no_output_____
###Markdown
Retrieve the Best Model's explanationRetrieve the explanation from the best_run which includes explanations for engineered features and raw features. Make sure that the run for generating explanations for the best model is completed.
###Code
# Wait for the best model explanation run to complete
from azureml.core.run import Run
model_explainability_run_id = remote_run.get_properties().get('ModelExplainRunId')
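# The 'ModelExplainRunId' property may be absent if no model-explanation
# child run was started, hence the None check below.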
print(model_explainability_run_id)
if model_explainability_run_id is not None:
model_explainability_run = Run(experiment=experiment, run_id=model_explainability_run_id)
model_explainability_run.wait_for_completion()
# Get the best run object
best_run, fitted_model = remote_run.get_output()
###Output
_____no_output_____
###Markdown
Download engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=False)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Download raw feature importance from artifact storeYou can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
raw_explanations = client.download_model_explanation(raw=True)
exp_data = raw_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Retrieve the Best ONNX ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.Set the parameter return_onnx_model=True to retrieve the best ONNX model, instead of the Python model.
###Code
best_run, onnx_mdl = remote_run.get_output(return_onnx_model=True)
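# As the markdown above notes, get_output has overloads; as a sketch using
# the same AutoMLRun API, you could fetch the run for a specific logged
# metric or for a particular iteration instead:
# best_run, fitted_model = remote_run.get_output(metric='AUC_weighted')
# best_run, fitted_model = remote_run.get_output(iteration=3)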
###Output
_____no_output_____
###Markdown
Save the best ONNX model
###Code
from azureml.automl.runtime.onnx_convert import OnnxConverter
onnx_fl_path = "./best_model.onnx"
OnnxConverter.save_onnx_model(onnx_mdl, onnx_fl_path)
###Output
_____no_output_____
###Markdown
Predict with the ONNX model, using onnxruntime package
###Code
import sys
import json
from azureml.automl.core.onnx_convert import OnnxConvertConstants
from azureml.train.automl import constants
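# OnnxConvertConstants.OnnxIncompatiblePythonVersion marks the first Python
# version the ONNX inference helper does not support, so interpreters below
# that threshold take the compatible branch.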
if sys.version_info < OnnxConvertConstants.OnnxIncompatiblePythonVersion:
python_version_compatible = True
else:
python_version_compatible = False
import onnxruntime
from azureml.automl.runtime.onnx_convert import OnnxInferenceHelper
def get_onnx_res(run):
res_path = 'onnx_resource.json'
run.download_file(name=constants.MODEL_RESOURCE_PATH_ONNX, output_file_path=res_path)
with open(res_path) as f:
onnx_res = json.load(f)
return onnx_res
if python_version_compatible:
test_df = test_dataset.to_pandas_dataframe()
mdl_bytes = onnx_mdl.SerializeToString()
onnx_res = get_onnx_res(best_run)
onnxrt_helper = OnnxInferenceHelper(mdl_bytes, onnx_res)
pred_onnx, pred_prob_onnx = onnxrt_helper.predict(test_df)
print(pred_onnx)
print(pred_prob_onnx)
else:
print('Please use Python version 3.6 or 3.7 to run the inference helper.')
###Output
_____no_output_____
###Markdown
Deploy Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details
###Code
best_run, fitted_model = remote_run.get_output()
model_name = best_run.properties['model_name']
script_file_name = 'inference/score.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', 'inference/score.py')
###Output
_____no_output_____
###Markdown
Register the Fitted Model for DeploymentIf neither `metric` nor `iteration` are specified in the `register_model` call, the iteration with the best primary metric is registered.
###Code
description = 'AutoML Model trained on bank marketing data to predict if a client will subscribe to a term deposit'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
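# Per the markdown above, you could instead pin the registration to a
# specific iteration or metric (sketch using the same register_model API):
# model = remote_run.register_model(model_name=model_name, iteration=3)
# model = remote_run.register_model(model_name=model_name, metric='AUC_weighted')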
print(remote_run.model_id) # This will be written to the script file later in the notebook.
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
from azureml.core.environment import Environment
inference_config = InferenceConfig(entry_script=script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 1,
tags = {'area': "bmData", 'type': "automl_classification"},
description = 'sample service for Automl Classification')
aci_service_name = 'automl-sample-bankmarketing-all'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
###Output
_____no_output_____
###Markdown
Get Logs from a Deployed Web ServiceGets logs from a deployed web service.
###Code
#aci_service.get_logs()
###Output
_____no_output_____
###Markdown
TestNow that the model is trained, run the test data through the trained model to get the predicted values. This calls the ACI web service to do the prediction.Note that the JSON passed to the ACI web service is an array of rows of data. Each row should either be an array of values in the same order that was used for training or a dictionary where the keys are the same as the column names used for training. The example below uses dictionary rows.
###Code
# Load the bank marketing datasets.
from numpy import array
X_test = test_dataset.drop_columns(columns=['y'])
y_test = test_dataset.keep_columns(columns=['y'], validate=True)
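# drop_columns/keep_columns return lazily evaluated TabularDatasets; the
# to_pandas_dataframe() calls below materialize them in memory.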
test_dataset.take(5).to_pandas_dataframe()
X_test = X_test.to_pandas_dataframe()
y_test = y_test.to_pandas_dataframe()
import json
import requests
X_test_json = X_test.to_json(orient='records')
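# to_json(orient='records') serializes each row as a dictionary keyed by
# column name, so the payload built below looks like (columns illustrative):
# {"data": [{"age": 40, "job": "blue-collar", ...}, ...]}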
data = "{\"data\": " + X_test_json +"}"
headers = {'Content-Type': 'application/json'}
resp = requests.post(aci_service.scoring_uri, data, headers=headers)
y_pred = json.loads(json.loads(resp.text))['result']
actual = array(y_test)
actual = actual[:,0]
print(len(y_pred), " ", len(actual))
###Output
_____no_output_____
###Markdown
Calculate metrics for the predictionNow visualize the data as a confusion matrix comparing the predicted values against the actual values.
###Code
%matplotlib notebook
from sklearn.metrics import confusion_matrix
import numpy as np
import itertools
cf = confusion_matrix(actual, y_pred)
plt.imshow(cf, cmap=plt.cm.Blues, interpolation='nearest')
plt.colorbar()
plt.title('Confusion Matrix')
plt.xlabel('Predicted')
plt.ylabel('Actual')
class_labels = ['no', 'yes']
tick_marks = np.arange(len(class_labels))
plt.xticks(tick_marks, class_labels)
plt.yticks([-0.5, 0, 1, 1.5], ['', 'no', 'yes', ''])
# plotting text value inside cells
thresh = cf.max() / 2.
for i, j in itertools.product(range(cf.shape[0]), range(cf.shape[1])):
    plt.text(j, i, format(cf[i, j], 'd'), horizontalalignment='center', color='white' if cf[i, j] > thresh else 'black')
plt.show()
###Output
_____no_output_____
###Markdown
Delete a Web ServiceDeletes the specified web service.
###Code
aci_service.delete()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing.png) Automated Machine Learning_**Classification with Deployment using a Bank Marketing Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Deploy](Deploy)1. [Test](Test)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the UCI Bank Marketing dataset to showcase how you can use AutoML for a classification problem and deploy it to an Azure Container Instance (ACI). The classification goal is to predict if the client will subscribe to a term deposit with the bank. If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. Please find the ONNX-related documentation [here](https://github.com/onnx/onnx). In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model using local compute with ONNX compatible config on.4. Explore the results, featurization transparency options and save the ONNX model.5. Inference with the ONNX model.6. Register the model.7. Create a container image.8. Create an Azure Container Instance (ACI) service.9. Test the ACI service.In addition, this notebook showcases the following features:- **Blacklisting** certain pipelines- Specifying **target metrics** to indicate stopping criteria- Handling **missing data** in the input SetupAs part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
import logging
from matplotlib import pyplot as plt
import pandas as pd
import os
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.automl.core.featurization import FeaturizationConfig
from azureml.core.dataset import Dataset
from azureml.train.automl import AutoMLConfig
from azureml.explain.model._internal.explanation_client import ExplanationClient
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.5.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
Accessing the Azure ML workspace requires authentication with Azure. The default authentication is interactive authentication using the default tenant. Executing the `ws = Workspace.from_config()` line in the cell below will prompt for authentication the first time that it is run. If you have multiple Azure tenants, you can specify the tenant by replacing the `ws = Workspace.from_config()` line in the cell below with the following:```from azureml.core.authentication import InteractiveLoginAuthentication; auth = InteractiveLoginAuthentication(tenant_id='mytenantid'); ws = Workspace.from_config(auth=auth)```If you need to run in an environment where interactive login is not possible, you can use Service Principal authentication by replacing the `ws = Workspace.from_config()` line in the cell below with the following:```from azureml.core.authentication import ServicePrincipalAuthentication; auth = ServicePrincipalAuthentication('mytenantid', 'myappid', 'mypassword'); ws = Workspace.from_config(auth=auth)```For more details, see [aka.ms/aml-notebook-auth](http://aka.ms/aml-notebook-auth)
###Code
ws = Workspace.from_config()
# choose a name for experiment
experiment_name = 'automl-classification-bmarketing-all'
experiment=Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
Create or Attach existing AmlComputeYou will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace, this code will skip the creation process. As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
cpu_cluster_name = "cpu-cluster-4"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Data Load DataLeverage Azure compute to load the bank marketing dataset as a Tabular Dataset into the dataset variable. Training Data
###Code
data = pd.read_csv("https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv")
data.head()
# Add missing values in 75% of the lines.
import numpy as np
missing_rate = 0.75
n_missing_samples = int(np.floor(data.shape[0] * missing_rate))
missing_samples = np.hstack((np.zeros(data.shape[0] - n_missing_samples, dtype=np.bool), np.ones(n_missing_samples, dtype=np.bool)))
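# missing_samples is a boolean row mask: exactly n_missing_samples entries
# are True, and after the shuffle below each marked row receives one NaN in
# a randomly chosen column.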
rng = np.random.RandomState(0)
rng.shuffle(missing_samples)
missing_features = rng.randint(0, data.shape[1], n_missing_samples)
data.values[np.where(missing_samples)[0], missing_features] = np.nan
if not os.path.isdir('data'):
os.mkdir('data')
# Save the train data to a csv to be uploaded to the datastore
pd.DataFrame(data).to_csv("data/train_data.csv", index=False)
ds = ws.get_default_datastore()
ds.upload(src_dir='./data', target_path='bankmarketing', overwrite=True, show_progress=True)
# Upload the training data as a tabular dataset for access during training on remote compute
train_data = Dataset.Tabular.from_delimited_files(path=ds.path('bankmarketing/train_data.csv'))
label = "y"
###Output
_____no_output_____
###Markdown
Validation Data
###Code
validation_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_validate.csv"
validation_dataset = Dataset.Tabular.from_delimited_files(validation_data)
###Output
_____no_output_____
###Markdown
Test Data
###Code
test_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_test.csv"
test_dataset = Dataset.Tabular.from_delimited_files(test_data)
###Output
_____no_output_____
###Markdown
TrainInstantiate an AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression or forecasting||**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracyAUC_weightedaverage_precision_score_weightednorm_macro_recallprecision_score_weighted||**iteration_timeout_minutes**|Time limit in minutes for each iteration.||**blacklist_models** | *List* of *strings* indicating machine learning algorithms for AutoML to avoid in this run. Allowed values for **Classification**LogisticRegressionSGDMultinomialNaiveBayesBernoulliNaiveBayesSVMLinearSVMKNNDecisionTreeRandomForestExtremeRandomTreesLightGBMGradientBoostingTensorFlowDNNTensorFlowLinearClassifierAllowed values for **Regression**ElasticNetGradientBoostingDecisionTreeKNNLassoLarsSGDRandomForestExtremeRandomTreesLightGBMTensorFlowLinearRegressorTensorFlowDNNAllowed values for **Forecasting**ElasticNetGradientBoostingDecisionTreeKNNLassoLarsSGDRandomForestExtremeRandomTreesLightGBMTensorFlowLinearRegressorTensorFlowDNNArimaProphet|| **whitelist_models** | *List* of *strings* indicating machine learning algorithms for AutoML to use in this run. Same values listed above for **blacklist_models** allowed for **whitelist_models**.||**experiment_exit_score**| Value indicating the target for *primary_metric*. Once the target is surpassed, the run terminates.||**experiment_timeout_hours**| Maximum amount of time in hours that all iterations combined can take before the experiment terminates.||**enable_early_stopping**| Flag to enable early termination if the score is not improving in the short term.||**featurization**| 'auto' / 'off' Indicator for whether featurization step should be done automatically or not. Note: If the input data is sparse, featurization cannot be turned on.||**n_cross_validations**|Number of cross validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.|**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric)
###Code
automl_settings = {
"experiment_timeout_hours" : 0.3,
"enable_early_stopping" : True,
"iteration_timeout_minutes": 5,
"max_concurrent_iterations": 4,
"max_cores_per_iteration": -1,
#"n_cross_validations": 2,
"primary_metric": 'AUC_weighted',
"featurization": 'auto',
"verbosity": logging.INFO,
}
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
compute_target=compute_target,
experiment_exit_score = 0.9984,
blacklist_models = ['KNN','LinearSVM'],
enable_onnx_compatible_models=True,
training_data = train_data,
label_column_name = label,
validation_data = validation_dataset,
**automl_settings
)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.
###Code
remote_run = experiment.submit(automl_config, show_output = False)
remote_run
###Output
_____no_output_____
###Markdown
Run the following cell to access previous runs. Uncomment the cell below and update the run_id.
###Code
#from azureml.train.automl.run import AutoMLRun
#remote_run = AutoMLRun(experiment=experiment, run_id='<run_ID_goes_here>')
#remote_run
# Wait for the remote run to complete
remote_run.wait_for_completion()
best_run_customized, fitted_model_customized = remote_run.get_output()
###Output
_____no_output_____
###Markdown
TransparencyView updated featurization summary
###Code
custom_featurizer = fitted_model_customized.named_steps['datatransformer']
df = custom_featurizer.get_featurization_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Set `is_user_friendly=False` to get a more detailed summary for the transforms being applied.
###Code
df = custom_featurizer.get_featurization_summary(is_user_friendly=False)
pd.DataFrame(data=df)
df = custom_featurizer.get_stats_feature_type_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Results
###Code
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
###Output
_____no_output_____
###Markdown
Retrieve the Best Model's explanationRetrieve the explanation from the best_run which includes explanations for engineered features and raw features. Make sure that the run for generating explanations for the best model is completed.
###Code
# Wait for the best model explanation run to complete
from azureml.core.run import Run
model_explainability_run_id = remote_run.get_properties().get('ModelExplainRunId')
print(model_explainability_run_id)
if model_explainability_run_id is not None:
model_explainability_run = Run(experiment=experiment, run_id=model_explainability_run_id)
model_explainability_run.wait_for_completion()
# Get the best run object
best_run, fitted_model = remote_run.get_output()
###Output
_____no_output_____
###Markdown
Download engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=False)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Download raw feature importance from artifact storeYou can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
raw_explanations = client.download_model_explanation(raw=True)
exp_data = raw_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Retrieve the Best ONNX ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.Set the parameter return_onnx_model=True to retrieve the best ONNX model, instead of the Python model.
###Code
best_run, onnx_mdl = remote_run.get_output(return_onnx_model=True)
###Output
_____no_output_____
###Markdown
Save the best ONNX model
###Code
from azureml.automl.runtime.onnx_convert import OnnxConverter
onnx_fl_path = "./best_model.onnx"
OnnxConverter.save_onnx_model(onnx_mdl, onnx_fl_path)
###Output
_____no_output_____
###Markdown
Predict with the ONNX model, using onnxruntime package
###Code
import sys
import json
from azureml.automl.core.onnx_convert import OnnxConvertConstants
from azureml.train.automl import constants
if sys.version_info < OnnxConvertConstants.OnnxIncompatiblePythonVersion:
python_version_compatible = True
else:
python_version_compatible = False
import onnxruntime
from azureml.automl.runtime.onnx_convert import OnnxInferenceHelper
def get_onnx_res(run):
res_path = 'onnx_resource.json'
run.download_file(name=constants.MODEL_RESOURCE_PATH_ONNX, output_file_path=res_path)
with open(res_path) as f:
onnx_res = json.load(f)
return onnx_res
if python_version_compatible:
test_df = test_dataset.to_pandas_dataframe()
mdl_bytes = onnx_mdl.SerializeToString()
onnx_res = get_onnx_res(best_run)
onnxrt_helper = OnnxInferenceHelper(mdl_bytes, onnx_res)
pred_onnx, pred_prob_onnx = onnxrt_helper.predict(test_df)
print(pred_onnx)
print(pred_prob_onnx)
else:
print('Please use Python version 3.6 or 3.7 to run the inference helper.')
###Output
_____no_output_____
###Markdown
Deploy Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details
###Code
best_run, fitted_model = remote_run.get_output()
model_name = best_run.properties['model_name']
script_file_name = 'inference/score.py'
conda_env_file_name = 'inference/env.yml'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', 'inference/score.py')
best_run.download_file('outputs/conda_env_v_1_0_0.yml', 'inference/env.yml')
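# Both artifacts come from the best run's outputs/ folder: the generated
# scoring script used as the entry script, and the conda specification used
# to build the inference environment below.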
###Output
_____no_output_____
###Markdown
Register the Fitted Model for DeploymentIf neither `metric` nor `iteration` are specified in the `register_model` call, the iteration with the best primary metric is registered.
###Code
description = 'AutoML Model trained on bank marketing data to predict if a client will subscribe to a term deposit'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id) # This will be written to the script file later in the notebook.
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
from azureml.core.environment import Environment
myenv = Environment.from_conda_specification(name="myenv", file_path=conda_env_file_name)
inference_config = InferenceConfig(entry_script=script_file_name, environment=myenv)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 1,
tags = {'area': "bmData", 'type': "automl_classification"},
description = 'sample service for Automl Classification')
aci_service_name = 'automl-sample-bankmarketing-all'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
###Output
_____no_output_____
###Markdown
Delete a Web ServiceDeletes the specified web service.
###Code
#aci_service.delete()
###Output
_____no_output_____
###Markdown
Get Logs from a Deployed Web ServiceGets logs from a deployed web service.
###Code
#aci_service.get_logs()
###Output
_____no_output_____
###Markdown
TestNow that the model is trained, run the test data through the trained model to get the predicted values.
###Code
# Load the bank marketing datasets.
from numpy import array
X_test = test_dataset.drop_columns(columns=['y'])
y_test = test_dataset.keep_columns(columns=['y'], validate=True)
test_dataset.take(5).to_pandas_dataframe()
X_test = X_test.to_pandas_dataframe()
y_test = y_test.to_pandas_dataframe()
y_pred = fitted_model.predict(X_test)
actual = array(y_test)
actual = actual[:,0]
print(y_pred.shape, " ", actual.shape)
###Output
_____no_output_____
###Markdown
Calculate metrics for the predictionNow visualize the data on a scatter plot comparing the truth (actual) values with the predicted values from the trained model that was returned.
###Code
%matplotlib notebook
test_pred = plt.scatter(actual, y_pred, color='b')
test_test = plt.scatter(actual, actual, color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing.png) Automated Machine Learning_**Classification with Deployment using a Bank Marketing Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Deploy](Deploy)1. [Test](Test)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the UCI Bank Marketing dataset to showcase how you can use AutoML for a classification problem and deploy it to an Azure Container Instance (ACI). The classification goal is to predict if the client will subscribe to a term deposit with the bank. If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. Please find the ONNX-related documentation [here](https://github.com/onnx/onnx). In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model using local compute with ONNX compatible config on.4. Explore the results, featurization transparency options and save the ONNX model.5. Inference with the ONNX model.6. Register the model.7. Create a container image.8. Create an Azure Container Instance (ACI) service.9. Test the ACI service.In addition, this notebook showcases the following features:- **Blocking** certain pipelines- Specifying **target metrics** to indicate stopping criteria- Handling **missing data** in the input SetupAs part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
import logging
from matplotlib import pyplot as plt
import pandas as pd
import os
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.automl.core.featurization import FeaturizationConfig
from azureml.core.dataset import Dataset
from azureml.train.automl import AutoMLConfig
from azureml.interpret import ExplanationClient
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.22.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
Accessing the Azure ML workspace requires authentication with Azure. The default authentication is interactive authentication using the default tenant. Executing the `ws = Workspace.from_config()` line in the cell below will prompt for authentication the first time that it is run. If you have multiple Azure tenants, you can specify the tenant by replacing the `ws = Workspace.from_config()` line in the cell below with the following:```from azureml.core.authentication import InteractiveLoginAuthentication; auth = InteractiveLoginAuthentication(tenant_id='mytenantid'); ws = Workspace.from_config(auth=auth)```If you need to run in an environment where interactive login is not possible, you can use Service Principal authentication by replacing the `ws = Workspace.from_config()` line in the cell below with the following:```from azureml.core.authentication import ServicePrincipalAuthentication; auth = ServicePrincipalAuthentication('mytenantid', 'myappid', 'mypassword'); ws = Workspace.from_config(auth=auth)```For more details, see [aka.ms/aml-notebook-auth](http://aka.ms/aml-notebook-auth)
###Code
ws = Workspace.from_config()
# choose a name for experiment
experiment_name = 'automl-classification-bmarketing-all'
experiment=Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
Create or Attach existing AmlComputeYou will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace, this code will skip the creation process. As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
cpu_cluster_name = "cpu-cluster-4"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Data Load DataLeverage Azure compute to load the bank marketing dataset as a Tabular Dataset into the dataset variable. Training Data
###Code
data = pd.read_csv("https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv")
data.head()
# Add missing values in 75% of the lines.
import numpy as np
missing_rate = 0.75
n_missing_samples = int(np.floor(data.shape[0] * missing_rate))
missing_samples = np.hstack((np.zeros(data.shape[0] - n_missing_samples, dtype=np.bool), np.ones(n_missing_samples, dtype=np.bool)))
rng = np.random.RandomState(0)
rng.shuffle(missing_samples)
missing_features = rng.randint(0, data.shape[1], n_missing_samples)
data.values[np.where(missing_samples)[0], missing_features] = np.nan
if not os.path.isdir('data'):
os.mkdir('data')
# Save the train data to a csv to be uploaded to the datastore
pd.DataFrame(data).to_csv("data/train_data.csv", index=False)
ds = ws.get_default_datastore()
ds.upload(src_dir='./data', target_path='bankmarketing', overwrite=True, show_progress=True)
# Upload the training data as a tabular dataset for access during training on remote compute
train_data = Dataset.Tabular.from_delimited_files(path=ds.path('bankmarketing/train_data.csv'))
label = "y"
###Output
_____no_output_____
###Markdown
Validation Data
###Code
validation_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_validate.csv"
validation_dataset = Dataset.Tabular.from_delimited_files(validation_data)
###Output
_____no_output_____
###Markdown
Test Data
###Code
test_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_test.csv"
test_dataset = Dataset.Tabular.from_delimited_files(test_data)
###Output
_____no_output_____
###Markdown
TrainInstantiate an AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression or forecasting||**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracyAUC_weightedaverage_precision_score_weightednorm_macro_recallprecision_score_weighted||**iteration_timeout_minutes**|Time limit in minutes for each iteration.||**blocked_models** | *List* of *strings* indicating machine learning algorithms for AutoML to avoid in this run. Allowed values for **Classification**LogisticRegressionSGDMultinomialNaiveBayesBernoulliNaiveBayesSVMLinearSVMKNNDecisionTreeRandomForestExtremeRandomTreesLightGBMGradientBoostingTensorFlowDNNTensorFlowLinearClassifierAllowed values for **Regression**ElasticNetGradientBoostingDecisionTreeKNNLassoLarsSGDRandomForestExtremeRandomTreesLightGBMTensorFlowLinearRegressorTensorFlowDNNAllowed values for **Forecasting**ElasticNetGradientBoostingDecisionTreeKNNLassoLarsSGDRandomForestExtremeRandomTreesLightGBMTensorFlowLinearRegressorTensorFlowDNNArimaProphet||**allowed_models** | *List* of *strings* indicating machine learning algorithms for AutoML to use in this run. Same values listed above for **blocked_models** allowed for **allowed_models**.||**experiment_exit_score**| Value indicating the target for *primary_metric*. Once the target is surpassed, the run terminates.||**experiment_timeout_hours**| Maximum amount of time in hours that all iterations combined can take before the experiment terminates.||**enable_early_stopping**| Flag to enable early termination if the score is not improving in the short term.||**featurization**| 'auto' / 'off' Indicator for whether featurization step should be done automatically or not. Note: If the input data is sparse, featurization cannot be turned on.||**n_cross_validations**|Number of cross validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.|**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric)
###Code
automl_settings = {
"experiment_timeout_hours" : 0.3,
"enable_early_stopping" : True,
"iteration_timeout_minutes": 5,
"max_concurrent_iterations": 4,
"max_cores_per_iteration": -1,
#"n_cross_validations": 2,
"primary_metric": 'AUC_weighted',
"featurization": 'auto',
"verbosity": logging.INFO,
}
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
compute_target=compute_target,
experiment_exit_score = 0.9984,
blocked_models = ['KNN','LinearSVM'],
enable_onnx_compatible_models=True,
training_data = train_data,
label_column_name = label,
validation_data = validation_dataset,
**automl_settings
)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while. Validation errors and current status will be shown when setting `show_output=True` and the execution will be synchronous.
###Code
remote_run = experiment.submit(automl_config, show_output = False)
remote_run
###Output
_____no_output_____
###Markdown
Run the following cell to access previous runs. Uncomment the cell below and update the run_id.
###Code
#from azureml.train.automl.run import AutoMLRun
#remote_run = AutoMLRun(experiment=experiment, run_id='<run_ID_goes_here>')
#remote_run
# Wait for the remote run to complete
remote_run.wait_for_completion()
best_run_customized, fitted_model_customized = remote_run.get_output()
###Output
_____no_output_____
###Markdown
TransparencyView updated featurization summary
###Code
custom_featurizer = fitted_model_customized.named_steps['datatransformer']
df = custom_featurizer.get_featurization_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Set `is_user_friendly=False` to get a more detailed summary for the transforms being applied.
###Code
df = custom_featurizer.get_featurization_summary(is_user_friendly=False)
pd.DataFrame(data=df)
df = custom_featurizer.get_stats_feature_type_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Results
###Code
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
###Output
_____no_output_____
###Markdown
Retrieve the Best Model's explanationRetrieve the explanation from the best_run which includes explanations for engineered features and raw features. Make sure that the run for generating explanations for the best model is completed.
###Code
# Wait for the best model explanation run to complete
from azureml.core.run import Run
model_explainability_run_id = remote_run.id + "_" + "ModelExplain"
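# The explanation child run's id is derived here by convention: the parent
# run id plus the "_ModelExplain" suffix.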
print(model_explainability_run_id)
model_explainability_run = Run(experiment=experiment, run_id=model_explainability_run_id)
model_explainability_run.wait_for_completion()
# Get the best run object
best_run, fitted_model = remote_run.get_output()
###Output
_____no_output_____
###Markdown
Download engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=False)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Download raw feature importance from artifact storeYou can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
raw_explanations = client.download_model_explanation(raw=True)
exp_data = raw_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Retrieve the Best ONNX ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.Set the parameter return_onnx_model=True to retrieve the best ONNX model, instead of the Python model.
###Code
best_run, onnx_mdl = remote_run.get_output(return_onnx_model=True)
###Output
_____no_output_____
###Markdown
Save the best ONNX model
###Code
from azureml.automl.runtime.onnx_convert import OnnxConverter
onnx_fl_path = "./best_model.onnx"
OnnxConverter.save_onnx_model(onnx_mdl, onnx_fl_path)
###Output
_____no_output_____
###Markdown
Predict with the ONNX model, using onnxruntime package
###Code
import sys
import json
from azureml.automl.core.onnx_convert import OnnxConvertConstants
from azureml.train.automl import constants
if sys.version_info < OnnxConvertConstants.OnnxIncompatiblePythonVersion:
python_version_compatible = True
else:
python_version_compatible = False
import onnxruntime
from azureml.automl.runtime.onnx_convert import OnnxInferenceHelper
def get_onnx_res(run):
res_path = 'onnx_resource.json'
run.download_file(name=constants.MODEL_RESOURCE_PATH_ONNX, output_file_path=res_path)
with open(res_path) as f:
onnx_res = json.load(f)
return onnx_res
if python_version_compatible:
test_df = test_dataset.to_pandas_dataframe()
mdl_bytes = onnx_mdl.SerializeToString()
onnx_res = get_onnx_res(best_run)
onnxrt_helper = OnnxInferenceHelper(mdl_bytes, onnx_res)
pred_onnx, pred_prob_onnx = onnxrt_helper.predict(test_df)
print(pred_onnx)
print(pred_prob_onnx)
else:
print('Please use Python version 3.6 or 3.7 to run the inference helper.')
###Output
_____no_output_____
###Markdown
Deploy Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details
###Code
best_run, fitted_model = remote_run.get_output()
model_name = best_run.properties['model_name']
script_file_name = 'inference/score.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', 'inference/score.py')
###Output
_____no_output_____
###Markdown
Register the Fitted Model for DeploymentIf neither `metric` nor `iteration` are specified in the `register_model` call, the iteration with the best primary metric is registered.
###Code
description = 'AutoML Model trained on bank marketing data to predict if a client will subscribe to a term deposit'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id) # This will be written to the script file later in the notebook.
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
from azureml.core.environment import Environment
inference_config = InferenceConfig(entry_script=script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 1,
tags = {'area': "bmData", 'type': "automl_classification"},
description = 'sample service for Automl Classification')
aci_service_name = 'automl-sample-bankmarketing-all'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
###Output
_____no_output_____
###Markdown
Get Logs from a Deployed Web ServiceGets logs from a deployed web service.
###Code
#aci_service.get_logs()
###Output
_____no_output_____
###Markdown
TestNow that the model is trained, run the test data through the trained model to get the predicted values. This calls the ACI web service to do the prediction.Note that the JSON passed to the ACI web service is an array of rows of data. Each row should either be an array of values in the same order that was used for training or a dictionary where the keys are the same as the column names used for training. The example below uses dictionary rows.
###Code
# Load the bank marketing datasets.
from numpy import array
X_test = test_dataset.drop_columns(columns=['y'])
y_test = test_dataset.keep_columns(columns=['y'], validate=True)
test_dataset.take(5).to_pandas_dataframe()
X_test = X_test.to_pandas_dataframe()
y_test = y_test.to_pandas_dataframe()
import json
import requests
X_test_json = X_test.to_json(orient='records')
data = "{\"data\": " + X_test_json +"}"
headers = {'Content-Type': 'application/json'}
resp = requests.post(aci_service.scoring_uri, data, headers=headers)
y_pred = json.loads(json.loads(resp.text))['result']
actual = array(y_test)
actual = actual[:,0]
print(len(y_pred), " ", len(actual))
###Output
_____no_output_____
###Markdown
Calculate metrics for the predictionNow visualize the data as a confusion matrix comparing the predicted values against the actual values.
###Code
%matplotlib notebook
from sklearn.metrics import confusion_matrix
import numpy as np
import itertools
cf = confusion_matrix(actual, y_pred)
plt.imshow(cf, cmap=plt.cm.Blues, interpolation='nearest')
plt.colorbar()
plt.title('Confusion Matrix')
plt.xlabel('Predicted')
plt.ylabel('Actual')
class_labels = ['no', 'yes']
tick_marks = np.arange(len(class_labels))
plt.xticks(tick_marks, class_labels)
plt.yticks([-0.5, 0, 1, 1.5], ['', 'no', 'yes', ''])
# plotting text value inside cells
thresh = cf.max() / 2.
for i, j in itertools.product(range(cf.shape[0]), range(cf.shape[1])):
    plt.text(j, i, format(cf[i, j], 'd'), horizontalalignment='center', color='white' if cf[i, j] > thresh else 'black')
plt.show()
###Output
_____no_output_____
###Markdown
Delete a Web ServiceDeletes the specified web service.
###Code
aci_service.delete()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing.png) Automated Machine Learning_**Classification with Deployment using a Bank Marketing Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Deploy](Deploy)1. [Test](Test)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the UCI Bank Marketing dataset to showcase how you can use AutoML for a classification problem and deploy it to an Azure Container Instance (ACI). The classification goal is to predict if the client will subscribe to a term deposit with the bank. If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. Please find the ONNX-related documentation [here](https://github.com/onnx/onnx). In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model using local compute with ONNX compatible config on.4. Explore the results, featurization transparency options and save the ONNX model.5. Inference with the ONNX model.6. Register the model.7. Create a container image.8. Create an Azure Container Instance (ACI) service.9. Test the ACI service.In addition, this notebook showcases the following features:- **Blocking** certain pipelines- Specifying **target metrics** to indicate stopping criteria- Handling **missing data** in the input SetupAs part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
import logging
from matplotlib import pyplot as plt
import pandas as pd
import os
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.automl.core.featurization import FeaturizationConfig
from azureml.core.dataset import Dataset
from azureml.train.automl import AutoMLConfig
from azureml.interpret import ExplanationClient
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.31.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
Accessing the Azure ML workspace requires authentication with Azure. The default authentication is interactive authentication using the default tenant. Executing the `ws = Workspace.from_config()` line in the cell below will prompt for authentication the first time that it is run. If you have multiple Azure tenants, you can specify the tenant by replacing the `ws = Workspace.from_config()` line in the cell below with the following:```from azureml.core.authentication import InteractiveLoginAuthentication; auth = InteractiveLoginAuthentication(tenant_id='mytenantid'); ws = Workspace.from_config(auth=auth)```If you need to run in an environment where interactive login is not possible, you can use Service Principal authentication by replacing the `ws = Workspace.from_config()` line in the cell below with the following:```from azureml.core.authentication import ServicePrincipalAuthentication; auth = ServicePrincipalAuthentication('mytenantid', 'myappid', 'mypassword'); ws = Workspace.from_config(auth=auth)```For more details, see [aka.ms/aml-notebook-auth](http://aka.ms/aml-notebook-auth)
###Code
ws = Workspace.from_config()
# choose a name for experiment
experiment_name = 'automl-classification-bmarketing-all'
experiment=Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
Create or Attach existing AmlComputeYou will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource. > Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace, this code will skip the creation process. As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
cpu_cluster_name = "cpu-cluster-4"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Data Load DataLeverage Azure compute to load the bank marketing dataset as a Tabular Dataset into the dataset variable. Training Data
###Code
data = pd.read_csv("https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv")
data.head()
# Add missing values in 75% of the lines.
import numpy as np
missing_rate = 0.75
n_missing_samples = int(np.floor(data.shape[0] * missing_rate))
missing_samples = np.hstack((np.zeros(data.shape[0] - n_missing_samples, dtype=np.bool), np.ones(n_missing_samples, dtype=np.bool)))
rng = np.random.RandomState(0)
rng.shuffle(missing_samples)
missing_features = rng.randint(0, data.shape[1], n_missing_samples)
data.values[np.where(missing_samples)[0], missing_features] = np.nan
if not os.path.isdir('data'):
os.mkdir('data')
# Save the train data to a csv to be uploaded to the datastore
pd.DataFrame(data).to_csv("data/train_data.csv", index=False)
ds = ws.get_default_datastore()
ds.upload(src_dir='./data', target_path='bankmarketing', overwrite=True, show_progress=True)
# Upload the training data as a tabular dataset for access during training on remote compute
train_data = Dataset.Tabular.from_delimited_files(path=ds.path('bankmarketing/train_data.csv'))
label = "y"
###Output
_____no_output_____
###Markdown
Validation Data
###Code
validation_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_validate.csv"
validation_dataset = Dataset.Tabular.from_delimited_files(validation_data)
###Output
_____no_output_____
###Markdown
Test Data
###Code
test_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_test.csv"
test_dataset = Dataset.Tabular.from_delimited_files(test_data)
###Output
_____no_output_____
###Markdown
TrainInstantiate an AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression or forecasting||**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracyAUC_weightedaverage_precision_score_weightednorm_macro_recallprecision_score_weighted||**iteration_timeout_minutes**|Time limit in minutes for each iteration.||**blocked_models** | *List* of *strings* indicating machine learning algorithms for AutoML to avoid in this run. Allowed values for **Classification**LogisticRegressionSGDMultinomialNaiveBayesBernoulliNaiveBayesSVMLinearSVMKNNDecisionTreeRandomForestExtremeRandomTreesLightGBMGradientBoostingTensorFlowDNNTensorFlowLinearClassifierAllowed values for **Regression**ElasticNetGradientBoostingDecisionTreeKNNLassoLarsSGDRandomForestExtremeRandomTreesLightGBMTensorFlowLinearRegressorTensorFlowDNNAllowed values for **Forecasting**ElasticNetGradientBoostingDecisionTreeKNNLassoLarsSGDRandomForestExtremeRandomTreesLightGBMTensorFlowLinearRegressorTensorFlowDNNArimaProphet||**allowed_models** | *List* of *strings* indicating machine learning algorithms for AutoML to use in this run. Same values listed above for **blocked_models** allowed for **allowed_models**.||**experiment_exit_score**| Value indicating the target for *primary_metric*. Once the target is surpassed, the run terminates.||**experiment_timeout_hours**| Maximum amount of time in hours that all iterations combined can take before the experiment terminates.||**enable_early_stopping**| Flag to enable early termination if the score is not improving in the short term.||**featurization**| 'auto' / 'off' Indicator for whether featurization step should be done automatically or not. Note: If the input data is sparse, featurization cannot be turned on.||**n_cross_validations**|Number of cross validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.|**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric)
###Code
automl_settings = {
"experiment_timeout_hours" : 0.3,
"enable_early_stopping" : True,
"iteration_timeout_minutes": 5,
"max_concurrent_iterations": 4,
"max_cores_per_iteration": -1,
#"n_cross_validations": 2,
"primary_metric": 'AUC_weighted',
"featurization": 'auto',
"verbosity": logging.INFO,
}
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
compute_target=compute_target,
experiment_exit_score = 0.9984,
blocked_models = ['KNN','LinearSVM'],
enable_onnx_compatible_models=True,
training_data = train_data,
label_column_name = label,
validation_data = validation_dataset,
**automl_settings
)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while. With `show_output=True`, validation errors and the current status are shown and the execution is synchronous.
###Code
remote_run = experiment.submit(automl_config, show_output = False)
###Output
_____no_output_____
###Markdown
Run the following cell to access previous runs. Uncomment the cell below and update the run_id.
###Code
#from azureml.train.automl.run import AutoMLRun
#remote_run = AutoMLRun(experiment=experiment, run_id='<run_ID_goes_here>')
#remote_run
# Wait for the remote run to complete
remote_run.wait_for_completion()
best_run_customized, fitted_model_customized = remote_run.get_output()
###Output
_____no_output_____
###Markdown
Transparency

View the updated featurization summary.
###Code
custom_featurizer = fitted_model_customized.named_steps['datatransformer']
df = custom_featurizer.get_featurization_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Set `is_user_friendly=False` to get a more detailed summary for the transforms being applied.
###Code
df = custom_featurizer.get_featurization_summary(is_user_friendly=False)
pd.DataFrame(data=df)
df = custom_featurizer.get_stats_feature_type_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Results
###Code
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
###Output
_____no_output_____
###Markdown
Retrieve the Best Model's explanation

Retrieve the explanation from the best_run, which includes explanations for engineered features and raw features. Make sure that the run for generating explanations for the best model is completed.
###Code
# Wait for the best model explanation run to complete
from azureml.core.run import Run
model_explainability_run_id = remote_run.id + "_" + "ModelExplain"
print(model_explainability_run_id)
model_explainability_run = Run(experiment=experiment, run_id=model_explainability_run_id)
model_explainability_run.wait_for_completion()
# Get the best run object
best_run, fitted_model = remote_run.get_output()
###Output
_____no_output_____
###Markdown
Download engineered feature importance from artifact store

You can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=False)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
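###Markdown
`exp_data` is a plain dict mapping engineered feature names to importance values, so ranking it needs only standard Python. A small optional sketch (not in the original notebook):
###Code
# Sort the feature -> importance dict and show the ten most important features.
top10 = sorted(exp_data.items(), key=lambda kv: kv[1], reverse=True)[:10]
for name, importance in top10:
    print(f"{name}: {importance:.4f}")
###Output
_____no_output_____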
###Markdown
Download raw feature importance from artifact store

You can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
raw_explanations = client.download_model_explanation(raw=True)
exp_data = raw_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Retrieve the Best ONNX Model

Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.

Set the parameter `return_onnx_model=True` to retrieve the best ONNX model, instead of the Python model.
###Code
best_run, onnx_mdl = remote_run.get_output(return_onnx_model=True)
###Output
_____no_output_____
###Markdown
Save the best ONNX model
###Code
from azureml.automl.runtime.onnx_convert import OnnxConverter
onnx_fl_path = "./best_model.onnx"
OnnxConverter.save_onnx_model(onnx_mdl, onnx_fl_path)
###Output
_____no_output_____
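###Markdown
To double-check the exported file, here is a hedged sketch using the standalone `onnx` package (an assumption: it is installed alongside the AzureML ONNX extras). It loads the saved model and runs the structural checker.
###Code
import onnx

# Load the saved model and verify its structure with the ONNX checker.
loaded = onnx.load(onnx_fl_path)
onnx.checker.check_model(loaded)
print("Model IR version:", loaded.ir_version)
###Output
_____no_output_____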
###Markdown
Predict with the ONNX model, using onnxruntime package
###Code
import sys
import json
from azureml.automl.core.onnx_convert import OnnxConvertConstants
from azureml.train.automl import constants
if sys.version_info < OnnxConvertConstants.OnnxIncompatiblePythonVersion:
python_version_compatible = True
else:
python_version_compatible = False
import onnxruntime
from azureml.automl.runtime.onnx_convert import OnnxInferenceHelper
def get_onnx_res(run):
res_path = 'onnx_resource.json'
run.download_file(name=constants.MODEL_RESOURCE_PATH_ONNX, output_file_path=res_path)
with open(res_path) as f:
onnx_res = json.load(f)
return onnx_res
if python_version_compatible:
test_df = test_dataset.to_pandas_dataframe()
mdl_bytes = onnx_mdl.SerializeToString()
onnx_res = get_onnx_res(best_run)
onnxrt_helper = OnnxInferenceHelper(mdl_bytes, onnx_res)
pred_onnx, pred_prob_onnx = onnxrt_helper.predict(test_df)
print(pred_onnx)
print(pred_prob_onnx)
else:
print('Please use Python version 3.6 or 3.7 to run the inference helper.')
###Output
_____no_output_____
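###Markdown
For reference, a minimal sketch (not part of the AzureML helper flow above) that opens the saved model with `onnxruntime` directly and inspects its declared inputs. Feeding it data would additionally require matching the input names and dtypes, which `OnnxInferenceHelper` normally handles.
###Code
import onnxruntime as ort

# Open a plain onnxruntime session on the saved model file and list its inputs.
sess = ort.InferenceSession(onnx_fl_path, providers=["CPUExecutionProvider"])
for inp in sess.get_inputs():
    print(inp.name, inp.type, inp.shape)
###Output
_____no_output_____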
###Markdown
Deploy

Retrieve the Best Model

Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.

Widget for Monitoring Runs

The widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.

**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details.
###Code
best_run, fitted_model = remote_run.get_output()
model_name = best_run.properties['model_name']
script_file_name = 'inference/score.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', 'inference/score.py')
###Output
_____no_output_____
###Markdown
Register the Fitted Model for Deployment

If neither `metric` nor `iteration` are specified in the `register_model` call, the iteration with the best primary metric is registered.
###Code
description = 'AutoML Model trained on bank marketing data to predict if a client will subscribe to a term deposit'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id) # This will be written to the script file later in the notebook.
###Output
_____no_output_____
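###Markdown
Once registered, the model can be looked up again by name in a later session. A minimal sketch, assuming the registration above succeeded:
###Code
from azureml.core.model import Model

# Look up the newly registered model in the workspace model registry.
registered = Model(ws, name=model_name)
print(registered.name, registered.version)
###Output
_____no_output_____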
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
from azureml.core.environment import Environment
inference_config = InferenceConfig(entry_script=script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 1,
tags = {'area': "bmData", 'type': "automl_classification"},
description = 'sample service for Automl Classification')
aci_service_name = 'automl-sample-bankmarketing-all'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
###Output
_____no_output_____
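###Markdown
It can be handy to record the service endpoints right after deployment. A small sketch using standard attributes of the deployed web service object:
###Code
# The REST endpoint used for scoring and the auto-generated swagger document.
print("Scoring URI:", aci_service.scoring_uri)
print("Swagger URI:", aci_service.swagger_uri)
###Output
_____no_output_____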
###Markdown
Get Logs from a Deployed Web Service

Gets logs from a deployed web service.
###Code
#aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Test

Now that the model is trained, run the test data through the trained model to get the predicted values. This calls the ACI web service to do the prediction.

Note that the JSON passed to the ACI web service is an array of rows of data. Each row should either be an array of values in the same order that was used for training or a dictionary where the keys are the same as the column names used for training. The example below uses dictionary rows.
###Code
# Load the bank marketing datasets.
from numpy import array
X_test = test_dataset.drop_columns(columns=['y'])
y_test = test_dataset.keep_columns(columns=['y'], validate=True)
test_dataset.take(5).to_pandas_dataframe()
X_test = X_test.to_pandas_dataframe()
y_test = y_test.to_pandas_dataframe()
import json
import requests
X_test_json = X_test.to_json(orient='records')
data = "{\"data\": " + X_test_json +"}"
headers = {'Content-Type': 'application/json'}
resp = requests.post(aci_service.scoring_uri, data, headers=headers)
y_pred = json.loads(json.loads(resp.text))['result']
actual = array(y_test)
actual = actual[:,0]
print(len(y_pred), " ", len(actual))
###Output
_____no_output_____
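###Markdown
Before plotting, a one-line sanity check (a sketch, not in the original) of the overall accuracy of the service predictions against the held-out labels:
###Code
import numpy as np

# Element-wise comparison of predicted vs. actual labels.
accuracy = np.mean(np.array(y_pred) == actual)
print(f"Accuracy: {accuracy:.4f}")
###Output
_____no_output_____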
###Markdown
Calculate metrics for the prediction

Now visualize the data as a confusion matrix comparing the predicted values against the actual values.
###Code
%matplotlib notebook
from sklearn.metrics import confusion_matrix
import numpy as np
import itertools

cf = confusion_matrix(actual, y_pred)
plt.imshow(cf, cmap=plt.cm.Blues, interpolation='nearest')
plt.colorbar()
plt.title('Confusion Matrix')
plt.xlabel('Predicted')
plt.ylabel('Actual')
class_labels = ['no', 'yes']
tick_marks = np.arange(len(class_labels))
plt.xticks(tick_marks, class_labels)
plt.yticks([-0.5, 0, 1, 1.5], ['', 'no', 'yes', ''])
# Plot the count inside each cell of the matrix.
thresh = cf.max() / 2.
for i, j in itertools.product(range(cf.shape[0]), range(cf.shape[1])):
    plt.text(j, i, format(cf[i, j], 'd'), horizontalalignment='center',
             color='white' if cf[i, j] > thresh else 'black')
plt.show()
###Output
_____no_output_____
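###Markdown
On scikit-learn 1.0 or newer, the same plot can be produced with much less code; a hedged alternative sketch, assuming such a version is available in this environment:
###Code
from sklearn.metrics import ConfusionMatrixDisplay

# Build and render the confusion matrix directly from the label arrays.
ConfusionMatrixDisplay.from_predictions(actual, y_pred, cmap=plt.cm.Blues)
plt.show()
###Output
_____no_output_____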
###Markdown
Delete a Web Service

Deletes the specified web service.
###Code
aci_service.delete()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.

Licensed under the MIT License.

![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing.png)

Automated Machine Learning

_**Classification with Deployment using a Bank Marketing Dataset**_

Contents

1. [Introduction](#Introduction)
1. [Setup](#Setup)
1. [Train](#Train)
1. [Results](#Results)
1. [Deploy](#Deploy)
1. [Test](#Test)
1. [Acknowledgements](#Acknowledgements)

Introduction

In this example we use the UCI Bank Marketing dataset to showcase how you can use AutoML for a classification problem and deploy it to an Azure Container Instance (ACI). The classification goal is to predict if the client will subscribe to a term deposit with the bank.

If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. Please find the ONNX related documentation [here](https://github.com/onnx/onnx).

In this notebook you will learn how to:

1. Create an experiment using an existing workspace.
2. Configure AutoML using `AutoMLConfig`.
3. Train the model using local compute with ONNX compatible config on.
4. Explore the results, featurization transparency options and save the ONNX model.
5. Inference with the ONNX model.
6. Register the model.
7. Create a container image.
8. Create an Azure Container Instance (ACI) service.
9. Test the ACI service.

In addition this notebook showcases the following features:

- **Blacklisting** certain pipelines
- Specifying **target metrics** to indicate stopping criteria
- Handling **missing data** in the input

Setup

As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
import logging
from matplotlib import pyplot as plt
import pandas as pd
import os
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.automl.core.featurization import FeaturizationConfig
from azureml.core.dataset import Dataset
from azureml.train.automl import AutoMLConfig
from azureml.explain.model._internal.explanation_client import ExplanationClient
###Output
_____no_output_____
###Markdown
Accessing the Azure ML workspace requires authentication with Azure.

The default authentication is interactive authentication using the default tenant. Executing the `ws = Workspace.from_config()` line in the cell below will prompt for authentication the first time that it is run.

If you have multiple Azure tenants, you can specify the tenant by replacing the `ws = Workspace.from_config()` line in the cell below with the following:

```
from azureml.core.authentication import InteractiveLoginAuthentication
auth = InteractiveLoginAuthentication(tenant_id = 'mytenantid')
ws = Workspace.from_config(auth = auth)
```

If you need to run in an environment where interactive login is not possible, you can use Service Principal authentication by replacing the `ws = Workspace.from_config()` line in the cell below with the following:

```
from azureml.core.authentication import ServicePrincipalAuthentication
auth = ServicePrincipalAuthentication('mytenantid', 'myappid', 'mypassword')
ws = Workspace.from_config(auth = auth)
```

For more details, see [aka.ms/aml-notebook-auth](http://aka.ms/aml-notebook-auth)
###Code
ws = Workspace.from_config()
# choose a name for experiment
experiment_name = 'automl-classification-bmarketing-all'
experiment=Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', None)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
Create or Attach existing AmlCompute

You will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.

As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read this article on the default limits and how to request more quota.
###Code
from azureml.core.compute import AmlCompute
from azureml.core.compute import ComputeTarget
# Choose a name for your cluster.
amlcompute_cluster_name = "cpu-cluster-4"
found = False
# Check if this compute target already exists in the workspace.
cts = ws.compute_targets
if amlcompute_cluster_name in cts and cts[amlcompute_cluster_name].type == 'AmlCompute':
found = True
print('Found existing compute target.')
compute_target = cts[amlcompute_cluster_name]
if not found:
print('Creating a new compute target...')
provisioning_config = AmlCompute.provisioning_configuration(vm_size = "STANDARD_D2_V2", # for GPU, use "STANDARD_NC6"
#vm_priority = 'lowpriority', # optional
max_nodes = 6)
# Create the cluster.
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, provisioning_config)
print('Checking cluster status...')
# Can poll for a minimum number of nodes and for a specific timeout.
# If no min_node_count is provided, it will use the scale settings for the cluster.
compute_target.wait_for_completion(show_output = True, min_node_count = None, timeout_in_minutes = 20)
# For a more detailed view of current AmlCompute status, use get_status().
###Output
_____no_output_____
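###Markdown
As the comment in the previous cell suggests, `get_status()` gives a more detailed view of the cluster. A minimal sketch of inspecting it (assuming the compute target provisioned successfully):
###Code
# Dump the current AmlCompute status (node counts, provisioning state, errors).
status = compute_target.get_status()
print(status.serialize())
###Output
_____no_output_____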
###Markdown
Data

Load Data

Leverage azure compute to load the bank marketing dataset as a Tabular Dataset into the dataset variable.

Training Data
###Code
data = pd.read_csv("https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv")
data.head()
# Add missing values in 75% of the lines.
import numpy as np
missing_rate = 0.75
n_missing_samples = int(np.floor(data.shape[0] * missing_rate))
missing_samples = np.hstack((np.zeros(data.shape[0] - n_missing_samples, dtype=bool), np.ones(n_missing_samples, dtype=bool)))
rng = np.random.RandomState(0)
rng.shuffle(missing_samples)
missing_features = rng.randint(0, data.shape[1], n_missing_samples)
data.values[np.where(missing_samples)[0], missing_features] = np.nan
if not os.path.isdir('data'):
os.mkdir('data')
# Save the train data to a csv to be uploaded to the datastore
pd.DataFrame(data).to_csv("data/train_data.csv", index=False)
ds = ws.get_default_datastore()
ds.upload(src_dir='./data', target_path='bankmarketing', overwrite=True, show_progress=True)
# Upload the training data as a tabular dataset for access during training on remote compute
train_data = Dataset.Tabular.from_delimited_files(path=ds.path('bankmarketing/train_data.csv'))
label = "y"
###Output
_____no_output_____
###Markdown
Validation Data
###Code
validation_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_validate.csv"
validation_dataset = Dataset.Tabular.from_delimited_files(validation_data)
###Output
_____no_output_____
###Markdown
Test Data
###Code
test_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_test.csv"
test_dataset = Dataset.Tabular.from_delimited_files(test_data)
###Output
_____no_output_____
###Markdown
Train

Instantiate an `AutoMLConfig` object. This defines the settings and data used to run the experiment.

|Property|Description|
|-|-|
|**task**|classification or regression or forecasting|
|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracy, AUC_weighted, average_precision_score_weighted, norm_macro_recall, precision_score_weighted|
|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|
|**blacklist_models**|*List* of *strings* indicating machine learning algorithms for AutoML to avoid in this run. Allowed values for **Classification**: LogisticRegression, SGD, MultinomialNaiveBayes, BernoulliNaiveBayes, SVM, LinearSVM, KNN, DecisionTree, RandomForest, ExtremeRandomTrees, LightGBM, GradientBoosting, TensorFlowDNN, TensorFlowLinearClassifier. Allowed values for **Regression**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN. Allowed values for **Forecasting**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN, Arima, Prophet|
|**whitelist_models**|*List* of *strings* indicating machine learning algorithms for AutoML to use in this run. The same values listed above for **blacklist_models** are allowed for **whitelist_models**.|
|**experiment_exit_score**|Value indicating the target for *primary_metric*. Once the target is surpassed the run terminates.|
|**experiment_timeout_hours**|Maximum amount of time in hours that all iterations combined can take before the experiment terminates.|
|**enable_early_stopping**|Flag to enable early termination if the score is not improving in the short term.|
|**featurization**|'auto' / 'off' Indicator for whether the featurization step should be done automatically or not. Note: If the input data is sparse, featurization cannot be turned on.|
|**n_cross_validations**|Number of cross validation splits.|
|**training_data**|Input dataset, containing both features and label column.|
|**label_column_name**|The name of the label column.|
|**model_explainability**|Indicate to explain each trained pipeline or not.|

**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)
###Code
automl_settings = {
"experiment_timeout_hours" : 0.3,
"enable_early_stopping" : True,
"iteration_timeout_minutes": 5,
"max_concurrent_iterations": 4,
"max_cores_per_iteration": -1,
#"n_cross_validations": 2,
"primary_metric": 'AUC_weighted',
"featurization": 'auto',
"verbosity": logging.INFO,
}
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
compute_target=compute_target,
experiment_exit_score = 0.9984,
blacklist_models = ['KNN','LinearSVM'],
enable_onnx_compatible_models=True,
training_data = train_data,
label_column_name = label,
validation_data = validation_dataset,
model_explainability=True,
**automl_settings
)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.
###Code
remote_run = experiment.submit(automl_config, show_output = False)
remote_run
###Output
_____no_output_____
###Markdown
Run the following cell to access previous runs. Uncomment the cell below and update the run_id.
###Code
#from azureml.train.automl.run import AutoMLRun
#experiment_name = 'automl-classification-bmarketing'
#experiment = Experiment(ws, experiment_name)
#remote_run = AutoMLRun(experiment=experiment, run_id='<run_ID_goes_here>')
#remote_run
# Wait for the remote run to complete
remote_run.wait_for_completion()
best_run_customized, fitted_model_customized = remote_run.get_output()
###Output
_____no_output_____
###Markdown
Transparency

View the updated featurization summary.
###Code
custom_featurizer = fitted_model_customized.named_steps['datatransformer']
df = custom_featurizer.get_featurization_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Set `is_user_friendly=False` to get a more detailed summary for the transforms being applied.
###Code
df = custom_featurizer.get_featurization_summary(is_user_friendly=False)
pd.DataFrame(data=df)
df = custom_featurizer.get_stats_feature_type_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Results
###Code
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
###Output
_____no_output_____
###Markdown
Retrieve the Best Model's explanation

Retrieve the explanation from the best_run, which includes explanations for engineered features and raw features. Make sure that the run for generating explanations for the best model is completed.
###Code
# Wait for the best model explanation run to complete
from azureml.train.automl.run import AutoMLRun
model_explainability_run_id = remote_run.get_properties().get('ModelExplainRunId')
print(model_explainability_run_id)
if model_explainability_run_id is not None:
model_explainability_run = AutoMLRun(experiment=experiment, run_id=model_explainability_run_id)
model_explainability_run.wait_for_completion()
# Get the best run object
best_run, fitted_model = remote_run.get_output()
###Output
_____no_output_____
###Markdown
Download engineered feature importance from artifact store

You can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=False)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Download raw feature importance from artifact store

You can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
raw_explanations = client.download_model_explanation(raw=True)
exp_data = raw_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Retrieve the Best ONNX Model

Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.

Set the parameter `return_onnx_model=True` to retrieve the best ONNX model, instead of the Python model.
###Code
best_run, onnx_mdl = remote_run.get_output(return_onnx_model=True)
###Output
_____no_output_____
###Markdown
Save the best ONNX model
###Code
from azureml.automl.runtime.onnx_convert import OnnxConverter
onnx_fl_path = "./best_model.onnx"
OnnxConverter.save_onnx_model(onnx_mdl, onnx_fl_path)
###Output
_____no_output_____
###Markdown
Predict with the ONNX model, using onnxruntime package
###Code
import sys
import json
from azureml.automl.core.onnx_convert import OnnxConvertConstants
from azureml.train.automl import constants
if sys.version_info < OnnxConvertConstants.OnnxIncompatiblePythonVersion:
python_version_compatible = True
else:
python_version_compatible = False
import onnxruntime
from azureml.automl.runtime.onnx_convert import OnnxInferenceHelper
def get_onnx_res(run):
res_path = 'onnx_resource.json'
run.download_file(name=constants.MODEL_RESOURCE_PATH_ONNX, output_file_path=res_path)
with open(res_path) as f:
onnx_res = json.load(f)
return onnx_res
if python_version_compatible:
test_df = test_dataset.to_pandas_dataframe()
mdl_bytes = onnx_mdl.SerializeToString()
onnx_res = get_onnx_res(best_run)
onnxrt_helper = OnnxInferenceHelper(mdl_bytes, onnx_res)
pred_onnx, pred_prob_onnx = onnxrt_helper.predict(test_df)
print(pred_onnx)
print(pred_prob_onnx)
else:
print('Please use Python version 3.6 or 3.7 to run the inference helper.')
###Output
_____no_output_____
###Markdown
Deploy

Retrieve the Best Model

Below we select the best pipeline from our iterations. The `get_output` method on `remote_run` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.

Widget for Monitoring Runs

The widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.

**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details.
###Code
best_run, fitted_model = remote_run.get_output()
model_name = best_run.properties['model_name']
script_file_name = 'inference/score.py'
conda_env_file_name = 'inference/env.yml'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', 'inference/score.py')
best_run.download_file('outputs/conda_env_v_1_0_0.yml', 'inference/env.yml')
###Output
_____no_output_____
###Markdown
Register the Fitted Model for Deployment

If neither `metric` nor `iteration` are specified in the `register_model` call, the iteration with the best primary metric is registered.
###Code
description = 'AutoML Model trained on bank marketing data to predict if a client will subscribe to a term deposit'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id) # This will be written to the script file later in the notebook.
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
from azureml.core.environment import Environment
myenv = Environment.from_conda_specification(name="myenv", file_path=conda_env_file_name)
inference_config = InferenceConfig(entry_script=script_file_name, environment=myenv)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 1,
tags = {'area': "bmData", 'type': "automl_classification"},
description = 'sample service for Automl Classification')
aci_service_name = 'automl-sample-bankmarketing-all'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
###Output
_____no_output_____
###Markdown
Delete a Web Service

Deletes the specified web service.
###Code
#aci_service.delete()
###Output
_____no_output_____
###Markdown
Get Logs from a Deployed Web Service

Gets logs from a deployed web service.
###Code
#aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Test

Now that the model is trained, run the test data through the trained model to get the predicted values.
###Code
# Load the bank marketing datasets.
from numpy import array
X_test = test_dataset.drop_columns(columns=['y'])
y_test = test_dataset.keep_columns(columns=['y'], validate=True)
test_dataset.take(5).to_pandas_dataframe()
X_test = X_test.to_pandas_dataframe()
y_test = y_test.to_pandas_dataframe()
y_pred = fitted_model.predict(X_test)
actual = array(y_test)
actual = actual[:,0]
print(y_pred.shape, " ", actual.shape)
###Output
_____no_output_____
###Markdown
Calculate metrics for the prediction

Now visualize the data on a scatter plot comparing the truth (actual) values with the predicted values from the trained model.
###Code
%matplotlib notebook
test_pred = plt.scatter(actual, y_pred, color='b')
test_test = plt.scatter(actual, actual, color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
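###Markdown
Because the labels here are categorical ('yes'/'no'), a scatter plot is hard to read; a cross-tabulation of actual versus predicted classes often summarizes the result better. A small optional sketch using pandas:
###Code
import pandas as pd

# Counts of actual vs. predicted classes; the diagonal holds correct predictions.
print(pd.crosstab(pd.Series(actual, name='actual'),
                  pd.Series(y_pred, name='predicted')))
###Output
_____no_output_____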
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.

Licensed under the MIT License.

![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing.png)

Automated Machine Learning

_**Classification with Deployment using a Bank Marketing Dataset**_

Contents

1. [Introduction](#Introduction)
1. [Setup](#Setup)
1. [Train](#Train)
1. [Results](#Results)
1. [Deploy](#Deploy)
1. [Test](#Test)
1. [Acknowledgements](#Acknowledgements)

Introduction

In this example we use the UCI Bank Marketing dataset to showcase how you can use AutoML for a classification problem and deploy it to an Azure Container Instance (ACI). The classification goal is to predict if the client will subscribe to a term deposit with the bank.

If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. Please find the ONNX related documentation [here](https://github.com/onnx/onnx).

In this notebook you will learn how to:

1. Create an experiment using an existing workspace.
2. Configure AutoML using `AutoMLConfig`.
3. Train the model using local compute with ONNX compatible config on.
4. Explore the results, featurization transparency options and save the ONNX model.
5. Inference with the ONNX model.
6. Register the model.
7. Create a container image.
8. Create an Azure Container Instance (ACI) service.
9. Test the ACI service.

In addition this notebook showcases the following features:

- **Blocking** certain pipelines
- Specifying **target metrics** to indicate stopping criteria
- Handling **missing data** in the input

Setup

As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
import logging
from matplotlib import pyplot as plt
import pandas as pd
import os
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.automl.core.featurization import FeaturizationConfig
from azureml.core.dataset import Dataset
from azureml.train.automl import AutoMLConfig
from azureml.explain.model._internal.explanation_client import ExplanationClient
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.11.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
Accessing the Azure ML workspace requires authentication with Azure.

The default authentication is interactive authentication using the default tenant. Executing the `ws = Workspace.from_config()` line in the cell below will prompt for authentication the first time that it is run.

If you have multiple Azure tenants, you can specify the tenant by replacing the `ws = Workspace.from_config()` line in the cell below with the following:

```
from azureml.core.authentication import InteractiveLoginAuthentication
auth = InteractiveLoginAuthentication(tenant_id = 'mytenantid')
ws = Workspace.from_config(auth = auth)
```

If you need to run in an environment where interactive login is not possible, you can use Service Principal authentication by replacing the `ws = Workspace.from_config()` line in the cell below with the following:

```
from azureml.core.authentication import ServicePrincipalAuthentication
auth = ServicePrincipalAuthentication('mytenantid', 'myappid', 'mypassword')
ws = Workspace.from_config(auth = auth)
```

For more details, see [aka.ms/aml-notebook-auth](http://aka.ms/aml-notebook-auth)
###Code
ws = Workspace.from_config()
# choose a name for experiment
experiment_name = 'automl-classification-bmarketing-all'
experiment=Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', None)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
Create or Attach existing AmlCompute

You will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.

As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read this article on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
cpu_cluster_name = "cpu-cluster-4"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Data

Load Data

Leverage azure compute to load the bank marketing dataset as a Tabular Dataset into the dataset variable.

Training Data
###Code
data = pd.read_csv("https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv")
data.head()
# Add missing values in 75% of the lines.
import numpy as np
missing_rate = 0.75
n_missing_samples = int(np.floor(data.shape[0] * missing_rate))
missing_samples = np.hstack((np.zeros(data.shape[0] - n_missing_samples, dtype=bool), np.ones(n_missing_samples, dtype=bool)))
rng = np.random.RandomState(0)
rng.shuffle(missing_samples)
missing_features = rng.randint(0, data.shape[1], n_missing_samples)
data.values[np.where(missing_samples)[0], missing_features] = np.nan
if not os.path.isdir('data'):
os.mkdir('data')
# Save the train data to a csv to be uploaded to the datastore
pd.DataFrame(data).to_csv("data/train_data.csv", index=False)
ds = ws.get_default_datastore()
ds.upload(src_dir='./data', target_path='bankmarketing', overwrite=True, show_progress=True)
# Upload the training data as a tabular dataset for access during training on remote compute
train_data = Dataset.Tabular.from_delimited_files(path=ds.path('bankmarketing/train_data.csv'))
label = "y"
###Output
_____no_output_____
###Markdown
Validation Data
###Code
validation_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_validate.csv"
validation_dataset = Dataset.Tabular.from_delimited_files(validation_data)
###Output
_____no_output_____
###Markdown
Test Data
###Code
test_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_test.csv"
test_dataset = Dataset.Tabular.from_delimited_files(test_data)
###Output
_____no_output_____
###Markdown
Train

Instantiate an `AutoMLConfig` object. This defines the settings and data used to run the experiment.

|Property|Description|
|-|-|
|**task**|classification or regression or forecasting|
|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracy, AUC_weighted, average_precision_score_weighted, norm_macro_recall, precision_score_weighted|
|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|
|**blocked_models**|*List* of *strings* indicating machine learning algorithms for AutoML to avoid in this run. Allowed values for **Classification**: LogisticRegression, SGD, MultinomialNaiveBayes, BernoulliNaiveBayes, SVM, LinearSVM, KNN, DecisionTree, RandomForest, ExtremeRandomTrees, LightGBM, GradientBoosting, TensorFlowDNN, TensorFlowLinearClassifier. Allowed values for **Regression**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN. Allowed values for **Forecasting**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN, Arima, Prophet|
|**allowed_models**|*List* of *strings* indicating machine learning algorithms for AutoML to use in this run. The same values listed above for **blocked_models** are allowed for **allowed_models**.|
|**experiment_exit_score**|Value indicating the target for *primary_metric*. Once the target is surpassed the run terminates.|
|**experiment_timeout_hours**|Maximum amount of time in hours that all iterations combined can take before the experiment terminates.|
|**enable_early_stopping**|Flag to enable early termination if the score is not improving in the short term.|
|**featurization**|'auto' / 'off' Indicator for whether the featurization step should be done automatically or not. Note: If the input data is sparse, featurization cannot be turned on.|
|**n_cross_validations**|Number of cross validation splits.|
|**training_data**|Input dataset, containing both features and label column.|
|**label_column_name**|The name of the label column.|

**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)
###Code
automl_settings = {
"experiment_timeout_hours" : 0.3,
"enable_early_stopping" : True,
"iteration_timeout_minutes": 5,
"max_concurrent_iterations": 4,
"max_cores_per_iteration": -1,
#"n_cross_validations": 2,
"primary_metric": 'AUC_weighted',
"featurization": 'auto',
"verbosity": logging.INFO,
}
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
compute_target=compute_target,
experiment_exit_score = 0.9984,
blocked_models = ['KNN','LinearSVM'],
enable_onnx_compatible_models=True,
training_data = train_data,
label_column_name = label,
validation_data = validation_dataset,
**automl_settings
)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while. With `show_output=True`, validation errors and the current status are shown and the execution is synchronous.
###Code
remote_run = experiment.submit(automl_config, show_output = False)
remote_run
###Output
_____no_output_____
###Markdown
Run the following cell to access previous runs. Uncomment the cell below and update the run_id.
###Code
#from azureml.train.automl.run import AutoMLRun
#remote_run = AutoMLRun(experiment=experiment, run_id='<run_ID_goes_here>')
#remote_run
# Wait for the remote run to complete
remote_run.wait_for_completion()
best_run_customized, fitted_model_customized = remote_run.get_output()
###Output
_____no_output_____
###Markdown
Transparency

View the updated featurization summary.
###Code
custom_featurizer = fitted_model_customized.named_steps['datatransformer']
df = custom_featurizer.get_featurization_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Set `is_user_friendly=False` to get a more detailed summary for the transforms being applied.
###Code
df = custom_featurizer.get_featurization_summary(is_user_friendly=False)
pd.DataFrame(data=df)
df = custom_featurizer.get_stats_feature_type_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Results
###Code
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
###Output
_____no_output_____
###Markdown
Retrieve the Best Model's explanation

Retrieve the explanation from the best_run, which includes explanations for engineered features and raw features. Make sure that the run for generating explanations for the best model is completed.
###Code
# Wait for the best model explanation run to complete
from azureml.core.run import Run
model_explainability_run_id = remote_run.get_properties().get('ModelExplainRunId')
print(model_explainability_run_id)
if model_explainability_run_id is not None:
model_explainability_run = Run(experiment=experiment, run_id=model_explainability_run_id)
model_explainability_run.wait_for_completion()
# Get the best run object
best_run, fitted_model = remote_run.get_output()
###Output
_____no_output_____
###Markdown
Download engineered feature importance from artifact store

You can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=False)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Download raw feature importance from artifact store

You can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
raw_explanations = client.download_model_explanation(raw=True)
exp_data = raw_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Retrieve the Best ONNX Model

Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.

Set the parameter `return_onnx_model=True` to retrieve the best ONNX model, instead of the Python model.
###Code
best_run, onnx_mdl = remote_run.get_output(return_onnx_model=True)
###Output
_____no_output_____
###Markdown
Save the best ONNX model
###Code
from azureml.automl.runtime.onnx_convert import OnnxConverter
onnx_fl_path = "./best_model.onnx"
OnnxConverter.save_onnx_model(onnx_mdl, onnx_fl_path)
###Output
_____no_output_____
###Markdown
Predict with the ONNX model, using onnxruntime package
###Code
import sys
import json
from azureml.automl.core.onnx_convert import OnnxConvertConstants
from azureml.train.automl import constants
if sys.version_info < OnnxConvertConstants.OnnxIncompatiblePythonVersion:
python_version_compatible = True
else:
python_version_compatible = False
import onnxruntime
from azureml.automl.runtime.onnx_convert import OnnxInferenceHelper
def get_onnx_res(run):
res_path = 'onnx_resource.json'
run.download_file(name=constants.MODEL_RESOURCE_PATH_ONNX, output_file_path=res_path)
with open(res_path) as f:
onnx_res = json.load(f)
return onnx_res
if python_version_compatible:
test_df = test_dataset.to_pandas_dataframe()
mdl_bytes = onnx_mdl.SerializeToString()
onnx_res = get_onnx_res(best_run)
onnxrt_helper = OnnxInferenceHelper(mdl_bytes, onnx_res)
pred_onnx, pred_prob_onnx = onnxrt_helper.predict(test_df)
print(pred_onnx)
print(pred_prob_onnx)
else:
print('Please use Python version 3.6 or 3.7 to run the inference helper.')
###Output
_____no_output_____
###Markdown
Deploy

Retrieve the Best Model

Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.

Widget for Monitoring Runs

The widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.

**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details.
###Code
best_run, fitted_model = remote_run.get_output()
model_name = best_run.properties['model_name']
script_file_name = 'inference/score.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', 'inference/score.py')
###Output
_____no_output_____
###Markdown
Register the Fitted Model for Deployment

If neither `metric` nor `iteration` are specified in the `register_model` call, the iteration with the best primary metric is registered.
###Code
description = 'AutoML Model trained on bank marketing data to predict if a client will subscribe to a term deposit'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id) # This will be written to the script file later in the notebook.
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
from azureml.core.environment import Environment
inference_config = InferenceConfig(entry_script=script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 1,
tags = {'area': "bmData", 'type': "automl_classification"},
description = 'sample service for Automl Classification')
aci_service_name = 'automl-sample-bankmarketing-all'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
###Output
_____no_output_____
###Markdown
Delete a Web Service

Deletes the specified web service.
###Code
#aci_service.delete()
###Output
_____no_output_____
###Markdown
Get Logs from a Deployed Web Service

Gets logs from a deployed web service.
###Code
#aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Test

Now that the model is trained, run the test data through the trained model to get the predicted values.
###Code
# Load the bank marketing datasets.
from numpy import array
X_test = test_dataset.drop_columns(columns=['y'])
y_test = test_dataset.keep_columns(columns=['y'], validate=True)
test_dataset.take(5).to_pandas_dataframe()
X_test = X_test.to_pandas_dataframe()
y_test = y_test.to_pandas_dataframe()
y_pred = fitted_model.predict(X_test)
actual = array(y_test)
actual = actual[:,0]
print(y_pred.shape, " ", actual.shape)
###Output
_____no_output_____
###Markdown
Calculate metrics for the prediction

Now visualize the data on a scatter plot comparing the truth (actual) values with the predicted values from the trained model.
###Code
%matplotlib notebook
test_pred = plt.scatter(actual, y_pred, color='b')
test_test = plt.scatter(actual, actual, color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing.png) Automated Machine Learning_**Classification with Deployment using a Bank Marketing Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Deploy](Deploy)1. [Test](Test)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the UCI Bank Marketing dataset to showcase how you can use AutoML for a classification problem and deploy it to an Azure Container Instance (ACI). The classification goal is to predict if the client will subscribe to a term deposit with the bank.If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. Please find the ONNX related documentations [here](https://github.com/onnx/onnx).In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model using local compute with ONNX compatible config on.4. Explore the results, featurization transparency options and save the ONNX model5. Inference with the ONNX model.6. Register the model.7. Create a container image.8. Create an Azure Container Instance (ACI) service.9. Test the ACI service.In addition this notebook showcases the following features- **Blacklisting** certain pipelines- Specifying **target metrics** to indicate stopping criteria- Handling **missing data** in the input SetupAs part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
import logging
from matplotlib import pyplot as plt
import pandas as pd
import os
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.automl.core.featurization import FeaturizationConfig
from azureml.core.dataset import Dataset
from azureml.train.automl import AutoMLConfig
from azureml.explain.model._internal.explanation_client import ExplanationClient
ws = Workspace.from_config()
# choose a name for experiment
experiment_name = 'automl-classification-bmarketing-all'
experiment=Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
Create or Attach existing AmlComputeYou will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read this article on the default limits and how to request more quota.
###Code
from azureml.core.compute import AmlCompute
from azureml.core.compute import ComputeTarget
# Choose a name for your cluster.
amlcompute_cluster_name = "cpu-cluster-4"
found = False
# Check if this compute target already exists in the workspace.
cts = ws.compute_targets
if amlcompute_cluster_name in cts and cts[amlcompute_cluster_name].type == 'AmlCompute':
found = True
print('Found existing compute target.')
compute_target = cts[amlcompute_cluster_name]
if not found:
print('Creating a new compute target...')
provisioning_config = AmlCompute.provisioning_configuration(vm_size = "STANDARD_D2_V2", # for GPU, use "STANDARD_NC6"
#vm_priority = 'lowpriority', # optional
max_nodes = 6)
# Create the cluster.
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, provisioning_config)
print('Checking cluster status...')
# Can poll for a minimum number of nodes and for a specific timeout.
# If no min_node_count is provided, it will use the scale settings for the cluster.
compute_target.wait_for_completion(show_output = True, min_node_count = None, timeout_in_minutes = 20)
# For a more detailed view of current AmlCompute status, use get_status().
###Output
_____no_output_____
###Markdown
Data Load DataLeverage azure compute to load the bank marketing dataset as a Tabular Dataset into the dataset variable. Training Data
###Code
data = pd.read_csv("https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv")
data.head()
# Add missing values in 75% of the lines.
import numpy as np
missing_rate = 0.75
n_missing_samples = int(np.floor(data.shape[0] * missing_rate))
missing_samples = np.hstack((np.zeros(data.shape[0] - n_missing_samples, dtype=np.bool), np.ones(n_missing_samples, dtype=np.bool)))
rng = np.random.RandomState(0)
rng.shuffle(missing_samples)
missing_features = rng.randint(0, data.shape[1], n_missing_samples)
data.values[np.where(missing_samples)[0], missing_features] = np.nan
if not os.path.isdir('data'):
os.mkdir('data')
# Save the train data to a csv to be uploaded to the datastore
pd.DataFrame(data).to_csv("data/train_data.csv", index=False)
ds = ws.get_default_datastore()
ds.upload(src_dir='./data', target_path='bankmarketing', overwrite=True, show_progress=True)
# Upload the training data as a tabular dataset for access during training on remote compute
train_data = Dataset.Tabular.from_delimited_files(path=ds.path('bankmarketing/train_data.csv'))
label = "y"
###Output
_____no_output_____
###Markdown
Validation Data
###Code
validation_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_validate.csv"
validation_dataset = Dataset.Tabular.from_delimited_files(validation_data)
###Output
_____no_output_____
###Markdown
Test Data
###Code
test_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_test.csv"
test_dataset = Dataset.Tabular.from_delimited_files(test_data)
###Output
_____no_output_____
###Markdown
TrainInstantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression or forecasting||**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracyAUC_weightedaverage_precision_score_weightednorm_macro_recallprecision_score_weighted||**iteration_timeout_minutes**|Time limit in minutes for each iteration.||**blacklist_models** | *List* of *strings* indicating machine learning algorithms for AutoML to avoid in this run. Allowed values for **Classification**LogisticRegressionSGDMultinomialNaiveBayesBernoulliNaiveBayesSVMLinearSVMKNNDecisionTreeRandomForestExtremeRandomTreesLightGBMGradientBoostingTensorFlowDNNTensorFlowLinearClassifierAllowed values for **Regression**ElasticNetGradientBoostingDecisionTreeKNNLassoLarsSGDRandomForestExtremeRandomTreesLightGBMTensorFlowLinearRegressorTensorFlowDNNAllowed values for **Forecasting**ElasticNetGradientBoostingDecisionTreeKNNLassoLarsSGDRandomForestExtremeRandomTreesLightGBMTensorFlowLinearRegressorTensorFlowDNNArimaProphet|| **whitelist_models** | *List* of *strings* indicating machine learning algorithms for AutoML to use in this run. Same values listed above for **blacklist_models** allowed for **whitelist_models**.||**experiment_exit_score**| Value indicating the target for *primary_metric*. Once the target is surpassed the run terminates.||**experiment_timeout_hours**| Maximum amount of time in hours that all iterations combined can take before the experiment terminates.||**enable_early_stopping**| Flag to enble early termination if the score is not improving in the short term.||**featurization**| 'auto' / 'off' Indicator for whether featurization step should be done automatically or not. Note: If the input data is sparse, featurization cannot be turned on.||**n_cross_validations**|Number of cross validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.||**model_explainability**|Indicate to explain each trained pipeline or not.|**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric)
###Code
automl_settings = {
"experiment_timeout_hours" : 0.3,
"enable_early_stopping" : True,
"iteration_timeout_minutes": 5,
"max_concurrent_iterations": 4,
"max_cores_per_iteration": -1,
#"n_cross_validations": 2,
"primary_metric": 'AUC_weighted',
"featurization": 'auto',
"verbosity": logging.INFO,
}
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
compute_target=compute_target,
experiment_exit_score = 0.9984,
blacklist_models = ['KNN','LinearSVM'],
enable_onnx_compatible_models=True,
training_data = train_data,
label_column_name = label,
validation_data = validation_dataset,
model_explainability=True,
**automl_settings
)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.
###Code
remote_run = experiment.submit(automl_config, show_output = False)
remote_run
###Output
_____no_output_____
###Markdown
Run the following cell to access previous runs. Uncomment the cell below and update the run_id.
###Code
#from azureml.train.automl.run import AutoMLRun
#experiment_name = 'automl-classification-bmarketing'
#experiment = Experiment(ws, experiment_name)
#remote_run = AutoMLRun(experiment=experiment, run_id='<run_ID_goes_here>')
#remote_run
# Wait for the remote run to complete
remote_run.wait_for_completion()
best_run_customized, fitted_model_customized = remote_run.get_output()
###Output
_____no_output_____
###Markdown
Transparency

View the updated featurization summary.
###Code
custom_featurizer = fitted_model_customized.named_steps['datatransformer']
df = custom_featurizer.get_featurization_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Set `is_user_friendly=False` to get a more detailed summary for the transforms being applied.
###Code
df = custom_featurizer.get_featurization_summary(is_user_friendly=False)
pd.DataFrame(data=df)
df = custom_featurizer.get_stats_feature_type_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Results
###Code
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
###Output
_____no_output_____
###Markdown
Retrieve the Best Model's explanation

Retrieve the explanation from the best_run, which includes explanations for engineered features and raw features. Make sure that the run for generating explanations for the best model is completed.
###Code
# Wait for the best model explanation run to complete
from azureml.train.automl.run import AutoMLRun
model_explainability_run_id = remote_run.get_properties().get('ModelExplainRunId')
print(model_explainability_run_id)
if model_explainability_run_id is not None:
model_explainability_run = AutoMLRun(experiment=experiment, run_id=model_explainability_run_id)
model_explainability_run.wait_for_completion()
# Get the best run object
best_run, fitted_model = remote_run.get_output()
###Output
_____no_output_____
###Markdown
Download engineered feature importance from artifact store

You can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=False)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Download raw feature importance from artifact store

You can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=True)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Retrieve the Best ONNX Model

Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.

Set the parameter `return_onnx_model=True` to retrieve the best ONNX model, instead of the Python model.
###Code
best_run, onnx_mdl = remote_run.get_output(return_onnx_model=True)
###Output
_____no_output_____
###Markdown
Save the best ONNX model
###Code
from azureml.automl.runtime.onnx_convert import OnnxConverter
onnx_fl_path = "./best_model.onnx"
OnnxConverter.save_onnx_model(onnx_mdl, onnx_fl_path)
###Output
_____no_output_____
###Markdown
Predict with the ONNX model, using the onnxruntime package
###Code
import sys
import json
from azureml.automl.core.onnx_convert import OnnxConvertConstants
from azureml.train.automl import constants
if sys.version_info < OnnxConvertConstants.OnnxIncompatiblePythonVersion:
python_version_compatible = True
else:
python_version_compatible = False
import onnxruntime
from azureml.automl.runtime.onnx_convert import OnnxInferenceHelper
def get_onnx_res(run):
res_path = 'onnx_resource.json'
run.download_file(name=constants.MODEL_RESOURCE_PATH_ONNX, output_file_path=res_path)
with open(res_path) as f:
onnx_res = json.load(f)
return onnx_res
if python_version_compatible:
test_df = test_dataset.to_pandas_dataframe()
mdl_bytes = onnx_mdl.SerializeToString()
onnx_res = get_onnx_res(best_run)
onnxrt_helper = OnnxInferenceHelper(mdl_bytes, onnx_res)
pred_onnx, pred_prob_onnx = onnxrt_helper.predict(test_df)
print(pred_onnx)
print(pred_prob_onnx)
else:
print('Please use Python version 3.6 or 3.7 to run the inference helper.')
###Output
_____no_output_____
###Markdown
Deploy

Retrieve the Best Model

Below we select the best pipeline from our iterations. The `get_output` method on `automl_classifier` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.

Widget for Monitoring Runs

The widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.

**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details.
###Code
best_run, fitted_model = remote_run.get_output()
model_name = best_run.properties['model_name']
script_file_name = 'inference/score.py'
conda_env_file_name = 'inference/env.yml'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', 'inference/score.py')
best_run.download_file('outputs/conda_env_v_1_0_0.yml', 'inference/env.yml')
###Output
_____no_output_____
###Markdown
Register the Fitted Model for Deployment

If neither `metric` nor `iteration` is specified in the `register_model` call, the iteration with the best primary metric is registered.
###Code
description = 'AutoML Model trained on bank marketing data to predict if a client will subscribe to a term deposit'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id) # This will be written to the script file later in the notebook.
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
from azureml.core.environment import Environment
myenv = Environment.from_conda_specification(name="myenv", file_path=conda_env_file_name)
inference_config = InferenceConfig(entry_script=script_file_name, environment=myenv)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 1,
tags = {'area': "bmData", 'type': "automl_classification"},
description = 'sample service for Automl Classification')
aci_service_name = 'automl-sample-bankmarketing-all'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
###Output
_____no_output_____
###Markdown
Delete a Web Service

Deletes the specified web service.
###Code
#aci_service.delete()
###Output
_____no_output_____
###Markdown
Get Logs from a Deployed Web Service

Gets logs from a deployed web service.
###Code
#aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Test

Now that the model is trained, run the test data through the trained model to get the predicted values.
###Code
# Load the bank marketing datasets.
from numpy import array
X_test = test_dataset.drop_columns(columns=['y'])
y_test = test_dataset.keep_columns(columns=['y'], validate=True)
test_dataset.take(5).to_pandas_dataframe()
X_test = X_test.to_pandas_dataframe()
y_test = y_test.to_pandas_dataframe()
y_pred = fitted_model.predict(X_test)
actual = array(y_test)
actual = actual[:,0]
print(y_pred.shape, " ", actual.shape)
###Output
_____no_output_____
###Markdown
Calculate metrics for the prediction

Now visualize the results on a scatter plot, comparing the truth (actual) values with the predicted values from the trained model.
###Code
%matplotlib notebook
test_pred = plt.scatter(actual, y_pred, color='b')
test_test = plt.scatter(actual, actual, color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.

Licensed under the MIT License.

![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing.png)

Automated Machine Learning

_**Classification with Deployment using a Bank Marketing Dataset**_

Contents
1. [Introduction](#Introduction)
1. [Setup](#Setup)
1. [Train](#Train)
1. [Results](#Results)
1. [Deploy](#Deploy)
1. [Test](#Test)
1. [Acknowledgements](#Acknowledgements)

Introduction

In this example we use the UCI Bank Marketing dataset to showcase how you can use AutoML for a classification problem and deploy the result to an Azure Container Instance (ACI). The classification goal is to predict whether the client will subscribe to a term deposit with the bank.

If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first, if you haven't already, to establish your connection to the AzureML Workspace. You can find the ONNX-related documentation [here](https://github.com/onnx/onnx).

In this notebook you will learn how to:
1. Create an experiment using an existing workspace.
2. Configure AutoML using `AutoMLConfig`.
3. Train the model using local compute with the ONNX-compatible configuration enabled.
4. Explore the results and featurization transparency options, and save the ONNX model.
5. Run inference with the ONNX model.
6. Register the model.
7. Create a container image.
8. Create an Azure Container Instance (ACI) service.
9. Test the ACI service.

In addition, this notebook showcases the following features:
- **Blacklisting** certain pipelines
- Specifying **target metrics** to indicate stopping criteria
- Handling **missing data** in the input

Setup

As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
import logging
from matplotlib import pyplot as plt
import pandas as pd
import os
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.automl.core.featurization import FeaturizationConfig
from azureml.core.dataset import Dataset
from azureml.train.automl import AutoMLConfig
from azureml.explain.model._internal.explanation_client import ExplanationClient
ws = Workspace.from_config()
# choose a name for experiment
experiment_name = 'automl-classification-bmarketing-all'
experiment=Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', None)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
Create or Attach existing AmlCompute

You will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace, this code will skip the creation process.

As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read this article on the default limits and how to request more quota.
###Code
from azureml.core.compute import AmlCompute
from azureml.core.compute import ComputeTarget
# Choose a name for your cluster.
amlcompute_cluster_name = "cpu-cluster-4"
found = False
# Check if this compute target already exists in the workspace.
cts = ws.compute_targets
if amlcompute_cluster_name in cts and cts[amlcompute_cluster_name].type == 'AmlCompute':
found = True
print('Found existing compute target.')
compute_target = cts[amlcompute_cluster_name]
if not found:
print('Creating a new compute target...')
provisioning_config = AmlCompute.provisioning_configuration(vm_size = "STANDARD_D2_V2", # for GPU, use "STANDARD_NC6"
#vm_priority = 'lowpriority', # optional
max_nodes = 6)
# Create the cluster.
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, provisioning_config)
print('Checking cluster status...')
# Can poll for a minimum number of nodes and for a specific timeout.
# If no min_node_count is provided, it will use the scale settings for the cluster.
compute_target.wait_for_completion(show_output = True, min_node_count = None, timeout_in_minutes = 20)
# For a more detailed view of current AmlCompute status, use get_status().
###Output
_____no_output_____
###Markdown
Data

Load Data

Leverage Azure compute to load the bank marketing dataset as a Tabular Dataset into the dataset variable.

Training Data
###Code
data = pd.read_csv("https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv")
data.head()
# Add missing values in 75% of the lines.
import numpy as np
missing_rate = 0.75
n_missing_samples = int(np.floor(data.shape[0] * missing_rate))
missing_samples = np.hstack((np.zeros(data.shape[0] - n_missing_samples, dtype=bool), np.ones(n_missing_samples, dtype=bool)))
rng = np.random.RandomState(0)
rng.shuffle(missing_samples)
missing_features = rng.randint(0, data.shape[1], n_missing_samples)
data.values[np.where(missing_samples)[0], missing_features] = np.nan
if not os.path.isdir('data'):
os.mkdir('data')
# Save the train data to a csv to be uploaded to the datastore
pd.DataFrame(data).to_csv("data/train_data.csv", index=False)
ds = ws.get_default_datastore()
ds.upload(src_dir='./data', target_path='bankmarketing', overwrite=True, show_progress=True)
# Upload the training data as a tabular dataset for access during training on remote compute
train_data = Dataset.Tabular.from_delimited_files(path=ds.path('bankmarketing/train_data.csv'))
label = "y"
###Output
_____no_output_____
###Markdown
Validation Data
###Code
validation_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_validate.csv"
validation_dataset = Dataset.Tabular.from_delimited_files(validation_data)
###Output
_____no_output_____
###Markdown
Test Data
###Code
test_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_test.csv"
test_dataset = Dataset.Tabular.from_delimited_files(test_data)
###Output
_____no_output_____
###Markdown
Train

Instantiate an `AutoMLConfig` object. This defines the settings and data used to run the experiment.

|Property|Description|
|-|-|
|**task**|classification or regression or forecasting|
|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracy, AUC_weighted, average_precision_score_weighted, norm_macro_recall, precision_score_weighted|
|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|
|**blacklist_models** or **whitelist_models**|*List* of *strings* indicating machine learning algorithms for AutoML to avoid in this run. Allowed values for **Classification**: LogisticRegression, SGD, MultinomialNaiveBayes, BernoulliNaiveBayes, SVM, LinearSVM, KNN, DecisionTree, RandomForest, ExtremeRandomTrees, LightGBM, GradientBoosting, TensorFlowDNN, TensorFlowLinearClassifier. Allowed values for **Regression**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN. Allowed values for **Forecasting**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN, Arima, Prophet|
|**experiment_exit_score**|Value indicating the target for *primary_metric*. Once the target is surpassed the run terminates.|
|**experiment_timeout_minutes**|Maximum amount of time in minutes that all iterations combined can take before the experiment terminates.|
|**enable_early_stopping**|Flag to enable early termination if the score is not improving in the short term.|
|**featurization**|'auto' / 'off'. Indicator for whether the featurization step should be done automatically or not. Note: if the input data is sparse, featurization cannot be turned on.|
|**n_cross_validations**|Number of cross validation splits.|
|**training_data**|Input dataset, containing both features and label column.|
|**label_column_name**|The name of the label column.|
|**model_explainability**|Indicate to explain each trained pipeline or not.|

**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)
###Code
automl_settings = {
"experiment_timeout_minutes" : 20,
"enable_early_stopping" : True,
"iteration_timeout_minutes": 5,
"max_concurrent_iterations": 4,
"max_cores_per_iteration": -1,
#"n_cross_validations": 2,
"primary_metric": 'AUC_weighted',
"featurization": 'auto',
"verbosity": logging.INFO,
}
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
compute_target=compute_target,
experiment_exit_score = 0.9984,
blacklist_models = ['KNN','LinearSVM'],
enable_onnx_compatible_models=True,
training_data = train_data,
label_column_name = label,
validation_data = validation_dataset,
model_explainability=True,
**automl_settings
)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.
###Code
remote_run = experiment.submit(automl_config, show_output = False)
remote_run
###Output
_____no_output_____
###Markdown
Run the following cell to access previous runs. Uncomment the cell below and update the run_id.
###Code
#from azureml.train.automl.run import AutoMLRun
#experiment_name = 'automl-classification-bmarketing'
#experiment = Experiment(ws, experiment_name)
#remote_run = AutoMLRun(experiment=experiment, run_id='<run_ID_goes_here>')
#remote_run
# Wait for the remote run to complete
remote_run.wait_for_completion()
best_run_customized, fitted_model_customized = remote_run.get_output()
###Output
_____no_output_____
###Markdown
Transparency

View the updated featurization summary.
###Code
custom_featurizer = fitted_model_customized.named_steps['datatransformer']
df = custom_featurizer.get_featurization_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Set `is_user_friendly=False` to get a more detailed summary for the transforms being applied.
###Code
df = custom_featurizer.get_featurization_summary(is_user_friendly=False)
pd.DataFrame(data=df)
df = custom_featurizer.get_stats_feature_type_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Results
###Code
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
###Output
_____no_output_____
###Markdown
Retrieve the Best Model's explanation

Retrieve the explanation from the best_run, which includes explanations for engineered features and raw features. Make sure that the run for generating explanations for the best model is completed.
###Code
# Wait for the best model explanation run to complete
from azureml.train.automl.run import AutoMLRun
model_explainability_run_id = remote_run.get_properties().get('ModelExplainRunId')
print(model_explainability_run_id)
if model_explainability_run_id is not None:
model_explainability_run = AutoMLRun(experiment=experiment, run_id=model_explainability_run_id)
model_explainability_run.wait_for_completion()
# Get the best run object
best_run, fitted_model = remote_run.get_output()
###Output
_____no_output_____
###Markdown
Download engineered feature importance from artifact store

You can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=False)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Download raw feature importance from artifact store

You can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=True)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Retrieve the Best ONNX Model

Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.

Set the parameter `return_onnx_model=True` to retrieve the best ONNX model, instead of the Python model.
###Code
best_run, onnx_mdl = remote_run.get_output(return_onnx_model=True)
###Output
_____no_output_____
###Markdown
Save the best ONNX model
###Code
from azureml.automl.core.onnx_convert import OnnxConverter
onnx_fl_path = "./best_model.onnx"
OnnxConverter.save_onnx_model(onnx_mdl, onnx_fl_path)
###Output
_____no_output_____
###Markdown
Predict with the ONNX model, using the onnxruntime package

Note: the code below installs onnxruntime==0.4.0 if it is not already installed. Newer versions of onnxruntime have compatibility issues.
###Code
test_df = test_dataset.to_pandas_dataframe()
import sys
import json
from azureml.automl.core.onnx_convert import OnnxConvertConstants
from azureml.train.automl import constants
if sys.version_info < OnnxConvertConstants.OnnxIncompatiblePythonVersion:
python_version_compatible = True
else:
python_version_compatible = False
onnxrt_present = False
try:
import onnxruntime
from azureml.automl.core.onnx_convert import OnnxInferenceHelper
from onnxruntime import __version__ as ORT_VER
if ORT_VER == '0.4.0':
onnxrt_present = True
except ImportError:
onnxrt_present = False
# Install the onnxruntime if the version 0.4.0 is not installed.
if not onnxrt_present:
print("Installing the onnxruntime version 0.4.0.")
!{sys.executable} -m pip install --user --force-reinstall onnxruntime==0.4.0
onnxrt_present = True
def get_onnx_res(run):
res_path = 'onnx_resource.json'
run.download_file(name=constants.MODEL_RESOURCE_PATH_ONNX, output_file_path=res_path)
with open(res_path) as f:
onnx_res = json.load(f)
return onnx_res
if onnxrt_present and python_version_compatible:
mdl_bytes = onnx_mdl.SerializeToString()
onnx_res = get_onnx_res(best_run)
onnxrt_helper = OnnxInferenceHelper(mdl_bytes, onnx_res)
pred_onnx, pred_prob_onnx = onnxrt_helper.predict(test_df)
print(pred_onnx)
print(pred_prob_onnx)
else:
if not python_version_compatible:
print('Please use Python version 3.6 or 3.7 to run the inference helper.')
if not onnxrt_present:
print('Please install the onnxruntime package to do the prediction with ONNX model.')
###Output
_____no_output_____
###Markdown
Deploy

Retrieve the Best Model

Below we select the best pipeline from our iterations. The `get_output` method on `automl_classifier` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.

Widget for Monitoring Runs

The widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.

**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details.
###Code
best_run, fitted_model = remote_run.get_output()
import os
import shutil
script_folder = os.path.join(os.getcwd(), 'inference')
os.makedirs(script_folder, exist_ok=True)
model_name = best_run.properties['model_name']
script_file_name = 'inference/score.py'
conda_env_file_name = 'inference/env.yml'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', 'inference/score.py')
best_run.download_file('outputs/conda_env_v_1_0_0.yml', 'inference/env.yml')
###Output
_____no_output_____
###Markdown
Register the Fitted Model for Deployment

If neither `metric` nor `iteration` is specified in the `register_model` call, the iteration with the best primary metric is registered.
###Code
description = 'AutoML Model trained on bank marketing data to predict if a client will subscribe to a term deposit'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id) # This will be written to the script file later in the notebook.
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
inference_config = InferenceConfig(runtime = "python",
entry_script = script_file_name,
conda_file = conda_env_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 1,
tags = {'area': "bmData", 'type': "automl_classification"},
description = 'sample service for Automl Classification')
aci_service_name = 'automl-sample-bankmarketing-all'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
###Output
_____no_output_____
###Markdown
Delete a Web Service

Deletes the specified web service.
###Code
#aci_service.delete()
###Output
_____no_output_____
###Markdown
Get Logs from a Deployed Web Service

Gets logs from a deployed web service.
###Code
#aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Test

Now that the model is trained, run the test data through the trained model to get the predicted values.
###Code
# Load the bank marketing datasets.
from numpy import array
X_test = test_dataset.drop_columns(columns=['y'])
y_test = test_dataset.keep_columns(columns=['y'], validate=True)
test_dataset.take(5).to_pandas_dataframe()
X_test = X_test.to_pandas_dataframe()
y_test = y_test.to_pandas_dataframe()
y_pred = fitted_model.predict(X_test)
actual = array(y_test)
actual = actual[:,0]
print(y_pred.shape, " ", actual.shape)
###Output
_____no_output_____
###Markdown
Calculate metrics for the prediction

Now visualize the results on a scatter plot, comparing the truth (actual) values with the predicted values from the trained model.
###Code
%matplotlib notebook
test_pred = plt.scatter(actual, y_pred, color='b')
test_test = plt.scatter(actual, actual, color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.

Licensed under the MIT License.

![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing.png)

Automated Machine Learning

_**Classification with Deployment using a Bank Marketing Dataset**_

Contents
1. [Introduction](#Introduction)
1. [Setup](#Setup)
1. [Train](#Train)
1. [Results](#Results)
1. [Deploy](#Deploy)
1. [Test](#Test)
1. [Acknowledgements](#Acknowledgements)

Introduction

In this example we use the UCI Bank Marketing dataset to showcase how you can use AutoML for a classification problem and deploy the result to an Azure Container Instance (ACI). The classification goal is to predict whether the client will subscribe to a term deposit with the bank.

If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first, if you haven't already, to establish your connection to the AzureML Workspace. You can find the ONNX-related documentation [here](https://github.com/onnx/onnx).

In this notebook you will learn how to:
1. Create an experiment using an existing workspace.
2. Configure AutoML using `AutoMLConfig`.
3. Train the model using local compute with the ONNX-compatible configuration enabled.
4. Explore the results and featurization transparency options, and save the ONNX model.
5. Run inference with the ONNX model.
6. Register the model.
7. Create a container image.
8. Create an Azure Container Instance (ACI) service.
9. Test the ACI service.

In addition, this notebook showcases the following features:
- **Blacklisting** certain pipelines
- Specifying **target metrics** to indicate stopping criteria
- Handling **missing data** in the input

Setup

As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
import logging
from matplotlib import pyplot as plt
import pandas as pd
import os
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.automl.core.featurization import FeaturizationConfig
from azureml.core.dataset import Dataset
from azureml.train.automl import AutoMLConfig
from azureml.explain.model._internal.explanation_client import ExplanationClient
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.8.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
Accessing the Azure ML workspace requires authentication with Azure.

The default authentication is interactive authentication using the default tenant. Executing the `ws = Workspace.from_config()` line in the cell below will prompt for authentication the first time that it is run.

If you have multiple Azure tenants, you can specify the tenant by replacing the `ws = Workspace.from_config()` line in the cell below with the following:

```
from azureml.core.authentication import InteractiveLoginAuthentication
auth = InteractiveLoginAuthentication(tenant_id = 'mytenantid')
ws = Workspace.from_config(auth = auth)
```

If you need to run in an environment where interactive login is not possible, you can use Service Principal authentication by replacing the `ws = Workspace.from_config()` line in the cell below with the following:

```
from azureml.core.authentication import ServicePrincipalAuthentication
auth = ServicePrincipalAuthentication('mytenantid', 'myappid', 'mypassword')
ws = Workspace.from_config(auth = auth)
```

For more details, see [aka.ms/aml-notebook-auth](http://aka.ms/aml-notebook-auth)
###Code
ws = Workspace.from_config()
# choose a name for experiment
experiment_name = 'automl-classification-bmarketing-all'
experiment=Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', None)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
Create or Attach existing AmlCompute

You will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace, this code will skip the creation process.

As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read this article on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
cpu_cluster_name = "cpu-cluster-4"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Data

Load Data

Leverage Azure compute to load the bank marketing dataset as a Tabular Dataset into the dataset variable.

Training Data
###Code
data = pd.read_csv("https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv")
data.head()
# Add missing values in 75% of the lines.
import numpy as np
missing_rate = 0.75
n_missing_samples = int(np.floor(data.shape[0] * missing_rate))
missing_samples = np.hstack((np.zeros(data.shape[0] - n_missing_samples, dtype=bool), np.ones(n_missing_samples, dtype=bool)))
rng = np.random.RandomState(0)
rng.shuffle(missing_samples)
missing_features = rng.randint(0, data.shape[1], n_missing_samples)
data.values[np.where(missing_samples)[0], missing_features] = np.nan
if not os.path.isdir('data'):
os.mkdir('data')
# Save the train data to a csv to be uploaded to the datastore
pd.DataFrame(data).to_csv("data/train_data.csv", index=False)
ds = ws.get_default_datastore()
ds.upload(src_dir='./data', target_path='bankmarketing', overwrite=True, show_progress=True)
# Upload the training data as a tabular dataset for access during training on remote compute
train_data = Dataset.Tabular.from_delimited_files(path=ds.path('bankmarketing/train_data.csv'))
label = "y"
###Output
_____no_output_____
###Markdown
Validation Data
###Code
validation_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_validate.csv"
validation_dataset = Dataset.Tabular.from_delimited_files(validation_data)
###Output
_____no_output_____
###Markdown
Test Data
###Code
test_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_test.csv"
test_dataset = Dataset.Tabular.from_delimited_files(test_data)
###Output
_____no_output_____
###Markdown
Train

Instantiate an `AutoMLConfig` object. This defines the settings and data used to run the experiment.

|Property|Description|
|-|-|
|**task**|classification or regression or forecasting|
|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracy, AUC_weighted, average_precision_score_weighted, norm_macro_recall, precision_score_weighted|
|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|
|**blacklist_models**|*List* of *strings* indicating machine learning algorithms for AutoML to avoid in this run. Allowed values for **Classification**: LogisticRegression, SGD, MultinomialNaiveBayes, BernoulliNaiveBayes, SVM, LinearSVM, KNN, DecisionTree, RandomForest, ExtremeRandomTrees, LightGBM, GradientBoosting, TensorFlowDNN, TensorFlowLinearClassifier. Allowed values for **Regression**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN. Allowed values for **Forecasting**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN, Arima, Prophet|
|**whitelist_models**|*List* of *strings* indicating machine learning algorithms for AutoML to use in this run. Same values listed above for **blacklist_models** allowed for **whitelist_models**.|
|**experiment_exit_score**|Value indicating the target for *primary_metric*. Once the target is surpassed the run terminates.|
|**experiment_timeout_hours**|Maximum amount of time in hours that all iterations combined can take before the experiment terminates.|
|**enable_early_stopping**|Flag to enable early termination if the score is not improving in the short term.|
|**featurization**|'auto' / 'off'. Indicator for whether the featurization step should be done automatically or not. Note: if the input data is sparse, featurization cannot be turned on.|
|**n_cross_validations**|Number of cross validation splits.|
|**training_data**|Input dataset, containing both features and label column.|
|**label_column_name**|The name of the label column.|

**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)
###Code
automl_settings = {
"experiment_timeout_hours" : 0.3,
"enable_early_stopping" : True,
"iteration_timeout_minutes": 5,
"max_concurrent_iterations": 4,
"max_cores_per_iteration": -1,
#"n_cross_validations": 2,
"primary_metric": 'AUC_weighted',
"featurization": 'auto',
"verbosity": logging.INFO,
}
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
compute_target=compute_target,
experiment_exit_score = 0.9984,
blacklist_models = ['KNN','LinearSVM'],
enable_onnx_compatible_models=True,
training_data = train_data,
label_column_name = label,
validation_data = validation_dataset,
**automl_settings
)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.
###Code
remote_run = experiment.submit(automl_config, show_output = False)
remote_run
###Output
_____no_output_____
###Markdown
Run the following cell to access previous runs. Uncomment the cell below and update the run_id.
###Code
#from azureml.train.automl.run import AutoMLRun
#remote_run = AutoMLRun(experiment=experiment, run_id='<run_ID_goes_here>')
#remote_run
# Wait for the remote run to complete
remote_run.wait_for_completion()
best_run_customized, fitted_model_customized = remote_run.get_output()
###Output
_____no_output_____
###Markdown
Transparency

View the updated featurization summary.
###Code
custom_featurizer = fitted_model_customized.named_steps['datatransformer']
df = custom_featurizer.get_featurization_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Set `is_user_friendly=False` to get a more detailed summary for the transforms being applied.
###Code
df = custom_featurizer.get_featurization_summary(is_user_friendly=False)
pd.DataFrame(data=df)
df = custom_featurizer.get_stats_feature_type_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Results
###Code
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
###Output
_____no_output_____
###Markdown
Retrieve the Best Model's explanation

Retrieve the explanation from the best_run, which includes explanations for engineered features and raw features. Make sure that the run for generating explanations for the best model is completed.
###Code
# Wait for the best model explanation run to complete
from azureml.core.run import Run
model_explainability_run_id = remote_run.get_properties().get('ModelExplainRunId')
print(model_explainability_run_id)
if model_explainability_run_id is not None:
model_explainability_run = Run(experiment=experiment, run_id=model_explainability_run_id)
model_explainability_run.wait_for_completion()
# Get the best run object
best_run, fitted_model = remote_run.get_output()
###Output
_____no_output_____
###Markdown
Download engineered feature importance from artifact store

You can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=False)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Download raw feature importance from artifact store

You can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=True)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Retrieve the Best ONNX Model

Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.

Set the parameter `return_onnx_model=True` to retrieve the best ONNX model, instead of the Python model.
###Code
best_run, onnx_mdl = remote_run.get_output(return_onnx_model=True)
###Output
_____no_output_____
###Markdown
Save the best ONNX model
###Code
from azureml.automl.runtime.onnx_convert import OnnxConverter
onnx_fl_path = "./best_model.onnx"
OnnxConverter.save_onnx_model(onnx_mdl, onnx_fl_path)
###Output
_____no_output_____
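###Markdown
An optional sanity check on the saved file (a sketch added here, not part of the original notebook; it assumes the `onnx` package is available in the environment):
###Code
import onnx
# Load the saved model, validate its structure, and list its input names.
model_proto = onnx.load(onnx_fl_path)
onnx.checker.check_model(model_proto)
print([inp.name for inp in model_proto.graph.input])
###Output
_____no_output_____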
###Markdown
Predict with the ONNX model, using the onnxruntime package
###Code
import sys
import json
from azureml.automl.core.onnx_convert import OnnxConvertConstants
from azureml.train.automl import constants
if sys.version_info < OnnxConvertConstants.OnnxIncompatiblePythonVersion:
python_version_compatible = True
else:
python_version_compatible = False
import onnxruntime
from azureml.automl.runtime.onnx_convert import OnnxInferenceHelper
def get_onnx_res(run):
res_path = 'onnx_resource.json'
run.download_file(name=constants.MODEL_RESOURCE_PATH_ONNX, output_file_path=res_path)
with open(res_path) as f:
onnx_res = json.load(f)
return onnx_res
if python_version_compatible:
test_df = test_dataset.to_pandas_dataframe()
mdl_bytes = onnx_mdl.SerializeToString()
onnx_res = get_onnx_res(best_run)
onnxrt_helper = OnnxInferenceHelper(mdl_bytes, onnx_res)
pred_onnx, pred_prob_onnx = onnxrt_helper.predict(test_df)
print(pred_onnx)
print(pred_prob_onnx)
else:
print('Please use Python version 3.6 or 3.7 to run the inference helper.')
###Output
_____no_output_____
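###Markdown
For a look at the raw graph signature, the saved model's inputs can also be inspected with onnxruntime directly (a sketch, not part of the original notebook; for actual scoring prefer `OnnxInferenceHelper`, which handles AutoML's per-column inputs):
###Code
import onnxruntime
# Open an inference session on the saved model and list its expected inputs.
# Depending on your onnxruntime version you may need to pass
# providers=['CPUExecutionProvider'] explicitly.
session = onnxruntime.InferenceSession(onnx_fl_path)
print([(inp.name, inp.type, inp.shape) for inp in session.get_inputs()])
###Output
_____no_output_____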
###Markdown
Deploy

Retrieve the Best Model

Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.

Widget for Monitoring Runs

The widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.

**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details.
###Code
best_run, fitted_model = remote_run.get_output()
model_name = best_run.properties['model_name']
script_file_name = 'inference/score.py'
conda_env_file_name = 'inference/env.yml'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', 'inference/score.py')
best_run.download_file('outputs/conda_env_v_1_0_0.yml', 'inference/env.yml')
###Output
_____no_output_____
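###Markdown
As noted above, `get_output` also accepts a specific metric or iteration. A minimal, commented-out sketch (the metric name and iteration number are placeholders, not values from this run):
###Code
# Best run/model ranked by a different logged metric:
# best_run_acc, fitted_model_acc = remote_run.get_output(metric = 'accuracy')
# Run/model from a particular iteration:
# run_3, model_3 = remote_run.get_output(iteration = 3)
###Output
_____no_output_____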
###Markdown
Register the Fitted Model for Deployment

If neither `metric` nor `iteration` is specified in the `register_model` call, the iteration with the best primary metric is registered.
###Code
description = 'AutoML Model trained on bank marketing data to predict if a client will subscribe to a term deposit'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id) # This will be written to the script file later in the notebook.
###Output
_____no_output_____
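###Markdown
Per the note above, `register_model` also accepts `metric` or `iteration`. Illustrative, commented-out variants (the metric name and iteration number are placeholders):
###Code
# model = remote_run.register_model(model_name = model_name, metric = 'accuracy')
# model = remote_run.register_model(model_name = model_name, iteration = 5)
###Output
_____no_output_____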
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
from azureml.core.environment import Environment
myenv = Environment.from_conda_specification(name="myenv", file_path=conda_env_file_name)
inference_config = InferenceConfig(entry_script=script_file_name, environment=myenv)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 1,
tags = {'area': "bmData", 'type': "automl_classification"},
description = 'sample service for Automl Classification')
aci_service_name = 'automl-sample-bankmarketing-all'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
###Output
_____no_output_____
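###Markdown
To exercise the deployed endpoint, a minimal sketch follows (not part of the original notebook). It assumes the scoring script generated by AutoML accepts a JSON payload of the form `{"data": [...]}`; check the generated score.py for the exact schema expected by your model.
###Code
import json
# Score two rows from the test dataset against the deployed ACI service.
sample_df = test_dataset.drop_columns(columns=['y']).take(2).to_pandas_dataframe()
payload = json.dumps({"data": sample_df.to_dict(orient='records')})  # assumed schema
print(aci_service.run(payload))
###Output
_____no_output_____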
###Markdown
Delete a Web Service

Deletes the specified web service.
###Code
#aci_service.delete()
###Output
_____no_output_____
###Markdown
Get Logs from a Deployed Web Service

Gets logs from a deployed web service.
###Code
#aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Test

Now that the model is trained, run the test data through the trained model to get the predicted values.
###Code
# Load the bank marketing datasets.
from numpy import array
X_test = test_dataset.drop_columns(columns=['y'])
y_test = test_dataset.keep_columns(columns=['y'], validate=True)
test_dataset.take(5).to_pandas_dataframe()
X_test = X_test.to_pandas_dataframe()
y_test = y_test.to_pandas_dataframe()
y_pred = fitted_model.predict(X_test)
actual = array(y_test)
actual = actual[:,0]
print(y_pred.shape, " ", actual.shape)
###Output
_____no_output_____
###Markdown
Calculate metrics for the prediction

Now visualize the results on a scatter plot, comparing the truth (actual) values with the predicted values from the trained model.
###Code
%matplotlib notebook
test_pred = plt.scatter(actual, y_pred, color='b')
test_test = plt.scatter(actual, actual, color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
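###Markdown
Since this is a classification task, standard classification metrics are more informative than the scatter plot above. A short sketch using scikit-learn (assumed to be available in the environment; not part of the original notebook):
###Code
from sklearn.metrics import accuracy_score, confusion_matrix
# Compare the predicted labels with the actual labels from the test set.
print("Accuracy:", accuracy_score(actual, y_pred))
print("Confusion matrix:\n", confusion_matrix(actual, y_pred))
###Output
_____no_output_____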
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.

Licensed under the MIT License.

![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing.png)

Automated Machine Learning

_**Classification with Deployment using a Bank Marketing Dataset**_

Contents
1. [Introduction](#Introduction)
1. [Setup](#Setup)
1. [Train](#Train)
1. [Results](#Results)
1. [Deploy](#Deploy)
1. [Test](#Test)
1. [Acknowledgements](#Acknowledgements)

Introduction

In this example we use the UCI Bank Marketing dataset to showcase how you can use AutoML for a classification problem and deploy the result to an Azure Container Instance (ACI). The classification goal is to predict whether the client will subscribe to a term deposit with the bank.

If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first, if you haven't already, to establish your connection to the AzureML Workspace. You can find the ONNX-related documentation [here](https://github.com/onnx/onnx).

In this notebook you will learn how to:
1. Create an experiment using an existing workspace.
2. Configure AutoML using `AutoMLConfig`.
3. Train the model using local compute with the ONNX-compatible configuration enabled.
4. Explore the results and featurization transparency options, and save the ONNX model.
5. Run inference with the ONNX model.
6. Register the model.
7. Create a container image.
8. Create an Azure Container Instance (ACI) service.
9. Test the ACI service.

In addition, this notebook showcases the following features:
- **Blocking** certain pipelines
- Specifying **target metrics** to indicate stopping criteria
- Handling **missing data** in the input

Setup

As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
import logging
from matplotlib import pyplot as plt
import pandas as pd
import os
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.automl.core.featurization import FeaturizationConfig
from azureml.core.dataset import Dataset
from azureml.train.automl import AutoMLConfig
from azureml.interpret import ExplanationClient
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.25.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
Accessing the Azure ML workspace requires authentication with Azure.

The default authentication is interactive authentication using the default tenant. Executing the `ws = Workspace.from_config()` line in the cell below will prompt for authentication the first time that it is run.

If you have multiple Azure tenants, you can specify the tenant by replacing the `ws = Workspace.from_config()` line in the cell below with the following:

```
from azureml.core.authentication import InteractiveLoginAuthentication
auth = InteractiveLoginAuthentication(tenant_id = 'mytenantid')
ws = Workspace.from_config(auth = auth)
```

If you need to run in an environment where interactive login is not possible, you can use Service Principal authentication by replacing the `ws = Workspace.from_config()` line in the cell below with the following:

```
from azureml.core.authentication import ServicePrincipalAuthentication
auth = ServicePrincipalAuthentication('mytenantid', 'myappid', 'mypassword')
ws = Workspace.from_config(auth = auth)
```

For more details, see [aka.ms/aml-notebook-auth](http://aka.ms/aml-notebook-auth)
###Code
ws = Workspace.from_config()
# choose a name for experiment
experiment_name = 'automl-classification-bmarketing-all'
experiment=Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', None)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
Create or Attach existing AmlCompute

You will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace, this code will skip the creation process.

As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
cpu_cluster_name = "cpu-cluster-4"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
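###Markdown
For a more detailed view of the cluster once it exists, `get_status` can be used (a small sketch, not part of the original notebook):
###Code
# Show node counts, VM size, and provisioning state for the cluster.
print(compute_target.get_status().serialize())
###Output
_____no_output_____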
###Markdown
Data

Load Data

Leverage Azure compute to load the bank marketing dataset as a Tabular Dataset into the dataset variable.

Training Data
###Code
data = pd.read_csv("https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv")
data.head()
# Add missing values in 75% of the lines.
import numpy as np
missing_rate = 0.75
n_missing_samples = int(np.floor(data.shape[0] * missing_rate))
missing_samples = np.hstack((np.zeros(data.shape[0] - n_missing_samples, dtype=bool), np.ones(n_missing_samples, dtype=bool)))
rng = np.random.RandomState(0)
rng.shuffle(missing_samples)
missing_features = rng.randint(0, data.shape[1], n_missing_samples)
data.values[np.where(missing_samples)[0], missing_features] = np.nan
if not os.path.isdir('data'):
os.mkdir('data')
# Save the train data to a csv to be uploaded to the datastore
pd.DataFrame(data).to_csv("data/train_data.csv", index=False)
ds = ws.get_default_datastore()
ds.upload(src_dir='./data', target_path='bankmarketing', overwrite=True, show_progress=True)
# Upload the training data as a tabular dataset for access during training on remote compute
train_data = Dataset.Tabular.from_delimited_files(path=ds.path('bankmarketing/train_data.csv'))
label = "y"
###Output
_____no_output_____
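###Markdown
As a quick sanity check (a sketch added here, not part of the original notebook), we can count how many rows actually contain a missing value after the assignment above; it should be close to the 75% missing rate.
###Code
# Fraction of rows with at least one NaN (expected to be roughly 0.75).
row_has_nan = data.isna().any(axis=1)
print(f"Rows with at least one missing value: {row_has_nan.mean():.2%}")
###Output
_____no_output_____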
###Markdown
Validation Data
###Code
validation_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_validate.csv"
validation_dataset = Dataset.Tabular.from_delimited_files(validation_data)
###Output
_____no_output_____
###Markdown
Test Data
###Code
test_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_test.csv"
test_dataset = Dataset.Tabular.from_delimited_files(test_data)
###Output
_____no_output_____
###Markdown
Train

Instantiate an `AutoMLConfig` object. This defines the settings and data used to run the experiment.

|Property|Description|
|-|-|
|**task**|classification or regression or forecasting|
|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracy, AUC_weighted, average_precision_score_weighted, norm_macro_recall, precision_score_weighted|
|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|
|**blocked_models**|*List* of *strings* indicating machine learning algorithms for AutoML to avoid in this run. Allowed values for **Classification**: LogisticRegression, SGD, MultinomialNaiveBayes, BernoulliNaiveBayes, SVM, LinearSVM, KNN, DecisionTree, RandomForest, ExtremeRandomTrees, LightGBM, GradientBoosting, TensorFlowDNN, TensorFlowLinearClassifier. Allowed values for **Regression**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN. Allowed values for **Forecasting**: ElasticNet, GradientBoosting, DecisionTree, KNN, LassoLars, SGD, RandomForest, ExtremeRandomTrees, LightGBM, TensorFlowLinearRegressor, TensorFlowDNN, Arima, Prophet|
|**allowed_models**|*List* of *strings* indicating machine learning algorithms for AutoML to use in this run. Same values listed above for **blocked_models** allowed for **allowed_models**.|
|**experiment_exit_score**|Value indicating the target for *primary_metric*. Once the target is surpassed the run terminates.|
|**experiment_timeout_hours**|Maximum amount of time in hours that all iterations combined can take before the experiment terminates.|
|**enable_early_stopping**|Flag to enable early termination if the score is not improving in the short term.|
|**featurization**|'auto' / 'off'. Indicator for whether the featurization step should be done automatically or not. Note: if the input data is sparse, featurization cannot be turned on.|
|**n_cross_validations**|Number of cross validation splits.|
|**training_data**|Input dataset, containing both features and label column.|
|**label_column_name**|The name of the label column.|

**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)
###Code
automl_settings = {
"experiment_timeout_hours" : 0.3,
"enable_early_stopping" : True,
"iteration_timeout_minutes": 5,
"max_concurrent_iterations": 4,
"max_cores_per_iteration": -1,
#"n_cross_validations": 2,
"primary_metric": 'AUC_weighted',
"featurization": 'auto',
"verbosity": logging.INFO,
}
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
compute_target=compute_target,
experiment_exit_score = 0.9984,
blocked_models = ['KNN','LinearSVM'],
enable_onnx_compatible_models=True,
training_data = train_data,
label_column_name = label,
validation_data = validation_dataset,
**automl_settings
)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations, this can run for a while. When setting `show_output=True`, validation errors and current status will be shown in the notebook and the execution will be synchronous.
###Code
remote_run = experiment.submit(automl_config, show_output = False)
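# As noted above, show_output=True would stream current status and any
# validation errors into the notebook and make the call block until done:
# remote_run = experiment.submit(automl_config, show_output = True)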
###Output
_____no_output_____
###Markdown
Run the following cell to access previous runs. Uncomment the cell below and update the run_id.
###Code
#from azureml.train.automl.run import AutoMLRun
#remote_run = AutoMLRun(experiment=experiment, run_id='<run_ID_goes_here>')
#remote_run
# Wait for the remote run to complete
remote_run.wait_for_completion()
best_run_customized, fitted_model_customized = remote_run.get_output()
###Output
_____no_output_____
###Markdown
Transparency
View the updated featurization summary.
###Code
custom_featurizer = fitted_model_customized.named_steps['datatransformer']
df = custom_featurizer.get_featurization_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Set `is_user_friendly=False` to get a more detailed summary for the transforms being applied.
###Code
df = custom_featurizer.get_featurization_summary(is_user_friendly=False)
pd.DataFrame(data=df)
df = custom_featurizer.get_stats_feature_type_summary()
pd.DataFrame(data=df)
###Output
_____no_output_____
###Markdown
Results
###Code
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
###Output
_____no_output_____
###Markdown
Retrieve the Best Model's explanation
Retrieve the explanation from the best_run, which includes explanations for engineered features and raw features. Make sure that the run generating explanations for the best model has completed.
###Code
# Wait for the best model explanation run to complete
from azureml.core.run import Run
model_explainability_run_id = remote_run.id + "_" + "ModelExplain"
print(model_explainability_run_id)
model_explainability_run = Run(experiment=experiment, run_id=model_explainability_run_id)
model_explainability_run.wait_for_completion()
# Get the best run object
best_run, fitted_model = remote_run.get_output()
###Output
_____no_output_____
###Markdown
Download engineered feature importance from the artifact store
You can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=False)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Download raw feature importance from the artifact store
You can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run.
###Code
client = ExplanationClient.from_run(best_run)
raw_explanations = client.download_model_explanation(raw=True)
exp_data = raw_explanations.get_feature_importance_dict()
exp_data
###Output
_____no_output_____
###Markdown
Retrieve the Best ONNX Model
Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. Set the parameter `return_onnx_model=True` to retrieve the best ONNX model instead of the Python model.
###Code
best_run, onnx_mdl = remote_run.get_output(return_onnx_model=True)
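# A sketch of the other `get_output` overloads mentioned above, commented out
# so that only the ONNX model is retrieved here (parameter names follow the
# markdown description):
# best_run_auc, model_auc = remote_run.get_output(metric='AUC_weighted')
# run_3, model_3 = remote_run.get_output(iteration=3)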
###Output
_____no_output_____
###Markdown
Save the best ONNX model
###Code
from azureml.automl.runtime.onnx_convert import OnnxConverter
onnx_fl_path = "./best_model.onnx"
OnnxConverter.save_onnx_model(onnx_mdl, onnx_fl_path)
###Output
_____no_output_____
###Markdown
Predict with the ONNX model, using the onnxruntime package
###Code
import sys
import json
from azureml.automl.core.onnx_convert import OnnxConvertConstants
from azureml.train.automl import constants
if sys.version_info < OnnxConvertConstants.OnnxIncompatiblePythonVersion:
python_version_compatible = True
else:
python_version_compatible = False
import onnxruntime
from azureml.automl.runtime.onnx_convert import OnnxInferenceHelper
def get_onnx_res(run):
res_path = 'onnx_resource.json'
run.download_file(name=constants.MODEL_RESOURCE_PATH_ONNX, output_file_path=res_path)
with open(res_path) as f:
onnx_res = json.load(f)
return onnx_res
if python_version_compatible:
test_df = test_dataset.to_pandas_dataframe()
mdl_bytes = onnx_mdl.SerializeToString()
onnx_res = get_onnx_res(best_run)
onnxrt_helper = OnnxInferenceHelper(mdl_bytes, onnx_res)
pred_onnx, pred_prob_onnx = onnxrt_helper.predict(test_df)
print(pred_onnx)
print(pred_prob_onnx)
else:
print('Please use Python version 3.6 or 3.7 to run the inference helper.')
###Output
_____no_output_____
###Markdown
Deploy

Retrieve the Best Model
Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.

Widget for Monitoring Runs
The widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.

**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details.
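As a minimal sketch (assuming `azureml.widgets` is installed, as in the Results section above), the monitoring widget can be displayed like this:
###Code
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
###Output
_____no_output_____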
###Code
best_run, fitted_model = remote_run.get_output()
model_name = best_run.properties['model_name']
# Make sure the local 'inference' folder exists before downloading into it
if not os.path.isdir('inference'):
os.mkdir('inference')
script_file_name = 'inference/score.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', 'inference/score.py')
###Output
_____no_output_____
###Markdown
Register the Fitted Model for Deployment
If neither `metric` nor `iteration` is specified in the `register_model` call, the iteration with the best primary metric is registered.
###Code
description = 'AutoML Model trained on bank marketing data to predict if a client will subscribe to a term deposit'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id) # This will be written to the script file later in the notebook.
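# A sketch of registering by an explicit metric or iteration instead of the
# default best run (parameter names per the markdown above, commented out so
# the registration above is the one that counts):
# model = remote_run.register_model(model_name = model_name, metric = 'AUC_weighted')
# model = remote_run.register_model(model_name = model_name, iteration = 3)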
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
from azureml.core.environment import Environment
inference_config = InferenceConfig(entry_script=script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 1,
tags = {'area': "bmData", 'type': "automl_classification"},
description = 'sample service for Automl Classification')
aci_service_name = 'automl-sample-bankmarketing-all'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
###Output
_____no_output_____
###Markdown
Get Logs from a Deployed Web Service
Gets logs from a deployed web service.
###Code
#aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Test
Now that the model is trained, run the test data through the trained model to get the predicted values. This calls the ACI web service to do the prediction.

Note that the JSON passed to the ACI web service is an array of rows of data. Each row should either be an array of values in the same order that was used for training, or a dictionary where the keys are the same as the column names used for training. The example below uses dictionary rows.
###Code
# Load the bank marketing datasets.
from numpy import array
X_test = test_dataset.drop_columns(columns=['y'])
y_test = test_dataset.keep_columns(columns=['y'], validate=True)
test_dataset.take(5).to_pandas_dataframe()
X_test = X_test.to_pandas_dataframe()
y_test = y_test.to_pandas_dataframe()
import json
import requests
X_test_json = X_test.to_json(orient='records')
data = "{\"data\": " + X_test_json +"}"
headers = {'Content-Type': 'application/json'}
resp = requests.post(aci_service.scoring_uri, data, headers=headers)
y_pred = json.loads(json.loads(resp.text))['result']
actual = array(y_test)
actual = actual[:,0]
print(len(y_pred), " ", len(actual))
###Output
_____no_output_____
###Markdown
Calculate metrics for the prediction
Now visualize the data as a confusion matrix that compares the predicted values against the actual values.
###Code
%matplotlib notebook
from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt
import numpy as np
import itertools
cf = confusion_matrix(actual, y_pred)
plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest')
plt.colorbar()
plt.title('Confusion Matrix')
plt.xlabel('Predicted')
plt.ylabel('Actual')
class_labels = ['no','yes']
tick_marks = np.arange(len(class_labels))
plt.xticks(tick_marks,class_labels)
plt.yticks([-0.5,0,1,1.5],['','no','yes',''])
# plotting text value inside cells
thresh = cf.max() / 2.
for i,j in itertools.product(range(cf.shape[0]),range(cf.shape[1])):
plt.text(j,i,format(cf[i,j],'d'),horizontalalignment='center',color='white' if cf[i,j] >thresh else 'black')
plt.show()
###Output
_____no_output_____
###Markdown
Delete a Web Service
Deletes the specified web service.
###Code
aci_service.delete()
###Output
_____no_output_____ |
9Categorical_embeddings didnt work.ipynb | ###Markdown
Notebook to try out categorical embeddings: https://github.com/Shivanandroy/CategoricalEmbedder/blob/master/example_notebook/Example%20Notebook.ipynb

Conclusion: didn't work better than one-hot encoding (0.118 vs 0.115). Maybe try again later.
###Code
import pandas as pd
import numpy as np
import pickle
import altair as alt
%load_ext autoreload
%autoreload 2
from utils.sklearn_custom_steps import DFSimpleImputer, DFOneHotEncoder,DFMinMaxScaler,DFColumnTransformer,DFOutlierExtractor,DFOutlierExtractor,DFStandardScaler,DFRobustScaler,DFSmartImputer, DFUnSkewer, DFPowerTransformer
from utils.sklearn_custom_steps import get_pipeline
from utils.model_hyperparameters import models
from sklearn.linear_model import LinearRegression, Ridge, RidgeCV, Lasso, LassoCV, ElasticNet,SGDRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn import svm
from sklearn.neural_network import MLPRegressor
from sklearn.kernel_ridge import KernelRidge
import lightgbm as lgb
import xgboost as xgb
from sklearn.model_selection import cross_validate
from catboost import CatBoostRegressor
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import MinMaxScaler,StandardScaler,RobustScaler
from utils.model_hyperparameters import AutoCatBoostRegressor
def load(filename):
with open(filename, "rb") as f:
return pickle.load(f)
def save(model, filename='bestmodel.pickle'):
with open('output/'+filename, 'wb') as handle:
pickle.dump(model, handle, protocol=pickle.HIGHEST_PROTOCOL)
def save_feature_selection(cols, filename='feat_selection.pickle'):
with open('output/'+filename, 'wb') as handle:
pickle.dump(cols, handle, protocol=pickle.HIGHEST_PROTOCOL)
def submit(model, filename='submission.csv'):
pred = model.predict(final_test)
final_test['SalePrice'] = np.exp(pred)
final_test[['Id','SalePrice']].to_csv('output/'+filename, index=False)
import categorical_embedder as ce
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
f = open("output/engineered_datasets.pickle","rb")
train_x, train_y, final_test, _,_,_ = pickle.load(f)
# impute manually: add an explicit 'Unknown' level for categoricals, fill numeric NaNs with 0
for col in train_x.columns:
if train_x[col].dtype.name == 'category':
train_x[col] = train_x[col].cat.add_categories('Unknown')
train_x[col].fillna('Unknown',inplace=True)
else:
train_x[col].fillna(0,inplace=True)
# select categorical columns for embedding
from sklearn.compose import make_column_selector
num_x = make_column_selector(dtype_include=np.number)(train_x)
cat_x = make_column_selector(dtype_exclude=np.number)(train_x)
def cross_val_models(to_test,train_x=train_x,**kwargs):
for name in to_test:
print(f"{name.ljust(20)}", end = ': ')
pipe = get_pipeline(models[name].model, **models[name].preprocess, **kwargs)
test_pipeline(pipe, train_x = train_x)
def test_model(model,train_x = train_x,param=None):
if not param: param = {}
pipe = get_pipeline(model,**param)
return test_pipeline(pipe, train_x=train_x)
def test_pipeline(pipe,train_x = train_x):
# print(train_x.shape)
num_fold = 5
scores = cross_validate(pipe, train_x, train_y, scoring='neg_root_mean_squared_error', cv=num_fold, return_train_score=True)
print(f"test {-1 * sum(scores['test_score'])/num_fold:.7f}, train {-1 * sum(scores['train_score'])/num_fold:.7f}")
return pipe
# ce.get_embedding_info identifies the categorical variables, # of unique values and embedding size and returns a dictionary
embedding_info = ce.get_embedding_info(train_x,categorical_variables=cat_x)
# ce.get_label_encoded_data integer encodes the categorical variables and prepares it to feed it to neural network
X_encoded,encoders = ce.get_label_encoded_data(train_x,categorical_variables=cat_x)
# ce.get_embeddings trains NN, extracts embeddings and return a dictionary containing the embeddings
# did some fiddling around with the package: needed to change metric to MAE, and add 2 layers
# changed initializer to glorot instead of normal
embeddings = ce.get_embeddings(X_encoded,train_y, categorical_embedding_info=embedding_info,
is_classification=False, epochs=200,batch_size=256)
# reference: baseline score with the one-hot pipeline
test_model(Lasso(alpha=0.0005304432735934807))
# now test with embeddings for categorical values instead of onehot
data = ce.fit_transform(train_x, embeddings=embeddings, encoders=encoders, drop_categorical_vars=True)
test_model(Lasso(alpha=0.0005304432735934807),train_x=data)
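# A sketch for inspecting the learned embeddings (assuming, per the package
# description above, that `embeddings` is a dict keyed by categorical column,
# each value being an (n_categories x embedding_dim) array):
# embeddings[cat_x[0]].shape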
###Output
_____no_output_____ |
nvr_crawler_scraper.ipynb | ###Markdown
NVR Crawl & ScrapeUseful tutorial: https://www.youtube.com/watch?v=XjNm9bazxn8&index=5&list=WL Crawl and scrape the Navy's ship registry for current and historical ship info: http://www.nvr.navy.mil
###Code
import requests
from bs4 import BeautifulSoup
import json
import datetime
import time
import pandas as pd
# Starting url.
base_url = 'http://www.nvr.navy.mil/'
start_url = 'http://www.nvr.navy.mil/QUICKFIND/HULLLIST_SHIPS.HTML'
def nvr_links(url, target_string):
"""Return a list of the top level links.
Args:
url (str): URL to pull html links from.
target_string (str): String to look for.
Returns:
list: List containing html links.
"""
source_code = requests.get(url)
plain_text = source_code.text
soup = BeautifulSoup(plain_text, 'lxml') # Pull the raw html and store it as text in soup
# Parse soup and look for links that contain '/NVRSHIPS/HULL_'.
links_list = []
for link in soup.find_all('a'):
try:
if target_string in link.get('href'):
links_list.append(link.get('href'))
except TypeError: # links without an href return None
pass
return links_list
top_level_links = nvr_links(start_url, '/NVRSHIPS/HULL_')
top_level_links
def nvm_scraper(url):
"""Return a dictionary of info for the requested URL.
Args:
url (str): URL to scrape.
Returns:
dict: Contains scraped ship info with key = ship name, and values as ship info.
"""
info = {}
source_code = requests.get(url)
plain_text = source_code.text
soup = BeautifulSoup(plain_text, 'lxml')
ship_name = soup.find('td', {'class': 'ShipName'}).get_text()
info[ship_name] = {'class': soup.find('span', {'id': 'MainContent_Repeater1_PrototypeClassNumber_0'}).get_text(),
'uic' : soup.find('span', {'id': 'MainContent_Repeater1_UIC_0'}).get_text(),
'status': soup.find('a', {'id': 'MainContent_Repeater1_HyperLink3_0'}).get_text(),
'fleet': soup.find('span', {'id': 'MainContent_Repeater1_Fleet_0'}).get_text(),
'date_status_change': soup.find('span', {'id': 'MainContent_Repeater1_DateStatusChanged_0'}).get_text(),
'homeport': soup.find('span', {'id': 'MainContent_Repeater1_Homeport_0'}).get_text(),
'maintenance_category': soup.find('span', {'id': 'MainContent_Repeater1_rfc_0'}).get_text(),
'berth': soup.find('span', {'id': 'MainContent_Repeater1_BerthName_0'}).get_text(),
'force': soup.find('a', {'id': 'MainContent_Repeater1_Force_0'}).get_text(),
'builder': soup.find('span', {'id': 'MainContent_Repeater1_builder_0'}).get_text(),
'award_date': soup.find('span', {'id': 'MainContent_Repeater1_AwardDate_0'}).get_text(),
'commission_date': soup.find('span', {'id': 'MainContent_Repeater1_CommissionDate_0'}).get_text(),
'keel_date': soup.find('span', {'id': 'MainContent_Repeater1_KeelDate_0'}).get_text(),
'inactivation_date': soup.find('span', {'id': 'MainContent_Repeater1_InactivationDate_0'}).get_text(),
'launch_date': soup.find('span', {'id': 'MainContent_Repeater1_LaunchDate_0'}).get_text(),
'decommission_date': soup.find('span', {'id': 'MainContent_Repeater1_DecommissionDate_0'}).get_text(),
'age_since_launch': soup.find('span', {'id': 'MainContent_Repeater1_LaunchAge_0'}).get_text(),
'years_commission_decommission': soup.find('span', {'id': 'MainContent_Repeater1_YearsOfService_0'}).get_text(),
'delivery_date': soup.find_all('span', {'id': 'MainContent_Repeater1_DeliveryDate_0'})[0].get_text(),
'in-service_date': soup.find_all('span', {'id': 'MainContent_Repeater1_lblInServiceDate_0'})[0].get_text(),
'age_since_delivery': soup.find_all('span', {'id': 'MainContent_Repeater1_DeliveryDate_0'})[1].get_text(),
'out_of_service_date': soup.find_all('span', {'id': 'MainContent_Repeater1_lblInServiceDate_0'})[1].get_text(),
'stricken_date': soup.find('span', {'id': 'MainContent_Repeater1_StrickenDate_0'}).get_text(),
'overall_length': soup.find('span', {'id': 'MainContent_Repeater1_OverallLength_0'}).get_text(),
'waterline_length': soup.find('span', {'id': 'MainContent_Repeater1_WaterlineLength_0'}).get_text(),
'extreme_beam': soup.find('span', {'id': 'MainContent_Repeater1_ExtremeBeam_0'}).get_text(),
'waterline_beam': soup.find('span', {'id': 'MainContent_Repeater1_WaterlineBeam_0'}).get_text(),
'max_navigational_draft': soup.find('span', {'id': 'MainContent_Repeater1_MaxNavigationalDraft_0'}).get_text(),
'draft_limit': soup.find('span', {'id': 'MainContent_Repeater1_FullLoadDraft_0'}).get_text(),
'light_displacement': soup.find('span', {'id': 'MainContent_Repeater1_LightDisplacement_0'}).get_text(),
'full_displacement': soup.find('span', {'id': 'MainContent_Repeater1_FullDisplacement_0'}).get_text(),
'dead_weight': soup.find('span', {'id': 'MainContent_Repeater1_DeadWeight_0'}).get_text(),
'hull material': soup.find('span', {'id': 'MainContent_Repeater1_hullMaterial_0'}).get_text(),
'num_propellers': soup.find('span', {'id': 'MainContent_Repeater1_NumberOfPropellers_0'}).get_text(),
'num_waterjet': soup.find('span', {'id': 'MainContent_Repeater1_NumberOfWaterJets_0'}).get_text(),
'propulsion_type': soup.find('span', {'id': 'MainContent_Repeater1_PropulsionName_0'}).get_text(),
'officer_accom': soup.find('span', {'id': 'MainContent_Repeater1_NumberOfOfficers_0'}).get_text(),
'enlisted_accom': soup.find('span', {'id': 'MainContent_Repeater1_NumberOfEnlisted_0'}).get_text(),
'custodian': soup.find('span', {'id': 'MainContent_Repeater1_CustodianName_0'}).get_text(),
'planning_yard': soup.find('span', {'id': 'MainContent_Repeater1_PlanningYardName_0'}).get_text(),
'nuclear_planning_yard': soup.find('span', {'id': 'MainContent_Repeater1_NukePlanningYardName_0'}).get_text(),
'ship_program_mgr': soup.find('span', {'id': 'MainContent_Repeater1_shapmName_0'}).get_text(),
'comments': soup.find('span', {'id': 'MainContent_Repeater1_ExternalComments_0'}).get_text(),
'last_updated': soup.find('span', {'id': 'MainContent_Repeater1_ModifiedDate_0'}).get_text(),
}
return info
###Output
_____no_output_____
###Markdown
Test Single Ship
###Code
pd.DataFrame.from_dict(nvm_scraper('http://www.nvr.navy.mil/SHIPDETAILS/SHIPSDETAIL_CVN_76_5300.HTML'), orient='index')
# Main scraping loop.
# Requires top_level_links above.
ship_info = {}
count = 0
start_time = time.time()
for top_link in top_level_links:
# Grab next level links.
clean_link = top_link.replace(r'../', base_url)
second_level_links = nvr_links(clean_link, 'SHIPDETAILS')
# Go to each link.
for second_link in second_level_links:
clean_second_link = second_link.replace('..\\SHIPDETAILS\\', 'http://www.nvr.navy.mil/SHIPDETAILS/')
scraped_info = nvm_scraper(clean_second_link) # dict
ship_info.update(scraped_info) # Merges dict
# Take a break to not hammer the site.
count += 1
if count % 100 == 0:
print(count)
print('{:.2f} min elapsed'.format((time.time() - start_time)/ 60))
time.sleep(1)
print('Completed download of {} records in {:.2f} minutes!'.format(count, (time.time() - start_time)/60))
# Save to csv.
current_datetime = datetime.datetime.now()
output_name = 'ship_list_' + current_datetime.strftime("%Y-%m-%d_%H-%M") + '.csv'
pd.DataFrame.from_dict(ship_info, orient='index').to_csv(output_name, index_label='ship_name')
###Output
_____no_output_____ |
.ipynb_checkpoints/Berg_Violin Concerto-checkpoint.ipynb | ###Markdown
music21: A Toolkit for Computer-Aided Musicology
Some examples to test basic music21 functionalities

This is a Jupyter notebook created by [@musicenfanthen](https://github.com/musicEnfanthen) and [@aWilsonandmore](https://github.com/aWilsonandmore) to work with some basic functionalities of music21 (http://web.mit.edu/music21/). For more information on Jupyter notebooks go to http://jupyter.org/. To execute a block of code in this notebook, click in the cell and press `Shift+Enter`. To get help on any music21 routine, click on it and press `Shift+Tab`.

Imports and setup
To use music21 in this notebook and python, you first have to import all (\*) routines from music21 with the following command. "You'll probably get a few warnings that you're missing some optional modules. That's okay. If you get a warning that "no module named music21" then something probably went wrong above." (Source: http://web.mit.edu/music21/doc/usersGuide/usersGuide_01_installing.html)
###Code
from music21 import *
###Output
_____no_output_____
###Markdown
You probably have to set the correct file path to an application that can open MusicXML files (like MuseScore) manually. To do so, use the `music21.environment` module to set a `musicxmlPath` key. Make sure to change the string `path/to/your/musicXmlApplication` below to the correct file path (keep the quotation marks):
- on Mac e.g.: `/Applications/MuseScore 2.app/Contents/MacOS/mscore`
- or on Windows e.g.: `C:/Program Files (x86)/MuseScore 2/bin/MuseScore.exe`

and uncomment the line (remove the `#` at the beginning of the line).

In the same way, you can also add a path to your lilypond installation, using `env['lilypondPath']`:
- on Mac e.g.: `/Applications/LilyPond.app`
- on Windows e.g.: `C:/Program Files (x86)/LilyPond/usr/bin/lilypond.exe`

Sometimes it's also necessary to adapt the `musescoreDirectPNGPath`. Check that it corresponds to your MuseScore path.
###Code
env = environment.Environment()
# env['musicxmlPath'] = 'path/to/your/musicXmlApplication'
# env['lilypondPath'] = 'path/to/your/lilypond'
# env['musescoreDirectPNGPath'] = 'path/to/your/museScore'
print('Environment settings:')
print('musicXML: ', env['musicxmlPath'])
print('musescore: ', env['musescoreDirectPNGPath'])
print('lilypond: ', env['lilypondPath'])
###Output
_____no_output_____
###Markdown
Let's create some notes
One possible way to create notes in music21 is to use the `Note()`-Object (CAPITAL LETTER) within music21's `note`-subModule (small letter). Let's use the twelve-tone row of Alban Berg's Violin Concerto (1935) as an example. Note how the different octaves and accidentals are declared.
###Code
note1 = note.Note("G3") # declaration of first note
note2 = note.Note("B-3")
note3 = note.Note("D4")
note4 = note.Note("F#4")
note5 = note.Note("A4")
note6 = note.Note("C5")
note7 = note.Note("E5")
note8 = note.Note("G#5")
note9 = note.Note("B5")
note10 = note.Note("C#6")
note11 = note.Note("D#6")
note12 = note.Note("F6")
# combine the twelve notes in a row list
bergRow = [note1, note2, note3, note4, note5, note6, note7, note8, note9, note10, note11, note12]
bergRow # output of bergRow (by just using the name of the variable)
###Output
_____no_output_____
###Markdown
You can use `dir(MODULENAME)` to find out which objects a module contains (http://web.mit.edu/music21/doc/usersGuide/usersGuide_02_notes.html#usersguide-02-notes):
###Code
dir(note)
###Output
_____no_output_____
###Markdown
To iterate over every single item in a list, you can use a "FOR"-loop. Syntax (indentation matters here!):

    for ITEM in LIST:
        do something with ITEM
        ...
###Code
for currentNote in bergRow: # for every note in bergRow list do...
currentNote.duration.type = 'whole' # ... declare duration of a whole note
print(currentNote.duration, currentNote.nameWithOctave) # ... output of note duration and name (using the print command)
###Output
_____no_output_____
###Markdown
Create simple Streams
Streams are fundamental objects in music21. Almost everything (`Score`, `Parts`, `Voices`, `Measures` a.o.) is organized in terms of this abstract data structure. An empty stream is created by using the `Stream()`-Object (CAPITAL LETTER) within music21's `stream`-subModule (small letter).
###Code
bergStream = stream.Stream() # create empty stream
for currentNote in bergRow: # iterate over every note in bergRow and ...
bergStream.append(currentNote) # ... append current note to the stream
bergStream.show('text') # output of the stream (using the .show()-method with option 'text'; compare to output above)
###Output
_____no_output_____
###Markdown
You can get the length of a stream, that is, the number of items in it, with `len(STREAM)`:
###Code
len(bergStream)
###Output
_____no_output_____
###Markdown
... or by just counting the Note elements (here you have to flatten the stream):
###Code
len(bergStream.flat.getElementsByClass(note.Note))
###Output
_____no_output_____
###Markdown
But let's have a look at the stream now. Calling the `.show()`-method without any option will display a graphical visualisation of any music object via the musicxmlApplication defined in the environment at the beginning of this notebook.

If you encounter problems here, make sure you have set the correct environment settings for `musicxmlPath` and `musescoreDirectPNGPath`.
###Code
bergStream.show()
###Output
_____no_output_____
###Markdown
You can also use further options to get the output as `pdf` or `png` via `lilypond`:
###Code
bergStream.show('lily.pdf')
bergStream.show('lily.png')
###Output
_____no_output_____
###Markdown
You could also use music21.tinyNotation, "a simple way of specifying single line melodies" (http://web.mit.edu/music21/doc/moduleReference/moduleTinyNotation.html), to define the notes of the row:
###Code
bergRowTiny = converter.parse("tinyNotation: G1 B- d f# a c' e' g'# b' c''# d''# f''")
bergRowTiny.show()
###Output
_____no_output_____
###Markdown
Our `bergRowTiny` is also a stream, because the tinyNotation converter created it automatically. But be aware of the slightly different structure:
###Code
bergRowTiny.show('text')
###Output
_____no_output_____
###Markdown
Ok nice, but where is the analytical part? music21 provides a large number of built-in analytical tools. To get started right away, let's get the ambitus of the row in the stream using the `.analyze()`-method (http://web.mit.edu/music21/doc/moduleReference/moduleStream.html):
###Code
bergStream.analyze('ambitus')
###Output
_____no_output_____
###Markdown
But always keep a "thinking" eye on the results:
###Code
bergStream.analyze('key')
###Output
_____no_output_____
###Markdown
The twelve-tone row of Berg's Violin Concerto is special because of its two major triads, two minor triads and a part of the whole tone scale. Let's separate these elements into new `Chord()`-Objects (part of `chord`-submodule):
###Code
# declare some variables as Chord()-Objects
triad1 = chord.Chord()
triad2 = chord.Chord()
triad3 = chord.Chord()
triad4 = chord.Chord()
wtScale = chord.Chord()
# iterate over the first three notes in the stream
for currentNote in bergStream[0:3]:
triad1.add(currentNote) # add the currentNote to the Chord()
# ...
for currentNote in bergStream[2:5]:
triad2.add(currentNote)
# ...
for currentNote in bergStream[4:7]:
triad3.add(currentNote)
# ...
for currentNote in bergStream[6:9]:
triad4.add(currentNote)
# iterate over the last four notes in the stream (the whole-tone segment)
for currentNote in bergStream[8:12]:
wtScale.add(currentNote)
# output the 5 chords
triad1.show()
triad2.show()
triad3.show()
triad4.show()
wtScale.show()
###Output
_____no_output_____
###Markdown
You can recombine multiple Chords() within a new Chord()-Object:
###Code
fullChord = chord.Chord([triad1, triad2, triad3, triad4, wtScale])
fullChord.show()
###Output
_____no_output_____
###Markdown
You can also append the chords to a new Stream()-Object:
###Code
# create empty stream
chordsStream = stream.Stream()
# append all the triads to the stream
chordsStream.append(triad1);
chordsStream.append(triad2);
chordsStream.append(triad3);
chordsStream.append(triad4);
chordsStream.append(wtScale);
chordsStream.show()
###Output
_____no_output_____
###Markdown
And you can add some analytical descriptions to the objects using the `.addLyric()`-method and different attributes (e.g. `pitchedCommonName`, `intervalVector`, `primeForm`, `forteClass`) of the chords:
###Code
# iterate over every chord in the stream, and ...
for currentChord in chordsStream:
currentChord.addLyric(currentChord.pitchedCommonName) # ... add triad name
currentChord.addLyric(currentChord.intervalVector) # ... add interval vector
currentChord.addLyric(currentChord.primeForm) # ... add prime form
currentChord.addLyric(currentChord.forteClass) # ... add forte class
chordsStream.show()
###Output
_____no_output_____
###Markdown
Highlighting certain parts (e.g. all Forte classes "3-11A" = minor chord or "3-11B" = major chord) is also possible (http://web.mit.edu/music21/doc/usersGuide/usersGuide_10_examples1.html):
###Code
for currentChord in chordsStream.recurse().getElementsByClass('Chord'):
if currentChord.forteClass == '3-11A':
currentChord.style.color = 'red'
for x in currentChord.derivation.chain():
x.style.color = 'red' # match the highlight colour of the chord itself
if currentChord.forteClass == '3-11B':
currentChord.style.color = 'blue'
for x in currentChord.derivation.chain():
x.style.color = 'blue'
chordsStream.show()
###Output
_____no_output_____
###Markdown
Introducing the music21 serial module
Most (= all?) of the twelve-tone rows by Schönberg, Berg and Webern are already incorporated into a dictionary in music21. You get a sorted overview of the rows available in the dictionary with the following command (http://web.mit.edu/music21/doc/moduleReference/moduleSerial.html):
###Code
sorted(list(serial.historicalDict))
###Output
_____no_output_____
###Markdown
For all these rows, music21 provides not only the pitches of the row, but also some additional meta information. So let's see what we get with 'RowBergViolinConcerto':
###Code
bergRowInternal = serial.getHistoricalRowByName('RowBergViolinConcerto')
print(type(bergRowInternal))
print(bergRowInternal.composer)
print(bergRowInternal.opus)
print(bergRowInternal.title)
print(bergRowInternal.row)
print(bergRowInternal.pitchClasses())
bergRowInternal.noteNames()
###Output
_____no_output_____
###Markdown
Transformations
Using the serial module's `.originalCenteredTransformation()`-method, you can retrieve transformational forms of a ToneRow()-Object. "Admissible transformations are 'T' (transposition), 'I' (inversion), 'R' (retrograde), and 'RI' (retrograde inversion)." (http://web.mit.edu/music21/doc/moduleReference/moduleSerial.html)
###Code
g = bergRowInternal.originalCenteredTransformation('T', 0)
u = bergRowInternal.originalCenteredTransformation('I', 0)
k = bergRowInternal.originalCenteredTransformation('R', 0)
ku = bergRowInternal.originalCenteredTransformation('RI', 0)
print('original:')
g.show()
print('inversion:')
u.show()
print('retrograde:')
k.show()
print('retrograde inversion:')
ku.show()
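# A sketch of a transposed form: the second argument is the transposition
# index, so ('T', 5) should give the row transposed up five semitones.
t5 = bergRowInternal.originalCenteredTransformation('T', 5)
print('transposition by 5 semitones:')
t5.show()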
###Output
_____no_output_____
###Markdown
12-tone matrix
You can also easily get the 12-tone matrix of a twelve-tone row:
###Code
bergMatrix1 = bergRowInternal.matrix()
print(bergMatrix1)
bergMatrix2 = serial.rowToMatrix(bergRowInternal.row)
print(bergMatrix2)
###Output
_____no_output_____
###Markdown
Segmentation
One of the fundamental operations in the analysis of a twelve-tone composition is segmentation. The following example iterates over a set of notes (`bergRowInternal`) and collects every possible segment of a certain length (`segmentationLength`). By default, we iterate over every possible 3-tone segment of the Berg row.
###Code
segmentationList = {}
segmentationLength = 3 # here you can choose the length of the segments (try other values)
rangeEnd = 12 - segmentationLength + 1
# iterate over the entire tone row in rangeEnd steps (one per possible segment start)
for i in range(0,rangeEnd):
print('---')
# create an empty placeholder for the segment as a ToneRow()-Object
# at the position i in the segmentationList
segmentationList[i] = serial.ToneRow()
# fill up the segment with the corresponding notes
for currentNote in bergRowInternal[i:i+segmentationLength]:
segmentationList[i].append(currentNote)
print('Run ', i, ' completed.') # This is for control only.
segmentationList # output of the whole list
###Output
_____no_output_____
###Markdown
Now that we have every possible 3-tone segment of the Berg row, we can check if there are any triads in it:
###Code
# check for triads in the segmentation list
# make sure to use segmentationLength = 3 above
# (for segmentationLength = 4 you will get 7th and other tetrachords)
for i in segmentationList:
print('---')
print('RUN ', i)
outputString = ''
# get a list of the pitches of the current segment
currentPitchList = segmentationList[i].pitches
print(currentPitchList)
#use the pitchList as input for a chord
currentChord = chord.Chord(currentPitchList)
# check for minor triad (with highlighting)
# use forteClass 3-11A instead of 'isMinorTriad()'-method to catch enharmonic equivalents
if currentChord.forteClass == '3-11A':
currentChord.style.color = 'red'
outputString = 'MINOR TRIAD: '
# check for major triad (with highlighting)
# use forteClass 3-11B instead of 'isMajorTriad()'-method to catch enharmonic equivalents
if currentChord.forteClass == '3-11B':
currentChord.style.color = 'blue'
outputString = 'MAJOR TRIAD: '
currentChord.show()
outputString += currentChord.pitchedCommonName
print(outputString)
###Output
_____no_output_____ |
notebooks/6.0-ss-test-models.ipynb | ###Markdown
Creation of features for chain of events for attacks
###Code
import pandas as pd
df = pd.read_csv('../data/processed/merged_dataset_pivoted.csv')
df.head()
###Output
_____no_output_____
###Markdown
Lateral Movement (Link: https://www.rapid7.com/resources/using-windows-event-logs-to-detect-lateral-movement/):

Using Windows event logs to detect lateral movement:

Authentication Events (all):
1. Event_528 <-- Successful Login
2. Event_529 <-- Unsuccessful Login
3. Event_4624 and Event_4625 <-- Two methods of Lateral Movement (Windows NT5 and NT6 Operating Systems)

- SMB: 552, 4648
- Scheduled Tasks: 602, 4698
- PS Exec: 601, 4697 <-- System admin tool to execute code remotely
- SSH: app logs <-- Less common in Windows environments
###Code
df[["event_4624", "event_4625", "event_4648", "event_4698"]].head()
df["total_authN_events"] = df["event_4624"] + df["event_4625"] # ".Logon Type:[\W](3|10).*"
df.head()
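# A minimal sketch of further lateral-movement features one could build from
# the event IDs listed above that actually have columns here (552, 601 and 602
# do not). Commented out so the feature list printed below stays unchanged:
# df["smb_events"] = df["event_4648"]          # SMB / explicit-credential logons
# df["sched_task_events"] = df["event_4698"]   # scheduled-task creation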
###Output
_____no_output_____
###Markdown
- PS Exec not there
- No columns for "event_552", "event_528", "event_529", "event_601", "event_602", and "event_4697"
- Have some of the cases for SMB, Scheduled Tasks <-- Look into regenerating dataset to get features
- Have some of the cases for authentication events <-- Same as above

Ransomware, malware and Cobalt Strike (Link: https://www.beyondtrust.com/blog/entry/windows-server-events-monitor):

Ransomware:
- event_8, or
- event_22 <-- Not contained in the dataframe

Hacker Presence:
- event_104 <-- Event Log was Cleared
- event_1102 <-- Audit Log was Cleared
- event_4719 <-- System Audit Policy Changed
###Code
columns = list(df.columns)
features = columns[4:]
print(features)
###Output
['event_1', 'event_3', 'event_8', 'event_10', 'event_11', 'event_12', 'event_4624', 'event_4625', 'event_4648', 'event_4658', 'event_4661', 'event_4663', 'event_4672', 'event_4698', 'event_4768', 'event_5140', 'event_5145', 'event_5156', 'event_5158', 'total_authN_events']
###Markdown
APT
- event_4674 <-- Account Name, Service, Process, Object
- event_4688 <-- Account Name, Process

Type Ratio:
- event_4624 <-- Logon
- event_4627 <-- Group Membership
- event_4658 <-- Handle to an object
- event_4768 <-- Kerberos AuthN
- event_4769 <-- Kerberos AuthN - Services
- event_4672 <-- Assignment of Admin Rights
- event_4776 <-- Kerberos Service Ticket

No event IDs found.

Detect Pass the Hash Attacks (Link: https://stealthbits.com/blog/how-to-detect-pass-the-hash-attacks/)

Workstation Logs (source host):
- event_4648
- event_4624
- event_4672
- sysmon event 10

Target Server Logs (target host):
- event_4624
- event_4672

Domain Controller:
- event_4768
- event_4769
- event_4776
###Code
df[["event_4624", "event_4672", "event_4648", "event_10"]].head()
df["hash_attack"] = df["event_4648"] + df["event_4624"] + df["event_4672"] + df["event_10"]
df.head()
###Output
_____no_output_____
###Markdown
Common Incident Response Scenario - Phishing (Link: https://www.netscylla.com/blog/2020/02/01/Threat-hunting-with-Windows-Event-Logs.html):
- event_1 <-- Process Creation
- event_11 <-- FileCreate
- event_15 <-- FileCreateStreamHash
###Code
df["phishing"] = df["event_1"] + df["event_11"]
pd.unique(df["phishing"].values.ravel())
df.head()
columns = list(df.columns)
features = columns[4:]
df[features] = df[features].div(df.total_events, axis = 0)
# Separating out the features
X = df.loc[:, features].values
y = df.loc[:, ['is_malicious']].values
print(X)
print(y)
df.to_csv("../data/processed/final_df.csv", index = False)
X = StandardScaler().fit_transform(X)
pca = PCA(n_components=2)
pcs = pca.fit_transform(X)
pca_df = pd.DataFrame(data = pcs, columns = ['pc1', 'pc2'])
fig = plt.figure(figsize = (8,8))
ax = fig.add_subplot(1,1,1)
ax.set_xlabel('Principal Component 1', fontsize = 15)
ax.set_ylabel('Principal Component 2', fontsize = 15)
ax.set_title('Features in 2D', fontsize = 20)
targets = [0, 1]
colors = ['r', 'b']
for target, color in zip(targets,colors):
indicesToKeep = df['is_malicious'] == target
ax.scatter(pca_df.loc[indicesToKeep, 'pc1']
, pca_df.loc[indicesToKeep, 'pc2']
, c = color
, s = 50)
# ax.legend(targets)
# ax.scatter(pca_df['pc1']
# , pca_df['pc2']
# , s = 50)
ax.grid()
smote = SMOTE(random_state=0, sampling_strategy="minority")
X_os, y_os = smote.fit_resample(X, y)
X_train_os, X_test_os, y_train_os, y_test_os = train_test_split(X_os, y_os, test_size = 0.2, random_state=2)
# Decision Tree Classifier
# Create Decision Tree classifer object
clf_os = DecisionTreeClassifier()
# Train Decision Tree Classifer
clf_os.fit(X_train_os, y_train_os)
# Predict the response for the test dataset
y_pred_os = clf_os.predict(X_test_os)
# Check Prediction
print(classification_report(y_test_os, y_pred_os))
# print(confusion_matrix(y_test_os, y_pred_os))
print("Accuracy:", accuracy_score(y_test_os, y_pred_os))
print("Label 0:")
print("Precision:", precision_score(y_test_os, y_pred_os, pos_label = 0))
print("Recall:", recall_score(y_test_os, y_pred_os, pos_label = 0))
print("Label 1:")
print("Precision:", precision_score(y_test_os, y_pred_os, pos_label = 1))
print("Recall:", recall_score(y_test_os, y_pred_os, pos_label = 1))
classes = np.unique(y_pred_os)
fig,ax = plt.subplots()
cm = metrics.confusion_matrix(y_test_os,y_pred_os,labels=classes)
sns.heatmap(cm, annot=True,fmt='d',cmap=plt.cm.Blues,cbar=False)
ax.set(xlabel="Pred",ylabel="True",title="Confusion Matrix")
ax.set_yticklabels(labels=classes,rotation=0)
plt.show()
# K Neighbors Classifier
# Create K Neighbors Classifier Object
neigh = KNeighborsClassifier(n_neighbors=3)
# Train K Neighbors Classifier Object
neigh.fit(X_train_os,y_train_os.ravel())
# Predict the response for test dataset
y_pred_os = neigh.predict(X_test_os)
# Check Prediction
print(classification_report(y_test_os, y_pred_os))
# print(confusion_matrix(y_test_os, y_pred_os))
print("Accuracy:", accuracy_score(y_test_os, y_pred_os))
print("Label 0:")
print("Precision:", precision_score(y_test_os, y_pred_os, pos_label = 0))
print("Recall:", recall_score(y_test_os, y_pred_os, pos_label = 0))
print("Label 1:")
print("Precision:", precision_score(y_test_os, y_pred_os, pos_label = 1))
print("Recall:", recall_score(y_test_os, y_pred_os, pos_label = 1))
classes = np.unique(y_pred_os)
fig,ax = plt.subplots()
cm = metrics.confusion_matrix(y_test_os,y_pred_os,labels=classes)
sns.heatmap(cm, annot=True,fmt='d',cmap=plt.cm.Blues,cbar=False)
ax.set(xlabel="Pred",ylabel="True",title="Confusion Matrix")
ax.set_yticklabels(labels=classes,rotation=0)
plt.show()
# Logistic Regression
# instantiate model
model = LogisticRegression()
# fit
model.fit(X_train_os,y_train_os)
# predict
y_pred_os = model.predict(X_test_os)
# Check Prediction
print(classification_report(y_test_os, y_pred_os))
# print(confusion_matrix(y_test_os, y_pred_os))
print("Accuracy:", accuracy_score(y_test_os, y_pred_os))
print("Label 0:")
print("Precision:", precision_score(y_test_os, y_pred_os, pos_label = 0))
print("Recall:", recall_score(y_test_os, y_pred_os, pos_label = 0))
print("Label 1:")
print("Precision:", precision_score(y_test_os, y_pred_os, pos_label = 1))
print("Recall:", recall_score(y_test_os, y_pred_os, pos_label = 1))
classes = np.unique(y_pred_os)
fig,ax = plt.subplots()
cm = metrics.confusion_matrix(y_test_os,y_pred_os,labels=classes)
sns.heatmap(cm, annot=True,fmt='d',cmap=plt.cm.Blues,cbar=False)
ax.set(xlabel="Pred",ylabel="True",title="Confusion Matrix")
ax.set_yticklabels(labels=classes,rotation=0)
plt.show()
# Random Forest:
# Instantiate Model
random_model = RandomForestClassifier()
# Fit
random_model.fit(X_train_os, y_train_os)
# Predict
y_pred_os = random_model.predict(X_test_os)
# Check Prediction
print(classification_report(y_test_os, y_pred_os))
# print(confusion_matrix(y_test_os, y_pred_os))
print("Accuracy:", accuracy_score(y_test_os, y_pred_os))
print("Label 0:")
print("Precision:", precision_score(y_test_os, y_pred_os, pos_label = 0))
print("Recall:", recall_score(y_test_os, y_pred_os, pos_label = 0))
print("Label 1:")
print("Precision:", precision_score(y_test_os, y_pred_os, pos_label = 1))
print("Recall:", recall_score(y_test_os, y_pred_os, pos_label = 1))
classes = np.unique(y_pred_os)
fig,ax = plt.subplots()
cm = metrics.confusion_matrix(y_test_os,y_pred_os,labels=classes)
sns.heatmap(cm, annot=True,fmt='d',cmap=plt.cm.Blues,cbar=False)
ax.set(xlabel="Pred",ylabel="True",title="Confusion Matrix")
ax.set_yticklabels(labels=classes,rotation=0)
plt.show()
# Gradient Boosting Classifier
# Instantiate Model
gb = GradientBoostingClassifier()
# Fit
gb.fit(X_train_os, y_train_os)
# Predict
y_pred_os = gb.predict(X_test_os)
# Check Prediction
print(classification_report(y_test_os, y_pred_os))
# print(confusion_matrix(y_test_os, y_pred_os))
print("Accuracy:", accuracy_score(y_test_os, y_pred_os))
print("Label 0:")
print("Precision:", precision_score(y_test_os, y_pred_os, pos_label = 0))
print("Recall:", recall_score(y_test_os, y_pred_os, pos_label = 0))
print("Label 1:")
print("Precision:", precision_score(y_test_os, y_pred_os, pos_label = 1))
print("Recall:", recall_score(y_test_os, y_pred_os, pos_label = 1))
classes = np.unique(y_pred_os)
fig,ax = plt.subplots()
cm = metrics.confusion_matrix(y_test_os,y_pred_os,labels=classes)
sns.heatmap(cm, annot=True,fmt='d',cmap=plt.cm.Blues,cbar=False)
ax.set(xlabel="Pred",ylabel="True",title="Confusion Matrix")
ax.set_yticklabels(labels=classes,rotation=0)
plt.show()
# # sklearn.svm.SVC (Support Vector Classification)
# svc = SVC(gamma="auto")
# svc.fit(X_train_os, y_train_os)
# # Predict
# y_pred_os = svc.predict(X_test_os)
# # Check Prediction
# print(classification_report(y_test_os, y_pred_os))
# # print(confusion_matrix(y_test_os, y_pred_os))
# print("Accuracy:", accuracy_score(y_test_os, y_pred_os))
# print("Label 0:")
# print("Precision:", precision_score(y_test_os, y_pred_os, pos_label = 0))
# print("Recall:", recall_score(y_test_os, y_pred_os, pos_label = 0))
# print("Label 1:")
# print("Precision:", precision_score(y_test_os, y_pred_os, pos_label = 1))
# print("Recall:", recall_score(y_test_os, y_pred_os, pos_label = 1))
# classes = np.unique(y_pred_os)
# fig,ax = plt.subplots()
# cm = metrics.confusion_matrix(y_test_os,y_pred_os,labels=classes)
# sns.heatmap(cm, annot=True,fmt='d',cmap=plt.cm.Blues,cbar=False)
# ax.set(xlabel="Pred",ylabel="True",title="Confusion Matrix")
# ax.set_yticklabels(labels=classes,rotation=0)
# plt.show()
###Output
_____no_output_____ |
MAIN_tutorial_machine_learning_with_nilearn.ipynb | ###Markdown
Section 2: Machine learning to predict age from rs-fmri
We will integrate what we've learned in the previous sections to extract data from *several* rs-fmri images, and use that data as features in a machine learning model.

The dataset consists of 50 children (ages 3-13) and 33 young adults (ages 18-39). We will use rs-fmri data to try to predict who are adults and who are children.

Load the data
###Code
# change this to the location where you downloaded the data
wdir = '/Users/jakevogel/Science/Nilearn_tutorial/reduced/'
# Now fetch the data
from glob import glob
import os
data = sorted(glob(os.path.join(wdir,'*.gz')))
confounds = sorted(glob(os.path.join(wdir,'*regressors.tsv')))
###Output
_____no_output_____
###Markdown
How many individual subjects do we have?
###Code
#len(data.func)
len(data)
###Output
_____no_output_____
###Markdown
Extract features
Here, we are going to use the same techniques we learned in the previous tutorial to extract rs-fmri connectivity features from every subject.

How are we going to do that? With a for loop. Don't worry, it's not as scary as it sounds.
###Code
# Here is a really simple for loop
for i in range(10):
print('the number is', i)
container = []
for i in range(10):
container.append(i)
container
###Output
_____no_output_____
###Markdown
Now let's construct a more complicated loop to do what we want.

First we do some things we don't need to do inside the loop. Let's reload our atlas, and re-initiate our masker and correlation_measure.
###Code
from nilearn.input_data import NiftiLabelsMasker
from nilearn.connectome import ConnectivityMeasure
from nilearn import datasets
# load atlas
multiscale = datasets.fetch_atlas_basc_multiscale_2015()
atlas_filename = multiscale.scale064
# initialize masker (change verbosity)
masker = NiftiLabelsMasker(labels_img=atlas_filename, standardize=True,
memory='nilearn_cache', verbose=0)
# initialize correlation measure, set to vectorize
correlation_measure = ConnectivityMeasure(kind='correlation', vectorize=True,
discard_diagonal=True)
###Output
_____no_output_____
###Markdown
Okay -- now that we have that taken care of, let's run our big loop! **NOTE**: On a laptop, this might take a few minutes.
###Code
all_features = [] # here is where we will put the data (a container)
for i,sub in enumerate(data):
# extract the timeseries from the ROIs in the atlas
time_series = masker.fit_transform(sub, confounds=confounds[i])
# create a region x region correlation matrix
correlation_matrix = correlation_measure.fit_transform([time_series])[0]
# add to our container
all_features.append(correlation_matrix)
# keep track of status
print('finished %s of %s'%(i+1,len(data)))
# Let's save the data to disk
import numpy as np
np.savez_compressed('MAIN_BASC064_subsamp_features',a = all_features)
###Output
_____no_output_____
###Markdown
In case you do not want to run the full loop on your computer, you can load the output of the loop here!
###Code
feat_file = 'MAIN_BASC064_subsamp_features.npz'
X_features = np.load(feat_file)['a']
X_features.shape
###Output
_____no_output_____
###Markdown
Okay so we've got our features. We can visualize our feature matrix
###Code
import matplotlib.pyplot as plt
plt.imshow(X_features, aspect='auto')
plt.colorbar()
plt.title('feature matrix')
plt.xlabel('features')
plt.ylabel('subjects')
###Output
_____no_output_____
###Markdown
Get Y (our target) and assess its distribution
###Code
# Let's load the phenotype data
pheno_path = os.path.join(wdir, 'participants.tsv')
from pandas import read_csv
pheno = read_csv(pheno_path, sep='\t').sort_values('participant_id')
pheno.head()
###Output
_____no_output_____
###Markdown
Looks like there is a column labeling children and adults. Let's capture it in a variable
###Code
y_ageclass = pheno['Child_Adult']
y_ageclass.head()
###Output
_____no_output_____
###Markdown
Maybe we should have a look at the distribution of our target variable
###Code
import matplotlib.pyplot as plt
import seaborn as sns
sns.countplot(y_ageclass)
###Output
_____no_output_____
###Markdown
We are a bit unbalanced -- there seem to be more children than adults.
###Code
pheno.Child_Adult.value_counts()
###Output
_____no_output_____
###Markdown
Prepare data for machine learning
Here, we will define a "training sample" where we can play around with our models. We will also set aside a "test" sample that we will not touch until the end.

We want to be sure that our training and test samples are matched! We can do that with a "stratified split". Specifically, we will stratify by age class.
###Code
from sklearn.model_selection import train_test_split
# Split the sample to training/test with a 60/40 ratio, and
# stratify by age class, and also shuffle the data.
X_train, X_test, y_train, y_test = train_test_split(
X_features, # x
y_ageclass, # y
test_size = 0.4, # 60%/40% split
shuffle = True, # shuffle dataset
# before splitting
stratify = y_ageclass, # keep
# distribution
# of ageclass
# consistent
# betw. train
# & test sets.
random_state = 123 # same shuffle each
# time
)
# print the size of our training and test groups
print('training:', len(X_train),
'testing:', len(X_test))
###Output
_____no_output_____
###Markdown
Let's visualize the distributions to be sure they are matched
###Code
fig,(ax1,ax2) = plt.subplots(2)
sns.countplot(y_train, ax=ax1, order=['child','adult'])
ax1.set_title('Train')
sns.countplot(y_test, ax=ax2, order=['child','adult'])
ax2.set_title('Test')
###Output
_____no_output_____
###Markdown
Run your first model!
Machine learning can get pretty fancy pretty quickly. We'll start with a very standard classification model called a Support Vector Classifier (SVC). While this may seem unambitious, simple models can be very robust. And we don't have enough data to create more complex models.

For more information, see this excellent resource: https://hal.inria.fr/hal-01824205

First, a quick review of SVM!

![](https://docs.opencv.org/2.4/_images/optimal-hyperplane.png)

Let's fit our first model!
###Code
from sklearn.svm import SVC
l_svc = SVC(kernel='linear') # define the model
l_svc.fit(X_train, y_train) # fit the model
###Output
_____no_output_____
###Markdown
Well... that was easy. Let's see how well the model learned the data!

We can judge our model on several criteria:
* Accuracy: The proportion of predictions that were correct overall.
* Precision: Accuracy of cases predicted as positive.
* Recall: Number of true positives correctly predicted to be positive.
* f1 score: A balance between precision and recall.

Or, for a more visual explanation...

![](https://upload.wikimedia.org/wikipedia/commons/2/26/Precisionrecall.svg)
###Code
from sklearn.metrics import classification_report, confusion_matrix, precision_score, f1_score
# predict the training data based on the model
y_pred = l_svc.predict(X_train)
# calculate the model accuracy
acc = l_svc.score(X_train, y_train)
# calculate the model precision, recall and f1, all in one convenient report!
cr = classification_report(y_true=y_train,
y_pred = y_pred)
# get a table to help us break down these scores
cm = confusion_matrix(y_true=y_train, y_pred = y_pred)
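# For intuition, precision and recall can be read straight off the confusion
# matrix (a sketch; sklearn sorts labels, so row/column 0 is 'adult', rows are
# observed and columns are predicted):
# precision_adult = cm[0, 0] / cm[:, 0].sum()   # TP / (TP + FP)
# recall_adult    = cm[0, 0] / cm[0, :].sum()   # TP / (TP + FN)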
###Output
_____no_output_____
###Markdown
Let's view our results and plot them all at once!
###Code
import itertools
from pandas import DataFrame
# print results
print('accuracy:', acc)
print(cr)
# plot confusion matrix
cmdf = DataFrame(cm, index = ['Adult','Child'], columns = ['Adult','Child'])
sns.heatmap(cmdf, cmap = 'RdBu_r')
plt.xlabel('Predicted')
plt.ylabel('Observed')
# label cells in matrix
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j+0.5, i+0.5, format(cm[i, j], 'd'),
horizontalalignment="center",
color="white")
###Output
_____no_output_____
###Markdown
![](https://sebastianraschka.com/images/faq/multiclass-metric/conf_mat.png)

HOLY COW! Machine learning is amazing!!! Almost a perfect fit!

...which means there's something wrong. What's the problem here? We evaluated the model on the same data it was trained on, so these scores say nothing about how it generalizes. To get an honest estimate, we can use cross-validation: repeatedly fit on one part of the training sample and score on the held-out part.
###Code
from sklearn.model_selection import cross_val_predict, cross_val_score
# predict
y_pred = cross_val_predict(l_svc, X_train, y_train,
groups=y_train, cv=10)
# scores
acc = cross_val_score(l_svc, X_train, y_train,
groups=y_train, cv=10)
###Output
_____no_output_____
###Markdown
We can look at the accuracy of the predictions for each fold of the cross-validation
###Code
for i in range(10):
print('Fold %s -- Acc = %s'%(i, acc[i]))
###Output
_____no_output_____
###Markdown
We can also look at the overall accuracy of the model
###Code
from sklearn.metrics import accuracy_score
overall_acc = accuracy_score(y_pred = y_pred, y_true = y_train)
overall_cr = classification_report(y_pred = y_pred, y_true = y_train)
overall_cm = confusion_matrix(y_pred = y_pred, y_true = y_train)
print('Accuracy:',overall_acc)
print(overall_cr)
thresh = overall_cm.max() / 2
cmdf = DataFrame(overall_cm, index = ['Adult','Child'], columns = ['Adult','Child'])
sns.heatmap(cmdf, cmap='copper')
plt.xlabel('Predicted')
plt.ylabel('Observed')
for i, j in itertools.product(range(overall_cm.shape[0]), range(overall_cm.shape[1])):
plt.text(j+0.5, i+0.5, format(overall_cm[i, j], 'd'),
horizontalalignment="center",
color="white")
###Output
_____no_output_____
###Markdown
Not too bad at all! Tweak your modelIt's very important to learn when and where its appropriate to "tweak" your model.Since we have done all of the previous analysis in our training data, it's find to try different models. But we **absolutely cannot** "test" it on our left out data. If we do, we are in great danger of overfitting.We could try other models, or tweak hyperparameters, but we are probably not powered sufficiently to do so, and would once again risk overfitting. But as a demonstration, we could see the impact of "scaling" our data. Certain machine learning algorithms perform better when all the input data is transformed to a uniform range of values. This is often between 0 and 1, or mean centered around with unit variance. We can perhaps look at the performance of the model after scaling the data
###Code
# Scale the training data
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler().fit(X_train)
X_train_scl = scaler.transform(X_train)
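# An alternative sketch (not used here): z-scoring with StandardScaler, which
# mean-centers each feature with unit variance instead of squashing to [0, 1]:
# from sklearn.preprocessing import StandardScaler
# scaler = StandardScaler().fit(X_train)
# X_train_scl = scaler.transform(X_train)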
plt.imshow(X_train, aspect='auto')
plt.colorbar()
plt.title('Training Data')
plt.xlabel('features')
plt.ylabel('subjects')
plt.imshow(X_train_scl, aspect='auto')
plt.colorbar()
plt.title('Scaled Training Data')
plt.xlabel('features')
plt.ylabel('subjects')
# repeat the steps above to re-fit the model
# and assess its performance
# don't forget to switch X_train to X_train_scl
# predict
y_pred = cross_val_predict(l_svc, X_train_scl, y_train,
groups=y_train, cv=10)
# get scores
overall_acc = accuracy_score(y_pred = y_pred, y_true = y_train)
overall_cr = classification_report(y_pred = y_pred, y_true = y_train)
overall_cm = confusion_matrix(y_pred = y_pred, y_true = y_train)
print('Accuracy:',overall_acc)
print(overall_cr)
# plot
thresh = overall_cm.max() / 2
cmdf = DataFrame(overall_cm, index = ['Adult','Child'], columns = ['Adult','Child'])
sns.heatmap(cmdf, cmap='copper')
plt.xlabel('Predicted')
plt.ylabel('Observed')
for i, j in itertools.product(range(overall_cm.shape[0]), range(overall_cm.shape[1])):
plt.text(j+0.5, i+0.5, format(overall_cm[i, j], 'd'),
horizontalalignment="center",
color="white")
###Output
_____no_output_____
###Markdown
What do you think about the results of this model compared to the non-transformed model? **Exercise:** Try fitting a new SVC model and tweaking one of its many hyperparameters. Run cross-validation and see how well it goes. Make a new cell and type `SVC?` to see the possible hyperparameters
###Code
# new_model = SVC()
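# One possible solution, as a sketch: the kernel and C value below are
# illustrative choices, not tuned values
from sklearn.svm import SVC
new_model = SVC(kernel='rbf', C=0.5)
new_acc = cross_val_score(new_model, X_train_scl, y_train, cv=10)
print('Mean CV accuracy = %.3f' % new_acc.mean())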
###Output
_____no_output_____
###Markdown
Can our model classify children from adults in completely unseen data? Now that we've fit a model we think may have learned how to decode childhood vs adulthood based on rs-fmri signal, let's put it to the test. We will train our model on all of the training data, and try to predict the age of the subjects we left out at the beginning of this section. Because we performed a transformation on our training data, we will need to transform our testing data using the *same information!*
###Code
# Notice how we use the Scaler that was fit to X_train and apply to X_test,
# rather than creating a new Scaler for X_test
X_test_scl = scaler.transform(X_test)
###Output
_____no_output_____
###Markdown
And now for the moment of truth! No cross-validation needed here. We simply fit the model with the training data and use it to predict the testing data. I'm so nervous. Let's just do it all in one cell
###Code
l_svc.fit(X_train_scl, y_train) # fit to training data
y_pred = l_svc.predict(X_test_scl) # classify age class using testing data
acc = l_svc.score(X_test_scl, y_test) # get accuracy
cr = classification_report(y_pred=y_pred, y_true=y_test) # get prec., recall & f1
cm = confusion_matrix(y_pred=y_pred, y_true=y_test) # get confusion matrix
# print results
print('accuracy =', acc)
print(cr)
# plot results
thresh = cm.max() / 2
cmdf = DataFrame(cm, index = ['Adult','Child'], columns = ['Adult','Child'])
sns.heatmap(cmdf, cmap='RdBu_r')
plt.xlabel('Predicted')
plt.ylabel('Observed')
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j+0.5, i+0.5, format(cm[i, j], 'd'),
horizontalalignment="center",
color="white")
###Output
_____no_output_____
###Markdown
***Wow!!*** Congratulations. You just trained a machine learning model that used real rs-fmri data to predict the age of real humans. Something in this data does seem to be systematically related to age ... but what? Interpreting model feature importances Interpreting the feature importances of a machine learning model is a real can of worms. This is an area of active research, and unfortunately it's hard to trust the feature importances of some models. You can find a whole tutorial on this subject here: http://gael-varoquaux.info/interpreting_ml_tuto/index.html For now, we'll just eschew better judgement and take a look at our feature importances. We can access the feature importances (weights) used by the model
###Code
l_svc.coef_
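# for a binary linear SVC, coef_ has shape (1, n_features)
print(l_svc.coef_.shape)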
###Output
_____no_output_____
###Markdown
Let's plot these weights to see their distribution better
###Code
plt.bar(range(l_svc.coef_.shape[-1]),l_svc.coef_[0])
plt.title('feature importances')
plt.xlabel('feature')
plt.ylabel('weight')
###Output
_____no_output_____
###Markdown
Or perhaps it will be easier to visualize this information as a matrix similar to the one we started with. We can use the correlation measure from before to perform an inverse transform
###Code
correlation_measure.inverse_transform(l_svc.coef_).shape  # recovers (n_samples, n_rois, n_rois) connectivity matrices
from nilearn import plotting
feat_exp_matrix = correlation_measure.inverse_transform(l_svc.coef_)[0]
plotting.plot_matrix(feat_exp_matrix, figure=(10, 8),
labels=range(feat_exp_matrix.shape[0]),
reorder=False,
tri='lower')
###Output
_____no_output_____
###Markdown
Let's see if we can throw those features onto an actual brain. First, we'll need to gather the coordinates of each ROI of our atlas
###Code
coords = plotting.find_parcellation_cut_coords(atlas_filename)
###Output
_____no_output_____
###Markdown
And now we can use our feature matrix and the wonders of nilearn to create a connectome map where each node is an ROI, and each connection is weighted by the importance of the feature to the model
###Code
plotting.plot_connectome(feat_exp_matrix, coords, colorbar=True)
###Output
_____no_output_____
###Markdown
Whoa!! That's...a lot to process. Maybe let's threshold the edges so that only the most important connections are visualized
###Code
plotting.plot_connectome(feat_exp_matrix, coords, colorbar=True, edge_threshold=0.04)
###Output
_____no_output_____
###Markdown
That's definitely an improvement, but it's still a bit hard to see what's going on. Nilearn has a new feature that lets us view this data interactively!
###Code
plotting.view_connectome(feat_exp_matrix, coords, threshold='90%')
#view = plotting.view_connectome(feat_exp_matrix, coords, threshold='90%')
#view.open_in_browser()
###Output
_____no_output_____ |
jupyter_test.ipynb | ###Markdown
% i like python
###Code
# define sample values so this test cell runs (A, b, c were undefined)
A, b, c = 1, 2, 3
print(A)
print(b)
c
###Output
_____no_output_____
###Markdown
Sine Wave. Nice and quiet sine waves.
###Code
%matplotlib inline
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
asin = np.sin(3*2*np.pi*(np.arange(1000))/1000)  # 3 cycles over 1000 samples
acos = np.cos(3*2*np.pi*(np.arange(1000))/1000)  # cosine of the same 3 cycles
plt.plot(asin)
plt.plot(acos)
###Output
_____no_output_____ |
examples/cfproto_mnist.ipynb | ###Markdown
Results:
###Code
print('Counterfactual prediction: {}'.format(explanation.cf['class']))
print('Closest prototype class: {}'.format(explanation.id_proto))
plt.imshow(explanation.cf['X'].reshape(28, 28));
###Output
Counterfactual prediction: 9
Closest prototype class: 9
###Markdown
Specify prototype classes For multi-class predictions, we might be interested in generating counterfactuals for certain classes while avoiding others. The following example illustrates how to do this:
###Code
X = x_test[12].reshape((1,) + x_test[1].shape)
plt.imshow(X.reshape(28, 28));
# initialize explainer, fit and generate counterfactuals
cf = CounterFactualProto(cnn, shape, gamma=gamma, theta=theta,
ae_model=ae, enc_model=enc, max_iterations=max_iterations,
feature_range=feature_range, c_init=c_init, c_steps=c_steps)
cf.fit(x_train)
explanation_1 = cf.explain(X, k=5, k_type='mean')
proto_1 = explanation_1.id_proto
explanation_2 = cf.explain(X, k=5, k_type='mean', target_class=[7])
proto_2 = explanation_2.id_proto
###Output
_____no_output_____
###Markdown
The closest class to the 9 is 4. This is evident from the first counterfactual below. For the second counterfactual, we specified that the prototype class used in the search should be a 7. As a result, a counterfactual 7 instead of a 4 is generated.
###Code
print('Counterfactual prediction: {}'.format(explanation_1.cf['class']))
print('Closest prototype class: {}'.format(proto_1))
plt.imshow(explanation_1.cf['X'].reshape(28, 28));
print('Counterfactual prediction: {}'.format(explanation_2.cf['class']))
print('Closest prototype class: {}'.format(proto_2))
plt.imshow(explanation_2.cf['X'].reshape(28, 28));
###Output
Counterfactual prediction: 7
Closest prototype class: 7
###Markdown
Speed up the counterfactual search by removing the predict function loss term We can also remove the prediction loss term and still obtain an interpretable counterfactual. This is especially relevant for fully black box models. When we provide the counterfactual search method with a Keras or TensorFlow model, it is incorporated in the TensorFlow graph and evaluated using automatic differentiation. However, if we only have access to the model's prediction function, the gradient updates are numerical and typically require a large number of prediction calls because of the prediction loss term $L_{pred}$. These prediction calls can slow the search down significantly and become a bottleneck. We can represent the gradient of the loss term as follows:\begin{equation*} \frac{\partial L_{pred}}{\partial x} = \frac{\partial L_{pred}}{\partial p} \frac{\partial p}{\partial x} \end{equation*}where $L_{pred}$ is the prediction loss term, $p$ the prediction function and $x$ the input features to optimize. For a 28 by 28 MNIST image, the $\partial p/\partial x$ term alone would require a prediction call with batch size 28x28x2 = 1568. By using the prototypes to guide the search, however, we can remove the prediction loss term and only make a single prediction at the end of each gradient update to check whether the predicted class on the proposed counterfactual is different from the original class. We do not necessarily need a Keras or TensorFlow auto-encoder either and can use k-d trees to find the nearest class prototypes. Please check out [this notebook](./cfproto_housing.ipynb) for a practical example. The first example below removes $L_{pred}$ from the loss function to bypass the bottleneck. It illustrates the drastic speed improvements over the black box alternative with numerical gradient evaluation while still producing interpretable counterfactual instances.
###Code
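# A quick check of the cost claim in the text above: central differences
# perturb each input feature twice, so one numerical gradient of p w.r.t.
# a 28x28 MNIST image needs a prediction call with batch size
print('numerical gradient batch size:', 28 * 28 * 2)  # 1568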
plt.gray()
X = x_test[23].reshape(1, 28, 28, 1)
plt.imshow(X.reshape(28, 28));
c_init = 0. # weight on prediction loss term set to 0
c_steps = 1 # no need to find optimal values for c
# define a black-box model
predict_fn = lambda x: cnn.predict(x)
# initialize explainer, fit and generate counterfactuals
cf = CounterFactualProto(predict_fn, shape, gamma=gamma, theta=theta,
ae_model=ae, enc_model=enc, max_iterations=max_iterations,
feature_range=feature_range, c_init=c_init, c_steps=c_steps)
cf.fit(x_train)
start_time = time()
explanation = cf.explain(X, k=1)
print('Explanation took {:.3f} sec'.format(time() - start_time))
print('Counterfactual prediction: {}'.format(explanation.cf['class']))
print('Closest prototype class: {}'.format(explanation.id_proto))
plt.imshow(explanation.cf['X'].reshape(28, 28));
###Output
Counterfactual prediction: 6
Closest prototype class: 6
###Markdown
Let us now add the $L_{pred}$ loss term back into the objective function and observe how long it takes to generate a black box counterfactual:
###Code
c_init = 1.
c_steps = 2
# define a black-box model
predict_fn = lambda x: cnn.predict(x)
# initialize explainer, fit and generate counterfactuals
cf = CounterFactualProto(predict_fn, shape, gamma=gamma, theta=theta,
ae_model=ae, enc_model=enc, max_iterations=max_iterations,
feature_range=feature_range, c_init=c_init, c_steps=c_steps)
cf.fit(x_train)
start_time = time()
explanation = cf.explain(X, k=1)
print('Explanation took {:.3f} sec'.format(time() - start_time))
print('Counterfactual prediction: {}'.format(explanation.cf['class']))
print('Closest prototype class: {}'.format(explanation.id_proto))
plt.imshow(explanation.cf['X'].reshape(28, 28));
###Output
Counterfactual prediction: 6
Closest prototype class: 6
###Markdown
Clean up:
###Code
os.remove('mnist_cnn.h5')
os.remove('mnist_ae.h5')
os.remove('mnist_enc.h5')
###Output
_____no_output_____
###Markdown
Counterfactuals guided by prototypes on MNIST This method is described in the [Interpretable Counterfactual Explanations Guided by Prototypes](https://arxiv.org/abs/1907.02584) paper and can generate counterfactual instances guided by class prototypes. It means that for a certain instance X, the method builds a prototype for each prediction class using either an [autoencoder](https://en.wikipedia.org/wiki/Autoencoder) or [k-d trees](https://en.wikipedia.org/wiki/K-d_tree). The nearest prototype class other than the originally predicted class is then used to guide the counterfactual search. For example, in MNIST the closest class to a 7 could be a 9. As a result, the prototype loss term will try to minimize the distance between the proposed counterfactual and the prototype of a 9. This speeds up the search towards a satisfactory counterfactual by steering it towards an interpretable solution from the start of the optimization. It also helps to avoid out-of-distribution counterfactuals, since the perturbations are driven towards a prototype of another class. The loss function to be optimized is the following: $$Loss = cL_{pred} + \beta L_{1} + L_{2} + L_{AE} + L_{proto}$$The first loss term relates to the model's prediction function, the following 2 terms define the elastic net regularization while the last 2 terms are optional. The aim of $L_{AE}$ is to penalize out-of-distribution counterfactuals while $L_{proto}$ guides the counterfactual to a prototype. When we only have access to the model's prediction function and cannot fully enjoy the benefits of automatic differentiation, the prototypes allow us to drop the prediction function loss term $L_{pred}$ and still generate high-quality counterfactuals. This drastically reduces the number of prediction calls made during the numerical gradient update step and again speeds up the search. Other options include generating counterfactuals for specific classes or including trust score constraints to ensure that the counterfactual is close enough to the newly predicted class compared to the original class. Different use cases are illustrated throughout this notebook.
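(A note on notation, roughly following the paper: writing the counterfactual as $x_{cf} = x_0 + \delta$, the elastic net terms penalize the perturbation $\delta$ itself, i.e. $L_{1} = \|\delta\|_1$ and $L_{2} = \|\delta\|_2^2$.)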
###Code
import tensorflow as tf
tf.get_logger().setLevel(40) # suppress deprecation messages
tf.compat.v1.disable_v2_behavior() # disable TF2 behaviour as alibi code still relies on TF1 constructs
from tensorflow.keras.layers import Conv2D, Dense, Dropout, Flatten, MaxPooling2D, Input, UpSampling2D
from tensorflow.keras.models import Model, load_model
from tensorflow.keras.utils import to_categorical
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import os
from time import time
from alibi.explainers import CounterfactualProto
print('TF version: ', tf.__version__)
print('Eager execution enabled: ', tf.executing_eagerly()) # False
###Output
TF version: 2.2.0
Eager execution enabled: False
###Markdown
Load and prepare MNIST data
###Code
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
print('x_train shape:', x_train.shape, 'y_train shape:', y_train.shape)
plt.gray()
plt.imshow(x_test[1]);
###Output
x_train shape: (60000, 28, 28) y_train shape: (60000,)
###Markdown
Prepare data: scale, reshape and categorize
###Code
x_train = x_train.astype('float32') / 255
x_test = x_test.astype('float32') / 255
x_train = np.reshape(x_train, x_train.shape + (1,))
x_test = np.reshape(x_test, x_test.shape + (1,))
print('x_train shape:', x_train.shape, 'x_test shape:', x_test.shape)
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)
print('y_train shape:', y_train.shape, 'y_test shape:', y_test.shape)
xmin, xmax = -.5, .5
x_train = ((x_train - x_train.min()) / (x_train.max() - x_train.min())) * (xmax - xmin) + xmin
x_test = ((x_test - x_test.min()) / (x_test.max() - x_test.min())) * (xmax - xmin) + xmin
###Output
_____no_output_____
###Markdown
Define and train CNN model
###Code
def cnn_model():
x_in = Input(shape=(28, 28, 1))
x = Conv2D(filters=32, kernel_size=2, padding='same', activation='relu')(x_in)
x = MaxPooling2D(pool_size=2)(x)
x = Dropout(0.3)(x)
x = Conv2D(filters=64, kernel_size=2, padding='same', activation='relu')(x)
x = MaxPooling2D(pool_size=2)(x)
x = Dropout(0.3)(x)
x = Flatten()(x)
x = Dense(256, activation='relu')(x)
x = Dropout(0.5)(x)
x_out = Dense(10, activation='softmax')(x)
cnn = Model(inputs=x_in, outputs=x_out)
cnn.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
return cnn
cnn = cnn_model()
cnn.fit(x_train, y_train, batch_size=32, epochs=3, verbose=0)
cnn.save('mnist_cnn.h5', save_format='h5')
###Output
_____no_output_____
###Markdown
Evaluate the model on test set
###Code
cnn = load_model('mnist_cnn.h5')
score = cnn.evaluate(x_test, y_test, verbose=0)
print('Test accuracy: ', score[1])
###Output
Test accuracy: 0.9871
###Markdown
Define and train auto-encoder
###Code
def ae_model():
# encoder
x_in = Input(shape=(28, 28, 1))
x = Conv2D(16, (3, 3), activation='relu', padding='same')(x_in)
x = Conv2D(16, (3, 3), activation='relu', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
encoded = Conv2D(1, (3, 3), activation=None, padding='same')(x)
encoder = Model(x_in, encoded)
# decoder
dec_in = Input(shape=(14, 14, 1))
x = Conv2D(16, (3, 3), activation='relu', padding='same')(dec_in)
x = UpSampling2D((2, 2))(x)
x = Conv2D(16, (3, 3), activation='relu', padding='same')(x)
decoded = Conv2D(1, (3, 3), activation=None, padding='same')(x)
decoder = Model(dec_in, decoded)
# autoencoder = encoder + decoder
x_out = decoder(encoder(x_in))
autoencoder = Model(x_in, x_out)
autoencoder.compile(optimizer='adam', loss='mse')
return autoencoder, encoder, decoder
ae, enc, dec = ae_model()
ae.fit(x_train, x_train, batch_size=128, epochs=4, validation_data=(x_test, x_test), verbose=0)
ae.save('mnist_ae.h5', save_format='h5')
enc.save('mnist_enc.h5', save_format='h5')
###Output
_____no_output_____
###Markdown
Compare original with decoded images
###Code
ae = load_model('mnist_ae.h5')
enc = load_model('mnist_enc.h5', compile=False)
decoded_imgs = ae.predict(x_test)
n = 5
plt.figure(figsize=(20, 4))
for i in range(1, n+1):
# display original
ax = plt.subplot(2, n, i)
plt.imshow(x_test[i].reshape(28, 28))
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# display reconstruction
ax = plt.subplot(2, n, i + n)
plt.imshow(decoded_imgs[i].reshape(28, 28))
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
###Output
_____no_output_____
###Markdown
Generate counterfactual guided by the nearest class prototype Original instance:
###Code
X = x_test[0].reshape((1,) + x_test[0].shape)
plt.imshow(X.reshape(28, 28));
###Output
_____no_output_____
###Markdown
Counterfactual parameters:
###Code
shape = (1,) + x_train.shape[1:]
gamma = 100.
theta = 100.
c_init = 1.
c_steps = 2
max_iterations = 1000
feature_range = (x_train.min(),x_train.max())
###Output
_____no_output_____
###Markdown
Run counterfactual:
###Code
# initialize explainer, fit and generate counterfactual
cf = CounterfactualProto(cnn, shape, gamma=gamma, theta=theta,
ae_model=ae, enc_model=enc, max_iterations=max_iterations,
feature_range=feature_range, c_init=c_init, c_steps=c_steps)
start_time = time()
cf.fit(x_train) # find class prototypes
print('Time to find prototypes for each class: {:.3f} sec'.format(time() - start_time))
start_time = time()
explanation = cf.explain(X)
print('Explanation took {:.3f} sec'.format(time() - start_time))
###Output
Time to find prototypes for each class: 14.580 sec
Explanation took 9.269 sec
###Markdown
Results:
###Code
print('Counterfactual prediction: {}'.format(explanation.cf['class']))
print(f'Closest prototype class: {explanation.id_proto}')
plt.imshow(explanation.cf['X'].reshape(28, 28));
###Output
Counterfactual prediction: 9
Closest prototype class: 9
###Markdown
The counterfactual starting from a 7 moves towards its closest prototype class: a 9. The evolution of the counterfactual during the first iteration can be seen below:
###Code
iter_cf = 0
print(f'iteration c {iter_cf}')
n = len(explanation['all'][iter_cf])
plt.figure(figsize=(20, 4))
for i in range(n):
ax = plt.subplot(1, n+1, i+1)
plt.imshow(explanation['all'][iter_cf][i].reshape(28, 28))
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
###Output
iteration c 0
###Markdown
Typically, the first few iterations already steer the 7 towards a 9, while the later iterations make the counterfactual more sparse. Prototypes defined by the $k$ nearest encoded instances In the above example, the class prototypes are defined by the average encoding of all instances belonging to the specific class. Instead, we can also select only the $k$ nearest encoded instances of a class to the encoded instance to be explained and use the average over those $k$ encodings as the prototype.
###Code
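# A minimal numpy sketch of the k-nearest prototype idea described above
# (hypothetical helper, not the alibi internals): average the k encodings
# of a class that lie closest to the encoding of the instance to explain.
def knn_prototype(enc_X, class_encodings, k):
    flat = class_encodings.reshape(len(class_encodings), -1)
    dists = np.linalg.norm(flat - enc_X.ravel(), axis=1)
    nearest = np.argsort(dists)[:k]
    return class_encodings[nearest].mean(axis=0)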
# initialize explainer, fit and generate counterfactuals
cf = CounterfactualProto(cnn, shape, gamma=gamma, theta=theta,
ae_model=ae, enc_model=enc, max_iterations=max_iterations,
feature_range=feature_range, c_init=c_init, c_steps=c_steps)
cf.fit(x_train)
explanation_k1 = cf.explain(X, k=1, k_type='mean')
explanation_k20 = cf.explain(X, k=20, k_type='mean')
###Output
_____no_output_____
###Markdown
Results for $k$ equals 1:
###Code
print('Counterfactual prediction: {}'.format(explanation_k1.cf['class']))
print(f'Closest prototype class: {explanation.id_proto}')
plt.imshow(explanation_k1.cf['X'].reshape(28, 28));
###Output
Counterfactual prediction: 9
Closest prototype class: 9
###Markdown
Results for $k$ equals 20:
###Code
print('Counterfactual prediction: {}'.format(explanation_k20.cf['class']))
print(f'Closest prototype class: {explanation.id_proto}')
plt.imshow(explanation_k20.cf['X'].reshape(28, 28));
###Output
Counterfactual prediction: 9
Closest prototype class: 9
###Markdown
A lower value of $k$ typically leads to counterfactuals that look more like the original instance and less like an average instance of the counterfactual class. Remove the autoencoder loss term $L_{AE}$ In the previous example, we used both an autoencoder loss term to penalize a counterfactual which falls outside of the training data distribution and an encoder loss term to guide the counterfactual to the nearest prototype class. In the next example we get rid of the autoencoder loss term to speed up the counterfactual search and still generate decent counterfactuals:
###Code
# initialize explainer, fit and generate counterfactuals
cf = CounterfactualProto(cnn, shape, gamma=gamma, theta=theta,
enc_model=enc, max_iterations=max_iterations,
feature_range=feature_range, c_init=c_init, c_steps=c_steps)
cf.fit(x_train)
start_time = time()
explanation = cf.explain(X, k=1)
print('Explanation took {:.3f} sec'.format(time() - start_time))
###Output
Explanation took 6.443 sec
###Markdown
Results:
###Code
print('Counterfactual prediction: {}'.format(explanation.cf['class']))
print(f'Closest prototype class: {explanation.id_proto}')
plt.imshow(explanation.cf['X'].reshape(28, 28));
###Output
Counterfactual prediction: 9
Closest prototype class: 9
###Markdown
Specify prototype classes For multi-class predictions, we might be interested in generating counterfactuals for certain classes while avoiding others. The following example illustrates how to do this:
###Code
X = x_test[12].reshape((1,) + x_test[1].shape)
plt.imshow(X.reshape(28, 28));
# initialize explainer, fit and generate counterfactuals
cf = CounterfactualProto(cnn, shape, gamma=gamma, theta=theta,
ae_model=ae, enc_model=enc, max_iterations=max_iterations,
feature_range=feature_range, c_init=c_init, c_steps=c_steps)
cf.fit(x_train)
explanation_1 = cf.explain(X, k=5, k_type='mean')
proto_1 = explanation_1.id_proto
explanation_2 = cf.explain(X, k=5, k_type='mean', target_class=[7])
proto_2 = explanation_2.id_proto
###Output
_____no_output_____
###Markdown
The closest class to the 9 is 4. This is evident from the first counterfactual below. For the second counterfactual, we specified that the prototype class used in the search should be a 7. As a result, a counterfactual 7 instead of a 4 is generated.
###Code
print('Counterfactual prediction: {}'.format(explanation_1.cf['class']))
print(f'Closest prototype class: {proto_1}')
plt.imshow(explanation_1.cf['X'].reshape(28, 28));
print('Counterfactual prediction: {}'.format(explanation_2.cf['class']))
print(f'Closest prototype class: {proto_2}')
plt.imshow(explanation_2.cf['X'].reshape(28, 28));
###Output
Counterfactual prediction: 7
Closest prototype class: 7
###Markdown
Speed up the counterfactual search by removing the predict function loss term We can also remove the prediction loss term and still obtain an interpretable counterfactual. This is especially relevant for fully black box models. When we provide the counterfactual search method with a Keras or TensorFlow model, it is incorporated in the TensorFlow graph and evaluated using automatic differentiation. However, if we only have access to the model's prediction function, the gradient updates are numerical and typically require a large number of prediction calls because of the prediction loss term $L_{pred}$. These prediction calls can slow the search down significantly and become a bottleneck. We can represent the gradient of the loss term as follows:$$\frac{\partial L_{pred}}{\partial x} = \frac{\partial L_{pred}}{\partial p} \frac{\partial p}{\partial x}$$where $L_{pred}$ is the prediction loss term, $p$ the prediction function and $x$ the input features to optimize. For a 28 by 28 MNIST image, the $\partial p/\partial x$ term alone would require a prediction call with batch size 28x28x2 = 1568. By using the prototypes to guide the search, however, we can remove the prediction loss term and only make a single prediction at the end of each gradient update to check whether the predicted class on the proposed counterfactual is different from the original class. We do not necessarily need a Keras or TensorFlow auto-encoder either and can use k-d trees to find the nearest class prototypes. Please check out [this notebook](./cfproto_housing.ipynb) for a practical example. The first example below removes $L_{pred}$ from the loss function to bypass the bottleneck. It illustrates the drastic speed improvements over the black box alternative with numerical gradient evaluation while still producing interpretable counterfactual instances.
###Code
plt.gray()
X = x_test[23].reshape(1, 28, 28, 1)
plt.imshow(X.reshape(28, 28));
c_init = 0. # weight on prediction loss term set to 0
c_steps = 1 # no need to find optimal values for c
# define a black-box model
predict_fn = lambda x: cnn.predict(x)
# initialize explainer, fit and generate counterfactuals
cf = CounterfactualProto(predict_fn, shape, gamma=gamma, theta=theta,
ae_model=ae, enc_model=enc, max_iterations=max_iterations,
feature_range=feature_range, c_init=c_init, c_steps=c_steps)
cf.fit(x_train)
start_time = time()
explanation = cf.explain(X, k=1)
print('Explanation took {:.3f} sec'.format(time() - start_time))
print('Counterfactual prediction: {}'.format(explanation.cf['class']))
print(f'Closest prototype class: {explanation.id_proto}')
plt.imshow(explanation.cf['X'].reshape(28, 28));
###Output
Counterfactual prediction: 6
Closest prototype class: 6
###Markdown
Let us now add the $L_{pred}$ loss term back into the objective function and observe how long it takes to generate a black box counterfactual:
###Code
c_init = 1.
c_steps = 2
# define a black-box model
predict_fn = lambda x: cnn.predict(x)
# initialize explainer, fit and generate counterfactuals
cf = CounterfactualProto(predict_fn, shape, gamma=gamma, theta=theta,
ae_model=ae, enc_model=enc, max_iterations=max_iterations,
feature_range=feature_range, c_init=c_init, c_steps=c_steps)
cf.fit(x_train)
start_time = time()
explanation = cf.explain(X, k=1)
print('Explanation took {:.3f} sec'.format(time() - start_time))
print('Counterfactual prediction: {}'.format(explanation.cf['class']))
print(f'Closest prototype class: {explanation.id_proto}')
plt.imshow(explanation.cf['X'].reshape(28, 28));
###Output
Counterfactual prediction: 6
Closest prototype class: 6
###Markdown
Clean up:
###Code
os.remove('mnist_cnn.h5')
os.remove('mnist_ae.h5')
os.remove('mnist_enc.h5')
###Output
_____no_output_____
###Markdown
Counterfactuals guided by prototypes on MNIST This method is described in the [Interpretable Counterfactual Explanations Guided by Prototypes](https://arxiv.org/abs/1907.02584) paper and can generate counterfactual instances guided by class prototypes. It means that for a certain instance X, the method builds a prototype for each prediction class using either an [autoencoder](https://en.wikipedia.org/wiki/Autoencoder) or [k-d trees](https://en.wikipedia.org/wiki/K-d_tree). The nearest prototype class other than the originally predicted class is then used to guide the counterfactual search. For example, in MNIST the closest class to a 7 could be a 9. As a result, the prototype loss term will try to minimize the distance between the proposed counterfactual and the prototype of a 9. This speeds up the search towards a satisfactory counterfactual by steering it towards an interpretable solution from the start of the optimization. It also helps to avoid out-of-distribution counterfactuals, since the perturbations are driven towards a prototype of another class. The loss function to be optimized is the following: $$Loss = cL_{pred} + \beta L_{1} + L_{2} + L_{AE} + L_{proto}$$The first loss term relates to the model's prediction function, the following 2 terms define the elastic net regularization while the last 2 terms are optional. The aim of $L_{AE}$ is to penalize out-of-distribution counterfactuals while $L_{proto}$ guides the counterfactual to a prototype. When we only have access to the model's prediction function and cannot fully enjoy the benefits of automatic differentiation, the prototypes allow us to drop the prediction function loss term $L_{pred}$ and still generate high-quality counterfactuals. This drastically reduces the number of prediction calls made during the numerical gradient update step and again speeds up the search. Other options include generating counterfactuals for specific classes or including trust score constraints to ensure that the counterfactual is close enough to the newly predicted class compared to the original class. Different use cases are illustrated throughout this notebook.
###Code
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.ERROR) # suppress deprecation messages
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Conv2D, Dense, Dropout, Flatten, MaxPooling2D, Input, UpSampling2D
from tensorflow.keras.models import Model, load_model
from tensorflow.keras.utils import to_categorical
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import os
from time import time
from alibi.explainers import CounterFactualProto
###Output
_____no_output_____
###Markdown
Load and prepare MNIST data
###Code
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
print('x_train shape:', x_train.shape, 'y_train shape:', y_train.shape)
plt.gray()
plt.imshow(x_test[1]);
###Output
x_train shape: (60000, 28, 28) y_train shape: (60000,)
###Markdown
Prepare data: scale, reshape and categorize
###Code
x_train = x_train.astype('float32') / 255
x_test = x_test.astype('float32') / 255
x_train = np.reshape(x_train, x_train.shape + (1,))
x_test = np.reshape(x_test, x_test.shape + (1,))
print('x_train shape:', x_train.shape, 'x_test shape:', x_test.shape)
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)
print('y_train shape:', y_train.shape, 'y_test shape:', y_test.shape)
xmin, xmax = -.5, .5
x_train = ((x_train - x_train.min()) / (x_train.max() - x_train.min())) * (xmax - xmin) + xmin
x_test = ((x_test - x_test.min()) / (x_test.max() - x_test.min())) * (xmax - xmin) + xmin
###Output
_____no_output_____
###Markdown
Define and train CNN model
###Code
def cnn_model():
x_in = Input(shape=(28, 28, 1))
x = Conv2D(filters=32, kernel_size=2, padding='same', activation='relu')(x_in)
x = MaxPooling2D(pool_size=2)(x)
x = Dropout(0.3)(x)
x = Conv2D(filters=64, kernel_size=2, padding='same', activation='relu')(x)
x = MaxPooling2D(pool_size=2)(x)
x = Dropout(0.3)(x)
x = Flatten()(x)
x = Dense(256, activation='relu')(x)
x = Dropout(0.5)(x)
x_out = Dense(10, activation='softmax')(x)
cnn = Model(inputs=x_in, outputs=x_out)
cnn.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
return cnn
cnn = cnn_model()
cnn.fit(x_train, y_train, batch_size=32, epochs=3, verbose=0)
cnn.save('mnist_cnn.h5', save_format='h5')
###Output
_____no_output_____
###Markdown
Evaluate the model on test set
###Code
cnn = load_model('mnist_cnn.h5')
score = cnn.evaluate(x_test, y_test, verbose=0)
print('Test accuracy: ', score[1])
###Output
Test accuracy: 0.9887
###Markdown
Define and train auto-encoder
###Code
def ae_model():
# encoder
x_in = Input(shape=(28, 28, 1))
x = Conv2D(16, (3, 3), activation='relu', padding='same')(x_in)
x = Conv2D(16, (3, 3), activation='relu', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
encoded = Conv2D(1, (3, 3), activation=None, padding='same')(x)
encoder = Model(x_in, encoded)
# decoder
dec_in = Input(shape=(14, 14, 1))
x = Conv2D(16, (3, 3), activation='relu', padding='same')(dec_in)
x = UpSampling2D((2, 2))(x)
x = Conv2D(16, (3, 3), activation='relu', padding='same')(x)
decoded = Conv2D(1, (3, 3), activation=None, padding='same')(x)
decoder = Model(dec_in, decoded)
# autoencoder = encoder + decoder
x_out = decoder(encoder(x_in))
autoencoder = Model(x_in, x_out)
autoencoder.compile(optimizer='adam', loss='mse')
return autoencoder, encoder, decoder
ae, enc, dec = ae_model()
ae.fit(x_train, x_train, batch_size=128, epochs=4, validation_data=(x_test, x_test), verbose=0)
ae.save('mnist_ae.h5', save_format='h5')
enc.save('mnist_enc.h5', save_format='h5')
###Output
_____no_output_____
###Markdown
Compare original with decoded images
###Code
ae = load_model('mnist_ae.h5')
enc = load_model('mnist_enc.h5', compile=False)
decoded_imgs = ae.predict(x_test)
n = 5
plt.figure(figsize=(20, 4))
for i in range(1, n+1):
# display original
ax = plt.subplot(2, n, i)
plt.imshow(x_test[i].reshape(28, 28))
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# display reconstruction
ax = plt.subplot(2, n, i + n)
plt.imshow(decoded_imgs[i].reshape(28, 28))
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
###Output
_____no_output_____
###Markdown
Generate counterfactual guided by the nearest class prototype Original instance:
###Code
X = x_test[0].reshape((1,) + x_test[0].shape)
plt.imshow(X.reshape(28, 28));
###Output
_____no_output_____
###Markdown
Counterfactual parameters:
###Code
shape = (1,) + x_train.shape[1:]
gamma = 100.
theta = 100.
c_init = 1.
c_steps = 2
max_iterations = 1000
feature_range = (x_train.min(),x_train.max())
###Output
_____no_output_____
###Markdown
Run counterfactual:
###Code
# initialize explainer, fit and generate counterfactual
cf = CounterFactualProto(cnn, shape, gamma=gamma, theta=theta,
ae_model=ae, enc_model=enc, max_iterations=max_iterations,
feature_range=feature_range, c_init=c_init, c_steps=c_steps)
start_time = time()
cf.fit(x_train) # find class prototypes
print('Time to find prototypes for each class: {:.3f} sec'.format(time() - start_time))
start_time = time()
explanation = cf.explain(X)
print('Explanation took {:.3f} sec'.format(time() - start_time))
###Output
Time to find prototypes for each class: 10.619 sec
Explanation took 8.659 sec
###Markdown
Results:
###Code
print('Counterfactual prediction: {}'.format(explanation['cf']['class']))
print('Closest prototype class: {}'.format(cf.id_proto))
plt.imshow(explanation['cf']['X'].reshape(28, 28));
###Output
Counterfactual prediction: 9
Closest prototype class: 9
###Markdown
The counterfactual starting from a 7 moves towards its closest prototype class: a 9. The evolution of the counterfactual during the first iteration can be seen below:
###Code
iter_cf = 0
print('iteration c {}'.format(iter_cf))
n = len(explanation['all'][iter_cf])
plt.figure(figsize=(20, 4))
for i in range(n):
ax = plt.subplot(1, n+1, i+1)
plt.imshow(explanation['all'][iter_cf][i].reshape(28, 28))
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
###Output
iteration c 0
###Markdown
Typically, the first few iterations already steer the 7 towards a 9, while the later iterations make the counterfactual more sparse. Prototypes defined by the $k$ nearest encoded instances In the above example, the class prototypes are defined by the average encoding of all instances belonging to the specific class. Instead, we can also select only the $k$ nearest encoded instances of a class to the encoded instance to be explained and use the average over those $k$ encodings as the prototype.
###Code
# initialize explainer, fit and generate counterfactuals
cf = CounterFactualProto(cnn, shape, gamma=gamma, theta=theta,
ae_model=ae, enc_model=enc, max_iterations=max_iterations,
feature_range=feature_range, c_init=c_init, c_steps=c_steps)
cf.fit(x_train)
explanation_k1 = cf.explain(X, k=1, k_type='mean')
explanation_k20 = cf.explain(X, k=20, k_type='mean')
###Output
_____no_output_____
###Markdown
Results for $k$ equals 1:
###Code
print('Counterfactual prediction: {}'.format(explanation_k1['cf']['class']))
print('Closest prototype class: {}'.format(cf.id_proto))
plt.imshow(explanation_k1['cf']['X'].reshape(28, 28));
###Output
Counterfactual prediction: 9
Closest prototype class: 9
###Markdown
Results for $k$ equals 20:
###Code
print('Counterfactual prediction: {}'.format(explanation_k20['cf']['class']))
print('Closest prototype class: {}'.format(cf.id_proto))
plt.imshow(explanation_k20['cf']['X'].reshape(28, 28));
###Output
Counterfactual prediction: 9
Closest prototype class: 9
###Markdown
A lower value of $k$ typically leads to counterfactuals that look more like the original instance and less like an average instance of the counterfactual class. Remove the autoencoder loss term $L_{AE}$ In the previous example, we used both an autoencoder loss term to penalize a counterfactual which falls outside of the training data distribution and an encoder loss term to guide the counterfactual to the nearest prototype class. In the next example we get rid of the autoencoder loss term to speed up the counterfactual search and still generate decent counterfactuals:
###Code
# initialize explainer, fit and generate counterfactuals
cf = CounterFactualProto(cnn, shape, gamma=gamma, theta=theta,
enc_model=enc, max_iterations=max_iterations,
feature_range=feature_range, c_init=c_init, c_steps=c_steps)
cf.fit(x_train)
start_time = time()
explanation = cf.explain(X, k=1)
print('Explanation took {:.3f} sec'.format(time() - start_time))
###Output
Explanation took 6.284 sec
###Markdown
Results:
###Code
print('Counterfactual prediction: {}'.format(explanation['cf']['class']))
print('Closest prototype class: {}'.format(cf.id_proto))
plt.imshow(explanation['cf']['X'].reshape(28, 28));
###Output
Counterfactual prediction: 9
Closest prototype class: 9
###Markdown
Specify prototype classes For multi-class predictions, we might be interested in generating counterfactuals for certain classes while avoiding others. The following example illustrates how to do this:
###Code
X = x_test[12].reshape((1,) + x_test[1].shape)
plt.imshow(X.reshape(28, 28));
# initialize explainer, fit and generate counterfactuals
cf = CounterFactualProto(cnn, shape, gamma=gamma, theta=theta,
ae_model=ae, enc_model=enc, max_iterations=max_iterations,
feature_range=feature_range, c_init=c_init, c_steps=c_steps)
cf.fit(x_train)
explanation_1 = cf.explain(X, k=5, k_type='mean')
proto_1 = cf.id_proto
explanation_2 = cf.explain(X, k=5, k_type='mean', target_class=[7])
proto_2 = cf.id_proto
###Output
_____no_output_____
###Markdown
The closest class to the 9 is 4. This is evident from the first counterfactual below. For the second counterfactual, we specified that the prototype class used in the search should be a 7. As a result, a counterfactual 7 instead of a 4 is generated.
###Code
print('Counterfactual prediction: {}'.format(explanation_1['cf']['class']))
print('Closest prototype class: {}'.format(proto_1))
plt.imshow(explanation_1['cf']['X'].reshape(28, 28));
print('Counterfactual prediction: {}'.format(explanation_2['cf']['class']))
print('Closest prototype class: {}'.format(proto_2))
plt.imshow(explanation_2['cf']['X'].reshape(28, 28));
###Output
Counterfactual prediction: 7
Closest prototype class: 7
###Markdown
Speed up the counterfactual search by removing the predict function loss term We can also remove the prediction loss term and still obtain an interpretable counterfactual. This is especially relevant for fully black box models. When we provide the counterfactual search method with a Keras or TensorFlow model, it is incorporated in the TensorFlow graph and evaluated using automatic differentiation. However, if we only have access to the model's prediction function, the gradient updates are numerical and typically require a large number of prediction calls because of the prediction loss term $L_{pred}$. These prediction calls can slow the search down significantly and become a bottleneck. We can represent the gradient of the loss term as follows:\begin{equation*} \frac{\partial L_{pred}}{\partial x} = \frac{\partial L_{pred}}{\partial p} \frac{\partial p}{\partial x} \end{equation*}where $L_{pred}$ is the prediction loss term, $p$ the prediction function and $x$ the input features to optimize. For a 28 by 28 MNIST image, the $\partial p/\partial x$ term alone would require a prediction call with batch size 28x28x2 = 1568. By using the prototypes to guide the search, however, we can remove the prediction loss term and only make a single prediction at the end of each gradient update to check whether the predicted class on the proposed counterfactual is different from the original class. We do not necessarily need a Keras or TensorFlow auto-encoder either and can use k-d trees to find the nearest class prototypes. Please check out [this notebook](./cfproto_housing.ipynb) for a practical example. The first example below removes $L_{pred}$ from the loss function to bypass the bottleneck. It illustrates the drastic speed improvements over the black box alternative with numerical gradient evaluation while still producing interpretable counterfactual instances.
###Code
plt.gray()
X = x_test[23].reshape(1, 28, 28, 1)
plt.imshow(X.reshape(28, 28));
c_init = 0. # weight on prediction loss term set to 0
c_steps = 1 # no need to find optimal values for c
# define a black-box model
predict_fn = lambda x: cnn.predict(x)
# initialize explainer, fit and generate counterfactuals
cf = CounterFactualProto(predict_fn, shape, gamma=gamma, theta=theta,
ae_model=ae, enc_model=enc, max_iterations=max_iterations,
feature_range=feature_range, c_init=c_init, c_steps=c_steps)
cf.fit(x_train)
start_time = time()
explanation = cf.explain(X, k=1)
print('Explanation took {:.3f} sec'.format(time() - start_time))
print('Counterfactual prediction: {}'.format(explanation['cf']['class']))
print('Closest prototype class: {}'.format(cf.id_proto))
plt.imshow(explanation['cf']['X'].reshape(28, 28));
###Output
Counterfactual prediction: 6
Closest prototype class: 6
###Markdown
Let us now add the $L_{pred}$ loss term back into the objective function and observe how long it takes to generate a black box counterfactual:
###Code
c_init = 1.
c_steps = 2
# define a black-box model
predict_fn = lambda x: cnn.predict(x)
# initialize explainer, fit and generate counterfactuals
cf = CounterFactualProto(predict_fn, shape, gamma=gamma, theta=theta,
ae_model=ae, enc_model=enc, max_iterations=max_iterations,
feature_range=feature_range, c_init=c_init, c_steps=c_steps)
cf.fit(x_train)
start_time = time()
explanation = cf.explain(X, k=1)
print('Explanation took {:.3f} sec'.format(time() - start_time))
print('Counterfactual prediction: {}'.format(explanation['cf']['class']))
print('Closest prototype class: {}'.format(cf.id_proto))
plt.imshow(explanation['cf']['X'].reshape(28, 28));
###Output
Counterfactual prediction: 6
Closest prototype class: 6
###Markdown
Clean up:
###Code
os.remove('mnist_cnn.h5')
os.remove('mnist_ae.h5')
os.remove('mnist_enc.h5')
###Output
_____no_output_____
###Markdown
Counterfactuals guided by prototypes on MNIST This method is described in the [Interpretable Counterfactual Explanations Guided by Prototypes](https://arxiv.org/abs/1907.02584) paper and can generate counterfactual instances guided by class prototypes. It means that for a certain instance X, the method builds a prototype for each prediction class using either an [autoencoder](https://en.wikipedia.org/wiki/Autoencoder) or [k-d trees](https://en.wikipedia.org/wiki/K-d_tree). The nearest prototype class other than the originally predicted class is then used to guide the counterfactual search. For example, in MNIST the closest class to a 7 could be a 9. As a result, the prototype loss term will try to minimize the distance between the proposed counterfactual and the prototype of a 9. This speeds up the search towards a satisfactory counterfactual by steering it towards an interpretable solution from the start of the optimization. It also helps to avoid out-of-distribution counterfactuals, since the perturbations are driven towards a prototype of another class. The loss function to be optimized is the following: $$Loss = cL_{pred} + \beta L_{1} + L_{2} + L_{AE} + L_{proto}$$The first loss term relates to the model's prediction function, the following 2 terms define the elastic net regularization while the last 2 terms are optional. The aim of $L_{AE}$ is to penalize out-of-distribution counterfactuals while $L_{proto}$ guides the counterfactual to a prototype. When we only have access to the model's prediction function and cannot fully enjoy the benefits of automatic differentiation, the prototypes allow us to drop the prediction function loss term $L_{pred}$ and still generate high-quality counterfactuals. This drastically reduces the number of prediction calls made during the numerical gradient update step and again speeds up the search. Other options include generating counterfactuals for specific classes or including trust score constraints to ensure that the counterfactual is close enough to the newly predicted class compared to the original class. Different use cases are illustrated throughout this notebook.
###Code
import tensorflow as tf
tf.get_logger().setLevel(40) # suppress deprecation messages
tf.compat.v1.disable_v2_behavior() # disable TF2 behaviour as alibi code still relies on TF1 constructs
from tensorflow.keras.layers import Conv2D, Dense, Dropout, Flatten, MaxPooling2D, Input, UpSampling2D
from tensorflow.keras.models import Model, load_model
from tensorflow.keras.utils import to_categorical
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import os
from time import time
from alibi.explainers import CounterFactualProto
print('TF version: ', tf.__version__)
print('Eager execution enabled: ', tf.executing_eagerly()) # False
###Output
TF version: 2.2.0
Eager execution enabled: False
###Markdown
Load and prepare MNIST data
###Code
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
print('x_train shape:', x_train.shape, 'y_train shape:', y_train.shape)
plt.gray()
plt.imshow(x_test[1]);
###Output
x_train shape: (60000, 28, 28) y_train shape: (60000,)
###Markdown
Prepare data: scale, reshape and categorize
###Code
x_train = x_train.astype('float32') / 255
x_test = x_test.astype('float32') / 255
x_train = np.reshape(x_train, x_train.shape + (1,))
x_test = np.reshape(x_test, x_test.shape + (1,))
print('x_train shape:', x_train.shape, 'x_test shape:', x_test.shape)
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)
print('y_train shape:', y_train.shape, 'y_test shape:', y_test.shape)
xmin, xmax = -.5, .5
x_train = ((x_train - x_train.min()) / (x_train.max() - x_train.min())) * (xmax - xmin) + xmin
x_test = ((x_test - x_test.min()) / (x_test.max() - x_test.min())) * (xmax - xmin) + xmin
###Output
_____no_output_____
###Markdown
Define and train CNN model
###Code
def cnn_model():
x_in = Input(shape=(28, 28, 1))
x = Conv2D(filters=32, kernel_size=2, padding='same', activation='relu')(x_in)
x = MaxPooling2D(pool_size=2)(x)
x = Dropout(0.3)(x)
x = Conv2D(filters=64, kernel_size=2, padding='same', activation='relu')(x)
x = MaxPooling2D(pool_size=2)(x)
x = Dropout(0.3)(x)
x = Flatten()(x)
x = Dense(256, activation='relu')(x)
x = Dropout(0.5)(x)
x_out = Dense(10, activation='softmax')(x)
cnn = Model(inputs=x_in, outputs=x_out)
cnn.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
return cnn
cnn = cnn_model()
cnn.fit(x_train, y_train, batch_size=32, epochs=3, verbose=0)
cnn.save('mnist_cnn.h5', save_format='h5')
###Output
_____no_output_____
###Markdown
Evaluate the model on test set
###Code
cnn = load_model('mnist_cnn.h5')
score = cnn.evaluate(x_test, y_test, verbose=0)
print('Test accuracy: ', score[1])
###Output
Test accuracy: 0.9871
###Markdown
Define and train auto-encoder
###Code
def ae_model():
# encoder
x_in = Input(shape=(28, 28, 1))
x = Conv2D(16, (3, 3), activation='relu', padding='same')(x_in)
x = Conv2D(16, (3, 3), activation='relu', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
encoded = Conv2D(1, (3, 3), activation=None, padding='same')(x)
encoder = Model(x_in, encoded)
# decoder
dec_in = Input(shape=(14, 14, 1))
x = Conv2D(16, (3, 3), activation='relu', padding='same')(dec_in)
x = UpSampling2D((2, 2))(x)
x = Conv2D(16, (3, 3), activation='relu', padding='same')(x)
decoded = Conv2D(1, (3, 3), activation=None, padding='same')(x)
decoder = Model(dec_in, decoded)
# autoencoder = encoder + decoder
x_out = decoder(encoder(x_in))
autoencoder = Model(x_in, x_out)
autoencoder.compile(optimizer='adam', loss='mse')
return autoencoder, encoder, decoder
ae, enc, dec = ae_model()
ae.fit(x_train, x_train, batch_size=128, epochs=4, validation_data=(x_test, x_test), verbose=0)
ae.save('mnist_ae.h5', save_format='h5')
enc.save('mnist_enc.h5', save_format='h5')
###Output
_____no_output_____
###Markdown
Compare original with decoded images
###Code
ae = load_model('mnist_ae.h5')
enc = load_model('mnist_enc.h5', compile=False)
decoded_imgs = ae.predict(x_test)
n = 5
plt.figure(figsize=(20, 4))
for i in range(1, n+1):
# display original
ax = plt.subplot(2, n, i)
plt.imshow(x_test[i].reshape(28, 28))
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# display reconstruction
ax = plt.subplot(2, n, i + n)
plt.imshow(decoded_imgs[i].reshape(28, 28))
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
###Output
_____no_output_____
###Markdown
Generate counterfactual guided by the nearest class prototype Original instance:
###Code
X = x_test[0].reshape((1,) + x_test[0].shape)
plt.imshow(X.reshape(28, 28));
###Output
_____no_output_____
###Markdown
Counterfactual parameters:
###Code
shape = (1,) + x_train.shape[1:]
gamma = 100.
theta = 100.
c_init = 1.
c_steps = 2
max_iterations = 1000
feature_range = (x_train.min(),x_train.max())
###Output
_____no_output_____
###Markdown
Run counterfactual:
###Code
# initialize explainer, fit and generate counterfactual
cf = CounterFactualProto(cnn, shape, gamma=gamma, theta=theta,
ae_model=ae, enc_model=enc, max_iterations=max_iterations,
feature_range=feature_range, c_init=c_init, c_steps=c_steps)
start_time = time()
cf.fit(x_train) # find class prototypes
print('Time to find prototypes for each class: {:.3f} sec'.format(time() - start_time))
start_time = time()
explanation = cf.explain(X)
print('Explanation took {:.3f} sec'.format(time() - start_time))
###Output
Time to find prototypes for each class: 14.580 sec
Explanation took 9.269 sec
###Markdown
Results:
###Code
print('Counterfactual prediction: {}'.format(explanation.cf['class']))
print('Closest prototype class: {}'.format(explanation.id_proto))
plt.imshow(explanation.cf['X'].reshape(28, 28));
###Output
Counterfactual prediction: 9
Closest prototype class: 9
###Markdown
The counterfactual starting from a 7 moves towards its closest prototype class: a 9. The evolution of the counterfactual during the first iteration can be seen below:
###Code
iter_cf = 0
print('iteration c {}'.format(iter_cf))
n = len(explanation['all'][iter_cf])
plt.figure(figsize=(20, 4))
for i in range(n):
ax = plt.subplot(1, n+1, i+1)
plt.imshow(explanation['all'][iter_cf][i].reshape(28, 28))
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
###Output
iteration c 0
###Markdown
Typically, the first few iterations already steer the 7 towards a 9, while the later iterations make the counterfactual more sparse. Prototypes defined by the $k$ nearest encoded instances In the above example, the class prototypes are defined by the average encoding of all instances belonging to the specific class. Instead, we can also select only the $k$ nearest encoded instances of a class to the encoded instance to be explained and use the average over those $k$ encodings as the prototype.
###Code
# initialize explainer, fit and generate counterfactuals
cf = CounterFactualProto(cnn, shape, gamma=gamma, theta=theta,
ae_model=ae, enc_model=enc, max_iterations=max_iterations,
feature_range=feature_range, c_init=c_init, c_steps=c_steps)
cf.fit(x_train)
explanation_k1 = cf.explain(X, k=1, k_type='mean')
explanation_k20 = cf.explain(X, k=20, k_type='mean')
###Output
_____no_output_____
###Markdown
Results for $k$ equals 1:
###Code
print('Counterfactual prediction: {}'.format(explanation_k1.cf['class']))
print('Closest prototype class: {}'.format(explanation.id_proto))
plt.imshow(explanation_k1.cf['X'].reshape(28, 28));
###Output
Counterfactual prediction: 9
Closest prototype class: 9
###Markdown
Results for $k$ equals 20:
###Code
print('Counterfactual prediction: {}'.format(explanation_k20.cf['class']))
print('Closest prototype class: {}'.format(explanation.id_proto))
plt.imshow(explanation_k20.cf['X'].reshape(28, 28));
###Output
Counterfactual prediction: 9
Closest prototype class: 9
###Markdown
A lower value of $k$ typically leads to counterfactuals that look more like the original instance and less like an average instance of the counterfactual class. Remove the autoencoder loss term $L_{AE}$ In the previous example, we used both an autoencoder loss term to penalize a counterfactual which falls outside of the training data distribution and an encoder loss term to guide the counterfactual to the nearest prototype class. In the next example we get rid of the autoencoder loss term to speed up the counterfactual search and still generate decent counterfactuals:
###Code
# initialize explainer, fit and generate counterfactuals
cf = CounterFactualProto(cnn, shape, gamma=gamma, theta=theta,
enc_model=enc, max_iterations=max_iterations,
feature_range=feature_range, c_init=c_init, c_steps=c_steps)
cf.fit(x_train)
start_time = time()
explanation = cf.explain(X, k=1)
print('Explanation took {:.3f} sec'.format(time() - start_time))
###Output
Explanation took 6.443 sec
###Markdown
Results:
###Code
print('Counterfactual prediction: {}'.format(explanation.cf['class']))
print('Closest prototype class: {}'.format(explanation.id_proto))
plt.imshow(explanation.cf['X'].reshape(28, 28));
###Output
Counterfactual prediction: 9
Closest prototype class: 9
###Markdown
Specify prototype classes For multi-class predictions, we might be interested in generating counterfactuals for certain classes while avoiding others. The following example illustrates how to do this:
###Code
X = x_test[12].reshape((1,) + x_test[1].shape)
plt.imshow(X.reshape(28, 28));
# initialize explainer, fit and generate counterfactuals
cf = CounterFactualProto(cnn, shape, gamma=gamma, theta=theta,
ae_model=ae, enc_model=enc, max_iterations=max_iterations,
feature_range=feature_range, c_init=c_init, c_steps=c_steps)
cf.fit(x_train)
explanation_1 = cf.explain(X, k=5, k_type='mean')
proto_1 = explanation_1.id_proto
explanation_2 = cf.explain(X, k=5, k_type='mean', target_class=[7])
proto_2 = explanation_2.id_proto
###Output
_____no_output_____
###Markdown
The closest class to the 9 is 4. This is evident from the first counterfactual below. For the second counterfactual, we specified that the prototype class used in the search should be a 7. As a result, a counterfactual 7 instead of a 4 is generated.
###Code
print('Counterfactual prediction: {}'.format(explanation_1.cf['class']))
print('Closest prototype class: {}'.format(proto_1))
plt.imshow(explanation_1.cf['X'].reshape(28, 28));
print('Counterfactual prediction: {}'.format(explanation_2.cf['class']))
print('Closest prototype class: {}'.format(proto_2))
plt.imshow(explanation_2.cf['X'].reshape(28, 28));
###Output
Counterfactual prediction: 7
Closest prototype class: 7
###Markdown
Speed up the counterfactual search by removing the predict function loss term We can also remove the prediction loss term and still obtain an interpretable counterfactual. This is especially relevant for fully black box models. When we provide the counterfactual search method with a Keras or TensorFlow model, it is incorporated in the TensorFlow graph and evaluated using automatic differentiation. However, if we only have access to the model's prediction function, the gradient updates are numerical and typically require a large number of prediction calls because of the prediction loss term $L_{pred}$. These prediction calls can slow the search down significantly and become a bottleneck. We can represent the gradient of the loss term as follows:\begin{equation*} \frac{\partial L_{pred}}{\partial x} = \frac{\partial L_{pred}}{\partial p} \frac{\partial p}{\partial x} \end{equation*}where $L_{pred}$ is the prediction loss term, $p$ the prediction function and $x$ the input features to optimize. For a 28 by 28 MNIST image, the $\partial p/\partial x$ term alone would require a prediction call with batch size 28x28x2 = 1568. By using the prototypes to guide the search, however, we can remove the prediction loss term and only make a single prediction at the end of each gradient update to check whether the predicted class on the proposed counterfactual is different from the original class. We do not necessarily need a Keras or TensorFlow auto-encoder either and can use k-d trees to find the nearest class prototypes. Please check out [this notebook](./cfproto_housing.ipynb) for a practical example. The first example below removes $L_{pred}$ from the loss function to bypass the bottleneck. It illustrates the drastic speed improvements over the black box alternative with numerical gradient evaluation while still producing interpretable counterfactual instances.
###Code
plt.gray()
X = x_test[23].reshape(1, 28, 28, 1)
plt.imshow(X.reshape(28, 28));
c_init = 0. # weight on prediction loss term set to 0
c_steps = 1 # no need to find optimal values for c
# define a black-box model
predict_fn = lambda x: cnn.predict(x)
# initialize explainer, fit and generate counterfactuals
cf = CounterFactualProto(predict_fn, shape, gamma=gamma, theta=theta,
ae_model=ae, enc_model=enc, max_iterations=max_iterations,
feature_range=feature_range, c_init=c_init, c_steps=c_steps)
cf.fit(x_train)
start_time = time()
explanation = cf.explain(X, k=1)
print('Explanation took {:.3f} sec'.format(time() - start_time))
print('Counterfactual prediction: {}'.format(explanation.cf['class']))
print('Closest prototype class: {}'.format(explanation.id_proto))
plt.imshow(explanation.cf['X'].reshape(28, 28));
###Output
Counterfactual prediction: 6
Closest prototype class: 6
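###Markdown
To get a feel for the numbers above, here is a minimal sketch (an illustration added for clarity, not part of the original search code) of the perturbation batch a single central-difference gradient needs; `x` stands for any 28 by 28 input:
###Code
# Sketch: count the prediction calls needed for one numerical gradient of a
# scalar loss with respect to a 28x28 image via central differences.
import numpy as np

x = np.zeros((28, 28))
eps = 1e-4
flat = x.reshape(-1)
# one +eps and one -eps copy of the input per pixel
batch = np.repeat(flat[None, :], 2 * flat.size, axis=0)
idx = np.arange(flat.size)
batch[2 * idx, idx] += eps       # forward perturbations
batch[2 * idx + 1, idx] -= eps   # backward perturbations
print(batch.shape[0])            # 1568 = 28 * 28 * 2 prediction calls
###Output
1568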
###Markdown
Let us now add the $L_{pred}$ loss term back into the objective function and observe how long it takes to generate a black-box counterfactual:
###Code
c_init = 1.
c_steps = 2
# define a black-box model
predict_fn = lambda x: cnn.predict(x)
# initialize explainer, fit and generate counterfactuals
cf = CounterFactualProto(predict_fn, shape, gamma=gamma, theta=theta,
ae_model=ae, enc_model=enc, max_iterations=max_iterations,
feature_range=feature_range, c_init=c_init, c_steps=c_steps)
cf.fit(x_train)
start_time = time()
explanation = cf.explain(X, k=1)
print('Explanation took {:.3f} sec'.format(time() - start_time))
print('Counterfactual prediction: {}'.format(explanation.cf['class']))
print('Closest prototype class: {}'.format(explanation.id_proto))
plt.imshow(explanation.cf['X'].reshape(28, 28));
###Output
Counterfactual prediction: 6
Closest prototype class: 6
###Markdown
Clean up:
###Code
os.remove('mnist_cnn.h5')
os.remove('mnist_ae.h5')
os.remove('mnist_enc.h5')
###Output
_____no_output_____
###Markdown
Counterfactuals guided by prototypes on MNIST This method is described in the [Interpretable Counterfactual Explanations Guided by Prototypes](https://arxiv.org/abs/1907.02584) paper and can generate counterfactual instances guided by class prototypes. It means that for a certain instance X, the method builds a prototype for each prediction class using either an [autoencoder](https://en.wikipedia.org/wiki/Autoencoder) or [k-d trees](https://en.wikipedia.org/wiki/K-d_tree). The nearest prototype class other than the originally predicted class is then used to guide the counterfactual search. For example, in MNIST the closest class to a 7 could be a 9. As a result, the prototype loss term will try to minimize the distance between the proposed counterfactual and the prototype of a 9. This speeds up the search towards a satisfactory counterfactual by steering it towards an interpretable solution from the start of the optimization. It also helps to avoid out-of-distribution counterfactuals, with the perturbations driven to a prototype of another class. The loss function to be optimized is the following: $Loss = c\,L_{pred} + \beta L_{1} + L_{2} + L_{AE} + L_{proto}$ The first loss term relates to the model's prediction function, the following 2 terms define the elastic net regularization, and the last 2 terms are optional. The aim of $L_{AE}$ is to penalize out-of-distribution counterfactuals while $L_{proto}$ guides the counterfactual to a prototype. When we only have access to the model's prediction function and cannot fully enjoy the benefits of automatic differentiation, the prototypes allow us to drop the prediction function loss term $L_{pred}$ and still generate high-quality counterfactuals. This drastically reduces the number of prediction calls made during the numerical gradient update step and again speeds up the search. Other options include generating counterfactuals for specific classes or including trust score constraints to ensure that the counterfactual is close enough to the newly predicted class compared to the original class. Different use cases are illustrated throughout this notebook.
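For concreteness (a sketch following the paper's notation, where $\delta$ denotes the perturbation added to the original instance $x_0$): the elastic net part of the objective can be written as $\beta\,\|\delta\|_1 + \|\delta\|_2^2$, which encourages counterfactuals that make few and small changes to $x_0$.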
###Code
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.ERROR) # suppress deprecation messages
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Conv2D, Dense, Dropout, Flatten, MaxPooling2D, Input, UpSampling2D
from tensorflow.keras.models import Model, load_model
from tensorflow.keras.utils import to_categorical
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import os
from time import time
from alibi.explainers import CounterFactualProto
###Output
_____no_output_____
###Markdown
Load and prepare MNIST data
###Code
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
print('x_train shape:', x_train.shape, 'y_train shape:', y_train.shape)
plt.gray()
plt.imshow(x_test[1]);
###Output
x_train shape: (60000, 28, 28) y_train shape: (60000,)
###Markdown
Prepare data: scale, reshape and categorize
###Code
x_train = x_train.astype('float32') / 255
x_test = x_test.astype('float32') / 255
x_train = np.reshape(x_train, x_train.shape + (1,))
x_test = np.reshape(x_test, x_test.shape + (1,))
print('x_train shape:', x_train.shape, 'x_test shape:', x_test.shape)
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)
print('y_train shape:', y_train.shape, 'y_test shape:', y_test.shape)
xmin, xmax = -.5, .5
x_train = ((x_train - x_train.min()) / (x_train.max() - x_train.min())) * (xmax - xmin) + xmin
x_test = ((x_test - x_test.min()) / (x_test.max() - x_test.min())) * (xmax - xmin) + xmin
###Output
_____no_output_____
###Markdown
Define and train CNN model
###Code
def cnn_model():
x_in = Input(shape=(28, 28, 1))
x = Conv2D(filters=32, kernel_size=2, padding='same', activation='relu')(x_in)
x = MaxPooling2D(pool_size=2)(x)
x = Dropout(0.3)(x)
x = Conv2D(filters=64, kernel_size=2, padding='same', activation='relu')(x)
x = MaxPooling2D(pool_size=2)(x)
x = Dropout(0.3)(x)
x = Flatten()(x)
x = Dense(256, activation='relu')(x)
x = Dropout(0.5)(x)
x_out = Dense(10, activation='softmax')(x)
cnn = Model(inputs=x_in, outputs=x_out)
cnn.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
return cnn
cnn = cnn_model()
cnn.fit(x_train, y_train, batch_size=32, epochs=3, verbose=0)
cnn.save('mnist_cnn.h5', save_format='h5')
###Output
_____no_output_____
###Markdown
Evaluate the model on test set
###Code
cnn = load_model('mnist_cnn.h5')
score = cnn.evaluate(x_test, y_test, verbose=0)
print('Test accuracy: ', score[1])
###Output
Test accuracy: 0.9887
###Markdown
Define and train auto-encoder
###Code
def ae_model():
# encoder
x_in = Input(shape=(28, 28, 1))
x = Conv2D(16, (3, 3), activation='relu', padding='same')(x_in)
x = Conv2D(16, (3, 3), activation='relu', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
encoded = Conv2D(1, (3, 3), activation=None, padding='same')(x)
encoder = Model(x_in, encoded)
# decoder
dec_in = Input(shape=(14, 14, 1))
x = Conv2D(16, (3, 3), activation='relu', padding='same')(dec_in)
x = UpSampling2D((2, 2))(x)
x = Conv2D(16, (3, 3), activation='relu', padding='same')(x)
decoded = Conv2D(1, (3, 3), activation=None, padding='same')(x)
decoder = Model(dec_in, decoded)
# autoencoder = encoder + decoder
x_out = decoder(encoder(x_in))
autoencoder = Model(x_in, x_out)
autoencoder.compile(optimizer='adam', loss='mse')
return autoencoder, encoder, decoder
ae, enc, dec = ae_model()
ae.fit(x_train, x_train, batch_size=128, epochs=4, validation_data=(x_test, x_test), verbose=0)
ae.save('mnist_ae.h5', save_format='h5')
enc.save('mnist_enc.h5', save_format='h5')
###Output
_____no_output_____
###Markdown
Compare original with decoded images
###Code
ae = load_model('mnist_ae.h5')
enc = load_model('mnist_enc.h5', compile=False)
decoded_imgs = ae.predict(x_test)
n = 5
plt.figure(figsize=(20, 4))
for i in range(1, n+1):
# display original
ax = plt.subplot(2, n, i)
plt.imshow(x_test[i].reshape(28, 28))
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# display reconstruction
ax = plt.subplot(2, n, i + n)
plt.imshow(decoded_imgs[i].reshape(28, 28))
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
###Output
_____no_output_____
###Markdown
Generate counterfactual guided by the nearest class prototype Original instance:
###Code
X = x_test[0].reshape((1,) + x_test[0].shape)
plt.imshow(X.reshape(28, 28));
###Output
_____no_output_____
###Markdown
Counterfactual parameters:
###Code
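# Hedged annotations (based on the alibi docs; verify against the installed version):
# gamma weights the autoencoder loss L_AE and theta the prototype loss L_proto;
# c_init / c_steps set the initial weight on the prediction loss L_pred and how
# many times it is updated; feature_range clips the perturbed instance.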
shape = (1,) + x_train.shape[1:]
gamma = 100.
theta = 100.
c_init = 1.
c_steps = 2
max_iterations = 1000
feature_range = (x_train.min(),x_train.max())
###Output
_____no_output_____
###Markdown
Run counterfactual:
###Code
# initialize explainer, fit and generate counterfactual
cf = CounterFactualProto(cnn, shape, gamma=gamma, theta=theta,
ae_model=ae, enc_model=enc, max_iterations=max_iterations,
feature_range=feature_range, c_init=c_init, c_steps=c_steps)
start_time = time()
cf.fit(x_train) # find class prototypes
print('Time to find prototypes for each class: {:.3f} sec'.format(time() - start_time))
start_time = time()
explanation = cf.explain(X)
print('Explanation took {:.3f} sec'.format(time() - start_time))
###Output
Time to find prototypes for each class: 10.619 sec
Explanation took 8.659 sec
###Markdown
Results:
###Code
print('Counterfactual prediction: {}'.format(explanation.cf['class']))
print('Closest prototype class: {}'.format(explanation.id_proto))
plt.imshow(explanation.cf['X'].reshape(28, 28));
###Output
Counterfactual prediction: 9
Closest prototype class: 9
###Markdown
The counterfactual starting from a 7 moves towards its closest prototype class: a 9. The evolution of the counterfactual during the first iteration can be seen below:
###Code
iter_cf = 0
print('iteration c {}'.format(iter_cf))
n = len(explanation['all'][iter_cf])
plt.figure(figsize=(20, 4))
for i in range(n):
ax = plt.subplot(1, n+1, i+1)
plt.imshow(explanation['all'][iter_cf][i].reshape(28, 28))
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
###Output
iteration c 0
###Markdown
Typically, the first few iterations already steer the 7 towards a 9, while the later iterations make the counterfactual more sparse. Prototypes defined by the $k$ nearest encoded instances In the above example, the class prototypes are defined by the average encoding of all instances belonging to that class. Instead, we can also select only the $k$ encoded instances of a class that are nearest to the encoding of the instance to be explained, and use the average over those $k$ encodings as the prototype.
###Code
# initialize explainer, fit and generate counterfactuals
cf = CounterFactualProto(cnn, shape, gamma=gamma, theta=theta,
ae_model=ae, enc_model=enc, max_iterations=max_iterations,
feature_range=feature_range, c_init=c_init, c_steps=c_steps)
cf.fit(x_train)
explanation_k1 = cf.explain(X, k=1, k_type='mean')
explanation_k20 = cf.explain(X, k=20, k_type='mean')
###Output
_____no_output_____
###Markdown
Results for $k$ equals 1:
###Code
print('Counterfactual prediction: {}'.format(explanation_k1.cf['class']))
print('Closest prototype class: {}'.format(explanation_k1.id_proto))
plt.imshow(explanation_k1.cf['X'].reshape(28, 28));
###Output
Counterfactual prediction: 9
Closest prototype class: 9
###Markdown
Results for $k$ equals 20:
###Code
print('Counterfactual prediction: {}'.format(explanation_k20.cf['class']))
print('Closest prototype class: {}'.format(explanation_k20.id_proto))
plt.imshow(explanation_k20.cf['X'].reshape(28, 28));
###Output
Counterfactual prediction: 9
Closest prototype class: 9
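###Markdown
To make the role of $k$ concrete, here is a minimal sketch (illustrative only, not the library's internal code) of how such a prototype could be computed with the encoder `enc` defined above:
###Code
# Hypothetical helper: a class prototype as the mean of the k encodings of that
# class that lie closest, in latent space, to the encoding of the instance X.
import numpy as np

def knn_prototype(enc, X, X_class, k=5):
    e_x = enc.predict(X).reshape(1, -1)                   # encode the instance
    e_c = enc.predict(X_class).reshape(len(X_class), -1)  # encode one class
    dist = np.linalg.norm(e_c - e_x, axis=1)              # latent distances
    nearest = np.argsort(dist)[:k]                        # k closest encodings
    return e_c[nearest].mean(axis=0)                      # their mean is the prototype
###Output
_____no_output_____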
###Markdown
A lower value of $k$ typically leads to counterfactuals that look more like the original instance and less like an average instance of the counterfactual class. Remove the autoencoder loss term $L_{AE}$ In the previous example, we used both an autoencoder loss term to penalize counterfactuals that fall outside of the training data distribution and an encoder loss term to guide the counterfactual to the nearest prototype class. In the next example we drop the autoencoder loss term to speed up the counterfactual search and still generate decent counterfactuals:
###Code
# initialize explainer, fit and generate counterfactuals
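# note: ae_model is omitted below, so the autoencoder loss term L_AE drops out of the objective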
cf = CounterFactualProto(cnn, shape, gamma=gamma, theta=theta,
enc_model=enc, max_iterations=max_iterations,
feature_range=feature_range, c_init=c_init, c_steps=c_steps)
cf.fit(x_train)
start_time = time()
explanation = cf.explain(X, k=1)
print('Explanation took {:.3f} sec'.format(time() - start_time))
###Output
Explanation took 6.284 sec
###Markdown
Counterfactuals guided by prototypes on MNIST This method is described in the [Interpretable Counterfactual Explanations Guided by Prototypes](https://arxiv.org/abs/1907.02584) paper and can generate counterfactual instances guided by class prototypes. It means that for a certain instance X, the method builds a prototype for each prediction class using either an [autoencoder](https://en.wikipedia.org/wiki/Autoencoder) or [k-d trees](https://en.wikipedia.org/wiki/K-d_tree). The nearest prototype class other than the originally predicted class is then used to guide the counterfactual search. For example, in MNIST the closest class to a 7 could be a 9. As a result, the prototype loss term will try to minimize the distance between the proposed counterfactual and the prototype of a 9. This speeds up the search towards a satisfactory counterfactual by steering it towards an interpretable solution from the start of the optimization. It also helps to avoid out-of-distribution counterfactuals, with the perturbations driven to a prototype of another class. The loss function to be optimized is the following: $Loss = c\,L_{pred} + \beta L_{1} + L_{2} + L_{AE} + L_{proto}$ The first loss term relates to the model's prediction function, the following 2 terms define the elastic net regularization, and the last 2 terms are optional. The aim of $L_{AE}$ is to penalize out-of-distribution counterfactuals while $L_{proto}$ guides the counterfactual to a prototype. When we only have access to the model's prediction function and cannot fully enjoy the benefits of automatic differentiation, the prototypes allow us to drop the prediction function loss term $L_{pred}$ and still generate high-quality counterfactuals. This drastically reduces the number of prediction calls made during the numerical gradient update step and again speeds up the search. Other options include generating counterfactuals for specific classes or including trust score constraints to ensure that the counterfactual is close enough to the newly predicted class compared to the original class. Different use cases are illustrated throughout this notebook.
###Code
import tensorflow as tf
tf.get_logger().setLevel(40) # suppress deprecation messages
tf.compat.v1.disable_v2_behavior() # disable TF2 behaviour as alibi code still relies on TF1 constructs
from tensorflow.keras.layers import Conv2D, Dense, Dropout, Flatten, MaxPooling2D, Input, UpSampling2D
from tensorflow.keras.models import Model, load_model
from tensorflow.keras.utils import to_categorical
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import os
from time import time
from alibi.explainers import CounterFactualProto
print('TF version: ', tf.__version__)
print('Eager execution enabled: ', tf.executing_eagerly()) # False
###Output
TF version: 2.2.0
Eager execution enabled: False
###Markdown
Load and prepare MNIST data
###Code
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
print('x_train shape:', x_train.shape, 'y_train shape:', y_train.shape)
plt.gray()
plt.imshow(x_test[1]);
###Output
x_train shape: (60000, 28, 28) y_train shape: (60000,)
###Markdown
Prepare data: scale, reshape and categorize
###Code
x_train = x_train.astype('float32') / 255
x_test = x_test.astype('float32') / 255
x_train = np.reshape(x_train, x_train.shape + (1,))
x_test = np.reshape(x_test, x_test.shape + (1,))
print('x_train shape:', x_train.shape, 'x_test shape:', x_test.shape)
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)
print('y_train shape:', y_train.shape, 'y_test shape:', y_test.shape)
xmin, xmax = -.5, .5
x_train = ((x_train - x_train.min()) / (x_train.max() - x_train.min())) * (xmax - xmin) + xmin
x_test = ((x_test - x_test.min()) / (x_test.max() - x_test.min())) * (xmax - xmin) + xmin
###Output
_____no_output_____
###Markdown
Define and train CNN model
###Code
def cnn_model():
x_in = Input(shape=(28, 28, 1))
x = Conv2D(filters=32, kernel_size=2, padding='same', activation='relu')(x_in)
x = MaxPooling2D(pool_size=2)(x)
x = Dropout(0.3)(x)
x = Conv2D(filters=64, kernel_size=2, padding='same', activation='relu')(x)
x = MaxPooling2D(pool_size=2)(x)
x = Dropout(0.3)(x)
x = Flatten()(x)
x = Dense(256, activation='relu')(x)
x = Dropout(0.5)(x)
x_out = Dense(10, activation='softmax')(x)
cnn = Model(inputs=x_in, outputs=x_out)
cnn.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
return cnn
cnn = cnn_model()
cnn.fit(x_train, y_train, batch_size=32, epochs=3, verbose=0)
cnn.save('mnist_cnn.h5', save_format='h5')
###Output
_____no_output_____
###Markdown
Evaluate the model on test set
###Code
cnn = load_model('mnist_cnn.h5')
score = cnn.evaluate(x_test, y_test, verbose=0)
print('Test accuracy: ', score[1])
###Output
Test accuracy: 0.9871
###Markdown
Define and train auto-encoder
###Code
def ae_model():
# encoder
x_in = Input(shape=(28, 28, 1))
x = Conv2D(16, (3, 3), activation='relu', padding='same')(x_in)
x = Conv2D(16, (3, 3), activation='relu', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
encoded = Conv2D(1, (3, 3), activation=None, padding='same')(x)
encoder = Model(x_in, encoded)
# decoder
dec_in = Input(shape=(14, 14, 1))
x = Conv2D(16, (3, 3), activation='relu', padding='same')(dec_in)
x = UpSampling2D((2, 2))(x)
x = Conv2D(16, (3, 3), activation='relu', padding='same')(x)
decoded = Conv2D(1, (3, 3), activation=None, padding='same')(x)
decoder = Model(dec_in, decoded)
# autoencoder = encoder + decoder
x_out = decoder(encoder(x_in))
autoencoder = Model(x_in, x_out)
autoencoder.compile(optimizer='adam', loss='mse')
return autoencoder, encoder, decoder
ae, enc, dec = ae_model()
ae.fit(x_train, x_train, batch_size=128, epochs=4, validation_data=(x_test, x_test), verbose=0)
ae.save('mnist_ae.h5', save_format='h5')
enc.save('mnist_enc.h5', save_format='h5')
###Output
_____no_output_____
###Markdown
Compare original with decoded images
###Code
ae = load_model('mnist_ae.h5')
enc = load_model('mnist_enc.h5', compile=False)
decoded_imgs = ae.predict(x_test)
n = 5
plt.figure(figsize=(20, 4))
for i in range(1, n+1):
# display original
ax = plt.subplot(2, n, i)
plt.imshow(x_test[i].reshape(28, 28))
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# display reconstruction
ax = plt.subplot(2, n, i + n)
plt.imshow(decoded_imgs[i].reshape(28, 28))
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
###Output
_____no_output_____
###Markdown
Generate counterfactual guided by the nearest class prototype Original instance:
###Code
X = x_test[0].reshape((1,) + x_test[0].shape)
plt.imshow(X.reshape(28, 28));
###Output
_____no_output_____
###Markdown
Counterfactual parameters:
###Code
shape = (1,) + x_train.shape[1:]
gamma = 100.
theta = 100.
c_init = 1.
c_steps = 2
max_iterations = 1000
feature_range = (x_train.min(),x_train.max())
###Output
_____no_output_____
###Markdown
Run counterfactual:
###Code
# initialize explainer, fit and generate counterfactual
cf = CounterFactualProto(cnn, shape, gamma=gamma, theta=theta,
ae_model=ae, enc_model=enc, max_iterations=max_iterations,
feature_range=feature_range, c_init=c_init, c_steps=c_steps)
start_time = time()
cf.fit(x_train) # find class prototypes
print('Time to find prototypes for each class: {:.3f} sec'.format(time() - start_time))
start_time = time()
explanation = cf.explain(X)
print('Explanation took {:.3f} sec'.format(time() - start_time))
###Output
Time to find prototypes for each class: 14.580 sec
Explanation took 9.269 sec
###Markdown
Results:
###Code
print('Counterfactual prediction: {}'.format(explanation.cf['class']))
print(f'Closest prototype class: {explanation.id_proto}')
plt.imshow(explanation.cf['X'].reshape(28, 28));
###Output
Counterfactual prediction: 9
Closest prototype class: 9
###Markdown
The counterfactual starting from a 7 moves towards its closest prototype class: a 9. The evolution of the counterfactual during the first iteration can be seen below:
###Code
iter_cf = 0
print(f'iteration c {iter_cf}')
n = len(explanation['all'][iter_cf])
plt.figure(figsize=(20, 4))
for i in range(n):
ax = plt.subplot(1, n+1, i+1)
plt.imshow(explanation['all'][iter_cf][i].reshape(28, 28))
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
###Output
iteration c 0
###Markdown
Typically, the first few iterations already steer the 7 towards a 9, while the later iterations make the counterfactual more sparse. Prototypes defined by the $k$ nearest encoded instances In the above example, the class prototypes are defined by the average encoding of all instances belonging to that class. Instead, we can also select only the $k$ encoded instances of a class that are nearest to the encoding of the instance to be explained, and use the average over those $k$ encodings as the prototype.
###Code
# initialize explainer, fit and generate counterfactuals
cf = CounterFactualProto(cnn, shape, gamma=gamma, theta=theta,
ae_model=ae, enc_model=enc, max_iterations=max_iterations,
feature_range=feature_range, c_init=c_init, c_steps=c_steps)
cf.fit(x_train)
explanation_k1 = cf.explain(X, k=1, k_type='mean')
explanation_k20 = cf.explain(X, k=20, k_type='mean')
###Output
_____no_output_____
###Markdown
Results for $k$ equals 1:
###Code
print('Counterfactual prediction: {}'.format(explanation_k1.cf['class']))
print(f'Closest prototype class: {explanation_k1.id_proto}')
plt.imshow(explanation_k1.cf['X'].reshape(28, 28));
###Output
Counterfactual prediction: 9
Closest prototype class: 9
###Markdown
Results for $k$ equals 20:
###Code
print('Counterfactual prediction: {}'.format(explanation_k20.cf['class']))
print(f'Closest prototype class: {explanation_k20.id_proto}')
plt.imshow(explanation_k20.cf['X'].reshape(28, 28));
###Output
Counterfactual prediction: 9
Closest prototype class: 9
###Markdown
A lower value of $k$ typically leads to counterfactuals that look more like the original instance and less like an average instance of the counterfactual class. Remove the autoencoder loss term $L_{AE}$ In the previous example, we used both an autoencoder loss term to penalize counterfactuals that fall outside of the training data distribution and an encoder loss term to guide the counterfactual to the nearest prototype class. In the next example we drop the autoencoder loss term to speed up the counterfactual search and still generate decent counterfactuals:
###Code
# initialize explainer, fit and generate counterfactuals
cf = CounterFactualProto(cnn, shape, gamma=gamma, theta=theta,
enc_model=enc, max_iterations=max_iterations,
feature_range=feature_range, c_init=c_init, c_steps=c_steps)
cf.fit(x_train)
start_time = time()
explanation = cf.explain(X, k=1)
print('Explanation took {:.3f} sec'.format(time() - start_time))
###Output
Explanation took 6.443 sec
###Markdown
Results:
###Code
print('Counterfactual prediction: {}'.format(explanation.cf['class']))
print(f'Closest prototype class: {explanation.id_proto}')
plt.imshow(explanation.cf['X'].reshape(28, 28));
###Output
Counterfactual prediction: 9
Closest prototype class: 9
###Markdown
Specify prototype classes For multi-class predictions, we might be interested in generating counterfactuals for certain classes while avoiding others. The following example illustrates how to do this:
###Code
X = x_test[12].reshape((1,) + x_test[12].shape)
plt.imshow(X.reshape(28, 28));
# initialize explainer, fit and generate counterfactuals
cf = CounterFactualProto(cnn, shape, gamma=gamma, theta=theta,
ae_model=ae, enc_model=enc, max_iterations=max_iterations,
feature_range=feature_range, c_init=c_init, c_steps=c_steps)
cf.fit(x_train)
explanation_1 = cf.explain(X, k=5, k_type='mean')
proto_1 = explanation_1.id_proto
explanation_2 = cf.explain(X, k=5, k_type='mean', target_class=[7])
proto_2 = explanation_2.id_proto
###Output
_____no_output_____
###Markdown
The closest class to the 9 is a 4, as is evident from the first counterfactual below. For the second counterfactual, we specified that the prototype class used in the search should be a 7. As a result, a counterfactual 7 instead of a 4 is generated.
###Code
print('Counterfactual prediction: {}'.format(explanation_1.cf['class']))
print(f'Closest prototype class: {proto_1}')
plt.imshow(explanation_1.cf['X'].reshape(28, 28));
print('Counterfactual prediction: {}'.format(explanation_2.cf['class']))
print(f'Closest prototype class: {proto_2}')
plt.imshow(explanation_2.cf['X'].reshape(28, 28));
###Output
Counterfactual prediction: 7
Closest prototype class: 7
###Markdown
Speed up the counterfactual search by removing the predict function loss term We can also remove the prediction loss term and still obtain an interpretable counterfactual. This is especially relevant for fully black-box models. When we provide the counterfactual search method with a Keras or TensorFlow model, it is incorporated in the TensorFlow graph and evaluated using automatic differentiation. However, if we only have access to the model's prediction function, the gradient updates are numerical and typically require a large number of prediction calls because of the prediction loss term $L_{pred}$. These prediction calls can slow the search down significantly and become a bottleneck. We can represent the gradient of the loss term as follows:\begin{equation*} \frac{\partial L_{pred}}{\partial x} = \frac{\partial L_{pred}}{\partial p} \frac{\partial p}{\partial x} \end{equation*}where $L_{pred}$ is the prediction loss term, $p$ the prediction function and $x$ the input features to optimize. For a 28 by 28 MNIST image, the $\partial p/\partial x$ term alone would require a prediction call with batch size 28x28x2 = 1568. By using the prototypes to guide the search, however, we can remove the prediction loss term and only make a single prediction at the end of each gradient update to check whether the predicted class of the proposed counterfactual differs from the original class. We do not necessarily need a Keras or TensorFlow auto-encoder either and can use k-d trees to find the nearest class prototypes. Please check out [this notebook](./cfproto_housing.ipynb) for a practical example. The first example below removes $L_{pred}$ from the loss function to bypass the bottleneck. It illustrates the drastic speed improvements over the black-box alternative with numerical gradient evaluation while still producing interpretable counterfactual instances.
###Code
plt.gray()
X = x_test[23].reshape(1, 28, 28, 1)
plt.imshow(X.reshape(28, 28));
c_init = 0. # weight on prediction loss term set to 0
c_steps = 1 # no need to find optimal values for c
# define a black-box model
predict_fn = lambda x: cnn.predict(x)
# initialize explainer, fit and generate counterfactuals
cf = CounterFactualProto(predict_fn, shape, gamma=gamma, theta=theta,
ae_model=ae, enc_model=enc, max_iterations=max_iterations,
feature_range=feature_range, c_init=c_init, c_steps=c_steps)
cf.fit(x_train)
start_time = time()
explanation = cf.explain(X, k=1)
print('Explanation took {:.3f} sec'.format(time() - start_time))
print('Counterfactual prediction: {}'.format(explanation.cf['class']))
print(f'Closest prototype class: {explanation.id_proto}')
plt.imshow(explanation.cf['X'].reshape(28, 28));
###Output
Counterfactual prediction: 6
Closest prototype class: 6
###Markdown
Let us now add the $L_{pred}$ loss term back into the objective function and observe how long it takes to generate a black-box counterfactual:
###Code
c_init = 1.
c_steps = 2
# define a black-box model
predict_fn = lambda x: cnn.predict(x)
# initialize explainer, fit and generate counterfactuals
cf = CounterFactualProto(predict_fn, shape, gamma=gamma, theta=theta,
ae_model=ae, enc_model=enc, max_iterations=max_iterations,
feature_range=feature_range, c_init=c_init, c_steps=c_steps)
cf.fit(x_train)
start_time = time()
explanation = cf.explain(X, k=1)
print('Explanation took {:.3f} sec'.format(time() - start_time))
print('Counterfactual prediction: {}'.format(explanation.cf['class']))
print(f'Closest prototype class: {explanation.id_proto}')
plt.imshow(explanation.cf['X'].reshape(28, 28));
###Output
Counterfactual prediction: 6
Closest prototype class: 6
###Markdown
Clean up:
###Code
os.remove('mnist_cnn.h5')
os.remove('mnist_ae.h5')
os.remove('mnist_enc.h5')
###Output
_____no_output_____
###Markdown
Counterfactuals guided by prototypes on MNIST This method is described in the [Interpretable Counterfactual Explanations Guided by Prototypes](https://arxiv.org/abs/1907.02584) paper and can generate counterfactual instances guided by class prototypes. It means that for a certain instance X, the method builds a prototype for each prediction class using either an [autoencoder](https://en.wikipedia.org/wiki/Autoencoder) or [k-d trees](https://en.wikipedia.org/wiki/K-d_tree). The nearest prototype class other than the originally predicted class is then used to guide the counterfactual search. For example, in MNIST the closest class to a 7 could be a 9. As a result, the prototype loss term will try to minimize the distance between the proposed counterfactual and the prototype of a 9. This speeds up the search towards a satisfactory counterfactual by steering it towards an interpretable solution from the start of the optimization. It also helps to avoid out-of-distribution counterfactuals, with the perturbations driven to a prototype of another class. The loss function to be optimized is the following: $Loss = c\,L_{pred} + \beta L_{1} + L_{2} + L_{AE} + L_{proto}$ The first loss term relates to the model's prediction function, the following 2 terms define the elastic net regularization, and the last 2 terms are optional. The aim of $L_{AE}$ is to penalize out-of-distribution counterfactuals while $L_{proto}$ guides the counterfactual to a prototype. When we only have access to the model's prediction function and cannot fully enjoy the benefits of automatic differentiation, the prototypes allow us to drop the prediction function loss term $L_{pred}$ and still generate high-quality counterfactuals. This drastically reduces the number of prediction calls made during the numerical gradient update step and again speeds up the search. Other options include generating counterfactuals for specific classes or including trust score constraints to ensure that the counterfactual is close enough to the newly predicted class compared to the original class. Different use cases are illustrated throughout this notebook.
###Code
import keras
from keras import backend as K
from keras.layers import Conv2D, Dense, Dropout, Flatten, MaxPooling2D, Input, UpSampling2D
from keras.models import Model, load_model
from keras.utils import to_categorical
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import os
import tensorflow as tf
from time import time
from alibi.explainers import CounterFactualProto
###Output
Using TensorFlow backend.
###Markdown
Load and prepare MNIST data
###Code
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
print('x_train shape:', x_train.shape, 'y_train shape:', y_train.shape)
plt.gray()
plt.imshow(x_test[1]);
###Output
x_train shape: (60000, 28, 28) y_train shape: (60000,)
###Markdown
Prepare data: scale, reshape and categorize
###Code
x_train = x_train.astype('float32') / 255
x_test = x_test.astype('float32') / 255
x_train = np.reshape(x_train, x_train.shape + (1,))
x_test = np.reshape(x_test, x_test.shape + (1,))
print('x_train shape:', x_train.shape, 'x_test shape:', x_test.shape)
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)
print('y_train shape:', y_train.shape, 'y_test shape:', y_test.shape)
xmin, xmax = -.5, .5
x_train = ((x_train - x_train.min()) / (x_train.max() - x_train.min())) * (xmax - xmin) + xmin
x_test = ((x_test - x_test.min()) / (x_test.max() - x_test.min())) * (xmax - xmin) + xmin
###Output
_____no_output_____
###Markdown
Define and train CNN model
###Code
def cnn_model():
x_in = Input(shape=(28, 28, 1))
x = Conv2D(filters=64, kernel_size=2, padding='same', activation='relu')(x_in)
x = MaxPooling2D(pool_size=2)(x)
x = Dropout(0.3)(x)
x = Conv2D(filters=32, kernel_size=2, padding='same', activation='relu')(x)
x = MaxPooling2D(pool_size=2)(x)
x = Dropout(0.3)(x)
x = Flatten()(x)
x = Dense(256, activation='relu')(x)
x = Dropout(0.5)(x)
x_out = Dense(10, activation='softmax')(x)
cnn = Model(inputs=x_in, outputs=x_out)
cnn.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
return cnn
cnn = cnn_model()
cnn.fit(x_train, y_train, batch_size=64, epochs=3, verbose=0)
cnn.save('mnist_cnn.h5')
###Output
_____no_output_____
###Markdown
Evaluate the model on test set
###Code
score = cnn.evaluate(x_test, y_test, verbose=0)
print('Test accuracy: ', score[1])
###Output
Test accuracy: 0.9874
###Markdown
Define and train auto-encoder
###Code
def ae_model():
# encoder
x_in = Input(shape=(28, 28, 1))
x = Conv2D(16, (3, 3), activation='relu', padding='same')(x_in)
x = Conv2D(16, (3, 3), activation='relu', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
encoded = Conv2D(1, (3, 3), activation=None, padding='same')(x)
encoder = Model(x_in, encoded)
# decoder
dec_in = Input(shape=(14, 14, 1))
x = Conv2D(16, (3, 3), activation='relu', padding='same')(dec_in)
x = UpSampling2D((2, 2))(x)
x = Conv2D(16, (3, 3), activation='relu', padding='same')(x)
decoded = Conv2D(1, (3, 3), activation=None, padding='same')(x)
decoder = Model(dec_in, decoded)
# autoencoder = encoder + decoder
x_out = decoder(encoder(x_in))
autoencoder = Model(x_in, x_out)
autoencoder.compile(optimizer='adam', loss='mse')
return autoencoder, encoder, decoder
ae, enc, dec = ae_model()
ae.fit(x_train, x_train, batch_size=128, epochs=4, validation_data=(x_test, x_test), verbose=0)
ae.save('mnist_ae.h5')
enc.save('mnist_enc.h5')
###Output
_____no_output_____
###Markdown
Compare original with decoded images
###Code
decoded_imgs = ae.predict(x_test)
n = 5
plt.figure(figsize=(20, 4))
for i in range(1, n+1):
# display original
ax = plt.subplot(2, n, i)
plt.imshow(x_test[i].reshape(28, 28))
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# display reconstruction
ax = plt.subplot(2, n, i + n)
plt.imshow(decoded_imgs[i].reshape(28, 28))
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
###Output
_____no_output_____
###Markdown
Generate counterfactual guided by the nearest class prototype Original instance:
###Code
X = x_test[0].reshape((1,) + x_test[0].shape)
plt.imshow(X.reshape(28, 28));
###Output
_____no_output_____
###Markdown
Counterfactual parameters:
###Code
shape = (1,) + x_train.shape[1:]
gamma = 100.
theta = 100.
c_init = 1.
c_steps = 2
max_iterations = 1000
feature_range = (x_train.min(),x_train.max())
###Output
_____no_output_____
###Markdown
Run counterfactual:
###Code
# set random seed
np.random.seed(1)
tf.set_random_seed(1)
# define models
cnn = load_model('mnist_cnn.h5')
ae = load_model('mnist_ae.h5')
enc = load_model('mnist_enc.h5') # , compile=False
sess = K.get_session()
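# note: this older alibi API takes the TF session explicitly; it is closed below once the explanation is done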
# initialize explainer, fit and generate counterfactual
cf = CounterFactualProto(sess, cnn, shape, gamma=gamma, theta=theta,
ae_model=ae, enc_model=enc, max_iterations=max_iterations,
feature_range=feature_range, c_init=c_init, c_steps=c_steps)
start_time = time()
cf.fit(x_train) # find class prototypes
print('Time to find prototypes for each class: {:.3f} sec'.format(time() - start_time))
start_time = time()
explanation = cf.explain(X)
print('Explanation took {:.3f} sec'.format(time() - start_time))
sess.close()
K.clear_session()
###Output
Time to find prototypes for each class: 17.841 sec
Explanation took 8.262 sec
###Markdown
Results:
###Code
print('Counterfactual prediction: {}'.format(explanation['cf']['class']))
print('Closest prototype class: {}'.format(cf.id_proto))
plt.imshow(explanation['cf']['X'].reshape(28, 28));
###Output
Counterfactual prediction: 9
Closest prototype class: 9
###Markdown
The counterfactual starting from a 7 moves towards its closest prototype class: a 9. The evolution of the counterfactual during the first iteration can be seen below:
###Code
iter_cf = 0
print('iteration c {}'.format(iter_cf))
n = len(explanation['all'][iter_cf])
plt.figure(figsize=(20, 4))
for i in range(n):
ax = plt.subplot(1, n+1, i+1)
plt.imshow(explanation['all'][iter_cf][i].reshape(28, 28))
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
###Output
iteration c 0
###Markdown
Typically, the first few iterations already steer the 7 towards a 9, while the later iterations make the counterfactual more sparse. Prototypes defined by the $k$ nearest encoded instances In the above example, the class prototypes are defined by the average encoding of all instances belonging to that class. Instead, we can also select only the $k$ encoded instances of a class that are nearest to the encoding of the instance to be explained, and use the average over those $k$ encodings as the prototype.
###Code
# set random seed
np.random.seed(1)
tf.set_random_seed(1)
# define models
cnn = load_model('mnist_cnn.h5')
ae = load_model('mnist_ae.h5')
enc = load_model('mnist_enc.h5')
sess = K.get_session()
# initialize explainer, fit and generate counterfactuals
cf = CounterFactualProto(sess, cnn, shape, gamma=gamma, theta=theta,
ae_model=ae, enc_model=enc, max_iterations=max_iterations,
feature_range=feature_range, c_init=c_init, c_steps=c_steps)
cf.fit(x_train)
explanation_k1 = cf.explain(X, k=1, k_type='mean')
explanation_k20 = cf.explain(X, k=20, k_type='mean')
sess.close()
K.clear_session()
###Output
_____no_output_____
###Markdown
Results for $k$ equals 1:
###Code
print('Counterfactual prediction: {}'.format(explanation_k1['cf']['class']))
print('Closest prototype class: {}'.format(cf.id_proto))
plt.imshow(explanation_k1['cf']['X'].reshape(28, 28));
###Output
Counterfactual prediction: 9
Closest prototype class: 9
###Markdown
Results for $k$ equals 20:
###Code
print('Counterfactual prediction: {}'.format(explanation_k20['cf']['class']))
print('Closest prototype class: {}'.format(cf.id_proto))
plt.imshow(explanation_k20['cf']['X'].reshape(28, 28));
###Output
Counterfactual prediction: 9
Closest prototype class: 9
###Markdown
A lower value of $k$ typically leads to counterfactuals that look more like the original instance and less like an average instance of the counterfactual class. Remove the autoencoder loss term $L_{AE}$ In the previous example, we used both an autoencoder loss term to penalize counterfactuals that fall outside of the training data distribution and an encoder loss term to guide the counterfactual to the nearest prototype class. In the next example we drop the autoencoder loss term to speed up the counterfactual search and still generate decent counterfactuals:
###Code
# set random seed
np.random.seed(1)
tf.set_random_seed(1)
# define models
cnn = load_model('mnist_cnn.h5')
enc = load_model('mnist_enc.h5')
sess = K.get_session()
# initialize explainer, fit and generate counterfactuals
cf = CounterFactualProto(sess, cnn, shape, gamma=gamma, theta=theta,
enc_model=enc, max_iterations=max_iterations,
feature_range=feature_range, c_init=c_init, c_steps=c_steps)
cf.fit(x_train)
start_time = time()
explanation = cf.explain(X, k=1)
print('Explanation took {:.3f} sec'.format(time() - start_time))
sess.close()
K.clear_session()
###Output
Explanation took 6.789 sec
###Markdown
Results:
###Code
print('Counterfactual prediction: {}'.format(explanation['cf']['class']))
print('Closest prototype class: {}'.format(cf.id_proto))
plt.imshow(explanation['cf']['X'].reshape(28, 28));
###Output
Counterfactual prediction: 9
Closest prototype class: 9
###Markdown
Specify prototype classes For multi-class predictions, we might be interested in generating counterfactuals for certain classes while avoiding others. The following example illustrates how to do this:
###Code
X = x_test[12].reshape((1,) + x_test[12].shape)
plt.imshow(X.reshape(28, 28));
# set random seed
np.random.seed(1)
tf.set_random_seed(1)
# define models
cnn = load_model('mnist_cnn.h5')
ae = load_model('mnist_ae.h5')
enc = load_model('mnist_enc.h5')
sess = K.get_session()
# initialize explainer, fit and generate counterfactuals
cf = CounterFactualProto(sess, cnn, shape, gamma=gamma, theta=theta,
ae_model=ae, enc_model=enc, max_iterations=max_iterations,
feature_range=feature_range, c_init=c_init, c_steps=c_steps)
cf.fit(x_train)
explanation_1 = cf.explain(X, k=5, k_type='mean')
proto_1 = cf.id_proto
explanation_2 = cf.explain(X, k=5, k_type='mean', target_class=[7])
proto_2 = cf.id_proto
sess.close()
K.clear_session()
###Output
_____no_output_____
###Markdown
The closest class to the 9 is a 4, as is evident from the first counterfactual below. For the second counterfactual, we specified that the prototype class used in the search should be a 7. As a result, a counterfactual 7 instead of a 4 is generated.
###Code
print('Counterfactual prediction: {}'.format(explanation_1['cf']['class']))
print('Closest prototype class: {}'.format(proto_1))
plt.imshow(explanation_1['cf']['X'].reshape(28, 28));
print('Counterfactual prediction: {}'.format(explanation_2['cf']['class']))
print('Closest prototype class: {}'.format(proto_2))
plt.imshow(explanation_2['cf']['X'].reshape(28, 28));
###Output
Counterfactual prediction: 7
Closest prototype class: 7
###Markdown
Speed up the counterfactual search by removing the predict function loss term We can also remove the prediction loss term and still obtain an interpretable counterfactual. This is especially relevant for fully black-box models. When we provide the counterfactual search method with a Keras or TensorFlow model, it is incorporated in the TensorFlow graph and evaluated using automatic differentiation. However, if we only have access to the model's prediction function, the gradient updates are numerical and typically require a large number of prediction calls because of the prediction loss term $L_{pred}$. These prediction calls can slow the search down significantly and become a bottleneck. We can represent the gradient of the loss term as follows:\begin{equation*} \frac{\partial L_{pred}}{\partial x} = \frac{\partial L_{pred}}{\partial p} \frac{\partial p}{\partial x} \end{equation*}where $L_{pred}$ is the prediction loss term, $p$ the prediction function and $x$ the input features to optimize. For a 28 by 28 MNIST image, the $\partial p/\partial x$ term alone would require a prediction call with batch size 28x28x2 = 1568. By using the prototypes to guide the search, however, we can remove the prediction loss term and only make a single prediction at the end of each gradient update to check whether the predicted class of the proposed counterfactual differs from the original class. We do not necessarily need a Keras or TensorFlow auto-encoder either and can use k-d trees to find the nearest class prototypes. Please check out [this notebook](./cfproto_housing.ipynb) for a practical example. The first example below removes $L_{pred}$ from the loss function to bypass the bottleneck. It illustrates the drastic speed improvements over the black-box alternative with numerical gradient evaluation while still producing interpretable counterfactual instances.
###Code
plt.gray()
X = x_test[23].reshape(1, 28, 28, 1)
plt.imshow(X.reshape(28, 28));
c_init = 0. # weight on prediction loss term set to 0
c_steps = 1 # no need to find optimal values for c
# set random seed
np.random.seed(1)
tf.set_random_seed(1)
# define models
cnn = load_model('mnist_cnn.h5')
predict_fn = lambda x: cnn.predict(x)
ae = load_model('mnist_ae.h5')
enc = load_model('mnist_enc.h5')
sess = K.get_session()
# initialize explainer, fit and generate counterfactuals
cf = CounterFactualProto(sess, predict_fn, shape, gamma=gamma, theta=theta,
ae_model=ae, enc_model=enc, max_iterations=max_iterations,
feature_range=feature_range, c_init=c_init, c_steps=c_steps)
cf.fit(x_train)
start_time = time()
explanation = cf.explain(X, k=1)
print('Explanation took {:.3f} sec'.format(time() - start_time))
sess.close()
K.clear_session()
print('Counterfactual prediction: {}'.format(explanation['cf']['class']))
print('Closest prototype class: {}'.format(cf.id_proto))
plt.imshow(explanation['cf']['X'].reshape(28, 28));
###Output
Counterfactual prediction: 6
Closest prototype class: 6
###Markdown
Let us now add the $L_{pred}$ loss term back into the objective function and observe how long it takes to generate a black-box counterfactual:
###Code
c_init = 1.
c_steps = 2
# set random seed
np.random.seed(1)
tf.set_random_seed(1)
# define models
cnn = load_model('mnist_cnn.h5')
predict_fn = lambda x: cnn.predict(x)
ae = load_model('mnist_ae.h5')
enc = load_model('mnist_enc.h5')
sess = K.get_session()
# initialize explainer, fit and generate counterfactuals
cf = CounterFactualProto(sess, predict_fn, shape, gamma=gamma, theta=theta,
ae_model=ae, enc_model=enc, max_iterations=max_iterations,
feature_range=feature_range, c_init=c_init, c_steps=c_steps)
cf.fit(x_train)
start_time = time()
explanation = cf.explain(X, k=1)
print('Explanation took {:.3f} sec'.format(time() - start_time))
sess.close()
K.clear_session()
print('Counterfactual prediction: {}'.format(explanation['cf']['class']))
print('Closest prototype class: {}'.format(cf.id_proto))
plt.imshow(explanation['cf']['X'].reshape(28, 28));
###Output
Counterfactual prediction: 6
Closest prototype class: 6
###Markdown
Clean up:
###Code
os.remove('mnist_cnn.h5')
os.remove('mnist_ae.h5')
os.remove('mnist_enc.h5')
###Output
_____no_output_____ |
content/lessons/08/Now-You-Code/NYC3-Sentiment-v2.ipynb | ###Markdown
Now You Code 3: Sentiment v2.0 Let's write a better version of the basic sentiment analyzer in Python. Instead of using a hard-coded string of words, this example will read lists of positive and negative words from files. In fact, we've included two files for you so you don't have to come up with the positive and negative words! Just load the files and go! Of course, if you want more positive and negative words you can always edit the files.- Positive words are in `NYC3-pos.txt`- Negative words are in `NYC3-neg.txt`You will have to write a function called `LoadWords(filename)` to read the words from the file and load them into a string. Step 1: Problem Analysis for function Input (function arguments): `filename` to read. Output (function returns): a `text` string of words as loaded from the file. Algorithm:```open the filename for reading read the entire file all at once, into text return the text```
###Code
## Step 2: Write the LoadWords(filename) function
def loadWords (filename):
with open(filename, 'r') as f:
content = f.read()
tokens = content.split(" ")
return tokens
print (loadWords("NYC3-pos.txt"))
print (loadWords("NYC3-neg.txt"))
## Quick test of your LoadWords() function
pos = loadWords("NYC3-pos.txt")
neg = loadWords("NYC3-neg.txt")
print("POSITIVE WORD LIST:",pos)
print("NEGATIVE WORD LIST", neg)
###Output
POSITIVE WORD LIST: ['good', 'love', 'like', 'great', 'wonderful', 'fast', 'helpful', 'smart', 'friendly', 'happy', 'joy']
NEGATIVE WORD LIST ['bad', 'hate', 'dislike', 'horrible', 'stinks', 'awful', 'slow', 'clueless', 'useless', 'dumb', 'sad']
###Markdown
Step 3: The Final Program Now write a program which allows you to enter text and then analyzes the sentiment of the text by printing a score. The program should keep analyzing text input until you enter "quit".Sample Run```Sentiment Analyzer 1.0Type 'quit' to exit.Enter Text: i love a good book from amazon2 positive.Enter Text: i hate amazon their service makes me angry-2 negative.Enter Text: i love to hate amazon0 neutral.Enter Text: quit``` 3.a Problem Analysis Input:- some text including positive and negative words Output:- an integer score plus the label neutral, negative, or positive Algorithm:- print the title- prompt for input in an infinite loop- if the input is quit, break- else compute the score of the text: if the score > 0 report positive, if the score < 0 report negative, else report neutral
###Code
## 3.b Write solution here, use Load Words to help you read in the pos and neg words.
pos = loadWords("NYC3-pos.txt")
neg = loadWords("NYC3-neg.txt")
print("Sentiment Analyzer 1.0")
print("Type 'quit' to exit.")
while True:
    text = input("Enter Text: ")
    if text == 'quit':
        break
    score = 0
    for word in text.split(" "):
        if word in pos:
            score = score + 1
        if word in neg:
            score = score - 1
    if score > 0:
        print("%d positive." % score)
    elif score < 0:
        print("%d negative." % score)
    else:
        print("%d neutral." % score)
#justin's version
def loadWord(filename):
with open(filename, 'r') as f:
filecontent = f.read()
tokens = filecontent.split(" ")
return tokens
print (loadWords("NYC3-neg.txt"))
def ScoreSentiment (input_text):
score = 0
pos_text = loadWords("NYC3-pos.txt")
neg_text = loadWords("NYC3-neg.txt")
input_list = input_text.split(" ")
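    # note: a bare split leaves punctuation attached ("hate!" != "hate"), so such words are not counted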
for word in input_list:
if word in pos_text:
score = score + 1
if word in neg_text:
score = score - 1
return score
print("Sentiment Analyzer 2.0")
print("type 'quit' to quit")
userInput = input("enter some text ")
score = ScoreSentiment(userInput)
print(score)
###Output
['bad', 'hate', 'dislike', 'horrible', 'stinks', 'awful', 'slow', 'clueless', 'useless', 'dumb', 'sad']
Sentiment Analyzer 2.0
type 'quit' to quit
enter some text bad bad hate hate
-4
###Markdown
Now You Code 3: Sentiment v2.0 Let's write a better version of the basic sentiment analyzer in Python. Instead of using a hard-coded string of words, this example will read lists of positive and negative words from files. In fact, we've included two files for you so you don't have to come up with the positive and negative words! Just load the files and go! Of course, if you want more positive and negative words you can always edit the files.- Positive words are in `NYC3-pos.txt`- Negative words are in `NYC3-neg.txt`You will have to write a function called `LoadWords(filename)` to read the words from the file and load them into a string. Step 1: Problem Analysis for function Input (function arguments): `filename` to read. Output (function returns): a `text` string of words as loaded from the file. Algorithm:```open the filename for reading read the entire file all at once, into text return the text```
###Code
def LoadWords(filename):
with open(filename, 'r') as f:
for line in f.readlines():
print(line.strip())
return line
## Quick test of your LoadWords() function
pos = LoadWords("NYC3-pos.txt")
neg = LoadWords("NYC3-neg.txt")
print("POSITIVE WORD LIST:",pos)
print("NEGATIVE WORD LIST", neg)
###Output
good love like great wonderful fast helpful smart friendly happy joy
bad hate dislike horrible stinks awful slow clueless useless dumb sad
POSITIVE WORD LIST: good love like great wonderful fast helpful smart friendly happy joy
NEGATIVE WORD LIST bad hate dislike horrible stinks awful slow clueless useless dumb sad
###Markdown
Step 3: The Final Program Now write a program which allows you to enter text and then analyzes the sentiment of the text by printing a score. The program should keep analyzing text input until you enter "quit".Sample Run```Sentiment Analyzer 1.0Type 'quit' to exit.Enter Text: i love a good book from amazon2 positive.Enter Text: i hate amazon their service makes me angry-2 negative.Enter Text: i love to hate amazon0 neutral.Enter Text: quit``` 3.a Problem AnalysisInput:Output:Algorithm:
###Code
## 3.b Write solution here, use Load Words to help you read in the pos and neg words.
pos = LoadWords("NYC3-pos.txt").split()   # split into word lists so membership tests match whole words
neg = LoadWords("NYC3-neg.txt").split()
print("Sentiment Analyzer 1.0")
print("Type 'quit' to exit.")
while True:
    text = input("Enter text: ")
    if text == 'quit':
        break
    score = 0  # reset the score for each new line of text
    text = text.split(" ")
for word in text:
if word in pos:
score = score + 1
if word in neg:
score = score - 1
if score > 0:
print("%d, positive" % score)
elif score < 0:
print("%d, negative" % score)
else:
print("%d, neutral" % score)
###Output
good love like great wonderful fast helpful smart friendly happy joy
bad hate dislike horrible stinks awful slow clueless useless dumb sad
Sentiment Analyzer 1.0
Type 'quit' to exit.
Enter text: i hate! a BAD book
0, neutral
Enter text: i love! a BAD book
0, neutral
Enter text: quit
###Markdown
Step 4: Questions1. This is a better solution than sentiment 1.0. Why?2. Execute the program and enter the following input: `i love! a GOOD book` What is the score and why? how can this issue be fixed?3. Re-write your final solution to address the issues discovered in step 2. Reminder of Evaluation Criteria1. What the problem attempted (analysis, code, and answered questions) ?2. What the problem analysis thought out? (does the program match the plan?)3. Does the code execute without syntax error?4. Does the code solve the intended problem?5. Is the code well written? (easy to understand, modular, and self-documenting, handles errors)
###Code
# 1. Yes, because it lists a number & a word.
# 2. The score is 0, because it has an equal number of positive & negative words; by adding one more positive or negative word.
# 3. I'm sorry Faitakes, I'm not sure how to do this one...
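# One possible fix for question 3 (a sketch): normalize case and strip punctuation
# before comparing words against the word lists, so "love!" and "GOOD" still match.
import string
def clean(word):
    return word.strip(string.punctuation).lower()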
###Output
_____no_output_____
###Markdown
Now You Code 3: Sentiment v2.0 Let's write a better version of the basic sentiment analyzer in Python. Instead of using a hard-coded string of words, this example will read lists of positive and negative words from files. In fact, we've included two files for you so you don't have to come up with the positive and negative words! Just load the files and go! Of course, if you want more positive and negative words you can always edit the files.- Positive words are in `NYC3-pos.txt`- Negative words are in `NYC3-neg.txt`You will have to write a function called `LoadWords(filename)` to read the words from the file and load them into a string. Step 1: Problem Analysis for function Input (function arguments): `filename` to read. Output (function returns): a `text` string of words as loaded from the file. Algorithm:```open the filename for reading read the entire file all at once, into text return the text```
###Code
## Step 2: Write the LoadWords(filename) function
def LoadWords(filename):
try:
with open(filename, 'r') as file_in:
text = file_in.read()
print(text)
return(text)
except FileNotFoundError:
print('Could not find file %s' % filename)
## Quick test of your LoadWords() function
pos = LoadWords("NYC3-pos.txt")
neg = LoadWords("NYC3-neg.txt")
print("POSITIVE WORD LIST:",pos)
print("NEGATIVE WORD LIST", neg)
###Output
good love like great wonderful fast helpful smart friendly happy joy
bad hate dislike horrible stinks awful slow clueless useless dumb sad
POSITIVE WORD LIST: good love like great wonderful fast helpful smart friendly happy joy
NEGATIVE WORD LIST bad hate dislike horrible stinks awful slow clueless useless dumb sad
###Markdown
Step 3: The Final Program Now write a program which allows you to enter text and then analyzes the sentiment of the text by printing a score. The program should keep analyzing text input until you enter "quit".Sample Run```Sentiment Analyzer 1.0Type 'quit' to exit.Enter Text: i love a good book from amazon2 positive.Enter Text: i hate amazon their service makes me angry-2 negative.Enter Text: i love to hate amazon0 neutral.Enter Text: quit``` 3.a Problem AnalysisInput:Output:Algorithm:
###Code
## 3.b Write solution here, use Load Words to help you read in the pos and neg words.
def LoadWords(filename):
try:
with open(filename, 'r') as file_in:
text = file_in.read()
print(text)
return(text)
except FileNotFoundError:
print('Could not find file %s' % filename)
pos = LoadWords("NYC3-pos.txt")
neg = LoadWords("NYC3-neg.txt")
print()
###Output
_____no_output_____
###Markdown
Now You Code 3: Sentiment v2.0 Let's write a better version of the basic sentiment analyzer in Python. Instead of using a hard-coded string of words, this example will read lists of positive and negative words from files. In fact, we've included two files for you so you don't have to come up with the positive and negative words! Just load the files and go! Of course, if you want more positive and negative words you can always edit the files.- Positive words are in `NYC3-pos.txt`- Negative words are in `NYC3-neg.txt`You will have to write a function called `LoadWords(filename)` to read the words from the file and load them into a string. Step 1: Problem Analysis for function Input (function arguments): `filename` to read. Output (function returns): a `text` string of words as loaded from the file. Algorithm:```open the filename for reading read the entire file all at once, into text return the text```
###Code
## Step 2: Write the LoadWords(filename) function
def LoadWords(filename):
    with open(filename, 'r') as f:
        text = f.read()    # read the entire file all at once, per the algorithm
    return text
## Quick test of your LoadWords() function
pos = LoadWords("NYC3-pos.txt")
neg = LoadWords("NYC3-neg.txt")
print("POSITIVE WORD LIST:",pos)
print("NEGATIVE WORD LIST", neg)
###Output
POSITIVE WORD LIST: good love like great wonderful fast helpful smart friendly happy joy
NEGATIVE WORD LIST bad hate dislike horrible stinks awful slow clueless useless dumb sad
###Markdown
Step 3: The Final Program Now write a program which allows you to enter text and then analyzes the sentiment of the text by printing a score. The program should keep analyzing text input until you enter "quit".```Sentiment Analyzer 1.0Type 'quit' to exit.Enter Text: i love a good book from amazon2 positive.Enter Text: i hate amazon their service makes me angry-2 negative.Enter Text: i love to hate amazon0 neutral.Enter Text: quit``` 3.a Problem AnalysisInput:- enter text- enter quitOutput:- amount of positive- amount of negative- quit has been entered goodbyeAlgorithm:- Enter a line of text- load files- if words within files- post amount of negative or positive words, or post if none- quit when told
###Code
print("The Sentiment Analyzer ")
pos_count = 0
neg_count = 0
text = input("Enter your tex or type 'quit' to exit: ")
if text == 'quit':
exit()
with open('NYC3-pos.txt', 'r') as f:
for line in f:
if text in line:
pos_count = pos_count + 1
with open('NYC3-neg.txt', 'r') as f:
for line in f:
if text in line:
neg_count = neg_count + 1
print("The amount of positives are:", pos_count)
print("The amount of negatives are:", neg_count)
###Output
The Sentiment Analyzer
The amount of positives are: 0
The amount of negatives are: 0
###Markdown
Now You Code 3: Sentiment v2.0Let's write a better version of the basic sentiment analyzer in Python. Instead of using a string of words, this example will read a list of positive and negative words from a file.In fact, we've included two files for you so you don't have to come up with the positive and negative words! Just load the files and go! Of course if you want more positive and negative words you can always edit the files.- Positive words are in `NYC3-pos.txt`- Negative words are in `NYC3-neg.txt`You will have to write a function called `LoadWords(filename)` to read the words from the file and load them into a string. Step 1: Problem Analysis for functionInput (function arguments): `filename` to read.Output (function returns): a `text` string of words as loaded from the file.Algorithm:```open the filename for reading read the entire file all at once, into text return the text```
###Code
## Step 2: Write the LoadWords(filename) function
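# A minimal sketch following the algorithm above (the quick test below calls
# LoadWords(), so a definition is needed): read the entire file into one string.
def LoadWords(filename):
    with open(filename, 'r') as f:
        text = f.read()
    return text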
## Quick test of your LoadWords() function
pos = LoadWords("NYC3-pos.txt")
neg = LoadWords("NYC3-neg.txt")
print("POSITIVE WORD LIST:",pos)
print("NEGATIVE WORD LIST", neg)
###Output
_____no_output_____
###Markdown
Step 3: The Final Program Now write a program which allows you to enter text and then analyzes the sentiment of the text by printing a score. The program should keep analyzing text input until you enter "quit".Sample Run```Sentiment Analyzer 1.0Type 'quit' to exit.Enter Text: i love a good book from amazon2 positive.Enter Text: i hate amazon their service makes me angry-2 negative.Enter Text: i love to hate amazon0 neutral.Enter Text: quit``` 3.a Problem AnalysisInput:Output:Algorithm:
###Code
## 3.b Write solution here, use Load Words to help you read in the pos and neg words.
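# A minimal sketch of the final program, assuming the LoadWords() defined in Step 2:
pos = LoadWords("NYC3-pos.txt").split()
neg = LoadWords("NYC3-neg.txt").split()
print("Sentiment Analyzer 1.0")
print("Type 'quit' to exit.")
while True:
    text = input("Enter Text: ")
    if text == 'quit':
        break
    # score each lowercased word against the two word lists
    score = sum(w in pos for w in text.lower().split()) - sum(w in neg for w in text.lower().split())
    if score > 0:
        print("%d positive." % score)
    elif score < 0:
        print("%d negative." % score)
    else:
        print("%d neutral." % score)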
###Output
_____no_output_____
###Markdown
Now You Code 3: Sentiment v2.0Let's write a better version of the basic sentiment analyzer in Python. Instead of using a string of words, this example will read a list of positive and negative words from a file.In fact, we've included two files for you so you don't have to come up with the positive and negative words! Just load the files and go! Of course if you want more positive and negative words you can always edit the files.- Positive words are in `NYC3-pos.txt`- Negative words are in `NYC3-neg.txt`You will have to write a function called `LoadWords(filename)` to read the words from the file and load them into a string. Step 1: Problem Analysis for functionInput (function arguments): `filename` to read.Output (function returns): a `text` string of words as loaded from the file.Algorithm:```open the filename for reading read the entire file all at once, into text return the text```
###Code
## Step 2: Write the LoadWords(filename) function
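# A minimal LoadWords() sketch so the quick test below runs: read the whole file at once.
def LoadWords(filename):
    with open(filename, 'r') as f:
        return f.read()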
## Quick test of your LoadWords() function
pos = LoadWords("NYC3-pos.txt")
neg = LoadWords("NYC3-neg.txt")
print("POSITIVE WORD LIST:",pos)
print("NEGATIVE WORD LIST", neg)
###Output
_____no_output_____
###Markdown
Step 3: The Final Program Now write a program which allows you to enter text and then analyzes the sentiment of the text by printing a score. The program should keep analyzing text input until you enter "quit".Sample Run```Sentiment Analyzer 1.0Type 'quit' to exit.Enter Text: i love a good book from amazon2 positive.Enter Text: i hate amazon their service makes me angry-2 negative.Enter Text: i love to hate amazon0 neutral.Enter Text: quit``` 3.a Problem AnalysisInput:Output:Algorithm:
###Code
## 3.b Write solution here, use Load Words to help you read in the pos and neg words.
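# A minimal sketch of the final program, assuming the LoadWords() defined in Step 2:
pos = LoadWords("NYC3-pos.txt").split()
neg = LoadWords("NYC3-neg.txt").split()
print("Sentiment Analyzer 1.0")
print("Type 'quit' to exit.")
while True:
    text = input("Enter Text: ")
    if text == 'quit':
        break
    # score each lowercased word against the two word lists
    score = sum(w in pos for w in text.lower().split()) - sum(w in neg for w in text.lower().split())
    if score > 0:
        print("%d positive." % score)
    elif score < 0:
        print("%d negative." % score)
    else:
        print("%d neutral." % score)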
###Output
_____no_output_____
###Markdown
Now You Code 3: Sentiment v2.0Let's write a better version of the basic sentiment analyzer in Python. Instead of using a string of words, this example will read a list of positive and negative words from a file.In fact, we've included two files for you so you don't have to come up with the positive and negative words! Just load the files and go! Of course if you want more positive and negative words you can always edit the files.- Positive words are in `NYC3-pos.txt`- Negative words are in `NYC3-neg.txt`You will have to write a function called `LoadWords(filename)` to read the words from the file and load them into a string. Step 1: Problem Analysis for functionInput (function arguments): `filename` to read.Output (function returns): a `text` string of words as loaded from the file.Algorithm:```open the filename for reading read the entire file all at once, into text return the text```
###Code
## Step 2: Write the LoadWords(filename) function
def LoadWords(filename):
    with open(filename, 'r') as f:
        text = f.read()    # read the entire file all at once
    return text
## Quick test of your LoadWords() function
pos = LoadWords("NYC3-pos.txt")
neg = LoadWords("NYC3-neg.txt")
print("POSITIVE WORD LIST:",pos)
print("NEGATIVE WORD LIST", neg)
###Output
_____no_output_____
###Markdown
Step 3: The Final Program Now write a program which allows you to enter text and then analyzes the sentiment of the text by printing a score. The program should keep analyzing text input until you enter "quit".Sample Run```Sentiment Analyzer 1.0Type 'quit' to exit.Enter Text: i love a good book from amazon2 positive.Enter Text: i hate amazon their service makes me angry-2 negative.Enter Text: i love to hate amazon0 neutral.Enter Text: quit``` 3.a Problem AnalysisInput: textOutput: how positive or negative the text isAlgorithm: ask the user for text, use the function to find how many of the words in the text appear in each file; words found in the positive file count as positive and words found in the negative file count as negative, then add the counts to see how positive or negative the text is
###Code
## 3.b Write solution here, use Load Words to help you read in the pos and neg words.
pos = LoadWords("NYC3-pos.txt").split()
neg = LoadWords("NYC3-neg.txt").split()
text = input("Enter text:")
sentiment = sum(w in pos for w in text.lower().split()) - sum(w in neg for w in text.lower().split())
print(sentiment)
###Output
_____no_output_____
###Markdown
Now You Code 3: Sentiment v2.0Let's write a better version of the basic sentiment analyzer in Python. Instead of using a string of words, this example will read a list of positive and negative words from a file.In fact, we've included two files for you so you don't have to come up with the positive and negative words! Just load the files and go! Of course if you want more positive and negative words you can always edit the files.- Positive words are in `NYC3-pos.txt`- Negative words are in `NYC3-neg.txt`You will have to write a function called `LoadWords(filename)` to read the words from the file and load them into a string. Step 1: Problem Analysis for functionInput (function arguments): `filename` to read.Output (function returns): a `text` string of words as loaded from the file.Algorithm:```open the filename for reading read the entire file all at once, into text return the text```
###Code
## Step 2: Write the LoadWords(filename) function
def LoadWords(filename):
    with open(filename, 'r') as file:
        return file.read()    # read the entire file all at once
## Quick test of your LoadWords() function
pos = LoadWords("NYC3-pos.txt")
neg = LoadWords("NYC3-neg.txt")
print("POSITIVE WORD LIST:",pos)
print("NEGATIVE WORD LIST", neg)
###Output
POSITIVE WORD LIST: good love like great wonderful fast helpful smart friendly happy joy
NEGATIVE WORD LIST bad hate dislike horrible stinks awful slow clueless useless dumb sad
###Markdown
Step 3: The Final Program Now write a program which allows you to enter text and then analyzes the sentiment of the text by printing a score. The program should keep analyzing text input until you enter "quit".Sample Run```Sentiment Analyzer 1.0Type 'quit' to exit.Enter Text: i love a good book from amazon2 positive.Enter Text: i hate amazon their service makes me angry-2 negative.Enter Text: i love to hate amazon0 neutral.Enter Text: quit``` 3.a Problem AnalysisInput:Output:Algorithm:
###Code
## 3.b Write solution here, use Load Words to help you read in the pos and neg words.
import string
def StripPunctuation(text):
for ch in text:
if ch in string.punctuation:
            text = text.replace(ch, ' ')   # use a space, not a comma, so "love!" becomes "love"
return text
def ScoreSentiment(pos,neg,text):
score = 0
    text = text.lower()   # str.lower() returns a new string; keep the result
words = text.split()
for word in words:
if word in pos.split():
score +=1
elif word in neg.split():
score -=1
return score
print("Sentiment Analyzer 1.0")
print("Type 'quit' to exit")
pos = LoadWords("NYC3-pos.txt")
neg = LoadWords("NYC3-neg.txt")
print (pos)
while True:
text = StripPunctuation(input("Enter Text:").lower())
if text.lower() == 'quit':
break
score = ScoreSentiment(pos,neg,text)
    if score > 0:
        print("%d Positive words" % (score))
    elif score < 0:
        print("%d Negative words" % (score))
    else:
        print("%d Neutral" % (score))
###Output
Sentiment Analyzer 1.0
Type 'quit' to exit
good love like great wonderful fast helpful smart friendly happy joy
Enter Text:GOOD
1 Positive words
###Markdown
Now You Code 3: Sentiment v2.0Let's write a better version of the basic sentiment analyzer in Python. Instead of using a string of words, this example will read a list of positive and negative words from a file.In fact, we've included two files for you so you don't have to come up with the positive and negative words! Just load the files and go! Of course if you want more positive and negative words you can always edit the files.- Positive words are in `NYC3-pos.txt`- Negative words are in `NYC3-neg.txt`You will have to write a function called `LoadWords(filename)` to read the words from the file and load them into a string. Step 1: Problem Analysis for functionInput (function arguments): `filename` to read.Output (function returns): a `text` string of words as loaded from the file.Algorithm:```open the filename for reading read the entire file all at once, into text return the text```
###Code
## Step 2: Write the LoadWords(filename) function
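# A minimal sketch following the algorithm above (needed so the quick test below runs):
def LoadWords(filename):
    with open(filename, 'r') as f:
        text = f.read()    # read the entire file all at once
    return text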
## Quick test of your LoadWords() function
pos = LoadWords("NYC3-pos.txt")
neg = LoadWords("NYC3-neg.txt")
print("POSITIVE WORD LIST:",pos)
print("NEGATIVE WORD LIST", neg)
###Output
_____no_output_____
###Markdown
Step 3: The Final Program Now write a program which allows you to enter text and then analyzes the sentiment of the text by printing a score. The program should keep analyzing text input until you enter "quit".Sample Run```Sentiment Analyzer 1.0Type 'quit' to exit.Enter Text: i love a good book from amazon2 positive.Enter Text: i hate amazon their service makes me angry-2 negative.Enter Text: i love to hate amazon0 neutral.Enter Text: quit``` 3.a Problem AnalysisInput:Output:Algorithm:
###Code
## 3.b Write solution here, use Load Words to help you read in the pos and neg words.
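# A minimal sketch of the final program, assuming the LoadWords() defined in Step 2:
pos = LoadWords("NYC3-pos.txt").split()
neg = LoadWords("NYC3-neg.txt").split()
print("Sentiment Analyzer 1.0")
print("Type 'quit' to exit.")
while True:
    text = input("Enter Text: ")
    if text == 'quit':
        break
    # score each lowercased word against the two word lists
    score = sum(w in pos for w in text.lower().split()) - sum(w in neg for w in text.lower().split())
    if score > 0:
        print("%d positive." % score)
    elif score < 0:
        print("%d negative." % score)
    else:
        print("%d neutral." % score)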
###Output
_____no_output_____
###Markdown
Now You Code 3: Sentiment v2.0Let's write a better version of the basic sentiment analyzer in Python. Instead of using a string of words, this example will read a list of positive and negative words from a file.In fact, we've included two files for you so you don't have to come up with the positive and negative words! Just load the files and go! Of course if you want more positive and negative words you can always edit the files.- Positive words are in `NYC3-pos.txt`- Negative words are in `NYC3-neg.txt`You will have to write a function called `LoadWords(filename)` to read the words from the file and load them into a string. Step 1: Problem Analysis for functionInput (function arguments): `filename` to read.Output (function returns): a `text` string of words as loaded from the file.Algorithm:```open the filename for reading read the entire file all at once, into text return the text```
###Code
## Step 2: Write the LoadWords(filename) function
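# A minimal LoadWords() sketch per the algorithm above, so the quick test below runs:
def LoadWords(filename):
    with open(filename, 'r') as f:
        return f.read()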
## Quick test of your LoadWords() function
pos = LoadWords("NYC3-pos.txt")
neg = LoadWords("NYC3-neg.txt")
print("POSITIVE WORD LIST:",pos)
print("NEGATIVE WORD LIST", neg)
###Output
_____no_output_____
###Markdown
Step 3: The Final Program Now write a program which allows you to enter text and then analyzes the sentiment of the text by printing a score. The program should keep analyzing text input until you enter "quit".Sample Run```Sentiment Analyzer 1.0Type 'quit' to exit.Enter Text: i love a good book from amazon2 positive.Enter Text: i hate amazon their service makes me angry-2 negative.Enter Text: i love to hate amazon0 neutral.Enter Text: quit``` 3.a Problem AnalysisInput:Output:Algorithm:
###Code
## 3.b Write solution here, use Load Words to help you read in the pos and neg words.
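# A minimal sketch of the final program, assuming the LoadWords() defined in Step 2:
pos = LoadWords("NYC3-pos.txt").split()
neg = LoadWords("NYC3-neg.txt").split()
print("Sentiment Analyzer 1.0")
print("Type 'quit' to exit.")
while True:
    text = input("Enter Text: ")
    if text == 'quit':
        break
    # score each lowercased word against the two word lists
    score = sum(w in pos for w in text.lower().split()) - sum(w in neg for w in text.lower().split())
    if score > 0:
        print("%d positive." % score)
    elif score < 0:
        print("%d negative." % score)
    else:
        print("%d neutral." % score)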
###Output
_____no_output_____
###Markdown
Now You Code 3: Sentiment v2.0Let's write a better version of the basic sentiment analyzer in Python. Instead of using a string of words, this example will read a list of positive and negative words from a file.In fact, we've included two files for you so you don't have to come up with the positive and negative words! Just load the files and go! Of course if you want more positive and negative words you can always edit the files.- Positive words are in `NYC3-pos.txt`- Negative words are in `NYC3-neg.txt`You will have to write a function called `LoadWords(filename)` to read the words from the file and load them into a string. Step 1: Problem Analysis for functionInput (function arguments): `filename` to read.Output (function returns): a `text` string of words as loaded from the file.Algorithm:```open the filename for reading read the entire file all at once, into text return the text```
###Code
## Step 2: Write the LoadWords(filename) function
def LoadWords(filename):
    with open(filename, 'r') as f:   # open the requested file, not both files at once
        contents = f.read()
    return contents
## Quick test of your LoadWords() function
pos = LoadWords("NYC3-pos.txt")
neg = LoadWords("NYC3-neg.txt")
print("POSITIVE WORD LIST:",pos)
print("NEGATIVE WORD LIST", neg)
###Output
POSITIVE WORD LIST: good love like great wonderful fast helpful smart friendly happy joy
NEGATIVE WORD LIST bad hate dislike horrible stinks awful slow clueless useless dumb sad
###Markdown
Step 3: The Final Program Now write a program which allows you to enter text and then analyzes the sentiment of the text by printing a score. The program should keep analyzing text input until you enter "quit".Sample Run```Sentiment Analyzer 1.0Type 'quit' to exit.Enter Text: i love a good book from amazon2 positive.Enter Text: i hate amazon their service makes me angry-2 negative.Enter Text: i love to hate amazon0 neutral.Enter Text: quit``` 3.a Problem AnalysisInput: a sentenceOutput: number of good/bad words in itAlgorithm: input a statement, look to see whether its words are in each file, count the number of good/bad words, and add them up (good and bad words cancel out when their counts are equal)
###Code
## 3.b Write solution here, use Load Words to help you read in the pos and neg words.
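# A minimal sketch of the final program, following the algorithm above and using
# the LoadWords() defined in Step 2:
pos = LoadWords("NYC3-pos.txt").split()
neg = LoadWords("NYC3-neg.txt").split()
print("Sentiment Analyzer 1.0")
print("Type 'quit' to exit.")
while True:
    text = input("Enter Text: ")
    if text == 'quit':
        break
    # good and bad words cancel out; the sign of the total gives the sentiment
    score = sum(w in pos for w in text.lower().split()) - sum(w in neg for w in text.lower().split())
    if score > 0:
        print("%d positive." % score)
    elif score < 0:
        print("%d negative." % score)
    else:
        print("%d neutral." % score)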
###Output
_____no_output_____
###Markdown
Step 4: Questions1. This is a better solution than sentiment 1.0. Why?2. Execute the program and enter the following input: `i love! a GOOD book` What is the score and why? How can this issue be fixed?3. Re-write your final solution to address the issues discovered in step 2.
###Code
#1. what was the first one?
###Output
_____no_output_____
###Markdown
Now You Code 3: Sentiment v2.0Let's write a better version of the basic sentiment analyzer in Python. Instead of using a string of words, this example will read a list of positive and negative words from a file.In fact, we've included two files for you so you don't have to come up with the positive and negative words! Just load the files and go! Of course if you want more positive and negative words you can always edit the files.- Positive words are in `NYC3-pos.txt`- Negative words are in `NYC3-neg.txt`You will have to write a function called `LoadWords(filename)` to read the words from the file and load them into a string. Step 1: Problem Analysis for functionInput (function arguments): `filename` to read.Output (function returns): a `text` string of words as loaded from the file.Algorithm:```open the filename for reading read the entire file all at once, into text return the text```
###Code
## Step 2: Write the LoadWords(filename) function
def LoadWords(filename):
with open (filename, 'r') as file:
for line in file:
line = line.split(' ')
return line
## Quick test of your LoadWords() function
pos = LoadWords("NYC3-pos.txt")
neg = LoadWords("NYC3-neg.txt")
print("POSITIVE WORD LIST:",pos)
print("NEGATIVE WORD LIST:", neg)
###Output
POSITIVE WORD LIST: ['good', 'love', 'like', 'great', 'wonderful', 'fast', 'helpful', 'smart', 'friendly', 'happy', 'joy']
NEGATIVE WORD LIST: ['bad', 'hate', 'dislike', 'horrible', 'stinks', 'awful', 'slow', 'clueless', 'useless', 'dumb', 'sad']
###Markdown
Step 3: The Final Program Now write a program which allows you to enter text and then analyzes the sentiment of the text by printing a score. The program should keep analyzing text input until you enter "quit".Sample Run```Sentiment Analyzer 1.0Type 'quit' to exit.Enter Text: i love a good book from amazon2 positive.Enter Text: i hate amazon their service makes me angry-2 negative.Enter Text: i love to hate amazon0 neutral.Enter Text: quit``` 3.a Problem AnalysisInput: text which has either positive or negative words in itOutput: a score based on how many positive or negative words it hasAlgorithm:- set positive word list to LoadWords's output of the positive file- set negative word list to LoadWords's output of the negative file- loop indefinitely: - input the text or 'quit' - if the text is 'quit' stop looping - strip punctuation from text and make it lowercase - split text into list of words using space as the delimiter - for each item in the list: - if the word is in the positive word list, increase score by 1 - if the word is in the negative word list, decrease score by 1 - print the final score - if the final score is positive print 'positive' - if it's negative print 'negative' - if it's 0 print 'neutral'
###Code
## 3.b Write solution here, use Load Words to help you read in the pos and neg words.
pos = LoadWords("NYC3-pos.txt")
neg = LoadWords("NYC3-neg.txt")
print("Sentiment Analyzer v0.0.0.0.1\nType 'quit' to exit")
while True:
score = 0
text = input("Enter text here: ")
if text == 'quit':
break
text = text.lower()
text = text.replace('.','').replace(',','').replace('?','').replace('!','').replace('(','').replace(')','')
text = text.split(' ')
for word in text:
if word in pos:
score = score + 1
if word in neg:
score = score - 1
if score > 0:
print("%d, positive" % score)
elif score < 0:
print("%d, negative" % score)
else:
print("%d, neutral" % score)
###Output
Sentiment Analyzer v0.0.0.0.1
Type 'quit' to exit
Enter text here: i LOVE fortnite its such a good game its EPIC it has COOL characters like john wick and thanos
3, positive
Enter text here: fortnite is SO BAD! its the worst game EVER ITS SO DUMB!!!!!! I HAtE FORTNITE >:(
-3, negative
Enter text here: fortnite has good graphics but bad everything else
0, neutral
Enter text here: quit
doc/caret2sql-svmRadial-BreastCancer.ipynb | ###Markdown
Build a Model
###Code
set.seed(1960)
data(BreastCancer)
# summary(BreastCancer)
bc = BreastCancer[,-1]
for(i in 1:(ncol(bc) - 1)){
bc[, i] <- as.numeric(bc[, i])
bc[is.na(bc[,i]), i] <- mean(bc[,i], na.rm = TRUE)
}
TGT_IDX = ncol(bc)
create_model = function() {
model <- train(Class ~ ., data = bc, method = "svmRadial", prob.model=TRUE)
return(model)
}
# dataset
model = create_model()
pred <- predict(model, as.matrix(bc[, -TGT_IDX]) , type="prob")
pred_labels <- predict(model, as.matrix(bc[, -TGT_IDX]) , type="raw")
sum(pred_labels != bc$Class)/length(pred_labels)
pred[1:5,]
###Output
_____no_output_____
###Markdown
SQL Code Generation
###Code
test_ws_sql_gen = function(mod) {
WS_URL = "https://sklearn2sql.herokuapp.com/model"
WS_URL = "http://localhost:1888/model"
model_serialized <- serialize(mod, NULL)
b64_data = base64encode(model_serialized)
data = list(Name = "caret_svm_test_model", SerializedModel = b64_data , SQLDialect = "postgresql" , Mode="caret")
r = POST(WS_URL, body = data, encode = "json")
# print(r)
content = content(r)
# print(content)
lSQL = content$model$SQLGenrationResult[[1]]$SQL # content["model"]["SQLGenrationResult"][0]["SQL"]
return(lSQL);
}
lModelSQL = test_ws_sql_gen(model)
cat(lModelSQL)
###Output
WITH kernel_input_with_scaling AS
(SELECT "ADS"."KEY" AS "KEY", (CAST("ADS"."Feature_0" AS FLOAT) - 4.4177396280400565) / 2.8157406585949314 AS "Feature_0", (CAST("ADS"."Feature_1" AS FLOAT) - 3.13447782546495) / 3.0514591099542008 AS "Feature_1", (CAST("ADS"."Feature_2" AS FLOAT) - 3.2074391988555084) / 2.971912767215713 AS "Feature_2", (CAST("ADS"."Feature_3" AS FLOAT) - 2.8068669527896994) / 2.8553792392170236 AS "Feature_3", (CAST("ADS"."Feature_4" AS FLOAT) - 3.2160228898426317) / 2.2142998866490484 AS "Feature_4", (CAST("ADS"."Feature_5" AS FLOAT) - 3.544655929721816) / 3.6018516398045315 AS "Feature_5", (CAST("ADS"."Feature_6" AS FLOAT) - 3.4377682403433485) / 2.438364252324251 AS "Feature_6", (CAST("ADS"."Feature_7" AS FLOAT) - 2.866952789699571) / 3.0536338936127745 AS "Feature_7", (CAST("ADS"."Feature_8" AS FLOAT) - 1.569384835479256) / 1.619802614296755 AS "Feature_8"
FROM "INPUT_DATA" AS "ADS"),
"SV_data" AS
(SELECT "Values".sv_idx AS sv_idx, "Values".dual_coeff AS dual_coeff, "Values".sv_0 AS sv_0, "Values".sv_1 AS sv_1, "Values".sv_2 AS sv_2, "Values".sv_3 AS sv_3, "Values".sv_4 AS sv_4, "Values".sv_5 AS sv_5, "Values".sv_6 AS sv_6, "Values".sv_7 AS sv_7, "Values".sv_8 AS sv_8
FROM (SELECT 0 AS sv_idx, -1.0 AS dual_coeff, 0.2067876422434776 AS sv_0, 0.283642068711201 AS sv_1, 0.26668373644325233 AS sv_2, 0.7680706706446747 AS sv_3, 1.7088819508922748 AS sv_4, 1.7922293075426368 AS sv_5, -0.1795335704770387 AS sv_6, -0.2839085561346954 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 1 AS sv_idx, -1.0 AS dual_coeff, 0.5619339860473863 AS sv_0, 1.5944903730360245 AS sv_1, 1.6126182618860927 AS sv_2, -0.6327940358931664 AS sv_3, -0.09755810003203527 AS sv_4, 0.1264194408359628 AS sv_5, -0.1795335704770387 AS sv_6, 1.353484849295602 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 2 AS sv_idx, 0.08250539903099623 AS dual_coeff, 1.272226673655204 AS sv_0, 2.249914525198436 AS sv_1, 2.2855855246075127 AS sv_2, 1.818719200548056 AS sv_3, 1.7088819508922748 AS sv_4, 1.7922293075426368 AS sv_5, 2.281132424884727 AS sv_6, 1.353484849295602 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 3 AS sv_idx, -1.0 AS dual_coeff, -1.2137977329721574 AS sv_0, -0.6994941595324167 AS sv_1, -0.7427671576388781 AS sv_2, -0.6327940358931664 AS sv_3, -0.5491681127631127 AS sv_4, 1.7922293075426368 AS sv_5, -0.1795335704770387 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 4 AS sv_idx, -1.0 AS dual_coeff, -0.8586513891682487 AS sv_0, -0.6994941595324167 AS sv_1, -0.7427671576388781 AS sv_2, -0.6327940358931664 AS sv_3, -0.5491681127631127 AS sv_4, -0.7064854925173741 AS sv_5, -0.9997555689309604 AS sv_6, -0.6113872372207548 AS sv_7, 2.11792173579752 AS sv_8 UNION ALL SELECT 5 AS sv_idx, 1.0 AS dual_coeff, 0.2067876422434776 AS sv_0, -0.04407000737000486 AS sv_1, -0.06979989491745779 AS sv_2, 0.06763831737575418 AS sv_3, -0.5491681127631127 AS sv_4, -0.1512155369484829 AS sv_5, 0.2305774287499223 AS sv_6, 0.3710488060374236 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 6 AS sv_idx, 0.17597518898302839 AS dual_coeff, 1.272226673655204 AS sv_0, 1.2667782969548187 AS sv_1, 0.6031673678039624 AS sv_2, 2.519151553816976 AS sv_3, 1.7088819508922748 AS sv_4, 1.514594329758191 AS sv_5, 0.6406884279768832 AS sv_6, 0.698527487123483 AS sv_7, 1.5005625642702196 AS sv_8 UNION ALL SELECT 7 AS sv_idx, 0.6094734479491689 AS dual_coeff, 0.9170803298512952 AS sv_0, 0.283642068711201 AS sv_1, 0.9396509991646724 AS sv_2, 0.4178544940102145 AS sv_3, 1.2572719381611972 AS sv_4, -0.7064854925173741 AS sv_5, 0.2305774287499223 AS sv_6, 0.043570124951364066 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 8 AS sv_idx, 0.10386595064085792 AS dual_coeff, 1.9825193612630212 AS sv_0, 1.2667782969548187 AS sv_1, 1.2761346305253827 AS sv_2, 1.1182868472791347 AS sv_3, 0.35405191269904224 AS sv_4, 1.7922293075426368 AS sv_5, 0.2305774287499223 AS sv_6, -0.6113872372207548 AS sv_7, 0.2658442212156186 AS sv_8 UNION ALL SELECT 9 AS sv_idx, 0.18852721058703875 AS dual_coeff, 0.9170803298512952 AS sv_0, -0.04407000737000486 AS sv_1, -0.406283526278168 AS sv_2, 2.519151553816976 AS sv_3, 0.8056619254301197 AS sv_4, 1.7922293075426368 AS sv_5, 0.6406884279768832 AS sv_6, 0.3710488060374236 AS sv_7, 1.5005625642702196 AS sv_8 UNION ALL SELECT 10 AS sv_idx, 0.22043438373312774 AS dual_coeff, 1.9825193612630212 AS sv_0, 0.6113541447924069 AS sv_1, 0.6031673678039624 AS sv_2, 0.06763831737575418 AS sv_3, 1.2572719381611972 AS sv_4, 0.9593243741892996 AS sv_5, 1.460910426430805 AS sv_6, 2.3359208925537804 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 11 AS sv_idx, 0.15562008581358547 AS dual_coeff, 1.272226673655204 AS sv_0, 0.283642068711201 AS sv_1, 
0.6031673678039624 AS sv_2, -0.6327940358931664 AS sv_3, -0.5491681127631127 AS sv_4, 0.0 AS sv_5, 1.460910426430805 AS sv_6, 0.043570124951364066 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 12 AS sv_idx, 0.08311885535733513 AS dual_coeff, 0.2067876422434776 AS sv_0, -0.3717820834512108 AS sv_1, -0.06979989491745779 AS sv_2, 0.4178544940102145 AS sv_3, -0.5491681127631127 AS sv_4, 0.9593243741892996 AS sv_5, -0.1795335704770387 AS sv_6, 1.0260061682095425 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 13 AS sv_idx, 0.1933503867072343 AS dual_coeff, 1.9825193612630212 AS sv_0, 1.2667782969548187 AS sv_1, 1.2761346305253827 AS sv_2, 0.06763831737575418 AS sv_3, 2.160491963623352 AS sv_4, 0.4040544186204084 AS sv_5, 1.460910426430805 AS sv_6, 0.3710488060374236 AS sv_7, 0.883203392742919 AS sv_8 UNION ALL SELECT 14 AS sv_idx, 0.13476585777829644 AS dual_coeff, 1.9825193612630212 AS sv_0, 2.249914525198436 AS sv_1, 2.2855855246075127 AS sv_2, 1.818719200548056 AS sv_3, 1.2572719381611972 AS sv_4, -0.7064854925173741 AS sv_5, 1.8710214256577664 AS sv_6, 2.0084422114677207 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 15 AS sv_idx, -1.0 AS dual_coeff, 0.5619339860473863 AS sv_0, -0.3717820834512108 AS sv_1, -0.7427671576388781 AS sv_2, -0.6327940358931664 AS sv_3, -1.0007781254941903 AS sv_4, -0.7064854925173741 AS sv_5, 1.460910426430805 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 16 AS sv_idx, 0.11172291566953052 AS dual_coeff, 0.2067876422434776 AS sv_0, 0.283642068711201 AS sv_1, 0.26668373644325233 AS sv_2, 2.168935377182516 AS sv_3, -0.5491681127631127 AS sv_4, 1.7922293075426368 AS sv_5, 0.6406884279768832 AS sv_6, 1.0260061682095425 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 17 AS sv_idx, 0.5341823878737971 AS dual_coeff, -0.8586513891682487 AS sv_0, 0.6113541447924069 AS sv_1, -0.06979989491745779 AS sv_2, 0.06763831737575418 AS sv_3, 1.2572719381611972 AS sv_4, 0.9593243741892996 AS sv_5, 1.460910426430805 AS sv_6, 0.698527487123483 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 18 AS sv_idx, -1.0 AS dual_coeff, 0.5619339860473863 AS sv_0, 0.9390662208736128 AS sv_1, 0.9396509991646724 AS sv_2, 2.168935377182516 AS sv_3, 1.2572719381611972 AS sv_4, 0.0 AS sv_5, 1.460910426430805 AS sv_6, 1.6809635303816615 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 19 AS sv_idx, 0.14766293613367348 AS dual_coeff, 1.9825193612630212 AS sv_0, 0.283642068711201 AS sv_1, -0.06979989491745779 AS sv_2, -0.6327940358931664 AS sv_3, -0.09755810003203527 AS sv_4, -0.1512155369484829 AS sv_5, 1.050799427203844 AS sv_6, 0.698527487123483 AS sv_7, 0.2658442212156186 AS sv_8 UNION ALL SELECT 20 AS sv_idx, 0.2163668891246106 AS dual_coeff, 0.2067876422434776 AS sv_0, 0.9390662208736128 AS sv_1, 0.6031673678039624 AS sv_2, 1.1182868472791347 AS sv_3, 3.063711989085508 AS sv_4, -0.7064854925173741 AS sv_5, -0.1795335704770387 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 21 AS sv_idx, 0.16937586182178843 AS dual_coeff, 1.9825193612630212 AS sv_0, 2.249914525198436 AS sv_1, 2.2855855246075127 AS sv_2, 0.4178544940102145 AS sv_3, 2.160491963623352 AS sv_4, -0.7064854925173741 AS sv_5, 1.8710214256577664 AS sv_6, 2.3359208925537804 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 22 AS sv_idx, 0.1320503056685164 AS dual_coeff, -0.5035050453643399 AS sv_0, 1.2667782969548187 AS sv_1, 1.2761346305253827 AS sv_2, 0.4178544940102145 AS sv_3, 0.35405191269904224 AS sv_4, 1.514594329758191 AS 
sv_5, 0.2305774287499223 AS sv_6, 1.6809635303816615 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 23 AS sv_idx, 0.264238262855903 AS dual_coeff, 0.9170803298512952 AS sv_0, 1.5944903730360245 AS sv_1, 1.2761346305253827 AS sv_2, -0.2825778592587061 AS sv_3, 0.35405191269904224 AS sv_4, 1.2369593519737454 AS sv_5, -0.1795335704770387 AS sv_6, 1.6809635303816615 AS sv_7, 0.2658442212156186 AS sv_8 UNION ALL SELECT 24 AS sv_idx, 0.20823244428615717 AS dual_coeff, 1.6273730174591126 AS sv_0, 0.6113541447924069 AS sv_1, 1.6126182618860927 AS sv_2, -0.6327940358931664 AS sv_3, -0.5491681127631127 AS sv_4, -0.1512155369484829 AS sv_5, -0.5896445697039996 AS sv_6, -0.6113872372207548 AS sv_7, 2.11792173579752 AS sv_8 UNION ALL SELECT 25 AS sv_idx, 0.8825877598819007 AS dual_coeff, 0.2067876422434776 AS sv_0, -0.04407000737000486 AS sv_1, -0.06979989491745779 AS sv_2, 0.4178544940102145 AS sv_3, -0.5491681127631127 AS sv_4, 0.1264194408359628 AS sv_5, -0.1795335704770387 AS sv_6, 0.3710488060374236 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 26 AS sv_idx, 0.16045676379862123 AS dual_coeff, 1.9825193612630212 AS sv_0, -0.04407000737000486 AS sv_1, 0.9396509991646724 AS sv_2, -0.2825778592587061 AS sv_3, -0.09755810003203527 AS sv_4, 0.4040544186204084 AS sv_5, 0.2305774287499223 AS sv_6, 2.3359208925537804 AS sv_7, 0.2658442212156186 AS sv_8 UNION ALL SELECT 27 AS sv_idx, 0.21686160396839504 AS dual_coeff, 0.2067876422434776 AS sv_0, 0.6113541447924069 AS sv_1, 0.6031673678039624 AS sv_2, 1.818719200548056 AS sv_3, 3.063711989085508 AS sv_4, 1.2369593519737454 AS sv_5, 1.460910426430805 AS sv_6, 0.043570124951364066 AS sv_7, 3.352640078852121 AS sv_8 UNION ALL SELECT 28 AS sv_idx, 0.17990076451502518 AS dual_coeff, 1.9825193612630212 AS sv_0, 0.6113541447924069 AS sv_1, 0.6031673678039624 AS sv_2, 1.1182868472791347 AS sv_3, 2.160491963623352 AS sv_4, 1.2369593519737454 AS sv_5, 1.460910426430805 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 29 AS sv_idx, 0.2018094308228736 AS dual_coeff, 1.9825193612630212 AS sv_0, 0.9390662208736128 AS sv_1, 0.9396509991646724 AS sv_2, 0.06763831737575418 AS sv_3, 0.35405191269904224 AS sv_4, 0.4040544186204084 AS sv_5, -0.1795335704770387 AS sv_6, 1.0260061682095425 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 30 AS sv_idx, 0.33872277996910516 AS dual_coeff, 1.272226673655204 AS sv_0, 2.249914525198436 AS sv_1, 2.2855855246075127 AS sv_2, -0.6327940358931664 AS sv_3, -0.09755810003203527 AS sv_4, 0.681689396404854 AS sv_5, -0.1795335704770387 AS sv_6, 2.0084422114677207 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 31 AS sv_idx, 0.15887391453891495 AS dual_coeff, 1.272226673655204 AS sv_0, -0.3717820834512108 AS sv_1, 0.26668373644325233 AS sv_2, -0.6327940358931664 AS sv_3, 0.8056619254301197 AS sv_4, -0.7064854925173741 AS sv_5, 0.6406884279768832 AS sv_6, 0.3710488060374236 AS sv_7, 1.5005625642702196 AS sv_8 UNION ALL SELECT 32 AS sv_idx, 0.737045894051881 AS dual_coeff, 1.6273730174591126 AS sv_0, 0.6113541447924069 AS sv_1, 0.6031673678039624 AS sv_2, -0.2825778592587061 AS sv_3, -0.5491681127631127 AS sv_4, -0.4288505147329285 AS sv_5, 0.6406884279768832 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 33 AS sv_idx, 0.393591584960138 AS dual_coeff, 0.2067876422434776 AS sv_0, -0.04407000737000486 AS sv_1, 0.6031673678039624 AS sv_2, 0.7680706706446747 AS sv_3, -0.09755810003203527 AS sv_4, -0.1512155369484829 AS sv_5, 0.2305774287499223 AS sv_6, 
2.3359208925537804 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 34 AS sv_idx, 0.5527111962652349 AS dual_coeff, 0.5619339860473863 AS sv_0, -0.04407000737000486 AS sv_1, 0.26668373644325233 AS sv_2, -0.6327940358931664 AS sv_3, 0.8056619254301197 AS sv_4, -0.4288505147329285 AS sv_5, -0.1795335704770387 AS sv_6, 2.0084422114677207 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 35 AS sv_idx, 0.21275311508157158 AS dual_coeff, 1.9825193612630212 AS sv_0, 0.283642068711201 AS sv_1, -0.406283526278168 AS sv_2, -0.6327940358931664 AS sv_3, -0.09755810003203527 AS sv_4, -0.4288505147329285 AS sv_5, 0.2305774287499223 AS sv_6, 0.043570124951364066 AS sv_7, 4.587358421906721 AS sv_8 UNION ALL SELECT 36 AS sv_idx, 0.211668314610315 AS dual_coeff, 0.2067876422434776 AS sv_0, -0.04407000737000486 AS sv_1, 0.26668373644325233 AS sv_2, -0.6327940358931664 AS sv_3, 2.160491963623352 AS sv_4, 1.7922293075426368 AS sv_5, 0.2305774287499223 AS sv_6, 2.0084422114677207 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 37 AS sv_idx, 0.21632260704985118 AS dual_coeff, 1.272226673655204 AS sv_0, -0.04407000737000486 AS sv_1, 1.6126182618860927 AS sv_2, 0.06763831737575418 AS sv_3, 0.35405191269904224 AS sv_4, 1.514594329758191 AS sv_5, 1.8710214256577664 AS sv_6, 2.0084422114677207 AS sv_7, 3.969999250379421 AS sv_8 UNION ALL SELECT 38 AS sv_idx, 0.22009031439479163 AS dual_coeff, 0.5619339860473863 AS sv_0, 2.249914525198436 AS sv_1, -0.406283526278168 AS sv_2, 1.818719200548056 AS sv_3, 3.063711989085508 AS sv_4, -0.4288505147329285 AS sv_5, 1.460910426430805 AS sv_6, 1.6809635303816615 AS sv_7, 4.587358421906721 AS sv_8 UNION ALL SELECT 39 AS sv_idx, -1.0 AS dual_coeff, -1.2137977329721574 AS sv_0, -0.04407000737000486 AS sv_1, -0.06979989491745779 AS sv_2, -0.2825778592587061 AS sv_3, -0.5491681127631127 AS sv_4, -0.7064854925173741 AS sv_5, 1.460910426430805 AS sv_6, -0.2839085561346954 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 40 AS sv_idx, 0.1567434534061177 AS dual_coeff, 1.6273730174591126 AS sv_0, 0.283642068711201 AS sv_1, 0.6031673678039624 AS sv_2, 2.519151553816976 AS sv_3, 1.2572719381611972 AS sv_4, 1.7922293075426368 AS sv_5, 0.2305774287499223 AS sv_6, 1.6809635303816615 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 41 AS sv_idx, 0.13146929700563004 AS dual_coeff, 1.9825193612630212 AS sv_0, 0.9390662208736128 AS sv_1, 0.26668373644325233 AS sv_2, -0.6327940358931664 AS sv_3, -0.09755810003203527 AS sv_4, 0.1264194408359628 AS sv_5, -0.1795335704770387 AS sv_6, -0.2839085561346954 AS sv_7, 0.883203392742919 AS sv_8 UNION ALL SELECT 42 AS sv_idx, -0.6070896062161149 AS dual_coeff, -1.2137977329721574 AS sv_0, -0.6994941595324167 AS sv_1, 0.26668373644325233 AS sv_2, -0.6327940358931664 AS sv_3, -0.5491681127631127 AS sv_4, -0.7064854925173741 AS sv_5, -0.5896445697039996 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 43 AS sv_idx, -0.43084341661686415 AS dual_coeff, -0.8586513891682487 AS sv_0, -0.3717820834512108 AS sv_1, -0.406283526278168 AS sv_2, -0.6327940358931664 AS sv_3, -1.0007781254941903 AS sv_4, -0.7064854925173741 AS sv_5, 1.460910426430805 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 44 AS sv_idx, -0.33557124514600334 AS dual_coeff, -0.5035050453643399 AS sv_0, -0.6994941595324167 AS sv_1, -0.7427671576388781 AS sv_2, -0.6327940358931664 AS sv_3, -0.5491681127631127 AS sv_4, -0.4288505147329285 AS sv_5, 1.460910426430805 AS sv_6, -0.6113872372207548 AS sv_7, 
-0.3515149503116818 AS sv_8 UNION ALL SELECT 45 AS sv_idx, 0.21246626209003944 AS dual_coeff, -0.5035050453643399 AS sv_0, 0.6113541447924069 AS sv_1, 1.2761346305253827 AS sv_2, 1.818719200548056 AS sv_3, 2.160491963623352 AS sv_4, 1.514594329758191 AS sv_5, 1.460910426430805 AS sv_6, 2.3359208925537804 AS sv_7, 3.352640078852121 AS sv_8 UNION ALL SELECT 46 AS sv_idx, 0.2175268983256731 AS dual_coeff, 0.2067876422434776 AS sv_0, 2.249914525198436 AS sv_1, 0.9396509991646724 AS sv_2, -0.6327940358931664 AS sv_3, 3.063711989085508 AS sv_4, 0.1264194408359628 AS sv_5, 0.2305774287499223 AS sv_6, 2.3359208925537804 AS sv_7, 4.587358421906721 AS sv_8 UNION ALL SELECT 47 AS sv_idx, 0.331817040285637 AS dual_coeff, -0.5035050453643399 AS sv_0, -0.04407000737000486 AS sv_1, 0.9396509991646724 AS sv_2, 0.4178544940102145 AS sv_3, 0.8056619254301197 AS sv_4, 1.2369593519737454 AS sv_5, 0.2305774287499223 AS sv_6, 0.3710488060374236 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 48 AS sv_idx, 0.08056801151418086 AS dual_coeff, -0.5035050453643399 AS sv_0, 0.9390662208736128 AS sv_1, 0.9396509991646724 AS sv_2, 1.1182868472791347 AS sv_3, 0.8056619254301197 AS sv_4, 1.7922293075426368 AS sv_5, 1.050799427203844 AS sv_6, 1.6809635303816615 AS sv_7, 0.883203392742919 AS sv_8 UNION ALL SELECT 49 AS sv_idx, 0.20452633453289454 AS dual_coeff, 1.6273730174591126 AS sv_0, 0.9390662208736128 AS sv_1, 1.9491018932468027 AS sv_2, -0.2825778592587061 AS sv_3, 3.063711989085508 AS sv_4, 0.681689396404854 AS sv_5, -0.5896445697039996 AS sv_6, 2.0084422114677207 AS sv_7, 4.587358421906721 AS sv_8 UNION ALL SELECT 50 AS sv_idx, 0.14798668630245437 AS dual_coeff, 0.9170803298512952 AS sv_0, 0.6113541447924069 AS sv_1, 0.9396509991646724 AS sv_2, 2.519151553816976 AS sv_3, 0.8056619254301197 AS sv_4, 1.7922293075426368 AS sv_5, 1.460910426430805 AS sv_6, 2.0084422114677207 AS sv_7, 1.5005625642702196 AS sv_8 UNION ALL SELECT 51 AS sv_idx, 0.21969406212401837 AS dual_coeff, 1.9825193612630212 AS sv_0, -0.04407000737000486 AS sv_1, 0.6031673678039624 AS sv_2, -0.6327940358931664 AS sv_3, 3.063711989085508 AS sv_4, 0.4040544186204084 AS sv_5, -0.1795335704770387 AS sv_6, 2.3359208925537804 AS sv_7, 0.2658442212156186 AS sv_8 UNION ALL SELECT 52 AS sv_idx, 0.5572807662626157 AS dual_coeff, -0.8586513891682487 AS sv_0, -0.04407000737000486 AS sv_1, 0.26668373644325233 AS sv_2, 0.4178544940102145 AS sv_3, -0.5491681127631127 AS sv_4, 0.4040544186204084 AS sv_5, -0.5896445697039996 AS sv_6, 0.698527487123483 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 53 AS sv_idx, 0.292496160423903 AS dual_coeff, 1.272226673655204 AS sv_0, -0.3717820834512108 AS sv_1, -0.06979989491745779 AS sv_2, -0.6327940358931664 AS sv_3, 1.2572719381611972 AS sv_4, -0.1512155369484829 AS sv_5, 1.460910426430805 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 54 AS sv_idx, 0.1964874079787911 AS dual_coeff, 1.9825193612630212 AS sv_0, 2.249914525198436 AS sv_1, 2.2855855246075127 AS sv_2, 2.519151553816976 AS sv_3, 3.063711989085508 AS sv_4, -0.7064854925173741 AS sv_5, 1.8710214256577664 AS sv_6, 1.6809635303816615 AS sv_7, 3.969999250379421 AS sv_8 UNION ALL SELECT 55 AS sv_idx, 0.2076521680000425 AS dual_coeff, 0.9170803298512952 AS sv_0, -0.04407000737000486 AS sv_1, 0.26668373644325233 AS sv_2, 0.4178544940102145 AS sv_3, -0.09755810003203527 AS sv_4, -0.1512155369484829 AS sv_5, -0.1795335704770387 AS sv_6, -0.2839085561346954 AS sv_7, 3.352640078852121 AS sv_8 UNION ALL SELECT 56 AS sv_idx, 
0.20890276994154502 AS dual_coeff, 1.9825193612630212 AS sv_0, 2.249914525198436 AS sv_1, 2.2855855246075127 AS sv_2, 1.818719200548056 AS sv_3, -0.5491681127631127 AS sv_4, 1.7922293075426368 AS sv_5, 0.2305774287499223 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 57 AS sv_idx, 0.1984897296985613 AS dual_coeff, -1.2137977329721574 AS sv_0, 0.9390662208736128 AS sv_1, 1.6126182618860927 AS sv_2, 2.519151553816976 AS sv_3, 2.160491963623352 AS sv_4, 1.7922293075426368 AS sv_5, 0.6406884279768832 AS sv_6, 1.353484849295602 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 58 AS sv_idx, 0.3632791904411657 AS dual_coeff, 0.5619339860473863 AS sv_0, 0.6113541447924069 AS sv_1, 0.26668373644325233 AS sv_2, 0.4178544940102145 AS sv_3, -0.09755810003203527 AS sv_4, 1.514594329758191 AS sv_5, 1.460910426430805 AS sv_6, 1.6809635303816615 AS sv_7, 0.883203392742919 AS sv_8 UNION ALL SELECT 59 AS sv_idx, -0.8376818623647849 AS dual_coeff, -1.2137977329721574 AS sv_0, -0.04407000737000486 AS sv_1, -0.7427671576388781 AS sv_2, -0.2825778592587061 AS sv_3, -0.5491681127631127 AS sv_4, -0.4288505147329285 AS sv_5, 0.6406884279768832 AS sv_6, 0.043570124951364066 AS sv_7, 0.2658442212156186 AS sv_8 UNION ALL SELECT 60 AS sv_idx, 0.1502802959752757 AS dual_coeff, 1.272226673655204 AS sv_0, 0.9390662208736128 AS sv_1, 0.26668373644325233 AS sv_2, 0.06763831737575418 AS sv_3, 0.8056619254301197 AS sv_4, 1.514594329758191 AS sv_5, -0.1795335704770387 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 61 AS sv_idx, 0.11214069258102884 AS dual_coeff, 1.9825193612630212 AS sv_0, -0.04407000737000486 AS sv_1, -0.06979989491745779 AS sv_2, 2.519151553816976 AS sv_3, -0.5491681127631127 AS sv_4, 1.7922293075426368 AS sv_5, 1.460910426430805 AS sv_6, 0.043570124951364066 AS sv_7, 0.883203392742919 AS sv_8 UNION ALL SELECT 62 AS sv_idx, 0.14410390806746848 AS dual_coeff, 1.9825193612630212 AS sv_0, 2.249914525198436 AS sv_1, 2.2855855246075127 AS sv_2, 0.06763831737575418 AS sv_3, 3.063711989085508 AS sv_4, 1.2369593519737454 AS sv_5, 1.8710214256577664 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 63 AS sv_idx, -1.0 AS dual_coeff, 1.272226673655204 AS sv_0, -0.04407000737000486 AS sv_1, -0.06979989491745779 AS sv_2, -0.6327940358931664 AS sv_3, -0.5491681127631127 AS sv_4, -0.4288505147329285 AS sv_5, -0.1795335704770387 AS sv_6, -0.2839085561346954 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 64 AS sv_idx, 0.21517641415886407 AS dual_coeff, -0.1483587015604312 AS sv_0, 0.6113541447924069 AS sv_1, 0.6031673678039624 AS sv_2, 2.519151553816976 AS sv_3, 0.35405191269904224 AS sv_4, 1.7922293075426368 AS sv_5, 1.460910426430805 AS sv_6, 0.698527487123483 AS sv_7, 3.969999250379421 AS sv_8 UNION ALL SELECT 65 AS sv_idx, -0.11597802861156285 AS dual_coeff, -1.2137977329721574 AS sv_0, -0.6994941595324167 AS sv_1, -0.7427671576388781 AS sv_2, -0.6327940358931664 AS sv_3, 0.35405191269904224 AS sv_4, -0.1512155369484829 AS sv_5, -0.9997555689309604 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 66 AS sv_idx, 0.17114190110017222 AS dual_coeff, 1.9825193612630212 AS sv_0, 2.249914525198436 AS sv_1, 2.2855855246075127 AS sv_2, -0.2825778592587061 AS sv_3, 3.063711989085508 AS sv_4, 1.7922293075426368 AS sv_5, 0.6406884279768832 AS sv_6, 0.043570124951364066 AS sv_7, 0.883203392742919 AS sv_8 UNION ALL SELECT 67 AS sv_idx, 0.18737806792769177 AS dual_coeff, 
0.2067876422434776 AS sv_0, -0.04407000737000486 AS sv_1, 0.6031673678039624 AS sv_2, -0.6327940358931664 AS sv_3, 2.160491963623352 AS sv_4, 1.7922293075426368 AS sv_5, 0.6406884279768832 AS sv_6, 0.043570124951364066 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 68 AS sv_idx, 0.23940556937546265 AS dual_coeff, 0.2067876422434776 AS sv_0, 0.283642068711201 AS sv_1, 0.9396509991646724 AS sv_2, 1.4685030239135957 AS sv_3, 2.6121019763544298 AS sv_4, 0.9593243741892996 AS sv_5, 1.8710214256577664 AS sv_6, 2.3359208925537804 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 69 AS sv_idx, 0.15869313960205242 AS dual_coeff, 0.9170803298512952 AS sv_0, 0.6113541447924069 AS sv_1, -0.06979989491745779 AS sv_2, 1.4685030239135957 AS sv_3, 0.35405191269904224 AS sv_4, 1.7922293075426368 AS sv_5, 1.460910426430805 AS sv_6, 0.698527487123483 AS sv_7, 2.11792173579752 AS sv_8 UNION ALL SELECT 70 AS sv_idx, 0.22662583290599936 AS dual_coeff, 1.272226673655204 AS sv_0, -0.04407000737000486 AS sv_1, 0.6031673678039624 AS sv_2, 0.4178544940102145 AS sv_3, 0.8056619254301197 AS sv_4, 1.7922293075426368 AS sv_5, -0.9997555689309604 AS sv_6, 1.0260061682095425 AS sv_7, 0.2658442212156186 AS sv_8 UNION ALL SELECT 71 AS sv_idx, -1.0 AS dual_coeff, -1.2137977329721574 AS sv_0, -0.6994941595324167 AS sv_1, -0.7427671576388781 AS sv_2, -0.6327940358931664 AS sv_3, 3.063711989085508 AS sv_4, -0.7064854925173741 AS sv_5, -0.9997555689309604 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 72 AS sv_idx, 0.1795376705037413 AS dual_coeff, 0.2067876422434776 AS sv_0, 2.249914525198436 AS sv_1, 1.6126182618860927 AS sv_2, 2.519151553816976 AS sv_3, 2.160491963623352 AS sv_4, 1.7922293075426368 AS sv_5, -0.1795335704770387 AS sv_6, 1.0260061682095425 AS sv_7, 0.883203392742919 AS sv_8 UNION ALL SELECT 73 AS sv_idx, 0.0883543685613064 AS dual_coeff, 1.6273730174591126 AS sv_0, 0.6113541447924069 AS sv_1, 0.6031673678039624 AS sv_2, 0.4178544940102145 AS sv_3, 0.35405191269904224 AS sv_4, 0.4040544186204084 AS sv_5, 0.2305774287499223 AS sv_6, 0.043570124951364066 AS sv_7, 0.883203392742919 AS sv_8 UNION ALL SELECT 74 AS sv_idx, -0.4809802166895071 AS dual_coeff, -1.2137977329721574 AS sv_0, -0.6994941595324167 AS sv_1, -0.7427671576388781 AS sv_2, -0.6327940358931664 AS sv_3, -0.5491681127631127 AS sv_4, 0.4040544186204084 AS sv_5, -0.9997555689309604 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 75 AS sv_idx, 0.1946829399611661 AS dual_coeff, -0.5035050453643399 AS sv_0, 0.283642068711201 AS sv_1, 0.6031673678039624 AS sv_2, -0.2825778592587061 AS sv_3, 1.2572719381611972 AS sv_4, 1.2369593519737454 AS sv_5, 0.2305774287499223 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 76 AS sv_idx, -1.0 AS dual_coeff, -0.5035050453643399 AS sv_0, -0.6994941595324167 AS sv_1, -0.7427671576388781 AS sv_2, 0.06763831737575418 AS sv_3, 2.160491963623352 AS sv_4, -0.7064854925173741 AS sv_5, 0.6406884279768832 AS sv_6, 1.6809635303816615 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 77 AS sv_idx, 0.20390913265393007 AS dual_coeff, 1.272226673655204 AS sv_0, 1.5944903730360245 AS sv_1, 1.2761346305253827 AS sv_2, 0.4178544940102145 AS sv_3, 3.063711989085508 AS sv_4, 1.7922293075426368 AS sv_5, 1.460910426430805 AS sv_6, 1.6809635303816615 AS sv_7, 3.352640078852121 AS sv_8 UNION ALL SELECT 78 AS sv_idx, 0.1658677224614167 AS dual_coeff, 0.9170803298512952 AS sv_0, -0.3717820834512108 AS sv_1, 0.26668373644325233 
AS sv_2, -0.6327940358931664 AS sv_3, 1.2572719381611972 AS sv_4, 1.7922293075426368 AS sv_5, 0.6406884279768832 AS sv_6, 0.3710488060374236 AS sv_7, 0.883203392742919 AS sv_8 UNION ALL SELECT 79 AS sv_idx, 0.08235221695723947 AS dual_coeff, 1.9825193612630212 AS sv_0, 2.249914525198436 AS sv_1, 1.6126182618860927 AS sv_2, 1.1182868472791347 AS sv_3, 0.35405191269904224 AS sv_4, 0.4040544186204084 AS sv_5, 1.8710214256577664 AS sv_6, 2.3359208925537804 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 80 AS sv_idx, 0.1481332321657231 AS dual_coeff, 0.2067876422434776 AS sv_0, 0.6113541447924069 AS sv_1, 0.6031673678039624 AS sv_2, 1.1182868472791347 AS sv_3, -0.09755810003203527 AS sv_4, 1.7922293075426368 AS sv_5, -0.1795335704770387 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 81 AS sv_idx, 0.21048419845347482 AS dual_coeff, 1.6273730174591126 AS sv_0, 1.9222024491172305 AS sv_1, 2.2855855246075127 AS sv_2, 0.06763831737575418 AS sv_3, 1.2572719381611972 AS sv_4, 1.7922293075426368 AS sv_5, 1.460910426430805 AS sv_6, 2.3359208925537804 AS sv_7, 2.7352809073248205 AS sv_8 UNION ALL SELECT 82 AS sv_idx, 0.04403190851860452 AS dual_coeff, 1.9825193612630212 AS sv_0, 1.2667782969548187 AS sv_1, 1.2761346305253827 AS sv_2, 0.4178544940102145 AS sv_3, 0.8056619254301197 AS sv_4, 1.7922293075426368 AS sv_5, 0.6406884279768832 AS sv_6, 1.353484849295602 AS sv_7, 0.2658442212156186 AS sv_8 UNION ALL SELECT 83 AS sv_idx, -1.0 AS dual_coeff, -1.2137977329721574 AS sv_0, -0.6994941595324167 AS sv_1, -0.7427671576388781 AS sv_2, -0.2825778592587061 AS sv_3, -1.0007781254941903 AS sv_4, -0.1512155369484829 AS sv_5, -0.9997555689309604 AS sv_6, -0.6113872372207548 AS sv_7, 3.352640078852121 AS sv_8 UNION ALL SELECT 84 AS sv_idx, -0.4890289442227047 AS dual_coeff, 0.2067876422434776 AS sv_0, -0.6994941595324167 AS sv_1, -0.7427671576388781 AS sv_2, -0.6327940358931664 AS sv_3, -0.5491681127631127 AS sv_4, 0.0 AS sv_5, -0.1795335704770387 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 85 AS sv_idx, 0.1350767677116078 AS dual_coeff, 0.2067876422434776 AS sv_0, 0.9390662208736128 AS sv_1, 1.2761346305253827 AS sv_2, 1.818719200548056 AS sv_3, 2.160491963623352 AS sv_4, 1.7922293075426368 AS sv_5, -0.1795335704770387 AS sv_6, 2.3359208925537804 AS sv_7, 0.883203392742919 AS sv_8 UNION ALL SELECT 86 AS sv_idx, 0.2199401728670098 AS dual_coeff, 1.9825193612630212 AS sv_0, 1.5944903730360245 AS sv_1, 2.2855855246075127 AS sv_2, 2.519151553816976 AS sv_3, 1.2572719381611972 AS sv_4, -0.7064854925173741 AS sv_5, -0.1795335704770387 AS sv_6, -0.6113872372207548 AS sv_7, 4.587358421906721 AS sv_8 UNION ALL SELECT 87 AS sv_idx, 0.12919849250479631 AS dual_coeff, 0.5619339860473863 AS sv_0, 2.249914525198436 AS sv_1, 2.2855855246075127 AS sv_2, 2.519151553816976 AS sv_3, 2.160491963623352 AS sv_4, 1.7922293075426368 AS sv_5, 2.691243424111688 AS sv_6, 2.3359208925537804 AS sv_7, 3.352640078852121 AS sv_8 UNION ALL SELECT 88 AS sv_idx, 0.1972785913122307 AS dual_coeff, 0.2067876422434776 AS sv_0, 1.5944903730360245 AS sv_1, 1.2761346305253827 AS sv_2, 1.4685030239135957 AS sv_3, 3.063711989085508 AS sv_4, 1.7922293075426368 AS sv_5, 0.6406884279768832 AS sv_6, 1.353484849295602 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 89 AS sv_idx, 0.18464934756702386 AS dual_coeff, 0.2067876422434776 AS sv_0, 2.249914525198436 AS sv_1, 2.2855855246075127 AS sv_2, 0.06763831737575418 AS sv_3, 2.160491963623352 AS sv_4, -0.7064854925173741 AS 
sv_5, 0.6406884279768832 AS sv_6, 2.3359208925537804 AS sv_7, 0.883203392742919 AS sv_8 UNION ALL SELECT 90 AS sv_idx, 0.5060047601073023 AS dual_coeff, 0.2067876422434776 AS sv_0, -0.04407000737000486 AS sv_1, -0.06979989491745779 AS sv_2, 0.06763831737575418 AS sv_3, 1.2572719381611972 AS sv_4, 1.7922293075426368 AS sv_5, -0.1795335704770387 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 91 AS sv_idx, 0.20097556825204219 AS dual_coeff, 1.272226673655204 AS sv_0, 1.2667782969548187 AS sv_1, 0.9396509991646724 AS sv_2, 0.4178544940102145 AS sv_3, 0.35405191269904224 AS sv_4, 1.7922293075426368 AS sv_5, 0.6406884279768832 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 92 AS sv_idx, 0.18647324282896136 AS dual_coeff, -1.2137977329721574 AS sv_0, 0.6113541447924069 AS sv_1, 1.6126182618860927 AS sv_2, 1.1182868472791347 AS sv_3, 0.8056619254301197 AS sv_4, 1.2369593519737454 AS sv_5, 1.460910426430805 AS sv_6, 2.3359208925537804 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 93 AS sv_idx, 0.21667719030721375 AS dual_coeff, 1.9825193612630212 AS sv_0, 0.6113541447924069 AS sv_1, 0.9396509991646724 AS sv_2, 2.519151553816976 AS sv_3, 1.2572719381611972 AS sv_4, 1.7922293075426368 AS sv_5, 1.460910426430805 AS sv_6, 1.353484849295602 AS sv_7, 4.587358421906721 AS sv_8 UNION ALL SELECT 94 AS sv_idx, 0.1665453343418369 AS dual_coeff, 0.2067876422434776 AS sv_0, 1.5944903730360245 AS sv_1, 0.26668373644325233 AS sv_2, 2.519151553816976 AS sv_3, 0.8056619254301197 AS sv_4, 1.2369593519737454 AS sv_5, 2.281132424884727 AS sv_6, 2.3359208925537804 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 95 AS sv_idx, 0.1161860834802585 AS dual_coeff, 1.9825193612630212 AS sv_0, 2.249914525198436 AS sv_1, 2.2855855246075127 AS sv_2, 1.818719200548056 AS sv_3, 1.2572719381611972 AS sv_4, 1.2369593519737454 AS sv_5, 1.460910426430805 AS sv_6, 2.3359208925537804 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 96 AS sv_idx, 0.20081760007538496 AS dual_coeff, 0.9170803298512952 AS sv_0, 0.6113541447924069 AS sv_1, 2.2855855246075127 AS sv_2, 2.519151553816976 AS sv_3, 3.063711989085508 AS sv_4, 1.7922293075426368 AS sv_5, 0.2305774287499223 AS sv_6, 2.3359208925537804 AS sv_7, 0.883203392742919 AS sv_8 UNION ALL SELECT 97 AS sv_idx, -1.0 AS dual_coeff, 1.272226673655204 AS sv_0, 0.283642068711201 AS sv_1, 0.26668373644325233 AS sv_2, 0.7680706706446747 AS sv_3, 0.35405191269904224 AS sv_4, 0.9593243741892996 AS sv_5, 1.460910426430805 AS sv_6, 1.6809635303816615 AS sv_7, 0.2658442212156186 AS sv_8 UNION ALL SELECT 98 AS sv_idx, 0.04747211962070835 AS dual_coeff, 1.6273730174591126 AS sv_0, 1.2667782969548187 AS sv_1, 1.2761346305253827 AS sv_2, 0.7680706706446747 AS sv_3, 0.8056619254301197 AS sv_4, 1.7922293075426368 AS sv_5, 1.460910426430805 AS sv_6, 1.6809635303816615 AS sv_7, 0.883203392742919 AS sv_8 UNION ALL SELECT 99 AS sv_idx, 0.05106624928871686 AS dual_coeff, 1.9825193612630212 AS sv_0, 1.5944903730360245 AS sv_1, 1.6126182618860927 AS sv_2, 0.4178544940102145 AS sv_3, 3.063711989085508 AS sv_4, 1.7922293075426368 AS sv_5, 1.8710214256577664 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 100 AS sv_idx, 0.14406324239745202 AS dual_coeff, 0.2067876422434776 AS sv_0, 2.249914525198436 AS sv_1, 2.2855855246075127 AS sv_2, 2.168935377182516 AS sv_3, 1.2572719381611972 AS sv_4, 1.7922293075426368 AS sv_5, 1.460910426430805 AS sv_6, 2.3359208925537804 AS sv_7, 2.11792173579752 AS sv_8 
UNION ALL SELECT 101 AS sv_idx, 0.18825775828653585 AS dual_coeff, 1.9825193612630212 AS sv_0, 2.249914525198436 AS sv_1, 1.9491018932468027 AS sv_2, 0.06763831737575418 AS sv_3, 1.7088819508922748 AS sv_4, 0.4040544186204084 AS sv_5, -0.1795335704770387 AS sv_6, 0.698527487123483 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 102 AS sv_idx, 0.09720131010385438 AS dual_coeff, 1.272226673655204 AS sv_0, 2.249914525198436 AS sv_1, 2.2855855246075127 AS sv_2, 2.519151553816976 AS sv_3, 0.8056619254301197 AS sv_4, 1.7922293075426368 AS sv_5, 1.8710214256577664 AS sv_6, 2.3359208925537804 AS sv_7, 2.7352809073248205 AS sv_8 UNION ALL SELECT 103 AS sv_idx, 0.04825186876801882 AS dual_coeff, 1.272226673655204 AS sv_0, 2.249914525198436 AS sv_1, 1.6126182618860927 AS sv_2, 1.818719200548056 AS sv_3, 0.35405191269904224 AS sv_4, 1.2369593519737454 AS sv_5, 1.460910426430805 AS sv_6, 1.353484849295602 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 104 AS sv_idx, 0.17666387470301198 AS dual_coeff, 1.9825193612630212 AS sv_0, 2.249914525198436 AS sv_1, 2.2855855246075127 AS sv_2, 2.519151553816976 AS sv_3, 1.7088819508922748 AS sv_4, 1.7922293075426368 AS sv_5, 1.460910426430805 AS sv_6, 2.3359208925537804 AS sv_7, 1.5005625642702196 AS sv_8 UNION ALL SELECT 105 AS sv_idx, 0.19935998027312096 AS dual_coeff, 1.9825193612630212 AS sv_0, 2.249914525198436 AS sv_1, 2.2855855246075127 AS sv_2, 2.519151553816976 AS sv_3, -0.09755810003203527 AS sv_4, 1.7922293075426368 AS sv_5, 2.691243424111688 AS sv_6, 1.0260061682095425 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 106 AS sv_idx, 0.17140735235407586 AS dual_coeff, 1.272226673655204 AS sv_0, 1.2667782969548187 AS sv_1, 1.6126182618860927 AS sv_2, 1.4685030239135957 AS sv_3, 0.8056619254301197 AS sv_4, 0.4040544186204084 AS sv_5, 0.6406884279768832 AS sv_6, 2.3359208925537804 AS sv_7, 0.2658442212156186 AS sv_8 UNION ALL SELECT 107 AS sv_idx, 0.12533862003286572 AS dual_coeff, 0.5619339860473863 AS sv_0, 2.249914525198436 AS sv_1, 1.2761346305253827 AS sv_2, 1.4685030239135957 AS sv_3, 1.2572719381611972 AS sv_4, 0.1264194408359628 AS sv_5, 1.8710214256577664 AS sv_6, 2.3359208925537804 AS sv_7, 0.2658442212156186 AS sv_8 UNION ALL SELECT 108 AS sv_idx, 0.21081701554480867 AS dual_coeff, 1.9825193612630212 AS sv_0, 0.9390662208736128 AS sv_1, 0.26668373644325233 AS sv_2, 0.06763831737575418 AS sv_3, 3.063711989085508 AS sv_4, 1.7922293075426368 AS sv_5, 2.281132424884727 AS sv_6, 2.3359208925537804 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 109 AS sv_idx, 1.0 AS dual_coeff, -0.1483587015604312 AS sv_0, -0.6994941595324167 AS sv_1, -0.7427671576388781 AS sv_2, 0.06763831737575418 AS sv_3, -1.0007781254941903 AS sv_4, 0.4040544186204084 AS sv_5, -0.5896445697039996 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 110 AS sv_idx, 0.2575660462473077 AS dual_coeff, 1.9825193612630212 AS sv_0, 0.6113541447924069 AS sv_1, 0.6031673678039624 AS sv_2, 1.1182868472791347 AS sv_3, -0.09755810003203527 AS sv_4, 1.7922293075426368 AS sv_5, 1.460910426430805 AS sv_6, 2.0084422114677207 AS sv_7, 0.2658442212156186 AS sv_8 UNION ALL SELECT 111 AS sv_idx, 0.07459512469813491 AS dual_coeff, 1.272226673655204 AS sv_0, 1.9222024491172305 AS sv_1, 1.9491018932468027 AS sv_2, 0.7680706706446747 AS sv_3, -0.09755810003203527 AS sv_4, 0.4040544186204084 AS sv_5, 1.460910426430805 AS sv_6, 1.353484849295602 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 112 AS sv_idx, 0.21314648680926607 AS dual_coeff, 
1.9825193612630212 AS sv_0, 2.249914525198436 AS sv_1, 2.2855855246075127 AS sv_2, 0.06763831737575418 AS sv_3, 3.063711989085508 AS sv_4, 1.7922293075426368 AS sv_5, 2.281132424884727 AS sv_6, 2.3359208925537804 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 113 AS sv_idx, 0.2496895630539806 AS dual_coeff, 0.9170803298512952 AS sv_0, 0.283642068711201 AS sv_1, 1.2761346305253827 AS sv_2, 0.4178544940102145 AS sv_3, -0.09755810003203527 AS sv_4, 0.9593243741892996 AS sv_5, 1.460910426430805 AS sv_6, 1.0260061682095425 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 114 AS sv_idx, 0.00364369102509116 AS dual_coeff, 0.5619339860473863 AS sv_0, 1.5944903730360245 AS sv_1, 1.2761346305253827 AS sv_2, 0.7680706706446747 AS sv_3, 1.2572719381611972 AS sv_4, 1.2369593519737454 AS sv_5, 1.8710214256577664 AS sv_6, 2.0084422114677207 AS sv_7, 0.2658442212156186 AS sv_8 UNION ALL SELECT 115 AS sv_idx, -1.0 AS dual_coeff, 1.272226673655204 AS sv_0, 0.283642068711201 AS sv_1, 0.9396509991646724 AS sv_2, 0.06763831737575418 AS sv_3, -0.09755810003203527 AS sv_4, -0.7064854925173741 AS sv_5, 0.2305774287499223 AS sv_6, 0.043570124951364066 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 116 AS sv_idx, 0.032881294526772 AS dual_coeff, 1.9825193612630212 AS sv_0, 0.283642068711201 AS sv_1, 0.6031673678039624 AS sv_2, 0.7680706706446747 AS sv_3, 0.8056619254301197 AS sv_4, 1.7922293075426368 AS sv_5, 0.2305774287499223 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 117 AS sv_idx, -0.6241483617570085 AS dual_coeff, -0.5035050453643399 AS sv_0, -0.04407000737000486 AS sv_1, -0.406283526278168 AS sv_2, -0.6327940358931664 AS sv_3, -0.09755810003203527 AS sv_4, -0.7064854925173741 AS sv_5, -0.1795335704770387 AS sv_6, 1.0260061682095425 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 118 AS sv_idx, -0.2195914957596871 AS dual_coeff, -0.5035050453643399 AS sv_0, -0.6994941595324167 AS sv_1, 0.26668373644325233 AS sv_2, -0.6327940358931664 AS sv_3, -0.5491681127631127 AS sv_4, 0.0 AS sv_5, -0.1795335704770387 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 119 AS sv_idx, 0.1997189240930421 AS dual_coeff, 1.9825193612630212 AS sv_0, 1.5944903730360245 AS sv_1, 1.6126182618860927 AS sv_2, -0.2825778592587061 AS sv_3, 2.160491963623352 AS sv_4, 1.7922293075426368 AS sv_5, 0.2305774287499223 AS sv_6, 1.6809635303816615 AS sv_7, 4.587358421906721 AS sv_8 UNION ALL SELECT 120 AS sv_idx, 0.17192802066638055 AS dual_coeff, 1.6273730174591126 AS sv_0, 1.5944903730360245 AS sv_1, 1.6126182618860927 AS sv_2, 0.7680706706446747 AS sv_3, 1.2572719381611972 AS sv_4, -0.4288505147329285 AS sv_5, 0.2305774287499223 AS sv_6, 2.3359208925537804 AS sv_7, 1.5005625642702196 AS sv_8 UNION ALL SELECT 121 AS sv_idx, 0.20114467120301607 AS dual_coeff, 1.272226673655204 AS sv_0, 2.249914525198436 AS sv_1, 2.2855855246075127 AS sv_2, 1.818719200548056 AS sv_3, 1.2572719381611972 AS sv_4, 1.514594329758191 AS sv_5, -0.1795335704770387 AS sv_6, 2.3359208925537804 AS sv_7, 4.587358421906721 AS sv_8 UNION ALL SELECT 122 AS sv_idx, 0.0446306352473546 AS dual_coeff, 1.9825193612630212 AS sv_0, 0.283642068711201 AS sv_1, -0.06979989491745779 AS sv_2, -0.2825778592587061 AS sv_3, -0.09755810003203527 AS sv_4, 1.7922293075426368 AS sv_5, 0.6406884279768832 AS sv_6, 0.043570124951364066 AS sv_7, 0.2658442212156186 AS sv_8 UNION ALL SELECT 123 AS sv_idx, -0.5290521813434118 AS dual_coeff, 0.2067876422434776 AS sv_0, -0.6994941595324167 AS sv_1, 
-0.06979989491745779 AS sv_2, 0.06763831737575418 AS sv_3, -0.5491681127631127 AS sv_4, -0.4288505147329285 AS sv_5, -0.5896445697039996 AS sv_6, 0.043570124951364066 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 124 AS sv_idx, -0.21080565457589584 AS dual_coeff, -0.5035050453643399 AS sv_0, -0.6994941595324167 AS sv_1, -0.7427671576388781 AS sv_2, 0.06763831737575418 AS sv_3, -1.0007781254941903 AS sv_4, -0.7064854925173741 AS sv_5, -0.1795335704770387 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 125 AS sv_idx, -0.4890366741675763 AS dual_coeff, -1.2137977329721574 AS sv_0, -0.6994941595324167 AS sv_1, -0.7427671576388781 AS sv_2, -0.6327940358931664 AS sv_3, -0.5491681127631127 AS sv_4, 0.4040544186204084 AS sv_5, 0.6406884279768832 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 126 AS sv_idx, -0.7101861769242628 AS dual_coeff, -0.1483587015604312 AS sv_0, -0.6994941595324167 AS sv_1, -0.7427671576388781 AS sv_2, -0.6327940358931664 AS sv_3, -0.5491681127631127 AS sv_4, -0.7064854925173741 AS sv_5, -0.1795335704770387 AS sv_6, 1.0260061682095425 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 127 AS sv_idx, 0.08481274364425619 AS dual_coeff, 1.9825193612630212 AS sv_0, 0.283642068711201 AS sv_1, 0.26668373644325233 AS sv_2, 2.519151553816976 AS sv_3, -0.5491681127631127 AS sv_4, 1.7922293075426368 AS sv_5, 0.6406884279768832 AS sv_6, 0.043570124951364066 AS sv_7, 0.883203392742919 AS sv_8 UNION ALL SELECT 128 AS sv_idx, -1.0 AS dual_coeff, 0.5619339860473863 AS sv_0, -0.04407000737000486 AS sv_1, -0.06979989491745779 AS sv_2, 0.7680706706446747 AS sv_3, -0.09755810003203527 AS sv_4, 1.7922293075426368 AS sv_5, -0.1795335704770387 AS sv_6, 0.698527487123483 AS sv_7, 0.883203392742919 AS sv_8 UNION ALL SELECT 129 AS sv_idx, 0.20022560180605506 AS dual_coeff, 0.5619339860473863 AS sv_0, 2.249914525198436 AS sv_1, 2.2855855246075127 AS sv_2, -0.2825778592587061 AS sv_3, 2.160491963623352 AS sv_4, 1.7922293075426368 AS sv_5, 1.460910426430805 AS sv_6, 0.043570124951364066 AS sv_7, 0.883203392742919 AS sv_8 UNION ALL SELECT 130 AS sv_idx, 0.1864239436980369 AS dual_coeff, 1.6273730174591126 AS sv_0, 2.249914525198436 AS sv_1, 2.2855855246075127 AS sv_2, -0.6327940358931664 AS sv_3, 3.063711989085508 AS sv_4, 1.2369593519737454 AS sv_5, -0.1795335704770387 AS sv_6, 0.043570124951364066 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 131 AS sv_idx, 0.4513966710734025 AS dual_coeff, 0.2067876422434776 AS sv_0, 0.9390662208736128 AS sv_1, 0.9396509991646724 AS sv_2, -0.2825778592587061 AS sv_3, 0.35405191269904224 AS sv_4, 1.7922293075426368 AS sv_5, -0.1795335704770387 AS sv_6, 1.0260061682095425 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 132 AS sv_idx, -1.0 AS dual_coeff, 0.2067876422434776 AS sv_0, 1.2667782969548187 AS sv_1, 1.2761346305253827 AS sv_2, -0.6327940358931664 AS sv_3, 0.8056619254301197 AS sv_4, 1.2369593519737454 AS sv_5, -0.1795335704770387 AS sv_6, 0.3710488060374236 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 133 AS sv_idx, 0.17329575518814305 AS dual_coeff, 1.9825193612630212 AS sv_0, 0.6113541447924069 AS sv_1, 1.6126182618860927 AS sv_2, 2.519151553816976 AS sv_3, -0.09755810003203527 AS sv_4, 1.7922293075426368 AS sv_5, 0.6406884279768832 AS sv_6, -0.6113872372207548 AS sv_7, 0.883203392742919 AS sv_8 UNION ALL SELECT 134 AS sv_idx, 0.21241316295853477 AS dual_coeff, 0.2067876422434776 AS sv_0, 2.249914525198436 AS sv_1, 2.2855855246075127 AS sv_2, 
1.1182868472791347 AS sv_3, 3.063711989085508 AS sv_4, 1.7922293075426368 AS sv_5, 2.691243424111688 AS sv_6, 1.0260061682095425 AS sv_7, 2.11792173579752 AS sv_8 UNION ALL SELECT 135 AS sv_idx, 0.03803982889013226 AS dual_coeff, 1.272226673655204 AS sv_0, 1.5944903730360245 AS sv_1, 1.9491018932468027 AS sv_2, 0.4178544940102145 AS sv_3, 0.8056619254301197 AS sv_4, 1.7922293075426368 AS sv_5, 1.460910426430805 AS sv_6, 1.6809635303816615 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 136 AS sv_idx, 0.11852387286224518 AS dual_coeff, 1.9825193612630212 AS sv_0, 0.283642068711201 AS sv_1, 0.26668373644325233 AS sv_2, 2.519151553816976 AS sv_3, 1.2572719381611972 AS sv_4, 1.7922293075426368 AS sv_5, 0.6406884279768832 AS sv_6, 0.698527487123483 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 137 AS sv_idx, 0.20755961706931028 AS dual_coeff, 0.9170803298512952 AS sv_0, 1.9222024491172305 AS sv_1, 0.26668373644325233 AS sv_2, 2.519151553816976 AS sv_3, 3.063711989085508 AS sv_4, -0.1512155369484829 AS sv_5, 0.6406884279768832 AS sv_6, 0.043570124951364066 AS sv_7, 0.883203392742919 AS sv_8 UNION ALL SELECT 138 AS sv_idx, -0.02373886817444209 AS dual_coeff, 0.2067876422434776 AS sv_0, -0.6994941595324167 AS sv_1, 0.26668373644325233 AS sv_2, -0.6327940358931664 AS sv_3, -0.5491681127631127 AS sv_4, -0.7064854925173741 AS sv_5, -0.1795335704770387 AS sv_6, -0.2839085561346954 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 139 AS sv_idx, 0.17155583485229506 AS dual_coeff, 1.9825193612630212 AS sv_0, 2.249914525198436 AS sv_1, 0.9396509991646724 AS sv_2, 0.06763831737575418 AS sv_3, -0.09755810003203527 AS sv_4, 1.7922293075426368 AS sv_5, 0.2305774287499223 AS sv_6, 0.043570124951364066 AS sv_7, 0.2658442212156186 AS sv_8 UNION ALL SELECT 140 AS sv_idx, 0.2168473577873416 AS dual_coeff, 1.9825193612630212 AS sv_0, 1.5944903730360245 AS sv_1, 1.6126182618860927 AS sv_2, -0.2825778592587061 AS sv_3, -0.09755810003203527 AS sv_4, 0.1264194408359628 AS sv_5, 1.8710214256577664 AS sv_6, 1.353484849295602 AS sv_7, 3.969999250379421 AS sv_8 UNION ALL SELECT 141 AS sv_idx, 0.1120895140021027 AS dual_coeff, 1.272226673655204 AS sv_0, 0.283642068711201 AS sv_1, 1.2761346305253827 AS sv_2, -0.6327940358931664 AS sv_3, -0.09755810003203527 AS sv_4, 1.7922293075426368 AS sv_5, -0.1795335704770387 AS sv_6, 2.0084422114677207 AS sv_7, 0.2658442212156186 AS sv_8 UNION ALL SELECT 142 AS sv_idx, 0.19018128325735612 AS dual_coeff, -0.5035050453643399 AS sv_0, -0.04407000737000486 AS sv_1, 0.6031673678039624 AS sv_2, -0.2825778592587061 AS sv_3, -0.09755810003203527 AS sv_4, 1.7922293075426368 AS sv_5, 1.460910426430805 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 143 AS sv_idx, 1.0 AS dual_coeff, 0.9170803298512952 AS sv_0, -0.3717820834512108 AS sv_1, 0.26668373644325233 AS sv_2, -0.6327940358931664 AS sv_3, -0.09755810003203527 AS sv_4, 0.1264194408359628 AS sv_5, -0.1795335704770387 AS sv_6, 0.043570124951364066 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 144 AS sv_idx, 0.20946476498614247 AS dual_coeff, 1.9825193612630212 AS sv_0, 0.6113541447924069 AS sv_1, 1.2761346305253827 AS sv_2, 0.06763831737575418 AS sv_3, -0.09755810003203527 AS sv_4, 0.9593243741892996 AS sv_5, -0.1795335704770387 AS sv_6, 0.043570124951364066 AS sv_7, 3.969999250379421 AS sv_8 UNION ALL SELECT 145 AS sv_idx, 0.19704019638991008 AS dual_coeff, -1.2137977329721574 AS sv_0, 0.283642068711201 AS sv_1, -0.06979989491745779 AS sv_2, 2.519151553816976 AS sv_3, 
0.35405191269904224 AS sv_4, 1.7922293075426368 AS sv_5, 0.6406884279768832 AS sv_6, 1.0260061682095425 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 146 AS sv_idx, 0.06926204765681505 AS dual_coeff, 1.9825193612630212 AS sv_0, 0.283642068711201 AS sv_1, 0.9396509991646724 AS sv_2, -0.6327940358931664 AS sv_3, -0.5491681127631127 AS sv_4, 1.7922293075426368 AS sv_5, 0.6406884279768832 AS sv_6, 0.043570124951364066 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 147 AS sv_idx, 0.18327900781038522 AS dual_coeff, 0.9170803298512952 AS sv_0, 0.283642068711201 AS sv_1, 0.6031673678039624 AS sv_2, 2.519151553816976 AS sv_3, -0.5491681127631127 AS sv_4, 1.7922293075426368 AS sv_5, -0.1795335704770387 AS sv_6, 1.6809635303816615 AS sv_7, 0.2658442212156186 AS sv_8 UNION ALL SELECT 148 AS sv_idx, 0.15995150465453073 AS dual_coeff, 1.272226673655204 AS sv_0, 2.249914525198436 AS sv_1, 2.2855855246075127 AS sv_2, 2.519151553816976 AS sv_3, 2.160491963623352 AS sv_4, 1.7922293075426368 AS sv_5, 2.691243424111688 AS sv_6, 1.353484849295602 AS sv_7, 0.883203392742919 AS sv_8 UNION ALL SELECT 149 AS sv_idx, 0.18852869573186126 AS dual_coeff, 1.9825193612630212 AS sv_0, 2.249914525198436 AS sv_1, 2.2855855246075127 AS sv_2, 2.519151553816976 AS sv_3, 3.063711989085508 AS sv_4, 1.7922293075426368 AS sv_5, 0.2305774287499223 AS sv_6, 2.3359208925537804 AS sv_7, 4.587358421906721 AS sv_8 UNION ALL SELECT 150 AS sv_idx, 0.303664044358994 AS dual_coeff, 0.5619339860473863 AS sv_0, -0.6994941595324167 AS sv_1, -0.06979989491745779 AS sv_2, -0.6327940358931664 AS sv_3, 0.35405191269904224 AS sv_4, 0.4040544186204084 AS sv_5, 0.6406884279768832 AS sv_6, 2.3359208925537804 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 151 AS sv_idx, 0.121649333891421 AS dual_coeff, 0.2067876422434776 AS sv_0, 0.9390662208736128 AS sv_1, 0.9396509991646724 AS sv_2, 1.818719200548056 AS sv_3, 1.2572719381611972 AS sv_4, 1.7922293075426368 AS sv_5, 0.2305774287499223 AS sv_6, 2.3359208925537804 AS sv_7, 1.5005625642702196 AS sv_8 UNION ALL SELECT 152 AS sv_idx, 0.1863532540677928 AS dual_coeff, 1.272226673655204 AS sv_0, 1.5944903730360245 AS sv_1, 1.6126182618860927 AS sv_2, -0.6327940358931664 AS sv_3, -0.5491681127631127 AS sv_4, 0.0 AS sv_5, 1.050799427203844 AS sv_6, 2.3359208925537804 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 153 AS sv_idx, 0.1438598558618528 AS dual_coeff, 1.9825193612630212 AS sv_0, 0.283642068711201 AS sv_1, 0.26668373644325233 AS sv_2, 1.1182868472791347 AS sv_3, -0.5491681127631127 AS sv_4, 1.7922293075426368 AS sv_5, -0.5896445697039996 AS sv_6, 0.043570124951364066 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 154 AS sv_idx, 0.13651815719923655 AS dual_coeff, 0.2067876422434776 AS sv_0, 0.6113541447924069 AS sv_1, 1.2761346305253827 AS sv_2, 1.818719200548056 AS sv_3, 1.2572719381611972 AS sv_4, 1.7922293075426368 AS sv_5, 1.460910426430805 AS sv_6, 0.3710488060374236 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 155 AS sv_idx, -1.0 AS dual_coeff, 0.2067876422434776 AS sv_0, -0.04407000737000486 AS sv_1, 0.26668373644325233 AS sv_2, 0.06763831737575418 AS sv_3, 0.35405191269904224 AS sv_4, 0.4040544186204084 AS sv_5, 0.2305774287499223 AS sv_6, 1.353484849295602 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 156 AS sv_idx, -1.0 AS dual_coeff, 0.2067876422434776 AS sv_0, 0.283642068711201 AS sv_1, -0.06979989491745779 AS sv_2, -0.6327940358931664 AS sv_3, -0.5491681127631127 AS sv_4, 0.0 AS sv_5, -0.5896445697039996 AS sv_6, 0.043570124951364066 AS 
sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 157 AS sv_idx, -1.0 AS dual_coeff, 1.272226673655204 AS sv_0, -0.3717820834512108 AS sv_1, -0.7427671576388781 AS sv_2, -0.6327940358931664 AS sv_3, 0.8056619254301197 AS sv_4, -0.7064854925173741 AS sv_5, -0.9997555689309604 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 158 AS sv_idx, 0.3040456888815837 AS dual_coeff, 1.6273730174591126 AS sv_0, -0.6994941595324167 AS sv_1, -0.406283526278168 AS sv_2, 1.1182868472791347 AS sv_3, 0.35405191269904224 AS sv_4, 1.7922293075426368 AS sv_5, 1.460910426430805 AS sv_6, 1.353484849295602 AS sv_7, 0.2658442212156186 AS sv_8 UNION ALL SELECT 159 AS sv_idx, 0.1591427474779885 AS dual_coeff, 1.272226673655204 AS sv_0, 0.283642068711201 AS sv_1, 2.2855855246075127 AS sv_2, 0.7680706706446747 AS sv_3, 0.35405191269904224 AS sv_4, 0.1264194408359628 AS sv_5, 1.460910426430805 AS sv_6, 2.3359208925537804 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 160 AS sv_idx, 0.16745883218184024 AS dual_coeff, 1.9825193612630212 AS sv_0, 2.249914525198436 AS sv_1, 2.2855855246075127 AS sv_2, 1.4685030239135957 AS sv_3, 2.6121019763544298 AS sv_4, 1.7922293075426368 AS sv_5, 1.460910426430805 AS sv_6, 2.3359208925537804 AS sv_7, 4.587358421906721 AS sv_8 UNION ALL SELECT 161 AS sv_idx, 0.09994437701168508 AS dual_coeff, 1.272226673655204 AS sv_0, -0.04407000737000486 AS sv_1, 0.26668373644325233 AS sv_2, 2.168935377182516 AS sv_3, -0.09755810003203527 AS sv_4, 1.7922293075426368 AS sv_5, -0.1795335704770387 AS sv_6, 0.043570124951364066 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 162 AS sv_idx, 0.19777501189162208 AS dual_coeff, 1.9825193612630212 AS sv_0, 1.5944903730360245 AS sv_1, 0.26668373644325233 AS sv_2, 0.4178544940102145 AS sv_3, 0.35405191269904224 AS sv_4, 1.7922293075426368 AS sv_5, -0.1795335704770387 AS sv_6, 2.3359208925537804 AS sv_7, 1.5005625642702196 AS sv_8 UNION ALL SELECT 163 AS sv_idx, 0.1711704195910262 AS dual_coeff, 0.9170803298512952 AS sv_0, 1.5944903730360245 AS sv_1, 1.2761346305253827 AS sv_2, 1.1182868472791347 AS sv_3, 0.35405191269904224 AS sv_4, -0.1512155369484829 AS sv_5, 1.8710214256577664 AS sv_6, 1.6809635303816615 AS sv_7, 1.5005625642702196 AS sv_8 UNION ALL SELECT 164 AS sv_idx, -0.6628095033461364 AS dual_coeff, -0.5035050453643399 AS sv_0, -0.6994941595324167 AS sv_1, -0.7427671576388781 AS sv_2, -0.6327940358931664 AS sv_3, -0.5491681127631127 AS sv_4, 0.4040544186204084 AS sv_5, 0.6406884279768832 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 165 AS sv_idx, 0.20893125340130916 AS dual_coeff, 1.272226673655204 AS sv_0, 0.9390662208736128 AS sv_1, 0.26668373644325233 AS sv_2, 2.519151553816976 AS sv_3, 3.063711989085508 AS sv_4, -0.7064854925173741 AS sv_5, -0.1795335704770387 AS sv_6, 0.698527487123483 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 166 AS sv_idx, -1.0 AS dual_coeff, -0.1483587015604312 AS sv_0, 0.9390662208736128 AS sv_1, 0.6031673678039624 AS sv_2, 1.1182868472791347 AS sv_3, 1.7088819508922748 AS sv_4, 0.0 AS sv_5, 0.2305774287499223 AS sv_6, 2.0084422114677207 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 167 AS sv_idx, 0.20065257235452064 AS dual_coeff, 0.2067876422434776 AS sv_0, 0.6113541447924069 AS sv_1, 0.6031673678039624 AS sv_2, -0.2825778592587061 AS sv_3, 0.8056619254301197 AS sv_4, 1.7922293075426368 AS sv_5, 0.2305774287499223 AS sv_6, 0.043570124951364066 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 168 AS sv_idx, 
0.1129556472726893 AS dual_coeff, 0.5619339860473863 AS sv_0, 1.5944903730360245 AS sv_1, 1.2761346305253827 AS sv_2, 1.818719200548056 AS sv_3, 1.2572719381611972 AS sv_4, 1.2369593519737454 AS sv_5, 1.8710214256577664 AS sv_6, 2.0084422114677207 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 169 AS sv_idx, -0.95618127345414 AS dual_coeff, -1.2137977329721574 AS sv_0, -0.6994941595324167 AS sv_1, -0.7427671576388781 AS sv_2, -0.6327940358931664 AS sv_3, 0.8056619254301197 AS sv_4, -0.7064854925173741 AS sv_5, -0.1795335704770387 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 170 AS sv_idx, -1.0 AS dual_coeff, -0.1483587015604312 AS sv_0, 0.283642068711201 AS sv_1, 0.26668373644325233 AS sv_2, 0.4178544940102145 AS sv_3, 1.2572719381611972 AS sv_4, 0.4040544186204084 AS sv_5, 1.460910426430805 AS sv_6, 0.043570124951364066 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 171 AS sv_idx, 0.2081636710528461 AS dual_coeff, 0.9170803298512952 AS sv_0, 0.9390662208736128 AS sv_1, -0.06979989491745779 AS sv_2, -0.2825778592587061 AS sv_3, 0.8056619254301197 AS sv_4, 1.7922293075426368 AS sv_5, 1.460910426430805 AS sv_6, 0.3710488060374236 AS sv_7, 2.7352809073248205 AS sv_8 UNION ALL SELECT 172 AS sv_idx, 0.15131461532831564 AS dual_coeff, 0.2067876422434776 AS sv_0, 0.283642068711201 AS sv_1, 0.9396509991646724 AS sv_2, 2.519151553816976 AS sv_3, -0.5491681127631127 AS sv_4, 1.7922293075426368 AS sv_5, 0.2305774287499223 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 173 AS sv_idx, 0.1583876298149427 AS dual_coeff, 1.9825193612630212 AS sv_0, -0.6994941595324167 AS sv_1, -0.7427671576388781 AS sv_2, -0.6327940358931664 AS sv_3, -0.5491681127631127 AS sv_4, 1.7922293075426368 AS sv_5, 0.6406884279768832 AS sv_6, 0.3710488060374236 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 174 AS sv_idx, 0.17595737374396195 AS dual_coeff, 1.272226673655204 AS sv_0, 2.249914525198436 AS sv_1, -0.06979989491745779 AS sv_2, -0.2825778592587061 AS sv_3, 1.2572719381611972 AS sv_4, 0.1264194408359628 AS sv_5, -0.1795335704770387 AS sv_6, 2.3359208925537804 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 175 AS sv_idx, 0.10348751888329075 AS dual_coeff, 1.9825193612630212 AS sv_0, 0.283642068711201 AS sv_1, 0.9396509991646724 AS sv_2, 0.4178544940102145 AS sv_3, 0.8056619254301197 AS sv_4, 1.7922293075426368 AS sv_5, 1.460910426430805 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 176 AS sv_idx, 0.047913886740907456 AS dual_coeff, 1.9825193612630212 AS sv_0, 0.283642068711201 AS sv_1, 1.2761346305253827 AS sv_2, -0.2825778592587061 AS sv_3, -0.5491681127631127 AS sv_4, 1.2369593519737454 AS sv_5, 1.050799427203844 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 177 AS sv_idx, -0.2142614078107596 AS dual_coeff, 0.2067876422434776 AS sv_0, -0.6994941595324167 AS sv_1, -0.7427671576388781 AS sv_2, -0.6327940358931664 AS sv_3, -0.5491681127631127 AS sv_4, -0.7064854925173741 AS sv_5, -0.1795335704770387 AS sv_6, -0.6113872372207548 AS sv_7, 0.2658442212156186 AS sv_8 UNION ALL SELECT 178 AS sv_idx, 0.05056233171945125 AS dual_coeff, 0.2067876422434776 AS sv_0, 0.283642068711201 AS sv_1, 0.9396509991646724 AS sv_2, 1.1182868472791347 AS sv_3, 0.35405191269904224 AS sv_4, 1.7922293075426368 AS sv_5, 0.2305774287499223 AS sv_6, 0.043570124951364066 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 179 AS sv_idx, 0.0770074408285763 AS dual_coeff, 
1.272226673655204 AS sv_0, 0.9390662208736128 AS sv_1, 1.2761346305253827 AS sv_2, 0.06763831737575418 AS sv_3, -0.09755810003203527 AS sv_4, 1.7922293075426368 AS sv_5, -0.1795335704770387 AS sv_6, 0.3710488060374236 AS sv_7, 0.2658442212156186 AS sv_8 UNION ALL SELECT 180 AS sv_idx, 0.13177465967084032 AS dual_coeff, 0.5619339860473863 AS sv_0, 0.6113541447924069 AS sv_1, 0.6031673678039624 AS sv_2, 1.818719200548056 AS sv_3, 0.35405191269904224 AS sv_4, 1.7922293075426368 AS sv_5, -0.1795335704770387 AS sv_6, 0.3710488060374236 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 181 AS sv_idx, 0.13398226633926366 AS dual_coeff, 1.9825193612630212 AS sv_0, -0.04407000737000486 AS sv_1, -0.06979989491745779 AS sv_2, -0.6327940358931664 AS sv_3, -0.5491681127631127 AS sv_4, 1.7922293075426368 AS sv_5, 1.460910426430805 AS sv_6, 1.0260061682095425 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 182 AS sv_idx, 0.20891121257067408 AS dual_coeff, 0.9170803298512952 AS sv_0, 0.9390662208736128 AS sv_1, 0.26668373644325233 AS sv_2, 1.818719200548056 AS sv_3, 3.063711989085508 AS sv_4, 1.7922293075426368 AS sv_5, 2.281132424884727 AS sv_6, 0.698527487123483 AS sv_7, 0.883203392742919 AS sv_8 UNION ALL SELECT 183 AS sv_idx, -0.6770014606869024 AS dual_coeff, -1.2137977329721574 AS sv_0, -0.6994941595324167 AS sv_1, -0.7427671576388781 AS sv_2, -0.6327940358931664 AS sv_3, -1.0007781254941903 AS sv_4, -0.7064854925173741 AS sv_5, -0.9997555689309604 AS sv_6, 0.043570124951364066 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 184 AS sv_idx, 0.25688422021523516 AS dual_coeff, -0.5035050453643399 AS sv_0, 0.283642068711201 AS sv_1, 0.26668373644325233 AS sv_2, 2.519151553816976 AS sv_3, 0.8056619254301197 AS sv_4, -0.7064854925173741 AS sv_5, -0.1795335704770387 AS sv_6, 0.043570124951364066 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 185 AS sv_idx, 0.2515782036710117 AS dual_coeff, -0.1483587015604312 AS sv_0, -0.3717820834512108 AS sv_1, -0.06979989491745779 AS sv_2, 0.7680706706446747 AS sv_3, -0.09755810003203527 AS sv_4, 1.2369593519737454 AS sv_5, 1.460910426430805 AS sv_6, 1.0260061682095425 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 186 AS sv_idx, -1.0 AS dual_coeff, -0.5035050453643399 AS sv_0, 0.283642068711201 AS sv_1, 0.6031673678039624 AS sv_2, 0.06763831737575418 AS sv_3, 1.7088819508922748 AS sv_4, -0.1512155369484829 AS sv_5, 0.2305774287499223 AS sv_6, 1.0260061682095425 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 187 AS sv_idx, 0.18283101523096934 AS dual_coeff, -0.8586513891682487 AS sv_0, 1.2667782969548187 AS sv_1, 2.2855855246075127 AS sv_2, 2.519151553816976 AS sv_3, 1.7088819508922748 AS sv_4, 1.7922293075426368 AS sv_5, 0.2305774287499223 AS sv_6, 2.0084422114677207 AS sv_7, 1.5005625642702196 AS sv_8 UNION ALL SELECT 188 AS sv_idx, 1.0 AS dual_coeff, 0.2067876422434776 AS sv_0, -0.04407000737000486 AS sv_1, -0.06979989491745779 AS sv_2, -0.6327940358931664 AS sv_3, -0.09755810003203527 AS sv_4, -0.1512155369484829 AS sv_5, -0.1795335704770387 AS sv_6, 0.043570124951364066 AS sv_7, 0.883203392742919 AS sv_8 UNION ALL SELECT 189 AS sv_idx, 0.21264107646538816 AS dual_coeff, 1.272226673655204 AS sv_0, 2.249914525198436 AS sv_1, 2.2855855246075127 AS sv_2, 1.4685030239135957 AS sv_3, 3.063711989085508 AS sv_4, 1.7922293075426368 AS sv_5, 1.460910426430805 AS sv_6, 0.043570124951364066 AS sv_7, 3.969999250379421 AS sv_8 UNION ALL SELECT 190 AS sv_idx, 0.17918844664176584 AS dual_coeff, 1.272226673655204 AS sv_0, 2.249914525198436 AS 
sv_1, 0.6031673678039624 AS sv_2, 0.06763831737575418 AS sv_3, 2.160491963623352 AS sv_4, 0.1264194408359628 AS sv_5, 0.2305774287499223 AS sv_6, 2.3359208925537804 AS sv_7, 0.883203392742919 AS sv_8 UNION ALL SELECT 191 AS sv_idx, 0.16749338962088056 AS dual_coeff, 1.9825193612630212 AS sv_0, -0.04407000737000486 AS sv_1, 0.6031673678039624 AS sv_2, 0.4178544940102145 AS sv_3, -0.09755810003203527 AS sv_4, 0.9593243741892996 AS sv_5, -0.1795335704770387 AS sv_6, 0.698527487123483 AS sv_7, 0.883203392742919 AS sv_8 UNION ALL SELECT 192 AS sv_idx, 0.1604720021832434 AS dual_coeff, 0.5619339860473863 AS sv_0, 2.249914525198436 AS sv_1, 2.2855855246075127 AS sv_2, 2.519151553816976 AS sv_3, 3.063711989085508 AS sv_4, 1.7922293075426368 AS sv_5, 1.8710214256577664 AS sv_6, 2.3359208925537804 AS sv_7, 4.587358421906721 AS sv_8 UNION ALL SELECT 193 AS sv_idx, 0.21788545092835118 AS dual_coeff, -0.5035050453643399 AS sv_0, 2.249914525198436 AS sv_1, -0.06979989491745779 AS sv_2, 2.519151553816976 AS sv_3, 1.2572719381611972 AS sv_4, 1.7922293075426368 AS sv_5, 0.6406884279768832 AS sv_6, -0.6113872372207548 AS sv_7, 1.5005625642702196 AS sv_8 UNION ALL SELECT 194 AS sv_idx, 0.10072660695857936 AS dual_coeff, 0.5619339860473863 AS sv_0, 2.249914525198436 AS sv_1, 2.2855855246075127 AS sv_2, 2.519151553816976 AS sv_3, 2.160491963623352 AS sv_4, 1.7922293075426368 AS sv_5, 1.460910426430805 AS sv_6, 2.3359208925537804 AS sv_7, 3.352640078852121 AS sv_8 UNION ALL SELECT 195 AS sv_idx, 0.0665898597988706 AS dual_coeff, 0.2067876422434776 AS sv_0, 1.5944903730360245 AS sv_1, 1.6126182618860927 AS sv_2, 2.519151553816976 AS sv_3, 0.8056619254301197 AS sv_4, 1.7922293075426368 AS sv_5, 1.8710214256577664 AS sv_6, 2.3359208925537804 AS sv_7, 0.883203392742919 AS sv_8 UNION ALL SELECT 196 AS sv_idx, -0.9003605643701064 AS dual_coeff, 0.2067876422434776 AS sv_0, -0.04407000737000486 AS sv_1, 0.26668373644325233 AS sv_2, -0.6327940358931664 AS sv_3, 0.35405191269904224 AS sv_4, -0.7064854925173741 AS sv_5, -0.1795335704770387 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 197 AS sv_idx, 0.1614190045033507 AS dual_coeff, 1.9825193612630212 AS sv_0, 0.9390662208736128 AS sv_1, -0.06979989491745779 AS sv_2, 1.1182868472791347 AS sv_3, 0.35405191269904224 AS sv_4, 1.7922293075426368 AS sv_5, 1.460910426430805 AS sv_6, 1.6809635303816615 AS sv_7, 1.5005625642702196 AS sv_8 UNION ALL SELECT 198 AS sv_idx, -1.0 AS dual_coeff, -0.5035050453643399 AS sv_0, -0.04407000737000486 AS sv_1, -0.406283526278168 AS sv_2, -0.2825778592587061 AS sv_3, -0.09755810003203527 AS sv_4, -0.7064854925173741 AS sv_5, -0.9997555689309604 AS sv_6, -0.2839085561346954 AS sv_7, 0.883203392742919 AS sv_8 UNION ALL SELECT 199 AS sv_idx, 0.04545278932878311 AS dual_coeff, 0.9170803298512952 AS sv_0, 0.9390662208736128 AS sv_1, 0.9396509991646724 AS sv_2, 0.06763831737575418 AS sv_3, -0.5491681127631127 AS sv_4, 1.7922293075426368 AS sv_5, 1.460910426430805 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 200 AS sv_idx, 0.05773133657201469 AS dual_coeff, 1.9825193612630212 AS sv_0, 1.5944903730360245 AS sv_1, 1.2761346305253827 AS sv_2, 0.4178544940102145 AS sv_3, -0.09755810003203527 AS sv_4, 1.7922293075426368 AS sv_5, 1.460910426430805 AS sv_6, 2.0084422114677207 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 201 AS sv_idx, 0.2178382703524615 AS dual_coeff, -0.5035050453643399 AS sv_0, 2.249914525198436 AS sv_1, 1.6126182618860927 AS sv_2, 1.4685030239135957 AS 
sv_3, 1.2572719381611972 AS sv_4, 1.514594329758191 AS sv_5, 2.281132424884727 AS sv_6, 0.043570124951364066 AS sv_7, 3.969999250379421 AS sv_8 UNION ALL SELECT 202 AS sv_idx, 0.1704423662847251 AS dual_coeff, 1.9825193612630212 AS sv_0, 2.249914525198436 AS sv_1, 2.2855855246075127 AS sv_2, 1.1182868472791347 AS sv_3, 2.160491963623352 AS sv_4, 0.1264194408359628 AS sv_5, 1.8710214256577664 AS sv_6, 0.698527487123483 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 203 AS sv_idx, -1.0 AS dual_coeff, -0.5035050453643399 AS sv_0, -0.04407000737000486 AS sv_1, -0.406283526278168 AS sv_2, 1.1182868472791347 AS sv_3, -0.09755810003203527 AS sv_4, -0.1512155369484829 AS sv_5, -0.1795335704770387 AS sv_6, 0.698527487123483 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 204 AS sv_idx, 0.1254427704389784 AS dual_coeff, 1.272226673655204 AS sv_0, 1.2667782969548187 AS sv_1, 1.6126182618860927 AS sv_2, 0.7680706706446747 AS sv_3, 3.063711989085508 AS sv_4, 1.7922293075426368 AS sv_5, 1.460910426430805 AS sv_6, -0.2839085561346954 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 205 AS sv_idx, -0.016001687081606018 AS dual_coeff, 0.2067876422434776 AS sv_0, -0.3717820834512108 AS sv_1, -0.406283526278168 AS sv_2, -0.2825778592587061 AS sv_3, -0.5491681127631127 AS sv_4, -0.4288505147329285 AS sv_5, -0.1795335704770387 AS sv_6, -0.2839085561346954 AS sv_7, 0.2658442212156186 AS sv_8 UNION ALL SELECT 206 AS sv_idx, -0.8259672008754798 AS dual_coeff, -0.8586513891682487 AS sv_0, -0.04407000737000486 AS sv_1, -0.7427671576388781 AS sv_2, -0.6327940358931664 AS sv_3, 0.8056619254301197 AS sv_4, -0.7064854925173741 AS sv_5, -0.9997555689309604 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 207 AS sv_idx, 0.1558456037984591 AS dual_coeff, 1.9825193612630212 AS sv_0, 2.249914525198436 AS sv_1, 2.2855855246075127 AS sv_2, 1.4685030239135957 AS sv_3, 3.063711989085508 AS sv_4, 1.7922293075426368 AS sv_5, 1.8710214256577664 AS sv_6, -0.2839085561346954 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 208 AS sv_idx, 0.19263099438707162 AS dual_coeff, 1.6273730174591126 AS sv_0, 2.249914525198436 AS sv_1, 2.2855855246075127 AS sv_2, 2.519151553816976 AS sv_3, 3.063711989085508 AS sv_4, 1.7922293075426368 AS sv_5, 2.691243424111688 AS sv_6, 2.3359208925537804 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 209 AS sv_idx, -1.0 AS dual_coeff, 0.2067876422434776 AS sv_0, -0.04407000737000486 AS sv_1, 0.9396509991646724 AS sv_2, -0.6327940358931664 AS sv_3, -0.5491681127631127 AS sv_4, -0.7064854925173741 AS sv_5, -0.9997555689309604 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 210 AS sv_idx, 0.141662812875186 AS dual_coeff, 1.272226673655204 AS sv_0, 1.2667782969548187 AS sv_1, 1.6126182618860927 AS sv_2, -0.2825778592587061 AS sv_3, 0.35405191269904224 AS sv_4, -0.4288505147329285 AS sv_5, 0.6406884279768832 AS sv_6, 2.3359208925537804 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 211 AS sv_idx, -0.3601880046518836 AS dual_coeff, 0.2067876422434776 AS sv_0, -0.6994941595324167 AS sv_1, -0.7427671576388781 AS sv_2, 0.06763831737575418 AS sv_3, 0.35405191269904224 AS sv_4, -0.7064854925173741 AS sv_5, -0.1795335704770387 AS sv_6, -0.2839085561346954 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 212 AS sv_idx, -1.0 AS dual_coeff, 0.5619339860473863 AS sv_0, 1.9222024491172305 AS sv_1, 1.2761346305253827 AS sv_2, 0.7680706706446747 AS sv_3, 0.8056619254301197 AS sv_4, 1.2369593519737454 AS sv_5, 
0.2305774287499223 AS sv_6, -0.2839085561346954 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 213 AS sv_idx, 0.1898886516115101 AS dual_coeff, 1.9825193612630212 AS sv_0, 1.5944903730360245 AS sv_1, 2.2855855246075127 AS sv_2, -0.6327940358931664 AS sv_3, -0.09755810003203527 AS sv_4, 1.7922293075426368 AS sv_5, 0.6406884279768832 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 214 AS sv_idx, 0.19853355902425882 AS dual_coeff, 1.9825193612630212 AS sv_0, 2.249914525198436 AS sv_1, 2.2855855246075127 AS sv_2, -0.6327940358931664 AS sv_3, 1.2572719381611972 AS sv_4, -0.7064854925173741 AS sv_5, -0.5896445697039996 AS sv_6, 1.6809635303816615 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 215 AS sv_idx, 0.19026232732688358 AS dual_coeff, 1.9825193612630212 AS sv_0, 0.283642068711201 AS sv_1, -0.06979989491745779 AS sv_2, 2.519151553816976 AS sv_3, 0.35405191269904224 AS sv_4, 1.7922293075426368 AS sv_5, 2.691243424111688 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 216 AS sv_idx, -0.9902831851965066 AS dual_coeff, 0.2067876422434776 AS sv_0, -0.3717820834512108 AS sv_1, -0.406283526278168 AS sv_2, 0.4178544940102145 AS sv_3, -0.5491681127631127 AS sv_4, 0.1264194408359628 AS sv_5, -0.9997555689309604 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 217 AS sv_idx, -0.4798619549907374 AS dual_coeff, 0.2067876422434776 AS sv_0, -0.6994941595324167 AS sv_1, -0.7427671576388781 AS sv_2, 1.1182868472791347 AS sv_3, -0.09755810003203527 AS sv_4, -0.7064854925173741 AS sv_5, -0.5896445697039996 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 218 AS sv_idx, 0.006602974742738277 AS dual_coeff, 0.2067876422434776 AS sv_0, 1.2667782969548187 AS sv_1, 1.9491018932468027 AS sv_2, 1.818719200548056 AS sv_3, 1.2572719381611972 AS sv_4, 1.7922293075426368 AS sv_5, 1.8710214256577664 AS sv_6, 2.3359208925537804 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 219 AS sv_idx, 0.1633290679788855 AS dual_coeff, -0.1483587015604312 AS sv_0, 0.6113541447924069 AS sv_1, 0.6031673678039624 AS sv_2, 1.818719200548056 AS sv_3, 1.2572719381611972 AS sv_4, 1.7922293075426368 AS sv_5, 2.691243424111688 AS sv_6, 1.353484849295602 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 220 AS sv_idx, 0.3013914180884727 AS dual_coeff, 1.9825193612630212 AS sv_0, -0.3717820834512108 AS sv_1, -0.406283526278168 AS sv_2, -0.6327940358931664 AS sv_3, -0.5491681127631127 AS sv_4, 0.681689396404854 AS sv_5, -0.9997555689309604 AS sv_6, -0.6113872372207548 AS sv_7, 0.2658442212156186 AS sv_8 UNION ALL SELECT 221 AS sv_idx, 0.15514172448372424 AS dual_coeff, 1.9825193612630212 AS sv_0, 0.9390662208736128 AS sv_1, 0.6031673678039624 AS sv_2, 1.818719200548056 AS sv_3, 0.8056619254301197 AS sv_4, 1.7922293075426368 AS sv_5, 1.8710214256577664 AS sv_6, 1.0260061682095425 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 222 AS sv_idx, 0.12655774708380368 AS dual_coeff, 1.272226673655204 AS sv_0, 1.5944903730360245 AS sv_1, 1.9491018932468027 AS sv_2, 1.1182868472791347 AS sv_3, 1.2572719381611972 AS sv_4, -0.1512155369484829 AS sv_5, 2.691243424111688 AS sv_6, 2.3359208925537804 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 223 AS sv_idx, -0.1367436253783793 AS dual_coeff, -0.5035050453643399 AS sv_0, -0.6994941595324167 AS sv_1, -0.7427671576388781 AS sv_2, -0.6327940358931664 AS sv_3, -0.5491681127631127 AS sv_4, 0.4040544186204084 AS sv_5, -0.9997555689309604 AS 
sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 224 AS sv_idx, 0.06979049479367182 AS dual_coeff, 1.9825193612630212 AS sv_0, 1.9222024491172305 AS sv_1, 1.6126182618860927 AS sv_2, 1.4685030239135957 AS sv_3, 1.2572719381611972 AS sv_4, 0.1264194408359628 AS sv_5, 1.460910426430805 AS sv_6, 2.3359208925537804 AS sv_7, 0.883203392742919 AS sv_8 UNION ALL SELECT 225 AS sv_idx, 0.15057088699107232 AS dual_coeff, 1.9825193612630212 AS sv_0, 0.9390662208736128 AS sv_1, 0.9396509991646724 AS sv_2, -0.2825778592587061 AS sv_3, 0.35405191269904224 AS sv_4, 1.7922293075426368 AS sv_5, 2.281132424884727 AS sv_6, 1.353484849295602 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 226 AS sv_idx, -0.7623492229311558 AS dual_coeff, 0.5619339860473863 AS sv_0, -0.6994941595324167 AS sv_1, -0.7427671576388781 AS sv_2, -0.6327940358931664 AS sv_3, -1.0007781254941903 AS sv_4, -0.7064854925173741 AS sv_5, -0.9997555689309604 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 227 AS sv_idx, 0.08361668944244867 AS dual_coeff, -0.1483587015604312 AS sv_0, 1.5944903730360245 AS sv_1, 1.2761346305253827 AS sv_2, 2.519151553816976 AS sv_3, 0.35405191269904224 AS sv_4, 1.7922293075426368 AS sv_5, 1.460910426430805 AS sv_6, 0.698527487123483 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 228 AS sv_idx, 0.15861695042931107 AS dual_coeff, 1.6273730174591126 AS sv_0, 2.249914525198436 AS sv_1, 2.2855855246075127 AS sv_2, 2.519151553816976 AS sv_3, 3.063711989085508 AS sv_4, 0.4040544186204084 AS sv_5, 2.691243424111688 AS sv_6, 2.3359208925537804 AS sv_7, 4.587358421906721 AS sv_8 UNION ALL SELECT 229 AS sv_idx, 0.05707962783449137 AS dual_coeff, 1.272226673655204 AS sv_0, 1.2667782969548187 AS sv_1, 1.6126182618860927 AS sv_2, 0.7680706706446747 AS sv_3, 0.8056619254301197 AS sv_4, 1.7922293075426368 AS sv_5, 2.281132424884727 AS sv_6, 2.3359208925537804 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 230 AS sv_idx, -0.7776758032725417 AS dual_coeff, -1.2137977329721574 AS sv_0, -0.6994941595324167 AS sv_1, -0.7427671576388781 AS sv_2, 0.06763831737575418 AS sv_3, -1.0007781254941903 AS sv_4, -0.1512155369484829 AS sv_5, -0.9997555689309604 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 231 AS sv_idx, 0.21722042436651076 AS dual_coeff, 1.9825193612630212 AS sv_0, 2.249914525198436 AS sv_1, 2.2855855246075127 AS sv_2, 2.519151553816976 AS sv_3, 1.2572719381611972 AS sv_4, 1.7922293075426368 AS sv_5, 1.8710214256577664 AS sv_6, -0.6113872372207548 AS sv_7, 2.11792173579752 AS sv_8 UNION ALL SELECT 232 AS sv_idx, 0.22011629147631376 AS dual_coeff, -0.5035050453643399 AS sv_0, 0.9390662208736128 AS sv_1, 0.26668373644325233 AS sv_2, 2.519151553816976 AS sv_3, -0.09755810003203527 AS sv_4, -0.1512155369484829 AS sv_5, -0.1795335704770387 AS sv_6, 0.3710488060374236 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 233 AS sv_idx, 1.0 AS dual_coeff, 0.5619339860473863 AS sv_0, -0.04407000737000486 AS sv_1, -0.406283526278168 AS sv_2, -0.6327940358931664 AS sv_3, -0.09755810003203527 AS sv_4, 0.1264194408359628 AS sv_5, 0.2305774287499223 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 234 AS sv_idx, 0.12973879180285 AS dual_coeff, 0.2067876422434776 AS sv_0, 1.5944903730360245 AS sv_1, 1.9491018932468027 AS sv_2, 0.4178544940102145 AS sv_3, -0.09755810003203527 AS sv_4, 1.7922293075426368 AS sv_5, 1.460910426430805 AS sv_6, -0.6113872372207548 AS sv_7, 
-0.3515149503116818 AS sv_8 UNION ALL SELECT 235 AS sv_idx, 0.10008848711195924 AS dual_coeff, 0.2067876422434776 AS sv_0, 2.249914525198436 AS sv_1, 2.2855855246075127 AS sv_2, 2.519151553816976 AS sv_3, 1.2572719381611972 AS sv_4, 1.7922293075426368 AS sv_5, 1.050799427203844 AS sv_6, 0.698527487123483 AS sv_7, 0.2658442212156186 AS sv_8 UNION ALL SELECT 236 AS sv_idx, -1.0 AS dual_coeff, 0.2067876422434776 AS sv_0, -0.6994941595324167 AS sv_1, -0.406283526278168 AS sv_2, 2.519151553816976 AS sv_3, 0.35405191269904224 AS sv_4, 0.4040544186204084 AS sv_5, -0.5896445697039996 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 237 AS sv_idx, 0.2031443416888272 AS dual_coeff, 1.272226673655204 AS sv_0, 2.249914525198436 AS sv_1, 2.2855855246075127 AS sv_2, 2.519151553816976 AS sv_3, 1.7088819508922748 AS sv_4, 0.4040544186204084 AS sv_5, 0.2305774287499223 AS sv_6, 1.6809635303816615 AS sv_7, 3.352640078852121 AS sv_8 UNION ALL SELECT 238 AS sv_idx, 0.11848357582222477 AS dual_coeff, 0.5619339860473863 AS sv_0, 0.9390662208736128 AS sv_1, 1.2761346305253827 AS sv_2, 2.519151553816976 AS sv_3, -0.09755810003203527 AS sv_4, 1.7922293075426368 AS sv_5, 1.8710214256577664 AS sv_6, 2.3359208925537804 AS sv_7, 0.2658442212156186 AS sv_8 UNION ALL SELECT 239 AS sv_idx, 0.16499317794444307 AS dual_coeff, -0.1483587015604312 AS sv_0, 2.249914525198436 AS sv_1, 0.26668373644325233 AS sv_2, 1.4685030239135957 AS sv_3, -0.09755810003203527 AS sv_4, 1.7922293075426368 AS sv_5, 2.281132424884727 AS sv_6, 2.3359208925537804 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 240 AS sv_idx, 0.08909931481047932 AS dual_coeff, -0.1483587015604312 AS sv_0, 1.2667782969548187 AS sv_1, 1.6126182618860927 AS sv_2, 0.06763831737575418 AS sv_3, 0.35405191269904224 AS sv_4, 1.7922293075426368 AS sv_5, 2.281132424884727 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 241 AS sv_idx, 0.11268306815458524 AS dual_coeff, 1.9825193612630212 AS sv_0, 0.283642068711201 AS sv_1, 0.6031673678039624 AS sv_2, 0.4178544940102145 AS sv_3, -0.09755810003203527 AS sv_4, 0.4040544186204084 AS sv_5, 1.460910426430805 AS sv_6, 0.043570124951364066 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 242 AS sv_idx, 0.03223633536823018 AS dual_coeff, 0.9170803298512952 AS sv_0, 0.6113541447924069 AS sv_1, 0.9396509991646724 AS sv_2, 2.519151553816976 AS sv_3, 0.35405191269904224 AS sv_4, 1.7922293075426368 AS sv_5, 0.6406884279768832 AS sv_6, 0.043570124951364066 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 243 AS sv_idx, 0.2537317848021805 AS dual_coeff, 0.9170803298512952 AS sv_0, 0.283642068711201 AS sv_1, 0.26668373644325233 AS sv_2, 0.06763831737575418 AS sv_3, 0.35405191269904224 AS sv_4, 1.7922293075426368 AS sv_5, 1.050799427203844 AS sv_6, 2.0084422114677207 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 244 AS sv_idx, -0.034981898220244945 AS dual_coeff, -1.2137977329721574 AS sv_0, -0.6994941595324167 AS sv_1, -0.7427671576388781 AS sv_2, -0.6327940358931664 AS sv_3, -1.0007781254941903 AS sv_4, -0.7064854925173741 AS sv_5, -0.1795335704770387 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 245 AS sv_idx, 0.1135926030384846 AS dual_coeff, 0.5619339860473863 AS sv_0, 2.249914525198436 AS sv_1, 2.2855855246075127 AS sv_2, 2.519151553816976 AS sv_3, 0.35405191269904224 AS sv_4, 1.7922293075426368 AS sv_5, 1.460910426430805 AS sv_6, 2.3359208925537804 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 246 AS 
sv_idx, 0.305576749488975 AS dual_coeff, 0.9170803298512952 AS sv_0, 1.5944903730360245 AS sv_1, -0.06979989491745779 AS sv_2, 1.4685030239135957 AS sv_3, 0.35405191269904224 AS sv_4, 0.4040544186204084 AS sv_5, 1.460910426430805 AS sv_6, 1.6809635303816615 AS sv_7, 0.2658442212156186 AS sv_8 UNION ALL SELECT 247 AS sv_idx, -1.0 AS dual_coeff, -0.1483587015604312 AS sv_0, 0.283642068711201 AS sv_1, -0.406283526278168 AS sv_2, -0.6327940358931664 AS sv_3, -0.5491681127631127 AS sv_4, 0.4040544186204084 AS sv_5, -0.5896445697039996 AS sv_6, -0.6113872372207548 AS sv_7, 0.2658442212156186 AS sv_8 UNION ALL SELECT 248 AS sv_idx, -1.0 AS dual_coeff, -0.1483587015604312 AS sv_0, -0.04407000737000486 AS sv_1, -0.7427671576388781 AS sv_2, -0.6327940358931664 AS sv_3, -0.5491681127631127 AS sv_4, -0.7064854925173741 AS sv_5, 0.2305774287499223 AS sv_6, 1.6809635303816615 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 249 AS sv_idx, 0.13677770097616562 AS dual_coeff, 0.2067876422434776 AS sv_0, 1.2667782969548187 AS sv_1, 2.2855855246075127 AS sv_2, 2.519151553816976 AS sv_3, 0.8056619254301197 AS sv_4, 1.7922293075426368 AS sv_5, 2.691243424111688 AS sv_6, 2.3359208925537804 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 250 AS sv_idx, 0.106524173993652 AS dual_coeff, 1.272226673655204 AS sv_0, 0.283642068711201 AS sv_1, 0.26668373644325233 AS sv_2, -0.6327940358931664 AS sv_3, 1.2572719381611972 AS sv_4, 1.7922293075426368 AS sv_5, -0.5896445697039996 AS sv_6, 0.698527487123483 AS sv_7, 0.2658442212156186 AS sv_8 UNION ALL SELECT 251 AS sv_idx, 0.20258778647267633 AS dual_coeff, 1.9825193612630212 AS sv_0, 2.249914525198436 AS sv_1, 1.6126182618860927 AS sv_2, 2.519151553816976 AS sv_3, 1.2572719381611972 AS sv_4, 0.4040544186204084 AS sv_5, 2.691243424111688 AS sv_6, 0.043570124951364066 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 252 AS sv_idx, 0.2016418625560816 AS dual_coeff, 1.272226673655204 AS sv_0, 2.249914525198436 AS sv_1, 0.26668373644325233 AS sv_2, 0.4178544940102145 AS sv_3, 2.160491963623352 AS sv_4, 1.7922293075426368 AS sv_5, 1.8710214256577664 AS sv_6, -0.2839085561346954 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 253 AS sv_idx, 0.15505215106347436 AS dual_coeff, 0.9170803298512952 AS sv_0, 0.9390662208736128 AS sv_1, 2.2855855246075127 AS sv_2, 0.7680706706446747 AS sv_3, -0.09755810003203527 AS sv_4, 1.7922293075426368 AS sv_5, 2.281132424884727 AS sv_6, 2.3359208925537804 AS sv_7, 0.2658442212156186 AS sv_8 UNION ALL SELECT 254 AS sv_idx, 0.1408428865277026 AS dual_coeff, 1.9825193612630212 AS sv_0, 1.9222024491172305 AS sv_1, 1.2761346305253827 AS sv_2, 0.06763831737575418 AS sv_3, 0.35405191269904224 AS sv_4, -0.4288505147329285 AS sv_5, 1.460910426430805 AS sv_6, 1.353484849295602 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 255 AS sv_idx, 0.11259159269282347 AS dual_coeff, 0.2067876422434776 AS sv_0, 1.2667782969548187 AS sv_1, 2.2855855246075127 AS sv_2, 1.1182868472791347 AS sv_3, 0.8056619254301197 AS sv_4, 1.7922293075426368 AS sv_5, 1.460910426430805 AS sv_6, 0.698527487123483 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 256 AS sv_idx, 0.13355045239525326 AS dual_coeff, 0.5619339860473863 AS sv_0, 2.249914525198436 AS sv_1, 0.6031673678039624 AS sv_2, 0.7680706706446747 AS sv_3, 0.35405191269904224 AS sv_4, 1.7922293075426368 AS sv_5, 1.050799427203844 AS sv_6, 2.3359208925537804 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 257 AS sv_idx, -0.720154687854492 AS dual_coeff, 0.2067876422434776 AS sv_0, 
-0.6994941595324167 AS sv_1, -0.7427671576388781 AS sv_2, 1.1182868472791347 AS sv_3, -0.09755810003203527 AS sv_4, -0.7064854925173741 AS sv_5, -0.9997555689309604 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 258 AS sv_idx, 0.09587686363014995 AS dual_coeff, 1.272226673655204 AS sv_0, 2.249914525198436 AS sv_1, 2.2855855246075127 AS sv_2, 2.519151553816976 AS sv_3, 1.2572719381611972 AS sv_4, 1.7922293075426368 AS sv_5, 2.691243424111688 AS sv_6, 2.3359208925537804 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 259 AS sv_idx, 0.22208636715835237 AS dual_coeff, 1.6273730174591126 AS sv_0, 1.5944903730360245 AS sv_1, 1.6126182618860927 AS sv_2, 2.168935377182516 AS sv_3, 1.2572719381611972 AS sv_4, -0.1512155369484829 AS sv_5, 0.2305774287499223 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 260 AS sv_idx, 0.19962201083331002 AS dual_coeff, -0.1483587015604312 AS sv_0, 2.249914525198436 AS sv_1, 1.6126182618860927 AS sv_2, 0.7680706706446747 AS sv_3, 0.35405191269904224 AS sv_4, -0.7064854925173741 AS sv_5, 2.691243424111688 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 261 AS sv_idx, 0.05542351342951056 AS dual_coeff, -0.8586513891682487 AS sv_0, 0.6113541447924069 AS sv_1, 1.2761346305253827 AS sv_2, 1.1182868472791347 AS sv_3, 0.35405191269904224 AS sv_4, 1.7922293075426368 AS sv_5, 1.460910426430805 AS sv_6, 1.0260061682095425 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 262 AS sv_idx, 0.04765365020510118 AS dual_coeff, 1.9825193612630212 AS sv_0, -0.04407000737000486 AS sv_1, 0.26668373644325233 AS sv_2, 0.7680706706446747 AS sv_3, -0.09755810003203527 AS sv_4, 1.7922293075426368 AS sv_5, 0.2305774287499223 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 263 AS sv_idx, 0.15353147554918722 AS dual_coeff, -0.1483587015604312 AS sv_0, 1.5944903730360245 AS sv_1, 0.9396509991646724 AS sv_2, 0.06763831737575418 AS sv_3, 0.35405191269904224 AS sv_4, 1.7922293075426368 AS sv_5, 1.460910426430805 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 264 AS sv_idx, 0.3352412118117383 AS dual_coeff, 0.2067876422434776 AS sv_0, 0.283642068711201 AS sv_1, 0.9396509991646724 AS sv_2, 1.818719200548056 AS sv_3, 0.35405191269904224 AS sv_4, -0.7064854925173741 AS sv_5, 1.8710214256577664 AS sv_6, 2.3359208925537804 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 265 AS sv_idx, 0.2101021368924138 AS dual_coeff, 0.2067876422434776 AS sv_0, -0.04407000737000486 AS sv_1, -0.406283526278168 AS sv_2, 1.818719200548056 AS sv_3, 0.8056619254301197 AS sv_4, 1.7922293075426368 AS sv_5, 1.8710214256577664 AS sv_6, -0.6113872372207548 AS sv_7, 0.2658442212156186 AS sv_8 UNION ALL SELECT 266 AS sv_idx, 0.16798016645955235 AS dual_coeff, 1.9825193612630212 AS sv_0, 0.6113541447924069 AS sv_1, 2.2855855246075127 AS sv_2, 0.06763831737575418 AS sv_3, 0.8056619254301197 AS sv_4, 1.2369593519737454 AS sv_5, 1.460910426430805 AS sv_6, 1.6809635303816615 AS sv_7, 0.883203392742919 AS sv_8 UNION ALL SELECT 267 AS sv_idx, 0.214856105998116 AS dual_coeff, 0.2067876422434776 AS sv_0, 2.249914525198436 AS sv_1, 2.2855855246075127 AS sv_2, 2.519151553816976 AS sv_3, 3.063711989085508 AS sv_4, 1.7922293075426368 AS sv_5, 2.691243424111688 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 268 AS sv_idx, 0.08397077394214986 AS dual_coeff, 1.9825193612630212 AS sv_0, 0.283642068711201 AS sv_1, 
-0.06979989491745779 AS sv_2, 2.519151553816976 AS sv_3, -0.09755810003203527 AS sv_4, 1.7922293075426368 AS sv_5, 1.460910426430805 AS sv_6, -0.6113872372207548 AS sv_7, 0.2658442212156186 AS sv_8 UNION ALL SELECT 269 AS sv_idx, 0.2110433296033717 AS dual_coeff, 0.2067876422434776 AS sv_0, 2.249914525198436 AS sv_1, 2.2855855246075127 AS sv_2, 2.519151553816976 AS sv_3, 0.8056619254301197 AS sv_4, -0.4288505147329285 AS sv_5, 1.8710214256577664 AS sv_6, 0.698527487123483 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 270 AS sv_idx, 0.16179033859007644 AS dual_coeff, 1.272226673655204 AS sv_0, 2.249914525198436 AS sv_1, 2.2855855246075127 AS sv_2, 2.519151553816976 AS sv_3, 1.2572719381611972 AS sv_4, 1.7922293075426368 AS sv_5, 2.691243424111688 AS sv_6, 2.3359208925537804 AS sv_7, 4.587358421906721 AS sv_8 UNION ALL SELECT 271 AS sv_idx, -1.0 AS dual_coeff, 0.5619339860473863 AS sv_0, -0.04407000737000486 AS sv_1, -0.06979989491745779 AS sv_2, 0.06763831737575418 AS sv_3, -0.09755810003203527 AS sv_4, -0.4288505147329285 AS sv_5, 1.050799427203844 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 272 AS sv_idx, -0.07620138946990472 AS dual_coeff, 0.9170803298512952 AS sv_0, -0.6994941595324167 AS sv_1, -0.406283526278168 AS sv_2, 0.06763831737575418 AS sv_3, -0.5491681127631127 AS sv_4, -0.7064854925173741 AS sv_5, -0.5896445697039996 AS sv_6, -0.6113872372207548 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 273 AS sv_idx, 0.26047036605965823 AS dual_coeff, -0.1483587015604312 AS sv_0, 0.9390662208736128 AS sv_1, 0.9396509991646724 AS sv_2, 0.7680706706446747 AS sv_3, 1.7088819508922748 AS sv_4, 0.681689396404854 AS sv_5, 1.460910426430805 AS sv_6, 1.353484849295602 AS sv_7, 0.883203392742919 AS sv_8 UNION ALL SELECT 274 AS sv_idx, 0.1806230211482376 AS dual_coeff, 1.272226673655204 AS sv_0, 1.2667782969548187 AS sv_1, 0.26668373644325233 AS sv_2, 0.4178544940102145 AS sv_3, 0.8056619254301197 AS sv_4, -0.1512155369484829 AS sv_5, 0.6406884279768832 AS sv_6, 2.3359208925537804 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 275 AS sv_idx, 0.18612196529991032 AS dual_coeff, 1.9825193612630212 AS sv_0, 2.249914525198436 AS sv_1, 1.2761346305253827 AS sv_2, 1.818719200548056 AS sv_3, 1.7088819508922748 AS sv_4, -0.7064854925173741 AS sv_5, 2.691243424111688 AS sv_6, 2.3359208925537804 AS sv_7, 0.883203392742919 AS sv_8 UNION ALL SELECT 276 AS sv_idx, 0.19355030652897026 AS dual_coeff, 0.2067876422434776 AS sv_0, 2.249914525198436 AS sv_1, 2.2855855246075127 AS sv_2, 2.519151553816976 AS sv_3, 3.063711989085508 AS sv_4, -0.4288505147329285 AS sv_5, 2.691243424111688 AS sv_6, 2.3359208925537804 AS sv_7, 4.587358421906721 AS sv_8 UNION ALL SELECT 277 AS sv_idx, -1.0 AS dual_coeff, 0.2067876422434776 AS sv_0, 0.283642068711201 AS sv_1, 0.6031673678039624 AS sv_2, -0.6327940358931664 AS sv_3, 2.160491963623352 AS sv_4, -0.7064854925173741 AS sv_5, -0.1795335704770387 AS sv_6, 1.0260061682095425 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 278 AS sv_idx, 0.17263720605554664 AS dual_coeff, 0.9170803298512952 AS sv_0, 1.5944903730360245 AS sv_1, 1.6126182618860927 AS sv_2, 1.4685030239135957 AS sv_3, -0.09755810003203527 AS sv_4, 1.7922293075426368 AS sv_5, 1.460910426430805 AS sv_6, -0.2839085561346954 AS sv_7, 0.883203392742919 AS sv_8 UNION ALL SELECT 279 AS sv_idx, 0.21264404856170457 AS dual_coeff, 0.2067876422434776 AS sv_0, 1.2667782969548187 AS sv_1, 0.26668373644325233 AS sv_2, -0.6327940358931664 AS sv_3, 1.2572719381611972 AS 
sv_4, -0.7064854925173741 AS sv_5, 1.460910426430805 AS sv_6, 2.3359208925537804 AS sv_7, 0.883203392742919 AS sv_8 UNION ALL SELECT 280 AS sv_idx, 0.1290244243083732 AS dual_coeff, 0.2067876422434776 AS sv_0, 2.249914525198436 AS sv_1, 2.2855855246075127 AS sv_2, 1.818719200548056 AS sv_3, 0.8056619254301197 AS sv_4, 0.4040544186204084 AS sv_5, 1.460910426430805 AS sv_6, 2.3359208925537804 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 281 AS sv_idx, 0.16777817031781414 AS dual_coeff, -0.5035050453643399 AS sv_0, 2.249914525198436 AS sv_1, 1.2761346305253827 AS sv_2, 1.818719200548056 AS sv_3, 0.8056619254301197 AS sv_4, 1.2369593519737454 AS sv_5, 1.460910426430805 AS sv_6, 0.3710488060374236 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 282 AS sv_idx, 0.15182023034031494 AS dual_coeff, 1.9825193612630212 AS sv_0, 2.249914525198436 AS sv_1, 2.2855855246075127 AS sv_2, 2.519151553816976 AS sv_3, 0.8056619254301197 AS sv_4, 1.7922293075426368 AS sv_5, 2.691243424111688 AS sv_6, 2.3359208925537804 AS sv_7, 3.352640078852121 AS sv_8 UNION ALL SELECT 283 AS sv_idx, 0.1493349427004023 AS dual_coeff, 0.2067876422434776 AS sv_0, 2.249914525198436 AS sv_1, 2.2855855246075127 AS sv_2, 2.519151553816976 AS sv_3, 0.35405191269904224 AS sv_4, 1.7922293075426368 AS sv_5, 0.6406884279768832 AS sv_6, 1.0260061682095425 AS sv_7, 0.883203392742919 AS sv_8 UNION ALL SELECT 284 AS sv_idx, -1.0 AS dual_coeff, -1.2137977329721574 AS sv_0, -0.6994941595324167 AS sv_1, -0.7427671576388781 AS sv_2, -0.6327940358931664 AS sv_3, -0.5491681127631127 AS sv_4, -0.7064854925173741 AS sv_5, -0.9997555689309604 AS sv_6, -0.6113872372207548 AS sv_7, 3.969999250379421 AS sv_8 UNION ALL SELECT 285 AS sv_idx, 0.3704028686583388 AS dual_coeff, 0.2067876422434776 AS sv_0, 2.249914525198436 AS sv_1, 2.2855855246075127 AS sv_2, 0.7680706706446747 AS sv_3, 0.35405191269904224 AS sv_4, 0.4040544186204084 AS sv_5, 0.2305774287499223 AS sv_6, 0.3710488060374236 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 286 AS sv_idx, 0.15859829049188276 AS dual_coeff, 0.2067876422434776 AS sv_0, 2.249914525198436 AS sv_1, 2.2855855246075127 AS sv_2, 0.06763831737575418 AS sv_3, 1.7088819508922748 AS sv_4, -0.1512155369484829 AS sv_5, 1.8710214256577664 AS sv_6, 2.3359208925537804 AS sv_7, 0.2658442212156186 AS sv_8 UNION ALL SELECT 287 AS sv_idx, 0.16072932688435015 AS dual_coeff, -0.1483587015604312 AS sv_0, 1.5944903730360245 AS sv_1, 0.9396509991646724 AS sv_2, 0.4178544940102145 AS sv_3, -0.09755810003203527 AS sv_4, 0.1264194408359628 AS sv_5, 2.691243424111688 AS sv_6, 1.0260061682095425 AS sv_7, -0.3515149503116818 AS sv_8 UNION ALL SELECT 288 AS sv_idx, 0.10973812711995312 AS dual_coeff, -0.1483587015604312 AS sv_0, 1.5944903730360245 AS sv_1, 1.6126182618860927 AS sv_2, 0.7680706706446747 AS sv_3, 0.35405191269904224 AS sv_4, 0.4040544186204084 AS sv_5, 2.691243424111688 AS sv_6, 0.3710488060374236 AS sv_7, -0.3515149503116818 AS sv_8) AS "Values"),
kernel_dp AS
(SELECT t."KEY" AS "KEY", t.dot_product AS dot_product
FROM (SELECT full_join_data_sv."KEY" AS "KEY", sum(CAST(full_join_data_sv.dot_prod1 AS FLOAT)) + 0.779889355387171 AS dot_product
FROM (SELECT kernel_input_with_scaling."KEY" AS "KEY", "SV_data".dual_coeff * exp(least(100.0, greatest(-100.0, -0.786246209886826 * (power(kernel_input_with_scaling."Feature_0" - "SV_data".sv_0, 2) + power(kernel_input_with_scaling."Feature_1" - "SV_data".sv_1, 2) + power(kernel_input_with_scaling."Feature_2" - "SV_data".sv_2, 2) + power(kernel_input_with_scaling."Feature_3" - "SV_data".sv_3, 2) + power(kernel_input_with_scaling."Feature_4" - "SV_data".sv_4, 2) + power(kernel_input_with_scaling."Feature_5" - "SV_data".sv_5, 2) + power(kernel_input_with_scaling."Feature_6" - "SV_data".sv_6, 2) + power(kernel_input_with_scaling."Feature_7" - "SV_data".sv_7, 2) + power(kernel_input_with_scaling."Feature_8" - "SV_data".sv_8, 2))))) AS dot_prod1
FROM kernel_input_with_scaling, "SV_data") AS full_join_data_sv GROUP BY full_join_data_sv."KEY") AS t)
SELECT kernel_dp."KEY" AS "KEY", CAST(NULL AS FLOAT) AS "Score_benign", CAST(NULL AS FLOAT) AS "Score_malignant", 1.0 - 1.0 / (1.0 + exp(least(100.0, greatest(-100.0, -(-(kernel_dp.dot_product * -4.266216437102832 + 1.111493090708207)))))) AS "Proba_benign", 1.0 / (1.0 + exp(least(100.0, greatest(-100.0, -(-(kernel_dp.dot_product * -4.266216437102832 + 1.111493090708207)))))) AS "Proba_malignant", CASE WHEN (1.0 - 1.0 / (1.0 + exp(least(100.0, greatest(-100.0, -(-(kernel_dp.dot_product * -4.266216437102832 + 1.111493090708207)))))) IS NULL OR 1.0 - 1.0 / (1.0 + exp(least(100.0, greatest(-100.0, -(-(kernel_dp.dot_product * -4.266216437102832 + 1.111493090708207)))))) > 0.0) THEN ln(1.0 - 1.0 / (1.0 + exp(least(100.0, greatest(-100.0, -(-(kernel_dp.dot_product * -4.266216437102832 + 1.111493090708207))))))) ELSE -1.79769313486231e+308 END AS "LogProba_benign", CASE WHEN (1.0 / (1.0 + exp(least(100.0, greatest(-100.0, -(-(kernel_dp.dot_product * -4.266216437102832 + 1.111493090708207)))))) IS NULL OR 1.0 / (1.0 + exp(least(100.0, greatest(-100.0, -(-(kernel_dp.dot_product * -4.266216437102832 + 1.111493090708207)))))) > 0.0) THEN ln(1.0 / (1.0 + exp(least(100.0, greatest(-100.0, -(-(kernel_dp.dot_product * -4.266216437102832 + 1.111493090708207))))))) ELSE -1.79769313486231e+308 END AS "LogProba_malignant", CASE WHEN (1.0 / (1.0 + exp(least(100.0, greatest(-100.0, -(-(kernel_dp.dot_product * -4.266216437102832 + 1.111493090708207)))))) > 1.0 - 1.0 / (1.0 + exp(least(100.0, greatest(-100.0, -(-(kernel_dp.dot_product * -4.266216437102832 + 1.111493090708207))))))) THEN 'malignant' ELSE 'benign' END AS "Decision", greatest(1.0 - 1.0 / (1.0 + exp(least(100.0, greatest(-100.0, -(-(kernel_dp.dot_product * -4.266216437102832 + 1.111493090708207)))))), 1.0 / (1.0 + exp(least(100.0, greatest(-100.0, -(-(kernel_dp.dot_product * -4.266216437102832 + 1.111493090708207))))))) AS "DecisionProba"
FROM kernel_dp
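###Markdown
For reference, the generated query appears to score a Platt-scaled RBF-kernel SVM. Reading the constants out of the SQL itself (the kernel coefficient, the intercept added to the kernel sum, and the two sigmoid coefficients), the probability it computes is$$f(x) = \sum_i \alpha_i \, e^{-\gamma \lVert x - sv_i \rVert^2} + \rho, \qquad P(\mathrm{malignant} \mid x) = \frac{1}{1 + e^{A f(x) + B}}$$with $\gamma \approx 0.7862$, $\rho \approx 0.7799$, $A \approx -4.2662$, and $B \approx 1.1115$; the `least`/`greatest` wrappers only clamp the exponent to $[-100, 100]$ to guard `exp` against overflow.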
###Markdown
Execute the SQL Code
###Code
library(RODBC)
conn = odbcConnect("pgsql", uid="db", pwd="db", case="nochange")
odbcSetAutoCommit(conn , autoCommit = TRUE)
dataset = bc[, -TGT_IDX]
df_sql = as.data.frame(dataset)
names(df_sql) = sprintf("Feature_%d",0:(ncol(df_sql)-1))
df_sql$KEY = seq.int(nrow(dataset))
sqlDrop(conn , "INPUT_DATA" , errors = FALSE)
sqlSave(conn, df_sql, tablename = "INPUT_DATA", verbose = FALSE)
head(df_sql)
# colnames(df_sql)
# odbcGetInfo(conn)
# sqlTables(conn)
df_sql_out = sqlQuery(conn, lModelSQL)
head(df_sql_out)
###Output
_____no_output_____
###Markdown
R Caret SVM Output
###Code
pred_proba = predict(model, as.matrix(dataset), type = "prob")
df_r_out = data.frame(pred_proba)
names(df_r_out) = sprintf("Proba_%s",model$levels)
df_r_out$KEY = seq.int(nrow(dataset))
df_r_out$Score_benign = NA
df_r_out$Score_malignant = NA
df_r_out$LogProba_benign = log(df_r_out$Proba_benign)
df_r_out$LogProba_malignant = log(df_r_out$Proba_malignant)
df_r_out$Decision = predict(model, as.matrix(dataset), type = "raw")
df_r_out$DecisionProba = apply(pred_proba, 1, function(x) max(x))
head(df_r_out)
###Output
_____no_output_____
###Markdown
Compare R and SQL output
###Code
df_merge = merge(x = df_r_out, y = df_sql_out, by = "KEY", all = TRUE, suffixes = c("_1","_2"))
head(df_merge)
diffs_df = df_merge[df_merge$Decision_1 != df_merge$Decision_2,]
head(diffs_df)
print(c("DIFF_N_ROWS" , nrow(diffs_df)))
stopifnot(nrow(diffs_df) == 0)
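# A minimal sketch (assuming both frames expose Proba_* columns, which the
# merge above suffixes with _1/_2): also check that the probabilities agree
# numerically, not just the hard decisions.
print(max(abs(df_merge$Proba_malignant_1 - df_merge$Proba_malignant_2)))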
summary(df_sql_out)
summary(df_r_out)
###Output
_____no_output_____ |
code/chap11.ipynb | ###Markdown
Modeling and Simulation in PythonChapter 11Copyright 2017 Allen DowneyLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
SIR implementationWe'll use a `State` object to represent the number (or fraction) of people in each compartment.
###Code
init = State(S=89, I=1, R=0)
###Output
_____no_output_____
###Markdown
To convert from number of people to fractions, we divide through by the total.
###Code
init /= sum(init)
###Output
_____no_output_____
###Markdown
`make_system` creates a `System` object with the given parameters.
###Code
def make_system(beta, gamma):
"""Make a system object for the SIR model.
beta: contact rate per day
gamma: recovery rate per day
returns: System object
"""
init = State(S=89, I=1, R=0)
init /= sum(init)
t0 = 0
t_end = 7 * 14
return System(init=init, t0=t0, t_end=t_end,
beta=beta, gamma=gamma)
###Output
_____no_output_____
###Markdown
Here's an example with hypothetical values for `beta` and `gamma`.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
###Output
_____no_output_____
###Markdown
The update function takes the state during the current time step and returns the state during the next time step.
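In equation form, with $s$, $i$, and $r$ as fractions of the population, each step applies$$\Delta s = -\beta s i, \qquad \Delta i = \beta s i - \gamma i, \qquad \Delta r = \gamma i$$which is exactly what the code below computes.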
###Code
def update_func(state, t, system):
"""Update the SIR model.
state: State with variables S, I, R
t: time step
system: System with beta and gamma
returns: State object
"""
s, i, r = state
infected = system.beta * i * s
recovered = system.gamma * i
s -= infected
i += infected - recovered
r += recovered
return State(S=s, I=i, R=r)
###Output
_____no_output_____
###Markdown
To run a single time step, we call it like this:
###Code
state = update_func(init, 0, system)
###Output
_____no_output_____
###Markdown
Now we can run a simulation by calling the update function for each time step.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: State object for final state
"""
state = system.init
for t in linrange(system.t0, system.t_end):
state = update_func(state, t, system)
return state
###Output
_____no_output_____
###Markdown
The result is the state of the system at `t_end`
###Code
run_simulation(system, update_func)
###Output
_____no_output_____
###Markdown
**Exercise** Suppose the time between contacts is 4 days and the recovery time is 5 days. After 14 weeks, how many students, total, have been infected?Hint: what is the change in `S` between the beginning and the end of the simulation?
###Code
# Solution goes here
###Output
_____no_output_____
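###Markdown
One possible approach (a sketch, not the book's official solution): rebuild the system with the new parameters, run it, and take the drop in `S` between the beginning and the end.
###Code
# a minimal sketch of one possible solution
tc = 4  # time between contacts in days
tr = 5  # recovery time in days
system = make_system(1/tc, 1/tr)
final = run_simulation(system, update_func)
system.init.S - final.S  # total fraction infected
###Output
_____no_output_____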
###Markdown
Using TimeSeries objects If we want to store the state of the system at each time step, we can use one `TimeSeries` object for each state variable.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
Add three Series objects to the System: S, I, R
system: System object
update_func: function that updates state
"""
S = TimeSeries()
I = TimeSeries()
R = TimeSeries()
state = system.init
t0 = system.t0
S[t0], I[t0], R[t0] = state
for t in linrange(system.t0, system.t_end):
state = update_func(state, t, system)
S[t+1], I[t+1], R[t+1] = state
return S, I, R
###Output
_____no_output_____
###Markdown
Here's how we call it.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
S, I, R = run_simulation(system, update_func)
###Output
_____no_output_____
###Markdown
And then we can plot the results.
###Code
def plot_results(S, I, R):
"""Plot the results of a SIR model.
S: TimeSeries
I: TimeSeries
R: TimeSeries
"""
plot(S, '--', label='Susceptible')
plot(I, '-', label='Infected')
plot(R, ':', label='Recovered')
decorate(xlabel='Time (days)',
ylabel='Fraction of population')
###Output
_____no_output_____
###Markdown
Here's what they look like.
###Code
plot_results(S, I, R)
savefig('figs/chap05-fig01.pdf')
###Output
_____no_output_____
###Markdown
Using a DataFrame Instead of making three `TimeSeries` objects, we can use one `DataFrame`.We have to use `row` to select rows, rather than columns. But then Pandas does the right thing, matching up the state variables with the columns of the `DataFrame`.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: TimeFrame
"""
frame = TimeFrame(columns=system.init.index)
frame.row[system.t0] = system.init
for t in linrange(system.t0, system.t_end):
frame.row[t+1] = update_func(frame.row[t], t, system)
return frame
###Output
_____no_output_____
###Markdown
Here's how we run it, and what the result looks like.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
results = run_simulation(system, update_func)
results.head()
###Output
_____no_output_____
###Markdown
We can extract the results and plot them.
###Code
plot_results(results.S, results.I, results.R)
###Output
_____no_output_____
###Markdown
Exercises**Exercise** Suppose the time between contacts is 4 days and the recovery time is 5 days. Simulate this scenario for 14 weeks and plot the results.
###Code
# Solution goes here
###Output
_____no_output_____
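###Markdown
A sketch of one possible solution, reusing the `TimeFrame` version of `run_simulation` defined above (not the book's official solution):
###Code
# a minimal sketch of one possible solution
system = make_system(1/4, 1/5)
results = run_simulation(system, update_func)
plot_results(results.S, results.I, results.R)
###Output
_____no_output_____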
###Markdown
Modeling and Simulation in PythonChapter 11: RotationCopyright 2017 Allen DowneyLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# If you want the figures to appear in the notebook,
# and you want to interact with them, use
# %matplotlib notebook
# If you want the figures to appear in the notebook,
# and you don't want to interact with them, use
# %matplotlib inline
# If you want the figures to appear in separate windows, use
# %matplotlib qt5
# To switch from one to another, you have to select Kernel->Restart
%matplotlib inline
from modsim import *
###Output
_____no_output_____
###Markdown
Rolling paperWe'll start by loading the units we need.
###Code
radian = UNITS.radian
m = UNITS.meter
s = UNITS.second
###Output
_____no_output_____
###Markdown
And creating a `Condition` object with the system parameters
###Code
condition = Condition(Rmin = 0.02 * m,
Rmax = 0.055 * m,
L = 47 * m,
duration = 130 * s)
###Output
_____no_output_____
###Markdown
The following function estimates the parameter `k`, which is the increase in the radius of the roll for each radian of rotation.
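Assuming the radius grows linearly with the rotation angle, this estimate has a closed form: with $R_{avg} = (R_{max} + R_{min})/2$, the total rotation is $L / R_{avg}$ radians, so$$k = \frac{R_{max} - R_{min}}{L / R_{avg}} = \frac{R_{max}^2 - R_{min}^2}{2 L}$$which matches the expression used for `k` in the unrolling model later in this notebook.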
###Code
def estimate_k(condition):
"""Estimates the parameter `k`.
condition: Condition with Rmin, Rmax, and L
returns: k in meters per radian
"""
unpack(condition)
Ravg = (Rmax + Rmin) / 2
Cavg = 2 * pi * Ravg
revs = L / Cavg
rads = 2 * pi * revs
k = (Rmax - Rmin) / rads
return k
###Output
_____no_output_____
###Markdown
As usual, `make_system` takes a `Condition` object and returns a `System` object.
###Code
def make_system(condition):
"""Make a system object.
condition: Condition with Rmin, Rmax, and L
returns: System with init, k, and ts
"""
unpack(condition)
init = State(theta = 0 * radian,
y = 0 * m,
r = Rmin)
k = estimate_k(condition)
ts = linspace(0, duration, 101)
return System(init=init, k=k, ts=ts)
###Output
_____no_output_____
###Markdown
Testing `make_system`
###Code
system = make_system(condition)
system
system.init
###Output
_____no_output_____
###Markdown
Now we can write a slope function based on the differential equations $\omega = \frac{d\theta}{dt} = 10$, $\frac{dy}{dt} = r \frac{d\theta}{dt}$, $\frac{dr}{dt} = k \frac{d\theta}{dt}$
###Code
def slope_func(state, t, system):
"""Computes the derivatives of the state variables.
state: State object with theta, y, r
t: time
system: System object with r, k
returns: sequence of derivatives
"""
theta, y, r = state
unpack(system)
omega = 10 * radian / s
dydt = r * omega
drdt = k * omega
return omega, dydt, drdt
###Output
_____no_output_____
###Markdown
Testing `slope_func`
###Code
slope_func(system.init, 0*s, system)
###Output
_____no_output_____
###Markdown
Now we can run the simulation.
###Code
run_odeint(system, slope_func)
###Output
_____no_output_____
###Markdown
And look at the results.
###Code
system.results.tail()
###Output
_____no_output_____
###Markdown
Extracting one time series per variable (and converting `r` to radians):
###Code
thetas = system.results.theta
ys = system.results.y
rs = system.results.r * 1000
###Output
_____no_output_____
###Markdown
Plotting `theta`
###Code
plot(thetas, label='theta')
decorate(xlabel='Time (s)',
ylabel='Angle (rad)')
###Output
_____no_output_____
###Markdown
Plotting `y`
###Code
plot(ys, color='green', label='y')
decorate(xlabel='Time (s)',
ylabel='Length (m)')
###Output
_____no_output_____
###Markdown
Plotting `r`
###Code
plot(rs, color='red', label='r')
decorate(xlabel='Time (s)',
ylabel='Radius (mm)')
###Output
_____no_output_____
###Markdown
We can also see the relationship between `y` and `r`, which I derive analytically in the book.
###Code
plot(rs, ys, color='purple')
decorate(xlabel='Radius (mm)',
ylabel='Length (m)',
legend=False)
###Output
_____no_output_____
###Markdown
And here's the figure from the book.
###Code
subplot(3, 1, 1)
plot(thetas, label='theta')
decorate(ylabel='Angle (rad)')
subplot(3, 1, 2)
plot(ys, color='green', label='y')
decorate(ylabel='Length (m)')
subplot(3, 1, 3)
plot(rs, color='red', label='r')
decorate(xlabel='Time(s)',
ylabel='Radius (mm)')
savefig('chap11-fig01.pdf')
###Output
_____no_output_____
###Markdown
We can use interpolation to find the time when `y` is 47 meters.
###Code
T = interp_inverse(ys, kind='cubic')
t_end = T(47)
t_end
###Output
_____no_output_____
###Markdown
At that point `r` is 55 mm, which is `Rmax`, as expected.
###Code
R = interpolate(rs, kind='cubic')
R(t_end)
###Output
_____no_output_____
###Markdown
The total amount of rotation is 1253 rad.
###Code
THETA = interpolate(thetas, kind='cubic')
THETA(t_end)
###Output
_____no_output_____
###Markdown
Unrolling For unrolling the paper, we need more units:
###Code
kg = UNITS.kilogram
N = UNITS.newton
###Output
_____no_output_____
###Markdown
And a few more parameters in the `Condition` object.
###Code
condition = Condition(Rmin = 0.02 * m,
Rmax = 0.055 * m,
Mcore = 15e-3 * kg,
Mroll = 215e-3 * kg,
L = 47 * m,
tension = 2e-4 * N,
duration = 180 * s)
###Output
_____no_output_____
###Markdown
`make_system` computes `rho_h`, which we'll need to compute moment of inertia, and `k`, which we'll use to compute `r`.
###Code
def make_system(condition):
"""Make a system object.
condition: Condition with Rmin, Rmax, Mcore, Mroll,
L, tension, and duration
returns: System with init, k, rho_h, Rmin, Rmax,
Mcore, Mroll, ts
"""
unpack(condition)
init = State(theta = 0 * radian,
omega = 0 * radian/s,
y = L)
area = pi * (Rmax**2 - Rmin**2)
rho_h = Mroll / area
k = (Rmax**2 - Rmin**2) / 2 / L / radian
ts = linspace(0, duration, 101)
return System(init=init, k=k, rho_h=rho_h,
Rmin=Rmin, Rmax=Rmax,
Mcore=Mcore, Mroll=Mroll,
ts=ts)
###Output
_____no_output_____
###Markdown
Testing `make_system`
###Code
system = make_system(condition)
system
system.init
###Output
_____no_output_____
###Markdown
Here's how we compute `I` as a function of `r`:
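Treating the roll as an annulus with areal density $\rho_h$, integrating thin hoops from $R_{min}$ out to $r$ gives$$I_{roll} = \int_{R_{min}}^{r} 2 \pi s \, \rho_h \, s^2 \, ds = \frac{\pi \rho_h}{2} \left( r^4 - R_{min}^4 \right)$$and the core contributes $M_{core} R_{min}^2$, as in the function below.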
###Code
def moment_of_inertia(r, system):
"""Moment of inertia for a roll of toilet paper.
r: current radius of roll in meters
system: System object with Mcore, rho, Rmin, Rmax
returns: moment of inertia in kg m**2
"""
unpack(system)
Icore = Mcore * Rmin**2
Iroll = pi * rho_h / 2 * (r**4 - Rmin**4)
return Icore + Iroll
###Output
_____no_output_____
###Markdown
When `r` is `Rmin`, `I` is small.
###Code
moment_of_inertia(system.Rmin, system)
###Output
_____no_output_____
###Markdown
As `r` increases, so does `I`.
###Code
moment_of_inertia(system.Rmax, system)
###Output
_____no_output_____
###Markdown
Here's the slope function.
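The radius comes from conservation of paper: the cross-sectional area still on the roll is proportional to the remaining length, $\pi (r^2 - R_{min}^2) = \pi (R_{max}^2 - R_{min}^2) \, y / L$, which rearranges to $r = \sqrt{2 k y + R_{min}^2}$ with $k = (R_{max}^2 - R_{min}^2) / 2L$, exactly as computed below.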
###Code
def slope_func(state, t, system):
"""Computes the derivatives of the state variables.
state: State object with theta, omega, y
t: time
system: System object with Rmin, k, Mcore, rho_h, tension
returns: sequence of derivatives
"""
theta, omega, y = state
unpack(system)
r = sqrt(2*k*y + Rmin**2)
I = moment_of_inertia(r, system)
tau = r * tension
alpha = tau / I
dydt = -r * omega
return omega, alpha, dydt
###Output
_____no_output_____
###Markdown
Testing `slope_func`
###Code
slope_func(system.init, 0*s, system)
###Output
_____no_output_____
###Markdown
Now we can run the simulation.
###Code
run_odeint(system, slope_func)
###Output
_____no_output_____
###Markdown
And look at the results.
###Code
system.results.tail()
###Output
_____no_output_____
###Markdown
Extracting the time series
###Code
thetas = system.results.theta
omegas = system.results.omega
ys = system.results.y
###Output
_____no_output_____
###Markdown
Plotting `theta`
###Code
plot(thetas, label='theta')
decorate(xlabel='Time (s)',
ylabel='Angle (rad)')
###Output
_____no_output_____
###Markdown
Plotting `omega`
###Code
plot(omegas, color='orange', label='omega')
decorate(xlabel='Time (s)',
ylabel='Angular velocity (rad/s)')
###Output
_____no_output_____
###Markdown
Plotting `y`
###Code
plot(ys, color='green', label='y')
decorate(xlabel='Time (s)',
ylabel='Length (m)')
###Output
_____no_output_____
###Markdown
Here's the figure from the book.
###Code
subplot(3, 1, 1)
plot(thetas, label='theta')
decorate(ylabel='Angle (rad)')
subplot(3, 1, 2)
plot(omegas, color='orange', label='omega')
decorate(ylabel='Angular velocity (rad/s)')
subplot(3, 1, 3)
plot(ys, color='green', label='y')
decorate(xlabel='Time(s)',
ylabel='Length (m)')
savefig('chap11-fig02.pdf')
###Output
_____no_output_____
###Markdown
Yo-yo **Exercise:** Simulate the descent of a yo-yo. How long does it take to reach the end of the string?I provide a `Condition` object with the system parameters:* `Rmin` is the radius of the axle. `Rmax` is the radius of the axle plus rolled string.* `Rout` is the radius of the yo-yo body. `mass` is the total mass of the yo-yo, ignoring the string. * `L` is the length of the string.* `g` is the acceleration of gravity.
###Code
condition = Condition(Rmin = 8e-3 * m,
Rmax = 16e-3 * m,
Rout = 35e-3 * m,
mass = 50e-3 * kg,
L = 1 * m,
g = 9.8 * m / s**2,
duration = 1 * s)
###Output
_____no_output_____
###Markdown
Here's a `make_system` function that computes `I` and `k` based on the system parameters.I estimated `I` by modeling the yo-yo as a solid cylinder with uniform density ([see here](https://en.wikipedia.org/wiki/List_of_moments_of_inertia)). In reality, the distribution of weight in a yo-yo is often designed to achieve desired effects. But we'll keep it simple.
###Code
def make_system(condition):
"""Make a system object.
condition: Condition with Rmin, Rmax, Rout,
mass, L, g, duration
returns: System with init, k, Rmin, Rmax, mass,
I, g, ts
"""
unpack(condition)
init = State(theta = 0 * radian,
omega = 0 * radian/s,
y = L,
v = 0 * m / s)
I = mass * Rout**2 / 2
k = (Rmax**2 - Rmin**2) / 2 / L / radian
ts = linspace(0, duration, 101)
return System(init=init, k=k,
Rmin=Rmin, Rmax=Rmax,
mass=mass, I=I, g=g,
ts=ts)
###Output
_____no_output_____
###Markdown
Testing `make_system`
###Code
system = make_system(condition)
system
system.init
###Output
_____no_output_____
###Markdown
Write a slope function for this system, using these results from the book:$r = \sqrt{2 k y + R_{min}^2}$, $T = m g I / I^*$, $a = -m g r^2 / I^*$, $\alpha = m g r / I^*$, where $I^*$ is the augmented moment of inertia, $I + m r^2$.Hint: If `y` is less than 0, it means you have reached the end of the string, so the equation for `r` is no longer valid. In this case, the simplest thing to do is to return the sequence of derivatives `0, 0, 0, 0`, as in the sketch after the solution cell below.
###Code
# Solution goes here
###Output
_____no_output_____
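###Markdown
A minimal sketch of one possible slope function, translating the equations above directly; the derivatives must come back in the same order as the state variables `theta, omega, y, v` (this is not the book's official solution):
###Code
def slope_func(state, t, system):
    """Computes the derivatives of the yo-yo state variables.

    state: State object with theta, omega, y, v
    t: time
    system: System object with Rmin, k, mass, I, g

    returns: sequence of derivatives
    """
    theta, omega, y, v = state
    unpack(system)

    if y < 0:
        # the yo-yo has reached the end of the string
        return 0, 0, 0, 0

    r = sqrt(2*k*y + Rmin**2)
    I_star = I + mass * r**2          # augmented moment of inertia
    alpha = mass * g * r / I_star     # angular acceleration
    a = -mass * g * r**2 / I_star     # linear acceleration (downward)

    return omega, alpha, v, a
###Output
_____no_output_____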
###Markdown
Test your slope function with the initial conditions.
###Code
slope_func(system.init, 0*s, system)
###Output
_____no_output_____
###Markdown
Then run the simulation.
###Code
run_odeint(system, slope_func)
###Output
_____no_output_____
###Markdown
Check the final conditions. If things have gone according to plan, the final value of `y` should be close to 0.
###Code
system.results.tail()
###Output
_____no_output_____
###Markdown
Plot the results.
###Code
thetas = system.results.theta
ys = system.results.y
###Output
_____no_output_____
###Markdown
`theta` should increase and accelerate.
###Code
plot(thetas, label='theta')
decorate(xlabel='Time (s)',
ylabel='Angle (rad)')
###Output
_____no_output_____
###Markdown
`y` should decrease and accelerate down.
###Code
plot(ys, color='green', label='y')
decorate(xlabel='Time (s)',
ylabel='Length (m)')
###Output
_____no_output_____
###Markdown
Modeling and Simulation in PythonChapter 10Copyright 2017 Allen DowneyLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
SIR implementationWe'll use a `State` object to represent the number (or fraction) of people in each compartment.
###Code
init = State(S=89, I=1, R=0)
###Output
_____no_output_____
###Markdown
To convert from number of people to fractions, we divide through by the total.
###Code
init /= sum(init)
###Output
_____no_output_____
###Markdown
`make_system` creates a `System` object with the given parameters.
###Code
def make_system(beta, gamma):
"""Make a system object for the SIR model.
beta: contact rate per day
gamma: recovery rate per day
returns: System object
"""
init = State(S=89, I=1, R=0)
init /= sum(init)
t0 = 0
t_end = 7 * 14
return System(init=init, t0=t0, t_end=t_end,
beta=beta, gamma=gamma)
###Output
_____no_output_____
###Markdown
Here's an example with hypothetical values for `beta` and `gamma`.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
###Output
_____no_output_____
###Markdown
The update function takes the state during the current time step and returns the state during the next time step.
###Code
def update_func(state, t, system):
"""Update the SIR model.
state: State with variables S, I, R
t: time step
system: System with beta and gamma
returns: State object
"""
s, i, r = state
infected = system.beta * i * s
recovered = system.gamma * i
s -= infected
i += infected - recovered
r += recovered
return State(S=s, I=i, R=r)
###Output
_____no_output_____
###Markdown
To run a single time step, we call it like this:
###Code
state = update_func(init, 0, system)
###Output
_____no_output_____
###Markdown
Now we can run a simulation by calling the update function for each time step.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: State object for final state
"""
state = system.init
for t in linrange(system.t0, system.t_end-1):
state = update_func(state, t, system)
return state
###Output
_____no_output_____
###Markdown
The result is the state of the system at `t_end`
###Code
run_simulation(system, update_func)
###Output
_____no_output_____
###Markdown
**Exercise** Suppose the time between contacts is 4 days and the recovery time is 5 days. After 14 weeks, how many students, total, have been infected?Hint: what is the change in `S` between the beginning and the end of the simulation?
###Code
# Solution goes here
###Output
_____no_output_____
###Markdown
Using TimeSeries objects If we want to store the state of the system at each time step, we can use one `TimeSeries` object for each state variable.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
Add three Series objects to the System: S, I, R
system: System object
update_func: function that updates state
"""
S = TimeSeries()
I = TimeSeries()
R = TimeSeries()
state = system.init
t0 = system.t0
S[t0], I[t0], R[t0] = state
for t in linrange(system.t0, system.t_end-1):
state = update_func(state, t, system)
S[t+1], I[t+1], R[t+1] = state
return S, I, R
###Output
_____no_output_____
###Markdown
Here's how we call it.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
S, I, R = run_simulation(system, update_func)
###Output
_____no_output_____
###Markdown
And then we can plot the results.
###Code
def plot_results(S, I, R):
"""Plot the results of a SIR model.
S: TimeSeries
I: TimeSeries
R: TimeSeries
"""
plot(S, '--', color='blue', label='Susceptible')
plot(I, '-', color='red', label='Infected')
plot(R, ':', color='green', label='Recovered')
decorate(xlabel='Time (days)',
ylabel='Fraction of population')
###Output
_____no_output_____
###Markdown
Here's what they look like.
###Code
plot_results(S, I, R)
savefig('figs/chap05-fig01.pdf')
###Output
_____no_output_____
###Markdown
Using a DataFrame Instead of making three `TimeSeries` objects, we can use one `DataFrame`.We have to use `row` to select rows, rather than columns. But then Pandas does the right thing, matching up the state variables with the columns of the `DataFrame`.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: TimeFrame
"""
frame = TimeFrame(columns=system.init.index)
frame.row[system.t0] = system.init
for t in linrange(system.t0, system.t_end-1):
frame.row[t+1] = update_func(frame.row[t], t, system)
return frame
###Output
_____no_output_____
###Markdown
Here's how we run it, and what the result looks like.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
results = run_simulation(system, update_func)
results.head()
###Output
_____no_output_____
###Markdown
We can extract the results and plot them.
###Code
plot_results(results.S, results.I, results.R)
###Output
_____no_output_____
###Markdown
**Exercise** Suppose the time between contacts is 4 days and the recovery time is 5 days. Simulate this scenario for 14 weeks and plot the results.
###Code
# Solution goes here
###Output
_____no_output_____
###Markdown
Metrics Given the results, we can compute metrics that quantify whatever we are interested in, like the total number of sick students, for example.
###Code
def calc_total_infected(results, system):
"""Fraction of population infected during the simulation.
results: DataFrame with columns S, I, R
system: System object
returns: fraction of population
"""
return results.S[system.t0] - results.S[system.t_end]
###Output
_____no_output_____
###Markdown
Here's an example.
###Code
system.beta = 0.333
system.gamma = 0.25
results = run_simulation(system, update_func)
print(system.beta, system.gamma, calc_total_infected(results, system))
###Output
_____no_output_____
###Markdown
**Exercise:** Write functions that take a `DataFrame` and a `System` object as parameters and compute the other metrics mentioned in the book:1. The fraction of students who are sick at the peak of the outbreak.2. The day the outbreak peaks.3. The fraction of students who are sick at the end of the semester.Note: Not all of these functions require the `System` object, but when you write a set of related functions, it is often convenient if they all take the same parameters.Hint: If you have a `TimeSeries` called `I`, you can compute the largest value of the series like this: I.max()And the index of the largest value like this: I.idxmax()You can read about these functions in the `Series` [documentation](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.html). A sketch of possible solutions appears after the solution cells below.
###Code
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____
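###Markdown
A sketch of possible solutions, using the `Series` methods from the hint (not the book's official solutions):
###Code
def peak_infected_fraction(results, system):
    """Fraction of students sick at the peak of the outbreak."""
    return results.I.max()

def peak_day(results, system):
    """Day the outbreak peaks."""
    return results.I.idxmax()

def end_infected_fraction(results, system):
    """Fraction of students sick at the end of the semester."""
    return results.I[system.t_end]

peak_infected_fraction(results, system), peak_day(results, system), end_infected_fraction(results, system)
###Output
_____no_output_____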
###Markdown
What if? We can use this model to evaluate "what if" scenarios. For example, this function models the effect of immunization by moving some fraction of the population from S to R before the simulation starts.
###Code
def add_immunization(system, fraction):
"""Immunize a fraction of the population.
Moves the given fraction from S to R.
system: System object
fraction: number from 0 to 1
"""
system.init.S -= fraction
system.init.R += fraction
###Output
_____no_output_____
###Markdown
Let's start again with the system we used in the previous sections.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
###Output
_____no_output_____
###Markdown
And run the model without immunization.
###Code
results = run_simulation(system, update_func)
calc_total_infected(results, system)
###Output
_____no_output_____
###Markdown
Now with 10% immunization.
###Code
system2 = make_system(beta, gamma)
add_immunization(system2, 0.1)
results2 = run_simulation(system2, update_func)
calc_total_infected(results2, system2)
###Output
_____no_output_____
###Markdown
10% immunization leads to a drop in infections of 16 percentage points.Here's what the time series looks like for S, with and without immunization.
###Code
plot(results.S, '-', label='No immunization')
plot(results2.S, 'g--', label='10% immunization')
decorate(xlabel='Time (days)',
ylabel='Fraction susceptible')
savefig('figs/chap05-fig02.pdf')
###Output
_____no_output_____
###Markdown
Now we can sweep through a range of values for the fraction of the population who are immunized.
###Code
immunize_array = linspace(0, 1, 11)
for fraction in immunize_array:
system = make_system(beta, gamma)
add_immunization(system, fraction)
results = run_simulation(system, update_func)
print(fraction, calc_total_infected(results, system))
###Output
_____no_output_____
###Markdown
This function does the same thing and stores the results in a `Sweep` object.
###Code
def sweep_immunity(immunize_array):
"""Sweeps a range of values for immunity.
immunize_array: array of fraction immunized
returns: Sweep object
"""
sweep = SweepSeries()
for fraction in immunize_array:
system = make_system(beta, gamma)
add_immunization(system, fraction)
results = run_simulation(system, update_func)
sweep[fraction] = calc_total_infected(results, system)
return sweep
###Output
_____no_output_____
###Markdown
Here's how we run it.
###Code
immunize_array = linspace(0, 1, 21)
infected_sweep = sweep_immunity(immunize_array)
###Output
_____no_output_____
###Markdown
And here's what the results look like.
###Code
plot(infected_sweep)
decorate(xlabel='Fraction immunized',
ylabel='Total fraction infected',
title='Fraction infected vs. immunization rate',
legend=False)
savefig('figs/chap05-fig03.pdf')
###Output
_____no_output_____
###Markdown
Modeling and Simulation in PythonChapter 11Copyright 2017 Allen DowneyLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
SIR implementationWe'll use a `State` object to represent the number (or fraction) of people in each compartment.
###Code
init = State(S=89, I=1, R=0)
###Output
_____no_output_____
###Markdown
To convert from number of people to fractions, we divide through by the total.
###Code
init /= sum(init)
###Output
_____no_output_____
###Markdown
`make_system` creates a `System` object with the given parameters.
###Code
def make_system(beta, gamma):
"""Make a system object for the SIR model.
beta: contact rate per day
gamma: recovery rate per day
returns: System object
"""
init = State(S=89, I=1, R=0)
init /= sum(init)
t0 = 0
t_end = 7 * 14
return System(init=init, t0=t0, t_end=t_end,
beta=beta, gamma=gamma)
###Output
_____no_output_____
###Markdown
Here's an example with hypothetical values for `beta` and `gamma`.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
###Output
_____no_output_____
###Markdown
The update function takes the state during the current time step and returns the state during the next time step.
###Code
def update_func(state, t, system):
"""Update the SIR model.
state: State with variables S, I, R
t: time step
system: System with beta and gamma
returns: State object
"""
s, i, r = state
infected = system.beta * i * s
recovered = system.gamma * i
s -= infected
i += infected - recovered
r += recovered
return State(S=s, I=i, R=r)
###Output
_____no_output_____
###Markdown
To run a single time step, we call it like this:
###Code
state = update_func(init, 0, system)
###Output
_____no_output_____
###Markdown
Now we can run a simulation by calling the update function for each time step.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: State object for final state
"""
state = system.init
for t in linrange(system.t0, system.t_end):
state = update_func(state, t, system)
return state
###Output
_____no_output_____
###Markdown
The result is the state of the system at `t_end`
###Code
run_simulation(system, update_func)
###Output
_____no_output_____
###Markdown
**Exercise** Suppose the time between contacts is 4 days and the recovery time is 5 days. After 14 weeks, how many students, total, have been infected?Hint: what is the change in `S` between the beginning and the end of the simulation?
###Code
# Solution goes here
###Output
_____no_output_____
###Markdown
Using TimeSeries objects If we want to store the state of the system at each time step, we can use one `TimeSeries` object for each state variable.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
Add three Series objects to the System: S, I, R
system: System object
update_func: function that updates state
"""
S = TimeSeries()
I = TimeSeries()
R = TimeSeries()
state = system.init
t0 = system.t0
S[t0], I[t0], R[t0] = state
for t in linrange(system.t0, system.t_end):
state = update_func(state, t, system)
S[t+1], I[t+1], R[t+1] = state
return S, I, R
###Output
_____no_output_____
###Markdown
Here's how we call it.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
S, I, R = run_simulation(system, update_func)
###Output
_____no_output_____
###Markdown
And then we can plot the results.
###Code
def plot_results(S, I, R):
"""Plot the results of a SIR model.
S: TimeSeries
I: TimeSeries
R: TimeSeries
"""
plot(S, '--', label='Susceptible')
plot(I, '-', label='Infected')
plot(R, ':', label='Recovered')
decorate(xlabel='Time (days)',
ylabel='Fraction of population')
###Output
_____no_output_____
###Markdown
Here's what they look like.
###Code
plot_results(S, I, R)
savefig('figs/chap05-fig01.pdf')
###Output
_____no_output_____
###Markdown
Using a DataFrame Instead of making three `TimeSeries` objects, we can use one `DataFrame`.We have to use `row` to select rows, rather than columns. But then Pandas does the right thing, matching up the state variables with the columns of the `DataFrame`.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: TimeFrame
"""
frame = TimeFrame(columns=system.init.index)
frame.row[system.t0] = system.init
for t in linrange(system.t0, system.t_end):
frame.row[t+1] = update_func(frame.row[t], t, system)
return frame
###Output
_____no_output_____
###Markdown
Here's how we run it, and what the result looks like.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
results = run_simulation(system, update_func)
results.head()
###Output
_____no_output_____
###Markdown
We can extract the results and plot them.
###Code
plot_results(results.S, results.I, results.R)
###Output
_____no_output_____
###Markdown
Exercises**Exercise** Suppose the time between contacts is 4 days and the recovery time is 5 days. Simulate this scenario for 14 weeks and plot the results.
###Code
# Solution goes here
###Output
_____no_output_____
###Markdown
Modeling and Simulation in PythonChapter 11Copyright 2017 Allen DowneyLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
SIR implementationWe'll use a `State` object to represent the number (or fraction) of people in each compartment.
###Code
init = State(S=89, I=1, R=0)
###Output
_____no_output_____
###Markdown
To convert from number of people to fractions, we divide through by the total.
###Code
init /= sum(init)
###Output
_____no_output_____
###Markdown
`make_system` creates a `System` object with the given parameters.
###Code
def make_system(beta, gamma):
"""Make a system object for the SIR model.
beta: contact rate per day
gamma: recovery rate per day
returns: System object
"""
init = State(S=89, I=1, R=0)
init /= sum(init)
t0 = 0
t_end = 7 * 14
return System(init=init, t0=t0, t_end=t_end,
beta=beta, gamma=gamma)
###Output
_____no_output_____
###Markdown
Here's an example with hypothetical values for `beta` and `gamma`.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
###Output
_____no_output_____
###Markdown
The update function takes the state during the current time step and returns the state during the next time step.
###Code
def update_func(state, t, system):
"""Update the SIR model.
state: State with variables S, I, R
t: time step
system: System with beta and gamma
returns: State object
"""
s, i, r = state
infected = system.beta * i * s
recovered = system.gamma * i
s -= infected
i += infected - recovered
r += recovered
return State(S=s, I=i, R=r)
###Output
_____no_output_____
###Markdown
To run a single time step, we call it like this:
###Code
state = update_func(init, 0, system)
###Output
_____no_output_____
###Markdown
Now we can run a simulation by calling the update function for each time step.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: State object for final state
"""
state = system.init
for t in linrange(system.t0, system.t_end):
state = update_func(state, t, system)
return state
###Output
_____no_output_____
###Markdown
The result is the state of the system at `t_end`
###Code
run_simulation(system, update_func)
###Output
_____no_output_____
###Markdown
**Exercise** Suppose the time between contacts is 4 days and the recovery time is 5 days. After 14 weeks, how many students, total, have been infected?Hint: what is the change in `S` between the beginning and the end of the simulation?
###Code
tc = 4 # time between contacts in days
tr = 5 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
run_simulation(system, update_func)
###Output
_____no_output_____
###Markdown
Using TimeSeries objects If we want to store the state of the system at each time step, we can use one `TimeSeries` object for each state variable.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
Add three Series objects to the System: S, I, R
system: System object
update_func: function that updates state
"""
S = TimeSeries()
I = TimeSeries()
R = TimeSeries()
state = system.init
t0 = system.t0
S[t0], I[t0], R[t0] = state
for t in linrange(system.t0, system.t_end):
state = update_func(state, t, system)
S[t+1], I[t+1], R[t+1] = state
return S, I, R
###Output
_____no_output_____
###Markdown
Here's how we call it.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
S, I, R = run_simulation(system, update_func)
###Output
_____no_output_____
###Markdown
And then we can plot the results.
###Code
def plot_results(S, I, R):
"""Plot the results of a SIR model.
S: TimeSeries
I: TimeSeries
R: TimeSeries
"""
plot(S, '--', label='Susceptible')
plot(I, '-', label='Infected')
plot(R, ':', label='Recovered')
decorate(xlabel='Time (days)',
ylabel='Fraction of population')
###Output
_____no_output_____
###Markdown
Here's what they look like.
###Code
plot_results(S, I, R)
savefig('figs/chap05-fig01.pdf')
###Output
Saving figure to file figs/chap05-fig01.pdf
###Markdown
Using a DataFrame Instead of making three `TimeSeries` objects, we can use one `DataFrame`.We have to use `row` to select rows, rather than columns. But then Pandas does the right thing, matching up the state variables with the columns of the `DataFrame`.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: TimeFrame
"""
frame = TimeFrame(columns=system.init.index)
frame.row[system.t0] = system.init
for t in linrange(system.t0, system.t_end):
frame.row[t+1] = update_func(frame.row[t], t, system)
return frame
###Output
_____no_output_____
###Markdown
Here's how we run it, and what the result looks like.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
results = run_simulation(system, update_func)
results.head()
###Output
_____no_output_____
###Markdown
We can extract the results and plot them.
###Code
plot_results(results.S, results.I, results.R)
###Output
_____no_output_____
###Markdown
Exercises**Exercise** Suppose the time between contacts is 4 days and the recovery time is 5 days. Simulate this scenario for 14 weeks and plot the results.
###Code
tc = 4 # time between contacts in days
tr = 5 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
results = run_simulation(system, update_func)
results.head()
plot_results(results.S, results.I, results.R)
###Output
_____no_output_____
###Markdown
Modeling and Simulation in PythonChapter 11Copyright 2017 Allen DowneyLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
SIR implementationWe'll use a `State` object to represent the number (or fraction) of people in each compartment.
###Code
init = State(S=89, I=1, R=0)
###Output
_____no_output_____
###Markdown
To convert from number of people to fractions, we divide through by the total.
###Code
init /= sum(init)
###Output
_____no_output_____
###Markdown
`make_system` creates a `System` object with the given parameters.
###Code
def make_system(beta, gamma):
"""Make a system object for the SIR model.
beta: contact rate per day
gamma: recovery rate per day
returns: System object
"""
init = State(S=89, I=1, R=0)
init /= sum(init)
t0 = 0
t_end = 7 * 14
return System(init=init, t0=t0, t_end=t_end,
beta=beta, gamma=gamma)
###Output
_____no_output_____
###Markdown
Here's an example with hypothetical values for `beta` and `gamma`.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
###Output
_____no_output_____
###Markdown
The update function takes the state during the current time step and returns the state during the next time step.
###Code
def update_func(state, t, system):
"""Update the SIR model.
state: State with variables S, I, R
t: time step
system: System with beta and gamma
returns: State object
"""
s, i, r = state
infected = system.beta * i * s
recovered = system.gamma * i
s -= infected
i += infected - recovered
r += recovered
return State(S=s, I=i, R=r)
###Output
_____no_output_____
###Markdown
To run a single time step, we call it like this:
###Code
state = update_func(init, 0, system)
###Output
_____no_output_____
###Markdown
Now we can run a simulation by calling the update function for each time step.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: State object for final state
"""
state = system.init
for t in linrange(system.t0, system.t_end):
state = update_func(state, t, system)
return state
###Output
_____no_output_____
###Markdown
The result is the state of the system at `t_end`
###Code
run_simulation(system, update_func)
###Output
_____no_output_____
###Markdown
**Exercise** Suppose the time between contacts is 4 days and the recovery time is 5 days. After 14 weeks, how many students, total, have been infected?Hint: what is the change in `S` between the beginning and the end of the simulation?
###Code
# Solution goes here
###Output
_____no_output_____
###Markdown
Using TimeSeries objects If we want to store the state of the system at each time step, we can use one `TimeSeries` object for each state variable.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
Add three Series objects to the System: S, I, R
system: System object
update_func: function that updates state
"""
S = TimeSeries()
I = TimeSeries()
R = TimeSeries()
state = system.init
t0 = system.t0
S[t0], I[t0], R[t0] = state
for t in linrange(system.t0, system.t_end):
state = update_func(state, t, system)
S[t+1], I[t+1], R[t+1] = state
return S, I, R
###Output
_____no_output_____
###Markdown
Here's how we call it.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
S, I, R = run_simulation(system, update_func)
###Output
_____no_output_____
###Markdown
And then we can plot the results.
###Code
def plot_results(S, I, R):
"""Plot the results of a SIR model.
S: TimeSeries
I: TimeSeries
R: TimeSeries
"""
plot(S, '--', label='Susceptible')
plot(I, '-', label='Infected')
plot(R, ':', label='Recovered')
decorate(xlabel='Time (days)',
ylabel='Fraction of population')
###Output
_____no_output_____
###Markdown
Here's what they look like.
###Code
plot_results(S, I, R)
savefig('figs/chap05-fig01.pdf')
###Output
_____no_output_____
###Markdown
Using a DataFrame Instead of making three `TimeSeries` objects, we can use one `DataFrame`.We have to use `row` to select rows, rather than columns. But then Pandas does the right thing, matching up the state variables with the columns of the `DataFrame`.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: TimeFrame
"""
frame = TimeFrame(columns=system.init.index)
frame.row[system.t0] = system.init
for t in linrange(system.t0, system.t_end):
frame.row[t+1] = update_func(frame.row[t], t, system)
return frame
###Output
_____no_output_____
###Markdown
Here's how we run it, and what the result looks like.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
results = run_simulation(system, update_func)
results.head()
###Output
_____no_output_____
###Markdown
We can extract the results and plot them.
###Code
plot_results(results.S, results.I, results.R)
###Output
_____no_output_____
###Markdown
Exercises**Exercise** Suppose the time between contacts is 4 days and the recovery time is 5 days. Simulate this scenario for 14 weeks and plot the results.
###Code
# Solution goes here
###Output
_____no_output_____
###Markdown
Modeling and Simulation in PythonChapter 11Copyright 2017 Allen DowneyLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
SIR implementationWe'll use a `State` object to represent the number (or fraction) of people in each compartment.
###Code
init = State(S=89, I=1, R=0)
###Output
_____no_output_____
###Markdown
To convert from number of people to fractions, we divide through by the total.
###Code
init /= sum(init)
###Output
_____no_output_____
###Markdown
`make_system` creates a `System` object with the given parameters.
###Code
def make_system(beta, gamma):
"""Make a system object for the SIR model.
beta: contact rate per day
gamma: recovery rate per day
returns: System object
"""
init = State(S=89, I=1, R=0)
init /= sum(init)
t0 = 0
t_end = 7 * 14
return System(init=init, t0=t0, t_end=t_end,
beta=beta, gamma=gamma)
###Output
_____no_output_____
###Markdown
Here's an example with hypothetical values for `beta` and `gamma`.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
###Output
_____no_output_____
###Markdown
The update function takes the state during the current time step and returns the state during the next time step.
###Code
def update_func(state, t, system):
"""Update the SIR model.
state: State with variables S, I, R
t: time step
system: System with beta and gamma
returns: State object
"""
s, i, r = state
infected = system.beta * i * s
recovered = system.gamma * i
s -= infected
i += infected - recovered
r += recovered
return State(S=s, I=i, R=r)
###Output
_____no_output_____
###Markdown
To run a single time step, we call it like this:
###Code
state = update_func(init, 0, system)
###Output
_____no_output_____
###Markdown
Now we can run a simulation by calling the update function for each time step.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: State object for final state
"""
state = system.init
for t in linrange(system.t0, system.t_end):
state = update_func(state, t, system)
return state
###Output
_____no_output_____
###Markdown
The result is the state of the system at `t_end`
###Code
run_simulation(system, update_func)
###Output
_____no_output_____
###Markdown
**Exercise** Suppose the time between contacts is 4 days and the recovery time is 5 days. After 14 weeks, how many students, total, have been infected?Hint: what is the change in `S` between the beginning and the end of the simulation?
###Code
# Solution goes here
###Output
_____no_output_____
###Markdown
Using TimeSeries objects If we want to store the state of the system at each time step, we can use one `TimeSeries` object for each state variable.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
Add three Series objects to the System: S, I, R
system: System object
update_func: function that updates state
"""
S = TimeSeries()
I = TimeSeries()
R = TimeSeries()
state = system.init
t0 = system.t0
S[t0], I[t0], R[t0] = state
for t in linrange(system.t0, system.t_end):
state = update_func(state, t, system)
S[t+1], I[t+1], R[t+1] = state
return S, I, R
###Output
_____no_output_____
###Markdown
Here's how we call it.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
S, I, R = run_simulation(system, update_func)
###Output
_____no_output_____
###Markdown
And then we can plot the results.
###Code
def plot_results(S, I, R):
"""Plot the results of a SIR model.
S: TimeSeries
I: TimeSeries
R: TimeSeries
"""
plot(S, '--', label='Susceptible')
plot(I, '-', label='Infected')
plot(R, ':', label='Recovered')
decorate(xlabel='Time (days)',
ylabel='Fraction of population')
###Output
_____no_output_____
###Markdown
Here's what they look like.
###Code
plot_results(S, I, R)
savefig('figs/chap05-fig01.pdf')
###Output
_____no_output_____
###Markdown
Using a DataFrame Instead of making three `TimeSeries` objects, we can use one `DataFrame`.We have to use `row` to select rows, rather than columns. But then Pandas does the right thing, matching up the state variables with the columns of the `DataFrame`.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: TimeFrame
"""
frame = TimeFrame(columns=system.init.index)
frame.row[system.t0] = system.init
for t in linrange(system.t0, system.t_end):
frame.row[t+1] = update_func(frame.row[t], t, system)
return frame
###Output
_____no_output_____
###Markdown
Here's how we run it, and what the result looks like.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
results = run_simulation(system, update_func)
results.head()
###Output
_____no_output_____
###Markdown
We can extract the results and plot them.
###Code
plot_results(results.S, results.I, results.R)
###Output
_____no_output_____
###Markdown
Exercises**Exercise** Suppose the time between contacts is 4 days and the recovery time is 5 days. Simulate this scenario for 14 weeks and plot the results.
###Code
# Solution goes here
###Output
_____no_output_____
###Markdown
Think Bayes: Chapter 11This notebook presents code and exercises from Think Bayes, second edition.Copyright 2016 Allen B. DowneyMIT License: https://opensource.org/licenses/MIT
###Code
from __future__ import print_function, division
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
import math
import numpy as np
from thinkbayes2 import Pmf, Cdf, Suite, Joint
import thinkplot
###Output
_____no_output_____
###Markdown
The Euro problemProblem statement here.Here's a more efficient version of the Euro class that takes the dataset in a more compact form and uses the binomial distribution (ignoring the binomial coefficient because it does not depend on `x`).
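That is, for $h$ heads and $t$ tails the likelihood is$$p(D \mid x) \propto x^{h} (1 - x)^{t}$$dropping the constant factor $\binom{h+t}{h}$, which cancels when the suite is normalized.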
###Code
class Euro(Suite):
"""Represents hypotheses about the probability of heads."""
def Likelihood(self, data, hypo):
"""Computes the likelihood of the data under the hypothesis.
hypo: integer value of x, the probability of heads (0-100)
data: tuple of (number of heads, number of tails)
"""
x = hypo / 100.0
heads, tails = data
like = x**heads * (1-x)**tails
return like
###Output
_____no_output_____
###Markdown
If we know the coin is fair, we can evaluate the likelihood of the data directly.
###Code
data = 140, 110
suite = Euro()
like_f = suite.Likelihood(data, 50)
print('p(D|F)', like_f)
###Output
_____no_output_____
###Markdown
If we cheat and pretend that the alternative hypothesis is exactly the observed proportion, we can compute the likelihood of the data and the likelihood ratio, relative to the fair coin.
###Code
actual_percent = 100.0 * 140 / 250
likelihood = suite.Likelihood(data, actual_percent)
print('p(D|B_cheat)', likelihood)
print('p(D|B_cheat) / p(D|F)', likelihood / like_f)
###Output
_____no_output_____
###Markdown
Under this interpretation, the data are in favor of "biased", with K=6. But that's a total cheat.Suppose we think "biased" means either 0.4 or 0.6, but we're not sure which. The total likelihood of the data is the weighted average of the two likelihoods.
###Code
like40 = suite.Likelihood(data, 40)
like60 = suite.Likelihood(data, 60)
likelihood = 0.5 * like40 + 0.5 * like60
print('p(D|B_two)', likelihood)
print('p(D|B_two) / p(D|F)', likelihood / like_f)
###Output
_____no_output_____
###Markdown
Under this interpretation, the data are in favor of "biased", but only very weakly. More generally, if "biased" refers to a range of possibilities with different probabilities, the total likelihood of the data is the weighted sum:
###Code
def SuiteLikelihood(suite, data):
"""Computes the weighted average of likelihoods for sub-hypotheses.
suite: Suite that maps sub-hypotheses to probability
data: some representation of the data
returns: float likelihood
"""
total = 0
for hypo, prob in suite.Items():
like = suite.Likelihood(data, hypo)
total += prob * like
return total
###Output
_____no_output_____
###Markdown
Here's what it looks like if "biased" means "equally likely to be any value between 0 and 1".
###Code
b_uniform = Euro(range(0, 101))
b_uniform.Remove(50)
b_uniform.Normalize()
likelihood = SuiteLikelihood(b_uniform, data)
print('p(D|B_uniform)', likelihood)
print('p(D|B_uniform) / p(D|F)', likelihood / like_f)
###Output
_____no_output_____
###Markdown
By that definition, the data are evidence against the biased hypothesis, with K=2.But maybe a triangle prior is a better model of what "biased" means.
###Code
def TrianglePrior():
"""Makes a Suite with a triangular prior."""
suite = Euro()
for x in range(0, 51):
suite.Set(x, x)
for x in range(51, 101):
suite.Set(x, 100-x)
suite.Normalize()
return suite
###Output
_____no_output_____
###Markdown
Here's what it looks like:
###Code
b_tri = TrianglePrior()
b_tri.Remove(50)
b_tri.Normalize()
likelihood = b_tri.Update(data)
print('p(D|B_tri)', likelihood)
print('p(D|B_tri) / p(D|F)', likelihood / like_f)
###Output
_____no_output_____
###Markdown
By the triangle definition of "biased", the data are very weakly in favor of "fair". Normalizing constant: we don't really need the `SuiteLikelihood` function, because `Suite.Update` already computes the total probability of the data, which is the normalizing constant.
###Code
likelihood = SuiteLikelihood(b_uniform, data)
likelihood
euro = Euro(b_uniform)
euro.Update(data)
likelihood = SuiteLikelihood(b_tri, data)
likelihood
euro = Euro(b_tri)
euro.Update(data)
###Output
_____no_output_____
###Markdown
Modeling and Simulation in Python, Chapter 11. Copyright 2017 Allen Downey. License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
SIR implementation. We'll use a `State` object to represent the number (or fraction) of people in each compartment.
###Code
init = State(S=89, I=1, R=0)
###Output
_____no_output_____
###Markdown
To convert from number of people to fractions, we divide through by the total.
###Code
init /= sum(init)
###Output
_____no_output_____
###Markdown
`make_system` creates a `System` object with the given parameters.
###Code
def make_system(beta, gamma):
"""Make a system object for the SIR model.
beta: contact rate (per day)
gamma: recovery rate (per day)
returns: System object
"""
init = State(S=89, I=1, R=0)
init /= sum(init)
t0 = 0
t_end = 7 * 14
return System(init=init, t0=t0, t_end=t_end,
beta=beta, gamma=gamma)
###Output
_____no_output_____
###Markdown
Here's an example with hypothetical values for `beta` and `gamma`.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
###Output
_____no_output_____
###Markdown
The update function takes the state during the current time step and returns the state during the next time step.
###Code
def update_func(state, t, system):
"""Update the SIR model.
state: State with variables S, I, R
t: time step
system: System with beta and gamma
returns: State object
"""
s, i, r = state
infected = system.beta * i * s
recovered = system.gamma * i
s -= infected
i += infected - recovered
r += recovered
return State(S=s, I=i, R=r)
###Output
_____no_output_____
###Markdown
To run a single time step, we call it like this:
###Code
state = update_func(init, 0, system)
###Output
_____no_output_____
###Markdown
Now we can run a simulation by calling the update function for each time step.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: State object for final state
"""
state = system.init
for t in linrange(system.t0, system.t_end):
state = update_func(state, t, system)
return state
###Output
_____no_output_____
###Markdown
The result is the state of the system at `t_end`.
###Code
run_simulation(system, update_func)
###Output
_____no_output_____
###Markdown
**Exercise** Suppose the time between contacts is 4 days and the recovery time is 5 days. After 14 weeks, how many students, total, have been infected? Hint: what is the change in `S` between the beginning and the end of the simulation?
###Code
tc = 4 # time between contacts in days
tr = 5 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
state = run_simulation(system, update_func)
# total infected = drop in S, converted from a fraction back to 90 students
print(89 - state.S * 90)
###Output
34.08459698173312
###Markdown
Using TimeSeries objects. If we want to store the state of the system at each time step, we can use one `TimeSeries` object for each state variable.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
returns: S, I, R TimeSeries objects
system: System object
update_func: function that updates state
"""
S = TimeSeries()
I = TimeSeries()
R = TimeSeries()
state = system.init
t0 = system.t0
S[t0], I[t0], R[t0] = state
for t in linrange(system.t0, system.t_end):
state = update_func(state, t, system)
S[t+1], I[t+1], R[t+1] = state
return S, I, R
###Output
_____no_output_____
###Markdown
Here's how we call it.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
S, I, R = run_simulation(system, update_func)
###Output
_____no_output_____
###Markdown
And then we can plot the results.
###Code
def plot_results(S, I, R):
"""Plot the results of a SIR model.
S: TimeSeries
I: TimeSeries
R: TimeSeries
"""
plot(S, '--', label='Susceptible')
plot(I, '-', label='Infected')
plot(R, ':', label='Recovered')
decorate(xlabel='Time (days)',
ylabel='Fraction of population')
###Output
_____no_output_____
###Markdown
Here's what they look like.
###Code
plot_results(S, I, R)
savefig('figs/chap05-fig01.pdf')
###Output
Saving figure to file figs/chap05-fig01.pdf
###Markdown
Using a DataFrame. Instead of making three `TimeSeries` objects, we can use one `DataFrame`. We have to use `row` to select rows, rather than columns. But then Pandas does the right thing, matching up the state variables with the columns of the `DataFrame`.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: TimeFrame
"""
frame = TimeFrame(columns=system.init.index)
frame.row[system.t0] = system.init
for t in linrange(system.t0, system.t_end):
frame.row[t+1] = update_func(frame.row[t], t, system)
return frame
###Output
_____no_output_____
###Markdown
Here's how we run it, and what the result looks like.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
results = run_simulation(system, update_func)
results.head()
###Output
_____no_output_____
###Markdown
We can extract the results and plot them.
###Code
plot_results(results.S, results.I, results.R)
###Output
_____no_output_____
###Markdown
Exercises. **Exercise** Suppose the time between contacts is 4 days and the recovery time is 5 days. Simulate this scenario for 14 weeks and plot the results.
###Code
tc = 4 # time between contacts in days
tr = 5 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
# run_simulation now returns a TimeFrame, not three TimeSeries
results = run_simulation(system, update_func)
plot_results(results.S, results.I, results.R)
###Output
_____no_output_____
###Markdown
Modeling and Simulation in Python, Chapter 11. Copyright 2017 Allen Downey. License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
SIR implementation. We'll use a `State` object to represent the number (or fraction) of people in each compartment.
###Code
init = State(S=89, I=1, R=0)
###Output
_____no_output_____
###Markdown
To convert from number of people to fractions, we divide through by the total.
###Code
init /= sum(init)
###Output
_____no_output_____
###Markdown
`make_system` creates a `System` object with the given parameters.
###Code
def make_system(beta, gamma):
"""Make a system object for the SIR model.
beta: contact rate (per day)
gamma: recovery rate (per day)
returns: System object
"""
init = State(S=89, I=1, R=0)
init /= sum(init)
t0 = 0
t_end = 7 * 14
return System(init=init, t0=t0, t_end=t_end,
beta=beta, gamma=gamma)
###Output
_____no_output_____
###Markdown
Here's an example with hypothetical values for `beta` and `gamma`.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
###Output
_____no_output_____
###Markdown
The update function takes the state during the current time step and returns the state during the next time step.
###Code
def update_func(state, t, system):
"""Update the SIR model.
state: State with variables S, I, R
t: time step
system: System with beta and gamma
returns: State object
"""
s, i, r = state
infected = system.beta * i * s
recovered = system.gamma * i
s -= infected
i += infected - recovered
r += recovered
return State(S=s, I=i, R=r)
###Output
_____no_output_____
###Markdown
To run a single time step, we call it like this:
###Code
state = update_func(init, 0, system)
###Output
_____no_output_____
###Markdown
Now we can run a simulation by calling the update function for each time step.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: State object for final state
"""
state = system.init
for t in linrange(system.t0, system.t_end):
state = update_func(state, t, system)
return state
###Output
_____no_output_____
###Markdown
The result is the state of the system at `t_end`.
###Code
run_simulation(system, update_func)
###Output
_____no_output_____
###Markdown
**Exercise** Suppose the time between contacts is 4 days and the recovery time is 5 days. After 14 weeks, how many students, total, have been infected? Hint: what is the change in `S` between the beginning and the end of the simulation?
###Code
# Solution goes here
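# A possible solution (sketch): the total number infected is the drop in S,
# scaled from a fraction back to the 90-person population.
system = make_system(beta=1/4, gamma=1/5)
final = run_simulation(system, update_func)
print((system.init.S - final.S) * 90)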
###Output
_____no_output_____
###Markdown
Using TimeSeries objects. If we want to store the state of the system at each time step, we can use one `TimeSeries` object for each state variable.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
returns: S, I, R TimeSeries objects
system: System object
update_func: function that updates state
"""
S = TimeSeries()
I = TimeSeries()
R = TimeSeries()
state = system.init
t0 = system.t0
S[t0], I[t0], R[t0] = state
for t in linrange(system.t0, system.t_end):
state = update_func(state, t, system)
S[t+1], I[t+1], R[t+1] = state
return S, I, R
###Output
_____no_output_____
###Markdown
Here's how we call it.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
S, I, R = run_simulation(system, update_func)
###Output
_____no_output_____
###Markdown
And then we can plot the results.
###Code
def plot_results(S, I, R):
"""Plot the results of a SIR model.
S: TimeSeries
I: TimeSeries
R: TimeSeries
"""
plot(S, '--', label='Susceptible')
plot(I, '-', label='Infected')
plot(R, ':', label='Recovered')
decorate(xlabel='Time (days)',
ylabel='Fraction of population')
###Output
_____no_output_____
###Markdown
Here's what they look like.
###Code
plot_results(S, I, R)
savefig('figs/chap05-fig01.pdf')
###Output
Saving figure to file figs/chap05-fig01.pdf
###Markdown
Using a DataFrame. Instead of making three `TimeSeries` objects, we can use one `DataFrame`. We have to use `row` to select rows, rather than columns. But then Pandas does the right thing, matching up the state variables with the columns of the `DataFrame`.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: TimeFrame
"""
frame = TimeFrame(columns=system.init.index)
frame.row[system.t0] = system.init
for t in linrange(system.t0, system.t_end):
frame.row[t+1] = update_func(frame.row[t], t, system)
return frame
###Output
_____no_output_____
###Markdown
Here's how we run it, and what the result looks like.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
results = run_simulation(system, update_func)
results.head()
###Output
_____no_output_____
###Markdown
We can extract the results and plot them.
###Code
plot_results(results.S, results.I, results.R)
###Output
_____no_output_____
###Markdown
Exercises. **Exercise** Suppose the time between contacts is 4 days and the recovery time is 5 days. Simulate this scenario for 14 weeks and plot the results.
###Code
# Solution goes here
tc = 4
tr = 5
beta = 1 / tc
gamma = 1 / tr
system = make_system(beta, gamma)
results = run_simulation(system, update_func)
results.head()
plot_results(results.S, results.I, results.R)
results.tail()
###Output
_____no_output_____
###Markdown
Modeling and Simulation in Python, Chapter 11. Copyright 2017 Allen Downey. License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
SIR implementation. We'll use a `State` object to represent the number (or fraction) of people in each compartment.
###Code
init = State(S=89, I=1, R=0)
###Output
_____no_output_____
###Markdown
To convert from number of people to fractions, we divide through by the total.
###Code
init /= sum(init)
###Output
_____no_output_____
###Markdown
`make_system` creates a `System` object with the given parameters.
###Code
def make_system(beta, gamma):
"""Make a system object for the SIR model.
beta: contact rate (per day)
gamma: recovery rate (per day)
returns: System object
"""
init = State(S=89, I=1, R=0)
init /= sum(init)
t0 = 0
t_end = 7 * 14
return System(init=init, t0=t0, t_end=t_end,
beta=beta, gamma=gamma)
###Output
_____no_output_____
###Markdown
Here's an example with hypothetical values for `beta` and `gamma`.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
###Output
_____no_output_____
###Markdown
The update function takes the state during the current time step and returns the state during the next time step.
###Code
def update_func(state, t, system):
"""Update the SIR model.
state: State with variables S, I, R
t: time step
system: System with beta and gamma
returns: State object
"""
s, i, r = state
infected = system.beta * i * s
recovered = system.gamma * i
s -= infected
i += infected - recovered
r += recovered
return State(S=s, I=i, R=r)
###Output
_____no_output_____
###Markdown
To run a single time step, we call it like this:
###Code
state = update_func(init, 0, system)
###Output
_____no_output_____
###Markdown
Now we can run a simulation by calling the update function for each time step.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: State object for final state
"""
state = system.init
for t in linrange(system.t0, system.t_end):
state = update_func(state, t, system)
return state
###Output
_____no_output_____
###Markdown
The result is the state of the system at `t_end`.
###Code
run_simulation(system, update_func)
###Output
_____no_output_____
###Markdown
**Exercise** Suppose the time between contacts is 4 days and the recovery time is 5 days. After 14 weeks, how many students, total, have been infected? Hint: what is the change in `S` between the beginning and the end of the simulation?
###Code
system = make_system(beta = 1/4, gamma = 1/5)
init_s = system.init.S
# scale the drop in S from a fraction back to the 90-person population
(init_s - run_simulation(system, update_func).S) * 90
###Output
_____no_output_____
###Markdown
Using TimeSeries objects. If we want to store the state of the system at each time step, we can use one `TimeSeries` object for each state variable.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
returns: S, I, R TimeSeries objects
system: System object
update_func: function that updates state
"""
S = TimeSeries()
I = TimeSeries()
R = TimeSeries()
state = system.init
t0 = system.t0
S[t0], I[t0], R[t0] = state
for t in linrange(system.t0, system.t_end):
state = update_func(state, t, system)
S[t+1], I[t+1], R[t+1] = state
return S, I, R
###Output
_____no_output_____
###Markdown
Here's how we call it.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
S, I, R = run_simulation(system, update_func)
###Output
_____no_output_____
###Markdown
And then we can plot the results.
###Code
def plot_results(S, I, R):
"""Plot the results of a SIR model.
S: TimeSeries
I: TimeSeries
R: TimeSeries
"""
plot(S, '--', label='Susceptible')
plot(I, '-', label='Infected')
plot(R, ':', label='Recovered')
decorate(xlabel='Time (days)',
ylabel='Fraction of population')
###Output
_____no_output_____
###Markdown
Here's what they look like.
###Code
plot_results(S, I, R)
savefig('figs/chap05-fig01.pdf')
###Output
Saving figure to file figs/chap05-fig01.pdf
###Markdown
Using a DataFrame. Instead of making three `TimeSeries` objects, we can use one `DataFrame`. We have to use `row` to select rows, rather than columns. But then Pandas does the right thing, matching up the state variables with the columns of the `DataFrame`.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: TimeFrame
"""
frame = TimeFrame(columns=system.init.index)
frame.row[system.t0] = system.init
for t in linrange(system.t0, system.t_end):
frame.row[t+1] = update_func(frame.row[t], t, system)
return frame
###Output
_____no_output_____
###Markdown
Here's how we run it, and what the result looks like.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
results = run_simulation(system, update_func)
results.head()
###Output
_____no_output_____
###Markdown
We can extract the results and plot them.
###Code
plot_results(results.S, results.I, results.R)
###Output
_____no_output_____
###Markdown
Exercises. **Exercise** Suppose the time between contacts is 4 days and the recovery time is 5 days. Simulate this scenario for 14 weeks and plot the results.
###Code
system = make_system(beta = 1/4, gamma = 1/5)
results = run_simulation(system, update_func)
plot_results(results.S, results.I, results.R)
###Output
_____no_output_____
###Markdown
Modeling and Simulation in Python, Chapter 11. Copyright 2017 Allen Downey. License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
SIR implementation. We'll use a `State` object to represent the number (or fraction) of people in each compartment.
###Code
init = State(S=89, I=1, R=0)
###Output
_____no_output_____
###Markdown
To convert from number of people to fractions, we divide through by the total.
###Code
init /= sum(init)
###Output
_____no_output_____
###Markdown
`make_system` creates a `System` object with the given parameters.
###Code
def make_system(beta, gamma):
"""Make a system object for the SIR model.
beta: contact rate (per day)
gamma: recovery rate (per day)
returns: System object
"""
init = State(S=89, I=1, R=0)
init /= sum(init)
t0 = 0
t_end = 7 * 14
return System(init=init, t0=t0, t_end=t_end,
beta=beta, gamma=gamma)
###Output
_____no_output_____
###Markdown
Here's an example with hypothetical values for `beta` and `gamma`.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
###Output
_____no_output_____
###Markdown
The update function takes the state during the current time step and returns the state during the next time step.
###Code
def update_func(state, t, system):
"""Update the SIR model.
state: State with variables S, I, R
t: time step
system: System with beta and gamma
returns: State object
"""
s, i, r = state
infected = system.beta * i * s
recovered = system.gamma * i
s -= infected
i += infected - recovered
r += recovered
return State(S=s, I=i, R=r)
###Output
_____no_output_____
###Markdown
To run a single time step, we call it like this:
###Code
state = update_func(init, 0, system)
###Output
_____no_output_____
###Markdown
Now we can run a simulation by calling the update function for each time step.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: State object for final state
"""
state = system.init
for t in linrange(system.t0, system.t_end):
state = update_func(state, t, system)
return state
###Output
_____no_output_____
###Markdown
The result is the state of the system at `t_end`.
###Code
run_simulation(system, update_func)
###Output
_____no_output_____
###Markdown
**Exercise** Suppose the time between contacts is 4 days and the recovery time is 5 days. After 14 weeks, how many students, total, have been infected? Hint: what is the change in `S` between the beginning and the end of the simulation?
###Code
# Solution goes here
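# A possible solution (sketch): the total number infected is the drop in S,
# scaled from a fraction back to the 90-person population.
system = make_system(beta=1/4, gamma=1/5)
final = run_simulation(system, update_func)
print((system.init.S - final.S) * 90)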
###Output
_____no_output_____
###Markdown
Using TimeSeries objects. If we want to store the state of the system at each time step, we can use one `TimeSeries` object for each state variable.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
returns: S, I, R TimeSeries objects
system: System object
update_func: function that updates state
"""
S = TimeSeries()
I = TimeSeries()
R = TimeSeries()
state = system.init
t0 = system.t0
S[t0], I[t0], R[t0] = state
for t in linrange(system.t0, system.t_end):
state = update_func(state, t, system)
S[t+1], I[t+1], R[t+1] = state
return S, I, R
###Output
_____no_output_____
###Markdown
Here's how we call it.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
S, I, R = run_simulation(system, update_func)
###Output
_____no_output_____
###Markdown
And then we can plot the results.
###Code
def plot_results(S, I, R):
"""Plot the results of a SIR model.
S: TimeSeries
I: TimeSeries
R: TimeSeries
"""
plot(S, '--', label='Susceptible')
plot(I, '-', label='Infected')
plot(R, ':', label='Recovered')
decorate(xlabel='Time (days)',
ylabel='Fraction of population')
###Output
_____no_output_____
###Markdown
Here's what they look like.
###Code
plot_results(S, I, R)
savefig('figs/chap05-fig01.pdf')
###Output
Saving figure to file figs/chap05-fig01.pdf
###Markdown
Using a DataFrame. Instead of making three `TimeSeries` objects, we can use one `DataFrame`. We have to use `row` to select rows, rather than columns. But then Pandas does the right thing, matching up the state variables with the columns of the `DataFrame`.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: TimeFrame
"""
frame = TimeFrame(columns=system.init.index)
frame.row[system.t0] = system.init
for t in linrange(system.t0, system.t_end):
frame.row[t+1] = update_func(frame.row[t], t, system)
return frame
###Output
_____no_output_____
###Markdown
Here's how we run it, and what the result looks like.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
results = run_simulation(system, update_func)
results.head()
###Output
_____no_output_____
###Markdown
We can extract the results and plot them.
###Code
plot_results(results.S, results.I, results.R)
###Output
_____no_output_____
###Markdown
Exercises. **Exercise** Suppose the time between contacts is 4 days and the recovery time is 5 days. Simulate this scenario for 14 weeks and plot the results.
###Code
# Solution goes here
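# A possible solution (sketch): reuse make_system and the TimeFrame version
# of run_simulation defined above; t_end is already 14 weeks (7 * 14 days).
tc = 4      # time between contacts in days
tr = 5      # recovery time in days
system = make_system(1/tc, 1/tr)
results = run_simulation(system, update_func)
plot_results(results.S, results.I, results.R)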
###Output
_____no_output_____
###Markdown
Modeling and Simulation in Python, Chapter 11. Copyright 2017 Allen Downey. License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
SIR implementation. We'll use a `State` object to represent the number (or fraction) of people in each compartment.
###Code
init = State(S=89, I=1, R=0)
###Output
_____no_output_____
###Markdown
To convert from number of people to fractions, we divide through by the total.
###Code
init /= sum(init)
###Output
_____no_output_____
###Markdown
`make_system` creates a `System` object with the given parameters.
###Code
def make_system(beta, gamma):
"""Make a system object for the SIR model.
beta: contact rate in days
gamma: recovery rate in days
returns: System object
"""
init = State(S=89, I=1, R=0)
init /= sum(init)
t0 = 0
t_end = 7 * 14
return System(init=init, t0=t0, t_end=t_end,
beta=beta, gamma=gamma)
###Output
_____no_output_____
###Markdown
Here's an example with hypothetical values for `beta` and `gamma`.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
###Output
_____no_output_____
###Markdown
The update function takes the state during the current time step and returns the state during the next time step.
###Code
def update_func(state, t, system):
"""Update the SIR model.
state: State with variables S, I, R
t: time step
system: System with beta and gamma
returns: State object
"""
s, i, r = state
infected = system.beta * i * s
recovered = system.gamma * i
s -= infected
i += infected - recovered
r += recovered
return State(S=s, I=i, R=r)
###Output
_____no_output_____
###Markdown
To run a single time step, we call it like this:
###Code
state = update_func(init, 0, system)
###Output
_____no_output_____
###Markdown
Now we can run a simulation by calling the update function for each time step.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: State object for final state
"""
state = system.init
for t in linrange(system.t0, system.t_end):
state = update_func(state, t, system)
return state
###Output
_____no_output_____
###Markdown
The result is the state of the system at `t_end`.
###Code
run_simulation(system, update_func)
###Output
_____no_output_____
###Markdown
**Exercise** Suppose the time between contacts is 4 days and the recovery time is 5 days. After 14 weeks, how many students, total, have been infected? Hint: what is the change in `S` between the beginning and the end of the simulation?
###Code
# Solution goes here
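# A possible solution (sketch): the total number infected is the drop in S,
# scaled from a fraction back to the 90-person population.
system = make_system(beta=1/4, gamma=1/5)
final = run_simulation(system, update_func)
print((system.init.S - final.S) * 90)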
###Output
_____no_output_____
###Markdown
Using TimeSeries objects. If we want to store the state of the system at each time step, we can use one `TimeSeries` object for each state variable.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
returns: S, I, R TimeSeries objects
system: System object
update_func: function that updates state
"""
S = TimeSeries()
I = TimeSeries()
R = TimeSeries()
state = system.init
t0 = system.t0
S[t0], I[t0], R[t0] = state
for t in linrange(system.t0, system.t_end):
state = update_func(state, t, system)
S[t+1], I[t+1], R[t+1] = state
return S, I, R
###Output
_____no_output_____
###Markdown
Here's how we call it.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
S, I, R = run_simulation(system, update_func)
###Output
_____no_output_____
###Markdown
And then we can plot the results.
###Code
def plot_results(S, I, R):
"""Plot the results of a SIR model.
S: TimeSeries
I: TimeSeries
R: TimeSeries
"""
plot(S, '--', label='Susceptible')
plot(I, '-', label='Infected')
plot(R, ':', label='Recovered')
decorate(xlabel='Time (days)',
ylabel='Fraction of population')
###Output
_____no_output_____
###Markdown
Here's what they look like.
###Code
plot_results(S, I, R)
savefig('figs/chap05-fig01.pdf')
###Output
_____no_output_____
###Markdown
Using a DataFrame. Instead of making three `TimeSeries` objects, we can use one `DataFrame`. We have to use `row` to select rows, rather than columns. But then Pandas does the right thing, matching up the state variables with the columns of the `DataFrame`.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: TimeFrame
"""
frame = TimeFrame(columns=system.init.index)
frame.row[system.t0] = system.init
for t in linrange(system.t0, system.t_end):
frame.row[t+1] = update_func(frame.row[t], t, system)
return frame
###Output
_____no_output_____
###Markdown
Here's how we run it, and what the result looks like.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
results = run_simulation(system, update_func)
results.head()
###Output
_____no_output_____
###Markdown
We can extract the results and plot them.
###Code
plot_results(results.S, results.I, results.R)
###Output
_____no_output_____
###Markdown
Exercises. **Exercise** Suppose the time between contacts is 4 days and the recovery time is 5 days. Simulate this scenario for 14 weeks and plot the results.
###Code
# Solution goes here
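# A possible solution (sketch): reuse make_system and the TimeFrame version
# of run_simulation defined above; t_end is already 14 weeks (7 * 14 days).
tc = 4      # time between contacts in days
tr = 5      # recovery time in days
system = make_system(1/tc, 1/tr)
results = run_simulation(system, update_func)
plot_results(results.S, results.I, results.R)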
###Output
_____no_output_____ |
qiskit/advanced/aqua/optimization/docplex.ipynb | ###Markdown
Trusted Notebook" align="middle"> _*Qiskit Aqua: Generating Ising Hamiltonians from optimization models with DOcplex*_The latest version of this notebook is available on https://github.com/Qiskit/qiskit-tutorial.*** ContributorsAtsushi Matsuo[1], Takashi Imamichi[1], Marco Pistoia[1], Stephen Wood[1] Affiliation- [1]IBMQ IntroductionThere has been a growing interest in using quantum computers to find solutions of combinatorial problems. A heuristic approach for finding solutions of combinatorial problems on quantum computers is the quantum variational approach, such as the Variational Quantum Eigensolver (VQE) algorithm (see https://arxiv.org/abs/1802.00171 and the Quantum Approximate Optimization Algorithm (QAOA) (see https://arxiv.org/abs/1411.4028). In order to use a quantum variational approach on quantum computers, first, we need to map a combinatorial problem to an Ising Hamiltonian. However Ising Hamiltonians are complicated and unintuitive. Mapping a combinatorial problem to Ising Hamiltonians can be a difficult and time-consuming task, requiring specialized knowledge.In this tutorial, we introduce a translator to automatically generate Ising Hamiltonians from classical optimization models. We will explain about classical optimization models later. The translator dramatically simplifies the task of designing and implementing quantum-computing-based solutions, for optimization problems, by automatically generating Ising Hamiltonians for different optimization problems. With the translator, all a user has to do is to write optimization models using DOcplex (see https://cdn.rawgit.com/IBMDecisionOptimization/docplex-doc/master/docs/index.html). DOcplex is a python library for optimization problems.Then the translator will automatically generate Ising Hamiltonians from the models. Optimization models are short and intuitive. It is much easier to write optimization models compared to writing Ising Hamiltonians manually. The quantum variational approach works with the translator in Qiskit Aqua as follows:1. Write an optimization model of the formulation with DOcplex.2. Call the translator to transform the model into an Ising Hamiltonian.3. Solve the problem with variational algorithms such as VQE and QAOA. Details of Optimization ModelsThe translator supports the generation of an Ising Hamiltonian from the following optimization model elements:- Binary decision variables. - Linear and quadratic terms in objective functions.- Only equality constraints. Input models are validated before transformation. If the model contains elements that are not from the supported set, an error will be raised.Even though there are restrictions, this type of optimization model can handle optimization problems such as max-cut, traveling salesman etc.These are typical optimization problems. Examples of the translator being used for Max-Cut and TSP problems can be found in the following tutorial:- [Qiskit Aqua: Experimenting with Max-Cut problem and Traveling Salesman problem with variational quantum eigensolver](max_cut_and_tsp.ipynb) A Usage Example: Maximize the number of variables by taking into account constraintsThe following is a toy example of a maximization problem with constraints.\begin{aligned} & \text{maximize} & \sum_{i} x_{i}\\ & \text{subject to} & \sum_{i} i * x_{i}=3\\ & & i \in \{1,2,3,4\} \\ & & x_i \in \{0,1\}\\\end{aligned}
###Code
from docplex.mp.model import Model
from qiskit import BasicAer
from qiskit.aqua import run_algorithm
from qiskit.aqua.algorithms import VQE, ExactEigensolver
from qiskit.aqua.components.optimizers import SPSA
from qiskit.aqua.components.variational_forms import RY
from qiskit.aqua import QuantumInstance
from qiskit.aqua.translators.ising import docplex
# setup aqua logging
import logging
from qiskit.aqua import set_qiskit_aqua_logging
# set_qiskit_aqua_logging(logging.DEBUG) # choose INFO, DEBUG to see the log
###Output
_____no_output_____
###Markdown
Creating an optimization model of the above problem using DOcplex. An optimization model of the problem with DOcplex is written as follows. * First an instance of `Model` is created and variables for the model are defined. * Next an objective function is written and passed to the model. The objective function is a function that we would like to minimize (or maximize). * Finally, constraints are added.
###Code
# Create an instance of a model and variables
mdl = Model(name='max_vars')
x = {i: mdl.binary_var(name='x_{0}'.format(i)) for i in range(1,5)}
# Objective function
max_vars_func = mdl.sum(x[i] for i in range(1,5))
mdl.maximize(max_vars_func)
# Constraints
mdl.add_constraint(mdl.sum(i*x[i] for i in range(1,5)) == 3)
print(mdl.export_to_string())
###Output
\ This file has been generated by DOcplex
\ ENCODING=ISO-8859-1
\Problem name: max_vars
Maximize
obj: x_1 + x_2 + x_3 + x_4
Subject To
c1: x_1 + 2 x_2 + 3 x_3 + 4 x_4 = 3
Bounds
0 <= x_1 <= 1
0 <= x_2 <= 1
0 <= x_3 <= 1
0 <= x_4 <= 1
Binaries
x_1 x_2 x_3 x_4
End
###Markdown
Generate an Ising Hamiltonian from the model using ```docplex.get_qubitops(mdl)```
###Code
qubitOp, offset = docplex.get_qubitops(mdl)
###Output
_____no_output_____
###Markdown
Checking that the full Hamiltonian gives the right cost
###Code
ee = ExactEigensolver(qubitOp, k=1)
result = ee.run()
print('energy:', result['energy'])
print('objective:', result['energy'] + offset)
x = docplex.sample_most_likely(result['eigvecs'][0])
print('solution:', x)
###Output
energy: -57.5
objective: -2.0
solution: [1. 1. 0. 0.]
###Markdown
Running it on a quantum computer. We run the optimization routine using a feedback loop with a quantum computer that uses trial functions built with Y single-qubit rotations, $U_\mathrm{single}(\theta) = \prod_{i=1}^n Y(\theta_{i})$, and entangler steps $U_\mathrm{entangler}$.
###Code
seed = 10598
spsa = SPSA(max_trials=300)
ry = RY(qubitOp.num_qubits, depth=5, entanglement='linear')
vqe = VQE(qubitOp, ry, spsa)
backend = BasicAer.get_backend('statevector_simulator')
quantum_instance = QuantumInstance(backend, seed_simulator=seed, seed_transpiler=seed)
result = vqe.run(quantum_instance)
"""declarative approach
algorithm_cfg = {
'name': 'VQE'
}
optimizer_cfg = {
'name': 'SPSA',
'max_trials': 300
}
var_form_cfg = {
'name': 'RY',
'depth': 5,
'entanglement': 'linear'
}
params = {
'problem': {'name': 'ising', 'random_seed': seed},
'algorithm': algorithm_cfg,
'optimizer': optimizer_cfg,
'variational_form': var_form_cfg,
'backend': {'provider': 'qiskit.BasicAer', 'name': 'statevector_simulator'}
}
result = run_algorithm(params, algo_input)
"""
x = docplex.sample_most_likely(result['eigvecs'][0])
print('energy:', result['energy'])
print('time:', result['eval_time'])
print('solution objective:', result['energy'] + offset)
print('solution:', x)
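# The intro lists QAOA alongside VQE; here is a minimal sketch of solving the
# same Hamiltonian with it. An assumption not shown in the original: this
# Qiskit Aqua version's QAOA(operator, optimizer, p) constructor.
from qiskit.aqua.algorithms import QAOA
qaoa = QAOA(qubitOp, spsa, p=3)
qaoa_result = qaoa.run(quantum_instance)
print('QAOA solution:', docplex.sample_most_likely(qaoa_result['eigvecs'][0]))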
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
###Output
_____no_output_____
###Markdown
![qiskit_header.png](attachment:qiskit_header.png) Qiskit Aqua: Generating Ising Hamiltonians from optimization models with DOcplex. Introduction: There has been a growing interest in using quantum computers to find solutions of combinatorial problems. A heuristic approach for finding solutions of combinatorial problems on quantum computers is the quantum variational approach, such as the Variational Quantum Eigensolver (VQE) algorithm (see https://arxiv.org/abs/1802.00171) and the Quantum Approximate Optimization Algorithm (QAOA) (see https://arxiv.org/abs/1411.4028). In order to use a quantum variational approach on quantum computers, we first need to map a combinatorial problem to an Ising Hamiltonian. However, Ising Hamiltonians are complicated and unintuitive, and mapping a combinatorial problem to an Ising Hamiltonian can be a difficult and time-consuming task requiring specialized knowledge. In this tutorial, we introduce a translator that automatically generates Ising Hamiltonians from classical optimization models (we describe these models below). The translator dramatically simplifies the task of designing and implementing quantum-computing-based solutions for optimization problems by automatically generating Ising Hamiltonians for different optimization problems. With the translator, all a user has to do is write an optimization model using DOcplex (see https://cdn.rawgit.com/IBMDecisionOptimization/docplex-doc/master/docs/index.html), a Python library for optimization problems. The translator then automatically generates an Ising Hamiltonian from the model. Optimization models are short and intuitive, and they are much easier to write than Ising Hamiltonians by hand. The quantum variational approach works with the translator in Qiskit Aqua as follows: 1. Write an optimization model of the formulation with DOcplex. 2. Call the translator to transform the model into an Ising Hamiltonian. 3. Solve the problem with variational algorithms such as VQE and QAOA. Details of Optimization Models: The translator supports the generation of an Ising Hamiltonian from the following optimization model elements: - Binary decision variables. - Linear and quadratic terms in objective functions. - Only equality constraints. Input models are validated before transformation; if a model contains elements that are not from the supported set, an error will be raised. Even though there are restrictions, this type of optimization model can handle typical optimization problems such as max-cut and traveling salesman. Examples of the translator being used for Max-Cut and TSP problems can be found in the following tutorial: - [Qiskit Aqua: Experimenting with Max-Cut problem and Traveling Salesman problem with variational quantum eigensolver](max_cut_and_tsp.ipynb) A Usage Example: Maximize the number of variables subject to constraints. The following is a toy example of a maximization problem with constraints. \begin{aligned} & \text{maximize} & \sum_{i} x_{i}\\ & \text{subject to} & \sum_{i} i \cdot x_{i}=3\\ & & i \in \{1,2,3,4\} \\ & & x_i \in \{0,1\}\\\end{aligned}
###Code
from docplex.mp.model import Model
from qiskit import Aer
from qiskit.aqua import run_algorithm
from qiskit.aqua.algorithms import VQE, ExactEigensolver
from qiskit.aqua.components.optimizers import SPSA
from qiskit.aqua.components.variational_forms import RY
from qiskit.aqua import QuantumInstance
from qiskit.aqua.translators.ising import docplex
# setup aqua logging
import logging
from qiskit.aqua import set_qiskit_aqua_logging
# set_qiskit_aqua_logging(logging.DEBUG) # choose INFO, DEBUG to see the log
###Output
_____no_output_____
###Markdown
Creating an optimization model of the above problem using DOcplex. An optimization model of the problem with DOcplex is written as follows. * First an instance of `Model` is created and variables for the model are defined. * Next an objective function is written and passed to the model. The objective function is a function that we would like to minimize (or maximize). * Finally, constraints are added.
###Code
# Create an instance of a model and variables
mdl = Model(name='max_vars')
x = {i: mdl.binary_var(name='x_{0}'.format(i)) for i in range(1,5)}
# Objective function
max_vars_func = mdl.sum(x[i] for i in range(1,5))
mdl.maximize(max_vars_func)
# Constraints
mdl.add_constraint(mdl.sum(i*x[i] for i in range(1,5)) == 3)
print(mdl.export_to_string())
###Output
\ This file has been generated by DOcplex
\ ENCODING=ISO-8859-1
\Problem name: max_vars
Maximize
obj: x_1 + x_2 + x_3 + x_4
Subject To
c1: x_1 + 2 x_2 + 3 x_3 + 4 x_4 = 3
Bounds
0 <= x_1 <= 1
0 <= x_2 <= 1
0 <= x_3 <= 1
0 <= x_4 <= 1
Binaries
x_1 x_2 x_3 x_4
End
###Markdown
Generate an Ising Hamiltonian from the model using ```docplex.get_qubitops(mdl)```
###Code
qubitOp, offset = docplex.get_qubitops(mdl)
###Output
_____no_output_____
###Markdown
Checking that the full Hamiltonian gives the right cost
###Code
ee = ExactEigensolver(qubitOp, k=1)
result = ee.run()
print('energy:', result['energy'])
print('objective:', result['energy'] + offset)
x = docplex.sample_most_likely(result['eigvecs'][0])
print('solution:', x)
###Output
energy: -57.5
objective: -2.0
solution: [1. 1. 0. 0.]
###Markdown
Running it on a quantum computer. We run the optimization routine using a feedback loop with a quantum computer that uses trial functions built with Y single-qubit rotations, $U_\mathrm{single}(\theta) = \prod_{i=1}^n Y(\theta_{i})$, and entangler steps $U_\mathrm{entangler}$.
###Code
seed = 10598
spsa = SPSA(max_trials=300)
ry = RY(qubitOp.num_qubits, depth=5, entanglement='linear')
vqe = VQE(qubitOp, ry, spsa)
backend = Aer.get_backend('statevector_simulator')
quantum_instance = QuantumInstance(backend, seed_simulator=seed, seed_transpiler=seed)
result = vqe.run(quantum_instance)
"""declarative approach
algorithm_cfg = {
'name': 'VQE'
}
optimizer_cfg = {
'name': 'SPSA',
'max_trials': 300
}
var_form_cfg = {
'name': 'RY',
'depth': 5,
'entanglement': 'linear'
}
params = {
'problem': {'name': 'ising', 'random_seed': seed},
'algorithm': algorithm_cfg,
'optimizer': optimizer_cfg,
'variational_form': var_form_cfg,
'backend': {'provider': 'qiskit.BasicAer', 'name': 'statevector_simulator'}
}
result = run_algorithm(params, algo_input)
"""
x = docplex.sample_most_likely(result['eigvecs'][0])
print('energy:', result['energy'])
print('time:', result['eval_time'])
print('solution objective:', result['energy'] + offset)
print('solution:', x)
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
###Output
_____no_output_____
###Markdown
Trusted Notebook" align="middle"> _*Qiskit Aqua: Generating Ising Hamiltonians from optimization models with DOcplex*_The latest version of this notebook is available on https://github.com/Qiskit/qiskit-tutorial.*** ContributorsAtsushi Matsuo[1], Takashi Imamichi[1], Marco Pistoia[1], Stephen Wood[1] Affiliation- [1]IBMQ IntroductionThere has been a growing interest in using quantum computers to find solutions of combinatorial problems. A heuristic approach for finding solutions of combinatorial problems on quantum computers is the quantum variational approach, such as the Variational Quantum Eigensolver (VQE) algorithm (see https://arxiv.org/abs/1802.00171 and the Quantum Approximate Optimization Algorithm (QAOA) (see https://arxiv.org/abs/1411.4028). In order to use a quantum variational approach on quantum computers, first, we need to map a combinatorial problem to an Ising Hamiltonian. However, Ising Hamiltonians are complicated and unintuitive. Mapping a combinatorial problem to Ising Hamiltonians can be a difficult and time-consuming task, requiring specialized knowledge.In this tutorial, we introduce a translator to automatically generate Ising Hamiltonians from classical optimization models. We will explain about classical optimization models later. The translator dramatically simplifies the task of designing and implementing quantum-computing-based solutions, for optimization problems, by automatically generating Ising Hamiltonians for different optimization problems. With the translator, all a user has to do is to write optimization models using DOcplex (see https://cdn.rawgit.com/IBMDecisionOptimization/docplex-doc/master/docs/index.html). DOcplex is a python library for optimization problems.Then the translator will automatically generate Ising Hamiltonians from the models. Optimization models are short and intuitive. It is much easier to write optimization models compared to writing Ising Hamiltonians manually. The quantum variational approach works with the translator in Qiskit Aqua as follows:1. Write an optimization model of the formulation with DOcplex.2. Call the translator to transform the model into an Ising Hamiltonian.3. Solve the problem with variational algorithms such as VQE and QAOA. Details of Optimization ModelsThe translator supports the generation of an Ising Hamiltonian from the following optimization model elements:- Binary decision variables. - Linear and quadratic terms in objective functions.- Only equality constraints. Input models are validated before transformation. If the model contains elements that are not from the supported set, an error will be raised.Even though there are restrictions, this type of optimization model can handle optimization problems such as max-cut, traveling salesman etc.These are typical optimization problems. Examples of the translator being used for Max-Cut and TSP problems can be found in the following tutorial:- [Qiskit Aqua: Experimenting with Max-Cut problem and Traveling Salesman problem with variational quantum eigensolver](max_cut_and_tsp.ipynb) A Usage Example: Maximize the number of variables by taking into account constraintsThe following is a toy example of a maximization problem with constraints.\begin{aligned} & \text{maximize} & \sum_{i} x_{i}\\ & \text{subject to} & \sum_{i} i * x_{i}=3\\ & & i \in \{1,2,3,4\} \\ & & x_i \in \{0,1\}\\\end{aligned}
###Code
from docplex.mp.model import Model
from qiskit import BasicAer
from qiskit.aqua import run_algorithm
from qiskit.aqua.algorithms import VQE, ExactEigensolver
from qiskit.aqua.components.optimizers import SPSA
from qiskit.aqua.components.variational_forms import RY
from qiskit.aqua import QuantumInstance
from qiskit.aqua.translators.ising import docplex
# setup aqua logging
import logging
from qiskit.aqua import set_qiskit_aqua_logging
# set_qiskit_aqua_logging(logging.DEBUG) # choose INFO, DEBUG to see the log
###Output
_____no_output_____
###Markdown
Creating an optimization model of the above problem using DOcplex. An optimization model of the problem with DOcplex is written as follows. * First an instance of `Model` is created and variables for the model are defined. * Next an objective function is written and passed to the model. The objective function is a function that we would like to minimize (or maximize). * Finally, constraints are added.
###Code
# Create an instance of a model and variables
mdl = Model(name='max_vars')
x = {i: mdl.binary_var(name='x_{0}'.format(i)) for i in range(1,5)}
# Objective function
max_vars_func = mdl.sum(x[i] for i in range(1,5))
mdl.maximize(max_vars_func)
# Constraints
mdl.add_constraint(mdl.sum(i*x[i] for i in range(1,5)) == 3)
print(mdl.export_to_string())
###Output
\ This file has been generated by DOcplex
\ ENCODING=ISO-8859-1
\Problem name: max_vars
Maximize
obj: x_1 + x_2 + x_3 + x_4
Subject To
c1: x_1 + 2 x_2 + 3 x_3 + 4 x_4 = 3
Bounds
0 <= x_1 <= 1
0 <= x_2 <= 1
0 <= x_3 <= 1
0 <= x_4 <= 1
Binaries
x_1 x_2 x_3 x_4
End
###Markdown
Generate an Ising Hamiltonian from the model using ```docplex.get_qubitops(mdl)```
###Code
qubitOp, offset = docplex.get_qubitops(mdl)
###Output
_____no_output_____
###Markdown
Checking that the full Hamiltonian gives the right cost
###Code
ee = ExactEigensolver(qubitOp, k=1)
result = ee.run()
print('energy:', result['energy'])
print('objective:', result['energy'] + offset)
x = docplex.sample_most_likely(result['eigvecs'][0])
print('solution:', x)
###Output
energy: -57.5
objective: -2.0
solution: [1. 1. 0. 0.]
###Markdown
Running it on a quantum computer. We run the optimization routine using a feedback loop with a quantum computer that uses trial functions built with Y single-qubit rotations, $U_\mathrm{single}(\theta) = \prod_{i=1}^n Y(\theta_{i})$, and entangler steps $U_\mathrm{entangler}$.
###Code
seed = 10598
spsa = SPSA(max_trials=300)
ry = RY(qubitOp.num_qubits, depth=5, entanglement='linear')
vqe = VQE(qubitOp, ry, spsa)
backend = BasicAer.get_backend('statevector_simulator')
quantum_instance = QuantumInstance(backend, seed_simulator=seed, seed_transpiler=seed)
result = vqe.run(quantum_instance)
"""declarative approach
algorithm_cfg = {
'name': 'VQE'
}
optimizer_cfg = {
'name': 'SPSA',
'max_trials': 300
}
var_form_cfg = {
'name': 'RY',
'depth': 5,
'entanglement': 'linear'
}
params = {
'problem': {'name': 'ising', 'random_seed': seed},
'algorithm': algorithm_cfg,
'optimizer': optimizer_cfg,
'variational_form': var_form_cfg,
'backend': {'provider': 'qiskit.BasicAer', 'name': 'statevector_simulator'}
}
result = run_algorithm(params, algo_input)
"""
x = docplex.sample_most_likely(result['eigvecs'][0])
print('energy:', result['energy'])
print('time:', result['eval_time'])
print('solution objective:', result['energy'] + offset)
print('solution:', x)
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
###Output
_____no_output_____
###Markdown
![qiskit_header.png](attachment:qiskit_header.png) _*Qiskit Aqua: Generating Ising Hamiltonians from optimization models with DOcplex*_ The latest version of this notebook is available on https://github.com/Qiskit/qiskit-tutorial. *** Contributors: Atsushi Matsuo[1], Takashi Imamichi[1], Marco Pistoia[1], Stephen Wood[1]. Affiliation: [1] IBMQ. Introduction: There has been a growing interest in using quantum computers to find solutions of combinatorial problems. A heuristic approach for finding solutions of combinatorial problems on quantum computers is the quantum variational approach, such as the Variational Quantum Eigensolver (VQE) algorithm (see https://arxiv.org/abs/1802.00171) and the Quantum Approximate Optimization Algorithm (QAOA) (see https://arxiv.org/abs/1411.4028). In order to use a quantum variational approach on quantum computers, we first need to map a combinatorial problem to an Ising Hamiltonian. However, Ising Hamiltonians are complicated and unintuitive, and mapping a combinatorial problem to an Ising Hamiltonian can be a difficult and time-consuming task requiring specialized knowledge. In this tutorial, we introduce a translator that automatically generates Ising Hamiltonians from classical optimization models (we describe these models below). The translator dramatically simplifies the task of designing and implementing quantum-computing-based solutions for optimization problems by automatically generating Ising Hamiltonians for different optimization problems. With the translator, all a user has to do is write an optimization model using DOcplex (see https://cdn.rawgit.com/IBMDecisionOptimization/docplex-doc/master/docs/index.html), a Python library for optimization problems. The translator then automatically generates an Ising Hamiltonian from the model. Optimization models are short and intuitive, and they are much easier to write than Ising Hamiltonians by hand. The quantum variational approach works with the translator in Qiskit Aqua as follows: 1. Write an optimization model of the formulation with DOcplex. 2. Call the translator to transform the model into an Ising Hamiltonian. 3. Solve the problem with variational algorithms such as VQE and QAOA. Details of Optimization Models: The translator supports the generation of an Ising Hamiltonian from the following optimization model elements: - Binary decision variables. - Linear and quadratic terms in objective functions. - Only equality constraints. Input models are validated before transformation; if a model contains elements that are not from the supported set, an error will be raised. Even though there are restrictions, this type of optimization model can handle typical optimization problems such as max-cut and traveling salesman. Examples of the translator being used for Max-Cut and TSP problems can be found in the following tutorial: - [Qiskit Aqua: Experimenting with Max-Cut problem and Traveling Salesman problem with variational quantum eigensolver](max_cut_and_tsp.ipynb) A Usage Example: Maximize the number of variables subject to constraints. The following is a toy example of a maximization problem with constraints. \begin{aligned} & \text{maximize} & \sum_{i} x_{i}\\ & \text{subject to} & \sum_{i} i \cdot x_{i}=3\\ & & i \in \{1,2,3,4\} \\ & & x_i \in \{0,1\}\\\end{aligned}
###Code
from docplex.mp.model import Model
from qiskit import BasicAer
from qiskit.aqua.algorithms import VQE, ExactEigensolver
from qiskit.aqua.components.optimizers import SPSA
from qiskit.aqua.components.variational_forms import RY
from qiskit.aqua import QuantumInstance
from qiskit.optimization.ising import docplex
from qiskit.optimization.ising.common import sample_most_likely
# setup aqua logging
import logging
from qiskit.aqua import set_qiskit_aqua_logging
# set_qiskit_aqua_logging(logging.DEBUG) # choose INFO, DEBUG to see the log
###Output
_____no_output_____
###Markdown
Creating an optimization model of the above problem using DOcplex

An optimization model of the problem with DOcplex is written as follows.

* First an instance of `Model` is created and variables for the model are defined.
* Next an objective function is written and passed to the model. The objective function is the function that we would like to minimize (or maximize).
* Finally constraints are added.
###Code
# Create an instance of a model and variables
mdl = Model(name='max_vars')
x = {i: mdl.binary_var(name='x_{0}'.format(i)) for i in range(1,5)}
# Objective function
max_vars_func = mdl.sum(x[i] for i in range(1,5))
mdl.maximize(max_vars_func)
# Constraints
mdl.add_constraint(mdl.sum(i*x[i] for i in range(1,5)) == 3)
print(mdl.export_to_string())
###Output
\ This file has been generated by DOcplex
\ ENCODING=ISO-8859-1
\Problem name: max_vars
Maximize
obj: x_1 + x_2 + x_3 + x_4
Subject To
c1: x_1 + 2 x_2 + 3 x_3 + 4 x_4 = 3
Bounds
0 <= x_1 <= 1
0 <= x_2 <= 1
0 <= x_3 <= 1
0 <= x_4 <= 1
Binaries
x_1 x_2 x_3 x_4
End
###Markdown
Generate an Ising Hamiltonian from the model using ```docplex.get_operator(mdl)```
###Code
qubitOp, offset = docplex.get_operator(mdl)
###Output
_____no_output_____
###Markdown
Checking that the full Hamiltonian gives the right cost
###Code
ee = ExactEigensolver(qubitOp, k=1)
result = ee.run()
print('energy:', result['energy'])
print('objective:', result['energy'] + offset)
x = sample_most_likely(result['eigvecs'][0])
print('solution:', x)
###Output
energy: -57.5
objective: -2.0
solution: [1. 1. 0. 0.]
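###Markdown
As a quick sanity check (not part of the original tutorial), the toy problem is small enough to brute-force: enumerating all $2^4$ binary assignments and keeping those that satisfy $x_1 + 2x_2 + 3x_3 + 4x_4 = 3$ confirms that the best feasible objective is 2, attained at $x = (1, 1, 0, 0)$. This matches `solution` above, and is consistent with `objective: -2.0`, since the Ising formulation minimizes the negated maximization objective.
###Code
# Illustrative brute-force check of the toy problem; uses nothing beyond the
# problem statement above.
from itertools import product

feasible = [bits for bits in product([0, 1], repeat=4)
            if sum((i + 1) * b for i, b in enumerate(bits)) == 3]
best = max(feasible, key=sum)
print('best solution:', best, 'objective:', sum(best))
###Output
best solution: (1, 1, 0, 0) objective: 2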
###Markdown
Running it on a quantum computer

We run the optimization routine using a feedback loop with a quantum computer that uses trial functions built with Y single-qubit rotations, $U_\mathrm{single}(\theta) = \prod_{i=1}^n Y(\theta_{i})$, and entangler steps $U_\mathrm{entangler}$.
###Code
seed = 10598
spsa = SPSA(max_trials=300)
ry = RY(qubitOp.num_qubits, depth=5, entanglement='linear')
vqe = VQE(qubitOp, ry, spsa)
backend = BasicAer.get_backend('statevector_simulator')
quantum_instance = QuantumInstance(backend, seed_simulator=seed, seed_transpiler=seed)
result = vqe.run(quantum_instance)
x = sample_most_likely(result['eigvecs'][0])
print('energy:', result['energy'])
print('time:', result['eval_time'])
print('solution objective:', result['energy'] + offset)
print('solution:', x)
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
###Output
_____no_output_____
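###Markdown
The introduction lists QAOA as another applicable variational algorithm, although only VQE is demonstrated above. As a minimal sketch (assuming the `QAOA` class in this Qiskit Aqua release accepts the operator, an optimizer and the number of repetitions `p`), swapping it in could look like this:
###Code
# Hedged sketch, not from the original tutorial: solve the same qubitOp with QAOA.
from qiskit.aqua.algorithms import QAOA

qaoa = QAOA(qubitOp, spsa, p=1)
result_qaoa = qaoa.run(quantum_instance)
print('QAOA objective:', result_qaoa['energy'] + offset)
###Output
_____no_output_____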
###Markdown
Trusted Notebook" align="middle"> _*Qiskit Aqua: Generating Ising Hamiltonians from optimization models with DOcplex*_The latest version of this notebook is available on https://github.com/Qiskit/qiskit-tutorial.*** ContributorsAtsushi Matsuo[1], Takashi Imamichi[1], Marco Pistoia[1], Stephen Wood[1] Affiliation- [1]IBMQ IntroductionThere has been a growing interest in using quantum computers to find solutions of combinatorial problems. A heuristic approach for finding solutions of combinatorial problems on quantum computers is the quantum variational approach, such as the Variational Quantum Eigensolver (VQE) algorithm (see https://arxiv.org/abs/1802.00171 and the Quantum Approximate Optimization Algorithm (QAOA) (see https://arxiv.org/abs/1411.4028). In order to use a quantum variational approach on quantum computers, first, we need to map a combinatorial problem to an Ising Hamiltonian. However Ising Hamiltonians are complicated and unintuitive. Mapping a combinatorial problem to Ising Hamiltonians can be a difficult and time-consuming task, requiring specialized knowledge.In this tutorial, we introduce a translator to automatically generate Ising Hamiltonians from classical optimization models. We will explain about classical optimization models later. The translator dramatically simplifies the task of designing and implementing quantum-computing-based solutions, for optimization problems, by automatically generating Ising Hamiltonians for different optimization problems. With the translator, all a user has to do is to write optimization models using DOcplex (see https://cdn.rawgit.com/IBMDecisionOptimization/docplex-doc/master/docs/index.html). DOcplex is a python library for optimization problems.Then the translator will automatically generate Ising Hamiltonians from the models. Optimization models are short and intuitive. It is much easier to write optimization models compared to writing Ising Hamiltonians manually. The quantum variational approach works with the translator in Qiskit Aqua as follows:1. Write an optimization model of the formulation with DOcplex.2. Call the translator to transform the model into an Ising Hamiltonian.3. Solve the problem with variational algorithms such as VQE and QAOA. Details of Optimization ModelsThe translator supports the generation of an Ising Hamiltonian from the following optimization model elements:- Binary decision variables. - Linear and quadratic terms in objective functions.- Only equality constraints. Input models are validated before transformation. If the model contains elements that are not from the supported set, an error will be raised.Even though there are restrictions, this type of optimization model can handle optimization problems such as max-cut, traveling salesman etc.These are typical optimization problems. Examples of the translator being used for Max-Cut and TSP problems can be found in the following tutorial:- [Qiskit Aqua: Experimenting with Max-Cut problem and Traveling Salesman problem with variational quantum eigensolver](max_cut_and_tsp.ipynb) A Usage Example: Maximize the number of variables by taking into account constraintsThe following is a toy example of a maximization problem with constraints.\begin{aligned} & \text{maximize} & \sum_{i} x_{i}\\ & \text{subject to} & \sum_{i} i * x_{i}=3\\ & & i \in \{1,2,3,4\} \\ & & x_i \in \{0,1\}\\\end{aligned}
###Code
from docplex.mp.model import Model
from qiskit import BasicAer
from qiskit.aqua import Operator, run_algorithm
from qiskit.aqua.algorithms import VQE, ExactEigensolver
from qiskit.aqua.components.optimizers import SPSA
from qiskit.aqua.components.variational_forms import RY
from qiskit.aqua import QuantumInstance
from qiskit.aqua.translators.ising import docplex
# setup aqua logging
import logging
from qiskit.aqua import set_qiskit_aqua_logging
# set_qiskit_aqua_logging(logging.DEBUG) # choose INFO, DEBUG to see the log
###Output
_____no_output_____
###Markdown
Creating an optimization model of the above problem using DOcplex

An optimization model of the problem with DOcplex is written as follows.

* First an instance of `Model` is created and variables for the model are defined.
* Next an objective function is written and passed to the model. The objective function is the function that we would like to minimize (or maximize).
* Finally constraints are added.
###Code
# Create an instance of a model and variables
mdl = Model(name='max_vars')
x = {i: mdl.binary_var(name='x_{0}'.format(i)) for i in range(1,5)}
# Objective function
max_vars_func = mdl.sum(x[i] for i in range(1,5))
mdl.maximize(max_vars_func)
# Constraints
mdl.add_constraint(mdl.sum(i*x[i] for i in range(1,5)) == 3)
print(mdl.export_to_string())
###Output
\ This file has been generated by DOcplex
\ ENCODING=ISO-8859-1
\Problem name: max_vars
Maximize
obj: x_1 + x_2 + x_3 + x_4
Subject To
c1: x_1 + 2 x_2 + 3 x_3 + 4 x_4 = 3
Bounds
0 <= x_1 <= 1
0 <= x_2 <= 1
0 <= x_3 <= 1
0 <= x_4 <= 1
Binaries
x_1 x_2 x_3 x_4
End
###Markdown
Generate an Ising Hamiltonian from the model using ```docplex.get_qubitops(mdl)```
###Code
qubitOp, offset = docplex.get_qubitops(mdl)
###Output
_____no_output_____
###Markdown
Checking that the full Hamiltonian gives the right cost
###Code
ee = ExactEigensolver(qubitOp, k=1)
result = ee.run()
print('energy:', result['energy'])
print('objective:', result['energy'] + offset)
x = docplex.sample_most_likely(result['eigvecs'][0])
print('solution:', x)
###Output
energy: -57.5
objective: -2.0
solution: [1. 1. 0. 0.]
###Markdown
Running it on a quantum computer

We run the optimization routine using a feedback loop with a quantum computer that uses trial functions built with Y single-qubit rotations, $U_\mathrm{single}(\theta) = \prod_{i=1}^n Y(\theta_{i})$, and entangler steps $U_\mathrm{entangler}$.
###Code
seed = 10598
spsa = SPSA(max_trials=300)
ry = RY(qubitOp.num_qubits, depth=5, entanglement='linear')
vqe = VQE(qubitOp, ry, spsa, 'matrix')
backend = BasicAer.get_backend('statevector_simulator')
quantum_instance = QuantumInstance(backend, seed=seed, seed_transpiler=seed)
result = vqe.run(quantum_instance)
"""declarative approach
algorithm_cfg = {
'name': 'VQE',
'operator_mode': 'matrix'
}
optimizer_cfg = {
'name': 'SPSA',
'max_trials': 300
}
var_form_cfg = {
'name': 'RY',
'depth': 5,
'entanglement': 'linear'
}
params = {
'problem': {'name': 'ising', 'random_seed': seed},
'algorithm': algorithm_cfg,
'optimizer': optimizer_cfg,
'variational_form': var_form_cfg,
'backend': {'provider': 'qiskit.BasicAer', 'name': 'statevector_simulator'}
}
result = run_algorithm(params, algo_input)
"""
x = docplex.sample_most_likely(result['eigvecs'][0])
print('energy:', result['energy'])
print('time:', result['eval_time'])
print('solution objective:', result['energy'] + offset)
print('solution:', x)
###Output
energy: -57.16261789728296
time: 10.59960389137268
solution objective: -1.6626178972829635
solution: [1. 1. 0. 0.]
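###Markdown
Note that `sample_most_likely` recovers the exact optimum $x = [1, 1, 0, 0]$ even though the variational energy (-57.16) has not fully converged to the exact ground-state energy (-57.5) found earlier: the most likely bitstring can be correct before the energy estimate is tight.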
|
MATH7370/Research3/Research3R.ipynb | ###Markdown
Find the intrinsic dimension from data

Read data generated by MATLAB
###Code
import pandas as pd
import numpy as np
from sklearn.neighbors import KDTree
from numpy.linalg import svd
import matplotlib.pyplot as plt
xyz = pd.read_csv('xyz.csv', header=None)
xyz.head(2)
def get_nearest_k_coord(total_data: pd.DataFrame, query_point: pd.DataFrame, k: int):
    """
    Find the coordinates of the k nearest neighbors of query_point in total_data.
    total_data: 10000 by 3
    query_point: 1 by 3
    k: the number of nearest neighbors (including the query point itself)
    """
    kdt = KDTree(total_data, leaf_size=30, metric='euclidean')
    nn_inx = kdt.query(query_point, k=k, return_distance=False)
    return total_data.iloc[nn_inx[0], :]
def get_s(cluster: pd.DataFrame):
"""
Get the singular values from the points of a cluster
"""
cluster = cluster - cluster.mean() # center each column (mean = 0)
u, s, vh = svd(cluster, full_matrices=True)
return s
###Output
_____no_output_____
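###Markdown
Before applying this to the MATLAB data, here is a minimal synthetic sketch (not part of the original analysis): for points lying close to a 2-D plane embedded in 3-D, `get_s` should return two dominant singular values and a third one near zero.
###Code
# Illustrative check on synthetic data: a noisy plane in 3-D.
rng = np.random.default_rng(0)
basis = np.array([[1.0, 0.0, 0.5],
                  [0.0, 1.0, -0.5]])          # spans a 2-D plane in 3-D
plane = rng.normal(size=(50, 2)) @ basis
plane += 0.01 * rng.normal(size=plane.shape)  # small out-of-plane noise
s = get_s(pd.DataFrame(plane))
print(s / s.sum() * 100)                      # third percentage should be tiny
###Output
_____no_output_____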
###Markdown
k = 5
###Code
cluster2s = {} # singular values of each cluster
xyz2 = xyz.copy()
while xyz2.shape[0] != 0:
p = xyz2.sample(1, random_state=42) # select one point randomly
cluster_of_k = get_nearest_k_coord(total_data=xyz2, query_point=p, k=5)
s = get_s(cluster=cluster_of_k)
# print(s)
cluster2s[str(p.index.to_list()[0])] = s
xyz2 = xyz2.loc[~xyz2.index.isin(cluster_of_k.index), :].copy() # remove points in cluster_of_k
cluster2s_df = pd.DataFrame.from_dict(cluster2s, columns=['S1', 'S2', 'S3'], orient='index')
print('The shape of cluster2s_df: ', cluster2s_df.shape)
cluster2s_df.head(2)
###Output
The shape of cluster2s_df: (2000, 3)
###Markdown
The percent variability explained by each singular value
###Code
total_var = cluster2s_df.iloc[:, 0:3].sum(axis=1)
cluster2s_df['explained_var_S1 (%)'] = cluster2s_df['S1'] / total_var * 100
cluster2s_df['explained_var_S2 (%)'] = cluster2s_df['S2'] / total_var * 100
cluster2s_df['explained_var_S3 (%)'] = cluster2s_df['S3'] / total_var * 100
# plot the percent variability explained for each cluster
cluster2s_df.iloc[:, -3:].plot(figsize=(12, 8))
plt.axhline(5, linestyle='--')
plt.xlabel('The index of each cluster')
plt.ylabel('Percent variability explained')
plt.savefig('percent_explained_var_each_cluster.png', dpi=200)
# set 5% as the threshold of percent variability explained to estimate the intrinsic dimension after SVD
# if a singular value explains <= 5% of the variability, that dimension contributes little in this cluster
dim_after_svd = np.sum(cluster2s_df.iloc[:, -3:] > 5, axis=1)
plt.figure(figsize=(8, 6))
plt.hist(dim_after_svd)
plt.xlabel('Dimension of each cluster after SVD')
plt.ylabel('The number of clusters')
plt.savefig('hist_of_dim_each_cluster.png', dpi=200)
dim_after_svd.mean()
print(sum(dim_after_svd == 0))
print(sum(dim_after_svd == 1))
print(sum(dim_after_svd == 2))
print(sum(dim_after_svd == 3))
###Output
0
282
1572
146
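###Markdown
Reading off the counts above: with k = 5, 1572 of the 2000 clusters keep two significant singular values, so the mean estimated dimension is (282*1 + 1572*2 + 146*3)/2000 ≈ 1.93 — the data looks locally two-dimensional.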
###Markdown
k = 20
###Code
cluster2s = {} # singular values of each cluster
xyz2 = xyz.copy()
while xyz2.shape[0] != 0:
p = xyz2.sample(1, random_state=42) # select one point randomly
cluster_of_k = get_nearest_k_coord(total_data=xyz2, query_point=p, k=20)
s = get_s(cluster=cluster_of_k)
# print(s)
cluster2s[str(p.index.to_list()[0])] = s
xyz2 = xyz2.loc[~xyz2.index.isin(cluster_of_k.index), :].copy() # remove points in cluster_of_k
cluster2s_df = pd.DataFrame.from_dict(cluster2s, columns=['S1', 'S2', 'S3'], orient='index')
print('The shape of cluster2s_df: ', cluster2s_df.shape)
cluster2s_df.head(2)
###Output
The shape of cluster2s_df: (500, 3)
###Markdown
The percent variability explained by each singular value
###Code
total_var = cluster2s_df.iloc[:, 0:3].sum(axis=1)
cluster2s_df['explained_var_S1 (%)'] = cluster2s_df['S1'] / total_var * 100
cluster2s_df['explained_var_S2 (%)'] = cluster2s_df['S2'] / total_var * 100
cluster2s_df['explained_var_S3 (%)'] = cluster2s_df['S3'] / total_var * 100
# plot the percent variability explained for each cluster
cluster2s_df.iloc[:, -3:].plot(figsize=(12, 8))
plt.axhline(5, linestyle='--')
plt.xlabel('The index of each cluster')
plt.ylabel('Percent variability explained')
# plt.savefig('percent_explained_var_each_cluster.png', dpi=200)
# set 5% as the threshold of percent variability explained to estimate the intrinsic dimension after SVD
# if a singular value explains <= 5% of the variability, that dimension contributes little in this cluster
dim_after_svd = np.sum(cluster2s_df.iloc[:, -3:] > 5, axis=1)
plt.figure(figsize=(8, 6))
plt.hist(dim_after_svd)
plt.xlabel('Dimension of each cluster after SVD')
plt.ylabel('The number of clusters')
# plt.savefig('hist_of_dim_each_cluster.png', dpi=200)
dim_after_svd.mean()
print(sum(dim_after_svd == 0))
print(sum(dim_after_svd == 1))
print(sum(dim_after_svd == 2))
print(sum(dim_after_svd == 3))
###Output
0
2
291
207
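###Markdown
With k = 20 the estimate shifts upward: (2*1 + 291*2 + 207*3)/500 ≈ 2.41. Larger neighborhoods are less locally flat, so curvature likely leaks variance into the third singular value and inflates the dimension estimate.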
|
Initiating Projects/Python/Starting a Regression Project.ipynb | ###Markdown
Starting a Regression Project

**Author**: Thodoris Petropoulos. **Label**: Modeling Options

Scope

The scope of this notebook is to provide instructions on how to initiate a DataRobot project for a numerical target using the Python API.

Background

Regression analysis is the task of predicting the value of a continuous target column. Examples:

- Predicting the lifetime value (LTV) of a customer.
- Predicting player performance.
- Predicting house prices.

The target column will always be a continuous numeric variable, even though regression can also be applicable to a discrete, high-cardinality variable.

Requirements

- Python version 3.7.3
- DataRobot API version 2.19.0

Small adjustments might be needed depending on the Python version and DataRobot API version you are using. Full documentation of the Python package can be found here: https://datarobot-public-api-client.readthedocs-hosted.com/en/

Import Libraries
###Code
import datarobot as dr
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Import Dataset

We will be loading the Boston Housing dataset, a simple regression dataset available through scikit-learn.
###Code
from sklearn.datasets import load_boston
data = load_boston()
df = pd.DataFrame(np.c_[data['data'], data['target']],
columns= np.append(data['feature_names'], ['target']))
df.head()
###Output
_____no_output_____
###Markdown
Connect to DataRobot

Connect to DataRobot using your credentials and your endpoint. Change the input below accordingly.
###Code
dr.Client(token='YOUR_API_KEY',
endpoint='YOUR_DATAROBOT_HOSTNAME')
###Output
_____no_output_____
###Markdown
Initiate Project

I will be initiating a project by calling the method dr.Project.start:

* project_name: Name of the project
* sourcedata: Data source (path to a file or a pandas DataFrame)
* target: String with the target variable name
* worker_count: Number of workers to use
* metric: Optimisation metric to use
###Code
project = dr.Project.start(project_name='MyRegressionProject',
sourcedata= df,
target='target')
project.wait_for_autopilot() #Wait for autopilot to complete
###Output
_____no_output_____
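###Markdown
A possible follow-up once autopilot has finished (a sketch only — method names such as `get_models`, `upload_dataset` and `request_predictions` are taken from the DataRobot Python client documentation as best I recall; verify them against the client version you are using):
###Code
# Hedged sketch: retrieve the top leaderboard model and score a dataset with it.
best_model = project.get_models()[0]               # models come back ranked
pred_dataset = project.upload_dataset(df)          # register data for scoring
pred_job = best_model.request_predictions(pred_dataset.id)
predictions = pred_job.get_result_when_complete()  # blocks until ready
predictions.head()
###Output
_____no_output_____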
Titanic/Titanic.ipynb | ###Markdown
Notebook to process Kaggle's Titanic dataset

This notebook uses the dataset from Kaggle's Titanic competition (https://www.kaggle.com/c/titanic) to train decision-tree and random-forest classifiers and produce predictions for the test dataset.

Author: drublackberry (GitHub)

Configuration

User configuration parameters
###Code
train_size = 80 # % of the training set used for training
N_MonteCarlo = 50 # number of runs for the monte-carlo analysis
###Output
_____no_output_____
###Markdown
Data pre-processing and exploratory data analysis

Gather the train dataset, convert features to numerical values, and plot the values in stacked histograms to get a feeling for the importance of the features.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import pydot
from IPython.display import Image, display
# Load into CSV
myRawTrainDf = pd.read_csv('train.csv', index_col=0)
myRawTestDf = pd.read_csv('test.csv', index_col=0)
# Add survived column before merging
myRawTestDf['Survived'] = np.nan
myRawTestDf = myRawTestDf[myRawTrainDf.columns]
# Merge
myRawDf = myRawTrainDf.append(myRawTestDf)
###Output
_____no_output_____
###Markdown
Feature exploration

This chapter explores the weight of the features with respect to the survival rate, and the possibility of engineering new features.

Features on the name

The name itself can contain some features of interest: looking closely, the pattern of ', ' and '.' allows us to retrieve the title and the surname.
###Code
# Inspect the names to see if something can be done
myRawDf['Name'].head(10)
###Output
_____no_output_____
###Markdown
Let's create two extra columns with the title and the surname to be used as features.
###Code
import re
def getTitle (aName):
'''Finds the title in the name'''
myPosStart = aName.find(',')
myPosEnd = aName.find('.')
return re.sub('[^A-Za-z0-9]+', '', aName[myPosStart:myPosEnd])
def getSurname (aName):
'''Finds the surname in the name'''
myPos = aName.find(',')
return re.sub('[^A-Za-z0-9]+', '', aName[:myPos])
myInDf = myRawDf.copy()
myInDf['Title'] = [getTitle(x) for x in myInDf['Name']]
myInDf['Surname'] = [getSurname(x) for x in myInDf['Name']]
# Get a sample
myInDf.head(3).append(myInDf.tail(3))
###Output
_____no_output_____
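###Markdown
A quick illustrative check of the two helpers on a sample name:
###Code
print(getTitle('Braund, Mr. Owen Harris'))
print(getSurname('Braund, Mr. Owen Harris'))
###Output
Mr
Braund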
###Markdown
In order to be able to plot and perform regressions (if needed) one can assign a number to each string for each feature.
###Code
def assignNumericalType (aSeries):
'''Assigns a numerical type to string values'''
val = aSeries.unique()
myDict = {val[x]:x for x in range(len(val))}
myDict[np.nan] = np.nan # Ensure nan stays nan
aOut = [myDict[x] for x in aSeries]
return aOut
# Convert strings to numerical type
for myCol in myInDf.columns:
if type(myInDf[myCol].dropna().iloc[0])==str:
myInDf[myCol] = assignNumericalType(myInDf[myCol])
# Get a sample
myInDf.head(3).append(myInDf.tail(3))
# Exploratory data analysis
for myFeature in myInDf.columns:
if myFeature != 'Survived' and (len(myInDf[myFeature]) > len(myInDf[myFeature].unique())):
myInDf.pivot(columns='Survived', values=myFeature).plot(kind='hist', stacked=True, bins=20)
plt.title(myFeature)
plt.show()
# Do a correlation plot
cax = plt.matshow(myInDf.corr().abs())
plt.colorbar(cax)
plt.xticks(range(len(myInDf.columns)), myInDf.columns, rotation='vertical')
plt.yticks(range(len(myInDf.columns)), myInDf.columns, rotation='horizontal')
plt.show()
###Output
_____no_output_____
###Markdown
Conclusions of the exploratory data analysis

* Passengers aged 20-40 are more likely to die.
* Babies and infants are more likely to survive.
* Most passengers were in the 20-40 range, travelling with children (i.e. families).
* Older people are more likely to survive.
* Passengers who paid lower fares are more likely to die.
* People with more than 3 siblings are likely to die.
* Travelling with no siblings meant a higher chance of survival.
* People not related to children are more likely to die.
* There is a clear dependence on passenger class.
* Males are more likely to die.
* There is a certain dependence on the cabin, port and ticket.
* Survival correlates strongly with sex and pclass, and more weakly with parch and embarked.
* Title is a strong feature for survival.

Missing data

A part of the dataset is missing. How many missing values do we have in the training set per feature?
###Code
myMind = pd.MultiIndex.from_product([['Training', 'Test',],['Missing', 'Total']])
myMissingDf = pd.DataFrame(columns=myMind, index=myInDf.columns)
myMissingDf['Test', 'Missing'] = myInDf[myInDf['Survived'].isnull()].isnull().sum()
myMissingDf['Test', 'Total'] = myInDf[myInDf['Survived'].isnull()].isnull().count()
myMissingDf['Training', 'Missing'] = myInDf[myInDf['Survived'].notnull()].isnull().sum()
myMissingDf['Training', 'Total'] = myInDf[myInDf['Survived'].notnull()].isnull().count()
myMissingDf
###Output
_____no_output_____
###Markdown
Given the results one can conclude that:

* The age is missing for a number of passengers, but the proportion missing is small enough that the feature remains usable.
* The cabin is missing for a large part of the dataset; if used as a feature it will carry little weight.
* Two passengers are missing the port where they embarked in the training set; the feature should still be usable.
* One passenger is missing the fare in the test set.

The age feature is especially interesting given its high correlation with survival.

Predicting age

Age is a key feature that is missing for a considerable part of the dataset; however, it can be inferred from other features. The predicted age can then be used to feed the decision tree.
###Code
myInDf.corr()[[x for x in myInDf.columns if x != 'Age']].loc['Age'].plot(kind='bar')
plt.title('Correlation coefficient of Age with other features')
plt.show()
# Recall the distribution of the age
myInDf.pivot(columns='Survived', values='Age').plot(kind='hist', stacked=True, bins=20)
plt.title('Age before interpolation of nan')
plt.show()
# Predict the age with a linear regression
from sklearn import linear_model
def predictValueByLinearRegression (aInDf, aFeatureToUse, aFeatureToPredict):
myCols = aFeatureToUse + [aFeatureToPredict] # avoid mutating the caller's list
myDf = aInDf[myCols]
# Train
myX = myDf.dropna()[[x for x in myDf.columns if x != aFeatureToPredict]]
myY = myDf.dropna()[aFeatureToPredict]
myLR = linear_model.LinearRegression()
myLR.fit(myX, myY)
# Predict
myX = myDf[myDf[aFeatureToPredict].isnull()][[x for x in myDf.columns if x != aFeatureToPredict]]
return myLR.predict(myX)
# Assign
myInDf.loc[myInDf.isnull()['Age'], 'Age'] = predictValueByLinearRegression(myInDf, ['Sex', 'SibSp', 'Parch', 'Fare', 'Title'], 'Age')
# Check the histogram again to see the distribution
myInDf.pivot(columns='Survived', values='Age').plot(kind='hist', stacked=True, bins=20)
plt.title('Age after linear regression')
plt.show()
###Output
_____no_output_____
###Markdown
Missing embarked values

There are two passengers with NaN for Embarked. Embarked is a feature that holds a certain correlation with the survival rate, so it is better to keep it. Let's work on the raw data to see the alphanumerical values.
###Code
myRawDf[myRawDf['Embarked'].isnull()]
###Output
_____no_output_____
###Markdown
Both of them were in Cabin B28. At which port did passengers with nearby tickets board? And is it OK to assume that tickets were sold in order?
###Code
# Get passengers with similar tickets
myRawDf[myRawDf['Ticket'].map(lambda x: '1135' in x or '1136' in x)]
###Output
_____no_output_____
###Markdown
One can see that a number of passengers with a similar fare price, a cabin in the same section and a ticket number close to the missing ones embarked in 'C'. Let's assume that is their port of origin.
###Code
# Assign the same embarkation port as a comparable passenger (index 55)
myInDf.loc[myInDf['Embarked'].isnull(), 'Embarked'] = myInDf.loc[55]['Embarked']
# Check
myInDf.loc[[62,830]]
###Output
_____no_output_____
###Markdown
Missing fare on the test set

One passenger has a missing fare on the test set.
###Code
myRawDf[myRawDf['Fare'].isnull()]
###Output
_____no_output_____
###Markdown
Let's look at how the fare correlates with the pclass, sex, age and embarkation port.
###Code
myInDf.corr()[[x for x in myInDf.columns if x != 'Fare']].loc['Fare'].plot(kind='bar')
plt.title('Correlation coefficient of Fare with other features')
plt.show()
# Run a linear regression with the features that are most correlated
myInDf.loc[myInDf['Fare'].isnull(), 'Fare'] = predictValueByLinearRegression(myInDf, ['Pclass', 'Sex', 'Age', 'SibSp', 'Parch', 'Title'], 'Fare')
# Check the final result
myInDf.loc[1044]
###Output
_____no_output_____
###Markdown
Final check

Is there any missing data left in our training set?
###Code
myInDf.isnull().sum()
###Output
_____no_output_____
###Markdown
The 'Cabin' feature is largely missing and difficult to infer from other features, so for the moment let's just discard it.

Classification using decision trees
###Code
from sklearn import tree
from sklearn.metrics import precision_score, recall_score, f1_score
def splitDataset (aDf, aFrac):
'''Splits a Df in a training and a validation dataset randomly'''
aTrainDf = aDf.sample(frac=aFrac/100.)
myValInd = [ind for ind in aDf.index if ind not in aTrainDf.index]
aValDf = aDf.loc[myValInd]
# Create X and Y datasets
aXtrain = aTrainDf[[x for x in aTrainDf.columns if x!='Survived']]
aYtrain = aTrainDf['Survived']
aXval = aValDf[[x for x in aTrainDf.columns if x!='Survived']]
aYval = aValDf['Survived']
return aXtrain, aYtrain, aXval, aYval
def assessPerformance (aX, aY, aClf):
'''Computes precision, recall and F1 for a fitted classifier on the given data'''
myYpred = aClf.predict(aX)
aPrecision = precision_score(aY, myYpred)
aRecall = recall_score(aY, myYpred)
aF1score = f1_score(aY, myYpred)
return aPrecision, aRecall, aF1score
def trainPredictAndAnalyzeDecisionTree (aDf, aDepth=None, draw=False):
# Build a decision tree classifier
myXtrain, myYtrain, myXval, myYval = splitDataset (aDf, train_size)
myClf = tree.DecisionTreeClassifier(max_depth=aDepth)
myClf = myClf.fit(myXtrain, myYtrain)
aTrainPrecision, aTrainRecall, aTrainF1 = assessPerformance(myXtrain, myYtrain, myClf)
aValPrecision, aValRecall, aValF1 = assessPerformance(myXval, myYval, myClf)
if draw:
# Draw the decision tree
tree.export_graphviz(myClf, feature_names=myXtrain.columns, out_file='tree.dot')
(myGraph,) = pydot.graph_from_dot_file('tree.dot')
myPlt = Image(myGraph.create_png())
myGraph.write_png('tree.png')
display(myPlt)
return aTrainPrecision, aTrainRecall, aTrainF1, aValPrecision, aValRecall, aValF1
def runMonteCarlo (aDf, aF, *args):
myPerfDf = pd.DataFrame(columns=['Train Precision', 'Train Recall', 'Train F1', 'Val Precision', 'Val Recall', 'Val F1'])
for i in range(N_MonteCarlo):
myPerfDf.loc[i] = aF(aDf, *args)
return myPerfDf
# Calling and drawing a precision tree with max_depth
# Note that this call removes all the nan (large part of the dataset) and just displays a decision tree for illustration
foo = trainPredictAndAnalyzeDecisionTree(myInDf.dropna(), aDepth=3, draw=True)
# Do not use features which have any value missing
myCompleteFeatures = [x for x in myInDf.columns if myInDf.isnull().sum()[x] ==0]
myCompleteFeatures.append('Survived')
myStats = runMonteCarlo(myInDf[myCompleteFeatures].dropna(), trainPredictAndAnalyzeDecisionTree).describe() # capture for later
myStats
###Output
_____no_output_____
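###Markdown
As an aside (a sketch, not in the original notebook): the hand-rolled `splitDataset` above could equivalently be replaced by scikit-learn's built-in helper, keeping in mind that this notebook expresses the training size as a percentage rather than a fraction.
###Code
# Equivalent split with scikit-learn; train_size expects a fraction, not a percent.
from sklearn.model_selection import train_test_split

myData = myInDf[myCompleteFeatures].dropna()
myXtr, myXval, myYtr, myYval = train_test_split(
    myData.drop(columns='Survived'), myData['Survived'],
    train_size=train_size / 100.)
###Output
_____no_output_____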
###Markdown
Using random forests

Random forests can be used on this dataset to boost performance and lower the variance (overfitting) of the classification.
###Code
from sklearn.ensemble import RandomForestClassifier
def trainPredictAndAnalyzeRandomForest (aDf, *args):
if len(args)>0:
myNfeat = args[0]
else:
myNfeat = 'auto'
# Build a decision tree classifier
myXtrain, myYtrain, myXval, myYval = splitDataset (aDf, train_size)
myClf = RandomForestClassifier(max_features=myNfeat)
myClf = myClf.fit(myXtrain, myYtrain)
aTrainPrecision, aTrainRecall, aTrainF1 = assessPerformance(myXtrain, myYtrain, myClf)
aValPrecision, aValRecall, aValF1 = assessPerformance(myXval, myYval, myClf)
return aTrainPrecision, aTrainRecall, aTrainF1, aValPrecision, aValRecall, aValF1
runMonteCarlo(myInDf[myCompleteFeatures].dropna(), trainPredictAndAnalyzeRandomForest).describe()
###Output
_____no_output_____
###Markdown
Looking at the cross-validation results, there is still room for improvement. Let's vary the number of features used in the random forest to see if we can boost performance.
###Code
# Create a MultiIndex dataframe to store the stats of all the different runs
myNfeat = np.arange(2,11)
myIndex = pd.MultiIndex.from_product([myNfeat, myStats.columns])
myDf = pd.DataFrame(index=myIndex, columns=myStats.index)
for myN in myNfeat:
myDf.loc[myN,:].iloc[:] = runMonteCarlo(myInDf[myCompleteFeatures].dropna(), trainPredictAndAnalyzeRandomForest, myN).describe().iloc[:].transpose()
for i in ['Val Precision', 'Val Recall', 'Val F1']:
myDf.reset_index().set_index('level_1').loc[i][['level_0', 'min', 'max', 'mean']].set_index('level_0').plot()
plt.title(i)
plt.xlabel('max_features')
plt.show()
###Output
_____no_output_____
###Markdown
Seems that max_features does not have a huge effect and the data is not overfit. Let's assume the default value of sklearn (sqrt(num_features)) Using the full dataset and finding a solutionNow we will use the full training dataset for training (no cross-validation) and we will train a random forest classifier
###Code
# Train the classifier
myUsedFeatures = [x for x in myCompleteFeatures if x != 'Survived']
myXtrain = myInDf[myCompleteFeatures].dropna()[myUsedFeatures]
myYtrain = myInDf['Survived'].dropna()
myClf = RandomForestClassifier()
myClf = myClf.fit(myXtrain, myYtrain)
aTrainPrecision, aTrainRecall, aTrainF1 = assessPerformance(myXtrain, myYtrain, myClf)
print('Precision on full training set:', aTrainPrecision)
print('Recall on full training set:', aTrainRecall)
print('F1 on full training set:', aTrainF1)
# Run the prediction on the test set
myXtest = myInDf.loc[myInDf['Survived'].isnull(), myUsedFeatures]
myOut = pd.Series(myClf.predict(myXtest), index=myXtest.index)
myOut = myOut.astype(int)
myOut.to_csv('solution.csv', header=['Survived'])
myOut.head(5)
###Output
_____no_output_____
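###Markdown
A small sketch (not in the original notebook): with the forest now fitted on the full training set, its built-in `feature_importances_` attribute gives a quick view of which features drive the predictions.
###Code
# Illustrative: rank the features by the fitted forest's importances.
myImp = pd.Series(myClf.feature_importances_, index=myUsedFeatures)
myImp.sort_values(ascending=False).plot(kind='bar')
plt.title('Random forest feature importances')
plt.show()
###Output
_____no_output_____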
###Markdown
Titanic
###Code
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
%matplotlib inline
# Display all outputs, not only the last one (default)
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
train = pd.read_csv("train.csv")
train.head()
test = pd.read_csv("test.csv")
test.head()
train.info()
train['Age'] = train['Age'].fillna(-1)
train.info()
train['Cabin'] = train['Cabin'].fillna(-1)
train['Embarked'] = train['Embarked'].fillna(-1)
train.info()
train['Sex'].nunique()
train['Sex'].unique()
train['Sex'].value_counts()
# LEGEND
# 0 - male
# 1 - female
train['Sex_cat'] = train['Sex'].factorize()[0]
train['Sex_cat'].nunique()
train['Sex_cat'].unique()
train['Sex_cat'].value_counts()
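# Note: factorize assigns codes in order of first appearance, so the legend
# above (0 = male, 1 = female) holds because the first passenger in train.csv
# is male. The mapping can be inspected directly:
print(train['Sex'].factorize()[1])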
#/////////////////////////////////////////////
# Display basic info for a feature
def feature_basic(vfeature):
print(train[vfeature].count())
print(train[vfeature].nunique())
print(train[vfeature].unique())
print(train[vfeature].value_counts())
# Convert a text feature into a numeric categorical feature
def cat_for_num_feat(vcategorical, vstring):
train[vcategorical] = train[vstring].factorize()[0]
print(train[vcategorical].count())
print(train[vcategorical].nunique())
print(train[vcategorical].unique())
print(train[vcategorical].value_counts())
#/////////////////////////////////////////////
train.info()
feature_basic('Ticket')
cat_for_num_feat('Ticket_cat','Ticket')
###Output
891
681
['A/5 21171' 'PC 17599' 'STON/O2. 3101282' '113803' '373450' '330877'
'17463' '349909' '347742' '237736' 'PP 9549' '113783' 'A/5. 2151' '347082'
'350406' '248706' '382652' '244373' '345763' '2649' '239865' '248698'
'330923' '113788' '347077' '2631' '19950' '330959' '349216' 'PC 17601'
'PC 17569' '335677' 'C.A. 24579' 'PC 17604' '113789' '2677' 'A./5. 2152'
'345764' '2651' '7546' '11668' '349253' 'SC/Paris 2123' '330958'
'S.C./A.4. 23567' '370371' '14311' '2662' '349237' '3101295' 'A/4. 39886'
'PC 17572' '2926' '113509' '19947' 'C.A. 31026' '2697' 'C.A. 34651'
'CA 2144' '2669' '113572' '36973' '347088' 'PC 17605' '2661' 'C.A. 29395'
'S.P. 3464' '3101281' '315151' 'C.A. 33111' 'S.O.C. 14879' '2680' '1601'
'348123' '349208' '374746' '248738' '364516' '345767' '345779' '330932'
'113059' 'SO/C 14885' '3101278' 'W./C. 6608' 'SOTON/OQ 392086' '343275'
'343276' '347466' 'W.E.P. 5734' 'C.A. 2315' '364500' '374910' 'PC 17754'
'PC 17759' '231919' '244367' '349245' '349215' '35281' '7540' '3101276'
'349207' '343120' '312991' '349249' '371110' '110465' '2665' '324669'
'4136' '2627' 'STON/O 2. 3101294' '370369' 'PC 17558' 'A4. 54510' '27267'
'370372' 'C 17369' '2668' '347061' '349241' 'SOTON/O.Q. 3101307'
'A/5. 3337' '228414' 'C.A. 29178' 'SC/PARIS 2133' '11752' '7534'
'PC 17593' '2678' '347081' 'STON/O2. 3101279' '365222' '231945'
'C.A. 33112' '350043' '230080' '244310' 'S.O.P. 1166' '113776'
'A.5. 11206' 'A/5. 851' 'Fa 265302' 'PC 17597' '35851' 'SOTON/OQ 392090'
'315037' 'CA. 2343' '371362' 'C.A. 33595' '347068' '315093' '363291'
'113505' 'PC 17318' '111240' 'STON/O 2. 3101280' '17764' '350404' '4133'
'PC 17595' '250653' 'LINE' 'SC/PARIS 2131' '230136' '315153' '113767'
'370365' '111428' '364849' '349247' '234604' '28424' '350046' 'PC 17610'
'368703' '4579' '370370' '248747' '345770' '3101264' '2628' 'A/5 3540'
'347054' '2699' '367231' '112277' 'SOTON/O.Q. 3101311' 'F.C.C. 13528'
'A/5 21174' '250646' '367229' '35273' 'STON/O2. 3101283' '243847' '11813'
'W/C 14208' 'SOTON/OQ 392089' '220367' '21440' '349234' '19943' 'PP 4348'
'SW/PP 751' 'A/5 21173' '236171' '347067' '237442' 'C.A. 29566'
'W./C. 6609' '26707' 'C.A. 31921' '28665' 'SCO/W 1585' '367230'
'W./C. 14263' 'STON/O 2. 3101275' '2694' '19928' '347071' '250649' '11751'
'244252' '362316' '113514' 'A/5. 3336' '370129' '2650' 'PC 17585' '110152'
'PC 17755' '230433' '384461' '110413' '112059' '382649' 'C.A. 17248'
'347083' 'PC 17582' 'PC 17760' '113798' '250644' 'PC 17596' '370375'
'13502' '347073' '239853' 'C.A. 2673' '336439' '347464' '345778'
'A/5. 10482' '113056' '349239' '345774' '349206' '237798' '370373' '19877'
'11967' 'SC/Paris 2163' '349236' '349233' 'PC 17612' '2693' '113781'
'19988' '9234' '367226' '226593' 'A/5 2466' '17421' 'PC 17758' 'P/PP 3381'
'PC 17485' '11767' 'PC 17608' '250651' '349243' 'F.C.C. 13529' '347470'
'29011' '36928' '16966' 'A/5 21172' '349219' '234818' '345364' '28551'
'111361' '113043' 'PC 17611' '349225' '7598' '113784' '248740' '244361'
'229236' '248733' '31418' '386525' 'C.A. 37671' '315088' '7267' '113510'
'2695' '2647' '345783' '237671' '330931' '330980' 'SC/PARIS 2167' '2691'
'SOTON/O.Q. 3101310' 'C 7076' '110813' '2626' '14313' 'PC 17477' '11765'
'3101267' '323951' 'C 7077' '113503' '2648' '347069' 'PC 17757' '2653'
'STON/O 2. 3101293' '349227' '27849' '367655' 'SC 1748' '113760' '350034'
'3101277' '350052' '350407' '28403' '244278' '240929' 'STON/O 2. 3101289'
'341826' '4137' '315096' '28664' '347064' '29106' '312992' '349222'
'394140' 'STON/O 2. 3101269' '343095' '28220' '250652' '28228' '345773'
'349254' 'A/5. 13032' '315082' '347080' 'A/4. 34244' '2003' '250655'
'364851' 'SOTON/O.Q. 392078' '110564' '376564' 'SC/AH 3085'
'STON/O 2. 3101274' '13507' 'C.A. 18723' '345769' '347076' '230434'
'65306' '33638' '113794' '2666' '113786' '65303' '113051' '17453'
'A/5 2817' '349240' '13509' '17464' 'F.C.C. 13531' '371060' '19952'
'364506' '111320' '234360' 'A/S 2816' 'SOTON/O.Q. 3101306' '113792'
'36209' '323592' '315089' 'SC/AH Basle 541' '7553' '31027' '3460' '350060'
'3101298' '239854' 'A/5 3594' '4134' '11771' 'A.5. 18509' '65304'
'SOTON/OQ 3101317' '113787' 'PC 17609' 'A/4 45380' '36947' 'C.A. 6212'
'350035' '315086' '364846' '330909' '4135' '26360' '111427' 'C 4001'
'382651' 'SOTON/OQ 3101316' 'PC 17473' 'PC 17603' '349209' '36967'
'C.A. 34260' '226875' '349242' '12749' '349252' '2624' '2700' '367232'
'W./C. 14258' 'PC 17483' '3101296' '29104' '2641' '2690' '315084' '113050'
'PC 17761' '364498' '13568' 'WE/P 5735' '2908' '693' 'SC/PARIS 2146'
'244358' '330979' '2620' '347085' '113807' '11755' '345572' '372622'
'349251' '218629' 'SOTON/OQ 392082' 'SOTON/O.Q. 392087' 'A/4 48871'
'349205' '2686' '350417' 'S.W./PP 752' '11769' 'PC 17474' '14312'
'A/4. 20589' '358585' '243880' '2689' 'STON/O 2. 3101286' '237789' '13049'
'3411' '237565' '13567' '14973' 'A./5. 3235' 'STON/O 2. 3101273'
'A/5 3902' '364848' 'SC/AH 29037' '248727' '2664' '349214' '113796'
'364511' '111426' '349910' '349246' '113804' 'SOTON/O.Q. 3101305' '370377'
'364512' '220845' '31028' '2659' '11753' '350029' '54636' '36963' '219533'
'349224' '334912' '27042' '347743' '13214' '112052' '237668'
'STON/O 2. 3101292' '350050' '349231' '13213' 'S.O./P.P. 751' 'CA. 2314'
'349221' '8475' '330919' '365226' '349223' '29751' '2623' '5727' '349210'
'STON/O 2. 3101285' '234686' '312993' 'A/5 3536' '19996' '29750'
'F.C. 12750' 'C.A. 24580' '244270' '239856' '349912' '342826' '4138'
'330935' '6563' '349228' '350036' '24160' '17474' '349256' '2672' '113800'
'248731' '363592' '35852' '348121' 'PC 17475' '36864' '350025' '223596'
'PC 17476' 'PC 17482' '113028' '7545' '250647' '348124' '34218' '36568'
'347062' '350048' '12233' '250643' '113806' '315094' '36866' '236853'
'STON/O2. 3101271' '239855' '28425' '233639' '349201' '349218' '16988'
'376566' 'STON/O 2. 3101288' '250648' '113773' '335097' '29103' '392096'
'345780' '349204' '350042' '29108' '363294' 'SOTON/O2 3101272' '2663'
'347074' '112379' '364850' '8471' '345781' '350047' 'S.O./P.P. 3' '2674'
'29105' '347078' '383121' '36865' '2687' '113501' 'W./C. 6607'
'SOTON/O.Q. 3101312' '374887' '3101265' '12460' 'PC 17600' '349203'
'28213' '17465' '349244' '2685' '2625' '347089' '347063' '112050' '347087'
'248723' '3474' '28206' '364499' '112058' 'STON/O2. 3101290'
'S.C./PARIS 2079' 'C 7075' '315098' '19972' '368323' '367228' '2671'
'347468' '2223' 'PC 17756' '315097' '392092' '11774' 'SOTON/O2 3101287'
'2683' '315090' 'C.A. 5547' '349213' '347060' 'PC 17592' '392091' '113055'
'2629' '350026' '28134' '17466' '233866' '236852' 'SC/PARIS 2149'
'PC 17590' '345777' '349248' '695' '345765' '2667' '349212' '349217'
'349257' '7552' 'C.A./SOTON 34068' 'SOTON/OQ 392076' '211536' '112053'
'111369' '370376']
CA. 2343 7
347082 7
1601 7
347088 6
CA 2144 6
3101295 6
382652 5
S.O.C. 14879 5
19950 4
17421 4
LINE 4
113760 4
347077 4
W./C. 6608 4
PC 17757 4
4133 4
113781 4
2666 4
349909 4
PC 17572 3
371110 3
PC 17755 3
230080 3
110152 3
35273 3
F.C.C. 13529 3
13502 3
363291 3
PC 17582 3
PC 17760 3
..
36864 1
F.C. 12750 1
349206 1
335677 1
STON/O 2. 3101292 1
345779 1
14312 1
A./5. 2152 1
7546 1
2695 1
111428 1
112058 1
PC 17318 1
SC/PARIS 2131 1
312992 1
A/5. 10482 1
PC 17590 1
374746 1
239855 1
PC 17609 1
C.A. 33595 1
345763 1
C.A. 17248 1
A/5 3594 1
C.A./SOTON 34068 1
362316 1
374887 1
2686 1
2620 1
7552 1
Name: Ticket, Length: 681, dtype: int64
891
681
[ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89
90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107
108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125
126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143
144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161
162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179
180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197
198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215
216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233
234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251
252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269
270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287
288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305
306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323
324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341
342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359
360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377
378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395
396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413
414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431
432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449
450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467
468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485
486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503
504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521
522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539
540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557
558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575
576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593
594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611
612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629
630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647
648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665
666 667 668 669 670 671 672 673 674 675 676 677 678 679 680]
72 7
13 7
148 7
62 6
49 6
58 6
70 5
16 5
26 4
334 4
24 4
163 4
160 4
266 4
7 4
272 4
327 4
379 4
84 4
230 3
348 3
212 3
239 3
51 3
106 3
357 3
550 3
57 3
491 3
231 3
..
428 1
439 1
427 1
426 1
425 1
424 1
422 1
421 1
420 1
438 1
440 1
461 1
453 1
460 1
459 1
458 1
457 1
456 1
455 1
454 1
450 1
441 1
449 1
447 1
446 1
445 1
444 1
443 1
442 1
0 1
Name: Ticket_cat, Length: 681, dtype: int64
###Markdown
1. For me, this is just noise.
###Code
train.info()
feature_basic('Cabin')
cat_for_num_feat(vcategorical='Cabin_cat',vstring='Cabin')
###Output
891
148
[-1 'C85' 'C123' 'E46' 'G6' 'C103' 'D56' 'A6' 'C23 C25 C27' 'B78' 'D33'
'B30' 'C52' 'B28' 'C83' 'F33' 'F G73' 'E31' 'A5' 'D10 D12' 'D26' 'C110'
'B58 B60' 'E101' 'F E69' 'D47' 'B86' 'F2' 'C2' 'E33' 'B19' 'A7' 'C49' 'F4'
'A32' 'B4' 'B80' 'A31' 'D36' 'D15' 'C93' 'C78' 'D35' 'C87' 'B77' 'E67'
'B94' 'C125' 'C99' 'C118' 'D7' 'A19' 'B49' 'D' 'C22 C26' 'C106' 'C65'
'E36' 'C54' 'B57 B59 B63 B66' 'C7' 'E34' 'C32' 'B18' 'C124' 'C91' 'E40'
'T' 'C128' 'D37' 'B35' 'E50' 'C82' 'B96 B98' 'E10' 'E44' 'A34' 'C104'
'C111' 'C92' 'E38' 'D21' 'E12' 'E63' 'A14' 'B37' 'C30' 'D20' 'B79' 'E25'
'D46' 'B73' 'C95' 'B38' 'B39' 'B22' 'C86' 'C70' 'A16' 'C101' 'C68' 'A10'
'E68' 'B41' 'A20' 'D19' 'D50' 'D9' 'A23' 'B50' 'A26' 'D48' 'E58' 'C126'
'B71' 'B51 B53 B55' 'D49' 'B5' 'B20' 'F G63' 'C62 C64' 'E24' 'C90' 'C45'
'E8' 'B101' 'D45' 'C46' 'D30' 'E121' 'D11' 'E77' 'F38' 'B3' 'D6' 'B82 B84'
'D17' 'A36' 'B102' 'B69' 'E49' 'C47' 'D28' 'E17' 'A24' 'C50' 'B42' 'C148']
-1 687
C23 C25 C27 4
B96 B98 4
G6 4
E101 3
F2 3
D 3
C22 C26 3
F33 3
D35 2
C65 2
B51 B53 B55 2
B5 2
F G73 2
E121 2
E8 2
B20 2
B18 2
D20 2
C68 2
F4 2
D33 2
E44 2
C126 2
C52 2
B77 2
C78 2
C125 2
B28 2
C83 2
...
E10 1
B41 1
A7 1
C50 1
C47 1
D48 1
B80 1
D28 1
B38 1
B71 1
B82 B84 1
B101 1
A16 1
D37 1
E49 1
D47 1
B39 1
A19 1
D6 1
D56 1
F G63 1
C87 1
B37 1
C32 1
D21 1
A36 1
C86 1
D10 D12 1
E40 1
D45 1
Name: Cabin, Length: 148, dtype: int64
891
148
[ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89
90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107
108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125
126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143
144 145 146 147]
0 687
8 4
4 4
73 4
53 3
54 3
23 3
15 3
27 3
64 2
117 2
52 2
113 2
22 2
79 2
28 2
29 2
63 2
33 2
100 2
47 2
45 2
44 2
95 2
38 2
87 2
40 2
41 2
42 2
115 2
...
83 1
67 1
68 1
69 1
142 1
72 1
146 1
74 1
76 1
77 1
78 1
80 1
81 1
82 1
84 1
102 1
85 1
86 1
88 1
90 1
91 1
92 1
93 1
94 1
96 1
97 1
98 1
99 1
101 1
147 1
Name: Cabin_cat, Length: 148, dtype: int64
###Markdown
1. For me, this is just noise.
###Code
train.info()
feature_basic('Embarked')
cat_for_num_feat(vcategorical='Embarked_cat', vstring='Embarked')
# Map the factorized code 3 (produced by the earlier fillna(-1) placeholder) back to -1
train.loc[train['Embarked_cat'] == 3, 'Embarked_cat'] = -1
train['Embarked_cat'].value_counts()
train.info()
# Inspect all features
def features_basic(vfeatures):
    for name in vfeatures:
        print(name + " <--------")
        print(train[name].count())
        print(train[name].unique())
        print(train[name].value_counts())
        print(" End <--------")
features_basic(vfeatures=train.columns)
###Output
PassengerId <--------
891
[ 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18
19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162
163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180
181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198
199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216
217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234
235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252
253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270
271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288
289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306
307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324
325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342
343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360
361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378
379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396
397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414
415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432
433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450
451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468
469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486
487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504
505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522
523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540
541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558
559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576
577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594
595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612
613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630
631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648
649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666
667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684
685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702
703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720
721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738
739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 754 755 756
757 758 759 760 761 762 763 764 765 766 767 768 769 770 771 772 773 774
775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 790 791 792
793 794 795 796 797 798 799 800 801 802 803 804 805 806 807 808 809 810
811 812 813 814 815 816 817 818 819 820 821 822 823 824 825 826 827 828
829 830 831 832 833 834 835 836 837 838 839 840 841 842 843 844 845 846
847 848 849 850 851 852 853 854 855 856 857 858 859 860 861 862 863 864
865 866 867 868 869 870 871 872 873 874 875 876 877 878 879 880 881 882
883 884 885 886 887 888 889 890 891]
891 1
293 1
304 1
303 1
302 1
301 1
300 1
299 1
298 1
297 1
296 1
295 1
294 1
292 1
306 1
291 1
290 1
289 1
288 1
287 1
286 1
285 1
284 1
283 1
282 1
281 1
305 1
307 1
279 1
321 1
..
561 1
560 1
584 1
585 1
586 1
587 1
610 1
609 1
608 1
607 1
606 1
605 1
604 1
603 1
602 1
601 1
600 1
599 1
598 1
597 1
596 1
595 1
594 1
593 1
592 1
591 1
590 1
589 1
588 1
1 1
Name: PassengerId, Length: 891, dtype: int64
End <--------
Survived <--------
891
[0 1]
0 549
1 342
Name: Survived, dtype: int64
End <--------
Pclass <--------
891
[3 1 2]
3 491
1 216
2 184
Name: Pclass, dtype: int64
End <--------
Name <--------
891
['Braund, Mr. Owen Harris'
'Cumings, Mrs. John Bradley (Florence Briggs Thayer)'
'Heikkinen, Miss. Laina' 'Futrelle, Mrs. Jacques Heath (Lily May Peel)'
'Allen, Mr. William Henry' 'Moran, Mr. James' 'McCarthy, Mr. Timothy J'
'Palsson, Master. Gosta Leonard'
'Johnson, Mrs. Oscar W (Elisabeth Vilhelmina Berg)'
'Nasser, Mrs. Nicholas (Adele Achem)' 'Sandstrom, Miss. Marguerite Rut'
'Bonnell, Miss. Elizabeth' 'Saundercock, Mr. William Henry'
'Andersson, Mr. Anders Johan' 'Vestrom, Miss. Hulda Amanda Adolfina'
'Hewlett, Mrs. (Mary D Kingcome) ' 'Rice, Master. Eugene'
'Williams, Mr. Charles Eugene'
'Vander Planke, Mrs. Julius (Emelia Maria Vandemoortele)'
'Masselmani, Mrs. Fatima' 'Fynney, Mr. Joseph J' 'Beesley, Mr. Lawrence'
'McGowan, Miss. Anna "Annie"' 'Sloper, Mr. William Thompson'
'Palsson, Miss. Torborg Danira'
'Asplund, Mrs. Carl Oscar (Selma Augusta Emilia Johansson)'
'Emir, Mr. Farred Chehab' 'Fortune, Mr. Charles Alexander'
'O\'Dwyer, Miss. Ellen "Nellie"' 'Todoroff, Mr. Lalio'
'Uruchurtu, Don. Manuel E'
'Spencer, Mrs. William Augustus (Marie Eugenie)'
'Glynn, Miss. Mary Agatha' 'Wheadon, Mr. Edward H'
'Meyer, Mr. Edgar Joseph' 'Holverson, Mr. Alexander Oskar'
'Mamee, Mr. Hanna' 'Cann, Mr. Ernest Charles'
'Vander Planke, Miss. Augusta Maria' 'Nicola-Yarred, Miss. Jamila'
'Ahlin, Mrs. Johan (Johanna Persdotter Larsson)'
'Turpin, Mrs. William John Robert (Dorothy Ann Wonnacott)'
'Kraeff, Mr. Theodor' 'Laroche, Miss. Simonne Marie Anne Andree'
[... per-column exploration output truncated: for each column the loop printed the column name, the row count (891), the array of unique values, and the value counts ...]
Name <-------- 891 rows, 891 unique names; every name occurs exactly once (Length: 891).
Sex <-------- 891 rows; unique values ['male' 'female']; male 577, female 314.
Age <-------- 891 rows; 89 distinct values ranging from 0.42 to 80.00; missing ages are coded as -1 (177 rows).
SibSp <-------- 891 rows; unique values [0 1 2 3 4 5 8]; counts: 0: 608, 1: 209, 2: 28, 4: 18, 3: 16, 8: 7, 5: 5.
Parch <-------- 891 rows; unique values [0 1 2 3 4 5 6]; counts: 0: 678, 1: 118, 2: 80, 5: 5, 3: 5, 4: 4, 6: 1.
Ticket <-------- 891 rows; 681 distinct tickets; the most shared are CA. 2343, 347082, and 1601 (7 passengers each).
Fare <-------- 891 rows; 248 distinct fares from 0.0000 to 512.3292; most common: 8.0500 (43), 13.0000 (42), 7.8958 (38).
Cabin <-------- 891 rows; 148 distinct values; missing cabins are coded as -1 (687 rows).
Embarked <-------- 891 rows; unique values ['S' 'C' 'Q' -1]; S 644, C 168, Q 77, missing (-1) 2.
Sex_cat <-------- 891 rows; integer codes [0 1]; 0: 577, 1: 314.
Ticket_cat <-------- 891 rows; integer codes 0-680, mirroring the 681 distinct tickets.
Cabin_cat <-------- 891 rows; integer codes 0-147; code 0 (missing cabin) covers 687 rows.
End <--------
###Markdown
Download the train.csv file and save it at a relative path that this ipynb file can reach. Data analysis, step 1: get familiar with the raw file
```
First, let's look at what these labels mean:
PassengerId => passenger ID
Pclass => passenger class (1st/2nd/3rd class cabin)
Name => passenger name
Sex => sex
Age => age
SibSp => number of siblings/spouses aboard
Parch => number of parents/children aboard
Ticket => ticket information
Fare => fare
Cabin => cabin
Embarked => port of embarkation: C - Cherbourg, S - Southampton, Q - Queenstown
From the overview above we can also see that the Age and Cabin labels are incomplete; we will handle this during data processing.
```
###Code
# 1. Import the pandas package
import pandas as pd # import pandas and rename it pd
# A Series is a one-dimensional labeled list
# A DataFrame is a two-dimensional table
from pandas import Series,DataFrame
# Read the Titanic csv file into a DataFrame (think of a DataFrame as a two-dimensional matrix)
# '../Titanic_Train/titanic_train/train.csv' is the relative path of the csv on my machine
#
titanic_df = pd.read_csv('../Titanic_Train/titanic_train/train.csv')
# 2. Exploratory analysis
# The following commands give a quick feel for the CSV
# Show the first 5 rows
titanic_df.head()
# Show the last 5 rows
titanic_df.tail()
# Show the first 10 rows
titanic_df.head(10)
# We can also get an overall preview of the data
titanic_df.info()
titanic_df.describe()
# Note:
# For numeric data, the result's index includes
# count,
# mean,
# standard deviation,
# min,
# max, and the lower and upper percentiles plus the 50th.
# By default the lower percentile is 25 and the upper is 75; the 50th percentile is the same as the median.
###Output
_____no_output_____
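###Markdown
The overview above noted that the Age and Cabin labels are incomplete. A one-line check makes the missing counts explicit (a minimal sketch using the titanic_df loaded above):
###Code
# count missing entries per column; Age and Cabin should stand out
titanic_df.isnull().sum()
###Output
_____no_output_____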
###Markdown
Data analysis, step 2: try to ask questions. Every good data analysis project starts with trying to answer questions. Now that we know which columns hold which kinds of data, let's think about the questions or insights we want from the data. So here is the list of questions we will try to answer with our new data analysis skills! First, some basic questions:
1. Who were the passengers on the Titanic? (age, sex, class, etc.)
2. Which deck were the passengers on, and how does that relate to their class?
3. Where did the passengers come from?
4. Who was alone and who was with family?
Then we will dig into a broader question:
5. What factors helped someone survive the sinking?
So let's start with the first question: who were the passengers on the Titanic?
###Code
# First, import the data analysis and data visualization packages we need
import matplotlib.pyplot as plt
import pandas as pd
import sys
import seaborn as sns
# import matplotlib
%matplotlib inline
print('Python version ' + sys.version)
print('Pandas version ' + pd.__version__)
print('Seaborn version ' + sns.__version__)
# print('Matplotlib version' + matplotlib.__version__)
###Output
_____no_output_____
###Markdown
A simple count of the male/female split. We take titanic_df as the data and put the Sex column on the x-axis. kind can be one of {point, bar, count, box, violin, strip}; we choose count (some versions apparently don't require kind='count'). A dedicated article on plotting: https://zhuanlan.zhihu.com/p/27683042
###Code
# Let's start by examining the Sex column
# factorplot is the function that draws the figure:
# 'Sex' is the x-axis,
# data is the data source,
# kind='count' makes it a count plot
sns.factorplot('Sex',data=titanic_df,kind='count')
sns.factorplot('Survived',data=titanic_df,kind='count')
sns.factorplot('Survived',kind='count',data=titanic_df)
###Output
/Users/lw/anaconda3/lib/python3.6/site-packages/seaborn/categorical.py:3666: UserWarning: The `factorplot` function has been renamed to `catplot`. The original name will be removed in a future release. Please update your code. Note that the default `kind` in `factorplot` (`'point'`) has changed `'strip'` in `catplot`.
warnings.warn(msg)
###Markdown
A beginner's routine for data analysts:
* analyze one column at a time first
* then analyze columns two at a time
* For finer detail, we plot Pclass on the x-axis and count the male/female ratio within each class:
###Code
# Now let's separate the genders by class; remember we can use the 'hue' argument here!
sns.factorplot('Pclass',data=titanic_df,kind='count')
# Add a second variable via hue
sns.factorplot('Pclass',data=titanic_df,kind="count",hue="Sex")
###Output
_____no_output_____
###Markdown
We can also split passengers into man, woman, and child by adding a new field to the original data. Define a function that decides man, woman, or child.
###Code
# We'll treat anyone under 16 as a child, then use the apply technique with a function to create a new column
# First let's create a function to classify each passenger
def male_female_child(passenger):
    # unpack age and sex from the argument
    age,sex = passenger
    # compare the age, then return the label
    if age < 16:
        return 'child'
    else:
        return sex

# Create a new column, person:
# pass Age and Sex to male_female_child;
# person is 'child' if age < 16, and the original 'male' or 'female' if age >= 16
# We define the new column 'person'; remember to specify axis=1 so apply works across columns rather than the index
titanic_df['person'] = titanic_df[['Age','Sex']].apply(male_female_child,axis=1)
# To check it works, inspect the first 10 rows
titanic_df[0:10]
###Output
_____no_output_____
###Markdown
Excellent! Now we have separated the passengers into women, men, and children. This will matter later because of the famous "women and children first" policy!
###Code
# Let's try factorplot again!
sns.factorplot('Pclass',data=titanic_df,hue='person',kind='count')
###Output
_____no_output_____
###Markdown
Interesting: there are a lot of children in 3rd class and very few in 1st class! Let's create an age distribution to get a more accurate picture of who the passengers were. For a quick look at the distribution across ages we split the range into 80 bins (the default is 10); you can of course make the bins finer or coarser.
###Code
# .min and .max need parentheses to actually compute the values
titanic_df['Age'].min()
titanic_df['Age'].max()
# A quick way to create a histogram with pandas
titanic_df['Age'].hist(bins=80)
###Output
_____no_output_____
###Markdown
Check the value counts of the "person" column
###Code
# We could also get a quick overall comparison of male,female,child
titanic_df['person'].value_counts()
###Output
_____no_output_____
###Markdown
Homework:
1. Download and install Anaconda
2. Get train.csv
3. Reproduce what you saw today in your own Jupyter Notebook
4. Plot the distribution of each category across age groups, using kernel density estimation
Note: on kernel density estimation, see: http://www.lifelaf.com/blog/?p=723
Note: hue is a third dimension beyond row and col; different categories get different colors. palette selects the color palette. Use the FacetGrid function to create the plot, separating by the 'Sex' field; aspect=4 makes the plot four times as wide as before.
###Code
# Set up the figure as a FacetGrid,
# with the pandas dataframe as its data source, hue set for color,
# and the aspect ratio (aspect) widened.
fig = sns.FacetGrid(titanic_df, hue="Sex",aspect=4)
# Next use map to draw a kdeplot of the 'Age' column for every hue level
fig.map(sns.kdeplot,'Age',shade= True)
# Set the x upper limit to the oldest passenger's age
oldest = titanic_df['Age'].max()
# Since we know nobody can be a negative age, set the x lower limit to 0
fig.set(xlim=(0,oldest))
# Finally add a legend
fig.add_legend()
# We can do the same for the 'person' column to include children (the column we created ourselves)
fig = sns.FacetGrid(titanic_df, hue="person",aspect=4)
fig.map(sns.kdeplot,'Age',shade= True)
oldest = titanic_df['Age'].max()
fig.set(xlim=(0,oldest))
fig.add_legend()
# Let's do the same for class by changing the hue argument:
fig = sns.FacetGrid(titanic_df, hue="Pclass",aspect=4)
fig.map(sns.kdeplot,'Age',shade= True)
oldest = titanic_df['Age'].max()
fig.set(xlim=(0,oldest))
fig.add_legend()
###Output
_____no_output_____
###Markdown
--- That's the end of the first question. What conclusions did we reach? We now have a good idea of who the passengers were by sex, age, and class. So let's move on to the second question: which deck were the passengers on, and how does that relate to their class?
###Code
# Re-examine the data in our table
titanic_df.head()
###Output
_____no_output_____
###Markdown
Count the number of passengers per deck. We can see that the Cabin column carries the deck information, but it has several NaN values, so we have to drop them.
###Code
# First select the Cabin column and drop the missing entries
deck = titanic_df['Cabin'].dropna()
# Take a quick look at the data
deck.head()
deck
###Output
_____no_output_____
###Markdown
From the above we can see that the deck is identified by the first character of the cabin value, so we can count the number of passengers per deck (e.g. A,B,C,D,E,F,G). We take the Cabin field of the cabin_df dataset, use the winter_d palette, and call the count kind. There are many palette choices; see the matplotlib site for reference: http://matplotlib.org/users/colormaps.html
###Code
# We can grab the first letter with a simple for loop
# Declare an empty list
levels = []
# Loop over the Cabin values and collect the first letter of each
for level in deck:
    levels.append(level[0])
levels
# list, dict, tuple
# []    {}    ()
# Rebuild a DataFrame from the letters, then plot
cabin_df = DataFrame(levels)
# Give the column a header
cabin_df.columns = ['Cabin']
sns.factorplot('Cabin',data=cabin_df,palette='winter_d',kind='count')
###Output
/Users/lw/anaconda3/lib/python3.6/site-packages/seaborn/categorical.py:3666: UserWarning: The `factorplot` function has been renamed to `catplot`. The original name will be removed in a future release. Please update your code. Note that the default `kind` in `factorplot` (`'point'`) has changed `'strip'` in `catplot`.
warnings.warn(msg)
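###Markdown
As an aside, the loop above can be replaced by a vectorized pandas one-liner; this is a sketch that builds the same cabin_df without an explicit loop:
###Code
# str.get(0) pulls the first character of every cabin string at once
cabin_df = DataFrame({'Cabin': deck.str.get(0)})
sns.factorplot('Cabin',data=cabin_df,palette='winter_d',kind='count')
###Output
_____no_output_____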
###Markdown
Because the count for cabin T above is so small, we drop it.
###Code
# Dropping T is simple: redefine cabin_df, keeping only the rows that are not 'T'
cabin_df = cabin_df[cabin_df.Cabin != 'T']
# Plot again
sns.factorplot('Cabin',data=cabin_df,palette='summer',kind='count')
###Output
_____no_output_____
###Markdown
Quick note: I used 'winter_d' and 'summer' as the palettes, but you can choose any palette you want. Check this link for more palette names; you can add '_d' to the end of any palette name to make it darker. Link: http://matplotlib.org/users/colormaps.html Now that we have analyzed the distribution by deck, let's move on and answer the next question. --- End of the second question. 3.) Where did the passengers come from?
###Code
# Let's take another look at our original data
titanic_df.head()
###Output
_____no_output_____
###Markdown
Count the distribution of embarkation ports. Note here that the Embarked column has C, Q, and S values. Reading about the project on Kaggle you'll note that these stand for Cherbourg, Queenstown, Southampton.
###Code
sns.factorplot('Embarked',data=titanic_df,kind='count')
sns.factorplot('Embarked',data=titanic_df,hue='Sex',kind='count')
# Now we can make a quick factorplot to check out the results, note the x_order argument, used to deal with NaN values
# sns.factorplot('Embarked',data=titanic_df,hue='Pclass',x_order=['C','Q','S'],kind='count')
sns.factorplot('Embarked',data=titanic_df,hue='Pclass',kind='count')
###Output
_____no_output_____
###Markdown
An interesting find here is that in Queenstown, almost all the passengers that boarded there were 3rd class. It would be interesting to look at the economics of that town in that time period for further investigation. Now let's take a look at the 4th question: 4.) Who was alone and who was with family? Count the distribution of passengers who were alone versus with family.
###Code
# Let's start by adding a new column to define alone
# We'll add the parent/child column with the sibsp column
titanic_df['Alone'] = titanic_df.Parch + titanic_df.SibSp
titanic_df['Alone']
###Output
_____no_output_____
###Markdown
From the above, anyone with a value greater than 0 has siblings/spouses or parents/children aboard. Now we know that if the Alone column is anything but 0, then the passenger had family aboard and wasn't alone. So let's change the column now so that if the value is greater than 0, we know the passenger was with his/her family, otherwise they were alone.
###Code
# Look for >0 or ==0 to set alone status
# Use .loc with a boolean mask to avoid pandas' chained-assignment warning
titanic_df.loc[titanic_df['Alone'] > 0, 'Alone'] = 'With Family'
titanic_df.loc[titanic_df['Alone'] == 0, 'Alone'] = 'Alone'
# For background on the warning that the chained form triggers, check out this link
# url_info = 'http://stackoverflow.com/questions/20625582/how-to-deal-with-this-pandas-warning'

# Let's check to make sure it worked
titanic_df.head()

# Count the distribution of the Alone column
# Let us visualise the Alone column
sns.factorplot('Alone',kind='count',data=titanic_df)
# The same counts again, with a different palette
sns.factorplot('Alone',data=titanic_df,palette='Blues',kind="count")
# Let us see who was alone, broken down by class
sns.factorplot('Alone',kind='count',data=titanic_df,hue='Pclass')
###Output
_____no_output_____
###Markdown
Great work! Now that we've thoroughly analyzed the data, let's go ahead and take a look at the most interesting (and open-ended) question: what factors helped someone survive the sinking?
###Code
# Let's start by creating a new column for legibility purposes through mapping (Lec 36)
titanic_df["Survivor"] = titanic_df.Survived.map({0: "no", 1: "yes"})
# Let's just get a quick overall view of survied vs died.
sns.factorplot('Survivor',data=titanic_df,kind='count')
###Output
_____no_output_____
###Markdown
So quite a few more people died than those who survived. Let's see if the class of the passengers had an effect on their survival rate, since the movie Titanic popularized the notion that the 3rd class passengers did not do as well as their 1st and 2nd class counterparts.
###Code
# Let's use a factor plot again, but now considering class
sns.factorplot('Pclass','Survived',data=titanic_df)
###Output
_____no_output_____
###Markdown
Looks like survival rates for the 3rd class are substantially lower! But maybe this effect is being caused by the large number of men in the 3rd class in combination with the women-and-children-first policy. Let's use 'hue' to get a clearer picture of this.
###Code
# Let's use a factor plot again, but now considering class and gender
sns.factorplot('Pclass','Survived',hue='person',data=titanic_df)
###Output
_____no_output_____
###Markdown
From this data it looks like being a male or being in 3rd class were both not favourable for survival. Even regardless of class the result of being a male in any class dramatically decreases your chances of survival.But what about age? Did being younger or older have an effect on survival rate?
###Code
# Let's use a linear plot on age versus survival
sns.lmplot('Age','Survived',data=titanic_df)
sns.factorplot('Age','Survived',data=titanic_df)
###Output
_____no_output_____
###Markdown
Looks like there is a general trend that the older the passenger was, the less likely they survived. Let's go ahead and use hue to take a look at the effect of class and age.
###Code
# Let's use a linear plot on age versus survival using hue for class separation
sns.lmplot('Age','Survived',hue='Pclass',data=titanic_df,palette='winter')
###Output
_____no_output_____
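###Markdown
To back the visual trend with numbers, we can bin the ages into decades and compute the survival rate per bin (a short sketch; the decade boundaries are an arbitrary choice):
###Code
# survival rate per decade of age
age_bins = pd.cut(titanic_df['Age'], bins=range(0, 90, 10))
titanic_df.groupby(age_bins)['Survived'].mean()
###Output
_____no_output_____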
###Markdown
We can also use the x_bin argument to clean up this figure and grab the data and bin it by age with a std attached!
###Code
# Let's use a linear plot on age versus survival using hue for class separation
generations=[10,20,40,60,80]
sns.lmplot('Age','Survived',hue='Pclass',data=titanic_df,palette='winter',x_bins=generations)
###Output
_____no_output_____
###Markdown
Interesting find on the older 1st class passengers! What about if we relate gender and age with the survival set?
###Code
sns.lmplot('Age','Survived',hue='Sex',data=titanic_df,palette='winter',x_bins=generations)
###Output
_____no_output_____
###Markdown
Fantastic work on your first go at a Data Analysis Project! Go ahead and keep playing with the data or try following along with Kaggle's sci-kit learn tutorial for this data (we'll look at it through a machine learning perspective later in the course). Finally, I'll leave you with a gif of my favorite scene from the movie Titanic
###Code
from IPython.display import Image
Image(url='http://i.imgur.com/DGNjT.gif')
###Output
_____no_output_____
###Markdown
Load Modules
###Code
# import required modules
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Load Data
###Code
# read in all data
train_data = pd.read_csv('train.csv')
test_data = pd.read_csv('test.csv')
# set seed for reproducibility
np.random.seed(0)
###Output
_____no_output_____
###Markdown
Exploratory Data Analysis (EDA) on Training Data Understanding the Variables
###Code
# understand the variables in train data
train_data.info()
# find how the data is represented
train_data.head()
###Output
_____no_output_____
###Markdown
Handling Missing Values
###Code
# get the number of missing values in the train data
missing_value_counts_train_data = train_data.isnull().sum()
missing_value_counts_train_data
(missing_value_counts_train_data / len(train_data.index)) * 100
# percent of train data that are missing
total_train_data_cells = np.product(train_data.shape)
total_missing_train_data = missing_value_counts_train_data.sum()
percent_missing_train_data = (total_missing_train_data / total_train_data_cells) * 100
percent_missing_train_data
plt.hist(train_data['Age'])
plt.show()
# remove the Cabin column from the train data (it is mostly missing); the test data gets the same treatment later
del train_data['Cabin']
# replace all NA's with mean in the Age column
train_data['Age'].fillna(train_data['Age'].mean(), inplace=True)
# remove the rest of NAs
train_data.dropna(inplace=True)
train_data.isnull().sum()
###Output
_____no_output_____
###Markdown
Removing Duplicate Entries
###Code
train_data.drop_duplicates(inplace=True)
###Output
_____no_output_____
###Markdown
Dealing with Categorical Variables
###Code
del train_data['Name']
del train_data['Ticket']
train_data['Sex'].value_counts()
sex = pd.get_dummies(train_data['Sex'])
train_data = train_data.join(sex)
train_data.drop('Sex', axis=1, inplace=True)
train_data['Embarked'].value_counts()
embarked = pd.get_dummies(train_data['Embarked'])
train_data = train_data.join(embarked)
train_data.drop(['Embarked'], axis=1, inplace=True)
###Output
_____no_output_____
###Markdown
Exploring the Dependent Variable
###Code
train_data['Survived'].describe()
###Output
_____no_output_____
###Markdown
Investigating the Relationships Between Dependent & Independent Variables
###Code
train_data.corr()
sns.pairplot(train_data, x_vars='Survived', y_vars=list(train_data.columns))
###Output
_____no_output_____
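###Markdown
To rank the predictors at a glance, we can sort the correlations against the target (a short sketch reusing the correlation matrix computed above):
###Code
# correlation of every feature with Survived, strongest positive first
train_data.corr()['Survived'].sort_values(ascending=False)
###Output
_____no_output_____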
###Markdown
Data Cleaning
###Code
# scatter one variable against the target; repeat per variable to spot outliers worth removing
train_data.plot.scatter(x='female', y='Survived')
###Output
_____no_output_____
###Markdown
Checking Statistical Assumptions
###Code
# check the normality of the variable
sns.histplot(train_data['male'], kde=True)
###Output
_____no_output_____
###Markdown
Exploratory Data Analysis (EDA) on Test Data Understanding the Variables
###Code
# understand the variables in test data
test_data.info()
test_data.head()
###Output
_____no_output_____
###Markdown
Handling Missing Values
###Code
# get the number of missing values in the test data
missing_value_counts_test_data = test_data.isnull().sum()
missing_value_counts_test_data
(missing_value_counts_test_data / len(test_data.index)) * 100
# percent of test data that are missing
total_test_data_cells = np.product(test_data.shape)
total_missing_test_data = missing_value_counts_test_data.sum()
percent_missing_test_data = (total_missing_test_data / total_test_data_cells) * 100
percent_missing_test_data
# remove the Cabin column from the test data as well
del test_data['Cabin']
# replace all NA's with mean in the Age column
test_data['Age'].fillna(test_data['Age'].mean(), inplace=True)
# remove the rest of NAs
test_data.dropna(inplace=True)
test_data.isnull().sum()
###Output
_____no_output_____
###Markdown
Removing Duplicate Entries
###Code
test_data.drop_duplicates(inplace=True)
###Output
_____no_output_____
###Markdown
Dealing with Categorical Variables
###Code
del test_data['Name']
del test_data['Ticket']
sex_test = pd.get_dummies(test_data['Sex'])
test_data = test_data.join(sex_test)
test_data.drop(['Sex'], axis=1, inplace=True)
embark_test = pd.get_dummies(test_data['Embarked'])
test_data = test_data.join(embark_test)
test_data.drop(['Embarked'], axis=1, inplace=True)
###Output
_____no_output_____
###Markdown
Data Transformation
###Code
features = list(test_data.columns)
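# note: test_data still includes PassengerId here, so it is carried along as a (non-informative) feature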
X_train = train_data[features]
y_train = train_data['Survived']
X_test = test_data
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
###Output
_____no_output_____
###Markdown
Classification
###Code
from sklearn.linear_model import LogisticRegression
model = LogisticRegression()
model.fit(X_train_scaled, y_train)
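# quick sanity check of the fit: accuracy on the training data (not a validation score)
model.score(X_train_scaled, y_train)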
###Output
_____no_output_____
###Markdown
Titanic Data Science Solutions--- I have released a new Python package [Speedml](https://speedml.com) which codifies the techniques used in this notebook into an intuitive, powerful, and productive API. Speedml helped me jump from the low-80% range on the Kaggle leaderboard to the high-20% range within a few iterations. One more thing... Speedml achieves this with nearly 70% fewer lines of code! Run and download the [Titanic Solution using Speedml](https://github.com/Speedml/notebooks/blob/master/titanic/titanic-solution-using-speedml.ipynb).---This notebook is a companion to the book [Data Science Solutions](https://www.amazon.com/Data-Science-Solutions-Startup-Workflow/dp/1520545312). The notebook walks us through a typical workflow for solving data science competitions at sites like Kaggle.There are several excellent notebooks to study data science competition entries. However, many skip some of the explanation of how the solution is developed, as these notebooks are written by experts for experts. The objective of this notebook is to follow a step-by-step workflow, explaining each step and the rationale for every decision we take during solution development. Workflow stagesThe competition solution workflow goes through seven stages described in the Data Science Solutions book.1. Question or problem definition.2. Acquire training and testing data.3. Wrangle, prepare, cleanse the data.4. Analyze, identify patterns, and explore the data.5. Model, predict and solve the problem.6. Visualize, report, and present the problem solving steps and final solution.7. Supply or submit the results.The workflow indicates the general sequence of how each stage may follow the other. However, there are use cases with exceptions.- We may combine multiple workflow stages. We may analyze by visualizing data.- Perform a stage earlier than indicated. We may analyze data before and after wrangling.- Perform a stage multiple times in our workflow. The Visualize stage may be used multiple times.- Drop a stage altogether. We may not need the Supply stage to productize or service-enable our dataset for a competition. Question and problem definitionCompetition sites like Kaggle define the problem to solve or questions to ask while providing the datasets for training your data science model and testing the model results against a test dataset. The question or problem definition for the Titanic Survival competition is [described here at Kaggle](https://www.kaggle.com/c/titanic).> Knowing from a training set of samples listing passengers who survived or did not survive the Titanic disaster, can our model determine, from a given test dataset that does not contain the survival information, whether the passengers in the test dataset survived or not?We may also want to develop some early understanding about the domain of our problem. This is described on the [Kaggle competition description page here](https://www.kaggle.com/c/titanic). Here are the highlights to note.- On April 15, 1912, during her maiden voyage, the Titanic sank after colliding with an iceberg, killing 1502 out of 2224 passengers and crew. This translates to a 32% survival rate.- One of the reasons that the shipwreck led to such loss of life was that there were not enough lifeboats for the passengers and crew.- Although there was some element of luck involved in surviving the sinking, some groups of people were more likely to survive than others, such as women, children, and the upper-class.
Workflow goalsThe data science solutions workflow solves for seven major goals.**Classifying.** We may want to classify or categorize our samples. We may also want to understand the implications or correlation of different classes with our solution goal.**Correlating.** One can approach the problem based on available features within the training dataset. Which features within the dataset contribute significantly to our solution goal? Statistically speaking, is there a [correlation](https://en.wikiversity.org/wiki/Correlation) between a feature and the solution goal? As the feature values change, does the solution state change as well, and vice versa? This can be tested both for numerical and categorical features in the given dataset. We may also want to determine correlation among features other than survival for subsequent goals and workflow stages. Correlating certain features may help in creating, completing, or correcting features.**Converting.** For the modeling stage, one needs to prepare the data. Depending on the choice of model algorithm, one may require all features to be converted to numerical equivalent values, for instance converting text categorical values to numeric values.**Completing.** Data preparation may also require us to estimate any missing values within a feature. Model algorithms may work best when there are no missing values.**Correcting.** We may also analyze the given training dataset for errors or possibly inaccurate values within features and try to correct these values or exclude the samples containing the errors. One way to do this is to detect any outliers among our samples or features. We may also completely discard a feature if it is not contributing to the analysis or may significantly skew the results.**Creating.** Can we create new features based on an existing feature or a set of features, such that the new feature follows the correlation, conversion, and completeness goals?**Charting.** How to select the right visualization plots and charts depending on the nature of the data and the solution goals. Refactor Release 2017-Jan-29We are significantly refactoring the notebook based on (a) comments received by readers, (b) issues in porting the notebook from the Jupyter kernel (2.7) to the Kaggle kernel (3.5), and (c) review of a few more best-practice kernels. User comments- Combine training and test data for certain operations like converting titles across the dataset to numerical values. (thanks @Sharan Naribole)- Correct observation - nearly 30% of the passengers had siblings and/or spouses aboard. (thanks @Reinhard)- Correctly interpreting logistic regression coefficients. (thanks @Reinhard) Porting issues- Specify plot dimensions, bring legend into plot. Best practices- Performing feature correlation analysis early in the project.- Using multiple plots instead of overlays for readability.
###Code
# data analysis and wrangling
import pandas as pd
import numpy as np
import random as rnd
# visualization
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
# machine learning
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC, LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import Perceptron
from sklearn.linear_model import SGDClassifier
from sklearn.tree import DecisionTreeClassifier
###Output
_____no_output_____
###Markdown
Acquire dataThe Python Pandas package helps us work with our datasets. We start by acquiring the training and testing datasets into Pandas DataFrames. We also combine these datasets to run certain operations on both datasets together.
###Code
train_df = pd.read_csv('train.csv')
test_df = pd.read_csv('test.csv')
combine = [train_df, test_df]
###Output
_____no_output_____
###Markdown
Analyze by describing dataPandas also helps describe the datasets answering following questions early in our project.**Which features are available in the dataset?**Noting the feature names for directly manipulating or analyzing these. These feature names are described on the [Kaggle data page here](https://www.kaggle.com/c/titanic/data).
###Code
print(train_df.columns.values)
###Output
['PassengerId' 'Survived' 'Pclass' 'Name' 'Sex' 'Age' 'SibSp' 'Parch'
'Ticket' 'Fare' 'Cabin' 'Embarked']
###Markdown
PassengerId: passenger ID. Survived: survival status (1 = survived; 0 = died). Pclass: cabin class. Name: passenger name. Sex: sex. Age: age. SibSp: number of siblings/spouses aboard. Parch: number of parents/children aboard. Ticket: ticket number. Fare: ticket price. Cabin: cabin number. Embarked: port of embarkation. **Which features are categorical?**These values classify the samples into sets of similar samples. Within categorical features, are the values nominal, ordinal, ratio, or interval based? Among other things this helps us select the appropriate plots for visualization.- Categorical: Survived, Sex, and Embarked. Ordinal: Pclass.**Which features are numerical?**These values change from sample to sample. Within numerical features, are the values discrete, continuous, or timeseries based? Among other things this helps us select the appropriate plots for visualization.- Continuous: Age, Fare. Discrete: SibSp, Parch.
###Code
# preview the data
train_df.head()
###Output
_____no_output_____
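###Markdown
To cross-check this split programmatically, we can ask pandas to group columns by dtype. The sketch below lists the string (object) columns and the numeric columns; it cannot distinguish ordinal from nominal, which still takes judgment.
###Code
# columns pandas stores as strings vs. numbers
print(train_df.select_dtypes(include='object').columns.tolist())
print(train_df.select_dtypes(include='number').columns.tolist())
###Output
_____no_output_____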
###Markdown
**Which features are mixed data types?**Numerical and alphanumeric data within the same feature. These are candidates for the correcting goal.- Ticket is a mix of numeric and alphanumeric data types. Cabin is alphanumeric.**Which features may contain errors or typos?**This is harder to review for a large dataset; however, reviewing a few samples from a smaller dataset may just tell us outright which features may require correcting.- The Name feature may contain errors or typos as there are several ways used to describe a name, including titles, round brackets, and quotes used for alternative or short names.
###Code
train_df.tail()
###Output
_____no_output_____
###Markdown
**Which features contain blank, null or empty values?**These will require correcting.- Cabin > Age > Embarked features contain a number of null values in that order for the training dataset.- Cabin > Age are incomplete in the case of the test dataset.**What are the data types for various features?**This helps us during the converting goal.- Seven features are integers or floats; six in the case of the test dataset.- Five features are strings (object).
###Code
train_df.info()
print('_'*40)
test_df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 891 entries, 0 to 890
Data columns (total 12 columns):
PassengerId 891 non-null int64
Survived 891 non-null int64
Pclass 891 non-null int64
Name 891 non-null object
Sex 891 non-null object
Age 714 non-null float64
SibSp 891 non-null int64
Parch 891 non-null int64
Ticket 891 non-null object
Fare 891 non-null float64
Cabin 204 non-null object
Embarked 889 non-null object
dtypes: float64(2), int64(5), object(5)
memory usage: 83.6+ KB
________________________________________
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 418 entries, 0 to 417
Data columns (total 11 columns):
PassengerId 418 non-null int64
Pclass 418 non-null int64
Name 418 non-null object
Sex 418 non-null object
Age 332 non-null float64
SibSp 418 non-null int64
Parch 418 non-null int64
Ticket 418 non-null object
Fare 417 non-null float64
Cabin 91 non-null object
Embarked 418 non-null object
dtypes: float64(2), int64(4), object(5)
memory usage: 36.0+ KB
###Markdown
**What is the distribution of numerical feature values across the samples?**This helps us determine, among other early insights, how representative the training dataset is of the actual problem domain.- Total samples are 891, or 40% of the actual number of passengers on board the Titanic (2,224).- Survived is a categorical feature with 0 or 1 values.- Around 38% of samples survived, representative of the actual survival rate at 32%.- Most passengers (> 75%) did not travel with parents or children.- Nearly 30% of the passengers had siblings and/or spouses aboard.- Fares varied significantly, with few passengers (<1%) paying as high as $512.- Few elderly passengers (<1%) within age range 65-80.
###Code
train_df.describe()
# Review survived rate using `percentiles=[.61, .62]` knowing our problem description mentions 38% survival rate.
# Review Parch distribution using `percentiles=[.75, .8]`
# SibSp distribution `[.68, .69]`
# Age and Fare `[.1, .2, .3, .4, .5, .6, .7, .8, .9, .99]`
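# for example, bracketing the ~38% survival rate noted above (sketch):
train_df['Survived'].describe(percentiles=[.61, .62])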
###Output
_____no_output_____
###Markdown
**What is the distribution of categorical features?**- Names are unique across the dataset (count=unique=891).- The Sex variable has two possible values with 65% male (top=male, freq=577/count=891).- Cabin values have several duplicates across samples. Alternatively, several passengers shared a cabin.- Embarked takes three possible values. The S port was used by most passengers (top=S).- The Ticket feature has a high ratio (22%) of duplicate values (unique=681).
###Code
train_df.describe(include=['O'])
###Output
_____no_output_____
###Markdown
Assumptions based on data analysisWe arrive at the following assumptions based on the data analysis done so far. We may validate these assumptions further before taking appropriate actions.**Correlating.**We want to know how well each feature correlates with Survival. We want to do this early in our project and match these quick correlations with modelled correlations later in the project.**Completing.**1. We may want to complete the Age feature as it is definitely correlated to survival.2. We may want to complete the Embarked feature as it may also correlate with survival or another important feature.**Correcting.**1. The Ticket feature may be dropped from our analysis as it contains a high ratio of duplicates (22%) and there may not be a correlation between Ticket and survival.2. The Cabin feature may be dropped as it is highly incomplete, containing many null values in both the training and test datasets.3. PassengerId may be dropped from the training dataset as it does not contribute to survival.4. The Name feature is relatively non-standard and may not contribute directly to survival, so it may be dropped.**Creating.**1. We may want to create a new feature called Family based on Parch and SibSp to get the total count of family members on board.2. We may want to engineer the Name feature to extract Title as a new feature.3. We may want to create a new feature for Age bands. This turns a continuous numerical feature into an ordinal categorical feature.4. We may also want to create a Fare range feature if it helps our analysis.**Classifying.**We may also add to our assumptions based on the problem description noted earlier.1. Women (Sex=female) were more likely to have survived.2. Children (Age<?) were more likely to have survived. 3. The upper-class passengers (Pclass=1) were more likely to have survived. Analyze by pivoting featuresTo confirm some of our observations and assumptions, we can quickly analyze our feature correlations by pivoting features against each other. We can only do so at this stage for features which do not have any empty values. It also makes sense doing so only for features which are categorical (Sex), ordinal (Pclass) or discrete (SibSp, Parch) type.- **Pclass** We observe significant correlation (>0.5) between Pclass=1 and Survived (classifying 3). We decide to include this feature in our model.- **Sex** We confirm the observation during problem definition that Sex=female had a very high survival rate at 74% (classifying 1).- **SibSp and Parch** These features have zero correlation for certain values. It may be best to derive a feature or a set of features from these individual features (creating 1).
###Code
train_df[['Pclass', 'Survived']].groupby(['Pclass'], as_index=False).mean().sort_values(by='Survived', ascending=False)
train_df[["Sex", "Survived"]].groupby(['Sex'], as_index=False).mean().sort_values(by='Survived', ascending=False)
train_df[["SibSp", "Survived"]].groupby(['SibSp'], as_index=False).mean().sort_values(by='Survived', ascending=False)
train_df[["Parch", "Survived"]].groupby(['Parch'], as_index=False).mean().sort_values(by='Survived', ascending=False)
###Output
_____no_output_____
###Markdown
Analyze by visualizing dataNow we can continue confirming some of our assumptions using visualizations for analyzing the data. Correlating numerical featuresLet us start by understanding correlations between numerical features and our solution goal (Survived).A histogram chart is useful for analyzing continuous numerical variables like Age, where banding or ranges will help identify useful patterns. The histogram can indicate the distribution of samples using automatically defined bins or equally ranged bands. This helps us answer questions relating to specific bands (did infants have a better survival rate?).Note that the y-axis in histogram visualizations represents the count of samples or passengers.**Observations.**- Infants (Age <=4) had a high survival rate.- Oldest passengers (Age = 80) survived.- A large number of 15-25 year olds did not survive.- Most passengers are in the 15-35 age range.**Decisions.**This simple analysis confirms our assumptions as decisions for subsequent workflow stages.- We should consider Age (our assumption classifying 2) in our model training.- Complete the Age feature for null values (completing 1).- We should band age groups (creating 3).
###Code
g = sns.FacetGrid(train_df, col='Survived')
g.map(plt.hist, 'Age', bins=20)
###Output
_____no_output_____
###Markdown
Correlating numerical and ordinal featuresWe can combine multiple features for identifying correlations using a single plot. This can be done with numerical and categorical features which have numeric values.**Observations.**- Pclass=3 had most passengers, however most did not survive. Confirms our classifying assumption 2.- Infant passengers in Pclass=2 and Pclass=3 mostly survived. Further qualifies our classifying assumption 2.- Most passengers in Pclass=1 survived. Confirms our classifying assumption 3.- Pclass varies in terms of Age distribution of passengers.**Decisions.**- Consider Pclass for model training.
###Code
# grid = sns.FacetGrid(train_df, col='Pclass', hue='Survived')
grid = sns.FacetGrid(train_df, col='Survived', row='Pclass', size=2.2, aspect=1.6)
grid.map(plt.hist, 'Age', alpha=.5, bins=20)
grid.add_legend();
###Output
_____no_output_____
###Markdown
Correlating categorical featuresNow we can correlate categorical features with our solution goal.**Observations.**- Female passengers had much better survival rate than males. Confirms classifying (1).- Exception in Embarked=C where males had higher survival rate. This could be a correlation between Pclass and Embarked and in turn Pclass and Survived, not necessarily direct correlation between Embarked and Survived.- Males had better survival rate in Pclass=3 when compared with Pclass=2 for C and Q ports. Completing (2).- Ports of embarkation have varying survival rates for Pclass=3 and among male passengers. Correlating (1).**Decisions.**- Add Sex feature to model training.- Complete and add Embarked feature to model training.
###Code
# grid = sns.FacetGrid(train_df, col='Embarked')
grid = sns.FacetGrid(train_df, row='Embarked', size=2.2, aspect=1.6)
grid.map(sns.pointplot, 'Pclass', 'Survived', 'Sex', palette='deep')
grid.add_legend()
###Output
_____no_output_____
###Markdown
Correlating categorical and numerical featuresWe may also want to correlate categorical features (with non-numeric values) and numeric features. We can consider correlating Embarked (Categorical non-numeric), Sex (Categorical non-numeric), Fare (Numeric continuous), with Survived (Categorical numeric).**Observations.**- Higher fare paying passengers had better survival. Confirms our assumption for creating (4) fare ranges.- Port of embarkation correlates with survival rates. Confirms correlating (1) and completing (2).**Decisions.**- Consider banding Fare feature.
###Code
# grid = sns.FacetGrid(train_df, col='Embarked', hue='Survived', palette={0: 'k', 1: 'w'})
grid = sns.FacetGrid(train_df, row='Embarked', col='Survived', size=2.2, aspect=1.6)
grid.map(sns.barplot, 'Sex', 'Fare', alpha=.5, ci=None)
grid.add_legend()
###Output
_____no_output_____
###Markdown
Wrangle dataWe have collected several assumptions and decisions regarding our datasets and solution requirements. So far we did not have to change a single feature or value to arrive at these. Let us now execute our decisions and assumptions for correcting, creating, and completing goals. Correcting by dropping featuresThis is a good starting goal to execute. By dropping features we are dealing with fewer data points. Speeds up our notebook and eases the analysis.Based on our assumptions and decisions we want to drop the Cabin (correcting 2) and Ticket (correcting 1) features.Note that where applicable we perform operations on both training and testing datasets together to stay consistent.
###Code
print("Before", train_df.shape, test_df.shape, combine[0].shape, combine[1].shape)
train_df = train_df.drop(['Ticket', 'Cabin'], axis=1)
test_df = test_df.drop(['Ticket', 'Cabin'], axis=1)
combine = [train_df, test_df]
"After", train_df.shape, test_df.shape, combine[0].shape, combine[1].shape
###Output
Before (891, 12) (418, 11) (891, 12) (418, 11)
###Markdown
Creating new feature extracting from existingWe want to analyze whether the Name feature can be engineered to extract titles and test the correlation between titles and survival, before dropping the Name and PassengerId features.In the following code we extract the Title feature using regular expressions. The RegEx pattern `' ([A-Za-z]+)\.'` matches the first word which ends with a dot character within the Name feature. The `expand=False` flag returns a Series rather than a DataFrame.**Observations.**When we plot Title, Age, and Survived, we note the following observations.- Most titles band Age groups accurately. For example: the Master title has an Age mean of 5 years.- Survival among Title Age bands varies slightly.- Certain titles mostly survived (Mme, Lady, Sir) or did not (Don, Rev, Jonkheer).**Decision.**- We decide to retain the new Title feature for model training.
###Code
for dataset in combine:
dataset['Title'] = dataset.Name.str.extract(' ([A-Za-z]+)\.', expand=False)
pd.crosstab(train_df['Title'], train_df['Sex'])
###Output
_____no_output_____
###Markdown
We can replace many titles with a more common name or classify them as `Rare`.
###Code
for dataset in combine:
dataset['Title'] = dataset['Title'].replace(['Lady', 'Countess','Capt', 'Col',\
'Don', 'Dr', 'Major', 'Rev', 'Sir', 'Jonkheer', 'Dona'], 'Rare')
dataset['Title'] = dataset['Title'].replace('Mlle', 'Miss')
dataset['Title'] = dataset['Title'].replace('Ms', 'Miss')
dataset['Title'] = dataset['Title'].replace('Mme', 'Mrs')
train_df[['Title', 'Survived']].groupby(['Title'], as_index=False).mean()
###Output
_____no_output_____
###Markdown
We can convert the categorical titles to ordinal.
###Code
title_mapping = {"Mr": 1, "Miss": 2, "Mrs": 3, "Master": 4, "Rare": 5}
for dataset in combine:
dataset['Title'] = dataset['Title'].map(title_mapping)
dataset['Title'] = dataset['Title'].fillna(0)
train_df.head()
###Output
_____no_output_____
###Markdown
Now we can safely drop the Name feature from training and testing datasets. We also do not need the PassengerId feature in the training dataset.
###Code
train_df = train_df.drop(['Name', 'PassengerId'], axis=1)
test_df = test_df.drop(['Name'], axis=1)
combine = [train_df, test_df]
train_df.shape, test_df.shape
###Output
_____no_output_____
###Markdown
Converting a categorical featureNow we can convert features which contain strings to numerical values. This is required by most model algorithms. Doing so will also help us in achieving the feature completing goal.Let us start by converting the Sex feature to numerical values where female=1 and male=0 (referred to as Gender in the discussion below).
###Code
for dataset in combine:
dataset['Sex'] = dataset['Sex'].map( {'female': 1, 'male': 0} ).astype(int)
train_df.head()
###Output
_____no_output_____
###Markdown
Completing a numerical continuous featureNow we should start estimating and completing features with missing or null values. We will first do this for the Age feature.We can consider three methods to complete a numerical continuous feature.1. A simple way is to generate random numbers between the mean minus and plus one [standard deviation](https://en.wikipedia.org/wiki/Standard_deviation); a minimal sketch of this follows the next cell.2. A more accurate way of guessing missing values is to use other correlated features. In our case we note correlation among Age, Gender, and Pclass. Guess Age values using [median](https://en.wikipedia.org/wiki/Median) values for Age across sets of Pclass and Gender feature combinations. So, the median Age for Pclass=1 and Gender=0, Pclass=1 and Gender=1, and so on...3. Combine methods 1 and 2. So instead of guessing age values based on the median, use random numbers between the mean and standard deviation bounds, based on sets of Pclass and Gender combinations.Methods 1 and 3 will introduce random noise into our models. The results from multiple executions might vary. We will prefer method 2.
###Code
# grid = sns.FacetGrid(train_df, col='Pclass', hue='Gender')
grid = sns.FacetGrid(train_df, row='Pclass', col='Sex', size=2.2, aspect=1.6)
grid.map(plt.hist, 'Age', alpha=.5, bins=20)
grid.add_legend()
###Output
_____no_output_____
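###Markdown
For reference, method 1 would draw a random value between the mean minus and plus one standard deviation. A minimal sketch, shown only for illustration since the random noise it introduces is why we prefer method 2:
###Code
# method 1 sketch: one random Age guess within mean +/- one std (not used below)
age_mean = train_df['Age'].mean()
age_std = train_df['Age'].std()
rnd.uniform(age_mean - age_std, age_mean + age_std)
###Output
_____no_output_____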
###Markdown
Let us start by preparing an empty array to contain guessed Age values based on Pclass x Gender combinations.
###Code
guess_ages = np.zeros((2,3))
guess_ages
###Output
_____no_output_____
###Markdown
Now we iterate over Sex (0 or 1) and Pclass (1, 2, 3) to calculate guessed values of Age for the six combinations.
###Code
for dataset in combine:
for i in range(0, 2):
for j in range(0, 3):
guess_df = dataset[(dataset['Sex'] == i) & \
(dataset['Pclass'] == j+1)]['Age'].dropna()
# age_mean = guess_df.mean()
# age_std = guess_df.std()
# age_guess = rnd.uniform(age_mean - age_std, age_mean + age_std)
age_guess = guess_df.median()
# Convert random age float to nearest .5 age
guess_ages[i,j] = int( age_guess/0.5 + 0.5 ) * 0.5
for i in range(0, 2):
for j in range(0, 3):
dataset.loc[ (dataset.Age.isnull()) & (dataset.Sex == i) & (dataset.Pclass == j+1),\
'Age'] = guess_ages[i,j]
dataset['Age'] = dataset['Age'].astype(int)
train_df.head()
###Output
_____no_output_____
###Markdown
Let us create Age bands and determine correlations with Survived.
###Code
train_df['AgeBand'] = pd.cut(train_df['Age'], 5)
train_df[['AgeBand', 'Survived']].groupby(['AgeBand'], as_index=False).mean().sort_values(by='AgeBand', ascending=True)
###Output
_____no_output_____
###Markdown
Let us replace Age with ordinals based on these bands.
###Code
for dataset in combine:
dataset.loc[ dataset['Age'] <= 16, 'Age'] = 0
dataset.loc[(dataset['Age'] > 16) & (dataset['Age'] <= 32), 'Age'] = 1
dataset.loc[(dataset['Age'] > 32) & (dataset['Age'] <= 48), 'Age'] = 2
dataset.loc[(dataset['Age'] > 48) & (dataset['Age'] <= 64), 'Age'] = 3
    dataset.loc[ dataset['Age'] > 64, 'Age'] = 4
train_df.head()
###Output
_____no_output_____
###Markdown
We can now remove the AgeBand feature.
###Code
train_df = train_df.drop(['AgeBand'], axis=1)
combine = [train_df, test_df]
train_df.head()
###Output
_____no_output_____
###Markdown
Create new feature combining existing featuresWe can create a new feature for FamilySize which combines Parch and SibSp. This will enable us to drop Parch and SibSp from our datasets.
###Code
for dataset in combine:
dataset['FamilySize'] = dataset['SibSp'] + dataset['Parch'] + 1
train_df[['FamilySize', 'Survived']].groupby(['FamilySize'], as_index=False).mean().sort_values(by='Survived', ascending=False)
###Output
_____no_output_____
###Markdown
We can create another feature called IsAlone.
###Code
for dataset in combine:
dataset['IsAlone'] = 0
dataset.loc[dataset['FamilySize'] == 1, 'IsAlone'] = 1
train_df[['IsAlone', 'Survived']].groupby(['IsAlone'], as_index=False).mean()
###Output
_____no_output_____
###Markdown
Let us drop Parch, SibSp, and FamilySize features in favor of IsAlone.
###Code
train_df = train_df.drop(['Parch', 'SibSp', 'FamilySize'], axis=1)
test_df = test_df.drop(['Parch', 'SibSp', 'FamilySize'], axis=1)
combine = [train_df, test_df]
train_df.head()
###Output
_____no_output_____
###Markdown
We can also create an artificial feature combining Pclass and Age.
###Code
for dataset in combine:
dataset['Age*Class'] = dataset.Age * dataset.Pclass
train_df.loc[:, ['Age*Class', 'Age', 'Pclass']].head(10)
###Output
_____no_output_____
###Markdown
Completing a categorical featureThe Embarked feature takes S, Q, C values based on port of embarkation. Our training dataset has two missing values. We simply fill these with the most common occurrence.
###Code
freq_port = train_df.Embarked.dropna().mode()[0]
freq_port
for dataset in combine:
dataset['Embarked'] = dataset['Embarked'].fillna(freq_port)
train_df[['Embarked', 'Survived']].groupby(['Embarked'], as_index=False).mean().sort_values(by='Survived', ascending=False)
###Output
_____no_output_____
###Markdown
Converting categorical feature to numericWe can now convert the completed Embarked feature to numeric port codes. Would a one-hot (0/0/1-style) representation be better here than the consecutive codes 0, 1, 2? A sketch of that alternative follows the next cell.
###Code
for dataset in combine:
dataset['Embarked'] = dataset['Embarked'].map( {'S': 0, 'C': 1, 'Q': 2} ).astype(int)
train_df.head()
###Output
_____no_output_____
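###Markdown
As the note above asks, an alternative to consecutive codes is a one-hot encoding, which avoids implying an order among the ports. A minimal sketch, illustrative only, since the rest of the notebook keeps the ordinal column:
###Code
# one-hot alternative for the (already numeric) Embarked codes -- sketch only
pd.get_dummies(train_df['Embarked'], prefix='Port').head()
###Output
_____no_output_____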
###Markdown
Quick completing and converting a numeric featureWe can now complete the Fare feature for the single missing value in the test dataset using the median of the feature, and we do this in a single line of code.Note that we are not creating an intermediate new feature or doing any further correlation analysis to guess the missing value, as we are replacing only a single value. The completion goal achieves the desired requirement for the model algorithm to operate on non-null values.We may also want to round off the fare to two decimals as it represents currency.
###Code
test_df['Fare'].fillna(test_df['Fare'].dropna().median(), inplace=True)
test_df.head()
###Output
_____no_output_____
###Markdown
We can now create FareBand.
###Code
train_df['FareBand'] = pd.qcut(train_df['Fare'], 4)  # qcut splits on quantiles, so the four bands hold roughly equal numbers of passengers
train_df[['FareBand', 'Survived']].groupby(['FareBand'], as_index=False).mean().sort_values(by='FareBand', ascending=True)
###Output
_____no_output_____
###Markdown
Convert the Fare feature to ordinal values based on the FareBand.
###Code
for dataset in combine:
dataset.loc[ dataset['Fare'] <= 7.91, 'Fare'] = 0
dataset.loc[(dataset['Fare'] > 7.91) & (dataset['Fare'] <= 14.454), 'Fare'] = 1
dataset.loc[(dataset['Fare'] > 14.454) & (dataset['Fare'] <= 31), 'Fare'] = 2
dataset.loc[ dataset['Fare'] > 31, 'Fare'] = 3
dataset['Fare'] = dataset['Fare'].astype(int)
train_df = train_df.drop(['FareBand'], axis=1)
combine = [train_df, test_df]
train_df.head(10)
###Output
_____no_output_____
###Markdown
And the test dataset.
###Code
test_df.head(10)
###Output
_____no_output_____
###Markdown
Model, predict and solveNow we are ready to train a model and predict the required solution. There are 60+ predictive modelling algorithms to choose from. We must understand the type of problem and solution requirement to narrow down to a select few models which we can evaluate. Our problem is a classification and regression problem. We want to identify the relationship between the output (Survived or not) and the other variables or features (Gender, Age, Port...). We are also performing a category of machine learning called supervised learning, as we are training our model with a given dataset. With these two criteria - Supervised Learning plus Classification and Regression - we can narrow down our choice of models to a few. These include:- Logistic Regression- KNN or k-Nearest Neighbors- Support Vector Machines- Naive Bayes classifier- Decision Tree- Random Forest- Perceptron- Artificial neural network- RVM or Relevance Vector Machine
###Code
X_train = train_df.drop("Survived", axis=1)
Y_train = train_df["Survived"]
X_test = test_df.drop("PassengerId", axis=1).copy()
X_train.shape, Y_train.shape, X_test.shape
###Output
_____no_output_____
###Markdown
Logistic Regression is a useful model to run early in the workflow. Logistic regression measures the relationship between the categorical dependent variable (feature) and one or more independent variables (features) by estimating probabilities using a logistic function, which is the cumulative logistic distribution. Reference [Wikipedia](https://en.wikipedia.org/wiki/Logistic_regression).Note the confidence score generated by the model based on our training dataset.
###Code
# Logistic Regression
logreg = LogisticRegression()
logreg.fit(X_train, Y_train)
Y_pred = logreg.predict(X_test)
acc_log = round(logreg.score(X_train, Y_train) * 100, 2)
acc_log
###Output
_____no_output_____
###Markdown
We can use Logistic Regression to validate our assumptions and decisions for the feature creating and completing goals. This can be done by calculating the coefficients of the features in the decision function.Positive coefficients increase the log-odds of the response (and thus increase the probability), and negative coefficients decrease the log-odds of the response (and thus decrease the probability).- Sex has the highest positive coefficient, implying that as the Sex value increases (male: 0 to female: 1), the probability of Survived=1 increases the most.- Inversely, as Pclass increases, the probability of Survived=1 decreases the most.- Age*Class is a good artificial feature to model as it has the second highest negative correlation with Survived.- So is Title, with the second highest positive correlation.
###Code
coeff_df = pd.DataFrame(train_df.columns.delete(0))
coeff_df.columns = ['Feature']
coeff_df["Correlation"] = pd.Series(logreg.coef_[0])
coeff_df.sort_values(by='Correlation', ascending=False)
###Output
_____no_output_____
###Markdown
Next we model using Support Vector Machines, which are supervised learning models with associated learning algorithms that analyze data used for classification and regression analysis. Given a set of training samples, each marked as belonging to one or the other of **two categories**, an SVM training algorithm builds a model that assigns new test samples to one category or the other, making it a non-probabilistic binary linear classifier. Reference [Wikipedia](https://en.wikipedia.org/wiki/Support_vector_machine).Note that the model generates a confidence score which is higher than the Logistic Regression model.
###Code
# Support Vector Machines
svc = SVC()
svc.fit(X_train, Y_train)
Y_pred = svc.predict(X_test)
acc_svc = round(svc.score(X_train, Y_train) * 100, 2)
acc_svc
###Output
_____no_output_____
###Markdown
In pattern recognition, the k-Nearest Neighbors algorithm (or k-NN for short) is a non-parametric method used for classification and regression. A sample is classified by a majority vote of its neighbors, with the sample being assigned to the class most common among its k nearest neighbors (k is a positive integer, typically small). If k = 1, then the object is simply assigned to the class of that single nearest neighbor. Reference [Wikipedia](https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm).The KNN confidence score is better than Logistic Regression but worse than SVM.
###Code
knn = KNeighborsClassifier(n_neighbors = 3)
knn.fit(X_train, Y_train)
Y_pred = knn.predict(X_test)
acc_knn = round(knn.score(X_train, Y_train) * 100, 2)
acc_knn
###Output
_____no_output_____
###Markdown
In machine learning, naive Bayes classifiers are a family of simple probabilistic classifiers based on applying Bayes' theorem with strong (naive) independence assumptions between the features. Naive Bayes classifiers are highly scalable, requiring a number of parameters linear in the number of variables (features) in a learning problem. Reference [Wikipedia](https://en.wikipedia.org/wiki/Naive_Bayes_classifier).The model generated confidence score is the lowest among the models evaluated so far.
###Code
# Gaussian Naive Bayes
gaussian = GaussianNB()
gaussian.fit(X_train, Y_train)
Y_pred = gaussian.predict(X_test)
acc_gaussian = round(gaussian.score(X_train, Y_train) * 100, 2)
acc_gaussian
###Output
_____no_output_____
###Markdown
The perceptron is an algorithm for supervised learning of binary classifiers (functions that can decide whether an input, represented by a vector of numbers, belongs to some specific class or not). It is a type of linear classifier, i.e. a classification algorithm that makes its predictions based on a linear predictor function combining a set of weights with the feature vector. The algorithm allows for online learning, in that it processes elements in the training set one at a time. Reference [Wikipedia](https://en.wikipedia.org/wiki/Perceptron).
###Code
# Perceptron
perceptron = Perceptron()
perceptron.fit(X_train, Y_train)
Y_pred = perceptron.predict(X_test)
acc_perceptron = round(perceptron.score(X_train, Y_train) * 100, 2)
acc_perceptron
# Linear SVC
linear_svc = LinearSVC()
linear_svc.fit(X_train, Y_train)
Y_pred = linear_svc.predict(X_test)
acc_linear_svc = round(linear_svc.score(X_train, Y_train) * 100, 2)
acc_linear_svc
# Stochastic Gradient Descent
sgd = SGDClassifier()
sgd.fit(X_train, Y_train)
Y_pred = sgd.predict(X_test)
acc_sgd = round(sgd.score(X_train, Y_train) * 100, 2)
acc_sgd
###Output
C:\Users\MappingLab-lxy\Anaconda3\lib\site-packages\sklearn\linear_model\stochastic_gradient.py:128: FutureWarning: max_iter and tol parameters have been added in <class 'sklearn.linear_model.stochastic_gradient.SGDClassifier'> in 0.19. If both are left unset, they default to max_iter=5 and tol=None. If tol is not None, max_iter defaults to max_iter=1000. From 0.21, default max_iter will be 1000, and default tol will be 1e-3.
"and default tol will be 1e-3." % type(self), FutureWarning)
###Markdown
This model uses a decision tree as a predictive model which maps features (tree branches) to conclusions about the target value (tree leaves). Tree models where the target variable can take a finite set of values are called classification trees; in these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels. Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. Reference [Wikipedia](https://en.wikipedia.org/wiki/Decision_tree_learning).The model confidence score is the highest among models evaluated so far.
###Code
# Decision Tree
decision_tree = DecisionTreeClassifier()
decision_tree.fit(X_train, Y_train)
Y_pred = decision_tree.predict(X_test)
acc_decision_tree = round(decision_tree.score(X_train, Y_train) * 100, 2)
acc_decision_tree
###Output
_____no_output_____
###Markdown
The next model Random Forests is one of the most popular. Random forests or random decision forests are an ensemble learning method for classification, regression and other tasks, that operate by constructing a multitude of decision trees (n_estimators=100) at training time and outputting the class that is the mode of the classes (classification) or mean prediction (regression) of the individual trees. Reference [Wikipedia](https://en.wikipedia.org/wiki/Random_forest).The model confidence score is the highest among models evaluated so far. We decide to use this model's output (Y_pred) for creating our competition submission of results.
###Code
# Random Forest
random_forest = RandomForestClassifier(n_estimators=100)
random_forest.fit(X_train, Y_train)
Y_pred = random_forest.predict(X_test)
random_forest.score(X_train, Y_train)
acc_random_forest = round(random_forest.score(X_train, Y_train) * 100, 2)
acc_random_forest
###Output
_____no_output_____
###Markdown
Model evaluationWe can now rank our evaluation of all the models to choose the best one for our problem. While both Decision Tree and Random Forest score the same, we choose to use Random Forest as they correct for decision trees' habit of overfitting to their training set.
###Code
models = pd.DataFrame({
'Model': ['Support Vector Machines', 'KNN', 'Logistic Regression',
'Random Forest', 'Naive Bayes', 'Perceptron',
'Stochastic Gradient Decent', 'Linear SVC',
'Decision Tree'],
'Score': [acc_svc, acc_knn, acc_log,
acc_random_forest, acc_gaussian, acc_perceptron,
acc_sgd, acc_linear_svc, acc_decision_tree]})
models.sort_values(by='Score', ascending=False)
submission = pd.DataFrame({
"PassengerId": test_df["PassengerId"],
"Survived": Y_pred
})
submission.to_csv('./submission.csv', index=False)
###Output
_____no_output_____
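###Markdown
Note that every score above is accuracy on the training data, which flatters flexible models such as the decision tree and random forest. A less optimistic comparison would use cross-validation; a minimal sketch for the chosen model:
###Code
# 5-fold cross-validated accuracy for the random forest (sketch)
from sklearn.model_selection import cross_val_score
cv_scores = cross_val_score(random_forest, X_train, Y_train, cv=5)
cv_scores.mean(), cv_scores.std()
###Output
_____no_output_____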
###Markdown
Load Data
###Code
import numpy as np
import pandas as pd
train_data = pd.read_csv('train.csv')
test_data = pd.read_csv('test.csv')
train_length = train_data.index.size
data = pd.concat([train_data, test_data])
data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 1309 entries, 0 to 417
Data columns (total 12 columns):
Age 1046 non-null float64
Cabin 295 non-null object
Embarked 1307 non-null object
Fare 1308 non-null float64
Name 1309 non-null object
Parch 1309 non-null int64
PassengerId 1309 non-null int64
Pclass 1309 non-null int64
Sex 1309 non-null object
SibSp 1309 non-null int64
Survived 891 non-null float64
Ticket 1309 non-null object
dtypes: float64(3), int64(4), object(5)
memory usage: 132.9+ KB
###Markdown
Feature Engineering
###Code
%matplotlib inline
from matplotlib import pyplot as plt
import seaborn as sns
###Output
_____no_output_____
###Markdown
Title
###Code
data['Title'] = data.Name.map(lambda name: name.split('.')[0].split(',')[1].strip())
data['Title'] = data.Title.map({'Mr' : 'Mr', 'Miss' : 'Miss', 'Mrs' : 'Mrs', 'Master' : 'Master',
"Dr" : 'Rare', "Rev" : 'Rare', "Major" : 'Rare', "Col" : 'Rare',
'Mlle' : 'Miss', 'Mme' : 'Mrs', 'Don' : 'Rare', "Dona" : 'Rare',
'Lady' : 'Rare', 'the Countess' : 'Rare', 'Jonkheer' : 'Rare',
'Sir' : 'Rare', 'Capt' : 'Rare', 'Ms' : 'Miss'})
data['TitleFeature'] = pd.factorize(data.Title)[0]
sns.factorplot(x = 'Title', hue = 'Survived', data = data[:train_length], kind = 'count')
plt.show()
sns.barplot(x = 'Title', y = 'Survived', data = data[:train_length])
###Output
_____no_output_____
###Markdown
Family Size
###Code
data['FamilySize'] = data.Parch + data.SibSp + 1
data['FamilySizeFeature'] = data.FamilySize
sns.factorplot(x = 'FamilySize', hue = 'Survived', data = data[:train_length], kind = 'count')
plt.show()
sns.barplot(x = 'FamilySize', y = 'Survived', data = data[:train_length])
data['FamilySizeType'] = 'Singleton'
data.loc[(data.FamilySize > 1) & (data.FamilySize <= 4), 'FamilySizeType'] = 'Small'
data.loc[data.FamilySize > 4, 'FamilySizeType'] = 'Large'
data['FamilySizeTypeFeature'] = pd.factorize(data.FamilySizeType)[0]
sns.factorplot(x = 'FamilySizeType', hue = 'Survived', data = data[:train_length], kind = 'count')
plt.show()
sns.barplot(x = 'FamilySizeType', y = 'Survived', data = data[:train_length])
###Output
_____no_output_____
###Markdown
Family ID
###Code
has_family_id_feature = False
if has_family_id_feature:
data['Surname'] = data.Name.map(lambda name: name.split(',')[0].strip().lower())
data['FamilyId'] = data.apply(lambda row: row.Surname + str(row.FamilySize), axis = 1)
data.loc[data.FamilySize <= 2, 'FamilyId'] = 'Small'
data.FamilyId = data.FamilyId.fillna('Small')
family_id_table = data.FamilyId.value_counts()
family_id_table = pd.DataFrame({'FamilyId' : family_id_table.keys(), 'Size' : family_id_table.values})
data.FamilyId = data.FamilyId.map(lambda id: 'Small' if
(family_id_table[family_id_table.FamilyId == id]['Size'] <= 2).bool() else id)
data['FamilyIdFeature'] = pd.factorize(data.FamilyId)[0]
###Output
_____no_output_____
###Markdown
Sex
###Code
data['SexFeature'] = data.Sex.map({'male' : 0, 'female' : 1})
sns.factorplot(x = 'Sex', hue = 'Survived', data = data[:train_length], kind = 'count')
###Output
_____no_output_____
###Markdown
Pclass
###Code
data['PclassFeature'] = data.Pclass
sns.factorplot(x = 'Pclass', hue = 'Survived', data = data[:train_length], kind = 'count')
###Output
_____no_output_____
###Markdown
Fare
###Code
print(data[data.Fare.isnull()][['Pclass', 'Age', 'Sex', 'Embarked']])
print(data[(data.Pclass == 3) & (data.Embarked == 'S')]['Fare'].median())
data['FareFeature'] = data.Fare.fillna(data[(data.Pclass == 3) & (data.Embarked == 'S')]['Fare'].median())
###Output
Pclass Age Sex Embarked
152 3 60.5 male S
8.05
###Markdown
Embarked
###Code
print(data[data.Embarked.isnull()][['Fare', 'Pclass']])
sns.boxplot(x = 'Embarked', y = 'Fare', hue = 'Pclass', data = data[data.Embarked.notnull()])
plt.show()
data['EmbarkedFilled'] = data.Embarked.fillna('C')
data['EmbarkedFeature'] = pd.factorize(data.EmbarkedFilled)[0]  # factorize the filled column so the two missing rows are not coded -1
sns.factorplot(x = 'EmbarkedFilled', hue = 'Survived', data = data[:train_length], kind = 'count')
plt.show()
sns.barplot(x = 'EmbarkedFilled', y = 'Survived', data = data[:train_length])
###Output
Fare Pclass
61 80.0 1
829 80.0 1
###Markdown
Cabin
###Code
data['Cabin'] = data.Cabin.fillna('0')
data['CabinPrefix'] = data.Cabin.map(lambda cabin: cabin[0])
data['CabinFeature'] = pd.factorize(data.CabinPrefix)[0]
sns.factorplot(x = 'CabinPrefix', hue = 'Survived', data = data[:train_length], kind = 'count')
plt.show()
sns.barplot(x = 'CabinPrefix', y = 'Survived', data = data[:train_length])
data['CabinTypeFeature'] = 0
data.loc[data.CabinPrefix != '0', 'CabinTypeFeature'] = 1
sns.factorplot(x = 'CabinTypeFeature', hue = 'Survived', data = data[:train_length], kind = 'count')
plt.show()
sns.barplot(x = 'CabinTypeFeature', y = 'Survived', data = data[:train_length])
###Output
_____no_output_____
###Markdown
Age
###Code
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import cross_val_score  # formerly sklearn.cross_validation, removed in scikit-learn 0.20
feature_names = data.columns[data.columns.str.contains('Feature')]
Xtrain_age = data[data.Age.notnull()][feature_names]
ytrain_age = data[data.Age.notnull()].Age
regressor = ExtraTreesRegressor()
score = cross_val_score(regressor, Xtrain_age, ytrain_age, cv = 5)
print(score, score.mean())
regressor.fit(Xtrain_age, ytrain_age)
Xtest_age = data[data.Age.isnull()][feature_names]
ages = regressor.predict(Xtest_age)
data['AgeFeature'] = data.Age
data.loc[data.Age.isnull(), 'AgeFeature'] = ages
sns.distplot(data.Age[data.Age.notnull()])
plt.show()
sns.distplot(data.AgeFeature)
data['ChildFeature'] = 0
data.loc[data.AgeFeature <= 18, 'ChildFeature'] = 1
sns.factorplot(x = 'ChildFeature', hue = 'Survived', data = data[:train_length], kind = 'count')
plt.show()
data['AdultFemaleFeature'] = 0
data.loc[(data.AgeFeature > 18) & (data.Sex == 'female'), 'AdultFemaleFeature'] = 1
sns.factorplot(x = 'AdultFemaleFeature', hue = 'Survived', data = data[:train_length], kind = 'count')
plt.show()
data['AdultMaleFeature'] = 0
data.loc[(data.AgeFeature > 18) & (data.Sex == 'male'), 'AdultMaleFeature'] = 1
sns.factorplot(x = 'AdultMaleFeature', hue = 'Survived', data = data[:train_length], kind = 'count')
data['MotherFeature'] = 0
data.loc[(data.Sex == 'female') & (data.Parch > 0) & (data.AgeFeature > 18) & (data.Title != 'Miss'), 'MotherFeature'] = 1
sns.factorplot(x = 'MotherFeature', hue = 'Survived', data = data[:train_length], kind = 'count')
###Output
_____no_output_____
###Markdown
Dead Woman and Survived Man of Surname
###Code
data['Surname'] = data.Name.map(lambda name: name.split(',')[0].strip().lower())
table_surname = pd.DataFrame(data.Surname.value_counts())
table_surname['DeadWomanFeature'] = data.Surname[(data.AdultFemaleFeature == 1) &
(data.Survived == 0) & ((data.Parch > 0) | (data.SibSp > 0))].value_counts()
table_surname['DeadWomanFeature'] = table_surname.DeadWomanFeature.fillna(0)
table_surname.loc[table_surname.DeadWomanFeature > 0, 'DeadWomanFeature'] = 1
table_surname['SurvivedManFeature'] = data.Surname[(data.AdultMaleFeature == 1) &
(data.Survived == 1) & ((data.Parch > 0) | (data.SibSp > 0))].value_counts()
table_surname['SurvivedManFeature'] = table_surname.SurvivedManFeature.fillna(0)
table_surname.loc[table_surname.SurvivedManFeature > 0, 'SurvivedManFeature'] = 1
table_surname.drop('Surname', axis = 1, inplace = True)
data = data.merge(table_surname, left_on = 'Surname', right_index = True, how = 'left')
sns.factorplot(x = 'DeadWomanFeature', hue = 'Survived', data = data[:train_length], kind = 'count')
plt.show()
sns.factorplot(x = 'SurvivedManFeature', hue = 'Survived', data = data[:train_length], kind = 'count')
table_ticket = pd.DataFrame(data.Ticket.value_counts())
table_ticket['TicketDeadWomanFeature'] = data.Ticket[(data.AdultFemaleFeature == 1) & (data.Survived == 0) &
((data.Parch > 0) | (data.SibSp > 0))].value_counts()
table_ticket['TicketDeadWomanFeature'] = table_ticket.TicketDeadWomanFeature.fillna(0)
table_ticket.loc[table_ticket.TicketDeadWomanFeature > 0, 'TicketDeadWomanFeature'] = 1
table_ticket['TicketSurvivedManFeature'] = data.Ticket[(data.AdultMaleFeature == 1) & (data.Survived == 1) &
((data.Parch > 0) | (data.SibSp > 0))].value_counts()
table_ticket['TicketSurvivedManFeature'] = table_ticket.TicketSurvivedManFeature.fillna(0)
table_ticket.loc[table_ticket.TicketSurvivedManFeature > 0, 'TicketSurvivedManFeature'] = 1
table_ticket.drop('Ticket', axis = 1, inplace = True)
data = data.merge(table_ticket, left_on = 'Ticket', right_index = True, how = 'left')
sns.factorplot(x = 'TicketDeadWomanFeature', hue = 'Survived', data = data[:train_length], kind = 'count')
plt.show()
sns.factorplot(x = 'TicketSurvivedManFeature', hue = 'Survived', data = data[:train_length], kind = 'count')
###Output
_____no_output_____
###Markdown
Modeling Feature Selection
###Code
feature_names = data.columns[data.columns.str.contains('Feature')]
Xtrain = data[:train_length][feature_names]
ytrain = train_data.Survived
Xtest = data[train_length:][feature_names]
from sklearn.ensemble import ExtraTreesClassifier
extra_classifier = ExtraTreesClassifier(n_estimators = 200)
extra_classifier.fit(Xtrain, ytrain)
importances = pd.DataFrame()
importances['FeatureName'] = Xtrain.columns
importances['Importance'] = extra_classifier.feature_importances_
importances.sort_values('Importance', ascending = False)
from sklearn.feature_selection import SelectFromModel
select_model = SelectFromModel(extra_classifier, prefit = True)
Xtrain_selected = select_model.transform(Xtrain)
Xtest_selected = select_model.transform(Xtest)
Xtrain_selected.shape, Xtest_selected.shape, select_model
###Output
_____no_output_____
###Markdown
Parameters Tuning
###Code
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, GridSearchCV  # formerly in cross_validation / grid_search, removed in scikit-learn 0.20
random_forest = RandomForestClassifier()
parameter_grid = {'max_features' : [None, 'sqrt', 'log2'],
'max_depth' : [4, 5, 6, 7, 8],
'n_estimators' : [200, 250, 500],
# 'n_estimators' : [200, 250, 500, 1000, 2000],
'criterion' : ['gini', 'entropy']}
grid_search = GridSearchCV(random_forest, param_grid = parameter_grid)
grid_search.fit(Xtrain, ytrain)
print('Best Score: {}'.format(grid_search.best_score_))
print('Best Parameter: {}'.format(grid_search.best_params_))
###Output
Best Score: 0.8888888888888888
Best Parameter: {'n_estimators': 200, 'max_depth': 8, 'max_features': None, 'criterion': 'gini'}
###Markdown
Predict
###Code
survived = grid_search.predict(Xtest)
predict_data = pd.DataFrame()
predict_data['PassengerId'] = test_data.PassengerId
predict_data['Survived'] = survived.astype(int)
predict_data.to_csv('predict.csv', index = False)
survived.sum(), len(survived)
###Output
_____no_output_____
###Markdown
Cross Validation
###Code
random_forest = RandomForestClassifier(n_estimators = 250, max_depth = 8, criterion = 'entropy', max_features = None)
score = cross_val_score(random_forest, Xtrain, ytrain, cv = 10)
print(score)
print(score.mean())
random_forest.fit(Xtrain, ytrain)
survived = random_forest.predict(Xtest)
predict_data = pd.DataFrame()
predict_data['PassengerId'] = test_data.PassengerId
predict_data['Survived'] = survived.astype(int)
predict_data.to_csv('predict_validation.csv', index = False)
survived.sum(), len(survived)
###Output
_____no_output_____
###Markdown
[Titanic: Machine Learning from Disaster](https://www.kaggle.com/c/titanic)Predict survival on the Titanic and get familiar with ML basics1. Collecting the data2. Exploratory data analysis3. Feature Engineering4. Modelling5. Testing 1. Collecting the data
###Code
# Data manipulation and analysis
import pandas as pd
train = pd.read_csv('https://raw.githubusercontent.com/motoJinC25/kaggle-models/master/Titanic/input/train.csv')
test = pd.read_csv('https://raw.githubusercontent.com/motoJinC25/kaggle-models/master/Titanic/input/test.csv')
###Output
_____no_output_____
###Markdown
2. Exploratory data analysis
###Code
# Printing first 5 rows of the train dataset.
train.head()
###Output
_____no_output_____
###Markdown
Data Dictionary - PassengerId : passenger ID- Survived : survival status (1: survived, 0: died)- Pclass : ticket class (1: 1st, 2: 2nd, 3: 3rd)- Name : passenger name- Sex : passenger sex- Age : passenger age- SibSp : number of siblings/spouses aboard- Parch : number of parents/children aboard- Ticket : ticket number- Fare : ticket fare- Cabin : cabin number- Embarked : port of embarkation (C: Cherbourg, Q: Queenstown, S: Southampton)
###Code
# Printing first 5 rows of the test dataset.
test.head()
train.shape
test.shape
train.info()
test.info()
train.isnull().sum()
test.isnull().sum()
###Output
_____no_output_____
###Markdown
import python lib for visualization
###Code
# Plotting library
import matplotlib.pyplot as plt
# Data visualization library based on matplotlib
import seaborn as sns
sns.set() # Setting seaborn default for plots
###Output
_____no_output_____
###Markdown
Bar Chart for Categorical Features- Pclass- Sex- SibSp (# of siblings and spouses)- Parch (# of parents and children)- Embarked
###Code
def bar_chart(feature):
survived = train[train['Survived']==1][feature].value_counts()
dead = train[train['Survived']==0][feature].value_counts()
df = pd.DataFrame([survived, dead])
df.index = ['Survived', 'Dead']
df.plot(kind='bar', stacked=True, figsize=(10, 3))
bar_chart('Pclass')
bar_chart('Sex')
bar_chart('SibSp')
bar_chart('Parch')
bar_chart('Embarked')
###Output
_____no_output_____
###Markdown
3. Feature Engineering Binning and Mapping- Name- Sex- Age- Embarked- Fare- Cabin- FamilySizeDrop- Ticket- SibSp- Parch- PassengerId (only train dataset)
###Code
train.describe(include="all")
###Output
_____no_output_____
###Markdown
Name
###Code
combine = [train, test]
for dataset in combine:
dataset['Title'] = dataset.Name.str.extract('([A-Za-z]+)\.', expand=False)
train['Title'].value_counts()
test['Title'].value_counts()
title_mapping = {"Mr":0, "Miss":1, "Mrs":2, "Master":3, "Rev":3, "Col":3, "Dona":3, "Dr":3, "Ms":3}
for dataset in combine:
dataset['Title'] = dataset['Title'].map(title_mapping)
dataset['Title'] = dataset['Title'].fillna(0)
train.head()
test.head()
bar_chart('Title')
train.drop('Name', axis=1, inplace=True)
test.drop('Name', axis=1, inplace=True)
train.head()
test.head()
###Output
_____no_output_____
###Markdown
Sex
###Code
sex_mapping = {"male":0, "female":1}
for dataset in combine:
dataset['Sex'] = dataset['Sex'].map(sex_mapping)
bar_chart('Sex')
###Output
_____no_output_____
###Markdown
Age
###Code
train['Age'].fillna(train.groupby('Title')['Age'].transform('median'), inplace=True)
test['Age'].fillna(test.groupby('Title')['Age'].transform('median'), inplace=True)
train.head()
for dataset in combine:
    dataset.loc[ dataset['Age'] <= 16, 'Age'] = 0
    dataset.loc[(dataset['Age'] > 16) & (dataset['Age'] <= 26), 'Age'] = 1
    dataset.loc[(dataset['Age'] > 26) & (dataset['Age'] <= 36), 'Age'] = 2
    dataset.loc[(dataset['Age'] > 36) & (dataset['Age'] <= 62), 'Age'] = 3
    dataset.loc[ dataset['Age'] > 62, 'Age'] = 4
train.head()
bar_chart('Age')
###Output
_____no_output_____
###Markdown
Embarked
###Code
Pclass1 = train[train['Pclass']==1]['Embarked'].value_counts()
Pclass2 = train[train['Pclass']==2]['Embarked'].value_counts()
Pclass3 = train[train['Pclass']==3]['Embarked'].value_counts()
df = pd.DataFrame([Pclass1, Pclass2, Pclass3])
df.index = ['1st class','2nd class', '3rd class']
df.plot(kind='bar',stacked=True, figsize=(10,5))
for dataset in combine:
dataset['Embarked'] = dataset['Embarked'].fillna('S')
train.head()
embarked_mapping = {'S':0, 'C':1, 'Q':2}
for dataset in combine:
dataset['Embarked'] = dataset['Embarked'].map(embarked_mapping)
bar_chart('Embarked')
###Output
_____no_output_____
###Markdown
Fare
###Code
train["Fare"].fillna(train.groupby("Pclass")["Fare"].transform("median"), inplace=True)
test["Fare"].fillna(test.groupby("Pclass")["Fare"].transform("median"), inplace=True)
train.head()
for dataset in combine:
dataset.loc[ dataset['Fare'] <= 17, 'Fare'] = 0
dataset.loc[(dataset['Fare'] > 17) & (dataset['Fare'] <= 30), 'Fare'] = 1
dataset.loc[(dataset['Fare'] > 30) & (dataset['Fare'] <= 100), 'Fare'] = 2
dataset.loc[ dataset['Fare'] > 100, 'Fare'] = 3
train.head()
###Output
_____no_output_____
###Markdown
Cabin
###Code
train.Cabin.value_counts()
for dataset in combine:
dataset['Cabin'] = dataset['Cabin'].str[:1]
Pclass1 = train[train['Pclass']==1]['Cabin'].value_counts()
Pclass2 = train[train['Pclass']==2]['Cabin'].value_counts()
Pclass3 = train[train['Pclass']==3]['Cabin'].value_counts()
df = pd.DataFrame([Pclass1, Pclass2, Pclass3])
df.index = ['1st class','2nd class', '3rd class']
df.plot(kind='bar',stacked=True, figsize=(10,5))
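# The deck letters below are encoded as evenly spaced ordinal values (step 0.4)
# rather than one-hot columns, keeping Cabin as a single ordered numeric feature.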
cabin_mapping = {"A": 0, "B": 0.4, "C": 0.8, "D": 1.2, "E": 1.6, "F": 2, "G": 2.4, "T": 2.8}
for dataset in combine:
dataset['Cabin'] = dataset['Cabin'].map(cabin_mapping)
train["Cabin"].fillna(train.groupby("Pclass")["Cabin"].transform("median"), inplace=True)
test["Cabin"].fillna(test.groupby("Pclass")["Cabin"].transform("median"), inplace=True)
train.head()
###Output
_____no_output_____
###Markdown
FamilySize
###Code
train["FamilySize"] = train["SibSp"] + train["Parch"] + 1
test["FamilySize"] = test["SibSp"] + test["Parch"] + 1
family_mapping = {1: 0, 2: 0.4, 3: 0.8, 4: 1.2, 5: 1.6, 6: 2, 7: 2.4, 8: 2.8, 9: 3.2, 10: 3.6, 11: 4}
for dataset in combine:
dataset['FamilySize'] = dataset['FamilySize'].map(family_mapping)
train.head()
###Output
_____no_output_____
###Markdown
Drop features
- Ticket
- SibSp
- Parch
- PassengerId (only train dataset)
###Code
features_drop = ['Ticket', 'SibSp', 'Parch']
train = train.drop(features_drop, axis=1)
test = test.drop(features_drop, axis=1)
train = train.drop(['PassengerId'], axis=1)
###Output
_____no_output_____
###Markdown
Datasets
###Code
train_data = train.drop('Survived', axis=1)
target = train['Survived']
train_data.shape, target.shape
train_data.head()
test_data = test.drop("PassengerId", axis=1).copy()
test_data.head()
###Output
_____no_output_____
###Markdown
4. Modelling
###Code
# Machine learning library
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
# Array-processing package
import numpy as np
train_data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 891 entries, 0 to 890
Data columns (total 8 columns):
Pclass 891 non-null int64
Sex 891 non-null int64
Age 891 non-null float64
Fare 891 non-null float64
Cabin 891 non-null float64
Embarked 891 non-null int64
Title 891 non-null float64
FamilySize 891 non-null float64
dtypes: float64(5), int64(3)
memory usage: 55.8 KB
###Markdown
K-fold Cross Validation
###Code
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
k_fold = KFold(n_splits=10, shuffle=True, random_state=0)
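# Illustrative check (a sketch; assumes train_data defined above): peek at the
# first of the 10 folds to confirm the split sizes.
tr_idx, va_idx = next(iter(k_fold.split(train_data)))
print(f"first fold: train={len(tr_idx)} rows, validation={len(va_idx)} rows")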
###Output
_____no_output_____
###Markdown
Random Forest
###Code
clf = RandomForestClassifier(n_estimators=13)
scoring = 'accuracy'
score = cross_val_score(clf, train_data, target, cv=k_fold, n_jobs=1, scoring=scoring)
print(score)
# Random Forest Score
round(np.mean(score)*100, 2)
###Output
_____no_output_____
###Markdown
5. Testing
###Code
clf = RandomForestClassifier(n_estimators=13)
clf.fit(train_data, target)
prediction = clf.predict(test_data)
submission = pd.DataFrame({
"PassengerId": test["PassengerId"],
"Survived": prediction
})
submission.to_csv('submission.csv', index=False)
submission = pd.read_csv('submission.csv')
submission.head()
###Output
_____no_output_____
###Markdown
- PassengerId <- out of scope
- Survived <- target feature
- Name <- out of scope
- Sex <- in scope
- Age <- in scope
- SibSp <- in scope
- Parch <- in scope
- Ticket <- out of scope
- Fare <- in scope
- Cabin <- out of scope
- Embarked <- in scope
###Code
df = train[['Survived', 'Sex_cat', 'Age', 'SibSp', 'Parch', 'Fare', 'Embarked_cat']]
df.head()
plt.rcParams['figure.figsize']=(10,5)
sns.heatmap(train.corr(), vmin=-1., vmax=1., annot=True, linewidths=1, cmap="YlGnBu",);
plt.rcParams['figure.figsize']=(10,5)
sns.heatmap(train.corr(), vmin=-1., vmax=1., annot=True, linewidths=1, cmap=sns.color_palette(n_colors=6),);
# plt.rcParams['figure.figsize']=(10,5)
sns.heatmap(train.corr().round(1), vmin=-1., vmax=1., annot=True, linewidths=3, cmap=sns.color_palette("Reds"), );
train.corr().round(2)
current_palette = sns.color_palette()
sns.palplot(current_palette)
sns.palplot(sns.color_palette("Blues"))
###Output
_____no_output_____
###Markdown
Libraries
###Code
# ETL
import pandas as pd
import numpy as np
# Visualization
import plotly.express as px
import plotly.io as pio
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
from mlxtend.plotting import plot_decision_regions
# Regression and metrics
from sklearn.linear_model import LogisticRegression
import statsmodels.api as sm
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay, accuracy_score
from sklearn.metrics import precision_score, recall_score,classification_report, roc_auc_score
from sklearn.tree import DecisionTreeClassifier, plot_tree
# Train/test split
from sklearn.model_selection import train_test_split
# Plot themes
sns.set()
pio.templates.default = 'plotly_white'
test = pd.read_csv('test.csv') # Test dataset
gender = pd.read_csv('gender_submission.csv')
df = pd.read_csv('titanic.csv') # Dataset for training and validation
# Helper columns
df['Survived2'] = df['Survived'].map({0: 'Dead', 1:'Survived'})
df['Class'] = df['Pclass'].map({1: 'First Class', 2:'Second Class', 3: 'Third Class'})
df['Sex2'] = df['Sex'].map({'male': 0, 'female': 1})
df.head()
###Output
_____no_output_____
###Markdown
Exploratory Data Analysis
###Code
# Port of embarkation vs. survival
pd.crosstab(df['Embarked'], df['Survived'].map({0:'Dead', 1:'Survived'}))
# Sex vs. survival
pd.crosstab(df['Sex'], df['Survived'].map({0:'Dead', 1:'Survived'}))
# Age vs. survival
df_age_dead_surv = pd.crosstab(pd.cut(df['Age'], bins = [0,2,5,10,20,30,40,50,60,70,80,90,100]), df['Survived'].map({0:'Dead', 1:'Survived'}))
df_age_dead_surv
# People by age group
plt.figure(figsize=(10,6))
sns.histplot(df, x = df['Age'])
plt.title('Distribution of people by age group')
plt.show()
# mean_age, median_age = (29.69911764705882, 28.0) -> right-skewed
mean_age = df['Age'].mean()
median_age = df['Age'].median()
df['Age'].median()
print(f'''Mean age: {mean_age:.2f}
Median age: {median_age:.2f}
''')
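# Hedged numeric check of the skew (assumes scipy is available): positive
# sample skewness confirms the right-tailed shape discussed below.
from scipy.stats import skew
print(f"Age skewness: {skew(df['Age'].dropna()):.2f}")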
###Output
_____no_output_____
###Markdown
From the chart above, we can see that the age variable has a right-skewed distribution.
###Code
k = [i for i in range(0, 110,10)]
sns.histplot(df.query('Sex == "female"'), x = df.query('Sex == "female"')['Age'], bins = k)
sns.histplot(df.query('Sex == "female" and Survived == 0'), x = df.query('Sex == "female" and Survived == 0')['Age'], bins = k,color = 'red').set(title = 'Women who died')
plt.show()
sns.histplot(df.query('Sex == "female"'), x = df.query('Sex == "female"')['Age'], bins = k)
sns.histplot(df.query('Sex == "female" and Survived == 1'), x = df.query('Sex == "female" and Survived == 1')['Age'], bins = k,color = 'green').set(title = 'Women who survived')
plt.show()
sns.histplot(df.query('Sex == "male"'), x = df.query('Sex == "male"')['Age'], bins = k)
sns.histplot(df.query('Sex == "male" and Survived == 0'), x = df.query('Sex == "male" and Survived == 0')['Age'], bins = k, color = 'red').set(title = 'Men who died')
plt.show()
sns.histplot(df.query('Sex == "male"'), x = df.query('Sex == "male"')['Age'], bins = k)
sns.histplot(df.query('Sex == "male" and Survived == 1'), x = df.query('Sex == "male" and Survived == 1')['Age'], bins = k, color = 'green').set(title = 'Men who survived')
plt.show()
# Sturges' rule: k = 1 + 3.322 * log10(n)
k = 1 + 3.322 * np.log10(len(df[df['Survived'] == 1]))
k = round(k)
k
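# Sanity check of the formula: with n = 342 survivors (the count in the
# standard Kaggle training set), k = 1 + 3.322 * log10(342) ≈ 9.4, so about 9 bins.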
fig = px.histogram(data_frame = df[df['Survived'] == 1], x = 'Age', y = 'Survived', color = 'Sex',
title = 'Distribution of survivors by age for both sexes',
color_discrete_map={'male': 'blue',
'female': 'rgb(102,197,204)'},
nbins = k)
fig.update_layout(bargap=0.30)
# Plotly application for interactive visualization
fig = px.sunburst(data_frame=df, # Our dataset
path=["Class", "Sex", "Survived2"], # Root, Branches, Leaves
width=700, height=600,
color="Class",
color_discrete_map={'First Class': 'rgb(246,207,113)',
'Second Class': 'rgb(248,156,116)',
'Third Class': 'rgb(102,197,204)'}, # Colours (could be changed easily)
maxdepth=-1,
branchvalues='total',
hover_name='Class', # Hover name for chosen column
hover_data={'Class': False},
title='Percentage of deaths among men and women in each class', template='ggplot2'# Title and the template
)
fig.update_traces(textinfo='label+percent parent')
fig.update_layout(font=dict(size=15))
fig.show()
###Output
_____no_output_____
###Markdown
###Code
titanic = df
q1 = titanic['Age'].quantile(q = 0.25)
q2 = titanic['Age'].quantile(q = 0.5)
q3 = titanic['Age'].quantile(q = 0.75)
iqr = q3 - q1
minimo = q1 - 1.5 * iqr
maximo = q3 + 1.5 * iqr
# Outliers above the upper Tukey fence (Q3 + 1.5*IQR)
num_outliers = len(titanic[titanic['Age'] > maximo])
dict_outliers = titanic[titanic['Age'] > maximo][['Age', 'Name']]
sns.boxplot(data = titanic, x = 'Age')
plt.show()
print(f'''
q1: {q1} (0 to 25% of the pop.)
q2: {q2} (up to 50% of the pop., the median)
q3: {q3} (up to 75% of the pop.)
iqr: {iqr}
minimum: {minimo}
maximum: {maximo}
num_outliers: {num_outliers}
Outliers: {dict_outliers}
''')
# Titanic: survival analysis for men and women
sns.histplot(titanic.query('Sex == "male"'), y = titanic.query('Sex == "male"')['Survived'].map({1 : 'Survived', 0: 'Died'})).set(title = 'Men - absolute counts', xlabel = 'Count')
plt.show()
sns.histplot(titanic.query('Sex == "female"'), y = titanic.query('Sex == "female"')['Survived'].map({1 : 'Survived', 0: 'Died'})).set(title = 'Women - absolute counts', xlabel = 'Count')
plt.show()
df_analise_morte = pd.crosstab(titanic['Sex'], titanic['Survived'])
df_analise_morte.columns = ['Died', 'Survived']
display(df_analise_morte)
sns.heatmap(df_analise_morte, annot=True , fmt = '.0f')
plt.show()
meninas_mortas = df_analise_morte['Died'].loc['female'] / (df_analise_morte['Died'].loc['female'] + df_analise_morte['Survived'].loc['female'])
meninos_mortos = df_analise_morte['Died'].loc['male'] / (df_analise_morte['Died'].loc['male'] + df_analise_morte['Survived'].loc['male'])
meninas_vivas = df_analise_morte['Survived'].loc['female'] / (df_analise_morte['Died'].loc['female'] + df_analise_morte['Survived'].loc['female'])
meninos_vivos = df_analise_morte['Survived'].loc['male'] / (df_analise_morte['Died'].loc['male'] + df_analise_morte['Survived'].loc['male'])
df_relativo = pd.DataFrame(
data = {'Died': [meninas_mortas, meninos_mortos],
'Survived': [meninas_vivas, meninos_vivos]},
index = ['female', 'male'])
sns.heatmap(df_relativo, annot=True )
plt.show()
###Output
_____no_output_____
###Markdown
Cabin x Survived x Fare x Pclass
###Code
# Survival rate by passenger class
sns.barplot(x = df['Pclass'], y = df['Survived'])
plt.title('Survival vs. passenger class')
plt.xlabel('Passenger class')
plt.ylabel('Survival rate')
plt.show()
###Output
_____no_output_____
###Markdown
From the chart above, we can see that the survival rate is higher for the lower-numbered classes: 1st > 2nd > 3rd.
###Code
# Correlation matrix of the features
df.corr()
# Heatmap of the correlation matrix
plt.figure(figsize=(16, 8))
sns.heatmap(df.corr(),vmin = -1 , vmax = 1 , cmap='coolwarm',annot = True)
plt.show()
###Output
_____no_output_____
###Markdown
There is a visible correlation between the 'Survived' variable and the variables 'Fare', 'Sex2', and 'Pclass'. Searching for more correlations
###Code
sns.pairplot(df)
plt.title('Pairwise scatter of the features')
plt.show()
###Output
_____no_output_____
###Markdown
Looking at the plots, there is no straight line between the features, indicating no linear trend between them. However, a binary logistic-style pattern is visible, which suggests that a logistic regression model would be a good fit for relating the dataset's variables to survival.
###Code
df.info()
# Preparing the dataset for logistic regression
df['Age'].fillna(median_age, inplace = True)
df.dropna(axis = 0, subset = ['Embarked'],inplace = True)
df.drop('Cabin', axis = 1, inplace = True)
df.info()
# Features and target
X = df[['Pclass', 'Age', 'Fare', 'Sex2']]
#X = df.drop('Survived', axis = 1)
y = df['Survived']
# Split into training and validation sets
X_train, X_validation, y_train, y_validation = train_test_split(X, y, test_size=0.3, random_state=42)
# Logistic regression model
clf = LogisticRegression(solver = 'liblinear').fit(X_train, y_train)
clf.coef_
clf.intercept_
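# Interpretation sketch: exponentiating the coefficients gives odds ratios,
# which read more easily than raw log-odds (values > 1 raise survival odds).
print(dict(zip(X.columns, np.exp(clf.coef_[0]).round(3))))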
y_pred = clf.predict(X_train)
y_pred
y_proba = clf.predict_proba(X_train)
y_proba
cm = confusion_matrix(y_train, y_pred)
cm_display = ConfusionMatrixDisplay(cm)
cm_display.plot(cmap = 'Blues')
plt.title('Predicted vs. training labels')
plt.grid(False)
plt.show()
###Output
_____no_output_____
###Markdown
Training Metrics
###Code
# Accuracy
print(f'Accuracy: {accuracy_score(y_train, y_pred):.4f}')
precisao = precision_score(y_train, y_pred, average = 'weighted')
# Precision
print(f'Precision: {precisao:.4f}')
# Overall recall of the model
print(f'Recall: {recall_score(y_train, y_pred):.4f}')
# Per-class metrics
print(classification_report(y_train , y_pred))
auc = roc_auc_score(y_train, y_pred)
auc
# Gini coefficient derived from the AUC
gini = 2 * auc - 1
gini
###Output
_____no_output_____
###Markdown
Using a decision tree model
###Code
tree = DecisionTreeClassifier(criterion = 'gini', max_depth = 3)
# Features and target
X = df[['Pclass', 'Sex2', 'Age']]
y = df['Survived']
# Split into training and validation sets
X_train, X_validation, y_train, y_validation = train_test_split(X, y, test_size=0.3, random_state=42)
tree.fit(X_train, y_train)
y_proba = tree.predict_proba(X_train)[:, 1] # Probability of class 1
y_pred = tree.predict(X_train)
y_pred[:5]
y_train[:5]
accuracy_score(y_train, y_pred)
precision_score(y_train, y_pred)
# Recall: how well the model identifies survivors
recall_score(y_train, y_pred)
roc_auc_score(y_train, y_proba)
confusion_matrix(y_train, y_pred)
plt.figure(figsize = (16,10))
plot_tree(tree, feature_names = X.columns, class_names = ['Dead', 'Survived'])
plt.title('Decision tree map')
plt.show()
df_model = pd.get_dummies(df.drop(['Name', 'Sex2', 'Class', 'Survived2', 'Ticket','Embarked', 'Parch'], axis = 1))
df_model
tree = DecisionTreeClassifier(criterion = 'entropy', max_depth = 3) # Testing showed max_depth=3 gave the best recall_score without overfitting
# Features and target
X = df_model.drop(['Survived', 'PassengerId', 'Sex_male'], axis = 1)
y = df_model['Survived']
# Split into training and validation sets
X_train, X_validation, y_train, y_validation = train_test_split(X, y, test_size=0.3, random_state = 42)
tree.fit(X_train, y_train)
# Prediction
y_proba = tree.predict_proba(X_train)[:, 1] # Probability of class 1
y_pred = tree.predict(X_train)
# Accuracy
print(f'Accuracy: {accuracy_score(y_train, y_pred):.4f}')
# Precision
print(f'Precision: {precision_score(y_train, y_pred):.4f}')
# Recall: how well the model identifies survivors
print(f'Recall: {recall_score(y_train, y_pred):.4f}')
# ROC AUC
print(f'AUC: {roc_auc_score(y_train, y_proba):.4f}')
print(f'Gini: {roc_auc_score(y_train, y_proba) * 2 - 1:.4f}')
print()
# Confusion matrix of the model
cm = confusion_matrix(y_train, y_pred)
cm_display = ConfusionMatrixDisplay(cm)
cm_display.plot(cmap = 'Blues')
plt.title('Confusion matrix')
plt.grid(False)
plt.show()
plt.figure(figsize = (16,10))
plot_tree(tree, feature_names = X.columns, class_names = ['Dead', 'Survived'])
plt.title('Decision tree map')
plt.show()
###Output
Accuracy: 0.8312
Precision: 0.8140
Recall: 0.7292
AUC: 0.8789
Gini: 0.7577
###Markdown
**Titanic: Machine Learning from Disaster**

This notebook predicts survival in the disaster.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%pylab inline
from google.colab import files
uploaded = files.upload()
# Get a glimpse of data
titanic_df = pd.read_csv('train.csv')
titanic_df.head()
# First, fill in missing data, starting with Age
# Get the average and std to bound the random numbers
# Get the NaN count to determine how many values to generate
fig, (axis1,axis2) = plt.subplots(1,2,figsize=(15,4))
# Plot original age data
titanic_df['Age'].dropna().hist(bins=70, ax=axis1, ls='solid', lw=0.2, ec='black')
average_age = titanic_df.Age.mean()
std_age = titanic_df.Age.std()
nan_age_number = titanic_df.Age.isnull().sum()
# Generate
rand_age = np.random.randint(average_age - std_age, average_age + std_age,
size = nan_age_number)
# Fill in
titanic_df.loc[np.isnan(titanic_df['Age']), 'Age'] = rand_age
# Plot result
titanic_df['Age'].hist(bins=70, ax=axis2, ls='solid', lw=0.2, ec='black')
axis1.set_title('Before Fill In')
axis1.set_xlabel('Age')
axis1.set_ylabel('People Number')
axis2.set_title('After Fill In')
axis2.set_xlabel('Age')
axis2.set_ylabel('People Number')
# First, drop columns that seem useless for this analysis:
# ID, name, ticket number, embark place, cabin, SibSp, and Parch
titanic_df = titanic_df.drop(['PassengerId','Name','Ticket','Embarked','Cabin','SibSp','Parch'],axis = 1)
titanic_df.head()
# At first let's analyse from sex and age view
# Divide children from male and female type
titanic_df.loc[titanic_df['Age'] <= 16, 'Sex'] = 'child'
titanic_df = titanic_df.drop(['Age'],axis=1)
titanic_df.head()
# Give more descriptive labels for Survived and Pclass
titanic_df['Survival'] = titanic_df.Survived.map({0:'Died',1:'Survived'})
titanic_df['Class'] = titanic_df.Pclass.map({1:'1st Class',2:'2nd Class',3:'3rd Class'})
# Child and not child
titanic_df['Child'] = titanic_df.Sex.map({'child':'Is Child','female':'Not Child','male':'Not Child'})
titanic_df.head()
# Draw plots to see the relations
# for the sex and age factors more clearly
sns.factorplot(data=titanic_df,x='Sex',y='Survived',kind="violin",size=4,aspect=3)
plt.yticks([0,1], ['Died', 'Survived'])
# Plot basic information about sex and age
fig, (axis1,axis2) = plt.subplots(1,2,figsize=(15,5))
sns.countplot(data=titanic_df, x='Sex',ax=axis1)
sns.countplot(data=titanic_df,x='Survived',hue='Sex',order=[0,1],ax=axis2)
plt.xticks([0,1], ['Died', 'Survived'])
fig, (axis3,axis4) = plt.subplots(1,2,figsize=(15,5))
# Group data by sex and whether child
sex_survi_groups = titanic_df[['Sex','Survived']].groupby(['Sex'],as_index=True)
#Divide into three groups
men_group = sex_survi_groups.get_group('male')
women_group = sex_survi_groups.get_group('female')
children_group = sex_survi_groups.get_group('child')
# Plot survival rate by sex
sns.barplot(data=titanic_df[['Sex','Survived']],x='Sex',y='Survived',order=['male','female'],ax=axis3)
axis3.set_ylabel("Survival Rate")
# Draw Child and Non-Child plot
sns.barplot(data=titanic_df[['Child', 'Survived']],x='Child',y='Survived',order=['Is Child','Not Child'],ax=axis4)
axis4.set_ylabel("Survival Rate")
axis3.set_title('Survival rate by sex')
axis4.set_title('Survival rate by child status')
# Statistical hypothesis test
# Chi-Square Test for Independence
# State the hypothesis: H0: Gender and survival rate are independent
from scipy.stats import chi2_contingency
men_women_group = pd.concat([men_group, women_group])
gender_pivot = pd.pivot_table(data=men_women_group[['Survived','Sex']],index='Survived',columns=['Sex'],
aggfunc=len)
chi2, p_value, dof, expected = chi2_contingency(gender_pivot)
print("Results of Chi-Squared test on Sex to Survival.")
print("Chi-Square Score = %s"%str(chi2))
print("Pvalue = %s\n"%str(p_value))
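# Decision rule (standard convention): with p far below 0.05, reject H0 and
# conclude that sex and survival are not independent.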
# Test for child and non-child
child_pivot = pd.pivot_table(data=titanic_df[['Survived','Child']],index='Survived',columns=['Child'],
aggfunc=len)
chi2, p_value, dof, expected = chi2_contingency(child_pivot)
print("Results of Chi-Squared test on Child to Survival.")
print("Chi-Square Score = %s"%str(chi2))
print("Pvalue = %s\n"%str(p_value))
# Then let's analyze class factor
sns.factorplot(data=titanic_df,x='Class',y='Survived',kind="violin", \
order=['1st Class','2nd Class','3rd Class'],size=4,aspect=3)
plt.yticks([0,1],['Died','Survived'])
# Group by class and take mean
class_survi_prec = titanic_df[['Class','Survived']].groupby(['Class'],as_index=False).mean()
# Compare number and survived rate between three classes
fig, (axis1,axis2) = plt.subplots(1,2,figsize=(15,5))
sns.countplot(data=titanic_df, x='Class',order=['1st Class','2nd Class','3rd Class'],ax=axis1)
sns.barplot(data=class_survi_prec,x='Class',y='Survived', \
order=['1st Class','2nd Class','3rd Class'],ax=axis2)
axis2.set_ylabel('Survival Rate')
# Statistical hypothesis test:
# H0: Class and Survival rate are independent
class_pivot = pd.pivot_table(data=titanic_df[['Survived','Class']],index='Survived',columns=['Class'],
aggfunc=len)
chi2, p_value, dof, expected = chi2_contingency(class_pivot)
print("Results of Chi-Squared test on Class to Survival.")
print("Chi-Square Score = %s"%str(chi2))
print("Pvalue = %s\n"%str(p_value))
# Last let's analyze fare factor
# Tried plotting on a logarithmic x-axis as a comment suggested, but it does not look as good
# fig = titanic_df['Fare'].plot(kind='hist', figsize=(15,3), bins=100, logx=True,
# ls='solid', lw=1, ec='black')
fig = titanic_df['Fare'].plot(kind='hist', figsize=(15,3), bins=100, \
ls='solid', lw=0.5, ec='black')
ax = fig.axes
ax.set_xlabel('Fare')
ax.set_ylabel('People Number')
ax.set_title('People Distribution with Fare')
# Filter out people with very high fares
normal_people = titanic_df[['Fare','Survived']][titanic_df['Fare']<200]
fare_survi_group = normal_people[['Fare','Survived']].groupby(['Survived'],as_index=False)
# Survive condition for people with normal fare
figure(2)
sns.factorplot(data=normal_people,x='Survived',y='Fare',aspect=2)
plt.xticks([0,1],['Died','Survived'])
# Statistical test: the variable is continuous, so we choose a t-test
# H0: People survived and not survived have same fare, mean(survive_fare)=mean(non_survive_fare)
from scipy.stats import ttest_ind
ttest_ind(fare_survi_group.get_group(0)['Fare'],fare_survi_group.get_group(1)['Fare'])
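# ttest_ind returns (statistic, pvalue); a small p-value rejects the hypothesis
# that survivors and non-survivors paid the same mean fare.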
# Obviously we can guess that fare is related to passenger class;
# the scatter plot shows that only first class has very high fares
titanic_df.plot.scatter(x='Pclass',y='Fare')
plt.xticks([1,2,3],['1st Class','2nd Class','3rd Class'])
# We calculate their correlation to confirm
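# Spearman (rank) correlation suits this pair: Pclass is ordinal and Fare is
# heavily skewed, so ranks are more robust here than Pearson's linear correlation.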
titanic_df[['Fare', 'Pclass']].corr(method='spearman')
# To explore more details
# let's see the sex distribution in the different classes
figure(figsize=(8,5))
sns.countplot(data=titanic_df,x='Class',hue='Sex',order=['1st Class','2nd Class','3rd Class'])
# From the above we can see class 3 has a large percentage of men
# So we can guess the low survival rate of men is driven by class-3 men;
# the survival rates between sexes in the higher classes may be less distinct
# Draw chart of different classes's survive rate detail
class_sex_group = titanic_df[['Sex','Class','Survived']].groupby(['Sex','Class'],as_index=False)
class_sex_survive_prec = class_sex_group.mean()
figure(figsize=(8,5))
fig = sns.barplot(data=class_sex_survive_prec, x='Sex',y='Survived',hue='Class', \
order=['male','female','child'])
fig.axes.set_ylabel('Survival Rate')
# Class 1 and class 2 women have similar survival rates
# Chi-Square test
# H0: for class 1 and class 2 females, survival and class are independent
female_class1_class2 = titanic_df[(titanic_df['Sex']=='female') \
& ((titanic_df['Class']=='1st Class') \
| (titanic_df['Class']=='2nd Class') )]
class_pivot = pd.pivot_table(data=female_class1_class2[['Survived','Class']],index='Survived',columns=['Class'],
aggfunc=len)
chi2, p_value, dof, expected = chi2_contingency(class_pivot)
print("Results of Chi-Squared test on Class to Survival on upper two classes female.")
print("Chi-Square Score = %s"%str(chi2))
print("Pvalue = %s\n"%str(p_value))
# Class 1 and class 2 children also have very similar survival rates
# Run the same test
child_class1_class2 = titanic_df[(titanic_df['Sex']=='child') \
& ((titanic_df['Class']=='1st Class') \
| (titanic_df['Class']=='2nd Class') )]
class_pivot = pd.pivot_table(data=child_class1_class2[['Survived','Class']],index='Survived',columns=['Class'],
aggfunc=len)
chi2, p_value, dof, expected = chi2_contingency(class_pivot)
print("Results of Chi-Squared test on Class to Survival on upper two classes child.")
print("Chi-Square Score = %s"%str(chi2))
print("Pvalue = %s\n"%str(p_value))
# Class 2 and class 3 males also have similar survival rates
male_class2_class3 = titanic_df[(titanic_df['Sex']=='male') \
& ((titanic_df['Class']=='3rd Class') \
| (titanic_df['Class']=='2nd Class') )]
class_pivot = pd.pivot_table(data=male_class2_class3[['Survived','Class']],index='Survived',columns=['Class'],
aggfunc=len)
# Bug fix: recompute the test for this pivot (the original print reused the previous test's chi2 and p_value)
chi2, p_value, dof, expected = chi2_contingency(class_pivot)
print("Results of Chi-Squared test on Class to Survival on lower two classes male.")
print("Chi-Square Score = %s"%str(chi2))
print("Pvalue = %s\n"%str(p_value))
###Output
Results of Chi-Squared test on Class to Survival on lower two classes male.
Chi-Square Score = 2.1502976190476195
Pvalue = 0.1425422559692581
###Markdown
Kaggle - Titanic: Machine Learning from Disaster Importing libraries / Importando bibliotecas
###Code
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
###Output
_____no_output_____
###Markdown
Importing train dataset / Importando conjunto de dados de treino
###Code
train_data = pd.read_csv('train.csv')
train_data.head()
###Output
_____no_output_____
###Markdown
Importing test dataset / Importando conjunto de dados de teste
###Code
test_data = pd.read_csv('test.csv')
test_data.head()
###Output
_____no_output_____
###Markdown
Removing unused columns / Removendo colunas não utilizadas
###Code
train_data.drop(['Name', 'Ticket', 'Cabin'], axis = 1, inplace = True)
test_data.drop(['Name', 'Ticket', 'Cabin'], axis = 1, inplace = True)
###Output
_____no_output_____
###Markdown
Creating new datasets with One Hot Encoder / Criando novos conjuntos de dados com One Hot Encoder
###Code
OHC_train_data = pd.get_dummies(train_data)
OHC_test_data = pd.get_dummies(test_data)
###Output
_____no_output_____
###Markdown
Titanic

Loading the dataset

The dataset is available at https://www.kaggle.com/c/titanic/data. It contains the features pclass (passenger class), sex, age, sibsp (number of siblings/spouses), parch (number of parents/children), ticket, fare, cabin number, port of embarkation, and name. The value to be predicted is the survival (0 or 1) of the passengers. The feature 'name' is omitted from the training.
###Code
import numpy as np
import pandas as pd
dataset = pd.read_csv('train.csv')
X_test = pd.read_csv('test.csv')
dataset_title = [i.split(',')[1].split('.')[0].strip() for i in dataset['Name']]
dataset['Title'] = pd.Series(dataset_title)
dataset['Title'].value_counts()
dataset['Title'] = dataset['Title'].replace(['Lady', 'the Countess', 'Countess', 'Capt', 'Col', 'Don', 'Dr', 'Major', 'Rev', 'Sir', 'Jonkheer', 'Dona', 'Ms', 'Mme', 'Mlle'], 'Rare')
dataset_title = [i.split(',')[1].split('.')[0].strip() for i in X_test['Name']]
X_test['Title'] = pd.Series(dataset_title)
X_test['Title'].value_counts()
X_test['Title'] = X_test['Title'].replace(['Lady', 'the Countess', 'Countess', 'Capt', 'Col', 'Don', 'Dr', 'Major', 'Rev', 'Sir', 'Jonkheer', 'Dona', 'Ms', 'Mme', 'Mlle'], 'Rare')
dataset['FamilyS'] = dataset['SibSp'] + dataset['Parch'] + 1
X_test['FamilyS'] = X_test['SibSp'] + X_test['Parch'] + 1
print(dataset)
def family(x):
if x < 2:
return 'Single'
elif x == 2:
return 'Couple'
elif x <= 4:
return 'InterM'
else:
return 'Large'
dataset['FamilyS'] = dataset['FamilyS'].apply(family)
X_test['FamilyS'] = X_test['FamilyS'].apply(family)
dataset['Embarked'].fillna(dataset['Embarked'].mode()[0], inplace=True)
X_test['Embarked'].fillna(X_test['Embarked'].mode()[0], inplace=True)
dataset['Age'].fillna(dataset['Age'].median(), inplace=True)
X_test['Age'].fillna(X_test['Age'].median(), inplace=True)
X_test['Fare'].fillna(X_test['Fare'].median(), inplace=True)
dataset = dataset.drop(['PassengerId', 'Cabin', 'Name', 'SibSp', 'Parch', 'Ticket'], axis=1)
X_test_passengers = X_test['PassengerId']
X_test = X_test.drop(['PassengerId', 'Cabin', 'Name', 'SibSp', 'Parch', 'Ticket'], axis=1)
X_train = dataset.iloc[:, 1:9].values
Y_train = dataset.iloc[:, 0].values
X_test = X_test.values
# Converting the remaining labels to numbers
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
labelencoder_X_1 = LabelEncoder()
X_train[:, 1] = labelencoder_X_1.fit_transform(X_train[:, 1])
X_train[:, 4] = labelencoder_X_1.fit_transform(X_train[:, 4])
X_train[:, 5] = labelencoder_X_1.fit_transform(X_train[:, 5])
X_train[:, 6] = labelencoder_X_1.fit_transform(X_train[:, 6])
labelencoder_X_2 = LabelEncoder()
X_test[:, 1] = labelencoder_X_2.fit_transform(X_test[:, 1])
X_test[:, 4] = labelencoder_X_2.fit_transform(X_test[:, 4])
X_test[:, 5] = labelencoder_X_2.fit_transform(X_test[:, 5])
X_test[:, 6] = labelencoder_X_2.fit_transform(X_test[:, 6])
# Converting categorical values to one-hot representation
one_hot_encoder = OneHotEncoder(categorical_features = [0, 1, 4, 5, 6])
X_train = one_hot_encoder.fit_transform(X_train).toarray()
# Bug fix: reuse the encoder fitted on the training data so train and test share the same columns
X_test = one_hot_encoder.transform(X_test).toarray()
from sklearn.model_selection import train_test_split
x_train, x_val, y_train, y_val = train_test_split(X_train, Y_train, test_size = 0.1)
print(X_test_passengers.shape)
import torch
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.fc1 = nn.Linear(19, 270)
self.fc2 = nn.Linear(270, 2)
def forward(self, x):
x = self.fc1(x)
x = F.dropout(x, p=0.1)
x = F.relu(x)
x = self.fc2(x)
# Return raw logits: nn.CrossEntropyLoss applies log-softmax internally,
# so the F.sigmoid that was here would distort the loss
return x
net = Net()
params = list(net.parameters())
print(len(params))
batch_size = 50
num_epochs = 50
learning_rate = 0.01
batch_no = len(x_train) // batch_size
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(net.parameters(), lr=learning_rate)
from sklearn.utils import shuffle
from torch.autograd import Variable
for epoch in range(num_epochs):
if epoch % 5 == 0:
print('Epoch {}'.format(epoch+1))
x_train, y_train = shuffle(x_train, y_train)
# Mini batch learning
for i in range(batch_no):
start = i * batch_size
end = start + batch_size
x_var = Variable(torch.FloatTensor(x_train[start:end]))
y_var = Variable(torch.LongTensor(y_train[start:end]))
# Forward + Backward + Optimize
optimizer.zero_grad()
ypred_var = net(x_var)
loss =criterion(ypred_var, y_var)
loss.backward()
optimizer.step()
# Evaluate the model
test_var = Variable(torch.FloatTensor(x_val), volatile=True)
result = net(test_var)
values, labels = torch.max(result, 1)
num_right = np.sum(labels.data.numpy() == y_val)
print('Accuracy {:.2f}'.format(num_right / len(y_val)))
print(X_test.shape)
# Applying model on the test data
X_test_var = Variable(torch.FloatTensor(X_test), volatile=True)
test_result = net(X_test_var)
values, labels = torch.max(test_result, 1)
survived = labels.data.numpy()
import csv
submission = [['PassengerId', 'Survived']]
for i in range(len(survived)):
submission.append([X_test_passengers[i], survived[i]])
print(len(submission))
with open('submission.csv', 'w') as submissionFile:
writer = csv.writer(submissionFile)
writer.writerows(submission)
print('Writing Complete!')
###Output
Writing Complete!
###Markdown
**Key points to watch here:**
+ Does each column have missing values?
  + If so, do we drop them or fill them with a default value?
  + Pay attention to the Cabin, Age, and Embarked columns.
+ Can the data be converted to float64?
  + If not, can it be made into categorical data?
###Code
# Name and Ticket carry no extractable information here, so drop PassengerId, Name, and Ticket.
# But the submission requires the 'PassengerId' and 'Survived' columns, so delete them from the training data only.
train_df = train_df.drop(['PassengerId', 'Ticket'], axis=1) # axis=1 drops columns
test_df = test_df.drop(['Ticket'], axis=1) # The submission is built from test, so PassengerId must be kept
###Output
_____no_output_____
###Markdown
**Processing the features one by one**

The remaining columns are:
1. Pclass
2. Sex
3. SibSp
4. Parch
5. Fare
6. Cabin
7. Embarked
8. Name
9. Age (added)
###Code
# # 1. Pclass
# # An ordinal feature: 1st, 2nd, 3rd class. Our initial check showed it has no missing values.
# # Let's inspect and transform the data. Counts for each unique value can be checked with the value_counts() method.
# train_df['Pclass'].value_counts()
# # Since 1, 2, 3 are integers, you might think casting to floats is enough, but the class levels are not continuous and the gaps between them are not uniform.
# # So they should be treated and encoded as categorical data (movie star ratings are a similar example).
# # Being categorical, the data is one-hot encoded with the pd.get_dummies() method.
# pclass_train_dummies = pd.get_dummies(train_df['Pclass'])
# pclass_test_dummies = pd.get_dummies(test_df['Pclass'])
# train_df.drop(['Pclass'], axis=1, inplace=True)
# test_df.drop(['Pclass'], axis=1, inplace=True)
# train_df = train_df.join(pclass_train_dummies)
# test_df = test_df.join(pclass_test_dummies)
train_df.head() # The column names should have been set before joining; by mistake the data went in under columns named 1, 2, 3.
# 2. Sex
# Sex splits into male and female, so one-hot encoding works here too
# # [[[ dummies approach ]]]
# sex_train_dummies = pd.get_dummies(train_df['Sex'])
# sex_test_dummies = pd.get_dummies(test_df['Sex'])
# train_df.drop(['Sex'], axis=1, inplace=True)
# test_df.drop(['Sex'], axis=1, inplace=True)
# train_df = train_df.join(sex_train_dummies)
# test_df = test_df.join(sex_test_dummies)
# Numeric encoding via the category .cat.codes approach
train_df['Sex'] = train_df['Sex'].astype('category').cat.codes
test_df['Sex'] = test_df['Sex'].astype('category').cat.codes
train_df.head()
# 3,4. SibSp & Parch
# Siblings/spouses and parents/children can be handled together as family, though converting is not strictly necessary.
train_df['Family'] = 1 + train_df['SibSp'] + train_df['Parch']
test_df['Family'] = 1 + test_df['SibSp'] + test_df['Parch'] # Bug fix: the original summed train_df['SibSp'] here
train_df = train_df.drop(['SibSp', 'Parch'], axis=1)
test_df = test_df.drop(['SibSp', 'Parch'], axis=1)
# + Solo : a flag distinguishing passengers who traveled alone from those with family
train_df['Solo'] = (train_df['Family'] == 1)
test_df['Solo'] = (test_df['Family'] == 1)
# 5. Fare
# The ticket fare. Curiously, one value is missing in the test dataset -- perhaps it was DiCaprio. Fill the blank with the fillna method.
# Treating it as a free ride rather than missing data, fill it with 0
test_df['Fare'].fillna(0, inplace=True)
train_df.head()
# 6. Cabin
# The cabin number. Mostly NaN, so drop it.
train_df = train_df.drop(['Cabin'],axis=1)
test_df = test_df.drop(['Cabin'], axis=1)
# 7. Embarked
# The port of embarkation; inspect the data first
train_df['Embarked'].value_counts()
test_df['Embarked'].value_counts()
# S is the large majority and some values are missing. Fill the blanks with S for now (the .info check did not make the gaps obvious)
train_df["Embarked"].fillna('S', inplace=True)
# Convert the Embarked column to numeric as well
train_df['Embarked'] = train_df['Embarked'].astype('category').cat.codes
test_df['Embarked'] = test_df['Embarked'].astype('category').cat.codes
train_df.head()
# 8. Name
# Title is a new column extracted from the 'Name' column (honorifics such as Mr, Mrs),
# but note that categorizing every extracted title adds too much complexity relative to the amount of data.
# So the low-frequency titles Mlle, Mme, and Ms must be consolidated.
train_df['Title'] = train_df['Name'].str.extract(' ([A-Za-z]+)\.', expand=False)
test_df['Title'] = test_df['Name'].str.extract(' ([A-Za-z]+)\.', expand=False)
train_df['Title'] = train_df['Title'].replace(['Lady', 'Countess', 'Capt', 'Col', 'Don', 'Dr', 'Major', 'Rev', 'Sir', 'Jonkheer', 'Dona'], 'Other')
test_df['Title'] = test_df['Title'].replace(['Lady', 'Countess', 'Capt', 'Col', 'Don', 'Dr', 'Major', 'Rev', 'Sir', 'Jonkheer', 'Dona'], 'Other')
train_df = train_df.drop(['Name'], axis=1)
test_df = test_df.drop(['Name'], axis=1)
train_df['Title'].value_counts()
# The counts above show that Mlle, Mme, and Ms are rare, so consolidate them
train_df['Title'] = train_df['Title'].replace(['Mlle', 'Ms'], 'Miss')
train_df['Title'] = train_df['Title'].replace('Mme', 'Mrs')
test_df['Title'] = test_df['Title'].replace(['Mlle', 'Ms'], 'Miss')
test_df['Title'] = test_df['Title'].replace('Mme', 'Mrs')
train_df['Title'] = train_df['Title'].astype('category').cat.codes
test_df['Title'] = test_df['Title'].astype('category').cat.codes
train_df.head()
# 9. Age
# Age is continuous, so little processing is needed (binning it can make it more useful for some algorithms).
# There are some NaN values, though, so consider how to fill them:
# 1. random values, 2. the mean, 3. the median, 4. drop the rows
# Use groupby to split by the "Title" column and feed each group's median "Age" into fillna
train_df["Age"].fillna(train_df.groupby("Title")["Age"].transform("median"), inplace=True)
test_df["Age"].fillna(test_df.groupby("Title")["Age"].transform("median"), inplace=True)
train_df.head()
# Binning Age: roughly 5-year bins, a 10-year bin for the fifties, and everyone over 60 grouped together
# Train
train_df.loc[ train_df['Age'] <= 10, 'Age'] = 0
train_df.loc[(train_df['Age'] > 10) & (train_df['Age'] <= 16), 'Age'] = 1
train_df.loc[(train_df['Age'] > 16) & (train_df['Age'] <= 20), 'Age'] = 2
train_df.loc[(train_df['Age'] > 20) & (train_df['Age'] <= 26), 'Age'] = 3
train_df.loc[(train_df['Age'] > 26) & (train_df['Age'] <= 30), 'Age'] = 4
train_df.loc[(train_df['Age'] > 30) & (train_df['Age'] <= 36), 'Age'] = 5
train_df.loc[(train_df['Age'] > 36) & (train_df['Age'] <= 40), 'Age'] = 6
train_df.loc[(train_df['Age'] > 40) & (train_df['Age'] <= 46), 'Age'] = 7
train_df.loc[(train_df['Age'] > 46) & (train_df['Age'] <= 50), 'Age'] = 8
train_df.loc[(train_df['Age'] > 50) & (train_df['Age'] <= 60), 'Age'] = 9
train_df.loc[ train_df['Age'] > 60, 'Age'] = 10
# Test
test_df.loc[ test_df['Age'] <= 10, 'Age'] = 0
test_df.loc[(test_df['Age'] > 10) & (test_df['Age'] <= 16), 'Age'] = 1
test_df.loc[(test_df['Age'] > 16) & (test_df['Age'] <= 20), 'Age'] = 2
test_df.loc[(test_df['Age'] > 20) & (test_df['Age'] <= 26), 'Age'] = 3
test_df.loc[(test_df['Age'] > 26) & (test_df['Age'] <= 30), 'Age'] = 4
test_df.loc[(test_df['Age'] > 30) & (test_df['Age'] <= 36), 'Age'] = 5
test_df.loc[(test_df['Age'] > 36) & (test_df['Age'] <= 40), 'Age'] = 6
test_df.loc[(test_df['Age'] > 40) & (test_df['Age'] <= 46), 'Age'] = 7
test_df.loc[(test_df['Age'] > 46) & (test_df['Age'] <= 50), 'Age'] = 8
test_df.loc[(test_df['Age'] > 50) & (test_df['Age'] <= 60), 'Age'] = 9
test_df.loc[ test_df['Age'] > 60, 'Age'] = 10
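# The ladder above could be written more compactly with pd.cut -- an equivalent
# sketch (left commented out; include_lowest keeps age 0 in the first bin):
# bins = [0, 10, 16, 20, 26, 30, 36, 40, 46, 50, 60, 200]
# train_df['Age'] = pd.cut(train_df['Age'], bins=bins, labels=False, include_lowest=True)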
train_df.head()
###Output
_____no_output_____
###Markdown
Defining the features and the label
###Code
feature = [
'Pclass',
'Sex',
'Age',
'Fare',
'Embarked',
'Family',
'Solo',
"Title",
]
label = [
'Survived',
]
###Output
_____no_output_____
###Markdown
HyperParameter

After trying several models, RandomForestClassifier gave the best results on the pre-processed dataset built in this blog post.

---

Since the dataset in this Titanic survival-prediction competition is not very complex and is quite small, the strategy was to keep n_estimators as low as possible. max_depth was also capped so the trees do not grow too deep, and the other parameters were left untouched.
###Code
from sklearn.model_selection import KFold, cross_val_score
from sklearn.ensemble import RandomForestClassifier
data = train_df[feature]
target = train_df[label]
k_fold = KFold(n_splits=10, shuffle=True, random_state=0)
clf = RandomForestClassifier(n_estimators=50, max_depth=6, random_state=0)
cross_val_score(clf, data, target, cv=k_fold, scoring='accuracy', ).mean()
train_x = train_df[feature]
train_y = train_df[label]
test_x = test_df[feature]
clf = RandomForestClassifier(n_estimators=100, max_depth=6, random_state=0)
clf.fit(train_x, train_y)
gender_submission['Survived'] = clf.predict(test_x)
gender_submission.to_csv('titanic-submission.csv',index=False)
###Output
/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:6: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples,), for example using ravel().
###Markdown
**Trying out several machine-learning algorithms**
###Code
## Import various machine-learning modules from scikit-learn.
## Among the classification algorithms, we will try logistic regression, support vector machines, random forest, and k-nearest neighbors.
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC, LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
# Logistic Regression
logreg = LogisticRegression(max_iter=1000) # raising max_iter resolves the convergence error
logreg.fit(train_x, train_y)
pred_y = logreg.predict(test_x)
logreg.score(train_x, train_y)
# Support Vector Machines
svc = SVC()
svc.fit(train_x, train_y)
pred_y = svc.predict(test_x)
svc.score(train_x, train_y)
# Random Forests
random_forest = RandomForestClassifier(n_estimators=100)
random_forest.fit(train_x, train_y)
pred_y = random_forest.predict(test_x)
random_forest.score(train_x, train_y)
# K-Neigbor
knn = KNeighborsClassifier(n_neighbors = 3)
knn.fit(train_x, train_y)
pred_y = knn.predict(test_x)
knn.score(train_x, train_y)
# Random Forests
random_forest = RandomForestClassifier(n_estimators=1)
random_forest.fit(train_x, train_y)
pred_y = random_forest.predict(test_x)
random_forest.score(train_x, train_y)
###Output
/usr/local/lib/python3.7/dist-packages/sklearn/utils/validation.py:760: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
y = column_or_1d(y, warn=True)
###Markdown
Filling null values with mean / Preenchendo valores nulos com a média
###Code
OHC_train_data.isnull().sum().sort_values(ascending = False)
OHC_train_data['Age'].fillna(OHC_train_data['Age'].mean(), inplace = True)
OHC_test_data['Age'].fillna(OHC_test_data['Age'].mean(), inplace = True)
OHC_test_data.isnull().sum().sort_values(ascending = False)
OHC_test_data['Fare'].fillna(OHC_test_data['Fare'].mean(), inplace = True)
###Output
_____no_output_____
###Markdown
Spliting data into features and target / Dividindo dados em recursos e objetivo
###Code
X_train = OHC_train_data.drop('Survived', axis = 1)
y_train = OHC_train_data['Survived']
###Output
_____no_output_____
###Markdown
Applying Random Forest Classifier / Aplicando Random Forest Classifier
###Code
classifier = RandomForestClassifier(n_estimators = 100, criterion = 'entropy', random_state = 0)
classifier.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Evaluating prediction quality / Validando qualidade da predição
###Code
classifier.score(X_train, y_train)
###Output
_____no_output_____
###Markdown
Saving to csv to submit / Salvando o csv para avaliação
###Code
submission = pd.DataFrame()
submission['PassengerId'] = OHC_test_data['PassengerId']
submission['Survived'] = classifier.predict(OHC_test_data)
submission.to_csv('RandomForestClassifier.csv', index=False)
###Output
_____no_output_____
###Markdown
Model evaluation

We can now rank our evaluation of all the models to choose the best one for our problem. While both Decision Tree and Random Forest score the same, we choose Random Forest because it corrects for decision trees' habit of overfitting to their training set.
###Code
models = pd.DataFrame({
'Model': ['Support Vector Machines', 'KNN', 'Logistic Regression',
'Random Forest', 'Gaussian Naive Bayes',
'Stochastic Gradient Decent', 'Linear SVC',
'Decision Tree', 'MLP', 'Gradient Boosting','Extra Trees'],
'Score': [acc_svc, acc_knn, acc_log,
acc_random_forest, acc_gaussian,
acc_sgd, acc_linear_svc, acc_decision_tree, acc_mlp, acc_gradient_boosting,acc_extra_trees],
'Score1': [acc_svc, acc_knn1, acc_log1,
acc_random_forest1, acc_gaussian,
acc_sgd1, acc_linear_svc, acc_decision_tree, acc_mlp1, acc_gradient_boosting1,acc_extra_trees1]})
models.sort_values(by='Score1', ascending=False)
# Ensemble of the "tuned" top models
classifiers=[
('svc', SVC(C=1000, gamma=0.001, probability=True, kernel='rbf')),
('linear_svc', SVC(kernel='linear', probability=True)),
('sgd', SGDClassifier(max_iter=1000, tol=1e-3, alpha=0.1,learning_rate='optimal',loss='modified_huber', penalty='l2')),
('gb', GradientBoostingClassifier(criterion='friedman_mse',max_depth=4,n_estimators=240)),
('lr', LogisticRegression(C=0.3,multi_class='multinomial', solver='lbfgs')),
('knn', KNeighborsClassifier(n_neighbors = 8)),
('mlp', MLPClassifier(solver='lbfgs', hidden_layer_sizes=(14,4,1), alpha=0.001, activation='tanh',learning_rate='adaptive')),
('ef', ExtraTreesClassifier(criterion='entropy',max_depth=5,n_estimators=240)),
('rf', RandomForestClassifier(criterion='gini',max_depth=4,n_estimators=240)),
]
voting=VotingClassifier(classifiers,voting="soft")
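# voting="soft" averages the members' predicted class probabilities and picks
# the class with the highest mean; this is why probability=True is set on the
# SVC members and a probabilistic loss is used for the SGD classifier.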
%time voting.fit(X_train, Y_train)
Y_pred=voting.predict(X_test)
%time acc_grid_search = (cross_val_score(voting, X_train, Y_train, cv=5, scoring="accuracy").mean()) * 100
print(acc_grid_search)
# Submission
# He didn't confess yet, but he will...
submission = pd.DataFrame({
"PassengerId": test_df_tr["PassengerId"],
"Survived": Y_pred
})
submission.to_csv('submission.csv', index=False)
###Output
_____no_output_____
###Markdown
Using Titanic dataset V3.5 from http://biostat.mc.vanderbilt.edu/wiki/pub/Main/DataSets/titanic.html
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import random
import sklearn
from sklearn import datasets, svm, tree, preprocessing, metrics
from sklearn.model_selection import train_test_split
#from sklearn import cross_validation
from sklearn.model_selection import cross_validate
import os
from sklearn.tree import DecisionTreeClassifier
import time
print(sklearn.__version__)
###Output
0.20.2
###Markdown
Skip the first 175 rows of comments
###Code
titanic_df = pd.read_csv('titanic3.csv', skiprows = range(0, 174), delimiter=',', encoding="utf-8")
titanic_df.head()
print(titanic_df.columns.tolist())
titanic_df[u"survived"].mean()
titanic_df.groupby('pclass').mean()
class_sex_grouping = titanic_df.groupby(['pclass','sex']).mean()
class_sex_grouping
class_sex_grouping['survived'].plot.bar(figsize=(12, 6), fontsize=12)
###Output
_____no_output_____
###Markdown
Trim the dataset by removing data with missing values
###Code
titanic_df.info()
titanic_df.count()
titanic_df = titanic_df.drop(['body','cabin','boat','name','home.dest'], axis=1)
#Use axis=1 to drop the whole column
#titanic_df["home.dest"] = titanic_df["home.dest"].fillna("NA")
#fill the blanks with NA
titanic_df.info()
titanic_df = titanic_df.dropna()
titanic_df.info()
titanic_df.count()
titanic_df.head()
titanic_df['embarked'].value_counts()
###Output
_____no_output_____
###Markdown
914 people on board are from Southampton. RIP. C stands for Cherbourg. Q stands for Queenstown. Preprocess the dataset by converting strings to ints
###Code
def preprocess_titanic_df(dfi):
processed_df = dfi.copy()
#Copy (deep copy by default) to create a full-size second set of data
#Shallow copy (deep=false) will create a pointer to the original data memory
le = preprocessing.LabelEncoder()
# Use label encoder to convert 'sex' and 'embarked' to numbers
# then simply use fit_transform
processed_df.sex = le.fit_transform(processed_df.sex)
#processed_df.embarked=processed_df.embarked.fillna(np.nan)
processed_df.embarked = processed_df.embarked.fillna('0') # Bug fix: fillna(..., inplace=True) returns None, so assigning its result would blank the column
processed_df.embarked = le.fit_transform(processed_df.embarked)
# drop 'ticket'
processed_df = processed_df.drop(['ticket'],axis=1)
return processed_df
df = preprocess_titanic_df(titanic_df)
df.replace(['NaN', 'NaT'], np.nan, inplace = True)
# string type of 'NaN' exist! Replace with np.nan (to works with dropna)
#df = df[~df.isin(['NaN', 'NaT']).any(axis=1)]
df=df.dropna(axis=0)
print(df)
df = df.reset_index()
#When using sklearn with pandas, ValueError: Input contains NaN, infinity or a value too large for dtype('float64') can happen
#Use df = df.reset_index() to resolve it
#If missing data are present, the label encoder raises
#TypeError: unorderable types: str() < float()
#The fix is to fill the NaNs first, e.g.
#processed_df.embarked = processed_df.embarked.fillna('0')
#Note: do not assign the result of .fillna(..., inplace=True) -- it returns None;
#either call it for its side effect alone or assign the non-inplace result,
#as in the function defined above.
a = preprocessing.LabelEncoder().fit_transform(titanic_df.embarked)
X = df.drop(['survived'], axis=1).values
y = df['survived'].values
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.2)
print(X_train.shape)
print(X_test.shape)
print(X_train.shape[0] / X_test.shape[0])
np.random.seed(9980)
clf_dt = DecisionTreeClassifier(max_depth=10)
#Decision tree classifier
clf_dt.fit(X_train, y_train)
clf_dt.score(X_test, y_test)
np.random.seed(42)
# Note: in the old sklearn.cross_validation API the first argument was the sample count;
# model_selection.ShuffleSplit takes n_splits first, so it is passed explicitly here
shuffle_validator = sklearn.model_selection.ShuffleSplit(n_splits=10, test_size=0.2, random_state=0)
def test_classifier(clf):
scores = sklearn.model_selection.cross_val_score(clf, X, y, cv=shuffle_validator)
print("Accuracy: %0.4f (+/- %0.2f)" % (scores.mean(), scores.std()))
t0 = time.clock()
test_classifier(clf_dt)
print('Elapsed time: ' + str(time.clock()-t0) + 's')
#Gradient Boosting Classifier
clf_rf = sklearn.ensemble.GradientBoostingClassifier(n_estimators=50)
t0 = time.clock()
test_classifier(clf_rf)
print('Elapsed time: ' + str(time.clock()-t0) + 's')
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
#Random forest classifier
clf = RandomForestClassifier(n_estimators=50)
clf = clf.fit(X_train, y_train)
t0 = time.clock()
test_classifier(clf)
print('Elapsed time: ' + str(time.clock()-t0) + 's')
features = pd.DataFrame()
clf_rf.fit(X_train, y_train)
clf_rf.score(X_test, y_test)
# X still contains the 'index' column added by reset_index(), so take the
# feature names straight from the frame to keep names and importances aligned
features['feature'] = df.drop(['survived'], axis=1).columns
print(clf_rf.feature_importances_)
features['importance'] = clf_rf.feature_importances_
features.sort_values(by=['importance'], ascending=True, inplace=True)
features.set_index('feature', inplace=True)
features.plot(kind='barh', figsize=(12, 7), fontsize=12)
plt.show()
###Output
[0.26916276 0. 0.52973721 0.10730559 0.03010849 0.00387386
0.05981209 0. ]
###Markdown
Titanic Kaggle Competition Solution

Importing the Training and Test data from CSV files
###Code
from collections import Counter
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, GradientBoostingClassifier, ExtraTreesClassifier, VotingClassifier
from sklearn.svm import SVC
from sklearn import svm
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import cross_val_score, StratifiedKFold, GridSearchCV
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
%matplotlib inline
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk('/Project Midas/Competition'):
for filename in filenames:
print(os.path.join(dirname, filename))
train = pd.read_csv('../Competition/train.csv')
test = pd.read_csv('../Competition/test.csv')
IDtest = test["PassengerId"]
###Output
_____no_output_____
###Markdown
Detecting Outliers in the Age, SibSp, Parch, Fare Features
###Code
def detect_outliers(df,n,features):
"""
Takes a dataframe df of features and returns a list of the indices
corresponding to the observations containing more than n outliers according
to the Tukey method.
"""
outlier_indices = []
# iterate over features(columns)
for col in features:
# 1st quartile (25%)
Q1 = np.percentile(df[col], 25)
# 3rd quartile (75%)
Q3 = np.percentile(df[col],75)
# Interquartile range (IQR)
IQR = Q3 - Q1
# outlier step
outlier_step = 1.5 * IQR
# Determine a list of indices of outliers for feature col
outlier_list_col = df[(df[col] < Q1 - outlier_step) | (df[col] > Q3 + outlier_step )].index
# append the found outlier indices for col to the list of outlier indices
outlier_indices.extend(outlier_list_col)
# select observations containing more than 2 outliers
outlier_indices = Counter(outlier_indices)
multiple_outliers = list( k for k, v in outlier_indices.items() if v > n )
return multiple_outliers
# detect outliers from Age, SibSp , Parch and Fare
Outliers_to_drop = detect_outliers(train,2,["Age","SibSp","Parch","Fare"])
train.loc[Outliers_to_drop] # Show the outliers rows
# Drop outliers
train = train.drop(Outliers_to_drop, axis = 0).reset_index(drop=True)
###Output
_____no_output_____
###Markdown
Combine Dataset to apply feature engineering techniques evenly on the datasets
###Code
## Join train and test datasets in order to obtain the same number of features during categorical conversion
train_len = len(train)
dataset = pd.concat(objs=[train, test], axis=0).reset_index(drop=True)
###Output
_____no_output_____
###Markdown
Create New feature 'Title' from 'Name' and group them under 4 major titles - Mr, Mrs, Miss, Master
###Code
#Create Title from Name feature
def create_title(data):
data["Title"] = data["Name"].map(lambda x:x.split(',')[1].split('.')[0].strip())
return data
dataset = create_title(dataset)
#replacing all titles with mr, mrs, miss, master
def replace_titles(x):
title=x['Title']
if title in ['Don', 'Major', 'Capt', 'Jonkheer', 'Rev', 'Col', 'Sir']:
return 'Mr'
elif title in ['the Countess', 'Mme', 'Lady']:
return 'Mrs'
elif title in ['Mlle', 'Ms']:
return 'Miss'
elif title =='Dr':
if x['Sex']=='male':
return 'Mr'
else:
return 'Mrs'
else:
return title
dataset['Title']=dataset.apply(replace_titles, axis=1)
###Output
_____no_output_____
###Markdown
Transform Feature "Sex" as Male: '1' & Female: '0'
###Code
SX = LabelEncoder()
dataset['Sex'] = SX.fit_transform(dataset.Sex)
###Output
_____no_output_____
###Markdown
Dropping Name feature from the Feature set
###Code
#Dropping Name column
dataset = dataset.drop(['Name'], axis =1)
###Output
_____no_output_____
###Markdown
Filling the missing Embarked feature
###Code
#Countplot of Passenger by Port of Embarkation by class
g = sns.countplot(x="Embarked", hue = "Pclass", data=dataset)
#Fill the missing Port of Embarkation with Mode Function
dataset["Embarked"].fillna(dataset["Embarked"].mode()[0],inplace=True)
###Output
_____no_output_____
###Markdown
Filling the Missing Age values
###Code
# Filling missing value of Age
# Fill Age with the median age of similar rows according to Title
# Index of NaN age rows
index_NaN_age = list(dataset["Age"][dataset["Age"].isnull()].index)
for i in index_NaN_age :
age_med = dataset["Age"].median()
age_pred = dataset["Age"][(dataset['Title'] == dataset.iloc[i]["Title"])].median()
if not np.isnan(age_pred) :
dataset['Age'].iloc[i] = age_pred
else :
dataset['Age'].iloc[i] = age_med
#Fill the missing Fare with Median value
dataset["Fare"].fillna(dataset.groupby("Pclass")["Fare"].transform("median"),inplace=True)
###Output
_____no_output_____
###Markdown
Feature Engineering - Creating New Features
###Code
# Create Deck Feature from Cabin
dataset["Deck"] = dataset["Cabin"].str[0]
# Filling missing values of Deck
# Fill Deck with the most common (mode) value of similar rows according to Pclass
# Index of NaN deck rows
index_NaN_deck = list(dataset["Deck"][dataset["Deck"].isnull()].index)
for i in index_NaN_deck :
deck_med = dataset["Deck"].mode()[0]
deck_pred = dataset["Deck"][(dataset['Pclass'] == dataset.iloc[i]["Pclass"])].mode()[0]
if not pd.isnull(deck_pred) : # Bug fix: the original tested age_pred (a leftover from the Age cell) with np.isnan, which fails on strings
dataset['Deck'].iloc[i] = deck_pred
else :
dataset['Deck'].iloc[i] = deck_med
# Creating new features from Deck Column
dataset['Deck_A'] = dataset['Deck'].map(lambda s: 1 if s == 'A' else 0)
dataset['Deck_B'] = dataset['Deck'].map(lambda s: 1 if s == 'B' else 0)
dataset['Deck_C'] = dataset['Deck'].map(lambda s: 1 if s == 'C' else 0)
dataset['Deck_D'] = dataset['Deck'].map(lambda s: 1 if s == 'D' else 0)
dataset['Deck_E'] = dataset['Deck'].map(lambda s: 1 if s == 'E' else 0)
dataset['Deck_F'] = dataset['Deck'].map(lambda s: 1 if s == 'F' else 0)
dataset['Deck_G'] = dataset['Deck'].map(lambda s: 1 if s == 'G' else 0)
dataset['Deck_X'] = dataset['Deck'].map(lambda s: 1 if s == 'X' else 0)
#Dropping the Deck column now that it is one-hot encoded
dataset = dataset.drop(['Deck'], axis =1)
# Create family size feature from SibSp and Parch
dataset["Fsize"] = dataset["SibSp"] + dataset["Parch"]
#Dropping SibSp and Parch (left commented out)
#dataset = dataset.drop(['SibSp'], axis =1)
#dataset = dataset.drop(['Parch'], axis =1)
# Create new feature of family size
dataset['Single'] = dataset['Fsize'].map(lambda s: 1 if s == 1 else 0)
dataset['SmallF'] = dataset['Fsize'].map(lambda s: 1 if s == 2 else 0)
dataset['MedF'] = dataset['Fsize'].map(lambda s: 1 if 3 <= s <= 4 else 0)
dataset['LargeF'] = dataset['Fsize'].map(lambda s: 1 if s >= 5 else 0)
# Create New Feature - Gender & Class
dataset['GClass'] = dataset['Sex'].map(lambda s: 1 if s == 0 else 0) * (1/dataset['Pclass'])
# Create New Feature - Age & Gender
dataset['GenderAge'] = dataset['Sex'].map(lambda s: 1 if s == 0 else 0) * (1/dataset['Age'])
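# Presumed rationale for these interaction features: both are zero for male
# passengers, while for female passengers 1/Pclass grows with better class and
# 1/Age grows for younger passengers, encoding "female, higher class, younger"
# as single monotone scores.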
# Create new features - First, Second & Third Class off of PClass
dataset['First'] = dataset['Pclass'].map(lambda s: 1 if s == 1 else 0)
dataset['Second'] = dataset['Pclass'].map(lambda s: 1 if s == 2 else 0)
dataset['Third'] = dataset['Pclass'].map(lambda s: 1 if s >= 3 else 0)
#Dropping Pclass column
dataset = dataset.drop(['Pclass'], axis =1)
# Create new features - Fare Ranges off of Fare Feature
dataset['FreeTicket'] = dataset['Fare'].map(lambda s: 1 if s == 0 else 0)
dataset['Lowest_Fare'] = dataset['Fare'].map(lambda s: 1 if (s >= -2 and s < 10) else 0)
dataset['Low_Fare'] = dataset['Fare'].map(lambda s: 1 if (s >= 10 and s < 25) else 0)
dataset['Medium_Fare'] = dataset['Fare'].map(lambda s: 1 if (s >= 25 and s < 35) else 0)
dataset['MHigh_Fare'] = dataset['Fare'].map(lambda s: 1 if (s >= 35 and s < 100) else 0)
dataset['High_Fare'] = dataset['Fare'].map(lambda s: 1 if (s >= 100 and s < 300) else 0)
dataset['Highest_Fare'] = dataset['Fare'].map(lambda s: 1 if s >= 300 else 0)
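# Sketch of an alternative (my addition): quantile-based fare bands via pd.qcut,
# as a sanity reference for the hand-tuned thresholds above; throwaway variable only.
_fare_band_alt = pd.qcut(dataset['Fare'], q=5, labels=False)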
#Dropping Fare feature
dataset = dataset.drop(['Fare'], axis =1)
# Create new features - Age bands off of Age Feature
dataset['Infant'] = dataset['Age'].map(lambda s: 1 if (s >= 0 and s < 4) else 0)
dataset['Toddler'] = dataset['Age'].map(lambda s: 1 if (s >= 4 and s < 12) else 0)
dataset['Teens'] = dataset['Age'].map(lambda s: 1 if (s >= 12 and s < 18) else 0)
dataset['Young Adult'] = dataset['Age'].map(lambda s: 1 if (s >= 18 and s < 25) else 0)
dataset['Adult'] = dataset['Age'].map(lambda s: 1 if (s >= 25 and s < 35) else 0)
dataset['Adult+'] = dataset['Age'].map(lambda s: 1 if (s >= 35 and s < 45) else 0)
dataset['Middle_Aged'] = dataset['Age'].map(lambda s: 1 if (s >= 45 and s < 60) else 0)
dataset['Seniors'] = dataset['Age'].map(lambda s: 1 if (s >= 60 and s < 70) else 0)
dataset['Seniors+'] = dataset['Age'].map(lambda s: 1 if (s >= 70) else 0)
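# Sanity check (my addition): with Age imputed above, the bands partition the
# age range, so each passenger should fall in exactly one band.
age_band_cols = ['Infant', 'Toddler', 'Teens', 'Young Adult', 'Adult',
                 'Adult+', 'Middle_Aged', 'Seniors', 'Seniors+']
assert (dataset[age_band_cols].sum(axis=1) == 1).all()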
#Dropping Age Feature
dataset = dataset.drop(['Age'], axis =1)
# Create new features based on port of Embarkation
dataset['Em_C'] = dataset['Embarked'].map(lambda s: 1 if s == 'C' else 0)
dataset['Em_Q'] = dataset['Embarked'].map(lambda s: 1 if s == 'Q' else 0)
dataset['Em_S'] = dataset['Embarked'].map(lambda s: 1 if s == 'S' else 0)
#Dropping Embarked Column
dataset = dataset.drop(['Embarked'], axis =1)
# Create new features based on Title
dataset['Mr'] = dataset['Title'].map(lambda s: 1 if s == 'Mr' else 0)
dataset['Mrs'] = dataset['Title'].map(lambda s: 1 if s == 'Mrs' else 0)
dataset['Miss'] = dataset['Title'].map(lambda s: 1 if s == 'Miss' else 0)
dataset['Master'] = dataset['Title'].map(lambda s: 1 if s == 'Master' else 0)
#Dropping Title column
dataset = dataset.drop(['Title'], axis =1)
#Dropping Ticket column
dataset = dataset.drop(['Ticket'], axis =1)
#Dropping Cabin column
dataset = dataset.drop(['Cabin'], axis =1)
# Dropping Passenger Id
dataset = dataset.drop(['PassengerId'], axis =1)
## Separate out train and test data from dataset
train = dataset[:train_len]
test = dataset[train_len:].copy() # copy so the drop below does not warn about modifying a slice
test.drop(labels=["Survived"],axis = 1,inplace=True)
#Separate X_train & y_train from train dataframe
y_train = train["Survived"].astype(int)
X_train = train.drop(labels = ["Survived"],axis = 1)
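# Quick sanity check (my addition): feature matrices and labels should line up.
print(X_train.shape, y_train.shape, test.shape)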
# Cross validate model with Kfold stratified cross val
kfold = StratifiedKFold(n_splits=30)
# Modeling step: test different algorithms
random_state = 2
classifiers = []
classifiers.append(SVC(random_state=random_state))
classifiers.append(DecisionTreeClassifier(random_state=random_state))
classifiers.append(AdaBoostClassifier(DecisionTreeClassifier(random_state=random_state),random_state=random_state,learning_rate=0.1))
classifiers.append(RandomForestClassifier(random_state=random_state))
classifiers.append(ExtraTreesClassifier(random_state=random_state))
classifiers.append(GradientBoostingClassifier(random_state=random_state))
classifiers.append(MLPClassifier(random_state=random_state))
classifiers.append(KNeighborsClassifier())
classifiers.append(LogisticRegression(random_state = random_state))
classifiers.append(LinearDiscriminantAnalysis())
cv_results = []
for classifier in classifiers :
cv_results.append(cross_val_score(classifier, X_train, y = y_train, scoring = "accuracy", cv = kfold, n_jobs=6))
cv_means = []
cv_std = []
for cv_result in cv_results:
cv_means.append(cv_result.mean())
cv_std.append(cv_result.std())
cv_res = pd.DataFrame({"CrossValMeans":cv_means,"CrossValerrors": cv_std,"Algorithm":["SVC","DecisionTree","AdaBoost",
"RandomForest","ExtraTrees","GradientBoosting","MultipleLayerPerceptron","KNeighboors","LogisticRegression","LinearDiscriminantAnalysis"]})
g = sns.barplot("CrossValMeans","Algorithm",data = cv_res, palette="Set3",orient = "h",**{'xerr':cv_std})
g.set_xlabel("Mean Accuracy")
g = g.set_title("Cross validation scores")
### META MODELING WITH ADABOOST, RF, SVC, EXTRATREES and GRADIENTBOOSTING
# Adaboost
DTC = DecisionTreeClassifier()
adaDTC = AdaBoostClassifier(DTC, random_state=7)
ada_param_grid = {"base_estimator__criterion" : ["gini", "entropy"],
"base_estimator__splitter" : ["best", "random"],
"algorithm" : ["SAMME","SAMME.R"],
"n_estimators" :[1,2],
"learning_rate": [0.0001, 0.001, 0.01, 0.1, 0.2, 0.3,1.5]}
gsadaDTC = GridSearchCV(adaDTC,param_grid = ada_param_grid, cv=kfold, scoring="accuracy", n_jobs= -1, verbose = 1)
gsadaDTC.fit(X_train,y_train)
ada_best = gsadaDTC.best_estimator_
# Best score
gsadaDTC.best_score_
#ExtraTrees
ExtC = ExtraTreesClassifier()
## Search grid for optimal parameters
ex_param_grid = {"max_depth": [None],
"max_features": [1, 3, 10],
"min_samples_split": [2, 3, 10],
"min_samples_leaf": [1, 3, 10],
"bootstrap": [False],
"n_estimators" :[100,300],
"criterion": ["gini"]}
gsExtC = GridSearchCV(ExtC,param_grid = ex_param_grid, cv=kfold, scoring="accuracy", n_jobs= -1, verbose = 1)
gsExtC.fit(X_train,y_train)
ExtC_best = gsExtC.best_estimator_
# Best score
gsExtC.best_score_
# RFC Parameters tunning
RFC = RandomForestClassifier()
## Search grid for optimal parameters
rf_param_grid = {"max_depth": [None],
"max_features": [1, 3, 10],
"min_samples_split": [2, 3, 10],
"min_samples_leaf": [1, 3, 10],
"bootstrap": [False],
"n_estimators" :[100,300],
"criterion": ["gini"]}
gsRFC = GridSearchCV(RFC,param_grid = rf_param_grid, cv=kfold, scoring="accuracy", n_jobs= -1, verbose = 1)
gsRFC.fit(X_train,y_train)
RFC_best = gsRFC.best_estimator_
# Best score
gsRFC.best_score_
# Gradient boosting tunning
GBC = GradientBoostingClassifier()
gb_param_grid = {'loss' : ["deviance"],
'n_estimators' : [100,200,300],
'learning_rate': [0.1, 0.05, 0.01],
'max_depth': [4, 8],
'min_samples_leaf': [100,150],
'max_features': [0.3, 0.1]
}
gsGBC = GridSearchCV(GBC,param_grid = gb_param_grid, cv=kfold, scoring="accuracy", n_jobs= -1, verbose = 1)
gsGBC.fit(X_train,y_train)
GBC_best = gsGBC.best_estimator_
# Best score
gsGBC.best_score_
### SVC classifier
SVMC = SVC(probability=True)
svc_param_grid = {'kernel': ['rbf'],
'gamma': [ 0.001, 0.01, 0.1, 1],
'C': [1, 10, 50, 100,200,300, 1000]}
gsSVMC = GridSearchCV(SVMC,param_grid = svc_param_grid, cv=kfold, scoring="accuracy", n_jobs= -1, verbose = 1)
gsSVMC.fit(X_train,y_train)
SVMC_best = gsSVMC.best_estimator_
# Best score
gsSVMC.best_score_
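# Summary (my addition): compare the tuned models' best CV scores side by side.
for name, gs in [('AdaBoost', gsadaDTC), ('ExtraTrees', gsExtC),
                 ('RandomForest', gsRFC), ('GradientBoosting', gsGBC), ('SVC', gsSVMC)]:
    print(name, gs.best_score_)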
votingC = VotingClassifier(estimators=[('rfc', RFC_best), ('extc', ExtC_best),
('svc', SVMC_best), ('adac',ada_best),('gbc',GBC_best)], voting='soft', n_jobs=-1)
votingC = votingC.fit(X_train, y_train)
test_Survived = pd.Series(votingC.predict(test), name="Survived")
results = pd.concat([IDtest,test_Survived],axis=1)
results.to_csv("ensemble_python_voting.csv",index=False)
###Output
_____no_output_____
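###Markdown
As a rough check of the ensemble itself (an extra step, not in the original), the fitted voting classifier can be cross-validated with the same folds; with 30 splits this sketch may be slow.
###Code
ensemble_cv = cross_val_score(votingC, X_train, y = y_train, scoring = "accuracy", cv = kfold, n_jobs=-1)
print("Voting ensemble CV accuracy: {:.4f} +/- {:.4f}".format(ensemble_cv.mean(), ensemble_cv.std()))
###Output
_____no_output_____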
###Markdown
References
* https://www.kaggle.com/yassineghouzam/titanic-top-4-with-ensemble-modeling
* https://triangleinequality.wordpress.com/2013/09/08/basic-feature-engineering-with-the-titanic-data/
* https://github.com/ishanbhandari-19/Titanic-Challenge/blob/master/PreProcessing_and_Feature_Engineering.ipynb
* https://www.kaggle.com/soham1024/titanic-data-science-eda-solutions
###Code
dataset.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1299 entries, 0 to 1298
Data columns (total 43 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Survived 881 non-null float64
1 Sex 1299 non-null int64
2 SibSp 1299 non-null int64
3 Parch 1299 non-null int64
4 Deck_A 1299 non-null int64
5 Deck_B 1299 non-null int64
6 Deck_C 1299 non-null int64
7 Deck_D 1299 non-null int64
8 Deck_E 1299 non-null int64
9 Deck_F 1299 non-null int64
10 Deck_G 1299 non-null int64
11 Deck_X 1299 non-null int64
12 Fsize 1299 non-null int64
13 Single 1299 non-null int64
14 SmallF 1299 non-null int64
15 MedF 1299 non-null int64
16 LargeF 1299 non-null int64
17 First 1299 non-null int64
18 Second 1299 non-null int64
19 Third 1299 non-null int64
20 FreeTicket 1299 non-null int64
21 Lowest_Fare 1299 non-null int64
22 Low_Fare 1299 non-null int64
23 Medium_Fare 1299 non-null int64
24 MHigh_Fare 1299 non-null int64
25 High_Fare 1299 non-null int64
26 Highest_Fare 1299 non-null int64
27 Infant 1299 non-null int64
28 Toddler 1299 non-null int64
29 Teens 1299 non-null int64
30 Young Adult 1299 non-null int64
31 Adult 1299 non-null int64
32 Adult+ 1299 non-null int64
33 Middle_Aged 1299 non-null int64
34 Seniors 1299 non-null int64
35 Seniors+ 1299 non-null int64
36 Em_C 1299 non-null int64
37 Em_Q 1299 non-null int64
38 Em_S 1299 non-null int64
39 Mr 1299 non-null int64
40 Mrs 1299 non-null int64
41 Miss 1299 non-null int64
42 Master 1299 non-null int64
dtypes: float64(1), int64(42)
memory usage: 436.5 KB
###Markdown
Import Data
###Code
import numpy as np
import pandas as pd
import seaborn as sns
in_file = 'train.csv'
full_data = pd.read_csv(in_file)
full_data.head()
###Output
_____no_output_____
###Markdown
Data Exploration
###Code
full_data.describe()
full_data.isnull().any()
###Output
_____no_output_____
###Markdown
Data Engineering
###Code
full_data['Age']=full_data['Age'].fillna(full_data['Age'].median())
full_data['Name']=full_data['Name'].apply(lambda x:len(x)) # replace Name with its length (the column is dropped below, so this is effectively unused)
full_data['Embarked']=full_data['Embarked'].fillna('S')
full_data['Family']=full_data['SibSp']+full_data['Parch']
full_data.loc[full_data['Sex']=='male','Sex']=0
full_data.loc[full_data['Sex']=='female','Sex']=1
full_data.loc[full_data['Embarked']=='S','Embarked']=0
full_data.loc[full_data['Embarked']=='C','Embarked']=1
full_data.loc[full_data['Embarked']=='Q','Embarked']=2
new_data=full_data.drop(['PassengerId','Name','SibSp','Parch','Cabin','Ticket'], axis = 1)
new_data.head()
###Output
_____no_output_____
###Markdown
Note
+ People may have an age like 0.42 because they are babies, and I think we should also take them into consideration.
###Code
new_data.describe()
new_data[new_data['Age']<1]
from IPython.display import display
import scipy
for feature in new_data.keys():
Q1 = np.percentile(new_data[feature],25)
Q3 = np.percentile(new_data[feature],75)
step = 2.0*(Q3-Q1)
print "Data points considered outliers for the feature '{}':".format(feature)
display(new_data[~((new_data[feature] >= Q1 - step) & (new_data[feature] <= Q3 + step))])
###Output
Data points considered outliers for the feature 'Survived':
###Markdown
Figuring
###Code
import matplotlib.pyplot as plt
%pylab inline
sns.swarmplot(x='Age',y='Sex',hue='Survived',data=full_data)
sns.barplot(x='Embarked',y='Survived',data=new_data)
sns.swarmplot(x='Pclass',y='Fare',hue='Survived',data=new_data)
sns.swarmplot(x='Family',y='Age',hue='Survived',data=new_data)
###Output
Populating the interactive namespace from numpy and matplotlib
###Markdown
Split Data
###Code
y_all=new_data['Survived']
X_all=new_data.drop('Survived', axis = 1)
from sklearn.model_selection import train_test_split # sklearn.cross_validation in the original; the module was renamed in newer scikit-learn
X_train, X_test, y_train, y_test = train_test_split(X_all, y_all, test_size=0.20, random_state=20)
###Output
_____no_output_____
###Markdown
Benchmark
###Code
from sklearn import linear_model
linear_clf = linear_model.SGDClassifier()
# train_predict is defined in the "Test Model" section below; run that cell first
train_predict(linear_clf, X_train, y_train, X_test, y_test)
###Output
Trained model in 0.0054 seconds
F1 score for training set: 0.5049.
F1 score for test set: 0.5047.
###Markdown
Test Model
###Code
import time
from sklearn.metrics import f1_score
def train_classifier(clf, X_train, y_train):
    start = time.perf_counter() # time.clock() was removed in Python 3.8
    clf.fit(X_train, y_train)
    end = time.perf_counter()
    print("Trained model in {:.4f} seconds".format(end - start))
def predict_labels(clf, features, target):
y_pred = clf.predict(features)
return f1_score(target.values, y_pred)
def train_predict(clf, X_train, y_train, X_test, y_test):
train_classifier(clf, X_train, y_train)
print "F1 score for training set: {:.4f}.".format(predict_labels(clf, X_train, y_train))
print "F1 score for test set: {:.4f}.".format(predict_labels(clf, X_test, y_test))
from sklearn import svm
clf1 = svm.SVC()
from sklearn.neighbors import KNeighborsClassifier
clf2=KNeighborsClassifier()
from sklearn.ensemble import RandomForestClassifier
clf3=RandomForestClassifier()
X_train_1=X_train[:230]
X_train_2=X_train[:460]
X_train_3=X_train
y_train_1=y_train[:230]
y_train_2=y_train[:460]
y_train_3=y_train
print "SVM"
train_predict(clf1, X_train_1, y_train_1, X_test, y_test)
train_predict(clf1, X_train_2, y_train_2, X_test, y_test)
train_predict(clf1, X_train_3, y_train_3, X_test, y_test)
print "KNN"
train_predict(clf2, X_train_1, y_train_1, X_test, y_test)
train_predict(clf2, X_train_2, y_train_2, X_test, y_test)
train_predict(clf2, X_train_3, y_train_3, X_test, y_test)
print "RandomForest"
train_predict(clf3, X_train_1, y_train_1, X_test, y_test)
train_predict(clf3, X_train_2, y_train_2, X_test, y_test)
train_predict(clf3, X_train_3, y_train_3, X_test, y_test)
###Output
SVM
Trained model in 0.0080 seconds
F1 score for training set: 0.8765.
F1 score for test set: 0.3810.
Trained model in 0.0200 seconds
F1 score for training set: 0.8546.
F1 score for test set: 0.4000.
Trained model in 0.0269 seconds
F1 score for training set: 0.8803.
F1 score for test set: 0.4038.
KNN
Trained model in 0.0018 seconds
F1 score for training set: 0.6341.
F1 score for test set: 0.5649.
Trained model in 0.0012 seconds
F1 score for training set: 0.6930.
F1 score for test set: 0.5600.
Trained model in 0.0011 seconds
F1 score for training set: 0.7356.
F1 score for test set: 0.5738.
RandomForest
Trained model in 0.0239 seconds
F1 score for training set: 0.9711.
F1 score for test set: 0.6349.
Trained model in 0.0238 seconds
F1 score for training set: 0.9688.
F1 score for test set: 0.7179.
Trained model in 0.0329 seconds
F1 score for training set: 0.9653.
F1 score for test set: 0.7302.
###Markdown
| |Size|Time |Train Score|Test Score|
|:---:|:--:|:---:|:------:|:----:|
|SVM1 |33% |0.008| 0.877 | 0.381|
|SVM2 |66% |0.020| 0.855 | 0.400|
|SVM3 |100%|0.027| 0.880 | 0.404|

| |Size|Time |Train Score|Test Score|
|:---:|:--:|:---:|:------:|:----:|
|KNN1 |33% |0.002| 0.634 | 0.565|
|KNN2 |66% |0.001| 0.693 | 0.560|
|KNN3 |100%|0.001| 0.736 | 0.574|

| |Size|Time |Train Score|Test Score|
|:---:|:--:|:---:|:------:|:----:|
|Ran1 |33% |0.024| 0.971 | 0.635|
|Ran2 |66% |0.024| 0.969 | 0.718|
|Ran3 |100%|0.033| 0.965 | 0.730|

Tune Model
###Code
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import make_scorer,f1_score
from sklearn.model_selection import GridSearchCV, ShuffleSplit # grid_search and cross_validation modules in the original; renamed in newer scikit-learn
import time
start = time.perf_counter() # time.clock() was removed in Python 3.8
parameters = {'n_estimators': [10,20,40,80],'criterion':['gini','entropy']
,'max_features':['log2','sqrt',None],'max_depth':[5,6,7,8],'min_samples_split':[2,3,4] # min_samples_split must be >= 2
,'warm_start':[False,True]}
cv_sets = ShuffleSplit(n_splits=10, test_size=0.20, random_state=0)
clf = RandomForestClassifier()
f1_scorer = make_scorer(f1_score)
grid_obj = GridSearchCV(clf,param_grid=parameters,scoring=f1_scorer,cv=cv_sets)
grid_obj=grid_obj.fit(X_train, y_train)
clf = grid_obj.best_estimator_
end = time.perf_counter()
print(grid_obj.best_estimator_.get_params())
print("Tuned model has a training F1 score of {:.4f}.".format(predict_labels(clf, X_train, y_train)))
print("Tuned model has a testing F1 score of {:.4f}.".format(predict_labels(clf, X_test, y_test)))
print("Optimize model in {:.4f} seconds".format(end - start))
###Output
{'warm_start': True, 'oob_score': False, 'n_jobs': 1, 'verbose': 0, 'max_leaf_nodes': None, 'bootstrap': True, 'min_samples_leaf': 1, 'n_estimators': 40, 'min_samples_split': 2, 'min_weight_fraction_leaf': 0.0, 'criterion': 'entropy', 'random_state': None, 'max_features': 'sqrt', 'max_depth': 8, 'class_weight': None}
Tuned model has a training F1 score of 0.8690.
Tuned model has a testing F1 score of 0.7350.
Optimize model in 633.5732 seconds
###Markdown
Result Figure
###Code
import numpy as np
import pandas as pd
import seaborn as sns
%pylab inline
plot_data=pd.DataFrame({'Name':['SVM','KNN','RDM','Tuned RDM'],
'Train Score':[0.880,0.736,0.965,0.882],
'Test Score':[0.404,0.574,0.730,0.760]})
sns.pointplot(x='Name',y='Train Score',data=plot_data,markers='o',color='r')
sns.pointplot(x='Name',y='Test Score',data=plot_data,markers='D',color='g')
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import learning_curve # sklearn.learning_curve in the original; moved in newer scikit-learn
def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None,
n_jobs=1, train_sizes=np.linspace(.1, 1.0, 5)):
plt.figure()
plt.title(title)
if ylim is not None:
plt.ylim(*ylim)
plt.xlabel("Training examples")
plt.ylabel("Score")
train_sizes, train_scores, test_scores = learning_curve(
estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.grid()
plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.1,
color="r")
plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std, alpha=0.1, color="g")
plt.plot(train_sizes, train_scores_mean, 'o-', color="r",
label="Training score")
plt.plot(train_sizes, test_scores_mean, 'o-', color="g",
label="Cross-validation score")
plt.legend(loc="best")
return plt
title = "Learning Curves (Random Forest)"
plot_learning_curve(clf, title, X_train, y_train, (0.7, 1.01), cv=cv_sets, n_jobs=2)
plt.show()
###Output
_____no_output_____
###Markdown
Titanic Survival Prediction
This notebook shows how to predict if a passenger survived the Titanic disaster. The data is obtained through [Kaggle](https://www.kaggle.com/c/titanic/overview). The goal is to show a real example of a Machine Learning/Data Science task and provide a basic guide on how to approach this type of problem.
Import Libraries
###Code
import matplotlib.pyplot as plt # Plotting
import numpy as np # Array computation
import seaborn as sns # Plotting
import pandas as pd # Handling table data
from IPython.display import display, clear_output # Plotting
# Models
import sklearn.model_selection
import sklearn.linear_model
import sklearn.svm
import sklearn.ensemble
%matplotlib inline
sns.set_theme()
# Set random seed
np.random.seed(42)
###Output
_____no_output_____
###Markdown
Problem Definition
Our task is to predict if a person survived the Titanic disaster based on some set of features. This is a _binary classification_ problem, where our target is 0 (did not survive) or 1 (survived).
Data Analysis and Processing
This is usually the most important stage of a Machine Learning/Data Science problem. The goal of this stage is to understand our data (analysis), correct possible problems (cleaning), and process it so it's easier to use (wrangling).
Load and View Data
Our data is already split into a training and test set. Let's look at the training set only for now.
###Code
train_df = pd.read_csv("data\\train.csv")
train_df
###Output
_____no_output_____
###Markdown
Our training dataset contains 891 passenger records and we have 12 columns per passenger. Let's try to understand what these are.
###Code
train_df.columns
train_df.sample(10)
###Output
_____no_output_____
###Markdown
Based on what we have just seen and the documentation, we can describe our columns:
* PassengerID: seems to be just a number indicating the current row, so it's not a feature
* Survived: 0 or 1 indicating if the passenger survived (this is our target!)
* Pclass: 1, 2 or 3. Indicates the class of the ticket (1st, 2nd, or 3rd class)
* Name: The name of the passenger
* Sex: The sex of the passenger
* Age: The age of the passenger. Note the value is float. Age can be less than one. When estimated, appears as xx.5
* SibSp: Number greater or equal to 0. Indicates number of siblings or spouses aboard
* Parch: Number greater or equal to 0. Indicates number of parents or children aboard
* Ticket: Alphanumeric. This is the raw ticket number
* Fare: Number greater or equal to 0. Indicates the fare paid by the passenger
* Cabin: Alphanumeric. Indicates the cabin number assigned to the passenger.
* Embarked: One of C (Cherbourg), Q (Queenstown), or S (Southampton). Indicates the port where the passenger embarked.
Just based on this description we can already have some ideas about the data and how the different features might help us make a prediction. For example:
* *Pclass* is an indicator of socio-economic status and it seems reasonable to expect people in 1st class have a higher chance of survival
* *Name* is hard to work with. We already have features regarding other family members on board. There might be extra information in the titles (e.g. Master, Captain, etc.) but for the purpose of this notebook let's ignore them
* *Sex* is potentially very relevant, due to the usual procedure of saving "women and children first" in this type of event
* *Age* is also probably very relevant
* *SibSp* and *Parch* both give information about the family members of a passenger. Perhaps it would make sense to join both into a single feature indicating family size
* *Ticket* is just a raw number. There is no guarantee that its value indicates anything that isn't already described by other features. Perhaps it's best to discard it
* *Fare* is probably very related with *Pclass* and *Embarked*
* *Embarked* doesn't seem to be directly relevant for survival rate
Note that the above discussion and assumptions should still be validated!
Analyse Data
Let's start by directly computing some metrics with Pandas.
###Code
train_df.describe()
###Output
_____no_output_____
###Markdown
* About 38% of the passengers in the training set survived
* Most passengers are young (average age is 29, 75% have 38 or less) and we are missing the age of some (we have 714 out of 891)
* Most passengers travelled alone (average SibSp is 0.5 and 50% is 0; average Parch is 0.38 and 75% is 0)
* Fare has a wide range (min of 0 and max of 512). It is possible that the values of 0 are missing records but let's ignore that for now
Given this, it might make sense to bin the Age and Fare features. We will also want to complete the Age feature. PassengerId is irrelevant, so we should drop it.
###Code
train_df.describe(include="O")
###Output
_____no_output_____
###Markdown
* Each name is unique
* Most of the passengers are male
* There are duplicate ticket numbers (681 unique out of 891)
* There are duplicate cabin numbers (147 unique out of 204) and several missing values (only have 204 out of 891)
* Most passengers embarked in Southampton (664 out of 889) and we are missing the port for two passengers
Given this, we might want to drop the Name, Ticket, and Cabin features. Also, we should complete the Embarked feature. Just based on this analysis, we already have a long list of things to do with our data. Before we do them, let's analyse things a bit further and see if our previous assumptions are correct. First, is *Pclass* a good indicator for survival? We expected people in first class to have a better survival rate than third class.
###Code
train_df[['Pclass', 'Survived']].groupby(['Pclass'], as_index=False).mean().sort_values(by='Survived', ascending=False)
grid = sns.FacetGrid(train_df, col='Survived', row='Pclass')
grid.map(plt.hist, 'Age', bins=50)
###Output
_____no_output_____
###Markdown
Our assumption seems to hold! This means that this is a good feature for us to keep. We also assumed that female passengers have a higher survival chance.
###Code
train_df[['Sex', 'Survived']].groupby(['Sex'], as_index=False).mean().sort_values(by='Survived', ascending=False)
sns.histplot(train_df["Sex"])
sns.catplot(x="Survived", y="Sex", data=train_df, kind="bar")
###Output
_____no_output_____
###Markdown
This assumption also seems to hold! Let's keep this feature. We also talked about merging the *SibSp* and *Parch* features. Let's look at the survival rates for these features.
###Code
train_df[["SibSp", "Survived"]].groupby(['SibSp'], as_index=False).mean().sort_values(by='Survived', ascending=False)
train_df[["Parch", "Survived"]].groupby(['Parch'], as_index=False).mean().sort_values(by='Survived', ascending=False)
###Output
_____no_output_____
###Markdown
While there is some relation to survival rate, there are some cases where the rate is exactly zero. Moving forward we can merge these features into a family size feature and analyse this again. Now let's have a look at the *Age* feature.
###Code
sns.histplot(train_df["Age"], bins=50)
sns.FacetGrid(train_df, col='Survived').map(sns.histplot, "Age", bins=50)
###Output
_____no_output_____
###Markdown
This is mostly in line with our assumptions. Some elderly survived and so did many children. Now let's look at the Fare feature.
###Code
sns.histplot(train_df["Fare"], bins=50)
sns.FacetGrid(train_df, col='Survived').map(sns.histplot, "Fare", bins=50)
###Output
_____no_output_____
###Markdown
Fare has a very wide range so we should probably be mindful of that when binning. Finally, let's look at the *Embarked* feature.
###Code
sns.catplot(x="Survived", y="Embarked", data=train_df, kind="bar")
###Output
_____no_output_____
###Markdown
Unsure if it's useful but let's keep it. We will want to change the labels to numbers. We will finish the analysis section here but note that we mostly only checked each individual feature. We should also try to find correlations between the different features.
Complete missing data
We identified that there is some missing data in our dataset, namely missing Age and Embarked. Regarding Age, we can replace it with the overall median value. Note that if we found a correlation between Age and some other feature, then we could complete it in a different way. Regarding Embarked, we can replace missing values with the most common value (mode).
###Code
train_df["Age"].fillna(train_df["Age"].median(), inplace=True)
train_df["Age"].describe()
train_df["Embarked"].fillna(train_df["Embarked"].mode()[0], inplace=True)
train_df["Embarked"].describe()
###Output
_____no_output_____
###Markdown
Note that we should complete missing values in the test data as well. However, we must use the median and mode that we computed using the training set.
Wrangle data
In this stage we will change the data:
* Drop PassengerId, Name, Ticket, and Cabin
* Bin *Age* and *Fare* features
* Merge *SibSp* and *Parch* into a family size feature (and pivot against Survived again)
* Map *Sex* and *Embarked* features to numbers
###Code
train_df.drop(["PassengerId", "Name", "Ticket", "Cabin"], axis=1, inplace=True)
train_df.sample(10)
train_df["AgeBand"] = pd.to_numeric(pd.cut(train_df["Age"], bins=5, labels=[0, 1, 2, 4, 5])) # Equal width bins
train_df.drop(["Age"], axis=1, inplace=True)
train_df.head()
train_df["FareBand"] = pd.to_numeric(pd.qcut(train_df["Fare"], q=5, labels=[0, 1, 2, 4, 5])) # 5 quantiles
train_df.drop(["Fare"], axis=1, inplace=True)
train_df.head()
train_df["FamilySize"] = train_df["SibSp"] + train_df["Parch"] + 1
train_df.drop(["SibSp", "Parch"], axis=1, inplace=True)
train_df.head()
train_df[["FamilySize", "Survived"]].groupby(["FamilySize"], as_index=False).mean().sort_values(by="Survived", ascending=False)
train_df["Embarked"] = train_df["Embarked"].map({"S": 0, "C": 1, "Q": 2})
train_df
train_df["Sex"] = train_df["Sex"].map({"male": 0, "female": 1})
train_df
###Output
_____no_output_____
###Markdown
We have now processed our data and are ready to use it! Before that, let's have a look at the correlations.
###Code
sns.heatmap(
train_df.corr(),
square=True,
annot=True,
annot_kws={'fontsize': 9}
)
###Output
_____no_output_____
###Markdown
Train Models
At this stage our data is ready to use for model training! The first step is to split the train data into training and validation:
###Code
train_df, val_df = sklearn.model_selection.train_test_split(train_df, train_size=0.75)
X_train = train_df.drop("Survived", axis=1)
y_train = train_df["Survived"]
X_val = val_df.drop("Survived", axis=1)
y_val = val_df["Survived"]
###Output
_____no_output_____
###Markdown
We can now try to train a model! Let's try Logistic Regression:
###Code
lr_classifier = sklearn.linear_model.LogisticRegression()
lr_classifier.fit(X_train, y_train)
lr_acc = lr_classifier.score(X_val, y_val)
print(f"Accuracy: {lr_acc * 100:.2f}%")
###Output
Accuracy: 79.82%
###Markdown
The accuracy is not bad. Let's try a few different models:
###Code
svm_classifier = sklearn.svm.SVC()
svm_classifier.fit(X_train, y_train)
svm_acc = svm_classifier.score(X_val, y_val)
print(f"Accuracy: {svm_acc * 100:.2f}%")
rf_classifier = sklearn.ensemble.RandomForestClassifier(n_estimators=100)
rf_classifier.fit(X_train, y_train)
rf_acc = rf_classifier.score(X_val, y_val)
print(f"Accuracy: {rf_acc * 100:.2f}%")
###Output
Accuracy: 82.96%
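###Markdown
To compare the three models at a glance (a small addition of mine), the validation accuracies computed above can be collected into one table.
###Code
pd.DataFrame({
    "Model": ["LogisticRegression", "SVC", "RandomForest"],
    "ValAccuracy": [lr_acc, svm_acc, rf_acc],
}).sort_values("ValAccuracy", ascending=False)
###Output
_____no_output_____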
###Markdown
It is important to have good practices for organizing your files. Mine, which is not an absolute rule, is to always create at least three parts:
Importing the libraries: where I import all the libraries I am going to use in one go.
Creating personal helper functions: for my analysis and processing needs (may be covered in a later lesson).
Importing the data: where I import all the data I am going to use.
What follows depends on the goal: "data analysis", "comparisons", "miscellaneous tests". I also state the source of my work (when it is based on data from the web).
Source: https://www.kaggle.com/c/titanic
Importing the Libraries
###Code
import pandas as pd
import matplotlib.pyplot as plt
import math
import seaborn as sns
import numpy as np
from sklearn import svm
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
###Output
_____no_output_____
###Markdown
Creating Personal Helper Functions
In the Titanic dataset, the names contain titles. Later in this file, to use these titles we will need to translate them; I propose a translation here. You can go further or modify it as you wish.
###Code
dict_titre = {
'Capt': 'Dr/Clergé/Mil',
'Col': 'Dr/Clergé/Mil',
'Major': 'Dr/Clergé/Mil',
'Jonkheer': 'Honorifique',
'Don': 'Honorifique',
'Dona': 'Honorifique',
'Sir': 'Honorifique',
'Dr': 'Dr/Clergé/Mil',
'Rev': 'Dr/Clergé/Mil',
'the Countess': 'Honorifique',
'Mme': 'Mrs',
'Mlle': 'Miss',
'Ms': 'Mrs',
'Mr': 'Mr',
'Mrs': 'Mrs',
'Miss': 'Miss',
'Master': 'Master',
'Lady': 'Honorifique'
}
###Output
_____no_output_____
###Markdown
The function below is an example. It will be used later on to draw plots.
###Code
# Selecting categorical data for univariate analysis
cats = ['Survived', 'Pclass', 'Sex', 'SibSp', 'Parch', 'Embarked']
def plotFrequency(cats):
#"A plot for visualize categorical data, showing both absolute and relative frequencies"
fig, axes = plt.subplots(math.ceil(len(cats) / 3), 3, figsize=(20, 12))
axes = axes.flatten()
for ax, cat in zip(axes, cats):
total = float(len(df_train[cat]))
sns.countplot(df_train[cat], palette='plasma', ax=ax)
for p in ax.patches:
height = p.get_height()
ax.text(p.get_x() + p.get_width() / 2.,
height + 10,
'{:1.2f}%'.format((height / total) * 100),
ha="center")
plt.ylabel('Count', fontsize=15, weight='bold')
###Output
_____no_output_____
###Markdown
Importing the Data
For now we will only work with the training data.
###Code
df_train = pd.read_csv("data/train.csv")
df_train.head()
df_train.shape
###Output
_____no_output_____
###Markdown
Looking for Missing Values
One of the common problems is missing values, for several reasons: old incomplete data, writing bugs, unusable data. So one of the first things to do is to check where the missing data is.
The functions for this check
There are two functions used in this case:
* .sum(): returns the sum of the values (here, over a column)
* .isna(): returns True if the cell is empty and False if it is filled.
Note: Python interprets True as 1 and False as 0, so if we combine the two we get the number of missing values. What we can do is first ask whether each cell is empty or filled, then take the sum. To do this, we apply the two functions directly to the DataFrame.
Double-click for the solution
###Code
#Your attempt to get the missing values per column
###Output
_____no_output_____
###Markdown
Describing the Columns
Another very useful function for the first explorations of a DataFrame is:
* .describe(), which by default describes the numerical data of the DataFrame.
If we use it with the parameter include="all" we also get the description for the other data types. Try both options of the function.
Double-click for the solution
<!--df_train.describe(include='all')df_train.describe() -->
Another way to describe the DataFrame is to look at every column one by one and use:
* .value_counts(): which returns a frequency table of the column, sorted in descending order by default.
One way to do this is the code below, which is self-explanatory.
###Code
for name in df_train.columns:
print(20*"-")
print(name)
print(df_train[name].value_counts())
###Output
--------------------
PassengerId
891 1
293 1
304 1
303 1
302 1
301 1
300 1
299 1
298 1
297 1
296 1
295 1
294 1
292 1
306 1
291 1
290 1
289 1
288 1
287 1
286 1
285 1
284 1
283 1
282 1
281 1
305 1
307 1
279 1
321 1
..
561 1
560 1
584 1
585 1
586 1
587 1
610 1
609 1
608 1
607 1
606 1
605 1
604 1
603 1
602 1
601 1
600 1
599 1
598 1
597 1
596 1
595 1
594 1
593 1
592 1
591 1
590 1
589 1
588 1
1 1
Name: PassengerId, Length: 891, dtype: int64
--------------------
Survived
0 549
1 342
Name: Survived, dtype: int64
--------------------
Pclass
3 491
1 216
2 184
Name: Pclass, dtype: int64
--------------------
Name
Johansson, Mr. Karl Johan 1
Levy, Mr. Rene Jacques 1
West, Mr. Edwy Arthur 1
Serepeca, Miss. Augusta 1
Aks, Mrs. Sam (Leah Rosen) 1
Larsson, Mr. August Viktor 1
Pickard, Mr. Berk (Berk Trembisky) 1
van Melkebeke, Mr. Philemon 1
McCoy, Miss. Agnes 1
Jonsson, Mr. Carl 1
Niskanen, Mr. Juha 1
Louch, Mrs. Charles Alexander (Alice Adelaide Slow) 1
Hamalainen, Master. Viljo 1
Bostandyeff, Mr. Guentcho 1
Johnson, Mr. William Cahoone Jr 1
Slabenoff, Mr. Petco 1
Isham, Miss. Ann Elizabeth 1
Vande Walle, Mr. Nestor Cyriel 1
Uruchurtu, Don. Manuel E 1
Slayter, Miss. Hilda Mary 1
Hood, Mr. Ambrose Jr 1
Yrois, Miss. Henriette ("Mrs Harbeck") 1
Moubarek, Master. Gerios 1
Olsen, Mr. Karl Siegwart Andreas 1
Jerwan, Mrs. Amin S (Marie Marthe Thuillard) 1
Parrish, Mrs. (Lutie Davis) 1
Richards, Master. George Sibley 1
Rouse, Mr. Richard Henry 1
Parkes, Mr. Francis "Frank" 1
Appleton, Mrs. Edward Dale (Charlotte Lamson) 1
..
Bowen, Mr. David John "Dai" 1
Hirvonen, Miss. Hildur E 1
Oreskovic, Miss. Marija 1
Shutes, Miss. Elizabeth W 1
Lemberopolous, Mr. Peter L 1
Yasbeck, Mr. Antoni 1
Phillips, Miss. Kate Florence ("Mrs Kate Louise Phillips Marshall") 1
Skoog, Miss. Margit Elizabeth 1
Badt, Mr. Mohamed 1
McCoy, Mr. Bernard 1
McCormack, Mr. Thomas Joseph 1
Rosblom, Mrs. Viktor (Helena Wilhelmina) 1
Hakkarainen, Mr. Pekka Pietari 1
Lehmann, Miss. Bertha 1
Baclini, Miss. Marie Catherine 1
Rekic, Mr. Tido 1
Barber, Miss. Ellen "Nellie" 1
Taylor, Mrs. Elmer Zebley (Juliet Cummins Wright) 1
Kink-Heilmann, Miss. Luise Gretchen 1
Karlsson, Mr. Nils August 1
Sundman, Mr. Johan Julian 1
Thorne, Mrs. Gertrude Maybelle 1
Hendekovic, Mr. Ignjac 1
Davison, Mrs. Thomas Henry (Mary E Finck) 1
Dodge, Master. Washington 1
Mellinger, Mrs. (Elizabeth Anne Maidment) 1
Saundercock, Mr. William Henry 1
Chapman, Mr. John Henry 1
Bryhl, Mr. Kurt Arnold Gottfrid 1
Rice, Mrs. William (Margaret Norton) 1
Name: Name, Length: 891, dtype: int64
--------------------
Sex
male 577
female 314
Name: Sex, dtype: int64
--------------------
Age
24.00 30
22.00 27
18.00 26
19.00 25
30.00 25
28.00 25
21.00 24
25.00 23
36.00 22
29.00 20
32.00 18
27.00 18
35.00 18
26.00 18
16.00 17
31.00 17
20.00 15
33.00 15
23.00 15
34.00 15
39.00 14
17.00 13
42.00 13
40.00 13
45.00 12
38.00 11
50.00 10
2.00 10
4.00 10
47.00 9
..
71.00 2
59.00 2
63.00 2
0.83 2
30.50 2
70.00 2
57.00 2
0.75 2
13.00 2
10.00 2
64.00 2
40.50 2
32.50 2
45.50 2
20.50 1
24.50 1
0.67 1
14.50 1
0.92 1
74.00 1
34.50 1
80.00 1
12.00 1
36.50 1
53.00 1
55.50 1
70.50 1
66.00 1
23.50 1
0.42 1
Name: Age, Length: 88, dtype: int64
--------------------
SibSp
0 608
1 209
2 28
4 18
3 16
8 7
5 5
Name: SibSp, dtype: int64
--------------------
Parch
0 678
1 118
2 80
5 5
3 5
4 4
6 1
Name: Parch, dtype: int64
--------------------
Ticket
1601 7
347082 7
CA. 2343 7
3101295 6
CA 2144 6
347088 6
382652 5
S.O.C. 14879 5
4133 4
349909 4
113781 4
113760 4
2666 4
347077 4
LINE 4
19950 4
PC 17757 4
17421 4
W./C. 6608 4
C.A. 34651 3
13502 3
239853 3
F.C.C. 13529 3
29106 3
SC/Paris 2123 3
PC 17755 3
PC 17760 3
248727 3
110413 3
363291 3
..
365226 1
2647 1
112277 1
345780 1
330958 1
7540 1
248698 1
SC/AH 29037 1
345572 1
315086 1
237565 1
SCO/W 1585 1
C.A. 24580 1
330919 1
SOTON/OQ 3101316 1
244278 1
A/5 21174 1
SOTON/OQ 392086 1
347074 1
35852 1
345777 1
236853 1
336439 1
SC/AH Basle 541 1
367229 1
35851 1
237789 1
2629 1
2680 1
2671 1
Name: Ticket, Length: 681, dtype: int64
--------------------
Fare
8.0500 43
13.0000 42
7.8958 38
7.7500 34
26.0000 31
10.5000 24
7.9250 18
7.7750 16
26.5500 15
0.0000 15
7.2292 15
7.8542 13
8.6625 13
7.2500 13
7.2250 12
16.1000 9
9.5000 9
24.1500 8
15.5000 8
56.4958 7
52.0000 7
14.5000 7
14.4542 7
69.5500 7
7.0500 7
31.2750 7
46.9000 6
30.0000 6
7.7958 6
39.6875 6
..
7.1417 1
42.4000 1
211.5000 1
12.2750 1
61.1750 1
8.4333 1
51.4792 1
7.8875 1
8.6833 1
7.5208 1
34.6542 1
28.7125 1
25.5875 1
7.7292 1
12.2875 1
8.6542 1
8.7125 1
61.3792 1
6.9500 1
9.8417 1
8.3000 1
13.7917 1
9.4750 1
13.4167 1
26.3875 1
8.4583 1
9.8375 1
8.3625 1
14.1083 1
17.4000 1
Name: Fare, Length: 248, dtype: int64
--------------------
Cabin
G6 4
B96 B98 4
C23 C25 C27 4
C22 C26 3
F33 3
F2 3
E101 3
D 3
B28 2
C78 2
C52 2
C125 2
C92 2
B57 B59 B63 B66 2
C65 2
E67 2
D33 2
E8 2
D26 2
B5 2
E44 2
F G73 2
B22 2
C2 2
C93 2
B77 2
D20 2
C83 2
B20 2
D35 2
..
D6 1
A6 1
D7 1
C90 1
D30 1
D48 1
E58 1
B37 1
B42 1
D21 1
B69 1
B50 1
B71 1
D15 1
E49 1
E40 1
D37 1
E17 1
B73 1
D28 1
B41 1
E77 1
C128 1
A20 1
C85 1
E10 1
D19 1
D46 1
C7 1
D11 1
Name: Cabin, Length: 147, dtype: int64
--------------------
Embarked
S 644
C 168
Q 77
Name: Embarked, dtype: int64
###Markdown
I admit this is quite indigestible, but it is very useful with fewer columns or fewer categories. We may want to look at the columns through filters (a reminder from session 2). For example, I see in the tables above that cabin G6 is occupied by 4 people, so I can build a filter to see who occupies it.
Double-click for the solution
<!--df_train.loc[df_train["Cabin"] == "G6"] -->
###Code
# Your attempt to see who occupies cabin G6
###Output
_____no_output_____
###Markdown
What is handy when you look at what others do is that you realize there are plenty of easy-to-use methods, notably:
* .countplot(), which produces a count plot (bar chart) of one of the columns.
For example, for the "Survived" column:
###Code
sns.countplot(df_train["Survived"], palette='plasma')
###Output
_____no_output_____
###Markdown
Your turn! Draw the plot for the "Sex" column.
Double-click for the solution
<!--sns.countplot(df_train["Sex"], palette='plasma') -->
###Code
#Your solution here:
###Output
_____no_output_____
###Markdown
There are also many functions/resources available if you look for them. For example, the function created at the start (source: https://www.kaggle.com/datafan07/titanic-eda-and-several-modelling-approaches) draws several plots at once. With a bit of experience you will be able to adapt it to your needs, or create one from scratch.
###Code
plotFrequency(cats)
###Output
_____no_output_____
###Markdown
Now that we have gotten to grips with the dataset, and we know the number of missing values and their distribution, we can try to do a bit of machine learning on it.
A Look at the Percentage of Survivors
At this point, one of the questions that naturally arises is: "is my dataset balanced?", in the sense that the different values I am trying to predict are distributed evenly, or almost. It is not mandatory to have a balanced dataset, but knowing its distribution avoids problems.
WARNING: we are in a case where we want to predict categories, two categories to be exact (survives / does not survive), so certain methods and logic apply. In the case of continuous data (the price of a house, fuel consumption, ...) the logic is not the same.
Your turn: from what has been presented, try to obtain the distribution of survivors in the dataset.
Double-click for the solution
<!--df_train["Survived"].value_counts() -->
###Code
# Your attempt here:
###Output
_____no_output_____
###Markdown
At this point we still have all the columns of the training set, including text columns that are unusable as-is. There is a very handy function to turn categorical variables into 0s and 1s:
* .get_dummies(): which creates as many columns as there are categories and fills them automatically with 0 or 1.
Example:
###Code
df_train = df_train.join(pd.get_dummies(df_train["Sex"]))
df_train.head()
###Output
_____no_output_____
###Markdown
Here I both transformed my columns and added them back into my DataFrame. I can also specify a prefix if needed:
###Code
df_train = df_train.join(pd.get_dummies(df_train["Embarked"], prefix = "emb"))
df_train.head()
###Output
_____no_output_____
###Markdown
Now that we have seen how to extract information from simple data, let's do it with the titles present in people's names.
Transforming the Titles
To do this, we will:
* create a new column that will hold our titles
* read the names line by line
* read the titles one by one from our dictionary
* check whether the title is present in the name
* add the title to the new column
###Code
df_train["titre"] = ""
for row in range(df_train.shape[0]):
name = df_train.loc[row]["Name"]
for titre in dict_titre:
if titre in name:
df_train["titre"][row] = dict_titre[titre]
###Output
C:\ProgramData\Anaconda3\lib\site-packages\ipykernel_launcher.py:6: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
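###Markdown
A vectorized alternative (an addition of mine, assuming every raw title in the data is a key of dict_titre): pandas can extract the raw title between the comma and the period with a regular expression, then translate it through the dictionary in one step, without the row loop.
###Code
raw_titles = df_train["Name"].str.extract(r",\s*([^.]+)\.", expand=False).str.strip()
df_train["titre"] = raw_titles.map(dict_titre).fillna(df_train["titre"])
###Output
_____no_output_____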
###Markdown
Then we can look at the distribution of the titles.
###Code
df_train["titre"].value_counts()
###Output
_____no_output_____
###Markdown
Now, as above, we will transform this column into as many 0-1 columns, remembering to fill in the prefix. Try it!
Double-click for the solution
<!-- df_train = df_train.join(pd.get_dummies(df_train["titre"], prefix = "titre"))df_train.head()-->
###Code
# Try it!
###Output
_____no_output_____
###Markdown
Now that we have cleaned/transformed our values, our dataset is ready for prediction.
Prediction Tests
As a visual check, I like to display the first 5 rows with:
* .head(): which returns the first 5 rows of the DataFrame.
###Code
df_train.head()
###Output
_____no_output_____
###Markdown
I can also quickly isolate the column names with:
* .columns (a DataFrame attribute)
###Code
df_train.columns
###Output
_____no_output_____
###Markdown
I will now simply copy the data into a new DataFrame, keeping only the numerical data. To do this I use:
* .drop(): to remove the unneeded columns.
###Code
df_trainP = df_train.drop(["PassengerId",
"Age",
"Name",
"Sex",
"Ticket",
"Cabin",
"Embarked",
"titre"], axis=1)
###Output
_____no_output_____
###Markdown
I quickly check that I indeed get the output I want.
###Code
df_trainP.head()
###Output
_____no_output_____
###Markdown
From there, usually 4 objects are created:
* X_train: the training data without the column to predict.
* X_test: the test data, used to check how the training went.
* y_train: the column to predict for the training data.
* y_test: the column to predict for the test data.
To help us, there is a library with a ready-made function for this kind of problem:
* train_test_split(): which takes as input the DataFrame without the column to predict, the column to predict on its own, the test/total ratio, and the random_state.
WARNING: the random_state fixes the random split and therefore the repeatability between runs. It is good to set this value.
###Code
X_train, X_test, y_train, y_test = train_test_split(df_trainP.drop(["Survived"], axis=1),
df_trainP["Survived"],
test_size=0.2,
random_state=0)
###Output
_____no_output_____
###Markdown
From now on I have all my data neatly arranged to do machine learning and to test several algorithms. Training always breaks down into 3 phases:
* creating the MODEL object
* training it on the data
* prediction / performance computation
The first: the SVM (support vector machine). Here we try to predict the value by computing hyperplanes that separate the categories.
###Code
# I create my MODEL, the object that will be trained
clf = svm.SVC(kernel='linear', C = 1.0)
# I train with the .fit() function on my training values
clf.fit(X_train, y_train)
# I predict on my test values
y_pred = clf.predict(X_test)
# Then I can build a confusion matrix
print(confusion_matrix(y_test, y_pred))
# Or isolate the performance of my MODEL:
# what I ask here is to look at where my predicted values match my test values.
# I get a list of True / False.
# I sum to get the number of True values, then divide by the number of points
print("My accuracy is " + str(((y_pred == y_test).sum())/y_test.shape[0]))
###Output
[[93 17]
[17 52]]
My accuracy is 0.8100558659217877
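###Markdown
The hand-rolled accuracy above is equivalent to scikit-learn's accuracy_score (shown here as a cross-check; this cell is an addition).
###Code
from sklearn.metrics import accuracy_score
print("accuracy_score check:", accuracy_score(y_test, y_pred))
###Output
_____no_output_____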
###Markdown
The second: Nearest Neighbors, or k-nearest neighbours. Here we try to predict the value by looking at the K nearest neighbours' labels. We can therefore change the number of neighbours at will.
###Code
# I create my MODEL, the object that will be trained
# For example with 3 neighbours
knn = KNeighborsClassifier(n_neighbors=3)
# I train with the .fit() function on my training values
knn.fit(X_train, y_train)
# I predict on my test values (into y_pred, so the ground-truth y_test stays intact)
y_pred = knn.predict(X_test)
# Then I can build a confusion matrix
print(confusion_matrix(y_test, y_pred))
# Or isolate the performance of my MODEL
print("My accuracy is " + str(((y_pred == y_test).sum())/y_test.shape[0]))
###Output
[[101 19]
[ 9 50]]
My accuracy is 0.8435754189944135
###Markdown
Your turn: try training with 4 nearest neighbours.
Double-click for the answer.
<!-- knn = KNeighborsClassifier(n_neighbors=4)knn.fit(X_train, y_train)y_pred = knn.predict(X_test)print(confusion_matrix(y_test, y_pred))print("My accuracy is " + str(((y_pred == y_test).sum())/y_test.shape[0]))-->
###Code
# Your code here:
###Output
_____no_output_____
###Markdown
I can also imagine making a loop to test many values and display them afterwards.
###Code
liste_score = {}
for voisin in range(1,100):
knn = KNeighborsClassifier(n_neighbors=voisin)
knn.fit(X_train, y_train)
    y_pred = knn.predict(X_test)
    liste_score[voisin] = ((y_pred == y_test).sum())/y_test.shape[0]
liste_score
plt.plot(liste_score.keys(), liste_score.values(), marker='o', color='mediumvioletred')
plt.show()
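# My addition: report the best k found by the sweep above.
best_k = max(liste_score, key=liste_score.get)
print("Best k:", best_k, "with accuracy", liste_score[best_k])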
###Output
_____no_output_____
###Markdown
The third: the random forest, an ensemble of decision trees.
###Code
# I create my MODEL, the object that will be trained
ForestTree = RandomForestClassifier(n_estimators=10)
# I train with the .fit() function on my training values
ForestTree.fit(X_train, y_train)
# I predict on my test values
y_pred = ForestTree.predict(X_test)
# Then I can build a confusion matrix
print(confusion_matrix(y_test, y_pred))
# Or isolate the performance of my MODEL
print("My accuracy is " + str(((y_pred == y_test).sum())/y_test.shape[0]))
###Output
[[99 14]
[11 55]]
My accuracy is 0.8603351955307262
###Markdown
The n_estimators parameter can also be modified:
###Code
liste_score_forest = {}
for voisin in range(1,100):
ForestTree = RandomForestClassifier(n_estimators=voisin)
ForestTree.fit(X_train, y_train)
    y_pred = ForestTree.predict(X_test)
    liste_score_forest[voisin] = ((y_pred == y_test).sum())/y_test.shape[0]
plt.plot(liste_score_forest.keys(), liste_score_forest.values(), marker='o', color='mediumvioletred')
plt.show()
liste_score_forest
###Output
_____no_output_____ |
notebooks/5_training_data.ipynb | ###Markdown
Training Data
In this notebook, I will try to assemble training data pairs: input subjects from the Radio Galaxy Zoo database and potential hosts from the associated IR image, and output classifications.
###Code
import os.path
import pprint
import sys
import astropy.io.fits
import matplotlib.colors
import matplotlib.pyplot
import numpy
import pymongo
import requests
import scipy.ndimage.filters
import sklearn.decomposition
import sklearn.ensemble
import sklearn.linear_model
import sklearn.model_selection  # needed for the train/test split below
import sklearn.neural_network
import sklearn.svm
sys.path.insert(1, '..')
import crowdastro.rgz_analysis.consensus
%matplotlib inline
matplotlib.pyplot.rcParams['image.cmap'] = 'gray'
HOST = 'localhost'
PORT = 27017
DB_NAME = 'radio'
DATA_PATH = os.path.join('..', 'data')
ATLAS_CATALOGUE_PATH = os.path.join(DATA_PATH, 'ATLASDR3_cmpcat_23July2015.dat')
TILE_SIZE = '2x2'
FITS_IMAGE_WIDTH = 200
FITS_IMAGE_HEIGHT = 200
CLICK_IMAGE_WIDTH = 500
CLICK_IMAGE_HEIGHT = 500
CLICK_TO_FITS_X = FITS_IMAGE_WIDTH / CLICK_IMAGE_WIDTH
CLICK_TO_FITS_Y = FITS_IMAGE_HEIGHT / CLICK_IMAGE_HEIGHT
CLICK_TO_FITS = numpy.array([CLICK_TO_FITS_X, CLICK_TO_FITS_Y])
# Setup Mongo DB.
client = pymongo.MongoClient(HOST, PORT)
db = client[DB_NAME]
###Output
_____no_output_____
###Markdown
"Simple" subjectsMy first task is to screen out what I think would be a simple set of subjects. In the fits-format notebook, I found that about 30% of ATLAS subjects have just one set of radio contours.I want to screen out all of these and use them as the training subjects. It's a lot easier to look for just the subjects that have `contour_count = 1` — the number of contours seems to be mostly unrelated to the number of radio sources, but if there's only one contour, there should only be one source. The benefit of doing things this way is that I can ignore the classifications collection for a bit.
###Code
subjects = list(db.radio_subjects.find({'metadata.survey': 'atlas', 'state': 'complete', 'metadata.contour_count': 1}))
print('Found {} subjects.'.format(len(subjects)))
###Output
Found 128 subjects.
###Markdown
That's a lot less than ideal (and less than expected) but we can fix this later. Let's have a look at some.
###Code
def open_fits(subject, field, wavelength):
"""Opens a FITS image.
subject: RGZ subject.
field: 'elais' or 'cdfs'.
wavelength: 'ir' or 'radio'.
-> FITS image file handle.
"""
if field not in {'elais', 'cdfs'}:
raise ValueError('field must be either "elais" or "cdfs".')
if wavelength not in {'ir', 'radio'}:
raise ValueError('wavelength must be either "ir" or "radio".')
assert subject['metadata']['survey'] == 'atlas', 'Subject not from ATLAS survey.'
cid = subject['metadata']['source']
filename = '{}_{}.fits'.format(cid, wavelength)
path = os.path.join(DATA_PATH, field, TILE_SIZE, filename)
return astropy.io.fits.open(path, ignore_blank=True)
def plot_contours(subject, colour='green'):
uri = subject['location']['contours']
contours = requests.get(uri).json()['contours']
for row in contours:
for col in row:
xs = []
ys = []
for pair in col['arr']:
xs.append(pair['x'])
ys.append(pair['y'])
matplotlib.pyplot.plot(xs, FITS_IMAGE_HEIGHT - numpy.array(ys), c=colour)
def imshow(im, contrast=0.05):
"""Helper function for showing an image."""
im = im - im.min() + contrast
return matplotlib.pyplot.imshow(im,
origin='lower',
norm=matplotlib.colors.LogNorm(
vmin=im.min(),
vmax=im.max(),
),
)
def show_subject(subject):
with open_fits(subject, 'cdfs', 'ir') as fits_file:
ir = fits_file[0].data
with open_fits(subject, 'cdfs', 'radio') as fits_file:
radio = fits_file[0].data
matplotlib.pyplot.figure(figsize=(15, 15))
matplotlib.pyplot.subplot(1, 2, 1)
matplotlib.pyplot.title(subject['zooniverse_id'] + ' IR')
matplotlib.pyplot.xlim((0, FITS_IMAGE_WIDTH))
matplotlib.pyplot.ylim((0, FITS_IMAGE_HEIGHT))
imshow(ir)
plot_contours(subject)
matplotlib.pyplot.subplot(1, 2, 2)
matplotlib.pyplot.title(subject['zooniverse_id'] + ' Radio')
matplotlib.pyplot.xlim((0, FITS_IMAGE_WIDTH))
matplotlib.pyplot.ylim((0, FITS_IMAGE_HEIGHT))
imshow(radio)
plot_contours(subject)
show_subject(subjects[10])
###Output
K:\Languages\Anaconda3\lib\site-packages\astropy\io\fits\util.py:578: UserWarning: Could not find appropriate MS Visual C Runtime library or library is corrupt/misconfigured; cannot determine whether your file object was opened in append mode. Please consider using a file object opened in write mode instead.
'Could not find appropriate MS Visual C Runtime '
###Markdown
Potential hosts
Since we're representing this as a binary classification problem, let's get all the potential hosts in an image using the method from the potential_host_counting notebook. This is not ideal — it includes far too many hosts — but it'll do for now.
###Code
def potential_hosts(subject, sigma=0.5, threshold=0):
with open_fits(subject, 'cdfs', 'ir') as fits_file:
ir = fits_file[0].data
neighborhood = numpy.ones((10, 10))
blurred_ir = scipy.ndimage.filters.gaussian_filter(ir, sigma) > threshold
local_max = scipy.ndimage.filters.maximum_filter(blurred_ir, footprint=neighborhood) == blurred_ir
region_labels, n_labels = scipy.ndimage.measurements.label(local_max)
maxima = numpy.array(
[numpy.array((region_labels == i + 1).nonzero()).T.mean(axis=0)
for i in range(n_labels)]
)
    maxima = maxima[numpy.logical_and(maxima[:, 1] != 0, maxima[:, 1] != ir.shape[1] - 1)]  # drop maxima on the image edge (the hard-coded 499 assumed a 500 px wide image)
return maxima
with open_fits(subjects[10], 'cdfs', 'ir') as fits_file:
ir = fits_file[0].data
matplotlib.pyplot.figure(figsize=(15, 15))
matplotlib.pyplot.subplot(1, 2, 1)
matplotlib.pyplot.title(subjects[10]['zooniverse_id'] + ' IR')
matplotlib.pyplot.xlim((0, FITS_IMAGE_WIDTH))
matplotlib.pyplot.ylim((0, FITS_IMAGE_HEIGHT))
imshow(ir)
maxima = potential_hosts(subjects[10], sigma=1, threshold=0.05)
matplotlib.pyplot.scatter(maxima[:, 1], maxima[:, 0])
matplotlib.pyplot.show()
###Output
K:\Languages\Anaconda3\lib\site-packages\astropy\io\fits\util.py:578: UserWarning: Could not find appropriate MS Visual C Runtime library or library is corrupt/misconfigured; cannot determine whether your file object was opened in append mode. Please consider using a file object opened in write mode instead.
'Could not find appropriate MS Visual C Runtime '
###Markdown
This is not a fantastic result, but it will do for now. Julie said that the rgz-analysis code found peaks through Gaussian fitting. I can't find the code for that, but I can use the idea later to get better potential hosts.
Crowdsourced labels
We also need to retrieve the labels for each subject. I'll use the rgz_analysis.consensus code for that.
###Code
def crowdsourced_label(subject):
answers = crowdastro.rgz_analysis.consensus.consensus(subject['zooniverse_id'])['answer']
answer = [answer for answer in answers.values() if answer['ind'] == 0][0]
if 'ir' in answer:
return answer['ir']
if 'ir_peak' in answer:
return answer['ir_peak']
return None
with open_fits(subjects[10], 'cdfs', 'ir') as fits_file:
ir = fits_file[0].data
matplotlib.pyplot.figure(figsize=(15, 15))
matplotlib.pyplot.subplot(1, 2, 1)
matplotlib.pyplot.title(subjects[10]['zooniverse_id'] + ' IR')
matplotlib.pyplot.xlim((0, FITS_IMAGE_WIDTH))
matplotlib.pyplot.ylim((0, FITS_IMAGE_HEIGHT))
imshow(ir)
maxima = potential_hosts(subjects[10], sigma=1, threshold=0.05)
matplotlib.pyplot.scatter(maxima[:, 1], maxima[:, 0])
label = crowdsourced_label(subjects[10])
# Clicks are upside-down, whereas the image and peaks found from it are not.
matplotlib.pyplot.scatter([CLICK_TO_FITS_X * label[0]], [FITS_IMAGE_HEIGHT - CLICK_TO_FITS_Y * label[1]], c='r')
matplotlib.pyplot.show()
###Output
K:\Languages\Anaconda3\lib\site-packages\astropy\io\fits\util.py:578: UserWarning: Could not find appropriate MS Visual C Runtime library or library is corrupt/misconfigured; cannot determine whether your file object was opened in append mode. Please consider using a file object opened in write mode instead.
'Could not find appropriate MS Visual C Runtime '
###Markdown
That seems a reasonable answer. Assembling the dataWe now have- IR images- Radio contours- Radio images- A single point to classify- A way to label the pointsThat's effectively all we need. I want to throw all of this into logistic regression. What I'll do is get a neighbourhood of pixels around the potential host, do the same for the radio image, and naïvely throw it all into scikit-learn. This will almost certainly be ineffective, but it's a start.Edit, 27/03/2016: According to the results of mean_images, the IR image doesn't really matter. We can quite possibly just ignore it for now, and I do this below.
###Code
def get_training_pairs(subject):
with open_fits(subject, 'cdfs', 'ir') as fits_file:
ir = fits_file[0].data
with open_fits(subject, 'cdfs', 'radio') as fits_file:
radio = fits_file[0].data
radius = 20
ir = numpy.pad(ir, radius, mode='linear_ramp')
radio = numpy.pad(radio, radius, mode='linear_ramp')
hosts = potential_hosts(subject, sigma=1, threshold=0.05)
actual_host = crowdsourced_label(subject)
if actual_host is None:
return []
actual_host = numpy.array(actual_host) * CLICK_TO_FITS
nearest_host = min(hosts, key=lambda host: numpy.hypot(actual_host[0] - host[1], actual_host[1] - host[0]))
pairs = []
for host in hosts:
host_y, host_x = host
        # Cast the float centroid coordinates to int before slicing (as is done for
        # the radio image below); note this IR neighbourhood is currently unused.
        ir_neighbourhood = ir[int(host_x) : int(host_x) + 2 * radius, int(host_y) : int(host_y) + 2 * radius]
radio_neighbourhood = radio[int(host_x) : int(host_x) + 2 * radius, int(host_y) : int(host_y) + 2 * radius]
input_vec = numpy.ndarray.flatten(radio_neighbourhood)
label = (nearest_host == host).all()
pairs.append((input_vec, label))
return pairs
training_data = [pair for subject in subjects for pair in get_training_pairs(subject)]
print('Number of training samples:', len(training_data))
###Output
Number of training samples: 9396
###Markdown
Training Here, I throw the data into logistic regression and see what happens.
###Code
xs = [x for x, _ in training_data]
ys = [int(y) for _, y in training_data]
xs_train, xs_test, ys_train, ys_test = sklearn.cross_validation.train_test_split(xs, ys, test_size=0.2, random_state=0)
lr = sklearn.linear_model.LogisticRegression(C=1e5, class_weight='auto')  # Note: 'auto' was deprecated in scikit-learn 0.17; newer versions use class_weight='balanced'.
lr.fit(xs_train, ys_train)
n_true_positive = numpy.logical_and(lr.predict(xs_test) == numpy.array(ys_test), numpy.array(ys_test) == 1).sum()
n_true_negative = numpy.logical_and(lr.predict(xs_test) == numpy.array(ys_test), numpy.array(ys_test) == 0).sum()
n_false_positive = numpy.logical_and(lr.predict(xs_test) != numpy.array(ys_test), numpy.array(ys_test) == 0).sum()
n_false_negative = numpy.logical_and(lr.predict(xs_test) != numpy.array(ys_test), numpy.array(ys_test) == 1).sum()
print('True positives:', n_true_positive)
print('True negatives:', n_true_negative)
print('False positives:', n_false_positive)
print('False negatives:', n_false_negative)
###Output
True positives: 26
True negatives: 1332
False positives: 518
False negatives: 4
###Markdown
Originally, the logistic regression had essentially learned to output `False`, which makes sense — the examples are overwhelmingly `False`, so you can get to a very easy minimum by always outputting `False`. I said that some ways to get around this might be to inflate the number of `True` examples, or to change the output encoding in some way. Cheng suggested just weighting logistic regression's cost function to balance the `True`s and `False`s — there's an argument for this. The result is that there are far more attempts to assign `True`. ConvNetLet's try a nonlinear model that learns some features.This doesn't correctly weight the classes, since Keras doesn't support class weights and I haven't manually weighted yet, but it does learn features.
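A minimal sketch of what manual weighting could look like (an addition, not from the original notebook: inverse-frequency weights computed with NumPy; whether `fit` accepts `sample_weight` depends on the Keras version):

```
# Hedged sketch: per-sample weights proportional to inverse class frequency.
ys_arr = numpy.array(ys)                    # the 0/1 labels built above
counts = numpy.bincount(ys_arr)             # [n_negative, n_positive]
class_weights = len(ys_arr) / (2.0 * counts)
sample_weights = class_weights[ys_arr]      # one weight per training example
# e.g. scikit-learn: LogisticRegression(class_weight={0: class_weights[0], 1: class_weights[1]})
# e.g. Keras (version permitting): model.fit(..., sample_weight=sample_weights)
```

With that noted, the network below is trained unweighted for now.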
###Code
import keras.layers
import keras.models
radius = 20
input_shape = (1, radius * 2, radius * 2)
n_conv_filters = 10
conv_width = 4
hidden_dim = 256
model = keras.models.Sequential()
model.add(keras.layers.Convolution2D(n_conv_filters, conv_width, conv_width, border_mode='valid', input_shape=input_shape))
model.add(keras.layers.Activation('relu'))
model.add(keras.layers.MaxPooling2D(pool_size=(2, 2)))
model.add(keras.layers.Dropout(0.2))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(hidden_dim))
model.add(keras.layers.Activation('sigmoid'))
model.add(keras.layers.Dense(1))
model.add(keras.layers.Activation('sigmoid'))
# Note: 'mse' on a sigmoid output trains, though binary cross-entropy would be the more conventional loss here.
model.compile(optimizer='sgd', loss='mse')
def get_training_pairs_im(subject):
with open_fits(subject, 'cdfs', 'ir') as fits_file:
ir = fits_file[0].data
with open_fits(subject, 'cdfs', 'radio') as fits_file:
radio = fits_file[0].data
radius = 20
ir = numpy.pad(ir, radius, mode='linear_ramp')
radio = numpy.pad(radio, radius, mode='linear_ramp')
hosts = potential_hosts(subject, sigma=1, threshold=0.05)
actual_host = crowdsourced_label(subject)
if actual_host is None:
return []
actual_host = numpy.array(actual_host) * CLICK_TO_FITS
nearest_host = min(hosts, key=lambda host: numpy.hypot(actual_host[0] - host[1], actual_host[1] - host[0]))
pairs = []
for host in hosts:
host_y, host_x = host
        # Cast the float centroid coordinates to int before slicing; the IR
        # neighbourhood is again unused since only the radio image is kept.
        ir_neighbourhood = ir[int(host_x) : int(host_x) + 2 * radius, int(host_y) : int(host_y) + 2 * radius]
radio_neighbourhood = radio[int(host_x) : int(host_x) + 2 * radius, int(host_y) : int(host_y) + 2 * radius]
input_vec = radio_neighbourhood
label = (nearest_host == host).all()
pairs.append((input_vec, label))
return pairs
training_data_im = [pair for subject in subjects for pair in get_training_pairs_im(subject)]
xs = [x.reshape((1, radius * 2, radius * 2)) for x, _ in training_data_im]
ys = [[int(y)] for _, y in training_data_im]
xs_train, xs_test, ys_train, ys_test = sklearn.cross_validation.train_test_split(xs, ys, test_size=0.2, random_state=0)
xs_train = numpy.array(xs_train)
ys_train = numpy.array(ys_train)
xs_test = numpy.array(xs_test)
ys_test = numpy.array(ys_test)
tp = []
tn = []
fp = []
fn = []
correct_pos = []
correct_neg = []
total_epochs = 0
import IPython.display
for i in range(10):
print('Epoch', i + 1)
model.fit(xs_train, ys_train.reshape((-1, 1)), nb_epoch=1, batch_size=1)
for i, kernel in enumerate(model.get_weights()[0]):
kernel = kernel[0]
matplotlib.pyplot.subplot(10, 10, i + 1)
matplotlib.pyplot.axis('off')
matplotlib.pyplot.imshow(kernel, cmap='gray')
matplotlib.pyplot.subplots_adjust(hspace=0, wspace=0)
n_true_positive = numpy.logical_and(model.predict(xs_test).round() == numpy.array(ys_test), numpy.array(ys_test) == 1).sum()
n_true_negative = numpy.logical_and(model.predict(xs_test).round() == numpy.array(ys_test), numpy.array(ys_test) == 0).sum()
n_false_positive = numpy.logical_and(model.predict(xs_test).round() != numpy.array(ys_test), numpy.array(ys_test) == 0).sum()
n_false_negative = numpy.logical_and(model.predict(xs_test).round() != numpy.array(ys_test), numpy.array(ys_test) == 1).sum()
tp.append(n_true_positive)
tn.append(n_true_negative)
fp.append(n_false_positive)
fn.append(n_false_negative)
correct_pos.append(n_true_positive / (n_true_positive + n_false_negative))
correct_neg.append(n_true_negative / (n_true_negative + n_false_positive))
total_epochs += 1
IPython.display.clear_output(wait=True)
print('Convolutional filters:')
matplotlib.pyplot.show()
# IPython.display.display(matplotlib.pyplot.gcf())
print('Model over time:')
epoch_range = numpy.arange(total_epochs) + 1
matplotlib.pyplot.plot(epoch_range, correct_pos)
matplotlib.pyplot.plot(epoch_range, correct_neg)
matplotlib.pyplot.xlabel('Epochs')
matplotlib.pyplot.ylabel('% Correct')
matplotlib.pyplot.legend(['Positive', 'Negative'])
matplotlib.pyplot.show()
# IPython.display.display(matplotlib.pyplot.gcf())
n_true_positive = numpy.logical_and(model.predict(xs_test).round() == numpy.array(ys_test), numpy.array(ys_test) == 1).sum()
n_true_negative = numpy.logical_and(model.predict(xs_test).round() == numpy.array(ys_test), numpy.array(ys_test) == 0).sum()
n_false_positive = numpy.logical_and(model.predict(xs_test).round() != numpy.array(ys_test), numpy.array(ys_test) == 0).sum()
n_false_negative = numpy.logical_and(model.predict(xs_test).round() != numpy.array(ys_test), numpy.array(ys_test) == 1).sum()
print('True positives:', n_true_positive)
print('True negatives:', n_true_negative)
print('False positives:', n_false_positive)
print('False negatives:', n_false_negative)
# TODO: Class weights. Can we fake some data by adding Gaussian noise?
# TODO: IID. The data are not independent - can we use this?
###Output
_____no_output_____
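###Markdown
The first TODO above suggests faking positive examples by adding Gaussian noise. A minimal sketch of that idea (an addition, not from the original notebook; the noise scale of 1% of the feature standard deviation is a guess that would need tuning):

```
# Hedged sketch: oversample the rare positive class with Gaussian jitter.
pos = xs_train[ys_train.ravel() == 1]
n_copies = max(int((ys_train.ravel() == 0).sum() / max(len(pos), 1)) - 1, 0)
noisy = [pos + numpy.random.normal(0, 0.01 * pos.std(), pos.shape)
         for _ in range(n_copies)]
xs_balanced = numpy.concatenate([xs_train] + noisy)
ys_balanced = numpy.concatenate([ys_train] + [numpy.ones((len(pos), 1))] * n_copies)
```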
visualization-life-expectancy-austria-2019.ipynb | ###Markdown
Analysis of life expectancy in Austria for the year 2019Data Sources: [Statistik Austria](http://www.statistik-austria.at/web_de/statistiken/menschen_und_gesellschaft/bevoelkerung/sterbetafeln/index.html) [GitHub](https://github.com/thomashon/visualization-life-expectancy-austria-2019)This Notebook follows [Ben Fry's basic Data Visualization Process](https://www.dashingd3js.com/the-data-visualization-process):

1. Acquire
2. Parse
3. Filter
4. Mine
5. Represent
6. Refine (this notebook contains the final version, see the GitHub commits for the full history)
7. Interact

1. Acquire Import of Libraries
###Code
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
###Output
_____no_output_____
###Markdown
Import Dataset
###Code
try:
df = pd.read_csv('data-life-expectancy-austria-2019.csv')
except:
github_csv = 'https://raw.githubusercontent.com/thomashon/visualization-life-expectancy-austria-2019/master/data-life-expectancy-austria-2019.csv'
df = pd.read_csv(github_csv)
df.head()
###Output
_____no_output_____
###Markdown
2. Parse Dropping unnecessary columns
###Code
df.drop(['die_this_year', 'stationary_current_age', 'stationary'], axis=1, inplace=True)
df.head()
###Output
_____no_output_____
###Markdown
Transforming *alive_peers* from survivors per 100,000 to a proportion (basis 1)
###Code
df.alive_peers = df.alive_peers / 100000
df.head()
###Output
_____no_output_____
###Markdown
Renaming columns
###Code
df.rename({'mortality_probability': 'mortality_this_year'}, axis=1, inplace=True)
df.head()
###Output
_____no_output_____
###Markdown
Creating new column *mortality_dist*
###Code
df['mortality_dist'] = df.mortality_this_year * df.alive_peers
df.head()
###Output
_____no_output_____
###Markdown
Creating new column *dead_peers*
###Code
df['dead_peers'] = 1 - df.alive_peers
df.head()
###Output
_____no_output_____
###Markdown
Creating new column (life-)*expectancy*
###Code
df['expectancy'] = df.current_age + df.years_to_live
df.head()
###Output
_____no_output_____
###Markdown
Transforming *region* and *gender* to category
###Code
df.region = df.region.astype('category')
df.gender = df.gender.astype('category')
df.head()
###Output
_____no_output_____
###Markdown
3. FilterI sort the regions by their median life expectancy so that the visualizations below are easier to compare.
###Code
median = df.groupby(["region"])['expectancy'].aggregate(np.median).reset_index().sort_values('expectancy')
median
###Output
_____no_output_____
###Markdown
4. Mine General descriptionWith the *describe* method, we get a closer look at the data.
###Code
df.describe()
###Output
_____no_output_____
###Markdown
We can see the quartiles of the different columns. Life expectancy is at least 78 and at most 102 years. PercentilesPerhaps the percentiles of *expectancy* can give us more precise information.
###Code
percentiles = []
for region in median.region.unique().to_list():
    # Note: despite the "_iqr" names below, these are widths of the central 95%
    # interval (2.5th to 97.5th percentile), not a true interquartile range.
    percentile_low_f = round(np.percentile(df[(df.region == region) & (df.gender == 'f')].expectancy, 2.5), 2)
    percentile_high_f = round(np.percentile(df[(df.region == region) & (df.gender == 'f')].expectancy, 97.5), 2)
    f_iqr = percentile_high_f - percentile_low_f
    percentile_low_m = round(np.percentile(df[(df.region == region) & (df.gender == 'm')].expectancy, 2.5), 2)
    percentile_high_m = round(np.percentile(df[(df.region == region) & (df.gender == 'm')].expectancy, 97.5), 2)
    m_iqr = percentile_high_m - percentile_low_m
percentile_dict = {
'region': region,
'f_2.5': percentile_low_f,
'f_97.5': percentile_high_f,
'f_iqr': f_iqr,
'm_2.5': percentile_low_m,
'm_97.5': percentile_high_m,
'm_iqr': m_iqr
}
percentiles.append(percentile_dict)
df_percentiles = pd.DataFrame(percentiles, columns=['region', 'f_2.5', 'f_97.5', 'f_iqr', 'm_2.5', 'm_97.5', 'm_iqr'])
df_percentiles
###Output
_____no_output_____
###Markdown
The percentiles show a clear picture. The central 95% interval of life expectancy lies higher for the federal states in the west than in the east. Let's see whether the visual analysis paints a similar picture. For claims about statistical significance, resampling (e.g., bootstrapping) would have been useful; for our purposes, however, the available results are sufficient. 5. Represent pair plotWith the pair plot, we get a good overview of a lot of information at once. Pair plots are generally a good starting point for further investigations.
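As a quick illustration of the resampling idea just mentioned, a minimal bootstrap sketch (an addition, not part of the original analysis; assumes NumPy ≥ 1.17 for `default_rng`) could estimate a confidence interval for each region's median expectancy:

```
# Hedged sketch: bootstrap CI for the median life expectancy per region.
rng = np.random.default_rng(0)
for region in median.region:
    values = df.loc[df.region == region, 'expectancy'].to_numpy()
    medians = [np.median(rng.choice(values, size=len(values), replace=True))
               for _ in range(1000)]
    lo, hi = np.percentile(medians, [2.5, 97.5])
    print(f'{region}: 95% bootstrap CI for the median = [{lo:.2f}, {hi:.2f}]')
```

With that caveat noted, we return to the pair plot.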
###Code
sns.set(style="ticks")
ax = sns.pairplot(df, hue="gender")
###Output
_____no_output_____
###Markdown
It turns out that especially the distribution of the *expectancy* is a good starting point for further investigations. Mortality Distribution *gender*Let's take a closer look at the distribution of *expectancy* compared to the current age.
###Code
ax = sns.set(style="whitegrid")
ax = sns.relplot(
x='current_age',
y='mortality_dist',
data=df,
kind='line',
hue='gender',
alpha=1,
aspect=2
)
###Output
_____no_output_____
###Markdown
We can see that up to the age of about 40, the mortality rates of women and men are fairly similar. From 40 to about 85, a gap opens between men and women: in this range, men have a much higher probability of dying. The distribution for women is narrower overall; in other words, their mortality is concentrated in a shorter age span than men's. Mortality Distribution *regions*Let's also look at the distribution of *expectancy* across the 9 federal states.
###Code
ax = sns.relplot(
x='current_age',
y='mortality_dist',
data=df,
kind='line',
hue='region',
row='gender',
alpha=1,
aspect=2
)
###Output
_____no_output_____
###Markdown
The distributions across the federal states are quite similar; only Burgenland seems to have some outliers. Bar chart *gender* and *region* togetherMaybe a bar chart tells us more about our data. It shows the life expectancy of 25-year-olds, separated by gender.
###Code
ax = sns.catplot(
x = 'expectancy',
y = 'region',
data = df[df.current_age == 25],
hue = 'gender',
kind = 'bar',
aspect = 2,
order = median['region']
)
###Output
_____no_output_____
###Markdown
This bar chart makes the stark difference between men and women clear. Bar chart *gender* and *region* separatedTo illustrate the differences between the federal states, the next bar chart is separated by *gender* and *region*; it again shows the life expectancy of a 25-year-old.
###Code
ax = sns.catplot(
x='expectancy',
y='region',
data=df[df.current_age == 25],
col='gender',
kind='bar',
aspect=1,
order=median['region']
)
###Output
_____no_output_____
###Markdown
It seems that the differences between the federal states are larger for women than for men. Boxplot *gender* and *region* togetherPerhaps a box plot can tell us more about the differences in life expectancy between women and men and between the federal states.
###Code
ax = sns.catplot(
x='expectancy',
y='region',
data=df,
kind='box',
hue='gender',
aspect=2,
order=median['region']
)
###Output
_____no_output_____
###Markdown
The quartiles and the median show a clear picture. Women generally have a higher life expectancy in Austria. Boxplot *gender* and *region* separatedPerhaps if we separate by *region*, the differences between the federal states will also become more evident.
###Code
ax = sns.catplot(
x='expectancy',
y='region',
data=df,
kind='box',
col='gender',
order=median['region']
)
###Output
_____no_output_____
###Markdown
The quartiles show a clear picture. The difference between city and province is once again evident. In the provinces, life expectancy is higher for both women and men. 7. Interact Import of the modules
###Code
from bokeh.io import show, output_notebook, curdoc, push_notebook
from bokeh.plotting import figure
from bokeh.models import CategoricalColorMapper, HoverTool, ColumnDataSource, Slider, CustomJS
from bokeh.layouts import row, column, widgetbox
from bokeh.palettes import Category10_9
from ipywidgets import interact
###Output
_____no_output_____
###Markdown
bokeh.ioPerhaps further insights can be gained with interactive visualization. With the help of bokeh, complex issues can be viewed from different perspectives.
###Code
output_notebook()
genders = df.gender.unique().to_list()
color_mapper = CategoricalColorMapper(factors=genders, palette=['#CC8963', '#5975A4'])
regions = median.region.tolist() # the ordered list from above
age = 25
source = ColumnDataSource(data={'x': df.region[df.current_age==age],
'y': df.expectancy[df.current_age==age],
'region': df.region[df.current_age==age],
'gender': df.gender[df.current_age==age],
'age': df.current_age[df.current_age==age],
'alive_peers': df.alive_peers[df.current_age==age]
})
hover = HoverTool(
tooltips=[
('age', '@age'),
('expectancy', '@y'),
('region', '@x'),
('gender', '@gender'),
('alive peers', '@alive_peers')
])
p = figure(title="simple circle example", plot_height=300, plot_width=800, y_range=(75,105), x_range=regions,
background_fill_color='#FFFFFF', tools=[hover, 'pan', 'wheel_zoom'])
r = p.circle('x', 'y', size=20, alpha=0.7, color=dict(field='gender', transform=color_mapper), source=source)
def update(age=25):
# r.data_source.data['x'] = age
r.data_source.data['y'] = df.expectancy[df.current_age==age]
r.data_source.data['age'] = df.current_age[df.current_age==age]
r.data_source.data['alive_peers'] = df.alive_peers[df.current_age==age]
push_notebook()
ax = show(p, notebook_handle=True)
ax = interact(update, age=(0,100))
###Output
_____no_output_____ |
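###Markdown
The `Slider` and `CustomJS` imports above go unused in this live-kernel version. A JavaScript callback would let the same plot keep working in a static HTML export; the following is only a sketch under the assumption of Bokeh ≥ 1.0 (`js_on_change`), not part of the original notebook:

```
# Hedged sketch: drive the age filter from a JS callback instead of ipywidgets.
full = ColumnDataSource(data={k: df[c].tolist() for k, c in
                              [('x', 'region'), ('y', 'expectancy'), ('region', 'region'),
                               ('gender', 'gender'), ('age', 'current_age'), ('alive_peers', 'alive_peers')]})
slider = Slider(start=0, end=100, value=25, step=1, title='age')
callback = CustomJS(args=dict(source=source, full=full), code="""
    const age = cb_obj.value;
    const d = full.data;
    const out = {x: [], y: [], region: [], gender: [], age: [], alive_peers: []};
    for (let i = 0; i < d['age'].length; i++) {
        if (d['age'][i] == age) {
            for (const k in out) { out[k].push(d[k][i]); }
        }
    }
    source.data = out;
    source.change.emit();
""")
slider.js_on_change('value', callback)
show(column(slider, p))
```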
The_Sparks_Foundation_Task_1.ipynb | ###Markdown
**Varikuti Naveen Reddy** Task 1: Prediction Using Supervised ML
Predict the percentage score of a student based on the no. of study hours.
This is a simple linear regression task as it involves just 2 variables. 1.) Importing Libraries
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sb
###Output
_____no_output_____
###Markdown
2.) Loading Data for Prediction
###Code
data_link="http://bit.ly/w-data"
data=pd.read_csv(data_link)
data.head()
# We can see here that the data was imported successfully!
###Output
_____no_output_____
###Markdown
3.) Data Checking
###Code
data.info()
data.describe()
data.shape
data.size
###Output
_____no_output_____
###Markdown
4.) Data Exploration and Data Analysis
###Code
data.isnull().sum()
# So, we can see there are no null values in the given data set.
###Output
_____no_output_____
###Markdown
5.) Data Visualisation So, let's make this data speak by plotting!
###Code
data.plot.scatter(x="Hours",y="Scores")
plt.title("Hours v/s Percentage Scores")
plt.xlabel("No. of Hours")
plt.ylabel("Scores in Percentage")
plt.show()
###Output
_____no_output_____
###Markdown
Let's check whether there is any correlation between the parameters
###Code
from scipy.stats import pearsonr
corr=pearsonr(data["Hours"],data["Scores"])
corr
# So, we can observe a positive correlation between the parameters
# This can also be observed by plotting a regplot
sb.regplot(x=data["Hours"],y=data["Scores"],data=data)
plt.title("Hours v/s Percentage Scores")
plt.xlabel("No. of Hours")
plt.ylabel("Scores in Percentage")
plt.show()
###Output
_____no_output_____
###Markdown
6.) Splitting data into X and y for training the model
###Code
X=data.iloc[:,:-1].values
y=data.iloc[:,-1].values
# Let's split our data into a train set and a test set
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.2)
###Output
_____no_output_____
###Markdown
Well, our data is now ready both to train the model and to test it. 7.) Training Model
###Code
from sklearn.linear_model import LinearRegression
model=LinearRegression()
model.fit(X_train,y_train)
model_coef=model.coef_
model_intercept=model.intercept_
print(f"Coefficent of our cost function is {model.coef_}")
print(f"Intercept of our cost function is {model.intercept_}")
###Output
Coefficent of our cost function is [9.82473622]
Intercept of our cost function is 2.5140504549597367
###Markdown
So, our fitted line equation will be y = coef·x + intercept
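Plugging in the fitted values printed above:$$\hat{y} \approx 9.8247\,x + 2.5141$$where $x$ is the number of study hours and $\hat{y}$ is the predicted score.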
###Code
cost_function=X*model_coef+model_intercept
# So, plotting the whole dataset together with our fitted regression line
plt.scatter(X,y)
plt.plot(X,cost_function,"red",label="cost function line or Regression line")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
8.) Predicting on our test set
###Code
pred=model.predict(X_test)
comparision=pd.DataFrame({"Actual data":y_test,"predicted data":pred})
comparision
# Let's print our model scores (R² on train and test)
print(model.score(X_train,y_train))
print(model.score(X_test,y_test))
###Output
0.9526892101638127
0.9528617651149368
###Markdown
What if the student studies for 9.25 hours/day?
###Code
hours=9.25
time=np.array(hours).reshape(-1,1)
result=model.predict(time)
print(f"No. of hours = {hours}")
print(f"Predicted Value = {result[0]}")
###Output
No. of hours = 9.25
Predicted Value = 93.39286045326043
###Markdown
9.) Evaluating our model
So, this is our final and foremost step: evaluating our model
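The three metrics computed below are$$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|y_i - \hat{y}_i\right|, \qquad \mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}, \qquad R^2 = 1 - \frac{\sum_i \left(y_i - \hat{y}_i\right)^2}{\sum_i \left(y_i - \bar{y}\right)^2}$$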
###Code
from sklearn import metrics
print(f"Mean Absolute Error = {metrics.mean_absolute_error(y_test,pred)}")
print(f"Root Mean square Error = {np.sqrt(metrics.mean_squared_error(y_test,pred))}")
print(f"R2 Score = {metrics.r2_score(y_test,pred)}")
###Output
Mean Absolute Error = 4.4889882745516045
Root Mean square Error = 4.555451297709179
R2 Score = 0.9528617651149368
Notebooks/.ipynb_checkpoints/Ocean 01 D Profile Timestamping-checkpoint.ipynb | ###Markdown
Introduction This notebook will not run in **`binder`**. It processes a fairly large dataset external to the repository. The output results are, however, part of the repository and are therefore available to the other notebooks: see subfolder `profiles`. It contains `.csv` files that delineate profile timing. Each row is a profile from start of ascent to descent to end of rest interval. The profile scanning done in this notebook covers three sites of interest: Axial Base, Oregon Slope Base and Oregon Offshore (part of the Endurance array). These sites are abbreviated respectively *axb*, *osb* and *oos*. The years of interest are 2015 through August 2021. Note that this Jupyter notebook is strictly about creating metadata for subsequent analysis of water column profiles with various sensors. The sensor data have been cleaned up in the prior (Ocean 01 C) notebook. The following cell is utility Python configuration.
###Code
import os, sys, time, glob, warnings
from IPython.display import clear_output # use inside loop with clear_output(wait = True) followed by print(i)
warnings.filterwarnings('ignore')
this_dir = os.getcwd()
data_dir = this_dir + '/../../data' # large datasets reside outside the repository
from matplotlib import pyplot as plt
from matplotlib import colors as mplcolors
import numpy as np, pandas as pd, xarray as xr
from numpy import datetime64 as dt64, timedelta64 as td64
# convenience functions abbreviating 'datetime64' and so on
def doy(theDatetime): return 1 + int((theDatetime - dt64(str(theDatetime)[0:4] + '-01-01')) / td64(1, 'D'))
def dt64_from_doy(year, doy): return dt64(str(year) + '-01-01') + td64(doy-1, 'D')
def day_of_month_to_string(d): return str(d) if d > 9 else '0' + str(d)
print('\nJupyter Notebook running Python {}'.format(sys.version_info[0]))
###Output
Jupyter Notebook running Python 3
###Markdown
This notebook's "to do" list* Annotate pH equil stops: both local midnight and noon * Are things like T/C/DO stable during equil stops? * Does chlorophyll versus backscatter show more zooplankton at depth? Diel? * ArcticGRO 29* `class Profile````class Profile: """A water column profile""" def __init__(self, t0='2019-01-01T00:26:05', t1='2019-01-01T01:37:55', d0=191.268063, d1=6.618323, site='osb'): self.t0 = dt64(t0) self.t1 = dt64(t1) self.d0 = d0 self.d1 = d1 self.site = site def readout(self): print("Profile start", self.t0, 'duration', self.t1 - self.t0)p = Profile()p.readout()```
###Code
def ProfileCrawler(s, t, verbose = False):
"""
ProfileCrawler traverses a passed pandas Series s of pressures and Series t of corresponding times.
The code is designed for data sampled at about 1-minute intervals. Goal: Determine profiler activity
timestamps. Results are returned as a tuple of six lists:
Start and end times for ascent, descent and rest intervals.
"""
# pandas series, just pressures
len_s = len(s)
threshold = 1.
a0, d0, r0 = [], [], [] # start times for ascents, descents, rests
for i in range(1, len_s - 5): # 6 minute window
# catch ascent
if s[i-1] - s[i] <= threshold and \
s[i] - s[i+1] >= threshold and \
s[i+1] - s[i+2] >= threshold and \
s[i+2] - s[i+3] >= threshold and \
s[i+3] - s[i+4] >= threshold and \
s[i+4] - s[i+5] >= threshold:
a0.append((i,t[i]))
# catch descent
if s[i-1] - s[i] >= threshold and \
s[i] - s[i+1] <= threshold and \
s[i+1] - s[i+2] <= threshold and \
s[i+2] - s[i+3] <= threshold and \
s[i+3] - s[i+4] <= threshold and \
s[i+4] - s[i+5] <= threshold:
d0.append((i,t[i]))
# this variant is a little too liberal; false positives ~25%
# why? Because twice daily there are stops on the way down for pH
# catch rest
if i >= 5 and \
s[i-5] - s[i-4] <= -threshold and \
s[i-4] - s[i-3] <= -threshold and \
s[i-3] - s[i-2] <= -threshold and \
s[i-2] - s[i-1] <= -threshold and \
s[i-1] - s[i] <= -threshold and \
s[i] - s[i+1] >= -threshold:
r0.append((i,t[i]))
if verbose: print("there are", len(a0), "ascent starts")
if verbose: print("there are", len(d0), "descent starts")
if verbose: print("there are", len(r0), "rest starts: now culling extras")
# keep running the "rest start" list looking for...
# ...relative to this particular rest start...
# ...a future rest start earlier than the next ascent start
# ...if found: delete this particular rest start...
# ...and begin again at the top
# this winnows away false positive rest starts
while True:
profile_counter = 1
for i in range(len(r0)-1):
if profile_counter >= len(a0) - 1: break
if r0[i+1][1] < a0[profile_counter][1]:
r0.remove(r0[i])
break
else: profile_counter += 1
if profile_counter >= len(a0) - 1: break
if verbose: print("there are", len(a0), "ascent starts")
if verbose: print("there are", len(d0), "descent starts")
if verbose: print("there are", len(r0), "rest starts")
a1 = d0.copy() # ascent end = descent start
d1 = r0.copy() # descent end = rest start
r1 = a0[1:].copy() # rest end = next ascent start (cuts off end of year)
# logic check on results
causal_errors = [0]*3 # list [0, 0, 0] tracks 3 types of possible error
smallest = len(a0)
if len(d0) < smallest: smallest = len(d0)
if len(r0) < smallest: smallest = len(r0)
for i in range(smallest):
if a0[i][0] >= d0[i][0]: causal_errors[0] += 1 # ascent start later than descent start
if d0[i][0] >= r0[i][0]: causal_errors[1] += 1 # descent start later than rest start
if a1[i][0] >= d1[i][0]: causal_errors[2] += 1 # ascent end later than descent end (???)
if verbose: print("causal error counts:", causal_errors[0], causal_errors[1], causal_errors[2])
if verbose: print(len(a0), len(d0), len(r0))
# Returning lists of tuples: (index, time)
return a0, a1, d0, d1, r0, r1
def PrintProfileStatistics(a0, a1, d0, d1, r0, r1):
"""
PrintProfileStatistics prints mean and standard deviation for a set of profiles.
Specifically for a set of Ascents, Descents and Rests. Each passed vector (a0 etc)
is a list of tuples. The first element of the tuple is the index of the time in the
source data array. The second value is the timestamp for that same element.
"""
one_sec = np.timedelta64(1, 's')
D_asc = [(dt64(a1[i][1])-dt64(a0[i][1]))/one_sec for i in range(len(a1))]
D_dsc = [(dt64(d1[i][1])-dt64(d0[i][1]))/one_sec for i in range(len(d1))]
D_rst = [(dt64(r1[i][1])-dt64(r0[i][1]))/one_sec for i in range(len(r1))]
print('Means, standard deviation for profile phases, in minutes:')
print(' Ascents: ', round(np.mean(D_asc)/60., 2), round(np.std(D_asc)/60., 2))
print(' Descents: ', round(np.mean(D_dsc)/60., 2), round(np.std(D_dsc)/60., 2))
print(' Rests: ', round(np.mean(D_rst)/60., 2), round(np.std(D_rst)/60., 2))
print()
print('(Recall that two profiles of nine each day have slower, staged descents)')
print()
def PrintProfileEntry(profile):
"""
A profile is a list of 12 values as six pairs of (index, timestamp) interleaved values
ascent start: index, timestamp
ascent end: index, timestamp
descent start: index, timestamp
descent end: index, timestamp
rest start: index, timestamp
rest end: index, timestamp
The indices refer back to the source dataset, likely at 1Min samples. They could be abolished.
The file that is written from these pre-pends a counter column; so it has 13 columns total.
"""
print("ascent: index / start time:", profile[0], profile[1], ' index / end time:', profile[2], profile[3])
print("descent: index / start time:", profile[4], profile[5], ' index / end time:', profile[6], profile[7])
print("rest: index / start time:", profile[8], profile[9], ' index / end time:', profile[10], profile[11])
def CompareShallowProfilerTimestamps(a0_dx, a1_dx, d0_dx, d1_dx, r0_dx, r1_dx):
"""
Using vertical shifts plot the comparative timestamp vectors; y-axis is record index
"""
from operator import add
day_td64 = pd.to_timedelta(1, unit='D')
dt_a0_dx, ind_a0_dx = [(a0_dx[i][1]-a0_dx[0][1])/day_td64 for i in range(len(a0_dx))], []
for i in range(len(a0_dx)): ind_a0_dx.append(a0_dx[i][0])
dt_a1_dx, ind_a1_dx = [(a1_dx[i][1]-a1_dx[0][1])/day_td64 for i in range(len(a1_dx))], []
for i in range(len(a1_dx)): ind_a1_dx.append(a1_dx[i][0])
dt_d0_dx, ind_d0_dx = [(d0_dx[i][1]-d0_dx[0][1])/day_td64 for i in range(len(d0_dx))], []
for i in range(len(d0_dx)): ind_d0_dx.append(d0_dx[i][0])
dt_d1_dx, ind_d1_dx = [(d1_dx[i][1]-d1_dx[0][1])/day_td64 for i in range(len(d1_dx))], []
for i in range(len(d1_dx)): ind_d1_dx.append(d1_dx[i][0])
dt_r0_dx, ind_r0_dx = [(r0_dx[i][1]-r0_dx[0][1])/day_td64 for i in range(len(r0_dx))], []
for i in range(len(r0_dx)): ind_r0_dx.append(r0_dx[i][0])
dt_r1_dx, ind_r1_dx = [(r1_dx[i][1]-r1_dx[0][1])/day_td64 for i in range(len(r1_dx))], []
for i in range(len(r1_dx)): ind_r1_dx.append(r1_dx[i][0])
fig, axs = plt.subplots(figsize=(6,4), tight_layout=True)
axs.scatter(dt_a0_dx, ind_a0_dx, marker='^', s=1., color='k')
axs.scatter(dt_a1_dx, list(map(add, ind_a1_dx, [10000]*len(dt_a1_dx))), marker='o', s=1., color='c')
axs.scatter(dt_d0_dx, list(map(add, ind_d0_dx, [20000]*len(dt_d0_dx))), marker='v', s=1., color='r')
axs.scatter(dt_d1_dx, list(map(add, ind_d1_dx, [30000]*len(dt_d1_dx))), marker='o', s=1., color='y')
axs.scatter(dt_r0_dx, list(map(add, ind_r0_dx, [40000]*len(dt_r0_dx))), marker='^', s=1., color='g')
axs.scatter(dt_r1_dx, list(map(add, ind_r1_dx, [50000]*len(dt_r1_dx))), marker='o', s=1., color='b')
axs.set_title("comparing timestamp records for shallow profiler, one year")
def ProfileWriter(s, y0, yN, verbose=True):
"""
Generate Profile CSV files for sites x years
"""
for site in s:
data_root = '/mnt/d/data/data_explorer_1Min/'
ds = xr.open_dataset( data_root + site + '/profiler/' + site + '_profiler_pressure_1Min.nc')
for yr in range(y0, yN+1):
yrstr = str(yr)
yrpostr = str(yr+1)
print('\n\n\n\nworking on site', site, 'year', yrstr)
dsyr = ds.sel(time=slice(dt64(yrstr + '-01-01'), dt64(yrpostr + '-01-01')))
a0, a1, d0, d1, r0, r1 = \
ProfileCrawler(dsyr.sea_water_pressure_profiler_depth_enabled.to_series(), \
dsyr.time.to_series(), True)
print(len(a0), len(d0), len(r0), 'interval starts')
print(len(a1), len(d1), len(r1), 'interval ends')
if len(a0) < 10 or len(a1) < 10 or len(d0) < 10 or len(d1) < 10 or len(r0) < 10 or len(r1) < 10:
print()
print('No data: Abandoning this site + year:', site, yrstr)
print()
else:
# we have intervals; do they match? Assume not always. Here is a checking function:
# CompareShallowProfilerTimestamps(a0, a1, d0, d1, r0, r1)
ascents, descents, rests = [], [], []
day_td64 = pd.to_timedelta(1, unit='D')
ascent_limit = pd.to_timedelta(2, unit='H')
descent_limit = pd.to_timedelta(2, unit='H')
rest_limit = pd.to_timedelta(2, unit='H')
prior_ascent_start = a0[0][1] - day_td64
prior_descent_start = d0[0][1] - day_td64
prior_rest_start = r0[0][1] - day_td64
end_index = 0 # index into a1
for i in range(len(a0)):
all_done = False
this_start_time = a0[i][1]
if this_start_time > prior_ascent_start:
while a1[end_index][1] <= this_start_time:
end_index += 1
if end_index >= len(a1):
all_done = True
break
if all_done: break
this_end_time = a1[end_index][1]
if this_end_time < this_start_time + ascent_limit:
prior_ascent_start = this_start_time
ascents.append([a0[i][0], this_start_time, a1[end_index][0], this_end_time])
if all_done: break
end_index = 0 # index into d1
for i in range(len(d0)):
all_done = False
this_start_time = d0[i][1]
if this_start_time > prior_descent_start:
while d1[end_index][1] <= this_start_time:
end_index += 1
if end_index >= len(d1):
all_done = True
break
if all_done: break
this_end_time = d1[end_index][1]
if this_end_time < this_start_time + descent_limit:
prior_descent_start = this_start_time
descents.append([d0[i][0], this_start_time, d1[end_index][0], this_end_time])
if all_done: break
end_index = 0 # index into r1
for i in range(len(r0)):
all_done = False
this_start_time = r0[i][1]
if this_start_time > prior_rest_start:
while r1[end_index][1] <= this_start_time:
end_index += 1
if end_index >= len(r1):
all_done = True
break
if all_done: break
this_end_time = r1[end_index][1]
if this_end_time < this_start_time + rest_limit:
prior_rest_start = this_start_time
rests.append([r0[i][0], this_start_time, r1[end_index][0], this_end_time])
if all_done: break
print("found", len(ascents), 'good ascents')
print("found", len(descents), 'good descents')
print("found", len(rests), 'good rests')
# profiles[] will be a list of clean ascend/descend/rest sequences, 12 numbers per sequence
# ascend start: index, timestamp
# ascend end: index, timestamp The 'index' refers to the source dataset, typically at "1Min"
# descend start: index, timestamp sampling rate. Note that ascend end = descend start and so on.
# descend end: index, timestamp
# rest start: index, timestamp
# rest end: index, timestamp
profiles = []
descent_index = 0
rest_index = 0
# This code builds the profiles[] list
all_done = False
for i in range(len(ascents)):
all_done = False
this_end_ascent_time = ascents[i][3]
found_matching_descent = False
while descents[descent_index][1] < this_end_ascent_time:
descent_index += 1
if descent_index >= len(descents):
all_done = True
break
if all_done: break
if descents[descent_index][1] == ascents[i][3]:
this_end_descent_time = descents[descent_index][3]
while rests[rest_index][1] < this_end_descent_time:
rest_index += 1
if rest_index >= len(rests):
all_done = True
break
if all_done: break
if rests[rest_index][1] == descents[descent_index][3]:
di = descent_index
ri = rest_index
profiles.append([\
ascents[i][0], ascents[i][1], ascents[i][2], ascents[i][3], \
descents[di][0], descents[di][1], descents[di][2], descents[di][3], \
rests[ri][0], rests[ri][1], rests[ri][2], rests[ri][3] \
])
# This code removes profiles whose start time is earlier than the prior profile rest end time
# This happens when multiple ascend starts are detected for a single actual ascent. It can
# result in more than nine profiles per day which is in general unlikely.
nTimeSlipsRemoved = 0
while True:
fall_out = True
for i in range(1, len(profiles)):
if profiles[i][1] < profiles[i-1][11]:
profiles.remove(profiles[i])
nTimeSlipsRemoved += 1
fall_out = False
break
if fall_out: break
# This code looks for and reports on duplicated profile ascent start times
double_check, fail_index = True, -1
for i in range(len(profiles)-1):
if profiles[i][1] == profiles[i+1][1]:
double_check = False
fail_index = i
break
if not double_check: PrintProfileEntry(profiles[fail_index])
else: print('no doubling of profile ascent starts found')
# This code looks for and reports on non-matching Timestamp sequences:
# From ascent to descent and descent to rest.
double_check, fail_index = True, -1
for i in range(len(profiles)):
if profiles[i][3] != profiles[i][5] or profiles[i][7] != profiles[i][9]:
double_check = False
fail_index = i
break
# This code compiles a histogram of profiles by doy and it has three faults to be aware of
# - Baked in is the assumption that this is at most one year of data
# - There is capacity for a leap year with 366 days but it is not explicitly sorted out
                # - Day of year (doy) usually numbers from 1 but the histogram indexes from 0 (hence the doy-1 below)
profile_histogram = [0]*366
doylist = list(range(366))
for i in range(len(profiles)):
profile_histogram[doy(profiles[i][1])-1] += 1
# This code counts how many days had nine profiles as expected, and how many had more
# than nine profiles which is not really possible. So that would indicate false
# positives still got through the process here.
nNines = 0
nMoreThanNine = 0
more_than_nine = []
for i in range(366):
if profile_histogram[i] == 9: nNines = nNines + 1
if profile_histogram[i] > 9: more_than_nine.append(i)
# print diagnostics from all of the above steps
print("arrived at", len(profiles), 'good candidate profiles')
print("after removing", nTimeSlipsRemoved, 'due to time slip error')
if double_check: print('transitions are self-consistent')
else: print('double check failed at element', fail_index)
print('of 365 days,', nNines, 'have nine profiles as desired')
print('...and', len(more_than_nine), 'had more than nine profiles')
# If days were found with more than nine profiles: Print some diagnostics
if len(more_than_nine):
for i in range(len(more_than_nine)):
this_doy = more_than_nine[i] + 1 # convert from Python index to doy 1, 2, 3...
print("doy", this_doy, "had more than nine profiles")
# print()
# print('doy is', this_doy)
# print('-------------------')
# for j in range(len(profiles)):
# if doy(profiles[j][1]) == this_doy:
# PrintProfileEntry(profiles[j])
# print()
df = pd.DataFrame(data=np.array([np.array(x) for x in profiles]))
df.to_csv(os.getcwd() + '/../Profiles/' + site + yrstr + '.csv')
def ReadProfiles(fnm):
"""
Profiles are saved by site and year as 12-tuples. Here we read only
the datetimes (not the indices) so there are only six values. These
are converted to Timestamps. They correspond to ascend start/end,
descend start/end and rest start/end.
"""
df = pd.read_csv(fnm, usecols=["1", "3", "5", "7", "9", "11"])
df.columns=['ascent_start', 'ascent_end', 'descent_start', 'descent_end', 'rest_start', 'rest_end']
df['ascent_start'] = pd.to_datetime(df['ascent_start'])
df['ascent_end'] = pd.to_datetime(df['ascent_end'])
df['descent_start'] = pd.to_datetime(df['descent_start'])
df['descent_end'] = pd.to_datetime(df['descent_end'])
df['rest_start'] = pd.to_datetime(df['rest_start'])
df['rest_end'] = pd.to_datetime(df['rest_end'])
return df
def ChartProfileHistogram(doylist, profile_histogram):
fig, axs = plt.subplots(figsize=(6,4), tight_layout=True)
axs.scatter(doylist, profile_histogram, marker='o', s=9., color='g')
###Output
_____no_output_____
###Markdown
Working with a small shallow profiler datasetOregon Slope Base, January 2019: means and standard deviations for profile phases, in minutes:

```
  Ascents:   67.42    3.01
  Descents:  47.54   24.88   (2 of 9 each day have pauses on descent)
  Rests:     44.81   14.05
```

There were 278 of 279 possible profiles in this month. These composed functions produce the statistics:

```
PrintProfileStatistics(ProfileCrawler(etcetera))
```

The cell below charts profiles for this month, by day.
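Spelled out with the January file loaded in the next cell, the composed call might look like this (a sketch; it simply unpacks the six lists returned by `ProfileCrawler` into `PrintProfileStatistics`):

```
ds_CTD = xr.open_dataset("./data/rca/ctd/osb_ctd_jan2019_1min.nc")
PrintProfileStatistics(*ProfileCrawler(ds_CTD.seawater_pressure.to_series(),
                                       ds_CTD.time.to_series()))
```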
###Code
# January 2019 has all but one profile correct (278 / 279 possible).
# Missing is the last profile of the month.
# For each given UTC day: Profiles 4 and 9 are pH profiles.
# Could use time window criteria: Hour on [6, 10] and [19, 23].
fig, axs = plt.subplots(31, 1, figsize=(15,31), tight_layout=True)
ds_CTD = xr.open_dataset("./data/rca/ctd/osb_ctd_jan2019_1min.nc")
for i in range(31):
daystring = str(i+1) if i > 8 else '0' + str(i+1)
time0, time1 = dt64('2019-01-' + daystring + 'T00:00:00'), dt64('2019-01-' + daystring + 'T23:59:59')
ds = ds_CTD.sel(time=slice(time0, time1))
axs[i].plot(ds.time, ds.seawater_pressure, marker='.', markersize=1., color='k')
axs[i].set(ylim = (200., 0.))
print('...January 2019 OSB depth profiles...')
###Output
...January 2019 OSB depth profiles...
###Markdown
Expanding to the full shallow profiler dataset Review "dimensions" of shallow profiler data.

* There are three sites, abbreviated here **axb**, **oos** and **osb**
* There are seven years of data collection possible, 2015 ... 2021
* There are up to 16 different "level 1+" data products considered
  * backscatter
  * cdom
  * chlora
  * density
  * dissolved oxygen
  * nitrate
  * par
  * pco2
  * pH
  * pressure
  * salinity
  * spectral irradiance
  * temperature
  * velocity east
  * velocity north
  * velocity up
* There are three types of profiler behavior: ascend, descend, rest
  * Nine profiles per day
  * Noon and midnight profiles feature stabilization stops on descent
  * There are profiles during day and night
  * There is seasonality
* There is a platform with a set of related instruments
  * backscatter
  * cdom
  * chlora
  * density
  * dissolved oxygen version 1
  * dissolved oxygen version 2
  * dissolved oxygen version 3
  * ph
  * pressure
  * salinity
  * temperature
* There are other higher-rate instruments (profiler and platform)
  * spectrophotometer
###Code
# This will be all of Axial Base
fig, axs = plt.subplots(3,1,figsize=(15,15), tight_layout=True)
t0, t1 = dt64('2014-09-01'), dt64('2022-03-01')
ds = xr.open_dataset('/mnt/d/data/data_explorer_1Min/axb/profiler/axb_profiler_pressure_1Min.nc')
axs[0].scatter(ds.time, ds.z, s=1.) # for plot use: marker='.', markersize=1., color='k')
axs[0].set(ylim = (-220., 0.), xlim = (t0, t1), title='Axial Base')
ds = xr.open_dataset('/mnt/d/data/data_explorer_1Min/oos/profiler/oos_profiler_pressure_1Min.nc')
axs[1].scatter(ds.time, ds.z, s=1.) # for plot use: marker='.', markersize=1., color='k')
axs[1].set(ylim = (-220., 0.), xlim = (t0, t1), title='Oregon Offshore')
ds = xr.open_dataset('/mnt/d/data/data_explorer_1Min/osb/profiler/osb_profiler_pressure_1Min.nc')
axs[2].scatter(ds.time, ds.z, s=1.) # for plot use: marker='.', markersize=1., color='k')
axs[2].set(ylim = (-220., 0.), xlim = (t0, t1), title='Oregon Slope Base')
###Output
_____no_output_____
###Markdown
Store profile listings as CSV files Uses the pressure versus time data shown above. The next cell generates CSV files listing "self-consistent" profiles consisting of

```
Ascent start - Ascent end = Descent start - Descent end = Rest start - Rest end
```

Each year is considered independently (7 total) and there are three sites. As Oregon Offshore did not operate in 2021, there are 20 (not 21) result files.
###Code
# Uncomment to run: Takes a few minutes to complete
# ProfileWriter(['axb', 'oos', 'osb'], 2015, 2021)
###Output
working on site axb year 2015
there are 1118 ascent starts
there are 1127 descent starts
there are 1316 rest starts: now culling extras
there are 1118 ascent starts
there are 1127 descent starts
there are 1117 rest starts
causal error counts: 1110 0 0
1118 1127 1117
1118 1127 1117 interval starts
1127 1117 1117 interval ends
found 1107 good ascents
found 1096 good descents
found 607 good rests
no doubling of profile ascent starts found
arrived at 575 good candidate profiles
after removing 11 due to time slip error
transitions are self-consistent
of 365 days, 39 have nine profiles as desired
...and 2 had more than nine profiles
doy 301 had more than nine profiles
doy 308 had more than nine profiles
working on site axb year 2016
there are 2939 ascent starts
there are 2941 descent starts
there are 4264 rest starts: now culling extras
there are 2939 ascent starts
there are 2941 descent starts
there are 2941 rest starts
causal error counts: 1494 889 889
2939 2941 2941
2939 2941 2941 interval starts
2941 2941 2938 interval ends
found 2923 good ascents
found 2908 good descents
found 2922 good rests
no doubling of profile ascent starts found
arrived at 2880 good candidate profiles
after removing 2 due to time slip error
transitions are self-consistent
of 365 days, 279 have nine profiles as desired
...and 0 had more than nine profiles
working on site axb year 2017
there are 1419 ascent starts
there are 1421 descent starts
there are 2045 rest starts: now culling extras
there are 1419 ascent starts
there are 1421 descent starts
there are 1421 rest starts
causal error counts: 1189 0 0
1419 1421 1421
1419 1421 1421 interval starts
1421 1421 1418 interval ends
found 1419 good ascents
found 1412 good descents
found 1407 good rests
no doubling of profile ascent starts found
arrived at 1401 good candidate profiles
after removing 2 due to time slip error
transitions are self-consistent
of 365 days, 141 have nine profiles as desired
...and 0 had more than nine profiles
working on site axb year 2018
there are 1577 ascent starts
there are 1581 descent starts
there are 2270 rest starts: now culling extras
there are 1577 ascent starts
there are 1581 descent starts
there are 1576 rest starts
causal error counts: 1300 0 0
1577 1581 1576
1577 1581 1576 interval starts
1581 1576 1576 interval ends
found 1575 good ascents
found 1572 good descents
found 1565 good rests
no doubling of profile ascent starts found
arrived at 1557 good candidate profiles
after removing 3 due to time slip error
transitions are self-consistent
of 365 days, 162 have nine profiles as desired
...and 0 had more than nine profiles
working on site axb year 2019
there are 1681 ascent starts
there are 1684 descent starts
there are 2429 rest starts: now culling extras
there are 1681 ascent starts
there are 1684 descent starts
there are 1683 rest starts
causal error counts: 1679 0 0
1681 1684 1683
1681 1684 1683 interval starts
1684 1683 1680 interval ends
found 1676 good ascents
found 1676 good descents
found 1674 good rests
no doubling of profile ascent starts found
arrived at 1664 good candidate profiles
after removing 0 due to time slip error
transitions are self-consistent
of 365 days, 176 have nine profiles as desired
...and 0 had more than nine profiles
working on site axb year 2020
there are 3059 ascent starts
there are 3058 descent starts
there are 4424 rest starts: now culling extras
there are 3059 ascent starts
there are 3058 descent starts
there are 3061 rest starts
causal error counts: 1 2527 2527
3059 3058 3061
3059 3058 3061 interval starts
3058 3061 3058 interval ends
found 3053 good ascents
found 3050 good descents
found 3053 good rests
no doubling of profile ascent starts found
arrived at 3042 good candidate profiles
after removing 1 due to time slip error
transitions are self-consistent
of 365 days, 328 have nine profiles as desired
...and 0 had more than nine profiles
working on site axb year 2021
there are 1971 ascent starts
there are 1967 descent starts
there are 2853 rest starts: now culling extras
there are 1971 ascent starts
there are 1967 descent starts
there are 1970 rest starts
causal error counts: 0 1485 1485
1971 1967 1970
1971 1967 1970 interval starts
1967 1970 1970 interval ends
found 1969 good ascents
found 1963 good descents
found 1967 good rests
no doubling of profile ascent starts found
arrived at 1960 good candidate profiles
after removing 2 due to time slip error
transitions are self-consistent
of 365 days, 211 have nine profiles as desired
...and 0 had more than nine profiles
working on site oos year 2015
there are 446 ascent starts
there are 743 descent starts
there are 582 rest starts: now culling extras
there are 446 ascent starts
there are 743 descent starts
there are 445 rest starts
causal error counts: 442 2 2
446 743 445
446 743 445 interval starts
743 445 445 interval ends
found 438 good ascents
found 426 good descents
found 395 good rests
no doubling of profile ascent starts found
arrived at 363 good candidate profiles
after removing 16 due to time slip error
transitions are self-consistent
of 365 days, 22 have nine profiles as desired
...and 1 had more than nine profiles
doy 289 had more than nine profiles
working on site oos year 2016
there are 2117 ascent starts
there are 2115 descent starts
there are 3500 rest starts: now culling extras
there are 2117 ascent starts
there are 2115 descent starts
there are 2122 rest starts
causal error counts: 1175 647 647
2117 2115 2122
2117 2115 2122 interval starts
2115 2122 2116 interval ends
found 2108 good ascents
found 2081 good descents
found 2096 good rests
no doubling of profile ascent starts found
arrived at 2047 good candidate profiles
after removing 13 due to time slip error
transitions are self-consistent
of 365 days, 195 have nine profiles as desired
...and 0 had more than nine profiles
working on site oos year 2017
there are 431 ascent starts
there are 432 descent starts
there are 817 rest starts: now culling extras
there are 431 ascent starts
there are 432 descent starts
there are 430 rest starts
causal error counts: 347 0 0
431 432 430
431 432 430 interval starts
432 430 430 interval ends
found 429 good ascents
found 429 good descents
found 423 good rests
no doubling of profile ascent starts found
arrived at 422 good candidate profiles
after removing 0 due to time slip error
transitions are self-consistent
of 365 days, 41 have nine profiles as desired
...and 0 had more than nine profiles
working on site oos year 2018
there are 1184 ascent starts
there are 1187 descent starts
there are 1713 rest starts: now culling extras
there are 1184 ascent starts
there are 1187 descent starts
there are 1186 rest starts
causal error counts: 1184 0 0
1184 1187 1186
1184 1187 1186 interval starts
1187 1186 1183 interval ends
found 1179 good ascents
found 1179 good descents
found 1169 good rests
no doubling of profile ascent starts found
arrived at 1165 good candidate profiles
after removing 1 due to time slip error
transitions are self-consistent
of 365 days, 116 have nine profiles as desired
...and 0 had more than nine profiles
working on site oos year 2019
there are 2087 ascent starts
there are 2099 descent starts
there are 3063 rest starts: now culling extras
there are 2087 ascent starts
there are 2099 descent starts
there are 2091 rest starts
causal error counts: 1937 0 0
2087 2099 2091
2087 2099 2091 interval starts
2099 2091 2086 interval ends
found 2082 good ascents
found 2086 good descents
found 2038 good rests
no doubling of profile ascent starts found
arrived at 2029 good candidate profiles
after removing 0 due to time slip error
transitions are self-consistent
of 365 days, 211 have nine profiles as desired
...and 0 had more than nine profiles
working on site oos year 2020
there are 135 ascent starts
there are 138 descent starts
there are 190 rest starts: now culling extras
there are 135 ascent starts
there are 138 descent starts
there are 136 rest starts
causal error counts: 135 0 0
135 138 136
135 138 136 interval starts
138 136 134 interval ends
found 134 good ascents
found 133 good descents
found 134 good rests
no doubling of profile ascent starts found
arrived at 131 good candidate profiles
after removing 0 due to time slip error
transitions are self-consistent
of 365 days, 13 have nine profiles as desired
...and 0 had more than nine profiles
working on site oos year 2021
there are 0 ascent starts
there are 0 descent starts
there are 0 rest starts: now culling extras
there are 0 ascent starts
there are 0 descent starts
there are 0 rest starts
causal error counts: 0 0 0
0 0 0
0 0 0 interval starts
0 0 0 interval ends
No data: Abandoning this site + year: oos 2021
###Markdown
Examine one of the site + year profile records
###Code
df = ReadProfiles(os.getcwd() + '/../Profiles/osb2017.csv')
print('Example: Descent end time, table row 2:', df['descent_end'][2])
print()
print()
df
###Output
Example: Descent end time, table row 2: 2017-01-01 06:28:00
Examples/Doublewell Spatio-temporal Decorrelation.ipynb | ###Markdown
Spatial Decorrelation of Order 2 (SD2)

Parameters:

* data – a 3n x T data matrix (the 3 comes from the x, y, z coordinates of each atom). May be a numpy array or a matrix, where n is the size of the protein and T is the number of snapshots of the MD trajectory
* m – dimensionality of the subspace we are interested in; default value is None, in which case m = n
* verbose – print information on progress. Default is True.

Returns:

* Y – Y = U * data, the 2nd-order spatially whitened coordinates extracted from the 3n x T data matrix
* U – a 3n x m matrix (NumPy matrix type); if m is omitted, U is a square 3n x 3n matrix
* Ds – eigenvalues sorted by increasing variance; PCs holds the indices of the m most significant principal components by decreasing variance; S = Ds[PCs]
* S – eigenvalues of the 'data' covariance matrix
* B – eigenvectors of the 'data' covariance matrix. The eigenvectors are orthogonal.
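In matrix terms this is the standard PCA-whitening construction (a sketch; transposition conventions may differ from the code): with the mean removed,$$C = \frac{1}{T}\,X X^{\top} = E D E^{\top}, \qquad U = D^{-1/2} E^{\top}, \qquad Y = U X,$$so that $\operatorname{cov}(Y) = I$; keeping only the top $m$ eigenpairs of $D$ and $E$ gives the reduced whitening matrix.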
###Code
#from anca.decorrelation import SD2
import SD2
# X: the doublewell trajectory data matrix, assumed prepared earlier in the notebook.
(Y, S, B, U) = SD2.SD2(X, m=2)
###Output
2nd order Spatial Decorrelation -> Looking for 2 sources
2nd order Spatial Decorrelation -> Removing the mean value
2nd order Spatial Decorrelation -> Whitening the data
###Markdown
Temporal Decorrelation of Order 2 (TD2)

Parameters:

* Y – an m x T spatially whitened matrix (m is the dimensionality of the subspace, T the number of snapshots). May be a numpy array or a matrix
* m – dimensionality of the subspace we are interested in. Defaults to None, in which case m = n
* U – whitening matrix obtained from the PCA analysis (SD2) on m components of the real data
* lag – lag time in the form of an integer denoting the number of time steps
* verbose – print info on progress. Default is True.

Returns:

* V – an n x m separating matrix (NumPy matrix type) such that V = Btd2 x U (U is obtained from SD2 of the data matrix and Btd2 from the time-delayed covariance of Y)
* Z – Z = Btd2 * Y is the spatially whitened and temporally decorrelated (2nd order) source extracted from the m x T spatially whitened matrix Y
* Dstd2 – eigenvalues sorted by increasing variance; PCstd2 holds the indices of the m most significant principal components by decreasing variance; R = Dstd2[PCstd2]
* R – eigenvalues of the time-delayed covariance matrix of Y
* Btd2 – eigenvectors of the time-delayed covariance matrix of Y
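The rotation here comes from an eigendecomposition of the symmetrized time-lagged covariance of the whitened data (again a sketch; transposition conventions may differ from the code):$$C_{\tau} = \frac{1}{T-\tau}\sum_{t} y_t\, y_{t+\tau}^{\top}, \qquad \bar{C}_{\tau} = \tfrac{1}{2}\left(C_{\tau} + C_{\tau}^{\top}\right) = B_{td2}\, R\, B_{td2}^{\top},$$and $Z = B_{td2}^{\top} Y$ is then temporally decorrelated at lag $\tau$ while remaining spatially white.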
###Code
#from anca.decorrelation import TD2
import TD2
(Z, R, Btd2, V) = TD2.TD2(Y, m=2, U=U, lag=5)
###Output
2nd order Temporal Decorrelation -> Looking for 2 sources
2nd order Temporal Decorrelation -> Removing the mean value
2nd order Temporal Decorrelation -> Whitening the data
###Markdown
Temporal Decorrelation of Order 4 (TD4)

Parameters:

* Z – an m x T matrix that is spatially uncorrelated of order 2 and temporally uncorrelated of order 2 (m subspaces, T samples). May be a numpy array or matrix, where m is the number of subspaces we are interested in and T the number of snapshots of the MD trajectory
* V – separating matrix obtained after the PCA analysis on m components of the real data, followed by temporal decorrelation of the spatially whitened data
* lag – lag time in the form of an integer denoting the number of time steps
* verbose – print info on progress. Default is True.

Returns:

* W – separating matrix
###Code
#from anca.decorrelation import TD4
import TD4
W = TD4.TD4(Z, m=2, V=V, lag=5)
# Helper drawing a uniformly scaled arrow; left unused below since the two
# components of each vector are scaled separately for visibility.
def draw_arrow(a, v, color):
    plt.arrow(0, 0, a*v[0], a*v[1], color=color, width=0.02, linewidth=3)
plt.figure(figsize=(4,7))
scatter(X[:,0], X[:,1], marker = 'o', color=[0.6,0.6,0.6])
plt.arrow(0, 0, 7*U[0,0], 12*U[0,1], color='red', width=0.02, linewidth=3);
plt.text(-0.0, 6.5, 'SD2', color='red', fontsize=20, fontweight='bold', rotation='horizontal')
plt.arrow(0, 0, 2*V[0,0], V[0,1], color='blue', width=0.02, linewidth=3);
plt.text(-1.5, 3.5, 'TD2', color='blue', fontsize = 20, fontweight='bold', rotation='horizontal')
plt.arrow(0, 0, 3*W[0,0], 4*W[0,1], color='orange', width=0.02, linewidth=3);
plt.text(1.5, 3.5, 'TD4', color='orange', fontsize=20, fontweight='bold', rotation='horizontal')
YTD4 = W.dot(Z)
hist(2*Y[0,:].T, bins=50, histtype='step', linewidth=3, label='SD2', color='blue')
hist(0.3*Z[1,:].T, bins=50, histtype='step', linewidth=3, label='TD2', color='orange')
hist(5*YTD4[1,:].T, bins=50, histtype='step', linewidth=3, label='TD4', color='red')
xlabel('essential coordinate (1st principal or independent component)')
ylabel('projected histogram')
legend()
###Output
_____no_output_____
exp/tvm_jupyter/nnvm/from_mxnet_to_webgl.ipynb | ###Markdown
Deploy Deep Learning Models to OpenGL and WebGL
===============================================

**Author**: `Zhixun Tan`_

This example shows how to build a neural network with the NNVM Python frontend and generate a runtime library for WebGL running in a browser with TVM. To run this notebook, you need to install tvm and nnvm. Notice that you need to build tvm with OpenGL.

Overview
--------

In this tutorial, we will download a pre-trained resnet18 model from the Gluon Model Zoo and run image classification in 3 different ways:

- Run locally: We will compile the model into a TVM library with OpenGL device code and directly run it locally.
- Run in a browser through RPC: We will compile the model into a JavaScript TVM library with WebGL device code, and upload it to an RPC server that is hosting a JavaScript TVM runtime to run it.
- Export a JavaScript library and run in a browser: We will compile the model into a JavaScript TVM library with WebGL device code, combine it with the JavaScript TVM runtime, and pack everything together. Then we will run it directly in a browser.
###Code
from __future__ import print_function
import numpy as np
import tvm
import nnvm.compiler
import nnvm.testing
# This tutorial must be run with OpenGL backend enabled in TVM.
# The NNVM CI does not enable OpenGL yet. But the user can run this script.
opengl_enabled = tvm.module.enabled("opengl")
# To run the local demo, set this flag to True.
run_deploy_local = False
# To run the RPC demo, set this flag to True.
run_deploy_rpc = False
# To run the WebGL deploy demo, set this flag to True.
run_deploy_web = False
###Output
_____no_output_____
###Markdown
Download a Pre-trained Resnet18 Model
-------------------------------------

Here we define 2 functions:

- A function that downloads a pre-trained resnet18 model from the Gluon Model Zoo. The model that we download is in MXNet format; we then transform it into an NNVM computation graph.
- A function that downloads a file that contains the names of all the image classes in this model.
###Code
def load_mxnet_resnet():
"""Load a pretrained resnet model from MXNet and transform that into NNVM
format.
Returns
-------
net : nnvm.Symbol
The loaded resnet computation graph.
params : dict[str -> NDArray]
The pretrained model parameters.
data_shape: tuple
The shape of the input tensor (an image).
out_shape: tuple
The shape of the output tensor (probability of all classes).
"""
print("Loading pretrained resnet model from MXNet...")
# Download a pre-trained mxnet resnet18_v1 model.
from mxnet.gluon.model_zoo.vision import get_model
block = get_model('resnet18_v1', pretrained=True)
# Transform the mxnet model into NNVM.
# We want a probability so add a softmax operator.
sym, params = nnvm.frontend.from_mxnet(block)
sym = nnvm.sym.softmax(sym)
print("- Model loaded!")
return sym, params, (1, 3, 224, 224), (1, 1000)
def download_synset():
"""Download a dictionary from class index to name.
This lets us know what our prediction actually is.
Returns
-------
synset : dict[int -> str]
The loaded synset.
"""
print("Downloading synset...")
from mxnet import gluon
url = "https://gist.githubusercontent.com/zhreshold/" + \
"4d0b62f3d01426887599d4f7ede23ee5/raw/" + \
"596b27d23537e5a1b5751d2b0481ef172f58b539/" + \
"imagenet1000_clsid_to_human.txt"
file_name = "synset.txt"
gluon.utils.download(url, file_name)
with open(file_name) as f:
synset = eval(f.read())
print("- Synset downloaded!")
return synset
###Output
_____no_output_____
###Markdown
Download Input Image
--------------------

Here we define 2 functions that prepare an image that we want to perform classification on:

- A function that downloads a cat image.
- A function that performs preprocessing on an image so that it fits the format required by the resnet18 model.
###Code
def download_image():
"""Download a cat image and resize it to 224x224 which fits resnet.
Returns
-------
image : PIL.Image.Image
The loaded and resized image.
"""
print("Downloading cat image...")
from matplotlib import pyplot as plt
from mxnet import gluon
from PIL import Image
url = "https://github.com/dmlc/mxnet.js/blob/master/data/cat.png?raw=true"
img_name = "cat.png"
gluon.utils.download(url, img_name)
image = Image.open(img_name).resize((224, 224))
print("- Cat image downloaded!")
plt.imshow(image)
plt.show()
return image
def transform_image(image):
"""Perform necessary preprocessing to input image.
Parameters
----------
image : numpy.ndarray
The raw image.
Returns
-------
image : numpy.ndarray
The preprocessed image.
"""
    # Subtract the per-channel ImageNet mean and divide by the per-channel std.
    image = np.array(image) - np.array([123., 117., 104.])
    image /= np.array([58.395, 57.12, 57.375])
    # Reorder from HWC to CHW and add a leading batch dimension.
    image = image.transpose((2, 0, 1))
    image = image[np.newaxis, :]
return image
###Output
_____no_output_____
###Markdown
Compile the Model
-----------------

Here we define a function that invokes the NNVM compiler.
###Code
def compile_net(net, target_host, target, data_shape, params):
"""Compiles an NNVM computation graph.
Parameters
----------
net : nnvm.Graph
The NNVM computation graph.
target_host : str
The target to compile the host portion of the library.
target : str
The target to compile the device portion of the library.
data_shape : tuple
The shape of the input data (image).
params : dict[str -> NDArray]
Model parameters.
Returns
-------
graph : Graph
The final execution graph.
libmod : tvm.Module
The module that comes with the execution graph
params : dict[str -> NDArray]
The updated parameters of graph if params is passed.
This can be different from the params passed in.
"""
print("Compiling the neural network...")
with nnvm.compiler.build_config(opt_level=0):
deploy_graph, lib, deploy_params = nnvm.compiler.build(
net,
target_host=target_host,
target=target,
shape={"data": data_shape},
params=params)
print("- Complilation completed!")
return deploy_graph, lib, deploy_params
###Output
_____no_output_____
###Markdown
Demo 1: Deploy Locally
----------------------

In this demo, we will compile the model targeting the local machine. Then we will demonstrate how to save the compiled model as a shared library and load it back. Finally, we will run the model.
###Code
def deploy_local():
"""Runs the demo that deploys a model locally.
"""
# Load resnet model.
net, params, data_shape, out_shape = load_mxnet_resnet()
# Compile the model.
    # Note that we specify the host target as "llvm".
deploy_graph, lib, deploy_params = compile_net(
net,
target_host="llvm",
target="opengl",
data_shape=data_shape,
params=params)
# Save the compiled module.
# Note we need to save all three files returned from the NNVM compiler.
print("Saving the compiled module...")
from tvm.contrib import util
temp = util.tempdir()
path_lib = temp.relpath("deploy_lib.so")
path_graph_json = temp.relpath("deploy_graph.json")
path_params = temp.relpath("deploy_param.params")
lib.export_library(path_lib)
with open(path_graph_json, "w") as fo:
fo.write(deploy_graph.json())
with open(path_params, "wb") as fo:
fo.write(nnvm.compiler.save_param_dict(deploy_params))
print("- Saved files:", temp.listdir())
# Load the module back.
print("Loading the module back...")
loaded_lib = tvm.module.load(path_lib)
with open(path_graph_json) as fi:
loaded_graph_json = fi.read()
with open(path_params, "rb") as fi:
loaded_params = bytearray(fi.read())
print("- Module loaded!")
# Run the model! We will perform prediction on an image.
print("Running the graph...")
from tvm.contrib import graph_runtime
module = graph_runtime.create(loaded_graph_json, loaded_lib, tvm.opengl(0))
module.load_params(loaded_params)
image = transform_image(download_image())
input_data = tvm.nd.array(image.astype("float32"), ctx=tvm.opengl(0))
module.set_input("data", input_data)
module.run()
# Retrieve the output.
out = module.get_output(0, tvm.nd.empty(out_shape, ctx=tvm.opengl(0)))
top1 = np.argmax(out.asnumpy())
synset = download_synset()
print('TVM prediction top-1:', top1, synset[top1])
if run_deploy_local and opengl_enabled:
deploy_local()
###Output
_____no_output_____
###Markdown
Demo 2: Deploy the Model to WebGL Remotely with RPC
---------------------------------------------------

Following the steps above, we can also compile the model for WebGL. TVM provides an rpc module to help with remote deploying.

When we deploy a model locally to OpenGL, the model consists of two parts: the host LLVM part and the device GLSL part. Now that we want to deploy to WebGL, we need to leverage Emscripten to transform LLVM into JavaScript. In order to do that, we will need to specify the host target as `llvm -target=asmjs-unknown-emscripten -system-lib`. Then we call Emscripten to compile the LLVM binary output into a JavaScript file.

First, we need to manually start an RPC server. Please follow the instructions in `tvm/web/README.md`. After following the steps, you should have a web page opened in a browser, and a Python script running a proxy.
###Code
def deploy_rpc():
"""Runs the demo that deploys a model remotely through RPC.
"""
from tvm import rpc
from tvm.contrib import util, emscripten
# As usual, load the resnet18 model.
net, params, data_shape, out_shape = load_mxnet_resnet()
# Compile the model.
# Note that this time we are changing the target.
# This is because we want to translate the host library into JavaScript
# through Emscripten.
graph, lib, params = compile_net(
net,
target_host="llvm -target=asmjs-unknown-emscripten -system-lib",
target="opengl",
data_shape=data_shape,
params=params)
# Now we want to deploy our model through RPC.
    # First we need to prepare the module files locally.
print("Saving the compiled module...")
temp = util.tempdir()
path_obj = temp.relpath("deploy.bc") # host LLVM part
path_dso = temp.relpath("deploy.js") # host JavaScript part
path_gl = temp.relpath("deploy.gl") # device GLSL part
path_json = temp.relpath("deploy.tvm_meta.json")
lib.save(path_obj)
emscripten.create_js(path_dso, path_obj, side_module=True)
lib.imported_modules[0].save(path_gl)
print("- Saved files:", temp.listdir())
# Connect to the RPC server.
print("Connecting to RPC server...")
proxy_host = 'localhost'
proxy_port = 9090
remote = rpc.connect(proxy_host, proxy_port, key="js")
print("- Connected to RPC server!")
# Upload module to RPC server.
print("Uploading module to RPC server...")
remote.upload(path_dso, "deploy.dso")
remote.upload(path_gl)
remote.upload(path_json)
print("- Upload completed!")
# Load remote library.
print("Loading remote library...")
fdev = remote.load_module("deploy.gl")
fhost = remote.load_module("deploy.dso")
fhost.import_module(fdev)
rlib = fhost
print("- Remote library loaded!")
ctx = remote.opengl(0)
# Upload the parameters.
print("Uploading parameters...")
rparams = {k: tvm.nd.array(v, ctx) for k, v in params.items()}
print("- Parameters uploaded!")
# Create the remote runtime module.
print("Running remote module...")
from tvm.contrib import graph_runtime
module = graph_runtime.create(graph, rlib, ctx)
# Set parameter.
module.set_input(**rparams)
# Set input data.
input_data = np.random.uniform(size=data_shape)
module.set_input('data', tvm.nd.array(input_data.astype('float32')))
# Run.
module.run()
print("- Remote module execution completed!")
out = module.get_output(0, out=tvm.nd.empty(out_shape, ctx=ctx))
# Print first 10 elements of output.
print(out.asnumpy()[0][0:10])
if run_deploy_rpc and opengl_enabled:
deploy_rpc()
###Output
_____no_output_____
###Markdown
Demo 3: Deploy the Model to WebGL SystemLib
-------------------------------------------

This time we are not using RPC. Instead, we will compile the model and link it with the entire tvm runtime into a single giant JavaScript file. Then we will run the model using JavaScript.
###Code
def deploy_web():
"""Runs the demo that deploys to web.
"""
import base64
import json
import os
import shutil
    # Note: SimpleHTTPServer and SocketServer are Python 2 modules;
    # on Python 3 the equivalents are http.server and socketserver.
    import SimpleHTTPServer, SocketServer
from tvm.contrib import emscripten
curr_path = os.path.dirname(os.path.abspath(os.path.expanduser(os.getcwd())))
working_dir = os.getcwd()
output_dir = os.path.join(working_dir, "resnet")
if not os.path.exists(output_dir):
os.makedirs(output_dir)
# As usual, load the resnet18 model.
net, params, data_shape, out_shape = load_mxnet_resnet()
# As usual, compile the model.
graph, lib, params = compile_net(
net,
target_host="llvm -target=asmjs-unknown-emscripten -system-lib",
target="opengl",
data_shape=data_shape,
params=params)
# Now we save the model and link it with the TVM web runtime.
path_lib = os.path.join(output_dir, "resnet.js")
path_graph = os.path.join(output_dir, "resnet.json")
path_params = os.path.join(output_dir, "resnet.params")
path_data_shape = os.path.join(output_dir, "data_shape.json")
path_out_shape = os.path.join(output_dir, "out_shape.json")
lib.export_library(path_lib, emscripten.create_js, options=[
"-s", "USE_GLFW=3",
"-s", "USE_WEBGL2=1",
"-lglfw",
"-s", "TOTAL_MEMORY=1073741824",
])
with open(path_graph, "w") as fo:
fo.write(graph.json())
with open(path_params, "w") as fo:
fo.write(base64.b64encode(nnvm.compiler.save_param_dict(params)))
shutil.copyfile(os.path.join(curr_path, "../tvm/web/tvm_runtime.js"),
os.path.join(output_dir, "tvm_runtime.js"))
shutil.copyfile(os.path.join(curr_path, "web/resnet.html"),
os.path.join(output_dir, "resnet.html"))
# Now we want to save some extra files so that we can execute the model from
# JavaScript.
# - data shape
with open(path_data_shape, "w") as fo:
json.dump(list(data_shape), fo)
# - out shape
with open(path_out_shape, "w") as fo:
json.dump(list(out_shape), fo)
# - input image
image = download_image()
image.save(os.path.join(output_dir, "data.png"))
# - synset
synset = download_synset()
with open(os.path.join(output_dir, "synset.json"), "w") as fo:
json.dump(synset, fo)
print("Output files are in", output_dir)
# Finally, we fire up a simple web server to serve all the exported files.
print("Now running a simple server to serve the files...")
os.chdir(output_dir)
port = 8080
handler = SimpleHTTPServer.SimpleHTTPRequestHandler
httpd = SocketServer.TCPServer(("", port), handler)
print("Please open http://localhost:" + str(port) + "/resnet.html")
httpd.serve_forever()
if run_deploy_web and opengl_enabled:
deploy_web()
###Output
_____no_output_____ |
guides/ipynb/training_with_built_in_methods.ipynb | ###Markdown
Training & evaluation with the built-in methods**Author:** [fchollet](https://twitter.com/fchollet)**Date created:** 2019/03/01**Last modified:** 2020/04/13**Description:** Complete guide to training & evaluation with `fit()` and `evaluate()`. Setup
###Code
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
###Output
_____no_output_____
###Markdown
Introduction

This guide covers training, evaluation, and prediction (inference) models when using built-in APIs for training & validation (such as `Model.fit()`, `Model.evaluate()` and `Model.predict()`).

If you are interested in leveraging `fit()` while specifying your own training step function, see the [Customizing what happens in `fit()` guide](/guides/customizing_what_happens_in_fit/).

If you are interested in writing your own training & evaluation loops from scratch, see the guide ["writing a training loop from scratch"](/guides/writing_a_training_loop_from_scratch/).

In general, whether you are using built-in loops or writing your own, model training & evaluation works strictly in the same way across every kind of Keras model -- Sequential models, models built with the Functional API, and models written from scratch via model subclassing.

This guide doesn't cover distributed training, which is covered in our [guide to multi-GPU & distributed training](https://keras.io/guides/distributed_training/).

API overview: a first end-to-end example

When passing data to the built-in training loops of a model, you should either use **NumPy arrays** (if your data is small and fits in memory) or **`tf.data Dataset` objects**. In the next few paragraphs, we'll use the MNIST dataset as NumPy arrays, in order to demonstrate how to use optimizers, losses, and metrics.

Let's consider the following model (here, we build it with the Functional API, but it could be a Sequential model or a subclassed model as well):
###Code
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, activation="softmax", name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
###Output
_____no_output_____
###Markdown
Here's what the typical end-to-end workflow looks like, consisting of:

- Training
- Validation on a holdout set generated from the original training data
- Evaluation on the test data

We'll use MNIST data for this example.
###Code
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
# Preprocess the data (these are NumPy arrays)
x_train = x_train.reshape(60000, 784).astype("float32") / 255
x_test = x_test.reshape(10000, 784).astype("float32") / 255
y_train = y_train.astype("float32")
y_test = y_test.astype("float32")
# Reserve 10,000 samples for validation
x_val = x_train[-10000:]
y_val = y_train[-10000:]
x_train = x_train[:-10000]
y_train = y_train[:-10000]
###Output
_____no_output_____
###Markdown
We specify the training configuration (optimizer, loss, metrics):
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(), # Optimizer
# Loss function to minimize
loss=keras.losses.SparseCategoricalCrossentropy(),
# List of metrics to monitor
metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
###Output
_____no_output_____
###Markdown
We call `fit()`, which will train the model by slicing the data into "batches" of size `batch_size`, and repeatedly iterating over the entire dataset for a given number of `epochs`.
###Code
print("Fit model on training data")
history = model.fit(
x_train,
y_train,
batch_size=64,
epochs=2,
# We pass some validation for
# monitoring validation loss and metrics
# at the end of each epoch
validation_data=(x_val, y_val),
)
###Output
_____no_output_____
###Markdown
The returned `history` object holds a record of the loss values and metric values during training:
###Code
history.history
###Output
_____no_output_____
###Markdown
We evaluate the model on the test data via `evaluate()`:
###Code
# Evaluate the model on the test data using `evaluate`
print("Evaluate on test data")
results = model.evaluate(x_test, y_test, batch_size=128)
print("test loss, test acc:", results)
# Generate predictions (probabilities -- the output of the last layer)
# on new data using `predict`
print("Generate predictions for 3 samples")
predictions = model.predict(x_test[:3])
print("predictions shape:", predictions.shape)
###Output
_____no_output_____
###Markdown
Now, let's review each piece of this workflow in detail.

The `compile()` method: specifying a loss, metrics, and an optimizer

To train a model with `fit()`, you need to specify a loss function, an optimizer, and optionally, some metrics to monitor.

You pass these to the model as arguments to the `compile()` method:
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(),
metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
###Output
_____no_output_____
###Markdown
The `metrics` argument should be a list -- your model can have any number of metrics.

If your model has multiple outputs, you can specify different losses and metrics for each output, and you can modulate the contribution of each output to the total loss of the model. You will find more details about this in the **Passing data to multi-input, multi-output models** section.

Note that if you're satisfied with the default settings, in many cases the optimizer, loss, and metrics can be specified via string identifiers as a shortcut:
###Code
model.compile(
optimizer="rmsprop",
loss="sparse_categorical_crossentropy",
metrics=["sparse_categorical_accuracy"],
)
###Output
_____no_output_____
###Markdown
For later reuse, let's put our model definition and compile step in functions; we will call them several times across different examples in this guide.
###Code
def get_uncompiled_model():
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, activation="softmax", name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
return model
def get_compiled_model():
model = get_uncompiled_model()
model.compile(
optimizer="rmsprop",
loss="sparse_categorical_crossentropy",
metrics=["sparse_categorical_accuracy"],
)
return model
###Output
_____no_output_____
###Markdown
Many built-in optimizers, losses, and metrics are available

In general, you won't have to create your own losses, metrics, or optimizers from scratch, because what you need is likely to be already part of the Keras API.

Optimizers:

- `SGD()` (with or without momentum)
- `RMSprop()`
- `Adam()`
- etc.

Losses:

- `MeanSquaredError()`
- `KLDivergence()`
- `CosineSimilarity()`
- etc.

Metrics:

- `AUC()`
- `Precision()`
- `Recall()`
- etc.

Custom losses

If you need to create a custom loss, Keras provides two ways to do so.

The first method involves creating a function that accepts inputs `y_true` and `y_pred`. The following example shows a loss function that computes the mean squared error between the real data and the predictions:
###Code
def custom_mean_squared_error(y_true, y_pred):
return tf.math.reduce_mean(tf.square(y_true - y_pred))
model = get_uncompiled_model()
model.compile(optimizer=keras.optimizers.Adam(), loss=custom_mean_squared_error)
# We need to one-hot encode the labels to use MSE
y_train_one_hot = tf.one_hot(y_train, depth=10)
model.fit(x_train, y_train_one_hot, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
If you need a loss function that takes in parameters besides `y_true` and `y_pred`, you can subclass the `tf.keras.losses.Loss` class and implement the following two methods:

- `__init__(self)`: accept parameters to pass during the call of your loss function
- `call(self, y_true, y_pred)`: use the targets (y_true) and the model predictions (y_pred) to compute the model's loss

Let's say you want to use mean squared error, but with an added term that will de-incentivize prediction values far from 0.5 (we assume that the categorical targets are one-hot encoded and take values between 0 and 1). This creates an incentive for the model not to be too confident, which may help reduce overfitting (we won't know if it works until we try!).

Here's how you would do it:
###Code
class CustomMSE(keras.losses.Loss):
def __init__(self, regularization_factor=0.1, name="custom_mse"):
super().__init__(name=name)
self.regularization_factor = regularization_factor
def call(self, y_true, y_pred):
mse = tf.math.reduce_mean(tf.square(y_true - y_pred))
reg = tf.math.reduce_mean(tf.square(0.5 - y_pred))
return mse + reg * self.regularization_factor
model = get_uncompiled_model()
model.compile(optimizer=keras.optimizers.Adam(), loss=CustomMSE())
y_train_one_hot = tf.one_hot(y_train, depth=10)
model.fit(x_train, y_train_one_hot, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
Custom metrics

If you need a metric that isn't part of the API, you can easily create custom metrics by subclassing the `tf.keras.metrics.Metric` class. You will need to implement 4 methods:

- `__init__(self)`, in which you will create state variables for your metric.
- `update_state(self, y_true, y_pred, sample_weight=None)`, which uses the targets y_true and the model predictions y_pred to update the state variables.
- `result(self)`, which uses the state variables to compute the final results.
- `reset_state(self)`, which reinitializes the state of the metric.

State update and results computation are kept separate (in `update_state()` and `result()`, respectively) because in some cases, the results computation might be very expensive and would only be done periodically.

Here's a simple example showing how to implement a `CategoricalTruePositives` metric that counts how many samples were correctly classified as belonging to a given class:
###Code
class CategoricalTruePositives(keras.metrics.Metric):
def __init__(self, name="categorical_true_positives", **kwargs):
super(CategoricalTruePositives, self).__init__(name=name, **kwargs)
self.true_positives = self.add_weight(name="ctp", initializer="zeros")
def update_state(self, y_true, y_pred, sample_weight=None):
y_pred = tf.reshape(tf.argmax(y_pred, axis=1), shape=(-1, 1))
values = tf.cast(y_true, "int32") == tf.cast(y_pred, "int32")
values = tf.cast(values, "float32")
if sample_weight is not None:
sample_weight = tf.cast(sample_weight, "float32")
values = tf.multiply(values, sample_weight)
self.true_positives.assign_add(tf.reduce_sum(values))
def result(self):
return self.true_positives
def reset_state(self):
# The state of the metric will be reset at the start of each epoch.
self.true_positives.assign(0.0)
model = get_uncompiled_model()
model.compile(
optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(),
metrics=[CategoricalTruePositives()],
)
model.fit(x_train, y_train, batch_size=64, epochs=3)
###Output
_____no_output_____
###Markdown
Handling losses and metrics that don't fit the standard signature

The overwhelming majority of losses and metrics can be computed from `y_true` and `y_pred`, where `y_pred` is an output of your model -- but not all of them. For instance, a regularization loss may only require the activation of a layer (there are no targets in this case), and this activation may not be a model output.

In such cases, you can call `self.add_loss(loss_value)` from inside the call method of a custom layer. Losses added in this way get added to the "main" loss during training (the one passed to `compile()`). Here's a simple example that adds activity regularization (note that activity regularization is built-in in all Keras layers -- this layer is just for the sake of providing a concrete example):
###Code
class ActivityRegularizationLayer(layers.Layer):
def call(self, inputs):
self.add_loss(tf.reduce_sum(inputs) * 0.1)
return inputs # Pass-through layer.
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)
# Insert activity regularization as a layer
x = ActivityRegularizationLayer()(x)
x = layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(
optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
# The displayed loss will be much higher than before
# due to the regularization component.
model.fit(x_train, y_train, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
You can do the same for logging metric values, using `add_metric()`:
###Code
class MetricLoggingLayer(layers.Layer):
def call(self, inputs):
# The `aggregation` argument defines
# how to aggregate the per-batch values
# over each epoch:
# in this case we simply average them.
self.add_metric(
keras.backend.std(inputs), name="std_of_activation", aggregation="mean"
)
return inputs # Pass-through layer.
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)
# Insert std logging as a layer.
x = MetricLoggingLayer()(x)
x = layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(
optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
model.fit(x_train, y_train, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
In the [Functional API](/guides/functional_api/), you can also call `model.add_loss(loss_tensor)`, or `model.add_metric(metric_tensor, name, aggregation)`.

Here's a simple example:
###Code
inputs = keras.Input(shape=(784,), name="digits")
x1 = layers.Dense(64, activation="relu", name="dense_1")(inputs)
x2 = layers.Dense(64, activation="relu", name="dense_2")(x1)
outputs = layers.Dense(10, name="predictions")(x2)
model = keras.Model(inputs=inputs, outputs=outputs)
model.add_loss(tf.reduce_sum(x1) * 0.1)
model.add_metric(keras.backend.std(x1), name="std_of_activation", aggregation="mean")
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
model.fit(x_train, y_train, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
Note that when you pass losses via `add_loss()`, it becomes possible to call `compile()` without a loss function, since the model already has a loss to minimize.

Consider the following `LogisticEndpoint` layer: it takes as inputs targets & logits, and it tracks a crossentropy loss via `add_loss()`. It also tracks classification accuracy via `add_metric()`.
###Code
class LogisticEndpoint(keras.layers.Layer):
def __init__(self, name=None):
super(LogisticEndpoint, self).__init__(name=name)
self.loss_fn = keras.losses.BinaryCrossentropy(from_logits=True)
self.accuracy_fn = keras.metrics.BinaryAccuracy()
def call(self, targets, logits, sample_weights=None):
# Compute the training-time loss value and add it
# to the layer using `self.add_loss()`.
loss = self.loss_fn(targets, logits, sample_weights)
self.add_loss(loss)
# Log accuracy as a metric and add it
# to the layer using `self.add_metric()`.
acc = self.accuracy_fn(targets, logits, sample_weights)
self.add_metric(acc, name="accuracy")
# Return the inference-time prediction tensor (for `.predict()`).
return tf.nn.softmax(logits)
###Output
_____no_output_____
###Markdown
You can use it in a model with two inputs (input data & targets), compiled without a `loss` argument, like this:
###Code
import numpy as np
inputs = keras.Input(shape=(3,), name="inputs")
targets = keras.Input(shape=(10,), name="targets")
logits = keras.layers.Dense(10)(inputs)
predictions = LogisticEndpoint(name="predictions")(logits, targets)
model = keras.Model(inputs=[inputs, targets], outputs=predictions)
model.compile(optimizer="adam") # No loss argument!
data = {
"inputs": np.random.random((3, 3)),
"targets": np.random.random((3, 10)),
}
model.fit(data)
###Output
_____no_output_____
###Markdown
For more information about training multi-input models, see the section **Passing data to multi-input, multi-output models**.

Automatically setting apart a validation holdout set

In the first end-to-end example you saw, we used the `validation_data` argument to pass a tuple of NumPy arrays `(x_val, y_val)` to the model for evaluating a validation loss and validation metrics at the end of each epoch.

Here's another option: the argument `validation_split` allows you to automatically reserve part of your training data for validation. The argument value represents the fraction of the data to be reserved for validation, so it should be set to a number higher than 0 and lower than 1. For instance, `validation_split=0.2` means "use 20% of the data for validation", and `validation_split=0.6` means "use 60% of the data for validation".

The way the validation is computed is by taking the last x% samples of the arrays received by the `fit()` call, before any shuffling.

Note that you can only use `validation_split` when training with NumPy data.
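To make the "last x% before shuffling" semantics concrete, here is an illustrative sketch of the slice that `validation_split=0.2` selects (this mimics, but is not, the actual Keras internals):

```python
# Illustrative only: fit() reserves the *last* 20% of the samples.
split_at = int(len(x_train) * (1 - 0.2))
x_tr, x_va = x_train[:split_at], x_train[split_at:]
y_tr, y_va = y_train[:split_at], y_train[split_at:]
print(len(x_tr), "training samples,", len(x_va), "validation samples")
```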
###Code
model = get_compiled_model()
model.fit(x_train, y_train, batch_size=64, validation_split=0.2, epochs=1)
###Output
_____no_output_____
###Markdown
Training & evaluation from tf.data DatasetsIn the past few paragraphs, you've seen how to handle losses, metrics, and optimizers,and you've seen how to use the `validation_data` and `validation_split` arguments in`fit()`, when your data is passed as NumPy arrays.Let's now take a look at the case where your data comes in the form of a`tf.data.Dataset` object.The `tf.data` API is a set of utilities in TensorFlow 2.0 for loading and preprocessingdata in a way that's fast and scalable.For a complete guide about creating `Datasets`, see the[tf.data documentation](https://www.tensorflow.org/guide/data).You can pass a `Dataset` instance directly to the methods `fit()`, `evaluate()`, and`predict()`:
###Code
model = get_compiled_model()
# First, let's create a training Dataset instance.
# For the sake of our example, we'll use the same MNIST data as before.
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
# Shuffle and slice the dataset.
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
# Now we get a test dataset.
test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test))
test_dataset = test_dataset.batch(64)
# Since the dataset already takes care of batching,
# we don't pass a `batch_size` argument.
model.fit(train_dataset, epochs=3)
# You can also evaluate or predict on a dataset.
print("Evaluate")
result = model.evaluate(test_dataset)
dict(zip(model.metrics_names, result))
###Output
_____no_output_____
###Markdown
Note that the Dataset is reset at the end of each epoch, so it can be reused for the next epoch.

If you want to run training only on a specific number of batches from this Dataset, you can pass the `steps_per_epoch` argument, which specifies how many training steps the model should run using this Dataset before moving on to the next epoch.

If you do this, the dataset is not reset at the end of each epoch, instead we just keep drawing the next batches. The dataset will eventually run out of data (unless it is an infinitely-looping dataset).
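A common way to get an infinitely-looping dataset is `Dataset.repeat()`; a one-line sketch (assuming a `train_dataset` like the one built in the next cell):

```python
# With no arguments, repeat() loops over the data indefinitely, so
# steps_per_epoch can be used across many epochs without exhaustion.
infinite_dataset = train_dataset.repeat()
```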
###Code
model = get_compiled_model()
# Prepare the training dataset
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
# Only use 100 batches per epoch (that's 64 * 100 samples)
model.fit(train_dataset, epochs=3, steps_per_epoch=100)
###Output
_____no_output_____
###Markdown
Using a validation datasetYou can pass a `Dataset` instance as the `validation_data` argument in `fit()`:
###Code
model = get_compiled_model()
# Prepare the training dataset
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
# Prepare the validation dataset
val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))
val_dataset = val_dataset.batch(64)
model.fit(train_dataset, epochs=1, validation_data=val_dataset)
###Output
_____no_output_____
###Markdown
At the end of each epoch, the model will iterate over the validation dataset and compute the validation loss and validation metrics.

If you want to run validation only on a specific number of batches from this dataset, you can pass the `validation_steps` argument, which specifies how many validation steps the model should run with the validation dataset before interrupting validation and moving on to the next epoch:
###Code
model = get_compiled_model()
# Prepare the training dataset
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
# Prepare the validation dataset
val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))
val_dataset = val_dataset.batch(64)
model.fit(
train_dataset,
epochs=1,
# Only run validation using the first 10 batches of the dataset
# using the `validation_steps` argument
validation_data=val_dataset,
validation_steps=10,
)
###Output
_____no_output_____
###Markdown
Note that the validation dataset will be reset after each use (so that you will always be evaluating on the same samples from epoch to epoch).

The argument `validation_split` (generating a holdout set from the training data) is not supported when training from `Dataset` objects, since this feature requires the ability to index the samples of the datasets, which is not possible in general with the `Dataset` API.

Other input formats supported

Besides NumPy arrays, eager tensors, and TensorFlow `Datasets`, it's possible to train a Keras model using Pandas dataframes, or from Python generators that yield batches of data & labels.

In particular, the `keras.utils.Sequence` class offers a simple interface to build Python data generators that are multiprocessing-aware and can be shuffled.

In general, we recommend that you use:

- NumPy input data if your data is small and fits in memory
- `Dataset` objects if you have large datasets and you need to do distributed training
- `Sequence` objects if you have large datasets and you need to do a lot of custom Python-side processing that cannot be done in TensorFlow (e.g. if you rely on external libraries for data loading or preprocessing).

Using a `keras.utils.Sequence` object as input

`keras.utils.Sequence` is a utility that you can subclass to obtain a Python generator with two important properties:

- It works well with multiprocessing.
- It can be shuffled (e.g. when passing `shuffle=True` in `fit()`).

A `Sequence` must implement two methods:

- `__getitem__`
- `__len__`

The method `__getitem__` should return a complete batch. If you want to modify your dataset between epochs, you may implement `on_epoch_end`.

Here's a quick example:

```python
from skimage.io import imread
from skimage.transform import resize
from tensorflow.keras.utils import Sequence
import numpy as np

# Here, `filenames` is a list of paths to the images
# and `labels` are the associated labels.

class CIFAR10Sequence(Sequence):
    def __init__(self, filenames, labels, batch_size):
        self.filenames, self.labels = filenames, labels
        self.batch_size = batch_size

    def __len__(self):
        return int(np.ceil(len(self.filenames) / float(self.batch_size)))

    def __getitem__(self, idx):
        batch_x = self.filenames[idx * self.batch_size:(idx + 1) * self.batch_size]
        batch_y = self.labels[idx * self.batch_size:(idx + 1) * self.batch_size]
        return np.array([
            resize(imread(filename), (200, 200))
            for filename in batch_x]), np.array(batch_y)

sequence = CIFAR10Sequence(filenames, labels, batch_size)
model.fit(sequence, epochs=10)
```

Using sample weighting and class weighting

With the default settings the weight of a sample is decided by its frequency in the dataset. There are two methods to weight the data, independent of sample frequency:

- Class weights
- Sample weights

Class weights

This is set by passing a dictionary to the `class_weight` argument to `Model.fit()`. This dictionary maps class indices to the weight that should be used for samples belonging to this class.

This can be used to balance classes without resampling, or to train a model that gives more importance to a particular class.

For instance, if class "0" is half as represented as class "1" in your data, you could use `Model.fit(..., class_weight={0: 1., 1: 0.5})`.

Here's a NumPy example where we use class weights or sample weights to give more importance to the correct classification of class 5 (which is the digit "5" in the MNIST dataset).
###Code
import numpy as np
class_weight = {
0: 1.0,
1: 1.0,
2: 1.0,
3: 1.0,
4: 1.0,
# Set weight "2" for class "5",
# making this class 2x more important
5: 2.0,
6: 1.0,
7: 1.0,
8: 1.0,
9: 1.0,
}
print("Fit with class weight")
model = get_compiled_model()
model.fit(x_train, y_train, class_weight=class_weight, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
Sample weights

For fine grained control, or if you are not building a classifier, you can use "sample weights".

- When training from NumPy data: Pass the `sample_weight` argument to `Model.fit()`.
- When training from `tf.data` or any other sort of iterator: Yield `(input_batch, label_batch, sample_weight_batch)` tuples.

A "sample weights" array is an array of numbers that specify how much weight each sample in a batch should have in computing the total loss. It is commonly used in imbalanced classification problems (the idea being to give more weight to rarely-seen classes).

When the weights used are ones and zeros, the array can be used as a *mask* for the loss function (entirely discarding the contribution of certain samples to the total loss).
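For example, here is a minimal sketch of such a mask (a hypothetical variant that silences class 5 entirely, whereas the cell below instead up-weights it):

```python
import numpy as np

# Zero weight -> the sample contributes nothing to the total loss.
mask_weight = np.ones(shape=(len(y_train),))
mask_weight[y_train == 5] = 0.0
```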
###Code
sample_weight = np.ones(shape=(len(y_train),))
sample_weight[y_train == 5] = 2.0
print("Fit with sample weight")
model = get_compiled_model()
model.fit(x_train, y_train, sample_weight=sample_weight, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
Here's a matching `Dataset` example:
###Code
sample_weight = np.ones(shape=(len(y_train),))
sample_weight[y_train == 5] = 2.0
# Create a Dataset that includes sample weights
# (3rd element in the return tuple).
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train, sample_weight))
# Shuffle and slice the dataset.
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
model = get_compiled_model()
model.fit(train_dataset, epochs=1)
###Output
_____no_output_____
###Markdown
Passing data to multi-input, multi-output models

In the previous examples, we were considering a model with a single input (a tensor of shape `(784,)`) and a single output (a prediction tensor of shape `(10,)`). But what about models that have multiple inputs or outputs?

Consider the following model, which has an image input of shape `(32, 32, 3)` (that's `(height, width, channels)`) and a time series input of shape `(None, 10)` (that's `(timesteps, features)`). Our model will have two outputs computed from the combination of these inputs: a "score" (of shape `(1,)`) and a probability distribution over five classes (of shape `(5,)`).
###Code
image_input = keras.Input(shape=(32, 32, 3), name="img_input")
timeseries_input = keras.Input(shape=(None, 10), name="ts_input")
x1 = layers.Conv2D(3, 3)(image_input)
x1 = layers.GlobalMaxPooling2D()(x1)
x2 = layers.Conv1D(3, 3)(timeseries_input)
x2 = layers.GlobalMaxPooling1D()(x2)
x = layers.concatenate([x1, x2])
score_output = layers.Dense(1, name="score_output")(x)
class_output = layers.Dense(5, name="class_output")(x)
model = keras.Model(
inputs=[image_input, timeseries_input], outputs=[score_output, class_output]
)
###Output
_____no_output_____
###Markdown
Let's plot this model, so you can clearly see what we're doing here (note that the shapes shown in the plot are batch shapes, rather than per-sample shapes).
###Code
keras.utils.plot_model(model, "multi_input_and_output_model.png", show_shapes=True)
###Output
_____no_output_____
###Markdown
At compilation time, we can specify different losses to different outputs, by passing the loss functions as a list:
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[keras.losses.MeanSquaredError(), keras.losses.CategoricalCrossentropy()],
)
###Output
_____no_output_____
###Markdown
If we only passed a single loss function to the model, the same loss function would be applied to every output (which is not appropriate here).

Likewise for metrics:
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[keras.losses.MeanSquaredError(), keras.losses.CategoricalCrossentropy()],
metrics=[
[
keras.metrics.MeanAbsolutePercentageError(),
keras.metrics.MeanAbsoluteError(),
],
[keras.metrics.CategoricalAccuracy()],
],
)
###Output
_____no_output_____
###Markdown
Since we gave names to our output layers, we could also specify per-output losses and metrics via a dict:
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss={
"score_output": keras.losses.MeanSquaredError(),
"class_output": keras.losses.CategoricalCrossentropy(),
},
metrics={
"score_output": [
keras.metrics.MeanAbsolutePercentageError(),
keras.metrics.MeanAbsoluteError(),
],
"class_output": [keras.metrics.CategoricalAccuracy()],
},
)
###Output
_____no_output_____
###Markdown
We recommend the use of explicit names and dicts if you have more than 2 outputs.

It's possible to give different weights to different output-specific losses (for instance, one might wish to privilege the "score" loss in our example, by giving it 2x the importance of the class loss), using the `loss_weights` argument:
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss={
"score_output": keras.losses.MeanSquaredError(),
"class_output": keras.losses.CategoricalCrossentropy(),
},
metrics={
"score_output": [
keras.metrics.MeanAbsolutePercentageError(),
keras.metrics.MeanAbsoluteError(),
],
"class_output": [keras.metrics.CategoricalAccuracy()],
},
loss_weights={"score_output": 2.0, "class_output": 1.0},
)
###Output
_____no_output_____
###Markdown
You could also choose not to compute a loss for certain outputs, if these outputs are meant for prediction but not for training:
###Code
# List loss version
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[None, keras.losses.CategoricalCrossentropy()],
)
# Or dict loss version
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss={"class_output": keras.losses.CategoricalCrossentropy()},
)
###Output
_____no_output_____
###Markdown
Passing data to a multi-input or multi-output model in `fit()` works in a similar way as specifying a loss function in compile: you can pass **lists of NumPy arrays** (with 1:1 mapping to the outputs that received a loss function) or **dicts mapping output names to NumPy arrays**.
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[keras.losses.MeanSquaredError(), keras.losses.CategoricalCrossentropy()],
)
# Generate dummy NumPy data
img_data = np.random.random_sample(size=(100, 32, 32, 3))
ts_data = np.random.random_sample(size=(100, 20, 10))
score_targets = np.random.random_sample(size=(100, 1))
class_targets = np.random.random_sample(size=(100, 5))
# Fit on lists
model.fit([img_data, ts_data], [score_targets, class_targets], batch_size=32, epochs=1)
# Alternatively, fit on dicts
model.fit(
{"img_input": img_data, "ts_input": ts_data},
{"score_output": score_targets, "class_output": class_targets},
batch_size=32,
epochs=1,
)
###Output
_____no_output_____
###Markdown
Here's the `Dataset` use case: similarly to what we did for NumPy arrays, the `Dataset` should return a tuple of dicts.
###Code
train_dataset = tf.data.Dataset.from_tensor_slices(
(
{"img_input": img_data, "ts_input": ts_data},
{"score_output": score_targets, "class_output": class_targets},
)
)
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
model.fit(train_dataset, epochs=1)
###Output
_____no_output_____
###Markdown
Using callbacks

Callbacks in Keras are objects that are called at different points during training (at the start of an epoch, at the end of a batch, at the end of an epoch, etc.). They can be used to implement certain behaviors, such as:

- Doing validation at different points during training (beyond the built-in per-epoch validation)
- Checkpointing the model at regular intervals or when it exceeds a certain accuracy threshold
- Changing the learning rate of the model when training seems to be plateauing
- Doing fine-tuning of the top layers when training seems to be plateauing
- Sending email or instant message notifications when training ends or when a certain performance threshold is exceeded
- Etc.

Callbacks can be passed as a list to your call to `fit()`:
###Code
model = get_compiled_model()
callbacks = [
keras.callbacks.EarlyStopping(
# Stop training when `val_loss` is no longer improving
monitor="val_loss",
# "no longer improving" being defined as "no better than 1e-2 less"
min_delta=1e-2,
# "no longer improving" being further defined as "for at least 2 epochs"
patience=2,
verbose=1,
)
]
model.fit(
x_train,
y_train,
epochs=20,
batch_size=64,
callbacks=callbacks,
validation_split=0.2,
)
###Output
_____no_output_____
###Markdown
Many built-in callbacks are available

There are many built-in callbacks already available in Keras, such as:

- `ModelCheckpoint`: Periodically save the model.
- `EarlyStopping`: Stop training when training is no longer improving the validation metrics.
- `TensorBoard`: periodically write model logs that can be visualized in [TensorBoard](https://www.tensorflow.org/tensorboard) (more details in the section "Visualization").
- `CSVLogger`: streams loss and metrics data to a CSV file.
- etc.

See the [callbacks documentation](/api/callbacks/) for the complete list.

Writing your own callback

You can create a custom callback by extending the base class `keras.callbacks.Callback`. A callback has access to its associated model through the class property `self.model`.

Make sure to read the [complete guide to writing custom callbacks](/guides/writing_your_own_callbacks/).

Here's a simple example saving a list of per-batch loss values during training:
###Code
class LossHistory(keras.callbacks.Callback):
def on_train_begin(self, logs):
self.per_batch_losses = []
def on_batch_end(self, batch, logs):
self.per_batch_losses.append(logs.get("loss"))
###Output
_____no_output_____
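A usage sketch for the callback above (hypothetical wiring, reusing `get_compiled_model` and the MNIST arrays defined earlier):

```python
history_cb = LossHistory()
model = get_compiled_model()
model.fit(x_train, y_train, batch_size=64, epochs=1, callbacks=[history_cb])
print(history_cb.per_batch_losses[:5])  # first few per-batch loss values
```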
###Markdown
Checkpointing models

When you're training a model on relatively large datasets, it's crucial to save checkpoints of your model at frequent intervals.

The easiest way to achieve this is with the `ModelCheckpoint` callback:
###Code
model = get_compiled_model()
callbacks = [
keras.callbacks.ModelCheckpoint(
# Path where to save the model
# The two parameters below mean that we will overwrite
# the current checkpoint if and only if
# the `val_loss` score has improved.
# The saved model name will include the current epoch.
filepath="mymodel_{epoch}",
save_best_only=True, # Only save a model if `val_loss` has improved.
monitor="val_loss",
verbose=1,
)
]
model.fit(
x_train, y_train, epochs=2, batch_size=64, callbacks=callbacks, validation_split=0.2
)
###Output
_____no_output_____
###Markdown
The `ModelCheckpoint` callback can be used to implement fault-tolerance: the ability to restart training from the last saved state of the model in case training gets randomly interrupted. Here's a basic example:
###Code
import os
# Prepare a directory to store all the checkpoints.
checkpoint_dir = "./ckpt"
if not os.path.exists(checkpoint_dir):
os.makedirs(checkpoint_dir)
def make_or_restore_model():
# Either restore the latest model, or create a fresh one
# if there is no checkpoint available.
checkpoints = [checkpoint_dir + "/" + name for name in os.listdir(checkpoint_dir)]
if checkpoints:
latest_checkpoint = max(checkpoints, key=os.path.getctime)
print("Restoring from", latest_checkpoint)
return keras.models.load_model(latest_checkpoint)
print("Creating a new model")
return get_compiled_model()
model = make_or_restore_model()
callbacks = [
# This callback saves a SavedModel every 100 batches.
# We include the training loss in the saved model name.
keras.callbacks.ModelCheckpoint(
filepath=checkpoint_dir + "/ckpt-loss={loss:.2f}", save_freq=100
)
]
model.fit(x_train, y_train, epochs=1, callbacks=callbacks)
###Output
_____no_output_____
###Markdown
You can also write your own callback for saving and restoring models.

For a complete guide on serialization and saving, see the [guide to saving and serializing Models](/guides/serialization_and_saving/).

Using learning rate schedules

A common pattern when training deep learning models is to gradually reduce the learning rate as training progresses. This is generally known as "learning rate decay".

The learning decay schedule could be static (fixed in advance, as a function of the current epoch or the current batch index), or dynamic (responding to the current behavior of the model, in particular the validation loss).

Passing a schedule to an optimizer

You can easily use a static learning rate decay schedule by passing a schedule object as the `learning_rate` argument in your optimizer:
###Code
initial_learning_rate = 0.1
lr_schedule = keras.optimizers.schedules.ExponentialDecay(
initial_learning_rate, decay_steps=100000, decay_rate=0.96, staircase=True
)
optimizer = keras.optimizers.RMSprop(learning_rate=lr_schedule)
###Output
_____no_output_____
###Markdown
Several built-in schedules are available: `ExponentialDecay`, `PiecewiseConstantDecay`, `PolynomialDecay`, and `InverseTimeDecay`.

Using callbacks to implement a dynamic learning rate schedule

A dynamic learning rate schedule (for instance, decreasing the learning rate when the validation loss is no longer improving) cannot be achieved with these schedule objects, since the optimizer does not have access to validation metrics.

However, callbacks do have access to all metrics, including validation metrics! You can thus achieve this pattern by using a callback that modifies the current learning rate on the optimizer. In fact, this is even built-in as the `ReduceLROnPlateau` callback (a minimal sketch appears at the end of this guide).

Visualizing loss and metrics during training

The best way to keep an eye on your model during training is to use [TensorBoard](https://www.tensorflow.org/tensorboard) -- a browser-based application that you can run locally that provides you with:

- Live plots of the loss and metrics for training and evaluation
- (optionally) Visualizations of the histograms of your layer activations
- (optionally) 3D visualizations of the embedding spaces learned by your `Embedding` layers

If you have installed TensorFlow with pip, you should be able to launch TensorBoard from the command line:

```
tensorboard --logdir=/full_path_to_your_logs
```

Using the TensorBoard callback

The easiest way to use TensorBoard with a Keras model and the `fit()` method is the `TensorBoard` callback.

In the simplest case, just specify where you want the callback to write logs, and you're good to go:
###Code
keras.callbacks.TensorBoard(
    log_dir="/full_path_to_your_logs",
    histogram_freq=0,  # How often to log histogram visualizations
    embeddings_freq=0,  # How often to log embedding visualizations
    update_freq="epoch",  # How often to write logs (default: once per epoch)
)
###Output
_____no_output_____
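As referenced earlier, here is a minimal sketch of the `ReduceLROnPlateau` callback for dynamic learning rate schedules (the `factor` and `patience` values are illustrative choices, not from the original guide):

```python
model = get_compiled_model()
callbacks = [
    # Halve the learning rate once `val_loss` has stopped improving
    # for 2 consecutive epochs.
    keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=2)
]
model.fit(x_train, y_train, epochs=5, batch_size=64,
          callbacks=callbacks, validation_split=0.2)
```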
###Markdown
Training & evaluation with the built-in methods**Author:** [fchollet](https://twitter.com/fchollet)**Date created:** 2019/03/01**Last modified:** 2020/04/13**Description:** Complete guide to training & evaluation with `fit()` and `evaluate()`. Setup
###Code
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
###Output
_____no_output_____
###Markdown
IntroductionThis guide covers training, evaluation, and prediction (inference) modelswhen using built-in APIs for training & validation (such as `model.fit()`,`model.evaluate()`, `model.predict()`).If you are interested in leveraging `fit()` while specifying yourown training step function, see the guide["customizing what happens in `fit()`"](/guides/customizing_what_happens_in_fit/).If you are interested in writing your own training & evaluation loops fromscratch, see the guide["writing a training loop from scratch"](/guides/writing_a_training_loop_from_scratch/).In general, whether you are using built-in loops or writing your own, model training &evaluation works strictly in the same way across every kind of Keras model --Sequential models, models built with the Functional API, and models written fromscratch via model subclassing.This guide doesn't cover distributed training. For distributed training, seeour [guide to multi-gpu & distributed training](/guides/distributed_training/). API overview: a first end-to-end exampleWhen passing data to the built-in training loops of a model, you should either use**NumPy arrays** (if your data is small and fits in memory) or **`tf.data Dataset`objects**. In the next few paragraphs, we'll use the MNIST dataset as NumPy arrays, inorder to demonstrate how to use optimizers, losses, and metrics.Let's consider the following model (here, we build in with the Functional API, but itcould be a Sequential model or a subclassed model as well):
###Code
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, activation="softmax", name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
###Output
_____no_output_____
###Markdown
Here's what the typical end-to-end workflow looks like, consisting of:- Training- Validation on a holdout set generated from the original training data- Evaluation on the test dataWe'll use MNIST data for this example.
###Code
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
# Preprocess the data (these are NumPy arrays)
x_train = x_train.reshape(60000, 784).astype("float32") / 255
x_test = x_test.reshape(10000, 784).astype("float32") / 255
y_train = y_train.astype("float32")
y_test = y_test.astype("float32")
# Reserve 10,000 samples for validation
x_val = x_train[-10000:]
y_val = y_train[-10000:]
x_train = x_train[:-10000]
y_train = y_train[:-10000]
###Output
_____no_output_____
###Markdown
We specify the training configuration (optimizer, loss, metrics):
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(), # Optimizer
# Loss function to minimize
loss=keras.losses.SparseCategoricalCrossentropy(),
# List of metrics to monitor
metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
###Output
_____no_output_____
###Markdown
We call `fit()`, which will train the model by slicing the data into "batches" of size"batch_size", and repeatedly iterating over the entire dataset for a given number of"epochs".
###Code
print("Fit model on training data")
history = model.fit(
x_train,
y_train,
batch_size=64,
epochs=2,
# We pass some validation for
# monitoring validation loss and metrics
# at the end of each epoch
validation_data=(x_val, y_val),
)
###Output
_____no_output_____
###Markdown
The returned "history" object holds a record of the loss values and metric valuesduring training:
###Code
history.history
###Output
_____no_output_____
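###Markdown
The keys of `history.history` (such as `"loss"` and `"val_loss"` here) make it easy to plot learning curves. Here's a minimal sketch; it assumes `matplotlib` is installed, which is not otherwise required by this guide:

```python
import matplotlib.pyplot as plt

# Plot the training and validation loss recorded by `fit()`.
plt.plot(history.history["loss"], label="training loss")
plt.plot(history.history["val_loss"], label="validation loss")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.show()
```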
###Markdown
We evaluate the model on the test data via `evaluate()`:
###Code
# Evaluate the model on the test data using `evaluate`
print("Evaluate on test data")
results = model.evaluate(x_test, y_test, batch_size=128)
print("test loss, test acc:", results)
# Generate predictions (probabilities -- the output of the last layer)
# on new data using `predict`
print("Generate predictions for 3 samples")
predictions = model.predict(x_test[:3])
print("predictions shape:", predictions.shape)
###Output
_____no_output_____
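###Markdown
Since the last layer uses a softmax activation, each row of `predictions` is a probability distribution over the 10 classes; taking the argmax recovers the predicted digit. A minimal sketch:

```python
import numpy as np

# Each row of `predictions` sums to 1; the most likely class is the argmax.
print("predicted classes:", np.argmax(predictions, axis=1))
```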
###Markdown
Now, let's review each piece of this workflow in detail.

The `compile()` method: specifying a loss, metrics, and an optimizer

To train a model with `fit()`, you need to specify a loss function, an optimizer, and optionally, some metrics to monitor. You pass these to the model as arguments to the `compile()` method:
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(),
metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
###Output
_____no_output_____
###Markdown
The `metrics` argument should be a list -- your model can have any number of metrics.

If your model has multiple outputs, you can specify different losses and metrics for each output, and you can modulate the contribution of each output to the total loss of the model. You will find more details about this in the section **"Passing data to multi-input, multi-output models"**.

Note that if you're satisfied with the default settings, in many cases the optimizer, loss, and metrics can be specified via string identifiers as a shortcut:
###Code
model.compile(
optimizer="rmsprop",
loss="sparse_categorical_crossentropy",
metrics=["sparse_categorical_accuracy"],
)
###Output
_____no_output_____
###Markdown
For later reuse, let's put our model definition and compile step in functions; we will call them several times across different examples in this guide.
###Code
def get_uncompiled_model():
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, activation="softmax", name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
return model
def get_compiled_model():
model = get_uncompiled_model()
model.compile(
optimizer="rmsprop",
loss="sparse_categorical_crossentropy",
metrics=["sparse_categorical_accuracy"],
)
return model
###Output
_____no_output_____
###Markdown
Many built-in optimizers, losses, and metrics are available

In general, you won't have to create your own losses, metrics, or optimizers from scratch, because what you need is likely already part of the Keras API:

Optimizers:
- `SGD()` (with or without momentum)
- `RMSprop()`
- `Adam()`
- etc.

Losses:
- `MeanSquaredError()`
- `KLDivergence()`
- `CosineSimilarity()`
- etc.

Metrics:
- `AUC()`
- `Precision()`
- `Recall()`
- etc.

Custom losses

There are two ways to provide custom losses with Keras. The first example creates a function that accepts inputs `y_true` and `y_pred`. The following example shows a loss function that computes the mean squared error between the real data and the predictions:
###Code
def custom_mean_squared_error(y_true, y_pred):
return tf.math.reduce_mean(tf.square(y_true - y_pred))
model = get_uncompiled_model()
model.compile(optimizer=keras.optimizers.Adam(), loss=custom_mean_squared_error)
# We need to one-hot encode the labels to use MSE
y_train_one_hot = tf.one_hot(y_train, depth=10)
model.fit(x_train, y_train_one_hot, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
If you need a loss function that takes in parameters besides `y_true` and `y_pred`, you can subclass the `tf.keras.losses.Loss` class and implement the following two methods:

- `__init__(self)`: accept parameters to pass during the call of your loss function
- `call(self, y_true, y_pred)`: use the targets (`y_true`) and the model predictions (`y_pred`) to compute the model's loss

Let's say you want to use mean squared error, but with an added term that will de-incentivize prediction values far from 0.5 (we assume that the categorical targets are one-hot encoded and take values between 0 and 1). This creates an incentive for the model not to be too confident, which may help reduce overfitting (we won't know if it works until we try!).

Here's how you would do it:
###Code
class CustomMSE(keras.losses.Loss):
def __init__(self, regularization_factor=0.1, name="custom_mse"):
super().__init__(name=name)
self.regularization_factor = regularization_factor
def call(self, y_true, y_pred):
mse = tf.math.reduce_mean(tf.square(y_true - y_pred))
reg = tf.math.reduce_mean(tf.square(0.5 - y_pred))
return mse + reg * self.regularization_factor
model = get_uncompiled_model()
model.compile(optimizer=keras.optimizers.Adam(), loss=CustomMSE())
y_train_one_hot = tf.one_hot(y_train, depth=10)
model.fit(x_train, y_train_one_hot, batch_size=64, epochs=1)
###Output
_____no_output_____
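###Markdown
The example above uses the default `regularization_factor=0.1`; the point of subclassing is that the penalty strength is now configurable. A minimal sketch (the factor value here is illustrative, not a recommendation):

```python
# The regularization term can be weighted differently per experiment.
model = get_uncompiled_model()
model.compile(optimizer=keras.optimizers.Adam(), loss=CustomMSE(regularization_factor=0.5))
model.fit(x_train, y_train_one_hot, batch_size=64, epochs=1)
```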
###Markdown
Custom metrics

If you need a metric that isn't part of the API, you can easily create custom metrics by subclassing the `tf.keras.metrics.Metric` class. You will need to implement 4 methods:

- `__init__(self)`, in which you will create state variables for your metric.
- `update_state(self, y_true, y_pred, sample_weight=None)`, which uses the targets `y_true` and the model predictions `y_pred` to update the state variables.
- `result(self)`, which uses the state variables to compute the final results.
- `reset_states(self)`, which reinitializes the state of the metric.

State update and results computation are kept separate (in `update_state()` and `result()`, respectively) because in some cases, results computation might be very expensive and would only be done periodically.

Here's a simple example showing how to implement a `CategoricalTruePositives` metric that counts how many samples were correctly classified as belonging to a given class:
###Code
class CategoricalTruePositives(keras.metrics.Metric):
def __init__(self, name="categorical_true_positives", **kwargs):
super(CategoricalTruePositives, self).__init__(name=name, **kwargs)
self.true_positives = self.add_weight(name="ctp", initializer="zeros")
def update_state(self, y_true, y_pred, sample_weight=None):
y_pred = tf.reshape(tf.argmax(y_pred, axis=1), shape=(-1, 1))
values = tf.cast(y_true, "int32") == tf.cast(y_pred, "int32")
values = tf.cast(values, "float32")
if sample_weight is not None:
sample_weight = tf.cast(sample_weight, "float32")
values = tf.multiply(values, sample_weight)
self.true_positives.assign_add(tf.reduce_sum(values))
def result(self):
return self.true_positives
def reset_states(self):
# The state of the metric will be reset at the start of each epoch.
self.true_positives.assign(0.0)
model = get_uncompiled_model()
model.compile(
optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(),
metrics=[CategoricalTruePositives()],
)
model.fit(x_train, y_train, batch_size=64, epochs=3)
###Output
_____no_output_____
###Markdown
Handling losses and metrics that don't fit the standard signature

The overwhelming majority of losses and metrics can be computed from `y_true` and `y_pred`, where `y_pred` is an output of your model. But not all of them. For instance, a regularization loss may only require the activation of a layer (there are no targets in this case), and this activation may not be a model output.

In such cases, you can call `self.add_loss(loss_value)` from inside the call method of a custom layer. Losses added in this way get added to the "main" loss during training (the one passed to `compile()`). Here's a simple example that adds activity regularization (note that activity regularization is built-in in all Keras layers -- this layer is just for the sake of providing a concrete example):
###Code
class ActivityRegularizationLayer(layers.Layer):
def call(self, inputs):
self.add_loss(tf.reduce_sum(inputs) * 0.1)
return inputs # Pass-through layer.
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)
# Insert activity regularization as a layer
x = ActivityRegularizationLayer()(x)
x = layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(
optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
# The displayed loss will be much higher than before
# due to the regularization component.
model.fit(x_train, y_train, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
You can do the same for logging metric values, using `add_metric()`:
###Code
class MetricLoggingLayer(layers.Layer):
def call(self, inputs):
# The `aggregation` argument defines
# how to aggregate the per-batch values
# over each epoch:
# in this case we simply average them.
self.add_metric(
keras.backend.std(inputs), name="std_of_activation", aggregation="mean"
)
return inputs # Pass-through layer.
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)
# Insert std logging as a layer.
x = MetricLoggingLayer()(x)
x = layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(
optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
model.fit(x_train, y_train, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
In the [Functional API](/guides/functional_api/), you can also call `model.add_loss(loss_tensor)`, or `model.add_metric(metric_tensor, name, aggregation)`. Here's a simple example:
###Code
inputs = keras.Input(shape=(784,), name="digits")
x1 = layers.Dense(64, activation="relu", name="dense_1")(inputs)
x2 = layers.Dense(64, activation="relu", name="dense_2")(x1)
outputs = layers.Dense(10, name="predictions")(x2)
model = keras.Model(inputs=inputs, outputs=outputs)
model.add_loss(tf.reduce_sum(x1) * 0.1)
model.add_metric(keras.backend.std(x1), name="std_of_activation", aggregation="mean")
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
model.fit(x_train, y_train, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
Note that when you pass losses via `add_loss()`, it becomes possible to call `compile()` without a loss function, since the model already has a loss to minimize.

Consider the following `LogisticEndpoint` layer: it takes as inputs targets & logits, and it tracks a crossentropy loss via `add_loss()`. It also tracks classification accuracy via `add_metric()`.
###Code
class LogisticEndpoint(keras.layers.Layer):
def __init__(self, name=None):
super(LogisticEndpoint, self).__init__(name=name)
self.loss_fn = keras.losses.BinaryCrossentropy(from_logits=True)
self.accuracy_fn = keras.metrics.BinaryAccuracy()
def call(self, targets, logits, sample_weights=None):
# Compute the training-time loss value and add it
# to the layer using `self.add_loss()`.
loss = self.loss_fn(targets, logits, sample_weights)
self.add_loss(loss)
# Log accuracy as a metric and add it
# to the layer using `self.add_metric()`.
acc = self.accuracy_fn(targets, logits, sample_weights)
self.add_metric(acc, name="accuracy")
# Return the inference-time prediction tensor (for `.predict()`).
return tf.nn.softmax(logits)
###Output
_____no_output_____
###Markdown
You can use it in a model with two inputs (input data & targets), compiled without a `loss` argument, like this:
###Code
import numpy as np
inputs = keras.Input(shape=(3,), name="inputs")
targets = keras.Input(shape=(10,), name="targets")
logits = keras.layers.Dense(10)(inputs)
predictions = LogisticEndpoint(name="predictions")(logits, targets)
model = keras.Model(inputs=[inputs, targets], outputs=predictions)
model.compile(optimizer="adam") # No loss argument!
data = {
"inputs": np.random.random((3, 3)),
"targets": np.random.random((3, 10)),
}
model.fit(data)
###Output
_____no_output_____
###Markdown
For more information about training multi-input models, see the section **Passing data to multi-input, multi-output models**.

Automatically setting apart a validation holdout set

In the first end-to-end example you saw, we used the `validation_data` argument to pass a tuple of NumPy arrays `(x_val, y_val)` to the model for evaluating a validation loss and validation metrics at the end of each epoch.

Here's another option: the argument `validation_split` allows you to automatically reserve part of your training data for validation. The argument value represents the fraction of the data to be reserved for validation, so it should be set to a number higher than 0 and lower than 1. For instance, `validation_split=0.2` means "use 20% of the data for validation", and `validation_split=0.6` means "use 60% of the data for validation".

The way the validation is computed is by taking the last x% samples of the arrays received by the `fit()` call, before any shuffling.

Note that you can only use `validation_split` when training with NumPy data.
###Code
model = get_compiled_model()
model.fit(x_train, y_train, batch_size=64, validation_split=0.2, epochs=1)
###Output
_____no_output_____
###Markdown
Training & evaluation from tf.data Datasets

In the past few paragraphs, you've seen how to handle losses, metrics, and optimizers, and you've seen how to use the `validation_data` and `validation_split` arguments in `fit()`, when your data is passed as NumPy arrays.

Let's now take a look at the case where your data comes in the form of a `tf.data.Dataset` object.

The `tf.data` API is a set of utilities in TensorFlow 2.0 for loading and preprocessing data in a way that's fast and scalable. For a complete guide about creating `Datasets`, see the [tf.data documentation](https://www.tensorflow.org/guide/data).

You can pass a `Dataset` instance directly to the methods `fit()`, `evaluate()`, and `predict()`:
###Code
model = get_compiled_model()
# First, let's create a training Dataset instance.
# For the sake of our example, we'll use the same MNIST data as before.
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
# Shuffle and slice the dataset.
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
# Now we get a test dataset.
test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test))
test_dataset = test_dataset.batch(64)
# Since the dataset already takes care of batching,
# we don't pass a `batch_size` argument.
model.fit(train_dataset, epochs=3)
# You can also evaluate or predict on a dataset.
print("Evaluate")
result = model.evaluate(test_dataset)
dict(zip(model.metrics_names, result))
###Output
_____no_output_____
###Markdown
Note that the Dataset is reset at the end of each epoch, so it can be reused for the next epoch.

If you want to run training only on a specific number of batches from this Dataset, you can pass the `steps_per_epoch` argument, which specifies how many training steps the model should run using this Dataset before moving on to the next epoch.

If you do this, the dataset is not reset at the end of each epoch; instead we just keep drawing the next batches. The dataset will eventually run out of data (unless it is an infinitely-looping dataset).
###Code
model = get_compiled_model()
# Prepare the training dataset
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
# Only use 100 batches per epoch (that's 64 * 100 samples)
model.fit(train_dataset, epochs=3, steps_per_epoch=100)
###Output
_____no_output_____
###Markdown
Using a validation dataset

You can pass a `Dataset` instance as the `validation_data` argument in `fit()`:
###Code
model = get_compiled_model()
# Prepare the training dataset
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
# Prepare the validation dataset
val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))
val_dataset = val_dataset.batch(64)
model.fit(train_dataset, epochs=1, validation_data=val_dataset)
###Output
_____no_output_____
###Markdown
At the end of each epoch, the model will iterate over the validation dataset and compute the validation loss and validation metrics.

If you want to run validation only on a specific number of batches from this dataset, you can pass the `validation_steps` argument, which specifies how many validation steps the model should run with the validation dataset before interrupting validation and moving on to the next epoch:
###Code
model = get_compiled_model()
# Prepare the training dataset
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
# Prepare the validation dataset
val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))
val_dataset = val_dataset.batch(64)
model.fit(
train_dataset,
epochs=1,
# Only run validation using the first 10 batches of the dataset
# using the `validation_steps` argument
validation_data=val_dataset,
validation_steps=10,
)
###Output
_____no_output_____
###Markdown
Note that the validation dataset will be reset after each use (so that you will always be evaluating on the same samples from epoch to epoch).

The argument `validation_split` (generating a holdout set from the training data) is not supported when training from `Dataset` objects, since this feature requires the ability to index the samples of the datasets, which is not possible in general with the `Dataset` API.

Other input formats supported

Besides NumPy arrays, eager tensors, and TensorFlow `Datasets`, it's possible to train a Keras model using Pandas dataframes, or from Python generators that yield batches of data & labels.

In particular, the `keras.utils.Sequence` class offers a simple interface to build Python data generators that are multiprocessing-aware and can be shuffled.

In general, we recommend that you use:

- NumPy input data if your data is small and fits in memory
- `Dataset` objects if you have large datasets and you need to do distributed training
- `Sequence` objects if you have large datasets and you need to do a lot of custom Python-side processing that cannot be done in TensorFlow (e.g. if you rely on external libraries for data loading or preprocessing).

Using a `keras.utils.Sequence` object as input

`keras.utils.Sequence` is a utility that you can subclass to obtain a Python generator with two important properties:

- It works well with multiprocessing.
- It can be shuffled (e.g. when passing `shuffle=True` in `fit()`).

A `Sequence` must implement two methods:

- `__getitem__`
- `__len__`

The method `__getitem__` should return a complete batch. If you want to modify your dataset between epochs, you may implement `on_epoch_end`.

Here's a quick example:

```python
import numpy as np
from skimage.io import imread
from skimage.transform import resize
from tensorflow.keras.utils import Sequence

# Here, `filenames` is a list of paths to the images
# and `labels` are the associated labels.

class CIFAR10Sequence(Sequence):
    def __init__(self, filenames, labels, batch_size):
        self.filenames, self.labels = filenames, labels
        self.batch_size = batch_size

    def __len__(self):
        return int(np.ceil(len(self.filenames) / float(self.batch_size)))

    def __getitem__(self, idx):
        batch_x = self.filenames[idx * self.batch_size : (idx + 1) * self.batch_size]
        batch_y = self.labels[idx * self.batch_size : (idx + 1) * self.batch_size]
        return (
            np.array([resize(imread(filename), (200, 200)) for filename in batch_x]),
            np.array(batch_y),
        )

sequence = CIFAR10Sequence(filenames, labels, batch_size)
model.fit(sequence, epochs=10)
```

Using sample weighting and class weighting

With the default settings, the weight of a sample is decided by its frequency in the dataset. There are two methods to weight the data, independent of sample frequency:

- Class weights
- Sample weights

Class weights

This is set by passing a dictionary to the `class_weight` argument of `Model.fit()`. This dictionary maps class indices to the weight that should be used for samples belonging to this class.

This can be used to balance classes without resampling, or to train a model that gives more importance to a particular class. For instance, if class "0" is half as represented as class "1" in your data, you could use `Model.fit(..., class_weight={0: 1., 1: 0.5})`.

Here's a NumPy example where we use class weights or sample weights to give more importance to the correct classification of class 5 (which is the digit "5" in the MNIST dataset).
###Code
import numpy as np
class_weight = {
0: 1.0,
1: 1.0,
2: 1.0,
3: 1.0,
4: 1.0,
# Set weight "2" for class "5",
# making this class 2x more important
5: 2.0,
6: 1.0,
7: 1.0,
8: 1.0,
9: 1.0,
}
print("Fit with class weight")
model = get_compiled_model()
model.fit(x_train, y_train, class_weight=class_weight, batch_size=64, epochs=1)
###Output
_____no_output_____
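###Markdown
Instead of hand-writing the dict, you can also derive balancing weights from the label frequencies. This is a minimal sketch (not part of the original example), assuming `y_train` holds non-negative integer class labels:

```python
import numpy as np

# Derive weights inversely proportional to class frequency, so that
# every class contributes roughly equally to the total loss.
counts = np.bincount(y_train.astype("int64"))
class_weight = {
    i: len(y_train) / (len(counts) * count) for i, count in enumerate(counts)
}
```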
###Markdown
Sample weights

For fine-grained control, or if you are not building a classifier, you can use "sample weights".

- When training from NumPy data: pass the `sample_weight` argument to `Model.fit()`.
- When training from `tf.data` or any other sort of iterator: yield `(input_batch, label_batch, sample_weight_batch)` tuples.

A "sample weights" array is an array of numbers that specify how much weight each sample in a batch should have in computing the total loss. It is commonly used in imbalanced classification problems (the idea being to give more weight to rarely-seen classes).

When the weights used are ones and zeros, the array can be used as a *mask* for the loss function (entirely discarding the contribution of certain samples to the total loss); see the sketch after the example below.
###Code
sample_weight = np.ones(shape=(len(y_train),))
sample_weight[y_train == 5] = 2.0
print("Fit with sample weight")
model = get_compiled_model()
model.fit(x_train, y_train, sample_weight=sample_weight, batch_size=64, epochs=1)
###Output
_____no_output_____
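###Markdown
As noted above, a weight array of ones and zeros acts as a mask. Here's a minimal sketch; the choice of which samples to discard is arbitrary and purely illustrative:

```python
import numpy as np

# Zero weights entirely discard those samples' contribution to the loss.
mask = np.ones(shape=(len(y_train),))
mask[:1000] = 0.0  # e.g. ignore the first 1,000 samples

model = get_compiled_model()
model.fit(x_train, y_train, sample_weight=mask, batch_size=64, epochs=1)
```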
###Markdown
Here's a matching `Dataset` example:
###Code
sample_weight = np.ones(shape=(len(y_train),))
sample_weight[y_train == 5] = 2.0
# Create a Dataset that includes sample weights
# (3rd element in the return tuple).
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train, sample_weight))
# Shuffle and slice the dataset.
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
model = get_compiled_model()
model.fit(train_dataset, epochs=1)
###Output
_____no_output_____
###Markdown
Passing data to multi-input, multi-output models

In the previous examples, we were considering a model with a single input (a tensor of shape `(784,)`) and a single output (a prediction tensor of shape `(10,)`). But what about models that have multiple inputs or outputs?

Consider the following model, which has an image input of shape `(32, 32, 3)` (that's `(height, width, channels)`) and a timeseries input of shape `(None, 10)` (that's `(timesteps, features)`). Our model will have two outputs computed from the combination of these inputs: a "score" (of shape `(1,)`) and a probability distribution over five classes (of shape `(5,)`).
###Code
image_input = keras.Input(shape=(32, 32, 3), name="img_input")
timeseries_input = keras.Input(shape=(None, 10), name="ts_input")
x1 = layers.Conv2D(3, 3)(image_input)
x1 = layers.GlobalMaxPooling2D()(x1)
x2 = layers.Conv1D(3, 3)(timeseries_input)
x2 = layers.GlobalMaxPooling1D()(x2)
x = layers.concatenate([x1, x2])
score_output = layers.Dense(1, name="score_output")(x)
class_output = layers.Dense(5, activation="softmax", name="class_output")(x)
model = keras.Model(
inputs=[image_input, timeseries_input], outputs=[score_output, class_output]
)
###Output
_____no_output_____
###Markdown
Let's plot this model, so you can clearly see what we're doing here (note that the shapes shown in the plot are batch shapes, rather than per-sample shapes).
###Code
keras.utils.plot_model(model, "multi_input_and_output_model.png", show_shapes=True)
###Output
_____no_output_____
###Markdown
At compilation time, we can specify different losses for different outputs, by passing the loss functions as a list:
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[keras.losses.MeanSquaredError(), keras.losses.CategoricalCrossentropy()],
)
###Output
_____no_output_____
###Markdown
If we only passed a single loss function to the model, the same loss function would be applied to every output (which is not appropriate here).

Likewise for metrics:
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[keras.losses.MeanSquaredError(), keras.losses.CategoricalCrossentropy()],
metrics=[
[
keras.metrics.MeanAbsolutePercentageError(),
keras.metrics.MeanAbsoluteError(),
],
[keras.metrics.CategoricalAccuracy()],
],
)
###Output
_____no_output_____
###Markdown
Since we gave names to our output layers, we could also specify per-output losses and metrics via a dict:
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss={
"score_output": keras.losses.MeanSquaredError(),
"class_output": keras.losses.CategoricalCrossentropy(),
},
metrics={
"score_output": [
keras.metrics.MeanAbsolutePercentageError(),
keras.metrics.MeanAbsoluteError(),
],
"class_output": [keras.metrics.CategoricalAccuracy()],
},
)
###Output
_____no_output_____
###Markdown
We recommend the use of explicit names and dicts if you have more than 2 outputs.

It's possible to give different weights to different output-specific losses (for instance, one might wish to privilege the "score" loss in our example, by giving it 2x the importance of the class loss), using the `loss_weights` argument:
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss={
"score_output": keras.losses.MeanSquaredError(),
"class_output": keras.losses.CategoricalCrossentropy(),
},
metrics={
"score_output": [
keras.metrics.MeanAbsolutePercentageError(),
keras.metrics.MeanAbsoluteError(),
],
"class_output": [keras.metrics.CategoricalAccuracy()],
},
loss_weights={"score_output": 2.0, "class_output": 1.0},
)
###Output
_____no_output_____
###Markdown
You could also choose not to compute a loss for certain outputs, if these outputs are meant for prediction but not for training:
###Code
# List loss version
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[None, keras.losses.CategoricalCrossentropy()],
)
# Or dict loss version
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss={"class_output": keras.losses.CategoricalCrossentropy()},
)
###Output
_____no_output_____
###Markdown
Passing data to a multi-input or multi-output model in `fit()` works in a similar way as specifying a loss function in `compile()`: you can pass **lists of NumPy arrays** (with 1:1 mapping to the outputs that received a loss function) or **dicts mapping output names to NumPy arrays**.
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[keras.losses.MeanSquaredError(), keras.losses.CategoricalCrossentropy()],
)
# Generate dummy NumPy data
img_data = np.random.random_sample(size=(100, 32, 32, 3))
ts_data = np.random.random_sample(size=(100, 20, 10))
score_targets = np.random.random_sample(size=(100, 1))
class_targets = np.random.random_sample(size=(100, 5))
# Fit on lists
model.fit([img_data, ts_data], [score_targets, class_targets], batch_size=32, epochs=1)
# Alternatively, fit on dicts
model.fit(
{"img_input": img_data, "ts_input": ts_data},
{"score_output": score_targets, "class_output": class_targets},
batch_size=32,
epochs=1,
)
###Output
_____no_output_____
###Markdown
Here's the `Dataset` use case: similarly to what we did for NumPy arrays, the `Dataset` should return a tuple of dicts.
###Code
train_dataset = tf.data.Dataset.from_tensor_slices(
(
{"img_input": img_data, "ts_input": ts_data},
{"score_output": score_targets, "class_output": class_targets},
)
)
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
model.fit(train_dataset, epochs=1)
###Output
_____no_output_____
###Markdown
Using callbacks

Callbacks in Keras are objects that are called at different points during training (at the start of an epoch, at the end of a batch, at the end of an epoch, etc.) and which can be used to implement behaviors such as:

- Doing validation at different points during training (beyond the built-in per-epoch validation)
- Checkpointing the model at regular intervals or when it exceeds a certain accuracy threshold
- Changing the learning rate of the model when training seems to be plateauing
- Doing fine-tuning of the top layers when training seems to be plateauing
- Sending email or instant message notifications when training ends or when a certain performance threshold is exceeded
- Etc.

Callbacks can be passed as a list to your call to `fit()`:
###Code
model = get_compiled_model()
callbacks = [
keras.callbacks.EarlyStopping(
# Stop training when `val_loss` is no longer improving
monitor="val_loss",
# "no longer improving" being defined as "no better than 1e-2 less"
min_delta=1e-2,
# "no longer improving" being further defined as "for at least 2 epochs"
patience=2,
verbose=1,
)
]
model.fit(
x_train,
y_train,
epochs=20,
batch_size=64,
callbacks=callbacks,
validation_split=0.2,
)
###Output
_____no_output_____
###Markdown
Many built-in callbacks are available

- `ModelCheckpoint`: Periodically save the model.
- `EarlyStopping`: Stop training when training is no longer improving the validation metrics.
- `TensorBoard`: Periodically write model logs that can be visualized in [TensorBoard](https://www.tensorflow.org/tensorboard) (more details in the section "Visualization").
- `CSVLogger`: Streams loss and metrics data to a CSV file.
- etc.

See the [callbacks documentation](/api/callbacks/) for the complete list.

Writing your own callback

You can create a custom callback by extending the base class `keras.callbacks.Callback`. A callback has access to its associated model through the class property `self.model`. Make sure to read the [complete guide to writing custom callbacks](/guides/writing_your_own_callbacks/).

Here's a simple example saving a list of per-batch loss values during training:
###Code
class LossHistory(keras.callbacks.Callback):
def on_train_begin(self, logs):
self.per_batch_losses = []
def on_batch_end(self, batch, logs):
self.per_batch_losses.append(logs.get("loss"))
###Output
_____no_output_____
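###Markdown
The `LossHistory` class above only defines the hooks; to actually collect the values, instantiate it and pass it to `fit()`. A minimal usage sketch:

```python
model = get_compiled_model()
history_callback = LossHistory()
model.fit(x_train, y_train, batch_size=64, epochs=1, callbacks=[history_callback])

# The recorded per-batch loss values are now available on the callback.
print("number of batches recorded:", len(history_callback.per_batch_losses))
```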
###Markdown
Checkpointing models

When you're training a model on relatively large datasets, it's crucial to save checkpoints of your model at frequent intervals. The easiest way to achieve this is with the `ModelCheckpoint` callback:
###Code
model = get_compiled_model()
callbacks = [
keras.callbacks.ModelCheckpoint(
# Path where to save the model
# The two parameters below mean that we will overwrite
# the current checkpoint if and only if
# the `val_loss` score has improved.
# The saved model name will include the current epoch.
filepath="mymodel_{epoch}",
save_best_only=True, # Only save a model if `val_loss` has improved.
monitor="val_loss",
verbose=1,
)
]
model.fit(
x_train, y_train, epochs=2, batch_size=64, callbacks=callbacks, validation_split=0.2
)
###Output
_____no_output_____
###Markdown
The `ModelCheckpoint` callback can be used to implement fault-tolerance: the ability to restart training from the last saved state of the model in case training gets randomly interrupted. Here's a basic example:
###Code
import os
# Prepare a directory to store all the checkpoints.
checkpoint_dir = "./ckpt"
if not os.path.exists(checkpoint_dir):
os.makedirs(checkpoint_dir)
def make_or_restore_model():
# Either restore the latest model, or create a fresh one
# if there is no checkpoint available.
checkpoints = [checkpoint_dir + "/" + name for name in os.listdir(checkpoint_dir)]
if checkpoints:
latest_checkpoint = max(checkpoints, key=os.path.getctime)
print("Restoring from", latest_checkpoint)
return keras.models.load_model(latest_checkpoint)
print("Creating a new model")
return get_compiled_model()
model = make_or_restore_model()
callbacks = [
# This callback saves a SavedModel every 100 batches.
# We include the training loss in the saved model name.
keras.callbacks.ModelCheckpoint(
filepath=checkpoint_dir + "/ckpt-loss={loss:.2f}", save_freq=100
)
]
model.fit(x_train, y_train, epochs=1, callbacks=callbacks)
###Output
_____no_output_____
###Markdown
You can also write your own callback for saving and restoring models. For a complete guide on serialization and saving, see the [guide to saving and serializing Models](/guides/serialization_and_saving/).

Using learning rate schedules

A common pattern when training deep learning models is to gradually reduce the learning rate as training progresses. This is generally known as "learning rate decay".

The learning rate decay schedule could be static (fixed in advance, as a function of the current epoch or the current batch index), or dynamic (responding to the current behavior of the model, in particular the validation loss).

Passing a schedule to an optimizer

You can easily use a static learning rate decay schedule by passing a schedule object as the `learning_rate` argument in your optimizer:
###Code
initial_learning_rate = 0.1
lr_schedule = keras.optimizers.schedules.ExponentialDecay(
initial_learning_rate, decay_steps=100000, decay_rate=0.96, staircase=True
)
optimizer = keras.optimizers.RMSprop(learning_rate=lr_schedule)
###Output
_____no_output_____
###Markdown
Several built-in schedules are available: `ExponentialDecay`, `PiecewiseConstantDecay`, `PolynomialDecay`, and `InverseTimeDecay`.

Using callbacks to implement a dynamic learning rate schedule

A dynamic learning rate schedule (for instance, decreasing the learning rate when the validation loss is no longer improving) cannot be achieved with these schedule objects, since the optimizer does not have access to validation metrics.

However, callbacks do have access to all metrics, including validation metrics! You can thus achieve this pattern by using a callback that modifies the current learning rate on the optimizer. In fact, this is even built-in as the `ReduceLROnPlateau` callback (a minimal sketch appears at the end of this guide).

Visualizing loss and metrics during training

The best way to keep an eye on your model during training is to use [TensorBoard](https://www.tensorflow.org/tensorboard), a browser-based application that you can run locally and that provides you with:

- Live plots of the loss and metrics for training and evaluation
- (optionally) Visualizations of the histograms of your layer activations
- (optionally) 3D visualizations of the embedding spaces learned by your `Embedding` layers

If you have installed TensorFlow with pip, you should be able to launch TensorBoard from the command line:

```
tensorboard --logdir=/full_path_to_your_logs
```

Using the TensorBoard callback

The easiest way to use TensorBoard with a Keras model and the fit method is the `TensorBoard` callback. In the simplest case, just specify where you want the callback to write logs, and you're good to go:
###Code
keras.callbacks.TensorBoard(
log_dir="/full_path_to_your_logs",
histogram_freq=0, # How often to log histogram visualizations
embeddings_freq=0, # How often to log embedding visualizations
update_freq="epoch",
) # How often to write logs (default: once per epoch)
###Output
_____no_output_____
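###Markdown
Finally, here's a minimal sketch of the dynamic learning rate pattern mentioned in the "Using learning rate schedules" section, using the built-in `ReduceLROnPlateau` callback (the `factor` and `patience` values are illustrative, not recommendations):

```python
model = get_compiled_model()
callbacks = [
    keras.callbacks.ReduceLROnPlateau(
        monitor="val_loss",  # Watch the validation loss...
        factor=0.5,  # ...and halve the learning rate
        patience=2,  # when it stops improving for 2 epochs.
        verbose=1,
    )
]
model.fit(
    x_train, y_train, epochs=5, batch_size=64, validation_split=0.2, callbacks=callbacks
)
```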
###Markdown
Training & evaluation with the built-in methods**Author:** [fchollet](https://twitter.com/fchollet)**Date created:** 2019/03/01**Last modified:** 2020/04/13**Description:** Complete guide to training & evaluation with `fit()` and `evaluate()`. Setup
###Code
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
###Output
_____no_output_____
###Markdown
IntroductionThis guide covers training, evaluation, and prediction (inference) modelswhen using built-in APIs for training & validation (such as `model.fit()`,`model.evaluate()`, `model.predict()`).If you are interested in leveraging `fit()` while specifying yourown training step function, see the guide["customizing what happens in `fit()`"](/guides/customizing_what_happens_in_fit/).If you are interested in writing your own training & evaluation loops fromscratch, see the guide["writing a training loop from scratch"](/guides/writing_a_training_loop_from_scratch/).In general, whether you are using built-in loops or writing your own, model training &evaluation works strictly in the same way across every kind of Keras model --Sequential models, models built with the Functional API, and models written fromscratch via model subclassing.This guide doesn't cover distributed training. For distributed training, seeour [guide to multi-gpu & distributed training](/guides/distributed_training/). API overview: a first end-to-end exampleWhen passing data to the built-in training loops of a model, you should either use**NumPy arrays** (if your data is small and fits in memory) or **`tf.data Dataset`objects**. In the next few paragraphs, we'll use the MNIST dataset as NumPy arrays, inorder to demonstrate how to use optimizers, losses, and metrics.Let's consider the following model (here, we build in with the Functional API, but itcould be a Sequential model or a subclassed model as well):
###Code
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, activation="softmax", name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
###Output
_____no_output_____
###Markdown
Here's what the typical end-to-end workflow looks like, consisting of:- Training- Validation on a holdout set generated from the original training data- Evaluation on the test dataWe'll use MNIST data for this example.
###Code
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
# Preprocess the data (these are NumPy arrays)
x_train = x_train.reshape(60000, 784).astype("float32") / 255
x_test = x_test.reshape(10000, 784).astype("float32") / 255
y_train = y_train.astype("float32")
y_test = y_test.astype("float32")
# Reserve 10,000 samples for validation
x_val = x_train[-10000:]
y_val = y_train[-10000:]
x_train = x_train[:-10000]
y_train = y_train[:-10000]
###Output
_____no_output_____
###Markdown
We specify the training configuration (optimizer, loss, metrics):
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(), # Optimizer
# Loss function to minimize
loss=keras.losses.SparseCategoricalCrossentropy(),
# List of metrics to monitor
metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
###Output
_____no_output_____
###Markdown
We call `fit()`, which will train the model by slicing the data into "batches" of size"batch_size", and repeatedly iterating over the entire dataset for a given number of"epochs".
###Code
print("Fit model on training data")
history = model.fit(
x_train,
y_train,
batch_size=64,
epochs=2,
# We pass some validation for
# monitoring validation loss and metrics
# at the end of each epoch
validation_data=(x_val, y_val),
)
###Output
_____no_output_____
###Markdown
The returned "history" object holds a record of the loss values and metric valuesduring training:
###Code
history.history
###Output
_____no_output_____
###Markdown
We evaluate the model on the test data via `evaluate()`:
###Code
# Evaluate the model on the test data using `evaluate`
print("Evaluate on test data")
results = model.evaluate(x_test, y_test, batch_size=128)
print("test loss, test acc:", results)
# Generate predictions (probabilities -- the output of the last layer)
# on new data using `predict`
print("Generate predictions for 3 samples")
predictions = model.predict(x_test[:3])
print("predictions shape:", predictions.shape)
###Output
_____no_output_____
###Markdown
Now, let's review each piece of this workflow in detail. The `compile()` method: specifying a loss, metrics, and an optimizerTo train a model with `fi()`t, you need to specify a loss function, an optimizer, andoptionally, some metrics to monitor.You pass these to the model as arguments to the `compile()` method:
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(),
metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
###Output
_____no_output_____
###Markdown
The `metrics` argument should be a list -- you model can have any number of metrics.If your model has multiple outputs, your can specify different losses and metrics foreach output, and you can modulate the contribution of each output to the total loss ofthe model. You will find more details about this in the section **"Passing data tomulti-input, multi-output models"**.Note that if you're satisfied with the default settings, in many cases the optimizer,loss, and metrics can be specified via string identifiers as a shortcut:
###Code
model.compile(
optimizer="rmsprop",
loss="sparse_categorical_crossentropy",
metrics=["sparse_categorical_accuracy"],
)
###Output
_____no_output_____
###Markdown
For later reuse, let's put our model definition and compile step in functions; we willcall them several times across different examples in this guide.
###Code
def get_uncompiled_model():
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, activation="softmax", name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
return model
def get_compiled_model():
model = get_uncompiled_model()
model.compile(
optimizer="rmsprop",
loss="sparse_categorical_crossentropy",
metrics=["sparse_categorical_accuracy"],
)
return model
###Output
_____no_output_____
###Markdown
Many built-in optimizers, losses, and metrics are availableIn general, you won't have to create from scratch your own losses, metrics, oroptimizers, because what you need is likely already part of the Keras API:Optimizers:- `SGD()` (with or without momentum)- `RMSprop()`- `Adam()`- etc.Losses:- `MeanSquaredError()`- `KLDivergence()`- `CosineSimilarity()`- etc.Metrics:- `AUC()`- `Precision()`- `Recall()`- etc. Custom lossesThere are two ways to provide custom losses with Keras. The first example creates afunction that accepts inputs `y_true` and `y_pred`. The following example shows a lossfunction that computes the mean squared error between the real data and thepredictions:
###Code
def custom_mean_squared_error(y_true, y_pred):
return tf.math.reduce_mean(tf.square(y_true - y_pred))
model = get_uncompiled_model()
model.compile(optimizer=keras.optimizers.Adam(), loss=custom_mean_squared_error)
# We need to one-hot encode the labels to use MSE
y_train_one_hot = tf.one_hot(y_train, depth=10)
model.fit(x_train, y_train_one_hot, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
If you need a loss function that takes in parameters beside `y_true` and `y_pred`, youcan subclass the `tf.keras.losses.Loss` class and implement the following two methods:- `__init__(self)`: accept parameters to pass during the call of your loss function- `call(self, y_true, y_pred)`: use the targets (y_true) and the model predictions(y_pred) to compute the model's lossLet's say you want to use mean squared error, but with an added term thatwill de-incentivize prediction values far from 0.5 (we assume that the categoricaltargets are one-hot encoded and take values between 0 and 1). Thiscreates an incentive for the model not to be too confident, which may helpreduce overfitting (we won't know if it works until we try!).Here's how you would do it:
###Code
class CustomMSE(keras.losses.Loss):
def __init__(self, regularization_factor=0.1, name="custom_mse"):
super().__init__(name=name)
self.regularization_factor = regularization_factor
def call(self, y_true, y_pred):
mse = tf.math.reduce_mean(tf.square(y_true - y_pred))
reg = tf.math.reduce_mean(tf.square(0.5 - y_pred))
return mse + reg * self.regularization_factor
model = get_uncompiled_model()
model.compile(optimizer=keras.optimizers.Adam(), loss=CustomMSE())
y_train_one_hot = tf.one_hot(y_train, depth=10)
model.fit(x_train, y_train_one_hot, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
Custom metricsIf you need a metric that isn't part of the API, you can easily create custom metricsby subclassing the `tf.keras.metrics.Metric` class. You will need to implement 4methods:- `__init__(self)`, in which you will create state variables for your metric.- `update_state(self, y_true, y_pred, sample_weight=None)`, which uses the targetsy_true and the model predictions y_pred to update the state variables.- `result(self)`, which uses the state variables to compute the final results.- `reset_states(self)`, which reinitializes the state of the metric.State update and results computation are kept separate (in `update_state()` and`result()`, respectively) because in some cases, results computation might be veryexpensive, and would only be done periodically.Here's a simple example showing how to implement a `CategoricalTruePositives` metric,that counts how many samples where correctly classified as belonging to a given class:
###Code
class CategoricalTruePositives(keras.metrics.Metric):
def __init__(self, name="categorical_true_positives", **kwargs):
super(CategoricalTruePositives, self).__init__(name=name, **kwargs)
self.true_positives = self.add_weight(name="ctp", initializer="zeros")
def update_state(self, y_true, y_pred, sample_weight=None):
y_pred = tf.reshape(tf.argmax(y_pred, axis=1), shape=(-1, 1))
values = tf.cast(y_true, "int32") == tf.cast(y_pred, "int32")
values = tf.cast(values, "float32")
if sample_weight is not None:
sample_weight = tf.cast(sample_weight, "float32")
values = tf.multiply(values, sample_weight)
self.true_positives.assign_add(tf.reduce_sum(values))
def result(self):
return self.true_positives
def reset_states(self):
# The state of the metric will be reset at the start of each epoch.
self.true_positives.assign(0.0)
model = get_uncompiled_model()
model.compile(
optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(),
metrics=[CategoricalTruePositives()],
)
model.fit(x_train, y_train, batch_size=64, epochs=3)
###Output
_____no_output_____
###Markdown
Handling losses and metrics that don't fit the standard signatureThe overwhelming majority of losses and metrics can be computed from y_true and`y_pred`, where `y_pred` is an output of your model. But not all of them. Forinstance, a regularization loss may only require the activation of a layer (there areno targets in this case), and this activation may not be a model output.In such cases, you can call `self.add_loss(loss_value)` from inside the call method ofa custom layer. Losses added in this way get added to the "main" loss during training(the one passed to `compile()`). Here's a simple example that adds activityregularization (note that activity regularization is built-in in all Keras layers --this layer is just for the sake of providing a concrete example):
###Code
class ActivityRegularizationLayer(layers.Layer):
def call(self, inputs):
self.add_loss(tf.reduce_sum(inputs) * 0.1)
return inputs # Pass-through layer.
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)
# Insert activity regularization as a layer
x = ActivityRegularizationLayer()(x)
x = layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(
optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
# The displayed loss will be much higher than before
# due to the regularization component.
model.fit(x_train, y_train, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
You can do the same for logging metric values, using `add_metric()`:
###Code
class MetricLoggingLayer(layers.Layer):
def call(self, inputs):
# The `aggregation` argument defines
# how to aggregate the per-batch values
# over each epoch:
# in this case we simply average them.
self.add_metric(
keras.backend.std(inputs), name="std_of_activation", aggregation="mean"
)
return inputs # Pass-through layer.
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)
# Insert std logging as a layer.
x = MetricLoggingLayer()(x)
x = layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(
optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
model.fit(x_train, y_train, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
In the [Functional API](/guides/functional_api/),you can also call `model.add_loss(loss_tensor)`,or `model.add_metric(metric_tensor, name, aggregation)`.Here's a simple example:
###Code
inputs = keras.Input(shape=(784,), name="digits")
x1 = layers.Dense(64, activation="relu", name="dense_1")(inputs)
x2 = layers.Dense(64, activation="relu", name="dense_2")(x1)
outputs = layers.Dense(10, name="predictions")(x2)
model = keras.Model(inputs=inputs, outputs=outputs)
model.add_loss(tf.reduce_sum(x1) * 0.1)
model.add_metric(keras.backend.std(x1), name="std_of_activation", aggregation="mean")
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
model.fit(x_train, y_train, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
Note that when you pass losses via `add_loss()`, it becomes possible to call`compile()` without a loss function, since the model already has a loss to minimize.Consider the following `LogisticEndpoint` layer: it takes as inputstargets & logits, and it tracks a crossentropy loss via `add_loss()`. It alsotracks classification accuracy via `add_metric()`.
###Code
class LogisticEndpoint(keras.layers.Layer):
def __init__(self, name=None):
super(LogisticEndpoint, self).__init__(name=name)
self.loss_fn = keras.losses.BinaryCrossentropy(from_logits=True)
self.accuracy_fn = keras.metrics.BinaryAccuracy()
def call(self, targets, logits, sample_weights=None):
# Compute the training-time loss value and add it
# to the layer using `self.add_loss()`.
loss = self.loss_fn(targets, logits, sample_weights)
self.add_loss(loss)
# Log accuracy as a metric and add it
# to the layer using `self.add_metric()`.
acc = self.accuracy_fn(targets, logits, sample_weights)
self.add_metric(acc, name="accuracy")
# Return the inference-time prediction tensor (for `.predict()`).
return tf.nn.softmax(logits)
###Output
_____no_output_____
###Markdown
You can use it in a model with two inputs (input data & targets), compiled without a`loss` argument, like this:
###Code
import numpy as np
inputs = keras.Input(shape=(3,), name="inputs")
targets = keras.Input(shape=(10,), name="targets")
logits = keras.layers.Dense(10)(inputs)
predictions = LogisticEndpoint(name="predictions")(logits, targets)
model = keras.Model(inputs=[inputs, targets], outputs=predictions)
model.compile(optimizer="adam") # No loss argument!
data = {
"inputs": np.random.random((3, 3)),
"targets": np.random.random((3, 10)),
}
model.fit(data)
###Output
_____no_output_____
###Markdown
For more information about training multi-input models, see the section **Passing datato multi-input, multi-output models**. Automatically setting apart a validation holdout setIn the first end-to-end example you saw, we used the `validation_data` argument to passa tuple of NumPy arrays `(x_val, y_val)` to the model for evaluating a validation lossand validation metrics at the end of each epoch.Here's another option: the argument `validation_split` allows you to automaticallyreserve part of your training data for validation. The argument value represents thefraction of the data to be reserved for validation, so it should be set to a numberhigher than 0 and lower than 1. For instance, validation_split=0.2`` means "use 20% ofthe data for validation", and `validation_split=0.6` means "use 60% of the data forvalidation".The way the validation is computed is by taking the last x% samples of the arraysreceived by the fit call, before any shuffling.Note that you can only use `validation_split` when training with NumPy data.
###Code
model = get_compiled_model()
model.fit(x_train, y_train, batch_size=64, validation_split=0.2, epochs=1)
###Output
_____no_output_____
###Markdown
Training & evaluation from tf.data DatasetsIn the past few paragraphs, you've seen how to handle losses, metrics, and optimizers,and you've seen how to use the `validation_data` and `validation_split` arguments infit, when your data is passed as NumPy arrays.Let's now take a look at the case where your data comes in the form of a`tf.data.Dataset` object.The `tf.data` API is a set of utilities in TensorFlow 2.0 for loading and preprocessingdata in a way that's fast and scalable.For a complete guide about creating `Datasets`, see the[tf.data documentation](https://www.tensorflow.org/guide/data).You can pass a `Dataset` instance directly to the methods `fit()`, `evaluate()`, and`predict()`:
###Code
model = get_compiled_model()
# First, let's create a training Dataset instance.
# For the sake of our example, we'll use the same MNIST data as before.
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
# Shuffle and slice the dataset.
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
# Now we get a test dataset.
test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test))
test_dataset = test_dataset.batch(64)
# Since the dataset already takes care of batching,
# we don't pass a `batch_size` argument.
model.fit(train_dataset, epochs=3)
# You can also evaluate or predict on a dataset.
print("Evaluate")
result = model.evaluate(test_dataset)
dict(zip(model.metrics_names, result))
###Output
_____no_output_____
###Markdown
Note that the Dataset is reset at the end of each epoch, so it can be reused of thenext epoch.If you want to run training only on a specific number of batches from this Dataset, youcan pass the `steps_per_epoch` argument, which specifies how many training steps themodel should run using this Dataset before moving on to the next epoch.If you do this, the dataset is not reset at the end of each epoch, instead we just keepdrawing the next batches. The dataset will eventually run out of data (unless it is aninfinitely-looping dataset).
###Code
model = get_compiled_model()
# Prepare the training dataset
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
# Only use the 100 batches per epoch (that's 64 * 100 samples)
model.fit(train_dataset, epochs=3, steps_per_epoch=100)
###Output
_____no_output_____
###Markdown
Using a validation datasetYou can pass a `Dataset` instance as the `validation_data` argument in `fit()`:
###Code
model = get_compiled_model()
# Prepare the training dataset
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
# Prepare the validation dataset
val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))
val_dataset = val_dataset.batch(64)
model.fit(train_dataset, epochs=1, validation_data=val_dataset)
###Output
_____no_output_____
###Markdown
At the end of each epoch, the model will iterate over the validation dataset andcompute the validation loss and validation metrics.If you want to run validation only on a specific number of batches from this dataset,you can pass the `validation_steps` argument, which specifies how many validationsteps the model should run with the validation dataset before interrupting validationand moving on to the next epoch:
###Code
model = get_compiled_model()
# Prepare the training dataset
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
# Prepare the validation dataset
val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))
val_dataset = val_dataset.batch(64)
model.fit(
train_dataset,
epochs=1,
# Only run validation using the first 10 batches of the dataset
# using the `validation_steps` argument
validation_data=val_dataset,
validation_steps=10,
)
###Output
_____no_output_____
###Markdown
Note that the validation dataset will be reset after each use (so that you will alwaysbe evaluating on the same samples from epoch to epoch).The argument `validation_split` (generating a holdout set from the training data) isnot supported when training from `Dataset` objects, since this features requires theability to index the samples of the datasets, which is not possible in general withthe `Dataset` API. Other input formats supportedBesides NumPy arrays, eager tensors, and TensorFlow `Datasets`, it's possible to traina Keras model using Pandas dataframes, or from Python generators that yield batches ofdata & labels.In particular, the `keras.utils.Sequence` class offers a simple interface to buildPython data generators that are multiprocessing-aware and can be shuffled.In general, we recommend that you use:- NumPy input data if your data is small and fits in memory- `Dataset` objects if you have large datasets and you need to do distributed training- `Sequence` objects if you have large datasets and you need to do a lot of customPython-side processing that cannot be done in TensorFlow (e.g. if you rely on external librariesfor data loading or preprocessing). Using a `keras.utils.Sequence` object as input`keras.utils.Sequence` is a utility that you can subclass to obtain a Python generator withtwo important properties:- It works well with multiprocessing.- It can be shuffled (e.g. when passing `shuffle=True` in `fit()`).A `Sequence` must implement two methods:- `__getitem__`- `__len__`The method `__getitem__` should return a complete batch.If you want to modify your dataset between epochs, you may implement `on_epoch_end`.Here's a quick example:```pythonfrom skimage.io import imreadfrom skimage.transform import resizeimport numpy as np Here, `filenames` is list of path to the images and `labels` are the associated labels.class CIFAR10Sequence(Sequence): def __init__(self, filenames, labels, batch_size): self.filenames, self.labels = filenames, labels self.batch_size = batch_size def __len__(self): return int(np.ceil(len(self.filenames) / float(self.batch_size))) def __getitem__(self, idx): batch_x = self.filenames[idx * self.batch_size:(idx + 1) * self.batch_size] batch_y = self.labels[idx * self.batch_size:(idx + 1) * self.batch_size] return np.array([ resize(imread(filename), (200, 200)) for filename in batch_x]), np.array(batch_y)sequence = CIFAR10Sequence(filenames, labels, batch_size)model.fit(sequence, epochs=10)``` Using sample weighting and class weightingBesides input data and target data, it is possible to pass sample weights or classweights to a model when using fit:- When training from NumPy data: via the `sample_weight` and `class_weight` arguments.- When training from `Dataset` objects: by having the `Dataset` return a tuple`(input_batch, target_batch, sample_weight_batch)`.A "sample weights" array is an array of numbers that specify how much weight eachsample in a batch should have in computing the total loss. It is commonly used inimbalanced classification problems (the idea being to give more weight to rarely-seenclasses). 
Using sample weighting and class weighting

Besides input data and target data, it is possible to pass sample weights or class weights to a model when using fit:

- When training from NumPy data: via the `sample_weight` and `class_weight` arguments.
- When training from `Dataset` objects: by having the `Dataset` return a tuple `(input_batch, target_batch, sample_weight_batch)`.

A "sample weights" array is an array of numbers that specify how much weight each sample in a batch should have in computing the total loss. It is commonly used in imbalanced classification problems (the idea being to give more weight to rarely-seen classes). When the weights used are ones and zeros, the array can be used as a mask for the loss function (entirely discarding the contribution of certain samples to the total loss).

A "class weights" dict is a more specific instance of the same concept: it maps class indices to the sample weight that should be used for samples belonging to this class. For instance, if class "0" is represented half as often as class "1" in your data, you could use `class_weight={0: 1., 1: 0.5}`.

Here's a NumPy example where we use class weights or sample weights to give more importance to the correct classification of class 5 (which is the digit "5" in the MNIST dataset).
###Code
import numpy as np
class_weight = {
0: 1.0,
1: 1.0,
2: 1.0,
3: 1.0,
4: 1.0,
# Set weight "2" for class "5",
# making this class 2x more important
5: 2.0,
6: 1.0,
7: 1.0,
8: 1.0,
9: 1.0,
}
print("Fit with class weight")
model = get_compiled_model()
model.fit(x_train, y_train, class_weight=class_weight, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
Here's the same example using `sample_weight` instead:
###Code
sample_weight = np.ones(shape=(len(y_train),))
sample_weight[y_train == 5] = 2.0
print("Fit with sample weight")
model = get_compiled_model()
model.fit(x_train, y_train, sample_weight=sample_weight, batch_size=64, epochs=1)
###Output
_____no_output_____
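###Markdown
Since weights of zero discard samples from the loss entirely, the same `sample_weight` mechanism doubles as the mask described above. A minimal sketch (excluding the digit "5" here is purely illustrative):

```python
# A 0/1 sample-weight array acts as a mask: samples of class 5
# contribute nothing to the total loss.
mask_weight = np.ones(shape=(len(y_train),))
mask_weight[y_train == 5] = 0.0
model = get_compiled_model()
model.fit(x_train, y_train, sample_weight=mask_weight, batch_size=64, epochs=1)
```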
###Markdown
Here's a matching `Dataset` example:
###Code
sample_weight = np.ones(shape=(len(y_train),))
sample_weight[y_train == 5] = 2.0
# Create a Dataset that includes sample weights
# (3rd element in the return tuple).
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train, sample_weight))
# Shuffle and slice the dataset.
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
model = get_compiled_model()
model.fit(train_dataset, epochs=1)
###Output
_____no_output_____
###Markdown
Passing data to multi-input, multi-output models

In the previous examples, we were considering a model with a single input (a tensor of shape `(784,)`) and a single output (a prediction tensor of shape `(10,)`). But what about models that have multiple inputs or outputs?

Consider the following model, which has an image input of shape `(32, 32, 3)` (that's `(height, width, channels)`) and a timeseries input of shape `(None, 10)` (that's `(timesteps, features)`). Our model will have two outputs computed from the combination of these inputs: a "score" (of shape `(1,)`) and a probability distribution over five classes (of shape `(5,)`).
###Code
image_input = keras.Input(shape=(32, 32, 3), name="img_input")
timeseries_input = keras.Input(shape=(None, 10), name="ts_input")
x1 = layers.Conv2D(3, 3)(image_input)
x1 = layers.GlobalMaxPooling2D()(x1)
x2 = layers.Conv1D(3, 3)(timeseries_input)
x2 = layers.GlobalMaxPooling1D()(x2)
x = layers.concatenate([x1, x2])
score_output = layers.Dense(1, name="score_output")(x)
class_output = layers.Dense(5, activation="softmax", name="class_output")(x)
model = keras.Model(
inputs=[image_input, timeseries_input], outputs=[score_output, class_output]
)
###Output
_____no_output_____
###Markdown
Let's plot this model, so you can clearly see what we're doing here (note that the shapes shown in the plot are batch shapes, rather than per-sample shapes).
###Code
keras.utils.plot_model(model, "multi_input_and_output_model.png", show_shapes=True)
###Output
_____no_output_____
###Markdown
At compilation time, we can specify different losses for different outputs, by passing the loss functions as a list:
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[keras.losses.MeanSquaredError(), keras.losses.CategoricalCrossentropy()],
)
###Output
_____no_output_____
###Markdown
If we only passed a single loss function to the model, the same loss function would be applied to every output (which is not appropriate here). Likewise for metrics:
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[keras.losses.MeanSquaredError(), keras.losses.CategoricalCrossentropy()],
metrics=[
[
keras.metrics.MeanAbsolutePercentageError(),
keras.metrics.MeanAbsoluteError(),
],
[keras.metrics.CategoricalAccuracy()],
],
)
###Output
_____no_output_____
###Markdown
Since we gave names to our output layers, we could also specify per-output losses and metrics via a dict:
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss={
"score_output": keras.losses.MeanSquaredError(),
"class_output": keras.losses.CategoricalCrossentropy(),
},
metrics={
"score_output": [
keras.metrics.MeanAbsolutePercentageError(),
keras.metrics.MeanAbsoluteError(),
],
"class_output": [keras.metrics.CategoricalAccuracy()],
},
)
###Output
_____no_output_____
###Markdown
We recommend the use of explicit names and dicts if you have more than 2 outputs.

It's possible to give different weights to different output-specific losses (for instance, one might wish to privilege the "score" loss in our example, by giving it 2x the importance of the class loss), using the `loss_weights` argument:
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss={
"score_output": keras.losses.MeanSquaredError(),
"class_output": keras.losses.CategoricalCrossentropy(),
},
metrics={
"score_output": [
keras.metrics.MeanAbsolutePercentageError(),
keras.metrics.MeanAbsoluteError(),
],
"class_output": [keras.metrics.CategoricalAccuracy()],
},
loss_weights={"score_output": 2.0, "class_output": 1.0},
)
###Output
_____no_output_____
###Markdown
You could also choose not to compute a loss for certain outputs, if these outputs are meant for prediction but not for training:
###Code
# List loss version
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[None, keras.losses.CategoricalCrossentropy()],
)
# Or dict loss version
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss={"class_output": keras.losses.CategoricalCrossentropy()},
)
###Output
_____no_output_____
###Markdown
Passing data to a multi-input or multi-output model in `fit()` works in a similar way as specifying a loss function in `compile()`: you can pass **lists of NumPy arrays** (with 1:1 mapping to the outputs that received a loss function) or **dicts mapping output names to NumPy arrays**.
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[keras.losses.MeanSquaredError(), keras.losses.CategoricalCrossentropy()],
)
# Generate dummy NumPy data
img_data = np.random.random_sample(size=(100, 32, 32, 3))
ts_data = np.random.random_sample(size=(100, 20, 10))
score_targets = np.random.random_sample(size=(100, 1))
class_targets = np.random.random_sample(size=(100, 5))
# Fit on lists
model.fit([img_data, ts_data], [score_targets, class_targets], batch_size=32, epochs=1)
# Alternatively, fit on dicts
model.fit(
{"img_input": img_data, "ts_input": ts_data},
{"score_output": score_targets, "class_output": class_targets},
batch_size=32,
epochs=1,
)
###Output
_____no_output_____
###Markdown
Here's the `Dataset` use case: similar to what we did for NumPy arrays, the `Dataset` should return a tuple of dicts.
###Code
train_dataset = tf.data.Dataset.from_tensor_slices(
(
{"img_input": img_data, "ts_input": ts_data},
{"score_output": score_targets, "class_output": class_targets},
)
)
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
model.fit(train_dataset, epochs=1)
###Output
_____no_output_____
###Markdown
Using callbacks

Callbacks in Keras are objects that are called at different points during training (at the start of an epoch, at the end of a batch, at the end of an epoch, etc.) and which can be used to implement behaviors such as:

- Doing validation at different points during training (beyond the built-in per-epoch validation)
- Checkpointing the model at regular intervals or when it exceeds a certain accuracy threshold
- Changing the learning rate of the model when training seems to be plateauing
- Doing fine-tuning of the top layers when training seems to be plateauing
- Sending email or instant message notifications when training ends or when a certain performance threshold is exceeded
- Etc.

Callbacks can be passed as a list to your call to `fit()`:
###Code
model = get_compiled_model()
callbacks = [
keras.callbacks.EarlyStopping(
# Stop training when `val_loss` is no longer improving
monitor="val_loss",
# "no longer improving" being defined as "no better than 1e-2 less"
min_delta=1e-2,
# "no longer improving" being further defined as "for at least 2 epochs"
patience=2,
verbose=1,
)
]
model.fit(
x_train,
y_train,
epochs=20,
batch_size=64,
callbacks=callbacks,
validation_split=0.2,
)
###Output
_____no_output_____
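###Markdown
A common variant also rolls the model back to the weights from its best epoch when training stops. A minimal sketch (`restore_best_weights` is an argument of the built-in `EarlyStopping` callback; the other values are illustrative):

```python
callbacks = [
    keras.callbacks.EarlyStopping(
        monitor="val_loss",
        patience=2,
        # Restore the weights from the epoch with the best `val_loss`,
        # instead of keeping the weights from the final epoch.
        restore_best_weights=True,
    )
]
```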
###Markdown
Many built-in callbacks are available

- `ModelCheckpoint`: Periodically save the model.
- `EarlyStopping`: Stop training when training is no longer improving the validation metrics.
- `TensorBoard`: Periodically write model logs that can be visualized in [TensorBoard](https://www.tensorflow.org/tensorboard) (more details in the section "Visualization").
- `CSVLogger`: Streams loss and metrics data to a CSV file.
- etc.

See the [callbacks documentation](/api/callbacks/) for the complete list.

Writing your own callback

You can create a custom callback by extending the base class `keras.callbacks.Callback`. A callback has access to its associated model through the class property `self.model`. Make sure to read the [complete guide to writing custom callbacks](/guides/writing_your_own_callbacks/).

Here's a simple example saving a list of per-batch loss values during training:
###Code
class LossHistory(keras.callbacks.Callback):
    def on_train_begin(self, logs):
        # Start each training run with a fresh, empty list of losses.
        self.per_batch_losses = []
    def on_batch_end(self, batch, logs):
        # Record the loss reported for the batch that just finished.
        self.per_batch_losses.append(logs.get("loss"))
###Output
_____no_output_____
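###Markdown
A quick usage sketch, reusing the model and MNIST arrays from earlier in this guide:

```python
history_cb = LossHistory()
model = get_compiled_model()
model.fit(x_train, y_train, batch_size=64, epochs=1, callbacks=[history_cb])
# After training, the callback holds one loss value per batch.
print("First 5 per-batch losses:", history_cb.per_batch_losses[:5])
```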
###Markdown
Checkpointing models

When you're training a model on relatively large datasets, it's crucial to save checkpoints of your model at frequent intervals. The easiest way to achieve this is with the `ModelCheckpoint` callback:
###Code
model = get_compiled_model()
callbacks = [
keras.callbacks.ModelCheckpoint(
# Path where to save the model
# The two parameters below mean that we will overwrite
# the current checkpoint if and only if
# the `val_loss` score has improved.
# The saved model name will include the current epoch.
filepath="mymodel_{epoch}",
save_best_only=True, # Only save a model if `val_loss` has improved.
monitor="val_loss",
verbose=1,
)
]
model.fit(
x_train, y_train, epochs=2, batch_size=64, callbacks=callbacks, validation_split=0.2
)
###Output
_____no_output_____
###Markdown
The `ModelCheckpoint` callback can be used to implement fault-tolerance: the ability to restart training from the last saved state of the model in case training gets randomly interrupted. Here's a basic example:
###Code
import os
# Prepare a directory to store all the checkpoints.
checkpoint_dir = "./ckpt"
if not os.path.exists(checkpoint_dir):
os.makedirs(checkpoint_dir)
def make_or_restore_model():
# Either restore the latest model, or create a fresh one
# if there is no checkpoint available.
checkpoints = [checkpoint_dir + "/" + name for name in os.listdir(checkpoint_dir)]
if checkpoints:
latest_checkpoint = max(checkpoints, key=os.path.getctime)
print("Restoring from", latest_checkpoint)
return keras.models.load_model(latest_checkpoint)
print("Creating a new model")
return get_compiled_model()
model = make_or_restore_model()
callbacks = [
# This callback saves a SavedModel every 100 batches.
# We include the training loss in the saved model name.
keras.callbacks.ModelCheckpoint(
filepath=checkpoint_dir + "/ckpt-loss={loss:.2f}", save_freq=100
)
]
model.fit(x_train, y_train, epochs=1, callbacks=callbacks)
###Output
_____no_output_____
###Markdown
You can also write your own callback for saving and restoring models. For a complete guide on serialization and saving, see the [guide to saving and serializing Models](/guides/serialization_and_saving/).

Using learning rate schedules

A common pattern when training deep learning models is to gradually reduce the learning rate as training progresses. This is generally known as "learning rate decay".

The learning rate decay schedule could be static (fixed in advance, as a function of the current epoch or the current batch index), or dynamic (responding to the current behavior of the model, in particular the validation loss).

Passing a schedule to an optimizer

You can easily use a static learning rate decay schedule by passing a schedule object as the `learning_rate` argument in your optimizer:
###Code
initial_learning_rate = 0.1
lr_schedule = keras.optimizers.schedules.ExponentialDecay(
initial_learning_rate, decay_steps=100000, decay_rate=0.96, staircase=True
)
optimizer = keras.optimizers.RMSprop(learning_rate=lr_schedule)
###Output
_____no_output_____
###Markdown
Several built-in schedules are available: `ExponentialDecay`, `PiecewiseConstantDecay`, `PolynomialDecay`, and `InverseTimeDecay`.

Using callbacks to implement a dynamic learning rate schedule

A dynamic learning rate schedule (for instance, decreasing the learning rate when the validation loss is no longer improving) cannot be achieved with these schedule objects, since the optimizer does not have access to validation metrics.

However, callbacks do have access to all metrics, including validation metrics! You can thus achieve this pattern by using a callback that modifies the current learning rate on the optimizer. In fact, this is even built-in as the `ReduceLROnPlateau` callback (see the sketch at the end of this guide).

Visualizing loss and metrics during training

The best way to keep an eye on your model during training is to use [TensorBoard](https://www.tensorflow.org/tensorboard), a browser-based application that you can run locally that provides you with:

- Live plots of the loss and metrics for training and evaluation
- (optionally) Visualizations of the histograms of your layer activations
- (optionally) 3D visualizations of the embedding spaces learned by your `Embedding` layers

If you have installed TensorFlow with pip, you should be able to launch TensorBoard from the command line:

```
tensorboard --logdir=/full_path_to_your_logs
```

Using the TensorBoard callback

The easiest way to use TensorBoard with a Keras model and the fit method is the `TensorBoard` callback. In the simplest case, just specify where you want the callback to write logs, and you're good to go:
###Code
keras.callbacks.TensorBoard(
    log_dir="/full_path_to_your_logs",
    histogram_freq=0,  # How often to log histogram visualizations
    embeddings_freq=0,  # How often to log embedding visualizations
    update_freq="epoch",  # How often to write logs (default: once per epoch)
)
###Output
_____no_output_____
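###Markdown
Finally, here is the sketch promised above: the dynamic learning rate pattern via the built-in `ReduceLROnPlateau` callback (the `factor`, `patience`, and `min_lr` values are illustrative, not recommendations):

```python
callbacks = [
    keras.callbacks.ReduceLROnPlateau(
        monitor="val_loss",  # Watch the validation loss...
        factor=0.5,          # ...and halve the learning rate
        patience=3,          # after 3 epochs without improvement,
        min_lr=1e-6,         # never going below this learning rate.
    )
]
model = get_compiled_model()
model.fit(x_train, y_train, epochs=20, batch_size=64,
          validation_split=0.2, callbacks=callbacks)
```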
Note that the validation dataset will be reset after each use (so that you will alwaysbe evaluating on the same samples from epoch to epoch).The argument `validation_split` (generating a holdout set from the training data) isnot supported when training from `Dataset` objects, since this features requires theability to index the samples of the datasets, which is not possible in general withthe `Dataset` API. Other input formats supportedBesides NumPy arrays, eager tensors, and TensorFlow `Datasets`, it's possible to traina Keras model using Pandas dataframes, or from Python generators that yield batches ofdata & labels.In particular, the `keras.utils.Sequence` class offers a simple interface to buildPython data generators that are multiprocessing-aware and can be shuffled.In general, we recommend that you use:- NumPy input data if your data is small and fits in memory- `Dataset` objects if you have large datasets and you need to do distributed training- `Sequence` objects if you have large datasets and you need to do a lot of customPython-side processing that cannot be done in TensorFlow (e.g. if you rely on external librariesfor data loading or preprocessing). Using a `keras.utils.Sequence` object as input`keras.utils.Sequence` is a utility that you can subclass to obtain a Python generator withtwo important properties:- It works well with multiprocessing.- It can be shuffled (e.g. when passing `shuffle=True` in `fit()`).A `Sequence` must implement two methods:- `__getitem__`- `__len__`The method `__getitem__` should return a complete batch.If you want to modify your dataset between epochs, you may implement `on_epoch_end`.Here's a quick example:```pythonfrom skimage.io import imreadfrom skimage.transform import resizeimport numpy as np Here, `filenames` is list of path to the images and `labels` are the associated labels.class CIFAR10Sequence(Sequence): def __init__(self, filenames, labels, batch_size): self.filenames, self.labels = filenames, labels self.batch_size = batch_size def __len__(self): return int(np.ceil(len(self.filenames) / float(self.batch_size))) def __getitem__(self, idx): batch_x = self.filenames[idx * self.batch_size:(idx + 1) * self.batch_size] batch_y = self.labels[idx * self.batch_size:(idx + 1) * self.batch_size] return np.array([ resize(imread(filename), (200, 200)) for filename in batch_x]), np.array(batch_y)sequence = CIFAR10Sequence(filenames, labels, batch_size)model.fit(sequence, epochs=10)``` Using sample weighting and class weightingBesides input data and target data, it is possible to pass sample weights or classweights to a model when using fit:- When training from NumPy data: via the `sample_weight` and `class_weight` arguments.- When training from `Dataset` objects: by having the `Dataset` return a tuple`(input_batch, target_batch, sample_weight_batch)`.A "sample weights" array is an array of numbers that specify how much weight eachsample in a batch should have in computing the total loss. It is commonly used inimbalanced classification problems (the idea being to give more weight to rarely-seenclasses). 
When the weights used are ones and zeros, the array can be used as a maskfor the loss function (entirely discarding the contribution of certain samples to thetotal loss).A "class weights" dict is a more specific instance of the same concept: it maps classindices to the sample weight that should be used for samples belonging to this class.For instance, if class "0" is twice less represented than class "1" in your data, youcould use `class_weight={0: 1., 1: 0.5}`.Here's a NumPy example where we use class weights or sample weights to give moreimportance to the correct classification of class 5 (which is the digit "5" in theMNIST dataset).
###Code
import numpy as np
class_weight = {
0: 1.0,
1: 1.0,
2: 1.0,
3: 1.0,
4: 1.0,
# Set weight "2" for class "5",
# making this class 2x more important
5: 2.0,
6: 1.0,
7: 1.0,
8: 1.0,
9: 1.0,
}
print("Fit with class weight")
model = get_compiled_model()
model.fit(x_train, y_train, class_weight=class_weight, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
Here's the same example using `sample_weight` instead:
###Code
sample_weight = np.ones(shape=(len(y_train),))
sample_weight[y_train == 5] = 2.0
print("Fit with sample weight")
model = get_compiled_model()
model.fit(x_train, y_train, sample_weight=sample_weight, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
Here's a matching `Dataset` example:
###Code
sample_weight = np.ones(shape=(len(y_train),))
sample_weight[y_train == 5] = 2.0
# Create a Dataset that includes sample weights
# (3rd element in the return tuple).
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train, sample_weight))
# Shuffle and slice the dataset.
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
model = get_compiled_model()
model.fit(train_dataset, epochs=1)
###Output
_____no_output_____
###Markdown
Passing data to multi-input, multi-output modelsIn the previous examples, we were considering a model with a single input (a tensor ofshape `(764,)`) and a single output (a prediction tensor of shape `(10,)`). But whatabout models that have multiple inputs or outputs?Consider the following model, which has an image input of shape `(32, 32, 3)` (that's`(height, width, channels)`) and a timeseries input of shape `(None, 10)` (that's`(timesteps, features)`). Our model will have two outputs computed from thecombination of these inputs: a "score" (of shape `(1,)`) and a probabilitydistribution over five classes (of shape `(5,)`).
###Code
image_input = keras.Input(shape=(32, 32, 3), name="img_input")
timeseries_input = keras.Input(shape=(None, 10), name="ts_input")
x1 = layers.Conv2D(3, 3)(image_input)
x1 = layers.GlobalMaxPooling2D()(x1)
x2 = layers.Conv1D(3, 3)(timeseries_input)
x2 = layers.GlobalMaxPooling1D()(x2)
x = layers.concatenate([x1, x2])
score_output = layers.Dense(1, name="score_output")(x)
class_output = layers.Dense(5, activation="softmax", name="class_output")(x)
model = keras.Model(
inputs=[image_input, timeseries_input], outputs=[score_output, class_output]
)
###Output
_____no_output_____
###Markdown
Let's plot this model, so you can clearly see what we're doing here (note that theshapes shown in the plot are batch shapes, rather than per-sample shapes).
###Code
keras.utils.plot_model(model, "multi_input_and_output_model.png", show_shapes=True)
###Output
_____no_output_____
###Markdown
At compilation time, we can specify different losses to different outputs, by passingthe loss functions as a list:
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[keras.losses.MeanSquaredError(), keras.losses.CategoricalCrossentropy()],
)
###Output
_____no_output_____
###Markdown
If we only passed a single loss function to the model, the same loss function would beapplied to every output (which is not appropriate here).Likewise for metrics:
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[keras.losses.MeanSquaredError(), keras.losses.CategoricalCrossentropy()],
metrics=[
[
keras.metrics.MeanAbsolutePercentageError(),
keras.metrics.MeanAbsoluteError(),
],
[keras.metrics.CategoricalAccuracy()],
],
)
###Output
_____no_output_____
###Markdown
Since we gave names to our output layers, we could also specify per-output losses andmetrics via a dict:
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss={
"score_output": keras.losses.MeanSquaredError(),
"class_output": keras.losses.CategoricalCrossentropy(),
},
metrics={
"score_output": [
keras.metrics.MeanAbsolutePercentageError(),
keras.metrics.MeanAbsoluteError(),
],
"class_output": [keras.metrics.CategoricalAccuracy()],
},
)
###Output
_____no_output_____
###Markdown
We recommend the use of explicit names and dicts if you have more than 2 outputs.It's possible to give different weights to different output-specific losses (forinstance, one might wish to privilege the "score" loss in our example, by giving to 2xthe importance of the class loss), using the `loss_weights` argument:
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss={
"score_output": keras.losses.MeanSquaredError(),
"class_output": keras.losses.CategoricalCrossentropy(),
},
metrics={
"score_output": [
keras.metrics.MeanAbsolutePercentageError(),
keras.metrics.MeanAbsoluteError(),
],
"class_output": [keras.metrics.CategoricalAccuracy()],
},
loss_weights={"score_output": 2.0, "class_output": 1.0},
)
###Output
_____no_output_____
###Markdown
You could also chose not to compute a loss for certain outputs, if these outputs meantfor prediction but not for training:
###Code
# List loss version
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[None, keras.losses.CategoricalCrossentropy()],
)
# Or dict loss version
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss={"class_output": keras.losses.CategoricalCrossentropy()},
)
###Output
_____no_output_____
###Markdown
Passing data to a multi-input or multi-output model in fit works in a similar way asspecifying a loss function in compile: you can pass **lists of NumPy arrays** (with1:1 mapping to the outputs that received a loss function) or **dicts mapping outputnames to NumPy arrays**.
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[keras.losses.MeanSquaredError(), keras.losses.CategoricalCrossentropy()],
)
# Generate dummy NumPy data
img_data = np.random.random_sample(size=(100, 32, 32, 3))
ts_data = np.random.random_sample(size=(100, 20, 10))
score_targets = np.random.random_sample(size=(100, 1))
class_targets = np.random.random_sample(size=(100, 5))
# Fit on lists
model.fit([img_data, ts_data], [score_targets, class_targets], batch_size=32, epochs=1)
# Alternatively, fit on dicts
model.fit(
{"img_input": img_data, "ts_input": ts_data},
{"score_output": score_targets, "class_output": class_targets},
batch_size=32,
epochs=1,
)
###Output
_____no_output_____
###Markdown
Here's the `Dataset` use case: similarly as what we did for NumPy arrays, the `Dataset`should return a tuple of dicts.
###Code
train_dataset = tf.data.Dataset.from_tensor_slices(
(
{"img_input": img_data, "ts_input": ts_data},
{"score_output": score_targets, "class_output": class_targets},
)
)
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
model.fit(train_dataset, epochs=1)
###Output
_____no_output_____
###Markdown
Using callbacksCallbacks in Keras are objects that are called at different point during training (atthe start of an epoch, at the end of a batch, at the end of an epoch, etc.) and whichcan be used to implement behaviors such as:- Doing validation at different points during training (beyond the built-in per-epochvalidation)- Checkpointing the model at regular intervals or when it exceeds a certain accuracythreshold- Changing the learning rate of the model when training seems to be plateauing- Doing fine-tuning of the top layers when training seems to be plateauing- Sending email or instant message notifications when training ends or where a certainperformance threshold is exceeded- Etc.Callbacks can be passed as a list to your call to `fit()`:
###Code
model = get_compiled_model()
callbacks = [
keras.callbacks.EarlyStopping(
# Stop training when `val_loss` is no longer improving
monitor="val_loss",
# "no longer improving" being defined as "no better than 1e-2 less"
min_delta=1e-2,
# "no longer improving" being further defined as "for at least 2 epochs"
patience=2,
verbose=1,
)
]
model.fit(
x_train,
y_train,
epochs=20,
batch_size=64,
callbacks=callbacks,
validation_split=0.2,
)
###Output
_____no_output_____
###Markdown
Many built-in callbacks are available- `ModelCheckpoint`: Periodically save the model.- `EarlyStopping`: Stop training when training is no longer improving the validationmetrics.- `TensorBoard`: periodically write model logs that can be visualized in[TensorBoard](https://www.tensorflow.org/tensorboard) (more details in the section"Visualization").- `CSVLogger`: streams loss and metrics data to a CSV file.- etc.See the [callbacks documentation](/api/callbacks/) for the complete list. Writing your own callbackYou can create a custom callback by extending the base class`keras.callbacks.Callback`. A callback has access to its associated model through theclass property `self.model`.Make sure to read the[complete guide to writing custom callbacks](/guides/writing_your_own_callbacks/).Here's a simple example saving a list of per-batch loss values during training:
###Code
class LossHistory(keras.callbacks.Callback):
def on_train_begin(self, logs):
self.per_batch_losses = []
def on_batch_end(self, batch, logs):
self.per_batch_losses.append(logs.get("loss"))
###Output
_____no_output_____
###Markdown
Checkpointing modelsWhen you're training model on relatively large datasets, it's crucial to savecheckpoints of your model at frequent intervals.The easiest way to achieve this is with the `ModelCheckpoint` callback:
###Code
model = get_compiled_model()
callbacks = [
keras.callbacks.ModelCheckpoint(
# Path where to save the model
# The two parameters below mean that we will overwrite
# the current checkpoint if and only if
# the `val_loss` score has improved.
# The saved model name will include the current epoch.
filepath="mymodel_{epoch}",
save_best_only=True, # Only save a model if `val_loss` has improved.
monitor="val_loss",
verbose=1,
)
]
model.fit(
x_train, y_train, epochs=2, batch_size=64, callbacks=callbacks, validation_split=0.2
)
###Output
_____no_output_____
###Markdown
The `ModelCheckpoint` callback can be used to implement fault-tolerance:the ability to restart training from the last saved state of the model in case traininggets randomly interrupted. Here's a basic example:
###Code
import os
# Prepare a directory to store all the checkpoints.
checkpoint_dir = "./ckpt"
if not os.path.exists(checkpoint_dir):
os.makedirs(checkpoint_dir)
def make_or_restore_model():
# Either restore the latest model, or create a fresh one
# if there is no checkpoint available.
checkpoints = [checkpoint_dir + "/" + name for name in os.listdir(checkpoint_dir)]
if checkpoints:
latest_checkpoint = max(checkpoints, key=os.path.getctime)
print("Restoring from", latest_checkpoint)
return keras.models.load_model(latest_checkpoint)
print("Creating a new model")
return get_compiled_model()
model = make_or_restore_model()
callbacks = [
# This callback saves a SavedModel every 100 batches.
# We include the training loss in the saved model name.
keras.callbacks.ModelCheckpoint(
filepath=checkpoint_dir + "/ckpt-loss={loss:.2f}", save_freq=100
)
]
model.fit(x_train, y_train, epochs=1, callbacks=callbacks)
###Output
_____no_output_____
###Markdown
You call also write your own callback for saving and restoring models.For a complete guide on serialization and saving, see the[guide to saving and serializing Models](/guides/serialization_and_saving/). Using learning rate schedulesA common pattern when training deep learning models is to gradually reduce the learningas training progresses. This is generally known as "learning rate decay".The learning decay schedule could be static (fixed in advance, as a function of thecurrent epoch or the current batch index), or dynamic (responding to the currentbehavior of the model, in particular the validation loss). Passing a schedule to an optimizerYou can easily use a static learning rate decay schedule by passing a schedule objectas the `learning_rate` argument in your optimizer:
###Code
initial_learning_rate = 0.1
lr_schedule = keras.optimizers.schedules.ExponentialDecay(
initial_learning_rate, decay_steps=100000, decay_rate=0.96, staircase=True
)
optimizer = keras.optimizers.RMSprop(learning_rate=lr_schedule)
###Output
_____no_output_____
###Markdown
Several built-in schedules are available: `ExponentialDecay`, `PiecewiseConstantDecay`,`PolynomialDecay`, and `InverseTimeDecay`. Using callbacks to implement a dynamic learning rate scheduleA dynamic learning rate schedule (for instance, decreasing the learning rate when thevalidation loss is no longer improving) cannot be achieved with these schedule objectssince the optimizer does not have access to validation metrics.However, callbacks do have access to all metrics, including validation metrics! You canthus achieve this pattern by using a callback that modifies the current learning rateon the optimizer. In fact, this is even built-in as the `ReduceLROnPlateau` callback. Visualizing loss and metrics during trainingThe best way to keep an eye on your model during training is to use[TensorBoard](https://www.tensorflow.org/tensorboard), a browser-based applicationthat you can run locally that provides you with:- Live plots of the loss and metrics for training and evaluation- (optionally) Visualizations of the histograms of your layer activations- (optionally) 3D visualizations of the embedding spaces learned by your `Embedding`layersIf you have installed TensorFlow with pip, you should be able to launch TensorBoardfrom the command line:```tensorboard --logdir=/full_path_to_your_logs``` Using the TensorBoard callbackThe easiest way to use TensorBoard with a Keras model and the fit method is the`TensorBoard` callback.In the simplest case, just specify where you want the callback to write logs, andyou're good to go:
###Code
keras.callbacks.TensorBoard(
log_dir="/full_path_to_your_logs",
histogram_freq=0, # How often to log histogram visualizations
embeddings_freq=0, # How often to log embedding visualizations
update_freq="epoch",
) # How often to write logs (default: once per epoch)
###Output
_____no_output_____
###Markdown
Training & evaluation with the built-in methods**Author:** [fchollet](https://twitter.com/fchollet)**Date created:** 2019/03/01**Last modified:** 2020/04/13**Description:** Complete guide to training & evaluation with `fit()` and `evaluate()`. Setup
###Code
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
###Output
_____no_output_____
###Markdown
IntroductionThis guide covers training, evaluation, and prediction (inference) modelswhen using built-in APIs for training & validation (such as `model.fit()`,`model.evaluate()`, `model.predict()`).If you are interested in leveraging `fit()` while specifying yourown training step function, see the guide["customizing what happens in `fit()`"](/guides/customizing_what_happens_in_fit/).If you are interested in writing your own training & evaluation loops fromscratch, see the guide["writing a training loop from scratch"](/guides/writing_a_training_loop_from_scratch/).In general, whether you are using built-in loops or writing your own, model training &evaluation works strictly in the same way across every kind of Keras model --Sequential models, models built with the Functional API, and models written fromscratch via model subclassing.This guide doesn't cover distributed training. For distributed training, seeour [guide to multi-gpu & distributed training](/guides/distributed_training/). API overview: a first end-to-end exampleWhen passing data to the built-in training loops of a model, you should either use**NumPy arrays** (if your data is small and fits in memory) or **`tf.data Dataset`objects**. In the next few paragraphs, we'll use the MNIST dataset as NumPy arrays, inorder to demonstrate how to use optimizers, losses, and metrics.Let's consider the following model (here, we build in with the Functional API, but itcould be a Sequential model or a subclassed model as well):
###Code
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, activation="softmax", name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
###Output
_____no_output_____
###Markdown
Here's what the typical end-to-end workflow looks like, consisting of:- Training- Validation on a holdout set generated from the original training data- Evaluation on the test dataWe'll use MNIST data for this example.
###Code
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
# Preprocess the data (these are NumPy arrays)
x_train = x_train.reshape(60000, 784).astype("float32") / 255
x_test = x_test.reshape(10000, 784).astype("float32") / 255
y_train = y_train.astype("float32")
y_test = y_test.astype("float32")
# Reserve 10,000 samples for validation
x_val = x_train[-10000:]
y_val = y_train[-10000:]
x_train = x_train[:-10000]
y_train = y_train[:-10000]
###Output
_____no_output_____
###Markdown
We specify the training configuration (optimizer, loss, metrics):
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(), # Optimizer
# Loss function to minimize
loss=keras.losses.SparseCategoricalCrossentropy(),
# List of metrics to monitor
metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
###Output
_____no_output_____
###Markdown
We call `fit()`, which will train the model by slicing the data into "batches" of size"batch_size", and repeatedly iterating over the entire dataset for a given number of"epochs".
###Code
print("Fit model on training data")
history = model.fit(
x_train,
y_train,
batch_size=64,
epochs=2,
# We pass some validation for
# monitoring validation loss and metrics
# at the end of each epoch
validation_data=(x_val, y_val),
)
###Output
_____no_output_____
###Markdown
The returned "history" object holds a record of the loss values and metric valuesduring training:
###Code
history.history
###Output
_____no_output_____
###Markdown
We evaluate the model on the test data via `evaluate()`:
###Code
# Evaluate the model on the test data using `evaluate`
print("Evaluate on test data")
results = model.evaluate(x_test, y_test, batch_size=128)
print("test loss, test acc:", results)
# Generate predictions (probabilities -- the output of the last layer)
# on new data using `predict`
print("Generate predictions for 3 samples")
predictions = model.predict(x_test[:3])
print("predictions shape:", predictions.shape)
###Output
_____no_output_____
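###Markdown
Since the last layer uses a softmax, each row of `predictions` is a probability distribution over the 10 classes. As a quick aside -- this is plain NumPy usage rather than part of the original workflow -- you can turn those probabilities into class labels with `argmax`:
```python
import numpy as np

# The predicted class for each sample is the index of the
# highest probability in its row.
predicted_labels = np.argmax(predictions, axis=1)
print("predicted classes:", predicted_labels)
```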
###Markdown
Now, let's review each piece of this workflow in detail.

The `compile()` method: specifying a loss, metrics, and an optimizer

To train a model with `fit()`, you need to specify a loss function, an optimizer, and optionally, some metrics to monitor.

You pass these to the model as arguments to the `compile()` method:
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(),
metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
###Output
_____no_output_____
###Markdown
The `metrics` argument should be a list -- your model can have any number of metrics.

If your model has multiple outputs, you can specify different losses and metrics for each output, and you can modulate the contribution of each output to the total loss of the model. You will find more details about this in the section **"Passing data to multi-input, multi-output models"**.

Note that if you're satisfied with the default settings, in many cases the optimizer, loss, and metrics can be specified via string identifiers as a shortcut:
###Code
model.compile(
optimizer="rmsprop",
loss="sparse_categorical_crossentropy",
metrics=["sparse_categorical_accuracy"],
)
###Output
_____no_output_____
###Markdown
For later reuse, let's put our model definition and compile step in functions; we will call them several times across different examples in this guide.
###Code
def get_uncompiled_model():
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, activation="softmax", name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
return model
def get_compiled_model():
model = get_uncompiled_model()
model.compile(
optimizer="rmsprop",
loss="sparse_categorical_crossentropy",
metrics=["sparse_categorical_accuracy"],
)
return model
###Output
_____no_output_____
###Markdown
Many built-in optimizers, losses, and metrics are available

In general, you won't have to create your own losses, metrics, or optimizers from scratch, because what you need is likely already part of the Keras API:

Optimizers:

- `SGD()` (with or without momentum)
- `RMSprop()`
- `Adam()`
- etc.

Losses:

- `MeanSquaredError()`
- `KLDivergence()`
- `CosineSimilarity()`
- etc.

Metrics:

- `AUC()`
- `Precision()`
- `Recall()`
- etc.

Custom losses

There are two ways to provide custom losses with Keras. The first is to create a function that accepts inputs `y_true` and `y_pred`. The following example shows a loss function that computes the mean squared error between the real data and the predictions:
###Code
def custom_mean_squared_error(y_true, y_pred):
return tf.math.reduce_mean(tf.square(y_true - y_pred))
model = get_uncompiled_model()
model.compile(optimizer=keras.optimizers.Adam(), loss=custom_mean_squared_error)
# We need to one-hot encode the labels to use MSE
y_train_one_hot = tf.one_hot(y_train, depth=10)
model.fit(x_train, y_train_one_hot, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
If you need a loss function that takes in parameters beside `y_true` and `y_pred`, you can subclass the `tf.keras.losses.Loss` class and implement the following two methods:

- `__init__(self)`: accept parameters to pass during the call of your loss function
- `call(self, y_true, y_pred)`: use the targets (`y_true`) and the model predictions (`y_pred`) to compute the model's loss

Let's say you want to use mean squared error, but with an added term that will de-incentivize prediction values far from 0.5 (we assume that the categorical targets are one-hot encoded and take values between 0 and 1). This creates an incentive for the model not to be too confident, which may help reduce overfitting (we won't know if it works until we try!).

Here's how you would do it:
###Code
class CustomMSE(keras.losses.Loss):
def __init__(self, regularization_factor=0.1, name="custom_mse"):
super().__init__(name=name)
self.regularization_factor = regularization_factor
def call(self, y_true, y_pred):
mse = tf.math.reduce_mean(tf.square(y_true - y_pred))
reg = tf.math.reduce_mean(tf.square(0.5 - y_pred))
return mse + reg * self.regularization_factor
model = get_uncompiled_model()
model.compile(optimizer=keras.optimizers.Adam(), loss=CustomMSE())
y_train_one_hot = tf.one_hot(y_train, depth=10)
model.fit(x_train, y_train_one_hot, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
Custom metrics

If you need a metric that isn't part of the API, you can easily create custom metrics by subclassing the `tf.keras.metrics.Metric` class. You will need to implement 4 methods:

- `__init__(self)`, in which you will create state variables for your metric.
- `update_state(self, y_true, y_pred, sample_weight=None)`, which uses the targets `y_true` and the model predictions `y_pred` to update the state variables.
- `result(self)`, which uses the state variables to compute the final results.
- `reset_states(self)`, which reinitializes the state of the metric.

State update and results computation are kept separate (in `update_state()` and `result()`, respectively) because in some cases, results computation might be very expensive and would only be done periodically.

Here's a simple example showing how to implement a `CategoricalTruePositives` metric that counts how many samples were correctly classified as belonging to a given class:
###Code
class CategoricalTruePositives(keras.metrics.Metric):
def __init__(self, name="categorical_true_positives", **kwargs):
super(CategoricalTruePositives, self).__init__(name=name, **kwargs)
self.true_positives = self.add_weight(name="ctp", initializer="zeros")
def update_state(self, y_true, y_pred, sample_weight=None):
y_pred = tf.reshape(tf.argmax(y_pred, axis=1), shape=(-1, 1))
values = tf.cast(y_true, "int32") == tf.cast(y_pred, "int32")
values = tf.cast(values, "float32")
if sample_weight is not None:
sample_weight = tf.cast(sample_weight, "float32")
values = tf.multiply(values, sample_weight)
self.true_positives.assign_add(tf.reduce_sum(values))
def result(self):
return self.true_positives
def reset_states(self):
# The state of the metric will be reset at the start of each epoch.
self.true_positives.assign(0.0)
model = get_uncompiled_model()
model.compile(
optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(),
metrics=[CategoricalTruePositives()],
)
model.fit(x_train, y_train, batch_size=64, epochs=3)
###Output
_____no_output_____
###Markdown
Handling losses and metrics that don't fit the standard signature

The overwhelming majority of losses and metrics can be computed from `y_true` and `y_pred`, where `y_pred` is an output of your model. But not all of them. For instance, a regularization loss may only require the activation of a layer (there are no targets in this case), and this activation may not be a model output.

In such cases, you can call `self.add_loss(loss_value)` from inside the call method of a custom layer. Losses added in this way get added to the "main" loss during training (the one passed to `compile()`). Here's a simple example that adds activity regularization (note that activity regularization is built-in in all Keras layers -- this layer is just for the sake of providing a concrete example):
###Code
class ActivityRegularizationLayer(layers.Layer):
def call(self, inputs):
self.add_loss(tf.reduce_sum(inputs) * 0.1)
return inputs # Pass-through layer.
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)
# Insert activity regularization as a layer
x = ActivityRegularizationLayer()(x)
x = layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(
optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
# The displayed loss will be much higher than before
# due to the regularization component.
model.fit(x_train, y_train, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
You can do the same for logging metric values, using `add_metric()`:
###Code
class MetricLoggingLayer(layers.Layer):
def call(self, inputs):
# The `aggregation` argument defines
# how to aggregate the per-batch values
# over each epoch:
# in this case we simply average them.
self.add_metric(
keras.backend.std(inputs), name="std_of_activation", aggregation="mean"
)
return inputs # Pass-through layer.
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)
# Insert std logging as a layer.
x = MetricLoggingLayer()(x)
x = layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(
optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
model.fit(x_train, y_train, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
In the [Functional API](/guides/functional_api/), you can also call `model.add_loss(loss_tensor)`, or `model.add_metric(metric_tensor, name, aggregation)`.

Here's a simple example:
###Code
inputs = keras.Input(shape=(784,), name="digits")
x1 = layers.Dense(64, activation="relu", name="dense_1")(inputs)
x2 = layers.Dense(64, activation="relu", name="dense_2")(x1)
outputs = layers.Dense(10, name="predictions")(x2)
model = keras.Model(inputs=inputs, outputs=outputs)
model.add_loss(tf.reduce_sum(x1) * 0.1)
model.add_metric(keras.backend.std(x1), name="std_of_activation", aggregation="mean")
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
model.fit(x_train, y_train, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
Note that when you pass losses via `add_loss()`, it becomes possible to call `compile()` without a loss function, since the model already has a loss to minimize.

Consider the following `LogisticEndpoint` layer: it takes as inputs targets & logits, and it tracks a crossentropy loss via `add_loss()`. It also tracks classification accuracy via `add_metric()`.
###Code
class LogisticEndpoint(keras.layers.Layer):
def __init__(self, name=None):
super(LogisticEndpoint, self).__init__(name=name)
self.loss_fn = keras.losses.BinaryCrossentropy(from_logits=True)
self.accuracy_fn = keras.metrics.BinaryAccuracy()
def call(self, targets, logits, sample_weights=None):
# Compute the training-time loss value and add it
# to the layer using `self.add_loss()`.
loss = self.loss_fn(targets, logits, sample_weights)
self.add_loss(loss)
# Log accuracy as a metric and add it
# to the layer using `self.add_metric()`.
acc = self.accuracy_fn(targets, logits, sample_weights)
self.add_metric(acc, name="accuracy")
# Return the inference-time prediction tensor (for `.predict()`).
return tf.nn.softmax(logits)
###Output
_____no_output_____
###Markdown
You can use it in a model with two inputs (input data & targets), compiled without a `loss` argument, like this:
###Code
import numpy as np
inputs = keras.Input(shape=(3,), name="inputs")
targets = keras.Input(shape=(10,), name="targets")
logits = keras.layers.Dense(10)(inputs)
predictions = LogisticEndpoint(name="predictions")(logits, targets)
model = keras.Model(inputs=[inputs, targets], outputs=predictions)
model.compile(optimizer="adam") # No loss argument!
data = {
"inputs": np.random.random((3, 3)),
"targets": np.random.random((3, 10)),
}
model.fit(data)
###Output
_____no_output_____
###Markdown
For more information about training multi-input models, see the section **Passing data to multi-input, multi-output models**.

Automatically setting apart a validation holdout set

In the first end-to-end example you saw, we used the `validation_data` argument to pass a tuple of NumPy arrays `(x_val, y_val)` to the model for evaluating a validation loss and validation metrics at the end of each epoch.

Here's another option: the argument `validation_split` allows you to automatically reserve part of your training data for validation. The argument value represents the fraction of the data to be reserved for validation, so it should be set to a number higher than 0 and lower than 1. For instance, `validation_split=0.2` means "use 20% of the data for validation", and `validation_split=0.6` means "use 60% of the data for validation".

The way the validation is computed is by taking the last x% samples of the arrays received by the `fit()` call, before any shuffling.

Note that you can only use `validation_split` when training with NumPy data.
###Code
model = get_compiled_model()
model.fit(x_train, y_train, batch_size=64, validation_split=0.2, epochs=1)
###Output
_____no_output_____
###Markdown
Training & evaluation from tf.data Datasets

In the past few paragraphs, you've seen how to handle losses, metrics, and optimizers, and you've seen how to use the `validation_data` and `validation_split` arguments in `fit()`, when your data is passed as NumPy arrays.

Let's now take a look at the case where your data comes in the form of a `tf.data.Dataset` object.

The `tf.data` API is a set of utilities in TensorFlow 2.0 for loading and preprocessing data in a way that's fast and scalable.

For a complete guide about creating `Datasets`, see the [tf.data documentation](https://www.tensorflow.org/guide/data).

You can pass a `Dataset` instance directly to the methods `fit()`, `evaluate()`, and `predict()`:
###Code
model = get_compiled_model()
# First, let's create a training Dataset instance.
# For the sake of our example, we'll use the same MNIST data as before.
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
# Shuffle and slice the dataset.
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
# Now we get a test dataset.
test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test))
test_dataset = test_dataset.batch(64)
# Since the dataset already takes care of batching,
# we don't pass a `batch_size` argument.
model.fit(train_dataset, epochs=3)
# You can also evaluate or predict on a dataset.
print("Evaluate")
result = model.evaluate(test_dataset)
dict(zip(model.metrics_names, result))
###Output
_____no_output_____
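###Markdown
`predict()` accepts a `Dataset` in the same way; a minimal sketch (when the dataset yields `(inputs, targets)` tuples, `predict()` simply ignores the targets):
```python
# Predict directly on the batched test dataset.
probs = model.predict(test_dataset)
print("probs shape:", probs.shape)
```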
###Markdown
Note that the Dataset is reset at the end of each epoch, so it can be reused for the next epoch.

If you want to run training only on a specific number of batches from this Dataset, you can pass the `steps_per_epoch` argument, which specifies how many training steps the model should run using this Dataset before moving on to the next epoch.

If you do this, the dataset is not reset at the end of each epoch; instead we just keep drawing the next batches. The dataset will eventually run out of data (unless it is an infinitely-looping dataset).
###Code
model = get_compiled_model()
# Prepare the training dataset
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
# Only use 100 batches per epoch (that's 64 * 100 samples)
model.fit(train_dataset, epochs=3, steps_per_epoch=100)
###Output
_____no_output_____
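###Markdown
If you want an infinitely-looping dataset for use with `steps_per_epoch`, one option is to chain `.repeat()` into the pipeline; a minimal sketch:
```python
# `repeat()` with no argument loops over the data forever, so the
# dataset never runs out no matter how many steps each epoch draws.
infinite_dataset = (
    tf.data.Dataset.from_tensor_slices((x_train, y_train))
    .shuffle(buffer_size=1024)
    .repeat()
    .batch(64)
)
model.fit(infinite_dataset, epochs=3, steps_per_epoch=100)
```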
###Markdown
Using a validation dataset

You can pass a `Dataset` instance as the `validation_data` argument in `fit()`:
###Code
model = get_compiled_model()
# Prepare the training dataset
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
# Prepare the validation dataset
val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))
val_dataset = val_dataset.batch(64)
model.fit(train_dataset, epochs=1, validation_data=val_dataset)
###Output
_____no_output_____
###Markdown
At the end of each epoch, the model will iterate over the validation dataset and compute the validation loss and validation metrics.

If you want to run validation only on a specific number of batches from this dataset, you can pass the `validation_steps` argument, which specifies how many validation steps the model should run with the validation dataset before interrupting validation and moving on to the next epoch:
###Code
model = get_compiled_model()
# Prepare the training dataset
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
# Prepare the validation dataset
val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))
val_dataset = val_dataset.batch(64)
model.fit(
train_dataset,
epochs=1,
# Only run validation using the first 10 batches of the dataset
# using the `validation_steps` argument
validation_data=val_dataset,
validation_steps=10,
)
###Output
_____no_output_____
###Markdown
Note that the validation dataset will be reset after each use (so that you will always be evaluating on the same samples from epoch to epoch).

The argument `validation_split` (generating a holdout set from the training data) is not supported when training from `Dataset` objects, since this feature requires the ability to index the samples of the datasets, which is not possible in general with the `Dataset` API.

Other input formats supported

Besides NumPy arrays, eager tensors, and TensorFlow `Datasets`, it's possible to train a Keras model using Pandas dataframes, or from Python generators that yield batches of data & labels.

In particular, the `keras.utils.Sequence` class offers a simple interface to build Python data generators that are multiprocessing-aware and can be shuffled.

In general, we recommend that you use:

- NumPy input data if your data is small and fits in memory
- `Dataset` objects if you have large datasets and you need to do distributed training
- `Sequence` objects if you have large datasets and you need to do a lot of custom Python-side processing that cannot be done in TensorFlow (e.g. if you rely on external libraries for data loading or preprocessing).

Using a `keras.utils.Sequence` object as input

`keras.utils.Sequence` is a utility that you can subclass to obtain a Python generator with two important properties:

- It works well with multiprocessing.
- It can be shuffled (e.g. when passing `shuffle=True` in `fit()`).

A `Sequence` must implement two methods:

- `__getitem__`
- `__len__`

The method `__getitem__` should return a complete batch. If you want to modify your dataset between epochs, you may implement `on_epoch_end`.

Here's a quick example:

```python
from skimage.io import imread
from skimage.transform import resize
from tensorflow.keras.utils import Sequence
import numpy as np

# Here, `filenames` is a list of paths to the images,
# and `labels` are the associated labels.


class CIFAR10Sequence(Sequence):
    def __init__(self, filenames, labels, batch_size):
        self.filenames, self.labels = filenames, labels
        self.batch_size = batch_size

    def __len__(self):
        return int(np.ceil(len(self.filenames) / float(self.batch_size)))

    def __getitem__(self, idx):
        batch_x = self.filenames[idx * self.batch_size:(idx + 1) * self.batch_size]
        batch_y = self.labels[idx * self.batch_size:(idx + 1) * self.batch_size]
        return np.array([
            resize(imread(filename), (200, 200))
            for filename in batch_x]), np.array(batch_y)


sequence = CIFAR10Sequence(filenames, labels, batch_size)
model.fit(sequence, epochs=10)
```

Using sample weighting and class weighting

With the default settings the weight of a sample is decided by its frequency in the dataset. There are two methods to weight the data, independent of sample frequency:

* Class weights
* Sample weights

Class weights

This is set by passing a dictionary to the `class_weight` argument to `Model.fit()`. This dictionary maps class indices to the weight that should be used for samples belonging to this class.

This can be used to balance classes without resampling, or to train a model that gives more importance to a particular class.

For instance, if class "0" is half as represented as class "1" in your data, you could use `Model.fit(..., class_weight={0: 1., 1: 0.5})`.

Here's a NumPy example where we use class weights or sample weights to give more importance to the correct classification of class 5 (which is the digit "5" in the MNIST dataset).
###Code
import numpy as np
class_weight = {
0: 1.0,
1: 1.0,
2: 1.0,
3: 1.0,
4: 1.0,
# Set weight "2" for class "5",
# making this class 2x more important
5: 2.0,
6: 1.0,
7: 1.0,
8: 1.0,
9: 1.0,
}
print("Fit with class weight")
model = get_compiled_model()
model.fit(x_train, y_train, class_weight=class_weight, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
Sample weights

For fine-grained control, or if you are not building a classifier, you can use "sample weights".

- When training from NumPy data: pass the `sample_weight` argument to `Model.fit()`.
- When training from `tf.data` or any other sort of iterator: yield `(input_batch, label_batch, sample_weight_batch)` tuples.

A "sample weights" array is an array of numbers that specify how much weight each sample in a batch should have in computing the total loss. It is commonly used in imbalanced classification problems (the idea being to give more weight to rarely-seen classes).

When the weights used are ones and zeros, the array can be used as a *mask* for the loss function (entirely discarding the contribution of certain samples to the total loss).
###Code
sample_weight = np.ones(shape=(len(y_train),))
sample_weight[y_train == 5] = 2.0
print("Fit with sample weight")
model = get_compiled_model()
model.fit(x_train, y_train, sample_weight=sample_weight, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
Here's a matching `Dataset` example:
###Code
sample_weight = np.ones(shape=(len(y_train),))
sample_weight[y_train == 5] = 2.0
# Create a Dataset that includes sample weights
# (3rd element in the return tuple).
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train, sample_weight))
# Shuffle and slice the dataset.
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
model = get_compiled_model()
model.fit(train_dataset, epochs=1)
###Output
_____no_output_____
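###Markdown
As noted above, a sample-weight array of ones and zeros acts as a mask. Here's a small sketch along the same lines as the examples above (masking out the "5" samples is just an illustrative choice):
```python
# A zero weight removes a sample's contribution to the total loss,
# so the model effectively never trains on the digit "5".
mask_weight = np.ones(shape=(len(y_train),))
mask_weight[y_train == 5] = 0.0
model = get_compiled_model()
model.fit(x_train, y_train, sample_weight=mask_weight, batch_size=64, epochs=1)
```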
###Markdown
Passing data to multi-input, multi-output models

In the previous examples, we were considering a model with a single input (a tensor of shape `(784,)`) and a single output (a prediction tensor of shape `(10,)`). But what about models that have multiple inputs or outputs?

Consider the following model, which has an image input of shape `(32, 32, 3)` (that's `(height, width, channels)`) and a timeseries input of shape `(None, 10)` (that's `(timesteps, features)`). Our model will have two outputs computed from the combination of these inputs: a "score" (of shape `(1,)`) and a probability distribution over five classes (of shape `(5,)`).
###Code
image_input = keras.Input(shape=(32, 32, 3), name="img_input")
timeseries_input = keras.Input(shape=(None, 10), name="ts_input")
x1 = layers.Conv2D(3, 3)(image_input)
x1 = layers.GlobalMaxPooling2D()(x1)
x2 = layers.Conv1D(3, 3)(timeseries_input)
x2 = layers.GlobalMaxPooling1D()(x2)
x = layers.concatenate([x1, x2])
score_output = layers.Dense(1, name="score_output")(x)
class_output = layers.Dense(5, activation="softmax", name="class_output")(x)
model = keras.Model(
inputs=[image_input, timeseries_input], outputs=[score_output, class_output]
)
###Output
_____no_output_____
###Markdown
Let's plot this model, so you can clearly see what we're doing here (note that the shapes shown in the plot are batch shapes, rather than per-sample shapes).
###Code
keras.utils.plot_model(model, "multi_input_and_output_model.png", show_shapes=True)
###Output
_____no_output_____
###Markdown
At compilation time, we can specify different losses for different outputs, by passing the loss functions as a list:
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[keras.losses.MeanSquaredError(), keras.losses.CategoricalCrossentropy()],
)
###Output
_____no_output_____
###Markdown
If we only passed a single loss function to the model, the same loss function would be applied to every output (which is not appropriate here).

Likewise for metrics:
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[keras.losses.MeanSquaredError(), keras.losses.CategoricalCrossentropy()],
metrics=[
[
keras.metrics.MeanAbsolutePercentageError(),
keras.metrics.MeanAbsoluteError(),
],
[keras.metrics.CategoricalAccuracy()],
],
)
###Output
_____no_output_____
###Markdown
Since we gave names to our output layers, we could also specify per-output losses and metrics via a dict:
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss={
"score_output": keras.losses.MeanSquaredError(),
"class_output": keras.losses.CategoricalCrossentropy(),
},
metrics={
"score_output": [
keras.metrics.MeanAbsolutePercentageError(),
keras.metrics.MeanAbsoluteError(),
],
"class_output": [keras.metrics.CategoricalAccuracy()],
},
)
###Output
_____no_output_____
###Markdown
We recommend the use of explicit names and dicts if you have more than 2 outputs.

It's possible to give different weights to different output-specific losses (for instance, one might wish to privilege the "score" loss in our example, by giving it 2x the importance of the class loss), using the `loss_weights` argument:
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss={
"score_output": keras.losses.MeanSquaredError(),
"class_output": keras.losses.CategoricalCrossentropy(),
},
metrics={
"score_output": [
keras.metrics.MeanAbsolutePercentageError(),
keras.metrics.MeanAbsoluteError(),
],
"class_output": [keras.metrics.CategoricalAccuracy()],
},
loss_weights={"score_output": 2.0, "class_output": 1.0},
)
###Output
_____no_output_____
###Markdown
You could also choose not to compute a loss for certain outputs, if these outputs are meant for prediction but not for training:
###Code
# List loss version
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[None, keras.losses.CategoricalCrossentropy()],
)
# Or dict loss version
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss={"class_output": keras.losses.CategoricalCrossentropy()},
)
###Output
_____no_output_____
###Markdown
Passing data to a multi-input or multi-output model in `fit()` works in a similar way as specifying a loss function in `compile()`: you can pass **lists of NumPy arrays** (with 1:1 mapping to the outputs that received a loss function) or **dicts mapping output names to NumPy arrays**.
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[keras.losses.MeanSquaredError(), keras.losses.CategoricalCrossentropy()],
)
# Generate dummy NumPy data
img_data = np.random.random_sample(size=(100, 32, 32, 3))
ts_data = np.random.random_sample(size=(100, 20, 10))
score_targets = np.random.random_sample(size=(100, 1))
class_targets = np.random.random_sample(size=(100, 5))
# Fit on lists
model.fit([img_data, ts_data], [score_targets, class_targets], batch_size=32, epochs=1)
# Alternatively, fit on dicts
model.fit(
{"img_input": img_data, "ts_input": ts_data},
{"score_output": score_targets, "class_output": class_targets},
batch_size=32,
epochs=1,
)
###Output
_____no_output_____
###Markdown
Here's the `Dataset` use case: similarly to what we did for NumPy arrays, the `Dataset` should return a tuple of dicts.
###Code
train_dataset = tf.data.Dataset.from_tensor_slices(
(
{"img_input": img_data, "ts_input": ts_data},
{"score_output": score_targets, "class_output": class_targets},
)
)
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
model.fit(train_dataset, epochs=1)
###Output
_____no_output_____
###Markdown
Using callbacks

Callbacks in Keras are objects that are called at different points during training (at the start of an epoch, at the end of a batch, at the end of an epoch, etc.) and which can be used to implement behaviors such as:

- Doing validation at different points during training (beyond the built-in per-epoch validation)
- Checkpointing the model at regular intervals or when it exceeds a certain accuracy threshold
- Changing the learning rate of the model when training seems to be plateauing
- Doing fine-tuning of the top layers when training seems to be plateauing
- Sending email or instant message notifications when training ends or when a certain performance threshold is exceeded
- Etc.

Callbacks can be passed as a list to your call to `fit()`:
###Code
model = get_compiled_model()
callbacks = [
keras.callbacks.EarlyStopping(
# Stop training when `val_loss` is no longer improving
monitor="val_loss",
# "no longer improving" being defined as "no better than 1e-2 less"
min_delta=1e-2,
# "no longer improving" being further defined as "for at least 2 epochs"
patience=2,
verbose=1,
)
]
model.fit(
x_train,
y_train,
epochs=20,
batch_size=64,
callbacks=callbacks,
validation_split=0.2,
)
###Output
_____no_output_____
###Markdown
Many built-in callbacks are available

- `ModelCheckpoint`: Periodically save the model.
- `EarlyStopping`: Stop training when training is no longer improving the validation metrics.
- `TensorBoard`: Periodically write model logs that can be visualized in [TensorBoard](https://www.tensorflow.org/tensorboard) (more details in the section "Visualization").
- `CSVLogger`: Streams loss and metrics data to a CSV file.
- etc.

See the [callbacks documentation](/api/callbacks/) for the complete list.

Writing your own callback

You can create a custom callback by extending the base class `keras.callbacks.Callback`. A callback has access to its associated model through the class property `self.model`.

Make sure to read the [complete guide to writing custom callbacks](/guides/writing_your_own_callbacks/).

Here's a simple example saving a list of per-batch loss values during training:
###Code
class LossHistory(keras.callbacks.Callback):
def on_train_begin(self, logs):
self.per_batch_losses = []
def on_batch_end(self, batch, logs):
self.per_batch_losses.append(logs.get("loss"))
###Output
_____no_output_____
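###Markdown
A minimal usage sketch for this callback (the variable name is just illustrative): instantiate it, pass it to `fit()`, and read the collected values afterwards:
```python
loss_history = LossHistory()
model = get_compiled_model()
model.fit(x_train, y_train, batch_size=64, epochs=1, callbacks=[loss_history])

# One loss value was recorded at the end of each batch.
print("batches seen:", len(loss_history.per_batch_losses))
```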
###Markdown
Checkpointing models

When you're training a model on relatively large datasets, it's crucial to save checkpoints of your model at frequent intervals.

The easiest way to achieve this is with the `ModelCheckpoint` callback:
###Code
model = get_compiled_model()
callbacks = [
keras.callbacks.ModelCheckpoint(
# Path where to save the model
# The two parameters below mean that we will overwrite
# the current checkpoint if and only if
# the `val_loss` score has improved.
# The saved model name will include the current epoch.
filepath="mymodel_{epoch}",
save_best_only=True, # Only save a model if `val_loss` has improved.
monitor="val_loss",
verbose=1,
)
]
model.fit(
x_train, y_train, epochs=2, batch_size=64, callbacks=callbacks, validation_split=0.2
)
###Output
_____no_output_____
###Markdown
The `ModelCheckpoint` callback can be used to implement fault-tolerance: the ability to restart training from the last saved state of the model in case training gets randomly interrupted. Here's a basic example:
###Code
import os
# Prepare a directory to store all the checkpoints.
checkpoint_dir = "./ckpt"
if not os.path.exists(checkpoint_dir):
os.makedirs(checkpoint_dir)
def make_or_restore_model():
# Either restore the latest model, or create a fresh one
# if there is no checkpoint available.
checkpoints = [checkpoint_dir + "/" + name for name in os.listdir(checkpoint_dir)]
if checkpoints:
latest_checkpoint = max(checkpoints, key=os.path.getctime)
print("Restoring from", latest_checkpoint)
return keras.models.load_model(latest_checkpoint)
print("Creating a new model")
return get_compiled_model()
model = make_or_restore_model()
callbacks = [
# This callback saves a SavedModel every 100 batches.
# We include the training loss in the saved model name.
keras.callbacks.ModelCheckpoint(
filepath=checkpoint_dir + "/ckpt-loss={loss:.2f}", save_freq=100
)
]
model.fit(x_train, y_train, epochs=1, callbacks=callbacks)
###Output
_____no_output_____
###Markdown
You can also write your own callback for saving and restoring models.

For a complete guide on serialization and saving, see the [guide to saving and serializing Models](/guides/serialization_and_saving/).

Using learning rate schedules

A common pattern when training deep learning models is to gradually reduce the learning rate as training progresses. This is generally known as "learning rate decay".

The learning rate decay schedule could be static (fixed in advance, as a function of the current epoch or the current batch index), or dynamic (responding to the current behavior of the model, in particular the validation loss).

Passing a schedule to an optimizer

You can easily use a static learning rate decay schedule by passing a schedule object as the `learning_rate` argument in your optimizer:
###Code
initial_learning_rate = 0.1
lr_schedule = keras.optimizers.schedules.ExponentialDecay(
initial_learning_rate, decay_steps=100000, decay_rate=0.96, staircase=True
)
optimizer = keras.optimizers.RMSprop(learning_rate=lr_schedule)
###Output
_____no_output_____
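###Markdown
The resulting optimizer can then be passed to `compile()` like any other; a minimal sketch reusing the helpers defined earlier:
```python
# The schedule decays the learning rate automatically as training
# steps accumulate; no extra work is needed in the training loop.
model = get_uncompiled_model()
model.compile(
    optimizer=optimizer,
    loss="sparse_categorical_crossentropy",
    metrics=["sparse_categorical_accuracy"],
)
model.fit(x_train, y_train, batch_size=64, epochs=1)
```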
###Markdown
Several built-in schedules are available: `ExponentialDecay`, `PiecewiseConstantDecay`, `PolynomialDecay`, and `InverseTimeDecay`.

Using callbacks to implement a dynamic learning rate schedule

A dynamic learning rate schedule (for instance, decreasing the learning rate when the validation loss is no longer improving) cannot be achieved with these schedule objects, since the optimizer does not have access to validation metrics.

However, callbacks do have access to all metrics, including validation metrics! You can thus achieve this pattern by using a callback that modifies the current learning rate on the optimizer. In fact, this is even built-in as the `ReduceLROnPlateau` callback (see the short sketch after the TensorBoard example below).

Visualizing loss and metrics during training

The best way to keep an eye on your model during training is to use [TensorBoard](https://www.tensorflow.org/tensorboard), a browser-based application that you can run locally that provides you with:

- Live plots of the loss and metrics for training and evaluation
- (optionally) Visualizations of the histograms of your layer activations
- (optionally) 3D visualizations of the embedding spaces learned by your `Embedding` layers

If you have installed TensorFlow with pip, you should be able to launch TensorBoard from the command line:

```
tensorboard --logdir=/full_path_to_your_logs
```

Using the TensorBoard callback

The easiest way to use TensorBoard with a Keras model and the fit method is the `TensorBoard` callback.

In the simplest case, just specify where you want the callback to write logs, and you're good to go:
###Code
keras.callbacks.TensorBoard(
log_dir="/full_path_to_your_logs",
histogram_freq=0, # How often to log histogram visualizations
embeddings_freq=0, # How often to log embedding visualizations
update_freq="epoch",
) # How often to write logs (default: once per epoch)
###Output
_____no_output_____
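###Markdown
As promised above, here is a minimal `ReduceLROnPlateau` sketch for the dynamic learning rate pattern (the `factor`, `patience`, and `min_lr` values are just illustrative):
```python
model = get_compiled_model()
callbacks = [
    keras.callbacks.ReduceLROnPlateau(
        # Watch the validation loss...
        monitor="val_loss",
        # ...and multiply the learning rate by 0.1
        factor=0.1,
        # once it has stopped improving for 2 epochs,
        patience=2,
        # never going below this floor.
        min_lr=1e-6,
        verbose=1,
    )
]
model.fit(
    x_train, y_train, epochs=5, batch_size=64,
    callbacks=callbacks, validation_split=0.2,
)
```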
###Markdown
Training & evaluation with the built-in methods**Author:** [fchollet](https://twitter.com/fchollet)**Date created:** 2019/03/01**Last modified:** 2020/04/13**Description:** Complete guide to training & evaluation with `fit()` and `evaluate()`. Setup
###Code
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
###Output
_____no_output_____
###Markdown
IntroductionThis guide covers training, evaluation, and prediction (inference) modelswhen using built-in APIs for training & validation (such as `Model.fit()`,`Model.evaluate()` and `Model.predict()`).If you are interested in leveraging `fit()` while specifying yourown training step function, see the[Customizing what happens in `fit()` guide](/guides/customizing_what_happens_in_fit/).If you are interested in writing your own training & evaluation loops fromscratch, see the guide["writing a training loop from scratch"](/guides/writing_a_training_loop_from_scratch/).In general, whether you are using built-in loops or writing your own, model training &evaluation works strictly in the same way across every kind of Keras model --Sequential models, models built with the Functional API, and models written fromscratch via model subclassing.This guide doesn't cover distributed training, which is covered in our[guide to multi-GPU & distributed training](https://keras.io/guides/distributed_training/). API overview: a first end-to-end exampleWhen passing data to the built-in training loops of a model, you should either use**NumPy arrays** (if your data is small and fits in memory) or **`tf.data Dataset`objects**. In the next few paragraphs, we'll use the MNIST dataset as NumPy arrays, inorder to demonstrate how to use optimizers, losses, and metrics.Let's consider the following model (here, we build in with the Functional API, but itcould be a Sequential model or a subclassed model as well):
###Code
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, activation="softmax", name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
###Output
_____no_output_____
###Markdown
Here's what the typical end-to-end workflow looks like, consisting of:- Training- Validation on a holdout set generated from the original training data- Evaluation on the test dataWe'll use MNIST data for this example.
###Code
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
# Preprocess the data (these are NumPy arrays)
x_train = x_train.reshape(60000, 784).astype("float32") / 255
x_test = x_test.reshape(10000, 784).astype("float32") / 255
y_train = y_train.astype("float32")
y_test = y_test.astype("float32")
# Reserve 10,000 samples for validation
x_val = x_train[-10000:]
y_val = y_train[-10000:]
x_train = x_train[:-10000]
y_train = y_train[:-10000]
###Output
_____no_output_____
###Markdown
We specify the training configuration (optimizer, loss, metrics):
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(), # Optimizer
# Loss function to minimize
loss=keras.losses.SparseCategoricalCrossentropy(),
# List of metrics to monitor
metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
###Output
_____no_output_____
###Markdown
We call `fit()`, which will train the model by slicing the data into "batches" of size`batch_size`, and repeatedly iterating over the entire dataset for a given number of`epochs`.
###Code
print("Fit model on training data")
history = model.fit(
x_train,
y_train,
batch_size=64,
epochs=2,
# We pass some validation for
# monitoring validation loss and metrics
# at the end of each epoch
validation_data=(x_val, y_val),
)
###Output
_____no_output_____
###Markdown
The returned `history` object holds a record of the loss values and metric valuesduring training:
###Code
history.history
###Output
_____no_output_____
###Markdown
We evaluate the model on the test data via `evaluate()`:
###Code
# Evaluate the model on the test data using `evaluate`
print("Evaluate on test data")
results = model.evaluate(x_test, y_test, batch_size=128)
print("test loss, test acc:", results)
# Generate predictions (probabilities -- the output of the last layer)
# on new data using `predict`
print("Generate predictions for 3 samples")
predictions = model.predict(x_test[:3])
print("predictions shape:", predictions.shape)
###Output
_____no_output_____
###Markdown
Now, let's review each piece of this workflow in detail. The `compile()` method: specifying a loss, metrics, and an optimizerTo train a model with `fit()`, you need to specify a loss function, an optimizer, andoptionally, some metrics to monitor.You pass these to the model as arguments to the `compile()` method:
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(),
metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
###Output
_____no_output_____
###Markdown
The `metrics` argument should be a list -- your model can have any number of metrics.If your model has multiple outputs, you can specify different losses and metrics foreach output, and you can modulate the contribution of each output to the total loss ofthe model. You will find more details about this in the **Passing data to multi-input,multi-output models** section.Note that if you're satisfied with the default settings, in many cases the optimizer,loss, and metrics can be specified via string identifiers as a shortcut:
###Code
model.compile(
optimizer="rmsprop",
loss="sparse_categorical_crossentropy",
metrics=["sparse_categorical_accuracy"],
)
###Output
_____no_output_____
###Markdown
For later reuse, let's put our model definition and compile step in functions; we willcall them several times across different examples in this guide.
###Code
def get_uncompiled_model():
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, activation="softmax", name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
return model
def get_compiled_model():
model = get_uncompiled_model()
model.compile(
optimizer="rmsprop",
loss="sparse_categorical_crossentropy",
metrics=["sparse_categorical_accuracy"],
)
return model
###Output
_____no_output_____
###Markdown
Many built-in optimizers, losses, and metrics are availableIn general, you won't have to create your own losses, metrics, or optimizersfrom scratch, because what you need is likely to be already part of the Keras API:Optimizers:- `SGD()` (with or without momentum)- `RMSprop()`- `Adam()`- etc.Losses:- `MeanSquaredError()`- `KLDivergence()`- `CosineSimilarity()`- etc.Metrics:- `AUC()`- `Precision()`- `Recall()`- etc. Custom lossesIf you need to create a custom loss, Keras provides two ways to do so.The first method involves creating a function that accepts inputs `y_true` and`y_pred`. The following example shows a loss function that computes the mean squarederror between the real data and the predictions:
###Code
def custom_mean_squared_error(y_true, y_pred):
return tf.math.reduce_mean(tf.square(y_true - y_pred))
model = get_uncompiled_model()
model.compile(optimizer=keras.optimizers.Adam(), loss=custom_mean_squared_error)
# We need to one-hot encode the labels to use MSE
y_train_one_hot = tf.one_hot(y_train, depth=10)
model.fit(x_train, y_train_one_hot, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
If you need a loss function that takes in parameters beside `y_true` and `y_pred`, youcan subclass the `tf.keras.losses.Loss` class and implement the following two methods:- `__init__(self)`: accept parameters to pass during the call of your loss function- `call(self, y_true, y_pred)`: use the targets (y_true) and the model predictions(y_pred) to compute the model's lossLet's say you want to use mean squared error, but with an added term thatwill de-incentivize prediction values far from 0.5 (we assume that the categoricaltargets are one-hot encoded and take values between 0 and 1). Thiscreates an incentive for the model not to be too confident, which may helpreduce overfitting (we won't know if it works until we try!).Here's how you would do it:
###Code
class CustomMSE(keras.losses.Loss):
def __init__(self, regularization_factor=0.1, name="custom_mse"):
super().__init__(name=name)
self.regularization_factor = regularization_factor
def call(self, y_true, y_pred):
mse = tf.math.reduce_mean(tf.square(y_true - y_pred))
reg = tf.math.reduce_mean(tf.square(0.5 - y_pred))
return mse + reg * self.regularization_factor
model = get_uncompiled_model()
model.compile(optimizer=keras.optimizers.Adam(), loss=CustomMSE())
y_train_one_hot = tf.one_hot(y_train, depth=10)
model.fit(x_train, y_train_one_hot, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
Custom metricsIf you need a metric that isn't part of the API, you can easily create custom metricsby subclassing the `tf.keras.metrics.Metric` class. You will need to implement 4methods:- `__init__(self)`, in which you will create state variables for your metric.- `update_state(self, y_true, y_pred, sample_weight=None)`, which uses the targetsy_true and the model predictions y_pred to update the state variables.- `result(self)`, which uses the state variables to compute the final results.- `reset_states(self)`, which reinitializes the state of the metric.State update and results computation are kept separate (in `update_state()` and`result()`, respectively) because in some cases, the results computation might be veryexpensive and would only be done periodically.Here's a simple example showing how to implement a `CategoricalTruePositives` metricthat counts how many samples were correctly classified as belonging to a given class:
###Code
class CategoricalTruePositives(keras.metrics.Metric):
def __init__(self, name="categorical_true_positives", **kwargs):
super(CategoricalTruePositives, self).__init__(name=name, **kwargs)
self.true_positives = self.add_weight(name="ctp", initializer="zeros")
def update_state(self, y_true, y_pred, sample_weight=None):
y_pred = tf.reshape(tf.argmax(y_pred, axis=1), shape=(-1, 1))
values = tf.cast(y_true, "int32") == tf.cast(y_pred, "int32")
values = tf.cast(values, "float32")
if sample_weight is not None:
sample_weight = tf.cast(sample_weight, "float32")
values = tf.multiply(values, sample_weight)
self.true_positives.assign_add(tf.reduce_sum(values))
def result(self):
return self.true_positives
def reset_states(self):
# The state of the metric will be reset at the start of each epoch.
self.true_positives.assign(0.0)
model = get_uncompiled_model()
model.compile(
optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(),
metrics=[CategoricalTruePositives()],
)
model.fit(x_train, y_train, batch_size=64, epochs=3)
###Output
_____no_output_____
###Markdown
Handling losses and metrics that don't fit the standard signatureThe overwhelming majority of losses and metrics can be computed from `y_true` and`y_pred`, where `y_pred` is an output of your model -- but not all of them. Forinstance, a regularization loss may only require the activation of a layer (there areno targets in this case), and this activation may not be a model output.In such cases, you can call `self.add_loss(loss_value)` from inside the call method ofa custom layer. Losses added in this way get added to the "main" loss during training(the one passed to `compile()`). Here's a simple example that adds activityregularization (note that activity regularization is built-in in all Keras layers --this layer is just for the sake of providing a concrete example):
###Code
class ActivityRegularizationLayer(layers.Layer):
def call(self, inputs):
self.add_loss(tf.reduce_sum(inputs) * 0.1)
return inputs # Pass-through layer.
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)
# Insert activity regularization as a layer
x = ActivityRegularizationLayer()(x)
x = layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(
optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
# The displayed loss will be much higher than before
# due to the regularization component.
model.fit(x_train, y_train, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
You can do the same for logging metric values, using `add_metric()`:
###Code
class MetricLoggingLayer(layers.Layer):
def call(self, inputs):
# The `aggregation` argument defines
# how to aggregate the per-batch values
# over each epoch:
# in this case we simply average them.
self.add_metric(
keras.backend.std(inputs), name="std_of_activation", aggregation="mean"
)
return inputs # Pass-through layer.
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)
# Insert std logging as a layer.
x = MetricLoggingLayer()(x)
x = layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(
optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
model.fit(x_train, y_train, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
In the [Functional API](/guides/functional_api/),you can also call `model.add_loss(loss_tensor)`,or `model.add_metric(metric_tensor, name, aggregation)`.Here's a simple example:
###Code
inputs = keras.Input(shape=(784,), name="digits")
x1 = layers.Dense(64, activation="relu", name="dense_1")(inputs)
x2 = layers.Dense(64, activation="relu", name="dense_2")(x1)
outputs = layers.Dense(10, name="predictions")(x2)
model = keras.Model(inputs=inputs, outputs=outputs)
model.add_loss(tf.reduce_sum(x1) * 0.1)
model.add_metric(keras.backend.std(x1), name="std_of_activation", aggregation="mean")
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
model.fit(x_train, y_train, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
Note that when you pass losses via `add_loss()`, it becomes possible to call`compile()` without a loss function, since the model already has a loss to minimize.Consider the following `LogisticEndpoint` layer: it takes as inputstargets & logits, and it tracks a crossentropy loss via `add_loss()`. It alsotracks classification accuracy via `add_metric()`.
###Code
class LogisticEndpoint(keras.layers.Layer):
def __init__(self, name=None):
super(LogisticEndpoint, self).__init__(name=name)
self.loss_fn = keras.losses.BinaryCrossentropy(from_logits=True)
self.accuracy_fn = keras.metrics.BinaryAccuracy()
def call(self, targets, logits, sample_weights=None):
# Compute the training-time loss value and add it
# to the layer using `self.add_loss()`.
loss = self.loss_fn(targets, logits, sample_weights)
self.add_loss(loss)
# Log accuracy as a metric and add it
# to the layer using `self.add_metric()`.
acc = self.accuracy_fn(targets, logits, sample_weights)
self.add_metric(acc, name="accuracy")
# Return the inference-time prediction tensor (for `.predict()`).
return tf.nn.softmax(logits)
###Output
_____no_output_____
###Markdown
You can use it in a model with two inputs (input data & targets), compiled without a`loss` argument, like this:
###Code
import numpy as np
inputs = keras.Input(shape=(3,), name="inputs")
targets = keras.Input(shape=(10,), name="targets")
logits = keras.layers.Dense(10)(inputs)
predictions = LogisticEndpoint(name="predictions")(logits, targets)
model = keras.Model(inputs=[inputs, targets], outputs=predictions)
model.compile(optimizer="adam") # No loss argument!
data = {
"inputs": np.random.random((3, 3)),
"targets": np.random.random((3, 10)),
}
model.fit(data)
###Output
_____no_output_____
###Markdown
For more information about training multi-input models, see the section **Passing datato multi-input, multi-output models**. Automatically setting apart a validation holdout setIn the first end-to-end example you saw, we used the `validation_data` argument to passa tuple of NumPy arrays `(x_val, y_val)` to the model for evaluating a validation lossand validation metrics at the end of each epoch.Here's another option: the argument `validation_split` allows you to automaticallyreserve part of your training data for validation. The argument value represents thefraction of the data to be reserved for validation, so it should be set to a numberhigher than 0 and lower than 1. For instance, `validation_split=0.2` means "use 20% ofthe data for validation", and `validation_split=0.6` means "use 60% of the data forvalidation".The way the validation is computed is by taking the last x% samples of the arraysreceived by the `fit()` call, before any shuffling.Note that you can only use `validation_split` when training with NumPy data.
###Code
model = get_compiled_model()
model.fit(x_train, y_train, batch_size=64, validation_split=0.2, epochs=1)
###Output
_____no_output_____
###Markdown
Training & evaluation from tf.data DatasetsIn the past few paragraphs, you've seen how to handle losses, metrics, and optimizers,and you've seen how to use the `validation_data` and `validation_split` arguments in`fit()`, when your data is passed as NumPy arrays.Let's now take a look at the case where your data comes in the form of a`tf.data.Dataset` object.The `tf.data` API is a set of utilities in TensorFlow 2.0 for loading and preprocessingdata in a way that's fast and scalable.For a complete guide about creating `Datasets`, see the[tf.data documentation](https://www.tensorflow.org/guide/data).You can pass a `Dataset` instance directly to the methods `fit()`, `evaluate()`, and`predict()`:
###Code
model = get_compiled_model()
# First, let's create a training Dataset instance.
# For the sake of our example, we'll use the same MNIST data as before.
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
# Shuffle and slice the dataset.
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
# Now we get a test dataset.
test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test))
test_dataset = test_dataset.batch(64)
# Since the dataset already takes care of batching,
# we don't pass a `batch_size` argument.
model.fit(train_dataset, epochs=3)
# You can also evaluate or predict on a dataset.
print("Evaluate")
result = model.evaluate(test_dataset)
dict(zip(model.metrics_names, result))
###Output
_____no_output_____
###Markdown
Note that the Dataset is reset at the end of each epoch, so it can be reused of thenext epoch.If you want to run training only on a specific number of batches from this Dataset, youcan pass the `steps_per_epoch` argument, which specifies how many training steps themodel should run using this Dataset before moving on to the next epoch.If you do this, the dataset is not reset at the end of each epoch, instead we just keepdrawing the next batches. The dataset will eventually run out of data (unless it is aninfinitely-looping dataset).
###Code
model = get_compiled_model()
# Prepare the training dataset
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
# Only use the 100 batches per epoch (that's 64 * 100 samples)
model.fit(train_dataset, epochs=3, steps_per_epoch=100)
###Output
_____no_output_____
###Markdown
Using a validation datasetYou can pass a `Dataset` instance as the `validation_data` argument in `fit()`:
###Code
model = get_compiled_model()
# Prepare the training dataset
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
# Prepare the validation dataset
val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))
val_dataset = val_dataset.batch(64)
model.fit(train_dataset, epochs=1, validation_data=val_dataset)
###Output
_____no_output_____
###Markdown
At the end of each epoch, the model will iterate over the validation dataset andcompute the validation loss and validation metrics.If you want to run validation only on a specific number of batches from this dataset,you can pass the `validation_steps` argument, which specifies how many validationsteps the model should run with the validation dataset before interrupting validationand moving on to the next epoch:
###Code
model = get_compiled_model()
# Prepare the training dataset
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
# Prepare the validation dataset
val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))
val_dataset = val_dataset.batch(64)
model.fit(
train_dataset,
epochs=1,
# Only run validation using the first 10 batches of the dataset
# using the `validation_steps` argument
validation_data=val_dataset,
validation_steps=10,
)
###Output
_____no_output_____
###Markdown
Note that the validation dataset will be reset after each use (so that you will alwaysbe evaluating on the same samples from epoch to epoch).The argument `validation_split` (generating a holdout set from the training data) isnot supported when training from `Dataset` objects, since this feature requires theability to index the samples of the datasets, which is not possible in general withthe `Dataset` API. Other input formats supportedBesides NumPy arrays, eager tensors, and TensorFlow `Datasets`, it's possible to traina Keras model using Pandas dataframes, or from Python generators that yield batches ofdata & labels.In particular, the `keras.utils.Sequence` class offers a simple interface to buildPython data generators that are multiprocessing-aware and can be shuffled.In general, we recommend that you use:- NumPy input data if your data is small and fits in memory- `Dataset` objects if you have large datasets and you need to do distributed training- `Sequence` objects if you have large datasets and you need to do a lot of customPython-side processing that cannot be done in TensorFlow (e.g. if you rely on external librariesfor data loading or preprocessing). Using a `keras.utils.Sequence` object as input`keras.utils.Sequence` is a utility that you can subclass to obtain a Python generator withtwo important properties:- It works well with multiprocessing.- It can be shuffled (e.g. when passing `shuffle=True` in `fit()`).A `Sequence` must implement two methods:- `__getitem__`- `__len__`The method `__getitem__` should return a complete batch.If you want to modify your dataset between epochs, you may implement `on_epoch_end`.Here's a quick example:```pythonfrom skimage.io import imreadfrom skimage.transform import resizeimport numpy as np Here, `filenames` is list of path to the images and `labels` are the associated labels.class CIFAR10Sequence(Sequence): def __init__(self, filenames, labels, batch_size): self.filenames, self.labels = filenames, labels self.batch_size = batch_size def __len__(self): return int(np.ceil(len(self.filenames) / float(self.batch_size))) def __getitem__(self, idx): batch_x = self.filenames[idx * self.batch_size:(idx + 1) * self.batch_size] batch_y = self.labels[idx * self.batch_size:(idx + 1) * self.batch_size] return np.array([ resize(imread(filename), (200, 200)) for filename in batch_x]), np.array(batch_y)sequence = CIFAR10Sequence(filenames, labels, batch_size)model.fit(sequence, epochs=10)``` Using sample weighting and class weightingWith the default settings the weight of a sample is decided by its frequencyin the dataset. There are two methods to weight the data, independent ofsample frequency:* Class weights* Sample weights Class weightsThis is set by passing a dictionary to the `class_weight` argument to`Model.fit()`. This dictionary maps class indices to the weight that shouldbe used for samples belonging to this class.This can be used to balance classes without resampling, or to train amodel that gives more importance to a particular class.For instance, if class "0" is half as represented as class "1" in your data,you could use `Model.fit(..., class_weight={0: 1., 1: 0.5})`. Here's a NumPy example where we use class weights or sample weights togive more importance to the correct classification of class 5 (whichis the digit "5" in the MNIST dataset).
###Code
import numpy as np
class_weight = {
0: 1.0,
1: 1.0,
2: 1.0,
3: 1.0,
4: 1.0,
# Set weight "2" for class "5",
# making this class 2x more important
5: 2.0,
6: 1.0,
7: 1.0,
8: 1.0,
9: 1.0,
}
print("Fit with class weight")
model = get_compiled_model()
model.fit(x_train, y_train, class_weight=class_weight, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
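If your classes are imbalanced, a common heuristic is to weight each class inversely to its frequency. Here is a sketch of that idea (not part of the original guide), reusing `y_train`, `x_train`, and `get_compiled_model()` from the earlier cells:

```python
import numpy as np

# Count samples per class (`y_train` holds float labels here, so cast first).
counts = np.bincount(y_train.astype("int64"))
n_classes = len(counts)

# "Balanced" weighting: n_samples / (n_classes * samples_in_class).
class_weight = {i: len(y_train) / (n_classes * c) for i, c in enumerate(counts)}

model = get_compiled_model()
model.fit(x_train, y_train, class_weight=class_weight, batch_size=64, epochs=1)
```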
Sample weights

For fine-grained control, or if you are not building a classifier, you can use "sample weights".

- When training from NumPy data: pass the `sample_weight` argument to `Model.fit()`.
- When training from `tf.data` or any other sort of iterator: yield `(input_batch, label_batch, sample_weight_batch)` tuples.

A "sample weights" array is an array of numbers that specify how much weight each sample in a batch should have in computing the total loss. It is commonly used in imbalanced classification problems (the idea being to give more weight to rarely-seen classes).

When the weights used are ones and zeros, the array can be used as a *mask* for the loss function (entirely discarding the contribution of certain samples to the total loss); a sketch of this masking pattern follows the `Dataset` example below.
###Code
sample_weight = np.ones(shape=(len(y_train),))
sample_weight[y_train == 5] = 2.0
print("Fit with sample weight")
model = get_compiled_model()
model.fit(x_train, y_train, sample_weight=sample_weight, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
Here's a matching `Dataset` example:
###Code
sample_weight = np.ones(shape=(len(y_train),))
sample_weight[y_train == 5] = 2.0
# Create a Dataset that includes sample weights
# (3rd element in the return tuple).
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train, sample_weight))
# Shuffle and slice the dataset.
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
model = get_compiled_model()
model.fit(train_dataset, epochs=1)
###Output
_____no_output_____
###Markdown
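As mentioned above, a 0/1 sample-weight array acts as a mask: samples with weight zero are simply dropped from the loss. A minimal sketch (an illustration, not part of the original guide) that excludes all samples of class "9" from training:

```python
# Zero out the loss contribution of every sample labeled "9".
mask_weight = (y_train != 9).astype("float32")

model = get_compiled_model()
model.fit(x_train, y_train, sample_weight=mask_weight, batch_size=64, epochs=1)
```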
Passing data to multi-input, multi-output models

In the previous examples, we were considering a model with a single input (a tensor of shape `(784,)`) and a single output (a prediction tensor of shape `(10,)`). But what about models that have multiple inputs or outputs?

Consider the following model, which has an image input of shape `(32, 32, 3)` (that's `(height, width, channels)`) and a time series input of shape `(None, 10)` (that's `(timesteps, features)`). Our model will have two outputs computed from the combination of these inputs: a "score" (of shape `(1,)`) and a probability distribution over five classes (of shape `(5,)`).
###Code
image_input = keras.Input(shape=(32, 32, 3), name="img_input")
timeseries_input = keras.Input(shape=(None, 10), name="ts_input")
x1 = layers.Conv2D(3, 3)(image_input)
x1 = layers.GlobalMaxPooling2D()(x1)
x2 = layers.Conv1D(3, 3)(timeseries_input)
x2 = layers.GlobalMaxPooling1D()(x2)
x = layers.concatenate([x1, x2])
score_output = layers.Dense(1, name="score_output")(x)
# Softmax so this output is a valid probability distribution over the 5 classes
class_output = layers.Dense(5, activation="softmax", name="class_output")(x)
model = keras.Model(
inputs=[image_input, timeseries_input], outputs=[score_output, class_output]
)
###Output
_____no_output_____
###Markdown
Let's plot this model, so you can clearly see what we're doing here (note that the shapes shown in the plot are batch shapes, rather than per-sample shapes).
###Code
keras.utils.plot_model(model, "multi_input_and_output_model.png", show_shapes=True)
###Output
_____no_output_____
###Markdown
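Note (an environment assumption, not stated in the original guide): `plot_model` depends on the `pydot` package and a Graphviz installation; if the call above raises an import error, install both (e.g. `pip install pydot` plus your platform's Graphviz package) and retry.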
At compilation time, we can specify different losses for different outputs, by passing the loss functions as a list:
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[keras.losses.MeanSquaredError(), keras.losses.CategoricalCrossentropy()],
)
###Output
_____no_output_____
###Markdown
If we only passed a single loss function to the model, the same loss function would be applied to every output (which is not appropriate here).

Likewise for metrics:
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[keras.losses.MeanSquaredError(), keras.losses.CategoricalCrossentropy()],
metrics=[
[
keras.metrics.MeanAbsolutePercentageError(),
keras.metrics.MeanAbsoluteError(),
],
[keras.metrics.CategoricalAccuracy()],
],
)
###Output
_____no_output_____
###Markdown
Since we gave names to our output layers, we could also specify per-output losses and metrics via a dict:
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss={
"score_output": keras.losses.MeanSquaredError(),
"class_output": keras.losses.CategoricalCrossentropy(),
},
metrics={
"score_output": [
keras.metrics.MeanAbsolutePercentageError(),
keras.metrics.MeanAbsoluteError(),
],
"class_output": [keras.metrics.CategoricalAccuracy()],
},
)
###Output
_____no_output_____
###Markdown
We recommend the use of explicit names and dicts if you have more than 2 outputs.

It's possible to give different weights to different output-specific losses (for instance, one might wish to privilege the "score" loss in our example, by giving it 2x the importance of the class loss), using the `loss_weights` argument:
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss={
"score_output": keras.losses.MeanSquaredError(),
"class_output": keras.losses.CategoricalCrossentropy(),
},
metrics={
"score_output": [
keras.metrics.MeanAbsolutePercentageError(),
keras.metrics.MeanAbsoluteError(),
],
"class_output": [keras.metrics.CategoricalAccuracy()],
},
loss_weights={"score_output": 2.0, "class_output": 1.0},
)
###Output
_____no_output_____
###Markdown
You could also choose not to compute a loss for certain outputs, if these outputs are meant for prediction but not for training:
###Code
# List loss version
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[None, keras.losses.CategoricalCrossentropy()],
)
# Or dict loss version
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss={"class_output": keras.losses.CategoricalCrossentropy()},
)
###Output
_____no_output_____
###Markdown
Passing data to a multi-input or multi-output model in `fit()` works similarly to specifying a loss function in `compile()`: you can pass **lists of NumPy arrays** (with a 1:1 mapping to the outputs that received a loss function) or **dicts mapping output names to NumPy arrays**.
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[keras.losses.MeanSquaredError(), keras.losses.CategoricalCrossentropy()],
)
# Generate dummy NumPy data
img_data = np.random.random_sample(size=(100, 32, 32, 3))
ts_data = np.random.random_sample(size=(100, 20, 10))
score_targets = np.random.random_sample(size=(100, 1))
class_targets = np.random.random_sample(size=(100, 5))
# Fit on lists
model.fit([img_data, ts_data], [score_targets, class_targets], batch_size=32, epochs=1)
# Alternatively, fit on dicts
model.fit(
{"img_input": img_data, "ts_input": ts_data},
{"score_output": score_targets, "class_output": class_targets},
batch_size=32,
epochs=1,
)
###Output
_____no_output_____
###Markdown
Here's the `Dataset` use case: similar to what we did for NumPy arrays, the `Dataset` should return a tuple of dicts.
###Code
train_dataset = tf.data.Dataset.from_tensor_slices(
(
{"img_input": img_data, "ts_input": ts_data},
{"score_output": score_targets, "class_output": class_targets},
)
)
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
model.fit(train_dataset, epochs=1)
###Output
_____no_output_____
###Markdown
Using callbacks

Callbacks in Keras are objects that are called at different points during training (at the start of an epoch, at the end of a batch, at the end of an epoch, etc.). They can be used to implement certain behaviors, such as:

- Doing validation at different points during training (beyond the built-in per-epoch validation)
- Checkpointing the model at regular intervals or when it exceeds a certain accuracy threshold
- Changing the learning rate of the model when training seems to be plateauing
- Doing fine-tuning of the top layers when training seems to be plateauing
- Sending email or instant message notifications when training ends or when a certain performance threshold is exceeded
- Etc.

Callbacks can be passed as a list to your call to `fit()`:
###Code
model = get_compiled_model()
callbacks = [
keras.callbacks.EarlyStopping(
# Stop training when `val_loss` is no longer improving
monitor="val_loss",
# "no longer improving" being defined as "no better than 1e-2 less"
min_delta=1e-2,
# "no longer improving" being further defined as "for at least 2 epochs"
patience=2,
verbose=1,
)
]
model.fit(
x_train,
y_train,
epochs=20,
batch_size=64,
callbacks=callbacks,
validation_split=0.2,
)
###Output
_____no_output_____
###Markdown
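One related option worth knowing about: `EarlyStopping` can also roll the model back to the weights from its best epoch once training stops, via the `restore_best_weights` argument. A short sketch (hedged -- availability may depend on your Keras version):

```python
callbacks = [
    keras.callbacks.EarlyStopping(
        monitor="val_loss",
        patience=2,
        # Restore the weights from the epoch with the best `val_loss`,
        # instead of keeping the weights from the last (possibly worse) epoch.
        restore_best_weights=True,
    )
]
```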
Many built-in callbacks are available

There are many built-in callbacks already available in Keras, such as:

- `ModelCheckpoint`: periodically save the model.
- `EarlyStopping`: stop training when training is no longer improving the validation metrics.
- `TensorBoard`: periodically write model logs that can be visualized in [TensorBoard](https://www.tensorflow.org/tensorboard) (more details in the section "Visualization").
- `CSVLogger`: streams loss and metrics data to a CSV file.
- etc.

See the [callbacks documentation](/api/callbacks/) for the complete list.

Writing your own callback

You can create a custom callback by extending the base class `keras.callbacks.Callback`. A callback has access to its associated model through the class property `self.model`.

Make sure to read the [complete guide to writing custom callbacks](/guides/writing_your_own_callbacks/).

Here's a simple example saving a list of per-batch loss values during training:
###Code
class LossHistory(keras.callbacks.Callback):
def on_train_begin(self, logs):
self.per_batch_losses = []
def on_batch_end(self, batch, logs):
self.per_batch_losses.append(logs.get("loss"))
###Output
_____no_output_____
###Markdown
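A quick usage sketch for the callback above (assuming the training arrays and `get_compiled_model()` from the earlier cells):

```python
history_cb = LossHistory()
model = get_compiled_model()
model.fit(x_train, y_train, batch_size=64, epochs=1, callbacks=[history_cb])

# The callback accumulated one loss value per batch.
print("First 5 per-batch losses:", history_cb.per_batch_losses[:5])
```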
Checkpointing models

When you're training a model on relatively large datasets, it's crucial to save checkpoints of your model at frequent intervals.

The easiest way to achieve this is with the `ModelCheckpoint` callback:
###Code
model = get_compiled_model()
callbacks = [
keras.callbacks.ModelCheckpoint(
# Path where to save the model
# The two parameters below mean that we will overwrite
# the current checkpoint if and only if
# the `val_loss` score has improved.
# The saved model name will include the current epoch.
filepath="mymodel_{epoch}",
save_best_only=True, # Only save a model if `val_loss` has improved.
monitor="val_loss",
verbose=1,
)
]
model.fit(
x_train, y_train, epochs=2, batch_size=64, callbacks=callbacks, validation_split=0.2
)
###Output
_____no_output_____
###Markdown
The `ModelCheckpoint` callback can be used to implement fault-tolerance: the ability to restart training from the last saved state of the model in case training gets randomly interrupted. Here's a basic example:
###Code
import os
# Prepare a directory to store all the checkpoints.
checkpoint_dir = "./ckpt"
if not os.path.exists(checkpoint_dir):
os.makedirs(checkpoint_dir)
def make_or_restore_model():
# Either restore the latest model, or create a fresh one
# if there is no checkpoint available.
checkpoints = [checkpoint_dir + "/" + name for name in os.listdir(checkpoint_dir)]
if checkpoints:
latest_checkpoint = max(checkpoints, key=os.path.getctime)
print("Restoring from", latest_checkpoint)
return keras.models.load_model(latest_checkpoint)
print("Creating a new model")
return get_compiled_model()
model = make_or_restore_model()
callbacks = [
# This callback saves a SavedModel every 100 batches.
# We include the training loss in the saved model name.
keras.callbacks.ModelCheckpoint(
filepath=checkpoint_dir + "/ckpt-loss={loss:.2f}", save_freq=100
)
]
model.fit(x_train, y_train, epochs=1, callbacks=callbacks)
###Output
_____no_output_____
###Markdown
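Building on the fault-tolerance pattern above, here is a minimal sketch of a hand-written saving callback (the `./custom_ckpt` path and `EpochSaver` name are illustrative, not from the original guide):

```python
class EpochSaver(keras.callbacks.Callback):
    """Sketch: save the full model at the end of every epoch."""

    def __init__(self, directory):
        super().__init__()
        self.directory = directory

    def on_epoch_end(self, epoch, logs=None):
        # `self.model` is attached by Keras when the callback is used in `fit()`.
        self.model.save(f"{self.directory}/model_epoch_{epoch}")

model = get_compiled_model()
model.fit(x_train, y_train, epochs=1, callbacks=[EpochSaver("./custom_ckpt")])
```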
As the sketch above suggests, you can also write your own callback for saving and restoring models.

For a complete guide on serialization and saving, see the [guide to saving and serializing Models](/guides/serialization_and_saving/).

Using learning rate schedules

A common pattern when training deep learning models is to gradually reduce the learning rate as training progresses. This is generally known as "learning rate decay".

The learning rate decay schedule could be static (fixed in advance, as a function of the current epoch or the current batch index), or dynamic (responding to the current behavior of the model, in particular the validation loss).

Passing a schedule to an optimizer

You can easily use a static learning rate decay schedule by passing a schedule object as the `learning_rate` argument in your optimizer:
###Code
initial_learning_rate = 0.1
lr_schedule = keras.optimizers.schedules.ExponentialDecay(
initial_learning_rate, decay_steps=100000, decay_rate=0.96, staircase=True
)
optimizer = keras.optimizers.RMSprop(learning_rate=lr_schedule)
###Output
_____no_output_____
###Markdown
Several built-in schedules are available: `ExponentialDecay`, `PiecewiseConstantDecay`, `PolynomialDecay`, and `InverseTimeDecay`.

Using callbacks to implement a dynamic learning rate schedule

A dynamic learning rate schedule (for instance, decreasing the learning rate when the validation loss is no longer improving) cannot be achieved with these schedule objects, since the optimizer does not have access to validation metrics.

However, callbacks do have access to all metrics, including validation metrics! You can thus achieve this pattern by using a callback that modifies the current learning rate on the optimizer. In fact, this is even built-in as the `ReduceLROnPlateau` callback.

Visualizing loss and metrics during training

The best way to keep an eye on your model during training is to use [TensorBoard](https://www.tensorflow.org/tensorboard) -- a browser-based application that you can run locally and that provides you with:

- Live plots of the loss and metrics for training and evaluation
- (optionally) Visualizations of the histograms of your layer activations
- (optionally) 3D visualizations of the embedding spaces learned by your `Embedding` layers

If you have installed TensorFlow with pip, you should be able to launch TensorBoard from the command line:

```
tensorboard --logdir=/full_path_to_your_logs
```

Using the TensorBoard callback

The easiest way to use TensorBoard with a Keras model and the `fit()` method is the `TensorBoard` callback.

In the simplest case, just specify where you want the callback to write logs, and you're good to go:
###Code
keras.callbacks.TensorBoard(
log_dir="/full_path_to_your_logs",
histogram_freq=0, # How often to log histogram visualizations
embeddings_freq=0, # How often to log embedding visualizations
    update_freq="epoch",  # How often to write logs (default: once per epoch)
)
###Output
_____no_output_____
###Markdown
Training & evaluation with the built-in methods**Author:** [fchollet](https://twitter.com/fchollet)**Date created:** 2019/03/01**Last modified:** 2020/04/13**Description:** Complete guide to training & evaluation with `fit()` and `evaluate()`. Setup
###Code
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
###Output
_____no_output_____
###Markdown
IntroductionThis guide covers training, evaluation, and prediction (inference) modelswhen using built-in APIs for training & validation (such as `model.fit()`,`model.evaluate()`, `model.predict()`).If you are interested in leveraging `fit()` while specifying yourown training step function, see the guide["customizing what happens in `fit()`"](/guides/customizing_what_happens_in_fit/).If you are interested in writing your own training & evaluation loops fromscratch, see the guide["writing a training loop from scratch"](/guides/writing_a_training_loop_from_scratch/).In general, whether you are using built-in loops or writing your own, model training &evaluation works strictly in the same way across every kind of Keras model --Sequential models, models built with the Functional API, and models written fromscratch via model subclassing.This guide doesn't cover distributed training. For distributed training, seeour [guide to multi-gpu & distributed training](/guides/distributed_training/). API overview: a first end-to-end exampleWhen passing data to the built-in training loops of a model, you should either use**NumPy arrays** (if your data is small and fits in memory) or **`tf.data Dataset`objects**. In the next few paragraphs, we'll use the MNIST dataset as NumPy arrays, inorder to demonstrate how to use optimizers, losses, and metrics.Let's consider the following model (here, we build in with the Functional API, but itcould be a Sequential model or a subclassed model as well):
###Code
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, activation="softmax", name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
###Output
_____no_output_____
###Markdown
Here's what the typical end-to-end workflow looks like, consisting of:- Training- Validation on a holdout set generated from the original training data- Evaluation on the test dataWe'll use MNIST data for this example.
###Code
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
# Preprocess the data (these are NumPy arrays)
x_train = x_train.reshape(60000, 784).astype("float32") / 255
x_test = x_test.reshape(10000, 784).astype("float32") / 255
y_train = y_train.astype("float32")
y_test = y_test.astype("float32")
# Reserve 10,000 samples for validation
x_val = x_train[-10000:]
y_val = y_train[-10000:]
x_train = x_train[:-10000]
y_train = y_train[:-10000]
###Output
_____no_output_____
###Markdown
We specify the training configuration (optimizer, loss, metrics):
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(), # Optimizer
# Loss function to minimize
loss=keras.losses.SparseCategoricalCrossentropy(),
# List of metrics to monitor
metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
###Output
_____no_output_____
###Markdown
We call `fit()`, which will train the model by slicing the data into "batches" of size"batch_size", and repeatedly iterating over the entire dataset for a given number of"epochs".
###Code
print("Fit model on training data")
history = model.fit(
x_train,
y_train,
batch_size=64,
epochs=2,
# We pass some validation for
# monitoring validation loss and metrics
# at the end of each epoch
validation_data=(x_val, y_val),
)
###Output
_____no_output_____
###Markdown
The returned "history" object holds a record of the loss values and metric valuesduring training:
###Code
history.history
###Output
_____no_output_____
###Markdown
We evaluate the model on the test data via `evaluate()`:
###Code
# Evaluate the model on the test data using `evaluate`
print("Evaluate on test data")
results = model.evaluate(x_test, y_test, batch_size=128)
print("test loss, test acc:", results)
# Generate predictions (probabilities -- the output of the last layer)
# on new data using `predict`
print("Generate predictions for 3 samples")
predictions = model.predict(x_test[:3])
print("predictions shape:", predictions.shape)
###Output
_____no_output_____
###Markdown
Now, let's review each piece of this workflow in detail. The `compile()` method: specifying a loss, metrics, and an optimizerTo train a model with `fit()`, you need to specify a loss function, an optimizer, andoptionally, some metrics to monitor.You pass these to the model as arguments to the `compile()` method:
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(),
metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
###Output
_____no_output_____
###Markdown
The `metrics` argument should be a list -- your model can have any number of metrics.If your model has multiple outputs, you can specify different losses and metrics foreach output, and you can modulate the contribution of each output to the total loss ofthe model. You will find more details about this in the section **"Passing data tomulti-input, multi-output models"**.Note that if you're satisfied with the default settings, in many cases the optimizer,loss, and metrics can be specified via string identifiers as a shortcut:
###Code
model.compile(
optimizer="rmsprop",
loss="sparse_categorical_crossentropy",
metrics=["sparse_categorical_accuracy"],
)
###Output
_____no_output_____
###Markdown
For later reuse, let's put our model definition and compile step in functions; we willcall them several times across different examples in this guide.
###Code
def get_uncompiled_model():
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, activation="softmax", name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
return model
def get_compiled_model():
model = get_uncompiled_model()
model.compile(
optimizer="rmsprop",
loss="sparse_categorical_crossentropy",
metrics=["sparse_categorical_accuracy"],
)
return model
###Output
_____no_output_____
###Markdown
Many built-in optimizers, losses, and metrics are availableIn general, you won't have to create from scratch your own losses, metrics, oroptimizers, because what you need is likely already part of the Keras API:Optimizers:- `SGD()` (with or without momentum)- `RMSprop()`- `Adam()`- etc.Losses:- `MeanSquaredError()`- `KLDivergence()`- `CosineSimilarity()`- etc.Metrics:- `AUC()`- `Precision()`- `Recall()`- etc. Custom lossesThere are two ways to provide custom losses with Keras. The first example creates afunction that accepts inputs `y_true` and `y_pred`. The following example shows a lossfunction that computes the mean squared error between the real data and thepredictions:
###Code
def custom_mean_squared_error(y_true, y_pred):
return tf.math.reduce_mean(tf.square(y_true - y_pred))
model = get_uncompiled_model()
model.compile(optimizer=keras.optimizers.Adam(), loss=custom_mean_squared_error)
# We need to one-hot encode the labels to use MSE
y_train_one_hot = tf.one_hot(y_train, depth=10)
model.fit(x_train, y_train_one_hot, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
If you need a loss function that takes in parameters beside `y_true` and `y_pred`, youcan subclass the `tf.keras.losses.Loss` class and implement the following two methods:- `__init__(self)`: accept parameters to pass during the call of your loss function- `call(self, y_true, y_pred)`: use the targets (y_true) and the model predictions(y_pred) to compute the model's lossLet's say you want to use mean squared error, but with an added term thatwill de-incentivize prediction values far from 0.5 (we assume that the categoricaltargets are one-hot encoded and take values between 0 and 1). Thiscreates an incentive for the model not to be too confident, which may helpreduce overfitting (we won't know if it works until we try!).Here's how you would do it:
###Code
class CustomMSE(keras.losses.Loss):
def __init__(self, regularization_factor=0.1, name="custom_mse"):
super().__init__(name=name)
self.regularization_factor = regularization_factor
def call(self, y_true, y_pred):
mse = tf.math.reduce_mean(tf.square(y_true - y_pred))
reg = tf.math.reduce_mean(tf.square(0.5 - y_pred))
return mse + reg * self.regularization_factor
model = get_uncompiled_model()
model.compile(optimizer=keras.optimizers.Adam(), loss=CustomMSE())
y_train_one_hot = tf.one_hot(y_train, depth=10)
model.fit(x_train, y_train_one_hot, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
Custom metricsIf you need a metric that isn't part of the API, you can easily create custom metricsby subclassing the `tf.keras.metrics.Metric` class. You will need to implement 4methods:- `__init__(self)`, in which you will create state variables for your metric.- `update_state(self, y_true, y_pred, sample_weight=None)`, which uses the targetsy_true and the model predictions y_pred to update the state variables.- `result(self)`, which uses the state variables to compute the final results.- `reset_states(self)`, which reinitializes the state of the metric.State update and results computation are kept separate (in `update_state()` and`result()`, respectively) because in some cases, results computation might be veryexpensive, and would only be done periodically.Here's a simple example showing how to implement a `CategoricalTruePositives` metric,that counts how many samples were correctly classified as belonging to a given class:
###Code
class CategoricalTruePositives(keras.metrics.Metric):
def __init__(self, name="categorical_true_positives", **kwargs):
super(CategoricalTruePositives, self).__init__(name=name, **kwargs)
self.true_positives = self.add_weight(name="ctp", initializer="zeros")
def update_state(self, y_true, y_pred, sample_weight=None):
y_pred = tf.reshape(tf.argmax(y_pred, axis=1), shape=(-1, 1))
values = tf.cast(y_true, "int32") == tf.cast(y_pred, "int32")
values = tf.cast(values, "float32")
if sample_weight is not None:
sample_weight = tf.cast(sample_weight, "float32")
values = tf.multiply(values, sample_weight)
self.true_positives.assign_add(tf.reduce_sum(values))
def result(self):
return self.true_positives
def reset_states(self):
# The state of the metric will be reset at the start of each epoch.
self.true_positives.assign(0.0)
model = get_uncompiled_model()
model.compile(
optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(),
metrics=[CategoricalTruePositives()],
)
model.fit(x_train, y_train, batch_size=64, epochs=3)
###Output
_____no_output_____
###Markdown
Handling losses and metrics that don't fit the standard signatureThe overwhelming majority of losses and metrics can be computed from `y_true` and`y_pred`, where `y_pred` is an output of your model. But not all of them. Forinstance, a regularization loss may only require the activation of a layer (there areno targets in this case), and this activation may not be a model output.In such cases, you can call `self.add_loss(loss_value)` from inside the call method ofa custom layer. Losses added in this way get added to the "main" loss during training(the one passed to `compile()`). Here's a simple example that adds activityregularization (note that activity regularization is built-in in all Keras layers --this layer is just for the sake of providing a concrete example):
###Code
class ActivityRegularizationLayer(layers.Layer):
def call(self, inputs):
self.add_loss(tf.reduce_sum(inputs) * 0.1)
return inputs # Pass-through layer.
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)
# Insert activity regularization as a layer
x = ActivityRegularizationLayer()(x)
x = layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(
optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
# The displayed loss will be much higher than before
# due to the regularization component.
model.fit(x_train, y_train, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
You can do the same for logging metric values, using `add_metric()`:
###Code
class MetricLoggingLayer(layers.Layer):
def call(self, inputs):
# The `aggregation` argument defines
# how to aggregate the per-batch values
# over each epoch:
# in this case we simply average them.
self.add_metric(
keras.backend.std(inputs), name="std_of_activation", aggregation="mean"
)
return inputs # Pass-through layer.
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)
# Insert std logging as a layer.
x = MetricLoggingLayer()(x)
x = layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(
optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
model.fit(x_train, y_train, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
In the [Functional API](/guides/functional_api/),you can also call `model.add_loss(loss_tensor)`,or `model.add_metric(metric_tensor, name, aggregation)`.Here's a simple example:
###Code
inputs = keras.Input(shape=(784,), name="digits")
x1 = layers.Dense(64, activation="relu", name="dense_1")(inputs)
x2 = layers.Dense(64, activation="relu", name="dense_2")(x1)
outputs = layers.Dense(10, name="predictions")(x2)
model = keras.Model(inputs=inputs, outputs=outputs)
model.add_loss(tf.reduce_sum(x1) * 0.1)
model.add_metric(keras.backend.std(x1), name="std_of_activation", aggregation="mean")
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
model.fit(x_train, y_train, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
Note that when you pass losses via `add_loss()`, it becomes possible to call`compile()` without a loss function, since the model already has a loss to minimize.Consider the following `LogisticEndpoint` layer: it takes as inputstargets & logits, and it tracks a crossentropy loss via `add_loss()`. It alsotracks classification accuracy via `add_metric()`.
###Code
class LogisticEndpoint(keras.layers.Layer):
def __init__(self, name=None):
super(LogisticEndpoint, self).__init__(name=name)
self.loss_fn = keras.losses.BinaryCrossentropy(from_logits=True)
self.accuracy_fn = keras.metrics.BinaryAccuracy()
def call(self, targets, logits, sample_weights=None):
# Compute the training-time loss value and add it
# to the layer using `self.add_loss()`.
loss = self.loss_fn(targets, logits, sample_weights)
self.add_loss(loss)
# Log accuracy as a metric and add it
# to the layer using `self.add_metric()`.
acc = self.accuracy_fn(targets, logits, sample_weights)
self.add_metric(acc, name="accuracy")
# Return the inference-time prediction tensor (for `.predict()`).
return tf.nn.softmax(logits)
###Output
_____no_output_____
###Markdown
You can use it in a model with two inputs (input data & targets), compiled without a`loss` argument, like this:
###Code
import numpy as np
inputs = keras.Input(shape=(3,), name="inputs")
targets = keras.Input(shape=(10,), name="targets")
logits = keras.layers.Dense(10)(inputs)
predictions = LogisticEndpoint(name="predictions")(logits, targets)
model = keras.Model(inputs=[inputs, targets], outputs=predictions)
model.compile(optimizer="adam") # No loss argument!
data = {
"inputs": np.random.random((3, 3)),
"targets": np.random.random((3, 10)),
}
model.fit(data)
###Output
_____no_output_____
###Markdown
For more information about training multi-input models, see the section **Passing datato multi-input, multi-output models**. Automatically setting apart a validation holdout setIn the first end-to-end example you saw, we used the `validation_data` argument to passa tuple of NumPy arrays `(x_val, y_val)` to the model for evaluating a validation lossand validation metrics at the end of each epoch.Here's another option: the argument `validation_split` allows you to automaticallyreserve part of your training data for validation. The argument value represents thefraction of the data to be reserved for validation, so it should be set to a numberhigher than 0 and lower than 1. For instance, `validation_split=0.2` means "use 20% ofthe data for validation", and `validation_split=0.6` means "use 60% of the data forvalidation".The way the validation is computed is by taking the last x% samples of the arraysreceived by the fit call, before any shuffling.Note that you can only use `validation_split` when training with NumPy data.
###Code
model = get_compiled_model()
model.fit(x_train, y_train, batch_size=64, validation_split=0.2, epochs=1)
###Output
_____no_output_____
###Markdown
Training & evaluation from tf.data DatasetsIn the past few paragraphs, you've seen how to handle losses, metrics, and optimizers,and you've seen how to use the `validation_data` and `validation_split` arguments infit, when your data is passed as NumPy arrays.Let's now take a look at the case where your data comes in the form of a`tf.data.Dataset` object.The `tf.data` API is a set of utilities in TensorFlow 2.0 for loading and preprocessingdata in a way that's fast and scalable.For a complete guide about creating `Datasets`, see the[tf.data documentation](https://www.tensorflow.org/guide/data).You can pass a `Dataset` instance directly to the methods `fit()`, `evaluate()`, and`predict()`:
###Code
model = get_compiled_model()
# First, let's create a training Dataset instance.
# For the sake of our example, we'll use the same MNIST data as before.
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
# Shuffle and slice the dataset.
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
# Now we get a test dataset.
test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test))
test_dataset = test_dataset.batch(64)
# Since the dataset already takes care of batching,
# we don't pass a `batch_size` argument.
model.fit(train_dataset, epochs=3)
# You can also evaluate or predict on a dataset.
print("Evaluate")
result = model.evaluate(test_dataset)
dict(zip(model.metrics_names, result))
###Output
_____no_output_____
###Markdown
Note that the Dataset is reset at the end of each epoch, so it can be reused of thenext epoch.If you want to run training only on a specific number of batches from this Dataset, youcan pass the `steps_per_epoch` argument, which specifies how many training steps themodel should run using this Dataset before moving on to the next epoch.If you do this, the dataset is not reset at the end of each epoch, instead we just keepdrawing the next batches. The dataset will eventually run out of data (unless it is aninfinitely-looping dataset).
###Code
model = get_compiled_model()
# Prepare the training dataset
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
# Only use the 100 batches per epoch (that's 64 * 100 samples)
model.fit(train_dataset, epochs=3, steps_per_epoch=100)
###Output
_____no_output_____
###Markdown
Using a validation datasetYou can pass a `Dataset` instance as the `validation_data` argument in `fit()`:
###Code
model = get_compiled_model()
# Prepare the training dataset
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
# Prepare the validation dataset
val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))
val_dataset = val_dataset.batch(64)
model.fit(train_dataset, epochs=1, validation_data=val_dataset)
###Output
_____no_output_____
###Markdown
At the end of each epoch, the model will iterate over the validation dataset andcompute the validation loss and validation metrics.If you want to run validation only on a specific number of batches from this dataset,you can pass the `validation_steps` argument, which specifies how many validationsteps the model should run with the validation dataset before interrupting validationand moving on to the next epoch:
###Code
model = get_compiled_model()
# Prepare the training dataset
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
# Prepare the validation dataset
val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))
val_dataset = val_dataset.batch(64)
model.fit(
train_dataset,
epochs=1,
# Only run validation using the first 10 batches of the dataset
# using the `validation_steps` argument
validation_data=val_dataset,
validation_steps=10,
)
###Output
_____no_output_____
###Markdown
Note that the validation dataset will be reset after each use (so that you will alwaysbe evaluating on the same samples from epoch to epoch).The argument `validation_split` (generating a holdout set from the training data) isnot supported when training from `Dataset` objects, since this features requires theability to index the samples of the datasets, which is not possible in general withthe `Dataset` API. Other input formats supportedBesides NumPy arrays, eager tensors, and TensorFlow `Datasets`, it's possible to traina Keras model using Pandas dataframes, or from Python generators that yield batches ofdata & labels.In particular, the `keras.utils.Sequence` class offers a simple interface to buildPython data generators that are multiprocessing-aware and can be shuffled.In general, we recommend that you use:- NumPy input data if your data is small and fits in memory- `Dataset` objects if you have large datasets and you need to do distributed training- `Sequence` objects if you have large datasets and you need to do a lot of customPython-side processing that cannot be done in TensorFlow (e.g. if you rely on external librariesfor data loading or preprocessing). Using a `keras.utils.Sequence` object as input`keras.utils.Sequence` is a utility that you can subclass to obtain a Python generator withtwo important properties:- It works well with multiprocessing.- It can be shuffled (e.g. when passing `shuffle=True` in `fit()`).A `Sequence` must implement two methods:- `__getitem__`- `__len__`The method `__getitem__` should return a complete batch.If you want to modify your dataset between epochs, you may implement `on_epoch_end`.Here's a quick example:```pythonfrom skimage.io import imreadfrom skimage.transform import resizeimport numpy as np Here, `filenames` is list of path to the images and `labels` are the associated labels.class CIFAR10Sequence(Sequence): def __init__(self, filenames, labels, batch_size): self.filenames, self.labels = filenames, labels self.batch_size = batch_size def __len__(self): return int(np.ceil(len(self.filenames) / float(self.batch_size))) def __getitem__(self, idx): batch_x = self.filenames[idx * self.batch_size:(idx + 1) * self.batch_size] batch_y = self.labels[idx * self.batch_size:(idx + 1) * self.batch_size] return np.array([ resize(imread(filename), (200, 200)) for filename in batch_x]), np.array(batch_y)sequence = CIFAR10Sequence(filenames, labels, batch_size)model.fit(sequence, epochs=10)``` Using sample weighting and class weightingBesides input data and target data, it is possible to pass sample weights or classweights to a model when using fit:- When training from NumPy data: via the `sample_weight` and `class_weight` arguments.- When training from `Dataset` objects: by having the `Dataset` return a tuple`(input_batch, target_batch, sample_weight_batch)`.A "sample weights" array is an array of numbers that specify how much weight eachsample in a batch should have in computing the total loss. It is commonly used inimbalanced classification problems (the idea being to give more weight to rarely-seenclasses). 
When the weights used are ones and zeros, the array can be used as a maskfor the loss function (entirely discarding the contribution of certain samples to thetotal loss).A "class weights" dict is a more specific instance of the same concept: it maps classindices to the sample weight that should be used for samples belonging to this class.For instance, if class "0" is twice less represented than class "1" in your data, youcould use `class_weight={0: 1., 1: 0.5}`.Here's a NumPy example where we use class weights or sample weights to give moreimportance to the correct classification of class 5 (which is the digit "5" in theMNIST dataset).
###Code
import numpy as np
class_weight = {
0: 1.0,
1: 1.0,
2: 1.0,
3: 1.0,
4: 1.0,
# Set weight "2" for class "5",
# making this class 2x more important
5: 2.0,
6: 1.0,
7: 1.0,
8: 1.0,
9: 1.0,
}
print("Fit with class weight")
model = get_compiled_model()
model.fit(x_train, y_train, class_weight=class_weight, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
Here's the same example using `sample_weight` instead:
###Code
sample_weight = np.ones(shape=(len(y_train),))
sample_weight[y_train == 5] = 2.0
print("Fit with sample weight")
model = get_compiled_model()
model.fit(x_train, y_train, sample_weight=sample_weight, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
Here's a matching `Dataset` example:
###Code
sample_weight = np.ones(shape=(len(y_train),))
sample_weight[y_train == 5] = 2.0
# Create a Dataset that includes sample weights
# (3rd element in the return tuple).
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train, sample_weight))
# Shuffle and slice the dataset.
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
model = get_compiled_model()
model.fit(train_dataset, epochs=1)
###Output
_____no_output_____
###Markdown
Passing data to multi-input, multi-output modelsIn the previous examples, we were considering a model with a single input (a tensor ofshape `(764,)`) and a single output (a prediction tensor of shape `(10,)`). But whatabout models that have multiple inputs or outputs?Consider the following model, which has an image input of shape `(32, 32, 3)` (that's`(height, width, channels)`) and a timeseries input of shape `(None, 10)` (that's`(timesteps, features)`). Our model will have two outputs computed from thecombination of these inputs: a "score" (of shape `(1,)`) and a probabilitydistribution over five classes (of shape `(5,)`).
###Code
image_input = keras.Input(shape=(32, 32, 3), name="img_input")
timeseries_input = keras.Input(shape=(None, 10), name="ts_input")
x1 = layers.Conv2D(3, 3)(image_input)
x1 = layers.GlobalMaxPooling2D()(x1)
x2 = layers.Conv1D(3, 3)(timeseries_input)
x2 = layers.GlobalMaxPooling1D()(x2)
x = layers.concatenate([x1, x2])
score_output = layers.Dense(1, name="score_output")(x)
class_output = layers.Dense(5, activation="softmax", name="class_output")(x)
model = keras.Model(
inputs=[image_input, timeseries_input], outputs=[score_output, class_output]
)
###Output
_____no_output_____
###Markdown
Let's plot this model, so you can clearly see what we're doing here (note that theshapes shown in the plot are batch shapes, rather than per-sample shapes).
###Code
keras.utils.plot_model(model, "multi_input_and_output_model.png", show_shapes=True)
###Output
_____no_output_____
###Markdown
At compilation time, we can specify different losses to different outputs, by passingthe loss functions as a list:
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[keras.losses.MeanSquaredError(), keras.losses.CategoricalCrossentropy()],
)
###Output
_____no_output_____
###Markdown
If we only passed a single loss function to the model, the same loss function would beapplied to every output (which is not appropriate here).Likewise for metrics:
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[keras.losses.MeanSquaredError(), keras.losses.CategoricalCrossentropy()],
metrics=[
[
keras.metrics.MeanAbsolutePercentageError(),
keras.metrics.MeanAbsoluteError(),
],
[keras.metrics.CategoricalAccuracy()],
],
)
###Output
_____no_output_____
###Markdown
Since we gave names to our output layers, we could also specify per-output losses andmetrics via a dict:
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss={
"score_output": keras.losses.MeanSquaredError(),
"class_output": keras.losses.CategoricalCrossentropy(),
},
metrics={
"score_output": [
keras.metrics.MeanAbsolutePercentageError(),
keras.metrics.MeanAbsoluteError(),
],
"class_output": [keras.metrics.CategoricalAccuracy()],
},
)
###Output
_____no_output_____
###Markdown
We recommend the use of explicit names and dicts if you have more than 2 outputs.It's possible to give different weights to different output-specific losses (forinstance, one might wish to privilege the "score" loss in our example, by giving to 2xthe importance of the class loss), using the `loss_weights` argument:
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss={
"score_output": keras.losses.MeanSquaredError(),
"class_output": keras.losses.CategoricalCrossentropy(),
},
metrics={
"score_output": [
keras.metrics.MeanAbsolutePercentageError(),
keras.metrics.MeanAbsoluteError(),
],
"class_output": [keras.metrics.CategoricalAccuracy()],
},
loss_weights={"score_output": 2.0, "class_output": 1.0},
)
###Output
_____no_output_____
###Markdown
You could also chose not to compute a loss for certain outputs, if these outputs meantfor prediction but not for training:
###Code
# List loss version
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[None, keras.losses.CategoricalCrossentropy()],
)
# Or dict loss version
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss={"class_output": keras.losses.CategoricalCrossentropy()},
)
###Output
_____no_output_____
###Markdown
Passing data to a multi-input or multi-output model in fit works in a similar way asspecifying a loss function in compile: you can pass **lists of NumPy arrays** (with1:1 mapping to the outputs that received a loss function) or **dicts mapping outputnames to NumPy arrays**.
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[keras.losses.MeanSquaredError(), keras.losses.CategoricalCrossentropy()],
)
# Generate dummy NumPy data
img_data = np.random.random_sample(size=(100, 32, 32, 3))
ts_data = np.random.random_sample(size=(100, 20, 10))
score_targets = np.random.random_sample(size=(100, 1))
class_targets = np.random.random_sample(size=(100, 5))
# Fit on lists
model.fit([img_data, ts_data], [score_targets, class_targets], batch_size=32, epochs=1)
# Alternatively, fit on dicts
model.fit(
{"img_input": img_data, "ts_input": ts_data},
{"score_output": score_targets, "class_output": class_targets},
batch_size=32,
epochs=1,
)
###Output
_____no_output_____
###Markdown
Here's the `Dataset` use case: similarly as what we did for NumPy arrays, the `Dataset`should return a tuple of dicts.
###Code
train_dataset = tf.data.Dataset.from_tensor_slices(
(
{"img_input": img_data, "ts_input": ts_data},
{"score_output": score_targets, "class_output": class_targets},
)
)
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
model.fit(train_dataset, epochs=1)
###Output
_____no_output_____
###Markdown
Using callbacksCallbacks in Keras are objects that are called at different point during training (atthe start of an epoch, at the end of a batch, at the end of an epoch, etc.) and whichcan be used to implement behaviors such as:- Doing validation at different points during training (beyond the built-in per-epochvalidation)- Checkpointing the model at regular intervals or when it exceeds a certain accuracythreshold- Changing the learning rate of the model when training seems to be plateauing- Doing fine-tuning of the top layers when training seems to be plateauing- Sending email or instant message notifications when training ends or where a certainperformance threshold is exceeded- Etc.Callbacks can be passed as a list to your call to `fit()`:
###Code
model = get_compiled_model()
callbacks = [
keras.callbacks.EarlyStopping(
# Stop training when `val_loss` is no longer improving
monitor="val_loss",
# "no longer improving" being defined as "no better than 1e-2 less"
min_delta=1e-2,
# "no longer improving" being further defined as "for at least 2 epochs"
patience=2,
verbose=1,
)
]
model.fit(
x_train,
y_train,
epochs=20,
batch_size=64,
callbacks=callbacks,
validation_split=0.2,
)
###Output
_____no_output_____
###Markdown
Many built-in callbacks are available- `ModelCheckpoint`: Periodically save the model.- `EarlyStopping`: Stop training when training is no longer improving the validationmetrics.- `TensorBoard`: periodically write model logs that can be visualized in[TensorBoard](https://www.tensorflow.org/tensorboard) (more details in the section"Visualization").- `CSVLogger`: streams loss and metrics data to a CSV file.- etc.See the [callbacks documentation](/api/callbacks/) for the complete list. Writing your own callbackYou can create a custom callback by extending the base class`keras.callbacks.Callback`. A callback has access to its associated model through theclass property `self.model`.Make sure to read the[complete guide to writing custom callbacks](/guides/writing_your_own_callbacks/).Here's a simple example saving a list of per-batch loss values during training:
###Code
class LossHistory(keras.callbacks.Callback):
def on_train_begin(self, logs):
self.per_batch_losses = []
def on_batch_end(self, batch, logs):
self.per_batch_losses.append(logs.get("loss"))
###Output
_____no_output_____
###Markdown
Checkpointing modelsWhen you're training model on relatively large datasets, it's crucial to savecheckpoints of your model at frequent intervals.The easiest way to achieve this is with the `ModelCheckpoint` callback:
###Code
model = get_compiled_model()
callbacks = [
keras.callbacks.ModelCheckpoint(
# Path where to save the model
# The two parameters below mean that we will overwrite
# the current checkpoint if and only if
# the `val_loss` score has improved.
# The saved model name will include the current epoch.
filepath="mymodel_{epoch}",
save_best_only=True, # Only save a model if `val_loss` has improved.
monitor="val_loss",
verbose=1,
)
]
model.fit(
x_train, y_train, epochs=2, batch_size=64, callbacks=callbacks, validation_split=0.2
)
###Output
_____no_output_____
###Markdown
The `ModelCheckpoint` callback can be used to implement fault-tolerance:the ability to restart training from the last saved state of the model in case traininggets randomly interrupted. Here's a basic example:
###Code
import os
# Prepare a directory to store all the checkpoints.
checkpoint_dir = "./ckpt"
if not os.path.exists(checkpoint_dir):
os.makedirs(checkpoint_dir)
def make_or_restore_model():
# Either restore the latest model, or create a fresh one
# if there is no checkpoint available.
checkpoints = [checkpoint_dir + "/" + name for name in os.listdir(checkpoint_dir)]
if checkpoints:
latest_checkpoint = max(checkpoints, key=os.path.getctime)
print("Restoring from", latest_checkpoint)
return keras.models.load_model(latest_checkpoint)
print("Creating a new model")
return get_compiled_model()
model = make_or_restore_model()
callbacks = [
# This callback saves a SavedModel every 100 batches.
# We include the training loss in the saved model name.
keras.callbacks.ModelCheckpoint(
filepath=checkpoint_dir + "/ckpt-loss={loss:.2f}", save_freq=100
)
]
model.fit(x_train, y_train, epochs=1, callbacks=callbacks)
###Output
_____no_output_____
###Markdown
You call also write your own callback for saving and restoring models.For a complete guide on serialization and saving, see the[guide to saving and serializing Models](/guides/serialization_and_saving/). Using learning rate schedulesA common pattern when training deep learning models is to gradually reduce the learningas training progresses. This is generally known as "learning rate decay".The learning decay schedule could be static (fixed in advance, as a function of thecurrent epoch or the current batch index), or dynamic (responding to the currentbehavior of the model, in particular the validation loss). Passing a schedule to an optimizerYou can easily use a static learning rate decay schedule by passing a schedule objectas the `learning_rate` argument in your optimizer:
###Code
initial_learning_rate = 0.1
lr_schedule = keras.optimizers.schedules.ExponentialDecay(
initial_learning_rate, decay_steps=100000, decay_rate=0.96, staircase=True
)
optimizer = keras.optimizers.RMSprop(learning_rate=lr_schedule)
###Output
_____no_output_____
###Markdown
Several built-in schedules are available: `ExponentialDecay`, `PiecewiseConstantDecay`,`PolynomialDecay`, and `InverseTimeDecay`. Using callbacks to implement a dynamic learning rate scheduleA dynamic learning rate schedule (for instance, decreasing the learning rate when thevalidation loss is no longer improving) cannot be achieved with these schedule objectssince the optimizer does not have access to validation metrics.However, callbacks do have access to all metrics, including validation metrics! You canthus achieve this pattern by using a callback that modifies the current learning rateon the optimizer. In fact, this is even built-in as the `ReduceLROnPlateau` callback. Visualizing loss and metrics during trainingThe best way to keep an eye on your model during training is to use[TensorBoard](https://www.tensorflow.org/tensorboard), a browser-based applicationthat you can run locally that provides you with:- Live plots of the loss and metrics for training and evaluation- (optionally) Visualizations of the histograms of your layer activations- (optionally) 3D visualizations of the embedding spaces learned by your `Embedding`layersIf you have installed TensorFlow with pip, you should be able to launch TensorBoardfrom the command line:```tensorboard --logdir=/full_path_to_your_logs``` Using the TensorBoard callbackThe easiest way to use TensorBoard with a Keras model and the fit method is the`TensorBoard` callback.In the simplest case, just specify where you want the callback to write logs, andyou're good to go:
###Code
keras.callbacks.TensorBoard(
log_dir="/full_path_to_your_logs",
histogram_freq=0, # How often to log histogram visualizations
embeddings_freq=0, # How often to log embedding visualizations
update_freq="epoch",
) # How often to write logs (default: once per epoch)
###Output
_____no_output_____
###Markdown
Training & evaluation with the built-in methods**Author:** [fchollet](https://twitter.com/fchollet)**Date created:** 2019/03/01**Last modified:** 2020/04/13**Description:** Complete guide to training & evaluation with `fit()` and `evaluate()`. Setup
###Code
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
###Output
_____no_output_____
###Markdown
IntroductionThis guide covers training, evaluation, and prediction (inference) modelswhen using built-in APIs for training & validation (such as `model.fit()`,`model.evaluate()`, `model.predict()`).If you are interested in leveraging `fit()` while specifying yourown training step function, see the guide["customizing what happens in `fit()`"](/guides/customizing_what_happens_in_fit/).If you are interested in writing your own training & evaluation loops fromscratch, see the guide["writing a training loop from scratch"](/guides/writing_a_training_loop_from_scratch/).In general, whether you are using built-in loops or writing your own, model training &evaluation works strictly in the same way across every kind of Keras model --Sequential models, models built with the Functional API, and models written fromscratch via model subclassing.This guide doesn't cover distributed training. For distributed training, seeour [guide to multi-gpu & distributed training](/guides/distributed_training/). API overview: a first end-to-end exampleWhen passing data to the built-in training loops of a model, you should either use**NumPy arrays** (if your data is small and fits in memory) or **`tf.data Dataset`objects**. In the next few paragraphs, we'll use the MNIST dataset as NumPy arrays, inorder to demonstrate how to use optimizers, losses, and metrics.Let's consider the following model (here, we build in with the Functional API, but itcould be a Sequential model or a subclassed model as well):
###Code
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, activation="softmax", name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
###Output
_____no_output_____
###Markdown
Here's what the typical end-to-end workflow looks like, consisting of:- Training- Validation on a holdout set generated from the original training data- Evaluation on the test dataWe'll use MNIST data for this example.
###Code
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
# Preprocess the data (these are NumPy arrays)
x_train = x_train.reshape(60000, 784).astype("float32") / 255
x_test = x_test.reshape(10000, 784).astype("float32") / 255
y_train = y_train.astype("float32")
y_test = y_test.astype("float32")
# Reserve 10,000 samples for validation
x_val = x_train[-10000:]
y_val = y_train[-10000:]
x_train = x_train[:-10000]
y_train = y_train[:-10000]
###Output
_____no_output_____
###Markdown
We specify the training configuration (optimizer, loss, metrics):
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(), # Optimizer
# Loss function to minimize
loss=keras.losses.SparseCategoricalCrossentropy(),
# List of metrics to monitor
metrics=["sparse_categorical_accuracy"],
)
###Output
_____no_output_____
###Markdown
We call `fit()`, which will train the model by slicing the data into "batches" of size"batch_size", and repeatedly iterating over the entire dataset for a given number of"epochs".
###Code
print("Fit model on training data")
history = model.fit(
x_train,
y_train,
batch_size=64,
epochs=2,
# We pass some validation for
# monitoring validation loss and metrics
# at the end of each epoch
validation_data=(x_val, y_val),
)
###Output
_____no_output_____
###Markdown
The returned "history" object holds a record of the loss values and metric valuesduring training:
###Code
history.history
###Output
_____no_output_____
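###Markdown
Each entry in `history.history` maps a metric name (as reported during training) to a list with one value per epoch. Here is a minimal sketch that inspects the recorded curves:
###Code
# Inspect the training curves recorded by `fit()` above (a sketch).
for metric_name, values in history.history.items():
    # `values` holds one entry per epoch.
    print(metric_name, [round(float(v), 4) for v in values])
###Output
_____no_output_____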
###Markdown
We evaluate the model on the test data via `evaluate()`:
###Code
# Evaluate the model on the test data using `evaluate`
print("Evaluate on test data")
results = model.evaluate(x_test, y_test, batch_size=128)
print("test loss, test acc:", results)
# Generate predictions (probabilities -- the output of the last layer)
# on new data using `predict`
print("Generate predictions for 3 samples")
predictions = model.predict(x_test[:3])
print("predictions shape:", predictions.shape)
###Output
_____no_output_____
###Markdown
Now, let's review each piece of this workflow in detail.
The `compile()` method: specifying a loss, metrics, and an optimizer
To train a model with `fit()`, you need to specify a loss function, an optimizer, and optionally, some metrics to monitor.
You pass these to the model as arguments to the `compile()` method:
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(),
metrics=[keras.metrics.sparse_categorical_accuracy],
)
###Output
_____no_output_____
###Markdown
The `metrics` argument should be a list -- your model can have any number of metrics.
If your model has multiple outputs, you can specify different losses and metrics for each output, and you can modulate the contribution of each output to the total loss of the model. You will find more details about this in the section **"Passing data to multi-input, multi-output models"**.
Note that if you're satisfied with the default settings, in many cases the optimizer, loss, and metrics can be specified via string identifiers as a shortcut:
###Code
model.compile(
optimizer="rmsprop",
loss="sparse_categorical_crossentropy",
metrics=["sparse_categorical_accuracy"],
)
###Output
_____no_output_____
###Markdown
For later reuse, let's put our model definition and compile step in functions; we willcall them several times across different examples in this guide.
###Code
def get_uncompiled_model():
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, activation="softmax", name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
return model
def get_compiled_model():
model = get_uncompiled_model()
model.compile(
optimizer="rmsprop",
loss="sparse_categorical_crossentropy",
metrics=["sparse_categorical_accuracy"],
)
return model
###Output
_____no_output_____
###Markdown
Many built-in optimizers, losses, and metrics are available
In general, you won't have to create your own losses, metrics, or optimizers from scratch, because what you need is likely already part of the Keras API:
Optimizers:
- `SGD()` (with or without momentum)
- `RMSprop()`
- `Adam()`
- etc.

Losses:
- `MeanSquaredError()`
- `KLDivergence()`
- `CosineSimilarity()`
- etc.

Metrics:
- `AUC()`
- `Precision()`
- `Recall()`
- etc.

Custom losses
There are two ways to provide custom losses with Keras. The first is to create a function that accepts inputs `y_true` and `y_pred`. The following example shows a loss function that computes the mean squared error between the real data and the predictions:
###Code
def custom_mean_squared_error(y_true, y_pred):
return tf.math.reduce_mean(tf.square(y_true - y_pred))
model = get_uncompiled_model()
model.compile(optimizer=keras.optimizers.Adam(), loss=custom_mean_squared_error)
# We need to one-hot encode the labels to use MSE
y_train_one_hot = tf.one_hot(y_train, depth=10)
model.fit(x_train, y_train_one_hot, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
If you need a loss function that takes in parameters besides `y_true` and `y_pred`, you can subclass the `tf.keras.losses.Loss` class and implement the following two methods:
- `__init__(self)`: accept parameters to pass during the call of your loss function
- `call(self, y_true, y_pred)`: use the targets (`y_true`) and the model predictions (`y_pred`) to compute the model's loss

Let's say you want to use mean squared error, but with an added term that will de-incentivize prediction values far from 0.5 (we assume that the categorical targets are one-hot encoded and take values between 0 and 1). This creates an incentive for the model not to be too confident, which may help reduce overfitting (we won't know if it works until we try!).
Here's how you would do it:
###Code
class CustomMSE(keras.losses.Loss):
def __init__(self, regularization_factor=0.1, name="custom_mse"):
super().__init__(name=name)
self.regularization_factor = regularization_factor
def call(self, y_true, y_pred):
mse = tf.math.reduce_mean(tf.square(y_true - y_pred))
reg = tf.math.reduce_mean(tf.square(0.5 - y_pred))
return mse + reg * self.regularization_factor
model = get_uncompiled_model()
model.compile(optimizer=keras.optimizers.Adam(), loss=CustomMSE())
y_train_one_hot = tf.one_hot(y_train, depth=10)
model.fit(x_train, y_train_one_hot, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
Custom metrics
If you need a metric that isn't part of the API, you can easily create custom metrics by subclassing the `tf.keras.metrics.Metric` class. You will need to implement 4 methods:
- `__init__(self)`, in which you will create state variables for your metric.
- `update_state(self, y_true, y_pred, sample_weight=None)`, which uses the targets `y_true` and the model predictions `y_pred` to update the state variables.
- `result(self)`, which uses the state variables to compute the final results.
- `reset_states(self)`, which reinitializes the state of the metric.

State update and results computation are kept separate (in `update_state()` and `result()`, respectively) because in some cases, results computation might be very expensive, and would only be done periodically.
Here's a simple example showing how to implement a `CategoricalTruePositives` metric that counts how many samples were correctly classified as belonging to a given class:
###Code
class CategoricalTruePositives(keras.metrics.Metric):
def __init__(self, name="categorical_true_positives", **kwargs):
super(CategoricalTruePositives, self).__init__(name=name, **kwargs)
self.true_positives = self.add_weight(name="ctp", initializer="zeros")
def update_state(self, y_true, y_pred, sample_weight=None):
y_pred = tf.reshape(tf.argmax(y_pred, axis=1), shape=(-1, 1))
values = tf.cast(y_true, "int32") == tf.cast(y_pred, "int32")
values = tf.cast(values, "float32")
if sample_weight is not None:
sample_weight = tf.cast(sample_weight, "float32")
values = tf.multiply(values, sample_weight)
self.true_positives.assign_add(tf.reduce_sum(values))
def result(self):
return self.true_positives
def reset_states(self):
# The state of the metric will be reset at the start of each epoch.
self.true_positives.assign(0.0)
model = get_uncompiled_model()
model.compile(
optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(),
metrics=[CategoricalTruePositives()],
)
model.fit(x_train, y_train, batch_size=64, epochs=3)
###Output
_____no_output_____
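###Markdown
The metric can also be used on its own, outside of `compile()` and `fit()`, which makes the stateful API easier to see. A minimal sketch with hand-picked values:
###Code
# Standalone use of the stateful metric defined above (a sketch).
m = CategoricalTruePositives()
m.update_state(
    tf.constant([[0], [1], [2]]),  # targets
    tf.constant([[0.9, 0.05, 0.05], [0.1, 0.8, 0.1], [0.2, 0.2, 0.6]]),  # predictions
)
print("True positives so far:", float(m.result()))  # 3.0: all three are correct
###Output
_____no_output_____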
###Markdown
Handling losses and metrics that don't fit the standard signature
The overwhelming majority of losses and metrics can be computed from `y_true` and `y_pred`, where `y_pred` is an output of your model. But not all of them. For instance, a regularization loss may only require the activation of a layer (there are no targets in this case), and this activation may not be a model output.
In such cases, you can call `self.add_loss(loss_value)` from inside the call method of a custom layer. Losses added in this way get added to the "main" loss during training (the one passed to `compile()`). Here's a simple example that adds activity regularization (note that activity regularization is built-in in all Keras layers -- this layer is just for the sake of providing a concrete example):
###Code
class ActivityRegularizationLayer(layers.Layer):
def call(self, inputs):
self.add_loss(tf.reduce_sum(inputs) * 0.1)
return inputs # Pass-through layer.
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)
# Insert activity regularization as a layer
x = ActivityRegularizationLayer()(x)
x = layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(
optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
# The displayed loss will be much higher than before
# due to the regularization component.
model.fit(x_train, y_train, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
You can do the same for logging metric values, using `add_metric()`:
###Code
class MetricLoggingLayer(layers.Layer):
def call(self, inputs):
# The `aggregation` argument defines
# how to aggregate the per-batch values
# over each epoch:
# in this case we simply average them.
self.add_metric(
keras.backend.std(inputs), name="std_of_activation", aggregation="mean"
)
return inputs # Pass-through layer.
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)
# Insert std logging as a layer.
x = MetricLoggingLayer()(x)
x = layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(
optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
model.fit(x_train, y_train, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
In the [Functional API](/guides/functional_api/),you can also call `model.add_loss(loss_tensor)`,or `model.add_metric(metric_tensor, name, aggregation)`.Here's a simple example:
###Code
inputs = keras.Input(shape=(784,), name="digits")
x1 = layers.Dense(64, activation="relu", name="dense_1")(inputs)
x2 = layers.Dense(64, activation="relu", name="dense_2")(x1)
outputs = layers.Dense(10, name="predictions")(x2)
model = keras.Model(inputs=inputs, outputs=outputs)
model.add_loss(tf.reduce_sum(x1) * 0.1)
model.add_metric(keras.backend.std(x1), name="std_of_activation", aggregation="mean")
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
model.fit(x_train, y_train, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
Note that when you pass losses via `add_loss()`, it becomes possible to call`compile()` without a loss function, since the model already has a loss to minimize.Consider the following `LogisticEndpoint` layer: it takes as inputstargets & logits, and it tracks a crossentropy loss via `add_loss()`. It alsotracks classification accuracy via `add_metric()`.
###Code
class LogisticEndpoint(keras.layers.Layer):
def __init__(self, name=None):
super(LogisticEndpoint, self).__init__(name=name)
self.loss_fn = keras.losses.BinaryCrossentropy(from_logits=True)
self.accuracy_fn = keras.metrics.BinaryAccuracy()
def call(self, targets, logits, sample_weights=None):
# Compute the training-time loss value and add it
# to the layer using `self.add_loss()`.
loss = self.loss_fn(targets, logits, sample_weights)
self.add_loss(loss)
# Log accuracy as a metric and add it
# to the layer using `self.add_metric()`.
acc = self.accuracy_fn(targets, logits, sample_weights)
self.add_metric(acc, name="accuracy")
# Return the inference-time prediction tensor (for `.predict()`).
return tf.nn.softmax(logits)
###Output
_____no_output_____
###Markdown
You can use it in a model with two inputs (input data & targets), compiled without a`loss` argument, like this:
###Code
import numpy as np
inputs = keras.Input(shape=(3,), name="inputs")
targets = keras.Input(shape=(10,), name="targets")
logits = keras.layers.Dense(10)(inputs)
predictions = LogisticEndpoint(name="predictions")(logits, targets)
model = keras.Model(inputs=[inputs, targets], outputs=predictions)
model.compile(optimizer="adam") # No loss argument!
data = {
"inputs": np.random.random((3, 3)),
"targets": np.random.random((3, 10)),
}
model.fit(data)
###Output
_____no_output_____
###Markdown
For more information about training multi-input models, see the section **Passing data to multi-input, multi-output models**.
Automatically setting apart a validation holdout set
In the first end-to-end example you saw, we used the `validation_data` argument to pass a tuple of NumPy arrays `(x_val, y_val)` to the model for evaluating a validation loss and validation metrics at the end of each epoch.
Here's another option: the argument `validation_split` allows you to automatically reserve part of your training data for validation. The argument value represents the fraction of the data to be reserved for validation, so it should be set to a number higher than 0 and lower than 1. For instance, `validation_split=0.2` means "use 20% of the data for validation", and `validation_split=0.6` means "use 60% of the data for validation".
The way the validation is computed is by taking the last x% samples of the arrays received by the `fit()` call, before any shuffling.
Note that you can only use `validation_split` when training with NumPy data.
###Code
model = get_compiled_model()
model.fit(x_train, y_train, batch_size=64, validation_split=0.2, epochs=1)
###Output
_____no_output_____
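###Markdown
To make the "last x% of the samples, before any shuffling" rule concrete, here is a minimal sketch of the equivalent manual split for `validation_split=0.2`, using the 50,000 training samples set up earlier:
###Code
# Manual equivalent of `validation_split=0.2` (a sketch):
# the *last* 20% of the samples are held out, before any shuffling.
split_at = int(len(x_train) * 0.8)
x_tr, x_va = x_train[:split_at], x_train[split_at:]
y_tr, y_va = y_train[:split_at], y_train[split_at:]
print("train samples:", len(x_tr), "| validation samples:", len(x_va))
###Output
_____no_output_____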
###Markdown
Training & evaluation from tf.data DatasetsIn the past few paragraphs, you've seen how to handle losses, metrics, and optimizers,and you've seen how to use the `validation_data` and `validation_split` arguments infit, when your data is passed as NumPy arrays.Let's now take a look at the case where your data comes in the form of a`tf.data.Dataset` object.The `tf.data` API is a set of utilities in TensorFlow 2.0 for loading and preprocessingdata in a way that's fast and scalable.For a complete guide about creating `Datasets`, see the[tf.data documentation](https://www.tensorflow.org/guide/data).You can pass a `Dataset` instance directly to the methods `fit()`, `evaluate()`, and`predict()`:
###Code
model = get_compiled_model()
# First, let's create a training Dataset instance.
# For the sake of our example, we'll use the same MNIST data as before.
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
# Shuffle and slice the dataset.
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
# Now we get a test dataset.
test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test))
test_dataset = test_dataset.batch(64)
# Since the dataset already takes care of batching,
# we don't pass a `batch_size` argument.
model.fit(train_dataset, epochs=3)
# You can also evaluate or predict on a dataset.
print("Evaluate")
result = model.evaluate(test_dataset)
dict(zip(model.metrics_names, result))
###Output
_____no_output_____
###Markdown
Note that the Dataset is reset at the end of each epoch, so it can be reused for the next epoch.
If you want to run training only on a specific number of batches from this Dataset, you can pass the `steps_per_epoch` argument, which specifies how many training steps the model should run using this Dataset before moving on to the next epoch.
If you do this, the dataset is not reset at the end of each epoch; instead we just keep drawing the next batches. The dataset will eventually run out of data (unless it is an infinitely-looping dataset).
###Code
model = get_compiled_model()
# Prepare the training dataset
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
# Only use 100 batches per epoch (that's 64 * 100 samples)
model.fit(train_dataset, epochs=3, steps_per_epoch=100)
###Output
_____no_output_____
###Markdown
Using a validation datasetYou can pass a `Dataset` instance as the `validation_data` argument in `fit()`:
###Code
model = get_compiled_model()
# Prepare the training dataset
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
# Prepare the validation dataset
val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))
val_dataset = val_dataset.batch(64)
model.fit(train_dataset, epochs=1, validation_data=val_dataset)
###Output
_____no_output_____
###Markdown
At the end of each epoch, the model will iterate over the validation dataset andcompute the validation loss and validation metrics.If you want to run validation only on a specific number of batches from this dataset,you can pass the `validation_steps` argument, which specifies how many validationsteps the model should run with the validation dataset before interrupting validationand moving on to the next epoch:
###Code
model = get_compiled_model()
# Prepare the training dataset
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
# Prepare the validation dataset
val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))
val_dataset = val_dataset.batch(64)
model.fit(
train_dataset,
epochs=1,
# Only run validation using the first 10 batches of the dataset
# using the `validation_steps` argument
validation_data=val_dataset,
validation_steps=10,
)
###Output
_____no_output_____
###Markdown
Note that the validation dataset will be reset after each use (so that you will always be evaluating on the same samples from epoch to epoch).
The argument `validation_split` (generating a holdout set from the training data) is not supported when training from `Dataset` objects, since this feature requires the ability to index the samples of the datasets, which is not possible in general with the `Dataset` API.
Other input formats supported
Besides NumPy arrays, eager tensors, and TensorFlow `Datasets`, it's possible to train a Keras model using Pandas dataframes, or from Python generators that yield batches of data & labels.
In particular, the `keras.utils.Sequence` class offers a simple interface to build Python data generators that are multiprocessing-aware and can be shuffled.
In general, we recommend that you use:
- NumPy input data if your data is small and fits in memory
- `Dataset` objects if you have large datasets and you need to do distributed training
- `Sequence` objects if you have large datasets and you need to do a lot of custom Python-side processing that cannot be done in TensorFlow (e.g. if you rely on external libraries for data loading or preprocessing).

Using a `keras.utils.Sequence` object as input
`keras.utils.Sequence` is a utility that you can subclass to obtain a Python generator with two important properties:
- It works well with multiprocessing.
- It can be shuffled (e.g. when passing `shuffle=True` in `fit()`).

A `Sequence` must implement two methods:
- `__getitem__`
- `__len__`

The method `__getitem__` should return a complete batch. If you want to modify your dataset between epochs, you may implement `on_epoch_end`. Here's a quick example:
```python
from skimage.io import imread
from skimage.transform import resize
import numpy as np

# Here, `filenames` is a list of paths to the images
# and `labels` are the associated labels.
class CIFAR10Sequence(Sequence):
    def __init__(self, filenames, labels, batch_size):
        self.filenames, self.labels = filenames, labels
        self.batch_size = batch_size

    def __len__(self):
        return int(np.ceil(len(self.filenames) / float(self.batch_size)))

    def __getitem__(self, idx):
        batch_x = self.filenames[idx * self.batch_size : (idx + 1) * self.batch_size]
        batch_y = self.labels[idx * self.batch_size : (idx + 1) * self.batch_size]
        return (
            np.array([resize(imread(filename), (200, 200)) for filename in batch_x]),
            np.array(batch_y),
        )

sequence = CIFAR10Sequence(filenames, labels, batch_size)
model.fit(sequence, epochs=10)
```
Using sample weighting and class weighting
Besides input data and target data, it is possible to pass sample weights or class weights to a model when using `fit()`:
- When training from NumPy data: via the `sample_weight` and `class_weight` arguments.
- When training from `Dataset` objects: by having the `Dataset` return a tuple `(input_batch, target_batch, sample_weight_batch)`.

A "sample weights" array is an array of numbers that specify how much weight each sample in a batch should have in computing the total loss. It is commonly used in imbalanced classification problems (the idea being to give more weight to rarely-seen classes).
When the weights used are ones and zeros, the array can be used as a mask for the loss function (entirely discarding the contribution of certain samples to the total loss); see the masking sketch after the sample-weight example below.
A "class weights" dict is a more specific instance of the same concept: it maps class indices to the sample weight that should be used for samples belonging to this class. For instance, if class "0" is half as represented as class "1" in your data, you could use `class_weight={0: 1., 1: 0.5}`.
Here's a NumPy example where we use class weights or sample weights to give more importance to the correct classification of class 5 (which is the digit "5" in the MNIST dataset).
###Code
import numpy as np
class_weight = {
0: 1.0,
1: 1.0,
2: 1.0,
3: 1.0,
4: 1.0,
# Set weight "2" for class "5",
# making this class 2x more important
5: 2.0,
6: 1.0,
7: 1.0,
8: 1.0,
9: 1.0,
}
print("Fit with class weight")
model = get_compiled_model()
model.fit(x_train, y_train, class_weight=class_weight, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
Here's the same example using `sample_weight` instead:
###Code
sample_weight = np.ones(shape=(len(y_train),))
sample_weight[y_train == 5] = 2.0
print("Fit with sample weight")
model = get_compiled_model()
model.fit(x_train, y_train, sample_weight=sample_weight, batch_size=64, epochs=1)
###Output
_____no_output_____
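###Markdown
As noted above, a sample-weight array made of ones and zeros acts as a mask. Here is a minimal sketch that entirely discards the class-5 samples from the loss computation:
###Code
# Sample weights as a mask (a sketch): class-5 samples
# contribute nothing to the total loss.
mask_weight = np.ones(shape=(len(y_train),))
mask_weight[y_train == 5] = 0.0
model = get_compiled_model()
model.fit(x_train, y_train, sample_weight=mask_weight, batch_size=64, epochs=1)
###Output
_____no_output_____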
###Markdown
Here's a matching `Dataset` example:
###Code
sample_weight = np.ones(shape=(len(y_train),))
sample_weight[y_train == 5] = 2.0
# Create a Dataset that includes sample weights
# (3rd element in the return tuple).
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train, sample_weight))
# Shuffle and slice the dataset.
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
model = get_compiled_model()
model.fit(train_dataset, epochs=1)
###Output
_____no_output_____
###Markdown
Passing data to multi-input, multi-output models
In the previous examples, we were considering a model with a single input (a tensor of shape `(784,)`) and a single output (a prediction tensor of shape `(10,)`). But what about models that have multiple inputs or outputs?
Consider the following model, which has an image input of shape `(32, 32, 3)` (that's `(height, width, channels)`) and a timeseries input of shape `(None, 10)` (that's `(timesteps, features)`). Our model will have two outputs computed from the combination of these inputs: a "score" (of shape `(1,)`) and a probability distribution over five classes (of shape `(5,)`).
###Code
image_input = keras.Input(shape=(32, 32, 3), name="img_input")
timeseries_input = keras.Input(shape=(None, 10), name="ts_input")
x1 = layers.Conv2D(3, 3)(image_input)
x1 = layers.GlobalMaxPooling2D()(x1)
x2 = layers.Conv1D(3, 3)(timeseries_input)
x2 = layers.GlobalMaxPooling1D()(x2)
x = layers.concatenate([x1, x2])
score_output = layers.Dense(1, name="score_output")(x)
class_output = layers.Dense(5, activation="softmax", name="class_output")(x)
model = keras.Model(
inputs=[image_input, timeseries_input], outputs=[score_output, class_output]
)
###Output
_____no_output_____
###Markdown
Let's plot this model, so you can clearly see what we're doing here (note that theshapes shown in the plot are batch shapes, rather than per-sample shapes).
###Code
keras.utils.plot_model(model, "multi_input_and_output_model.png", show_shapes=True)
###Output
_____no_output_____
###Markdown
At compilation time, we can specify different losses to different outputs, by passingthe loss functions as a list:
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[keras.losses.MeanSquaredError(), keras.losses.CategoricalCrossentropy()],
)
###Output
_____no_output_____
###Markdown
If we only passed a single loss function to the model, the same loss function would beapplied to every output (which is not appropriate here).Likewise for metrics:
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[keras.losses.MeanSquaredError(), keras.losses.CategoricalCrossentropy()],
metrics=[
[
keras.metrics.MeanAbsolutePercentageError(),
keras.metrics.MeanAbsoluteError(),
],
[keras.metrics.CategoricalAccuracy()],
],
)
###Output
_____no_output_____
###Markdown
Since we gave names to our output layers, we could also specify per-output losses andmetrics via a dict:
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss={
"score_output": keras.losses.MeanSquaredError(),
"class_output": keras.losses.CategoricalCrossentropy(),
},
metrics={
"score_output": [
keras.metrics.MeanAbsolutePercentageError(),
keras.metrics.MeanAbsoluteError(),
],
"class_output": [keras.metrics.CategoricalAccuracy()],
},
)
###Output
_____no_output_____
###Markdown
We recommend the use of explicit names and dicts if you have more than 2 outputs.
It's possible to give different weights to different output-specific losses (for instance, one might wish to privilege the "score" loss in our example, by giving it 2x the importance of the class loss), using the `loss_weights` argument:
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss={
"score_output": keras.losses.MeanSquaredError(),
"class_output": keras.losses.CategoricalCrossentropy(),
},
metrics={
"score_output": [
keras.metrics.MeanAbsolutePercentageError(),
keras.metrics.MeanAbsoluteError(),
],
"class_output": [keras.metrics.CategoricalAccuracy()],
},
loss_weights={"score_output": 2.0, "class_output": 1.0},
)
###Output
_____no_output_____
###Markdown
You could also choose not to compute a loss for certain outputs, if these outputs are meant for prediction but not for training:
###Code
# List loss version
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[None, keras.losses.CategoricalCrossentropy()],
)
# Or dict loss version
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss={"class_output": keras.losses.CategoricalCrossentropy()},
)
###Output
_____no_output_____
###Markdown
Passing data to a multi-input or multi-output model in `fit()` works in a similar way as specifying a loss function in `compile()`: you can pass **lists of NumPy arrays** (with 1:1 mapping to the outputs that received a loss function) or **dicts mapping output names to NumPy arrays**.
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[keras.losses.MeanSquaredError(), keras.losses.CategoricalCrossentropy()],
)
# Generate dummy NumPy data
img_data = np.random.random_sample(size=(100, 32, 32, 3))
ts_data = np.random.random_sample(size=(100, 20, 10))
score_targets = np.random.random_sample(size=(100, 1))
class_targets = np.random.random_sample(size=(100, 5))
# Fit on lists
model.fit([img_data, ts_data], [score_targets, class_targets], batch_size=32, epochs=1)
# Alternatively, fit on dicts
model.fit(
{"img_input": img_data, "ts_input": ts_data},
{"score_output": score_targets, "class_output": class_targets},
batch_size=32,
epochs=1,
)
###Output
_____no_output_____
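###Markdown
`predict()` follows the same convention: with dict inputs, it returns one array per output, in the order the outputs were defined. A minimal sketch using the dummy data from above:
###Code
# Predict with dict inputs (a sketch); the result is a list of
# two arrays, in the order of the model's outputs.
score_pred, class_pred = model.predict(
    {"img_input": img_data, "ts_input": ts_data}, batch_size=32
)
print("score predictions:", score_pred.shape)  # (100, 1)
print("class predictions:", class_pred.shape)  # (100, 5)
###Output
_____no_output_____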
###Markdown
Here's the `Dataset` use case: similarly to what we did for NumPy arrays, the `Dataset` should return a tuple of dicts.
###Code
train_dataset = tf.data.Dataset.from_tensor_slices(
(
{"img_input": img_data, "ts_input": ts_data},
{"score_output": score_targets, "class_output": class_targets},
)
)
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
model.fit(train_dataset, epochs=1)
###Output
_____no_output_____
###Markdown
Using callbacks
Callbacks in Keras are objects that are called at different points during training (at the start of an epoch, at the end of a batch, at the end of an epoch, etc.) and which can be used to implement behaviors such as:
- Doing validation at different points during training (beyond the built-in per-epoch validation)
- Checkpointing the model at regular intervals or when it exceeds a certain accuracy threshold
- Changing the learning rate of the model when training seems to be plateauing
- Doing fine-tuning of the top layers when training seems to be plateauing
- Sending email or instant message notifications when training ends or when a certain performance threshold is exceeded
- Etc.

Callbacks can be passed as a list to your call to `fit()`:
###Code
model = get_compiled_model()
callbacks = [
keras.callbacks.EarlyStopping(
# Stop training when `val_loss` is no longer improving
monitor="val_loss",
# "no longer improving" being defined as "no better than 1e-2 less"
min_delta=1e-2,
# "no longer improving" being further defined as "for at least 2 epochs"
patience=2,
verbose=1,
)
]
model.fit(
x_train,
y_train,
epochs=20,
batch_size=64,
callbacks=callbacks,
validation_split=0.2,
)
###Output
_____no_output_____
###Markdown
Many built-in callbacks are available
- `ModelCheckpoint`: Periodically save the model.
- `EarlyStopping`: Stop training when training is no longer improving the validation metrics.
- `TensorBoard`: Periodically write model logs that can be visualized in [TensorBoard](https://www.tensorflow.org/tensorboard) (more details in the section "Visualization").
- `CSVLogger`: Stream loss and metrics data to a CSV file.
- etc.

See the [callbacks documentation](/api/callbacks/) for the complete list.
Writing your own callback
You can create a custom callback by extending the base class `keras.callbacks.Callback`. A callback has access to its associated model through the class property `self.model`.
Make sure to read the [complete guide to writing custom callbacks](/guides/writing_your_own_callbacks/).
Here's a simple example saving a list of per-batch loss values during training:
###Code
class LossHistory(keras.callbacks.Callback):
def on_train_begin(self, logs):
self.per_batch_losses = []
def on_batch_end(self, batch, logs):
self.per_batch_losses.append(logs.get("loss"))
###Output
_____no_output_____
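###Markdown
To use it, instantiate the callback and pass it to `fit()`; after training, the recorded values live on the instance. A minimal usage sketch:
###Code
# Usage sketch for the custom callback defined above.
model = get_compiled_model()
loss_history = LossHistory()
model.fit(x_train, y_train, batch_size=64, epochs=1, callbacks=[loss_history])
print("Recorded", len(loss_history.per_batch_losses), "per-batch loss values")
###Output
_____no_output_____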
###Markdown
Checkpointing models
When you're training a model on relatively large datasets, it's crucial to save checkpoints of your model at frequent intervals.
The easiest way to achieve this is with the `ModelCheckpoint` callback:
###Code
model = get_compiled_model()
callbacks = [
keras.callbacks.ModelCheckpoint(
# Path where to save the model
# The two parameters below mean that we will overwrite
# the current checkpoint if and only if
# the `val_loss` score has improved.
# The saved model name will include the current epoch.
filepath="mymodel_{epoch}",
save_best_only=True, # Only save a model if `val_loss` has improved.
monitor="val_loss",
verbose=1,
)
]
model.fit(
x_train, y_train, epochs=2, batch_size=64, callbacks=callbacks, validation_split=0.2
)
###Output
_____no_output_____
###Markdown
The `ModelCheckpoint` callback can be used to implement fault-tolerance:the ability to restart training from the last saved state of the model in case traininggets randomly interrupted. Here's a basic example:
###Code
import os
# Prepare a directory to store all the checkpoints.
checkpoint_dir = "./ckpt"
if not os.path.exists(checkpoint_dir):
os.makedirs(checkpoint_dir)
def make_or_restore_model():
# Either restore the latest model, or create a fresh one
# if there is no checkpoint available.
checkpoints = [checkpoint_dir + "/" + name for name in os.listdir(checkpoint_dir)]
if checkpoints:
latest_checkpoint = max(checkpoints, key=os.path.getctime)
print("Restoring from", latest_checkpoint)
return keras.models.load_model(latest_checkpoint)
print("Creating a new model")
return get_compiled_model()
model = make_or_restore_model()
callbacks = [
# This callback saves a SavedModel every 100 batches.
# We include the training loss in the saved model name.
keras.callbacks.ModelCheckpoint(
filepath=checkpoint_dir + "/ckpt-loss={loss:.2f}", save_freq=100
)
]
model.fit(x_train, y_train, epochs=1, callbacks=callbacks)
###Output
_____no_output_____
###Markdown
You can also write your own callback for saving and restoring models.
For a complete guide on serialization and saving, see the [guide to saving and serializing Models](/guides/serialization_and_saving/).
Using learning rate schedules
A common pattern when training deep learning models is to gradually reduce the learning rate as training progresses. This is generally known as "learning rate decay".
The learning rate decay schedule could be static (fixed in advance, as a function of the current epoch or the current batch index), or dynamic (responding to the current behavior of the model, in particular the validation loss).
Passing a schedule to an optimizer
You can easily use a static learning rate decay schedule by passing a schedule object as the `learning_rate` argument in your optimizer:
###Code
initial_learning_rate = 0.1
lr_schedule = keras.optimizers.schedules.ExponentialDecay(
initial_learning_rate, decay_steps=100000, decay_rate=0.96, staircase=True
)
optimizer = keras.optimizers.RMSprop(learning_rate=lr_schedule)
###Output
_____no_output_____
###Markdown
Several built-in schedules are available: `ExponentialDecay`, `PiecewiseConstantDecay`, `PolynomialDecay`, and `InverseTimeDecay`.
Using callbacks to implement a dynamic learning rate schedule
A dynamic learning rate schedule (for instance, decreasing the learning rate when the validation loss is no longer improving) cannot be achieved with these schedule objects, since the optimizer does not have access to validation metrics.
However, callbacks do have access to all metrics, including validation metrics! You can thus achieve this pattern by using a callback that modifies the current learning rate on the optimizer. In fact, this is even built-in as the `ReduceLROnPlateau` callback (see the sketch after the TensorBoard example below).
Visualizing loss and metrics during training
The best way to keep an eye on your model during training is to use [TensorBoard](https://www.tensorflow.org/tensorboard), a browser-based application that you can run locally that provides you with:
- Live plots of the loss and metrics for training and evaluation
- (optionally) Visualizations of the histograms of your layer activations
- (optionally) 3D visualizations of the embedding spaces learned by your `Embedding` layers

If you have installed TensorFlow with pip, you should be able to launch TensorBoard from the command line:
```
tensorboard --logdir=/full_path_to_your_logs
```
Using the TensorBoard callback
The easiest way to use TensorBoard with a Keras model and the fit method is the `TensorBoard` callback.
In the simplest case, just specify where you want the callback to write logs, and you're good to go:
###Code
keras.callbacks.TensorBoard(
log_dir="/full_path_to_your_logs",
histogram_freq=0, # How often to log histogram visualizations
embeddings_freq=0, # How often to log embedding visualizations
update_freq="epoch",
) # How often to write logs (default: once per epoch)
###Output
_____no_output_____
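###Markdown
As mentioned earlier, a dynamic learning rate schedule can be implemented with the built-in `ReduceLROnPlateau` callback, which lowers the learning rate when a monitored metric stops improving. A minimal sketch (the `factor` and `patience` values below are illustrative choices, not recommendations):
###Code
model = get_compiled_model()
callbacks = [
    keras.callbacks.ReduceLROnPlateau(
        monitor="val_loss",  # Watch the validation loss
        factor=0.5,  # Halve the learning rate...
        patience=2,  # ...after 2 epochs without improvement
        verbose=1,
    )
]
model.fit(
    x_train, y_train, epochs=4, batch_size=64, callbacks=callbacks, validation_split=0.2
)
###Output
_____no_output_____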
###Markdown
Training & evaluation with the built-in methods**Author:** [fchollet](https://twitter.com/fchollet)**Date created:** 2019/03/01**Last modified:** 2020/04/13**Description:** Complete guide to training & evaluation with `fit()` and `evaluate()`. Setup
###Code
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
###Output
_____no_output_____
###Markdown
Introduction
This guide covers training, evaluation, and prediction (inference) of models when using built-in APIs for training & validation (such as `Model.fit()`, `Model.evaluate()` and `Model.predict()`).
If you are interested in leveraging `fit()` while specifying your own training step function, see the [Customizing what happens in `fit()` guide](/guides/customizing_what_happens_in_fit/).
If you are interested in writing your own training & evaluation loops from scratch, see the guide ["writing a training loop from scratch"](/guides/writing_a_training_loop_from_scratch/).
In general, whether you are using built-in loops or writing your own, model training & evaluation works strictly in the same way across every kind of Keras model -- Sequential models, models built with the Functional API, and models written from scratch via model subclassing.
This guide doesn't cover distributed training, which is covered in our [guide to multi-GPU & distributed training](https://keras.io/guides/distributed_training/).
API overview: a first end-to-end example
When passing data to the built-in training loops of a model, you should either use **NumPy arrays** (if your data is small and fits in memory) or **`tf.data.Dataset` objects**. In the next few paragraphs, we'll use the MNIST dataset as NumPy arrays, in order to demonstrate how to use optimizers, losses, and metrics.
Let's consider the following model (here, we build it with the Functional API, but it could be a Sequential model or a subclassed model as well):
###Code
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, activation="softmax", name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
###Output
_____no_output_____
###Markdown
Here's what the typical end-to-end workflow looks like, consisting of:
- Training
- Validation on a holdout set generated from the original training data
- Evaluation on the test data

We'll use MNIST data for this example.
###Code
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
# Preprocess the data (these are NumPy arrays)
x_train = x_train.reshape(60000, 784).astype("float32") / 255
x_test = x_test.reshape(10000, 784).astype("float32") / 255
y_train = y_train.astype("float32")
y_test = y_test.astype("float32")
# Reserve 10,000 samples for validation
x_val = x_train[-10000:]
y_val = y_train[-10000:]
x_train = x_train[:-10000]
y_train = y_train[:-10000]
###Output
_____no_output_____
###Markdown
We specify the training configuration (optimizer, loss, metrics):
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(), # Optimizer
# Loss function to minimize
loss=keras.losses.SparseCategoricalCrossentropy(),
# List of metrics to monitor
metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
###Output
_____no_output_____
###Markdown
We call `fit()`, which will train the model by slicing the data into "batches" of size`batch_size`, and repeatedly iterating over the entire dataset for a given number of`epochs`.
###Code
print("Fit model on training data")
history = model.fit(
x_train,
y_train,
batch_size=64,
epochs=2,
# We pass some validation for
# monitoring validation loss and metrics
# at the end of each epoch
validation_data=(x_val, y_val),
)
###Output
_____no_output_____
###Markdown
The returned `history` object holds a record of the loss values and metric valuesduring training:
###Code
history.history
###Output
_____no_output_____
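###Markdown
Since each of these entries is a list with one value per epoch, you can plot training curves directly. A minimal sketch, assuming `matplotlib` is installed in your environment:
###Code
import matplotlib.pyplot as plt

# Plot the training and validation loss recorded by `fit()` above (a sketch).
plt.plot(history.history["loss"], label="training loss")
plt.plot(history.history["val_loss"], label="validation loss")
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.legend()
plt.show()
###Output
_____no_output_____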
###Markdown
We evaluate the model on the test data via `evaluate()`:
###Code
# Evaluate the model on the test data using `evaluate`
print("Evaluate on test data")
results = model.evaluate(x_test, y_test, batch_size=128)
print("test loss, test acc:", results)
# Generate predictions (probabilities -- the output of the last layer)
# on new data using `predict`
print("Generate predictions for 3 samples")
predictions = model.predict(x_test[:3])
print("predictions shape:", predictions.shape)
###Output
_____no_output_____
###Markdown
Now, let's review each piece of this workflow in detail. The `compile()` method: specifying a loss, metrics, and an optimizerTo train a model with `fit()`, you need to specify a loss function, an optimizer, andoptionally, some metrics to monitor.You pass these to the model as arguments to the `compile()` method:
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(),
metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
###Output
_____no_output_____
###Markdown
The `metrics` argument should be a list -- your model can have any number of metrics.If your model has multiple outputs, you can specify different losses and metrics foreach output, and you can modulate the contribution of each output to the total loss ofthe model. You will find more details about this in the **Passing data to multi-input,multi-output models** section.Note that if you're satisfied with the default settings, in many cases the optimizer,loss, and metrics can be specified via string identifiers as a shortcut:
###Code
model.compile(
optimizer="rmsprop",
loss="sparse_categorical_crossentropy",
metrics=["sparse_categorical_accuracy"],
)
###Output
_____no_output_____
###Markdown
For later reuse, let's put our model definition and compile step in functions; we willcall them several times across different examples in this guide.
###Code
def get_uncompiled_model():
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, activation="softmax", name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
return model
def get_compiled_model():
model = get_uncompiled_model()
model.compile(
optimizer="rmsprop",
loss="sparse_categorical_crossentropy",
metrics=["sparse_categorical_accuracy"],
)
return model
###Output
_____no_output_____
###Markdown
Many built-in optimizers, losses, and metrics are available
In general, you won't have to create your own losses, metrics, or optimizers from scratch, because what you need is likely already part of the Keras API:
Optimizers:
- `SGD()` (with or without momentum)
- `RMSprop()`
- `Adam()`
- etc.

Losses:
- `MeanSquaredError()`
- `KLDivergence()`
- `CosineSimilarity()`
- etc.

Metrics:
- `AUC()`
- `Precision()`
- `Recall()`
- etc.

Custom losses
If you need to create a custom loss, Keras provides two ways to do so. The first method involves creating a function that accepts inputs `y_true` and `y_pred`. The following example shows a loss function that computes the mean squared error between the real data and the predictions:
###Code
def custom_mean_squared_error(y_true, y_pred):
return tf.math.reduce_mean(tf.square(y_true - y_pred))
model = get_uncompiled_model()
model.compile(optimizer=keras.optimizers.Adam(), loss=custom_mean_squared_error)
# We need to one-hot encode the labels to use MSE
y_train_one_hot = tf.one_hot(y_train, depth=10)
model.fit(x_train, y_train_one_hot, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
If you need a loss function that takes in parameters besides `y_true` and `y_pred`, you can subclass the `tf.keras.losses.Loss` class and implement the following two methods:
- `__init__(self)`: accept parameters to pass during the call of your loss function
- `call(self, y_true, y_pred)`: use the targets (`y_true`) and the model predictions (`y_pred`) to compute the model's loss

Let's say you want to use mean squared error, but with an added term that will de-incentivize prediction values far from 0.5 (we assume that the categorical targets are one-hot encoded and take values between 0 and 1). This creates an incentive for the model not to be too confident, which may help reduce overfitting (we won't know if it works until we try!).
Here's how you would do it:
###Code
class CustomMSE(keras.losses.Loss):
def __init__(self, regularization_factor=0.1, name="custom_mse"):
super().__init__(name=name)
self.regularization_factor = regularization_factor
def call(self, y_true, y_pred):
mse = tf.math.reduce_mean(tf.square(y_true - y_pred))
reg = tf.math.reduce_mean(tf.square(0.5 - y_pred))
return mse + reg * self.regularization_factor
model = get_uncompiled_model()
model.compile(optimizer=keras.optimizers.Adam(), loss=CustomMSE())
y_train_one_hot = tf.one_hot(y_train, depth=10)
model.fit(x_train, y_train_one_hot, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
Custom metrics
If you need a metric that isn't part of the API, you can easily create custom metrics by subclassing the `tf.keras.metrics.Metric` class. You will need to implement 4 methods:
- `__init__(self)`, in which you will create state variables for your metric.
- `update_state(self, y_true, y_pred, sample_weight=None)`, which uses the targets `y_true` and the model predictions `y_pred` to update the state variables.
- `result(self)`, which uses the state variables to compute the final results.
- `reset_states(self)`, which reinitializes the state of the metric.

State update and results computation are kept separate (in `update_state()` and `result()`, respectively) because in some cases, the results computation might be very expensive and would only be done periodically.
Here's a simple example showing how to implement a `CategoricalTruePositives` metric that counts how many samples were correctly classified as belonging to a given class:
###Code
class CategoricalTruePositives(keras.metrics.Metric):
def __init__(self, name="categorical_true_positives", **kwargs):
super(CategoricalTruePositives, self).__init__(name=name, **kwargs)
self.true_positives = self.add_weight(name="ctp", initializer="zeros")
def update_state(self, y_true, y_pred, sample_weight=None):
y_pred = tf.reshape(tf.argmax(y_pred, axis=1), shape=(-1, 1))
values = tf.cast(y_true, "int32") == tf.cast(y_pred, "int32")
values = tf.cast(values, "float32")
if sample_weight is not None:
sample_weight = tf.cast(sample_weight, "float32")
values = tf.multiply(values, sample_weight)
self.true_positives.assign_add(tf.reduce_sum(values))
def result(self):
return self.true_positives
def reset_states(self):
# The state of the metric will be reset at the start of each epoch.
self.true_positives.assign(0.0)
model = get_uncompiled_model()
model.compile(
optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(),
metrics=[CategoricalTruePositives()],
)
model.fit(x_train, y_train, batch_size=64, epochs=3)
###Output
_____no_output_____
###Markdown
Handling losses and metrics that don't fit the standard signatureThe overwhelming majority of losses and metrics can be computed from `y_true` and`y_pred`, where `y_pred` is an output of your model -- but not all of them. Forinstance, a regularization loss may only require the activation of a layer (there areno targets in this case), and this activation may not be a model output.In such cases, you can call `self.add_loss(loss_value)` from inside the call method ofa custom layer. Losses added in this way get added to the "main" loss during training(the one passed to `compile()`). Here's a simple example that adds activityregularization (note that activity regularization is built-in in all Keras layers --this layer is just for the sake of providing a concrete example):
###Code
class ActivityRegularizationLayer(layers.Layer):
def call(self, inputs):
self.add_loss(tf.reduce_sum(inputs) * 0.1)
return inputs # Pass-through layer.
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)
# Insert activity regularization as a layer
x = ActivityRegularizationLayer()(x)
x = layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(
optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
# The displayed loss will be much higher than before
# due to the regularization component.
model.fit(x_train, y_train, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
You can do the same for logging metric values, using `add_metric()`:
###Code
class MetricLoggingLayer(layers.Layer):
def call(self, inputs):
# The `aggregation` argument defines
# how to aggregate the per-batch values
# over each epoch:
# in this case we simply average them.
self.add_metric(
keras.backend.std(inputs), name="std_of_activation", aggregation="mean"
)
return inputs # Pass-through layer.
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)
# Insert std logging as a layer.
x = MetricLoggingLayer()(x)
x = layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(
optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
model.fit(x_train, y_train, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
In the [Functional API](/guides/functional_api/),you can also call `model.add_loss(loss_tensor)`,or `model.add_metric(metric_tensor, name, aggregation)`.Here's a simple example:
###Code
inputs = keras.Input(shape=(784,), name="digits")
x1 = layers.Dense(64, activation="relu", name="dense_1")(inputs)
x2 = layers.Dense(64, activation="relu", name="dense_2")(x1)
outputs = layers.Dense(10, name="predictions")(x2)
model = keras.Model(inputs=inputs, outputs=outputs)
model.add_loss(tf.reduce_sum(x1) * 0.1)
model.add_metric(keras.backend.std(x1), name="std_of_activation", aggregation="mean")
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
model.fit(x_train, y_train, batch_size=64, epochs=1)
###Output
_____no_output_____
###Markdown
Note that when you pass losses via `add_loss()`, it becomes possible to call`compile()` without a loss function, since the model already has a loss to minimize.Consider the following `LogisticEndpoint` layer: it takes as inputstargets & logits, and it tracks a crossentropy loss via `add_loss()`. It alsotracks classification accuracy via `add_metric()`.
###Code
class LogisticEndpoint(keras.layers.Layer):
def __init__(self, name=None):
super(LogisticEndpoint, self).__init__(name=name)
self.loss_fn = keras.losses.BinaryCrossentropy(from_logits=True)
self.accuracy_fn = keras.metrics.BinaryAccuracy()
def call(self, targets, logits, sample_weights=None):
# Compute the training-time loss value and add it
# to the layer using `self.add_loss()`.
loss = self.loss_fn(targets, logits, sample_weights)
self.add_loss(loss)
# Log accuracy as a metric and add it
# to the layer using `self.add_metric()`.
acc = self.accuracy_fn(targets, logits, sample_weights)
self.add_metric(acc, name="accuracy")
# Return the inference-time prediction tensor (for `.predict()`).
return tf.nn.softmax(logits)
###Output
_____no_output_____
###Markdown
You can use it in a model with two inputs (input data & targets), compiled without a`loss` argument, like this:
###Code
import numpy as np
inputs = keras.Input(shape=(3,), name="inputs")
targets = keras.Input(shape=(10,), name="targets")
logits = keras.layers.Dense(10)(inputs)
predictions = LogisticEndpoint(name="predictions")(logits, targets)
model = keras.Model(inputs=[inputs, targets], outputs=predictions)
model.compile(optimizer="adam") # No loss argument!
data = {
"inputs": np.random.random((3, 3)),
"targets": np.random.random((3, 10)),
}
model.fit(data)
###Output
_____no_output_____
###Markdown
For more information about training multi-input models, see the section **Passing datato multi-input, multi-output models**. Automatically setting apart a validation holdout setIn the first end-to-end example you saw, we used the `validation_data` argument to passa tuple of NumPy arrays `(x_val, y_val)` to the model for evaluating a validation lossand validation metrics at the end of each epoch.Here's another option: the argument `validation_split` allows you to automaticallyreserve part of your training data for validation. The argument value represents thefraction of the data to be reserved for validation, so it should be set to a numberhigher than 0 and lower than 1. For instance, `validation_split=0.2` means "use 20% ofthe data for validation", and `validation_split=0.6` means "use 60% of the data forvalidation".The way the validation is computed is by taking the last x% samples of the arraysreceived by the `fit()` call, before any shuffling.Note that you can only use `validation_split` when training with NumPy data.
###Code
model = get_compiled_model()
model.fit(x_train, y_train, batch_size=64, validation_split=0.2, epochs=1)
###Output
_____no_output_____
###Markdown
Training & evaluation from tf.data DatasetsIn the past few paragraphs, you've seen how to handle losses, metrics, and optimizers,and you've seen how to use the `validation_data` and `validation_split` arguments in`fit()`, when your data is passed as NumPy arrays.Let's now take a look at the case where your data comes in the form of a`tf.data.Dataset` object.The `tf.data` API is a set of utilities in TensorFlow 2.0 for loading and preprocessingdata in a way that's fast and scalable.For a complete guide about creating `Datasets`, see the[tf.data documentation](https://www.tensorflow.org/guide/data).You can pass a `Dataset` instance directly to the methods `fit()`, `evaluate()`, and`predict()`:
###Code
model = get_compiled_model()
# First, let's create a training Dataset instance.
# For the sake of our example, we'll use the same MNIST data as before.
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
# Shuffle and slice the dataset.
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
# Now we get a test dataset.
test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test))
test_dataset = test_dataset.batch(64)
# Since the dataset already takes care of batching,
# we don't pass a `batch_size` argument.
model.fit(train_dataset, epochs=3)
# You can also evaluate or predict on a dataset.
print("Evaluate")
result = model.evaluate(test_dataset)
dict(zip(model.metrics_names, result))
###Output
_____no_output_____
###Markdown
Note that the Dataset is reset at the end of each epoch, so it can be reused for the next epoch.
If you want to run training only on a specific number of batches from this Dataset, you can pass the `steps_per_epoch` argument, which specifies how many training steps the model should run using this Dataset before moving on to the next epoch.
If you do this, the dataset is not reset at the end of each epoch; instead we just keep drawing the next batches. The dataset will eventually run out of data (unless it is an infinitely-looping dataset).
###Code
model = get_compiled_model()
# Prepare the training dataset
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
# Only use 100 batches per epoch (that's 64 * 100 samples)
model.fit(train_dataset, epochs=3, steps_per_epoch=100)
###Output
_____no_output_____
###Markdown
Using a validation datasetYou can pass a `Dataset` instance as the `validation_data` argument in `fit()`:
###Code
model = get_compiled_model()
# Prepare the training dataset
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
# Prepare the validation dataset
val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))
val_dataset = val_dataset.batch(64)
model.fit(train_dataset, epochs=1, validation_data=val_dataset)
###Output
_____no_output_____
###Markdown
At the end of each epoch, the model will iterate over the validation dataset andcompute the validation loss and validation metrics.If you want to run validation only on a specific number of batches from this dataset,you can pass the `validation_steps` argument, which specifies how many validationsteps the model should run with the validation dataset before interrupting validationand moving on to the next epoch:
###Code
model = get_compiled_model()
# Prepare the training dataset
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
# Prepare the validation dataset
val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))
val_dataset = val_dataset.batch(64)
model.fit(
train_dataset,
epochs=1,
# Only run validation using the first 10 batches of the dataset
# using the `validation_steps` argument
validation_data=val_dataset,
validation_steps=10,
)
###Output
_____no_output_____
###Markdown
Note that the validation dataset will be reset after each use (so that you will always be evaluating on the same samples from epoch to epoch).
The argument `validation_split` (generating a holdout set from the training data) is not supported when training from `Dataset` objects, since this feature requires the ability to index the samples of the datasets, which is not possible in general with the `Dataset` API.
Other input formats supported
Besides NumPy arrays, eager tensors, and TensorFlow `Datasets`, it's possible to train a Keras model using Pandas dataframes, or from Python generators that yield batches of data & labels.
In particular, the `keras.utils.Sequence` class offers a simple interface to build Python data generators that are multiprocessing-aware and can be shuffled.
In general, we recommend that you use:
- NumPy input data if your data is small and fits in memory
- `Dataset` objects if you have large datasets and you need to do distributed training
- `Sequence` objects if you have large datasets and you need to do a lot of custom Python-side processing that cannot be done in TensorFlow (e.g. if you rely on external libraries for data loading or preprocessing).

Using a `keras.utils.Sequence` object as input
`keras.utils.Sequence` is a utility that you can subclass to obtain a Python generator with two important properties:
- It works well with multiprocessing.
- It can be shuffled (e.g. when passing `shuffle=True` in `fit()`).

A `Sequence` must implement two methods:
- `__getitem__`
- `__len__`

The method `__getitem__` should return a complete batch. If you want to modify your dataset between epochs, you may implement `on_epoch_end`. Here's a quick example:
```python
from skimage.io import imread
from skimage.transform import resize
import numpy as np

# Here, `filenames` is a list of paths to the images
# and `labels` are the associated labels.
class CIFAR10Sequence(Sequence):
    def __init__(self, filenames, labels, batch_size):
        self.filenames, self.labels = filenames, labels
        self.batch_size = batch_size

    def __len__(self):
        return int(np.ceil(len(self.filenames) / float(self.batch_size)))

    def __getitem__(self, idx):
        batch_x = self.filenames[idx * self.batch_size : (idx + 1) * self.batch_size]
        batch_y = self.labels[idx * self.batch_size : (idx + 1) * self.batch_size]
        return (
            np.array([resize(imread(filename), (200, 200)) for filename in batch_x]),
            np.array(batch_y),
        )

sequence = CIFAR10Sequence(filenames, labels, batch_size)
model.fit(sequence, epochs=10)
```
Using sample weighting and class weighting
With the default settings the weight of a sample is decided by its frequency in the dataset. There are two methods to weight the data, independent of sample frequency:
* Class weights
* Sample weights

Class weights
This is set by passing a dictionary to the `class_weight` argument to `Model.fit()`. This dictionary maps class indices to the weight that should be used for samples belonging to this class.
This can be used to balance classes without resampling, or to train a model that gives more importance to a particular class.
For instance, if class "0" is half as represented as class "1" in your data, you could use `Model.fit(..., class_weight={0: 1., 1: 0.5})`.
Here's a NumPy example where we use class weights or sample weights to give more importance to the correct classification of class 5 (which is the digit "5" in the MNIST dataset).
###Code
import numpy as np
class_weight = {
0: 1.0,
1: 1.0,
2: 1.0,
3: 1.0,
4: 1.0,
# Set weight "2" for class "5",
# making this class 2x more important
5: 2.0,
6: 1.0,
7: 1.0,
8: 1.0,
9: 1.0,
}
print("Fit with class weight")
model = get_compiled_model()
model.fit(x_train, y_train, class_weight=class_weight, batch_size=64, epochs=1)
###Output
_____no_output_____
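As a follow-up sketch (not from the original guide), class weights could also be derived from the label frequencies directly instead of being hand-written; this assumes `y_train` is the integer MNIST label vector used throughout this guide:

```python
import numpy as np

# Assumes integer class labels 0..9 in `y_train` (as in this guide's MNIST setup).
counts = np.bincount(y_train)  # number of samples per class
n_classes = len(counts)
class_weight = {
    # Inverse-frequency weighting: rare classes get weights above 1,
    # frequent classes below 1.
    i: len(y_train) / (n_classes * count)
    for i, count in enumerate(counts)
}
```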
###Markdown
Sample weights

For fine-grained control, or if you are not building a classifier, you can use "sample weights".

- When training from NumPy data: pass the `sample_weight` argument to `Model.fit()`.
- When training from `tf.data` or any other sort of iterator: yield `(input_batch, label_batch, sample_weight_batch)` tuples.

A "sample weights" array is an array of numbers that specify how much weight each sample in a batch should have in computing the total loss. It is commonly used in imbalanced classification problems (the idea being to give more weight to rarely-seen classes).

When the weights used are ones and zeros, the array can be used as a *mask* for the loss function (entirely discarding the contribution of certain samples to the total loss).
###Code
sample_weight = np.ones(shape=(len(y_train),))
sample_weight[y_train == 5] = 2.0
print("Fit with sample weight")
model = get_compiled_model()
model.fit(x_train, y_train, sample_weight=sample_weight, batch_size=64, epochs=1)
###Output
_____no_output_____
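Before the `Dataset` variant, a minimal sketch of the masking idea mentioned above (assuming `y_train` as elsewhere in this guide): zero weights discard every class-5 sample from the loss entirely.

```python
import numpy as np

# 1.0 everywhere, 0.0 for class "5": a per-sample mask over the loss.
mask_weight = np.ones(shape=(len(y_train),))
mask_weight[y_train == 5] = 0.0
# model.fit(x_train, y_train, sample_weight=mask_weight, batch_size=64, epochs=1)
```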
###Markdown
Here's a matching `Dataset` example:
###Code
sample_weight = np.ones(shape=(len(y_train),))
sample_weight[y_train == 5] = 2.0
# Create a Dataset that includes sample weights
# (3rd element in the return tuple).
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train, sample_weight))
# Shuffle and slice the dataset.
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
model = get_compiled_model()
model.fit(train_dataset, epochs=1)
###Output
_____no_output_____
###Markdown
Passing data to multi-input, multi-output models

In the previous examples, we were considering a model with a single input (a tensor of shape `(784,)`) and a single output (a prediction tensor of shape `(10,)`). But what about models that have multiple inputs or outputs?

Consider the following model, which has an image input of shape `(32, 32, 3)` (that's `(height, width, channels)`) and a time series input of shape `(None, 10)` (that's `(timesteps, features)`). Our model will have two outputs computed from the combination of these inputs: a "score" (of shape `(1,)`) and a probability distribution over five classes (of shape `(5,)`).
###Code
image_input = keras.Input(shape=(32, 32, 3), name="img_input")
timeseries_input = keras.Input(shape=(None, 10), name="ts_input")
x1 = layers.Conv2D(3, 3)(image_input)
x1 = layers.GlobalMaxPooling2D()(x1)
x2 = layers.Conv1D(3, 3)(timeseries_input)
x2 = layers.GlobalMaxPooling1D()(x2)
x = layers.concatenate([x1, x2])
score_output = layers.Dense(1, name="score_output")(x)
class_output = layers.Dense(5, name="class_output")(x)
model = keras.Model(
inputs=[image_input, timeseries_input], outputs=[score_output, class_output]
)
###Output
_____no_output_____
###Markdown
Let's plot this model, so you can clearly see what we're doing here (note that the shapes shown in the plot are batch shapes, rather than per-sample shapes).
###Code
keras.utils.plot_model(model, "multi_input_and_output_model.png", show_shapes=True)
###Output
_____no_output_____
###Markdown
At compilation time, we can specify different losses for different outputs, by passing the loss functions as a list:
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[keras.losses.MeanSquaredError(), keras.losses.CategoricalCrossentropy()],
)
###Output
_____no_output_____
###Markdown
If we only passed a single loss function to the model, the same loss function would be applied to every output (which is not appropriate here). Likewise for metrics:
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[keras.losses.MeanSquaredError(), keras.losses.CategoricalCrossentropy()],
metrics=[
[
keras.metrics.MeanAbsolutePercentageError(),
keras.metrics.MeanAbsoluteError(),
],
[keras.metrics.CategoricalAccuracy()],
],
)
###Output
_____no_output_____
###Markdown
Since we gave names to our output layers, we could also specify per-output losses and metrics via a dict:
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss={
"score_output": keras.losses.MeanSquaredError(),
"class_output": keras.losses.CategoricalCrossentropy(),
},
metrics={
"score_output": [
keras.metrics.MeanAbsolutePercentageError(),
keras.metrics.MeanAbsoluteError(),
],
"class_output": [keras.metrics.CategoricalAccuracy()],
},
)
###Output
_____no_output_____
###Markdown
We recommend the use of explicit names and dicts if you have more than 2 outputs.

It's possible to give different weights to different output-specific losses (for instance, one might wish to privilege the "score" loss in our example, by giving it 2x the importance of the class loss), using the `loss_weights` argument:
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss={
"score_output": keras.losses.MeanSquaredError(),
"class_output": keras.losses.CategoricalCrossentropy(),
},
metrics={
"score_output": [
keras.metrics.MeanAbsolutePercentageError(),
keras.metrics.MeanAbsoluteError(),
],
"class_output": [keras.metrics.CategoricalAccuracy()],
},
loss_weights={"score_output": 2.0, "class_output": 1.0},
)
###Output
_____no_output_____
###Markdown
You could also choose not to compute a loss for certain outputs, if these outputs are meant for prediction but not for training:
###Code
# List loss version
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[None, keras.losses.CategoricalCrossentropy()],
)
# Or dict loss version
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss={"class_output": keras.losses.CategoricalCrossentropy()},
)
###Output
_____no_output_____
###Markdown
Passing data to a multi-input or multi-output model in `fit()` works in a similar way as specifying a loss function in compile: you can pass **lists of NumPy arrays** (with 1:1 mapping to the outputs that received a loss function) or **dicts mapping output names to NumPy arrays**.
###Code
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[keras.losses.MeanSquaredError(), keras.losses.CategoricalCrossentropy()],
)
# Generate dummy NumPy data
img_data = np.random.random_sample(size=(100, 32, 32, 3))
ts_data = np.random.random_sample(size=(100, 20, 10))
score_targets = np.random.random_sample(size=(100, 1))
class_targets = np.random.random_sample(size=(100, 5))
# Fit on lists
model.fit([img_data, ts_data], [score_targets, class_targets], batch_size=32, epochs=1)
# Alternatively, fit on dicts
model.fit(
{"img_input": img_data, "ts_input": ts_data},
{"score_output": score_targets, "class_output": class_targets},
batch_size=32,
epochs=1,
)
###Output
_____no_output_____
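As a quick follow-up sketch (assuming the model and dummy arrays above), `predict()` on a multi-output model returns one array per output, in the order the outputs were defined:

```python
# Expected shapes: (100, 1) for the score head, (100, 5) for the class head.
score_pred, class_pred = model.predict({"img_input": img_data, "ts_input": ts_data})
print(score_pred.shape, class_pred.shape)
```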
###Markdown
Here's the `Dataset` use case: similarly to what we did for NumPy arrays, the `Dataset` should return a tuple of dicts.
###Code
train_dataset = tf.data.Dataset.from_tensor_slices(
(
{"img_input": img_data, "ts_input": ts_data},
{"score_output": score_targets, "class_output": class_targets},
)
)
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
model.fit(train_dataset, epochs=1)
###Output
_____no_output_____
###Markdown
Using callbacks

Callbacks in Keras are objects that are called at different points during training (at the start of an epoch, at the end of a batch, at the end of an epoch, etc.). They can be used to implement certain behaviors, such as:

- Doing validation at different points during training (beyond the built-in per-epoch validation)
- Checkpointing the model at regular intervals or when it exceeds a certain accuracy threshold
- Changing the learning rate of the model when training seems to be plateauing
- Doing fine-tuning of the top layers when training seems to be plateauing
- Sending email or instant message notifications when training ends or when a certain performance threshold is exceeded
- Etc.

Callbacks can be passed as a list to your call to `fit()`:
###Code
model = get_compiled_model()
callbacks = [
keras.callbacks.EarlyStopping(
# Stop training when `val_loss` is no longer improving
monitor="val_loss",
# "no longer improving" being defined as "no better than 1e-2 less"
min_delta=1e-2,
# "no longer improving" being further defined as "for at least 2 epochs"
patience=2,
verbose=1,
)
]
model.fit(
x_train,
y_train,
epochs=20,
batch_size=64,
callbacks=callbacks,
validation_split=0.2,
)
###Output
_____no_output_____
###Markdown
Many built-in callbacks are available

There are many built-in callbacks already available in Keras, such as:

- `ModelCheckpoint`: Periodically save the model.
- `EarlyStopping`: Stop training when training is no longer improving the validation metrics.
- `TensorBoard`: Periodically write model logs that can be visualized in [TensorBoard](https://www.tensorflow.org/tensorboard) (more details in the section "Visualization").
- `CSVLogger`: Streams loss and metrics data to a CSV file.
- etc.

See the [callbacks documentation](/api/callbacks/) for the complete list.

Writing your own callback

You can create a custom callback by extending the base class `keras.callbacks.Callback`. A callback has access to its associated model through the class property `self.model`.

Make sure to read the [complete guide to writing custom callbacks](/guides/writing_your_own_callbacks/).

Here's a simple example saving a list of per-batch loss values during training:
###Code
class LossHistory(keras.callbacks.Callback):
    def on_train_begin(self, logs):
        # Reset the list of recorded losses at the start of training
        self.per_batch_losses = []
    def on_batch_end(self, batch, logs):
        # Record the loss value reported at the end of each batch
        self.per_batch_losses.append(logs.get("loss"))
###Output
_____no_output_____
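A quick usage sketch (assuming `x_train`, `y_train`, and `get_compiled_model()` from earlier in this guide):

```python
history_cb = LossHistory()
model = get_compiled_model()
model.fit(x_train, y_train, batch_size=64, epochs=1, callbacks=[history_cb])
print(history_cb.per_batch_losses[:5])  # first few per-batch loss values
```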
###Markdown
Checkpointing models

When you're training a model on relatively large datasets, it's crucial to save checkpoints of your model at frequent intervals.

The easiest way to achieve this is with the `ModelCheckpoint` callback:
###Code
model = get_compiled_model()
callbacks = [
keras.callbacks.ModelCheckpoint(
# Path where to save the model
# The two parameters below mean that we will overwrite
# the current checkpoint if and only if
# the `val_loss` score has improved.
# The saved model name will include the current epoch.
filepath="mymodel_{epoch}",
save_best_only=True, # Only save a model if `val_loss` has improved.
monitor="val_loss",
verbose=1,
)
]
model.fit(
x_train, y_train, epochs=2, batch_size=64, callbacks=callbacks, validation_split=0.2
)
###Output
_____no_output_____
###Markdown
The `ModelCheckpoint` callback can be used to implement fault-tolerance: the ability to restart training from the last saved state of the model in case training gets randomly interrupted. Here's a basic example:
###Code
import os
# Prepare a directory to store all the checkpoints.
checkpoint_dir = "./ckpt"
if not os.path.exists(checkpoint_dir):
os.makedirs(checkpoint_dir)
def make_or_restore_model():
# Either restore the latest model, or create a fresh one
# if there is no checkpoint available.
checkpoints = [checkpoint_dir + "/" + name for name in os.listdir(checkpoint_dir)]
if checkpoints:
latest_checkpoint = max(checkpoints, key=os.path.getctime)
print("Restoring from", latest_checkpoint)
return keras.models.load_model(latest_checkpoint)
print("Creating a new model")
return get_compiled_model()
model = make_or_restore_model()
callbacks = [
# This callback saves a SavedModel every 100 batches.
# We include the training loss in the saved model name.
keras.callbacks.ModelCheckpoint(
filepath=checkpoint_dir + "/ckpt-loss={loss:.2f}", save_freq=100
)
]
model.fit(x_train, y_train, epochs=1, callbacks=callbacks)
###Output
_____no_output_____
###Markdown
You can also write your own callback for saving and restoring models.

For a complete guide on serialization and saving, see the [guide to saving and serializing Models](/guides/serialization_and_saving/).

Using learning rate schedules

A common pattern when training deep learning models is to gradually reduce the learning rate as training progresses. This is generally known as "learning rate decay".

The learning rate decay schedule could be static (fixed in advance, as a function of the current epoch or the current batch index), or dynamic (responding to the current behavior of the model, in particular the validation loss).

Passing a schedule to an optimizer

You can easily use a static learning rate decay schedule by passing a schedule object as the `learning_rate` argument in your optimizer:
###Code
initial_learning_rate = 0.1
lr_schedule = keras.optimizers.schedules.ExponentialDecay(
initial_learning_rate, decay_steps=100000, decay_rate=0.96, staircase=True
)
optimizer = keras.optimizers.RMSprop(learning_rate=lr_schedule)
###Output
_____no_output_____
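The schedule above is static. As a hedged sketch of the dynamic alternative discussed next, the built-in `ReduceLROnPlateau` callback lowers the learning rate when a monitored metric stops improving (the parameter values below are purely illustrative):

```python
reduce_lr = keras.callbacks.ReduceLROnPlateau(
    monitor="val_loss",  # watch the validation loss
    factor=0.5,          # halve the learning rate on plateau
    patience=3,          # after 3 epochs without improvement
    min_lr=1e-6,         # lower bound on the learning rate
)
# model.fit(..., callbacks=[reduce_lr], validation_split=0.2)
```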
###Markdown
Several built-in schedules are available: `ExponentialDecay`, `PiecewiseConstantDecay`, `PolynomialDecay`, and `InverseTimeDecay`.

Using callbacks to implement a dynamic learning rate schedule

A dynamic learning rate schedule (for instance, decreasing the learning rate when the validation loss is no longer improving) cannot be achieved with these schedule objects, since the optimizer does not have access to validation metrics.

However, callbacks do have access to all metrics, including validation metrics! You can thus achieve this pattern by using a callback that modifies the current learning rate on the optimizer. In fact, this is even built-in as the `ReduceLROnPlateau` callback, sketched above.

Visualizing loss and metrics during training

The best way to keep an eye on your model during training is to use [TensorBoard](https://www.tensorflow.org/tensorboard) -- a browser-based application that you can run locally that provides you with:

- Live plots of the loss and metrics for training and evaluation
- (optionally) Visualizations of the histograms of your layer activations
- (optionally) 3D visualizations of the embedding spaces learned by your `Embedding` layers

If you have installed TensorFlow with pip, you should be able to launch TensorBoard from the command line:

```
tensorboard --logdir=/full_path_to_your_logs
```

Using the TensorBoard callback

The easiest way to use TensorBoard with a Keras model and the `fit()` method is the `TensorBoard` callback.

In the simplest case, just specify where you want the callback to write logs, and you're good to go:
###Code
keras.callbacks.TensorBoard(
    log_dir="/full_path_to_your_logs",
    histogram_freq=0,  # How often to log histogram visualizations
    embeddings_freq=0,  # How often to log embedding visualizations
    update_freq="epoch",  # How often to write logs (default: once per epoch)
)
###Output
_____no_output_____ |
Aula06_02_11_2021/BusquedaBinaria.ipynb | ###Markdown
Binary Search

Binary search is an algorithm for finding an element in a sorted list. It repeatedly halves the portion of the list that could contain the element, until the possible locations are narrowed down to just one.
###Code
def binarySearch(lista, target):
    # Given a list of values and a search target, the function
    # returns the position of the target if it is found;
    # otherwise it returns -1
a = lista
N = len(a)
L = 0
R = N - 1
while L <= R:
        mid = L + (R - L) // 2  # integer midpoint, no float arithmetic needed
if a[mid] == target:
return mid
if a[mid] < target:
L = mid + 1
else:
R = mid - 1
return -1
array = [2, 3, 5, 6, 8, 10, 12]
x = 10
print(binarySearch(array, x))
###Output
5
|
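As an alternative sketch using the standard library, `bisect_left` returns the insertion point of the target, which equals its index when the target is present:

```python
from bisect import bisect_left

def binary_search_bisect(lista, target):
    i = bisect_left(lista, target)
    return i if i < len(lista) and lista[i] == target else -1

print(binary_search_bisect([2, 3, 5, 6, 8, 10, 12], 10))  # prints 5
```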
docs/tutorials/render_colored_points.ipynb | ###Markdown
###Markdown
Render a colored point cloud

This tutorial shows how to:

- set up a renderer
- render the point cloud
- vary the rendering settings such as compositing and camera position

Import modules

If `torch`, `torchvision` and `pytorch3d` are not installed, run the following cell:
###Code
!pip install torch torchvision
!pip install 'git+https://github.com/facebookresearch/pytorch3d.git'
import os
os.chdir('../..')
import torch
import torch.nn.functional as F
import matplotlib.pyplot as plt
from skimage.io import imread
# Util function for loading point clouds
import numpy as np
# Data structures and functions for rendering
from pytorch3d.structures import Pointclouds
from pytorch3d.renderer import (
look_at_view_transform,
OpenGLOrthographicCameras,
PointsRasterizationSettings,
PointsRenderer,
PointsRasterizer,
AlphaCompositor,
NormWeightedCompositor
)
###Output
_____no_output_____
###Markdown
Load a point cloud and corresponding colors

Load and create a **Point Cloud** object. **Pointclouds** is a unique datastructure provided in PyTorch3D for working with batches of point clouds of different sizes.

If running this notebook using **Google Colab**, run the following cell to fetch the pointcloud data and save it at the path `data/PittsburghBridge`. If running locally, the data is already available at the correct path.
###Code
!mkdir -p data/PittsburghBridge
!wget -P data/PittsburghBridge https://dl.fbaipublicfiles.com/pytorch3d/data/PittsburghBridge/pointcloud.npz
# Setup
device = torch.device("cuda:0")
torch.cuda.set_device(device)
# Set paths
DATA_DIR = "./data"
obj_filename = os.path.join(DATA_DIR, "PittsburghBridge/pointcloud.npz")
# Load point cloud
pointcloud = np.load(obj_filename)
verts = torch.Tensor(pointcloud['verts']).to(device)
rgb = torch.Tensor(pointcloud['rgb']).to(device)
point_cloud = Pointclouds(points=[verts], features=[rgb])
###Output
_____no_output_____
###Markdown
Create a renderer

A renderer in PyTorch3D is composed of a **rasterizer** and a **shader** which each have a number of subcomponents such as a **camera** (orthographic/perspective). Here we initialize some of these components and use default values for the rest.

In this example we will first create a **renderer** which uses an **orthographic camera**, and applies **alpha compositing**. Then we learn how to vary different components using the modular API.

[1] SynSin: End to end View Synthesis from a Single Image. Olivia Wiles, Georgia Gkioxari, Richard Szeliski, Justin Johnson. CVPR 2020.
###Code
# Initialize an OpenGL orthographic camera.
R, T = look_at_view_transform(20, 10, 0)
cameras = OpenGLOrthographicCameras(device=device, R=R, T=T, znear=0.01)
# Define the settings for rasterization and shading. Here we set the output image to be of size
# 512x512. As we are rendering images for visualization purposes only we use a small point
# radius and points_per_pixel=10. Refer to raster_points.py for explanations of these parameters.
raster_settings = PointsRasterizationSettings(
image_size=512,
radius = 0.003,
points_per_pixel = 10
)
# Create a points renderer by compositing points using an alpha compositor (nearer points
# are weighted more heavily). See [1] for an explanation.
renderer = PointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
compositor=AlphaCompositor(composite_params=None)
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.grid("off")
plt.axis("off")
###Output
_____no_output_____
###Markdown
In this example we will now create a **renderer** which uses an **orthographic camera**, and applies **weighted compositing**.
###Code
# Initialize an OpenGL orthographic camera.
R, T = look_at_view_transform(20, 10, 0)
cameras = OpenGLOrthographicCameras(device=device, R=R, T=T, znear=0.01)
# Define the settings for rasterization and shading. Here we set the output image to be of size
# 512x512. As we are rendering images for visualization purposes only we use a small point
# radius and points_per_pixel=10. Refer to rasterize_points.py for explanations of these parameters.
raster_settings = PointsRasterizationSettings(
image_size=512,
radius = 0.003,
points_per_pixel = 10
)
# Create a points renderer by compositing points using a weighted compositor (3D points are
# weighted according to their distance to a pixel and accumulated using a weighted sum)
renderer = PointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
compositor=NormWeightedCompositor(composite_params=None)
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.grid("off")
plt.axis("off")
###Output
_____no_output_____
###Markdown
Render a colored point cloud

This tutorial shows how to:

- set up a renderer
- render the point cloud
- vary the rendering settings such as compositing and camera position

Import modules

If `torch`, `torchvision` and `pytorch3d` are not installed, run the following cell:
###Code
!pip install torch torchvision
import os
import sys
import torch
if torch.__version__=='1.6.0+cu101' and sys.platform.startswith('linux'):
!pip install pytorch3d
else:
need_pytorch3d=False
try:
import pytorch3d
except ModuleNotFoundError:
need_pytorch3d=True
if need_pytorch3d:
!curl -LO https://github.com/NVIDIA/cub/archive/1.10.0.tar.gz
!tar xzf 1.10.0.tar.gz
os.environ["CUB_HOME"] = os.getcwd() + "/cub-1.10.0"
!pip install 'git+https://github.com/facebookresearch/pytorch3d.git@stable'
import os
import torch
import torch.nn.functional as F
import matplotlib.pyplot as plt
from skimage.io import imread
# Util function for loading point clouds|
import numpy as np
# Data structures and functions for rendering
from pytorch3d.structures import Pointclouds
from pytorch3d.vis.plotly_vis import AxisArgs, plot_batch_individually, plot_scene
from pytorch3d.renderer import (
look_at_view_transform,
FoVOrthographicCameras,
PointsRasterizationSettings,
PointsRenderer,
PulsarPointsRenderer,
PointsRasterizer,
AlphaCompositor,
NormWeightedCompositor
)
###Output
_____no_output_____
###Markdown
Load a point cloud and corresponding colors

Load and create a **Point Cloud** object. **Pointclouds** is a unique datastructure provided in PyTorch3D for working with batches of point clouds of different sizes.

If running this notebook using **Google Colab**, run the following cell to fetch the pointcloud data and save it at the path `data/PittsburghBridge`. If running locally, the data is already available at the correct path.
###Code
!mkdir -p data/PittsburghBridge
!wget -P data/PittsburghBridge https://dl.fbaipublicfiles.com/pytorch3d/data/PittsburghBridge/pointcloud.npz
# Setup
if torch.cuda.is_available():
device = torch.device("cuda:0")
torch.cuda.set_device(device)
else:
device = torch.device("cpu")
# Set paths
DATA_DIR = "./data"
obj_filename = os.path.join(DATA_DIR, "PittsburghBridge/pointcloud.npz")
# Load point cloud
pointcloud = np.load(obj_filename)
verts = torch.Tensor(pointcloud['verts']).to(device)
rgb = torch.Tensor(pointcloud['rgb']).to(device)
point_cloud = Pointclouds(points=[verts], features=[rgb])
###Output
_____no_output_____
###Markdown
Create a renderer

A renderer in PyTorch3D is composed of a **rasterizer** and a **shader** which each have a number of subcomponents such as a **camera** (orthographic/perspective). Here we initialize some of these components and use default values for the rest.

In this example we will first create a **renderer** which uses an **orthographic camera**, and applies **alpha compositing**. Then we learn how to vary different components using the modular API.

[1] SynSin: End to end View Synthesis from a Single Image. Olivia Wiles, Georgia Gkioxari, Richard Szeliski, Justin Johnson. CVPR 2020.
###Code
# Initialize a camera.
R, T = look_at_view_transform(20, 10, 0)
cameras = FoVOrthographicCameras(device=device, R=R, T=T, znear=0.01)
# Define the settings for rasterization and shading. Here we set the output image to be of size
# 512x512. As we are rendering images for visualization purposes only we use a small point
# radius and points_per_pixel=10. Refer to raster_points.py for explanations of these parameters.
raster_settings = PointsRasterizationSettings(
image_size=512,
radius = 0.003,
points_per_pixel = 10
)
# Create a points renderer by compositing points using an alpha compositor (nearer points
# are weighted more heavily). See [1] for an explanation.
rasterizer = PointsRasterizer(cameras=cameras, raster_settings=raster_settings)
renderer = PointsRenderer(
rasterizer=rasterizer,
compositor=AlphaCompositor()
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.grid("off")
plt.axis("off");
###Output
_____no_output_____
###Markdown
We will now modify the **renderer** to use **alpha compositing** with a set background color.
###Code
renderer = PointsRenderer(
rasterizer=rasterizer,
# Pass in background_color to the alpha compositor, setting the background color
# to the 3 item tuple, representing rgb on a scale of 0 -> 1, in this case blue
compositor=AlphaCompositor(background_color=(0, 0, 1))
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.grid("off")
plt.axis("off");
###Output
_____no_output_____
###Markdown
In this example we will now create a **renderer** which uses an **orthographic camera**, and applies **weighted compositing**.
###Code
# Initialize a camera.
R, T = look_at_view_transform(20, 10, 0)
cameras = FoVOrthographicCameras(device=device, R=R, T=T, znear=0.01)
# Define the settings for rasterization and shading. Here we set the output image to be of size
# 512x512. As we are rendering images for visualization purposes only we use a small point
# radius and points_per_pixel=10. Refer to rasterize_points.py for explanations of these parameters.
raster_settings = PointsRasterizationSettings(
image_size=512,
radius = 0.003,
points_per_pixel = 10
)
# Create a points renderer by compositing points using a weighted compositor (3D points are
# weighted according to their distance to a pixel and accumulated using a weighted sum)
renderer = PointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
compositor=NormWeightedCompositor()
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.grid("off")
plt.axis("off");
###Output
_____no_output_____
###Markdown
We will now modify the **renderer** to use **weighted compositing** with a set background color.
###Code
renderer = PointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
# Pass in background_color to the norm weighted compositor, setting the background color
# to the 3 item tuple, representing rgb on a scale of 0 -> 1, in this case red
compositor=NormWeightedCompositor(background_color=(1,0,0))
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.grid("off")
plt.axis("off");
###Output
_____no_output_____
###Markdown
Render a colored point cloud

This tutorial shows how to:

- set up a renderer
- render the point cloud
- vary the rendering settings such as compositing and camera position

Import modules

Ensure `torch` and `torchvision` are installed. If `pytorch3d` is not installed, install it using the following cell:
###Code
import os
import sys
import torch
need_pytorch3d=False
try:
import pytorch3d
except ModuleNotFoundError:
need_pytorch3d=True
if need_pytorch3d:
if torch.__version__.startswith("1.10.") and sys.platform.startswith("linux"):
# We try to install PyTorch3D via a released wheel.
pyt_version_str=torch.__version__.split("+")[0].replace(".", "")
version_str="".join([
f"py3{sys.version_info.minor}_cu",
torch.version.cuda.replace(".",""),
f"_pyt{pyt_version_str}"
])
!pip install pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html
else:
# We try to install PyTorch3D from source.
!curl -LO https://github.com/NVIDIA/cub/archive/1.10.0.tar.gz
!tar xzf 1.10.0.tar.gz
os.environ["CUB_HOME"] = os.getcwd() + "/cub-1.10.0"
!pip install 'git+https://github.com/facebookresearch/pytorch3d.git@stable'
import os
import torch
import torch.nn.functional as F
import matplotlib.pyplot as plt
# Util function for loading point clouds|
import numpy as np
# Data structures and functions for rendering
from pytorch3d.structures import Pointclouds
from pytorch3d.vis.plotly_vis import AxisArgs, plot_batch_individually, plot_scene
from pytorch3d.renderer import (
look_at_view_transform,
FoVOrthographicCameras,
PointsRasterizationSettings,
PointsRenderer,
PulsarPointsRenderer,
PointsRasterizer,
AlphaCompositor,
NormWeightedCompositor
)
###Output
_____no_output_____
###Markdown
Load a point cloud and corresponding colors

Load and create a **Point Cloud** object. **Pointclouds** is a unique datastructure provided in PyTorch3D for working with batches of point clouds of different sizes.

If running this notebook using **Google Colab**, run the following cell to fetch the pointcloud data and save it at the path `data/PittsburghBridge`. If running locally, the data is already available at the correct path.
###Code
!mkdir -p data/PittsburghBridge
!wget -P data/PittsburghBridge https://dl.fbaipublicfiles.com/pytorch3d/data/PittsburghBridge/pointcloud.npz
# Setup
if torch.cuda.is_available():
device = torch.device("cuda:0")
torch.cuda.set_device(device)
else:
device = torch.device("cpu")
# Set paths
DATA_DIR = "./data"
obj_filename = os.path.join(DATA_DIR, "PittsburghBridge/pointcloud.npz")
# Load point cloud
pointcloud = np.load(obj_filename)
verts = torch.Tensor(pointcloud['verts']).to(device)
rgb = torch.Tensor(pointcloud['rgb']).to(device)
point_cloud = Pointclouds(points=[verts], features=[rgb])
###Output
_____no_output_____
###Markdown
Create a renderer

A renderer in PyTorch3D is composed of a **rasterizer** and a **shader** which each have a number of subcomponents such as a **camera** (orthographic/perspective). Here we initialize some of these components and use default values for the rest.

In this example we will first create a **renderer** which uses an **orthographic camera**, and applies **alpha compositing**. Then we learn how to vary different components using the modular API.

[1] SynSin: End to end View Synthesis from a Single Image. Olivia Wiles, Georgia Gkioxari, Richard Szeliski, Justin Johnson. CVPR 2020.
###Code
# Initialize a camera.
R, T = look_at_view_transform(20, 10, 0)
cameras = FoVOrthographicCameras(device=device, R=R, T=T, znear=0.01)
# Define the settings for rasterization and shading. Here we set the output image to be of size
# 512x512. As we are rendering images for visualization purposes only we use a small point
# radius and points_per_pixel=10. Refer to raster_points.py for explanations of these parameters.
raster_settings = PointsRasterizationSettings(
image_size=512,
radius = 0.003,
points_per_pixel = 10
)
# Create a points renderer by compositing points using an alpha compositor (nearer points
# are weighted more heavily). See [1] for an explanation.
rasterizer = PointsRasterizer(cameras=cameras, raster_settings=raster_settings)
renderer = PointsRenderer(
rasterizer=rasterizer,
compositor=AlphaCompositor()
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
###Markdown
We will now modify the **renderer** to use **alpha compositing** with a set background color.
###Code
renderer = PointsRenderer(
rasterizer=rasterizer,
# Pass in background_color to the alpha compositor, setting the background color
# to the 3 item tuple, representing rgb on a scale of 0 -> 1, in this case blue
compositor=AlphaCompositor(background_color=(0, 0, 1))
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
###Markdown
In this example we will now create a **renderer** which uses an **orthographic camera**, and applies **weighted compositing**.
###Code
# Initialize a camera.
R, T = look_at_view_transform(20, 10, 0)
cameras = FoVOrthographicCameras(device=device, R=R, T=T, znear=0.01)
# Define the settings for rasterization and shading. Here we set the output image to be of size
# 512x512. As we are rendering images for visualization purposes only we use a small point
# radius and points_per_pixel=10. Refer to rasterize_points.py for explanations of these parameters.
raster_settings = PointsRasterizationSettings(
image_size=512,
radius = 0.003,
points_per_pixel = 10
)
# Create a points renderer by compositing points using a weighted compositor (3D points are
# weighted according to their distance to a pixel and accumulated using a weighted sum)
renderer = PointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
compositor=NormWeightedCompositor()
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
###Markdown
We will now modify the **renderer** to use **weighted compositing** with a set background color.
###Code
renderer = PointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
# Pass in background_color to the norm weighted compositor, setting the background color
# to the 3 item tuple, representing rgb on a scale of 0 -> 1, in this case red
compositor=NormWeightedCompositor(background_color=(1,0,0))
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
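Before switching backends, here is a minimal sketch (not part of the original tutorial) of varying the camera position: keyword arguments such as `cameras` passed to the renderer call override the ones the rasterizer was constructed with. The viewpoint values are illustrative.

```python
# Re-render the same point cloud from a different viewpoint.
R, T = look_at_view_transform(dist=25, elev=30, azim=90)
new_cameras = FoVOrthographicCameras(device=device, R=R, T=T, znear=0.01)
images = renderer(point_cloud, cameras=new_cameras)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
```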
###Markdown
Using the pulsar backend

Switching to the pulsar backend is easy! The pulsar backend has a compositor built-in, so the `compositor` argument is not required when creating it (a warning will be displayed if you provide it nevertheless). It pre-allocates memory on the rendering device, that's why it needs the `n_channels` at construction time.

All parameters for the renderer forward function are batch-wise (in this example, `gamma`), except the background color, and you have to provide as many values as you have examples in your batch. The background color is optional and by default set to all zeros. You can find a detailed explanation of how gamma influences the rendering function in the paper [Fast Differentiable Raycasting for Neural Rendering using Sphere-based Representations](https://arxiv.org/pdf/2004.07484.pdf).

You can also use the `native` backend for the pulsar backend which already provides access to point opacity. The native backend can be imported from `pytorch3d.renderer.points.pulsar`; you can find examples for this in the folder `docs/examples`.
###Code
renderer = PulsarPointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
n_channels=4
).to(device)
images = renderer(point_cloud, gamma=(1e-4,),
bg_col=torch.tensor([0.0, 1.0, 0.0, 1.0], dtype=torch.float32, device=device))
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
###Markdown
View pointclouds in Plotly figures

Here we use the PyTorch3D function `plot_scene` to render the pointcloud in a Plotly figure. `plot_scene` returns a plotly figure with trace and subplots defined by the input.
###Code
plot_scene({
"Pointcloud": {
"person": point_cloud
}
})
###Output
_____no_output_____
###Markdown
We will now render a batch of pointclouds. The first pointcloud is the same as above, and the second is all-black and offset by 2 in all dimensions so we can see them on the same plot.
###Code
point_cloud_batch = Pointclouds(points=[verts, verts + 2], features=[rgb, torch.zeros_like(rgb)])
# render both in the same plot in different traces
fig = plot_scene({
"Pointcloud": {
"person": point_cloud_batch[0],
"person2": point_cloud_batch[1]
}
})
fig.show()
# render both in the same plot in one trace
fig = plot_scene({
"Pointcloud": {
"2 people": point_cloud_batch
}
})
fig.show()
###Output
_____no_output_____
###Markdown
For batches, we can also use `plot_batch_individually` to avoid constructing the scene dictionary ourselves.
###Code
# render both in 1 row in different subplots
fig2 = plot_batch_individually(point_cloud_batch, ncols=2)
fig2.show()
# modify the plotly figure height and width
fig2.update_layout(height=500, width=500)
fig2.show()
###Output
_____no_output_____
###Markdown
We can also modify the axis arguments and axis backgrounds for either function, and title our plots in `plot_batch_individually`.
###Code
fig3 = plot_batch_individually(
point_cloud_batch,
xaxis={"backgroundcolor":"rgb(200, 200, 230)"},
yaxis={"backgroundcolor":"rgb(230, 200, 200)"},
zaxis={"backgroundcolor":"rgb(200, 230, 200)"},
subplot_titles=["Pointcloud1", "Pointcloud2"], # this should have a title for each subplot, titles can be ""
axis_args=AxisArgs(showgrid=True))
fig3.show()
###Output
_____no_output_____
###Markdown
Render a colored point cloud

This tutorial shows how to:

- set up a renderer
- render the point cloud
- vary the rendering settings such as compositing and camera position

Import modules

If `torch`, `torchvision` and `pytorch3d` are not installed, run the following cell:
###Code
!pip install torch torchvision
import sys
import torch
if torch.__version__=='1.6.0+cu101' and sys.platform.startswith('linux'):
!pip install pytorch3d
else:
!pip install 'git+https://github.com/facebookresearch/pytorch3d.git@stable'
import os
import torch
import torch.nn.functional as F
import matplotlib.pyplot as plt
from skimage.io import imread
# Util function for loading point clouds|
import numpy as np
# Data structures and functions for rendering
from pytorch3d.structures import Pointclouds
from pytorch3d.vis import AxisArgs, plot_batch_individually, plot_scene
from pytorch3d.renderer import (
look_at_view_transform,
FoVOrthographicCameras,
PointsRasterizationSettings,
PointsRenderer,
PointsRasterizer,
AlphaCompositor,
NormWeightedCompositor
)
###Output
_____no_output_____
###Markdown
Load a point cloud and corresponding colors

Load and create a **Point Cloud** object. **Pointclouds** is a unique datastructure provided in PyTorch3D for working with batches of point clouds of different sizes.

If running this notebook using **Google Colab**, run the following cell to fetch the pointcloud data and save it at the path `data/PittsburghBridge`. If running locally, the data is already available at the correct path.
###Code
!mkdir -p data/PittsburghBridge
!wget -P data/PittsburghBridge https://dl.fbaipublicfiles.com/pytorch3d/data/PittsburghBridge/pointcloud.npz
# Setup
if torch.cuda.is_available():
device = torch.device("cuda:0")
torch.cuda.set_device(device)
else:
device = torch.device("cpu")
# Set paths
DATA_DIR = "./data"
obj_filename = os.path.join(DATA_DIR, "PittsburghBridge/pointcloud.npz")
# Load point cloud
pointcloud = np.load(obj_filename)
verts = torch.Tensor(pointcloud['verts']).to(device)
rgb = torch.Tensor(pointcloud['rgb']).to(device)
point_cloud = Pointclouds(points=[verts], features=[rgb])
###Output
_____no_output_____
###Markdown
Create a renderer

A renderer in PyTorch3D is composed of a **rasterizer** and a **shader** which each have a number of subcomponents such as a **camera** (orthographic/perspective). Here we initialize some of these components and use default values for the rest.

In this example we will first create a **renderer** which uses an **orthographic camera**, and applies **alpha compositing**. Then we learn how to vary different components using the modular API.

[1] SynSin: End to end View Synthesis from a Single Image. Olivia Wiles, Georgia Gkioxari, Richard Szeliski, Justin Johnson. CVPR 2020.
###Code
# Initialize a camera.
R, T = look_at_view_transform(20, 10, 0)
cameras = FoVOrthographicCameras(device=device, R=R, T=T, znear=0.01)
# Define the settings for rasterization and shading. Here we set the output image to be of size
# 512x512. As we are rendering images for visualization purposes only we use a small point
# radius and points_per_pixel=10. Refer to raster_points.py for explanations of these parameters.
raster_settings = PointsRasterizationSettings(
image_size=512,
radius = 0.003,
points_per_pixel = 10
)
# Create a points renderer by compositing points using an alpha compositor (nearer points
# are weighted more heavily). See [1] for an explanation.
renderer = PointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
compositor=AlphaCompositor()
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.grid("off")
plt.axis("off");
###Output
_____no_output_____
###Markdown
We will now modify the **renderer** to use **alpha compositing** with a set background color.
###Code
renderer = PointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
# Pass in background_color to the alpha compositor, setting the background color
# to the 3 item tuple, representing rgb on a scale of 0 -> 1, in this case blue
compositor=AlphaCompositor(background_color=(0, 0, 1))
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.grid("off")
plt.axis("off");
###Output
_____no_output_____
###Markdown
In this example we will now create a **renderer** which uses an **orthographic camera**, and applies **weighted compositing**.
###Code
# Initialize a camera.
R, T = look_at_view_transform(20, 10, 0)
cameras = FoVOrthographicCameras(device=device, R=R, T=T, znear=0.01)
# Define the settings for rasterization and shading. Here we set the output image to be of size
# 512x512. As we are rendering images for visualization purposes only we use a small point
# radius and points_per_pixel=10. Refer to rasterize_points.py for explanations of these parameters.
raster_settings = PointsRasterizationSettings(
image_size=512,
radius = 0.003,
points_per_pixel = 10
)
# Create a points renderer by compositing points using a weighted compositor (3D points are
# weighted according to their distance to a pixel and accumulated using a weighted sum)
renderer = PointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
compositor=NormWeightedCompositor()
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.grid("off")
plt.axis("off");
###Output
_____no_output_____
###Markdown
We will now modify the **renderer** to use **weighted compositing** with a set background color.
###Code
renderer = PointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
# Pass in background_color to the norm weighted compositor, setting the background color
# to the 3 item tuple, representing rgb on a scale of 0 -> 1, in this case red
compositor=NormWeightedCompositor(background_color=(1,0,0))
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.grid("off")
plt.axis("off");
###Output
_____no_output_____
###Markdown
View pointclouds in Plotly figures

Here we use the PyTorch3D function `plot_scene` to render the pointcloud in a Plotly figure. `plot_scene` returns a plotly figure with trace and subplots defined by the input.
###Code
plot_scene({
"Pointcloud": {
"person": point_cloud
}
})
###Output
_____no_output_____
###Markdown
We will now render a batch of pointclouds. The first pointcloud is the same as above, and the second is all-black and offset by 2 in all dimensions so we can see them on the same plot.
###Code
point_cloud_batch = Pointclouds(points=[verts, verts + 2], features=[rgb, torch.zeros_like(rgb)])
# render both in the same plot in different traces
fig = plot_scene({
"Pointcloud": {
"person": point_cloud_batch[0],
"person2": point_cloud_batch[1]
}
})
fig.show()
# render both in the same plot in one trace
fig = plot_scene({
"Pointcloud": {
"2 people": point_cloud_batch
}
})
fig.show()
###Output
_____no_output_____
###Markdown
For batches, we can also use `plot_batch_individually` to avoid constructing the scene dictionary ourselves.
###Code
# render both in 1 row in different subplots
fig2 = plot_batch_individually(point_cloud_batch, ncols=2)
fig2.show()
# modify the plotly figure height and width
fig2.update_layout(height=500, width=500)
fig2.show()
###Output
_____no_output_____
###Markdown
We can also modify the axis arguments and axis backgrounds for either function, and title our plots in `plot_batch_individually`.
###Code
fig3 = plot_batch_individually(
point_cloud_batch,
xaxis={"backgroundcolor":"rgb(200, 200, 230)"},
yaxis={"backgroundcolor":"rgb(230, 200, 200)"},
zaxis={"backgroundcolor":"rgb(200, 230, 200)"},
subplot_titles=["Pointcloud1", "Pointcloud2"], # this should have a title for each subplot, titles can be ""
axis_args=AxisArgs(showgrid=True))
fig3.show()
###Output
_____no_output_____
###Markdown
Render a colored point cloud

This tutorial shows how to:

- set up a renderer
- render the point cloud
- vary the rendering settings such as compositing and camera position

Import modules

Ensure `torch` and `torchvision` are installed. If `pytorch3d` is not installed, install it using the following cell:
###Code
import os
import sys
import torch
need_pytorch3d=False
try:
import pytorch3d
except ModuleNotFoundError:
need_pytorch3d=True
if need_pytorch3d:
if torch.__version__.startswith("1.7") and sys.platform.startswith("linux"):
# We try to install PyTorch3D via a released wheel.
version_str="".join([
f"py3{sys.version_info.minor}_cu",
torch.version.cuda.replace(".",""),
f"_pyt{torch.__version__[0:5:2]}"
])
!pip install pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html
else:
# We try to install PyTorch3D from source.
!curl -LO https://github.com/NVIDIA/cub/archive/1.10.0.tar.gz
!tar xzf 1.10.0.tar.gz
os.environ["CUB_HOME"] = os.getcwd() + "/cub-1.10.0"
!pip install 'git+https://github.com/facebookresearch/pytorch3d.git@stable'
import os
import torch
import torch.nn.functional as F
import matplotlib.pyplot as plt
from skimage.io import imread
# Util function for loading point clouds|
import numpy as np
# Data structures and functions for rendering
from pytorch3d.structures import Pointclouds
from pytorch3d.vis.plotly_vis import AxisArgs, plot_batch_individually, plot_scene
from pytorch3d.renderer import (
look_at_view_transform,
FoVOrthographicCameras,
PointsRasterizationSettings,
PointsRenderer,
PulsarPointsRenderer,
PointsRasterizer,
AlphaCompositor,
NormWeightedCompositor
)
###Output
_____no_output_____
###Markdown
Load a point cloud and corresponding colors

Load and create a **Point Cloud** object. **Pointclouds** is a unique datastructure provided in PyTorch3D for working with batches of point clouds of different sizes.

If running this notebook using **Google Colab**, run the following cell to fetch the pointcloud data and save it at the path `data/PittsburghBridge`. If running locally, the data is already available at the correct path.
###Code
!mkdir -p data/PittsburghBridge
!wget -P data/PittsburghBridge https://dl.fbaipublicfiles.com/pytorch3d/data/PittsburghBridge/pointcloud.npz
# Setup
if torch.cuda.is_available():
device = torch.device("cuda:0")
torch.cuda.set_device(device)
else:
device = torch.device("cpu")
# Set paths
DATA_DIR = "./data"
obj_filename = os.path.join(DATA_DIR, "PittsburghBridge/pointcloud.npz")
# Load point cloud
pointcloud = np.load(obj_filename)
verts = torch.Tensor(pointcloud['verts']).to(device)
rgb = torch.Tensor(pointcloud['rgb']).to(device)
point_cloud = Pointclouds(points=[verts], features=[rgb])
###Output
_____no_output_____
###Markdown
Create a renderer

A renderer in PyTorch3D is composed of a **rasterizer** and a **shader** which each have a number of subcomponents such as a **camera** (orthographic/perspective). Here we initialize some of these components and use default values for the rest.

In this example we will first create a **renderer** which uses an **orthographic camera**, and applies **alpha compositing**. Then we learn how to vary different components using the modular API.

[1] SynSin: End to end View Synthesis from a Single Image. Olivia Wiles, Georgia Gkioxari, Richard Szeliski, Justin Johnson. CVPR 2020.
###Code
# Initialize a camera.
R, T = look_at_view_transform(20, 10, 0)
cameras = FoVOrthographicCameras(device=device, R=R, T=T, znear=0.01)
# Define the settings for rasterization and shading. Here we set the output image to be of size
# 512x512. As we are rendering images for visualization purposes only we will set points_per_pixel=10
# and radius=0.003. Refer to rasterize_points.py for explanations of these parameters.
raster_settings = PointsRasterizationSettings(
image_size=512,
radius = 0.003,
points_per_pixel = 10
)
# Create a points renderer by compositing points using an alpha compositor (nearer points
# are weighted more heavily). See [1] for an explanation.
rasterizer = PointsRasterizer(cameras=cameras, raster_settings=raster_settings)
renderer = PointsRenderer(
rasterizer=rasterizer,
compositor=AlphaCompositor()
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
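###Markdown
Components can also be swapped at call time rather than only at construction. A minimal sketch, assuming (as with the mesh renderer) that keyword arguments passed to `PointsRenderer` are forwarded to the rasterizer, so a `cameras` argument supplied here overrides the one the rasterizer was built with:
###Code
# Render the same point cloud from a different viewpoint without rebuilding the renderer.
R_side, T_side = look_at_view_transform(dist=20, elev=10, azim=90)
cameras_side = FoVOrthographicCameras(device=device, R=R_side, T=T_side, znear=0.01)
images = renderer(point_cloud, cameras=cameras_side)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____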
###Markdown
We will now modify the **renderer** to use **alpha compositing** with a set background color.
###Code
renderer = PointsRenderer(
rasterizer=rasterizer,
# Pass in background_color to the alpha compositor, setting the background color
# to the 3 item tuple, representing rgb on a scale of 0 -> 1, in this case blue
compositor=AlphaCompositor(background_color=(0, 0, 1))
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
###Markdown
In this example we will first create a **renderer** which uses an **orthographic camera**, and applies **weighted compositing**.
###Code
# Initialize a camera.
R, T = look_at_view_transform(20, 10, 0)
cameras = FoVOrthographicCameras(device=device, R=R, T=T, znear=0.01)
# Define the settings for rasterization and shading. Here we set the output image to be of size
# 512x512. As we are rendering images for visualization purposes only we will set points_per_pixel=10
# and radius=0.003. Refer to rasterize_points.py for explanations of these parameters.
raster_settings = PointsRasterizationSettings(
image_size=512,
radius = 0.003,
points_per_pixel = 10
)
# Create a points renderer by compositing points using a weighted compositor (3D points are
# weighted according to their distance to a pixel and accumulated using a weighted sum)
renderer = PointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
compositor=NormWeightedCompositor()
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
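###Markdown
For reference, the two compositors differ only in how the `points_per_pixel` nearest points at each pixel are blended. A sketch of the blending math, following the descriptions in `pytorch3d.renderer.compositing` (here $c_k$ is the feature and $\alpha_k$ the weight of the $k$-th nearest point, and $\epsilon$ is a small constant): $$C_{\text{alpha}} = \sum_k c_k \, \alpha_k \prod_{j<k} (1 - \alpha_j), \qquad C_{\text{norm}} = \frac{\sum_k c_k \, \alpha_k}{\max\left(\sum_k \alpha_k, \, \epsilon\right)}$$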
###Markdown
We will now modify the **renderer** to use **weighted compositing** with a set background color.
###Code
renderer = PointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
# Pass in background_color to the norm weighted compositor, setting the background color
# to the 3 item tuple, representing rgb on a scale of 0 -> 1, in this case red
compositor=NormWeightedCompositor(background_color=(1,0,0))
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
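###Markdown
The rasterization settings themselves are also worth varying. A quick sketch (the larger `radius` below is an arbitrary illustrative value): bigger points cover more pixels, which makes the compositing behavior much easier to see.
###Code
# Re-rasterize with larger points; everything else stays the same.
big_raster_settings = PointsRasterizationSettings(
    image_size=512,
    radius = 0.01,
    points_per_pixel = 10
)
renderer_big = PointsRenderer(
    rasterizer=PointsRasterizer(cameras=cameras, raster_settings=big_raster_settings),
    compositor=NormWeightedCompositor()
)
images = renderer_big(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____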
###Markdown
Using the pulsar backendSwitching to the pulsar backend is easy! The pulsar backend has a compositor built-in, so the `compositor` argument is not required when creating it (a warning will be displayed if you provide it nevertheless). It pre-allocates memory on the rendering device, which is why it needs `n_channels` at construction time.All parameters for the renderer forward function are batch-wise, except the background color; `gamma`, for example, must contain one value per example in your batch. The background color is optional and by default set to all zeros. You can find a detailed explanation of how gamma influences the rendering function in the paper [Fast Differentiable Raycasting for Neural Rendering using Sphere-based Representations](https://arxiv.org/pdf/2004.07484.pdf).You can also use the `native` backend for the pulsar backend, which already provides access to point opacity. The native backend can be imported from `pytorch3d.renderer.points.pulsar`; you can find examples for this in the folder `docs/examples`.
###Code
renderer = PulsarPointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
n_channels=4
).to(device)
images = renderer(point_cloud, gamma=(1e-4,),
bg_col=torch.tensor([0.0, 1.0, 0.0, 1.0], dtype=torch.float32, device=device))
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
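###Markdown
Since `gamma` controls how softly pulsar blends the spheres (see the paper linked above; roughly, larger values give softer, more transparent blending), a small sweep makes its effect visible. A sketch reusing the renderer from the previous cell (the gamma values are illustrative choices within the documented range):
###Code
for g in (1e-1, 1e-4):
    images = renderer(point_cloud, gamma=(g,),
                      bg_col=torch.tensor([0.0, 1.0, 0.0, 1.0], dtype=torch.float32, device=device))
    plt.figure(figsize=(6, 6))
    plt.imshow(images[0, ..., :3].cpu().numpy())
    plt.title(f"gamma={g}")
    plt.axis("off");
###Output
_____no_output_____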
###Markdown
View pointclouds in Plotly figuresHere we use the PyTorch3D function `plot_scene` to render the pointcloud in a Plotly figure. `plot_scene` returns a plotly figure with traces and subplots defined by the input.
###Code
plot_scene({
"Pointcloud": {
"person": point_cloud
}
})
###Output
_____no_output_____
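###Markdown
Each top-level key of the input dict becomes its own subplot, so a single call can lay out several views side by side. A small sketch (the subplot names are arbitrary; `ncols` is part of the `plot_scene` signature):
###Code
fig = plot_scene({
    "left view": {"person": point_cloud},
    "right view": {"person": point_cloud}
}, ncols=2)
fig.show()
###Output
_____no_output_____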
###Markdown
We will now render a batch of pointclouds. The first pointcloud is the same as above, and the second is all-black and offset by 2 in all dimensions so we can see them on the same plot.
###Code
point_cloud_batch = Pointclouds(points=[verts, verts + 2], features=[rgb, torch.zeros_like(rgb)])
# render both in the same plot in different traces
fig = plot_scene({
"Pointcloud": {
"person": point_cloud_batch[0],
"person2": point_cloud_batch[1]
}
})
fig.show()
# render both in the same plot in one trace
fig = plot_scene({
"Pointcloud": {
"2 people": point_cloud_batch
}
})
fig.show()
###Output
_____no_output_____
###Markdown
For batches, we can also use `plot_batch_individually` to avoid constructing the scene dictionary ourselves.
###Code
# render both in 1 row in different subplots
fig2 = plot_batch_individually(point_cloud_batch, ncols=2)
fig2.show()
# modify the plotly figure height and width
fig2.update_layout(height=500, width=500)
fig2.show()
###Output
_____no_output_____
###Markdown
We can also modify the axis arguments and axis backgrounds for either function, and title our plots in `plot_batch_individually`.
###Code
fig3 = plot_batch_individually(
point_cloud_batch,
xaxis={"backgroundcolor":"rgb(200, 200, 230)"},
yaxis={"backgroundcolor":"rgb(230, 200, 200)"},
zaxis={"backgroundcolor":"rgb(200, 230, 200)"},
subplot_titles=["Pointcloud1", "Pointcloud2"], # this should have a title for each subplot, titles can be ""
axis_args=AxisArgs(showgrid=True))
fig3.show()
###Output
_____no_output_____
###Markdown
Render a colored point cloudThis tutorial shows how to:- set up a renderer - render the point cloud - vary the rendering settings such as compositing and camera position Import modules If `torch`, `torchvision` and `pytorch3d` are not installed, run the following cell:
###Code
!pip install torch torchvision
import sys
import torch
if torch.__version__=='1.6.0+cu101' and sys.platform.startswith('linux'):
!pip install pytorch3d
else:
!pip install 'git+https://github.com/facebookresearch/pytorch3d.git@stable'
import os
import torch
import torch.nn.functional as F
import matplotlib.pyplot as plt
from skimage.io import imread
# Util function for loading point clouds
import numpy as np
# Data structures and functions for rendering
from pytorch3d.structures import Pointclouds
from pytorch3d.visualization import AxisArgs, plot_pointclouds
from pytorch3d.renderer import (
look_at_view_transform,
FoVOrthographicCameras,
PointsRasterizationSettings,
PointsRenderer,
PointsRasterizer,
AlphaCompositor,
NormWeightedCompositor
)
###Output
_____no_output_____
###Markdown
Load a point cloud and corresponding colorsLoad and create a **Point Cloud** object. **Pointclouds** is a unique datastructure provided in PyTorch3D for working with batches of point clouds of different sizes. If running this notebook using **Google Colab**, run the following cell to fetch the pointcloud data and save it at the path `data/PittsburghBridge`:If running locally, the data is already available at the correct path.
###Code
!mkdir -p data/PittsburghBridge
!wget -P data/PittsburghBridge https://dl.fbaipublicfiles.com/pytorch3d/data/PittsburghBridge/pointcloud.npz
# Setup
if torch.cuda.is_available():
device = torch.device("cuda:0")
torch.cuda.set_device(device)
else:
device = torch.device("cpu")
# Set paths
DATA_DIR = "./data"
obj_filename = os.path.join(DATA_DIR, "PittsburghBridge/pointcloud.npz")
# Load point cloud
pointcloud = np.load(obj_filename)
verts = torch.Tensor(pointcloud['verts']).to(device)
rgb = torch.Tensor(pointcloud['rgb']).to(device)
point_cloud = Pointclouds(points=[verts], features=[rgb])
###Output
_____no_output_____
###Markdown
Create a rendererA renderer in PyTorch3D is composed of a **rasterizer** and a **shader** which each have a number of subcomponents such as a **camera** (orthographic/perspective). Here we initialize some of these components and use default values for the rest.In this example we will first create a **renderer** which uses an **orthographic camera**, and applies **alpha compositing**. Then we learn how to vary different components using the modular API. [1] SynSin: End to end View Synthesis from a Single Image. Olivia Wiles, Georgia Gkioxari, Richard Szeliski, Justin Johnson. CVPR 2020.
###Code
# Initialize a camera.
R, T = look_at_view_transform(20, 10, 0)
cameras = FoVOrthographicCameras(device=device, R=R, T=T, znear=0.01)
# Define the settings for rasterization and shading. Here we set the output image to be of size
# 512x512. As we are rendering images for visualization purposes only we will set points_per_pixel=10
# and radius=0.003. Refer to rasterize_points.py for explanations of these parameters.
raster_settings = PointsRasterizationSettings(
image_size=512,
radius = 0.003,
points_per_pixel = 10
)
# Create a points renderer by compositing points using an alpha compositor (nearer points
# are weighted more heavily). See [1] for an explanation.
renderer = PointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
compositor=AlphaCompositor()
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
###Markdown
We will now modify the **renderer** to use **alpha compositing** with a set background color.
###Code
renderer = PointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
# Pass in background_color to the alpha compositor, setting the background color
# to the 3 item tuple, representing rgb on a scale of 0 -> 1, in this case blue
compositor=AlphaCompositor(background_color=(0, 0, 1))
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
###Markdown
In this example we will first create a **renderer** which uses an **orthographic camera**, and applies **weighted compositing**.
###Code
# Initialize a camera.
R, T = look_at_view_transform(20, 10, 0)
cameras = FoVOrthographicCameras(device=device, R=R, T=T, znear=0.01)
# Define the settings for rasterization and shading. Here we set the output image to be of size
# 512x512. As we are rendering images for visualization purposes only we will set points_per_pixel=10
# and radius=0.003. Refer to rasterize_points.py for explanations of these parameters.
raster_settings = PointsRasterizationSettings(
image_size=512,
radius = 0.003,
points_per_pixel = 10
)
# Create a points renderer by compositing points using a weighted compositor (3D points are
# weighted according to their distance to a pixel and accumulated using a weighted sum)
renderer = PointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
compositor=NormWeightedCompositor()
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
###Markdown
We will now modify the **renderer** to use **weighted compositing** with a set background color.
###Code
renderer = PointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
# Pass in background_color to the norm weighted compositor, setting the background color
# to the 3 item tuple, representing rgb on a scale of 0 -> 1, in this case red
compositor=NormWeightedCompositor(background_color=(1,0,0))
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
###Markdown
View pointclouds in Plotly figuresHere we use the PyTorch3D function `plot_pointclouds` to render the Pointcloud in a Plotly figure. `plot_pointclouds` returns a plotly figure with a trace for each pointcloud.
###Code
plot_pointclouds(point_cloud)
###Output
_____no_output_____
###Markdown
We will now render a batch of pointclouds. The first pointcloud is the same as above, and the second is all-black and offset by 2 in all dimensions so we can see them on the same plot.
###Code
point_cloud_batch = Pointclouds(points=[verts, verts + 2], features=[rgb, torch.zeros_like(rgb)])
# render both in the same plot
fig = plot_pointclouds(point_cloud_batch)
fig.show()
# render both in 1 row in different subplots
fig2 = plot_pointclouds(point_cloud_batch, in_subplots=True, ncols=2)
fig2.show()
# modify the plotly figure height and width
fig2.update_layout(height=500, width=500)
fig2.show()
###Output
_____no_output_____
###Markdown
We can also modify the axis arguments and axis backgrounds, and title our plots.
###Code
fig3 = plot_pointclouds(point_cloud_batch, xaxis={"backgroundcolor":"rgb(200, 200, 230)"},
yaxis={"backgroundcolor":"rgb(230, 200, 200)"},
zaxis={"backgroundcolor":"rgb(200, 230, 200)"},
subplot_titles=["2 pointclouds"], # this should have a title for each subplot, titles can be ""
axis_args=AxisArgs(showgrid=True))
fig3.show()
###Output
_____no_output_____
###Markdown
Render a colored point cloudThis tutorial shows how to:- set up a renderer - render the point cloud - vary the rendering settings such as compositing and camera position Import modules Ensure `torch` and `torchvision` are installed. If `pytorch3d` is not installed, install it using the following cell:
###Code
import os
import sys
import torch
need_pytorch3d=False
try:
import pytorch3d
except ModuleNotFoundError:
need_pytorch3d=True
if need_pytorch3d:
if torch.__version__.startswith("1.7") and sys.platform.startswith("linux"):
# We try to install PyTorch3D via a released wheel.
version_str="".join([
f"py3{sys.version_info.minor}_cu",
torch.version.cuda.replace(".",""),
f"_pyt{torch.__version__[0:5:2]}"
])
!pip install pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html
else:
# We try to install PyTorch3D from source.
!curl -LO https://github.com/NVIDIA/cub/archive/1.10.0.tar.gz
!tar xzf 1.10.0.tar.gz
os.environ["CUB_HOME"] = os.getcwd() + "/cub-1.10.0"
!pip install 'git+https://github.com/facebookresearch/pytorch3d.git@stable'
import os
import torch
import torch.nn.functional as F
import matplotlib.pyplot as plt
from skimage.io import imread
# Util function for loading point clouds
import numpy as np
# Data structures and functions for rendering
from pytorch3d.structures import Pointclouds
from pytorch3d.vis.plotly_vis import AxisArgs, plot_batch_individually, plot_scene
from pytorch3d.renderer import (
look_at_view_transform,
FoVOrthographicCameras,
PointsRasterizationSettings,
PointsRenderer,
PulsarPointsRenderer,
PointsRasterizer,
AlphaCompositor,
NormWeightedCompositor
)
###Output
_____no_output_____
###Markdown
Load a point cloud and corresponding colorsLoad and create a **Point Cloud** object. **Pointclouds** is a unique datastructure provided in PyTorch3D for working with batches of point clouds of different sizes. If running this notebook using **Google Colab**, run the following cell to fetch the pointcloud data and save it at the path `data/PittsburghBridge`:If running locally, the data is already available at the correct path.
###Code
!mkdir -p data/PittsburghBridge
!wget -P data/PittsburghBridge https://dl.fbaipublicfiles.com/pytorch3d/data/PittsburghBridge/pointcloud.npz
# Setup
if torch.cuda.is_available():
device = torch.device("cuda:0")
torch.cuda.set_device(device)
else:
device = torch.device("cpu")
# Set paths
DATA_DIR = "./data"
obj_filename = os.path.join(DATA_DIR, "PittsburghBridge/pointcloud.npz")
# Load point cloud
pointcloud = np.load(obj_filename)
verts = torch.Tensor(pointcloud['verts']).to(device)
rgb = torch.Tensor(pointcloud['rgb']).to(device)
point_cloud = Pointclouds(points=[verts], features=[rgb])
###Output
_____no_output_____
###Markdown
Create a rendererA renderer in PyTorch3D is composed of a **rasterizer** and a **shader** which each have a number of subcomponents such as a **camera** (orthographic/perspective). Here we initialize some of these components and use default values for the rest.In this example we will first create a **renderer** which uses an **orthographic camera**, and applies **alpha compositing**. Then we learn how to vary different components using the modular API. [1] SynSin: End to end View Synthesis from a Single Image. Olivia Wiles, Georgia Gkioxari, Richard Szeliski, Justin Johnson. CVPR 2020.
###Code
# Initialize a camera.
R, T = look_at_view_transform(20, 10, 0)
cameras = FoVOrthographicCameras(device=device, R=R, T=T, znear=0.01)
# Define the settings for rasterization and shading. Here we set the output image to be of size
# 512x512. As we are rendering images for visualization purposes only we will set points_per_pixel=10
# and radius=0.003. Refer to rasterize_points.py for explanations of these parameters.
raster_settings = PointsRasterizationSettings(
image_size=512,
radius = 0.003,
points_per_pixel = 10
)
# Create a points renderer by compositing points using an alpha compositor (nearer points
# are weighted more heavily). See [1] for an explanation.
rasterizer = PointsRasterizer(cameras=cameras, raster_settings=raster_settings)
renderer = PointsRenderer(
rasterizer=rasterizer,
compositor=AlphaCompositor()
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
###Markdown
We will now modify the **renderer** to use **alpha compositing** with a set background color.
###Code
renderer = PointsRenderer(
rasterizer=rasterizer,
# Pass in background_color to the alpha compositor, setting the background color
# to the 3 item tuple, representing rgb on a scale of 0 -> 1, in this case blue
compositor=AlphaCompositor(background_color=(0, 0, 1))
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
###Markdown
In this example we will first create a **renderer** which uses an **orthographic camera**, and applies **weighted compositing**.
###Code
# Initialize a camera.
R, T = look_at_view_transform(20, 10, 0)
cameras = FoVOrthographicCameras(device=device, R=R, T=T, znear=0.01)
# Define the settings for rasterization and shading. Here we set the output image to be of size
# 512x512. As we are rendering images for visualization purposes only we will set points_per_pixel=10
# and radius=0.003. Refer to rasterize_points.py for explanations of these parameters.
raster_settings = PointsRasterizationSettings(
image_size=512,
radius = 0.003,
points_per_pixel = 10
)
# Create a points renderer by compositing points using a weighted compositor (3D points are
# weighted according to their distance to a pixel and accumulated using a weighted sum)
renderer = PointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
compositor=NormWeightedCompositor()
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
###Markdown
We will now modify the **renderer** to use **weighted compositing** with a set background color.
###Code
renderer = PointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
# Pass in background_color to the norm weighted compositor, setting the background color
# to the 3 item tuple, representing rgb on a scale of 0 -> 1, in this case red
compositor=NormWeightedCompositor(background_color=(1,0,0))
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
###Markdown
Using the pulsar backendSwitching to the pulsar backend is easy! The pulsar backend has a compositor built-in, so the `compositor` argument is not required when creating it (a warning will be displayed if you provide it nevertheless). It pre-allocates memory on the rendering device, which is why it needs `n_channels` at construction time.All parameters for the renderer forward function are batch-wise, except the background color; `gamma`, for example, must contain one value per example in your batch. The background color is optional and by default set to all zeros. You can find a detailed explanation of how gamma influences the rendering function in the paper [Fast Differentiable Raycasting for Neural Rendering using Sphere-based Representations](https://arxiv.org/pdf/2004.07484.pdf).You can also use the `native` backend for the pulsar backend, which already provides access to point opacity. The native backend can be imported from `pytorch3d.renderer.points.pulsar`; you can find examples for this in the folder `docs/examples`.
###Code
renderer = PulsarPointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
n_channels=4
).to(device)
images = renderer(point_cloud, gamma=(1e-4,),
bg_col=torch.tensor([0.0, 1.0, 0.0, 1.0], dtype=torch.float32, device=device))
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
###Markdown
View pointclouds in Plotly figuresHere we use the PyTorch3D function `plot_scene` to render the pointcloud in a Plotly figure. `plot_scene` returns a plotly figure with traces and subplots defined by the input.
###Code
plot_scene({
"Pointcloud": {
"person": point_cloud
}
})
###Output
_____no_output_____
###Markdown
We will now render a batch of pointclouds. The first pointcloud is the same as above, and the second is all-black and offset by 2 in all dimensions so we can see them on the same plot.
###Code
point_cloud_batch = Pointclouds(points=[verts, verts + 2], features=[rgb, torch.zeros_like(rgb)])
# render both in the same plot in different traces
fig = plot_scene({
"Pointcloud": {
"person": point_cloud_batch[0],
"person2": point_cloud_batch[1]
}
})
fig.show()
# render both in the same plot in one trace
fig = plot_scene({
"Pointcloud": {
"2 people": point_cloud_batch
}
})
fig.show()
###Output
_____no_output_____
###Markdown
For batches, we can also use `plot_batch_individually` to avoid constructing the scene dictionary ourselves.
###Code
# render both in 1 row in different subplots
fig2 = plot_batch_individually(point_cloud_batch, ncols=2)
fig2.show()
# modify the plotly figure height and width
fig2.update_layout(height=500, width=500)
fig2.show()
###Output
_____no_output_____
###Markdown
We can also modify the axis arguments and axis backgrounds for either function, and title our plots in `plot_batch_individually`.
###Code
fig3 = plot_batch_individually(
point_cloud_batch,
xaxis={"backgroundcolor":"rgb(200, 200, 230)"},
yaxis={"backgroundcolor":"rgb(230, 200, 200)"},
zaxis={"backgroundcolor":"rgb(200, 230, 200)"},
subplot_titles=["Pointcloud1", "Pointcloud2"], # this should have a title for each subplot, titles can be ""
axis_args=AxisArgs(showgrid=True))
fig3.show()
###Output
_____no_output_____
###Markdown
Render a colored point cloudThis tutorial shows how to:- set up a renderer - render the point cloud - vary the rendering settings such as compositing and camera position Import modules Ensure `torch` and `torchvision` are installed. If `pytorch3d` is not installed, install it using the following cell:
###Code
import os
import sys
import torch
need_pytorch3d=False
try:
import pytorch3d
except ModuleNotFoundError:
need_pytorch3d=True
if need_pytorch3d:
if torch.__version__.startswith("1.11.") and sys.platform.startswith("linux"):
# We try to install PyTorch3D via a released wheel.
pyt_version_str=torch.__version__.split("+")[0].replace(".", "")
version_str="".join([
f"py3{sys.version_info.minor}_cu",
torch.version.cuda.replace(".",""),
f"_pyt{pyt_version_str}"
])
!pip install fvcore iopath
!pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html
else:
# We try to install PyTorch3D from source.
!curl -LO https://github.com/NVIDIA/cub/archive/1.10.0.tar.gz
!tar xzf 1.10.0.tar.gz
os.environ["CUB_HOME"] = os.getcwd() + "/cub-1.10.0"
!pip install 'git+https://github.com/facebookresearch/pytorch3d.git@stable'
import os
import torch
import torch.nn.functional as F
import matplotlib.pyplot as plt
# Util function for loading point clouds
import numpy as np
# Data structures and functions for rendering
from pytorch3d.structures import Pointclouds
from pytorch3d.vis.plotly_vis import AxisArgs, plot_batch_individually, plot_scene
from pytorch3d.renderer import (
look_at_view_transform,
FoVOrthographicCameras,
PointsRasterizationSettings,
PointsRenderer,
PulsarPointsRenderer,
PointsRasterizer,
AlphaCompositor,
NormWeightedCompositor
)
###Output
_____no_output_____
###Markdown
Load a point cloud and corresponding colorsLoad and create a **Point Cloud** object. **Pointclouds** is a unique datastructure provided in PyTorch3D for working with batches of point clouds of different sizes. If running this notebook using **Google Colab**, run the following cell to fetch the pointcloud data and save it at the path `data/PittsburghBridge`:If running locally, the data is already available at the correct path.
###Code
!mkdir -p data/PittsburghBridge
!wget -P data/PittsburghBridge https://dl.fbaipublicfiles.com/pytorch3d/data/PittsburghBridge/pointcloud.npz
# Setup
if torch.cuda.is_available():
device = torch.device("cuda:0")
torch.cuda.set_device(device)
else:
device = torch.device("cpu")
# Set paths
DATA_DIR = "./data"
obj_filename = os.path.join(DATA_DIR, "PittsburghBridge/pointcloud.npz")
# Load point cloud
pointcloud = np.load(obj_filename)
verts = torch.Tensor(pointcloud['verts']).to(device)
rgb = torch.Tensor(pointcloud['rgb']).to(device)
point_cloud = Pointclouds(points=[verts], features=[rgb])
###Output
_____no_output_____
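###Markdown
As a quick sanity check before building the `Pointclouds` object, the `.npz` archive can be inspected directly; it is a plain NumPy archive (the array names in the comment are what this dataset ships with):
###Code
print(pointcloud.files)        # expected: ['verts', 'rgb']
print(verts.shape, rgb.shape)  # (N, 3) positions and (N, 3) colors
###Output
_____no_output_____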
###Markdown
Create a rendererA renderer in PyTorch3D is composed of a **rasterizer** and a **shader** which each have a number of subcomponents such as a **camera** (orthographic/perspective). Here we initialize some of these components and use default values for the rest.In this example we will first create a **renderer** which uses an **orthographic camera**, and applies **alpha compositing**. Then we learn how to vary different components using the modular API. [1] SynSin: End to end View Synthesis from a Single Image. Olivia Wiles, Georgia Gkioxari, Richard Szeliski, Justin Johnson. CVPR 2020.
###Code
# Initialize a camera.
R, T = look_at_view_transform(20, 10, 0)
cameras = FoVOrthographicCameras(device=device, R=R, T=T, znear=0.01)
# Define the settings for rasterization and shading. Here we set the output image to be of size
# 512x512. As we are rendering images for visualization purposes only we will set points_per_pixel=10
# and radius=0.003. Refer to rasterize_points.py for explanations of these parameters.
raster_settings = PointsRasterizationSettings(
image_size=512,
radius = 0.003,
points_per_pixel = 10
)
# Create a points renderer by compositing points using an alpha compositor (nearer points
# are weighted more heavily). See [1] for an explanation.
rasterizer = PointsRasterizer(cameras=cameras, raster_settings=raster_settings)
renderer = PointsRenderer(
rasterizer=rasterizer,
compositor=AlphaCompositor()
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
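###Markdown
If you want to keep a rendering, it can be written straight to disk. A minimal sketch using matplotlib only (the filename is arbitrary):
###Code
# imsave expects float RGB values in [0, 1], so clip defensively before saving.
img = images[0, ..., :3].cpu().numpy()
plt.imsave("pointcloud_render.png", np.clip(img, 0.0, 1.0))
###Output
_____no_output_____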
###Markdown
We will now modify the **renderer** to use **alpha compositing** with a set background color.
###Code
renderer = PointsRenderer(
rasterizer=rasterizer,
# Pass in background_color to the alpha compositor, setting the background color
# to the 3 item tuple, representing rgb on a scale of 0 -> 1, in this case blue
compositor=AlphaCompositor(background_color=(0, 0, 1))
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
###Markdown
In this example we will first create a **renderer** which uses an **orthographic camera**, and applies **weighted compositing**.
###Code
# Initialize a camera.
R, T = look_at_view_transform(20, 10, 0)
cameras = FoVOrthographicCameras(device=device, R=R, T=T, znear=0.01)
# Define the settings for rasterization and shading. Here we set the output image to be of size
# 512x512. As we are rendering images for visualization purposes only we will set points_per_pixel=10
# and radius=0.003. Refer to rasterize_points.py for explanations of these parameters.
raster_settings = PointsRasterizationSettings(
image_size=512,
radius = 0.003,
points_per_pixel = 10
)
# Create a points renderer by compositing points using a weighted compositor (3D points are
# weighted according to their distance to a pixel and accumulated using a weighted sum)
renderer = PointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
compositor=NormWeightedCompositor()
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
###Markdown
We will now modify the **renderer** to use **weighted compositing** with a set background color.
###Code
renderer = PointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
# Pass in background_color to the norm weighted compositor, setting the background color
# to the 3 item tuple, representing rgb on a scale of 0 -> 1, in this case red
compositor=NormWeightedCompositor(background_color=(1,0,0))
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
###Markdown
Using the pulsar backendSwitching to the pulsar backend is easy! The pulsar backend has a compositor built-in, so the `compositor` argument is not required when creating it (a warning will be displayed if you provide it nevertheless). It pre-allocates memory on the rendering device, which is why it needs `n_channels` at construction time.All parameters for the renderer forward function are batch-wise, except the background color; `gamma`, for example, must contain one value per example in your batch. The background color is optional and by default set to all zeros. You can find a detailed explanation of how gamma influences the rendering function in the paper [Fast Differentiable Raycasting for Neural Rendering using Sphere-based Representations](https://arxiv.org/pdf/2004.07484.pdf).You can also use the `native` backend for the pulsar backend, which already provides access to point opacity. The native backend can be imported from `pytorch3d.renderer.points.pulsar`; you can find examples for this in the folder `docs/examples`.
###Code
renderer = PulsarPointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
n_channels=4
).to(device)
images = renderer(point_cloud, gamma=(1e-4,),
bg_col=torch.tensor([0.0, 1.0, 0.0, 1.0], dtype=torch.float32, device=device))
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
###Markdown
View pointclouds in Plotly figuresHere we use the PyTorch3D function `plot_scene` to render the pointcloud in a Plotly figure. `plot_scene` returns a plotly figure with traces and subplots defined by the input.
###Code
plot_scene({
"Pointcloud": {
"person": point_cloud
}
})
###Output
_____no_output_____
###Markdown
We will now render a batch of pointclouds. The first pointcloud is the same as above, and the second is all-black and offset by 2 in all dimensions so we can see them on the same plot.
###Code
point_cloud_batch = Pointclouds(points=[verts, verts + 2], features=[rgb, torch.zeros_like(rgb)])
# render both in the same plot in different traces
fig = plot_scene({
"Pointcloud": {
"person": point_cloud_batch[0],
"person2": point_cloud_batch[1]
}
})
fig.show()
# render both in the same plot in one trace
fig = plot_scene({
"Pointcloud": {
"2 people": point_cloud_batch
}
})
fig.show()
###Output
_____no_output_____
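###Markdown
A hedged sketch: `plot_scene` also accepts a `viewpoint_cameras` argument, which initializes the Plotly viewpoint from a PyTorch3D camera (assuming a camera batch of size 1, which is then used for all subplots):
###Code
fig = plot_scene({
    "Pointcloud": {
        "2 people": point_cloud_batch
    }
}, viewpoint_cameras=cameras)
fig.show()
###Output
_____no_output_____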
###Markdown
For batches, we can also use `plot_batch_individually` to avoid constructing the scene dictionary ourselves.
###Code
# render both in 1 row in different subplots
fig2 = plot_batch_individually(point_cloud_batch, ncols=2)
fig2.show()
# modify the plotly figure height and width
fig2.update_layout(height=500, width=500)
fig2.show()
###Output
_____no_output_____
###Markdown
We can also modify the axis arguments and axis backgrounds for either function, and title our plots in `plot_batch_individually`.
###Code
fig3 = plot_batch_individually(
point_cloud_batch,
xaxis={"backgroundcolor":"rgb(200, 200, 230)"},
yaxis={"backgroundcolor":"rgb(230, 200, 200)"},
zaxis={"backgroundcolor":"rgb(200, 230, 200)"},
subplot_titles=["Pointcloud1", "Pointcloud2"], # this should have a title for each subplot, titles can be ""
axis_args=AxisArgs(showgrid=True))
fig3.show()
###Output
_____no_output_____
###Markdown
Render a colored point cloudThis tutorial shows how to:- set up a renderer - render the point cloud - vary the rendering settings such as compositing and camera position Import modules If `torch`, `torchvision` and `pytorch3d` are not installed, run the following cell:
###Code
!pip install torch torchvision
import sys
import torch
if torch.__version__=='1.6.0+cu101' and sys.platform.startswith('linux'):
!pip install pytorch3d
else:
!pip install 'git+https://github.com/facebookresearch/pytorch3d.git@stable'
import os
import torch
import torch.nn.functional as F
import matplotlib.pyplot as plt
from skimage.io import imread
# Util function for loading point clouds
import numpy as np
# Data structures and functions for rendering
from pytorch3d.structures import Pointclouds
from pytorch3d.renderer import (
look_at_view_transform,
FoVOrthographicCameras,
PointsRasterizationSettings,
PointsRenderer,
PointsRasterizer,
AlphaCompositor,
NormWeightedCompositor
)
###Output
_____no_output_____
###Markdown
Load a point cloud and corresponding colorsLoad and create a **Point Cloud** object. **Pointclouds** is a unique datastructure provided in PyTorch3D for working with batches of point clouds of different sizes. If running this notebook using **Google Colab**, run the following cell to fetch the pointcloud data and save it at the path `data/PittsburghBridge`:If running locally, the data is already available at the correct path.
###Code
!mkdir -p data/PittsburghBridge
!wget -P data/PittsburghBridge https://dl.fbaipublicfiles.com/pytorch3d/data/PittsburghBridge/pointcloud.npz
# Setup
if torch.cuda.is_available():
device = torch.device("cuda:0")
torch.cuda.set_device(device)
else:
device = torch.device("cpu")
# Set paths
DATA_DIR = "./data"
obj_filename = os.path.join(DATA_DIR, "PittsburghBridge/pointcloud.npz")
# Load point cloud
pointcloud = np.load(obj_filename)
verts = torch.Tensor(pointcloud['verts']).to(device)
rgb = torch.Tensor(pointcloud['rgb']).to(device)
point_cloud = Pointclouds(points=[verts], features=[rgb])
###Output
_____no_output_____
###Markdown
Create a rendererA renderer in PyTorch3D is composed of a **rasterizer** and a **shader** which each have a number of subcomponents such as a **camera** (orthgraphic/perspective). Here we initialize some of these components and use default values for the rest.In this example we will first create a **renderer** which uses an **orthographic camera**, and applies **alpha compositing**. Then we learn how to vary different components using the modular API. [1] SynSin: End to end View Synthesis from a Single Image. Olivia Wiles, Georgia Gkioxari, Richard Szeliski, Justin Johnson. CVPR 2020.
###Code
# Initialize a camera.
R, T = look_at_view_transform(20, 10, 0)
cameras = FoVOrthographicCameras(device=device, R=R, T=T, znear=0.01)
# Define the settings for rasterization and shading. Here we set the output image to be of size
# 512x512. As we are rendering images for visualization purposes only we will set points_per_pixel=10
# and radius=0.003. Refer to rasterize_points.py for explanations of these parameters.
raster_settings = PointsRasterizationSettings(
image_size=512,
radius = 0.003,
points_per_pixel = 10
)
# Create a points renderer by compositing points using an alpha compositor (nearer points
# are weighted more heavily). See [1] for an explanation.
renderer = PointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
compositor=AlphaCompositor()
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
###Markdown
In this example we will first create a **renderer** which uses an **orthographic camera**, and applies **weighted compositing**.
###Code
# Initialize a camera.
R, T = look_at_view_transform(20, 10, 0)
cameras = FoVOrthographicCameras(device=device, R=R, T=T, znear=0.01)
# Define the settings for rasterization and shading. Here we set the output image to be of size
# 512x512. As we are rendering images for visualization purposes only we will set points_per_pixel=10
# and radius=0.003. Refer to rasterize_points.py for explanations of these parameters.
raster_settings = PointsRasterizationSettings(
image_size=512,
radius = 0.003,
points_per_pixel = 10
)
# Create a points renderer by compositing points using a weighted compositor (3D points are
# weighted according to their distance to a pixel and accumulated using a weighted sum)
renderer = PointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
compositor=NormWeightedCompositor()
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
###Markdown
Render a colored point cloudThis tutorial shows how to:- set up a renderer - render the point cloud - vary the rendering settings such as compositing and camera position Import modules If `torch`, `torchvision` and `pytorch3d` are not installed, run the following cell:
###Code
!pip install torch torchvision
!pip install 'git+https://github.com/facebookresearch/pytorch3d.git'
import os
#os.chdir('../..')
import torch
import torch.nn.functional as F
import matplotlib.pyplot as plt
from skimage.io import imread
# Util function for loading point clouds
import numpy as np
# Data structures and functions for rendering
from pytorch3d.structures import Pointclouds
from pytorch3d.renderer import (
look_at_view_transform,
OpenGLOrthographicCameras,
PointsRasterizationSettings,
PointsRenderer,
PointsRasterizer,
AlphaCompositor,
NormWeightedCompositor
)
###Output
_____no_output_____
###Markdown
Load a point cloud and corresponding colorsLoad and create a **Point Cloud** object. **Pointclouds** is a unique datastructure provided in PyTorch3D for working with batches of point clouds of different sizes. If running this notebook using **Google Colab**, run the following cell to fetch the pointcloud data and save it at the path `data/PittsburghBridge`:If running locally, the data is already available at the correct path.
###Code
!mkdir -p data/PittsburghBridge
!wget -P data/PittsburghBridge https://dl.fbaipublicfiles.com/pytorch3d/data/PittsburghBridge/pointcloud.npz
# Setup
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# Comparing a torch.device to a string is always False; check the device type instead.
if device.type == "cuda":
    torch.cuda.set_device(device)
# Set paths
DATA_DIR = "./data"
obj_filename = os.path.join(DATA_DIR, "PittsburghBridge/pointcloud.npz")
# Load point cloud
pointcloud = np.load(obj_filename)
verts = torch.Tensor(pointcloud['verts']).to(device)
rgb = torch.Tensor(pointcloud['rgb']).to(device)
point_cloud = Pointclouds(points=[verts], features=[rgb])
###Output
_____no_output_____
###Markdown
Create a rendererA renderer in PyTorch3D is composed of a **rasterizer** and a **shader** which each have a number of subcomponents such as a **camera** (orthographic/perspective). Here we initialize some of these components and use default values for the rest.In this example we will first create a **renderer** which uses an **orthographic camera**, and applies **alpha compositing**. Then we learn how to vary different components using the modular API. [1] SynSin: End to end View Synthesis from a Single Image. Olivia Wiles, Georgia Gkioxari, Richard Szeliski, Justin Johnson. CVPR 2020.
###Code
# Initialize an OpenGL orthographic camera.
R, T = look_at_view_transform(20, 10, 0)
cameras = OpenGLOrthographicCameras(device=device, R=R, T=T, znear=0.01)
# Define the settings for rasterization and shading. Here we set the output image to be of size
# 512x512. As we are rendering images for visualization purposes only we will set points_per_pixel=10
# and radius=0.003. Refer to rasterize_points.py for explanations of these parameters.
raster_settings = PointsRasterizationSettings(
image_size=512,
radius = 0.003,
points_per_pixel = 10
)
# Create a points renderer by compositing points using an alpha compositor (nearer points
# are weighted more heavily). See [1] for an explanation.
renderer = PointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
compositor=AlphaCompositor(composite_params=None)
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off")
###Output
_____no_output_____
###Markdown
In this example we will first create a **renderer** which uses an **orthographic camera**, and applies **weighted compositing**.
###Code
# Initialize an OpenGL orthographic camera.
R, T = look_at_view_transform(20, 10, 0)
cameras = OpenGLOrthographicCameras(device=device, R=R, T=T, znear=0.01)
# Define the settings for rasterization and shading. Here we set the output image to be of size
# 512x512. As we are rendering images for visualization purposes only we will set points_per_pixel=10
# and radius=0.003. Refer to rasterize_points.py for explanations of these parameters.
raster_settings = PointsRasterizationSettings(
image_size=512,
radius = 0.003,
points_per_pixel = 10
)
# Create a points renderer by compositing points using a weighted compositor (3D points are
# weighted according to their distance to a pixel and accumulated using a weighted sum)
renderer = PointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
compositor=NormWeightedCompositor(composite_params=None)
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off")
###Output
_____no_output_____
###Markdown
Render a colored point cloudThis tutorial shows how to:- set up a renderer - render the point cloud - vary the rendering settings such as compositing and camera position Import modules Ensure `torch` and `torchvision` are installed. If `pytorch3d` is not installed, install it using the following cell:
###Code
import os
import sys
import torch
need_pytorch3d=False
try:
import pytorch3d
except ModuleNotFoundError:
need_pytorch3d=True
if need_pytorch3d:
if torch.__version__.startswith("1.10.") and sys.platform.startswith("linux"):
# We try to install PyTorch3D via a released wheel.
version_str="".join([
f"py3{sys.version_info.minor}_cu",
torch.version.cuda.replace(".",""),
f"_pyt{torch.__version__[0:5:2]}"
])
!pip install pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html
else:
# We try to install PyTorch3D from source.
!curl -LO https://github.com/NVIDIA/cub/archive/1.10.0.tar.gz
!tar xzf 1.10.0.tar.gz
os.environ["CUB_HOME"] = os.getcwd() + "/cub-1.10.0"
!pip install 'git+https://github.com/facebookresearch/pytorch3d.git@stable'
import os
import torch
import torch.nn.functional as F
import matplotlib.pyplot as plt
# Util function for loading point clouds
import numpy as np
# Data structures and functions for rendering
from pytorch3d.structures import Pointclouds
from pytorch3d.vis.plotly_vis import AxisArgs, plot_batch_individually, plot_scene
from pytorch3d.renderer import (
look_at_view_transform,
FoVOrthographicCameras,
PointsRasterizationSettings,
PointsRenderer,
PulsarPointsRenderer,
PointsRasterizer,
AlphaCompositor,
NormWeightedCompositor
)
###Output
_____no_output_____
###Markdown
Load a point cloud and corresponding colorsLoad and create a **Point Cloud** object. **Pointclouds** is a unique datastructure provided in PyTorch3D for working with batches of point clouds of different sizes. If running this notebook using **Google Colab**, run the following cell to fetch the pointcloud data and save it at the path `data/PittsburghBridge`:If running locally, the data is already available at the correct path.
###Code
!mkdir -p data/PittsburghBridge
!wget -P data/PittsburghBridge https://dl.fbaipublicfiles.com/pytorch3d/data/PittsburghBridge/pointcloud.npz
# Setup
if torch.cuda.is_available():
device = torch.device("cuda:0")
torch.cuda.set_device(device)
else:
device = torch.device("cpu")
# Set paths
DATA_DIR = "./data"
obj_filename = os.path.join(DATA_DIR, "PittsburghBridge/pointcloud.npz")
# Load point cloud
pointcloud = np.load(obj_filename)
verts = torch.Tensor(pointcloud['verts']).to(device)
rgb = torch.Tensor(pointcloud['rgb']).to(device)
point_cloud = Pointclouds(points=[verts], features=[rgb])
###Output
_____no_output_____
###Markdown
Create a rendererA renderer in PyTorch3D is composed of a **rasterizer** and a **shader** which each have a number of subcomponents such as a **camera** (orthographic/perspective). Here we initialize some of these components and use default values for the rest.In this example we will first create a **renderer** which uses an **orthographic camera**, and applies **alpha compositing**. Then we learn how to vary different components using the modular API. [1] SynSin: End to end View Synthesis from a Single Image. Olivia Wiles, Georgia Gkioxari, Richard Szeliski, Justin Johnson. CVPR 2020.
###Code
# Initialize a camera.
R, T = look_at_view_transform(20, 10, 0)
cameras = FoVOrthographicCameras(device=device, R=R, T=T, znear=0.01)
# Define the settings for rasterization and shading. Here we set the output image to be of size
# 512x512. As we are rendering images for visualization purposes only we will set points_per_pixel=10
# and radius=0.003. Refer to rasterize_points.py for explanations of these parameters.
raster_settings = PointsRasterizationSettings(
image_size=512,
radius = 0.003,
points_per_pixel = 10
)
# Create a points renderer by compositing points using an alpha compositor (nearer points
# are weighted more heavily). See [1] for an explanation.
rasterizer = PointsRasterizer(cameras=cameras, raster_settings=raster_settings)
renderer = PointsRenderer(
rasterizer=rasterizer,
compositor=AlphaCompositor()
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
###Markdown
We will now modify the **renderer** to use **alpha compositing** with a set background color.
###Code
renderer = PointsRenderer(
rasterizer=rasterizer,
# Pass in background_color to the alpha compositor, setting the background color
# to the 3 item tuple, representing rgb on a scale of 0 -> 1, in this case blue
compositor=AlphaCompositor(background_color=(0, 0, 1))
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
###Markdown
In this example we will first create a **renderer** which uses an **orthographic camera**, and applies **weighted compositing**.
###Code
# Initialize a camera.
R, T = look_at_view_transform(20, 10, 0)
cameras = FoVOrthographicCameras(device=device, R=R, T=T, znear=0.01)
# Define the settings for rasterization and shading. Here we set the output image to be of size
# 512x512. As we are rendering images for visualization purposes only we will set points_per_pixel=10
# and radius=0.003. Refer to rasterize_points.py for explanations of these parameters.
raster_settings = PointsRasterizationSettings(
image_size=512,
radius = 0.003,
points_per_pixel = 10
)
# Create a points renderer by compositing points using a weighted compositor (3D points are
# weighted according to their distance to a pixel and accumulated using a weighted sum)
renderer = PointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
compositor=NormWeightedCompositor()
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
###Markdown
We will now modify the **renderer** to use **weighted compositing** with a set background color.
###Code
renderer = PointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
# Pass in background_color to the norm weighted compositor, setting the background color
# to the 3 item tuple, representing rgb on a scale of 0 -> 1, in this case red
compositor=NormWeightedCompositor(background_color=(1,0,0))
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
###Markdown
Using the pulsar backendSwitching to the pulsar backend is easy! The pulsar backend has a compositor built-in, so the `compositor` argument is not required when creating it (a warning will be displayed if you provide it nevertheless). It pre-allocates memory on the rendering device, which is why it needs `n_channels` at construction time.All parameters for the renderer forward function are batch-wise, except the background color; `gamma`, for example, must contain one value per example in your batch. The background color is optional and by default set to all zeros. You can find a detailed explanation of how gamma influences the rendering function in the paper [Fast Differentiable Raycasting for Neural Rendering using Sphere-based Representations](https://arxiv.org/pdf/2004.07484.pdf).You can also use the `native` backend for the pulsar backend, which already provides access to point opacity. The native backend can be imported from `pytorch3d.renderer.points.pulsar`; you can find examples for this in the folder `docs/examples`.
###Code
renderer = PulsarPointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
n_channels=4
).to(device)
images = renderer(point_cloud, gamma=(1e-4,),
bg_col=torch.tensor([0.0, 1.0, 0.0, 1.0], dtype=torch.float32, device=device))
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
###Markdown
View pointclouds in Plotly figuresHere we use the PyTorch3D function `plot_scene` to render the pointcloud in a Plotly figure. `plot_scene` returns a plotly figure with traces and subplots defined by the input.
###Code
plot_scene({
"Pointcloud": {
"person": point_cloud
}
})
###Output
_____no_output_____
###Markdown
We will now render a batch of pointclouds. The first pointcloud is the same as above, and the second is all-black and offset by 2 in all dimensions so we can see them on the same plot.
###Code
point_cloud_batch = Pointclouds(points=[verts, verts + 2], features=[rgb, torch.zeros_like(rgb)])
# render both in the same plot in different traces
fig = plot_scene({
"Pointcloud": {
"person": point_cloud_batch[0],
"person2": point_cloud_batch[1]
}
})
fig.show()
# render both in the same plot in one trace
fig = plot_scene({
"Pointcloud": {
"2 people": point_cloud_batch
}
})
fig.show()
###Output
_____no_output_____
###Markdown
For batches, we can also use `plot_batch_individually` to avoid constructing the scene dictionary ourselves.
###Code
# render both in 1 row in different subplots
fig2 = plot_batch_individually(point_cloud_batch, ncols=2)
fig2.show()
# modify the plotly figure height and width
fig2.update_layout(height=500, width=500)
fig2.show()
###Output
_____no_output_____
###Markdown
We can also modify the axis arguments and axis backgrounds for either function, and title our plots in `plot_batch_individually`.
###Code
fig3 = plot_batch_individually(
point_cloud_batch,
xaxis={"backgroundcolor":"rgb(200, 200, 230)"},
yaxis={"backgroundcolor":"rgb(230, 200, 200)"},
zaxis={"backgroundcolor":"rgb(200, 230, 200)"},
subplot_titles=["Pointcloud1", "Pointcloud2"], # this should have a title for each subplot, titles can be ""
axis_args=AxisArgs(showgrid=True))
fig3.show()
###Output
_____no_output_____
###Markdown
Render a colored point cloudThis tutorial shows how to:- set up a renderer - render the point cloud - vary the rendering settings such as compositing and camera position Import modules Ensure `torch` and `torchvision` are installed. If `pytorch3d` is not installed, install it using the following cell:
###Code
import os
import sys
import torch
need_pytorch3d=False
try:
import pytorch3d
except ModuleNotFoundError:
need_pytorch3d=True
if need_pytorch3d:
if torch.__version__.startswith("1.9") and sys.platform.startswith("linux"):
# We try to install PyTorch3D via a released wheel.
version_str="".join([
f"py3{sys.version_info.minor}_cu",
torch.version.cuda.replace(".",""),
f"_pyt{torch.__version__[0:5:2]}"
])
!pip install pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html
else:
# We try to install PyTorch3D from source.
!curl -LO https://github.com/NVIDIA/cub/archive/1.10.0.tar.gz
!tar xzf 1.10.0.tar.gz
os.environ["CUB_HOME"] = os.getcwd() + "/cub-1.10.0"
!pip install 'git+https://github.com/facebookresearch/pytorch3d.git@stable'
import os
import torch
import torch.nn.functional as F
import matplotlib.pyplot as plt
# Util function for loading point clouds
import numpy as np
# Data structures and functions for rendering
from pytorch3d.structures import Pointclouds
from pytorch3d.vis.plotly_vis import AxisArgs, plot_batch_individually, plot_scene
from pytorch3d.renderer import (
look_at_view_transform,
FoVOrthographicCameras,
PointsRasterizationSettings,
PointsRenderer,
PulsarPointsRenderer,
PointsRasterizer,
AlphaCompositor,
NormWeightedCompositor
)
###Output
_____no_output_____
###Markdown
Load a point cloud and corresponding colorsLoad and create a **Point Cloud** object. **Pointclouds** is a unique datastructure provided in PyTorch3D for working with batches of point clouds of different sizes. If running this notebook using **Google Colab**, run the following cell to fetch the pointcloud data and save it at the path `data/PittsburghBridge`:If running locally, the data is already available at the correct path.
###Code
!mkdir -p data/PittsburghBridge
!wget -P data/PittsburghBridge https://dl.fbaipublicfiles.com/pytorch3d/data/PittsburghBridge/pointcloud.npz
# Setup
if torch.cuda.is_available():
device = torch.device("cuda:0")
torch.cuda.set_device(device)
else:
device = torch.device("cpu")
# Set paths
DATA_DIR = "./data"
obj_filename = os.path.join(DATA_DIR, "PittsburghBridge/pointcloud.npz")
# Load point cloud
pointcloud = np.load(obj_filename)
verts = torch.Tensor(pointcloud['verts']).to(device)
rgb = torch.Tensor(pointcloud['rgb']).to(device)
point_cloud = Pointclouds(points=[verts], features=[rgb])
###Output
_____no_output_____
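###Markdown
The `.npz` file is a NumPy archive holding one array per key. A quick sketch (assuming the cell above has run) to inspect what was loaded:
###Code
# The archive exposes its array names; here we expect 'verts' and 'rgb'.
print(pointcloud.files)
# verts holds (N, 3) xyz positions and rgb holds (N, 3) per-point colors.
print(verts.shape, rgb.shape)
###Output
_____no_output_____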
###Markdown
Create a renderer

A renderer in PyTorch3D is composed of a **rasterizer** and a **shader** which each have a number of subcomponents such as a **camera** (orthographic/perspective). Here we initialize some of these components and use default values for the rest.

In this example we will first create a **renderer** which uses an **orthographic camera** and applies **alpha compositing**. Then we learn how to vary different components using the modular API.

[1] SynSin: End to end View Synthesis from a Single Image. Olivia Wiles, Georgia Gkioxari, Richard Szeliski, Justin Johnson. CVPR 2020.
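
For reference, a rough sketch of the two compositing rules used below (my paraphrase following [1]; see `pytorch3d/renderer/compositing.py` for the exact definitions): with the `points_per_pixel` nearest points per pixel ordered front to back and per-point weights $\alpha_k$, alpha compositing accumulates $$C = \sum_k \alpha_k c_k \prod_{j<k} (1 - \alpha_j),$$ while the norm weighted compositor used later instead normalizes the weights: $$C = \frac{\sum_k \alpha_k c_k}{\sum_k \alpha_k}.$$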
###Code
# Initialize a camera.
R, T = look_at_view_transform(20, 10, 0)
cameras = FoVOrthographicCameras(device=device, R=R, T=T, znear=0.01)
# Define the settings for rasterization and shading. Here we set the output image to be of size
# 512x512. As we are rendering images for visualization purposes only we will set points_per_pixel=10
# and radius=0.003. Refer to rasterize_points.py for explanations of these parameters.
raster_settings = PointsRasterizationSettings(
image_size=512,
radius = 0.003,
points_per_pixel = 10
)
# Create a points renderer by compositing points using an alpha compositor (nearer points
# are weighted more heavily). See [1] for an explanation.
rasterizer = PointsRasterizer(cameras=cameras, raster_settings=raster_settings)
renderer = PointsRenderer(
rasterizer=rasterizer,
compositor=AlphaCompositor()
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
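###Markdown
The camera position can be varied in the same modular way. A minimal sketch (reusing `raster_settings` from above; the distance, elevation and azimuth values are arbitrary choices) of rendering the same cloud from a different viewpoint:
###Code
# look_at_view_transform(dist, elev, azim) returns the rotation R and
# translation T of a camera at those spherical coordinates, looking at the origin.
R2, T2 = look_at_view_transform(dist=20, elev=30, azim=90)
cameras2 = FoVOrthographicCameras(device=device, R=R2, T=T2, znear=0.01)
renderer2 = PointsRenderer(
    rasterizer=PointsRasterizer(cameras=cameras2, raster_settings=raster_settings),
    compositor=AlphaCompositor()
)
images = renderer2(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____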
###Markdown
We will now modify the **renderer** to use **alpha compositing** with a set background color.
###Code
renderer = PointsRenderer(
rasterizer=rasterizer,
# Pass in background_color to the alpha compositor, setting the background color
# to the 3 item tuple, representing rgb on a scale of 0 -> 1, in this case blue
compositor=AlphaCompositor(background_color=(0, 0, 1))
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
###Markdown
In this example we will create a **renderer** which uses an **orthographic camera** and applies **weighted compositing**.
###Code
# Initialize a camera.
R, T = look_at_view_transform(20, 10, 0)
cameras = FoVOrthographicCameras(device=device, R=R, T=T, znear=0.01)
# Define the settings for rasterization and shading. Here we set the output image to be of size
# 512x512. As we are rendering images for visualization purposes only we will set points_per_pixel=10
# and radius=0.003. Refer to rasterize_points.py for explanations of these parameters.
raster_settings = PointsRasterizationSettings(
image_size=512,
radius = 0.003,
points_per_pixel = 10
)
# Create a points renderer by compositing points using a weighted compositor (3D points are
# weighted according to their distance to a pixel and accumulated using a weighted sum)
renderer = PointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
compositor=NormWeightedCompositor()
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
###Markdown
We will now modify the **renderer** to use **weighted compositing** with a set background color.
###Code
renderer = PointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
# Pass in background_color to the norm weighted compositor, setting the background color
# to the 3 item tuple, representing rgb on a scale of 0 -> 1, in this case red
compositor=NormWeightedCompositor(background_color=(1,0,0))
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
###Markdown
Using the pulsar backend

Switching to the pulsar backend is easy! The pulsar backend has a compositor built-in, so the `compositor` argument is not required when creating it (a warning will be displayed if you provide it nevertheless). It pre-allocates memory on the rendering device, which is why it needs `n_channels` at construction time.

All parameters for the renderer forward function are batch-wise (in this example, `gamma`), except the background color, and you have to provide as many values as you have examples in your batch. The background color is optional and by default set to all zeros. You can find a detailed explanation of how gamma influences the rendering function in the paper [Fast Differentiable Raycasting for Neural Rendering using Sphere-based Representations](https://arxiv.org/pdf/2004.07484.pdf).

You can also use the `native` backend for the pulsar backend, which already provides access to point opacity. The native backend can be imported from `pytorch3d.renderer.points.pulsar`; you can find examples for this in the folder `docs/examples`.
###Code
renderer = PulsarPointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
n_channels=4
).to(device)
images = renderer(point_cloud, gamma=(1e-4,),
bg_col=torch.tensor([0.0, 1.0, 0.0, 1.0], dtype=torch.float32, device=device))
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
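###Markdown
To see the effect of `gamma`, we can re-render with a different value; roughly, larger values blend points more softly. A minimal sketch (the new gamma value is an arbitrary choice; one value is still needed for the single batch element):
###Code
# Same renderer and point cloud as above; only gamma changes.
images = renderer(point_cloud, gamma=(1e-1,),
                  bg_col=torch.tensor([0.0, 1.0, 0.0, 1.0], dtype=torch.float32, device=device))
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____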
###Markdown
View pointclouds in Plotly figures

Here we use the PyTorch3D function `plot_scene` to render the pointcloud in a Plotly figure. `plot_scene` returns a plotly figure with trace and subplots defined by the input.
###Code
plot_scene({
"Pointcloud": {
"person": point_cloud
}
})
###Output
_____no_output_____
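###Markdown
The top-level keys of the dict passed to `plot_scene` define subplots, and `ncols` controls their layout. A minimal sketch (the subplot names are arbitrary) showing the same cloud twice, side by side:
###Code
# Each top-level key becomes its own subplot; ncols lays them out in one row.
fig = plot_scene({
    "left": {"person": point_cloud},
    "right": {"person": point_cloud}
}, ncols=2)
fig.show()
###Output
_____no_output_____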
###Markdown
We will now render a batch of pointclouds. The first pointcloud is the same as above, and the second is all-black and offset by 2 in all dimensions so we can see them on the same plot.
###Code
point_cloud_batch = Pointclouds(points=[verts, verts + 2], features=[rgb, torch.zeros_like(rgb)])
# render both in the same plot in different traces
fig = plot_scene({
"Pointcloud": {
"person": point_cloud_batch[0],
"person2": point_cloud_batch[1]
}
})
fig.show()
# render both in the same plot in one trace
fig = plot_scene({
"Pointcloud": {
"2 people": point_cloud_batch
}
})
fig.show()
###Output
_____no_output_____
###Markdown
For batches, we can also use `plot_batch_individually` to avoid constructing the scene dictionary ourselves.
###Code
# render both in 1 row in different subplots
fig2 = plot_batch_individually(point_cloud_batch, ncols=2)
fig2.show()
# modify the plotly figure height and width
fig2.update_layout(height=500, width=500)
fig2.show()
###Output
_____no_output_____
###Markdown
We can also modify the axis arguments and axis backgrounds for either function, and title our plots in `plot_batch_individually`.
###Code
fig3 = plot_batch_individually(
point_cloud_batch,
xaxis={"backgroundcolor":"rgb(200, 200, 230)"},
yaxis={"backgroundcolor":"rgb(230, 200, 200)"},
zaxis={"backgroundcolor":"rgb(200, 230, 200)"},
subplot_titles=["Pointcloud1", "Pointcloud2"], # this should have a title for each subplot, titles can be ""
axis_args=AxisArgs(showgrid=True))
fig3.show()
###Output
_____no_output_____
###Markdown
Render a colored point cloud

This tutorial shows how to:
- set up a renderer
- render the point cloud
- vary the rendering settings such as compositing and camera position

Import modules

Ensure `torch` and `torchvision` are installed. If `pytorch3d` is not installed, install it using the following cell:
###Code
import os
import sys
import torch
need_pytorch3d=False
try:
import pytorch3d
except ModuleNotFoundError:
need_pytorch3d=True
if need_pytorch3d:
if torch.__version__.startswith("1.7") and sys.platform.startswith("linux"):
# We try to install PyTorch3D via a released wheel.
version_str="".join([
f"py3{sys.version_info.minor}_cu",
torch.version.cuda.replace(".",""),
f"_pyt{torch.__version__[0:5:2]}"
])
!pip install pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html
else:
# We try to install PyTorch3D from source.
!curl -LO https://github.com/NVIDIA/cub/archive/1.10.0.tar.gz
!tar xzf 1.10.0.tar.gz
os.environ["CUB_HOME"] = os.getcwd() + "/cub-1.10.0"
!pip install 'git+https://github.com/facebookresearch/pytorch3d.git@stable'
import os
import torch
import torch.nn.functional as F
import matplotlib.pyplot as plt
# Util function for loading point clouds
import numpy as np
# Data structures and functions for rendering
from pytorch3d.structures import Pointclouds
from pytorch3d.vis.plotly_vis import AxisArgs, plot_batch_individually, plot_scene
from pytorch3d.renderer import (
look_at_view_transform,
FoVOrthographicCameras,
PointsRasterizationSettings,
PointsRenderer,
PulsarPointsRenderer,
PointsRasterizer,
AlphaCompositor,
NormWeightedCompositor
)
###Output
_____no_output_____
###Markdown
Load a point cloud and corresponding colors

Load and create a **Point Cloud** object. **Pointclouds** is a unique data structure provided in PyTorch3D for working with batches of point clouds of different sizes.

If running this notebook using **Google Colab**, run the following cell to fetch the pointcloud data and save it at the path `data/PittsburghBridge`; if running locally, the data is already available at the correct path.
###Code
!mkdir -p data/PittsburghBridge
!wget -P data/PittsburghBridge https://dl.fbaipublicfiles.com/pytorch3d/data/PittsburghBridge/pointcloud.npz
# Setup
if torch.cuda.is_available():
device = torch.device("cuda:0")
torch.cuda.set_device(device)
else:
device = torch.device("cpu")
# Set paths
DATA_DIR = "./data"
obj_filename = os.path.join(DATA_DIR, "PittsburghBridge/pointcloud.npz")
# Load point cloud
pointcloud = np.load(obj_filename)
verts = torch.Tensor(pointcloud['verts']).to(device)
rgb = torch.Tensor(pointcloud['rgb']).to(device)
point_cloud = Pointclouds(points=[verts], features=[rgb])
###Output
_____no_output_____
###Markdown
Create a renderer

A renderer in PyTorch3D is composed of a **rasterizer** and a **shader** which each have a number of subcomponents such as a **camera** (orthographic/perspective). Here we initialize some of these components and use default values for the rest.

In this example we will first create a **renderer** which uses an **orthographic camera** and applies **alpha compositing**. Then we learn how to vary different components using the modular API.

[1] SynSin: End to end View Synthesis from a Single Image. Olivia Wiles, Georgia Gkioxari, Richard Szeliski, Justin Johnson. CVPR 2020.
###Code
# Initialize a camera.
R, T = look_at_view_transform(20, 10, 0)
cameras = FoVOrthographicCameras(device=device, R=R, T=T, znear=0.01)
# Define the settings for rasterization and shading. Here we set the output image to be of size
# 512x512. As we are rendering images for visualization purposes only we will set points_per_pixel=10
# and radius=0.003. Refer to rasterize_points.py for explanations of these parameters.
raster_settings = PointsRasterizationSettings(
image_size=512,
radius = 0.003,
points_per_pixel = 10
)
# Create a points renderer by compositing points using an alpha compositor (nearer points
# are weighted more heavily). See [1] for an explanation.
rasterizer = PointsRasterizer(cameras=cameras, raster_settings=raster_settings)
renderer = PointsRenderer(
rasterizer=rasterizer,
compositor=AlphaCompositor()
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
###Markdown
We will now modify the **renderer** to use **alpha compositing** with a set background color.
###Code
renderer = PointsRenderer(
rasterizer=rasterizer,
# Pass in background_color to the alpha compositor, setting the background color
# to the 3 item tuple, representing rgb on a scale of 0 -> 1, in this case blue
compositor=AlphaCompositor(background_color=(0, 0, 1))
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
###Markdown
In this example we will create a **renderer** which uses an **orthographic camera** and applies **weighted compositing**.
###Code
# Initialize a camera.
R, T = look_at_view_transform(20, 10, 0)
cameras = FoVOrthographicCameras(device=device, R=R, T=T, znear=0.01)
# Define the settings for rasterization and shading. Here we set the output image to be of size
# 512x512. As we are rendering images for visualization purposes only we will set points_per_pixel=10
# and radius=0.003. Refer to rasterize_points.py for explanations of these parameters.
raster_settings = PointsRasterizationSettings(
image_size=512,
radius = 0.003,
points_per_pixel = 10
)
# Create a points renderer by compositing points using a weighted compositor (3D points are
# weighted according to their distance to a pixel and accumulated using a weighted sum)
renderer = PointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
compositor=NormWeightedCompositor()
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
###Markdown
We will now modify the **renderer** to use **weighted compositing** with a set background color.
###Code
renderer = PointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
# Pass in background_color to the norm weighted compositor, setting the background color
# to the 3 item tuple, representing rgb on a scale of 0 -> 1, in this case red
compositor=NormWeightedCompositor(background_color=(1,0,0))
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
###Markdown
Using the pulsar backend

Switching to the pulsar backend is easy! The pulsar backend has a compositor built-in, so the `compositor` argument is not required when creating it (a warning will be displayed if you provide it nevertheless). It pre-allocates memory on the rendering device, which is why it needs `n_channels` at construction time.

All parameters for the renderer forward function are batch-wise (in this example, `gamma`), except the background color, and you have to provide as many values as you have examples in your batch. The background color is optional and by default set to all zeros. You can find a detailed explanation of how gamma influences the rendering function in the paper [Fast Differentiable Raycasting for Neural Rendering using Sphere-based Representations](https://arxiv.org/pdf/2004.07484.pdf).

You can also use the `native` backend for the pulsar backend, which already provides access to point opacity. The native backend can be imported from `pytorch3d.renderer.points.pulsar`; you can find examples for this in the folder `docs/examples`.
###Code
renderer = PulsarPointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
n_channels=4
).to(device)
images = renderer(point_cloud, gamma=(1e-4,),
bg_col=torch.tensor([0.0, 1.0, 0.0, 1.0], dtype=torch.float32, device=device))
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
###Markdown
View pointclouds in Plotly figures

Here we use the PyTorch3D function `plot_scene` to render the pointcloud in a Plotly figure. `plot_scene` returns a plotly figure with trace and subplots defined by the input.
###Code
plot_scene({
"Pointcloud": {
"person": point_cloud
}
})
###Output
_____no_output_____
###Markdown
We will now render a batch of pointclouds. The first pointcloud is the same as above, and the second is all-black and offset by 2 in all dimensions so we can see them on the same plot.
###Code
point_cloud_batch = Pointclouds(points=[verts, verts + 2], features=[rgb, torch.zeros_like(rgb)])
# render both in the same plot in different traces
fig = plot_scene({
"Pointcloud": {
"person": point_cloud_batch[0],
"person2": point_cloud_batch[1]
}
})
fig.show()
# render both in the same plot in one trace
fig = plot_scene({
"Pointcloud": {
"2 people": point_cloud_batch
}
})
fig.show()
###Output
_____no_output_____
###Markdown
For batches, we can also use `plot_batch_individually` to avoid constructing the scene dictionary ourselves.
###Code
# render both in 1 row in different subplots
fig2 = plot_batch_individually(point_cloud_batch, ncols=2)
fig2.show()
# modify the plotly figure height and width
fig2.update_layout(height=500, width=500)
fig2.show()
###Output
_____no_output_____
###Markdown
We can also modify the axis arguments and axis backgrounds for either function, and title our plots in `plot_batch_individually`.
###Code
fig3 = plot_batch_individually(
point_cloud_batch,
xaxis={"backgroundcolor":"rgb(200, 200, 230)"},
yaxis={"backgroundcolor":"rgb(230, 200, 200)"},
zaxis={"backgroundcolor":"rgb(200, 230, 200)"},
subplot_titles=["Pointcloud1", "Pointcloud2"], # this should have a title for each subplot, titles can be ""
axis_args=AxisArgs(showgrid=True))
fig3.show()
###Output
_____no_output_____
###Markdown
Render a colored point cloud

This tutorial shows how to:
- set up a renderer
- render the point cloud
- vary the rendering settings such as compositing and camera position

Import modules

If `torch`, `torchvision` and `pytorch3d` are not installed, run the following cell:
###Code
!pip install torch torchvision
import sys
import torch
if torch.__version__=='1.6.0+cu101' and sys.platform.startswith('linux'):
!pip install pytorch3d
else:
!pip install 'git+https://github.com/facebookresearch/pytorch3d.git@stable'
import os
import torch
import torch.nn.functional as F
import matplotlib.pyplot as plt
from skimage.io import imread
# Util function for loading point clouds
import numpy as np
# Data structures and functions for rendering
from pytorch3d.structures import Pointclouds
from pytorch3d.renderer import (
look_at_view_transform,
FoVOrthographicCameras,
PointsRasterizationSettings,
PointsRenderer,
PointsRasterizer,
AlphaCompositor,
NormWeightedCompositor
)
###Output
_____no_output_____
###Markdown
Load a point cloud and corresponding colors

Load and create a **Point Cloud** object. **Pointclouds** is a unique data structure provided in PyTorch3D for working with batches of point clouds of different sizes.

If running this notebook using **Google Colab**, run the following cell to fetch the pointcloud data and save it at the path `data/PittsburghBridge`; if running locally, the data is already available at the correct path.
###Code
!mkdir -p data/PittsburghBridge
!wget -P data/PittsburghBridge https://dl.fbaipublicfiles.com/pytorch3d/data/PittsburghBridge/pointcloud.npz
# Setup
if torch.cuda.is_available():
device = torch.device("cuda:0")
torch.cuda.set_device(device)
else:
device = torch.device("cpu")
# Set paths
DATA_DIR = "./data"
obj_filename = os.path.join(DATA_DIR, "PittsburghBridge/pointcloud.npz")
# Load point cloud
pointcloud = np.load(obj_filename)
verts = torch.Tensor(pointcloud['verts']).to(device)
rgb = torch.Tensor(pointcloud['rgb']).to(device)
point_cloud = Pointclouds(points=[verts], features=[rgb])
###Output
_____no_output_____
###Markdown
Create a renderer

A renderer in PyTorch3D is composed of a **rasterizer** and a **shader** which each have a number of subcomponents such as a **camera** (orthographic/perspective). Here we initialize some of these components and use default values for the rest.

In this example we will first create a **renderer** which uses an **orthographic camera** and applies **alpha compositing**. Then we learn how to vary different components using the modular API.

[1] SynSin: End to end View Synthesis from a Single Image. Olivia Wiles, Georgia Gkioxari, Richard Szeliski, Justin Johnson. CVPR 2020.
###Code
# Initialize a camera.
R, T = look_at_view_transform(20, 10, 0)
cameras = FoVOrthographicCameras(device=device, R=R, T=T, znear=0.01)
# Define the settings for rasterization and shading. Here we set the output image to be of size
# 512x512. As we are rendering images for visualization purposes only we will set points_per_pixel=10
# and radius=0.003. Refer to rasterize_points.py for explanations of these parameters.
raster_settings = PointsRasterizationSettings(
image_size=512,
radius = 0.003,
points_per_pixel = 10
)
# Create a points renderer by compositing points using an alpha compositor (nearer points
# are weighted more heavily). See [1] for an explanation.
renderer = PointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
compositor=AlphaCompositor()
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.grid(False)
plt.axis("off");
###Output
_____no_output_____
###Markdown
We will now modify the **renderer** to use **alpha compositing** with a set background color.
###Code
renderer = PointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
# Pass in background_color to the alpha compositor, setting the background color
# to the 3 item tuple, representing rgb on a scale of 0 -> 1, in this case blue
compositor=AlphaCompositor(background_color=(0, 0, 1))
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.grid(False)
plt.axis("off");
###Output
_____no_output_____
###Markdown
In this example we will create a **renderer** which uses an **orthographic camera** and applies **weighted compositing**.
###Code
# Initialize a camera.
R, T = look_at_view_transform(20, 10, 0)
cameras = FoVOrthographicCameras(device=device, R=R, T=T, znear=0.01)
# Define the settings for rasterization and shading. Here we set the output image to be of size
# 512x512. As we are rendering images for visualization purposes only we will set points_per_pixel=10
# and radius=0.003. Refer to rasterize_points.py for explanations of these parameters.
raster_settings = PointsRasterizationSettings(
image_size=512,
radius = 0.003,
points_per_pixel = 10
)
# Create a points renderer by compositing points using a weighted compositor (3D points are
# weighted according to their distance to a pixel and accumulated using a weighted sum)
renderer = PointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
compositor=NormWeightedCompositor()
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.grid(False)
plt.axis("off");
###Output
_____no_output_____
###Markdown
We will now modify the **renderer** to use **weighted compositing** with a set background color.
###Code
renderer = PointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
# Pass in background_color to the norm weighted compositor, setting the background color
# to the 3 item tuple, representing rgb on a scale of 0 -> 1, in this case red
compositor=NormWeightedCompositor(background_color=(1,0,0))
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.grid(False)
plt.axis("off");
###Output
_____no_output_____
###Markdown
Render a colored point cloud

This tutorial shows how to:
- set up a renderer
- render the point cloud
- vary the rendering settings such as compositing and camera position

Import modules

Ensure `torch` and `torchvision` are installed. If `pytorch3d` is not installed, install it using the following cell:
###Code
import os
import sys
import torch
need_pytorch3d=False
try:
import pytorch3d
except ModuleNotFoundError:
need_pytorch3d=True
if need_pytorch3d:
if torch.__version__.startswith("1.9") and sys.platform.startswith("linux"):
# We try to install PyTorch3D via a released wheel.
version_str="".join([
f"py3{sys.version_info.minor}_cu",
torch.version.cuda.replace(".",""),
f"_pyt{torch.__version__[0:5:2]}"
])
!pip install pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html
else:
# We try to install PyTorch3D from source.
!curl -LO https://github.com/NVIDIA/cub/archive/1.10.0.tar.gz
!tar xzf 1.10.0.tar.gz
os.environ["CUB_HOME"] = os.getcwd() + "/cub-1.10.0"
!pip install 'git+https://github.com/facebookresearch/pytorch3d.git@stable'
import os
import torch
import torch.nn.functional as F
import matplotlib.pyplot as plt
# Util function for loading point clouds
import numpy as np
# Data structures and functions for rendering
from pytorch3d.structures import Pointclouds
from pytorch3d.vis.plotly_vis import AxisArgs, plot_batch_individually, plot_scene
from pytorch3d.renderer import (
look_at_view_transform,
FoVOrthographicCameras,
PointsRasterizationSettings,
PointsRenderer,
PulsarPointsRenderer,
PointsRasterizer,
AlphaCompositor,
NormWeightedCompositor
)
###Output
_____no_output_____
###Markdown
Load a point cloud and corresponding colors

Load and create a **Point Cloud** object. **Pointclouds** is a unique data structure provided in PyTorch3D for working with batches of point clouds of different sizes.

If running this notebook using **Google Colab**, run the following cell to fetch the pointcloud data and save it at the path `data/PittsburghBridge`; if running locally, the data is already available at the correct path.
###Code
!mkdir -p data/PittsburghBridge
!wget -P data/PittsburghBridge https://dl.fbaipublicfiles.com/pytorch3d/data/PittsburghBridge/pointcloud.npz
# Setup
if torch.cuda.is_available():
device = torch.device("cuda:0")
torch.cuda.set_device(device)
else:
device = torch.device("cpu")
# Set paths
DATA_DIR = "./data"
obj_filename = os.path.join(DATA_DIR, "PittsburghBridge/pointcloud.npz")
# Load point cloud
pointcloud = np.load(obj_filename)
verts = torch.Tensor(pointcloud['verts']).to(device)
rgb = torch.Tensor(pointcloud['rgb']).to(device)
point_cloud = Pointclouds(points=[verts], features=[rgb])
###Output
_____no_output_____
###Markdown
Create a renderer

A renderer in PyTorch3D is composed of a **rasterizer** and a **shader** which each have a number of subcomponents such as a **camera** (orthographic/perspective). Here we initialize some of these components and use default values for the rest.

In this example we will first create a **renderer** which uses an **orthographic camera** and applies **alpha compositing**. Then we learn how to vary different components using the modular API.

[1] SynSin: End to end View Synthesis from a Single Image. Olivia Wiles, Georgia Gkioxari, Richard Szeliski, Justin Johnson. CVPR 2020.
###Code
# Initialize a camera.
R, T = look_at_view_transform(20, 10, 0)
cameras = FoVOrthographicCameras(device=device, R=R, T=T, znear=0.01)
# Define the settings for rasterization and shading. Here we set the output image to be of size
# 512x512. As we are rendering images for visualization purposes only we will set points_per_pixel=10
# and radius=0.003. Refer to rasterize_points.py for explanations of these parameters.
raster_settings = PointsRasterizationSettings(
image_size=512,
radius = 0.003,
points_per_pixel = 10
)
# Create a points renderer by compositing points using an alpha compositor (nearer points
# are weighted more heavily). See [1] for an explanation.
rasterizer = PointsRasterizer(cameras=cameras, raster_settings=raster_settings)
renderer = PointsRenderer(
rasterizer=rasterizer,
compositor=AlphaCompositor()
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
###Markdown
We will now modify the **renderer** to use **alpha compositing** with a set background color.
###Code
renderer = PointsRenderer(
rasterizer=rasterizer,
# Pass in background_color to the alpha compositor, setting the background color
# to the 3 item tuple, representing rgb on a scale of 0 -> 1, in this case blue
compositor=AlphaCompositor(background_color=(0, 0, 1))
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
###Markdown
In this example we will create a **renderer** which uses an **orthographic camera** and applies **weighted compositing**.
###Code
# Initialize a camera.
R, T = look_at_view_transform(20, 10, 0)
cameras = FoVOrthographicCameras(device=device, R=R, T=T, znear=0.01)
# Define the settings for rasterization and shading. Here we set the output image to be of size
# 512x512. As we are rendering images for visualization purposes only we will set points_per_pixel=10
# and radius=0.003. Refer to rasterize_points.py for explanations of these parameters.
raster_settings = PointsRasterizationSettings(
image_size=512,
radius = 0.003,
points_per_pixel = 10
)
# Create a points renderer by compositing points using a weighted compositor (3D points are
# weighted according to their distance to a pixel and accumulated using a weighted sum)
renderer = PointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
compositor=NormWeightedCompositor()
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
###Markdown
We will now modify the **renderer** to use **weighted compositing** with a set background color.
###Code
renderer = PointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
# Pass in background_color to the norm weighted compositor, setting the background color
# to the 3 item tuple, representing rgb on a scale of 0 -> 1, in this case red
compositor=NormWeightedCompositor(background_color=(1,0,0))
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
###Markdown
Using the pulsar backend

Switching to the pulsar backend is easy! The pulsar backend has a compositor built-in, so the `compositor` argument is not required when creating it (a warning will be displayed if you provide it nevertheless). It pre-allocates memory on the rendering device, which is why it needs `n_channels` at construction time.

All parameters for the renderer forward function are batch-wise (in this example, `gamma`), except the background color, and you have to provide as many values as you have examples in your batch. The background color is optional and by default set to all zeros. You can find a detailed explanation of how gamma influences the rendering function in the paper [Fast Differentiable Raycasting for Neural Rendering using Sphere-based Representations](https://arxiv.org/pdf/2004.07484.pdf).

You can also use the `native` backend for the pulsar backend, which already provides access to point opacity. The native backend can be imported from `pytorch3d.renderer.points.pulsar`; you can find examples for this in the folder `docs/examples`.
###Code
renderer = PulsarPointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
n_channels=4
).to(device)
images = renderer(point_cloud, gamma=(1e-4,),
bg_col=torch.tensor([0.0, 1.0, 0.0, 1.0], dtype=torch.float32, device=device))
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
###Markdown
View pointclouds in Plotly figures

Here we use the PyTorch3D function `plot_scene` to render the pointcloud in a Plotly figure. `plot_scene` returns a plotly figure with trace and subplots defined by the input.
###Code
plot_scene({
"Pointcloud": {
"person": point_cloud
}
})
###Output
_____no_output_____
###Markdown
We will now render a batch of pointclouds. The first pointcloud is the same as above, and the second is all-black and offset by 2 in all dimensions so we can see them on the same plot.
###Code
point_cloud_batch = Pointclouds(points=[verts, verts + 2], features=[rgb, torch.zeros_like(rgb)])
# render both in the same plot in different traces
fig = plot_scene({
"Pointcloud": {
"person": point_cloud_batch[0],
"person2": point_cloud_batch[1]
}
})
fig.show()
# render both in the same plot in one trace
fig = plot_scene({
"Pointcloud": {
"2 people": point_cloud_batch
}
})
fig.show()
###Output
_____no_output_____
###Markdown
For batches, we can also use `plot_batch_individually` to avoid constructing the scene dictionary ourselves.
###Code
# render both in 1 row in different subplots
fig2 = plot_batch_individually(point_cloud_batch, ncols=2)
fig2.show()
# modify the plotly figure height and width
fig2.update_layout(height=500, width=500)
fig2.show()
###Output
_____no_output_____
###Markdown
We can also modify the axis arguments and axis backgrounds for either function, and title our plots in `plot_batch_individually`.
###Code
fig3 = plot_batch_individually(
point_cloud_batch,
xaxis={"backgroundcolor":"rgb(200, 200, 230)"},
yaxis={"backgroundcolor":"rgb(230, 200, 200)"},
zaxis={"backgroundcolor":"rgb(200, 230, 200)"},
subplot_titles=["Pointcloud1", "Pointcloud2"], # this should have a title for each subplot, titles can be ""
axis_args=AxisArgs(showgrid=True))
fig3.show()
###Output
_____no_output_____
###Markdown
Render a colored point cloud

This tutorial shows how to:
- set up a renderer
- render the point cloud
- vary the rendering settings such as compositing and camera position

Import modules

If `torch`, `torchvision` and `pytorch3d` are not installed, run the following cell:
###Code
!pip install torch torchvision
import sys
import torch
if torch.__version__=='1.6.0+cu101' and sys.platform.startswith('linux'):
!pip install pytorch3d
else:
!pip install 'git+https://github.com/facebookresearch/pytorch3d.git@stable'
import os
import torch
import torch.nn.functional as F
import matplotlib.pyplot as plt
from skimage.io import imread
# Util function for loading point clouds
import numpy as np
# Data structures and functions for rendering
from pytorch3d.structures import Pointclouds
from pytorch3d.vis.plotly_vis import AxisArgs, plot_batch_individually, plot_scene
from pytorch3d.renderer import (
look_at_view_transform,
FoVOrthographicCameras,
PointsRasterizationSettings,
PointsRenderer,
PulsarPointsRenderer,
PointsRasterizer,
AlphaCompositor,
NormWeightedCompositor
)
###Output
_____no_output_____
###Markdown
Load a point cloud and corresponding colors

Load and create a **Point Cloud** object. **Pointclouds** is a unique data structure provided in PyTorch3D for working with batches of point clouds of different sizes.

If running this notebook using **Google Colab**, run the following cell to fetch the pointcloud data and save it at the path `data/PittsburghBridge`; if running locally, the data is already available at the correct path.
###Code
!mkdir -p data/PittsburghBridge
!wget -P data/PittsburghBridge https://dl.fbaipublicfiles.com/pytorch3d/data/PittsburghBridge/pointcloud.npz
# Setup
if torch.cuda.is_available():
device = torch.device("cuda:0")
torch.cuda.set_device(device)
else:
device = torch.device("cpu")
# Set paths
DATA_DIR = "./data"
obj_filename = os.path.join(DATA_DIR, "PittsburghBridge/pointcloud.npz")
# Load point cloud
pointcloud = np.load(obj_filename)
verts = torch.Tensor(pointcloud['verts']).to(device)
rgb = torch.Tensor(pointcloud['rgb']).to(device)
point_cloud = Pointclouds(points=[verts], features=[rgb])
###Output
_____no_output_____
###Markdown
Create a renderer

A renderer in PyTorch3D is composed of a **rasterizer** and a **shader** which each have a number of subcomponents such as a **camera** (orthographic/perspective). Here we initialize some of these components and use default values for the rest.

In this example we will first create a **renderer** which uses an **orthographic camera** and applies **alpha compositing**. Then we learn how to vary different components using the modular API.

[1] SynSin: End to end View Synthesis from a Single Image. Olivia Wiles, Georgia Gkioxari, Richard Szeliski, Justin Johnson. CVPR 2020.
###Code
# Initialize a camera.
R, T = look_at_view_transform(20, 10, 0)
cameras = FoVOrthographicCameras(device=device, R=R, T=T, znear=0.01)
# Define the settings for rasterization and shading. Here we set the output image to be of size
# 512x512. As we are rendering images for visualization purposes only we will set points_per_pixel=10
# and radius=0.003. Refer to rasterize_points.py for explanations of these parameters.
raster_settings = PointsRasterizationSettings(
image_size=512,
radius = 0.003,
points_per_pixel = 10
)
# Create a points renderer by compositing points using an alpha compositor (nearer points
# are weighted more heavily). See [1] for an explanation.
rasterizer = PointsRasterizer(cameras=cameras, raster_settings=raster_settings)
renderer = PointsRenderer(
rasterizer=rasterizer,
compositor=AlphaCompositor()
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.grid(False)
plt.axis("off");
###Output
_____no_output_____
###Markdown
We will now modify the **renderer** to use **alpha compositing** with a set background color.
###Code
renderer = PointsRenderer(
rasterizer=rasterizer,
# Pass in background_color to the alpha compositor, setting the background color
# to the 3 item tuple, representing rgb on a scale of 0 -> 1, in this case blue
compositor=AlphaCompositor(background_color=(0, 0, 1))
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.grid(False)
plt.axis("off");
###Output
_____no_output_____
###Markdown
In this example we will create a **renderer** which uses an **orthographic camera** and applies **weighted compositing**.
###Code
# Initialize a camera.
R, T = look_at_view_transform(20, 10, 0)
cameras = FoVOrthographicCameras(device=device, R=R, T=T, znear=0.01)
# Define the settings for rasterization and shading. Here we set the output image to be of size
# 512x512. As we are rendering images for visualization purposes only we will set points_per_pixel=10
# and radius=0.003. Refer to rasterize_points.py for explanations of these parameters.
raster_settings = PointsRasterizationSettings(
image_size=512,
radius = 0.003,
points_per_pixel = 10
)
# Create a points renderer by compositing points using a weighted compositor (3D points are
# weighted according to their distance to a pixel and accumulated using a weighted sum)
renderer = PointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
compositor=NormWeightedCompositor()
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.grid(False)
plt.axis("off");
###Output
_____no_output_____
###Markdown
We will now modify the **renderer** to use **weighted compositing** with a set background color.
###Code
renderer = PointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
# Pass in background_color to the norm weighted compositor, setting the background color
# to the 3 item tuple, representing rgb on a scale of 0 -> 1, in this case red
compositor=NormWeightedCompositor(background_color=(1,0,0))
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.grid(False)
plt.axis("off");
###Output
_____no_output_____
###Markdown
Using the pulsar backend

Switching to the pulsar backend is easy! The pulsar backend has a compositor built-in, so the `compositor` argument is not required when creating it (a warning will be displayed if you provide it nevertheless). It pre-allocates memory on the rendering device, which is why it needs `n_channels` at construction time.

All parameters for the renderer forward function are batch-wise (in this example, `gamma`), except the background color, and you have to provide as many values as you have examples in your batch. The background color is optional and by default set to all zeros. You can find a detailed explanation of how gamma influences the rendering function in the paper [Fast Differentiable Raycasting for Neural Rendering using Sphere-based Representations](https://arxiv.org/pdf/2004.07484.pdf).

You can also use the `native` backend for the pulsar backend, which already provides access to point opacity. The native backend can be imported from `pytorch3d.renderer.points.pulsar`; you can find examples for this in the folder `docs/examples`.
###Code
renderer = PulsarPointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
n_channels=4
).to(device)
images = renderer(point_cloud, gamma=(1e-4,),
bg_col=torch.tensor([0.0, 1.0, 0.0, 1.0], dtype=torch.float32, device=device))
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.grid(False)
plt.axis("off");
###Output
_____no_output_____
###Markdown
View pointclouds in Plotly figures

Here we use the PyTorch3D function `plot_scene` to render the pointcloud in a Plotly figure. `plot_scene` returns a plotly figure with trace and subplots defined by the input.
###Code
plot_scene({
"Pointcloud": {
"person": point_cloud
}
})
###Output
_____no_output_____
###Markdown
We will now render a batch of pointclouds. The first pointcloud is the same as above, and the second is all-black and offset by 2 in all dimensions so we can see them on the same plot.
###Code
point_cloud_batch = Pointclouds(points=[verts, verts + 2], features=[rgb, torch.zeros_like(rgb)])
# render both in the same plot in different traces
fig = plot_scene({
"Pointcloud": {
"person": point_cloud_batch[0],
"person2": point_cloud_batch[1]
}
})
fig.show()
# render both in the same plot in one trace
fig = plot_scene({
"Pointcloud": {
"2 people": point_cloud_batch
}
})
fig.show()
###Output
_____no_output_____
###Markdown
For batches, we can also use `plot_batch_individually` to avoid constructing the scene dictionary ourselves.
###Code
# render both in 1 row in different subplots
fig2 = plot_batch_individually(point_cloud_batch, ncols=2)
fig2.show()
# modify the plotly figure height and width
fig2.update_layout(height=500, width=500)
fig2.show()
###Output
_____no_output_____
###Markdown
We can also modify the axis arguments and axis backgrounds for either function, and title our plots in `plot_batch_individually`.
###Code
fig3 = plot_batch_individually(
point_cloud_batch,
xaxis={"backgroundcolor":"rgb(200, 200, 230)"},
yaxis={"backgroundcolor":"rgb(230, 200, 200)"},
zaxis={"backgroundcolor":"rgb(200, 230, 200)"},
subplot_titles=["Pointcloud1", "Pointcloud2"], # this should have a title for each subplot, titles can be ""
axis_args=AxisArgs(showgrid=True))
fig3.show()
###Output
_____no_output_____
###Markdown
Render a colored point cloud

This tutorial shows how to:
- set up a renderer
- render the point cloud
- vary the rendering settings such as compositing and camera position

Import modules

If `torch`, `torchvision` and `pytorch3d` are not installed, run the following cell:
###Code
!pip install torch torchvision
!pip install 'git+https://github.com/facebookresearch/pytorch3d.git'
import os
import torch
import torch.nn.functional as F
import matplotlib.pyplot as plt
from skimage.io import imread
# Util function for loading point clouds
import numpy as np
# Data structures and functions for rendering
from pytorch3d.structures import Pointclouds
from pytorch3d.renderer import (
look_at_view_transform,
OpenGLOrthographicCameras,
PointsRasterizationSettings,
PointsRenderer,
PointsRasterizer,
AlphaCompositor,
NormWeightedCompositor
)
###Output
_____no_output_____
###Markdown
Load a point cloud and corresponding colors

Load and create a **Point Cloud** object. **Pointclouds** is a unique data structure provided in PyTorch3D for working with batches of point clouds of different sizes.

If running this notebook using **Google Colab**, run the following cell to fetch the pointcloud data and save it at the path `data/PittsburghBridge`; if running locally, the data is already available at the correct path.
###Code
!mkdir -p data/PittsburghBridge
!wget -P data/PittsburghBridge https://dl.fbaipublicfiles.com/pytorch3d/data/PittsburghBridge/pointcloud.npz
# Setup
if torch.cuda.is_available():
device = torch.device("cuda:0")
torch.cuda.set_device(device)
else:
device = torch.device("cpu")
# Set paths
DATA_DIR = "./data"
obj_filename = os.path.join(DATA_DIR, "PittsburghBridge/pointcloud.npz")
# Load point cloud
pointcloud = np.load(obj_filename)
verts = torch.Tensor(pointcloud['verts']).to(device)
rgb = torch.Tensor(pointcloud['rgb']).to(device)
point_cloud = Pointclouds(points=[verts], features=[rgb])
###Output
_____no_output_____
###Markdown
Create a renderer

A renderer in PyTorch3D is composed of a **rasterizer** and a **shader** which each have a number of subcomponents such as a **camera** (orthographic/perspective). Here we initialize some of these components and use default values for the rest.

In this example we will first create a **renderer** which uses an **orthographic camera** and applies **alpha compositing**. Then we learn how to vary different components using the modular API.

[1] SynSin: End to end View Synthesis from a Single Image. Olivia Wiles, Georgia Gkioxari, Richard Szeliski, Justin Johnson. CVPR 2020.
###Code
# Initialize an OpenGL orthographic camera.
R, T = look_at_view_transform(20, 10, 0)
cameras = OpenGLOrthographicCameras(device=device, R=R, T=T, znear=0.01)
# Define the settings for rasterization and shading. Here we set the output image to be of size
# 512x512. As we are rendering images for visualization purposes only we will set points_per_pixel=10
# and radius=0.003. Refer to rasterize_points.py for explanations of these parameters.
raster_settings = PointsRasterizationSettings(
image_size=512,
radius = 0.003,
points_per_pixel = 10
)
# Create a points renderer by compositing points using an alpha compositor (nearer points
# are weighted more heavily). See [1] for an explanation.
renderer = PointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
compositor=AlphaCompositor()
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.grid(False)
plt.axis("off");
###Output
_____no_output_____
###Markdown
In this example we will create a **renderer** which uses an **orthographic camera** and applies **weighted compositing**.
###Code
# Initialize an OpenGL orthographic camera.
R, T = look_at_view_transform(20, 10, 0)
cameras = OpenGLOrthographicCameras(device=device, R=R, T=T, znear=0.01)
# Define the settings for rasterization and shading. Here we set the output image to be of size
# 512x512. As we are rendering images for visualization purposes only we will set points_per_pixel=10
# and radius=0.003. Refer to rasterize_points.py for explanations of these parameters.
raster_settings = PointsRasterizationSettings(
image_size=512,
radius = 0.003,
points_per_pixel = 10
)
# Create a points renderer by compositing points using a weighted compositor (3D points are
# weighted according to their distance to a pixel and accumulated using a weighted sum)
renderer = PointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
compositor=NormWeightedCompositor()
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.grid(False)
plt.axis("off");
###Output
_____no_output_____
###Markdown
Render a colored point cloud

This tutorial shows how to:
- set up a renderer
- render the point cloud
- vary the rendering settings such as compositing and camera position

Import modules

If `torch`, `torchvision` and `pytorch3d` are not installed, run the following cell:
###Code
!pip install torch torchvision
!pip install 'git+https://github.com/facebookresearch/pytorch3d.git'
import os
os.chdir('../..')
import torch
import torch.nn.functional as F
import matplotlib.pyplot as plt
from skimage.io import imread
# Util function for loading point clouds
import numpy as np
# Data structures and functions for rendering
from pytorch3d.structures import Pointclouds
# The plotly helpers and the pulsar renderer are used further down in this
# version of the notebook but were missing from its import list; add them here.
from pytorch3d.vis.plotly_vis import AxisArgs, plot_batch_individually, plot_scene
from pytorch3d.renderer import (
look_at_view_transform,
OpenGLOrthographicCameras,
PointsRasterizationSettings,
PointsRenderer,
PulsarPointsRenderer,
PointsRasterizer,
AlphaCompositor,
NormWeightedCompositor
)
###Output
_____no_output_____
###Markdown
Load a point cloud and corresponding colors

Load an `.npz` file and create a **Point Cloud** object. **Pointclouds** is a unique data structure provided in PyTorch3D for working with batches of point clouds of different sizes.

If running this notebook using **Google Colab**, run the following cell to fetch the pointcloud data and save it at the path `data/PittsburghBridge`; if running locally, the data is already available at the correct path.
###Code
!mkdir -p data/PittsburghBridge
!wget -P data/PittsburghBridge https://dl.fbaipublicfiles.com/pytorch3d/data/PittsburghBridge/pointcloud.npz
# Setup
device = torch.device("cuda:0")
torch.cuda.set_device(device)
# Set paths
DATA_DIR = "./data"
obj_filename = os.path.join(DATA_DIR, "PittsburghBridge/pointcloud.npz")
# Load point cloud
pointcloud = np.load(obj_filename)
verts = torch.Tensor(pointcloud['verts']).to(device)
rgb = torch.Tensor(pointcloud['rgb']).to(device)
point_cloud = Pointclouds(points=[verts], features=[rgb])
###Output
_____no_output_____
###Markdown
Create a renderer

A renderer in PyTorch3D is composed of a **rasterizer** and a **shader** which each have a number of subcomponents such as a **camera** (orthographic/perspective). Here we initialize some of these components and use default values for the rest.

In this example we will first create a **renderer** which uses an **orthographic camera** and applies **alpha compositing**. Then we learn how to vary different components using the modular API.

[1] SynSin: End to end View Synthesis from a Single Image. Olivia Wiles, Georgia Gkioxari, Richard Szeliski, Justin Johnson. CVPR 2020.
###Code
# Initialize an OpenGL orthographic camera.
R, T = look_at_view_transform(20, 10, 0)
cameras = OpenGLOrthographicCameras(device=device, R=R, T=T, znear=0.01)
# Define the settings for rasterization and shading. Here we set the output image to be of size
# 512x512. As we are rendering images for visualization purposes only we will set points_per_pixel=10
# and radius=0.003. Refer to rasterize_points.py for explanations of these parameters.
raster_settings = PointsRasterizationSettings(
image_size=512,
radius = 0.003,
points_per_pixel = 10
)
# Create a points renderer by compositing points using an alpha compositor (nearer points
# are weighted more heavily). See [1] for an explanation.
renderer = PointsRenderer(
rasterizer=PointsRasterizer(
cameras=cameras,
raster_settings=raster_settings
),
compositor=AlphaCompositor()
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.grid(False)
plt.axis("off")
###Output
_____no_output_____
###Markdown
In this example we will create a **renderer** which uses an **orthographic camera** and applies **weighted compositing**.
###Code
# Initialize an OpenGL orthographic camera.
R, T = look_at_view_transform(20, 10, 0)
cameras = OpenGLOrthographicCameras(device=device, R=R, T=T, znear=0.01)
# Define the settings for rasterization and shading. Here we set the output image to be of size
# 512x512. As we are rendering images for visualization purposes only we will set points_per_pixel=10
# and radius=0.003. Refer to rasterize_points.py for explanations of these parameters.
raster_settings = PointsRasterizationSettings(
image_size=512,
radius = 0.003,
points_per_pixel = 10
)
# Create a points renderer by compositing points using a weighted compositor (3D points are
# weighted according to their distance to a pixel and accumulated using a weighted sum)
renderer = PointsRenderer(
rasterizer=PointsRasterizer(
cameras=cameras,
raster_settings=raster_settings
),
compositor=NormWeightedCompositor()
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.grid(False)
plt.axis("off")
###Output
_____no_output_____
###Markdown
Using the pulsar backend

Switching to the pulsar backend is easy! The pulsar backend has a compositor built-in, so the `compositor` argument is not required when creating it (a warning will be displayed if you provide it nevertheless). It pre-allocates memory on the rendering device, which is why it needs `n_channels` at construction time.

All parameters for the renderer forward function are batch-wise (in this example, `gamma`), except the background color, and you have to provide as many values as you have examples in your batch. The background color is optional and by default set to all zeros. You can find a detailed explanation of how gamma influences the rendering function in the paper [Fast Differentiable Raycasting for Neural Rendering using Sphere-based Representations](https://arxiv.org/pdf/2004.07484.pdf).

You can also use the `native` backend for the pulsar backend, which already provides access to point opacity. The native backend can be imported from `pytorch3d.renderer.points.pulsar`; you can find examples for this in the folder `docs/examples`.
###Code
renderer = PulsarPointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
n_channels=4
).to(device)
images = renderer(point_cloud, gamma=(1e-4,),
bg_col=torch.tensor([0.0, 1.0, 0.0, 1.0], dtype=torch.float32, device=device))
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.grid(False)
plt.axis("off");
###Output
_____no_output_____
###Markdown
View pointclouds in Plotly figures

Here we use the PyTorch3D function `plot_scene` to render the pointcloud in a Plotly figure. `plot_scene` returns a plotly figure with trace and subplots defined by the input.
###Code
plot_scene({
"Pointcloud": {
"person": point_cloud
}
})
###Output
_____no_output_____
###Markdown
We will now render a batch of pointclouds. The first pointcloud is the same as above, and the second is all-black and offset by 2 in all dimensions so we can see them on the same plot.
###Code
point_cloud_batch = Pointclouds(points=[verts, verts + 2], features=[rgb, torch.zeros_like(rgb)])
# render both in the same plot in different traces
fig = plot_scene({
"Pointcloud": {
"person": point_cloud_batch[0],
"person2": point_cloud_batch[1]
}
})
fig.show()
# render both in the same plot in one trace
fig = plot_scene({
"Pointcloud": {
"2 people": point_cloud_batch
}
})
fig.show()
###Output
_____no_output_____
###Markdown
For batches, we can also use `plot_batch_individually` to avoid constructing the scene dictionary ourselves.
###Code
# render both in 1 row in different subplots
fig2 = plot_batch_individually(point_cloud_batch, ncols=2)
fig2.show()
# modify the plotly figure height and width
fig2.update_layout(height=500, width=500)
fig2.show()
###Output
_____no_output_____
###Markdown
We can also modify the axis arguments and axis backgrounds for either function, and title our plots in `plot_batch_individually`.
###Code
fig3 = plot_batch_individually(
point_cloud_batch,
xaxis={"backgroundcolor":"rgb(200, 200, 230)"},
yaxis={"backgroundcolor":"rgb(230, 200, 200)"},
zaxis={"backgroundcolor":"rgb(200, 230, 200)"},
subplot_titles=["Pointcloud1", "Pointcloud2"], # this should have a title for each subplot, titles can be ""
axis_args=AxisArgs(showgrid=True))
fig3.show()
###Output
_____no_output_____
###Markdown
Render a colored point cloud

This tutorial shows how to:
- set up a renderer
- render the point cloud
- vary the rendering settings such as compositing and camera position

Import modules

If `torch`, `torchvision` and `pytorch3d` are not installed, run the following cell:
###Code
!pip install torch torchvision
!pip install 'git+https://github.com/facebookresearch/pytorch3d.git'
import os
import torch
import torch.nn.functional as F
import matplotlib.pyplot as plt
from skimage.io import imread
# Util function for loading point clouds
import numpy as np
# Data structures and functions for rendering
from pytorch3d.structures import Pointclouds
from pytorch3d.renderer import (
look_at_view_transform,
OpenGLOrthographicCameras,
PointsRasterizationSettings,
PointsRenderer,
PointsRasterizer,
AlphaCompositor,
NormWeightedCompositor
)
###Output
_____no_output_____
###Markdown
Load a point cloud and corresponding colorsLoad and create a **Point Cloud** object. **Pointclouds** is a unique data structure provided in PyTorch3D for working with batches of point clouds of different sizes. If running this notebook using **Google Colab**, run the following cell to fetch the pointcloud data and save it at the path `data/PittsburghBridge`:If running locally, the data is already available at the correct path.
###Code
!mkdir -p data/PittsburghBridge
!wget -P data/PittsburghBridge https://dl.fbaipublicfiles.com/pytorch3d/data/PittsburghBridge/pointcloud.npz
# Setup
if torch.cuda.is_available():
device = torch.device("cuda:0")
torch.cuda.set_device(device)
else:
device = torch.device("cpu")
# Set paths
DATA_DIR = "./data"
obj_filename = os.path.join(DATA_DIR, "PittsburghBridge/pointcloud.npz")
# Load point cloud
pointcloud = np.load(obj_filename)
verts = torch.Tensor(pointcloud['verts']).to(device)
rgb = torch.Tensor(pointcloud['rgb']).to(device)
point_cloud = Pointclouds(points=[verts], features=[rgb])
###Output
_____no_output_____
###Markdown
Create a rendererA renderer in PyTorch3D is composed of a **rasterizer** and a **shader** which each have a number of subcomponents such as a **camera** (orthographic/perspective). Here we initialize some of these components and use default values for the rest.In this example we will first create a **renderer** which uses an **orthographic camera**, and applies **alpha compositing**. Then we learn how to vary different components using the modular API. [1] SynSin: End to end View Synthesis from a Single Image. Olivia Wiles, Georgia Gkioxari, Richard Szeliski, Justin Johnson. CVPR 2020.
###Code
# Initialize an OpenGL orthographic camera.
R, T = look_at_view_transform(20, 10, 0)
cameras = OpenGLOrthographicCameras(device=device, R=R, T=T, znear=0.01)
# Define the settings for rasterization and shading. Here we set the output image to be of size
# 512x512. As we are rendering images for visualization purposes only we will set points_per_pixel=10
# and radius=0.003. Refer to rasterize_points.py for explanations of these parameters.
raster_settings = PointsRasterizationSettings(
image_size=512,
radius = 0.003,
points_per_pixel = 10
)
# Create a points renderer by compositing points using an alpha compositor (nearer points
# are weighted more heavily). See [1] for an explanation.
renderer = PointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
compositor=AlphaCompositor(composite_params=None)
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.grid("off")
plt.axis("off");
###Output
_____no_output_____
###Markdown
In this example we will create a **renderer** which uses an **orthographic camera** and applies **weighted compositing**.
###Code
# Initialize an OpenGL orthographic camera.
R, T = look_at_view_transform(20, 10, 0)
cameras = OpenGLOrthographicCameras(device=device, R=R, T=T, znear=0.01)
# Define the settings for rasterization and shading. Here we set the output image to be of size
# 512x512. As we are rendering images for visualization purposes only we will set points_per_pixel=10
# and radius=0.003. Refer to rasterize_points.py for explanations of these parameters.
raster_settings = PointsRasterizationSettings(
image_size=512,
radius = 0.003,
points_per_pixel = 10
)
# Create a points renderer by compositing points using a weighted compositor (3D points are
# weighted according to their distance to a pixel and accumulated using a weighted sum)
renderer = PointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
compositor=NormWeightedCompositor(composite_params=None)
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.grid("off")
plt.axis("off");
###Output
_____no_output_____
###Markdown
Render a colored point cloudThis tutorial shows how to:- set up a renderer - render the point cloud - vary the rendering settings such as compositing and camera position Import modules If `torch`, `torchvision` and `pytorch3d` are not installed, run the following cell:
###Code
!pip install torch torchvision
!pip install 'git+https://github.com/facebookresearch/pytorch3d.git'
import os
os.chdir('../..')
import torch
import torch.nn.functional as F
import matplotlib.pyplot as plt
from skimage.io import imread
# Util function for loading point clouds
import numpy as np
# Data structures and functions for rendering
from pytorch3d.structures import Pointclouds
from pytorch3d.renderer import (
look_at_view_transform,
OpenGLOrthographicCameras,
PointsRasterizationSettings,
PointsRenderer,
PointsRasterizer,
AlphaCompositor,
NormWeightedCompositor
)
###Output
_____no_output_____
###Markdown
Load a point cloud and corresponding colorsLoad a `.npz` file and create a **Point Cloud** object. **Pointclouds** is a unique data structure provided in PyTorch3D for working with batches of point clouds of different sizes. If running this notebook using **Google Colab**, run the following cell to fetch the pointcloud data and save it at the path `data/PittsburghBridge`:If running locally, the data is already available at the correct path.
###Code
!mkdir -p data/PittsburghBridge
!wget -P data/PittsburghBridge https://dl.fbaipublicfiles.com/pytorch3d/data/PittsburghBridge/pointcloud.npz
# Setup
device = torch.device("cuda:0")
torch.cuda.set_device(device)
# Set paths
DATA_DIR = "./data"
obj_filename = os.path.join(DATA_DIR, "PittsburghBridge/pointcloud.npz")
# Load point cloud
pointcloud = np.load(obj_filename)
verts = torch.Tensor(pointcloud['verts']).to(device)
rgb = torch.Tensor(pointcloud['rgb']).to(device)
point_cloud = Pointclouds(points=[verts], features=[rgb])
###Output
_____no_output_____
###Markdown
Create a rendererA renderer in PyTorch3D is composed of a **rasterizer** and a **shader** which each have a number of subcomponents such as a **camera** (orthographic/perspective). Here we initialize some of these components and use default values for the rest.In this example we will first create a **renderer** which uses an **orthographic camera**, and applies **alpha compositing**. Then we learn how to vary different components using the modular API. [1] SynSin: End to end View Synthesis from a Single Image. Olivia Wiles, Georgia Gkioxari, Richard Szeliski, Justin Johnson. CVPR 2020.
###Code
# Initialize an OpenGL orthographic camera.
R, T = look_at_view_transform(20, 10, 0)
cameras = OpenGLOrthographicCameras(device=device, R=R, T=T, znear=0.01)
# Define the settings for rasterization and shading. Here we set the output image to be of size
# 512x512. As we are rendering images for visualization purposes only we will set points_per_pixel=10
# and radius=0.003. Refer to rasterize_points.py for explanations of these parameters.
raster_settings = PointsRasterizationSettings(
image_size=512,
radius = 0.003,
points_per_pixel = 10,
bin_size = None,
max_points_per_bin = None
)
# Create a points renderer by compositing points using an alpha compositor (nearer points
# are weighted more heavily). See [1] for an explanation.
renderer = PointsRenderer(
rasterizer=PointsRasterizer(
cameras=cameras,
raster_settings=raster_settings
),
compositor=AlphaCompositor(
device=device,
composite_params=None
)
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.grid("off")
plt.axis("off")
###Output
_____no_output_____
###Markdown
In this example we will create a **renderer** which uses an **orthographic camera** and applies **weighted compositing**.
###Code
# Initialize an OpenGL orthographic camera.
R, T = look_at_view_transform(20, 10, 0)
cameras = OpenGLOrthographicCameras(device=device, R=R, T=T, znear=0.01)
# Define the settings for rasterization and shading. Here we set the output image to be of size
# 512x512. As we are rendering images for visualization purposes only we will set points_per_pixel=10
# and radius=0.003. Refer to rasterize_points.py for explanations of these parameters.
raster_settings = PointsRasterizationSettings(
image_size=512,
radius = 0.003,
points_per_pixel = 10,
bin_size = None,
max_points_per_bin = None
)
# Create a points renderer by compositing points using a weighted compositor (3D points are
# weighted according to their distance to a pixel and accumulated using a weighted sum)
renderer = PointsRenderer(
rasterizer=PointsRasterizer(
cameras=cameras,
raster_settings=raster_settings
),
compositor=NormWeightedCompositor(
device=device,
composite_params=None
)
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.grid("off")
plt.axis("off")
###Output
_____no_output_____
###Markdown
Render a colored point cloudThis tutorial shows how to:- set up a renderer - render the point cloud - vary the rendering settings such as compositing and camera position Import modules Ensure `torch` and `torchvision` are installed. If `pytorch3d` is not installed, install it using the following cell:
###Code
import os
import sys
import torch
need_pytorch3d=False
try:
import pytorch3d
except ModuleNotFoundError:
need_pytorch3d=True
if need_pytorch3d:
if torch.__version__.startswith("1.10.") and sys.platform.startswith("linux"):
# We try to install PyTorch3D via a released wheel.
pyt_version_str=torch.__version__.split("+")[0].replace(".", "")
version_str="".join([
f"py3{sys.version_info.minor}_cu",
torch.version.cuda.replace(".",""),
f"_pyt{pyt_version_str}"
])
!pip install fvcore iopath
!pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html
else:
# We try to install PyTorch3D from source.
!curl -LO https://github.com/NVIDIA/cub/archive/1.10.0.tar.gz
!tar xzf 1.10.0.tar.gz
os.environ["CUB_HOME"] = os.getcwd() + "/cub-1.10.0"
!pip install 'git+https://github.com/facebookresearch/pytorch3d.git@stable'
import os
import torch
import torch.nn.functional as F
import matplotlib.pyplot as plt
# Util function for loading point clouds|
import numpy as np
# Data structures and functions for rendering
from pytorch3d.structures import Pointclouds
from pytorch3d.vis.plotly_vis import AxisArgs, plot_batch_individually, plot_scene
from pytorch3d.renderer import (
look_at_view_transform,
FoVOrthographicCameras,
PointsRasterizationSettings,
PointsRenderer,
PulsarPointsRenderer,
PointsRasterizer,
AlphaCompositor,
NormWeightedCompositor
)
###Output
_____no_output_____
###Markdown
Load a point cloud and corresponding colorsLoad and create a **Point Cloud** object. **Pointclouds** is a unique data structure provided in PyTorch3D for working with batches of point clouds of different sizes. If running this notebook using **Google Colab**, run the following cell to fetch the pointcloud data and save it at the path `data/PittsburghBridge`:If running locally, the data is already available at the correct path.
###Code
!mkdir -p data/PittsburghBridge
!wget -P data/PittsburghBridge https://dl.fbaipublicfiles.com/pytorch3d/data/PittsburghBridge/pointcloud.npz
# Setup
if torch.cuda.is_available():
device = torch.device("cuda:0")
torch.cuda.set_device(device)
else:
device = torch.device("cpu")
# Set paths
DATA_DIR = "./data"
obj_filename = os.path.join(DATA_DIR, "PittsburghBridge/pointcloud.npz")
# Load point cloud
pointcloud = np.load(obj_filename)
verts = torch.Tensor(pointcloud['verts']).to(device)
rgb = torch.Tensor(pointcloud['rgb']).to(device)
point_cloud = Pointclouds(points=[verts], features=[rgb])
###Output
_____no_output_____
###Markdown
Create a rendererA renderer in PyTorch3D is composed of a **rasterizer** and a **shader** which each have a number of subcomponents such as a **camera** (orthographic/perspective). Here we initialize some of these components and use default values for the rest.In this example we will first create a **renderer** which uses an **orthographic camera**, and applies **alpha compositing**. Then we learn how to vary different components using the modular API. [1] SynSin: End to end View Synthesis from a Single Image. Olivia Wiles, Georgia Gkioxari, Richard Szeliski, Justin Johnson. CVPR 2020.
###Code
# Initialize a camera.
R, T = look_at_view_transform(20, 10, 0)
cameras = FoVOrthographicCameras(device=device, R=R, T=T, znear=0.01)
# Define the settings for rasterization and shading. Here we set the output image to be of size
# 512x512. As we are rendering images for visualization purposes only we will set points_per_pixel=10
# and radius=0.003. Refer to rasterize_points.py for explanations of these parameters.
raster_settings = PointsRasterizationSettings(
image_size=512,
radius = 0.003,
points_per_pixel = 10
)
# Create a points renderer by compositing points using an alpha compositor (nearer points
# are weighted more heavily). See [1] for an explanation.
rasterizer = PointsRasterizer(cameras=cameras, raster_settings=raster_settings)
renderer = PointsRenderer(
rasterizer=rasterizer,
compositor=AlphaCompositor()
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
###Markdown
We will now modify the **renderer** to use **alpha compositing** with a set background color.
###Code
renderer = PointsRenderer(
rasterizer=rasterizer,
# Pass in background_color to the alpha compositor, setting the background color
# to the 3 item tuple, representing rgb on a scale of 0 -> 1, in this case blue
compositor=AlphaCompositor(background_color=(0, 0, 1))
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
###Markdown
In this example we will create a **renderer** which uses an **orthographic camera** and applies **weighted compositing**.
###Code
# Initialize a camera.
R, T = look_at_view_transform(20, 10, 0)
cameras = FoVOrthographicCameras(device=device, R=R, T=T, znear=0.01)
# Define the settings for rasterization and shading. Here we set the output image to be of size
# 512x512. As we are rendering images for visualization purposes only we will set points_per_pixel=10
# and radius=0.003. Refer to rasterize_points.py for explanations of these parameters.
raster_settings = PointsRasterizationSettings(
image_size=512,
radius = 0.003,
points_per_pixel = 10
)
# Create a points renderer by compositing points using a weighted compositor (3D points are
# weighted according to their distance to a pixel and accumulated using a weighted sum)
renderer = PointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
compositor=NormWeightedCompositor()
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
###Markdown
We will now modify the **renderer** to use **weighted compositing** with a set background color.
###Code
renderer = PointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
# Pass in background_color to the norm weighted compositor, setting the background color
# to the 3 item tuple, representing rgb on a scale of 0 -> 1, in this case red
compositor=NormWeightedCompositor(background_color=(1,0,0))
)
images = renderer(point_cloud)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
###Markdown
Using the pulsar backendSwitching to the pulsar backend is easy! The pulsar backend has a compositor built-in, so the `compositor` argument is not required when creating it (a warning will be displayed if you provide it nevertheless). It pre-allocates memory on the rendering device, which is why it needs `n_channels` at construction time.All parameters for the renderer forward function are batch-wise except the background color; for each batch-wise parameter (in this example, `gamma`) you have to provide as many values as there are examples in your batch. The background color is optional and by default set to all zeros. You can find a detailed explanation of how gamma influences the rendering function in the paper [Fast Differentiable Raycasting for Neural Rendering using Sphere-based Representations](https://arxiv.org/pdf/2004.07484.pdf).You can also use pulsar's `native` backend, which already provides access to point opacity. The native backend can be imported from `pytorch3d.renderer.points.pulsar`; you can find examples for this in the folder `docs/examples`.
###Code
renderer = PulsarPointsRenderer(
rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
n_channels=4
).to(device)
images = renderer(point_cloud, gamma=(1e-4,),
bg_col=torch.tensor([0.0, 1.0, 0.0, 1.0], dtype=torch.float32, device=device))
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");
###Output
_____no_output_____
###Markdown
View pointclouds in Plotly figuresHere we use the PyTorch3D function `plot_scene` to render the pointcloud in a Plotly figure. `plot_scene` returns a plotly figure with trace and subplots defined by the input.
###Code
plot_scene({
"Pointcloud": {
"person": point_cloud
}
})
###Output
_____no_output_____
###Markdown
We will now render a batch of pointclouds. The first pointcloud is the same as above, and the second is all-black and offset by 2 in all dimensions so we can see them on the same plot.
###Code
point_cloud_batch = Pointclouds(points=[verts, verts + 2], features=[rgb, torch.zeros_like(rgb)])
# render both in the same plot in different traces
fig = plot_scene({
"Pointcloud": {
"person": point_cloud_batch[0],
"person2": point_cloud_batch[1]
}
})
fig.show()
# render both in the same plot in one trace
fig = plot_scene({
"Pointcloud": {
"2 people": point_cloud_batch
}
})
fig.show()
###Output
_____no_output_____
###Markdown
For batches, we can also use `plot_batch_individually` to avoid constructing the scene dictionary ourselves.
###Code
# render both in 1 row in different subplots
fig2 = plot_batch_individually(point_cloud_batch, ncols=2)
fig2.show()
# modify the plotly figure height and width
fig2.update_layout(height=500, width=500)
fig2.show()
###Output
_____no_output_____
###Markdown
We can also modify the axis arguments and axis backgrounds for either function, and title our plots in `plot_batch_individually`.
###Code
fig3 = plot_batch_individually(
point_cloud_batch,
xaxis={"backgroundcolor":"rgb(200, 200, 230)"},
yaxis={"backgroundcolor":"rgb(230, 200, 200)"},
zaxis={"backgroundcolor":"rgb(200, 230, 200)"},
subplot_titles=["Pointcloud1", "Pointcloud2"], # this should have a title for each subplot, titles can be ""
axis_args=AxisArgs(showgrid=True))
fig3.show()
###Output
_____no_output_____ |
ipynb/prepare_data.ipynb | ###Markdown
First run everything in the submodule `predicting-poverty-replication`. Then:- copy the `malawi_2016` folder into `data/LSMS` relative to the current directory- copy `predicting-poverty-replication/aggregated_feats.npy` to the current directory
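A quick optional check before running the cells below (a minimal sketch; the paths are the ones assumed later in this notebook):
###Code
# Optional pre-flight check: confirm the files copied in the steps above exist.
# These paths match the ones used in the cells below; adjust if yours differ.
import os
expected = ['../LSMS/input/malawi/IHS4 Consumption Aggregate.dta', '../LSMS/Nightlights/2013/F182013.v4c_web.stable_lights.avg_vis.tif', 'aggregated_feats.npy']
print({p: os.path.exists(p) for p in expected})
###Output
_____no_output_____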
###Code
import pandas as pd
import numpy as np
import os
PPP_2013 = 116.28
df = pd.read_stata('../LSMS/input/malawi/IHS4 Consumption Aggregate.dta')
df['persons_in_household'] = (df['rexpagg']/df['rexpaggpc']).astype(int)
df['annual_consumption_hh'] = df['rexpagg']
df['annual_consumption_hh'] /= PPP_2013 # accounting for purchasing power parity
df['annual_phone_consumption_hh'] = df['rexp_cat083']
df['annual_phone_consumption_hh'] = df['annual_phone_consumption_hh']/PPP_2013
df = df[['case_id', 'annual_consumption_hh', 'annual_phone_consumption_hh', 'persons_in_household']] # grab these columns
df_geo = pd.read_stata('../LSMS/input/malawi/HouseholdGeovariables_stata11/HouseholdGeovariablesIHS4.dta')
df_cords = df_geo[['case_id', 'HHID', 'lat_modified', 'lon_modified']]
df_cords.rename(columns={'lat_modified': 'lat', 'lon_modified': 'lon'}, inplace=True)
df_hhf = pd.read_stata('../LSMS/input/malawi/HH_MOD_F.dta')
df_hhf = df_hhf[['case_id', 'HHID', 'hh_f34', 'hh_f35']]
df_hhf.rename(columns={'hh_f34': 'cellphones_ph', 'hh_f35': 'estimated_annual_phone_cost_ph'}, inplace=True)
df = pd.merge(df, df_cords[['case_id', 'HHID']], on='case_id')
df_combined = pd.merge(df, df_cords, on=['case_id', 'HHID'])
df_combined = pd.merge(df_combined, df_hhf, on=['case_id', 'HHID'])
df_combined.shape
df_combined.head()
df_combined['persons_in_household'].isna().sum()
df_stats = df_combined.copy()
data_cols = ['annual_consumption_hh', 'annual_phone_consumption_hh', 'cellphones_ph', 'estimated_annual_phone_cost_ph']
for c in data_cols:
df_stats[c + '_na'] = df_stats[c].isna()
df_stats
to_grab = ['lat', 'lon'] + [c + '_na' for c in data_cols]
clust_nas = df_stats.groupby(['lat', 'lon']).mean().reset_index()[to_grab]
clust_counts = df_stats.groupby(['lat', 'lon']).count().reset_index()[['lat', 'lon', 'persons_in_household']].rename(columns={'persons_in_household': 'num_hh_surveyed'})
df_clusters = df_combined.groupby(['lat', 'lon']).sum().reset_index()
for c in data_cols:
# persons in household is now really all persons surveyed in cluster
df_clusters[c[:-3] + '_pc'] = df_clusters[c] / df_clusters['persons_in_household']
df_clusters.drop(data_cols, axis=1, inplace=True)
df_clusters.rename(columns={'persons_in_household': 'persons_surveyed'}, inplace=True)
df_clusters.head()
df_clusters.shape
df_clusters = pd.merge(df_clusters, clust_nas, on=['lat', 'lon'])
df_clusters = pd.merge(df_clusters, clust_counts, on=['lat', 'lon'])
df_clusters.head()
df_clusters.shape
rename = {c: 'cluster_' + c for c in df_clusters.columns}
df_clusters.rename(columns=rename, inplace=True)
df_clusters.head()
import geoio
filename = '../LSMS/Nightlights/2013/F182013.v4c_web.stable_lights.avg_vis.tif'
img = geoio.GeoImage(filename)
im_array = np.squeeze(img.get_data())
import math
def create_space(lat, lon):
# these are pulled from the paper to make the 10km^2 area
return lat - (180/math.pi)*(5000/6378137), lon - (180/math.pi)*(5000/6378137)/math.cos(lat), \
lat + (180/math.pi)*(5000/6378137), lon + (180/math.pi)*(5000/6378137)/math.cos(lat)
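# For each cluster, convert its ~10 km bounding box to pixel coordinates in
# the nightlights GeoTIFF and average the luminosity over that window (note:
# math.cos(lat) is applied to degrees here, as in the reference implementation).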
household_nightlights = []
for i,r in df_clusters.iterrows():
min_lat, min_lon, max_lat, max_lon = create_space(r.cluster_lat, r.cluster_lon)
xminPixel, yminPixel = img.proj_to_raster(min_lon, min_lat)
xmaxPixel, ymaxPixel = img.proj_to_raster(max_lon, max_lat)
xminPixel, xmaxPixel = min(xminPixel, xmaxPixel), max(xminPixel, xmaxPixel)
yminPixel, ymaxPixel = min(yminPixel, ymaxPixel), max(yminPixel, ymaxPixel)
xminPixel, yminPixel, xmaxPixel, ymaxPixel = int(xminPixel), int(yminPixel), int(xmaxPixel), int(ymaxPixel)
household_nightlights.append(im_array[yminPixel:ymaxPixel,xminPixel:xmaxPixel].mean())
df_clusters['cluster_nightlights'] = household_nightlights
df_clusters.head()
to_look = ['cluster_' + c[:-3] + '_pc' for c in data_cols] + ['cluster_nightlights']
df_clusters[to_look].corr()
df_clusters.to_csv('cluster_data.csv', index=False)
###Output
_____no_output_____
###Markdown
Prepare data for training and save them to disk.
###Code
from trackml.dataset import load_event
from trackml.randomize import shuffle_hits
from trackml.score import score_event
import os
import numpy as np
import pandas as pd
import glob
import math
from process_data import data_uID
path='input/train_1'
data = data_uID()
def check_files(path):
train = np.unique([p.split('-')[0] for p in sorted(glob.glob(path + '/**'))])
for event in train:
try:
hits, cells, particles, truth = load_event(event)
except:
print("reading event:", event)
data.load_training(path='input/train_1', eta_cut=3.5)
for key in data.event_list.keys():
post_fix = ".pkl.gz"
value = data.event_list[key]
value[0].to_pickle(key+"_filtered_hits"+post_fix)
value[1].to_pickle(key+"_filtered_particles"+post_fix)
total_particles = sum(x[1].shape[0] for x in data.event_list.values())
total_hits = sum(x[0].shape[0] for x in data.event_list.values())
print(total_particles)
print(total_hits)
from utils import get_features
import pickle
def ten_hits_data(path, out_name):
train = np.unique([p.split('-')[0] for p in sorted(glob.glob(path + '/**'))])
event_list = []
for event in train:
try:
hits, cells, particles, truth = load_event(event)
pIDs = particles[particles['nhits'] == 10]['particle_id']
hits_truth = pd.merge(hits, truth, on=['hit_id'])
hits_truth = get_features(hits_truth)
track_list = []
for pID in pIDs:
if pID == 0:
continue
this_track = hits_truth[hits_truth['particle_id'] == pID][['r', 'phi', 'z']].values
# this_track = hits_truth[(hits_truth['particle_id'] == pID) & (hits_truth['eta'] > 1) & (hits_truth['eta'] > -1)][['r', 'phi', 'z']].values
track_list.append(this_track)
event_list.append(track_list)
except:
print("reading event:", event)
with open(out_name, 'wb') as fp:
pickle.dump(event_list, fp)
return event_list
events = ten_hits_data('input/train_1', 'ten_hists.npy')
with open('ten_hists.npy', 'wb') as fp:
pickle.dump(events, fp)
events = ten_hits_data('input/train_1', 'ten_hists_eta_less_1.npy')
def training_data_with_eta_cut(train_dir='input/train_1', event_prefix="event000001000", eta_cut=3.2):
hits, cells, particles, truth = load_event(os.path.join(train_dir, event_prefix))
hits_features = get_features(hits)
# high_eta_hits = hits_features[(hits_features['eta'] > eta_cut) | (hits_features['eta'] < -1 * eta_cut)]
high_eta_hits = hits_features[(hits_features['eta'] > eta_cut) | (hits_features['eta'] < -1 * eta_cut)]
uID_for_higheta = make_uID(high_eta_hits)
high_eta_hits_uID = pd.merge(high_eta_hits, uID_for_higheta, on=['volume_id', 'layer_id', 'module_id'])
train_data_higheta = high_eta_hits_uID.merge(filter_truth(truth), on='hit_id')[['uID', 'particle_id']]
return train_data_higheta, uID_for_higheta.shape[0]
###Output
_____no_output_____ |
04_linear_models.ipynb | ###Markdown
Linear Models in Classification$$ \hat{\mathbf{y}} = f(\sum_{i=1}^{d}x_{i}w_{i} + b) = f(\mathbf{x^{T}w} + b)$$ Where:$d$ = number of features / dimensions$\mathbf{x}$ = vector containing input features$\mathbf{w}$ = vector of weights (or coefficients)$b$ = bias$f$ is some thresholding function Typically:$ f(x)= \begin{cases} 1,& \text{if } x > 0\\ 0, & x \leq 0\end{cases}$
###Code
X, y = utils.make_classification()
plt.figure(figsize=(8, 6))
plt.scatter(X[:, 0], X[:, 1], c=y)
lr = LogisticRegression()
lr.fit(X, y)
plt.figure(figsize=(8, 6))
utils.draw_decision_boundary(lr, X, y)
###Output
_____no_output_____
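###Markdown
To connect the fitted model back to the formula above, here is a minimal check (assuming the usual `numpy as np` import from the setup cells) that thresholding $\mathbf{x^{T}w} + b$ at zero reproduces `lr.predict`:
###Code
# Recover w and b from the fitted model and apply the thresholding function f
w = lr.coef_.ravel()
b = lr.intercept_[0]
y_manual = (X @ w + b > 0).astype(int)
print(np.all(y_manual == lr.predict(X)))  # expected: True
###Output
_____no_output_____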
###Markdown
Exercise: Try fitting the logistic regression model on the following dataset and plot the decision boundary
###Code
X, y = utils.make_moons(noise=0.01)
plt.figure(figsize=(8, 6))
plt.scatter(X[:, 0], X[:, 1], c=y)
# enter code here
###Output
_____no_output_____
###Markdown
Question: What went wrong?
###Code
from sklearn.svm import SVC
svc = SVC()
svc.fit(X, y)
utils.draw_decision_boundary(svc, X, y)
###Output
_____no_output_____
###Markdown
Exercise: Try to fit a SVC to the following dataset and plot the decision boundary
###Code
X, y = utils.make_circles(factor=0.1, noise=0.1)
plt.figure(figsize=(8, 6))
plt.scatter(X[:, 0], X[:, 1], c=y)
# enter code here
###Output
_____no_output_____
###Markdown
Linear Models in Regression $$ \hat{\mathbf{y}} = \sum_{i=1}^{d}\beta_{i}x_{i} + \epsilon = \mathbf{x^T\beta} + \epsilon$$ Where:$d$ = number of features / dimensions$\mathbf{x}$ = vector containing input features$\mathbf{\beta}$ = vector of coefficients$\epsilon$ = error, residual, or noise
###Code
x = np.arange(100)
y = np.linspace(0, 1, 100) + np.random.rand(100,) * 0.1
plt.figure(figsize=(8, 6))
plt.scatter(x, y)
plt.xlabel('$x$')
plt.ylabel('$y$')
lr = LinearRegression()
lr.fit(x.reshape(-1, 1), y)
y_hat = lr.predict(x.reshape(-1, 1))
plt.figure(figsize=(8, 6))
plt.scatter(x, y, label='original')
plt.plot(x, y_hat, 'g', label='predicted')
plt.legend()
###Output
_____no_output_____
###Markdown
Examining the coefficients of the trained model
###Code
lr.coef_
lr.intercept_
y_hat_unfit = x * lr.coef_ + lr.intercept_
plt.figure(figsize=(8, 6))
plt.plot(x, y_hat, 'go', x, y_hat_unfit, 'r-')
###Output
_____no_output_____
###Markdown
Exercise: Fit a linear model to the following dataset and find its slope and intercept.
###Code
X, y = utils.make_regression_exercise()
plt.figure(figsize=(8, 6))
plt.scatter(X, y)
plt.xticks([])
plt.yticks([])
# enter code here
# Examine the source code of `utils.make_regression_exercise` to check answer
###Output
_____no_output_____
###Markdown
Sparsity in Linear Models and Data Compression All regression is approximation, solving$$ Ax = b $$ Where:$$ A \in \mathbb{R}^{m \times n} $$ When $m > n$, $A$ is a tall matrix -> overdetermined system of equations. When $m < n$, $A$ is a wide matrix -> underdetermined system of equations. An overdetermined system has no solution -> solve an approximation For example, find a solution that produces the least MSE, i.e.$$ \underset{x}{min}\|Ax - b\|^2 $$ An underdetermined system has infinitely many solutions -> impose a constraint on solutions For example, find the _sparsest_ solution Example: The Simplest Impossible Problem Which two numbers have the mean 3? Arithmetic mean as matrix multiplication:$ A = \begin{bmatrix}0.5 & 0.5 \\0 & 0\end{bmatrix}$$b = \begin{bmatrix}3\\0\end{bmatrix} $$x = \begin{bmatrix}x_{1}\\x_{2}\end{bmatrix}$ Then solve$Ax = b$
###Code
A = np.array([[0.5, 0.5], [0, 0]])
b = np.array([[3], [0]])
lr.fit(A, b) # Linear Regression
print(lr.coef_)
lasso = Lasso(alpha=0.0001)
lasso.fit(A, b)
print(lasso.coef_)
###Output
_____no_output_____
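###Markdown
Another standard constraint is the minimum-norm solution, which the pseudoinverse gives directly; a quick sketch (reusing `A` and `b` from the cell above) to contrast with the sparse answer:
###Code
# Minimum L2-norm solution: splits the total evenly rather than sparsely
x_min_norm = np.linalg.pinv(A) @ b
print(x_min_norm.ravel())  # approximately [3. 3.], versus the sparse Lasso solution above
###Output
_____no_output_____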
###Markdown
Example: DTMF - Linear Combination of Two Sinusoids![](dtmf.jpg)
###Code
Fs = 32768
duration = 0.25
t = np.linspace(0, duration, int(duration * Fs))
f1, f2 = 697, 1336
y1 = np.sin(2 * np.pi * f1 * t);
y2 = np.sin(2 * np.pi * f2 * t);
y = (y1 + y2) / 2
plt.figure(figsize=(8, 6))
plt.plot(t, y)
from IPython.display import Audio
Audio(y, rate=Fs)
###Output
_____no_output_____
###Markdown
Repeat the signal ten times
###Code
t = np.linspace(0, duration * 10, int(duration * 10 * Fs))
f1, f2 = 697, 1336
y1 = np.sin(2 * np.pi * f1 * t);
y2 = np.sin(2 * np.pi * f2 * t);
y = (y1 + y2) / 2
Audio(y, rate=Fs)
###Output
_____no_output_____
###Markdown
Recreate the original signal for simplicity
###Code
t = np.linspace(0, duration, int(duration * Fs))
f1, f2 = 697, 1336
y1 = np.sin(2 * np.pi * f1 * t);
y2 = np.sin(2 * np.pi * f2 * t);
y = (y1 + y2) / 2
###Output
_____no_output_____
###Markdown
Randomly sampling the signal
###Code
N = y.shape[0] # length of the signal
M = 800 # number of samples
plt.figure(figsize=(10, 8))
plt.subplot(211), plt.plot(t, y)
plt.xlim(0, 0.125)
plt.title('Original Signal')
# Randomly sampling the test signal
k = np.random.randint(0, N, (M,))
k = np.sort(k) # making sure the random samples are monotonic
b = y[k]
plt.subplot(212), plt.plot(t, y, 'b', t[k],b,'r.')
plt.xlim(0, 0.125)
plt.title('Original Signal with Random Samples')
###Output
_____no_output_____
###Markdown
Discrete Cosine Coefficients as the data that "predict" the signal Or, which signal, when operated on by a DCT, will produce the sampled signal?
###Code
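# dct(np.eye(N), axis=0) builds the full N x N DCT matrix column by column;
# keeping only the sampled rows k gives the measurement matrix A, so a sparse
# x with A x = b is a sparse DCT spectrum that explains the random samples.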
D = dct(np.eye(N), axis=0)
A = D[k,:]
lasso = Lasso(alpha=0.001)
lasso.fit(A, b)
print(lasso.coef_)
print('Sparsity {} %'.format((lasso.coef_ != 0).sum() / lasso.coef_.shape[0] * 100))
recons = idct(lasso.coef_)
plt.figure(figsize=(10, 8))
plt.subplot(211), plt.plot(recons)
plt.title('Reconstructed signal')
plt.subplot(212), plt.plot(np.linspace(0, Fs / 2, N), lasso.coef_), plt.xlim(0, 2500)
plt.title('Sparse Coefficients')
###Output
_____no_output_____
###Markdown
Thresholding the coefficients:
###Code
coefs = lasso.coef_.copy()
coefs[np.abs(coefs) <= 0.1] = 0
recons_th = idct(coefs)
plt.figure(figsize=(10, 8))
plt.subplot(211), plt.plot(recons_th)
plt.title('Reconstructed signal')
plt.subplot(212), plt.plot(np.linspace(0, Fs /2, N), coefs), plt.xlim(0, 2500)
plt.title('Sparse Coefficients (Thresholded)')
Audio(np.tile(recons_th, 10), rate=Fs)
###Output
_____no_output_____ |
3Dtest_Patrick.ipynb | ###Markdown
3D scan feature detection Patrick Ruan [email protected] 1. [A. Given the scan file visualize the raw data in a 3D graph](A)2. [Automatically detect the separate sections of the scan and visualize them in separate 3D graphs](B)3. [Plot of the face of each section in a 2D graph](C)4. [Clean up the noise on each section so the 2D graphs are smoother](D)5. [*3 of the 5 sections have some dimples inset into the shape. Detect the radius of these dimples](E)6. [*Extrapolate the 2D graphs to extend an extra 1000 points farther than their current start and end](F) Read data from CSV file using pandas
###Code
import pandas as pd
import numpy as np
df=pd.read_csv('point-cloud.csv',header=None)
scanned_data=np.asarray(df)
scanned_data
###Output
_____no_output_____
###Markdown
A. Given the scan file visualize the raw data in a 3D graph.
###Code
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
X,Y=np.meshgrid(np.arange(0,scanned_data.shape[1],1),np.arange(0,scanned_data.shape[0],1))
fig = plt.figure()
ax = Axes3D(fig)
ax=fig.gca(projection='3d')
ax.scatter(X,Y,scanned_data)
plt.show()
###Output
_____no_output_____
###Markdown
The figure above shows the distribution of the data from the CSV file. It looks like values less than -30, and especially the sentinel value "-99.9999", can be ignored.
###Code
def plot_3d(data):
shape=data.shape
Y=np.arange(0,shape[0],1)
X=np.arange(0,shape[1],1)
X,Y=np.meshgrid(X,Y)
fig = plt.figure()
ax = Axes3D(fig)
ax=fig.gca(projection='3d')
ax.set_zlim3d(-30,10) # Limit the z-axis range
ax.scatter(X,Y,data)
plt.show()
###Output
_____no_output_____
###Markdown
Answer:
###Code
plot_3d(scanned_data)
###Output
_____no_output_____
###Markdown
----- B. Automatically detect the separate sections of the scan and visualize them in separate 3D graphs.
###Code
thred=-99
vis_data=np.zeros(scanned_data.shape)
temp_data=scanned_data+(np.max(scanned_data)-np.min(scanned_data))
vis_data=temp_data/np.max(temp_data)*255
# for i in range(scanned_data.shape[0]):
# for j in range(scanned_data.shape[1]):
# if scanned_data[i][j]>thred:
# vis_data[i][j]=1
# else:
# vis_data[i][j]=0
plt.figure(figsize=(15,10))
plt.imshow(vis_data.transpose(),cmap ='gray')
def get_boundary(segments,margin):
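# Every other interval between consecutive edge indices is a gap between
# sections; its midpoint becomes a cut point. `margin` controls whether the
# first interval is skipped (i.e. whether the scan starts inside a section).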
boundary=[]
mar=margin
for i in range(len(segments)-1):
if mar==True:
mar=False
else:
boundary.append((segments[i]+segments[i+1])//2)
mar=True
boundary.append(0)
boundary.append(800)
return sorted(boundary)
def vector_segmentor(vector,thred):
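# Binarize the column-sum profile against `thred`, record the indices where
# it flips (section edges), then let get_boundary place cuts in the gaps.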
segments=[]
segments.append(0)
for i in range(vector.size):
if vector[i]<thred:
vector[i]=0
else:
vector[i]=1
for i in range(vector.size-1):
if vector[i]!=vector[i+1]:
segments.append(i+1)
segments.append(800)
return get_boundary(segments,margin=True)
boundary=vector_segmentor(np.sum(scanned_data,axis=0),-30*5000)
boundary
def segmentor(data):
sections={}
boundary=vector_segmentor(np.sum(data,axis=0),-30*5000)
for i in range(len(boundary)-1):
sections[i]=scanned_data[:,boundary[i]:boundary[i+1]]
return sections
original_data_sections=segmentor(scanned_data)
original_data_sections
###Output
_____no_output_____
###Markdown
Answer:
###Code
plot_3d(original_data_sections[0]) # The first section
plot_3d(original_data_sections[1]) #The second section
plot_3d(original_data_sections[2]) #The third section
plot_3d(original_data_sections[3]) #The fourth section
plot_3d(original_data_sections[4]) #The fifth section
###Output
_____no_output_____
###Markdown
------- C. Plot of the face of each section in a 2D graph.
###Code
def plotface(data,title):
shape=data.shape
Y=np.arange(0,shape[0],1)
X=np.arange(0,shape[1],1)
plt.figure(figsize=(10,3))
fig=plt.subplot(1, 1, 1)
fig.set_title(title)
for i in range(data.shape[1]):
plt.scatter(Y,data[:,i],color='red')
plt.ylim(-25, 10)
# plt.subplot(1, 2, 2)
# for j in range (data.shape[0]):
# plt.scatter(X,data[j,:],color='red')
# plt.ylim(-25,10)
plt.show()
###Output
_____no_output_____
###Markdown
Answer: Projection in the x-axis direction of each section.
###Code
plotface(original_data_sections[0],"The first section 2D graph") # The first section
plotface(original_data_sections[1],"The second section 2D graph") # The second section
plotface(original_data_sections[2],"The third section 2D graph") # The third section
plotface(original_data_sections[3],"The fourth section 2D graph") # The fourth section
plotface(original_data_sections[4],"The fifth section 2D graph") # The fifth section
###Output
_____no_output_____
###Markdown
D. Clean up the noise on each section so the 2D graphs are smoother. As with denoising images, I tried several well-known filtering algorithms for this task. D.1 Mean filtering
###Code
def mean_filter(data,mask_size):
x_size=data.shape[0]
y_size=data.shape[1]
result_data=np.zeros((x_size,y_size))
pad_data = np.full((x_size+mask_size-1,y_size+mask_size-1),-99.9999)
pad_data[:x_size,:y_size]=data
for i in range(x_size):
for j in range(y_size):
sum=0
for x in range(mask_size):
for y in range(mask_size):
sum=sum+pad_data[i+x][j+y]
result_data[i][j]=sum/(mask_size*mask_size)
return result_data
mean_filtered_data=mean_filter(scanned_data,5)
mean_data_sections=segmentor(mean_filtered_data)
###Output
_____no_output_____
###Markdown
D.2 Median filtering
###Code
def median_filter(data,mask_size):
x_size=data.shape[0]
y_size=data.shape[1]
result_data=np.zeros((x_size,y_size))
pad_data = np.full((x_size+mask_size-1,y_size+mask_size-1),-99.9999)
pad_data[:x_size,:y_size]=data
for i in range(x_size):
for j in range(y_size):
mask_list=[]
for x in range(mask_size):
for y in range(mask_size):
mask_list.append(pad_data[i+x][j+y])
result_data[i][j]=sorted(mask_list)[(mask_size*mask_size)//2]
return result_data
median_filtered_data=median_filter(scanned_data,5)
median_data_sections=segmentor(median_filtered_data)
def compare(datas,titles):
plt.figure(figsize=(15,3))
for i in range(len(datas)):
shape=datas[i].shape
Y=np.arange(0,shape[0],1)
fig=plt.subplot(1,len(datas),i+1)
fig.set_title(titles[i])
for j in range(shape[1]):
plt.scatter(Y,datas[i][:,j],color='red')
plt.ylim(-25,10)
plt.show()
###Output
_____no_output_____
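###Markdown
As a cross-check, the same smoothing is available vectorized in SciPy (a sketch assuming `scipy` is installed; results will differ slightly at the borders because the loops above anchor the window at its top-left corner rather than centring it):
###Code
# SciPy equivalents of the hand-rolled filters; 'constant' padding with the
# sentinel value mimics the manual padding used above.
from scipy.ndimage import uniform_filter, median_filter as nd_median_filter
mean_sp = uniform_filter(scanned_data, size=5, mode='constant', cval=-99.9999)
median_sp = nd_median_filter(scanned_data, size=5, mode='constant', cval=-99.9999)
###Output
_____no_output_____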
###Markdown
Answer:
###Code
titles=['Original Data','Mean Filtered Data with Mask Size 5','Median Filtered Data with Mask Size 5']
compare([original_data_sections[0],mean_data_sections[0],median_data_sections[0]],titles) #The first section
compare([original_data_sections[1],mean_data_sections[1],median_data_sections[1]],titles) #The second section
compare([original_data_sections[2],mean_data_sections[2],median_data_sections[2]],titles) #The third section
compare([original_data_sections[3],mean_data_sections[3],median_data_sections[3]],titles) #The fourth section
compare([original_data_sections[4],mean_data_sections[4],median_data_sections[4]],titles) #The fifth section
###Output
_____no_output_____
###Markdown
E. 3 of the 5 sections have some dimples inset into the shape. Detect the radius of these dimples. From the figures above, it is clear that section 1, section 3 and section 5 have dimples inset into the shape. e.1 My idea of calculating the radius of dimples![Imgur](https://i.imgur.com/AczgpJB.jpg)
###Code
def cal_radius(A,C,thred):
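# A is a point on the dimple rim, C the dimple bottom. Walk a candidate
# centre straight up from C in steps of `thred` until the distances to A and
# to C stop converging; at that point the centre is (nearly) equidistant
# from rim and bottom, and the radius is the climb y - C[1].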
y=C[1]+thred
last_dis_diff=np.sqrt(np.sum(np.square(np.asarray(A-C))))
# print(last_dis_diff)
while True:
AO=np.sqrt(np.sum(np.square(np.asarray([C[0],y])-A)))
# print(AO)
CO=np.sqrt(np.sum(np.square(np.asarray([C[0],y])-C)))
# print(CO)
dis_diff=abs(CO-AO)
# print(dis_diff)
if(dis_diff<last_dis_diff):
y=y+thred
else:
break
last_dis_diff=dis_diff
return y-C[1]
#Example:
cal_radius(np.asarray([0,2]),np.asarray([3,0]),0.01)
###Output
_____no_output_____
###Markdown
e.2 Find the index range of the dimples in each section. Use the median value of each row to deal with this problem.
###Code
def vis_data(data,thred):
result=np.zeros((data.shape[0]))
for i in range(data.shape[0]):
data_list=[]
for j in range(data.shape[1]):
if data[i][j]>thred:
data_list.append(data[i][j])
if data_list!=[]:
result[i]=data_list[len(data_list)//2]
return result
## I also attempted to use the mean value of each row, but the median value seems to perform better!
def vis_data_mean(data,thred):
result=np.zeros((data.shape[0]))
for i in range(data.shape[0]):
sum=0
index=0
for j in range(data.shape[1]):
if data[i][j]>thred:
sum+=data[i][j]
index=index+1
if index!=0:
result[i]=sum/index
return result
def lets_see(data,p,title):
Y=np.arange(0,data.shape[0],1)
plt.figure(figsize=(10,3))
fig=plt.subplot(1, 1, 1)
fig.set_title(title)
for i in range(data.shape[1]):
plt.scatter(Y,data[:,i],color='red')
plt.ylim(-15, 10)
for i in p:
plt.plot(Y,i,color='b')
plt.show()
lets_see(scanned_data,[vis_data(scanned_data,-30)],"Use the median value of each row")
###Output
_____no_output_____
###Markdown
Section 1:
###Code
plt.figure(figsize=(15,3))
t=vis_data(original_data_sections[0],-30)
Y=np.arange(0,5000,1)
plt.subplot(1,2,1)
plt.plot(Y,t)
section1_dimple_range=t[930:1200]
plt.subplot(1,2,2)
plt.plot(np.arange(0,section1_dimple_range.shape[0],1),section1_dimple_range)
###Output
_____no_output_____
###Markdown
Section 3:
###Code
plt.figure(figsize=(15,3))
t=vis_data(original_data_sections[2],-30)
Y=np.arange(0,5000,1)
plt.subplot(1,2,1)
plt.plot(Y,t)
section3_dimple_range=t[910:1200]
plt.subplot(1,2,2)
plt.plot(np.arange(0,section3_dimple_range.shape[0],1),section3_dimple_range)
###Output
_____no_output_____
###Markdown
Section 5:
###Code
plt.figure(figsize=(15,3))
t=vis_data(original_data_sections[4],-30)
Y=np.arange(0,5000,1)
plt.subplot(1,2,1)
plt.plot(Y,t)
section5_dimple_range=t[940:1200]
plt.subplot(1,2,2)
plt.plot(np.arange(0,section5_dimple_range.shape[0],1),section5_dimple_range)
###Output
_____no_output_____
###Markdown
e.3 Find the coordinates of the local minimum point and the edge point.
###Code
def find_local_minima(vector):
local_minima=np.min(vector)
local_minima_index=(np.where(vector==local_minima)[0][0]+np.where(vector==local_minima)[0][-1])//2
edge=0
edge_index=0
for i in range(vector.size-1):
if vector[i]>vector[i+1]:
edge_index=i
edge=vector[i]
break
return np.asarray([edge_index,edge]),np.asarray([local_minima_index,local_minima])
section_1s=find_local_minima(section1_dimple_range)
section_3s=find_local_minima(section3_dimple_range)
section_5s=find_local_minima(section5_dimple_range)
###Output
_____no_output_____
###Markdown
e.4 Calculating radius & Answer PS: Assume the two dimples of each section have the same radius. The radius is calculated based on one of the dimples.
###Code
section1_radius=cal_radius(section_1s[0],section_1s[1],0.1)
plotface(original_data_sections[0][700:1300,:],"Radius of dimple in section 1 is about "+ str(round(section1_radius,3)))
print(section1_radius)
section3_radius=cal_radius(section_3s[0],section_3s[1],0.1)
plotface(original_data_sections[2][700:1300,:],"Radius of dimple in section 3 is about "+ str(round(section3_radius,3)))
print(section3_radius)
section5_radius=cal_radius(section_5s[0],section_5s[1],0.1)
plotface(original_data_sections[4][700:1300,:],"Radius of dimple in section 5 is about "+ str(round(section5_radius,3)))
print(section5_radius)
###Output
_____no_output_____
###Markdown
-------------- F. Extrapolate the 2D graphs to extend an extra 1000 points farther than their current start and end.
###Code
plotface(scanned_data,"2D graphs of the scanned data")
###Output
_____no_output_____
###Markdown
My idea of extrapolation From the 2D graph above, it can be predicted that both ends of the shape continue to extend downward. So I calculate the slope near each end of the data and use it to predict the extrapolated points.
###Code
def vis_max_min(data,thred,m):
result=np.zeros((data.shape[0]))
for i in range(data.shape[0]):
data_list=[]
for j in range(data.shape[1]):
if data[i][j]>thred:
data_list.append(data[i][j])
if data_list!=[]:
if m=="max":
result[i]=max(data_list)
elif m=="min":
result[i]=min(data_list)
else:
print ("ERROR!")
return 0
return result
part_data_right=scanned_data[3500:4000,:]
p_max_right=vis_max_min(part_data_right,-30,"max")
p_min_right=vis_max_min(part_data_right,-30,"min")
p_right=[p_max_right,p_min_right]
lets_see(part_data_right,p_right,"Right part of the shape")
part_data_left=scanned_data[400:950,:]
p_max_left=vis_max_min(part_data_left,-30,"max")
p_min_left=vis_max_min(part_data_left,-30,"min")
p_left=[p_max_left,p_min_left]
lets_see(part_data_left,p_left,"Left part of the shape")
###Output
_____no_output_____
###Markdown
F.1 Linear Regression F.1.1 Fit lines to both ends
###Code
def linearR(x,y):
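# Ordinary least squares for y = m*x + c: stack x with a column of ones as
# the design matrix and let np.linalg.lstsq return slope and intercept.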
X = np.vstack([x, np.ones(len(x))]).T
m, c = np.linalg.lstsq(X, y, rcond=None)[0]
return m,c
def showboth(x,p):
y_max=p[0]
y_min=p[1]
m_max, c_max = linearR(x,y_max)
m_min, c_min = linearR(x,y_min)
plt.plot(x, m_max*x + c_max, x, m_min*x + c_min, 'b')
plt.plot(x,p[0],x,p[1],color='r')
return [[m_max,c_max],[m_min,c_min]]
plt.figure(figsize=(15,3))
x_left = np.asarray([i for i in range(400,950)])
m_left_max, c_left_max = linearR(x_left,p_left[0])
m_left_min, c_left_min = linearR(x_left,p_left[1])
fig=plt.subplot(1,2,1)
fig.set_title("Linear Regression fit on left")
plt.plot(x_left, m_left_max*x_left + c_left_max, x_left, m_left_min*x_left + c_left_min, 'b')
plt.plot(x_left, p_max_left, x_left, p_min_left, color='r')
x_right = np.asarray([i for i in range(3500,4000)])
m_right_max, c_right_max = linearR(x_right,p_right[0])
m_right_min, c_right_min = linearR(x_right,p_right[1])
fig=plt.subplot(1,2,2)
fig.set_title("Linear Regression on right")
plt.plot(x_right, m_right_max*x_right + c_right_max, x_right, m_right_min*x_right + c_right_min, 'b')
plt.plot(x_right, p_right[0],x_right,p_right[1],color='r')
###Output
_____no_output_____
###Markdown
F.1.2 Extrapolation
###Code
def extrapolate_lr(old_data,ran,mc):
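# Pad `ran` rows on each side of the scan and fill each padded row with
# uniform noise drawn between the fitted min and max boundary lines
# (mc[side] = [[m_max, c_max], [m_min, c_min]]) evaluated at that row index.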
new_data=np.zeros((old_data.shape[0]+2*ran,old_data.shape[1]))
new_data[ran:ran+old_data.shape[0],:]=old_data
for j in range(ran):
new_data[j][:]=np.random.uniform(mc[0][1][0]*j+mc[0][1][1],mc[0][0][0]*j+mc[0][0][1],size=(1,new_data.shape[1]))
for j in range(old_data.shape[0]+ran,old_data.shape[0]+2*ran):
new_data[j][:]=np.random.uniform(mc[1][1][0]*j+mc[1][1][1],mc[1][0][0]*j+mc[1][0][1],size=(1,new_data.shape[1]))
return new_data
mc_left = [[m_left_max, c_left_max], [m_left_min, c_left_min]]
mc_right = [[m_right_max, c_right_max], [m_right_min, c_right_min]]
mc = [mc_left, mc_right]
new_data_lr=extrapolate_lr(scanned_data[400:4000,:],500,mc)
###Output
_____no_output_____
###Markdown
F.2 Polynomial Fit
###Code
plt.figure(figsize=(15,3))
fig=plt.subplot(1,2,1)
fig.set_title("Polynomials fit on left, p=2")
p2_left_max=np.poly1d(np.polyfit(x_left,p_max_left,2))
p2_left_min=np.poly1d(np.polyfit(x_left,p_min_left,2))
plt.plot(x_left, p2_left_max(x_left),x_left,p2_left_min(x_left),color='b')
plt.plot(x_left, p_max_left,x_left,p_min_left,color='r')
fig=plt.subplot(1,2,2)
fig.set_title("Polynomials fit on right, p=2")
p2_right_max=np.poly1d(np.polyfit(x_right,p_max_right,2))
p2_right_min=np.poly1d(np.polyfit(x_right,p_min_right,2))
plt.plot(x_right, p2_right_max(x_right),x_right,p2_right_min(x_right),color='b')
plt.plot(x_right, p_max_right,x_right,p_min_right,color='r')
def extrapolate_poly(old_data,ran,p2):
new_data=np.zeros((old_data.shape[0]+2*ran,old_data.shape[1]))
new_data[ran:ran+old_data.shape[0],:]=old_data
for j in range(ran):
new_data[j][:]=np.random.uniform(p2[0][1](j),p2[0][0](j),size=(1,new_data.shape[1]))
for j in range(old_data.shape[0]+ran,old_data.shape[0]+2*ran):
new_data[j][:]=np.random.uniform(p2[1][1](j),p2[1][0](j),size=(1,new_data.shape[1]))
return new_data
p2=[[p2_left_max,p2_left_min],[p2_right_max,p2_right_min]]
new_data_poly=extrapolate_poly(scanned_data[400:4000,:],500,p2)
###Output
_____no_output_____
###Markdown
Answer:
###Code
compare([new_data_lr,new_data_poly],["Extrapolated data using linear regression","Extrapolated data using polynomial fit"])
###Output
_____no_output_____ |
appyters/Enrichr_Consensus_Terms/EnrichrConsensus.ipynb | ###Markdown
Get Input
###Code
%%appyter code_exec
{% set input_gene_set = FileField(
name='input_gene_set',
label='Gene Set',
default='input.gmt',
section="PRIMARY",
examples={
'input.gmt': 'https://appyters.maayanlab.cloud/storage/EnrichrConsensus/sample_input/10input.gmt'
}
) %}
input_gene_set = {{ input_gene_set }}
%%appyter code_exec
transcription_libraries = {{ MultiChoiceField(name='transcription_libraries',
description='Select the Enrichr libraries you would like in your figure.',
label='Transcription',
default=[],
section = 'PRIMARY',
choices=[
'ARCHS4_TFs_Coexp',
'ChEA_2016',
'ENCODE_and_ChEA_Consensus_TFs_from_ChIP-X',
'ENCODE_Histone_Modifications_2015',
'ENCODE_TF_ChIP-seq_2015',
'Epigenomics_Roadmap_HM_ChIP-seq',
'Enrichr_Submissions_TF-Gene_Coocurrence',
'Genome_Browser_PWMs',
'lncHUB_lncRNA_Co-Expression',
'miRTarBase_2017',
'TargetScan_microRNA_2017',
'TF-LOF_Expression_from_GEO',
'TF_Perturbations_Followed_by_Expression',
'Transcription_Factor_PPIs',
'TRANSFAC_and_JASPAR_PWMs',
'TRRUST_Transcription_Factors_2019']) }}
pathways_libraries = {{ MultiChoiceField(name='pathways_libraries',
description='Select the Enrichr libraries you would like in your figure.',
label='Pathways',
default=[],
section = 'PRIMARY',
choices=[
'ARCHS4_Kinases_Coexp',
'BioCarta_2016',
'BioPlanet_2019',
'BioPlex_2017',
'CORUM',
'Elsevier_Pathway_Collection',
'HMS_LINCS_KinomeScan',
'HumanCyc_2016',
'huMAP',
'KEA_2015',
'KEGG_2019_Human',
'KEGG_2019_Mouse',
'Kinase_Perturbations_from_GEO_down',
'Kinase_Perturbations_from_GEO_up',
'L1000_Kinase_and_GPCR_Perturbations_down',
'L1000_Kinase_and_GPCR_Perturbations_up',
'NCI-Nature_2016',
'NURSA_Human_Endogenous_Complexome',
'Panther_2016',
'Phosphatase_Substrates_from_DEPOD',
'PPI_Hub_Proteins',
'Reactome_2016',
'SILAC_Phosphoproteomics',
'SubCell_BarCode',
'Virus-Host_PPI_P-HIPSTer_2020',
'WikiPathways_2019_Human',
'WikiPathways_2019_Mouse']) }}
ontologies_libraries = {{ MultiChoiceField(name='ontologies_libraries',
description='Select the Enrichr libraries you would like in your figure.',
label='Ontologies',
default=[],
section = 'PRIMARY',
choices=[
'GO_Biological_Process_2018',
'GO_Cellular_Component_2018',
'GO_Molecular_Function_2018',
'Human_Phenotype_Ontology',
'Jensen_COMPARTMENTS',
'Jensen_DISEASES',
'Jensen_TISSUES',
'MGI_Mammalian_Phenotype_Level_4_2019']) }}
diseases_drugs_libraries = {{ MultiChoiceField(name='diseases_drugs_libraries',
description='Select the Enrichr libraries you would like in your figure.',
label='Diseases/Drugs',
default=[],
section = 'PRIMARY',
choices=[
'Achilles_fitness_decrease',
'Achilles_fitness_increase',
'ARCHS4_IDG_Coexp',
'ClinVar_2019',
'dbGaP',
'DepMap_WG_CRISPR_Screens_Broad_CellLines_2019',
'DepMap_WG_CRISPR_Screens_Sanger_CellLines_2019',
'DisGeNET',
'DrugMatrix',
'DSigDB',
'GeneSigDB',
'GWAS_Catalog_2019',
'LINCS_L1000_Chem_Pert_down',
'LINCS_L1000_Chem_Pert_up',
'LINCS_L1000_Ligand_Perturbations_down',
'LINCS_L1000_Ligand_Perturbations_up',
'MSigDB_Computational',
'MSigDB_Oncogenic_Signatures',
'Old_CMAP_down',
'Old_CMAP_up',
'OMIM_Disease',
'OMIM_Expanded',
'PheWeb_2019',
'Rare_Diseases_AutoRIF_ARCHS4_Predictions',
'Rare_Diseases_AutoRIF_Gene_Lists',
'Rare_Diseases_GeneRIF_ARCHS4_Predictions',
'Rare_Diseases_GeneRIF_Gene_Lists',
'UK_Biobank_GWAS_v1',
'Virus_Perturbations_from_GEO_down',
'Virus_Perturbations_from_GEO_up',
'VirusMINT'])
}}
libraries = transcription_libraries + pathways_libraries + ontologies_libraries + diseases_drugs_libraries
enrichment = {}
with open(input_gene_set) as o:
for line in o:
unpacked = line.strip().split("\t\t")
if len(unpacked) != 2:
raise ValueError("GMT is not formatted properly, please consult the README of the appyter for proper formatting")
sigid, geneset_str = unpacked
geneset = geneset_str.split("\t")
enrichment[sigid] = {
"genes": [i.split(",")[0] for i in geneset]
}
num_sigs = len(enrichment)
input_sigs = pd.DataFrame.from_dict(enrichment, orient="index")
display(input_sigs.head(10))
display(Markdown("**Table %d** Input Signatures"%(table)), display_id="input_sigs")
table+=1
###Output
_____no_output_____
###Markdown
User defined parameters
###Code
%%appyter code_exec
alpha = {{FloatField(name='alpha', label='p-value cutoff', default=0.05, section='PRIMARY')}}
top_results = {{IntField(name='min_count', label='Top results', description="Number of top results to keep", default=25, section='PRIMARY')}}
width = {{FloatField(name='width', label='image width', default=15, section='PRIMARY')}}
height = {{FloatField(name='height', label='image height', default=15, section='PRIMARY')}}
###Output
_____no_output_____
###Markdown
Enrichment
###Code
failed_userlist = []
failed_enrich = {}
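# For each input signature: upload its gene list to Enrichr (addList), then
# run enrichment against every selected library (enrich), retrying each API
# call up to 5 times and recording any signature/library that still fails.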
for description, values in enrichment.items():
print("Querying %s"%(description), end="\r", flush=True)
genes = values["genes"]
for tries in range(5):
try:
userListId = addList(genes, description)
enrichment[description]["userListId"] = userListId
break
except Exception as e:
print(e)
time.sleep(0.5)
else:
failed_userlist.append(description)
continue
time.sleep(0.1)
enrichment[description]["libraries"] = {}
for library in libraries:
for tries in range(5):
try:
userlistId = enrichment[description]["userListId"]
results = enrich(userListId, library, alpha)
enrichment[description]["libraries"][library] = results
break
except Exception as e:
print(e)
time.sleep(0.5)
else:
if description not in failed_enrich:
failed_enrich[description] = []
failed_enrich[description].append(library)
continue
time.sleep(0.1)
if len(failed_userlist):
print("Failed to add %d list"%len(failed_userlist))
if len(failed_enrich):
print("Failed enrichment for %d gene sets"%len(failed_enrich))
for lib in libraries:
display(Markdown("## %s"%lib.replace("_"," ")), display_id="title_%s"%lib)
term_df,table = get_dataframe(enrichment, lib, table, display_id=lib)
consensus, table = get_consensus(term_df, lib, top_results, table, display_id=lib)
# Visualize
consensus_df = term_df.loc[consensus.index]
if (consensus_df.shape[1] > 0):
clustergram_filename = "%s_consensus_clust.tsv"%lib
clustergram_caption = "Clustergrammer for the top %d consensus terms for %s "%(top_results, lib.replace("_"," "))
clustergrammer(consensus_df,
clustergram_filename,
clustergrammer_url,
lib,
figure,
clustergram_caption,
)
figure+=1
results_count = len(consensus.index) if len(consensus.index) < top_results else top_results
heatmap(consensus_df, "%s_consensus.svg"%lib, lib, width, height)
display(Markdown("**Figure %d** Heatmap for the top %d consensus terms for %s. [Download figure](%s_consensus.svg)"%(figure, results_count, lib.replace("_"," "), lib)),
display_id="heatmap_caption_%s"%lib)
figure+=1
# if num_sigs <=15:
status = stackedBarPlot(consensus_df, "%s_consensus_barplot.svg"%lib, display_id=lib)
if status:
display(Markdown("**Figure %d** Stacked bar plot for the top %d consensus terms for **%s**. [Download figure](%s_consensus_barplot.svg)"%(figure, top_results, lib.replace("_"," "), lib)),
display_id="stacked_bar_caption_%s"%lib)
figure +=1
else:
print("No terms found")
###Output
_____no_output_____ |
Set/Sets.ipynb | ###Markdown
SETS
1. unordered
2. mutable
3. no duplicates
###Code
myset={1,2,3}
print(myset)
myset={1,2,3,1,2}
print(myset)
myset={1,2,3}
print(myset)
myset=set({"Hello"})
print(myset)
myset={}
print(myset)
print(type(myset))
# to create an empty set, use set() instead of {}
myset2=set({})
print(myset2)
print(type(myset2))
###Output
{}
<class 'dict'>
set()
<class 'set'>
###Markdown
Add Elements
###Code
myset=set({})
myset.add(1)
myset.add(2)
myset.add(3)
print(myset)
###Output
{1, 2, 3}
###Markdown
Remove Elements
remove()
###Code
myset=set({})
myset.add(1)
myset.add(2)
myset.add(3)
myset.remove(3)
print(myset)
###Output
{1, 2}
###Markdown
discard()
###Code
myset=set({})
myset.add(1)
myset.add(2)
myset.add(3)
a=myset.discard(4)   # discard() ignores missing elements and returns None; remove(4) would raise KeyError
print(myset)
myset=set({})
myset.add(1)
myset.add(2)
myset.add(3)
myset.discard(3)     # 3 is present, so it is removed
print(myset)
myset=set({})
myset.add(1)
myset.add(2)
myset.add(3)
myset.clear()
print(myset)
myset=set({})
myset.add(1)
myset.add(2)
myset.add(3)
print(myset.pop())
print(myset)
###Output
1
{2, 3}
###Markdown
Note: add() takes exactly one element - calling myset.add(1, 2) raises TypeError: add() takes exactly one argument (2 given). Sets also support iteration and membership tests:
###Code
myset=set({})
myset.add(1)
myset.add(2)
myset.add(3)
for i in myset:
print(i)
if 1 in myset:
print("Yes")
###Output
Yes
###Markdown
UNION
###Code
odds={1,3,5,7,9}
evens={0,2,4,6,8}
primes={2,3,5,7}
u=odds.union(evens)
print(u)
###Output
{0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
###Markdown
INTERSECTION
###Code
odds={1,3,5,7,9}
evens={0,2,4,6,8}
primes={2,3,5,7}
u=odds.intersection(evens)
print(u)
v=odds.intersection(primes)
print(v)
###Output
set()
{3, 5, 7}
###Markdown
DIFFERENCE
###Code
setA={1,2,3,4,5,6,7,8,9}
setB={1,2,3,10,11,12}
diff=setA.difference(setB)
print(diff)
diff2=setB.difference(setA)
print(diff2)
###Output
{4, 5, 6, 7, 8, 9}
{10, 11, 12}
###Markdown
SYMMETRIC DIFFERENCE
###Code
setA={1,2,3,4,5,6,7,8,9}
setB={1,2,3,10,11,12}
symdiff=setA.symmetric_difference(setB)
print(symdiff)
symdiff2=setB.symmetric_difference(setA)
print(symdiff2)
###Output
{4, 5, 6, 7, 8, 9, 10, 11, 12}
{4, 5, 6, 7, 8, 9, 10, 11, 12}
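###Markdown
The four operations above also have operator shorthands: | (union), & (intersection), - (difference) and ^ (symmetric difference):
###Code
odds={1,3,5,7,9}
evens={0,2,4,6,8}
primes={2,3,5,7}
print(odds | evens)    # union
print(odds & primes)   # intersection
print(odds - primes)   # difference
print(odds ^ primes)   # symmetric difference
###Output
_____no_output_____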
###Markdown
UPDATE
###Code
setA={1,2,3,4,5,6,7,8,9}
setB={1,2,3,10,11,12}
setA.update(setB)
print(setA)
print(setB)
###Output
{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}
{1, 2, 3, 10, 11, 12}
###Markdown
INTERSECTION UPDATE
###Code
setA={1,2,3,4,5,6,7,8,9}
setB={1,2,3,10,11,12}
setA.intersection_update(setB)
print(setA)
###Output
{1, 2, 3}
###Markdown
DIFFERENCE UPDATE
###Code
setA={1,2,3,4,5,6,7,8,9}
setB={1,2,3,10,11,12}
setA.difference_update(setB)
print(setA)
###Output
{4, 5, 6, 7, 8, 9}
###Markdown
SYMMETRIC DIFFERENCE UPDATE
###Code
setA={1,2,3,4,5,6,7,8,9}
setB={1,2,3,10,11,12}
setA.symmetric_difference_update(setB)
print(setA)
###Output
{4, 5, 6, 7, 8, 9, 10, 11, 12}
###Markdown
ISSUBSET
###Code
setA={1,2,3,4,5,6,7,8,9}
setB={1,2,3}
print(setA.issubset(setB))
print(setB.issubset(setA))
###Output
False
True
###Markdown
ISSUPERSET
If set A has all the elements of set B, then A is a superset of B.
###Code
setA={1,2,3,4,5,6,7,8,9}
setB={1,2,3}
print(setA.issuperset(setB))
print(setB.issuperset(setA))
###Output
True
False
###Markdown
ISDISJOINT
isdisjoint() returns False when the sets share any element, and True when they have none in common.
###Code
setA={1,2,3,4,5,6,7,8,9}
setB={1,2,3}
setC={7,8}
print(setA.isdisjoint(setB))
print(setB.isdisjoint(setA))
print(setB.isdisjoint(setC))
###Output
False
False
True
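###Markdown
Set comprehensions work like list comprehensions but build a set, so duplicate results collapse:
###Code
squares = {x * x for x in range(-3, 4)}
print(squares)   # (-3)**2 and 3**2 collapse into one element
###Output
_____no_output_____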
###Markdown
making a copy of a set - note that plain assignment is not a copy; both names refer to the same set object
###Code
setA ={1,2,3,4,5,6}
setB=setA
setB.add(7)
print(setB)
print(setA)
###Output
{1, 2, 3, 4, 5, 6, 7}
{1, 2, 3, 4, 5, 6, 7}
###Markdown
making copy of the set without changing the original set
###Code
setA ={1,2,3,4,5,6}
setB=setA.copy()
setB.add(7)
print(setB)
print(setA)
setA ={1,2,3,4,5,6}
setB=set(setA)
setB.add(7)
print(setB)
print(setA)
###Output
{1, 2, 3, 4, 5, 6, 7}
{1, 2, 3, 4, 5, 6}
###Markdown
frozenset() - immutable, so it can't be modified after creation
###Code
a = frozenset([1,2,3,4])
print(a)
#a.add(1)
#a.remove(2)
print(a)
#AttributeError: 'frozenset' object has no attribute 'add'
#AttributeError: 'frozenset' object has no attribute 'remove'
###Output
frozenset({1, 2, 3, 4})
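###Markdown
Because a frozenset is hashable, it can be used where a mutable set cannot - as a dictionary key or as an element of another set:
###Code
edges = {frozenset([1, 2]): "a-b", frozenset([2, 3]): "b-c"}
print(edges[frozenset([2, 1])])  # element order doesn't matter
###Output
_____no_output_____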
|
python/d2l-en/mxnet/chapter_natural-language-processing-pretraining/bert-pretraining.ipynb | ###Markdown
Pretraining BERT
:label:`sec_bert-pretraining`
With the BERT model implemented in :numref:`sec_bert` and the pretraining examples generated from the WikiText-2 dataset in :numref:`sec_bert-dataset`, we will pretrain BERT on the WikiText-2 dataset in this section.
###Code
from mxnet import autograd, gluon, init, np, npx
from d2l import mxnet as d2l
npx.set_np()
###Output
_____no_output_____
###Markdown
To start, we load the WikiText-2 dataset as minibatches of pretraining examples for masked language modeling and next sentence prediction. The batch size is 512 and the maximum length of a BERT input sequence is 64. Note that in the original BERT model, the maximum length is 512.
###Code
batch_size, max_len = 512, 64
train_iter, vocab = d2l.load_data_wiki(batch_size, max_len)
###Output
_____no_output_____
###Markdown
Pretraining BERT
The original BERT has two versions of different model sizes :cite:`Devlin.Chang.Lee.ea.2018`. The base model ($\text{BERT}_{\text{BASE}}$) uses 12 layers (transformer encoder blocks) with 768 hidden units (hidden size) and 12 self-attention heads. The large model ($\text{BERT}_{\text{LARGE}}$) uses 24 layers with 1024 hidden units and 16 self-attention heads. Notably, the former has 110 million parameters while the latter has 340 million parameters. For ease of demonstration, we define a small BERT, using 2 layers, 128 hidden units, and 2 self-attention heads.
###Code
net = d2l.BERTModel(len(vocab), num_hiddens=128, ffn_num_hiddens=256,
num_heads=2, num_layers=2, dropout=0.2)
devices = d2l.try_all_gpus()
net.initialize(init.Xavier(), ctx=devices)
loss = gluon.loss.SoftmaxCELoss()
###Output
_____no_output_____
###Markdown
Before defining the training loop, we define a helper function `_get_batch_loss_bert`. Given the shard of training examples, this function computes the loss for both the masked language modeling and next sentence prediction tasks. Note that the final loss of BERT pretraining is just the sum of both the masked language modeling loss and the next sentence prediction loss.
###Code
#@save
def _get_batch_loss_bert(net, loss, vocab_size, tokens_X_shards,
segments_X_shards, valid_lens_x_shards,
pred_positions_X_shards, mlm_weights_X_shards,
mlm_Y_shards, nsp_y_shards):
mlm_ls, nsp_ls, ls = [], [], []
for (tokens_X_shard, segments_X_shard, valid_lens_x_shard,
pred_positions_X_shard, mlm_weights_X_shard, mlm_Y_shard,
nsp_y_shard) in zip(
tokens_X_shards, segments_X_shards, valid_lens_x_shards,
pred_positions_X_shards, mlm_weights_X_shards, mlm_Y_shards,
nsp_y_shards):
# Forward pass
_, mlm_Y_hat, nsp_Y_hat = net(
tokens_X_shard, segments_X_shard, valid_lens_x_shard.reshape(-1),
pred_positions_X_shard)
# Compute masked language model loss
mlm_l = loss(
mlm_Y_hat.reshape((-1, vocab_size)), mlm_Y_shard.reshape(-1),
mlm_weights_X_shard.reshape((-1, 1)))
mlm_l = mlm_l.sum() / (mlm_weights_X_shard.sum() + 1e-8)
# Compute next sentence prediction loss
nsp_l = loss(nsp_Y_hat, nsp_y_shard)
nsp_l = nsp_l.mean()
mlm_ls.append(mlm_l)
nsp_ls.append(nsp_l)
ls.append(mlm_l + nsp_l)
npx.waitall()
return mlm_ls, nsp_ls, ls
###Output
_____no_output_____
###Markdown
Invoking the helper function defined above, the following `train_bert` function defines the procedure to pretrain BERT (`net`) on the WikiText-2 (`train_iter`) dataset. Training BERT can take a very long time. Instead of specifying the number of epochs for training as in the `train_ch13` function (see :numref:`sec_image_augmentation`), the input `num_steps` of the following function specifies the number of iteration steps for training.
###Code
def train_bert(train_iter, net, loss, vocab_size, devices, num_steps):
trainer = gluon.Trainer(net.collect_params(), 'adam',
{'learning_rate': 0.01})
step, timer = 0, d2l.Timer()
animator = d2l.Animator(xlabel='step', ylabel='loss',
xlim=[1, num_steps], legend=['mlm', 'nsp'])
# Sum of masked language modeling losses, sum of next sentence prediction
# losses, no. of sentence pairs, count
metric = d2l.Accumulator(4)
num_steps_reached = False
while step < num_steps and not num_steps_reached:
for batch in train_iter:
(tokens_X_shards, segments_X_shards, valid_lens_x_shards,
pred_positions_X_shards, mlm_weights_X_shards,
mlm_Y_shards, nsp_y_shards) = [gluon.utils.split_and_load(
elem, devices, even_split=False) for elem in batch]
timer.start()
with autograd.record():
mlm_ls, nsp_ls, ls = _get_batch_loss_bert(
net, loss, vocab_size, tokens_X_shards, segments_X_shards,
valid_lens_x_shards, pred_positions_X_shards,
mlm_weights_X_shards, mlm_Y_shards, nsp_y_shards)
for l in ls:
l.backward()
trainer.step(1)
mlm_l_mean = sum([float(l) for l in mlm_ls]) / len(mlm_ls)
nsp_l_mean = sum([float(l) for l in nsp_ls]) / len(nsp_ls)
metric.add(mlm_l_mean, nsp_l_mean, batch[0].shape[0], 1)
timer.stop()
animator.add(step + 1,
(metric[0] / metric[3], metric[1] / metric[3]))
step += 1
if step == num_steps:
num_steps_reached = True
break
print(f'MLM loss {metric[0] / metric[3]:.3f}, '
f'NSP loss {metric[1] / metric[3]:.3f}')
print(f'{metric[2] / timer.sum():.1f} sentence pairs/sec on '
f'{str(devices)}')
###Output
_____no_output_____
###Markdown
We can plot both the masked language modeling loss and the next sentence prediction loss during BERT pretraining.
###Code
train_bert(train_iter, net, loss, len(vocab), devices, 50)
###Output
MLM loss 7.329, NSP loss 0.827
4332.3 sentence pairs/sec on [gpu(0), gpu(1)]
###Markdown
Representing Text with BERT
After pretraining BERT, we can use it to represent single text, text pairs, or any token in them. The following function returns the BERT (`net`) representations for all tokens in `tokens_a` and `tokens_b`.
###Code
def get_bert_encoding(net, tokens_a, tokens_b=None):
tokens, segments = d2l.get_tokens_and_segments(tokens_a, tokens_b)
token_ids = np.expand_dims(np.array(vocab[tokens], ctx=devices[0]),
axis=0)
segments = np.expand_dims(np.array(segments, ctx=devices[0]), axis=0)
valid_len = np.expand_dims(np.array(len(tokens), ctx=devices[0]), axis=0)
encoded_X, _, _ = net(token_ids, segments, valid_len)
return encoded_X
###Output
_____no_output_____
###Markdown
Consider the sentence "a crane is flying". Recall the input representation of BERT as discussed in :numref:`subsec_bert_input_rep`. After inserting special tokens “<cls>” (used for classification) and “<sep>” (used for separation), the BERT input sequence has a length of six. Since zero is the index of the “<cls>” token, `encoded_text[:, 0, :]` is the BERT representation of the entire input sentence. To evaluate the polysemy token "crane", we also print out the first three elements of the BERT representation of the token.
###Code
tokens_a = ['a', 'crane', 'is', 'flying']
encoded_text = get_bert_encoding(net, tokens_a)
# Tokens: '<cls>', 'a', 'crane', 'is', 'flying', '<sep>'
encoded_text_cls = encoded_text[:, 0, :]
encoded_text_crane = encoded_text[:, 2, :]
encoded_text.shape, encoded_text_cls.shape, encoded_text_crane[0][:3]
###Output
_____no_output_____
###Markdown
Now consider the sentence pair "a crane driver came" and "he just left". Similarly, `encoded_pair[:, 0, :]` is the encoded result of the entire sentence pair from the pretrained BERT. Note that the first three elements of the polysemy token "crane" are different from those when the context is different. This supports that BERT representations are context-sensitive.
###Code
tokens_a, tokens_b = ['a', 'crane', 'driver', 'came'], ['he', 'just', 'left']
encoded_pair = get_bert_encoding(net, tokens_a, tokens_b)
# Tokens: '<cls>', 'a', 'crane', 'driver', 'came', '<sep>', 'he', 'just',
# 'left', '<sep>'
encoded_pair_cls = encoded_pair[:, 0, :]
encoded_pair_crane = encoded_pair[:, 2, :]
encoded_pair.shape, encoded_pair_cls.shape, encoded_pair_crane[0][:3]
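# How different is "crane" across the two contexts? A quick cosine-similarity
# check (a sketch; it reuses encoded_text_crane from the previous cell)
a, b = encoded_text_crane[0], encoded_pair_crane[0]
cos = float(np.dot(a, b) / (np.sqrt(np.dot(a, a)) * np.sqrt(np.dot(b, b))))
print('cosine similarity of "crane" in the two contexts:', cos)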
###Output
_____no_output_____ |
analysis/quick_data_look.ipynb | ###Markdown
Imports
###Code
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
%matplotlib inline
from scipy.io import wavfile
import warnings
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
Loading the data
###Code
# I'm ignoring sampling rate for the moment (the first arg here)
_, c2 = wavfile.read('../sounds/wav/cello_pluck/single/c2.wav')
_, a3 = wavfile.read('../sounds/wav/cello_pluck/single/a3.wav')
_, a3_d3 = wavfile.read('../sounds/wav/cello_pluck/multi/a3_d3.wav')
_, whis = wavfile.read('../sounds/wav/whistle.wav')
# splitting the wav file into its two channels
c2_chan1, c2_chan2 = zip(*c2)
a3_chan1, a3_chan2 = zip(*a3)
a3_d3_chan1, a3_d3_chan2 = zip(*a3_d3)
whis_chan1, whis_chan2 = zip(*whis)
###Output
_____no_output_____
###Markdown
Quick plot of the different notes
Two somewhat surprising things:
- The cello sound is more distorted than I would expect
- A human whistle has really high frequency
###Code
# creating the subplots
fig, axarr = plt.subplots(2, 2, figsize=(16, 16))
# The numbers here needed some tinkering to plot at the right point
axarr[0][0].plot(c2_chan1[70000:75000], "-")
axarr[0][1].plot(a3_chan1[80000:85000], "-")
axarr[1][0].plot(a3_d3_chan1[70000:200000], "-")
axarr[1][1].plot(whis_chan1[70000:71000], "-")
# labeling each plot with its note
axarr[0][0].set(title="Cello C2")
axarr[0][1].set(title="Cello A3")
axarr[1][0].set(title="Cello A3 D3")
_ = axarr[1][1].set(title="Human whistle")
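# Rough pitch check for the whistle via an FFT (a sketch: it re-reads the file
# to recover the sample rate that was discarded above)
import numpy as np
rate, _ = wavfile.read('../sounds/wav/whistle.wav')
samples = np.array(whis_chan1, dtype=float)
spectrum = np.abs(np.fft.rfft(samples))
freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
print("dominant whistle frequency: %.1f Hz" % freqs[spectrum.argmax()])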
###Output
_____no_output_____ |
notebooks/Diagnostic_output_overview.ipynb | ###Markdown
Diagnostic output overview
The MIKE FM DA module can output 3 different types of diagnostic outputs:
1. Measurement diagnostic - which relates to a specific measurement
2. Non-measurement point diagnostic - results for a specific variable and point
3. Global assimilation statistics
All are read by the FMDAp method `read_diagnostic()`
###Code
import fmdap
###Output
_____no_output_____
###Markdown
1. Measurement diagnostic overview
Measurement diagnostics (type 1) come in two kinds depending on the type of measurement they refer to:
* point
* spatially distributed (e.g. track measurement)
They furthermore behave differently depending on the presence of assimilation updates or not. Measurement diagnostics have the following main data properties:
* forecast
* result
* innovation
If the file contains updates (from assimilation) it will also have the properties:
* forecast_at_update
* analysis
* increment
If the file does not have updates, the forecast and result properties will be identical.
Point measurement diagnostic without updates
Point measurement diagnostic with assimilation updates
###Code
fn = '../tests/testdata/Diagnostics_F16_EnKF.dfs0'
diag = fmdap.read_diagnostic(fn, name="F16")
diag
diag.type
diag.has_updates
###Output
_____no_output_____
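###Markdown
The main data properties listed above can be read straight off the diagnostic object. A minimal sketch (only the attribute names documented above are used; the F16 file comes from an EnKF run, so it has updates):
###Code
if diag.has_updates:
    print(diag.forecast)    # model state before the updates
    print(diag.analysis)    # model state after the updates
    print(diag.increment)   # the change made by each update
###Output
_____no_output_____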
###Markdown
Track measurement diagnostic with assimilation updates
###Code
fn = '../tests/testdata/Diagnostics_Altimetry_C2.dfs0'
diag = fmdap.read_diagnostic(fn, name="c2 alti")
diag
###Output
_____no_output_____
###Markdown
2. Non-measurement point diagnostic overview
Non-measurement diagnostics (type 2) are always point-type. They don't have any measurement information. They behave differently depending on the presence of assimilation updates or not. Non-measurement diagnostics have the following main data properties:
* forecast
* result
If the file contains updates (from assimilation) it will also have the properties:
* forecast_at_update
* analysis
* increment
If the file does not have updates, the forecast and result properties will be identical.
Non-measurement point without updates
###Code
fn = '../tests/testdata/diagnostics_nonMeas_SSC1.dfs0'
diag = fmdap.read_diagnostic(fn)
diag
diag.type
diag.has_updates
###Output
_____no_output_____
###Markdown
3. Global assimilation statistics
Currently, global assimilation statistics files have very limited support!
###Code
fn = '../tests/testdata/Global_stats.dfs0'
diag = fmdap.read_diagnostic(fn)
diag.type
###Output
_____no_output_____ |
Week 1 Workshop.ipynb | ###Markdown
Python Basics
The goal of this week's practical is to get you started using Python, Jupyter Notebooks, and Git, three tools that you will use through the semester in your work. **Python** is our language of choice in this unit. You may have seen it before, if not, you need to learn basic Python coding. You are looking at a **Jupyter Notebook**, it is a document that mixes text, code and the output of the code. A lot of your work will be creating notebooks like this to present your analysis. **Git** is a distributed version control system (DVCS), you will use it to keep track of your work and ensure that you have a backup copy of what you are doing. You should have checked this notebook out of **GitHub** using Git. Your task this week is to complete some basic programming tasks with Python in this worksheet and commit your changes to your own GitHub repository. There are questions below with a space for you to write code to achieve the given outcomes. Write the code, test it, and when you are done, submit your work as described at the end of the notebook. The tasks aren't meant to be complicated Python problems, just some simple tasks to get you started with this process.
String Manipulation
The next cell defines three strings that you will use in the first group of questions. Note that the first uses single quotes, the second uses double quotes and the third uses three double quotes since it includes newline characters. These are all valid ways of writing strings in Python and are equivalent.
###Code
title = 'Jabberwocky'
author = "Lewis Carrol"
text = """'Twas brillig, and the slithy toves
Did gyre and gimble in the wabe:
All mimsy were the borogoves,
And the mome raths outgrabe.
"Beware the Jabberwock, my son!
The jaws that bite, the claws that catch!
Beware the Jubjub bird, and shun
The frumious Bandersnatch!"
He took his vorpal sword in hand;
Long time the manxome foe he sought—
So rested he by the Tumtum tree
And stood awhile in thought."""
# text from https://www.poetryfoundation.org/poems/42916/jabberwocky
###Output
_____no_output_____
###Markdown
Write code to print the length of each of these strings.
###Code
# write your code here
###Output
_____no_output_____
###Markdown
Write code to create a new string in a variable 'summary' that contains the title, the author and the first 20 characters of the description, with a ':' character between each one (ie `'Jabberwocky:Lewis Carrol:’Twas brillig, and t'`)
###Code
# write your code here
###Output
_____no_output_____
###Markdown
Write code to find the number of words in the text. Hint, this is easy in Python since strings support the [split method](https://docs.python.org/3.6/library/stdtypes.htmlstr.split) that returns a list of strings after splitting on whitespace (or another character if you wish). Try split on the string, then find out how many strings are in the resulting list.
###Code
# write your code here
###Output
_____no_output_____
###Markdown
The `.split` method can also be used to split into lines by telling it to split on the `\n` character (i.e. `text.split('\n')`. Use this to count how many lines there are in the poem.
###Code
# write your code here
###Output
_____no_output_____
###Markdown
Lists of Numbers
Python lists can store any data type. Here we'll work with lists of integers and find out about ways of working with them. The first thing we'll do is create a list of numbers to work with. Here I'm using the `range` function to generate a list of integers up to 20 and then converting that into a list with the `list` function (`range` returns a Python range object which represents the sequence but takes up less space than a full list because it doesn't actually store all the numbers; converting to a list forces it to create all the numbers https://pynative.com/python-range-function/).
###Code
nums = list(range(20))
nums
###Output
_____no_output_____
###Markdown
Most of the time we'll be working with a package called Pandas which has lots of functions for working with numerical data. However, there are a few functions in Python for working with lists of numbers. Use the `min`, `max` and `sum` functions on the list `nums` to print out the largest and smallest numbers and the sum of all the numbers.
###Code
# write your code here
###Output
_____no_output_____
###Markdown
We can use the square bracket notation to access individual elements in the list or ranges of elements.
* Write code to print the fifth element of the list.
* Write code to print all of the elements from the third to the ninth element.
* Write code to print every element after the twelfth element.
_Remember that list indexes start at zero_. Lists are _mutable_ which means they can be changed. Write code to set the third element in the list to 99.
Functions
A function is a way to group together a number of lines of code that do a particular job. If you have experience programming in other languages this will be familiar to you. If not then you should look at some resources like [this](https://en.wikibooks.org/wiki/Non-Programmer%27s_Tutorial_for_Python_3/Defining_Functions) or [this](https://overiq.com/python-101/functions-in-python/) to understand them. The last exercise for the week is to write a function. If you are just starting out then write a function to print out a message over a few lines (more than one print statement) such as:
```
I learned how to write Python!
I wrote a function to show this message!
```
If you have more experience, write a function that takes a numerical list and returns the average value of the numbers in the list using the `sum` and `len` functions.
###Code
# write your code here
###Output
_____no_output_____
###Markdown
Call your function in the next cell
###Code
# write your code here
###Output
_____no_output_____
###Markdown
Python Basics
The goal of this week's practical is to get you started using Python, Jupyter Notebooks, and Git, three tools that you will use through the semester in your work. **Python** is our language of choice in this unit. You may have seen it before, if not, you need to learn basic Python coding. You are looking at a **Jupyter Notebook**, it is a document that mixes text, code and the output of the code. A lot of your work will be creating notebooks like this to present your analysis. **Git** is a distributed version control system (DVCS), you will use it to keep track of your work and ensure that you have a backup copy of what you are doing. You should have checked this notebook out of **GitHub** using Git. Your task this week is to complete some basic programming tasks with Python in this worksheet and commit your changes to your own GitHub repository. There are questions below with a space for you to write code to achieve the given outcomes. Write the code, test it, and when you are done, submit your work as described at the end of the notebook. The tasks aren't meant to be complicated Python problems, just some simple tasks to get you started with this process.
String Manipulation
The next cell defines three strings that you will use in the first group of questions. Note that the first uses single quotes, the second uses double quotes and the third uses three double quotes since it includes newline characters. These are all valid ways of writing strings in Python and are equivalent.
###Code
title = 'Jabberwocky'
author = "Lewis Carrol"
text = """'Twas brillig, and the slithy toves
Did gyre and gimble in the wabe:
All mimsy were the borogoves,
And the mome raths outgrabe.
"Beware the Jabberwock, my son!
The jaws that bite, the claws that catch!
Beware the Jubjub bird, and shun
The frumious Bandersnatch!"
He took his vorpal sword in hand;
Long time the manxome foe he sought—
So rested he by the Tumtum tree
And stood awhile in thought."""
# text from https://www.poetryfoundation.org/poems/42916/jabberwocky
###Output
_____no_output_____
###Markdown
Write code to print the length of each of these strings.
###Code
print (len(text))
print (len(title))
print(len(author))
###Output
432
11
12
###Markdown
Write code to create a new string in a variable 'summary' that contains the title, the author and the first 20 characters of the description, with a ':' character between each one (ie `'Jabberwocky:Lewis Carrol:’Twas brillig, and t'`)
###Code
summary = title + ":" + author + ":" + text[0:20]
print(summary)
###Output
Jabberwocky:Lewis Carrol:'Twas brillig, and t
###Markdown
Write code to find the number of words in the text. Hint, this is easy in Python since strings support the [split method](https://docs.python.org/3.6/library/stdtypes.htmlstr.split) that returns a list of strings after splitting on whitespace (or another character if you wish). Try split on the string, then find out how many strings are in the resulting list.
###Code
noOfWords =text.split()
#print(noOfWords)
print(len(noOfWords))
###Output
71
###Markdown
The `.split` method can also be used to split into lines by telling it to split on the `\n` character (i.e. `text.split('\n')`. Use this to count how many lines there are in the poem.
###Code
# write your code here
lines =text.split('\n')
print(len(lines))
###Output
14
###Markdown
Control Structures
Here you will explore Python control structures - conditionals and loops. Write a for loop over the words in the description and count how many times the word 'and' occurs. Your solution will have an if statement inside the for loop. Here you will encounter Python's required indentation for the first time. This will annoy you at first but you will learn to either love it or hate it with time...
###Code
# write your for loop here
for word in noOfWords:
print(word)
count = 0
for word in noOfWords:
if (word =="and"): count += 1
print(count)
###Output
3
###Markdown
Note that one of the instances of 'and' in the text is capitalised, can you modify your code so that it finds this one too? The solution is to use the `.lower` method to lowercase the string before you compare it with your target 'and'.
###Code
# write your code here
count1 = 0
for word in noOfWords:
if (word.lower() =="and"): count1 += 1
# write your code here
print(count1)
###Output
5
###Markdown
Functions
Python is a dynamically typed language so we don't need to declare the type of a variable or declare the return type of a function (although Python 3 introduced optional [type hints](https://stackoverflow.com/documentation/python/1766/type-hintst=201607251908319482596)). Apart from that the idea of writing a function in Python is the same as in Processing or (methods in) Java. Write a function that takes a single string argument and returns the number of words in the string using the code you wrote above to count words.
###Code
def fun_string(textstring):
Words =textstring.split()
print(len(Words))
###Output
_____no_output_____
###Markdown
Use your function to find the number of words in the text string defined above.
###Code
# write your code here
textstring1= text
fun_string(textstring1)
print("santosh")
###Output
santosh
###Markdown
Python Basics
The goal of this week's practical is to get you started using Python, Jupyter Notebooks, and Git, three tools that you will use through the semester in your work. **Python** is our language of choice in this unit. You may have seen it before, if not, you need to learn basic Python coding. You are looking at a **Jupyter Notebook**, it is a document that mixes text, code and the output of the code. A lot of your work will be creating notebooks like this to present your analysis. **Git** is a distributed version control system (DVCS), you will use it to keep track of your work and ensure that you have a backup copy of what you are doing. You should have checked this notebook out of **GitHub** using Git. Your task this week is to complete some basic programming tasks with Python in this worksheet and commit your changes to your own GitHub repository. There are questions below with a space for you to write code to achieve the given outcomes. Write the code, test it, and when you are done, submit your work as described at the end of the notebook. The tasks aren't meant to be complicated Python problems, just some simple tasks to get you started with this process.
String Manipulation
The next cell defines three strings that you will use in the first group of questions. Note that the first uses single quotes, the second uses double quotes and the third uses three double quotes since it includes newline characters. These are all valid ways of writing strings in Python and are equivalent.
###Code
title = 'Jabberwocky'
author = "Lewis Carrol"
text = """'Twas brillig, and the slithy toves
Did gyre and gimble in the wabe:
All mimsy were the borogoves,
And the mome raths outgrabe.
"Beware the Jabberwock, my son!
The jaws that bite, the claws that catch!
Beware the Jubjub bird, and shun
The frumious Bandersnatch!"
He took his vorpal sword in hand;
Long time the manxome foe he sought—
So rested he by the Tumtum tree
And stood awhile in thought."""
# text from https://www.poetryfoundation.org/poems/42916/jabberwocky
###Output
_____no_output_____
###Markdown
Write code to print the length of each of these strings.
###Code
title = 'Jabberwocky'
author = "Lewis Carrol"
text = """'Twas brillig, and the slithy toves
Did gyre and gimble in the wabe:
All mimsy were the borogoves,
And the mome raths outgrabe.
"Beware the Jabberwock, my son!
The jaws that bite, the claws that catch!
Beware the Jubjub bird, and shun
The frumious Bandersnatch!"
He took his vorpal sword in hand;
Long time the manxome foe he sought—
So rested he by the Tumtum tree
And stood awhile in thought."""
print("Length of title is: ",len(title))
print("Length of author is: ",len(author))
print("Length of text is: ",len(text))
###Output
Length of title is: 11
Length of author is: 12
Length of text is: 432
###Markdown
Write code to create a new string in a variable 'summary' that contains the title, the author and the first 20 characters of the description, with a ':' character between each one (ie `'Jabberwocky:Lewis Carrol:’Twas brillig, and t'`)
###Code
summary="""'Jabberwocky:Lewis Carrol:’Twas brillig, and t'"""
print(summary)
###Output
'Jabberwocky:Lewis Carrol:’Twas brillig, and t'
###Markdown
Write code to find the number of words in the text. Hint, this is easy in Python since strings support the [split method](https://docs.python.org/3.6/library/stdtypes.htmlstr.split) that returns a list of strings after splitting on whitespace (or another character if you wish). Try split on the string, then find out how many strings are in the resulting list.
###Code
words = summary.split(' ',-1)
print(words)
print("\n I tried various formats of the separator parameter to have both while space and ':' as delimiter, but could not get it right. \n")
print("There are: ",len(words)," strings in the list")
###Output
["'Jabberwocky:Lewis", 'Carrol:’Twas', 'brillig,', 'and', "t'"]
I tried various formats of the separator parameter to have both while space and ':' as delimiter, but could not get it right.
There are: 5 strings in the list
###Markdown
The `.split` method can also be used to split into lines by telling it to split on the `\n` character (i.e. `text.split('\n')`. Use this to count how many lines there are in the poem.
###Code
print(text)
sentences = text.split('\n')
print(sentences, "\n")
print("There are: ",len(sentences)," lines in the poem")
###Output
["'Twas brillig, and the slithy toves", ' Did gyre and gimble in the wabe:', 'All mimsy were the borogoves,', ' And the mome raths outgrabe.', '', '"Beware the Jabberwock, my son!', ' The jaws that bite, the claws that catch!', 'Beware the Jubjub bird, and shun', ' The frumious Bandersnatch!"', '', 'He took his vorpal sword in hand;', ' Long time the manxome foe he sought—', 'So rested he by the Tumtum tree', ' And stood awhile in thought.']
There are: 14 lines in the poem
###Markdown
Control Structures
Here you will explore Python control structures - conditionals and loops. Write a for loop over the words in the description and count how many times the word 'and' occurs. Your solution will have an if statement inside the for loop. Here you will encounter Python's required indentation for the first time. This will annoy you at first but you will learn to either love it or hate it with time...
###Code
wordslist = text.split()
findword = 'and'
andCount = 0
# compare each word (x) to the target; the original attempt tested membership
# of the whole list inside the loop and had a stray top-level 'return'
for x in wordslist:
    if x == findword:
        andCount += 1
print(andCount)
###Output
3
###Markdown
Note that one of the instances of 'and' in the text is capitalised, can you modify your code so that it finds this one too? The solution is to use the `.lower` method to lowercase the string before you compare it with your target 'and'.
###Code
# lowercase each word before comparing, so the capitalised 'And' is counted too
andCount = 0
for x in wordslist:
    if x.lower() == 'and':
        andCount += 1
print(andCount)
###Output
_____no_output_____
###Markdown
Functions
Python is a dynamically typed language so we don't need to declare the type of a variable or declare the return type of a function (although Python 3 introduced optional [type hints](https://stackoverflow.com/documentation/python/1766/type-hintst=201607251908319482596)). Apart from that the idea of writing a function in Python is the same as in Processing or (methods in) Java. Write a function that takes a single string argument and returns the number of words in the string using the code you wrote above to count words.
###Code
WordCount = len(text.split())
# total no of words
print ("The number of words in string are : " + str(WordCount))
###Output
The number of words in string are : 71
###Markdown
Use your function to find the number of words in the text string defined above.
###Code
print("I'm confused by what the difference between these questions are")
###Output
I'm confused by what the difference between these questions are
###Markdown
Python Basics
The goal of this week's practical is to get you started using Python, Jupyter Notebooks, and Git, three tools that you will use through the semester in your work. **Python** is our language of choice in this unit. You may have seen it before, if not, you need to learn basic Python coding. You are looking at a **Jupyter Notebook**, it is a document that mixes text, code and the output of the code. A lot of your work will be creating notebooks like this to present your analysis. **Git** is a distributed version control system (DVCS), you will use it to keep track of your work and ensure that you have a backup copy of what you are doing. You should have checked this notebook out of **GitHub** using Git. Your task this week is to complete some basic programming tasks with Python in this worksheet and commit your changes to your own GitHub repository. There are questions below with a space for you to write code to achieve the given outcomes. Write the code, test it, and when you are done, submit your work as described at the end of the notebook. The tasks aren't meant to be complicated Python problems, just some simple tasks to get you started with this process.
String Manipulation
The next cell defines three strings that you will use in the first group of questions. Note that the first uses single quotes, the second uses double quotes and the third uses three double quotes since it includes newline characters. These are all valid ways of writing strings in Python and are equivalent.
###Code
title = 'Jabberwocky'
author = "Lewis Carrol"
text = """'Twas brillig, and the slithy toves
Did gyre and gimble in the wabe:
All mimsy were the borogoves,
And the mome raths outgrabe.
"Beware the Jabberwock, my son!
The jaws that bite, the claws that catch!
Beware the Jubjub bird, and shun
The frumious Bandersnatch!"
He took his vorpal sword in hand;
Long time the manxome foe he sought—
So rested he by the Tumtum tree
And stood awhile in thought."""
# text from https://www.poetryfoundation.org/poems/42916/jabberwocky
###Output
_____no_output_____
###Markdown
Write code to print the length of each of these strings.
###Code
# write your code here
###Output
_____no_output_____
###Markdown
Write code to create a new string in a variable 'summary' that contains the title, the author and the first 20 characters of the description, with a ':' character between each one (ie `'Jabberwocky:Lewis Carrol:’Twas brillig, and t'`)
###Code
# write your code here
###Output
_____no_output_____
###Markdown
Write code to find the number of words in the text. Hint, this is easy in Python since strings support the [split method](https://docs.python.org/3.6/library/stdtypes.htmlstr.split) that returns a list of strings after splitting on whitespace (or another character if you wish). Try split on the string, then find out how many strings are in the resulting list.
###Code
# write your code here
###Output
_____no_output_____
###Markdown
The `.split` method can also be used to split into lines by telling it to split on the `\n` character (i.e. `text.split('\n')`. Use this to count how many lines there are in the poem.
###Code
# write your code here
###Output
_____no_output_____
###Markdown
Control Structures
Here you will explore Python control structures - conditionals and loops. Write a for loop over the words in the description and count how many times the word 'and' occurs. Your solution will have an if statement inside the for loop. Here you will encounter Python's required indentation for the first time. This will annoy you at first but you will learn to either love it or hate it with time...
###Code
# write your for loop here
###Output
_____no_output_____
###Markdown
Note that one of the instances of 'and' in the text is capitalised, can you modify your code so that it finds this one too? The solution is to use the `.lower` method to lowercase the string before you compare it with your target 'and'.
###Code
# write your code here
###Output
_____no_output_____
###Markdown
Functions
Python is a dynamically typed language so we don't need to declare the type of a variable or declare the return type of a function (although Python 3 introduced optional [type hints](https://stackoverflow.com/documentation/python/1766/type-hintst=201607251908319482596)). Apart from that the idea of writing a function in Python is the same as in Processing or (methods in) Java. Write a function that takes a single string argument and returns the number of words in the string using the code you wrote above to count words.
###Code
# write your code here
###Output
_____no_output_____
###Markdown
Use your function to find the number of words in the text string defined above.
###Code
# write your code here
###Output
_____no_output_____ |
Trainer-Collaboratories/Fine_Tuning/InceptionV3/Fine_tuning_InceptionV3(GAP_256_0,5).ipynb | ###Markdown
**Import Google Drive**
###Code
from google.colab import drive
drive.mount('/content/drive')
###Output
Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True).
###Markdown
**Import Library**
###Code
import glob
import numpy as np
import os
import shutil
np.random.seed(42)
from sklearn.preprocessing import LabelEncoder
import cv2
import tensorflow as tf
import keras
import shutil
import random
import warnings
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.utils import class_weight
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, cohen_kappa_score
###Output
Using TensorFlow backend.
/usr/local/lib/python3.6/dist-packages/statsmodels/tools/_testing.py:19: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.
import pandas.util.testing as tm
###Markdown
**Load Data**
###Code
os.chdir('/content/drive/My Drive/Colab Notebooks/DATA RD/')
Train = glob.glob('/content/drive/My Drive/Colab Notebooks/DATA RD/DATASETS/Data Split/Train/*')
Val=glob.glob('/content/drive/My Drive/Colab Notebooks/DATA RD/DATASETS/Data Split/Validation/*')
Test=glob.glob('/content/drive/My Drive/Colab Notebooks/DATA RD/DATASETS/Data Split/Test/*')
import matplotlib.image as mpimg
for ima in Train[600:601]:
img=mpimg.imread(ima)
imgplot = plt.imshow(img)
plt.show()
###Output
_____no_output_____
###Markdown
**Data Preparation**
###Code
nrows = 224
ncolumns = 224
channels = 3
def read_and_process_image(list_of_images):
X = [] # images
y = [] # labels
for image in list_of_images:
X.append(cv2.resize(cv2.imread(image, cv2.IMREAD_COLOR), (nrows,ncolumns), interpolation=cv2.INTER_CUBIC)) #Read the image
#get the labels
if 'Normal' in image:
y.append(0)
elif 'Mild' in image:
y.append(1)
elif 'Moderate' in image:
y.append(2)
elif 'Severe' in image:
y.append(3)
return X, y
X_train, y_train = read_and_process_image(Train)
X_val, y_val = read_and_process_image(Val)
X_test, y_test = read_and_process_image(Test)
import seaborn as sns
import gc
gc.collect()
#Convert list to numpy array
X_train = np.array(X_train)
y_train= np.array(y_train)
X_val = np.array(X_val)
y_val= np.array(y_val)
X_test = np.array(X_test)
y_test= np.array(y_test)
print('Train:',X_train.shape,y_train.shape)
print('Val:',X_val.shape,y_val.shape)
print('Test',X_test.shape,y_test.shape)
sns.countplot(y_train)
plt.title('Total Data Training')
plt.show()   # show each plot; otherwise the three countplots draw on the same axes
sns.countplot(y_val)
plt.title('Total Data Validation')
plt.show()
sns.countplot(y_test)
plt.title('Total Data Test')
plt.show()
y_train_ohe = pd.get_dummies(y_train)
y_val_ohe=pd.get_dummies(y_val)
y_test_ohe=pd.get_dummies(y_test)
y_train_ohe.shape,y_val_ohe.shape,y_test_ohe.shape
###Output
_____no_output_____
###Markdown
**Model Parameters**
###Code
batch_size = 16
EPOCHS = 100
WARMUP_EPOCHS = 2
LEARNING_RATE = 0.001
WARMUP_LEARNING_RATE = 1e-3
HEIGHT = 224
WIDTH = 224
CANAL = 3
N_CLASSES = 4
ES_PATIENCE = 5
RLROP_PATIENCE = 3
DECAY_DROP = 0.5
###Output
_____no_output_____
###Markdown
**Data Generator**
###Code
train_datagen =tf.keras.preprocessing.image.ImageDataGenerator(
rotation_range=360,
horizontal_flip=True,
vertical_flip=True)
test_datagen=tf.keras.preprocessing.image.ImageDataGenerator()
train_generator = train_datagen.flow(X_train, y_train_ohe, batch_size=batch_size)
val_generator = test_datagen.flow(X_val, y_val_ohe, batch_size=batch_size)
test_generator = test_datagen.flow(X_test, y_test_ohe, batch_size=batch_size)
###Output
_____no_output_____
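###Markdown
**Class Weights**
The `class_weight` helper imported at the top is never used below. A sketch (assuming the `y_train` labels from above) of how balanced weights could be computed; they could then be passed to `fit_generator` via its `class_weight` argument:
###Code
weights = class_weight.compute_class_weight('balanced',
                                            classes=np.unique(y_train),
                                            y=y_train)
class_weights = dict(enumerate(weights))
print(class_weights)
###Output
_____no_output_____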
###Markdown
**Define Model**
###Code
IMG_SHAPE = (224, 224, 3)
base_model =tf.keras.applications.InceptionV3(weights='imagenet',
include_top=False,
input_shape=IMG_SHAPE)
x =tf.keras.layers.GlobalAveragePooling2D()(base_model.output)
x =tf.keras.layers.Dropout(0.5)(x)
x =tf.keras.layers.Dense(256, activation='relu')(x)
x =tf.keras.layers.Dropout(0.5)(x)
final_output =tf.keras.layers.Dense(N_CLASSES, activation='softmax', name='final_output')(x)
model =tf.keras.models.Model(inputs=base_model.inputs,outputs=final_output)
###Output
_____no_output_____
###Markdown
**Train Top Layers**
###Code
for layer in model.layers:
layer.trainable = False
for i in range(-5, 0):
model.layers[i].trainable = True
metric_list = ["accuracy"]
optimizer =tf.keras.optimizers.Adam(lr=WARMUP_LEARNING_RATE)
model.compile(optimizer=optimizer, loss="categorical_crossentropy", metrics=metric_list)
model.summary()
import time
start = time.time()
STEP_SIZE_TRAIN = train_generator.n//train_generator.batch_size
STEP_SIZE_VALID = val_generator.n//val_generator.batch_size
history_warmup = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=val_generator,
validation_steps=STEP_SIZE_VALID,
epochs=WARMUP_EPOCHS,
verbose=1).history
end = time.time()
print('Waktu Training:', end - start)
###Output
WARNING:tensorflow:From <ipython-input-17-42947d619a66>:13: Model.fit_generator (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version.
Instructions for updating:
Please use Model.fit, which supports generators.
Epoch 1/2
375/375 [==============================] - 58s 154ms/step - loss: 4.3694 - accuracy: 0.3085 - val_loss: 1.3780 - val_accuracy: 0.2648
Epoch 2/2
375/375 [==============================] - 57s 153ms/step - loss: 1.3609 - accuracy: 0.3183 - val_loss: 1.2310 - val_accuracy: 0.3884
Waktu Training: 120.41391205787659
###Markdown
**Train Fine Tuning**
###Code
for layer in model.layers:
layer.trainable = True
es =tf.keras.callbacks.EarlyStopping(monitor='val_loss', mode='min', patience=ES_PATIENCE, restore_best_weights=True, verbose=1)
rlrop =tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', mode='min', patience=RLROP_PATIENCE, factor=DECAY_DROP, min_lr=1e-6, verbose=1)
callback_list = [es]
optimizer =tf.keras.optimizers.Adam(lr=LEARNING_RATE)
model.compile(optimizer=optimizer, loss="categorical_crossentropy", metrics=metric_list)
model.summary()
history_finetunning = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=val_generator,
validation_steps=STEP_SIZE_VALID,
epochs=EPOCHS,
callbacks=callback_list,
verbose=1).history
###Output
Epoch 1/100
375/375 [==============================] - 61s 163ms/step - loss: 1.1473 - accuracy: 0.4673 - val_loss: 3919.1833 - val_accuracy: 0.2500
Epoch 2/100
375/375 [==============================] - 60s 159ms/step - loss: 1.0331 - accuracy: 0.5672 - val_loss: 1723.6674 - val_accuracy: 0.2453
Epoch 3/100
375/375 [==============================] - 60s 160ms/step - loss: 0.8967 - accuracy: 0.6210 - val_loss: 5.4514 - val_accuracy: 0.3945
Epoch 4/100
375/375 [==============================] - 60s 160ms/step - loss: 0.8077 - accuracy: 0.6688 - val_loss: 0.9963 - val_accuracy: 0.6042
Epoch 5/100
375/375 [==============================] - 60s 160ms/step - loss: 0.7522 - accuracy: 0.6880 - val_loss: 0.8692 - val_accuracy: 0.6660
Epoch 6/100
375/375 [==============================] - 60s 160ms/step - loss: 0.7205 - accuracy: 0.7160 - val_loss: 0.6419 - val_accuracy: 0.7473
Epoch 7/100
375/375 [==============================] - 60s 159ms/step - loss: 0.7514 - accuracy: 0.6995 - val_loss: 26.4811 - val_accuracy: 0.3246
Epoch 8/100
375/375 [==============================] - 59s 158ms/step - loss: 0.6691 - accuracy: 0.7290 - val_loss: 2.8968 - val_accuracy: 0.6499
Epoch 9/100
375/375 [==============================] - 60s 159ms/step - loss: 0.6446 - accuracy: 0.7368 - val_loss: 0.8468 - val_accuracy: 0.6808
Epoch 10/100
375/375 [==============================] - 60s 159ms/step - loss: 0.6179 - accuracy: 0.7470 - val_loss: 1.6806 - val_accuracy: 0.5376
Epoch 11/100
375/375 [==============================] - ETA: 0s - loss: 0.5978 - accuracy: 0.7555Restoring model weights from the end of the best epoch.
375/375 [==============================] - 60s 160ms/step - loss: 0.5978 - accuracy: 0.7555 - val_loss: 1.0112 - val_accuracy: 0.7083
Epoch 00011: early stopping
###Markdown
**Model Graph**
###Code
history = {'loss': history_warmup['loss'] + history_finetunning['loss'],
'val_loss': history_warmup['val_loss'] + history_finetunning['val_loss'],
'acc': history_warmup['accuracy'] + history_finetunning['accuracy'],
'val_acc': history_warmup['val_accuracy'] + history_finetunning['val_accuracy']}
sns.set_style("whitegrid")
fig, (ax1, ax2) = plt.subplots(2, 1, sharex='col', figsize=(20, 18))
ax1.plot(history['loss'], label='Train loss')
ax1.plot(history['val_loss'], label='Validation loss')
ax1.legend(loc='best')
ax1.set_title('Loss')
ax2.plot(history['acc'], label='Train accuracy')
ax2.plot(history['val_acc'], label='Validation accuracy')
ax2.legend(loc='best')
ax2.set_title('Accuracy')
plt.xlabel('Epochs')
sns.despine()
plt.show()
###Output
_____no_output_____
###Markdown
**Evaluate Model**
###Code
loss_Val, acc_Val = model.evaluate(X_val, y_val_ohe,batch_size=1, verbose=1)
print("Validation: accuracy = %f ; loss_v = %f" % (acc_Val, loss_Val))
lastFullTrainPred = np.empty((0, N_CLASSES))
lastFullTrainLabels = np.empty((0, N_CLASSES))
lastFullValPred = np.empty((0, N_CLASSES))
lastFullValLabels = np.empty((0, N_CLASSES))
for i in range(STEP_SIZE_TRAIN+1):
im, lbl = next(train_generator)
scores = model.predict(im, batch_size=train_generator.batch_size)
lastFullTrainPred = np.append(lastFullTrainPred, scores, axis=0)
lastFullTrainLabels = np.append(lastFullTrainLabels, lbl, axis=0)
for i in range(STEP_SIZE_VALID+1):
im, lbl = next(val_generator)
scores = model.predict(im, batch_size=val_generator.batch_size)
lastFullValPred = np.append(lastFullValPred, scores, axis=0)
lastFullValLabels = np.append(lastFullValLabels, lbl, axis=0)
lastFullComPred = np.concatenate((lastFullTrainPred, lastFullValPred))
lastFullComLabels = np.concatenate((lastFullTrainLabels, lastFullValLabels))
complete_labels = [np.argmax(label) for label in lastFullComLabels]
train_preds = [np.argmax(pred) for pred in lastFullTrainPred]
train_labels = [np.argmax(label) for label in lastFullTrainLabels]
validation_preds = [np.argmax(pred) for pred in lastFullValPred]
validation_labels = [np.argmax(label) for label in lastFullValLabels]
fig, (ax1, ax2) = plt.subplots(1, 2, sharex='col', figsize=(24, 7))
labels = ['0 - No DR', '1 - Mild', '2 - Moderate', '3 - Severe']
train_cnf_matrix = confusion_matrix(train_labels, train_preds)
validation_cnf_matrix = confusion_matrix(validation_labels, validation_preds)
train_cnf_matrix_norm = train_cnf_matrix.astype('float') / train_cnf_matrix.sum(axis=1)[:, np.newaxis]
validation_cnf_matrix_norm = validation_cnf_matrix.astype('float') / validation_cnf_matrix.sum(axis=1)[:, np.newaxis]
train_df_cm = pd.DataFrame(train_cnf_matrix_norm, index=labels, columns=labels)
validation_df_cm = pd.DataFrame(validation_cnf_matrix_norm, index=labels, columns=labels)
sns.heatmap(train_df_cm, annot=True, fmt='.2f', cmap="Blues",ax=ax1).set_title('Train')
sns.heatmap(validation_df_cm, annot=True, fmt='.2f', cmap="Blues",ax=ax2).set_title('Validation')
plt.show()
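# cohen_kappa_score was imported at the top but never used; quadratic-weighted
# kappa is the usual agreement metric for retinopathy grading
print('Validation kappa: %.3f' % cohen_kappa_score(validation_labels,
                                                   validation_preds,
                                                   weights='quadratic'))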
###Output
_____no_output_____ |
notebooks/09-er-training-adaboost.ipynb | ###Markdown
I'm now repeating code a lot here, bad! Will need to move bits of this into reusable functions.
###Code
path_to_data = os.path.abspath(os.path.join(os.getcwd(),
"..",
"data/processed/"
))
data_dict = train.load_processed_data(file_path=path_to_data)
X = data_dict["X_train"]
y = data_dict["y_train"]
X_train, X_test, y_train, y_test = sel.train_test_split(X,
y,
test_size=0.3,
random_state=42,
shuffle=True,
stratify=y)
X_train_smol, X_test_smol, y_train_smol, y_test_smol = sel.train_test_split(X,
y,
test_size=0.09,
train_size=0.21,
random_state=42,
shuffle=True,
stratify=y)
ada_clf = ensemble.AdaBoostClassifier(tree.DecisionTreeClassifier(class_weight="balanced",),
random_state=42,
)
ada_params = {"n_estimators": [100,200,500,1000,2000],
"learning_rate": [0.01, 0.1, 1, 10]
}
grid_search = sel.GridSearchCV(estimator=ada_clf,
param_grid=ada_params,
scoring = "f1_macro",
n_jobs = 2,
cv=3,
verbose = 10
)
grid_search.fit(X_train_smol, y_train_smol)
grid_search.best_params_
grid_search.best_score_
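# A sketch of the natural next step: refit the best configuration on the full
# training split and score it on the held-out data (X_test / y_test from above)
from sklearn import metrics
best_ada = ensemble.AdaBoostClassifier(
    tree.DecisionTreeClassifier(class_weight="balanced"),
    random_state=42,
    **grid_search.best_params_)
best_ada.fit(X_train, y_train)
print(metrics.f1_score(y_test, best_ada.predict(X_test), average="macro"))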
###Output
_____no_output_____ |
CS/CSC321/Tutorial/tut2.ipynb | ###Markdown
Tutorial: Classification
Agenda:
1. Classification running example: Iris Flowers
2. Weight space & feature space intuition
3. Perceptron convergence proof
4. Gradient Descent for Multiclass Logistic Regression
###Code
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Classification with Iris
We're going to use the Iris dataset. We will only work with the first 2 flower classes (Setosa and Versicolour), and with just the first two features: length and width of the sepal. If you don't know what the sepal is, see this diagram: https://www.math.umd.edu/~petersd/666/html/iris_with_labels.jpg
###Code
from sklearn.datasets import load_iris
iris = load_iris()
print(iris['DESCR'])
# code from
# http://stackoverflow.com/questions/21131707/multiple-data-in-scatter-matrix
from pandas.plotting import scatter_matrix  # pandas.tools.plotting was removed in newer pandas
import pandas as pd
iris_data = pd.DataFrame(data=iris['data'],columns=iris['feature_names'])
iris_data["target"] = iris['target']
color_wheel = {1: "#0392cf",
2: "#7bc043",
3: "#ee4035"}
colors = iris_data["target"].map(lambda x: color_wheel.get(x + 1))
ax = scatter_matrix(iris_data, color=colors, alpha=0.6, figsize=(15, 15), diagonal='hist')
# Select first 2 flower classes (~100 rows)
# And first 2 features
sepal_len = iris['data'][:100,0]
sepal_wid = iris['data'][:100,1]
labels = iris['target'][:100]
# We will also center the data
# This is done to make numbers nice, so that we have no
# need for biases in our classification. (You might not
# be able to remove biases this way in general.)
sepal_len -= np.mean(sepal_len)
sepal_wid -= np.mean(sepal_wid)
# Plot Iris
plt.scatter(sepal_len,
sepal_wid,
c=labels,
cmap=plt.cm.Paired)
plt.xlabel("sepal length")
plt.ylabel("sepal width")
###Output
_____no_output_____
###Markdown
Plotting Decision Boundary

Plot decision boundary hypotheses $$w_1 x_1 + w_2 x_2 \ge 0$$ for classification as Setosa.
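In code, this hypothesis is just a sign check on the weighted sum (a sketch; `predict_setosa` is not part of the tutorial's code):

```python
def predict_setosa(w1, w2, x1, x2):
    # classify as Setosa exactly when the point lies on the
    # non-negative side of the line w1*x1 + w2*x2 = 0
    return w1 * x1 + w2 * x2 >= 0
```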
###Code
def plot_sep(w1, w2, color='green'):
'''
Plot decision boundary hypothesis
w1 * sepal_len + w2 * sepal_wid = 0
in input space, highlighting the hyperplane
'''
plt.scatter(sepal_len,
sepal_wid,
c=labels,
cmap=plt.cm.Paired)
plt.title("Separation in Input Space")
plt.ylim([-1.5,1.5])
plt.xlim([-1.5,2])
plt.xlabel("sepal length")
plt.ylabel("sepal width")
if w2 != 0:
m = -w1/w2
t = 1 if w2 > 0 else -1
plt.plot(
[-1.5,2.0],
[-1.5*m, 2.0*m],
'-y',
color=color)
plt.fill_between(
[-1.5, 2.0],
[m*-1.5, m*2.0],
[t*1.5, t*1.5],
alpha=0.2,
color=color)
if w2 == 0: # decision boundary is vertical
t = 1 if w1 > 0 else -1
plt.plot([0, 0],
[-1.5, 2.0],
'-y',
color=color)
plt.fill_between(
[0, 2.0*t],
[-1.5, -2.0],
[1.5, 2],
alpha=0.2,
color=color)
# Example hypothesis
# sepal_wid >= 0
plot_sep(0, 1)
# Another example hypothesis:
# -0.5*sepal_len + 1*sepal_wid >= 0
plot_sep(-0.5, 1)
# We're going to hand pick one point and
# analyze that point:
a1 = sepal_len[41]
a2 = sepal_wid[41]
print (a1, a2) # (-0.97, -0.79)
plot_sep(-0.5, 1)
plt.plot(a1, a2, 'ob') # highlight the point
###Output
(-0.97100000000000097, -0.79400000000000004)
###Markdown
Plot Constraints in Weight Space

We'll plot the constraints for some of the points that we chose earlier.
###Code
def plot_weight_space(sepal_len, sepal_wid, lab=1,
color='steelblue',
maxlim=2.0):
plt.title("Constraint(s) in Weight Space")
plt.ylim([-maxlim,maxlim])
plt.xlim([-maxlim,maxlim])
plt.xlabel("w1")
plt.ylabel("w2")
if sepal_wid != 0:
m = -sepal_len/sepal_wid
t = 1*lab if sepal_wid > 0 else -1*lab
plt.plot([-maxlim, maxlim],
[-maxlim*m, maxlim*m],
'-y',
color=color)
plt.fill_between(
[-maxlim, maxlim], # x
[m*-maxlim, m*maxlim], # y-min
[t*maxlim, t*maxlim], # y-max
alpha=0.2,
color=color)
if sepal_wid == 0: # decision boundary is vertical
t = 1*lab if sepal_len > 0 else -1*lab
plt.plot([0, 0],
[-maxlim, maxlim],
'-y',
color=color)
plt.fill_between(
[0, 2.0*t],
[-maxlim, -maxlim],
[maxlim, maxlim],
alpha=0.2,
color=color)
# Plot the constraint for the point identified earlier:
a1 = sepal_len[41]
a2 = sepal_wid[41]
print (a1, a2)
# Do this on the board first by hand
plot_weight_space(a1, a2, lab=1)
# Below is the hypothesis we plotted earlier
# Notice it falls outside the range.
plt.plot(-0.5, 1, 'og')
###Output
(-0.97100000000000097, -0.79400000000000004)
###Markdown
Perceptron Learning Rule Example

We'll take one step using the perceptron learning rule.
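In code, the rule applied in the next cell looks like this (a minimal sketch, assuming targets coded as $+1$/$-1$; `perceptron_step` is not part of the tutorial's code):

```python
import numpy as np

def perceptron_step(w, x, t):
    # on a mistake (x falls on the wrong side of the boundary),
    # move the weights by t * x; otherwise leave them unchanged
    if t * np.dot(w, x) <= 0:
        w = w + t * x
    return w

# e.g. one step from the hypothesis (-0.5, 1) on the Setosa point above:
# perceptron_step(np.array([-0.5, 1.0]), np.array([a1, a2]), +1)
```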
###Code
# Using the perceptron learning rule
# One possible fill-in, matching the updated weights plotted below:
# add the input vector to the weights (w <- w + x for a positive example)
w1 = -0.5 + a1
w2 = 1 + a2
# This should bring the point closer to the boundary
# In this case, the step brought the point into the
# condition boundary
plot_weight_space(a1, a2, lab=1)
plt.plot(-0.5+a1, 1+a2, 'og')
# old hypothesis
plt.plot(-0.5, 1, 'og')
plt.plot([-0.5, -0.5+a1], [1, 1+a2], '-g')
plt.axes().set_aspect('equal', 'box')
# Which means that the point (a1, a2) in input
# space is correctly classified.
plot_sep(-0.5+a1, 1+a2)
###Output
_____no_output_____
###Markdown
Visualizing Multiple Constraints

We'll visualize multiple constraints in weight space.
###Code
# Pick a second point
b1 = sepal_len[84]
b2 = sepal_wid[84]
plot_sep(-0.5+a1, 1+a2)
plt.plot(b1, b2, 'or') # plot the circle in red
# our weights fall outside constraint of second pt.
plot_weight_space(a1, a2, lab=1, color='blue')
plot_weight_space(b1, b2, lab=-1, color='red')
plt.plot(w1, w2, 'ob')
# Example of a separating hyperplane
plot_weight_space(a1, a2, lab=1, color='blue')
plot_weight_space(b1, b2, lab=-1, color='red')
plt.plot(-1, 1, 'ok')
plt.show()
plot_sep(-1, 1)
plt.show()
###Output
_____no_output_____
###Markdown
Perceptron Convergence Proof

(From Geoffrey Hinton's slides 2d)

Hopeful claim: Every time the perceptron makes a mistake, the learning algo moves the current weight vector closer to all feasible weight vectors.

BUT: the weight vector may not get closer to feasible vectors that lie near the boundary of the feasible region.
###Code
# The feasible region is inside the intersection of these two regions:
plot_weight_space(a1, a2, lab=1, color='blue')
#plot_weight_space(b1, b2, lab=-1, color='red')
# This is a vector in the feasible region.
plt.plot(-0.3, 0.3, 'ok')
# We started with this point
plt.plot(-0.5, 1, 'og')
# And ended up here
plt.plot(-0.5+a1, 1+a2, 'or')
# Notice that the red point is further from the black point than the green one is
plt.axes().set_aspect('equal', 'box')
###Output
_____no_output_____
###Markdown
* So consider "generously feasible" weight vectors that lie within the feasible region by a margin at least as great as the length of the input vector that defines each constraint plane.
* Every time the perceptron makes a mistake, the squared distance to all of these generously feasible weight vectors is always decreased by at least the squared length of the update vector.
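Concretely: on a mistake the update is ${\bf w} \leftarrow {\bf w} + t\,{\bf x}$ with $t \in \{+1, -1\}$, and "generously feasible" means $t\,({\bf w}^* \cdot {\bf x}) \ge \|{\bf x}\|^2$ for every constraint. Then for any such ${\bf w}^*$:

$$\|{\bf w} - {\bf w}^*\|^2 - \|{\bf w} + t{\bf x} - {\bf w}^*\|^2 = 2t\,({\bf w}^* - {\bf w})\cdot{\bf x} - \|{\bf x}\|^2 \ge \|{\bf x}\|^2,$$

using $t\,({\bf w} \cdot {\bf x}) \le 0$ on a mistake.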
###Code
plot_weight_space(a1, a2, lab=1, color='blue' ,maxlim=15)
plot_weight_space(b1, b2, lab=-1, color='red', maxlim=15)
# We started with this point
plt.plot(-0.5, 1, 'og')
plt.plot(-0.5+a1, 1+a2, 'or')
plt.axes().set_aspect('equal', 'box')
# red is closer to "generously feasible" vectors on the top left
###Output
_____no_output_____
###Markdown
Informal Sketch of Proof of Convergence

* Each time the perceptron makes a mistake, the current weight vector moves to decrease its squared distance from every weight vector in the "generously feasible" region.
* The squared distance decreases by at least the squared length of the input vector.
* So after a finite number of mistakes, the weight vector must lie in the feasible region, if this region exists.

Gradient Descent for Multiclass Logistic Regression

Multiclass logistic regression:

\begin{align}
{\bf z} &= {\bf W}{\bf x} + {\bf b} \\
{\bf y} &= \text{softmax}({\bf z}) \\
{\mathcal L}_\text{CE} &= -{\bf t}^T(\log {\bf y})
\end{align}

Draw out the shapes on the board before continuing.
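For reference, with $K$ classes and $D$ input features the shapes are:

$${\bf x} \in \mathbb{R}^D, \qquad {\bf W} \in \mathbb{R}^{K \times D}, \qquad {\bf b},\, {\bf z},\, {\bf y},\, {\bf t} \in \mathbb{R}^K$$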
###Code
# Aside: lots of functions work on vectors
print(np.log([1.5, 2, 3]))
print(np.exp([1.5, 2, 3]))
###Output
[ 0.40546511 0.69314718 1.09861229]
[ 4.48168907 7.3890561 20.08553692]
###Markdown
Start by expanding the cross-entropy loss so that we can work with it:

$$ {\mathcal L}_\text{CE} = -\sum_l t_l \log(y_l)$$

Main setup

We'll take the derivative of the loss with respect to one weight $w_{kj}$:

\begin{align}
\frac{\partial {\mathcal L}_\text{CE}}{\partial w_{kj}} &= \frac{\partial }{\partial w_{kj}} \left(-\sum_l t_l \log(y_l)\right) \\
&= -\sum_l \frac{t_l}{y_l} \frac{\partial y_l}{\partial w_{kj}}
\end{align}

Normally in calculus we have the chain rule:

\begin{align}
\frac{\partial y_l}{\partial w_{kj}} &= \sum_m \frac{\partial y_l}{\partial z_m} \frac{\partial z_m}{\partial w_{kj}}
\end{align}

But $z_m$ does not depend on $w_{kj}$ for $m \ne k$, so

\begin{align}
\frac{\partial y_l}{\partial w_{kj}} &= \frac{\partial y_l}{\partial z_k} \frac{\partial z_k}{\partial w_{kj}}
\end{align}

AND

$$\frac{\partial z_k}{\partial w_{kj}} = x_j$$

Thus

\begin{align}
\frac{\partial {\mathcal L}_\text{CE}}{\partial w_{kj}} &= -\sum_l \frac{t_l}{y_l} \frac{\partial y_l}{\partial z_k} \frac{\partial z_k}{\partial w_{kj}} \\
&= -\sum_l \frac{t_l}{y_l} \frac{\partial y_l}{\partial z_k} x_j \\
&= x_j \left(-\sum_l \frac{t_l}{y_l} \frac{\partial y_l}{\partial z_k}\right) \\
&= x_j \frac{\partial {\mathcal L}_\text{CE}}{\partial z_k}
\end{align}

Derivative with respect to $z_k$

But we can show (on board) that

$$\frac{\partial y_l}{\partial z_k} = y_k (I_{k,l} - y_l)$$

where $I_{k,l} = 1$ if $k=l$ and $0$ otherwise. Therefore

\begin{align}
\frac{\partial {\mathcal L}_\text{CE}}{\partial z_k} &= -\sum_l \frac{t_l}{y_l} (y_k (I_{k,l} - y_l)) \\
&= -\frac{t_k}{y_k} y_k(1 - y_k) - \sum_{l \ne k} \frac{t_l}{y_l} (-y_k y_l) \\
&= - t_k(1 - y_k) + \sum_{l \ne k} t_l y_k \\
&= -t_k + t_k y_k + \sum_{l \ne k} t_l y_k \\
&= -t_k + \sum_{l} t_l y_k \\
&= -t_k + y_k \sum_{l} t_l \\
&= -t_k + y_k \\
&= y_k - t_k
\end{align}

(The last steps use the fact that the targets are one-hot, so $\sum_l t_l = 1$.)

Putting it all together

\begin{align}
\frac{\partial {\mathcal L}_\text{CE}}{\partial w_{kj}} &= x_j (y_k - t_k)
\end{align}

Vectorization

The weight gradient is an outer product:

\begin{align}
\frac{\partial {\mathcal L}_\text{CE}}{\partial {\bf W}} &= ({\bf y} - {\bf t})\,{\bf x}^T \\
\frac{\partial {\mathcal L}_\text{CE}}{\partial {\bf b}} &= ({\bf y} - {\bf t})
\end{align}
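The result above is easy to sanity-check numerically. Below is a small finite-difference check (a sketch with made-up shapes; `ce_loss` and the sizes are illustrative, not part of the tutorial):

```python
import numpy as np

def ce_loss(W, b, x, t):
    """Cross-entropy loss for one data point, with a stable softmax."""
    z = W @ x + b
    y = np.exp(z - z.max())
    y /= y.sum()
    return -np.dot(t, np.log(y))

rng = np.random.RandomState(0)
W = rng.randn(3, 4)          # 3 classes, 4 features
b = rng.randn(3)
x = rng.randn(4)
t = np.eye(3)[1]             # one-hot target for class 1

z = W @ x + b
y = np.exp(z - z.max()); y /= y.sum()
dW_analytic = np.outer(y - t, x)          # (y - t) x^T from the derivation

# central finite differences, one weight at a time
eps = 1e-6
dW_numeric = np.zeros_like(W)
for k in range(W.shape[0]):
    for j in range(W.shape[1]):
        Wp, Wm = W.copy(), W.copy()
        Wp[k, j] += eps
        Wm[k, j] -= eps
        dW_numeric[k, j] = (ce_loss(Wp, b, x, t) - ce_loss(Wm, b, x, t)) / (2 * eps)

print(np.max(np.abs(dW_analytic - dW_numeric)))  # should be tiny, ~1e-9
```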
###Code
def softmax(x):
#return np.exp(x) / np.sum(np.exp(x))
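# subtracting max(x) prevents overflow in exp; the shift cancels in the normalized ratio, so the value is unchanged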
return np.exp(x - max(x)) / np.sum(np.exp(x - max(x)))
x1 = np.array([1,3,3])
softmax(x1)
x2 = np.array([1000,3000,3000])
softmax(x2)
def gradient(W, b, x, t):
    '''
    Gradient of the cross-entropy loss for a single data point.
    Returns dW and db.
    This is meant to show how to implement the
    obtained equation in code. (not tested)
    '''
    z = np.matmul(W, x) + b
    y = softmax(z)
    dW = np.outer(y - t, x)  # (y - t) x^T: an outer product with the same shape as W
    db = y - t
    return dW, db
###Output
_____no_output_____ |
cleared-demos/linear_systems/Vanilla Gaussian Elimination.ipynb | ###Markdown
Gaussian Elimination

Copyright (C) 2020 Andreas Kloeckner

MIT License

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
###Code
import numpy as np
np.random.seed(5)
n = 4
A = np.round(np.random.randn(n, n) * 5)
A
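# The elimination steps themselves are left blank in this cleared demo;
# one possible reduction to upper-triangular form (a sketch, assuming no
# zero pivots are encountered, since there is no row pivoting here):
U = A.astype(float).copy()
for i in range(n):
    for j in range(i + 1, n):
        fac = U[j, i] / U[i, i]   # multiplier that zeroes entry (j, i)
        U[j] -= fac * U[i]
U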
###Output
_____no_output_____ |
src/scripts/experiment-1-searchstims/training-histories-10stims.ipynb | ###Markdown
training histories for all models trained with searchnets stimuli
###Code
from pathlib import Path
import matplotlib as mpl
import matplotlib.pyplot as plt
import pandas as pd
import pyprojroot
import seaborn as sns
import searchnets
###Output
_____no_output_____
###Markdown
helper functions
###Code
def cm_to_inches(cm):
    """Convert a size in centimeters to inches (matplotlib figure sizes are in inches)."""
    return cm / 2.54
###Output
_____no_output_____
###Markdown
constants
###Code
SOURCE_DATA_ROOT = pyprojroot.here('results/searchstims/source_data/10stims')
FIGURES_ROOT = pyprojroot.here('docs/paper/figures/experiment-1/searchstims-10stims')
df_trainhist = pd.read_csv(SOURCE_DATA_ROOT.joinpath('training_history.csv'))
df_trainhist.head()
###Output
_____no_output_____
###Markdown
make figures

first, figure in paper
###Code
RC= {'axes.labelsize': 6,
'axes.titlesize': 6,
'xtick.labelsize': 4,
'ytick.labelsize': 4,
'legend.fontsize': 4,
}
sns.set_style("darkgrid")
sns.set_context("paper", rc=RC)
N_ROWS = 2
N_COLS = 3 # train loss, val loss, val acc for transfer / initialize
DPI=300
FIGSIZE = tuple(cm_to_inches(size) for size in (10, 5))
ys = ['loss/train', 'loss/val', 'acc/val']
ylabels = ['loss', 'loss', 'accuracy']
col_labels = ['training', 'validation', 'validation']
def trainhist(df_trainhist, net_name, save_root=FIGURES_ROOT, save_fig=False):
fig, ax = plt.subplots(N_ROWS, N_COLS, figsize=FIGSIZE, dpi=DPI)
df_net_trainhist = df_trainhist[df_trainhist.net_name == net_name]
for method in df_net_trainhist.method.unique():
df_method_trainhist = df_net_trainhist[df_net_trainhist.method == method]
n_replicates = len(df_method_trainhist.replicate.unique())
if method == 'transfer':
row = 0
palette = sns.color_palette("Set2", n_colors=n_replicates)
elif method == 'initialize':
row = 1
palette = sns.color_palette("Set1", n_colors=n_replicates)
for col, (y, ylabel, col_label) in enumerate(zip(ys, ylabels, col_labels)):
sns.lineplot(x='step', y=y, hue='replicate', data=df_method_trainhist,
ci=None, legend=False, alpha=0.75, ax=ax[row, col], palette=palette,
linewidth=0.5);
ax[row, col].set_ylabel(ylabel)
ax[row, col].set_xlabel('')
ax[row, col].yaxis.set_major_formatter(plt.matplotlib.ticker.StrMethodFormatter('{x:0.2f}'))
if row == 0:
ax[row, col].set_title(col_label)
ax[row, col].tick_params(axis='both', which='both', length=0) # turn off invisible ticks
ax[row, 0].set_ylim([-0.1, 1])
ax[row, 1].set_ylim([-0.1, 1])
ax[row, 2].set_ylim([0., 1.1])
if col == 0:
    # both rows get the same annotation, so no need to branch on `row`
    ax[row, col].text(0, -0.5, method, fontweight='bold', fontsize=6)
ax[1, 1].set_xlabel('step', fontsize=6)
fig.tight_layout(h_pad=.01, w_pad=0.1)
if save_fig:
for ext in ('svg', 'png'):
fig_path = save_root.joinpath(
f'{net_name}-training-history.{ext}'
)
plt.savefig(fig_path, bbox_inches='tight')
###Output
_____no_output_____
###Markdown
figure with all training histories
###Code
for net_name in df_trainhist.net_name.unique():
trainhist(df_trainhist, net_name, save_root=FIGURES_ROOT, save_fig=True)
###Output
_____no_output_____ |